A Critical Center for Critical Care
U-M’s Weil Institute is charged with developing transformative technologies for patients. In the seven years since its founding, Weil Institute has collaborated with Innovation Partnerships to spin out eight companies and license 14 products that are making a positive impact on patients in the critical care setting. Last year alone, the center was awarded $30 million in grant funding and submitted $79 million in proposals to help keep those technologies coming.
The driver for Weil Institute’s creation was a decades-long lack of innovation in critical care despite the need for tools to facilitate care decisions, according to Kevin Ward, Weil Institute Executive Director and Professor of Emergency Medicine. Reflecting on his 30 years as an emergency medicine physician, Ward said, “My everyday ability to save a life within minutes to hours or even a day with new technologies has not changed.”
Game changers in critical care will come from integrating clinical and non-clinical sciences like engineering and data science, Ward said, and Weil Institute is organized to enable this multidisciplinary approach. Teams of clinicians, nurses, basic scientists, engineers and data scientists design and execute projects within the center, and also contribute to projects of other U-M researchers.
Unique to Weil Institute compared with most academic centers is a focus on ensuring that discoveries made at the center have the best chance to be translated, developed and deployed in hospitals. “Just getting a technology licensed doesn’t necessarily get it to the bedside,” explained Weil Institute Commercialization Coach Ken Spenser. With the bedside goal in mind, Spenser guides U-M researchers on their project design to meet FDA regulatory requirements from day one, and the center’s proposal design unit also helps with project planning.
Any U-M researcher can become a Weil Institute member for free and gain access to Weil Institute’s experts, a de-identified patient data repository and large animal labs to work towards specific research goals. The scope of Weil Institute’s research spans basic biology, preclinical and clinical therapeutics, diagnostics, predictive analytics and medical devices.
One focus of Weil Institute’s research is the development of machine learning algorithms that help clinicians make time-sensitive care decisions faster. These algorithms are trained on electronic health records and monitoring data collected at U-M patient bedsides. Enabling the collection, storage and analysis of these data has been no small feat, and the center and the broader community are beginning to reap the benefits.
Keep reading to learn more about the predictive tools Weil Institute has already created and its big data infrastructure that will enable the next generation of technologies.
Predicting Patient Deterioration
Machine learning is a scientific approach uniquely positioned to assist in developing predictive tools for critical care, and Weil Institute has a pipeline of such technologies already licensed, ready to license and in development.
“Timely actions are very consequential in critical care,” said Sardar Ansari, Director of the Data Science Team at Weil Institute and a Research Assistant Professor in U-M’s Emergency Medicine Department.
Critical care clinicians need to make diagnostic and treatment decisions on a timescale of seconds to hours, while in other areas of care, that time scale is days to weeks. But the volume and types of data make accurate, timely decisions difficult.
“For critically ill and injured patients being evaluated and treated in the Emergency Department and ICU, information from blood work, physiologic monitors, ventilators and other life support devices, x-rays, CT scans, etc., can produce upwards of 100,000 data points per second. Humans can only track 4 or fewer pieces of data at one time on one patient, much less 10-20 patients at a time. There is a lot of great information that exists in current sensors like the common electrocardiogram or a CT scan that is invisible or can’t be captured and processed by the human eye and brain,” Ward explained.
That’s where decision support tools like machine learning models come into play. “They can help you look at everything and help you not miss a piece of data that may have impacted your decision,” said Ansari.
Among Weil Institute’s predictive analytics are AHI and PICTURE.
AHI, or Analytic for Hemodynamic Instability, is an algorithm that can alert clinicians when a patient who is being monitored with an electrocardiogram (ECG) is likely to experience hemodynamic instability, a leading cause of death in this patient population. The technology was licensed to Fifth Eye Inc. in 2018 and has since received de novo classification from the FDA. The company attracted $11.5 million in venture funding last year.
Spenser and Weil Institute Managing Director Phil Jacokes point to the center’s relationship with Fifth Eye as a perfect example of how the center continues to support its technologies even after they change hands. “We look at them more as a partner than a licensee,” said Jacokes, and that relationship enabled AHI to reach patients at the bedside via the product’s FDA authorization.
PICTURE is a more generalized suite of predictive analytics that can serve any critical care patient. Using information in a patient’s electronic health record (EHR), such as lab test results, PICTURE predicts patient death or transfer to the ICU and provides clinicians with actions to take to prevent those outcomes.
The tool is already making an impact: the PICTURE software was implemented at Michigan Medicine last year and is currently being tested there. A general version, PICTURE-General, can be used in any critical care setting, and setting-specific versions include PICTURE-Sepsis, PICTURE-Pediatric, PICTURE-COVID-19 and PICTURE-Rehab.
When tested in the COVID-19 setting, PICTURE was found to be more accurate at raising alarms for patient deterioration than a competing, non-U-M product from Epic (see study here).
Ansari pointed to a strategic decision on data inclusion that he thinks makes PICTURE particularly accurate: diagnostic tests ordered by physicians, while part of the EHR, are ignored. A study published recently by U-M researchers not associated with Weil Institute supports that choice.
“We don’t want to feed back what the clinician already knows. If the clinician orders a test, that means that they’re already suspecting something,” Ansari said, and the team didn’t want to bias the model on data points that were not physiologic, or to create an echo chamber for the clinician.
Additionally, each hospital has its own protocols for which diagnostic tests are ordered and when. By ignoring diagnostic test orders, PICTURE becomes more universally applicable across hospitals.
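This feature-exclusion idea can be illustrated with a minimal sketch. The feature names, values and helper below are hypothetical, not PICTURE's actual schema; the point is simply that order-indicator columns are dropped before modeling so only physiologic signals reach the model.

```python
# Minimal sketch (hypothetical feature names): drop physician
# test-order indicators so a model learns only from physiology,
# not from what the clinician already suspects.
PHYSIOLOGIC_FEATURES = {"heart_rate", "lactate", "creatinine", "spo2"}

def select_features(ehr_row: dict) -> dict:
    """Keep only physiologic values; ignore which tests were ordered."""
    return {k: v for k, v in ehr_row.items() if k in PHYSIOLOGIC_FEATURES}

ehr_row = {
    "heart_rate": 112,
    "lactate": 3.1,
    "troponin_ordered": 1,  # reflects clinician suspicion, not physiology
}
features = select_features(ehr_row)
```

Because the kept features are defined by an allow-list of physiologic quantities, any hospital-specific ordering behavior in the record is ignored by construction, which is also what makes the approach portable across institutions.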
PICTURE is available for license. For more information on PICTURE, see here.
Future work on the PICTURE tools includes incorporating new data types beyond those in an EHR.
Harnessing More Data to Expand Patient Impact
Weil Institute is poised to expand existing projects and create new, even more sophisticated clinical decision-making tools using reams of waveform data collected from over 20,000 patients since 2016.
Waveform data come from bedside monitors that continuously generate information – things like an ECG, or the pulse oximeter patients wear on a finger. If tools existed that could monitor those data in real time, as they are being collected from the patient, clinicians could use them to make more informed care decisions faster.
The hurdles to using waveform data are many: establishing an infrastructure to collect, store, access and analyze the data, creating a decision-making tool from them, and then integrating that tool into the bedside monitoring system. Weil Institute and U-M have already checked these boxes and are now focused on using the collected waveform data to train machine learning models. The center has built a team with deep expertise in signal processing, data science, data management and data quality that positions it for success toward this goal.
“The difficulty is that waveform data is raw or unstructured data,” Ansari explained. “Anything that goes into an EHR is a quantity that links to a physiological factor. For example, oxygen saturation is the amount of oxygen in your blood. So it’s a high-level piece of information that can directly go into a [machine learning] model. But the underlying waveform that that pulse oximeter measurement is calculated from is much more complex. You can’t just put it into a model – you have to take that raw data and turn it into a piece of information that a machine learning model can consume.” In addition, waveform data are much noisier. Because most monitors run 24/7, meaningful signals or changes can be buried among many, many less meaningful data points – among ‘noise’.
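A toy example makes the raw-to-feature step concrete. The sketch below is illustrative only – the sampling rate, smoothing window and synthetic pulse signal are assumptions, not Weil Institute's pipeline – but it shows the two operations Ansari describes: suppressing noise, then reducing a raw waveform to one number a model can consume (here, a heart rate in beats per minute).

```python
import math

FS = 50  # assumed monitor sampling rate, samples per second

def moving_average(signal, window=7):
    """Crude noise suppression before feature extraction."""
    half = window // 2
    return [
        sum(signal[max(0, i - half):i + half + 1])
        / len(signal[max(0, i - half):i + half + 1])
        for i in range(len(signal))
    ]

def heart_rate_bpm(signal, fs=FS):
    """Count local maxima above the mean and convert to beats per minute."""
    smooth = moving_average(signal)
    thresh = sum(smooth) / len(smooth)
    peaks = [
        i for i in range(1, len(smooth) - 1)
        if smooth[i] > thresh
        and smooth[i] > smooth[i - 1]
        and smooth[i] >= smooth[i + 1]
    ]
    duration_s = len(signal) / fs
    return 60 * len(peaks) / duration_s

# Synthetic 10-second pulse waveform at 1.2 Hz (72 bpm) with
# a small high-frequency noise component riding on top.
wave = [
    math.sin(2 * math.pi * 1.2 * t / FS)
    + 0.01 * math.sin(2 * math.pi * 13 * t / FS)
    for t in range(10 * FS)
]
bpm = heart_rate_bpm(wave)
```

A real pipeline would use far more robust filtering and peak detection, but the shape is the same: raw samples in, a structured physiologic feature out, ready to join the EHR-derived inputs of a predictive model.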
Thanks to that infrastructure and expertise, Ansari said, “anything you want to do in real-time predictive modeling, we can do it.”
Ward added, “The strategic infrastructure we have put together is also allowing us to create a new generation of wearable sensors that in effect create new ‘vital signs’ that provide us much richer physiologic information than what we commonly collect (blood pressure, temperature, and pulse and respiratory rate). We are looking to move the ICU into the home.”
Weil Institute’s members – which, again, can be any U-M researcher – can request access to the de-identified patient waveform data for uses such as providing preliminary support for a grant application.