Elle O’Brien

My research focuses on building infrastructure that lets public health and health science research organizations take advantage of cloud computing, strong software engineering practices, and MLOps (machine learning operations). By equipping biomedical research groups with tools that facilitate automation, better documentation, and portable code, we can improve the reproducibility and rigor of science while expanding the scale of data collection and analysis that is possible.

Research topics include:
1. Open source software and cloud infrastructure for research,
2. Software development practices and conventions that work for academic units, like labs or research centers, and
3. The organizational factors that encourage best practices in reproducibility, data management, and transparency.

The practice of science is a tug-of-war between competing incentives: the drive to produce a lot quickly, and the need to generate reproducible work. As data grows in size, code increases in complexity, and the number of collaborators and institutions involved rises, it becomes harder to preserve all the “artifacts” needed to understand and recreate your own work. Technical and cultural solutions alike will be needed to keep data-centric research rigorous, shareable, and transparent to the broader scientific community.
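As an illustrative aside, one minimal version of the “artifact preservation” idea is to record a cryptographic hash of each input dataset alongside the analysis parameters, so a collaborator can verify they are rerunning the same analysis. The sketch below is a generic Python example, not a tool from this research program; the file names and manifest fields are hypothetical.

```python
import hashlib
import json
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Hash a data file so the exact version used in an analysis is recorded."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(data_path: Path, params: dict, out: Path) -> None:
    """Save a small 'artifact manifest': data hash plus analysis parameters."""
    manifest = {
        "data_file": str(data_path),
        "data_sha256": file_sha256(data_path),
        "params": params,
    }
    out.write_text(json.dumps(manifest, indent=2))

# Hypothetical usage, pinning the inputs of one analysis run:
# write_manifest(Path("cohort.csv"), {"model": "logistic", "seed": 42},
#                Path("manifest.json"))
```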

Todd Hollon

A major focus of the MLiNS lab is to combine stimulated Raman histology (SRH), a rapid, label-free optical imaging method, with deep learning and computer vision techniques to discover the molecular, cellular, and microanatomic features of skull base and malignant brain tumors. We use SRH in our operating rooms to improve the speed and accuracy of brain tumor diagnosis. Our group has focused on deep learning-based computer vision methods for automated image interpretation, intraoperative diagnosis, and tumor margin delineation. This work culminated in a multicenter, prospective clinical trial demonstrating that AI interpretation of SRH images was equivalent in diagnostic accuracy to pathologist interpretation of conventional histology. We showed, for the first time, that a deep neural network can learn recognizable and interpretable histologic image features (e.g., tumor cellularity, nuclear morphology, infiltrative growth pattern) in order to make a diagnosis. Our future work is directed at going beyond human-level interpretation toward identifying molecular/genetic features, single-cell classification, and predicting patient prognosis.
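For readers unfamiliar with this kind of model, the sketch below is a deliberately tiny convolutional image classifier in PyTorch. It is not the MLiNS lab's architecture; the class count, patch size, and layer sizes are assumptions chosen only to make the example run.

```python
import torch
from torch import nn

class TinyHistologyCNN(nn.Module):
    """Toy convolutional classifier for small image patches.
    Illustrative only; real intraoperative models are far larger."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one value per channel
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Hypothetical usage on a batch of 300x300 RGB patches:
# logits = TinyHistologyCNN()(torch.randn(8, 3, 300, 300))
```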

Xu Shi

My methodological research focuses on developing statistical methods for routinely collected healthcare databases such as electronic health records (EHR) and claims data. I aim to tackle the unique challenges that arise from the secondary use of real-world data for research purposes. Specifically, I develop novel causal inference methods and semiparametric efficiency theory that harness the full potential of EHR data to address comparative effectiveness and safety questions. I also develop scalable, automated pipelines for the curation and harmonization of EHR data across healthcare systems and coding systems.
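As background for the comparative-effectiveness setting, the sketch below applies one textbook causal estimator, inverse probability weighting (IPW), to synthetic confounded data. It is a standard illustration, not one of the novel methods described above, and the data-generating model is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic confounded data: x affects both treatment a and outcome y.
x = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-x))            # true propensity score
a = rng.binomial(1, p_treat)
y = 2.0 * a + x + rng.normal(size=n)      # true treatment effect = 2.0

# IPW estimate of the average treatment effect, using the (here, known)
# propensity score; in practice it would itself be estimated from data.
ate = np.mean(a * y / p_treat) - np.mean((1 - a) * y / (1 - p_treat))
print(f"IPW estimate of ATE: {ate:.2f}")  # close to 2.0
```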

Joshua Stein

As a board-certified ophthalmologist and glaucoma specialist, I have more than 15 years of clinical experience caring for patients with different types and complexities of glaucoma. As a health services researcher, I have also developed expertise in several disciplines, including analyses of large health care claims databases to study utilization and outcomes of patients with ocular diseases, racial and other disparities in eye care, and associations between systemic conditions or medication use and ocular diseases. I have learned the nuances of various data sources and ways to maximize their use to answer important and timely questions.

Leveraging my background in health services research with new skills in bioinformatics and precision medicine, over the past 2-3 years I have been developing and growing the Sight Outcomes Research Collaborative (SOURCE) repository, a powerful tool that researchers can tap into to study patients with ocular diseases. My team and I have spent countless hours devising ways of extracting electronic health record data from Clarity, cleaning and de-identifying the data, and making it linkable to ocular diagnostic test data (OCT, HVF, biometry) and non-clinical data. Now that we have successfully developed such a resource here at Kellogg, I am collaborating with colleagues at more than two dozen academic ophthalmology departments across the country to help them extract their data in the same format and send it to Kellogg, so that we can pool the data and make it accessible to researchers at all participating centers for research and quality improvement studies.

I am also actively exploring ways to integrate SOURCE data into deep learning and artificial intelligence algorithms; to use the data for genotype-phenotype association studies and the development of polygenic risk scores for common ocular diseases; to capture patient-reported outcome data for the majority of eye care recipients; to enhance visualization of the data on easy-to-access dashboards that aid quality improvement initiatives; and to use the data to improve the quality, safety, and efficiency of care delivery and clinical operations.
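To make the de-identification and linkage step concrete, here is a toy pandas sketch in which two tables are stripped of their medical record numbers and joined on a salted hash. It is a generic illustration, not the SOURCE pipeline; all table names, column names, and values are hypothetical.

```python
import hashlib
import pandas as pd

def deid_key(mrn: str, salt: str = "site-secret") -> str:
    """Replace a medical record number with a salted hash usable for linkage."""
    return hashlib.sha256((salt + mrn).encode()).hexdigest()[:16]

ehr = pd.DataFrame({"mrn": ["001", "002"], "dx": ["glaucoma", "cataract"]})
oct_scans = pd.DataFrame({"mrn": ["001", "002"], "rnfl_um": [82, 95]})

for df in (ehr, oct_scans):
    df["pid"] = df.pop("mrn").map(deid_key)  # drop identifier, keep link key

linked = ehr.merge(oct_scans, on="pid")      # de-identified, linkable records
print(linked)
```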

S. Sandeep Pradhan

My research interests include information theory, coding theory, distributed data processing, quantum information theory, and quantum field theory.

Lorraine Buis

I conduct research on the use of consumer-facing technologies for chronic disease self-management. My work predominantly centers on the use of mobile applications that collect and manage patient-generated health data over time.

Christopher E. Gillies

I am Research Faculty with the Michigan Center for Integrative Research in Critical Care (MCIRCC). Our team builds predictive algorithms, analyzes signals, and implements statistical models to advance critical care medicine. We use electronic health record data to build predictive algorithms. One example is Predicting Intensive Care Transfers and other Unforeseen Events (PICTURE), which uses commonly collected vital signs and labs to predict patient deterioration on the general hospital floor. Additionally, our team collects waveforms from the University Hospital and stores these data on Amazon Web Services; we use these signals to build predictive algorithms that advance precision medicine. Our flagship algorithm, the Analytic for Hemodynamic Instability (AHI), predicts patient deterioration using a single-lead electrocardiogram signal. We also use Bayesian methods to analyze metabolomic biomarker data from blood and exhaled breath to understand sepsis and acute respiratory distress syndrome. In addition, I have an interest in statistical genetics.
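As a generic illustration of deterioration prediction from routinely collected vitals and labs, the sketch below fits a logistic regression to synthetic data with scikit-learn. It is not PICTURE or AHI; the features, coefficients, and labels are all invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5_000

# Synthetic vitals/labs: heart rate, respiratory rate, systolic BP, lactate.
X = rng.normal(loc=[80, 16, 120, 1.2], scale=[15, 4, 20, 0.8], size=(n, 4))
# Invented risk function: deterioration more likely with abnormal values.
risk = 0.04 * (X[:, 0] - 80) + 0.2 * (X[:, 1] - 16) + 1.5 * (X[:, 3] - 1.2)
y = rng.binomial(1, 1 / (1 + np.exp(-(risk - 2))))  # rare deterioration label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```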

Jeffrey Regier

Jeffrey Regier received a PhD in statistics from UC Berkeley (2016) and joined the University of Michigan as an assistant professor. His research interests include graphical models, Bayesian inference, high-performance computing, deep learning, astronomy, and genomics.

Harm Derksen

Current research includes a project funded by Toyota that uses Markov models and machine learning to predict heart arrhythmia, an NSF-funded project to detect acute respiratory distress syndrome (ARDS) from X-ray images, and projects applying tensor analysis to health care data (funded by the Department of Defense and the National Science Foundation).
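To illustrate the Markov-model idea in isolation (this is not the Toyota project's model), the sketch below encodes hypothetical rhythm states as a transition matrix and propagates the state distribution forward; the states and probabilities are invented.

```python
import numpy as np

# Hypothetical rhythm states and a made-up transition matrix P,
# where P[i, j] = probability of moving from state i to state j.
states = ["normal", "premature_beat", "arrhythmia"]
P = np.array([[0.95, 0.04, 0.01],
              [0.70, 0.20, 0.10],
              [0.10, 0.10, 0.80]])

def state_distribution(start: int, steps: int) -> np.ndarray:
    """Probability over states after `steps` transitions from `start`."""
    dist = np.zeros(len(states))
    dist[start] = 1.0
    return dist @ np.linalg.matrix_power(P, steps)

# Probability of being in the arrhythmia state 10 beats after a premature beat:
print(state_distribution(states.index("premature_beat"), 10)[2])
```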

Nicholson Price

I study how law shapes innovation in the life sciences, with a substantial focus on big data and artificial intelligence in medicine. I write about intellectual property incentives and protections for data and AI algorithms, the privacy issues raised by wide-scale collection of health and health-related data, the medical malpractice implications of AI in medicine, and how the FDA should regulate the use of medical AI.