My methodological research focuses on developing statistical methods for routinely collected healthcare databases such as electronic health records (EHR) and claims data. I aim to tackle the unique challenges that arise from the secondary use of real-world data for research purposes. Specifically, I develop novel causal inference methods and semiparametric efficiency theory that harness the full potential of EHR data to address comparative effectiveness and safety questions. I also develop scalable, automated pipelines for curating and harmonizing EHR data across healthcare systems and coding systems.
As a board-certified ophthalmologist and glaucoma specialist, I have more than 15 years of clinical experience caring for patients with glaucoma of all types and complexities. As a health services researcher, I have also developed expertise in several disciplines, including analyses of large health care claims databases to study utilization and outcomes among patients with ocular diseases, racial and other disparities in eye care, and associations between systemic conditions or medication use and ocular diseases. I have learned the nuances of various data sources and ways to maximize their use to answer important and timely questions. Leveraging my background in health services research with new skills in bioinformatics and precision medicine, over the past 2-3 years I have been developing and growing the Sight Outcomes Research Collaborative (SOURCE) repository, a powerful tool that researchers can tap into to study patients with ocular diseases. My team and I have spent countless hours devising ways of extracting electronic health record data from Clarity, cleaning and de-identifying the data, and making it linkable to ocular diagnostic test data (OCT, HVF, biometry) and non-clinical data. Having successfully developed this resource here at Kellogg, I am now collaborating with colleagues at more than two dozen academic ophthalmology departments across the country to help them extract their data in the same format and send it to Kellogg, so that we can pool the data and make it accessible to researchers at all participating centers for research and quality improvement studies.
I am also actively exploring ways to integrate SOURCE data into deep learning and artificial intelligence algorithms; to use SOURCE data for genotype-phenotype association studies and the development of polygenic risk scores for common ocular diseases; to capture patient-reported outcome data for the majority of eye care recipients; to enhance visualization of the data on easy-to-access dashboards that aid quality improvement initiatives; and to use the data to improve quality of care, safety, efficiency of care delivery, and clinical operations.
I conduct research on the use of consumer-facing technologies for chronic disease self-management. My work predominantly centers on mobile applications that collect and manage patient-generated health data over time.
I am Research Faculty with the Michigan Center for Integrative Research in Critical Care (MCIRCC). Our team builds predictive algorithms, analyzes signals, and implements statistical models to advance critical care medicine. We use electronic health record data to build predictive algorithms. One example is Predicting Intensive Care Transfers and other Unforeseen Events (PICTURE), which uses commonly collected vital signs and labs to predict patient deterioration on the general hospital floor. Additionally, our team collects waveforms from the University Hospital, and we store these data using Amazon Web Services. We use these signals to build predictive algorithms to advance precision medicine. Our flagship algorithm, the Analytic for Hemodynamic Instability (AHI), predicts patient deterioration using a single-lead electrocardiogram signal. We use Bayesian methods to analyze metabolomic biomarker data from blood and exhaled breath to understand sepsis and acute respiratory distress syndrome. I also have an interest in statistical genetics.
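To illustrate the general idea behind such early-warning models, a risk score can be fit to routinely collected vitals and labs and thresholded to flag patients at risk of deterioration. This is a minimal sketch on synthetic data, not the actual PICTURE algorithm; the features, coefficients, and logistic model are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for standardized "vitals and labs"
# (e.g., heart rate, respiratory rate, lactate); illustrative only.
n = 500
X = rng.normal(size=(n, 3))

# Hypothetical ground truth: deterioration risk increases with each feature.
logits = 1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.8 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression (illustrative stand-in)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))          # predicted risk
        w -= lr * Xb.T @ (p - y) / len(y)       # average gradient step
    return w

w = fit_logistic(X, y)
Xb = np.hstack([X, np.ones((len(X), 1))])
risk = 1 / (1 + np.exp(-Xb @ w))
accuracy = ((risk > 0.5).astype(float) == y).mean()
```

In practice such a score would be computed continuously from the EHR feed and alarms raised when the predicted risk crosses a calibrated threshold.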
Jeffrey Regier received a PhD in statistics from UC Berkeley (2016) and joined the University of Michigan as an assistant professor. His research interests include graphical models, Bayesian inference, high-performance computing, deep learning, astronomy, and genomics.
Current research includes a project funded by Toyota that uses Markov models and machine learning to predict cardiac arrhythmia, an NSF-funded project to detect acute respiratory distress syndrome (ARDS) from x-ray images, and projects using tensor analysis on health care data (funded by the Department of Defense and the National Science Foundation).
I study how law shapes innovation in the life sciences, with a substantial focus on big data and artificial intelligence in medicine. I write about intellectual property incentives and protections for data and AI algorithms, the privacy issues raised by wide-scale collection of health and health-related data, the medical malpractice implications of AI in medicine, and how the FDA should regulate the use of medical AI.
Samuel K. Handelman, Ph.D., is a Research Assistant Professor in the Department of Internal Medicine, Gastroenterology, at Michigan Medicine, University of Michigan, Ann Arbor. Prof. Handelman focuses on multi-omics approaches to drive precision/personalized therapy and to predict population-level differences in the effectiveness of interventions. He tends to favor regression-style and hierarchical-clustering approaches, partly because he has a background in both statistics and cladistics. His scientific monomania is compensatory mechanisms and trade-offs in evolution, but he has a principled reason to focus on translational medicine: real understanding of these mechanisms goes all the way into the clinic. Anything less than clinical translation indicates that we don’t understand what drove the genetics of human populations.
Zhenke Wu is an Assistant Professor of Biostatistics and a core faculty member in the Michigan Institute for Data Science (MIDAS). He received his Ph.D. in Biostatistics from Johns Hopkins University in 2014 and stayed at Hopkins for his postdoctoral training before joining the University of Michigan. Dr. Wu’s research focuses on the design and application of statistical methods that inform health decisions made by individuals, an area often described as precision medicine. The original methods and software developed by Dr. Wu are now used by investigators at research institutions such as the CDC and Johns Hopkins, as well as by site investigators in developing countries, e.g., Kenya, South Africa, Gambia, Mali, Zambia, Thailand and Bangladesh.
Profile: At a “sweet spot” of data science
By Dan Meisler
Communications Manager, ARC
If you had to name two of the more exciting, emerging fields of data science, electronic health records (EHR) and mobile health might be near the top of the list.
Zhenke Wu, one of the newest MIDAS core faculty members, has one foot firmly in each field.
“These two fields share the common goal of learning from the experience of the population in the past to advance health and clinical decisions for those to follow. I am looking forward to more work that will bring the two fields closer to continuously generate insights about human health,” Wu said. “I’m in a sweet spot.”
Wu joined U-M in Fall 2016, after earning a PhD in Biostatistics from Johns Hopkins University, and a bachelor’s in Mathematics from Fudan University. He said the multitude of large-scale studies going on at U-M and access to EHR databases were factors in his coming to Michigan.
“The University of Michigan is an exciting place that has a diversity of large-scale databases and supportive research groups in the fields I’m interested in,” he said.
Wu is collaborating with the Michigan Genomics Initiative, which is a biorepository effort at Michigan Medicine to integrate genome-wide information with EHR from approximately 40,000 patients undergoing anesthesia prior to surgery or diagnostic procedures. He’s also collaborating with Dr. Srijan Sen, Associate Professor, Department of Psychiatry and Molecular and Behavioral Neuroscience Institute, on the MIDAS-supported project “Identifying Real-Time Data Predictors of Stress and Depression Using Mobile Technology,” the preliminary results of which recently matured into an NIH-funded R01 project “Mobile Technology to Identify Mechanisms Linking Genetic Variation and Depression” that will draw broad expertise from a multi-disciplinary team of medical and data science researchers.
“One of my goals is to use an integrated and rigorous approach to predict how a person’s health status will be in the near future,” Wu said.
Wu applies hierarchical Bayesian models to these problems, which he hopes will shed light on phenomena he describes as latent constructs that are “well-known, but less quantitatively understood, e.g., intelligence quotient (IQ) in psychology.”
As another example, he cites the current challenge in active surveillance of prostate cancer: distinguishing aggressive tumors requiring removal and/or radiation from indolent tumors permitting continued surveillance.
“The underlying status of aggressive versus indolent cancer is not observed, which needs to be learned from the results of biopsy and other clinical measurements,” he said. “The decisions and experience of urologists and their patients will greatly benefit from more accurate understanding of the tumor status… There are lots of scientific problems in clinical, biomedical, behavioral and social sciences where you have well-known but less quantitatively understood latent constructs. These are problems that Bayesian latent variable methods can formulate and address.”
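The kind of Bayesian latent variable formulation Wu describes can be sketched with a simple two-class latent class model: the unobserved tumor status (aggressive vs. indolent) is inferred from several imperfect binary clinical indicators. Everything below is an illustrative assumption, not Wu's actual model; for brevity it computes the posterior mode with EM, whereas a fully Bayesian treatment would place priors on the parameters and sample them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: latent status z (1 = aggressive), three noisy binary
# indicators (e.g., biopsy and other clinical measurements); synthetic data.
n, prev = 1000, 0.3
z = rng.random(n) < prev
sens = np.array([0.90, 0.80, 0.85])   # assumed P(test+ | aggressive)
spec = np.array([0.85, 0.90, 0.80])   # assumed P(test- | indolent)
Y = np.where(z[:, None], rng.random((n, 3)) < sens,
                         rng.random((n, 3)) < (1 - spec))

# EM for the two-class latent class model.
pi, theta1, theta0 = 0.5, np.full(3, 0.7), np.full(3, 0.3)
for _ in range(200):
    # E-step: posterior probability that each patient's latent status is aggressive
    l1 = pi * np.prod(np.where(Y, theta1, 1 - theta1), axis=1)
    l0 = (1 - pi) * np.prod(np.where(Y, theta0, 1 - theta0), axis=1)
    r = l1 / (l1 + l0)
    # M-step: update prevalence and class-conditional positivity rates
    pi = r.mean()
    theta1 = (r[:, None] * Y).sum(0) / r.sum()
    theta0 = ((1 - r)[:, None] * Y).sum(0) / (1 - r).sum()
```

The per-patient posterior `r` is exactly the quantity a urologist would want: the probability, given all the noisy measurements, that this patient's unobserved tumor status is aggressive.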
Just as Wu has a hand in two hot-button big data areas, he also sees himself as straddling the line between application and methodology.
He says the large number of data sources — sensors, mobile apps, test results, and questionnaires, to name just a few — results in richness as well as some “messiness” that needs new methodologies to adjust, integrate and translate to new scientific insights. At the same time, a valid new methodology for dealing with, for example, electronic health data, will likely find numerous different applications.
Wu says his approach was heavily influenced by his work on the Pneumonia Etiology Research for Child Health (PERCH) study, funded by the Gates Foundation, while he was at Johns Hopkins. Pneumonia is a clinical syndrome due to lung infection that can be caused by more than 30 different species of pathogens, including bacteria, viruses and fungi. The goal of the seven-country study, which enrolled more than 5,000 cases and 5,000 controls from Africa and Southeast Asia, is to estimate the frequency with which each pathogen caused pneumonia in the population and the probability of each individual being infected by each pathogen on the list in the lung.
“In most settings, it is extremely difficult to identify the pathogen by directly sampling from the site of infection – the child’s lung. PERCH therefore looked for other sources of evidence by standardizing and comprehensively testing biofluids collected from sites peripheral to the lung. Using hierarchical Bayesian models to infer disease etiology by integrating such a large trove of data was extremely fun and exciting,” he said.
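The core difficulty he describes, inferring what happened in the lung from imperfect peripheral tests, can be illustrated with a deliberately simplified identity for a single pathogen: the observed positivity rate among cases mixes true etiologic cases detected by the test with false positives, so the etiologic fraction can be recovered if the test's operating characteristics are known. The numbers and the single-pathogen simplification are assumptions for illustration; PERCH itself used far richer hierarchical Bayesian models over many pathogens and specimen types:

```python
# P(test+) = pi * sens + (1 - pi) * (1 - spec), where pi is the fraction of
# cases truly caused by the pathogen. Inverting gives an estimate of pi.
def etiologic_fraction(p_obs, sens, spec):
    """Solve the mixing identity above for the etiologic fraction pi."""
    return (p_obs + spec - 1) / (sens + spec - 1)

# Illustrative operating characteristics (assumed, not PERCH estimates)
sens, spec, true_pi = 0.8, 0.95, 0.25
p_obs = true_pi * sens + (1 - true_pi) * (1 - spec)  # forward model
pi_hat = etiologic_fraction(p_obs, sens, spec)       # recovers true_pi
```

The hierarchical Bayesian machinery in PERCH generalizes this one-line inversion: it propagates uncertainty in the sensitivities and specificities and combines evidence across many tests and sites rather than assuming them known.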
Wu’s initial interest in math, leading to biostatistics and now data science, stems from what he called a “greedy” desire to learn the guiding principles of how the world works by rigorous data science.
“If you have new problems, you can wait for other people to ask a clean math question, or you can go work with these messy problems and figure out interesting questions and their answers,” he said.
For more on Dr. Wu, see his profile on Michigan Experts.