Dr. Douville is a critical care anesthesiologist with an investigative background in bioinformatics and perioperative outcomes research. He studies techniques for using health care data, including genotype, to deliver personalized medicine in the perioperative period and the intensive care unit. His research has focused on ways technology can assist health care delivery to improve patient outcomes. This work began with designing microfluidic chips capable of recreating the fluid mechanics of atelectatic alveoli and monitoring the resulting barrier breakdown in real time. His interest in bioinformatics was sparked when he observed how methodology designed for tissue engineering could be adapted to the nano-scale to enable genomic analysis. Additionally, his engineering training provided the framework to apply data-driven modeling techniques, such as finite element analysis, to complex biological systems.
My research spans a wide range of topics, from computational social science to bioinformatics, where I do pattern recognition, data analysis, and predictive modeling. At the core of my work are machine learning methods, which I have used to address problems related to social networks, opinion mining, biomarker discovery, pharmacovigilance, drug repositioning, security analytics, genomics, food contamination, and concussion recovery. I am particularly interested in, and eager to collaborate on, the cybersecurity aspects of social media analytics, including but not limited to misinformation, bots, and fake news. In addition, I continue to pursue opportunities in bioinformatics, especially next-generation sequencing analysis, which can also be leveraged for phenotype prediction using machine learning methods.
A typical pipeline for developing and evaluating prediction models to identify malicious Android mobile apps in the market
Shobita Parthasarathy studies the governance of emerging science and technology as well as the politics of evidence and expertise in policymaking, in comparative and international perspective. She has a long-standing interest in the use and regulation of genomic and genetic data. Her first two books, Building Genetic Medicine: Breast Cancer, Technology, and the Comparative Politics of Health Care (MIT Press, 2007) and Patent Politics: Life Forms, Markets, and the Public Interest in the United States and Europe (University of Chicago Press, 2017), cover these themes. Using comparative and qualitative interpretive research methods, she studies the ethics, politics, and economics of data collection and interpretation. This includes concerns about consent and intellectual property in genomic databases, the social implications of commodifying data, the use of personal data in determining access to social services and health care, and the use of data for social justice and the public good.
Her current research focuses on the politics of inclusive innovation in international development, with a focus on India. She is interested in how political culture and ideology shape what counts as inclusive “innovation”, and in the implications for social and political order—particularly the “empowerment” of poor girls and women.
I develop probabilistic and statistical models to analyze genetic and genomic data. We use these methods to study evolution, natural selection, and human history. Recently, I have been interested in applying these techniques to study viral epidemics (e.g., HIV) and cancer.
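Probabilistic models of genetic data often build on simple population-genetic processes. As a hypothetical illustration of the kind of generative model involved (not code from this group), here is a minimal simulation of neutral allele-frequency drift under the Wright–Fisher model, in which each generation's gene copies are drawn by binomial sampling from the previous generation's allele frequency:

```python
import random

def wright_fisher(p0, pop_size, generations, seed=0):
    """Simulate neutral allele-frequency drift (Wright-Fisher model).

    p0          -- initial frequency of the allele
    pop_size    -- number of diploid individuals (2N gene copies)
    generations -- number of generations to simulate
    Returns the allele-frequency trajectory as a list of length generations + 1.
    """
    rng = random.Random(seed)
    n_copies = 2 * pop_size
    p = p0
    freqs = [p0]
    for _ in range(generations):
        # Each of the 2N gene copies in the next generation is drawn
        # independently from the current gene pool (binomial sampling).
        count = sum(rng.random() < p for _ in range(n_copies))
        p = count / n_copies
        freqs.append(p)
    return freqs

trajectory = wright_fisher(p0=0.5, pop_size=100, generations=50)
```

Inference methods for evolution and demographic history typically treat trajectories like this as latent and ask which model parameters best explain observed genetic variation.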
Dr. Liu’s research lab aims to develop machine learning approaches for real-world bioinformatics and medical informatics problems. We believe that computational methods are essential in order to understand many of these molecular biology problems, including the dynamics of genome conformation and nuclear organization, gene regulation, cellular networks, and the genetic basis of human diseases.
Dr. Jin Lu is an Assistant Professor of Computer and Information Science at the University of Michigan, Dearborn.
His major research interests include machine learning, data mining, optimization, matrix analysis, biomedical informatics, and health informatics. Two main directions are being pursued:
(1) Large-scale machine learning problems with data heterogeneity. Data heterogeneity is common across many high-impact application domains, ranging from recommender systems to computer vision, bioinformatics, and health informatics. Such heterogeneity can be present in a variety of forms, including (a) sample heterogeneity, where multiple sources of data samples are available as side information; (b) task heterogeneity, where multiple related learning tasks can be jointly learned to improve overall performance; (c) view heterogeneity, where complementary information is available from various sources. My research focuses on building efficient machine learning methods from such heterogeneous data, aiming to improve the learning model by making the best use of all data resources.
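As a simplified, hypothetical sketch of the task-heterogeneity idea above (illustrative only, not Dr. Lu's published methods), several related regression tasks can be fit jointly by penalizing each task's weight vector for straying from the shared task mean, so that tasks with little data borrow strength from the others:

```python
import numpy as np

# Synthetic data: three related regression tasks whose true weight
# vectors differ only slightly (a simple form of task heterogeneity).
rng = np.random.default_rng(0)
w_base = np.array([1.0, -2.0, 0.5])
Xs, ys, w_trues = [], [], []
for _ in range(3):
    w_t = w_base + rng.normal(scale=0.1, size=3)
    X = rng.normal(size=(60, 3))
    Xs.append(X)
    ys.append(X @ w_t + rng.normal(scale=0.05, size=60))
    w_trues.append(w_t)

def multitask_ridge(Xs, ys, lam=0.5, lr=0.05, steps=3000):
    """Jointly fit one linear model per task by gradient descent.

    The penalty lam * sum_t ||w_t - w_mean||^2 pulls each task's
    weights toward the shared mean, coupling the tasks together.
    """
    n_tasks, d = len(Xs), Xs[0].shape[1]
    W = np.zeros((n_tasks, d))
    for _ in range(steps):
        w_mean = W.mean(axis=0)  # treated as fixed within each sweep
        for t in range(n_tasks):
            resid = Xs[t] @ W[t] - ys[t]
            grad = 2 * Xs[t].T @ resid / len(ys[t]) + 2 * lam * (W[t] - w_mean)
            W[t] -= lr * grad
    return W

W = multitask_ridge(Xs, ys)
```

The coupling penalty biases each task slightly toward the group mean; since the true task weights here are close to one another, the joint fit recovers all three to within a small error.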
(2) Machine learning methods with provable guarantees. Machine learning has developed substantially and demonstrated great success across domains. Despite this practical success, many applications involve solving NP-hard problems with heuristics, and it is challenging to analyze whether a heuristic scheme carries any theoretical guarantee. My research interest is to employ granular data structure, e.g., sample clusters or features describing an aspect of the sample, to design new theoretically sound models and algorithms for machine learning problems.
I am Research Faculty with the Michigan Center for Integrative Research in Critical Care (MCIRCC). Our team builds predictive algorithms, analyzes signals, and implements statistical models to advance critical care medicine. We use electronic health record data to build predictive algorithms. One example is Predicting Intensive Care Transfers and other Unforeseen Events (PICTURE), which uses commonly collected vital signs and labs to predict patient deterioration on the general hospital floor. Additionally, our team collects waveforms from the University Hospital and stores these data using Amazon Web Services. We use these signals to build predictive algorithms that advance precision medicine. Our flagship algorithm, the Analytic for Hemodynamic Instability (AHI), predicts patient deterioration using a single-lead electrocardiogram signal. We use Bayesian methods to analyze metabolomic biomarker data from blood and exhaled breath to understand sepsis and acute respiratory distress syndrome. I also have an interest in statistical genetics.
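Deterioration-prediction models of this kind commonly map bedside measurements to a risk probability. The sketch below is purely illustrative: the feature names and coefficient values are invented for this example and are not the PICTURE or AHI models, which are fit to real EHR outcomes.

```python
import math

def deterioration_risk(vitals, weights, intercept):
    """Hypothetical logistic risk score from bedside vitals.

    vitals    -- dict of measurements (e.g., heart rate, respiratory rate)
    weights   -- per-feature coefficients (illustrative values only)
    intercept -- model intercept
    Returns a probability in (0, 1).
    """
    z = intercept + sum(weights[k] * v for k, v in vitals.items())
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative coefficients only; a real model learns these from outcomes.
weights = {"heart_rate": 0.03, "resp_rate": 0.10, "spo2": -0.08}
risk = deterioration_risk(
    {"heart_rate": 110, "resp_rate": 24, "spo2": 91},
    weights,
    intercept=-1.0,
)
```

With these made-up coefficients, tachycardia and tachypnea push the risk up while higher oxygen saturation pushes it down, mirroring the intuition behind vital-sign-based early-warning scores.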
Jeffrey Regier received a PhD in statistics from UC Berkeley (2016) and joined the University of Michigan as an assistant professor. His research interests include graphical models, Bayesian inference, high-performance computing, deep learning, astronomy, and genomics.