
Patrick Schloss


The Schloss lab is broadly interested in beneficial and pathogenic host-microbiome interactions, with the goal of improving our understanding of how the microbiome can be used to reach translational outcomes in the prevention, detection, and treatment of colorectal cancer, Crohn’s disease, and Clostridium difficile infection. To address these questions, the lab tests traditional ecological theory in the microbial context using a systems biology approach. Specifically, the laboratory specializes in studies involving human subjects and animal models to understand how biological diversity affects community function, using a variety of culture-independent genomics techniques including sequencing of 16S rRNA gene fragments, metagenomics, and metatranscriptomics. In addition, the lab uses metabolomics to understand the functional role of the gut microbiota in states of health and disease. To support these efforts, the lab develops and applies bioinformatic tools to facilitate its analyses. Most notable is the mothur software package (https://www.mothur.org), one of the most widely used tools for analyzing microbiome data, cited more than 7,300 times since its initial publication in 2009. The Schloss lab deftly merges cutting-edge wet-lab techniques for collecting data with computational tools for synthesizing those data to answer important biological questions.

Given the explosion in microbiome research over the past 15 years, the Schloss lab has also stood at the center of a major effort to train interdisciplinary scientists in applying computational tools to study complex biological systems. These efforts have centered around developing reproducible research skills and applying modern data visualization techniques. An outgrowth of these efforts at the University of Michigan has been the institutionalization of The Carpentries organization on campus (https://carpentries.org), which specializes in peer-to-peer instruction of programming tools and techniques to foster better reproducibility and build a community of practitioners.

The Schloss lab uses computational tools to integrate multi-omics data in a culture-independent approach to understand how bacteria interact with each other and their host to drive processes such as colorectal cancer and susceptibility to Clostridium difficile infection.

Yuki Shiraito


Yuki Shiraito works primarily in the field of political methodology. His research interests center on the development and applications of Bayesian statistical models and large-scale computational algorithms for data analysis. He has applied these quantitative methods to political science research including a survey experiment on public support for conflicting parties in civil war, heterogeneous effects of indiscriminate state violence, and the detection of text diffusion among a large set of legislative bills.

After completing his undergraduate education at the University of Tokyo, Yuki received his Ph.D. in Politics (2017) from Princeton University. Before joining the University of Michigan as an Assistant Professor in September 2018, he served as a Postdoctoral Fellow in the Program of Quantitative Social Science at Dartmouth College.

S. Sriram


S. Sriram, PhD, is Associate Professor of Marketing in the University of Michigan Ross School of Business, Ann Arbor.

Prof. Sriram’s research interests are in the areas of brand and product portfolio management, multi-sided platforms, healthcare policy, and online education. His research uses state-of-the-art econometric methods to answer important managerial and policy-relevant questions. He has studied topics such as measuring and tracking brand equity and the optimal allocation of resources to maintain long-term brand profitability, cannibalization, consumer adoption of technology products, and strategies for multi-sided platforms. Substantively, his research has spanned several industries including consumer packaged goods, technology products and services, retailing, news media, the interface of healthcare and marketing, and MOOCs.

Samuel K Handelman


Samuel K Handelman, Ph.D., is Research Assistant Professor in the department of Internal Medicine, Gastroenterology, of Michigan Medicine at the University of Michigan, Ann Arbor. Prof. Handelman is focused on multi-omics approaches to drive precision/personalized therapy and to predict population-level differences in the effectiveness of interventions. He tends to favor regression-style and hierarchical-clustering approaches, partially because he has a background in both statistics and cladistics. His scientific monomania is for compensatory mechanisms and trade-offs in evolution, but he has a principled reason to focus on translational medicine: real understanding of these mechanisms goes all the way into the clinic. Anything less than clinical translation indicates that we don’t understand what drove the genetics of human populations.
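As a minimal, hypothetical sketch of the hierarchical-clustering style of analysis mentioned above (not code from Prof. Handelman's projects), the following Python snippet clusters a simulated samples-by-features, omics-like matrix with SciPy; the matrix, the two-group structure, and the number of clusters are all invented for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
# Simulated 40 samples x 100 features, with two loosely separated groups of samples.
data = np.vstack([rng.normal(0.0, 1.0, (20, 100)),
                  rng.normal(0.8, 1.0, (20, 100))])

# Agglomerative clustering of samples with Ward linkage on Euclidean distances.
Z = linkage(data, method="ward")

# Cut the dendrogram into two clusters and inspect the sample assignments.
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```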

Antonios M. Koumpias


Antonios M. Koumpias, Ph.D., is Assistant Professor of Economics in the department of Social Sciences at the University of Michigan, Dearborn. Prof. Koumpias is an applied microeconomist with research interests in public economics, with an emphasis on behavioral tax compliance, and in health economics. In his research, he employs quasi-experimental methods to disentangle the causal impact of policy interventions that occur at the aggregate (e.g., states) or the individual (e.g., taxpayers) level in a comparative case study setting. Namely, he relies on regression discontinuity designs, regression kink designs, matching methods, and synthetic control methods to perform program evaluation that estimates the causal treatment effect of the policy in question. Examples include the use of a regression discontinuity design to estimate the impact of tax compliance reminders on payments of overdue income tax liabilities in Greece, matching methods to measure the influence of mass media campaigns in Pakistan on income tax filing, and the synthetic control method to evaluate the long-term effect of state Medicaid expansions on mortality.
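As a rough, hedged illustration of the regression discontinuity logic described above (not code from the cited studies), the sketch below recovers a treatment effect by local linear regression around a cutoff on simulated data; the variable names, the bandwidth, and the true effect size are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
running = rng.uniform(-100, 100, n)    # running variable relative to the cutoff at 0
treated = (running >= 0).astype(int)   # sharp assignment: treated at or above the cutoff
payment = 10 + 0.02 * running + 3.0 * treated + rng.normal(0, 2, n)  # simulated outcome, true effect = 3

df = pd.DataFrame({"payment": payment, "running": running, "treated": treated})

# Keep only observations within a bandwidth of the cutoff and fit a local linear
# regression with separate slopes on each side; the coefficient on `treated`
# is the regression discontinuity estimate of the treatment effect.
bandwidth = 25
local = df[df["running"].abs() <= bandwidth]
model = smf.ols("payment ~ treated + running + treated:running", data=local).fit()
print(model.params["treated"], model.bse["treated"])
```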

Evolution of Annual Changes in All-cause Childless Adult Mortality in New York State following 2001 State Medicaid Expansion

Kai S. Cortina


Kai S. Cortina, PhD, is Professor of Psychology in the College of Literature, Science, and the Arts at the University of Michigan, Ann Arbor.

Prof. Cortina’s major research revolves around the understanding of children’s and adolescents’ pathways into adulthood and the role of the educational system in this process. Academic and psycho-social development is analyzed from a life-span perspective, relying exclusively on longitudinal data collected over longer periods of time (e.g., from middle school to young adulthood). The hierarchical structure of the school system (student/classroom/school/district/state/nation) requires the use of statistical tools that can handle this kind of nested data.
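A minimal sketch of the kind of multilevel (mixed-effects) model such nested data call for, using simulated students nested in schools and the statsmodels library; the variables, group sizes, and effect sizes are illustrative assumptions, not data from Prof. Cortina's studies.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_schools, n_students = 50, 30
school = np.repeat(np.arange(n_schools), n_students)   # school identifier for each student
school_effect = rng.normal(0, 2, n_schools)[school]    # shared school-level deviation
ses = rng.normal(0, 1, n_schools * n_students)         # student-level predictor
achievement = 50 + 3 * ses + school_effect + rng.normal(0, 5, n_schools * n_students)

df = pd.DataFrame({"achievement": achievement, "ses": ses, "school": school})

# Random-intercept model: a fixed effect of the student-level predictor plus a
# random intercept for each school, accounting for students nested in schools.
model = smf.mixedlm("achievement ~ ses", data=df, groups=df["school"]).fit()
print(model.summary())
```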

 

Jeffrey S. McCullough


Jeffrey S. McCullough, PhD, is Associate Professor in the department of Health Management and Policy in the School of Public Health at the University of Michigan, Ann Arbor.

Prof. McCullough’s research focuses on technology and innovation in health care, with an emphasis on information technology (IT), pharmaceuticals, and empirical methods. Many of his studies have explored the effect of electronic health record (EHR) systems on health care quality and productivity. While the short-run gains from health IT adoption may be modest, these technologies form the foundation for a health information infrastructure. Scientists are just beginning to understand how to harness and apply medical information, a problem complicated by the sheer complexity of medical care, the heterogeneity across patients, and the importance of treatment selection. His current work draws on methods from both machine learning and econometrics to address these issues. Current pharmaceutical studies examine the roles of consumer heterogeneity and learning about the value of products, as well as the effect of direct-to-consumer advertising on health.

The marginal effects of health IT on mortality by diagnosis and deciles of severity. We study the effect of hospitals’ electronic health record (EHR) systems on patient outcomes. While we observe no benefits for the average patient, mortality falls significantly for high-risk patients in all EHR-sensitive conditions. These patterns, combined with findings from other analyses, suggest that EHR systems may be more effective at supporting care coordination and information management than at rules-based clinical decision support. McCullough, Parente, and Town, “Health information technology and patient outcomes: the role of information and labor coordination.” RAND Journal of Economics, Vol. 47, no. 1 (Spring 2016).


Mingyan Liu


Mingyan Liu, PhD, is Professor of Electrical Engineering and Computer Science, College of Engineering, at the University of Michigan, Ann Arbor.

Prof. Liu’s research interest lies in optimal resource allocation, sequential decision theory, online and machine learning, performance modeling, analysis, and design of large-scale, decentralized, stochastic and networked systems, using tools including stochastic control, optimization, game theory and mechanism design. Her most recent research activities involve sequential learning, modeling and mining of large scale Internet measurement data concerning cyber security, and incentive mechanisms for inter-dependent security games. Within this context, her research group is actively working on the following directions.

1. Cyber security incident forecast. The goal is to predict an organization’s likelihood of having a cyber security incident in the near future using a variety of externally collected Internet measurement data, some of which capture active maliciousness (e.g., spam and phishing/malware activities) while others capture more latent factors (e.g., misconfiguration and mismanagement). While machine learning techniques have been used extensively for detection in the cyber security literature, using them for prediction has rarely been done. This is the first study on the prediction of broad categories of security incidents at the organizational level. Our work to date shows that with the right choice of feature set, highly accurate predictions can be achieved with a forecasting window of 6-12 months. Given the increasing number of high-profile security incidents (Target, Home Depot, JP Morgan Chase, and Anthem, just to name a few) and the social and economic cost they inflict, this work will have a major impact on cyber security risk management.

2. Detect propagation in temporal data and its application to identifying phishing activities. Phishing activities propagate from one network to another in a highly regular fashion, a phenomenon known as fast-flux, though how the destination networks are chosen by the malicious campaign remains unknown. An interesting challenge arises as to whether one can use community detection methods to automatically extract those networks involved in a single phishing campaign; the ability to do so would be critical to forensic analysis. While there have been many results on detecting communities defined as subsets of relatively strongly connected entities, the phishing activity exhibits a unique propagating property that is better captured using an epidemic model. By using a combination of epidemic modeling and regression we can identify this type of propagating community with reasonable accuracy; we are working on alternative methods as well.

3. Data-driven modeling of organizational and end-user security posture. We are working to build models that accurately capture the cyber security postures of end-users as well as organizations, using large quantities of Internet measurement data. One domain concerns how software vendors disclose security vulnerabilities in their products, how they deploy software upgrades and patches, and in turn, how end users install these patches; all these elements combined lead to a better understanding of the overall state of vulnerability of a given machine and how that relates to user behaviors. Another domain concerns the interconnectedness of today’s Internet, which implies that what we see from one network is inevitably related to others. We use this connection to gain better insight into the conditions of not just a single network viewed in isolation, but multiple networks viewed together.

A predictive analytics approach to forecasting cyber security incidents. We start from Internet-scale measurement of the security postures of network entities. We also collect security incident reports to use as labels in a supervised learning framework. The collected data then go through extensive processing and domain-specific feature extraction. The features are then used to train a classifier that, given the features of a new entity, predicts the likelihood of a future incident for that entity. We are also actively seeking to understand the causal relationships among different features and the security interdependence among different network entities. Lastly, risk prediction helps us design better incentive mechanisms, which is another facet of our research in this domain.
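A hedged sketch of the supervised-learning step in the pipeline described above, using scikit-learn on simulated data; the feature matrix, incident labels, and choice of classifier are assumptions made for illustration and are not the group's actual measurement data or models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_orgs = 2_000
# Hypothetical per-organization features extracted from external measurements
# (e.g., mismanagement symptoms, blacklist counts); here simply simulated.
X = rng.normal(size=(n_orgs, 10))
# Simulated labels: whether an incident report falls in the forecast window.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n_orgs) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Predicted likelihood of a future incident for each held-out organization.
incident_risk = clf.predict_proba(X_test)[:, 1]
print("Held-out AUC:", roc_auc_score(y_test, incident_risk))
```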


Xuefeng (Chris) Liu


Dr. Liu has a broad research interest in the development of statistical models and techniques to address critical issues in health and nursing sciences, computational processing of big data in clinical informatics and genomics, and statistical modeling and assessment of risk factors (e.g., hypertension, diabetes, central obesity, smoking) for adverse cardiovascular and renal outcomes and for maternal and child health. His expertise in statistics includes, but is not limited to, repeated measures models with missing data, multilevel models, latent variable models, and Bayesian and computational statistics. Dr. Liu has led and co-led several NIH-funded projects on the quality of care for hypertensive patients.

Vijay Subramanian


Professor Subramanian is interested in a variety of stochastic modeling, decision and control theoretic, and applied probability questions concerned with networks. Examples include analysis of random graphs, analysis of processes like cascades on random graphs, network economics, analysis of e-commerce systems, mean-field games, network games, telecommunication networks, load-balancing in large server farms, and information assimilation, aggregation and flow in networks especially with strategic users.
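As an illustrative sketch (not Prof. Subramanian's models), the snippet below simulates an independent-cascade process on an Erdős–Rényi random graph using networkx; the graph size, edge probability, per-edge spread probability, and single-seed assumption are all arbitrary.

```python
import random
import networkx as nx

random.seed(0)
G = nx.erdos_renyi_graph(n=1_000, p=0.01, seed=0)   # random graph
p_spread = 0.05                                      # per-edge activation probability

active = {random.choice(list(G.nodes))}              # a single randomly chosen seed node
frontier = set(active)
while frontier:
    newly_active = set()
    for u in frontier:
        for v in G.neighbors(u):
            # Each newly active node gets one chance to activate each inactive neighbor.
            if v not in active and random.random() < p_spread:
                newly_active.add(v)
    active |= newly_active
    frontier = newly_active

print(f"Cascade reached {len(active)} of {G.number_of_nodes()} nodes")
```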