Explore ARC

S. Sriram

S. Sriram, PhD, is Associate Professor of Marketing in the University of Michigan Ross School of Business, Ann Arbor.

Prof. Sriram’s research interests are in the areas of brand and product portfolio management, multi-sided platforms, healthcare policy, and online education. His research uses state-of-the-art econometric methods to answer important managerial and policy-relevant questions. He has studied topics such as measuring and tracking brand equity, optimal allocation of resources to maintain long-term brand profitability, cannibalization, consumer adoption of technology products, and strategies for multi-sided platforms. Substantively, his research has spanned several industries, including consumer packaged goods, technology products and services, retailing, news media, the interface of healthcare and marketing, and MOOCs.

Samuel K Handelman

Samuel K Handelman, Ph.D., is Research Assistant Professor in the department of Internal Medicine, Gastroenterology, of Michigan Medicine at the University of Michigan, Ann Arbor. Prof. Handelman focuses on multi-omics approaches to drive precision/personalized therapy and to predict population-level differences in the effectiveness of interventions. He tends to favor regression-style and hierarchical-clustering approaches, partly because he has a background in both statistics and cladistics. His scientific monomania is for compensatory mechanisms and trade-offs in evolution, but he has a principled reason to focus on translational medicine: real understanding of these mechanisms goes all the way into the clinic. Anything less than clinical translation indicates that we don’t understand what drove the genetics of human populations.
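
As a toy illustration of the hierarchical-clustering style of analysis mentioned above, the sketch below clusters a small simulated samples-by-features matrix; the data, distance metric, and linkage choice are placeholders, not drawn from Prof. Handelman’s actual analyses.

```python
# Toy illustration of hierarchical clustering on a simulated feature matrix
# (samples x omics-like features); the real analyses use far richer data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
# Two simulated groups of samples with different mean feature profiles.
group_a = rng.normal(0.0, 1.0, size=(10, 50))
group_b = rng.normal(1.5, 1.0, size=(10, 50))
X = np.vstack([group_a, group_b])

# Average-linkage clustering on correlation distance, a common choice for omics-style data.
Z = linkage(pdist(X, metric="correlation"), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # samples 1-10 and 11-20 should mostly fall in separate clusters
```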

Antonios M. Koumpias

Antonios M. Koumpias, Ph.D., is Assistant Professor of Economics in the department of Social Sciences at the University of Michigan, Dearborn. Prof. Koumpias is an applied microeconomist with research interests in public economics, with an emphasis on behavioral tax compliance, and in health economics. In his research, he employs quasi-experimental methods to disentangle the causal impact of policy interventions that occur at the aggregate (e.g., states) or the individual (e.g., taxpayers) level in a comparative case study setting. Namely, he relies on regression discontinuity designs, regression kink designs, matching methods, and synthetic control methods to perform program evaluation that estimates the causal treatment effect of the policy in question. Examples include the use of a regression discontinuity design to estimate the impact of tax compliance reminders on payments of overdue income tax liabilities in Greece, matching methods to measure the influence of mass media campaigns in Pakistan on income tax filing, and the synthetic control method to evaluate the long-term effect of state Medicaid expansions on mortality.
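
As a stylized illustration of the regression discontinuity logic described above (not of any specific study), the sketch below simulates a sharp cutoff at which units begin receiving a treatment, such as a reminder letter, and recovers the jump in the outcome at the cutoff with a local linear regression; all variables, bandwidths, and effect sizes are invented.

```python
# Stylized sharp regression discontinuity sketch with simulated data.
# Units with running variable >= 0 are treated; the local treatment effect
# is the jump in the outcome at the cutoff. All numbers are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
running = rng.uniform(-1, 1, n)            # running variable, centered at the cutoff
treated = (running >= 0).astype(float)     # sharp assignment at the cutoff
outcome = 10 + 2 * running + 1.5 * treated + rng.normal(0, 1, n)

# Local linear regression within a bandwidth around the cutoff,
# allowing separate slopes on each side.
bw = 0.25
mask = np.abs(running) <= bw
X = np.column_stack([treated, running, treated * running])[mask]
fit = sm.OLS(outcome[mask], sm.add_constant(X)).fit()
print("Estimated jump at cutoff:", fit.params[1])  # coefficient on `treated`
```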

Evolution of Annual Changes in All-cause Childless Adult Mortality in New York State following 2001 State Medicaid Expansion

Kai S. Cortina

Kai S. Cortina, PhD, is Professor of Psychology in the College of Literature, Science, and the Arts at the University of Michigan, Ann Arbor.

Prof. Cortina’s major research revolves around understanding children’s and adolescents’ pathways into adulthood and the role of the educational system in this process. Academic and psycho-social development is analyzed from a life-span perspective, relying exclusively on longitudinal data that cover longer periods of time (e.g., from middle school to young adulthood). The hierarchical structure of the school system (student/classroom/school/district/state/nation) requires statistical tools that can handle this kind of nested data.
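
A minimal sketch of the kind of multilevel (mixed-effects) model such nested data call for, using simulated students nested in classrooms; the variable names and effect sizes are hypothetical, not taken from Prof. Cortina’s studies.

```python
# Minimal sketch of a two-level random-intercept model (students nested in
# classrooms), the kind of multilevel tool nested data call for.
# The data are simulated and the variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_classrooms, n_students = 40, 25
classroom = np.repeat(np.arange(n_classrooms), n_students)
classroom_effect = rng.normal(0, 2, n_classrooms)[classroom]   # level-2 variation
ses = rng.normal(0, 1, classroom.size)                         # student-level predictor
achievement = 50 + 3 * ses + classroom_effect + rng.normal(0, 5, classroom.size)

df = pd.DataFrame({"achievement": achievement, "ses": ses, "classroom": classroom})
model = smf.mixedlm("achievement ~ ses", df, groups=df["classroom"])
print(model.fit().summary())
```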

Jeffrey S. McCullough

Jeffrey S. McCullough, PhD, is Associate Professor in the department of Health Management and Policy in the School of Public Health at the University of Michigan, Ann Arbor.

Prof. McCullough’s research focuses on technology and innovation in health care, with an emphasis on information technology (IT), pharmaceuticals, and empirical methods. Many of his studies have explored the effect of electronic health record (EHR) systems on health care quality and productivity. While the short-run gains from health IT adoption may be modest, these technologies form the foundation for a health information infrastructure. Scientists are just beginning to understand how to harness and apply medical information, a problem complicated by the sheer complexity of medical care, the heterogeneity across patients, and the importance of treatment selection. His current work draws on methods from both machine learning and econometrics to address these issues. His current pharmaceutical studies examine the roles of consumer heterogeneity and learning about the value of products, as well as the effect of direct-to-consumer advertising on health.

The marginal effects of health IT on mortality by diagnosis and deciles of severity. We study the effect of hospitals' electronic health record (EHR) systems on patient outcomes. While we observe no benefits for the average patient, mortality falls significantly for high-risk patients in all EHR-sensitive conditions. These patterns, combined with findings from other analyses, suggest that EHR systems may be more effective at supporting care coordination and information management than at rules-based clinical decision support. McCullough, Parente, and Town, "Health information technology and patient outcomes: the role of information and labor coordination." RAND Journal of Economics, Vol. 47, no. 1 (Spring 2016).

Mingyan Liu

Mingyan Liu, PhD, is Professor of Electrical Engineering and Computer Science, College of Engineering, at the University of Michigan, Ann Arbor.

Prof. Liu’s research interest lies in optimal resource allocation, sequential decision theory, online and machine learning, performance modeling, analysis, and design of large-scale, decentralized, stochastic, and networked systems, using tools including stochastic control, optimization, game theory, and mechanism design. Her most recent research activities involve sequential learning, modeling and mining of large-scale Internet measurement data concerning cyber security, and incentive mechanisms for interdependent security games. Within this context, her research group is actively working on the following directions.

1. Cyber security incident forecast. The goal is to predict an organization’s likelihood of having a cyber security incident in the near future using a variety of externally collected Internet measurement data, some of which capture active maliciousness (e.g., spam and phishing/malware activities) while others capture more latent factors (e.g., misconfiguration and mismanagement). While machine learning techniques have been extensively used for detection in the cyber security literature, using them for prediction has rarely been done. This is the first study on the prediction of broad categories of security incidents at the organizational level. Our work to date shows that, with the right choice of feature set, highly accurate predictions can be achieved with a forecasting window of 6-12 months (a minimal sketch of such a supervised pipeline appears after this list). Given the increasing number of high-profile security incidents (Target, Home Depot, JP Morgan Chase, and Anthem, just to name a few) and the social and economic cost they inflict, this work will have a major impact on cyber security risk management.

2. Detecting propagation in temporal data and its application to identifying phishing activities. Phishing activities propagate from one network to another in a highly regular fashion, a phenomenon known as fast-flux, though how the destination networks are chosen by the malicious campaign remains unknown. An interesting challenge is whether one can use community detection methods to automatically extract the networks involved in a single phishing campaign; the ability to do so would be critical to forensic analysis. While there have been many results on detecting communities defined as subsets of relatively strongly connected entities, phishing activity exhibits a unique propagating property that is better captured using an epidemic model. By using a combination of epidemic modeling and regression, we can identify this type of propagating community with reasonable accuracy (a toy illustration of the epidemic-fit idea appears after this list); we are working on alternative methods as well.

3. Data-driven modeling of organizational and end-user security posture. We are working to build models that accurately capture the cyber security postures of end users as well as organizations, using large quantities of Internet measurement data. One domain concerns how software vendors disclose security vulnerabilities in their products, how they deploy software upgrades and patches, and, in turn, how end users install these patches; all of these elements combined lead to a better understanding of the overall state of vulnerability of a given machine and how it relates to user behaviors. Another domain concerns the interconnectedness of today’s Internet, which implies that what we see from one network is inevitably related to others. We use this connection to gain better insight into the conditions not just of a single network viewed in isolation, but of multiple networks viewed together.
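
To make the forecasting direction in item 1 concrete, here is a minimal sketch of a supervised incident-prediction pipeline on simulated data; the feature set, labels, and model choice are placeholders, not the ones used in the group’s published work.

```python
# Hypothetical sketch: predicting whether an organization will experience a
# security incident within a forecast window, from externally measured signals.
# Feature names and model choice are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_orgs = 2000

# Toy stand-ins for externally observable signals per organization:
# active maliciousness (blacklist/phishing reports) and latent factors (misconfigurations).
X = np.column_stack([
    rng.poisson(3, n_orgs),      # blacklist appearances
    rng.poisson(1, n_orgs),      # phishing/malware hosting reports
    rng.integers(0, 5, n_orgs),  # misconfigured services (e.g., open resolvers)
])
# Synthetic labels: incident risk increases with the measured signals.
p = 1 / (1 + np.exp(-(0.4 * X[:, 0] + 0.6 * X[:, 1] + 0.5 * X[:, 2] - 4)))
y = rng.binomial(1, p)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```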
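
And as a toy illustration of the propagating-community idea in item 2, the sketch below fits a logistic (epidemic-style) growth curve to hypothetical "first seen" times of networks in a suspected campaign; a good fit is suggestive evidence that the group spread in an epidemic-like fashion. The actual combination of epidemic modeling and regression used in the research may differ.

```python
# Illustrative sketch: test whether the times at which networks join a suspected
# phishing campaign follow an epidemic-like (logistic) growth curve.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Cumulative number of recruited networks under simple epidemic growth."""
    return K / (1 + np.exp(-r * (t - t0)))

# Hypothetical data: days at which each network in the campaign was first observed.
first_seen_days = np.sort(np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6, 7, 9, 12, 15]))
t = first_seen_days.astype(float)
cumulative = np.arange(1, len(t) + 1, dtype=float)

params, _ = curve_fit(logistic, t, cumulative, p0=[len(t), 1.0, np.median(t)], maxfev=10000)
residuals = cumulative - logistic(t, *params)
rss = float(np.sum(residuals ** 2))
tss = float(np.sum((cumulative - cumulative.mean()) ** 2))
print("Fitted (K, r, t0):", params, " R^2:", 1 - rss / tss)
```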

A predictive analytics approach to forecasting cyber security incidents. We start from Internet-scale measurement of the security postures of network entities. We also collect security incident reports to use as labels in a supervised learning framework. The collected data then go through extensive processing and domain-specific feature extraction. The features are then used to train a classifier that, given the features of a new entity, predicts the likelihood of a future incident for that entity. We are also actively seeking to understand the causal relationships among different features and the security interdependence among different network entities. Lastly, risk prediction helps us design better incentive mechanisms, which is another facet of our research in this domain.

Xuefeng (Chris) Liu

Dr. Liu has a broad research interest in the development of statistical models and techniques to address critical issues in health and nursing sciences; computational processing of Big Data in clinical informatics and genomics; and statistical modeling and assessment of risk factors (e.g., hypertension, diabetes, central obesity, smoking) for adverse cardiovascular and renal outcomes and for maternal and child health. His expertise in statistics includes, but is not limited to, repeated measures models with missing data, multilevel models, latent variable models, and Bayesian and computational statistics. Dr. Liu has led and co-led several NIH-funded projects on the quality of care for hypertensive patients.

Vijay Subramanian

Professor Subramanian is interested in a variety of stochastic modeling, decision- and control-theoretic, and applied probability questions concerning networks. Examples include analysis of random graphs, analysis of processes such as cascades on random graphs, network economics, analysis of e-commerce systems, mean-field games, network games, telecommunication networks, load balancing in large server farms, and information assimilation, aggregation, and flow in networks, especially with strategic users.

Michael Elliott

Michael Elliott is Professor of Biostatistics at the University of Michigan School of Public Health and Research Scientist at the Institute for Social Research. Dr. Elliott’s statistical research interests focus on the broad topic of “missing data,” including the design and analysis of sample surveys, causal and counterfactual inference, and latent variable models. He has worked closely with collaborators in injury research, pediatrics, women’s health, and the social determinants of physical and mental health. Dr. Elliott serves as an Associate Editor for the Journal of the American Statistical Association. He is currently serving as a co-investigator on the MIDAS-affiliated Reinventing Urban Transportation and Mobility project, working to develop methods to improve the representativeness of naturalistic driving data.

Timothy McKay

I am a data scientist with extensive and varied experience drawing inference from large data sets. In education research, I work to understand and improve postsecondary student outcomes using the rich, extensive, and complex digital data produced in the course of educating students in the 21st century. In 2011, we launched the E2Coach computer-tailored support system, and in 2014, we began the REBUILD project, a college-wide effort to increase the use of evidence-based methods in introductory STEM courses. In 2015, we launched the Digital Innovation Greenhouse, an education technology accelerator within the UM Office of Digital Education and Innovation. In astrophysics, my main research tools have been the Sloan Digital Sky Survey, the Dark Energy Survey, and the simulations that support them both. We use these tools to probe the growth and nature of cosmic structure as well as the expansion history of the Universe, especially through studies of galaxy clusters. I have also studied astrophysical transients as part of the Robotic Optical Transient Search Experiment.

This image, drawn from a network analysis of 127,653,500 connections among 57,752 students, shows the relative degrees of connection for students in the 19 schools and colleges that constitute the University of Michigan. It provides a 30,000-foot overview of the connection and isolation of various groups of students at Michigan. (Drawn from the senior thesis work of UM Computer Science major Kar Epker)
