Kevin’s research focuses on identifying and interpreting the mechanisms responsible for the complex dynamics observed in ecological and epidemiological systems, using data science and modeling approaches. He is primarily interested in emerging and endemic pathogens, such as SARS-CoV-2, influenza, vampire bat rabies, and childhood infectious diseases such as chickenpox. He uses statistical and mechanistic models to fit, forecast, and occasionally back-cast expected disease dynamics under a range of conditions, such as vaccination or other control measures.
Andrew uses mathematical and statistical modeling to address public health problems. As a mathematical epidemiologist, he works on a wide range of topics (mostly related to infectious diseases and to cancer prevention and survival) using an array of computational and statistical tools, including mechanistic differential equations and multistate stochastic processes. Rigorous consideration of parameter identifiability, parameter estimation, and uncertainty quantification is an underlying theme in Andrew’s work.
My research primarily focuses on four main themes: 1) development of methods for risk prediction and for analyzing treatment effect heterogeneity, 2) Bayesian nonparametrics and Bayesian machine learning methods, with particular emphasis on their use in survival analysis, 3) statistical methods for analyzing heterogeneity in risk-benefit profiles and for supporting individualized treatment decisions, and 4) development of empirical Bayes and shrinkage methods for high-dimensional statistical applications. I am also broadly interested in collaborative biomedical research, with a focus on the application of statistics in cancer research.
My broad research interests are in multi-agent systems, computational economics and finance, and artificial intelligence. I apply techniques from algorithmic game theory, statistical machine learning, and decision theory to a variety of problems at the intersection of the computational and social sciences. A major focus of my research has been the design and analysis of market-making algorithms for financial markets and, in particular, prediction markets: incentive-based mechanisms for aggregating data in the form of private beliefs about uncertain events (e.g., the outcome of an election) distributed among strategic agents. I use both analytical and simulation-based methods to investigate the impact of factors such as wealth, risk attitude, and manipulative behavior on information aggregation in market ecosystems. Another line of work I am pursuing involves algorithms for allocating resources based on preference data collected from potential recipients while satisfying efficiency, fairness, and diversity criteria; my joint work on ethnicity quotas in Singapore public housing allocation is a notable example in this vein. More recently, I have become involved in research on empirical game-theoretic analysis, a family of methods for building tractable models of complex, procedurally defined games from empirical or simulated payoff data and using those models to reason about game outcomes.
Catherine H. Hausman is an Associate Professor in the School of Public Policy and a Research Associate at the National Bureau of Economic Research. She uses causal inference, related statistical methods, and microeconomic modeling to answer questions at the intersection of energy markets, environmental quality, climate change, and public policy.
Recent projects have looked at inequality and environmental quality, the natural gas sector’s role in methane leaks, the impact of climate change on the electricity grid, and the effects of nuclear power plant closures. Her research has appeared in the American Economic Journal: Applied Economics, the American Economic Journal: Economic Policy, the Brookings Papers on Economic Activity, and the Proceedings of the National Academy of Sciences.
Cyber-security is a complex and multi-dimensional research field. My research style is inter-disciplinary, rooted primarily in economics, econometrics, data science (AI/ML and Bayesian and frequentist statistics), game theory, and network science. I investigate socially pressing issues affecting the quality of cyber-risk management in modern networked and distributed engineering systems, such as IoT-driven critical infrastructures, cloud-based service networks, and app-based systems (e.g., mobile commerce, smart homes), to name a few. I take delight in proposing data-driven, rigorous, and interdisciplinary solutions both to existing fundamental challenges that pose a practical bottleneck to (cost-)effective cyber-risk management and to emerging cyber-security and privacy issues that might plague modern networked engineering systems. I strive for originality, practical significance, and mathematical rigor in my solutions. One of my primary goals is to develop a conceptual grasp of complex, multi-dimensional information security and privacy problems in a way that helps, informs, and empowers practitioners and policy makers to take the right steps toward making cyberspace more secure.
My lab has two main areas of focus: molecular characteristics of head and neck cancer, and the intersection of regulatory genomics and pathway analysis. In head and neck cancer, we study tumor subtypes and biomarkers of prognosis, treatment response, and recurrence. We apply integrative omics analyses, dimension reduction methods, and prediction techniques, with the ultimate goal of identifying patient subsets who would benefit from either an additional targeted treatment or a de-escalated treatment that improves quality of life. For regulatory genomics and pathway analysis, we develop statistical tests that account for important covariates and other variables used to weight observations.
My methodological research focuses on developing statistical methods for routinely collected healthcare databases, such as electronic health records (EHR) and claims data. I aim to tackle the unique challenges that arise from the secondary use of real-world data for research purposes. Specifically, I develop novel causal inference methods and semiparametric efficiency theory that harness the full potential of EHR data to address comparative effectiveness and safety questions. I also develop scalable, automated pipelines for the curation and harmonization of EHR data across healthcare systems and coding systems.
Fred Conrad’s research concerns the development of new methods and data sources for conducting social research. His work largely focuses on survey methodology, but he also explores the use of social media content as a complement to survey data and as a source of large-scale qualitative insights. His focus is on data quality and reducing measurement error. For example, live video interviews promote more thoughtful responses, such as less straightlining (the tendency to give the same answer to a battery of survey questions), but they also promote less candor when answering questions on sensitive topics. Measurement error in social media includes misclassification in the automated interpretation of content using methods such as sentiment analysis and topic modeling, as well as selective self-presentation (posting only flattering content). Equally challenging is not knowing the extent to which users differ from the population to which one might wish to generalize results.