Explore ARC

Walter S. Lasecki


My lab creates systems that use a combination of both human and machine computation to solve problems quickly and reliably. We have introduced the idea of continuous real-time crowdsourcing, as well as the ‘crowd agent’ model, which uses computer-mediated groups of people submitting input simultaneously to create a collective intelligence capable of completing tasks better than any constituent member.

Peter Lenk


Prof. Lenk develops Bayesian models that disaggregate data to the level of the individual. He also studies Bayesian nonparametric methods and currently considers shape constraints. Prof. Lenk teaches and uses data mining methods such as recursive partitioning and neural networks.
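
The individual-level estimates such disaggregate Bayesian models produce are classic partial-pooling quantities. As an illustrative sketch (not Prof. Lenk's actual models), the posterior mean in a normal-normal hierarchical model shrinks each individual's sample mean toward the population mean, with the weight set by the relative precisions:

```python
def shrinkage_estimate(y_bar_i, n_i, mu, tau2, sigma2):
    """Posterior mean for individual i's effect in a normal-normal
    hierarchical model: a precision-weighted blend of the individual's
    sample mean (y_bar_i over n_i observations, noise variance sigma2)
    and the population mean mu (between-individual variance tau2)."""
    w = (n_i / sigma2) / (n_i / sigma2 + 1.0 / tau2)
    return w * y_bar_i + (1.0 - w) * mu
```

With no data on an individual (n_i = 0) the estimate falls back to the population mean; with abundant data it approaches the individual's own sample mean, which is what makes individual-level inference feasible even for sparsely observed customers.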

Timothy McKay


I am a data scientist, with extensive and varied experience drawing inference from large data sets. In education research, I work to understand and improve postsecondary student outcomes using the rich, extensive, and complex digital data produced in the course of educating students in the 21st century. In 2011, we launched the E2Coach computer-tailored support system, and in 2014, we began the REBUILD project, a college-wide effort to increase the use of evidence-based methods in introductory STEM courses. In 2015, we launched the Digital Innovation Greenhouse, an education technology accelerator within the UM Office of Digital Education and Innovation. In astrophysics, my main research tools have been the Sloan Digital Sky Survey, the Dark Energy Survey, and the simulations which support them both. We use these tools to probe the growth and nature of cosmic structure as well as the expansion history of the Universe, especially through studies of galaxy clusters. I have also studied astrophysical transients as part of the Robotic Optical Transient Search Experiment.

This image, drawn from a network analysis of 127,653,500 connections among 57,752 students, shows the relative degrees of connection for students in the 19 schools and colleges which constitute the University of Michigan. It provides a 30,000-foot overview of the connection and isolation of various groups of students at Michigan. (Drawn from the senior thesis work of UM Computer Science major Kar Epker)


Issam El Naqa


Our lab’s research interests are in the areas of oncology bioinformatics, multimodality image analysis, and treatment outcome modeling. We operate at the interface of physics, biology, and engineering, with the primary motivation of designing and developing novel approaches to unravel cancer patients’ response to chemoradiotherapy treatment. We integrate physical, biological, and imaging information into advanced mathematical models, using combined top-down and bottom-up approaches that apply techniques of machine learning and complex systems analysis to first principles, and we evaluate their performance on clinical and preclinical data. These models could then be used to personalize cancer patients’ chemoradiotherapy treatment based on predicted benefit and risk, and to help understand the underlying biological response to disease. These research interests are divided into the following themes:

  • Bioinformatics: design and develop large-scale data-mining methods and software tools to identify robust biomarkers (-omics) of chemoradiotherapy treatment outcomes from clinical and preclinical data.
  • Multimodality image-guided targeting and adaptive radiotherapy: design and develop hardware tools and software algorithms for multimodality image analysis and understanding, feature extraction for outcome prediction (radiomics), real-time treatment optimization and targeting.
  • Radiobiology: design and develop predictive models of tumor and normal tissue response to radiotherapy. Investigate the application of these methods to develop therapeutic interventions for protection of normal tissue toxicities.

Eric Schwartz


Eric Schwartz, PhD, is Associate Professor of Marketing in the Ross School of Business at the University of Michigan, Ann Arbor.

Prof. Schwartz’s expertise focuses on predicting customer behavior, understanding its drivers, and examining how firms actively manage their customer relationships through interactive marketing. His research in customer analytics spans managerial applications, including online display advertising, email marketing, video consumption, and word-of-mouth. The quantitative methods he uses are primarily Bayesian statistics, machine learning, dynamic programming, and field experiments. His current projects aim to optimize firms’ A/B testing and adaptive marketing experiments using a multi-armed bandit framework. As marketers expand their ability to run tests of outbound marketing activity (e.g., sending emails/direct mail, serving display ads, customizing websites), this work guides marketers to be continuously “earning while learning.” While interacting with students and managers, Professor Schwartz works to illustrate how today’s marketers bridge the gap between technical skills and data-driven decision making.
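
The “earning while learning” idea can be illustrated with Thompson sampling, one standard multi-armed bandit policy (a hypothetical sketch, not Prof. Schwartz’s actual method). Each marketing variant’s unknown success rate gets a Beta posterior; each round, the policy samples a plausible rate per variant and plays the highest, so traffic concentrates on winners while the test keeps running:

```python
import random

def thompson_sampling(true_rates, n_rounds=5000, seed=0):
    """Allocate n_rounds of traffic among variants (e.g., email
    creatives) with Beta-posterior Thompson sampling. true_rates are
    the unknown conversion rates used only to simulate feedback."""
    rng = random.Random(seed)
    k = len(true_rates)
    successes = [1] * k  # Beta(1, 1) uniform priors
    failures = [1] * k
    total_reward = 0
    for _ in range(n_rounds):
        # Sample a plausible rate for each arm; play the best draw.
        draws = [rng.betavariate(successes[i], failures[i]) for i in range(k)]
        arm = draws.index(max(draws))
        if rng.random() < true_rates[arm]:  # simulated conversion
            successes[arm] += 1
            total_reward += 1
        else:
            failures[arm] += 1
    return successes, failures, total_reward
```

Unlike a fixed 50/50 A/B split, the posterior sampling itself balances exploration and exploitation: weak variants still get occasional traffic, but most impressions flow to the variant the data currently favors.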

Daniel Almirall


Daniel Almirall, Ph.D., is Assistant Professor in the Survey Research Center and Faculty Associate in the Population Studies Center in the Institute for Social Research at the University of Michigan.

Prof. Almirall’s current methodological research interests lie in the broad area of causal inference. He is particularly interested in methods for causal inference using longitudinal data sets in which treatments, covariates, and outcomes are all time-varying. He is also interested in developing statistical methods that can be used to form adaptive interventions, sometimes known as dynamic treatment regimes. An adaptive intervention is a sequence of individually tailored decision rules that specify whether, how, and when to alter the intensity, type, or delivery of treatment at critical decision points in the medical care process. Adaptive interventions are particularly well-suited for the management of chronic diseases, but can be used in any clinical setting in which sequential medical decision making is essential for the welfare of the patient. They hold the promise of enhancing clinical practice by flexibly tailoring treatments to patients when they need it most, and in the most appropriate dose, thereby improving the efficacy and effectiveness of treatment.
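
To make the idea of a tailored decision rule concrete, here is a purely hypothetical example of the kind of rule an adaptive intervention strings together; the week-8 checkpoint, the tailoring variables, and the treatment options are all invented for illustration:

```python
def week8_decision_rule(responded, adherent):
    """Hypothetical decision rule at a single critical decision point:
    step responders down, and tailor the next-stage treatment for
    non-responders based on whether they adhered to stage one."""
    if responded:
        return "step down to maintenance"
    if not adherent:
        return "augment with adherence counseling"
    return "intensify treatment"
```

A full adaptive intervention is a sequence of such rules, one per decision point, and a SMART design randomizes participants at each point so the data can tell these options apart.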

Study Design Interests: In addition to developing new statistical methodologies, Prof. Almirall devotes a portion of his research to the design of sequential multiple assignment randomized trials (SMARTs). SMARTs are randomized trial designs that give rise to high-quality data that can be used to develop and optimize adaptive interventions.

Substantive Interests: As an investigator and methodologist in the Institute for Social Research, Prof. Almirall takes part in research in a wide variety of areas of social science and treatment (or interventions) research. He is particularly interested in the substantive areas of mental health (depression, anxiety) and substance abuse, especially as related to children and adolescents.

Pamela Giustinelli


Pamela Giustinelli is an Adjunct Research Assistant Professor in the Survey Research Center, Institute for Social Research, at the University of Michigan, Ann Arbor.

Pamela is interested in the modeling, empirical analysis, and counterfactual policy analysis of individual and multilateral decision making under uncertainty and ambiguity, especially as it applies to family and human capital contexts. She is also interested in survey methodology, particularly as it relates to this line of research. Here are some important questions in her research agenda:

  • How do preferences, beliefs, choice sets, and other elements of a choice situation determine what choices people make and also how they make those choices? (That is, the “decision rules,” “decision protocols,” or “modes of interactions” they use.) And how are those elements formed?
  • What information do individuals and groups have or use when making decisions under uncertainty? And what information is or is not shared among decision makers in multilateral settings?
  • What are the implications of the above points for policy?
  • To inform modeling, identification, and prediction of choice behaviors, what components of individuals’ and groups’ decision processes can we sensibly measure in surveys? From whom? And in what formats?

Data science methodology: Survey design for elicitation of components of human decision processes and interactions under uncertainty/ambiguity

Data science applications: Human capital (school choice, labor supply, end-of-life living arrangements)

Satinder Singh Baveja


My main research interest is in the old-fashioned goal of Artificial Intelligence (AI), that of building autonomous agents that can learn to be broadly competent in complex, dynamic, and uncertain environments. The field of reinforcement learning (RL) has focused on this goal and accordingly my deepest contributions are in RL.
A very recent effort combines Deep Learning and Reinforcement Learning.
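
As a minimal illustration of the reinforcement learning setting (a textbook tabular sketch, not this lab’s deep RL work), Q-learning on a toy chain MDP learns, from reward alone, to walk toward a rewarding terminal state:

```python
import random

def q_learning_chain(n_states=5, episodes=2000, alpha=0.5,
                     gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a chain MDP: start at state 0; action 1
    moves right, action 0 moves left; reward 1 only on reaching the
    terminal state n_states - 1. Returns the learned Q-table."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # Epsilon-greedy action selection balances exploration
            # of untried moves against exploitation of learned values.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Bellman update toward the bootstrapped one-step target.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy prefers “right” in every non-terminal state, with values discounted by distance from the goal; deep RL replaces this Q-table with a neural network so the same idea scales to large state spaces.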

From time to time, I take seriously the challenge of building agents that can interact with other agents and even humans in both artificial and natural environments, which has led to research in several related areas.

Over the past few years, I have begun to focus on Healthcare as an application area.

Fred Feinberg


My research examines how people make choices in uncertain environments. The general focus is on using statistical models to explain complex decision patterns, particularly involving sequential choices among related items (e.g., brands in the same category) and dyads (e.g., people choosing one another in online dating), as well as a variety of applications to problems in the marketing domain (e.g., models relating advertising exposures to awareness and sales). The main methods are discrete choice models, ordinarily estimated using Bayesian methods, along with dynamic programming and nonparametrics. I’m particularly interested in extending Bayesian analysis to very large databases, especially in terms of ‘fusing’ data sets with only partly overlapping covariates to enable strong statistical identification of models across them.
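
The building block of the discrete choice models mentioned above is the multinomial logit choice probability, a softmax over the alternatives’ utilities (a generic sketch, not a specific model from this research; Bayesian estimation would place priors over the utility parameters):

```python
import math

def logit_choice_probs(utilities):
    """Multinomial logit: the probability of choosing each alternative
    is proportional to exp(utility). Subtracting the max utility first
    keeps the exponentials numerically stable."""
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]
```

Equal utilities yield equal choice probabilities, and raising one alternative’s utility pulls probability mass toward it, which is what lets observed choices identify the underlying preferences.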

Applying Bayesian Methods to Problems in Dynamic Choice


Stephen M. Pollock


Professor Pollock has taught courses in decision analysis, mathematical modeling, dynamic programming, and stochastic processes. He has applied operations research and decision analysis methods to problems in defense, criminal justice, manufacturing, epidemiology, and medicine. He has authored over 60 technical papers, co-edited two books, served as a consultant to over 30 organizations, and served on the editorial boards of three major journals.

He was chair of the IOE Department, chaired the University’s Research Policies Committee and Tenure Committee, served as Director of the Engineering College’s Financial Engineering Program and Engineering Global Leadership Program, was a member of the College of Engineering’s Executive Committee, and received the College’s Attwood Award.

He has served on and chaired various NSF and NRC advisory boards and panels, and served on the Army Science Board. He was President of the Operations Research Society of America, was awarded the 2001 INFORMS Kimball Medal, is a fellow of INFORMS and AAAS, and is a member of the National Academy of Engineering.