Professor Saigal has held faculty positions at the Haas School of Business, Berkeley, and in the Department of Industrial Engineering and Management Sciences at Northwestern University; he has also been a researcher at Bell Telephone Laboratories and has held numerous short-term visiting positions. He currently teaches courses in Financial Engineering. In the recent past he taught courses in optimization and Management Science. His current research involves data-based studies of operational problems in the areas of Finance, Transportation, Renewable Energy, and Healthcare, with an emphasis on the management and pricing of risk. This work draws on data analytics, optimization, stochastic processes, and financial engineering tools. His earlier research involved theoretical investigation of interior point methods, large-scale optimization, and software development for mathematical programming. He is the author of two books on optimization and a large number of publications in top refereed journals. He has been an associate editor of Management Science and is a member of SIAM, AMS, and AAAS. He has served as Director of the interdisciplinary Financial Engineering Program and as Director of Interdisciplinary Professional Programs (now Integrative Design + Systems) at the College of Engineering.
Kai S. Cortina, PhD, is Professor of Psychology in the College of Literature, Science, and the Arts at the University of Michigan, Ann Arbor.
Prof. Cortina’s major research revolves around understanding children’s and adolescents’ pathways into adulthood and the role of the educational system in this process. Academic and psycho-social development is analyzed from a life-span perspective, relying exclusively on longitudinal data collected over longer periods of time (e.g., from middle school to young adulthood). The hierarchical structure of the school system (student/classroom/school/district/state/nation) requires statistical tools that can handle this kind of nested data.
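One standard tool for such nested data is a multilevel (hierarchical linear) model. The sketch below illustrates the core partial-pooling idea behind a random-intercept model on synthetic student-within-school data; all numbers, group sizes, and variable names are invented for illustration, not drawn from Prof. Cortina’s studies.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic nested data: students (level 1) within schools (level 2)
n_schools, n_students = 8, 12
school_effect = rng.normal(0, 2.0, n_schools)            # between-school variation
scores = school_effect[:, None] + rng.normal(0, 5.0, (n_schools, n_students))

# Random-intercept estimate by partial pooling: shrink each raw school mean
# toward the grand mean by the ratio of between- to total sampling variance.
grand_mean = scores.mean()
school_means = scores.mean(axis=1)
sigma2_within = scores.var(axis=1, ddof=1).mean()        # within-school variance
tau2_between = max(school_means.var(ddof=1) - sigma2_within / n_students, 1e-9)
shrinkage = tau2_between / (tau2_between + sigma2_within / n_students)
pooled = grand_mean + shrinkage * (school_means - grand_mean)
```

Schools with noisier or sparser data are pulled more strongly toward the grand mean, which is exactly what full multilevel software (e.g., mixed-effects model routines) does with many levels of nesting at once.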
My current research focuses on improving the efficiency and utilization of outpatient clinics using data mining techniques such as decision tree analysis, Bayesian networks, and neural networks. While our previous and continuing research has focused on using some of these techniques to develop more sophisticated methods of patient scheduling within physical therapy clinics, the techniques are applicable to other types of health services providers. They also apply to university administration, where data mining can be used to build predictive models of student success.
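As a minimal sketch of how decision tree analysis might apply to clinic scheduling, the example below fits a tree to a tiny invented dataset of appointment no-shows; the features, data, and threshold behavior are all illustrative assumptions, not results from this research.

```python
# Illustrative only: a small decision tree predicting appointment no-shows.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features: lead time (days), prior no-shows, distance (miles)
X = np.array([
    [ 2, 0,  3], [30, 2, 15], [ 5, 0,  2], [45, 3, 20],
    [ 1, 0,  5], [60, 1, 25], [ 7, 0,  4], [21, 2, 18],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # 1 = no-show

# A shallow tree keeps the rules interpretable for clinic staff
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[3, 0, 2], [40, 2, 22]]))   # → [0 1]
```

A scheduler could use such predictions to overbook slots with a high predicted no-show probability, which is one way data mining feeds back into scheduling policy.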
Omid Dehzangi, PhD, is Assistant Professor of Computer and Information Science, College of Engineering and Computer Science, at the University of Michigan, Dearborn.
Wearable health technology is drawing significant attention for good reasons. The pervasive nature of such systems, which provide ubiquitous access to continuous personalized data, will transform the way people interact with each other and their environment. The information extracted from these systems will enable emerging applications in healthcare, wellness, emergency response, fitness monitoring, elderly care support, long-term preventive chronic care, assistive care, smart environments, sports, gaming, and entertainment. These applications create many new research opportunities and draw research from various disciplines into data science, the methodological umbrella for data collection, data management, data analysis, and data visualization. Despite this ground-breaking potential, a number of interesting challenges stand in the way of designing and developing wearable medical embedded systems. Because resources in wearable processing architectures are limited, power efficiency is required to allow unobtrusive, long-term operation of the hardware. In addition, the data-intensive nature of continuous health monitoring requires efficient signal processing and data analytics algorithms for real-time, scalable, reliable, accurate, and secure extraction of relevant information from an overwhelmingly large amount of data. Extensive research in the design, development, and assessment of such systems is therefore necessary.

Embedded Processing Platform Design

The majority of my work concentrates on designing wearable embedded processing platforms that shift the conventional paradigm from hospital-centric healthcare, with its episodic and reactive focus on disease, to patient-centric, home-based healthcare. This alternative demands specialized design in hardware, software development, signal processing and uncertainty reduction, data analysis, predictive modeling, and information extraction.
The objective is to reduce costs and improve the effectiveness of healthcare through proactive early monitoring, diagnosis, and treatment of disease (i.e., preventive care), as shown in Figure 1.
Professor Subramanian is interested in a variety of stochastic modeling, decision and control theoretic, and applied probability questions concerned with networks. Examples include analysis of random graphs, analysis of processes like cascades on random graphs, network economics, analysis of e-commerce systems, mean-field games, network games, telecommunication networks, load-balancing in large server farms, and information assimilation, aggregation and flow in networks especially with strategic users.
The basis of my work is to make the often invisible traces created by students’ interactions with learning technologies available to instructors, technology solutions, and students themselves. This often requires the creation of novel educational technologies designed from the beginning with detailed tracking of user activities. Coupled with machine learning and data mining techniques (e.g., classification, regression, and clustering methods), clickstream data from these technologies is used to build predictive models of student success and to better understand how technology affords benefits in teaching and learning. I’m interested in broadly scaled teaching and learning through Massive Open Online Courses (MOOCs), in how predictive models can be used to understand student success, and in the analysis of educational discourse and student writing.
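To make the classification step concrete, here is a hedged sketch of fitting a predictive model of course success from simple clickstream counts. The features, coefficients, and data-generating process are invented for illustration and do not come from any real MOOC dataset.

```python
# Illustrative sketch: predicting course success from clickstream features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
# Hypothetical features per student: video views, forum posts, quiz attempts
X = rng.poisson(lam=[20, 3, 5], size=(n, 3)).astype(float)
# Stylized assumption: success probability rises with engagement
logits = 0.15 * X[:, 0] + 0.4 * X[:, 1] + 0.3 * X[:, 2] - 5.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```

In practice the same pipeline runs on real event logs, and the fitted coefficients can flag at-risk students early enough for an instructor to intervene.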
Jeremy Taylor, PhD, is the Pharmacia Research Professor of Biostatistics in the School of Public Health and Professor in the Department of Radiation Oncology in the School of Medicine at the University of Michigan, Ann Arbor. He is the director of the University of Michigan Cancer Center Biostatistics Unit and director of the Cancer/Biostatistics training program. He received his B.A. in Mathematics from Cambridge University and his Ph.D. in Statistics from UC Berkeley. He was on the faculty at UCLA from 1983 to 1998, when he moved to the University of Michigan. He has had visiting positions at the Medical Research Council, Cambridge, England; the University of Adelaide; INSERM, Bordeaux; and CSIRO, Sydney, Australia. He is a past winner of the Mortimer Spiegelman Award from the American Public Health Association and the Michael Fry Award from the Radiation Research Society. He has worked in various areas of Statistics and Biostatistics, including Box-Cox transformations, longitudinal and survival analysis, cure models, missing data, smoothing methods, clinical trial design, and surrogate and auxiliary variables. He has been heavily involved in collaborations in the areas of radiation oncology, cancer research, and bioinformatics.
I have broad interests and expertise in developing statistical methodology and applying it in biomedical research, particularly in cancer research. I have undertaken research in power transformations, longitudinal modeling, survival analysis particularly cure models, missing data methods, causal inference and in modeling radiation oncology related data. Recent interests, specifically related to cancer, are in statistical methods for genomic data, statistical methods for evaluating cancer biomarkers, surrogate endpoints, phase I trial design, statistical methods for personalized medicine and prognostic and predictive model validation. I strive to develop principled methods that will lead to valid interpretations of the complex data that is collected in biomedical research.
Johann Gagnon-Bartsch, PhD, is Assistant Professor of Statistics in the College of Literature, Science, and the Arts at the University of Michigan, Ann Arbor.
Prof. Gagnon-Bartsch’s research currently focuses on the analysis of high-throughput biological data as well as other types of high-dimensional data. More specifically, he is working with collaborators on developing methods that can be used when the data are corrupted by systematic measurement errors of unknown origin, or when the data suffer from the effects of unobserved confounders. For example, gene expression data suffer from both systematic measurement errors of unknown origin (due to uncontrolled variations in laboratory conditions) and the effects of unobserved confounders (such as whether a patient had just eaten before a tissue sample was taken). He and his collaborators are developing methodology that corrects for these systematic errors using “negative controls.” Negative controls are variables that (1) are known to have no true association with the biological signal of interest, and (2) are corrupted by the systematic errors, just like the variables that are of interest. The negative controls make it possible to learn the structure of the errors, so that the errors can then be removed from the other variables.
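The two-step logic described above can be sketched in a few lines of numpy: estimate the unwanted factors from the negative-control columns alone, then regress them out of every variable. This is a toy simulation under invented dimensions and noise levels, not Prof. Gagnon-Bartsch’s actual estimator.

```python
# Toy sketch of negative-control correction for unobserved confounding.
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 50, 20, 1                   # samples, genes, unwanted factors
W = rng.normal(size=(n, k))           # unobserved batch effect / confounder
alpha = rng.normal(size=(k, p))       # its loading on every gene
signal = np.zeros((n, p))
signal[:, 0] = rng.normal(size=n)     # true biology only in gene 0
Y = signal + W @ alpha + 0.1 * rng.normal(size=(n, p))

neg = np.arange(10, 20)               # genes 10..19: known negative controls
# Step 1: the controls carry no signal, so their leading singular vectors
# estimate the span of the unwanted variation.
U, s, Vt = np.linalg.svd(Y[:, neg], full_matrices=False)
W_hat = U[:, :k]
# Step 2: regress every gene on the estimated factors and subtract the fit.
alpha_hat = np.linalg.lstsq(W_hat, Y, rcond=None)[0]
Y_clean = Y - W_hat @ alpha_hat
```

After the correction, the cleaned data are (by construction) orthogonal to the estimated unwanted factors, while any signal not aligned with those factors is retained.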
I am a data scientist, with extensive and varied experience drawing inference from large data sets. In education research, I work to understand and improve postsecondary student outcomes using the rich, extensive, and complex digital data produced in the course of educating students in the 21st century. In 2011, we launched the E2Coach computer-tailored support system, and in 2014, we began the REBUILD project, a college-wide effort to increase the use of evidence-based methods in introductory STEM courses. In 2015, we launched the Digital Innovation Greenhouse, an education technology accelerator within the UM Office of Digital Education and Innovation. In astrophysics, my main research tools have been the Sloan Digital Sky Survey, the Dark Energy Survey, and the simulations that support them both. We use these tools to probe the growth and nature of cosmic structure as well as the expansion history of the Universe, especially through studies of galaxy clusters. I have also studied astrophysical transients as part of the Robotic Optical Transient Search Experiment.