Barbara Jane Ericson

I have been creating free and interactive ebooks for introductory computing courses on the open-source Runestone platform and analyzing the clickstream data from those courses to improve the ebooks and instruction. In particular, I am interested in using educational data mining to close the feedback loop and improve the instructional materials. I am also interested in learnersourcing to automatically generate and improve assessments. I have been applying principles from educational psychology, such as worked examples plus low cognitive load practice, to improve instruction. I have been exploring mixed-up code (Parsons) problems as one type of practice. I created two types of adaptation for Parsons problems: intra-problem and inter-problem. In intra-problem adaptation, if the learner is struggling to solve the current problem, it can dynamically be made easier. In inter-problem adaptation, the difficulty of the next problem is based on the learner’s performance on the previous problem.
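
As a rough illustration of these two adaptation strategies, the sketch below encodes the decision logic in Python. The ParsonsProblem class, its fields, and the thresholds are hypothetical stand-ins for illustration only, not the actual Runestone implementation.

```python
from dataclasses import dataclass

# Illustrative sketch only: class, fields, and thresholds are hypothetical,
# not the actual Runestone implementation.

@dataclass
class ParsonsProblem:
    blocks: int        # number of correct code blocks to arrange
    distractors: int   # number of incorrect "distractor" blocks mixed in

def intra_problem_adapt(problem: ParsonsProblem, failed_attempts: int) -> ParsonsProblem:
    """Intra-problem adaptation: if the learner is struggling on the current
    problem, dynamically make it easier (here, by removing a distractor)."""
    if failed_attempts >= 3 and problem.distractors > 0:
        problem.distractors -= 1
    return problem

def inter_problem_adapt(prev_solved: bool, prev_attempts: int,
                        next_problem: ParsonsProblem) -> ParsonsProblem:
    """Inter-problem adaptation: set the next problem's difficulty based on
    performance on the previous problem."""
    if prev_solved and prev_attempts <= 2:
        next_problem.distractors += 1                       # did well: make it harder
    elif not prev_solved:
        next_problem.distractors = max(0, next_problem.distractors - 1)  # struggled: easier
    return next_problem
```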

Dr. Donald S. Likosky

Dr. Likosky is a Professor and Head of the Section of Health Services Research and Quality in the Department of Cardiac Surgery at Michigan Medicine, and a faculty member at the Center for Healthcare Outcomes and Policy. Dr. Likosky’s work currently focuses on leveraging (i) mobile health technology to identify objective and scalable measures for mitigating post-surgical morbidities, and (ii) computer vision to identify objective and scalable measures of important intraoperative technical skills and non-technical practices.

Wenhao Sun

We are interested in resolving outstanding fundamental scientific problems that impede the computational materials design process. Our group uses high-throughput density functional theory, applied thermodynamics, and materials informatics to deepen our fundamental understanding of synthesis-structure-property relationships, while exploring new chemical spaces for functional technological materials. These research interests are driven by the practical goal of the U.S. Materials Genome Initiative to accelerate materials discovery, but resolving them requires basic research in synthesis science, inorganic chemistry, and materials thermodynamics.

Zhongming Liu

My research is at the intersection of neuroscience and artificial intelligence. My group uses neuroscience- and brain-inspired principles to design models and algorithms for computer vision and language processing. In turn, we use neural network models to test hypotheses in neuroscience and to explain or predict human perception and behavior. My group also develops and uses machine learning algorithms to improve the acquisition and analysis of medical images, including functional magnetic resonance imaging of the brain and magnetic resonance imaging of the gut.

We use brain-inspired neural network models to predict and decode brain activity in humans processing information from naturalistic audiovisual stimuli.
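
As a loose illustration of this kind of voxel-wise encoding analysis, the sketch below fits a regularized linear map from neural-network stimulus features to simulated fMRI responses and scores per-voxel prediction accuracy. All data, dimensions, and parameters are synthetic placeholders; this is a generic sketch, not the group’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Generic voxel-wise encoding-model sketch with synthetic placeholder data:
# predict fMRI responses from features of a (brain-inspired) network applied
# to naturalistic stimuli, then evaluate per-voxel prediction accuracy.

n_timepoints, n_features, n_voxels = 1000, 512, 200
rng = np.random.default_rng(0)
X = rng.standard_normal((n_timepoints, n_features))            # network features per stimulus frame
W_true = rng.standard_normal((n_features, n_voxels)) * 0.05
Y = X @ W_true + rng.standard_normal((n_timepoints, n_voxels))  # simulated voxel responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Fit one regularized linear map from features to all voxels (encoding model).
model = Ridge(alpha=10.0).fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)

# Score each voxel with the Pearson correlation between measured and predicted responses.
corr = [np.corrcoef(Y_te[:, v], Y_hat[:, v])[0, 1] for v in range(n_voxels)]
print(f"median voxel-wise correlation: {np.median(corr):.2f}")
```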

Albert S. Berahas

Albert S. Berahas is an Assistant Professor in the Department of Industrial & Operations Engineering. His research broadly focuses on designing, developing, and analyzing algorithms for solving large-scale nonlinear optimization problems. Such problems are ubiquitous and arise in a plethora of areas such as engineering design, economics, transportation, robotics, machine learning, and statistics. Specifically, he is interested in and has explored several sub-fields of nonlinear optimization, such as: (i) general nonlinear optimization algorithms, (ii) optimization algorithms for machine learning, (iii) constrained optimization, (iv) stochastic optimization, (v) derivative-free optimization, and (vi) distributed optimization.

9.9.2020 MIDAS Faculty Research Pitch Video.

Alex Gorodetsky

Alex Gorodetsky’s research is at the intersection of applied mathematics, data science, and computational science, and is focused on enabling autonomous decision making under uncertainty. He is especially interested in controlling, designing, and analyzing autonomous systems that must act in complex environments, where observational data and expensive computational simulations must work together to ensure objectives are achieved. Toward this goal, he pursues research in wide-ranging areas including uncertainty quantification, statistical inference, machine learning, control, and numerical analysis. His methodology is to increase the scalability of probabilistic modeling and analysis techniques such as Bayesian inference and uncertainty quantification. His current strategies for achieving scalability revolve around leveraging computational optimal transport, developing tensor network learning algorithms, and creating new multi-fidelity information fusion approaches.

Sample workflow for enabling autonomous decision making under uncertainty for a drone operating in a complex environment. We develop algorithms to compress simulation data by exploiting problem structure. We then embed the compressed representations onto onboard computational resources. Finally, we develop approaches to enable the drone to adapt, learn, and refine knowledge by interacting with, and collecting data from, the environment.
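
For intuition about the compression step in this workflow, the sketch below compresses a toy three-way "simulation snapshot" array into tensor-train cores via successive truncated SVDs. It is a minimal stand-in for the tensor-network learning algorithms mentioned above, with synthetic data and an arbitrary rank choice, not the group's actual method.

```python
import numpy as np

def tt_svd_3d(T, rank):
    """Compress a 3-way array into tensor-train (TT) cores via successive truncated SVDs."""
    n1, n2, n3 = T.shape
    U, S, Vt = np.linalg.svd(T.reshape(n1, n2 * n3), full_matrices=False)
    r1 = min(rank, len(S))
    G1 = U[:, :r1]                                            # core 1: n1 x r1
    M = (np.diag(S[:r1]) @ Vt[:r1]).reshape(r1 * n2, n3)
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    r2 = min(rank, len(S))
    G2 = U[:, :r2].reshape(r1, n2, r2)                        # core 2: r1 x n2 x r2
    G3 = np.diag(S[:r2]) @ Vt[:r2]                            # core 3: r2 x n3
    return G1, G2, G3

def tt_reconstruct(G1, G2, G3):
    """Contract the TT cores back into a full 3-way array."""
    return np.einsum('ia,ajb,bk->ijk', G1, G2, G3)

# Toy "simulation snapshot" tensor with low-rank structure (synthetic placeholder).
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((n, 5)) for n in (40, 50, 60))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

G1, G2, G3 = tt_svd_3d(T, rank=5)
T_hat = tt_reconstruct(G1, G2, G3)
storage_ratio = (G1.size + G2.size + G3.size) / T.size
print(f"relative error: {np.linalg.norm(T - T_hat) / np.linalg.norm(T):.2e}, "
      f"storage ratio: {storage_ratio:.3f}")
```

The compressed cores occupy a small fraction of the original array's storage, which is the property that makes embedding such representations on limited onboard computational resources plausible.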

Nikola Banovic

My research focuses on the methods, applications, and ethics of Computational Modeling in Human-Computer Interaction (HCI). Understanding and modeling human behavior supports innovative information technology that will change how we study and design interactive user experiences. I envision accurate, cross-domain modeling of the human as a theoretical foundation for HCI, in which computational models help us study, describe, and understand complex human behaviors and support the optimization and evaluation of user interfaces. I create technology that automatically reasons about, and acts in response to, people’s behavior to help them be productive, healthy, and safe.

Lucia Cevidanes

We have developed and tested machine learning approaches to integrate quantitative markers for the diagnosis and assessment of progression of temporomandibular joint osteoarthritis (TMJ OA), extended the capabilities of 3D Slicer into web-based tools, and disseminated open-source image analysis tools. Our aims use data processing and in-depth analytics combined with learning using privileged information, integrated feature selection, and testing the performance of longitudinal risk predictors. Our long-term goal is to improve diagnosis and risk prediction of TMJ OA in future multicenter studies.

The Spectrum of Data Science for Diagnosis of Osteoarthritis of the Temporomandibular Joint

Joshua Stein

As a board-certified ophthalmologist and glaucoma specialist, I have more than 15 years of clinical experience caring for patients with different types and complexities of glaucoma. In addition to my clinical experience, as a health services researcher I have developed expertise in several disciplines, including analyses of large health care claims databases to study utilization and outcomes of patients with ocular diseases, racial and other disparities in eye care, and associations between systemic conditions or medication use and ocular diseases. I have learned the nuances of various data sources and ways to maximize our use of these sources to answer important and timely questions.

Leveraging my background in health services research with new skills in bioinformatics and precision medicine, over the past 2-3 years I have been developing and growing the Sight Outcomes Research Collaborative (SOURCE) repository, a powerful tool that researchers can tap into to study patients with ocular diseases. My team and I have spent countless hours devising ways of extracting electronic health record data from Clarity, cleaning and de-identifying the data, and making it linkable to ocular diagnostic test data (OCT, HVF, biometry) and non-clinical data. Now that we have successfully developed such a resource here at Kellogg, I am collaborating with colleagues at more than two dozen academic ophthalmology departments across the country to help them extract their data in the same format and send it to Kellogg, so that we can pool the data and make it accessible to researchers at all of the participating centers for research and quality improvement studies.

I am also actively exploring ways to integrate data from SOURCE into deep learning and artificial intelligence algorithms; to use SOURCE data for genotype-phenotype association studies and the development of polygenic risk scores for common ocular diseases; to capture patient-reported outcome data for the majority of eye care recipients; to enhance visualization of the data on easy-to-access dashboards that aid quality improvement initiatives; and to use the data to improve the quality, safety, and efficiency of care delivery and to improve clinical operations.