My methodological research focuses on developing statistical methods for routinely collected healthcare databases such as electronic health records (EHRs) and claims data. I aim to tackle the unique challenges that arise from the secondary use of real-world data for research purposes. Specifically, I develop novel causal inference methods and semiparametric efficiency theory that harness the full potential of EHR data to address comparative effectiveness and safety questions. I also develop scalable, automated pipelines for the curation and harmonization of EHR data across healthcare systems and coding systems.
Our team develops machine learning algorithms to enhance outcomes in cataract surgery, the most commonly performed surgery in the world. Our work focuses on developing models that predict postoperative refraction after cataract surgery and on analyzing surgical quality.
I have been creating free, interactive ebooks for introductory computing courses on the open-source Runestone platform and analyzing the clickstream data from those courses to improve the ebooks and instruction. In particular, I am interested in using educational data mining to close the feedback loop and improve the instructional materials. I am also interested in learnersourcing to automatically generate and improve assessments. I have been applying principles from educational psychology, such as worked examples paired with low-cognitive-load practice, to improve instruction. I have been exploring mixed-up code (Parsons) problems as one type of such practice. I created two types of adaptation for Parsons problems: intra-problem and inter-problem. In intra-problem adaptation, if the learner is struggling to solve the current problem, that problem can dynamically be made easier. In inter-problem adaptation, the difficulty of the next problem is based on the learner’s performance on the previous problem.
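As a concrete illustration, the sketch below shows how these two adaptation strategies could work in code. It is a minimal, hypothetical Python sketch: the names (ParsonsProblem, adapt_intra, adapt_inter), the attempt thresholds, and the specific easing moves (removing a distractor, merging blocks) are illustrative assumptions, not the Runestone implementation.

```python
# Hypothetical sketch of intra- and inter-problem adaptation for
# Parsons problems; names and thresholds are assumptions for
# illustration, not the Runestone platform's actual code.

from dataclasses import dataclass

@dataclass
class ParsonsProblem:
    blocks: list[str]        # correct code blocks, in order
    distractors: list[str]   # incorrect blocks mixed in as distractors
    difficulty: int = 2      # 1 = easy, 2 = medium, 3 = hard

def adapt_intra(problem: ParsonsProblem, failed_attempts: int) -> None:
    """Intra-problem adaptation: make the *current* problem easier
    when the learner is struggling with it."""
    if failed_attempts >= 3 and problem.distractors:
        problem.distractors.pop()      # first ease: remove a distractor
    elif failed_attempts >= 5 and len(problem.blocks) > 2:
        # once distractors are gone, merge two adjacent blocks so
        # fewer pieces must be put in order
        problem.blocks[:2] = ["\n".join(problem.blocks[:2])]

def adapt_inter(prev_difficulty: int, solved: bool, attempts: int) -> int:
    """Inter-problem adaptation: choose the next problem's difficulty
    from the learner's performance on the previous problem."""
    if solved and attempts <= 2:
        return min(prev_difficulty + 1, 3)   # solved quickly: go harder
    if not solved or attempts > 5:
        return max(prev_difficulty - 1, 1)   # struggled: go easier
    return prev_difficulty                   # otherwise keep the level
```

The design choice the sketch encodes is that intra-problem easing happens gradually within one problem (distractors first, then block merging), while inter-problem adaptation only moves the difficulty one step between problems.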
Professor Kowalski’s recent research analyzes experiments and clinical trials with the goal of designing policies to target insurance expansions and medical treatments to individuals who stand to benefit from them the most. Her research has also explored the impact of previous Medicaid expansions, the Affordable Care Act, the Massachusetts health reform of 2006, and employer-sponsored health insurance plans. She has also used cutting-edge techniques to estimate the value of medical spending on at-risk newborns.
We are interested in resolving outstanding fundamental scientific problems that impede the computational materials design process. Our group uses high-throughput density functional theory, applied thermodynamics, and materials informatics to deepen our fundamental understanding of synthesis-structure-property relationships while exploring new chemical spaces for functional technological materials. These research interests are driven by the practical goal of the U.S. Materials Genome Initiative to accelerate materials discovery, but realizing that goal requires fundamental research in synthesis science, inorganic chemistry, and materials thermodynamics.
Today’s real-world problems are complex and large, often with an overwhelmingly large number of unknown variables that leaves them subject to the so-called “curse of dimensionality.” For instance, in energy systems, system operators must solve optimal power flow, unit commitment, and transmission switching problems with tens of thousands of continuous and discrete variables in real time. In control systems, a long-standing question is how to efficiently design structured and distributed controllers for large-scale and unknown dynamical systems. Finally, in machine learning, it is important to obtain simple, interpretable, and parsimonious models for high-dimensional and noisy datasets. Our research is motivated by two main goals: (1) to model these problems as tractable optimization problems; and (2) to develop structure-aware and scalable computational methods for these optimization problems that come equipped with certifiable optimality guarantees. We aim to show that exploiting hidden structures in these problems, such as graph-induced or spectral sparsity, is the key enabler in the pursuit of massively scalable computational methods with such guarantees.
My research lies at the intersection of optimization, data analytics, and control.
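As a toy illustration of the graph-induced-sparsity point above: when a quadratic objective couples each variable only to its neighbors on a chain graph, its Hessian is tridiagonal, and a sparse solver computes the exact minimizer in roughly linear time where a dense solve would be intractable at this scale. This is a generic sketch using standard NumPy/SciPy calls, not the group’s actual methods.

```python
# Generic illustration of graph-induced sparsity in optimization:
# minimizing 0.5 * x^T H x - g^T x where the interaction graph is a
# chain gives a tridiagonal Hessian H, so the exact minimizer solves
# H x = g and a sparse factorization scales to huge n.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 100_000                               # far beyond dense O(n^3) reach
H = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
          shape=(n, n), format="csc")     # chain-graph (tridiagonal) Hessian
g = np.ones(n)

x = spsolve(H, g)                         # exploits the sparsity pattern
print(x[:3])                              # first few entries of the minimizer
```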
Albert S. Berahas is an Assistant Professor in the Department of Industrial & Operations Engineering. His research broadly focuses on designing, developing, and analyzing algorithms for solving large-scale nonlinear optimization problems. Such problems are ubiquitous, arising in areas such as engineering design, economics, transportation, robotics, machine learning, and statistics. Specifically, he is interested in and has explored several subfields of nonlinear optimization: (i) general nonlinear optimization algorithms, (ii) optimization algorithms for machine learning, (iii) constrained optimization, (iv) stochastic optimization, (v) derivative-free optimization, and (vi) distributed optimization.
As a board-certified ophthalmologist and glaucoma specialist, I have more than 15 years of clinical experience caring for patients with glaucoma of all types and complexities. In addition, as a health services researcher, I have developed expertise in several disciplines, including analyses of large health care claims databases to study utilization and outcomes among patients with ocular diseases, racial and other disparities in eye care, and associations between systemic conditions or medication use and ocular diseases. I have learned the nuances of various data sources and how to maximize their use to answer important and timely questions. Leveraging my background in health services research with new skills in bioinformatics and precision medicine, over the past 2-3 years I have been developing and growing the Sight Outcomes Research Collaborative (SOURCE) repository, a powerful tool that researchers can tap into to study patients with ocular diseases. My team and I have spent countless hours devising ways of extracting electronic health record data from Clarity, cleaning and de-identifying the data, and making it linkable to ocular diagnostic test data (OCT, HVF, biometry) and non-clinical data. Now that we have successfully developed this resource at Kellogg, I am collaborating with colleagues at more than two dozen academic ophthalmology departments across the country to help them extract their data in the same format and send it to Kellogg, so that we can pool the data and make it accessible to researchers at all participating centers for research and quality improvement studies. I am also actively exploring ways to integrate SOURCE data into deep learning and artificial intelligence algorithms; to use SOURCE data for genotype-phenotype association studies and the development of polygenic risk scores for common ocular diseases; to capture patient-reported outcome data for the majority of eye care recipients; to enhance visualization of the data on easy-to-access dashboards that aid quality improvement initiatives; and to use the data to improve the quality, safety, and efficiency of care delivery and clinical operations.
Larson’s research has been in the area of “Complex Fluids,” which include polymers, colloids, surfactant-containing fluids, liquid crystals, and biological macromolecules such as DNA, proteins, and lipid membranes. He has also contributed extensively to fluid mechanics, including microfluidics and transport modeling. Over the past 16 years he has also carried out research in molecular simulations for biomedical applications. This work has involved determining the structure and dynamics of lipid membranes, trans-membrane peptides, and anti-microbial peptides; the conformation and functioning of ion channels; interactions of excipients with drugs for drug delivery; and interactions of peptides with proteins, including MHC molecules. It has resulted in more than 50 publications in these areas and in the training of several Ph.D. students and postdocs. Many of these studies involve heavy use of computer simulations and statistical analysis of simulation results, including umbrella sampling, forward flux sampling, and metadynamics, which involve statistical weighting of results. He has also been engaged in the analysis of percolation processes on lattices, including applications to disease propagation.
Figure: An alpha-helical peptide bridging a lipid bilayer in molecular dynamics simulations of “hydrophobic mismatch.”
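As a generic illustration of the lattice percolation processes mentioned above, the sketch below tests whether occupied sites span a square lattice; in the standard disease-propagation analogy, occupied sites are susceptible contacts and a spanning cluster is an outbreak that percolates. This is a textbook-style Python sketch under those assumptions, not code from the Larson group.

```python
# Textbook-style sketch of site percolation on an n x n square lattice:
# each site is occupied independently with probability p, and we check
# whether an occupied cluster connects the top row to the bottom row.

import random
from collections import deque

def percolates(n: int, p: float, seed: int = 0) -> bool:
    """True if occupied sites form a cluster spanning top to bottom."""
    rng = random.Random(seed)
    occupied = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    # breadth-first search seeded from every occupied site in the top row
    queue = deque((0, c) for c in range(n) if occupied[0][c])
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if r == n - 1:
            return True            # reached the bottom row: spanning cluster
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and occupied[nr][nc] \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

# Above the 2-D site-percolation threshold (~0.593), spanning clusters
# appear frequently; below it, clusters stay local.
print(sum(percolates(50, 0.6, s) for s in range(20)), "of 20 lattices span")
```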