Salar Fattahi


Today’s real-world problems are complex and large, often with an overwhelmingly large number of unknown variables that leave them subject to the so-called “curse of dimensionality”. For instance, in energy systems, system operators must solve optimal power flow, unit commitment, and transmission switching problems with tens of thousands of continuous and discrete variables in real time. In control systems, a long-standing question is how to efficiently design structured and distributed controllers for large-scale and unknown dynamical systems. Finally, in machine learning, it is important to obtain simple, interpretable, and parsimonious models for high-dimensional and noisy datasets. Our research is motivated by two main goals: (1) to model these problems as tractable optimization problems; and (2) to develop structure-aware and scalable computational methods for these optimization problems that come equipped with certifiable optimality guarantees. We aim to show that exploiting hidden structures in these problems, such as graph-induced or spectral sparsity, is a key game-changer in the pursuit of massively scalable and guaranteed computational methods.
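A minimal, generic sketch of how graph-induced sparsity changes the computational picture (illustrative only; the chain-structured system and solver choice are assumptions, not taken from this research): a sparse solver handles 100,000 unknowns almost instantly, while a dense factorization of the same problem would be out of reach.

```python
# Illustrative sketch: exploiting graph-induced sparsity in a structured linear system.
# (Hypothetical example; not code from this research group.)
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100_000                                   # unknowns; far too many for dense factorization
main = 2.0 * np.ones(n)                       # chain-graph coupling: each variable
off = -1.0 * np.ones(n - 1)                   # interacts only with its two neighbors
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")
b = np.random.default_rng(0).standard_normal(n)

x = spla.spsolve(A, b)                        # sparse direct solve exploits the structure
print(np.linalg.norm(A @ x - b))              # residual near machine precision
```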

9.9.2020 MIDAS Faculty Research Pitch Video.

My research lies at the intersection of optimization, data analytics, and control.

Albert S. Berahas


Albert S. Berahas is an Assistant Professor in the Department of Industrial & Operations Engineering. His research broadly focuses on designing, developing, and analyzing algorithms for solving large-scale nonlinear optimization problems. Such problems are ubiquitous, arising in a plethora of areas such as engineering design, economics, transportation, robotics, machine learning, and statistics. Specifically, he is interested in and has explored several subfields of nonlinear optimization, including: (i) general nonlinear optimization algorithms, (ii) optimization algorithms for machine learning, (iii) constrained optimization, (iv) stochastic optimization, (v) derivative-free optimization, and (vi) distributed optimization.

9.9.2020 MIDAS Faculty Research Pitch Video.

Joshua Stein


As a board-certified ophthalmologist and glaucoma specialist, I have more than 15 years of clinical experience caring for patients with different types and complexities of glaucoma. In addition to my clinical experience, as a health services researcher I have developed expertise in several disciplines, including analyses of large health care claims databases to study utilization and outcomes of patients with ocular diseases, racial and other disparities in eye care, and associations between systemic conditions or medication use and ocular diseases. I have learned the nuances of various data sources and ways to maximize their use to answer important and timely questions. Leveraging my background in health services research with new skills in bioinformatics and precision medicine, over the past 2-3 years I have been developing and growing the Sight Outcomes Research Collaborative (SOURCE) repository, a powerful tool that researchers can tap into to study patients with ocular diseases. My team and I have spent countless hours devising ways of extracting electronic health record data from Clarity, cleaning and de-identifying the data, and making it linkable to ocular diagnostic test data (OCT, HVF, biometry) and non-clinical data. Now that we have successfully developed such a resource here at Kellogg, I am collaborating with colleagues at more than two dozen academic ophthalmology departments across the country to help them extract their data in the same format and send it to Kellogg, so that we can pool the data and make it accessible to researchers at all of the participating centers for research and quality improvement studies. I am also actively exploring ways to integrate data from SOURCE into deep learning and artificial intelligence algorithms; to use SOURCE data for genotype-phenotype association studies and the development of polygenic risk scores for common ocular diseases; to capture patient-reported outcome data for the majority of eye care recipients; to enhance visualization of the data on easy-to-access dashboards that aid quality improvement initiatives; and to use the data to improve quality of care, safety, efficiency of care delivery, and clinical operations.

Ronald Gary Larson


Larson’s research has been in the area of “Complex Fluids,” which include polymers, colloids, surfactant-containing fluids, liquid crystals, and biological macromolecules such as DNA, proteins, and lipid membranes. He has also contributed extensively to fluid mechanics, including microfluidics, and transport modeling. Over the past 16 years he has also carried out research in molecular simulations for biomedical applications. This work has involved determining the structure and dynamics of lipid membranes, trans-membrane peptides, and anti-microbial peptides; the conformation and functioning of ion channels; interactions of excipients with drugs for drug delivery; and interactions of peptides with proteins, including MHC molecules. It has resulted in more than 50 publications in these areas and in the training of several Ph.D. students and postdocs. Many of these studies involve heavy use of computer simulations and methods of statistical analysis of simulations, including umbrella sampling, forward flux sampling, and metadynamics, which involve statistical weighting of results. He has also been engaged in the analysis of percolation processes on lattices, including applications to disease propagation.
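As a hypothetical illustration of the lattice percolation analyses mentioned above (a generic sketch, not code from this group), the snippet below estimates the probability that randomly occupied sites on a square lattice form a spanning cluster, the basic quantity in site-percolation models of processes such as disease spread.

```python
# Generic site-percolation sketch (illustrative only): estimate the probability that
# occupied sites form a cluster connecting the top and bottom edges of the lattice.
import numpy as np
from scipy.ndimage import label

def spans(p, rng, size=64):
    grid = rng.random((size, size)) < p        # occupy each site independently with prob. p
    labels, _ = label(grid)                    # connected clusters (4-neighbor connectivity)
    top, bottom = set(labels[0]) - {0}, set(labels[-1]) - {0}
    return bool(top & bottom)                  # some cluster touches both edges

for p in (0.5, 0.59, 0.7):                     # ~0.593 is the known 2D site-percolation threshold
    trials = [spans(p, np.random.default_rng(s)) for s in range(200)]
    print(p, sum(trials) / len(trials))
```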

Alpha helical peptide bridging lipid bilayer in molecular dynamics simulations of “hydrophobic mismatch.”

Nicholas Douville


Dr. Douville is a critical care anesthesiologist with an investigative background in bioinformatics and perioperative outcomes research. He studies techniques for utilizing health care data, including genotype, to deliver personalized medicine in the perioperative period and the intensive care unit. His research has focused on ways technology can assist health care delivery to improve patient outcomes. This work began with designing microfluidic chips capable of recreating the fluid mechanics of atelectatic alveoli and monitoring the resulting barrier breakdown in real time. His interest in bioinformatics was sparked when he observed how methodology designed for tissue engineering could be adapted to the nano-scale to enable genomic analysis. Additionally, his engineering training provided the framework to apply data-driven modeling techniques, such as finite element analysis, to complex biological systems.

Jian Kang


Dr. Kang’s research focuses on the development of statistical methods motivated by biomedical applications, particularly neuroimaging. His recent key contributions can be summarized in the following three areas:

Bayesian regression for complex biomedical applications
Dr. Kang and his group developed a series of Bayesian regression methods for association analysis between clinical outcomes of interest (disease diagnoses, survival times, psychiatric scores) and potential biomarkers in biomedical applications such as neuroimaging and genomics. In particular, they developed a new class of threshold priors as compelling alternatives to the classic continuous shrinkage priors in the Bayesian literature and the widely used penalization methods in the frequentist literature. Dr. Kang’s methods can substantially increase the power to detect weak but highly dependent signals by incorporating useful structural information about the predictors, such as spatial proximity within brain anatomical regions in neuroimaging [Zhao et al 2018; Kang et al 2018; Xue et al 2019] and gene networks in genomics [Cai et al 2017; Cai et al 2019]. Dr. Kang’s methods can simultaneously select variables, evaluate the uncertainty of variable selection, and make inferences on the effect sizes of the selected variables. His work provides a set of new tools for biomedical researchers to identify important biomarkers using different types of biological knowledge with statistical guarantees. In addition, Dr. Kang’s work is among the first to establish rigorous theoretical justifications for Bayesian spatial variable selection in imaging data analysis [Kang et al 2018] and Bayesian network marker selection in genomics [Cai et al 2019]. His theoretical contributions not only offer a deep understanding of the soft-thresholding operator on smooth functions, but also provide insights into which types of biological knowledge may be useful for improving biomarker detection accuracy.
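At the core of these threshold priors is the soft-thresholding operator, which sets weak coefficients exactly to zero while shrinking the rest. A generic sketch of its effect on a smooth coefficient function (illustrative only; not Dr. Kang’s implementation):

```python
# Illustrative sketch of the soft-thresholding operator used in threshold priors:
# applied to a smooth (Gaussian-process-like) coefficient function, it produces
# estimates that are both spatially smooth and exactly sparse.
import numpy as np

def soft_threshold(beta, lam):
    """Soft-thresholding: sign(beta) * max(|beta| - lam, 0)."""
    return np.sign(beta) * np.maximum(np.abs(beta) - lam, 0.0)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
# A smooth random function standing in for a Gaussian-process draw over an image axis.
smooth = sum(rng.normal() * np.sin((k + 1) * np.pi * x) / (k + 1) for k in range(10))

beta_sparse = soft_threshold(smooth, lam=0.3)
print(f"{np.mean(beta_sparse == 0):.0%} of coefficients set exactly to zero")
```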

Prior knowledge guided variable screening for ultrahigh-dimensional data
Dr. Kang and his colleagues developed a series of variable screening methods for ultrahigh-dimensional data analysis that incorporate useful prior knowledge in biomedical applications, including imaging [Kang et al 2017; He et al 2019], survival analysis [Hong et al 2018], and genomics [He et al 2019]. As a preprocessing step for variable selection, variable screening is a computationally fast approach to dimension reduction. Traditional variable screening methods overlook useful prior knowledge, and thus their practical performance is unsatisfactory in many biomedical applications. To fill this gap, Dr. Kang developed a partition-based ultrahigh-dimensional variable screening method under the generalized linear model, which naturally incorporates grouping and structural information in biomedical applications. When prior knowledge is unavailable or unreliable, Dr. Kang proposed a data-driven partition screening framework based on covariate grouping and investigated its theoretical properties. Two special cases proposed by Dr. Kang, correlation-guided partitioning and spatial-location-guided partitioning, are extremely useful in practice for neuroimaging data analysis and genome-wide association analysis. When multiple types of grouping information are available, Dr. Kang proposed a novel, theoretically justified strategy for combining screening statistics from various partitioning methods, providing a very flexible framework for incorporating different types of prior knowledge.
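A generic sketch of the partition-screening idea (illustrative only, with synthetic data; not Dr. Kang’s code): partition the covariates into groups, score each group by its marginal association with the outcome, and keep only the top-ranked groups before running a full variable-selection procedure.

```python
# Illustrative partition-based screening sketch (synthetic data): covariates are
# partitioned into contiguous blocks (mimicking spatial-location-guided grouping),
# each block is scored by its marginal association with the outcome, and only the
# top-scoring blocks are retained for downstream variable selection.
import numpy as np

rng = np.random.default_rng(0)
n, p, block = 200, 10_000, 50
X = rng.standard_normal((n, p))
y = X[:, :5] @ np.ones(5) + rng.standard_normal(n)    # only the first 5 covariates matter

# Marginal correlation of every covariate with the outcome.
Xc = (X - X.mean(0)) / X.std(0)
yc = (y - y.mean()) / y.std()
marginal = np.abs(Xc.T @ yc) / n                       # |corr(X_j, y)| for each covariate j

# Group score = root-mean-square marginal correlation within each block of 50 covariates.
scores = np.sqrt((marginal.reshape(-1, block) ** 2).mean(axis=1))
keep = np.argsort(scores)[::-1][:10]                   # retain the 10 top-ranked blocks
print("block containing the true signals kept:", 0 in keep)
```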

Brain network modeling and inferences
Dr. Kang and his colleagues developed several new statistical methods for brain network modeling and inference using resting-state fMRI data [Kang et al 2016; Xie and Kang 2017; Chen et al 2018]. Due to the high dimensionality of fMRI data (over 100,000 voxels in a standard brain template) and small sample sizes (hundreds of participants in a typical study), it is extremely challenging to model the brain functional connectivity network at the voxel level. Some existing methods model region-level brain networks using region-level summary statistics computed from voxel-level data. Those methods may suffer from low power to detect signals and an inflated false positive rate, since the summary statistics may not capture the heterogeneity within the predefined brain regions. To address those limitations, Dr. Kang proposed a novel method based on multi-attribute canonical correlation graphs [Kang et al 2016] to construct region-level brain networks from voxel-level data. His method can capture different types of nonlinear dependence between any two brain regions consisting of hundreds or thousands of voxels. He also developed permutation tests for assessing the significance of the estimated network. His methods can substantially increase the power to detect signals in small-sample-size problems. In addition, Dr. Kang and his colleagues also developed theoretically justified high-dimensional tests [Xie and Kang 2017] for constructing region-level brain networks from voxel-level data under the multivariate normal assumption. Their theoretical results provide useful guidance for the future development of statistical methods and theory for brain network analysis.
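A generic sketch of the underlying idea (synthetic data; not Dr. Kang’s implementation, and a real fMRI analysis would also need to handle temporal autocorrelation): measure region-to-region dependence through the leading canonical correlation between the two regions’ voxel time series, and assess its significance with a permutation test.

```python
# Illustrative sketch (synthetic data): quantify dependence between two brain regions
# via the leading canonical correlation of their voxel time series, and gauge its
# significance with a simple permutation test over time points.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
T, v1, v2 = 200, 10, 15                  # time points; voxels in region 1 and region 2
shared = rng.standard_normal((T, 1))     # latent signal shared by the two regions
A = shared @ rng.standard_normal((1, v1)) + rng.standard_normal((T, v1))
B = shared @ rng.standard_normal((1, v2)) + rng.standard_normal((T, v2))

def leading_cc(A, B):
    u, v = CCA(n_components=1).fit_transform(A, B)
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

observed = leading_cc(A, B)
null = [leading_cc(A, B[rng.permutation(T)]) for _ in range(200)]
p_value = (1 + sum(s >= observed for s in null)) / (1 + len(null))
print(f"canonical correlation {observed:.2f}, permutation p = {p_value:.3f}")
```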

 

This image illustrates neuroimaging meta-analysis data (Kang et al 2014). Neuroimaging meta-analysis is an important tool for finding effects that are consistent across studies. We developed a Bayesian nonparametric model and performed a meta-analysis of five emotions from 219 studies. In addition, our model can make reverse inferences by predicting the emotion type of a newly presented study. Our method outperforms other methods, with an average accuracy of 80%.

1. Cai Q, Kang J, Yu T (2020) Bayesian variable selection over large scale networks via the thresholded graph Laplacian Gaussian prior with application to genomics. Bayesian Analysis, In Press (Earlier version won a student paper award from Biometrics Section of the ASA in JSM 2017)
2. He K, Kang J, Hong G, Zhu J, Li Y, Lin H, Xu H, Li Y (2019) Covariance-insured screening. Computational Statistics and Data Analysis: 132, 100-114.
3. He K, Xu H, Kang J† (2019) A selective overview of feature screening methods with applications to neuroimaging data, WIREs Computational Statistics, 11(2): e1454
4. Chen S, Xing Y, Kang J, Kochunov P, Hong LE (2018). Bayesian modeling of dependence in brain connectivity, Biostatistics, In Press.
5. Kang J, Reich BJ, Staicu AM (2018) Scalar-on-image regression via the soft thresholded Gaussian process. Biometrika: 105(1) 165–184.
6. Xue W, Bowman D and Kang J (2018) A Bayesian spatial model to predict disease status using imaging data from various modalities. Frontiers in Neuroscience. 12:184. doi:10.3389/fnins.2018.00184
7. Jin Z*, Kang J†, Yu T (2018) Missing value imputation for LC-MS metabolomics data by incorporating metabolic network and adduct ion relations. Bioinformatics, 34(9):1555-1561.
8. He K, Kang J† (2018) Comments on “Computationally efficient multivariate spatio-temporal models for high-dimensional count-valued data”. Bayesian Analysis, 13(1) 289-291.
9. Hong GH, Kang J†, Li Y (2018) Conditional screening for ultra-high dimensional covariates with survival outcomes. Lifetime Data Analysis: 24(1) 45-71.
10. Zhao Y*, Kang J†, Long Q (2018) Bayesian multiresolution variable selection for ultra-high dimensional neuroimaging data. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 15(2):537-550. (Earlier version won student paper award from ASA section on statistical learning and data mining in JSM 2014; It was also ranked as one of the top two papers in the student paper award competition in ASA section on statistics in imaging in JSM 2014)
11. Kang J, Hong GH, Li Y (2017) Partition-based ultrahigh dimensional variable screening, Biometrika, 104(4): 785-800.
12. Xie J#, Kang J# (2017) High dimensional tests for functional networks of brain anatomic regions. Journal of Multivariate Analysis, 156:70-88.
13. Cai Q*, Alvarez JA, Kang J†, Yu T (2017) Network marker selection for untargeted LC/MS metabolomics data, Journal of Proteome Research, 16(3):1261-1269
14. Kang J, Bowman FD, Mayberg H, Liu H (2016) A depression network of functionally connected regions discovered via multi-attribute canonical correlation graphs. NeuroImage, 41:431-441.

Rudy J. Richardson


Applications of computational tools for molecular modeling (Discovery Studio, ICM-Pro, MOE, and YASARA) and data science (ADMET Predictor, KNIME, Origin Pro, Prism, Python, and R) to computational toxicology, drug discovery, homology modeling, molecular dynamics, and protein structure/function prediction. Current special interests include therapeutics for neurodegenerative disorders (Alzheimer’s, Parkinson’s, and motor neuron diseases) and infectious diseases (COVID-19).

3D alignment of acetylcholinesterase (AChE) from mouse (magenta) and electric eel (gray) showing the amino acid residues of the catalytic triad.

Aditi Misra


Transportation is the backbone of the urban mobility system and is one of the greatest sources of environmental emissions and pollution. Making urban transportation efficient, equitable, and sustainable is the main focus of my research. My students and I analyze small-scale survey data as well as large-scale spatiotemporal data to identify travel behavior trends and patterns at a disaggregate level using econometric methods, which we then scale up to the population level through predictive and statistical modeling. We also design our own data collection methods and instruments, be it a network of smart devices or stated preference experiments. Our expertise lies in identifying latent constructs that influence decisions and choices, which in turn dictate demands on the systems and subsystems. We use this expertise to design incentives and policy suggestions that can help promote sustainable and equitable multimodal transportation systems. Our team also uses data analytics, particularly classification and pattern recognition algorithms, to analyze crash context data and develop safety-critical scenarios for connected and automated vehicle (CAV) deployment. We have developed an online game based on such scenarios to promote safe shared mobility among teenagers and young adults, and we plan to expand research in that area. We are also currently expanding our research to explore the use of neural networks in context information synthesis.

This is a project where we used classification and Bayesian models to identify scenarios that are risky for pedestrians and bicyclists. We then developed an online game based on those scenarios for middle schoolers so that they are better prepared for shared road conflicts.