Salar Fattahi


Today’s real-world problems are complex and large, often with an overwhelmingly large number of unknown variables that renders them subject to the so-called “curse of dimensionality”. For instance, in energy systems, system operators must solve optimal power flow, unit commitment, and transmission switching problems with tens of thousands of continuous and discrete variables in real time. In control systems, a long-standing question is how to efficiently design structured and distributed controllers for large-scale and unknown dynamical systems. Finally, in machine learning, it is important to obtain simple, interpretable, and parsimonious models for high-dimensional and noisy datasets. Our research is motivated by two main goals: (1) to model these problems as tractable optimization problems; and (2) to develop structure-aware and scalable computational methods for these optimization problems that come equipped with certifiable optimality guarantees. We aim to show that exploiting hidden structures in these problems, such as graph-induced or spectral sparsity, is a key game-changer in the pursuit of massively scalable and guaranteed computational methods.

9.9.2020 MIDAS Faculty Research Pitch Video.

My research lies at the intersection of optimization, data analytics, and control.

Albert S. Berahas


Albert S. Berahas is an Assistant Professor in the Department of Industrial & Operations Engineering. His research broadly focuses on designing, developing, and analyzing algorithms for solving large-scale nonlinear optimization problems. Such problems are ubiquitous, arising in a plethora of areas such as engineering design, economics, transportation, robotics, machine learning, and statistics. Specifically, he is interested in and has explored several sub-fields of nonlinear optimization, such as: (i) general nonlinear optimization algorithms, (ii) optimization algorithms for machine learning, (iii) constrained optimization, (iv) stochastic optimization, (v) derivative-free optimization, and (vi) distributed optimization.

9.9.2020 MIDAS Faculty Research Pitch Video.

Alex Gorodetsky


Alex Gorodetsky’s research is at the intersection of applied mathematics, data science, and computational science, and is focused on enabling autonomous decision making under uncertainty. He is especially interested in controlling, designing, and analyzing autonomous systems that must act in complex environments where observational data and expensive computational simulations must work together to ensure objectives are achieved. Toward this goal, he pursues research in wide-ranging areas including uncertainty quantification, statistical inference, machine learning, control, and numerical analysis. His methodology is to increase the scalability of probabilistic modeling and analysis techniques such as Bayesian inference and uncertainty quantification. His current strategies for achieving scalability revolve around leveraging computational optimal transport, developing tensor network learning algorithms, and creating new multi-fidelity information fusion approaches.

Sample workflow for enabling autonomous decision making under uncertainty for a drone operating in a complex environment. We develop algorithms to compress simulation data by exploiting problem structure. We then embed the compressed representations onto onboard computational resources. Finally, we develop approaches to enable the drone to adapt, learn, and refine knowledge by interacting with, and collecting data from, the environment.

John Silberholz


Most of my research related to data science involves decision making around clinical trials. In particular, I am interested in how databases of past clinical trial results can inform future trial design and other decisions. Some of my work has involved using machine learning and mathematical optimization to design new combination therapies for cancer based on the results of past trials. Other work has used network meta-analysis to combine the results of randomized controlled trials (RCTs) to better summarize what is currently known about a disease, to design further trials that would be maximally informative, and to study the quality of the control arms used in Phase III trials (which are used for drug approvals). Other work combines toxicity data from clinical trials with toxicity data from other data sources (claims data and adverse event reporting databases) to accelerate detection of adverse drug reactions to newly approved drugs. Lastly, some of my work uses Bayesian inference to accelerate clinical trials with multiple endpoints, learning the link between different endpoints using past clinical trial results.

Jian Kang


Dr. Kang’s research focuses on the development of statistical methods motivated by biomedical applications, particularly neuroimaging. His recent key contributions can be summarized in the following three aspects:

Bayesian regression for complex biomedical applications
Dr. Kang and his group developed a series of Bayesian regression methods for the association analysis between clinical outcomes of interest (disease diagnostics, survival time, psychiatric scores) and potential biomarkers in biomedical applications such as neuroimaging and genomics. In particular, they developed a new class of threshold priors as compelling alternatives to the classic continuous shrinkage priors in the Bayesian literature and the widely used penalization methods in the frequentist literature. Dr. Kang’s methods can substantially increase the power to detect weak but highly dependent signals by incorporating useful structural information about the predictors, such as spatial proximity within brain anatomical regions in neuroimaging [Zhao et al 2018; Kang et al 2018; Xue et al 2019] and gene networks in genomics [Cai et al 2017; Cai et al 2019]. Dr. Kang’s methods can simultaneously select variables and evaluate the uncertainty of variable selection, as well as make inference on the effect size of the selected variables. His work provides a set of new tools for biomedical researchers to identify important biomarkers using different types of biological knowledge with statistical guarantees. In addition, Dr. Kang’s work is among the first to establish rigorous theoretical justifications for Bayesian spatial variable selection in imaging data analysis [Kang et al 2018] and Bayesian network marker selection in genomics [Cai et al 2019]. Dr. Kang’s theoretical contributions not only offer a deep understanding of the soft-thresholding operator on smooth functions, but also provide insight into which types of biological knowledge may be useful for improving biomarker detection accuracy.
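To make the thresholding idea concrete, here is a minimal, purely illustrative Python sketch of a soft-thresholded Gaussian process draw, loosely in the spirit of the scalar-on-image prior in [Kang et al 2018]; the grid size, kernel, and threshold level are assumptions for illustration, not the published model.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding operator S_lam(x) = sign(x) * max(|x| - lam, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)

# 1-D "image" grid standing in for voxel locations (hypothetical size).
s = np.linspace(0.0, 1.0, 200)

# Squared-exponential GP covariance over the grid (illustrative hyperparameters).
length_scale, sigma2 = 0.1, 1.0
K = sigma2 * np.exp(-0.5 * (s[:, None] - s[None, :])**2 / length_scale**2)

# One draw of a smooth latent function from the GP prior.
beta_smooth = rng.multivariate_normal(np.zeros_like(s), K + 1e-8 * np.eye(s.size))

# Soft-thresholding zeroes out weak values while keeping the function smooth
# where it is nonzero, giving a sparse spatial coefficient function.
beta_sparse = soft_threshold(beta_smooth, lam=0.8)

print("nonzero fraction:", np.mean(beta_sparse != 0))
```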

Prior-knowledge-guided variable screening for ultrahigh-dimensional data
Dr. Kang and his colleagues developed a series of variable screening methods for ultrahigh-dimensional data analysis that incorporate useful prior knowledge in biomedical applications, including imaging [Kang et al 2017; He et al 2019], survival analysis [Hong et al 2018], and genomics [He et al 2019]. As a preprocessing step for variable selection, variable screening is a computationally fast approach to dimension reduction. Traditional variable screening methods overlook useful prior knowledge, and their practical performance is therefore unsatisfactory in many biomedical applications. To fill this gap, Dr. Kang developed a partition-based ultrahigh-dimensional variable screening method under the generalized linear model, which can naturally incorporate grouping and structural information in biomedical applications. When prior knowledge is unavailable or unreliable, Dr. Kang proposed a data-driven partition screening framework based on covariate grouping and investigated its theoretical properties. The two special cases he proposed, correlation-guided partitioning and spatial-location-guided partitioning, are extremely useful in practice for neuroimaging data analysis and genome-wide association analysis. When multiple types of grouping information are available, Dr. Kang proposed a novel, theoretically justified strategy for combining screening statistics from various partitioning methods, providing a very flexible framework for incorporating different types of prior knowledge.
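The following is a minimal sketch of the partition-screening idea: covariates are grouped (here into contiguous blocks, an assumption for illustration), each group receives a marginal screening score, and only the top-ranked groups are retained. The group-wise R² statistic used here is a hypothetical stand-in for the screening utilities developed in [Kang et al 2017].

```python
import numpy as np

def partition_screen(X, y, groups, keep_frac=0.1):
    """Rank covariate groups by a marginal group-level fit statistic and keep
    the top fraction. `groups` maps group id -> column indices. The statistic
    (group-wise R^2 from least squares) is an illustrative stand-in."""
    y_c = y - y.mean()
    tss = np.sum(y_c**2)
    scores = {}
    for g, cols in groups.items():
        Xg = X[:, cols] - X[:, cols].mean(axis=0)
        coef, *_ = np.linalg.lstsq(Xg, y_c, rcond=None)
        rss = np.sum((y_c - Xg @ coef)**2)
        scores[g] = 1.0 - rss / tss          # marginal R^2 of the group
    n_keep = max(1, int(keep_frac * len(groups)))
    kept = sorted(scores, key=scores.get, reverse=True)[:n_keep]
    return kept, scores

# Tiny synthetic example: 500 samples, 1000 covariates in 100 groups of 10.
rng = np.random.default_rng(1)
n, p, gsize = 500, 1000, 10
X = rng.standard_normal((n, p))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + rng.standard_normal(n)
groups = {g: list(range(g * gsize, (g + 1) * gsize)) for g in range(p // gsize)}

kept, _ = partition_screen(X, y, groups, keep_frac=0.05)
print("groups kept after screening:", kept)   # group 0 holds the true signals
```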

Brain network modeling and inference
Dr. Kang and his colleagues developed several new statistical methods for brain network modeling and inference using resting-state fMRI data [Kang et al 2016; Xie and Kang 2017; Chen et al 2018]. Due to the high dimensionality of fMRI data (over 100,000 voxels in a standard brain template) and the small sample sizes (hundreds of participants in a typical study), it is extremely challenging to model the brain functional connectivity network at the voxel level. Some existing methods model networks over brain anatomical regions using region-level summary statistics computed from the voxel-level data. Those methods may suffer from low power to detect signals and an inflated false positive rate, since the summary statistics may not capture the heterogeneity within the predefined brain regions well. To address those limitations, Dr. Kang proposed a novel method based on multi-attribute canonical correlation graphs [Kang et al 2016] to construct region-level brain networks from voxel-level data. His method can capture different types of nonlinear dependence between any two brain regions consisting of hundreds or thousands of voxels. He also developed permutation tests for assessing the significance of the estimated network. His methods can substantially increase the power to detect signals in small-sample problems. In addition, Dr. Kang and his colleagues developed theoretically justified high-dimensional tests [Xie and Kang 2017] for constructing region-level brain networks from voxel-level data under the multivariate normal assumption. Their theoretical results provide useful guidance for the future development of statistical methods and theory for brain network analysis.
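As a toy illustration of region-level connectivity estimation with a permutation test, the sketch below computes the leading canonical correlation between two simulated voxel blocks and assesses its significance by permuting time points. It is not the multi-attribute canonical correlation graph method of [Kang et al 2016]; the data sizes and the single-attribute CCA are assumptions for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def leading_canonical_corr(A, B):
    """First canonical correlation between two voxel blocks
    (rows = time points, columns = voxels)."""
    cca = CCA(n_components=1)
    a, b = cca.fit_transform(A, B)
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def permutation_pvalue(A, B, n_perm=200, seed=0):
    """Permutation test: shuffle time points of B to break the dependence
    between regions while preserving within-region structure."""
    rng = np.random.default_rng(seed)
    observed = leading_canonical_corr(A, B)
    null = np.array([
        leading_canonical_corr(A, B[rng.permutation(B.shape[0])])
        for _ in range(n_perm)
    ])
    return observed, np.mean(null >= observed)

# Toy resting-state-like data: 150 time points, two regions of 40 voxels,
# sharing a weak latent signal (all sizes are hypothetical).
rng = np.random.default_rng(2)
T, v = 150, 40
shared = rng.standard_normal(T)
region1 = 0.5 * shared[:, None] + rng.standard_normal((T, v))
region2 = 0.5 * shared[:, None] + rng.standard_normal((T, v))

r, p = permutation_pvalue(region1, region2)
print(f"leading canonical correlation = {r:.2f}, permutation p-value = {p:.3f}")
```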

 

This image illustrates the neuroimaging meta-analysis data (Kang et al 2014). Neuroimaging meta-analysis is an important tool for finding consistent effects across studies. We develop a Bayesian nonparametric model and perform a meta-analysis of five emotions from 219 studies. In addition, our model can make reverse inference by predicting the emotion type from a newly presented study. Our method outperforms other methods, with an average accuracy of 80%.

1. Cai Q, Kang J, Yu T (2020) Bayesian variable selection over large scale networks via the thresholded graph Laplacian Gaussian prior with application to genomics. Bayesian Analysis, in press. (An earlier version won a student paper award from the Biometrics Section of the ASA at JSM 2017.)
2. He K, Kang J, Hong G, Zhu J, Li Y, Lin H, Xu H, Li Y (2019) Covariance-insured screening. Computational Statistics and Data Analysis, 132: 100-114.
3. He K, Xu H, Kang J† (2019) A selective overview of feature screening methods with applications to neuroimaging data. WIREs Computational Statistics, 11(2): e1454.
4. Chen S, Xing Y, Kang J, Kochunov P, Hong LE (2018) Bayesian modeling of dependence in brain connectivity. Biostatistics, in press.
5. Kang J, Reich BJ, Staicu AM (2018) Scalar-on-image regression via the soft thresholded Gaussian process. Biometrika, 105(1): 165-184.
6. Xue W, Bowman D, Kang J (2018) A Bayesian spatial model to predict disease status using imaging data from various modalities. Frontiers in Neuroscience, 12:184. doi:10.3389/fnins.2018.00184
7. Jin Z*, Kang J†, Yu T (2018) Missing value imputation for LC-MS metabolomics data by incorporating metabolic network and adduct ion relations. Bioinformatics, 34(9): 1555-1561.
8. He K, Kang J† (2018) Comments on “Computationally efficient multivariate spatio-temporal models for high-dimensional count-valued data”. Bayesian Analysis, 13(1): 289-291.
9. Hong GH, Kang J†, Li Y (2018) Conditional screening for ultra-high dimensional covariates with survival outcomes. Lifetime Data Analysis, 24(1): 45-71.
10. Zhao Y*, Kang J†, Long Q (2018) Bayesian multiresolution variable selection for ultra-high dimensional neuroimaging data. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 15(2): 537-550. (An earlier version won a student paper award from the ASA Section on Statistical Learning and Data Mining at JSM 2014; it was also ranked as one of the top two papers in the student paper competition of the ASA Section on Statistics in Imaging at JSM 2014.)
11. Kang J, Hong GH, Li Y (2017) Partition-based ultrahigh dimensional variable screening. Biometrika, 104(4): 785-800.
12. Xie J#, Kang J# (2017) High dimensional tests for functional networks of brain anatomic regions. Journal of Multivariate Analysis, 156: 70-88.
13. Cai Q*, Alvarez JA, Kang J†, Yu T (2017) Network marker selection for untargeted LC/MS metabolomics data. Journal of Proteome Research, 16(3): 1261-1269.
14. Kang J, Bowman FD, Mayberg H, Liu H (2016) A depression network of functionally connected regions discovered via multi-attribute canonical correlation graphs. NeuroImage, 141: 431-441.

Lei Ying


His research broadly concerns the interplay of complex stochastic systems and big data, including large-scale communication and computing systems for big-data processing, private data marketplaces, and large-scale graph mining.

Anna G. Stefanopoulou


Energy and transportation-related topics: data and simulations of various cleaner and ultimately cost-effective options for transit; exploration of techno-economic and environmental issues in electric ride-sharing/hailing vehicles to create clean and convenient alternatives to single-occupancy vehicles; and investigation of the location and integration of chargers with energy storage and bi-directional services, along with their connection to distributed renewable power generation, such as solar arrays, as well as the centralized electric grid.

Powertrain-related topics: measurements, models, and management of batteries, fuel cells, and engines in automotive and stationary applications.

Jing Sun


My areas of interest are control, estimation, and optimization, with applications to energy systems in transportation, automotive, and marine domains. My group develops model-based and data-driven tools to explore underlying system dynamics and understand the operational environments. We develop computational frameworks and numerical algorithms to achieve real-time optimization and explore connectivity and data analytics to reduce uncertainties and improve performance through predictive control and planning.

Zhixin (Jason) Liu


My research focuses on quantitative modeling approaches that help businesses or nonprofit institutions make efficient operational decisions. My research addresses decisions that are made: 1) on either a single independent operation or multiple integrated operations, and 2) by either a single party or multiple parties, most likely different supply chain members. I am specifically interested in the allocation of resources over time and/or among different parties, which often involves scheduling (the allocation of resources over time to optimize certain objectives), capacity allocation (the allocation of production capacity from a supplier to retailers in a supply chain setting), and pricing (the determination of the selling prices of certain products). When multiple parties are involved, decisions can be made either cooperatively or non-cooperatively. The methodologies used in my work include game theory, real analysis, optimization, approximation, simulation, and statistics.
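As a toy illustration of scheduling in this sense (allocating a single resource over time to optimize an objective), the sketch below sequences jobs on one machine by the classical weighted-shortest-processing-time rule, which minimizes total weighted completion time; the job data are hypothetical and this is not drawn from any specific research model described above.

```python
# Classical single-machine scheduling result (Smith's rule): sequencing jobs
# in decreasing weight / processing-time ratio minimizes the total weighted
# completion time.
def wspt_schedule(jobs):
    """jobs: list of (name, processing_time, weight). Returns the job order
    and the total weighted completion time under that order."""
    order = sorted(jobs, key=lambda j: j[2] / j[1], reverse=True)
    t, total = 0.0, 0.0
    for name, p, w in order:
        t += p                      # completion time of this job
        total += w * t
    return [j[0] for j in order], total

jobs = [("A", 3, 1), ("B", 1, 4), ("C", 2, 2)]   # hypothetical data
order, cost = wspt_schedule(jobs)
print(order, cost)   # ['B', 'C', 'A'] with cost 4*1 + 2*3 + 1*6 = 16
```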

Nicole Seiberlich


My research involves developing novel data collection strategies and image reconstruction techniques for Magnetic Resonance Imaging. In order to accelerate data collection, we take advantage of features of MRI data, including sparsity, spatiotemporal correlations, and adherence to underlying physics; each of these properties can be leveraged to reduce the amount of data required to generate an image and thus speed up imaging time. We also seek to understand what image information is essential for radiologists in order to optimize MRI data collection and personalize the imaging protocol for each patient. We deploy machine learning algorithms and optimization techniques in each of these projects. In some of our work, we can generate the data that we need to train and test our algorithms using numerical simulations. In other parts of our work, we use clinical images, prospectively collected MRI data, or MRI protocol information to refine our techniques.
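As a rough sketch of how sparsity can reduce the data needed to form an image, the example below reconstructs a sparse toy phantom from 25%-sampled Fourier (k-space) data with iterative soft-thresholding. It is only a schematic: real MRI reconstruction works with measured multi-coil data and sparsity in transform domains (e.g., wavelets or finite differences), and all sizes and parameters here are assumptions.

```python
import numpy as np

def ista_recon(kspace, mask, lam=0.05, step=1.0, n_iter=100):
    """Minimal iterative soft-thresholding reconstruction of an image from
    undersampled k-space, assuming the image itself is (approximately)
    sparse. Schematic only, not a clinical reconstruction."""
    x = np.zeros(kspace.shape, dtype=complex)
    for _ in range(n_iter):
        # Gradient step on the data-fidelity term ||M F x - y||^2.
        resid = mask * np.fft.fft2(x, norm="ortho") - kspace
        x = x - step * np.fft.ifft2(mask * resid, norm="ortho")
        # Soft-threshold magnitudes to enforce sparsity of the image.
        mag = np.abs(x)
        phase = x / np.maximum(mag, 1e-12)
        x = phase * np.maximum(mag - lam * step, 0.0)
    return x

# Toy example: a sparse "phantom" and 4x-undersampled k-space.
rng = np.random.default_rng(3)
N = 64
truth = np.zeros((N, N))
truth[rng.integers(0, N, 30), rng.integers(0, N, 30)] = 1.0   # sparse spikes
mask = rng.random((N, N)) < 0.25                               # keep 25% of k-space
kspace = mask * np.fft.fft2(truth, norm="ortho")

recon = ista_recon(kspace, mask)
print("relative error:", np.linalg.norm(np.abs(recon) - truth) / np.linalg.norm(truth))
```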

We seek to develop technologies like cardiac Magnetic Resonance Fingerprinting (cMRF), which can be used to efficiently collect multiple forms of information to distinguish healthy and diseased tissue using MRI. By using rapid methods like cMRF, quantitative data describing disease processes can be gathered quickly, enabling more patients, and sicker patients, to be assessed via MRI. These data, collected from many patients over time, can also be used to further refine MRI technologies for the assessment of specific diseases in a tailored, patient-specific manner.