We are interested in resolving outstanding fundamental scientific problems that impede the computational materials design process. Our group uses high-throughput density functional theory, applied thermodynamics, and materials informatics to deepen our fundamental understanding of synthesis-structure-property relationships, while exploring new chemical spaces for functional technological materials. These research interests are driven by the practical goal of the U.S. Materials Genome Initiative to accelerate materials discovery, a goal whose realization requires basic research in synthesis science, inorganic chemistry, and materials thermodynamics.

Alex Gorodetsky’s research is at the intersection of applied mathematics, data science, and computational science, and is focused on enabling autonomous decision making under uncertainty. He is especially interested in controlling, designing, and analyzing autonomous systems that must act in complex environments, where observational data and expensive computational simulations must work together to ensure objectives are achieved. Toward this goal, he pursues research in wide-ranging areas including uncertainty quantification, statistical inference, machine learning, control, and numerical analysis. His methodology is to increase the scalability of probabilistic modeling and analysis techniques such as Bayesian inference and uncertainty quantification. His current strategies for achieving scalability revolve around leveraging computational optimal transport, developing tensor network learning algorithms, and creating new multi-fidelity information fusion approaches.

Sample workflow for enabling autonomous decision making under uncertainty for a drone operating in a complex environment. We develop algorithms to compress simulation data by exploiting problem structure. We then embed the compressed representations onto onboard computational resources. Finally, we develop approaches to enable the drone to adapt, learn, and refine knowledge by interacting with, and collecting data from, the environment.

Dr. Kang’s research focuses on the development of statistical methods motivated by biomedical applications, with an emphasis on neuroimaging. His recent key contributions can be summarized in the following three areas:

Bayesian regression for complex biomedical applications

Dr. Kang and his group developed a series of Bayesian regression methods for association analysis between clinical outcomes of interest (disease diagnoses, survival times, psychiatric scores) and potential biomarkers in biomedical applications such as neuroimaging and genomics. In particular, they developed a new class of threshold priors as compelling alternatives to the classic continuous shrinkage priors of the Bayesian literature and the widely used penalization methods of the frequentist literature. Dr. Kang’s methods can substantially increase the power to detect weak but highly dependent signals by incorporating useful structural information about the predictors, such as spatial proximity within brain anatomical regions in neuroimaging [Zhao et al. 2018; Kang et al. 2018; Xue et al. 2019] and gene networks in genomics [Cai et al. 2017; Cai et al. 2019]. Dr. Kang’s methods can simultaneously select variables, evaluate the uncertainty of variable selection, and make inference on the effect sizes of the selected variables. His work provides a set of new tools for biomedical researchers to identify important biomarkers using different types of biological knowledge, with statistical guarantees. In addition, Dr. Kang’s work is among the first to establish rigorous theoretical justifications for Bayesian spatial variable selection in imaging data analysis [Kang et al. 2018] and Bayesian network marker selection in genomics [Cai et al. 2019]. Dr. Kang’s theoretical contributions not only offer a deep understanding of the soft-thresholding operator on smooth functions, but also provide insight into which types of biological knowledge may improve biomarker detection accuracy.
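The soft-thresholding operator mentioned above is simple to illustrate. The sketch below is not the published method, only a minimal numpy illustration with hypothetical hyperparameters: applying soft-thresholding to a smooth Gaussian-process draw yields a coefficient image that is exactly zero outside a few smooth active regions, which is the mechanism behind sparse, spatially structured variable selection.

```python
import numpy as np

def soft_threshold(beta, lam):
    """Soft-thresholding operator: zeroes values within [-lam, lam]
    and shrinks the rest toward zero, preserving continuity."""
    return np.sign(beta) * np.maximum(np.abs(beta) - lam, 0.0)

# Toy 1-D "image": a smooth latent function drawn from a
# squared-exponential Gaussian process (hypothetical hyperparameters).
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.05 ** 2)
latent = rng.multivariate_normal(np.zeros(200), K + 1e-6 * np.eye(200))

# The thresholded image is sparse (exact zeros) yet smooth on its support.
beta = soft_threshold(latent, lam=1.0)
print(f"fraction of exact zeros: {np.mean(beta == 0):.2f}")
```

Because the operator is continuous, the thresholded coefficient function inherits the smoothness of the latent process wherever it is nonzero, which is the property the theoretical work analyzes.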

Prior knowledge guided variable screening for ultrahigh-dimensional data

Dr. Kang and his colleagues developed a series of variable screening methods for ultrahigh-dimensional data analysis that incorporate useful prior knowledge in biomedical applications, including imaging [Kang et al. 2017; He et al. 2019], survival analysis [Hong et al. 2018], and genomics [He et al. 2019]. As a preprocessing step for variable selection, variable screening is a computationally fast approach to dimension reduction. Traditional variable screening methods overlook useful prior knowledge, so their practical performance is unsatisfying in many biomedical applications. To fill this gap, Dr. Kang developed a partition-based ultrahigh-dimensional variable screening method under the generalized linear model, which naturally incorporates the grouping and structural information arising in biomedical applications. For settings where prior knowledge is unavailable or unreliable, Dr. Kang proposed a data-driven partition screening framework based on covariate grouping and investigated its theoretical properties. Two special cases he proposed, correlation-guided partitioning and spatial-location-guided partitioning, are extremely useful in practice for neuroimaging data analysis and genome-wide association analysis. When multiple types of grouping information are available, Dr. Kang proposed a novel, theoretically justified strategy for combining screening statistics from various partitioning methods, providing a very flexible framework for incorporating different types of prior knowledge.
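The partition-screening idea can be sketched in a few lines. The toy implementation below is a hedged illustration, not Dr. Kang's published procedure: it scores each covariate group by a group-wise R-squared (one plausible screening statistic) and retains the top-scoring groups. All names, sizes, and the choice of statistic are hypothetical.

```python
import numpy as np

def partition_screen(X, y, groups, keep):
    """Score each covariate group by the R^2 of a group-wise
    least-squares fit of y on that group's columns, then keep the
    columns belonging to the `keep` top-scoring groups.

    groups: list of index arrays partitioning the columns of X.
    """
    yc = y - y.mean()
    scores = []
    for idx in groups:
        Xg = X[:, idx] - X[:, idx].mean(axis=0)
        coef, *_ = np.linalg.lstsq(Xg, yc, rcond=None)
        r2 = 1 - np.sum((yc - Xg @ coef) ** 2) / np.sum(yc ** 2)
        scores.append(r2)
    top = np.argsort(scores)[::-1][:keep]
    return np.concatenate([groups[g] for g in top])

# Toy example: 100 samples, 500 predictors in 50 contiguous groups of 10;
# only the first group carries signal.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 500))
y = X[:, :10] @ np.full(10, 0.5) + rng.standard_normal(100)
groups = [np.arange(10 * g, 10 * (g + 1)) for g in range(50)]
kept = partition_screen(X, y, groups, keep=5)
```

Screening by groups rather than one covariate at a time is what lets weak individual signals that share a group (e.g., voxels in one anatomical region) survive to the downstream selection step.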

Brain network modeling and inferences

Dr. Kang and his colleagues developed several new statistical methods for brain network modeling and inference using resting-state fMRI data [Kang et al. 2016; Xie and Kang 2017; Chen et al. 2018]. Because fMRI data are high dimensional (over 100,000 voxels in a standard brain template) while sample sizes are small (hundreds of participants in a typical study), modeling the brain functional connectivity network at the voxel level is extremely challenging. Some existing methods model region-level networks over anatomical brain regions using region-level summary statistics computed from voxel-level data. Those methods may suffer low power to detect signals and an inflated false positive rate, since the summary statistics may not capture the heterogeneity within the predefined brain regions. To address these limitations, Dr. Kang proposed a novel method based on multi-attribute canonical correlation graphs [Kang et al. 2016] to construct region-level brain networks from voxel-level data. His method can capture different types of nonlinear dependence between any two brain regions, each consisting of hundreds or thousands of voxels. He also developed permutation tests for assessing the significance of the estimated network. His methods can substantially increase the power to detect signals in small-sample problems. In addition, Dr. Kang and his colleague developed theoretically justified high-dimensional tests [Xie and Kang 2017] for constructing region-level brain networks from voxel-level data under the multivariate normal assumption. Their theoretical results provide useful guidance for the future development of statistical methods and theory for brain network analysis.
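As a rough illustration of the region-level idea (not the published multi-attribute method), the snippet below computes the first canonical correlation between two simulated regions' scan-by-voxel matrices: the strongest linear association achievable between weighted combinations of each region's voxels. Dimensions and names are hypothetical.

```python
import numpy as np

def max_canonical_corr(A, B):
    """First canonical correlation between voxel matrices A (n x p)
    and B (n x q)."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    # After orthonormalizing each block with a QR factorization, the
    # singular values of Qa^T Qb are exactly the canonical correlations.
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return min(1.0, float(s[0]))

# Two simulated "regions" (30 and 40 voxels over 200 scans) sharing one
# latent time course, plus a pure-noise region for comparison.
rng = np.random.default_rng(2)
latent = rng.standard_normal(200)
A = np.outer(latent, rng.standard_normal(30)) + rng.standard_normal((200, 30))
B = np.outer(latent, rng.standard_normal(40)) + rng.standard_normal((200, 40))
noise = rng.standard_normal((200, 40))
rho, rho0 = max_canonical_corr(A, B), max_canonical_corr(A, noise)
# rho (shared latent signal) should clearly exceed rho0 (no shared signal)
```

Working with the full voxel matrices rather than region averages is what preserves within-region heterogeneity; in practice the null distribution of such a statistic is calibrated by permutation, as the text describes.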

This image illustrates the neuroimaging meta-analysis data (Kang et al. 2014). Neuroimaging meta-analysis is an important tool for finding effects that are consistent across studies. We develop a Bayesian nonparametric model and perform a meta-analysis of five emotions from 219 studies. In addition, our model can make reverse inference by predicting the emotion type of a newly presented study. Our method outperforms competing methods, with an average accuracy of 80%.

1. Cai Q, Kang J, Yu T (2020) Bayesian variable selection over large scale networks via the thresholded graph Laplacian Gaussian prior with application to genomics. Bayesian Analysis, in press. (An earlier version won a student paper award from the Biometrics Section of the ASA at JSM 2017.)

2. He K, Kang J, Hong G, Zhu J, Li Y, Lin H, Xu H, Li Y (2019) Covariance-insured screening. Computational Statistics and Data Analysis, 132: 100–114.

3. He K, Xu H, Kang J† (2019) A selective overview of feature screening methods with applications to neuroimaging data. WIREs Computational Statistics, 11(2): e1454.

4. Chen S, Xing Y, Kang J, Kochunov P, Hong LE (2018) Bayesian modeling of dependence in brain connectivity. Biostatistics, in press.

5. Kang J, Reich BJ, Staicu AM (2018) Scalar-on-image regression via the soft thresholded Gaussian process. Biometrika, 105(1): 165–184.

6. Xue W, Bowman D, Kang J (2018) A Bayesian spatial model to predict disease status using imaging data from various modalities. Frontiers in Neuroscience, 12:184. doi:10.3389/fnins.2018.00184.

7. Jin Z*, Kang J†, Yu T (2018) Missing value imputation for LC-MS metabolomics data by incorporating metabolic network and adduct ion relations. Bioinformatics, 34(9): 1555–1561.

8. He K, Kang J† (2018) Comments on “Computationally efficient multivariate spatio-temporal models for high-dimensional count-valued data”. Bayesian Analysis, 13(1): 289–291.

9. Hong GH, Kang J†, Li Y (2018) Conditional screening for ultra-high dimensional covariates with survival outcomes. Lifetime Data Analysis, 24(1): 45–71.

10. Zhao Y*, Kang J†, Long Q (2018) Bayesian multiresolution variable selection for ultra-high dimensional neuroimaging data. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 15(2): 537–550. (An earlier version won a student paper award from the ASA Section on Statistical Learning and Data Mining at JSM 2014; it was also ranked among the top two papers in the student paper competition of the ASA Section on Statistics in Imaging at JSM 2014.)

11. Kang J, Hong GH, Li Y (2017) Partition-based ultrahigh dimensional variable screening. Biometrika, 104(4): 785–800.

12. Xie J#, Kang J# (2017) High dimensional tests for functional networks of brain anatomic regions. Journal of Multivariate Analysis, 156: 70–88.

13. Cai Q*, Alvarez JA, Kang J†, Yu T (2017) Network marker selection for untargeted LC/MS metabolomics data. Journal of Proteome Research, 16(3): 1261–1269.

14. Kang J, Bowman FD, Mayberg H, Liu H (2016) A depression network of functionally connected regions discovered via multi-attribute canonical correlation graphs. NeuroImage, 41: 431–441.

My areas of interest are control, estimation, and optimization, with applications to energy systems in the transportation, automotive, and marine domains. My group develops model-based and data-driven tools to explore underlying system dynamics and understand operational environments. We develop computational frameworks and numerical algorithms to achieve real-time optimization, and we explore connectivity and data analytics to reduce uncertainty and improve performance through predictive control and planning.

My core research focuses on the politics and measurement of human rights, discrimination, violence, and repression. I use computational methods to understand why governments around the world torture, maim, and kill individuals within their jurisdiction and the processes monitors use to observe and document these abuses. Other projects cover a broad array of themes but share a focus on computationally intensive methods and research design. These methodological tools, essential for analyzing data at massive scale, open up new insights into the micro-foundations of state repression and the politics of measurement.

People rely more on strong ties for job help in countries with greater inequality. Coefficients from 55 regressions of job transmission on tie strength are compared to measures of inequality (Gini coefficient), mean income per capita, and population, all measured in 2013. Gray lines indicate 95% confidence regions from 1000 simulated regressions that incorporate uncertainty in the country-level regressions (see below for more details). In each simulated regression we draw each country point from the distribution of regression coefficients implied by the estimate and standard error for that country and measure of tie strength. P values indicate the simulated probability that there is no relationship between tie strength and the other variable. Laura K. Gee, Jason J. Jones, Christopher J. Fariss, Moira Burke, and James H. Fowler. “The Paradox of Weak Ties in 55 Countries” Journal of Economic Behavior & Organization 133:362-372 (January 2017) DOI:10.1016/j.jebo.2016.12.004

Dr. Niccolò Meneghetti is an Assistant Professor of Computer and Information Science at the University of Michigan-Dearborn.

His major research interests are in the broad area of database systems, with primary focus on probabilistic databases, statistical relational learning and uncertain data management.

I am interested in how governance, communities, and inequality emerge in sociotechnical systems, and how the structure of sociotechnical systems encodes and reinforces these processes. To those ends, I develop empirical data and computational methods, focusing on latent variable models; statistical inference in networks; empirical design to study governance in organizations, platforms, and computational social systems; and causal inference and measurement in observational data.

Several sample projects:

> developing empirical populations of networks to infer social and ecological processes encoded in networks

> using probabilistic methods to infer the structure and dynamics of the illicit wildlife trade

> building on theory from political science, statistics, and education to disentangle issues of “bias” in computational systems

I have broad interests and expertise in developing statistical methodology and applying it in biomedical research. I have adapted methodologies, including Bayesian data analysis, categorical data analysis, generalized linear models, longitudinal data analysis, multivariate analysis, RNA-Seq data analysis, survival data analysis, and machine learning methods, in response to the unique needs of individual studies and objectives, without compromising the integrity of the research and results. Two recently developed methods:

1) A risk prediction model for a survival outcome using predictors of a large dimension

I have developed a simple, fast, yet sufficiently flexible statistical method to estimate the updated risk of end-stage renal disease (ESRD) over time using high-dimensional longitudinal biomarkers. The goal is to use all available high-dimensional data (e.g., routine clinical features and urine and serum markers measured at baseline and at all follow-up time points) to efficiently and accurately estimate the updated ESRD risk.

2) A safety mining tool for vaccine safety study

I developed an algorithm for vaccine safety surveillance that incorporates an adverse event (AE) ontology. Multiple adverse events may individually be rare enough to go undetected, but if they are related, they can borrow strength from each other to increase the chance of being flagged. Furthermore, borrowing strength induces shrinkage among related AEs, thereby also reducing headline-grabbing false positives.
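The borrowing-strength idea can be illustrated with a simple empirical-Bayes-style shrinkage toward an ontology group mean. This is a hedged sketch, not the deployed surveillance algorithm; the function name, the `tau2` variance parameter, and all inputs are hypothetical.

```python
import numpy as np

def shrink_rates(counts, exposure, groups, tau2=1e-6):
    """Shrink each adverse event's observed reporting rate toward the
    mean rate of its ontology group. tau2 is a hypothetical between-AE
    variance controlling how much each AE trusts its own data."""
    counts = np.asarray(counts, float)
    exposure = np.asarray(exposure, float)
    rate = counts / exposure
    var = np.maximum(rate, 1e-8) / exposure   # approximate sampling variance
    est = np.empty_like(rate)
    for g in set(groups):
        idx = [i for i, lab in enumerate(groups) if lab == g]
        m = rate[idx].mean()                  # group mean rate
        w = tau2 / (tau2 + var[idx])          # weight on the AE's own data
        est[idx] = w * rate[idx] + (1 - w) * m
    return est

# Three related AEs, each modestly elevated, borrow strength from one
# another; an isolated spike is pulled toward its own group's background.
est = shrink_rates(counts=[6, 5, 7, 12, 1], exposure=[1000.0] * 5,
                   groups=["cardiac", "cardiac", "cardiac", "other", "other"])
```

Because the three related AEs share an elevated group mean, each one's estimate moves toward it, raising the collective chance of a flag, while the lone spike in the other group is shrunk toward its group's background rate, damping a likely false positive.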

Kentaro Toyama is W. K. Kellogg Professor of Community Information at the University of Michigan School of Information and a fellow of the Dalai Lama Center for Ethics and Transformative Values at MIT. He is the author of “Geek Heresy: Rescuing Social Change from the Cult of Technology.” Toyama conducts interdisciplinary research to understand how the world’s low-income communities interact with digital technology and to invent new ways for technology to support their socio-economic development, including computer simulations of complex systems for policy-making. Previously, Toyama did research in artificial intelligence, computer vision, and human-computer interaction at Microsoft and taught mathematics at Ashesi University in Ghana.