The goal of this project is to create a crucial building block for research on AI and Architecture: a database of 3D models needed to successfully train Artificial Neural Networks in 3D. This database is one of the first stepping-stones for research at the AR2IL (Architecture and Artificial Intelligence Laboratory), an interdisciplinary Laboratory between Architecture (represented by the Taubman College of Architecture and Urban Planning), Michigan Robotics, and the CS Department of the University of Michigan, dedicated to developing applications of Artificial Intelligence in Architecture and Urban Planning. This area of inquiry has experienced explosive growth in recent years (triggered in part by research conducted at UoM), as evidenced by the growth in papers dedicated to AI applications in architecture and by industry investment in this area. The research funded by this proposal would secure the leading position of Taubman College and the University of Michigan in the field of AI and Architecture, and would address the current lack of 3D databases specifically designed for Architecture applications.
My research focuses on the application and development of new algorithms for solving complex business analytics problems. Applications range from revenue management, dynamic pricing, and marketing analytics to retail logistics. Methodologically, I use a combination of operations research and machine learning/online optimization techniques.
My lab researches how the human brain processes social and affective information and how these processes are affected in psychiatric disorders, especially schizophrenia and bipolar disorder. We use behavioral, electrophysiological (EEG), neuroimaging (functional MRI), eye tracking, brain stimulation (TMS, tACS), and computational methods in our studies. One main focus of our work is building and validating computational models based on intensive, high-dimensional subject-level behavioral and brain data to explain clinical phenomena, parse mechanisms, and predict patient outcomes. The goal is to improve diagnostic and prognostic assessment, and to develop personalized treatments.
Our research aims to address fundamental problems in both biomedical research and computer science by developing new tools tailored to rapidly emerging single-cell omic technologies. Broadly, we seek to understand what genes define the complement of cell types and cell states within healthy tissue, how cells differentiate to their final fates, and how dysregulation of genes within specific cell types contributes to human disease. As computational method developers, we seek to both employ and advance the methods of machine learning, particularly for unsupervised analysis of high-dimensional data. We have particular expertise in manifold learning, matrix factorization, and deep learning approaches.
Today’s real-world problems are complex and large, often with an overwhelmingly large number of unknown variables that leave them subject to the so-called “curse of dimensionality”. For instance, in energy systems, system operators must solve optimal power flow, unit commitment, and transmission switching problems with tens of thousands of continuous and discrete variables in real time. In control systems, a long-standing question is how to efficiently design structured and distributed controllers for large-scale and unknown dynamical systems. Finally, in machine learning, it is important to obtain simple, interpretable, and parsimonious models for high-dimensional and noisy datasets. Our research is motivated by two main goals: (1) to model these problems as tractable optimization problems; and (2) to develop structure-aware and scalable computational methods for these optimization problems that come equipped with certifiable optimality guarantees. We aim to show that exploiting hidden structures in these problems—such as graph-induced or spectral sparsity—is a key game-changer in the pursuit of massively scalable and guaranteed computational methods.
My research lies at the intersection of optimization, data analytics, and control.
Larson’s research has been in the area of “Complex Fluids,” which include polymers, colloids, surfactant-containing fluids, liquid crystals, and biological macromolecules such as DNA, proteins, and lipid membranes. He has also contributed extensively to fluid mechanics, including microfluidics and transport modeling. Over the past 16 years he has also carried out research in the area of molecular simulations for biomedical applications. This work has involved determining the structure and dynamics of lipid membranes, trans-membrane peptides, and anti-microbial peptides; the conformation and functioning of ion channels; interactions of excipients with drugs for drug delivery; and interactions of peptides with proteins, including MHC molecules. It has resulted in more than 50 publications in these areas and in the training of several Ph.D. students and postdocs. Many of these studies involve heavy use of computer simulations and methods of statistical analysis of simulations, including umbrella sampling, forward flux sampling, and metadynamics, which involve statistical weighting of results. He has also been engaged in the analysis of percolation processes on lattices, including applications to disease propagation.
Alpha-helical peptide bridging a lipid bilayer in molecular dynamics simulations of “hydrophobic mismatch.”
Dr. Kang’s research focuses on the development of statistical methods motivated by biomedical applications, with a focus on neuroimaging. His recent key contributions can be summarized in the following three aspects:
Bayesian regression for complex biomedical applications
Dr. Kang and his group developed a series of Bayesian regression methods for association analysis between clinical outcomes of interest (disease diagnoses, survival times, psychiatric scores) and potential biomarkers in biomedical applications such as neuroimaging and genomics. In particular, they developed a new class of threshold priors as compelling alternatives to the classic continuous shrinkage priors in the Bayesian literature and the widely used penalization methods in the frequentist literature. Dr. Kang’s methods can substantially increase the power to detect weak but highly dependent signals by incorporating useful structural information about predictors, such as spatial proximity within brain anatomical regions in neuroimaging [Zhao et al 2018; Kang et al 2018; Xue et al 2019] and gene networks in genomics [Cai et al 2017; Cai et al 2019]. His methods can simultaneously select variables, evaluate the uncertainty of variable selection, and make inference on the effect sizes of the selected variables. This work provides biomedical researchers with a set of new tools for identifying important biomarkers using different types of biological knowledge, with statistical guarantees. In addition, Dr. Kang’s work is among the first to establish rigorous theoretical justifications for Bayesian spatial variable selection in imaging data analysis [Kang et al 2018] and Bayesian network marker selection in genomics [Cai et al 2019]. His theoretical contributions not only offer a deep understanding of the soft-thresholding operator on smooth functions, but also provide insight into which types of biological knowledge may improve biomarker detection accuracy.
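The soft-thresholding operator at the heart of these priors can be illustrated concretely: applied to a smooth function, it zeroes out weak regions while preserving (and shrinking) strong ones, yielding coefficients that are sparse yet spatially structured. The following is a minimal numerical sketch, not Dr. Kang's actual model; the signal shape and threshold value are invented for illustration.

```python
import numpy as np

def soft_threshold(g, lam):
    """Soft-thresholding operator: shrinks values toward zero and
    sets those with magnitude below lam exactly to zero."""
    return np.sign(g) * np.maximum(np.abs(g) - lam, 0.0)

# Hypothetical smooth "effect" signal over a 1-D domain: a strong bump
# and a weaker dip. Thresholding keeps only the strong regions nonzero.
x = np.linspace(0, 1, 200)
smooth = np.exp(-((x - 0.3) ** 2) / 0.005) - 0.4 * np.exp(-((x - 0.7) ** 2) / 0.01)
beta = soft_threshold(smooth, lam=0.3)

print(f"nonzero coefficients: {np.count_nonzero(beta)} of {beta.size}")
```

Because the input is smooth, the surviving nonzero coefficients form contiguous regions rather than isolated points, which is the spatial structure the threshold priors exploit.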
Prior knowledge guided variable screening for ultrahigh-dimensional data
Dr. Kang and his colleagues developed a series of variable screening methods for ultrahigh-dimensional data analysis by incorporating useful prior knowledge in biomedical applications, including imaging [Kang et al 2017; He et al 2019], survival analysis [Hong et al 2018], and genomics [He et al 2019]. As a preprocessing step for variable selection, variable screening is a computationally fast approach to dimension reduction. Traditional variable screening methods overlook useful prior knowledge, and their practical performance is therefore unsatisfying in many biomedical applications. To fill this gap, Dr. Kang developed a partition-based ultrahigh-dimensional variable screening method under generalized linear models, which can naturally incorporate the grouping and structural information in biomedical applications. When prior knowledge is unavailable or unreliable, he proposed a data-driven partition screening framework based on covariate grouping and investigated its theoretical properties. Two special cases, correlation-guided partitioning and spatial-location-guided partitioning, are extremely useful in practice for neuroimaging data analysis and genome-wide association analysis. When multiple types of grouping information are available, Dr. Kang proposed a novel, theoretically justified strategy for combining screening statistics from various partitioning methods, providing a very flexible framework for incorporating different types of prior knowledge.
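The partition-based screening idea can be sketched in a few lines: score each predictor marginally, aggregate scores within a predefined partition (the prior grouping), and retain whole groups rather than individual predictors. This toy version is an assumption-laden illustration, not Dr. Kang's actual estimator; the function name, the correlation score, and the max-aggregation rule are all simplifications chosen for brevity.

```python
import numpy as np

def partition_screening(X, y, groups, keep=2):
    """Score predictors by absolute marginal correlation with y,
    aggregate within predefined groups, and keep every predictor
    belonging to the top-scoring groups."""
    Xc = (X - X.mean(0)) / X.std(0)
    yc = (y - y.mean()) / y.std()
    marginal = np.abs(Xc.T @ yc) / len(y)          # per-predictor score
    group_ids = np.unique(groups)
    group_score = np.array([marginal[groups == g].max() for g in group_ids])
    top = group_ids[np.argsort(group_score)[::-1][:keep]]
    return np.where(np.isin(groups, top))[0]       # indices of retained predictors

# Synthetic example: only the first group contains true signals.
rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.standard_normal((n, p))
y = 2 * X[:, 0] + 2 * X[:, 1] + rng.standard_normal(n)
groups = np.repeat(np.arange(5), 4)                # 5 groups of 4 predictors
kept = partition_screening(X, y, groups, keep=2)
print(kept)
```

The point of group-level retention is that a weak signal can survive screening when it shares a group with a strong one, which is how structural prior knowledge raises power.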
Brain network modeling and inferences
Dr. Kang and his colleagues developed several new statistical methods for brain network modeling and inference using resting-state fMRI data [Kang et al 2016; Xie and Kang 2017; Chen et al 2018]. Due to the high dimensionality of fMRI data (over 100,000 voxels in a standard brain template) and small sample sizes (hundreds of participants in a typical study), it is extremely challenging to model the brain functional connectivity network at the voxel level. Some existing methods model region-level networks using region-level summary statistics computed from voxel-level data. Those methods may suffer from low power to detect signals and an inflated false positive rate, since the summary statistics may not capture the heterogeneity within the predefined brain regions. To address those limitations, Dr. Kang proposed a novel method based on multi-attribute canonical correlation graphs [Kang et al 2016] to construct region-level brain networks from voxel-level data. His method can capture different types of nonlinear dependence between any two brain regions consisting of hundreds or thousands of voxels. He also developed permutation tests for assessing the significance of the estimated network. These methods can substantially increase the power to detect signals in small-sample problems. In addition, Dr. Kang and his colleagues developed theoretically justified high-dimensional tests [Xie and Kang 2017] for constructing region-level brain networks from voxel-level data under the multivariate normal assumption. Their theoretical results provide useful guidance for the future development of statistical methods and theory for brain network analysis.
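A permutation test for the significance of an estimated connection can be sketched as follows. This toy version compares the absolute correlation between two region-level time series against a null distribution built by shuffling one series; the actual method uses multi-attribute canonical correlation statistics on voxel-level data, so the statistic, names, and data here are purely illustrative.

```python
import numpy as np

def permutation_pvalue(a, b, n_perm=2000, seed=0):
    """Permutation test for dependence between two time series:
    compare the observed |correlation| against the null distribution
    obtained by randomly permuting one series."""
    rng = np.random.default_rng(seed)
    obs = abs(np.corrcoef(a, b)[0, 1])
    null = np.empty(n_perm)
    for i in range(n_perm):
        null[i] = abs(np.corrcoef(rng.permutation(a), b)[0, 1])
    return (1 + np.sum(null >= obs)) / (1 + n_perm)   # add-one p-value

# Illustrative "region" signals: a and b share a common component,
# c is independent noise.
rng = np.random.default_rng(1)
t = rng.standard_normal(150)
region_a = t + 0.5 * rng.standard_normal(150)
region_b = t + 0.5 * rng.standard_normal(150)
region_c = rng.standard_normal(150)

p_ab = permutation_pvalue(region_a, region_b)   # dependent pair
p_ac = permutation_pvalue(region_a, region_c)   # independent pair
print(p_ab, p_ac)
```

The add-one correction in the p-value keeps it strictly positive, a standard safeguard for permutation tests with finitely many permutations.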
This image illustrates neuroimaging meta-analysis data (Kang et al 2014). Neuroimaging meta-analysis is an important tool for finding effects that are consistent across studies. We developed a Bayesian nonparametric model and performed a meta-analysis of five emotions from 219 studies. In addition, our model can perform reverse inference, predicting the emotion type from a newly presented study. Our method outperforms other methods with an average accuracy of 80%.
1. Cai Q, Kang J, Yu T (2020) Bayesian variable selection over large scale networks via the thresholded graph Laplacian Gaussian prior with application to genomics. Bayesian Analysis, In Press (Earlier version won a student paper award from Biometrics Section of the ASA in JSM 2017)
2. He K, Kang J, Hong G, Zhu J, Li Y, Lin H, Xu H, Li Y (2019) Covariance-insured screening. Computational Statistics and Data Analysis, 132: 100–114.
3. He K, Xu H, Kang J† (2019) A selective overview of feature screening methods with applications to neuroimaging data. WIREs Computational Statistics, 11(2): e1454.
4. Chen S, Xing Y, Kang J, Kochunov P, Hong LE (2018). Bayesian modeling of dependence in brain connectivity, Biostatistics, In Press.
5. Kang J, Reich BJ, Staicu AM (2018) Scalar-on-image regression via the soft thresholded Gaussian process. Biometrika: 105(1) 165–184.
6. Xue W, Bowman D and Kang J (2018) A Bayesian spatial model to predict disease status using imaging data from various modalities. Frontiers in Neuroscience. 12:184. doi:10.3389/fnins.2018.00184
7. Jin Z*, Kang J†, Yu T (2018) Missing value imputation for LC-MS metabolomics data by incorporating metabolic network and adduct ion relations. Bioinformatics, 34(9): 1555–1561.
8. He K, Kang J† (2018) Comments on “Computationally efficient multivariate spatio-temporal models for high-dimensional count-valued data”. Bayesian Analysis, 13(1): 289–291.
9. Hong GH, Kang J†, Li Y (2018) Conditional screening for ultra-high dimensional covariates with survival outcomes. Lifetime Data Analysis: 24(1) 45-71.
10. Zhao Y*, Kang J†, Long Q (2018) Bayesian multiresolution variable selection for ultra-high dimensional neuroimaging data. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 15(2):537-550. (Earlier version won student paper award from ASA section on statistical learning and data mining in JSM 2014; It was also ranked as one of the top two papers in the student paper award competition in ASA section on statistics in imaging in JSM 2014)
11. Kang J, Hong GH, Li Y (2017) Partition-based ultrahigh dimensional variable screening, Biometrika, 104(4): 785-800.
12. Xie J#, Kang J# (2017) High dimensional tests for functional networks of brain anatomic regions. Journal of Multivariate Analysis, 156:70-88.
13. Cai Q*, Alvarez JA, Kang J†, Yu T (2017) Network marker selection for untargeted LC/MS metabolomics data, Journal of Proteome Research, 16(3):1261-1269
14. Kang J, Bowman FD, Mayberg H, Liu H (2016) A depression network of functionally connected regions discovered via multi-attribute canonical correlation graphs. NeuroImage, 41:431-441.
His research lies broadly at the interplay of complex stochastic systems and big data, including large-scale communication/computing systems for big-data processing, private data marketplaces, and large-scale graph mining.
I study the percolation model, a model for the formation of long-range connectivity in systems including polymerization, flow in porous media, cell-phone signals, and the spread of disease. I study it on random graphs and other networks, and on regular lattices in various dimensions, using computer simulation and analysis. We have also worked on developing new algorithms. I am currently applying these methods to the COVID-19 pandemic, which also requires comparison with some of the vast amount of data available from every part of the world.
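The long-range connectivity question can be made concrete with a standard site-percolation simulation on a square lattice using a union-find (disjoint-set) algorithm. This is a generic textbook sketch under invented parameters, not the author's research code.

```python
import random

def percolates(n, p, seed=0):
    """Site percolation on an n x n square lattice: each site is open
    with probability p; returns True if an open cluster connects the
    top row to the bottom row, detected with union-find."""
    rng = random.Random(seed)
    open_site = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    parent = list(range(n * n + 2))        # two virtual nodes: top, bottom
    TOP, BOTTOM = n * n, n * n + 1

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i in range(n):
        for j in range(n):
            if not open_site[i][j]:
                continue
            idx = i * n + j
            if i == 0:
                union(idx, TOP)
            if i == n - 1:
                union(idx, BOTTOM)
            if i > 0 and open_site[i - 1][j]:       # join open neighbor above
                union(idx, (i - 1) * n + j)
            if j > 0 and open_site[i][j - 1]:       # join open neighbor left
                union(idx, i * n + j - 1)

    return find(TOP) == find(BOTTOM)

# The site-percolation threshold on the square lattice is roughly 0.593:
print(percolates(50, 0.4))   # well below threshold
print(percolates(50, 0.8))   # well above threshold
```

Sweeping p and averaging over many seeds reproduces the sharp transition in spanning probability near the threshold, which is the basic phenomenon underlying these connectivity studies.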
Veera Sundararaghavan is a Professor of Aerospace Engineering at the University of Michigan – Ann Arbor and the director of the Multiscale Structural Simulations Laboratory. His research is on multi-length-scale computational techniques for modelling and design of aerospace materials, with a focus on microstructural mechanics (crystal plasticity, homogenization) and molecular simulation. He is particularly interested in new computational techniques that can revolutionize the way we compute in materials science: machine learning and quantum computing algorithms. He has made important contributions in the area of integrated computational materials engineering (ICME), including reduced-order representations of microstructure-process-property relationships, Markov random field approaches for microstructure reconstruction, and parallel, multiscale algorithms for optimizing deformation, fatigue, failure, and oxidation response in polycrystalline alloys, high-temperature ceramic matrix composites, and energetic composites. His methods of choice for data science include deep Boltzmann machines, undirected graphical models (Markov random fields), and support vector machines.