In this project, we use multi-scale models coupled with machine learning algorithms to study cardiac electromechanical coupling. The approach spans the molecular, Brownian, and Langevin dynamics of the contractile mechanism of cardiac cells (the sarcomeric proteins) up to finite element analysis at the tissue and organ levels. In this work, we develop a novel surrogate machine learning model of sarcomere contraction. The model is trained using in-silico, data-driven dynamic sampling procedures implemented with our previously derived myofilament mathematical models.
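As a generic illustration of the surrogate idea (not the lab's actual myofilament model), the sketch below samples a hypothetical Hill-type force–calcium relation "in silico" and fits a cheap polynomial surrogate to the sampled data; the functional form, parameters, and polynomial degree are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "in-silico" data generator standing in for a myofilament model:
# steady-state force from a Hill-type calcium-sensitivity curve
# (a hypothetical stand-in, not the actual sarcomere model).
def myofilament_force(ca, n_hill=4.0, ca50=1.0):
    return ca**n_hill / (ca50**n_hill + ca**n_hill)

# Sample the expensive model over a range of calcium concentrations
ca = rng.uniform(0.2, 3.0, size=500)
force = myofilament_force(ca) + rng.normal(0.0, 0.01, size=ca.shape)

# Fit a cheap degree-5 polynomial surrogate by least squares
X = np.vander(ca, N=6)                       # polynomial features ca^5 .. 1
coef, *_ = np.linalg.lstsq(X, force, rcond=None)

# Evaluate the surrogate at new inputs and check its worst-case error
ca_test = np.linspace(0.3, 2.8, 50)
pred = np.vander(ca_test, N=6) @ coef
err = float(np.max(np.abs(pred - myofilament_force(ca_test))))
```

Once fitted, the surrogate replaces the expensive simulator inside downstream tissue- and organ-level computations, where it can be evaluated millions of times cheaply.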
My research focuses on building infrastructure for public health and health science research organizations to take advantage of cloud computing, strong software engineering practices, and MLOps (machine learning operations). By equipping biomedical research groups with tools that facilitate automation, better documentation, and portable code, we can improve the reproducibility and rigor of science while scaling up the kinds of data collection and analysis that are possible.
Research topics include:
1. Open source software and cloud infrastructure for research,
2. Software development practices and conventions that work for academic units, like labs or research centers, and
3. The organizational factors that encourage best practices in reproducibility, data management, and transparency.
The practice of science is a tug of war between competing incentives: the drive to produce a lot quickly, and the need to generate reproducible work. As data grow in size, code increases in complexity, and the number of collaborators and institutions involved goes up, it becomes harder to preserve all the “artifacts” needed to understand and recreate your own work. Both technical and cultural solutions will be needed to keep data-centric research rigorous, shareable, and transparent to the broader scientific community.
The Ahmed lab studies behavioral neural circuits and attempts to repair them when they go awry in neurological disorders. Working with patients and with transgenic rodent models, we focus on how space, time, and speed are encoded by the spatial navigation and memory circuits of the brain. We also focus on how these same circuits go wrong in Alzheimer’s disease, Parkinson’s disease, and epilepsy. Our research involves the collection of massive volumes of neural data. Within these terabytes of data, we work to identify and understand irregular activity patterns at the sub-millisecond level. This requires us to leverage high-performance computing environments and to design custom algorithmic and analytical signal processing solutions. As part of our research, we also discover new ways in which the brain encodes information (for example, how neurons encode sequences of space and time), and the algorithms utilized by these natural neural networks can have important implications for the design of more effective artificial neural networks.
My research focuses on the development of novel Magnetic Resonance Imaging (MRI) technology for imaging the heart. We focus in particular on quantitative imaging techniques, in which the signal intensity at each pixel in an image represents a measurement of an inherent property of a tissue. Much of our research is based on cardiac Magnetic Resonance Fingerprinting (MRF), which is a class of methods for simultaneously measuring multiple tissue properties from one rapid acquisition.
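At its core, MRF matches a measured signal evolution against a precomputed dictionary of simulated evolutions and reads off the tissue properties of the best-matching entry. The sketch below illustrates this dictionary-matching step with a deliberately simplified two-segment signal model; the model, timings, and property grids are toy assumptions standing in for a real Bloch-equation simulation:

```python
import numpy as np

# Toy signal model: a T1-sensitive recovery segment followed by a
# T2-sensitive decay segment (a crude stand-in for a Bloch simulation).
def fingerprint(t1, t2, t_rec, t_echo):
    rec = 1.0 - np.exp(-t_rec / t1)      # T1 recovery
    dec = np.exp(-t_echo / t2)           # T2 decay
    return np.concatenate([rec, dec])

t_rec = np.linspace(0.05, 3.0, 100)      # seconds
t_echo = np.linspace(0.01, 0.3, 100)     # seconds

# Precompute a dictionary over a grid of candidate (T1, T2) pairs
entries = [(t1, t2)
           for t1 in np.linspace(0.3, 2.0, 40)
           for t2 in np.linspace(0.02, 0.3, 40)]
D = np.array([fingerprint(t1, t2, t_rec, t_echo) for t1, t2 in entries])
D /= np.linalg.norm(D, axis=1, keepdims=True)    # unit-norm rows

# "Measured" voxel signal (true T1 = 1.0 s, T2 = 0.1 s) plus noise
rng = np.random.default_rng(1)
sig = fingerprint(1.0, 0.1, t_rec, t_echo)
sig = sig + rng.normal(0.0, 0.01, sig.shape)

# Matching: pick the entry with the largest normalized inner product
best = int(np.argmax(D @ (sig / np.linalg.norm(sig))))
t1_est, t2_est = entries[best]
```

Repeating the match independently at every pixel turns one acquisition into simultaneous quantitative maps of multiple tissue properties, which is the appeal of the fingerprinting approach.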
Our group is exploring novel ways to combine physics-based modeling of MRI scans with deep learning algorithms for several purposes. First, we are exploring the use of deep learning to design quantitative MRI scans with improved accuracy and precision. Second, we are developing deep learning approaches for image reconstruction that will allow us to reduce image noise, improve spatial resolution and volumetric coverage, and enable highly accelerated acquisitions to shorten scan times. Third, we are exploring ways of using artificial intelligence to derive physiological motion signals directly from MRI data to enable continuous scanning that is robust to cardiac and breathing motion. In general, we focus on algorithms that are either self-supervised or use training data generated in computer simulations, since the collection of large amounts of training data from human subjects is often impractical when designing novel imaging methods.
Prof. Huang specializes in satellite remote sensing, atmospheric radiation, and climate modeling. Optimization, pattern analysis, and dimensionality reduction are used extensively in his research to explain observed spectrally resolved infrared spectra, estimate geophysical parameters from such hyperspectral observations, and deduce human influence on the climate in the presence of the natural variability of the climate system. His group has also developed a deep-learning, data-driven solar forecast model for use in the renewable energy sector.
Andrew uses mathematical and statistical modeling to address public health problems. As a mathematical epidemiologist, he works on a wide range of topics (mostly related to infectious diseases and to cancer prevention and survival) using an array of computational and statistical tools, including mechanistic differential equations and multistate stochastic processes. Rigorous consideration of parameter identifiability, parameter estimation, and uncertainty quantification is an underlying theme in Andrew’s work.
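As a minimal, generic example of the mechanistic-ODE approach (the classic SIR model, not a model from Andrew's own work), the sketch below integrates the epidemic equations with a simple Euler step and tracks the epidemic peak:

```python
import numpy as np

# SIR compartments as population fractions; beta is the transmission
# rate and gamma the recovery rate (both per day).
def sir_step(s, i, r, beta, gamma, dt):
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    return s + dt * ds, i + dt * di, r + dt * dr

beta, gamma = 0.3, 0.1        # illustrative parameter values
s, i, r = 0.99, 0.01, 0.0     # initial susceptible/infected/recovered
dt, days = 0.1, 160

peak_i = i
for _ in range(int(days / dt)):
    s, i, r = sir_step(s, i, r, beta, gamma, dt)
    peak_i = max(peak_i, i)

r0 = beta / gamma             # basic reproduction number, here 3.0
```

Even this toy model exhibits the issues named above: beta and gamma must be estimated from noisy incidence data, and different parameter pairs can produce nearly identical epidemic curves, which is exactly the identifiability question.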
• Computational dynamics focused on nonlinear dynamics and finite elements (e.g., a new approach for forecasting bifurcations/tipping points in aeroelastic and ecological systems, and new finite element methods for thin-walled beams that lead to novel reduced-order models).
• Modeling nonlinear phenomena and mechano-chemical processes in the dynamics of molecular motors, such as motor proteins, toward early detection of neurodegenerative diseases.
• Computational methods for robotics, manufacturing, and multi-body dynamics modeling, including methods for identifying limit cycle oscillations in large-dimensional (fluid) systems.
• Turbomachinery and aeroelasticity, providing a better understanding of fundamental complex fluid dynamics and cutting-edge models for predicting, identifying, and characterizing the response of blisks and FLADE systems through integrated experimental & computational approaches.
• Structural health monitoring & sensing, providing increased sensitivity and capabilities through the discovery, characterization, and exploitation of sensitivity vector fields, smart system interrogation through nonlinear feedback excitation, nonlinear minimal rank perturbation and system augmentation, pattern recognition for attractors, and damage detection using bifurcation morphing.
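The bifurcation-forecasting theme in the first bullet can be illustrated with a generic "critical slowing down" indicator (not the group's specific method): near a fold bifurcation, a system recovers from perturbations more slowly, so the lag-1 autocorrelation of its noisy fluctuations rises. The sketch below shows this for the fold normal form dx/dt = μ − x², which loses its stable equilibrium at μ = 0:

```python
import numpy as np

rng = np.random.default_rng(2)

def lag1_autocorr(x):
    # Lag-1 autocorrelation of a (detrended) fluctuation series
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def fluctuations(mu, n=5000, dt=0.05, sigma=0.02):
    # Euler–Maruyama simulation of dx = (mu - x^2) dt + sigma dW,
    # started at the stable equilibrium x* = sqrt(mu)
    x = np.sqrt(mu)
    xs = np.empty(n)
    for k in range(n):
        x += (mu - x**2) * dt + sigma * np.sqrt(dt) * rng.normal()
        xs[k] = x
    return xs

ac_far = lag1_autocorr(fluctuations(mu=1.0))    # far from the fold
ac_near = lag1_autocorr(fluctuations(mu=0.04))  # close to the fold at mu = 0
```

Monitoring such indicators as a parameter drifts is one way to forecast a tipping point before it occurs, without knowing the governing equations in detail.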
Albert S. Berahas is an Assistant Professor in the Department of Industrial & Operations Engineering. His research broadly focuses on designing, developing, and analyzing algorithms for solving large-scale nonlinear optimization problems. Such problems are ubiquitous, arising in a plethora of areas such as engineering design, economics, transportation, robotics, machine learning, and statistics. Specifically, he is interested in and has explored several sub-fields of nonlinear optimization, including: (i) general nonlinear optimization algorithms, (ii) optimization algorithms for machine learning, (iii) constrained optimization, (iv) stochastic optimization, (v) derivative-free optimization, and (vi) distributed optimization.
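As a toy example touching sub-field (v), derivative-free optimization: when gradients are unavailable, they can be estimated from function values alone. The sketch below uses forward-difference gradient estimates inside plain gradient descent on a smooth test function; it is a generic illustration, not an algorithm from this research:

```python
import numpy as np

# Forward-difference gradient estimate: uses only function evaluations,
# the defining constraint of derivative-free methods.
def fd_grad(f, x, h=1e-6):
    fx, g = f(x), np.empty_like(x)
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        g[j] = (f(x + e) - fx) / h
    return g

def f(x):  # smooth test objective with minimizer at (1, -2)
    return (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 2.0) ** 2

x = np.array([5.0, 5.0])
for _ in range(300):
    x = x - 0.1 * fd_grad(f, x)   # fixed-step gradient descent
```

Each iteration here costs n + 1 function evaluations in n dimensions; much of the research in this sub-field concerns doing better than this naive scheme, especially when evaluations are expensive or noisy.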
Alex Gorodetsky’s research is at the intersection of applied mathematics, data science, and computational science, and is focused on enabling autonomous decision making under uncertainty. He is especially interested in controlling, designing, and analyzing autonomous systems that must act in complex environments where observational data and expensive computational simulations must work together to ensure objectives are achieved. Toward this goal, he pursues research in wide-ranging areas including uncertainty quantification, statistical inference, machine learning, control, and numerical analysis. His methodology is to increase the scalability of probabilistic modeling and analysis techniques such as Bayesian inference and uncertainty quantification. His current strategies for achieving scalability revolve around leveraging computational optimal transport, developing tensor network learning algorithms, and creating new multi-fidelity information fusion approaches.
Sample workflow for enabling autonomous decision making under uncertainty for a drone operating in a complex environment. We develop algorithms to compress simulation data by exploiting problem structure. We then embed the compressed representations onto onboard computational resources. Finally, we develop approaches to enable the drone to adapt, learn, and refine knowledge by interacting with, and collecting data from, the environment.
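The multi-fidelity fusion idea mentioned above can be illustrated with a control-variate estimator: a cheap low-fidelity model absorbs much of the variance of a Monte Carlo estimate built from an expensive high-fidelity model. The models, input distribution, and sample size below are toy assumptions, not the group's methods:

```python
import numpy as np

rng = np.random.default_rng(3)

def f_hi(x):            # "expensive" high-fidelity model (toy)
    return np.sin(x) + 0.1 * x**2

def f_lo(x):            # "cheap" low-fidelity approximation (toy)
    return x

x = rng.normal(0.0, 0.5, size=2000)
hi, lo = f_hi(x), f_lo(x)

# Control-variate coefficient; E[f_lo(X)] = 0 is known in closed form here.
alpha = np.cov(hi, lo)[0, 1] / np.var(lo, ddof=1)
est_mf = hi.mean() - alpha * (lo.mean() - 0.0)

# Per-sample variance: fused estimator vs. plain Monte Carlo
var_plain = float(np.var(hi))
var_mf = float(np.var(hi - alpha * lo))
```

Because the low-fidelity model is strongly correlated with the high-fidelity one, the fused estimator attains the same accuracy with far fewer expensive evaluations, which is the point of multi-fidelity information fusion.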
My main interest is theoretical statistics as applied to complex models, from semiparametric to ultra-high-dimensional regression analysis. I am particularly interested in the negative aspects of Bayesian and causal analysis as implemented in modern statistics.
An analysis of the positions of SCOTUS justices.