My lab researches how the human brain processes social and affective information and how these processes are affected in psychiatric disorders, especially schizophrenia and bipolar disorder. We use behavioral, electrophysiological (EEG), neuroimaging (functional MRI), eye-tracking, brain stimulation (TMS, tACS), and computational methods in our studies. One main focus of our work is building and validating computational models based on intensive, high-dimensional subject-level behavior and brain data to explain clinical phenomena, parse mechanisms, and predict patient outcomes. The goal is to improve diagnostic and prognostic assessment, and to develop personalized treatments.
His research interests lie at the intersection of signal processing, data science, machine learning, and numerical optimization. He is particularly interested in computational methods for learning low-complexity models from high-dimensional data, leveraging tools from machine learning, numerical optimization, and high-dimensional geometry, with applications in imaging sciences, scientific discovery, and healthcare. Recently, he has also become interested in understanding deep networks through the lens of low-dimensional modeling.
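As a minimal illustration of recovering a low-complexity (here, sparse) model from high-dimensional linear measurements, the sketch below implements classic iterative soft-thresholding (ISTA) for the LASSO. The problem setup and parameter values are illustrative and not drawn from this group's work.

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, iters=200):
    """Iterative soft-thresholding (ISTA) for the LASSO
        min_x 0.5 * ||A x - y||^2 + lam * ||x||_1,
    a standard method for recovering a sparse vector x from
    high-dimensional linear measurements y = A x + noise."""
    if step is None:
        # A safe step size is 1 / L, where L = ||A||_2^2 is the
        # Lipschitz constant of the smooth part's gradient.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # Gradient step on the least-squares term...
        g = x - step * A.T @ (A @ x - y)
        # ...followed by soft-thresholding (the prox of lam*||.||_1).
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return x
```

With `A` the identity, ISTA reduces to a single soft-thresholding of `y`, which makes the sparsity-promoting behavior easy to see.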
Dr. Hadjiyski's research interests include computer-aided diagnosis, artificial intelligence (AI), machine learning, predictive models, image processing and analysis, medical imaging, and control systems. His current research involves the design of decision support systems for the detection and diagnosis of cancer in different organs, and quantitative analysis of integrated multimodality radiomics, histopathology, and molecular biomarkers for treatment response monitoring using AI and machine learning techniques. He also studies the effect of decision support systems on physicians’ clinical performance.
Multi-center clinical trials increasingly use quantitative diffusion-weighted imaging (DWI) to aid patient management and treatment response assessment in translational oncology. A major source of systematic bias in diffusion measurements originates from platform-dependent gradient hardware. Left uncorrected, these biases confound the quantitative diffusion metrics used to characterize tissue pathology and treatment response, leading to inconclusive findings and increasing the requisite subject numbers and trial cost. We have developed technology to mitigate the systematic diffusion mapping bias that exists on MRI scanners and are in the process of deploying it for multi-center clinical trials. Another major source of variance, and a bottleneck in high-throughput analysis of quantitative diffusion maps, is segmentation of the tumor/tissue volume of interest (VOI) based on intensities and patterns in multi-contrast MR image datasets, as well as reliable assessment of longitudinal change with disease progression or response to treatment. Our goal is the development, trial, and application of AI algorithms for robust (semi-)automated VOI definition in the analysis of multi-dimensional MR datasets for oncology trials.
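For context on why gradient bias matters, quantitative DWI typically estimates an apparent diffusion coefficient (ADC) per voxel from the mono-exponential signal model S(b) = S0 · exp(−b · ADC); a hardware-dependent error in the effective b-value propagates directly into the ADC. The sketch below is illustrative only, and the scalar rescaling in `correct_adc` is a hypothetical stand-in, not the group's actual correction technology.

```python
import numpy as np

def adc_map(s_low, s_high, b_low=0.0, b_high=800.0):
    """Per-voxel ADC (mm^2/s) from two diffusion weightings using
    the mono-exponential model S(b) = S0 * exp(-b * ADC):
        ADC = ln(S_low / S_high) / (b_high - b_low)."""
    s_low = np.asarray(s_low, dtype=float)
    s_high = np.asarray(s_high, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        adc = np.log(s_low / s_high) / (b_high - b_low)
    # Zero out voxels where the ratio was undefined (e.g. background).
    return np.nan_to_num(adc, nan=0.0, posinf=0.0, neginf=0.0)

def correct_adc(adc_biased, b_scale):
    """Hypothetical illustration: if the effective b-value deviates
    from nominal by a known factor b_scale, the biased ADC can be
    rescaled accordingly."""
    return adc_biased * b_scale
```

For example, a voxel whose signal drops by a factor of exp(−0.8) between b = 0 and b = 800 s/mm² yields ADC = 0.001 mm²/s.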
Broadly, I study legal decision making, including decisions related to crime and employment. I typically use large social science databases, but also collect my own data using technology or surveys.
My research focuses on building infrastructure for public health and health science research organizations to take advantage of cloud computing, strong software engineering practices, and MLOps (machine learning operations). By equipping biomedical research groups with tools that facilitate automation, better documentation, and portable code, we can improve the reproducibility and rigor of science while scaling up the kinds of data collection and analysis that are possible.
Research topics include:
1. Open source software and cloud infrastructure for research,
2. Software development practices and conventions that work for academic units, like labs or research centers, and
3. The organizational factors that encourage best practices in reproducibility, data management, and transparency.
The practice of science is a tug of war between competing incentives: the drive to do a lot fast, and the need to generate reproducible work. As data grows in size, code increases in complexity, and the number of collaborators and institutions involved goes up, it becomes harder to preserve all the “artifacts” needed to understand and recreate your own work. Technical AND cultural solutions will be needed to keep data-centric research rigorous, shareable, and transparent to the broader scientific community.
I build data science tools to address challenges in medicine and clinical care. Specifically, I apply signal processing, image processing, and machine learning techniques, including deep convolutional and recurrent neural networks and natural language processing, to aid the diagnosis, prognosis, and treatment of patients with acute and chronic conditions. In addition, I conduct research on novel approaches to representing clinical data, combining supervised and unsupervised methods to improve model performance and reduce the labeling burden. Another active area of my research is the design, implementation, and use of novel wearable devices for non-invasive patient monitoring in the hospital and at home. This includes integrating the information measured by wearables with the data available in electronic health records, including medical codes, waveforms, and images, among others. A further area of my research applies linear, non-linear, and discrete optimization and queuing theory to build new solutions for healthcare logistics planning. This includes stochastic approximation methods for modeling complex systems such as dispatch policies for emergency systems with multi-server dispatches, variable server load, and multiple priority levels.
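As a minimal illustration of the queuing-theory side of this work, the classic M/M/c Erlang C formula gives the probability that an arriving request must wait for a server, and from it the expected waiting time. Real emergency-dispatch models with multi-server dispatches and priority levels are far richer than this textbook sketch.

```python
import math

def erlang_c(servers, offered_load):
    """Probability that an arrival must wait in an M/M/c queue
    (Erlang C). offered_load = lambda / mu in Erlangs; the system
    is stable only when offered_load < servers."""
    a, c = offered_load, servers
    # Partial sum over states with fewer than c busy servers.
    s = sum(a**k / math.factorial(k) for k in range(c))
    # Term for the saturated states, summed as a geometric series.
    top = (a**c / math.factorial(c)) * (c / (c - a))
    return top / (s + top)

def mean_wait(servers, arrival_rate, service_rate):
    """Expected time in queue W_q for an M/M/c system."""
    pw = erlang_c(servers, arrival_rate / service_rate)
    return pw / (servers * service_rate - arrival_rate)
```

With a single server this collapses to the familiar M/M/1 results: the wait probability equals the utilization, and W_q = rho / (mu − lambda).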
Prof. Stange’s research uses population administrative education and labor market data to understand, evaluate, and improve education, employment, and economic policy. Much of the work involves analyzing millions of course-taking and transcript records for college students, whether at a single institution, a handful of institutions, or all institutions in several states. This data is used to richly characterize the experiences of college students and relate these experiences to outcomes such as educational attainment, employment, earnings, and career trajectories. Several projects also involve working with the text contained in the universe of all job ads posted online in the US over the past decade. This data is used to characterize the demand for different skills and education credentials in the US labor market. Classification arises frequently in this work: How to classify courses into groups based on their title and content? How to identify students with similar educational experiences based on their course-taking patterns? How to classify job ads as being more appropriate for one type of college major or another? This data science work is often paired with traditional causal inference tools of economics, including quasi-experimental methods.
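As a toy illustration of the course-classification task, the sketch below assigns a course title to a group by nearest-neighbor bag-of-words cosine similarity against labeled example titles. The example titles, labels, and helper names are invented for illustration; production work on millions of transcript records would use richer text features.

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase whitespace tokenization, keeping alphabetic tokens."""
    return [t for t in text.lower().split() if t.isalpha()]

def cosine(a, b):
    """Cosine similarity between two token-count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def classify_title(title, labeled_examples):
    """Return the label of the most similar example title."""
    q = Counter(tokenize(title))
    best = max(labeled_examples,
               key=lambda ex: cosine(q, Counter(tokenize(ex[0]))))
    return best[1]

examples = [
    ("Introduction to Microeconomics", "economics"),
    ("Principles of Macroeconomics", "economics"),
    ("Organic Chemistry Laboratory", "chemistry"),
]
```

For instance, `classify_title("Intermediate Microeconomics", examples)` lands in the economics group because its title shares a token with the first example.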
My research focuses on the development of novel Magnetic Resonance Imaging (MRI) technology for imaging the heart. We focus in particular on quantitative imaging techniques, in which the signal intensity at each pixel in an image represents a measurement of an inherent property of a tissue. Much of our research is based on cardiac Magnetic Resonance Fingerprinting (MRF), which is a class of methods for simultaneously measuring multiple tissue properties from one rapid acquisition.
Our group is exploring novel ways to combine physics-based modeling of MRI scans with deep learning algorithms for several purposes. First, we are exploring the use of deep learning to design quantitative MRI scans with improved accuracy and precision. Second, we are developing deep learning approaches for image reconstruction that will allow us to reduce image noise, improve spatial resolution and volumetric coverage, and enable highly accelerated acquisitions to shorten scan times. Third, we are exploring ways of using artificial intelligence to derive physiological motion signals directly from MRI data to enable continuous scanning that is robust to cardiac and breathing motion. In general, we focus on algorithms that are either self-supervised or use training data generated in computer simulations, since the collection of large amounts of training data from human subjects is often impractical when designing novel imaging methods.
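For context on the MRF methods above, a common reconstruction step is dictionary matching: each measured signal evolution is compared against a dictionary of fingerprints simulated for candidate tissue-property combinations, and the voxel is assigned the parameters of the best match (typically by largest normalized inner product). The sketch below is a generic illustration with made-up array shapes and values, not this group's pipeline.

```python
import numpy as np

def match_fingerprint(signal, dictionary, params):
    """Match one measured signal evolution to a fingerprint dictionary.

    signal:     (n_timepoints,) measured evolution for one voxel
    dictionary: (n_entries, n_timepoints) simulated fingerprints
    params:     (n_entries, k) tissue properties (e.g. T1, T2) per entry

    Returns the parameter row of the dictionary entry with the
    largest normalized inner product with the signal."""
    sig = signal / np.linalg.norm(signal)
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    scores = np.abs(d @ np.conj(sig))  # magnitude handles complex data
    return params[np.argmax(scores)]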
Prof. Huang specializes in satellite remote sensing, atmospheric radiation, and climate modeling. Optimization, pattern analysis, and dimensionality reduction are used extensively in his research for explaining observed spectrally resolved infrared spectra, estimating geophysical parameters from such hyperspectral observations, and deducing human influence on the climate in the presence of natural variability of the climate system. His group has also developed a deep-learning, data-driven solar forecasting model for use in the renewable energy sector.