My lab studies how information from one sensory system influences processing in other sensory systems, as well as how this information is integrated in the brain. Specifically, we investigate the mechanisms underlying basic auditory, visual, and tactile interactions, synesthesia, multisensory body image perception, and visual facilitation of speech perception. Our current research examines multisensory processes using a variety of techniques, including psychophysical testing and illusions, fMRI and DTI, electrophysiological measures of neural activity (both EEG and iEEG), and lesion mapping in patients with brain tumors. Our intracranial electroencephalography (iEEG/ECoG/sEEG) recordings are a unique resource that allows us to record neural activity directly from the human brain via clinically implanted electrodes in patients. These recordings are collected while patients perform the same auditory, visual, and tactile tasks that we use in our other behavioral and neuroimaging studies, but iEEG measures have millisecond temporal resolution as well as millimeter spatial precision, providing unparalleled information about the flow of neural activity in the brain. We use signal processing techniques and machine learning methods to identify how information is encoded in the brain and how it is disrupted in clinical contexts (e.g., in patients with a brain tumor).
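To illustrate the general idea of decoding stimulus information from recorded neural activity, here is a minimal sketch of a nearest-centroid classifier applied to simulated trial-by-electrode power features. All variable names, dimensions, and the two-class setup are illustrative assumptions, not the lab's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated high-gamma power features: trials x electrodes, two stimulus
# classes (hypothetical setup; not real patient data)
n_trials, n_elec = 100, 16
labels = rng.integers(0, 2, n_trials)                 # 0 = auditory, 1 = visual (assumed)
centers = np.array([[0.5] * n_elec, [1.5] * n_elec])  # class-dependent mean power
features = centers[labels] + rng.normal(0, 0.5, (n_trials, n_elec))

# Nearest-centroid decoder: assign each held-out trial to the closer class mean
train, test = np.arange(0, 80), np.arange(80, 100)
means = np.stack([features[train][labels[train] == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(features[test][:, None, :] - means[None, :, :], axis=2)
predicted = dists.argmin(axis=1)
accuracy = (predicted == labels[test]).mean()
```

In practice such decoders are applied in sliding time windows, so that the millisecond resolution of iEEG can reveal when, not just whether, stimulus information becomes available at each electrode.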
My lab researches how the human brain processes social and affective information and how these processes are affected in psychiatric disorders, especially schizophrenia and bipolar disorder. We use behavioral, electrophysiological (EEG), neuroimaging (functional MRI), eye tracking, brain stimulation (TMS, tACS), and computational methods in our studies. One main focus of our work is building and validating computational models based on intensive, high-dimensional subject-level behavioral and brain data to explain clinical phenomena, parse mechanisms, and predict patient outcomes. The goal is to improve diagnostic and prognostic assessment and to develop personalized treatments.
His research interests lie at the intersection of signal processing, data science, machine learning, and numerical optimization. He is particularly interested in computational methods for learning low-complexity models from high-dimensional data, leveraging tools from machine learning, numerical optimization, and high-dimensional geometry, with applications in imaging sciences, scientific discovery, and healthcare. More recently, he has also become interested in understanding deep networks through the lens of low-dimensional modeling.
Dr. Hadjiyski's research interests include computer-aided diagnosis, artificial intelligence (AI), machine learning, predictive models, image processing and analysis, medical imaging, and control systems. His current research involves the design of decision support systems for the detection and diagnosis of cancer in different organs, and the quantitative analysis of integrated multimodality radiomics, histopathology, and molecular biomarkers for treatment response monitoring using AI and machine learning techniques. He also studies the effect of decision support systems on physicians’ clinical performance.
Multi-center clinical trials increasingly utilize quantitative diffusion-weighted imaging (DWI) to aid in patient management and treatment response assessment for translational oncology applications. A major source of systematic bias in diffusion measurements was discovered to originate from platform-dependent gradient hardware. Left uncorrected, these biases confound the quantitative diffusion metrics used to characterize tissue pathology and treatment response, leading to inconclusive findings and increasing the requisite subject numbers and trial cost. We have developed technology to mitigate the systematic diffusion mapping bias that exists on MRI scanners and are in the process of deploying this technology for multi-center clinical trials. Another major source of variance, and a bottleneck in high-throughput analysis of quantitative diffusion maps, is segmentation of the tumor/tissue volume of interest (VOI) based on intensities and patterns in multi-contrast MR image datasets, as well as reliable assessment of longitudinal change with disease progression or response to treatment. Our goal is the development, trial, and application of AI algorithms for robust (semi-)automated VOI definition in the analysis of multi-dimensional MR datasets for oncology trials.
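To make the bias problem concrete, here is a minimal sketch of the standard two-point apparent diffusion coefficient (ADC) calculation, with a per-voxel multiplicative correction factor standing in for a gradient-nonlinearity bias map. The correction form is a simplified placeholder for illustration, not the group's deployed method, and all values are synthetic:

```python
import numpy as np

def adc_map(s0, sb, b, bias=None, eps=1e-6):
    # Two-point ADC: ADC = ln(S0 / Sb) / b, from a b = 0 and a b > 0 image
    adc = np.log(np.maximum(s0, eps) / np.maximum(sb, eps)) / b
    if bias is not None:
        # Divide out a per-voxel gradient-dependent scale factor
        # (simplified placeholder for a real gradient-nonlinearity correction)
        adc = adc / bias
    return adc

b = 1000.0                                 # diffusion weighting, s/mm^2
s0 = np.full((2, 2), 1000.0)               # b = 0 image (synthetic)
true_adc = 1.0e-3                          # mm^2/s, a typical soft-tissue value
bias = np.full((2, 2), 1.05)               # 5% hardware-dependent inflation (illustrative)
sb = s0 * np.exp(-b * true_adc * bias)     # biased diffusion-weighted signal

corrected = adc_map(s0, sb, b, bias=bias)  # recovers the true ADC
```

Without the correction, every voxel in this toy example would read 5% high, which is exactly the kind of platform-dependent offset that inflates variance across scanners in a multi-center trial.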
In this project, we use multi-scale models coupled with machine learning algorithms to study cardiac electromechanical coupling. The approach spans from the molecular scale, using Brownian and Langevin dynamics of the contractile machinery (sarcomeric proteins) of cardiac cells, up to finite element analysis at the tissue and organ levels. In this work, we develop a novel surrogate machine learning model of sarcomere contraction. The model is trained and validated using in-silico, data-driven dynamic sampling procedures implemented with our previously derived myofilament mathematical models.
My research focuses on building infrastructure that helps public health and health science research organizations take advantage of cloud computing, strong software engineering practices, and MLOps (machine learning operations). By equipping biomedical research groups with tools that facilitate automation, better documentation, and portable code, we can improve the reproducibility and rigor of science while scaling up the kinds of data collection and analysis that are possible.
Research topics include:
1. Open source software and cloud infrastructure for research,
2. Software development practices and conventions that work for academic units, like labs or research centers, and
3. The organizational factors that encourage best practices in reproducibility, data management, and transparency
The practice of science is a tug of war between competing incentives: the drive to do a lot fast, and the need to generate reproducible work. As data grow in size, code increases in complexity, and the number of collaborators and institutions involved goes up, it becomes harder to preserve all the “artifacts” needed to understand and recreate your own work. Both technical and cultural solutions will be needed to keep data-centric research rigorous, shareable, and transparent to the broader scientific community.
A major focus of the MLiNS lab is to combine stimulated Raman histology (SRH), a rapid, label-free optical imaging method, with deep learning and computer vision techniques to discover the molecular, cellular, and microanatomic features of skull base and malignant brain tumors. We are using SRH in our operating rooms to improve the speed and accuracy of brain tumor diagnosis. Our group has focused on deep learning-based computer vision methods for automated image interpretation, intraoperative diagnosis, and tumor margin delineation. Our work culminated in a multicenter, prospective clinical trial, which demonstrated that AI interpretation of SRH images was equivalent in diagnostic accuracy to pathologist interpretation of conventional histology. We were able to show, for the first time, that a deep neural network is able to learn recognizable and interpretable histologic image features (e.g., tumor cellularity, nuclear morphology, infiltrative growth pattern) in order to make a diagnosis. Our future work is directed at going beyond human-level interpretation towards identifying molecular/genetic features, single-cell classification, and predicting patient prognosis.
In his various roles, he has helped develop several educational programs in Innovation and Entrepreneurial Development (the only ones of their kind in the world) for medical students, residents, and faculty, as well as co-founding four start-up companies (a consulting group, a pharmaceutical company, a device company, and a digital health startup) to improve the care of surgical patients and patients with cancer. He has given over 80 invited talks nationally and internationally, and has written and published over 110 original scientific articles, 12 book chapters, and a textbook, “Success in Academic Surgery: Innovation and Entrepreneurship,” published in 2019 by Springer Nature. His research is focused on drug development and nanoparticle drug delivery for cancer therapeutics, evaluation of circulating tumor cells, tissue engineering for the development of thyroid organoids, and evaluating the role of mixed reality technologies, AI, and ML in surgical simulation, education, and clinical care delivery; he also directs the Center for Surgical Innovation at Michigan. He has been externally funded for 13 consecutive years by donors and by grants from the Susan G. Komen Foundation and the American Cancer Society, and he currently has funding from three National Institutes of Health R01 grants through the National Cancer Institute. He has served on several grant study sections for the National Science Foundation, the National Institutes of Health, the Department of Defense, and the Susan G. Komen Foundation. He also serves on several scientific journal editorial boards and has served on committees and in leadership roles in the Association for Academic Surgery, the Society of University Surgeons, and the American Association of Endocrine Surgeons, where he was the National Program Chair in 2013. For his innovation efforts, he was awarded a Distinguished Faculty Recognition Award by the University of Michigan in 2019.
His clinical interests and national expertise are in the areas of Endocrine Surgery (specifically thyroid surgery for benign and malignant disease, minimally invasive thyroid and parathyroid surgery, and adrenal surgery) and advanced Melanoma Surgery, including developing and running the hyperthermic isolated limb perfusion program for in-transit metastatic melanoma (the only one in the state of Michigan), which is now one of the largest in the nation.
My research focuses on the development of novel Magnetic Resonance Imaging (MRI) technology for imaging the heart. We focus in particular on quantitative imaging techniques, in which the signal intensity at each pixel in an image represents a measurement of an inherent property of a tissue. Much of our research is based on cardiac Magnetic Resonance Fingerprinting (MRF), which is a class of methods for simultaneously measuring multiple tissue properties from one rapid acquisition.
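At its core, MRF reconstruction compares each voxel's measured signal evolution against a precomputed dictionary of simulated evolutions and picks the best match by normalized inner product. The sketch below uses a toy inversion-recovery signal model and a T1-only dictionary; the signal equation, grid, and timing values are illustrative simplifications of a real MRF sequence:

```python
import numpy as np

def build_dictionary(t1_values, times):
    # Toy inversion-recovery magnitude signal: |1 - 2*exp(-t/T1)|
    D = np.abs(1 - 2 * np.exp(-times[None, :] / t1_values[:, None]))
    # Normalize each entry so the inner product acts as a correlation
    return D / np.linalg.norm(D, axis=1, keepdims=True)

def match(signal, dictionary, t1_values):
    # Best match = dictionary entry with the largest normalized inner product
    s = signal / np.linalg.norm(signal)
    return t1_values[np.argmax(dictionary @ s)]

times = np.linspace(0.05, 3.0, 60)     # sampling times in seconds (assumed)
t1_grid = np.linspace(0.1, 2.0, 191)   # candidate T1 values, 10 ms spacing
dictionary = build_dictionary(t1_grid, times)

true_t1 = 1.2
signal = 0.7 * np.abs(1 - 2 * np.exp(-times / true_t1))  # arbitrary scaling
estimate = match(signal, dictionary, t1_grid)
```

Because the matching is scale-invariant, proton density falls out as the fitted scaling factor, and extending the dictionary to a grid over several tissue properties (T1, T2, and so on) is what lets MRF measure multiple parameters from a single acquisition.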
Our group is exploring novel ways to combine physics-based modeling of MRI scans with deep learning algorithms for several purposes. First, we are exploring the use of deep learning to design quantitative MRI scans with improved accuracy and precision. Second, we are developing deep learning approaches for image reconstruction that will allow us to reduce image noise, improve spatial resolution and volumetric coverage, and enable highly accelerated acquisitions to shorten scan times. Third, we are exploring ways of using artificial intelligence to derive physiological motion signals directly from MRI data to enable continuous scanning that is robust to cardiac and breathing motion. In general, we focus on algorithms that are either self-supervised or use training data generated in computer simulations, since the collection of large amounts of training data from human subjects is often impractical when designing novel imaging methods.
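As a toy illustration of the third direction, a common self-navigation idea is that the magnitude of the k-space center, sampled once per repetition, is modulated by respiratory motion and can serve as a free motion surrogate. The sketch below simulates such a time series and recovers the breathing frequency from its spectrum; the TR, breathing rate, and noise level are all assumed values, not the group's actual pipeline:

```python
import numpy as np

fs = 2.0                                  # k-space center samples per second (assumed TR = 0.5 s)
t = np.arange(0, 60, 1 / fs)              # one minute of continuous scanning
breathing = 0.2 * np.sin(2 * np.pi * 0.25 * t)   # 0.25 Hz respiratory modulation (assumed)
noise = 0.01 * np.random.default_rng(1).normal(size=t.size)
center_mag = 1.0 + breathing + noise      # simulated k-space center magnitude

# Estimate the dominant physiological frequency from the detrended signal
spectrum = np.abs(np.fft.rfft(center_mag - center_mag.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
resp_freq = freqs[spectrum.argmax()]
```

A motion signal derived this way can then gate or bin the data retrospectively, which is what makes continuous, free-breathing acquisitions feasible without external respiratory sensors.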