As a board-certified ophthalmologist and glaucoma specialist, I have more than 15 years of clinical experience caring for patients with glaucoma of all types and complexities. As a health services researcher, I have also developed expertise in several disciplines, including analyses of large health care claims databases to study utilization and outcomes among patients with ocular diseases, racial and other disparities in eye care, and associations between systemic conditions or medication use and ocular diseases. I have learned the nuances of various data sources and how to maximize their use to answer important and timely questions. Leveraging my background in health services research (HSR) along with new skills in bioinformatics and precision medicine, over the past 2-3 years I have been developing and growing the Sight Outcomes Research Collaborative (SOURCE) repository, a powerful tool that researchers can tap to study patients with ocular diseases. My team and I have spent countless hours devising ways to extract electronic health record data from Clarity, clean and de-identify the data, and link it to ocular diagnostic test data (OCT, HVF, biometry) and non-clinical data. Now that we have successfully developed this resource at Kellogg, I am collaborating with colleagues at more than two dozen academic ophthalmology departments across the country to help them extract their data in the same format and send it to Kellogg, so that we can pool the data and make it accessible to researchers at all of the participating centers for research and quality improvement studies.
I am also actively exploring ways to integrate SOURCE data into deep learning and artificial intelligence algorithms; to use SOURCE data for genotype-phenotype association studies and the development of polygenic risk scores for common ocular diseases; to capture patient-reported outcome data for the majority of eye care recipients; to enhance visualization of the data on easy-to-access dashboards that aid quality improvement initiatives; and to use the data to enhance the quality, safety, and efficiency of care delivery and to improve clinical operations.
For human-machine systems, I first collect data from human users, whether an individual, a team, or even a society. A variety of methods can be used, including self-report surveys, interviews, focus groups, physiological and behavioral measurements, and user-generated data from the Internet.
Based on the data collected, I attempt to understand human contexts, including different aspects of the users such as emotion, cognition, needs, preferences, location, and activity. Such understanding can then be applied to a range of human-machine systems, including healthcare systems, automated driving systems, and product-service systems.
Drawing on design theory and methodology, from the machine dimension I apply knowledge of computing and communication, as well as practical and theoretical knowledge of social and behavioral science, to design systems for human users. From the human dimension, I seek to understand human needs and decision-making processes, and then build mathematical models and design tools that facilitate the integration of subjective experiences, social contexts, and engineering principles into the design of human-machine systems.
Dr. Douville is a critical care anesthesiologist with an investigative background in bioinformatics and perioperative outcomes research. He studies techniques for using health care data, including genotype, to deliver personalized medicine in the perioperative period and the intensive care unit. His research background has focused on ways technology can assist health care delivery to improve patient outcomes. This work began with designing microfluidic chips capable of recreating the fluid mechanics of atelectatic alveoli and monitoring the resulting barrier breakdown in real time. His interest in bioinformatics was sparked when he observed how methodology designed for tissue engineering could be adapted to the nanoscale to enable genomic analysis. Additionally, his engineering training provided the framework to apply data-driven modeling techniques, such as finite element analysis, to complex biological systems.
My research focuses on the causes, dynamics, and outcomes of conflict at the international and local levels. My methodological areas of interest include spatial statistics, mathematical/computational modeling, and text analysis.
Map/time-series/network plot showing the flow of information across battles in World War II. The Z axis is time, the X and Y axes are longitude and latitude, polygons mark the locations of battles, and red lines are network edges linking battles involving the same combatants. Source: https://doi.org/10.1017/S0020818318000358
Greg’s research primarily investigates information flow in financial markets and the actions of agents in those markets, both consumers and producers of that information. His approach draws on theory from the social sciences (economics, psychology, and sociology) combined with large data sets from diverse sources and a variety of data science approaches. Most projects combine data from multiple sources, including commercial databases, experimentally created data, and data extracted from sources designed for other uses (commercial media, web scraping, cellphone data, etc.). In addition to a wide range of econometric and statistical methods, his work has included applying machine learning, textual analysis, mining social media, handling missing data, and combining mixed media.
Timothy C. Guetterman is a methodologist focused on research design and mixed methods research. His research interests include advancing rigorous methods of quantitative, qualitative, and mixed methods research, particularly strategies for intersecting and integrating qualitative and quantitative research. Tim is the PI of NIH-funded research that uses quantitative, qualitative, and mixed methods research to investigate the use of virtual human technology in health, education, and assessment. He has been applying natural language processing techniques to the analysis of mixed methods datasets. He also conducts research on teaching, learning, and developing research methods capacity as Co-PI of a William T. Grant Foundation qualitative and mixed methods research capacity building grant and in his role as evaluator and Co-I for the NIH-funded Mixed Methods Research Training Program for the Health Sciences. Tim has extensive professional experience conducting program evaluation with a focus on educational and healthcare programs.
My research spans a wide range of topics from computational social science to bioinformatics, where I do pattern recognition, perform data analysis, and build prediction models. At the core of my work are machine learning methods, which I have applied to problems in social networks, opinion mining, biomarker discovery, pharmacovigilance, drug repositioning, security analytics, genomics, food contamination, and concussion recovery. I’m particularly interested in, and eager to collaborate on, the cybersecurity aspects of social media analytics, including but not limited to misinformation, bots, and fake news. In addition, I continue to pursue opportunities in bioinformatics, especially next-generation sequencing analysis, which can also be leveraged for phenotype prediction using machine learning methods.
A typical pipeline for developing and evaluating a prediction model to identify malicious Android mobile apps in the market
Dr. Hemphill studies conversations in social media and aims to promote just access to social media spaces and their data. She uses computational approaches to modeling political topics, predicting and addressing toxicity in online discussions, and tracing linguistic adaptations among extremists. She also studies digital data curation and is especially interested in ways to measure and model data reuse so that we can make informed decisions about how to allocate data resources.