The primary tools currently in use are variations of linear models (regression, MLM, SEM, and so on) as we pursue the initial aims of the NICHD-funded work. We are expanding into new areas that require new tools. Our adolescent sample is diverse, selected through quota sampling of high schools close enough to UM to afford the use of neuroimaging tools, but it is not population representative. To address this, we have begun calibrating our sample against the nationally representative Monitoring the Future study, implementing pseudo-weighting and multilevel regression and post-stratification. To enable much more powerful analyses, we are working toward the harmonization of multiple high-quality longitudinal databases spanning adolescence through early adulthood. This would benefit traditional analyses by allowing well-powered cross-validation, and it would also open opportunities for newer data science tools such as computational modeling and machine learning approaches.
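To make the post-stratification step concrete, the sketch below illustrates the general multilevel regression and post-stratification (MRP) workflow on simulated data. The variable names, cell structure, and population shares are illustrative assumptions, not details of the actual calibration against Monitoring the Future, and the prediction step is simplified to fixed effects only.

```python
# Minimal MRP sketch on simulated data (hypothetical variables and cells).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated convenience sample: outcome y varies by age group and sex.
sample = pd.DataFrame({
    "age_group": rng.choice(["14-15", "16-17"], size=500, p=[0.7, 0.3]),
    "sex": rng.choice(["F", "M"], size=500, p=[0.6, 0.4]),
})
sample["y"] = (
    0.4
    + 0.2 * (sample["age_group"] == "16-17")
    + 0.1 * (sample["sex"] == "M")
    + rng.normal(0, 0.5, size=len(sample))
)

# Step 1: multilevel model with a random intercept for each demographic cell.
sample["cell"] = sample["age_group"] + "_" + sample["sex"]
model = smf.mixedlm("y ~ age_group + sex", sample, groups=sample["cell"]).fit()

# Step 2: predict the outcome for every population cell
# (fixed-effects predictions only, to keep the sketch simple).
cells = pd.DataFrame(
    [(a, s) for a in ["14-15", "16-17"] for s in ["F", "M"]],
    columns=["age_group", "sex"],
)
cells["pred"] = model.predict(cells)

# Step 3: post-stratify, weighting cell predictions by population shares.
# In a real analysis these shares would come from the benchmark study,
# not the convenience sample; the values below are placeholders.
cells["pop_share"] = [0.25, 0.25, 0.25, 0.25]
mrp_estimate = (cells["pred"] * cells["pop_share"]).sum()
print(f"MRP-adjusted estimate: {mrp_estimate:.3f}")
```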
I have been involved in building data infrastructure for the study of elections, political systems, violence, geospatial units, demographics, and topography. This infrastructure will eventually support the integration of data across many domains in the social, health, population, and behavioral sciences. My core research interests are in elections and political organizations.
I am a Research Fellow in the Inter-university Consortium for Political and Social Research (ICPSR) at the University of Michigan. My research is currently supported by an NSF project, Developing Evidence-based Data Sharing and Archiving Policies, in which I analyze curation activities, automatically detect data citations, and contribute to metrics for tracking the impact of data reuse. I hold a Ph.D. in Geography from UC Santa Barbara and have expertise in GIScience, spatial information science, and urban planning. My interests also include the Semantic Web, innovative GIS education, and the science of science. I have experience deploying geospatial applications, designing linked data models, and developing visualizations to support data discovery.
J. Trent Alexander is the Associate Director and a Research Professor at ICPSR in the Institute for Social Research at the University of Michigan. Alexander is a historical demographer and builds social science data infrastructure. He is currently leading the Decennial Census Digitization and Linkage Project (joint with Raj Chetty and Katie Genadek) and ResearchDataGov (joint with Lynette Hoelter). Prior to coming to ICPSR in 2017, Alexander initiated the Census Longitudinal Infrastructure Project at the Census Bureau and managed the Integrated Public Use Microdata Series (IPUMS) at the University of Minnesota.
Cyber-security is a complex and multi-dimensional research field. My research takes an interdisciplinary approach, rooted primarily in economics, econometrics, data science (AI/ML and Bayesian and frequentist statistics), game theory, and network science, to investigate socially pressing issues affecting the quality of cyber-risk management in modern networked and distributed engineering systems such as IoT-driven critical infrastructures, cloud-based service networks, and app-based systems (e.g., mobile commerce, smart homes). I take delight in proposing data-driven, rigorous, and interdisciplinary solutions both to existing fundamental challenges that pose a practical bottleneck to (cost-)effective cyber-risk management and to emerging cyber-security and privacy issues that might affect modern networked engineering systems. I strive for originality, practical significance, and mathematical rigor in my solutions. One of my primary goals is to conceptually grasp complex, multi-dimensional information security and privacy problems in a way that helps, informs, and empowers practitioners and policy makers to take the right steps toward making cyberspace more secure.
My research focuses on using human biometric data (such as motion) to guide the design and manufacturing of assistive and proactive devices. Embedded and external sensors generate ample data that require scientific approaches to analyze and turn into knowledge. I have worked closely with the University of Michigan Orthotics and Prosthetics Center on the design and manufacturing of custom assistive devices using 3D printing and cyber-based design. The goal is to create a cyber-physical system that can translate data from scanning, sensors, human motion, user feedback, and clinician diagnosis into quantitative health metrics and guidelines that improve the quality of care for people with needs.
My lab has two main areas of focus: molecular characteristics of head and neck cancer, and the intersection of regulatory genomics and pathway analysis. In head and neck cancer, we study tumor subtypes and biomarkers of prognosis, treatment response, and recurrence. We apply integrative omics analyses, dimension-reduction methods, and prediction techniques, with the ultimate goal of identifying patient subsets who would benefit from either an additional targeted treatment or de-escalated treatment to improve quality of life. For regulatory genomics and pathway analysis, we develop statistical tests that take into account important covariates and other variables used to weight observations.
My methodological research focuses on developing statistical methods for routinely collected healthcare databases such as electronic health records (EHR) and claims data. I aim to tackle the unique challenges that arise from the secondary use of real-world data for research purposes. Specifically, I develop novel causal inference methods and semiparametric efficiency theory that harness the full potential of EHR data to address comparative effectiveness and safety questions. I also develop scalable and automated pipelines for curation and harmonization of EHR data across healthcare systems and coding systems.
Fred Conrad’s research concerns the development of new methods and data sources for conducting social research. His work is largely focused on survey methodology, but he also explores the use of social media content as a complement to survey data and as a source of large-scale qualitative insights. His focus is on data quality and reducing measurement error. For example, live video interviews promote more thoughtful responses (e.g., less straightlining, the tendency to give the same answer to every item in a battery of survey questions), but they also promote less candor when answering questions on sensitive topics. Measurement errors in social media include misclassification in the automated interpretation of content using methods such as sentiment analysis and topic modeling, as well as selective self-presentation (only posting flattering content). Equally challenging is not knowing the extent to which users differ from the population to which one might wish to generalize results.
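As a rough illustration of how straightlining can be quantified in practice, the sketch below flags respondents who give an identical answer to every item in a battery; the item names and response values are hypothetical, not drawn from any of the studies described above.

```python
# Minimal sketch of a simple straightlining flag (hypothetical battery items).
import pandas as pd

responses = pd.DataFrame({
    "q1": [3, 5, 2, 4],
    "q2": [3, 5, 3, 4],
    "q3": [3, 5, 1, 4],
    "q4": [3, 5, 4, 4],
})

# A respondent straightlines when every item in the battery gets the same value.
responses["straightlined"] = responses.nunique(axis=1).eq(1)
print(responses)
print(f"Straightlining rate: {responses['straightlined'].mean():.0%}")
```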
Greg’s research primarily investigates information flow in financial markets and the actions of agents in those markets, both consumers and producers of that information. His approach draws on theory from the social sciences (economics, psychology, and sociology) combined with large data sets from diverse sources and a variety of data science approaches. Most projects combine data from multiple sources, including commercial databases, experimentally created data, and data extracted from sources designed for other uses (commercial media, web scraping, cellphone data, etc.). In addition to a wide range of econometric and statistical methods, his work has included applying machine learning, textual analysis, social media mining, methods for handling missing data, and the combination of mixed media.