My research focuses on issues in data collection with hard-to-reach populations. In particular, I examine 1) nontraditional sampling approaches for minority or stigmatized populations and their statistical properties, and 2) measurement error and comparability issues for racial, ethnic, and linguistic minorities, which also have implications for cross-cultural research and survey methodology. Most recently, my research has been dedicated to respondent-driven sampling, which uses existing social networks to recruit participants in both face-to-face and Web data collection settings. I plan to expand my research scope by examining representation issues for racial/ethnic minority groups in the U.S. in the era of big data.
My core research focuses on the politics and measurement of human rights, discrimination, violence, and repression. I use computational methods to understand why governments around the world torture, maim, and kill individuals within their jurisdiction and the processes monitors use to observe and document these abuses. Other projects cover a broad array of themes but share a focus on computationally intensive methods and research design. These methodological tools, essential for analyzing data at massive scale, open up new insights into the micro-foundations of state repression and the politics of measurement.
People rely more on strong ties for job help in countries with greater inequality. Coefficients from 55 regressions of job transmission on tie strength are compared to measures of inequality (Gini coefficient), mean income per capita, and population, all measured in 2013. Gray lines indicate 95% confidence regions from 1,000 simulated regressions that incorporate uncertainty in the country-level regressions (see below for more details). In each simulated regression, we draw each country point from the distribution of regression coefficients implied by the estimate and standard error for that country and measure of tie strength. P-values indicate the simulated probability of observing the estimated relationship if there were in fact no relationship between tie strength and the other variable. Laura K. Gee, Jason J. Jones, Christopher J. Fariss, Moira Burke, and James H. Fowler. “The Paradox of Weak Ties in 55 Countries.” Journal of Economic Behavior & Organization 133:362-372 (January 2017). DOI:10.1016/j.jebo.2016.12.004
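The simulation procedure described in this caption can be sketched in a few lines: draw each country's coefficient from the normal distribution implied by its estimate and standard error, re-run the country-level regression, and report twice the smaller tail fraction of the simulated slopes. This is a minimal illustration with synthetic inputs; the function name, inputs, and defaults are assumptions, not the published replication code.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_pvalue(est, se, country_var, n_sims=1000):
    """Simulated two-sided p-value for the slope of country-level
    tie-strength coefficients on a country-level variable (e.g., Gini).
    est, se: per-country coefficient estimates and standard errors."""
    X = np.column_stack([np.ones_like(country_var), country_var])
    slopes = np.empty(n_sims)
    for s in range(n_sims):
        draw = rng.normal(est, se)   # propagate first-stage uncertainty
        beta, *_ = np.linalg.lstsq(X, draw, rcond=None)
        slopes[s] = beta[1]
    # two-sided: twice the smaller tail fraction of simulated slopes
    return 2.0 * min((slopes > 0).mean(), (slopes < 0).mean())
```

With a strong, precisely estimated relationship, almost every simulated slope lands on the same side of zero, so the simulated p-value approaches zero.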
Dr. Niccolò Meneghetti is an Assistant Professor of Computer and Information Science at the University of Michigan-Dearborn.
His major research interests are in the broad area of database systems, with a primary focus on probabilistic databases, statistical relational learning, and uncertain data management.
I am interested in how governance, communities, and inequality emerge in sociotechnical systems, and how the structure of sociotechnical systems encodes and reinforces these processes. To those ends, I develop empirical data and computational methods, focusing on latent variable models; statistical inference in networks; empirical design to study governance in organizations, platforms, and computational social systems; and causal inference and measurement in observational data.
Several sample projects:
> developing empirical populations of networks to infer social and ecological processes encoded in networks
> using probabilistic methods to infer the structure and dynamics of the illicit wildlife trade
> building on theory from political science, statistics, and education to disentangle issues of “bias” in computational systems
I have broad interests and expertise in developing statistical methodology and applying it in biomedical research. I have adapted methodologies, including Bayesian data analysis, categorical data analysis, generalized linear models, longitudinal data analysis, multivariate analysis, RNA-Seq data analysis, survival data analysis, and machine learning methods, in response to the unique needs of individual studies and objectives without compromising the integrity of the research and results. Two recently developed methods are highlighted below:
1) A risk prediction model for a survival outcome with high-dimensional predictors
I have developed a simple, fast, yet flexible statistical method to estimate the updated risk of renal disease over time using high-dimensional longitudinal biomarkers. The goal is to utilize all available high-dimensional data sources (e.g., routine clinical features and urine and serum markers measured at baseline and at all follow-up time points) to efficiently and accurately estimate the updated risk of end-stage renal disease (ESRD).
2) A safety mining tool for vaccine safety studies
I developed an algorithm for vaccine safety surveillance that incorporates adverse event (AE) ontology. Multiple adverse events may individually be rare enough to go undetected, but if they are related, they can borrow strength from each other to increase the chance of being flagged. Furthermore, borrowing strength induces shrinkage of related AEs, thereby also reducing headline-grabbing false positives.
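The dynamic risk updating idea in (1) can be illustrated with a toy sketch: at a landmark time, fit a penalized working model on the biomarker history observed so far and read off an updated risk. The ridge-logistic choice, the Newton solver, and all names here are illustrative assumptions; the actual method may differ substantially.

```python
import numpy as np

def ridge_logistic(X, y, lam=1.0, iters=25):
    """Ridge-penalized logistic regression fit by Newton's method; the
    penalty keeps the fit stable when biomarkers are high-dimensional.
    (Illustrative stand-in for the actual risk model.)"""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))  # predicted probabilities
        W = mu * (1.0 - mu)                     # IRLS weights
        H = X.T @ (X * W[:, None]) + lam * np.eye(p)
        g = X.T @ (y - mu) - lam * beta
        beta = beta + np.linalg.solve(H, g)
    return beta

def updated_risk(x_landmark, beta):
    """Updated event risk at a landmark time, given the biomarker
    history summarized in the feature vector x_landmark."""
    return 1.0 / (1.0 + np.exp(-(x_landmark @ beta)))
```

Refitting at successive landmark times, with features that accumulate the follow-up measurements available at each time, yields a risk estimate that updates as new biomarker values arrive.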
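The borrowing-strength idea in (2) can be sketched with a simple precision-weighted shrinkage of per-AE log rate ratios toward the mean of their ontology group. This is a stand-in for the full hierarchical model; `tau2`, the continuity correction, and all names are assumptions for illustration.

```python
import numpy as np

def shrink_log_rr(obs, exp_counts, groups, tau2=0.5):
    """Shrink each AE's log rate ratio toward its ontology-group mean,
    so related rare AEs borrow strength from one another.
    obs, exp_counts: observed and expected AE counts;
    groups: ontology group label per AE; tau2: assumed within-group variance."""
    log_rr = np.log((obs + 0.5) / (exp_counts + 0.5))  # continuity-corrected
    var = 1.0 / (obs + 0.5)                            # approx. sampling variance
    out = np.empty_like(log_rr)
    for g in np.unique(groups):
        idx = groups == g
        group_mean = np.average(log_rr[idx], weights=1.0 / var[idx])
        b = tau2 / (tau2 + var[idx])                   # shrinkage factor in (0, 1)
        out[idx] = b * log_rr[idx] + (1.0 - b) * group_mean
    return out
```

An AE with a small count (large sampling variance) gets a small shrinkage factor and is pulled strongly toward its group mean; a cluster of related, mildly elevated AEs therefore rises together, while an isolated extreme estimate is damped, which is the false-positive reduction described above.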
My research focuses on cutting-edge methodological development in Bayesian statistics, complex survey inference, missing data imputation, causal inference, and data confidentiality protection. I have extensive collaboration experience with health services researchers and epidemiologists aimed at improving healthcare and public health practice, and I provide statistical support to solve sampling and analysis issues on health and social science surveys.
Carter’s research combines quantitative, theoretical, and field approaches to address challenging wildlife conservation issues, from local to global scales, in the Anthropocene. His work includes projects on endangered species conservation in human-dominated areas of Nepal, post-war recovery of wildlife in Mozambique, human-wildlife coexistence in the American West, and the effects of artificial light and human-made noise on wildlife habitat across the contiguous US. His research methods focus on: (1) spatializing both human and wildlife processes, (2) probabilistic methods to infer human-wildlife interactions, (3) simulation models of coupled natural-human systems, and (4) forecasting and decision-support tools.
My research is mainly concerned with theoretical and computational hydrodynamics, with applications in nonlinear ocean wave prediction and dynamics, wave-body interactions, and wave turbulence theory. I have incorporated data science tools into my research, especially in the following two projects:
1. Quantification of statistics of extreme ship motions in irregular wave fields: In this project, we propose a new computational framework that directly resolves the statistics (and causal factors) of extreme ship responses in a nonlinear wave field. The development leverages a range of physics-based and learning-based approaches, including nonlinear wave simulations (potential flow), ship response simulations (e.g., CFD), dimension-reduction techniques, sequential sampling, Gaussian process regression (Kriging), and multi-fidelity methods. The key features of the new approach include (i) description of the stochastic wave field by a low-dimensional probabilistic parameter space, and (ii) use of a minimal number of CFD simulations to provide the most information for converged statistics of extreme motions.
2. Real-time wave prediction with data assimilation from radar measurements: In this project, we develop a real-time data assimilation algorithm adapted to the CPU-GPU hardware architecture to reduce the uncertainties associated with radar measurement errors and with environmental factors such as wind and current in realistic ocean environments. Upon integration with advanced in-situ or remote wave sensing technology, the developed computational framework can provide heretofore unavailable real-time forecast capability for ocean waves.
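The sequential-sampling-with-Kriging loop in project 1 can be sketched in a hypothetical one-dimensional parameter space: fit a Gaussian process surrogate to a few expensive evaluations, repeatedly query the most uncertain point, and estimate an exceedance probability from the surrogate. The kernel, length scale, toy response function, and threshold are all illustrative stand-ins for the CFD-based pipeline.

```python
import numpy as np

def rbf(a, b, ls=0.1):
    """Squared-exponential kernel on 1-D inputs (length scale assumed)."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(Xtr, ytr, Xte, ls=0.1, noise=1e-6):
    """Posterior mean and variance of GP regression at test points."""
    K = rbf(Xtr, Xtr, ls) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte, ls)
    mean = Ks.T @ np.linalg.solve(K, ytr)
    # kernel diagonal is 1, so prior variance at each test point is 1
    var = 1.0 - np.einsum("ij,ij->j", Ks, np.linalg.solve(K, Ks))
    return mean, np.maximum(var, 0.0)

def expensive_response(x):
    """Stand-in for a costly CFD ship-response simulation."""
    return np.sin(3 * x) + 2 * np.exp(-50 * (x - 0.7) ** 2)

grid = np.linspace(0.0, 1.0, 200)        # low-dimensional parameter space
X = list(np.linspace(0.0, 1.0, 4))       # small initial design
y = [expensive_response(x) for x in X]
for _ in range(10):                      # sequential sampling loop
    mean, var = gp_posterior(np.array(X), np.array(y), grid)
    x_next = grid[np.argmax(var)]        # query the most uncertain point
    X.append(x_next)
    y.append(expensive_response(x_next))

mean, _ = gp_posterior(np.array(X), np.array(y), grid)
p_extreme = float(np.mean(mean > 1.5))   # exceedance prob. under uniform inputs
```

Each loop iteration spends one "CFD run" where the surrogate is least certain, which is the sense in which a minimal number of simulations provides the most information about the extreme-response statistics.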
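As a minimal illustration of the assimilation step in project 2, a single Kalman analysis update blends a model forecast with a noisy radar observation; the actual algorithm is more elaborate, and everything here (names, identity observation operator, covariances) is a simplified assumption.

```python
import numpy as np

def kalman_update(x_fcst, P_fcst, z, R, H=None):
    """One Kalman analysis step: correct a wave-model forecast x_fcst
    (covariance P_fcst) with a radar observation z (noise covariance R).
    H maps model state to observation space; identity by default."""
    H = np.eye(len(x_fcst)) if H is None else H
    S = H @ P_fcst @ H.T + R                # innovation covariance
    K = P_fcst @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_anal = x_fcst + K @ (z - H @ x_fcst)  # corrected state
    P_anal = (np.eye(len(x_fcst)) - K @ H) @ P_fcst
    return x_anal, P_anal
```

When the radar noise covariance is small relative to the forecast uncertainty, the analysis state moves close to the observation and the analysis covariance shrinks, which is how assimilation reduces the measurement- and environment-driven uncertainty described above.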
I am an assistant professor in the Department of Industrial and Manufacturing Systems Engineering (IMSE) at the University of Michigan-Dearborn. Prior to joining UM-Dearborn, I was a research assistant professor and postdoctoral research scholar at Vanderbilt University. My research interests include uncertainty quantification, Bayesian data analytics, big data analytics, machine learning, optimization under uncertainty, and applications of data analytics and machine learning in aerospace, mechanical, and manufacturing systems and in materials science. The goal of my research is to develop novel computational methods for designing sustainable and reliable engineering systems by leveraging the rich information contained in high-fidelity computational simulation models, experimental data, and big operational and historical data.