Cyber-security is a complex, multi-dimensional research field. My research style is an interdisciplinary investigation, rooted primarily in economics, econometrics, data science (AI/ML and Bayesian and frequentist statistics), game theory, and network science, of socially pressing issues that affect the quality of cyber-risk management in modern networked and distributed engineering systems such as IoT-driven critical infrastructures, cloud-based service networks, and app-based systems (e.g., mobile commerce, smart homes). I take delight in proposing data-driven, rigorous, and interdisciplinary solutions both to existing fundamental challenges that pose a practical bottleneck to (cost-)effective cyber-risk management and to emerging cyber-security and privacy issues that might plague modern networked engineering systems. I strive for originality, practical significance, and mathematical rigor in my solutions. One of my primary end goals is to get my arms around complex, multi-dimensional information security and privacy problems in a way that helps, informs, and empowers practitioners and policy makers to take the right steps toward making cyberspace more secure.
My lab has two main areas of focus: molecular characteristics of head and neck cancer, and the intersection of regulatory genomics and pathway analysis. In head and neck cancer, we study tumor subtypes and biomarkers of prognosis, treatment response, and recurrence. We apply integrative omics analyses, dimension-reduction methods, and prediction techniques, with the ultimate goal of identifying patient subsets who would benefit from either an additional targeted treatment or de-escalated treatment to increase quality of life. For regulatory genomics and pathway analysis, we develop statistical tests that take into account important covariates and other variables for weighting observations.
My methodological research focuses on developing statistical methods for routinely collected healthcare databases such as electronic health records (EHR) and claims data. I aim to tackle the unique challenges that arise from the secondary use of real-world data for research purposes. Specifically, I develop novel causal inference methods and semiparametric efficiency theory that harness the full potential of EHR data to address comparative effectiveness and safety questions. I also develop scalable and automated pipelines for the curation and harmonization of EHR data across healthcare systems and coding systems.
Fred Conrad’s research concerns the development of new methods and data sources for conducting social research. His work is largely focused on survey methodology, but he also explores the use of social media content as a complement to survey data and as a source of large-scale qualitative insights. His focus is on data quality and reducing measurement error. For example, live video interviews promote more thoughtful responses (e.g., less straightlining, the tendency to give the same answer to a battery of survey questions), but they also promote less candor when respondents answer questions on sensitive topics. Measurement errors in social media include misclassification in the automated interpretation of content using methods such as sentiment analysis and topic modeling, as well as selective self-presentation (posting only flattering content). Equally challenging is not knowing the extent to which users differ from the population to which one might wish to generalize results.
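As a concrete illustration of the straightlining idea, the sketch below flags respondents who give the identical answer to every item in a battery. The function name, the toy data, and the strict same-answer criterion are illustrative assumptions, not drawn from Conrad's work; real analyses typically use softer indices of response differentiation.

```python
# Hypothetical sketch: flagging "straightlining" in a battery of survey items.
# The strict same-answer criterion and the toy responses are assumptions.

def straightlining_rate(responses):
    """Fraction of respondents who gave the identical answer to every
    item in a battery (one list of answers per respondent)."""
    flagged = sum(1 for answers in responses if len(set(answers)) == 1)
    return flagged / len(responses)

battery = [
    [4, 4, 4, 4, 4],  # straightliner: same answer to all five items
    [2, 3, 4, 3, 2],
    [5, 5, 4, 5, 5],
    [1, 1, 1, 1, 1],  # straightliner
]
print(straightlining_rate(battery))  # 0.5
```

A mode comparison (e.g., live video vs. self-administered web) could then compare this rate across conditions.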
I work on the analysis of sports as it relates to economics, business, finance, history, performance modeling, analytics, and prediction/forecasting. I typically use panel-data econometric techniques to understand team performance in professional sports, and I am also interested in forecasting models.
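A minimal sketch of the panel-data idea, assuming a one-way fixed-effects (within) estimator: demeaning each team's observations removes time-invariant team heterogeneity before running OLS. The toy payroll/win-percentage data and variable names are made up for illustration.

```python
import numpy as np

# Hypothetical sketch of a one-way fixed-effects (within) estimator for a
# team-performance panel. The data and variable names are illustrative.

def within_estimator(y, x, groups):
    """Demean y and x within each group (team), then run OLS on the
    demeaned data; this removes time-invariant team heterogeneity."""
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    groups = np.asarray(groups)
    y_dm, x_dm = y.copy(), x.copy()
    for g in np.unique(groups):
        mask = groups == g
        y_dm[mask] -= y[mask].mean()
        x_dm[mask] -= x[mask].mean(axis=0)
    beta, *_ = np.linalg.lstsq(x_dm, y_dm, rcond=None)
    return beta

# Toy panel: win percentage vs. payroll for two teams over three seasons.
payroll = np.array([[1.0], [2.0], [3.0], [2.0], [3.0], [4.0]])
wins = np.array([0.40, 0.45, 0.50, 0.55, 0.60, 0.65])
teams = np.array([0, 0, 0, 1, 1, 1])
print(within_estimator(wins, payroll, teams))  # [0.05]
```

In practice one would use a dedicated panel package with clustered standard errors; this shows only the core within transformation.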
Professor Kowalski’s recent research analyzes experiments and clinical trials with the goal of designing policies to target insurance expansions and medical treatments to individuals who stand to benefit from them the most. Her research has also explored the impact of previous Medicaid expansions, the Affordable Care Act, the Massachusetts health reform of 2006, and employer-sponsored health insurance plans. She has also used cutting-edge techniques to estimate the value of medical spending on at-risk newborns.
We are interested in resolving outstanding fundamental scientific problems that impede the computational materials design process. Our group uses high-throughput density functional theory, applied thermodynamics, and materials informatics to deepen our fundamental understanding of synthesis-structure-property relationships, while exploring new chemical spaces for functional technological materials. These research interests are driven by the practical goal of the U.S. Materials Genome Initiative to accelerate materials discovery, but resolving them requires fundamental research in synthesis science, inorganic chemistry, and materials thermodynamics.
My research interests lie in the design and analysis of randomized controlled trials (RCTs), partial identification, identification and inference with multi-valued treatments and instruments, and quantile regression. In one recent paper, I study the optimal stratified randomization procedure in RCTs and find that a certain kind of matched-pair design is optimal. In another paper (coauthored with Joe Romano and Azeem Shaikh), we provide an asymptotically exact inference procedure for matched-pair designs. In a third paper, we study inference with moment inequalities whose dimension grows exponentially with the sample size. I also have a paper in which we study the sharp identified sets for various treatment effects with multi-valued instruments and multi-valued treatments.
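The matched-pair design mentioned above can be sketched as follows: sort units on a baseline covariate, pair adjacent units, and randomize treatment within each pair. This is a generic illustration under the assumption of a single covariate and an even number of units; the actual optimality and inference results in the cited papers are not reproduced here.

```python
import random

# Hypothetical sketch of a matched-pair design: sort units on a baseline
# covariate, pair adjacent units, and randomize treatment within pairs.
# The covariate (age) and unit labels are illustrative assumptions.

def matched_pair_assignment(covariates, seed=0):
    """Return a dict mapping unit index -> 0/1 treatment, with exactly
    one treated unit in each adjacent-covariate pair.
    Assumes an even number of units."""
    rng = random.Random(seed)
    order = sorted(range(len(covariates)), key=lambda i: covariates[i])
    assignment = {}
    for j in range(0, len(order), 2):
        i, k = order[j], order[j + 1]
        if rng.random() < 0.5:
            assignment[i], assignment[k] = 1, 0
        else:
            assignment[i], assignment[k] = 0, 1
    return assignment

ages = [34, 29, 61, 58, 45, 47]  # baseline covariate for six units
assignment = matched_pair_assignment(ages)
print(assignment)  # exactly one treated unit per pair, three treated total
```

Pairing on the covariate balances treatment and control groups by construction, which is the intuition behind the efficiency of such designs.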
My primary research focuses on the measurement and monitoring of risks in banks, both at the individual-bank level and at the level of the financial system as a whole. In a recent paper, we developed a high-dimensional statistical approach to measure connectivity across different players in the financial sector, implementing our model using stock-return data for US banks, insurance companies, and hedge funds. Some of my early research developed analytical tools to measure banks’ default risk using option-pricing models and other tools of financial economics. These projects often have a significant empirical component that uses large financial datasets and econometric tools. Of late, I have been working on several projects related to equity and inclusion in financial markets; these papers use large datasets from financial markets to understand differences in the quantity and quality of financial services received by minority borrowers. A common theme across these projects is causal inference using state-of-the-art tools from econometrics. Finally, some of my ongoing research projects relate to FinTech, with a focus on credit scoring and online lending.
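To give a flavor of connectivity measurement, the sketch below builds a simple correlation-threshold network over toy return series. This is a deliberately simplified stand-in, not the high-dimensional estimator of the cited paper; the tiny deterministic data and the 0.3 cutoff are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch: a correlation-threshold connectivity network over
# toy return series. The data and the 0.3 cutoff are illustrative only.

returns = np.array([
    [1.0, 2.0,  1.0],   # each row is a day, each column an institution
    [2.0, 4.0, -1.0],   # (e.g., a bank, an insurer, a hedge fund)
    [3.0, 6.0, -1.0],
    [4.0, 8.0,  1.0],
])

corr = np.corrcoef(returns.T)        # institution-by-institution correlations
n = corr.shape[0]
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if abs(corr[i, j]) > 0.3]   # link strongly co-moving institutions
print(edges)  # [(0, 1)]: only columns 0 and 1 co-move
```

Network statistics computed on such graphs (degree, centrality) can then serve as connectivity measures across sectors.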
My research is at the intersection of neuroscience and artificial intelligence. My group uses brain-inspired principles from neuroscience to design models and algorithms for computer vision and language processing. In turn, we use neural network models to test hypotheses in neuroscience and to explain or predict human perception and behavior. My group also develops and uses machine learning algorithms to improve the acquisition and analysis of medical images, including functional magnetic resonance imaging of the brain and magnetic resonance imaging of the gut.
We use brain-inspired neural network models to predict and decode brain activity in humans processing information from naturalistic audiovisual stimuli.