I analyze sports as they relate to economics, business, finance, history, performance modeling, analytics, and prediction/forecasting. I typically use panel data econometric techniques to understand team performance in professional sports, and I also have an interest in forecasting models.
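To illustrate the kind of panel data technique this describes, here is a minimal sketch of a within (fixed-effects) estimator on synthetic team-season data. All names, numbers, and the payroll-wins model are invented for illustration; this is not taken from any specific paper.

```python
import random

random.seed(0)

# Synthetic panel: 4 teams observed over 10 seasons.
# True model: wins = team_effect + 2.0 * payroll + noise.
teams = ["A", "B", "C", "D"]
effects = {"A": 30.0, "B": 40.0, "C": 50.0, "D": 60.0}
data = []
for t in teams:
    for season in range(10):
        payroll = random.uniform(1, 10)
        wins = effects[t] + 2.0 * payroll + random.gauss(0, 1)
        data.append((t, payroll, wins))

# Within (fixed-effects) transformation: demean each variable by team,
# which removes the time-invariant team effect.
def group_means(data):
    sums, counts = {}, {}
    for t, x, y in data:
        sx, sy = sums.get(t, (0.0, 0.0))
        sums[t] = (sx + x, sy + y)
        counts[t] = counts.get(t, 0) + 1
    return {t: (sx / counts[t], sy / counts[t]) for t, (sx, sy) in sums.items()}

means = group_means(data)
num = den = 0.0
for t, x, y in data:
    mx, my = means[t]
    num += (x - mx) * (y - my)
    den += (x - mx) ** 2

beta_fe = num / den  # within estimator of the payroll coefficient
print(f"FE estimate of payroll effect: {beta_fe:.2f}")  # close to the true 2.0
```

The within transformation sweeps out each team's fixed effect, so the slope is identified from within-team variation in payroll alone.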
Professor Kowalski’s recent research analyzes experiments and clinical trials with the goal of designing policies to target insurance expansions and medical treatments to individuals who stand to benefit from them the most. Her research has also explored the impact of previous Medicaid expansions, the Affordable Care Act, the Massachusetts health reform of 2006, and employer-sponsored health insurance plans. She has also used cutting-edge techniques to estimate the value of medical spending on at-risk newborns.
We are interested in resolving outstanding fundamental scientific problems that impede the computational materials design process. Our group uses high-throughput density functional theory, applied thermodynamics, and materials informatics to deepen our fundamental understanding of synthesis-structure-property relationships while exploring new chemical spaces for functional technological materials. These research interests are driven by the practical goal of the U.S. Materials Genome Initiative to accelerate materials discovery, but realizing that goal requires basic research in synthesis science, inorganic chemistry, and materials thermodynamics.
My research interests lie in the design and analysis of randomized controlled trials (RCTs), partial identification, identification and inference with multi-valued treatments and instruments, and quantile regression. In one recent paper, I study optimal stratified randomization procedures in RCTs and find that a certain class of matched-pair designs is optimal. In another paper (coauthored with Joe Romano and Azeem Shaikh), we provide an asymptotically exact inference procedure for matched-pair designs. In a third paper, we study inference with moment inequalities whose dimension grows exponentially with the sample size. In a fourth paper, we characterize the sharp identified sets for various treatment effects with multi-valued instruments and multi-valued treatments.
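A matched-pair design of the kind mentioned above can be sketched in a few lines: sort units on a baseline covariate, pair adjacent units, then randomize treatment within each pair. This is a generic illustration of the idea, not the specific procedure from any of the papers; the covariate and unit count are made up.

```python
import random

random.seed(42)

# Hypothetical baseline covariate for 10 units (e.g., a pre-treatment outcome).
units = [(i, random.uniform(0, 100)) for i in range(10)]

# Matched-pair design: sort units by the covariate and pair adjacent units,
# so the two units within each pair are as similar as possible.
ordered = sorted(units, key=lambda u: u[1])
pairs = [ordered[i:i + 2] for i in range(0, len(ordered), 2)]

# Randomize within each pair: exactly one unit treated, the other control.
assignment = {}
for a, b in pairs:
    treated = random.choice([a, b])
    assignment[a[0]] = 1 if a is treated else 0
    assignment[b[0]] = 1 - assignment[a[0]]

print(assignment)
```

Pairing on the covariate before randomizing balances treatment and control groups on that covariate by construction, which is the source of the efficiency gains such designs can deliver.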
My primary research focuses on the measurement and monitoring of risks in banks, both at the individual bank level and at the level of the financial system as a whole. In a recent paper, we developed a high-dimensional statistical approach to measure connectivity across different players in the financial sector, implementing the model with stock return data for US banks, insurance companies, and hedge funds. Some of my early research developed analytical tools to measure banks' default risk using option pricing models and other tools of financial economics. These projects often have a significant empirical component that draws on large financial datasets and econometric tools. Of late, I have been working on several projects related to equity and inclusion in financial markets; these papers use large datasets from financial markets to understand differences in the quantity and quality of financial services received by minority borrowers. A common theme across these projects is causal inference using state-of-the-art econometric tools. Finally, some of my ongoing research projects relate to FinTech, with a focus on credit scoring and online lending.
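Measuring default risk with option pricing models typically builds on a Merton-style view of equity as a call option on bank assets. The sketch below computes a distance to default under that framing; the bank's balance-sheet numbers are invented for illustration, and real implementations must first back out unobserved asset value and volatility from equity data.

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def distance_to_default(V, F, mu, sigma, T=1.0):
    """Merton-style distance to default.
    V: market value of assets, F: face value of debt due at horizon T,
    mu: expected asset return, sigma: asset volatility."""
    return (log(V / F) + (mu - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))

# Illustrative (made-up) bank: assets 120, debt 100, 8% drift, 20% vol.
dd = distance_to_default(V=120.0, F=100.0, mu=0.08, sigma=0.20)
pd = norm_cdf(-dd)  # probability assets fall below debt at the horizon
print(f"distance to default: {dd:.2f}, default probability: {pd:.4f}")
```

The distance to default counts how many standard deviations of asset value stand between the bank and its default point; the normal CDF then converts that distance into a default probability.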
My research is at the intersection of neuroscience and artificial intelligence. My group uses principles from neuroscience, and brain-inspired principles more broadly, to design models and algorithms for computer vision and language processing. In turn, we use neural network models to test hypotheses in neuroscience and to explain or predict human perception and behavior. My group also develops and uses machine learning algorithms to improve the acquisition and analysis of medical images, including functional magnetic resonance imaging of the brain and magnetic resonance imaging of the gut.
We use brain-inspired neural network models to predict and decode brain activity in humans processing information from naturalistic audiovisual stimuli.
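Predicting brain activity from stimulus features is often framed as an encoding model: regress each measured response (e.g., a voxel or channel) on features of the stimulus. Here is a toy sketch of that idea with one stimulus feature and a handful of synthetic "voxels"; real pipelines use model-derived feature spaces and regularized regression.

```python
import random

random.seed(7)

# Toy encoding model: predict each "voxel" response as a linear function
# of a single stimulus feature (e.g., a sound envelope). All data synthetic.
n_time, n_vox = 200, 3
feature = [random.gauss(0, 1) for _ in range(n_time)]
true_w = [0.8, -0.5, 0.0]
brain = [[true_w[v] * feature[t] + random.gauss(0, 0.3) for v in range(n_vox)]
         for t in range(n_time)]

# Fit one OLS slope per voxel: w = cov(feature, voxel) / var(feature).
fx = sum(feature) / n_time
var_f = sum((f - fx) ** 2 for f in feature)
weights = []
for v in range(n_vox):
    mv = sum(row[v] for row in brain) / n_time
    cov = sum((feature[t] - fx) * (brain[t][v] - mv) for t in range(n_time))
    weights.append(cov / var_f)

print([round(w, 2) for w in weights])  # recovers roughly [0.8, -0.5, 0.0]
```

Decoding inverts the direction of the regression, predicting stimulus features from brain responses; the fitted weights are the same building block in both cases.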
Alex Gorodetsky’s research is at the intersection of applied mathematics, data science, and computational science, and is focused on enabling autonomous decision making under uncertainty. He is especially interested in controlling, designing, and analyzing autonomous systems that must act in complex environments, where observational data and expensive computational simulations must work together to ensure objectives are achieved. Toward this goal, he pursues research in wide-ranging areas including uncertainty quantification, statistical inference, machine learning, control, and numerical analysis. His methodological focus is on increasing the scalability of probabilistic modeling and analysis techniques such as Bayesian inference and uncertainty quantification. His current strategies for achieving scalability revolve around leveraging computational optimal transport, developing tensor network learning algorithms, and creating new multi-fidelity information fusion approaches.
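One concrete form of multi-fidelity information fusion is a control-variate estimator: combine a few expensive high-fidelity evaluations with many cheap correlated low-fidelity evaluations to reduce variance. The sketch below uses invented toy models (a noisy "high-fidelity" function and an exact cheap surrogate) purely to show the estimator's structure.

```python
import random
from statistics import mean

random.seed(1)

# Hypothetical expensive high-fidelity model and a cheap, correlated surrogate.
def f_hi(x):
    return x ** 2 + 0.1 * random.gauss(0, 1)  # "expensive", noisy

def f_lo(x):
    return x ** 2                             # cheap surrogate

# Goal: estimate E[f_hi(X)] for X ~ Uniform(0, 1); the true value is 1/3.
n_hi, n_lo = 20, 2000

# A few paired high/low evaluations on shared inputs.
xs = [random.random() for _ in range(n_hi)]
hi = [f_hi(x) for x in xs]
lo = [f_lo(x) for x in xs]

# Many extra cheap low-fidelity samples.
lo_big = [f_lo(random.random()) for _ in range(n_lo)]

# Control-variate (multi-fidelity) estimator with coefficient alpha = 1:
#   E[f_hi] ~= mean(hi) - (mean(lo) - mean(lo_big))
estimate = mean(hi) - (mean(lo) - mean(lo_big))
print(f"multi-fidelity estimate: {estimate:.3f}")  # true value is 1/3
```

The paired term corrects the cheap large-sample mean toward the high-fidelity quantity; when the two fidelities are strongly correlated, most of the variance is carried by the inexpensive samples.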
Sample workflow for enabling autonomous decision making under uncertainty for a drone operating in a complex environment. We develop algorithms to compress simulation data by exploiting problem structure. We then embed the compressed representations onto onboard computational resources. Finally, we develop approaches to enable the drone to adapt, learn, and refine knowledge by interacting with, and collecting data from, the environment.
My main interest is theoretical statistics as applied to complex models, ranging from semiparametric models to ultra-high-dimensional regression analysis. I am particularly interested in the negative aspects of Bayesian and causal analysis as implemented in modern statistics.
An analysis of the positions of SCOTUS justices.
Greg’s research primarily investigates information flow in financial markets and the actions of agents in those markets, both consumers and producers of that information. His approach draws on theory from the social sciences (economics, psychology, and sociology) combined with large data sets from diverse sources and a variety of data science approaches. Most projects combine data from multiple sources, including commercial databases, experimentally created data, and data extracted from sources designed for other uses (commercial media, web scraping, cellphone data, etc.). In addition to a wide range of econometric and statistical methods, his work has applied machine learning, textual analysis, social media mining, methods for missing data, and the combination of mixed media.