My research centers on the interaction between abstract, theoretically sound probabilistic algorithms and human beings. One thread explores connections between Machine Learning and both Crowdsourcing and Economics, focusing in each case on better understanding the aggregation process. As Machine Learning algorithms are increasingly used to make decisions that affect human lives, I am interested in evaluating the fairness of these algorithms, exploring various paradigms of fairness, and studying how these notions interact with more traditional performance metrics. My research in Computer Science Education focuses on developing and using evidence-based techniques for teaching Machine Learning to undergraduates. To this end, I have developed a pilot summer program that introduces students to current Machine Learning research and enables them to make a more informed decision about what role they would like research to play in their futures. I have mentored, and continue to mentor, undergraduate students, working with them to produce publishable, and award-winning, undergraduate research.
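As a minimal illustration of the kind of aggregation process studied in crowdsourcing, here is plain majority voting alongside an accuracy-weighted variant. This is a generic sketch, not code from the research described above; the worker identifiers and accuracy scores are hypothetical.

```python
from collections import Counter


def majority_vote(answers):
    """answers: list of (worker_id, label) pairs; return the modal label."""
    return Counter(label for _, label in answers).most_common(1)[0][0]


def weighted_vote(answers, accuracy):
    """Weight each worker's label by an (assumed known) accuracy score.

    Workers missing from `accuracy` get a neutral weight of 0.5.
    """
    scores = {}
    for worker, label in answers:
        scores[label] = scores.get(label, 0.0) + accuracy.get(worker, 0.5)
    return max(scores, key=scores.get)
```

With three workers where the lone dissenter is known to be far more reliable, the two rules can disagree: majority voting follows the crowd, while the weighted rule follows the expert.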
My broad research interests are in multi-agent systems, computational economics and finance, and artificial intelligence. I apply techniques from algorithmic game theory, statistical machine learning, decision theory, and related fields to a variety of problems at the intersection of the computational and social sciences. A major focus of my research has been the design and analysis of market-making algorithms for financial markets and, in particular, prediction markets: incentive-based mechanisms for aggregating data in the form of private beliefs about uncertain events (e.g., the outcome of an election) distributed among strategic agents. I use both analytical and simulation-based methods to investigate the impact of factors such as wealth, risk attitude, and manipulative behavior on information aggregation in market ecosystems. Another line of work I am pursuing involves algorithms for allocating resources based on preference data collected from potential recipients while satisfying efficiency, fairness, and diversity criteria; my joint work on ethnicity quotas in Singapore public housing allocation deserves special mention in this vein. More recently, I have become involved in research on empirical game-theoretic analysis, a family of methods for building tractable models of complex, procedurally defined games from empirical/simulated payoff data and using them to reason about game outcomes.
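As one concrete, standard example of a prediction-market market maker, here is a sketch of Hanson's logarithmic market scoring rule (LMSR), a common baseline in this literature. This is offered as an illustrative sketch of the general setting, not as the specific algorithms studied in the work above; the liquidity parameter `b` and function names are illustrative.

```python
import math


def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b)).

    q[i] is the number of outstanding shares on outcome i; the max is
    subtracted before exponentiating for numerical stability.
    """
    m = max(q)
    return b * (m / b + math.log(sum(math.exp((qi - m) / b) for qi in q)))


def lmsr_prices(q, b=100.0):
    """Instantaneous outcome prices: the softmax of q / b (they sum to 1)."""
    m = max(q)
    exps = [math.exp((qi - m) / b) for qi in q]
    total = sum(exps)
    return [e / total for e in exps]


def trade_cost(q, outcome, shares, b=100.0):
    """Amount a trader pays the market maker to buy `shares` of `outcome`:
    the difference in the cost function before and after the trade."""
    q_new = list(q)
    q_new[outcome] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)
```

Because prices are a softmax of the share vector, buying an outcome pushes its price up, so the final prices can be read as the market's aggregated probability estimate.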
Cyber-security is a complex and multi-dimensional research field. My research takes an interdisciplinary approach, rooted primarily in economics, econometrics, data science (AI/ML and Bayesian and frequentist statistics), game theory, and network science, to investigate socially pressing issues affecting the quality of cyber-risk management in modern networked and distributed engineering systems such as IoT-driven critical infrastructures, cloud-based service networks, and app-based systems (e.g., mobile commerce, smart homes), to name a few. I take delight in proposing data-driven, rigorous, and interdisciplinary solutions both to existing fundamental challenges that pose a practical bottleneck to (cost-)effective cyber-risk management, and to emerging cyber-security and privacy issues that might plague modern networked engineering systems. I strive for originality, practical significance, and mathematical rigor in my solutions. One of my primary end goals is to develop a conceptual grasp of complex, multi-dimensional information security and privacy problems in a way that helps, informs, and empowers practitioners and policy makers to take the right steps in making cyberspace more secure.
My primary research is focused on the measurement and monitoring of risks in banks, both at the individual bank level and at the level of the financial system as a whole. In a recent paper, we developed a high-dimensional statistical approach to measure connectivity across different players in the financial sector. We implement our model using stock return data for US banks, insurance companies, and hedge funds. Some of my early research developed analytical tools to measure banks' default risk using option pricing models and other tools of financial economics. These projects often have a significant empirical component that uses large financial datasets and econometric tools. Of late, I have been working on several projects related to the issue of equity and inclusion in financial markets. These papers use large datasets from financial markets to understand differences in the quantity and quality of financial services received by minority borrowers. A common theme across these projects is the issue of causal inference using state-of-the-art tools from econometrics. Finally, some of my ongoing research projects are related to FinTech, with a focus on credit scoring and online lending.
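One common ingredient of such connectivity measures is a pairwise Granger-style test: does adding the lagged return of institution X reduce the error in predicting institution Y's return? The toy sketch below runs a bivariate lag-1 version on demeaned series; it is a deliberate simplification for illustration (the actual literature uses higher-dimensional methods), and the threshold and function names are hypothetical.

```python
def demean(series):
    m = sum(series) / len(series)
    return [v - m for v in series]


def sse_ar1(y):
    """SSE from regressing y_t on its own lag y_{t-1} (no intercept; demeaned input)."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    b = num / den
    return sum((y[t] - b * y[t - 1]) ** 2 for t in range(1, len(y)))


def sse_var1(y, x):
    """SSE from regressing y_t on y_{t-1} and x_{t-1}, via 2x2 normal equations."""
    n = len(y)
    a11 = sum(y[t - 1] ** 2 for t in range(1, n))
    a22 = sum(x[t - 1] ** 2 for t in range(1, n))
    a12 = sum(y[t - 1] * x[t - 1] for t in range(1, n))
    b1 = sum(y[t] * y[t - 1] for t in range(1, n))
    b2 = sum(y[t] * x[t - 1] for t in range(1, n))
    det = a11 * a22 - a12 * a12
    c1 = (b1 * a22 - b2 * a12) / det
    c2 = (a11 * b2 - a12 * b1) / det
    return sum((y[t] - c1 * y[t - 1] - c2 * x[t - 1]) ** 2 for t in range(1, n))


def granger_edge(x, y, threshold=0.1):
    """Draw an edge x -> y if lagged x cuts the prediction SSE by more
    than `threshold` as a fraction of the restricted SSE."""
    y, x = demean(y), demean(x)
    restricted, unrestricted = sse_ar1(y), sse_var1(y, x)
    return (restricted - unrestricted) / restricted > threshold
```

Running `granger_edge` over all ordered pairs of institutions yields a directed network whose density can serve as a crude systemic-connectivity indicator.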
My research explores the interplay between corporate decisions and employee actions. I currently use anonymized mobile device data to observe individual behaviors, and employ both unsupervised and supervised machine learning techniques.
My research focuses on the intended and unintended consequences of language in financial markets. I examine this relationship across a number of contexts, such as the Federal Reserve, initial public offerings, and mergers and acquisitions. More broadly, my work aims to develop new theoretical and methodological approaches to understand the role of language in society.
Professor Saigal has held faculty positions at the Haas School of Business, Berkeley, and in the Department of Industrial Engineering and Management Sciences at Northwestern University, was a researcher at the Bell Telephone Laboratories, and has held numerous short-term visiting positions. He currently teaches courses in Financial Engineering. In the recent past he taught courses in optimization and Management Science. His current research involves data-based studies of operational problems in the areas of Finance, Transportation, Renewable Energy, and Healthcare, with an emphasis on the management and pricing of risks. This involves the use of data analytics, optimization, stochastic processes, and financial engineering tools. His earlier research involved theoretical investigation into interior point methods, large-scale optimization, and software development for mathematical programming. He is the author of two books on optimization and a large number of publications in top refereed journals. He has been an associate editor of Management Science and is a member of SIAM, AMS, and AAAS. He has served as the Director of the interdisciplinary Financial Engineering Program and as the Director of Interdisciplinary Professional Programs (now Integrative Design + Systems) at the College of Engineering.
My research focuses on the development and application of machine learning tools to large-scale financial and unstructured (textual) data to extract, quantify, and predict the risk profiles and investment-grade ratings of private and public companies. Example datasets include social media and financial aggregators such as Bloomberg, Pitchbook, and Privco.
Luis Ortiz, PhD, is Assistant Professor of Computer and Information Science, College of Engineering and Computer Science, The University of Michigan, Dearborn.
The study of large complex systems of structured strategic interaction, such as economic, social, biological, financial, or large computer networks, provides substantial opportunities for fundamental computational and scientific contributions. Luis's research focuses on problems emerging from the study of systems involving the interaction of a large number of "entities," his way of abstractly and generally capturing individuals, institutions, corporations, biological organisms, or even the individual chemical components of which they are made (e.g., proteins and DNA). Current technology has facilitated the collection and public availability of vast amounts of data, particularly data capturing system behavior at fine levels of granularity. Luis's group studies behavioral data of a strategic nature at big-data scale. One of the group's main objectives is to develop computational tools for data science, in particular for learning large-population models from such big sources of behavioral data, which can later be used to study, analyze, predict, and alter future system behavior at a variety of scales, and thus improve the overall efficiency of real-world complex systems (e.g., the smart grid, social and political networks, independent security and defense systems, and microfinance markets, to name a few).
Prof. Lenk develops Bayesian models that disaggregate data to address individuals. He also studies Bayesian nonparametric methods, currently with a focus on shape constraints. Prof. Lenk teaches and uses data mining methods such as recursive partitioning and neural networks.
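As a minimal sketch of recursive partitioning in the generic CART sense mentioned above (not Prof. Lenk's code; the single-feature setting, function names, and default parameters are illustrative), the method greedily picks the split that most reduces squared error, then recurses on each side:

```python
def sse(ys):
    """Sum of squared errors around the mean of the node."""
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)


def best_split(xs, ys):
    """Threshold on x minimizing total within-child SSE; (None, SSE) if no split helps."""
    best = (None, sse(ys))
    for thr in sorted(set(xs))[1:]:
        left = [y for x, y in zip(xs, ys) if x < thr]
        right = [y for x, y in zip(xs, ys) if x >= thr]
        score = sse(left) + sse(right)
        if score < best[1]:
            best = (thr, score)
    return best


def grow(xs, ys, depth=2, min_size=5):
    """Recursively partition the data; a leaf predicts the node mean."""
    thr, _ = best_split(xs, ys)
    if depth == 0 or thr is None or len(ys) < 2 * min_size:
        return sum(ys) / len(ys)
    left = [(x, y) for x, y in zip(xs, ys) if x < thr]
    right = [(x, y) for x, y in zip(xs, ys) if x >= thr]
    return (thr,
            grow([x for x, _ in left], [y for _, y in left], depth - 1, min_size),
            grow([x for x, _ in right], [y for _, y in right], depth - 1, min_size))


def predict(tree, x):
    """Descend the nested (threshold, left, right) tuples to a leaf value."""
    while isinstance(tree, tuple):
        thr, left, right = tree
        tree = left if x < thr else right
    return tree
```

On a step-function response the tree recovers the step exactly: the first split lands at the jump, and each child collapses to a constant leaf.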