Cyber-security is a complex, multi-dimensional research field. My research style is interdisciplinary, rooted primarily in economics, econometrics, data science (AI/ML and both Bayesian and frequentist statistics), game theory, and network science. I investigate socially pressing issues that affect the quality of cyber-risk management in modern networked and distributed engineering systems, such as IoT-driven critical infrastructures, cloud-based service networks, and app-based systems (e.g., mobile commerce, smart homes), to name a few. I take delight in proposing data-driven, rigorous, and interdisciplinary solutions both to existing fundamental challenges that pose a practical bottleneck to (cost-)effective cyber-risk management and to emerging cyber-security and privacy issues that might plague modern networked engineering systems. I strive for originality, practical significance, and mathematical rigor in my solutions. One of my primary goals is to get my arms around complex, multi-dimensional information security and privacy problems in a way that helps, informs, and empowers practitioners and policy makers to take the right steps toward making cyber-space more secure.
My primary research focuses on the measurement and monitoring of risks in banks, both at the level of individual banks and at the level of the financial system as a whole. In a recent paper, we developed a high-dimensional statistical approach to measure connectivity across different players in the financial sector, implemented using stock-return data for US banks, insurance companies, and hedge funds. Some of my early research developed analytical tools to measure banks' default risk using option-pricing models and other tools of financial economics. These projects often have a significant empirical component that draws on large financial datasets and econometric tools. Of late, I have been working on several projects related to equity and inclusion in financial markets; these papers use large datasets from financial markets to understand differences in the quantity and quality of financial services received by minority borrowers. A common theme across these projects is causal inference using state-of-the-art econometric tools. Finally, some of my ongoing research projects relate to FinTech, with a focus on credit scoring and online lending.
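As a loose illustration of the kind of connectivity measure described above, the sketch below builds a directed network among institutions from simulated stock returns, drawing an edge when one firm's lagged return carries a large coefficient in a regression of another firm's return on its own lag. The function name, the threshold, and the simulated data are all invented for illustration; the formal high-dimensional approach in the paper is considerably more sophisticated than this toy.

```python
import numpy as np

def lagged_influence_network(returns, threshold=0.3):
    """Crude directed 'connectivity' graph among institutions.

    returns: (T, N) array of daily stock returns, one column per firm.
    An edge i -> j is drawn when firm i's lagged return has a large OLS
    coefficient in a regression of firm j's return on its own lag and
    firm i's lag. The threshold rule is a stand-in for formal inference.
    """
    T, N = returns.shape
    adj = np.zeros((N, N), dtype=bool)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            # regress r_j[t] on [1, r_j[t-1], r_i[t-1]]
            X = np.column_stack([np.ones(T - 1),
                                 returns[:-1, j],
                                 returns[:-1, i]])
            y = returns[1:, j]
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            if abs(beta[2]) > threshold:
                adj[i, j] = True
    return adj

# simulated example: firm B's return is driven by firm A's lagged return
rng = np.random.default_rng(0)
T = 500
a = rng.normal(size=T)                 # firm A: independent noise
b = np.empty(T)
b[0] = rng.normal()
for t in range(1, T):
    b[t] = 0.8 * a[t - 1] + 0.1 * rng.normal()
adj = lagged_influence_network(np.column_stack([a, b]))
```

On these simulated data the procedure recovers the one-way dependence: an edge from A to B but none from B to A.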
My research explores the interplay between corporate decisions and employee actions. I currently use anonymized mobile device data to observe individual behaviors, and employ both unsupervised and supervised machine learning techniques.
My research focuses on the intended and unintended consequences of language in financial markets. I examine this relationship across a number of contexts, such as the Federal Reserve, initial public offerings, and mergers and acquisitions. More broadly, my work aims to develop new theoretical and methodological approaches to understand the role of language in society.
Professor Saigal has held faculty positions at the Haas School of Business at Berkeley and in the Department of Industrial Engineering and Management Sciences at Northwestern University, has been a researcher at Bell Telephone Laboratories, and has held numerous short-term visiting positions. He currently teaches courses in financial engineering; in the recent past he taught courses in optimization and management science. His current research involves data-based studies of operational problems in finance, transportation, renewable energy, and healthcare, with an emphasis on the management and pricing of risks. This work uses data analytics, optimization, stochastic processes, and financial-engineering tools. His earlier research involved theoretical investigation of interior-point methods, large-scale optimization, and software development for mathematical programming. He is the author of two books on optimization and a large set of publications in top refereed journals. He has been an associate editor of Management Science and is a member of SIAM, AMS, and AAAS. He has served as Director of the interdisciplinary Financial Engineering Program and as Director of Interdisciplinary Professional Programs (now Integrative Design + Systems) at the College of Engineering.
My research focuses on the development and application of machine learning tools to large-scale financial and unstructured (textual) data in order to extract, quantify, and predict the risk profiles and investment-grade ratings of private and public companies. Example datasets include social media and financial aggregators such as Bloomberg, Pitchbook, and Privco.
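As a hedged sketch of what applying machine learning to textual risk data can look like at its very simplest, the toy below is a from-scratch multinomial Naive Bayes classifier that labels short text snippets as "risky" or "safe". The class name, labels, and training snippets are all invented stand-ins for the far larger models and real corporate filings used in practice.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesText:
    """Tiny multinomial Naive Bayes text classifier (illustration only)."""

    def __init__(self):
        self.class_counts = Counter()              # docs per label
        self.word_counts = defaultdict(Counter)    # word counts per label
        self.vocab = set()

    def fit(self, docs, labels):
        for doc, lab in zip(docs, labels):
            self.class_counts[lab] += 1
            for w in doc.lower().split():
                self.word_counts[lab][w] += 1
                self.vocab.add(w)

    def predict(self, doc):
        words = doc.lower().split()
        total_docs = sum(self.class_counts.values())
        best, best_lp = None, -math.inf
        for lab in self.class_counts:
            # log prior plus Laplace-smoothed log likelihoods
            lp = math.log(self.class_counts[lab] / total_docs)
            denom = sum(self.word_counts[lab].values()) + len(self.vocab)
            for w in words:
                lp += math.log((self.word_counts[lab][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = lab, lp
        return best

# invented training snippets, not real filings
docs = ["debt default downgrade", "covenant breach default",
        "record profit growth", "strong cash flow"]
labels = ["risky", "risky", "safe", "safe"]
clf = NaiveBayesText()
clf.fit(docs, labels)
```

A call such as `clf.predict("default breach")` then returns the label whose smoothed word frequencies best explain the snippet.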
Luis Ortiz, PhD, is Assistant Professor of Computer and Information Science in the College of Engineering and Computer Science at The University of Michigan, Dearborn.
The study of large, complex systems of structured strategic interaction, such as economic, social, biological, financial, or large computer networks, offers substantial opportunities for fundamental computational and scientific contributions. Luis' research focuses on problems emerging from the study of systems involving the interaction of a large number of "entities," a term he uses to abstractly and generally capture individuals, institutions, corporations, biological organisms, or even the individual chemical components of which they are made (e.g., proteins and DNA). Current technology has facilitated the collection and public availability of vast amounts of data, particularly data capturing system behavior at fine levels of granularity. Luis' group studies behavioral data of a strategic nature at big-data scales. One of its main objectives is to develop computational tools for data science, in particular for learning large-population models from such big sources of behavioral data, which can later be used to study, analyze, predict, and alter future system behavior at a variety of scales, and thus improve the overall efficiency of real-world complex systems (e.g., the smart grid, social and political networks, independent security and defense systems, and microfinance markets, to name a few).
Prof. Lenk develops Bayesian models that disaggregate data to address individuals. He also studies Bayesian nonparametric methods and currently considers shape constraints. Prof. Lenk teaches and uses data-mining methods such as recursive partitioning and neural networks.
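As one concrete (and purely frequentist) stand-in for the shape-constrained estimation mentioned above, the sketch below implements the pool-adjacent-violators algorithm, which fits the least-squares monotone (non-decreasing) curve to a sequence; a Bayesian treatment would instead place a prior over monotone functions. The function name and example data are invented for illustration.

```python
def pava(y):
    """Pool-adjacent-violators: least-squares monotone increasing fit.

    Maintains a stack of blocks (mean, size); whenever a new value
    violates monotonicity, adjacent blocks are merged and averaged.
    """
    blocks = []                         # stack of [block_mean, block_size]
    for v in y:
        blocks.append([float(v), 1.0])
        # merge while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    out = []
    for m, w in blocks:
        out.extend([m] * int(w))
    return out
```

For example, `pava([1, 3, 2])` pools the violating pair (3, 2) into its average, yielding `[1.0, 2.5, 2.5]`.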
Jun Li, PhD, is Assistant Professor in the department of Technology and Operations in the Ross School of Business at the University of Michigan, Ann Arbor.
Jun Li’s main research interests are empirical operations management and business analytics, with special emphases on revenue management, pricing, consumer behavior, and economic and social networks. She has worked extensively with large-scale data, including transactions, pricing, inventory and capacity, consumer online search and click-stream data, supply-chain relationships and disruptions, and clinical and healthcare claims. She won the INFORMS Revenue Management and Pricing Practice Award for her close collaboration with retailing practitioners in implementing best-response pricing algorithms. Her paper on airline pricing and consumer behavior was a finalist for Best Management Science Papers in Operations Management, 2012 to 2014. She is also the principal investigator of a National Science Foundation-funded project, “Gaining Visibility Into Supply Network Risks Using Large-Scale Textual Analysis”. Her work has been covered by The Economist, The New York Times, and Forbes.
My research examines how people make choices in uncertain environments. The general focus is on using statistical models to explain complex decision patterns, particularly sequential choices among related items (e.g., brands in the same category) and dyads (e.g., people choosing one another in online dating), as well as a variety of applications to problems in the marketing domain (e.g., models relating advertising exposures to awareness and sales). The main methods are discrete choice models, ordinarily estimated using Bayesian methods, dynamic programming, and nonparametrics. I’m particularly interested in extending Bayesian analysis to very large databases, especially in ‘fusing’ datasets with only partly overlapping covariates to enable strong statistical identification of models across them.
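To make the Bayesian estimation of discrete choice models concrete, here is a minimal random-walk Metropolis sampler for a binary logit fitted to synthetic choice data. It assumes a flat prior, a single untuned chain, and invented function names; real applications would use richer models, informative priors, and more capable samplers.

```python
import numpy as np

rng = np.random.default_rng(1)

def logit_loglik(beta, X, y):
    """Log-likelihood of a binary logit choice model."""
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    eps = 1e-12                      # guard against log(0)
    return np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def metropolis_logit(X, y, n_iter=3000, step=0.2):
    """Random-walk Metropolis sampler for logit coefficients.

    Flat prior, so the acceptance ratio reduces to the likelihood ratio.
    Returns the full chain of draws, including burn-in.
    """
    beta = np.zeros(X.shape[1])
    ll = logit_loglik(beta, X, y)
    draws = []
    for _ in range(n_iter):
        prop = beta + step * rng.normal(size=beta.shape)
        ll_prop = logit_loglik(prop, X, y)
        if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject
            beta, ll = prop, ll_prop
        draws.append(beta.copy())
    return np.array(draws)

# synthetic choices with a true coefficient of +2 on a single attribute
X = rng.normal(size=(400, 1))
y = (rng.uniform(size=400) < 1 / (1 + np.exp(-2.0 * X[:, 0]))).astype(float)
draws = metropolis_logit(X, y)
posterior_mean = draws[1000:].mean(axis=0)       # discard burn-in
```

With enough data the posterior mean lands near the true coefficient; the same accept/reject skeleton extends to multinomial and hierarchical choice models.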