Overview
About: Every day, whether we realize it or not, we are surrounded by AI technology. From self-driving cars and facial recognition software to fraud-prevention models, recommender systems, and ChatGPT, AI is rapidly transforming our lives. But do we fully comprehend the range of potential ethical implications of its use and regulation? This event will stimulate ideas and investigation into that question by bringing together academics, private-sector leaders and scientists, and policy experts to share their knowledge and discuss ethical challenges and trends in AI regulation, along with cutting-edge theory and implementation of ethical and transparent AI models. The event is free and open to all who develop AI methods, are current or future users of AI, or are curious about how AI will shape research and our society.
Organizers: As a facilitator of the development and application of data science (DS) and AI techniques for the broad U-M data science community, MIDAS also has the mission of promoting ethical research. In fact, one of the five research pillars that MIDAS supports is ‘Responsible Research’, focused on enhancing the scientific and societal impact of DS and AI, especially by fostering discussion and growth in the field of ethical AI. Likewise, as a prominent player in the private sector, Rocket Companies constantly strives to learn and apply responsible, cutting-edge AI tools. United by a common interest in ethical AI, MIDAS and Rocket Companies invite you to share your views and learn together about breakthroughs and pressing issues in the field.
Post-event Summary: MIDAS hosts forum on ethics in artificial intelligence (The Michigan Daily)
Schedule
H.V. Jagadish, Director, Michigan Institute for Data Science; Edgar F. Codd Distinguished University Professor and Bernard A. Galler Collegiate Professor of Electrical Engineering and Computer Science
Brian Stucky, Team Lead, Rocket Ethical AI
Lucia Wang, Data Scientist, Rocket Ethical AI
Trevor Ferry, Senior Product Owner, Rocket Ethical AI
Ameya Diwan, Ethical AI Analyst, Rocket Ethical AI
Systemic Algorithmic Harms in the Mortgage Industry
Lu Xian, Ph.D. Student, School of Information, University of Michigan
Matthew Bui, Assistant Professor, School of Information, University of Michigan
Abigail Jacobs, Assistant Professor, School of Information, University of Michigan
Algorithms encode social inequalities and induce harms that reverberate beyond their immediate context. To articulate how algorithmic harms come about, we look to a key locus of individual, intergenerational, and community opportunity: the mortgage industry. While racialized harms have historically been a part of the mortgage industry, attention to algorithmic harms and fairness-related issues in the mortgage context has largely focused on one particular outcome: the decision to approve or deny a mortgage loan. Meanwhile, interventions to mitigate algorithmic harms often focus on immediate technical fixes to particular algorithmic outcomes, not on the unequal social contexts that enable those harms in the first place. We argue for a conceptualization of the systemic harms of algorithms in the mortgage industry. Understanding that algorithmic harms are systemic requires understanding how harms arise through different but interdependent algorithmically mediated interactions. Interventions for mitigating algorithmic harms should build upon efforts to specify the precise ways harms are enacted by and through algorithmic systems.
We document how and when algorithms interact with, and amplify, the unequal aspects of the mortgage industry. By documenting these systemic harms, we highlight the harmful impacts behind the seemingly positive expansion of opportunity and inclusion of minority communities that the adoption of algorithmic decision-making systems promises. This analysis expands the scope of focus from the individual harms of an application denial to a broader, community-wide, and sociohistorical conception of harm. We call for making legible the ways algorithms reinforce and exacerbate injustices, and we urge future interventions to acutely account for context and community-based harms moving forward. We provide lessons drawn from the mortgage industry for identifying and addressing systemic algorithmic harms.
Trustworthiness and Explainable AI: Perspectives from Advanced Manufacturing
Joseph (Yossi) Cohen, Schmidt AI in Science Fellow, Michigan Institute for Data Science (MIDAS), University of Michigan
Advanced manufacturing systems require robust, timely, and trustworthy decision-making supported by real-time data. While machine learning has shown promising potential for automating certain tasks, engineering experts remain skeptical that applying data-driven techniques at scale will improve key performance indicators such as cost reduction, yield, quality, and sustainability. This talk will examine some of the existing “trustworthiness gaps” as Industry 4.0 technologies continue to disrupt the landscape of manufacturing production and operation. To address these gaps, the talk will discuss perspectives on human-centered augmented intelligence built on five key pillars: accessibility, computational efficiency, reliability, robustness, and explainability. Particular attention will be paid to the emergence of explainable artificial intelligence (XAI) techniques and their potential role in changing how human experts interface and interact with Industrial AI systems. Finally, the talk will conclude with a brief case study illustrating current research on industrial prognostics, allowing for a discussion on future research priorities.
Detecting and Countering Untrustworthy Artificial Intelligence (AI)
Nikola Banovic, Assistant Professor, Electrical Engineering and Computer Science, University of Michigan
The ability to distinguish trustworthy from untrustworthy Artificial Intelligence (AI) is critical for broader societal adoption of AI. Yet, existing Explainable AI (XAI) methods attempt to persuade end-users that an AI is trustworthy by justifying its decisions. Here, we first show how untrustworthy AI can misuse such explanations to exaggerate its competence under the guise of transparency and deceive end-users, particularly those who are not savvy computer scientists. Then, we present findings from the design and evaluation of two alternative XAI mechanisms that help end-users form their own explanations about the trustworthiness of AI. We use our findings to propose an alternative framing of XAI that helps end-users develop the AI literacy they require to critically reflect on AI and assess its trustworthiness. We conclude with implications for future AI development and testing, public education and investigative journalism about AI, and end-user advocacy to increase access to AI for a broader audience of end-users.
Development of Understandable Artificial Intelligence (UAI) Methods in Physical Sciences
Y Z, Professor, Nuclear Engineering and Radiological Sciences, University of Michigan
Despite the booming applications of AI/ML/DS methods in almost every field, one enduring challenge is the lack of explainability of present approaches. Not being able to interpret black-box computer models with human-understandable knowledge greatly hinders our trust in them and their deployment. Therefore, the development of Understandable/eXplainable/interpretable Artificial Intelligence (UAI/XAI) is considered one of the main challenges. Physics and the broader physical sciences provide established ground truths and thus can serve as testbeds for the development of new UAI methods. To stimulate discussions, I will briefly describe one example of our research, in which we used tools from algebraic topology, namely the Morse-Smale complex and sublevel-set persistent homology, to produce human-understandable interpretations of autoencoder-learned collective variables in atomistic trajectories. The goal of this talk is to brainstorm and foster collaboration opportunities.
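To give a flavor of this kind of analysis, here is a minimal, hypothetical sketch (not the speaker’s actual pipeline): it projects a trajectory onto learned low-dimensional coordinates and summarizes their topology with persistent homology, whose long-lived features are the sort of human-readable structure the talk describes. The PCA stand-in for a trained autoencoder, the random data, and the lifetime threshold are all assumptions for illustration.

```python
# Illustrative sketch: topological summary of autoencoder-style
# collective variables. Assumes the `ripser` and `scikit-learn` packages.
import numpy as np
from sklearn.decomposition import PCA  # stand-in for a trained autoencoder
from ripser import ripser

rng = np.random.default_rng(0)
trajectory = rng.normal(size=(500, 30))  # hypothetical atomistic descriptors
latent = PCA(n_components=2).fit_transform(trajectory)  # "collective variables"

# Persistence diagrams of the latent point cloud; long-lived H0/H1 features
# correspond to clusters and loops that a human can inspect and interpret.
diagrams = ripser(latent, maxdim=1)["dgms"]
for dim, dgm in enumerate(diagrams):
    lifetimes = dgm[:, 1] - dgm[:, 0]
    persistent = np.sum(np.isfinite(lifetimes) & (lifetimes > 0.5))
    print(f"H{dim}: {persistent} features with lifetime > 0.5")
```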
From Extraction to Empowerment: Recent developments in Community-Based Computing Infrastructures
Kwame Porter Robinson, Ph.D. Student, School of Information, University of Michigan
Ron Eglash, Professor, School of Information, University of Michigan
Lionel Robert, Professor, School of Information, University of Michigan
Mark Guzdial, Professor, Electrical Engineering & Computer Science, University of Michigan
Audrey Bennett, Professor, Penny W Stamps School of Art and Design, University of Michigan
The interplay between extractive mass production, platform domination, and accelerating automation has deeply embedded social ills, such as environmental devastation and wealth inequality, into our economic networks. But another infrastructure is possible! Community-Based Infrastructures (CBIs) are a growing field of inquiry in which AI, digital fabrication, and emerging technologies combine with worker-owned businesses, localized sustainability, and related counter-hegemonic techniques. The end goal is the participatory design of a new kind of computing infrastructure, aimed at empowering low-income communities in the development of sustainable, egalitarian, and democratically governed economies.
Funded by the National Science Foundation (NSF), we examine how participatory experiments involving AI, digital fabrication, and other techniques can build CBIs from the bottom up, starting with locally owned artisanal enterprises. These experiments align with the literature on solidarity economies and AI ethics by incorporating ethical computation alongside physical fabrication and transportation concerns. Here we utilize two approaches at three scales.
At the micro scale, digital fabrication is used to enhance product variety and eliminate tedious aspects of artisanal labor, allowing more time and focus on creativity for artisanal businesses such as textiles, urban farming, and beauty salons. At the meso scale, we create intervening technologies for goods delivery, using routing algorithms and deliberative consumption as ways of reconnecting people and production. Our initial work involves multiplex routing for economic, social, and environmental sustainability through a goods-delivery application in a Detroit network of producers and consumers. At the macro scale, our experiments integrate into a platform called Artisanal Futures, which promotes participatory AI development within businesses and their communities. This platform can be thought of as another possible infrastructure, a computational modernization of older Indigenous traditions, often formalized in Ostrom’s work on common-pool resource management. Through CBIs, we aim to promote social, economic, and environmental empowerment that benefits both people and the planet.
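As a rough illustration of what multiplex routing can mean in practice, the sketch below scores each delivery leg on several criteria at once (monetary cost, emissions, and a social-value term) and routes on their weighted combination. The graph, edge attributes, and weights are hypothetical stand-ins, not the project’s actual model or data.

```python
# Illustrative multiplex routing sketch (hypothetical network and weights).
import networkx as nx

G = nx.DiGraph()
# Each edge carries several "layers": dollar cost, kg CO2, social-value score.
G.add_edge("farm", "hub",    cost=4.0, co2=1.2, social=0.9)
G.add_edge("farm", "market", cost=2.5, co2=3.0, social=0.2)
G.add_edge("hub",  "market", cost=1.0, co2=0.4, social=0.8)

def multiplex_weight(u, v, d, alpha=1.0, beta=2.0, gamma=1.5):
    # Combine the layers: lower is better, social value counts as a benefit.
    # Clamped at zero to respect Dijkstra's non-negative-weight assumption.
    return max(0.0, alpha * d["cost"] + beta * d["co2"] - gamma * d["social"])

path = nx.shortest_path(G, "farm", "market", weight=multiplex_weight)
print(path)  # ['farm', 'hub', 'market']: the indirect route wins on combined score
```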
Jenna Wiens, Associate Professor, Computer Science and Engineering, University of Michigan
Jenna Wiens is an Associate Professor of Computer Science and Engineering (CSE), Associate Director of the Artificial Intelligence Lab, and co-Director of Precision Health at the University of Michigan in Ann Arbor. Her primary research interests lie at the intersection of machine learning and healthcare. Wiens received her PhD from MIT in 2014, and her notable achievements include an NSF CAREER award in 2016, being named as an Innovator Under 35 by the MIT Tech Review in 2017, and receiving a Sloan Research Fellowship in Computer Science.
- Moderator: David Corliss, AVP, Data Science, OnStar Insurance
- Jenna Wiens, Associate Professor, Computer Science and Engineering, University of Michigan
- Brian Stucky, Team Lead, Rocket Ethical AI
Dallas Card, Assistant Professor, School of Information, University of Michigan
Dallas Card is an Assistant Professor in the School of Information at the University of Michigan, where his research focuses on making machine learning more reliable and responsible, and on using machine learning and natural language processing to learn about society from text. His work received a best short paper nomination at ACL 2019, a distinguished paper award at FAccT 2022, and has been covered in Vox, Wired, and other outlets. Prior to starting at Michigan, Dallas was a postdoctoral researcher with the Stanford Natural Language Processing Group and the Stanford Data Science Institute, and received his Ph.D. in Machine Learning from Carnegie Mellon University.
On the Interaction between Robustness and Fairness in Machine Learning
Han Xu, Ph.D. Student, Department of Computer Science and Engineering, Michigan State University
As machine learning models become increasingly prevalent, there is a growing demand to enhance their trustworthiness. Two important aspects of this are adversarial robustness, which refers to the ability of a model to resist attacks on its data inputs, and fair machine learning, which ensures that different groups and different individuals are treated equally. However, when deploying robust methods to enhance adversarial resistance, can we unintentionally create unfairness towards certain groups? Conversely, might methods that enhance fairness introduce new risks of attack? In this seminar, I will present my recent studies addressing these two questions and introduce novel techniques to enhance both the robustness and fairness of machine learning models.
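One concrete way to see the tension the talk asks about is to measure robust accuracy separately for each sensitive group. The sketch below is a hypothetical illustration, not the speaker’s method: it perturbs inputs with a one-step FGSM attack and reports per-group accuracy on the perturbed data, where a large gap between groups would signal robustness-induced unfairness. The toy model, data, and attack budget are all assumptions.

```python
# Hedged sketch: per-group robust accuracy under a simple FGSM attack.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.1):
    """One-step FGSM attack: perturb x along the sign of the loss gradient."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def per_group_robust_accuracy(model, x, y, group):
    """Robust accuracy computed separately for each sensitive group."""
    x_adv = fgsm(model, x, y)
    pred = model(x_adv).argmax(dim=1)
    return {g.item(): (pred[group == g] == y[group == g]).float().mean().item()
            for g in group.unique()}

# Hypothetical usage with a toy linear model and random data:
model = nn.Linear(10, 2)
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
group = torch.randint(0, 2, (64,))  # e.g. a binary sensitive attribute
print(per_group_robust_accuracy(model, x, y, group))
```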
Design Fiction or Design Engineering? A Speculative Sandbox for Ethical Decision-Making
Elisa Ngan, Assistant Professor of Practice, Urban Technology, Taubman College of Architecture and Urban Planning, University of Michigan
Software companies change quickly. Regulatory decisions change slowly. What is considered ethical will change as social values change, whilst machine learning and AI change the speed and composition of these social feedback loops themselves. How might change be managed across these stakeholders to create safe, fair, and equitable software products? Positioned as a piece of magical realism, Origo is a fictional software tool that speculates on whether it is possible for software companies and regulatory agencies to develop, implement, and test ethical protocols.
Should Privacy Rights Constrain Machine Inference? Can They?
Cameron McCulloch, Ph.D. Student, Department of Philosophy, University of Michigan
Can a distribution of goods that (a) arises from a just initial distribution and (b) evolves through legitimate steps ever be unjust? In his classic, Anarchy, State, and Utopia (1974), Robert Nozick said “No.”
This paper asks a similar question about inferences made by machine learning algorithms: If an initial set of data, D, is acquired justly (whether by a corporation, state, or individual actor), are there any inferences from D that are illegitimate? Widely shared moral intuitions suggest the answer must be “Yes.” In particular, it is often suggested that privacy rights ought to constrain the sorts of inferences companies are licensed to make about individuals on the basis of machine learning. (Consider the much-discussed Target “pregnancy case,” in which a young woman was outed as pregnant to her family by marketing materials Target sent her on the basis of an inference that she was pregnant, a fact she had never shared with Target.)
People have gestured at a variety of reasons that seem to draw a distinction between ordinary human inference and machine inference, reasons that supposedly ground a moral limitation on machine inference: unfair inferential power, the use of illegitimate statistical generalizations, and more. But it is surprisingly hard to come up with a well-articulated constraint that is both morally non-arbitrary and practically tenable. In this “problem paper,” I present five reasons why it is difficult to come up with a principled constraint on machine inference that does not also impinge on individual cognitive liberty. The difficulties are both theoretical (skirting moral arbitrariness) and practical (shaping unsupervised deep learning networks).
Merve Hickok, President, Center for AI & Digital Policy
Merve Hickok is the President and Research Director of the Center for AI & Digital Policy. The Center educates AI policy practitioners and advocates across 60+ countries and leads a research group that advises international organizations (such as the European Commission, UNESCO, the Council of Europe, and the OECD) on AI policy and regulatory developments. She is also a Data Ethics Lecturer at the University of Michigan School of Information and the founder of AIethicist.org. She is a researcher, trainer, and consultant working on AI policy, governance, and regulation, focusing on AI bias and the impact of AI systems on fundamental rights, democratic values, and social justice. She provides consultancy and training services to private and public organizations on Responsible AI: the ethical and responsible development, use, and governance of AI.
Merve also works with several non-profit organizations globally to advance both academic and professional research in this field for underrepresented groups. She has been recognized by a number of organizations, most recently as one of the 100 Brilliant Women in AI Ethics™ (2021) and as runner-up for Responsible AI Leader of the Year (Women in AI, 2022).
- Moderators: Michigan Data Science Fellows Elyse Thulin, efrén cruz cortés, and Bernardo Modenesi
- Dallas Card, Assistant Professor, School of Information, University of Michigan
- Merve Hickok, President, Center for AI & Digital Policy
Organizers
Bernardo Modenesi
Data Science Fellow
Michigan Institute for Data Science
The labor market is a setting increasingly disrupted by AI (in the allocation of both wages and jobs) and yet understudied in the ethical AI research space. My research agenda focuses on combining unsupervised learning methods from network theory with discrete choice tools in order to improve the understanding of labor market dynamics and, consequently, to provide evidence for oversight and regulation toward labor market fairness.
AI also shapes the lives of households through mortgages. I have also been interested in exploring interpretability and fairness questions related to AI-automated decisions in the mortgage industry, in partnership with Rocket Companies. In addition to topics related to the nature of mortgage decision algorithms, I plan to explore the impact of mortgage decisions on opportunities in life.
efrén cruz cortés
Data Science Fellow
Michigan Institute for Data Science
efrén studies the ways algorithms reproduce bias and discrimination. Automated procedures are often designed to mimic the historical data humans have generated; as a result, they unintentionally learn to discriminate on the basis of class, race, gender, and membership in other vulnerable groups. This phenomenon has serious consequences, as it may further economic inequality, deprive the poor of resources, and contribute to the over-incarceration of people of color. efrén’s goal is to understand the dynamics of the system an algorithm belongs to and to assess which structural interventions are the best actions to both avoid discrimination and accomplish the desired goal for the population of interest.
Sponsors
Questions? Contact Us.
Message the MIDAS team: midas-contact@umich.edu