Systemic algorithmic harms in the mortgage industry
Lu Xian, Ph.D. Student, School of Information, University of Michigan
Matthew Bui, Assistant Professor, School of Information, University of Michigan
Abigail Jacobs, Assistant Professor, School of Information, University of Michigan
Algorithms encode social inequalities and induce harms that reverberate beyond their immediate context. To articulate how algorithmic harms come about, we look to a key locus of individual, intergenerational, and community opportunity: the mortgage industry. While racialized harms have historically been a part of the mortgage industry, attention to algorithmic harms and fairness-related issues in the mortgage context has largely focused on one particular outcome: the decision to approve or deny a mortgage loan. Meanwhile, interventions to mitigate algorithmic harms often focus on immediate technical fixes to particular algorithmic outcomes, not the unequal social contexts that enable those harms in the first place. We argue for a conceptualization of systemic harms of algorithms in the mortgage industry. Understanding that algorithmic harms are systemic requires understanding how harms arise through different but interdependent algorithmically mediated interactions. Interventions for mitigating algorithmic harms should build upon efforts to specify the precise ways harms are enacted by and through algorithmic systems.
We document how and when algorithms interact with, and amplify, the unequal aspects of the mortgage industry. By documenting these systemic harms, we highlight the harmful impacts that lie beneath the seemingly positive expansion of opportunity and inclusion of minority communities promised by the adoption of algorithmic decision-making systems. This analysis expands the scope of focus from the individual-based harms of an application denial to a broader, community-wide, and sociohistorical conception of harm. We call for making legible the ways algorithms reinforce and exacerbate injustices, and we urge future interventions to acutely account for context and community-based harms moving forward. We provide lessons drawn from the mortgage industry for identifying and addressing systemic algorithmic harms.
Trustworthiness and Explainable AI: Perspectives from Advanced Manufacturing
Joseph (Yossi) Cohen, Schmidt AI in Science Fellow, Michigan Institute for Data Science (MIDAS), University of Michigan
Advanced manufacturing systems require robust, timely, and trustworthy decision-making supported by real-time data. While machine learning has shown promising potential for automating certain tasks, engineering experts remain skeptical that applying data-driven techniques at scale will improve key performance indicators such as cost reduction, yield, quality, and sustainability. This talk will examine some of the existing “trustworthiness gaps” as Industry 4.0 technologies continue to disrupt the landscape of manufacturing production and operation. To address these gaps, the talk will discuss perspectives on human-centered augmented intelligence built on five key pillars: accessibility, computational efficiency, reliability, robustness, and explainability. Particular attention will be paid to the emergence of explainable artificial intelligence (XAI) techniques and their potential role in changing how human experts can interface and interact with Industrial AI systems. Finally, the talk will conclude with a brief case study illustrating current research on industrial prognostics, allowing for a discussion of future research priorities.
Detecting and Countering Untrustworthy Artificial Intelligence (AI)
Nikola Banovic, Assistant Professor, Electrical Engineering and Computer Science, University of Michigan
The ability to distinguish trustworthy from untrustworthy Artificial Intelligence (AI) is critical for broader societal adoption of AI. Yet, existing Explainable AI (XAI) methods attempt to persuade end-users that an AI is trustworthy by justifying its decisions. Here, we first show how untrustworthy AI can misuse such explanations to exaggerate its competence under the guise of transparency to deceive end-users—particularly those who are not savvy computer scientists. Then, we present findings from the design and evaluation of two alternative XAI mechanisms that help end-users form their own explanations about the trustworthiness of AI. We use our findings to propose an alternative framing of XAI that helps end-users develop the AI literacy they require to critically reflect on AI and assess its trustworthiness. We conclude with implications for future AI development and testing, public education and investigative journalism about AI, and end-user advocacy to increase access to AI for a broader audience of end-users.
Development of Understandable Artificial Intelligence (UAI) Methods in Physical Sciences
Y Z, Professor, Nuclear Engineering and Radiological Sciences, University of Michigan
Despite the booming applications of AI/ML/DS methods in almost every field, one enduring challenge is the lack of explainability of the present approaches. Not being able to interpret black-box computer models with human-understandable knowledge greatly hinders our trust in them and their deployment. Therefore, the development of Understandable/eXplainable/interpretable Artificial Intelligence (UAI/XAI) is considered one of the main challenges. Physics and the broader physical sciences provide established ground truths and thus can serve as testbeds for the development of new UAI methods. To stimulate discussions, I will briefly describe one example of our research, where we used computational topology tools, namely the Morse-Smale complex and sublevel-set persistent homology, to produce human-understandable interpretations of autoencoder-learned collective variables in atomistic trajectories. The goal of this talk is to brainstorm and foster collaboration opportunities.
From Extraction to Empowerment: Recent developments in Community-Based Computing Infrastructures
Kwame Porter Robinson, Ph.D. Student, School of Information, University of Michigan
Ron Eglash, Professor, School of Information, University of Michigan
Lionel Robert, Professor, School of Information, University of Michigan
Mark Guzdial, Professor, Electrical Engineering & Computer Science, University of Michigan
Audrey Bennett, Professor, Penny W Stamps School of Art and Design, University of Michigan
The interplay between extractive mass-production, platform domination, and accelerating automation has deeply embedded social ills, such as environmental devastation and wealth inequality, into our economic networks. But another infrastructure is possible! Community-Based Infrastructures (CBI) are a growing field of inquiry in which AI, digital fabrication, and emerging technologies combine with worker-owned businesses, localized sustainability, and related counter-hegemonic techniques. The end-goal is the participatory design of a new kind of computing infrastructure, aimed at empowering low-income communities in the development of sustainable, egalitarian, and democratically governed economies.
Funded by the National Science Foundation (NSF), we examine how participatory experiments involving AI, digital fabrication, and other techniques can build CBIs from the bottom up, starting with locally owned artisanal enterprises. These experiments align with the literature on solidarity economies and AI ethics by incorporating ethical computation alongside physical fabrication and transportation concerns. Here we utilize two approaches at three scales.
At the micro-scale, digital fabrication is used to enhance product variety and eliminate tedious aspects of artisanal labor, allowing more time and focus on creativity for artisanal businesses such as textiles, urban farming, and beauty salons. At the meso-scale, we create intervening technologies for goods delivery, using routing algorithms and deliberative consumption as ways of reconnecting people and production. Our initial work involves multiplex routing for economic, social, and environmental sustainability through a goods delivery application in a Detroit network of producers and consumers. At the macro-scale, our experiments integrate into a platform called Artisanal Futures, which promotes participatory AI development within businesses and their communities. This platform can be thought of as another possible infrastructure, a computational modernization of older Indigenous traditions, formalized in Ostrom’s work on common-pool resource management. Through CBIs, we aim to promote social, economic, and environmental empowerment that benefits both people and the planet.