MIDAS researchers are committed to aligning technological growth with human rights, safety, societal values, and policy integrity. Supporting MIDAS’s goals, Microsoft has provided $600,000 in 2024 and another $600,000 in 2025 to support MIDAS Propelling Original Data Science (PODS) projects that focus on enhancing AI policy, developing technical solutions for regulatory compliance, and evaluating AI’s societal impact. This support enables a comprehensive exploration of policy for current and emerging AI technologies, helping to ensure that AI remains a force for positive change.
This funding enabled the following projects:
(2024) A Joint Human-AI Framework for Responsible AI
Rita Chin (College of Literature, Science, and the Arts)
H.V. Jagadish (College of Engineering)
Abstract
Researchers are rapidly incorporating AI into their scholarly work across disciplines. Economists leverage its intelligent responses to conduct behavioral experiments with AI as a human-like subject; healthcare researchers explore diagnostic accuracy and bedside manner with AI as a quasi-physician; and legal scholars posit that legal codes in training datasets may yield AI better aligned with their implicit human values. AI can accelerate knowledge and discovery, but it may also leave human values behind unless methods for centering human needs are incorporated into the best practices of AI researchers. In the emerging AI-in-research landscape, we envision concrete means of enacting guidelines that improve the protection of individuals and societies while boosting AI outcomes. Our analyses will employ mixed, multidisciplinary methods to identify pertinent practices across fields that systematically address human needs and values, and will distill the resulting principles and practices into a Responsible Conduct of AI Research (UM RCAIR) framework and a Code of AI Ethics for best practices in AI research and development.
(2024) Advancing Responsible AI by Rethinking the Roles of Marginalized Communities in the Innovation Lifecycle: Developing the UBEC Approach
Shobita Parthasarathy (Ford School of Public Policy)
Ben Green (School of Information)
Molly Kleinman (Ford School of Public Policy)
Abstract
This project advances knowledge toward a responsible, and specifically more socially equitable and just, AI research ecosystem by developing and evaluating the novel UBEC approach to the innovation lifecycle, which centers the knowledge and needs of marginalized communities and includes expertise across academic disciplines. Collaborating with local community partners, we will produce two kinds of deliverables: 1) technology that is collaboratively designed (e.g., generative AI to help formerly incarcerated people in the greater Detroit metropolitan area understand the rules, regulations, and social services relevant to them); and 2) briefs and reports that build civic capacity for participating in AI-related public and policy discourse (e.g., policy briefs on the use of AI in the criminal legal system). This work will also produce best practices for researchers who seek to advance equitable and just AI, at UM and beyond.
(2024) Innovating, Applying, and Educating on Fairness and Bias Methods for Educational Predictive Models
Christopher Brooks (School of Information)
Libby Hemphill (Institute for Social Research and School of Information)
Allyson Flaster (Institute for Social Research)
Abstract
Educational predictive models must be fair to ensure that all learners who need support are able to get it. Using 20 years of detailed student-level data from 19 universities and colleges, we demonstrate how institutions can train, share, and reuse predictive models in a way that protects learner privacy while performing well across learner identity groups. In addition to this large-scale study, the project provides open educational materials that advance educational data science, equipping educational researchers with state-of-the-art techniques for building fair predictive models.
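As a purely illustrative sketch of the kind of group-wise audit the abstract describes (not the project's actual data or methodology), one might compare a model's accuracy and flag rates across learner identity groups; the file name, column names, and model choice below are hypothetical assumptions.

```python
# Illustrative sketch only: auditing a student-support prediction model
# across learner identity groups. The dataset ("students.csv"), its columns
# ("gpa", "credits", "needs_support", "group"), and the logistic-regression
# model are hypothetical assumptions, not the project's actual methods.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("students.csv")  # hypothetical student-level records
X, y, groups = df[["gpa", "credits"]], df["needs_support"], df["group"]

# Hold out a test set, stratified so every identity group is represented.
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, groups, test_size=0.3, random_state=0, stratify=groups
)

model = LogisticRegression().fit(X_tr, y_tr)
preds = pd.Series(model.predict(X_te), index=X_te.index)

# Per-group accuracy and positive-prediction ("flagged for support") rate:
# large gaps between groups are a signal of potential unfairness that
# merits deeper investigation, not a definitive fairness verdict.
for g in sorted(g_te.unique()):
    mask = g_te == g
    acc = accuracy_score(y_te[mask], preds[mask])
    rate = preds[mask].mean()
    print(f"group={g}: accuracy={acc:.3f}, flagged rate={rate:.3f}")
```

A disaggregated report like this is only a starting point; which metric gaps matter, and how to remedy them, are exactly the kinds of questions the project's materials address.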
(2024) Evaluating Solutions to the Decline of Online Knowledge Communities
Yan Chen (School of Information)
Qiaozhu Mei (School of Information)
Abstract
This proposal was submitted to Track 2 of the PODS grant, Accelerating Responsible AI Research Ecosystems, which addresses the key challenge of developing frameworks and tools that mitigate the impact of AI on society and communities in the public domain. Generative AI (GenAI), notably in the form of Large Language Models (LLMs), has had potentially disruptive impacts on user participation and contributions in online knowledge communities, including Stack Overflow and Wikipedia. This project evaluates team-based solutions aimed at reversing the decline of these communities by designing innovative mechanisms for human-AI collaboration and testing them on Wikipedia. We aim to explore how teams can effectively utilize LLMs in open content production and to develop technological solutions that lower barriers to entry for new editors and integrate them into community norms. Our goal is to attract and retain new members on these platforms. If successful, our research will not only yield new insights into achieving complementarity between teams of human workers and AI in collaborative content creation, but also carry broader implications for the sustainability of online communities and the labor market in the era of AI.
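As a hypothetical illustration of one way an LLM could lower barriers for new editors (the project's actual mechanisms are its own research contribution, not shown here), an assistant might review a newcomer's draft edit against a few community norms before submission; the norms list, prompt wording, and model name below are assumptions for the sketch.

```python
# Hypothetical sketch: an LLM assistant that gives a first-time editor
# norm-based feedback on a draft contribution. The norms, prompt, and
# model name are illustrative assumptions, not the project's design.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

NORMS = [
    "Maintain a neutral point of view.",
    "Cite reliable, published sources for factual claims.",
    "Avoid original research and personal opinion.",
]

def review_draft(draft_text: str) -> str:
    """Ask the model for concrete, norm-based feedback on a draft edit."""
    prompt = (
        "You are helping a first-time Wikipedia editor. Check the draft "
        "below against these community norms and suggest concrete fixes:\n- "
        + "\n- ".join(NORMS)
        + f"\n\nDraft:\n{draft_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_draft("This band is obviously the greatest of all time."))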