Overview
Artificial Intelligence (AI) is transforming healthcare in major ways. It has the potential to help clinicians make better diagnoses and offer personalized treatment, while also promising to revolutionize how healthcare is delivered. However, healthcare systems that deploy AI, and patients whose care is affected by it, face many ethical considerations. How will access to appropriate data be balanced with privacy? Who is responsible for the quality of AI tools? Are AI technologies contributing to or mitigating health inequities? Are the AI systems we are building trustworthy?
The Michigan Institute for Data and AI in Society (MIDAS), the Learning Health System Collaboratory, the E-Health and Artificial Intelligence (e-HAIL) program, and Trust, Innovation and Ethics Research for Responsible AI (TIERRA) invite you to a joint mini-symposium featuring prominent speakers from the U.S. and Canada to explore ethical and regulatory issues in health AI. The event aims to stimulate new ideas and collaboration for the development and implementation of ethical and trustworthy AI systems for healthcare.
Speakers and Program
Arthur Lupia, Gerald R. Ford Distinguished University Professor of Political Science; Professor of Political Science, College of Literature, Science, and the Arts; Research Professor, Center for Political Studies, Institute for Social Research; and Interim Vice President for Research and Innovation, Office of the Vice President for Research, University of Michigan
H.V. Jagadish, Edgar F. Codd Distinguished University Professor of Electrical Engineering and Computer Science; Bernard A. Galler Collegiate Professor of Electrical Engineering and Computer Science; Professor of Electrical Engineering and Computer Science, College of Engineering; and Director of the Michigan Institute for Data and AI in Society, University of Michigan
Karandeep Singh, Chief Health AI Officer, UC San Diego Health; Joan and Irwin Jacobs Endowed Chair in Digital Health Innovation, UC San Diego School of Medicine
Sharon E. Davis, PhD, MS; Research Assistant Professor, Department of Biomedical Informatics; Vanderbilt University Medical Center
Successfully deploying impactful clinical AI tools is no small feat. Not only must we navigate clinical, technical, sociotechnical, and ethical challenges, but most critically, we ask patients and providers to trust and rely on these tools when making important health decisions. Such efforts compel us to be responsible stewards and ensure AI tools consistently perform as promised—overall and across demographic, clinical, and geographic populations. Learning prediction systems, an extension of the learning health system paradigm, can enable sustainable AI tools and minimize disruptions resulting from deploying these tools in evolving clinical environments. We will explore a comprehensive approach to post-deployment monitoring and maintenance of clinical prediction and AI tools, including challenges and opportunities to foster health and equity through model sustainability.
Sharon E. Davis, PhD, is a Research Assistant Professor of Biomedical Informatics. She is a biomedical informatician with formal statistical training who focuses on the development and maintenance of predictive models to support practical, implementable clinical prediction tools. Dr. Davis received an A.B. in Environmental Sciences and Policy from Duke University, a Master's in Statistics from North Carolina State University, and a PhD in Biomedical Informatics from Vanderbilt University. Her career is guided by a commitment to leveraging health and data sciences to develop tools that empower individuals, promote healthy communities, and reduce health disparities. For over a decade, Dr. Davis served as a statistician and environmental scientist at Duke University and the University of Michigan. Her research emphasized the use of spatial analysis to address questions of maternal and child public health, as well as environmental justice. Key projects explored associations between air pollution, the built environment, psychosocial health, and pregnancy outcomes. This research led to practical solutions for community partners, including tools to support targeted community lead screening and data-driven community advocacy.
Yasir Tarabichi, MD, MSCR; Chief Medical Information Officer, Ovatient (a MetroHealth venture); Director of Clinical Research Informatics, MetroHealth; Associate Professor of Medicine, Case Western Reserve University School of Medicine
Dr. Tarabichi will discuss his safety-net healthcare organization’s journey in responsibly assessing, validating, and implementing AI-driven decision support solutions through a controlled quality improvement framework.
Dr. Tarabichi is a physician informaticist and practicing pulmonary and critical care specialist at MetroHealth, and an Associate Professor of Medicine at the Case Western Reserve University School of Medicine.
He serves as the Chief Medical Informatics Officer and Interim Medical Director of Ovatient, an intrapreneurial virtual care collaborative launched by MetroHealth and the Medical University of South Carolina.
Dr. Tarabichi’s research interests include studying the impact of advanced clinical decision support modalities on care processes and outcomes, with an emphasis on the responsible implementation of AI-driven clinical support systems. As Director of Clinical Research Informatics, he has developed essential tools that empower researchers through robust informatics resources while championing the importance of data and analytical literacy. Dr. Tarabichi supports several national data standardization and aggregation initiatives and serves as an elected representative on the Cosmos Governing Council. He is currently collaborating on an NIH-funded study that models digital twin neighborhoods from anonymized EHR data to better simulate strategies that address place-based inequities in care.
Dr. Tarabichi is also a member of the American Thoracic Society and the American Medical Informatics Association, where he is the elected chair of the Clinical Research Informatics Working Group.
Moderated by Karandeep Singh
Jing Liu, Executive Director, Michigan Institute for Data and AI in Society, University of Michigan
Jennifer L. Gibson, PhD, MA, BA, BSc; Director, University of Toronto Joint Centre for Bioethics; Sun Life Financial Chair in Bioethics; University of Toronto
AI holds promise for improving health. As with many technological innovations, however, the proliferation of AI applications in health is outpacing the development of ethics guidance, regulation and policy. It often seems to be a foregone conclusion that AI will and should be used in healthcare and public health and that the primary ethical imperative of governments, policymakers and developers is to anticipate, mitigate and address risks associated with its use. Although this risk-based approach is pragmatic and arguably a realistic assessment of the inevitability of AI innovation efforts in health, are we losing sight of the core mission of healthcare and public health? In this presentation, I will explore the possibility of a mission-driven approach to AI innovation that sustains a primary human-centred focus on the health and wellbeing of patients and populations locally and globally.
Professor Jennifer Gibson is the Sun Life Financial Chair in Bioethics and Director of the Joint Centre for Bioethics, and Associate Professor in the Division of Clinical Public Health and the Institute of Health Policy, Management & Evaluation, Dalla Lana School of Public Health, University of Toronto. She also leads the Centre for Resilience in the Institute for Pandemics at the Dalla Lana School of Public Health. Jennifer holds a PhD in Philosophy (bioethics and political theory). Her program of research and teaching focuses on health system and policy ethics. She is particularly interested in the role and interaction of values in decision-making at different levels of the health system.
Kellie Owens, PhD; Assistant Professor, Department of Population Health; NYU Grossman School of Medicine
This talk explores ethical considerations that arise in the development and deployment of artificial intelligence (AI) and machine learning (ML) in healthcare, including bias, equity, privacy, safety, and transparency. Presenting findings from two empirical bioethics research projects, I first explore how to ethically and effectively govern AI systems and models in healthcare settings, drawing on in-depth interviews with key stakeholders, including data scientists, clinicians, and regulators. Second, I examine how patient-facing generative AI tools, such as automated clinical documentation and AI-drafted patient messaging in online health portals, may affect trust and empathy between clinicians and patients. Both projects seek to guide healthcare systems in navigating the ethical complexities of adopting new technologies, ensuring that advancements in AI serve to strengthen, rather than erode, the foundational trust in our healthcare systems and patient-clinician relationships.
I am a medical sociologist and empirical bioethicist whose work focuses on the ethical use of health information technologies. I am particularly interested in understanding when and how new technologies worsen or improve health inequities. My most recent projects explore the actionability of genomic data for healthy populations. I am also interested in developing better social and technical infrastructures to support artificial intelligence and machine learning (AI/ML) tools in healthcare.
My research is supported by an early career award from the National Human Genome Research Institute (NHGRI) at the National Institutes of Health (NIH), and has won awards from the American Sociological Association, the American Anthropological Association, and the Society for Social Studies of Science (4S).
Moderator: Jodyn Platt, Associate Professor of Learning Health Sciences, University of Michigan Medical School