MIDAS Reproducibility Challenge Showcase: Thomas Valley

August 25, 2020 1:00 PM - 2:00 PM

Thomas Valley, MD, MSc

Assistant Professor – Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, University of Michigan

Establishing systematic practices to share statistical code

Additional Speakers: Neil Kamdar (Lead Statistician, Institute for Healthcare Policy and Innovation, U-M) & Wyndy Wiitala (Research Health Scientist, U.S. Department of Veterans Affairs)

Abstract: Better communication of analytical approaches would enhance transparency and reproducibility in clinical research. Statistical code sharing offers a solution and is growing in popularity, yet rarely occurs in practice—perhaps because practical ways to accomplish sharing within a modern research environment have yet to be outlined. We present our experiences with statistical code sharing, separating it into two steps: code review and code release. We hope that dissemination, uptake, and feedback will result in code sharing becoming second nature in clinical research.

Co-authors: Neil Kamdar, Wyndy Wiitala, Andrew Ryan, Sarah Seelye, Akbar Waljee, Brahmajee Nallamothu
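
The talk's materials are not reproduced here, but a minimal, purely illustrative Python sketch can suggest what "review-ready" statistical code might look like: analysis parameters kept in one place, a fixed random seed, and a recorded software environment written out next to the results. Every name and value below is hypothetical and is not taken from the presenters' work.

```python
# Illustrative sketch only: three habits that make statistical code easier
# to review and to release publicly. Nothing here is from the talk itself.
import json
import platform
import random
import sys

# Hypothetical analysis parameters, collected in one place so reviewers can
# see every setting the analysis depends on.
PARAMS = {"seed": 2020, "n_bootstrap": 1000, "alpha": 0.05}


def record_environment(path="environment.json"):
    """Write the interpreter, platform, and parameters next to the results,
    so a reader of the released code can reproduce the computational context."""
    info = {
        "python": sys.version,
        "platform": platform.platform(),
        "params": PARAMS,
    }
    with open(path, "w") as f:
        json.dump(info, f, indent=2)


def run_analysis():
    """Stand-in for the real statistical analysis: a toy bootstrap draw."""
    random.seed(PARAMS["seed"])  # fixed seed: reruns give identical output
    return [random.gauss(0, 1) for _ in range(PARAMS["n_bootstrap"])]


if __name__ == "__main__":
    record_environment()
    estimates = run_analysis()
    print(f"bootstrap mean: {sum(estimates) / len(estimates):.4f}")
```

In a review-then-release workflow of the kind the abstract describes, a collaborator would read and rerun a script of this shape before it is published alongside the paper.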

The Reproducibility Showcase features a series of online presentations and tutorials from May to August 2020. Presenters are selected from the MIDAS Reproducibility Challenge 2020.

A significant challenge across scientific fields is the reproducibility of research results and the third-party assessment of that reproducibility. The goal of the MIDAS Reproducibility Challenge is to highlight high-quality, reproducible work at the University of Michigan by collecting examples of best practices across diverse fields. We received a large number of entries that illustrate wonderful work in the following areas:

  1. Theory – A definition of reproducibility and what aspects of reproducibility are critical in a particular domain or in general.
  2. Reproducing a Particular Study – A comprehensive record of parameters and code that allows others to reproduce the results of a particular project.
  3. Generalizable Tools – A general platform for coding or running analyses that standardizes the methods for reproducible results across studies.
  4. Robustness – Metadata, tools, and processes to improve the robustness of results to variations in data, computational hardware and software, and human decisions.
  5. Assessments of Reproducibility – Methods to test the consistency of results from multiple projects, such as meta-analysis or the provision of parameters that can be compared across studies.
  6. Reproducibility under Constraints – Sharing code and/or data to reproduce results without violating privacy or other restrictions.

On Sept. 14, 2020, MIDAS will also host Reproducibility Day, a workshop on the concepts and best practices of research reproducibility. Please save the date.