Charles Mayo


My research interests focus on improving how we care for our patients by developing analytics tools that automate quantitative and statistical measures to augment qualitative and anecdotal evaluation. This requires technical efforts, to create databases and software, and clinical efforts, to integrate data aggregation, analysis, and use into routine processes. Facets of this work include construction of knowledge-based clinical practice improvement databases and standardization of the nomenclatures and ontologies needed to automate aggregation for all patients in a practice and to enable data exchange within and among institutions. A recent example is the design, implementation, and use of an electronic prescription database to improve per-patient treatment plan evaluation and to enable longitudinal monitoring of the results of practice quality improvement efforts. We are also leading a group, sponsored by our professional societies, to define national standards for the naming used in data exchanges for clinical trials.

Another facet is improvement of patient treatment plan evaluation. Traditionally, plans are evaluated by qualitative, visual inspection of spatial dose relationships to target and normal tissues. Algorithms that calculate vectorized dose-volume histograms and other vector-based spatial-dose objects provide a means to quantify those evaluations. Recently, databases of dose information have enabled construction of statistical metrics to improve treatment plan evaluation and development of models for quantifying relationships to outcomes.
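To illustrate the dose-volume histogram idea mentioned above, here is a minimal NumPy sketch of a cumulative DVH: for each dose level, the fraction of a structure's volume receiving at least that dose. The function name and binning are illustrative assumptions, not the group's actual implementation.

```python
import numpy as np

def cumulative_dvh(doses, bin_width=0.1):
    """Cumulative dose-volume histogram.

    doses: per-voxel dose values (e.g. in Gy) within one structure.
    Returns (dose_edges, volume_fraction), where volume_fraction[i]
    is the fraction of voxels receiving at least dose_edges[i].
    """
    doses = np.asarray(doses, dtype=float)
    edges = np.arange(0.0, doses.max() + bin_width, bin_width)
    # fraction of voxels with dose >= each edge; nonincreasing by construction
    volume_fraction = np.array([(doses >= d).mean() for d in edges])
    return edges, volume_fraction

# toy usage: simulated doses for a 1000-voxel structure
rng = np.random.default_rng(0)
doses = rng.normal(60.0, 3.0, size=1000).clip(min=0.0)
dose_edges, vol_frac = cumulative_dvh(doses)
```

Quantitative plan metrics (e.g. "V20", the volume fraction receiving at least 20 Gy) can then be read directly off such a curve rather than judged visually.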

Data science applications: data-driven clinical practice improvement, multi-institutional analysis of factors affecting patient outcomes and practice characterization, nomenclature and ontology.


Shuheng Zhou


In the “Big Data” era, data sets are often very large yet incomplete, high dimensional, and complex in nature. Analyzing and deriving critically useful information from such data poses a great challenge to today’s researchers and practitioners. The overall goal of my group’s research agenda is to develop new theoretical frameworks and algorithms for analyzing such large, complex, and spatio-temporal data despite the overwhelming presence of missing values and large additive errors. We propose to develop parametric and nonparametric models and methods for (i) handling challenging situations with additive and multiplicative errors, including missing values, in observed variables; (ii) estimating dynamic, time-varying correlation and graphical structures; (iii) addressing fundamental challenges in “Big Data” such as data reduction, aggregation, interpretation, and scale. We expect to uncover complex structures and the associated conditional independence relationships from observational data with an ensemble of newly designed estimators. Our methods are applicable to many domains, such as neuroscience, geoscience and spatio-temporal modeling, genomics, and network data analysis.
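One common starting point for the missing-value estimation problem described above is a pairwise-complete covariance estimator, which averages each cross-product only over the samples where both variables are observed. The sketch below is a generic illustration of that idea (assuming entries are missing at random); the function name is hypothetical and this is not the group's actual estimator.

```python
import numpy as np

def pairwise_covariance(X):
    """Covariance estimate from data with missing entries (NaN).

    X: (n samples x p variables) array, possibly containing NaN.
    Each entry S[j, k] is computed from the samples where both
    variable j and variable k are observed.
    """
    X = np.asarray(X, dtype=float)
    obs = ~np.isnan(X)                      # observation mask
    counts = obs.sum(axis=0)                # observed count per column
    means = np.where(obs, X, 0.0).sum(axis=0) / counts
    Xc = np.where(obs, X - means, 0.0)      # centered, zeros where missing
    # number of samples where each pair (j, k) is jointly observed
    pair_counts = obs.T.astype(float) @ obs.astype(float)
    return (Xc.T @ Xc) / pair_counts
```

With no missing entries this reduces to the usual (biased) sample covariance; with missing data the resulting matrix need not be positive semidefinite, which is one reason corrected estimators and regularization are studied for high-dimensional graph recovery.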