Jeff Fessler

My research group develops models and algorithms for large-scale inverse problems, especially image reconstruction for X-ray CT and MRI. These include sparsity-based models that use dictionaries learned from large-scale data sets. Developing efficient and accurate methods for dictionary learning is a recent focus.
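As a rough illustration of the dictionary-learning idea (not the group's actual algorithms), the sketch below alternates ISTA sparse coding with a least-squares dictionary update on synthetic sparse signals; every dimension and parameter choice here is an assumption made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 500 signals, each a 3-sparse combination of 32 ground-truth atoms
n, K, N, s = 16, 32, 500, 3
D_true = rng.standard_normal((n, K))
D_true /= np.linalg.norm(D_true, axis=0)
X = np.column_stack([D_true[:, rng.choice(K, s, replace=False)]
                     @ rng.standard_normal(s) for _ in range(N)])

def sparse_code(D, X, lam, iters=50):
    """ISTA for 0.5 * ||X - D @ A||_F^2 + lam * ||A||_1 over the codes A."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(iters):
        G = A - D.T @ (D @ A - X) / L        # gradient step
        A = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)  # soft-threshold
    return A

def dict_update(A, X):
    """Least-squares dictionary update; renormalize atoms and rescale the
    codes so the product D @ A is unchanged."""
    D = X @ np.linalg.pinv(A)
    norms = np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D / norms, A * norms[:, None]

D = rng.standard_normal((n, K))
D /= np.linalg.norm(D, axis=0)
for _ in range(20):                          # alternate coding and updating
    A = sparse_code(D, X, lam=0.1)
    D, A = dict_update(A, X)

err = np.linalg.norm(X - D @ A) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")
```

For image reconstruction one would code overlapping image patches rather than whole signals, but the alternation between sparse coding and dictionary update is the same.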

For a summary of how model-based image reconstruction methods lead to improved image quality and/or lower X-ray doses, see: http://web.eecs.umich.edu/~fessler/result/ct/

Michael Cafarella

Michael Cafarella, PhD, is Associate Professor of Electrical Engineering and Computer Science, College of Engineering and Faculty Associate, Survey Research Center, Institute for Social Research, at the University of Michigan, Ann Arbor.

Prof. Cafarella’s research focuses on data management problems that arise from extreme diversity in large data collections. Big data is not just big in terms of bytes, but also in type (e.g., a single hard disk likely contains relations, text, images, and spreadsheets) and structure (e.g., a large corpus of relational databases may have millions of unique schemas). As a result, certain long-held assumptions (e.g., that the database schema is always known before writing a query) are no longer useful guides for building data management systems. His work therefore focuses heavily on information extraction and data mining methods that can either improve the quality of existing information or work in spite of lower-quality information.

A peek inside a Michigan data center! My students and I visit whenever I am teaching EECS485, which teaches many modern data-intensive methods and their application to the Web.

Jason Mars

Jason Mars is a professor of computer science at the University of Michigan, where he directs Clarity Lab, one of the best places in the world to be trained in A.I. and system design. Jason is also co-founder and CEO of Clinc, the cutting-edge A.I. startup that developed the world’s most advanced conversational A.I.

Jason has devoted his career to solving difficult real-world problems, building some of the world’s most sophisticated scalable systems for A.I., computer vision, and natural language processing. Prior to the University of Michigan, Jason was a professor at UCSD. He also worked at Google and Intel.

Jason’s work constructing large-scale A.I. and deep learning-based systems and technology has been recognized globally and continues to have a significant impact on industry and academia. Jason holds a PhD in Computer Science from UVA.

Jason Corso

The Corso group’s main research thrust is high-level computer vision and its relationship to human language, robotics, and data science. The group primarily focuses on problems in video understanding, such as video segmentation, activity recognition, and video-to-text. Methodologically, it emphasizes models that leverage cross-modal cues to learn structured embeddings from large-scale data sources, as well as graphical models for structured prediction over such data. From biomedicine to recreational video, imaging data is ubiquitous. Yet imaging scientists and intelligence analysts lack an adequate language and set of tools to fully tap these information-rich images and videos; his group works to provide such a language. His long-term goal is a comprehensive and robust methodology for automatically mining, quantifying, and generalizing information in large sets of projective and volumetric images and video, to facilitate intelligent computational and robotic agents that can interact naturally with humans and the natural world.

Relating visual content to natural language requires models at multiple scales and emphases; here we model low-level visual content, high-level ontological information, and these two are glued together with an adaptive graphical structure at the mid-level.

Vijay Subramanian

Professor Subramanian is interested in a variety of stochastic modeling, decision and control theoretic, and applied probability questions concerned with networks. Examples include analysis of random graphs, analysis of processes like cascades on random graphs, network economics, analysis of e-commerce systems, mean-field games, network games, telecommunication networks, load-balancing in large server farms, and information assimilation, aggregation and flow in networks especially with strategic users.

Clayton Scott

I study patterns in large, complex data sets, and make quantitative predictions and inferences about those patterns. Problems I’ve worked on include classification, anomaly detection, active and semi-supervised learning, transfer learning, and density estimation. I am primarily interested in developing new algorithms and proving performance guarantees for new and existing algorithms.

Examples of pulses generated from a neutron and a gamma ray interacting with an organic liquid scintillation detector used to detect and classify nuclear sources. Machine learning methods take several such examples and train a classifier to predict the label associated with future observations.
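As a toy illustration of the caption above (nothing here reflects the actual detector data or methods), the sketch below synthesizes two-component exponential pulses, gives neutron-like pulses a larger slow "tail" fraction, and trains a simple threshold classifier on a charge-comparison feature; all pulse shapes, time constants, and noise levels are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(200)                       # sample index within one pulse

def pulses(tail_frac, count):
    """Toy scintillation pulse: fast decay plus a slow tail component.
    Neutron-like pulses get a larger tail fraction than gamma-like ones."""
    shape = (1 - tail_frac) * np.exp(-t / 5.0) + tail_frac * np.exp(-t / 60.0)
    amp = rng.uniform(0.5, 2.0, size=(count, 1))          # random pulse height
    return amp * shape + 0.01 * rng.standard_normal((count, t.size))

gammas, neutrons = pulses(0.05, 300), pulses(0.20, 300)

def tail_ratio(p):
    """Charge-comparison feature: tail integral divided by total integral."""
    return p[:, 30:].sum(axis=1) / p.sum(axis=1)

f_g, f_n = tail_ratio(gammas), tail_ratio(neutrons)

# "Train" on the first half of each class: threshold midway between class means
thr = 0.5 * (f_g[:150].mean() + f_n[:150].mean())

# Evaluate on the held-out second half
pred_neutron = np.concatenate([f_g[150:], f_n[150:]]) > thr
truth = np.concatenate([np.zeros(150, bool), np.ones(150, bool)])
acc = (pred_neutron == truth).mean()
print(f"held-out accuracy: {acc:.2f}")
```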

Raj Rao Nadakuditi

Raj Nadakuditi, PhD, is Associate Professor of Electrical Engineering and Computer Science, College of Engineering, at the University of Michigan, Ann Arbor.

Prof. Nadakuditi received his master’s and PhD in Electrical Engineering and Computer Science at MIT as part of the MIT/WHOI Joint Program in Ocean Science and Engineering. His work is at the interface of statistical signal processing and random matrix theory, with applications such as sonar, radar, wireless communications, and machine learning in mind.

Prof. Nadakuditi particularly enjoys using random matrix theory to address problems that arise in statistical signal processing. An important component of his work is applying it in real-world settings to tease out low-level signals from sensor, oceanographic, financial, and econometric time series and time-frequency measurements. In addition to the satisfaction derived from transforming theory into practice, real-world settings give insight into how the underlying techniques can be refined and/or made more robust.

Laura Balzano

Professor Balzano and her students investigate problems in statistical signal processing, machine learning, and optimization, particularly those involving large and messy data. Her applications typically have missing, corrupted, and uncalibrated data, as well as data that are heterogeneous in sensors, sensor quality, and scale in both time and space. Her theoretical interests involve classes of non-convex matrix factorization problems, such as PCA and its many variants: sparse or structured principal components, orthogonality and non-negativity constraints, nonlinear models such as low-dimensional algebraic varieties, heteroscedastic data, and even categorical or human-preference data. She concentrates on fast gradient methods and related optimization methods that scale to real-time operation and massive data. Her work provides algorithmic and statistical guarantees for these algorithms on the aforementioned non-convex problems, with careful attention to assumptions that are realistic for the relevant application areas in sensor networks, power systems, control, and computer vision.
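A minimal sketch of one problem in this family, low-rank matrix factorization with missing entries, solved by plain gradient descent on the factors. This is an illustrative toy, not the group's actual algorithms; the sizes, rank, sampling rate, step size, and iteration count are all assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 40, 30, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((n, r)).T  # rank-3 matrix
mask = rng.random((m, n)) < 0.5          # observe roughly half the entries

# Gradient descent on the factors of 0.5 * ||P_Omega(M - U V^T)||_F^2,
# started from a small random initialization
U = 0.1 * rng.standard_normal((m, r))
V = 0.1 * rng.standard_normal((n, r))
step = 0.5 / np.linalg.norm(M, 2)        # step sized to M's spectral norm
for _ in range(5000):
    R = mask * (U @ V.T - M)             # residual on observed entries only
    U, V = U - step * R @ V, V - step * R.T @ U

rel_err = np.linalg.norm(M - U @ V.T) / np.linalg.norm(M)
print(f"relative error on ALL entries: {rel_err:.3f}")
```

The point of the demo is that fitting only the observed entries of a low-rank matrix can recover the unobserved ones; in practice the step size could not be set from the unknown matrix M as it is here.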

Jenna Wiens

Jenna Wiens, PhD, is Associate Professor of Computer Science and Engineering (CSE) in the College of Engineering at the University of Michigan, Ann Arbor.

Prof. Wiens currently heads the MLD3 research group. Her primary research interests lie at the intersection of machine learning, data mining, and healthcare. Within machine learning, she is particularly interested in time-series analysis, transfer/multitask learning, causal inference, and learning intelligible models. The overarching goal of her research is to develop the computational methods needed to help organize, process, and transform patient data into actionable knowledge. Her work has applications in modeling disease progression and predicting adverse patient outcomes. For several years now, Prof. Wiens has been focused on developing accurate patient risk stratification approaches that leverage spatiotemporal data, with the ultimate goal of reducing the rate of healthcare-associated infections among patients admitted to hospitals in the US. In addition to her research in the healthcare domain, she also spends a portion of her time developing new data mining techniques for analyzing player tracking data from the NBA.

Matthew Johnson-Roberson

Matthew Johnson-Roberson, PhD, is Assistant Professor of Naval Architecture and Marine Engineering and Assistant Professor of Electrical Engineering and Computer Science, College of Engineering, at the University of Michigan, Ann Arbor.

The increasing economic and environmental pressures facing the planet require cost-effective technological solutions to monitor and predict the health of the earth. Increasing volumes of data and the geographic dispersion of researchers and data-gathering sites have created new challenges for computer science. Remote collaboration and data abstraction offer the promise of aiding science for great social benefit. Prof. Johnson-Roberson’s research in this field has focused on developing novel methods for the visualization and interpretation of massive environments from multiple sensing modalities, and on creating abstractions and reconstructions that allow natural scientists to predict and monitor the earth through remote collaboration. Through the promotion of these economically efficient solutions, his work aims to give instant access to hundreds of scientific sites without travel. In undertaking this challenge he constantly aims to engage in research that will benefit society.

Traditional marine science surveys capture large amounts of data regardless of the contents or the potential value of the data. In an exploratory context, scientists are typically interested in reviewing and mining data for unique geological or benthic features. This can be a difficult and time-consuming task when confronted with thousands or tens of thousands of images. The technique shown here uses information-theoretic methods to identify unusual images within large data sets.
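As a loose, self-contained illustration of the idea in the caption above (not the actual technique or data), the sketch below scores each image's intensity histogram by its KL divergence from the corpus-average histogram, so images whose content "surprises" the background model rank highest; the synthetic images, histogram choice, and smoothing constant are all invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "survey": 200 typical images plus 5 with an unusual bright feature
typical = rng.normal(0.5, 0.05, size=(200, 64, 64))
odd = rng.normal(0.5, 0.05, size=(5, 64, 64))
odd[:, 20:40, 20:40] = rng.normal(0.9, 0.02, size=(5, 20, 20))
images = np.concatenate([typical, odd])

edges = np.linspace(0.0, 1.0, 33)            # 32 intensity bins

def histogram(im):
    """Smoothed, normalized intensity histogram of one image."""
    h, _ = np.histogram(np.clip(im, 0.0, 1.0), bins=edges)
    h = h.astype(float) + 1e-3               # smoothing avoids log(0) below
    return h / h.sum()

H = np.array([histogram(im) for im in images])
background = H.mean(axis=0)                  # corpus-wide "typical" model

# Surprise score: KL divergence of each image's histogram from the background
scores = (H * np.log(H / background)).sum(axis=1)
top5 = np.argsort(scores)[-5:]               # the injected anomalies, 200-204
print(sorted(int(i) for i in top5))
```

Real pipelines would use richer image descriptors than a global histogram, but the scoring principle, ranking images by how improbable they are under a model of the whole survey, is the same.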