
Biostatistics Seminar: Honglak Lee, PhD (University of Michigan)

November 19, 2015 @ 3:30 pm - 5:00 pm

Title: New Directions in Deep Representation Learning

Abstract: In recent years, deep learning has emerged as a powerful method for learning feature representations from complex input data, and it has been highly successful in computer vision, speech recognition, and language modeling. These recent successes typically rely on a large amount of supervision (e.g., class labels). While many deep learning algorithms focus on a discriminative task and extract only task-relevant features that are invariant to other factors, complex sensory data are often generated by intricate interactions among underlying factors of variation (for example, pose, morphology, and viewpoint for 3D object images).

In the first part of the talk, I will present my work on learning deep representations that disentangle underlying factors of variation and allow for complex reasoning and inference involving multiple factors. Specifically, we develop deep generative models with higher-order interactions among groups of hidden units, where each group learns to encode a distinct factor of variation. We present several successful instances of such deep architectures and their learning methods, in both supervised and weakly supervised settings. Our models achieve strong performance in emotion recognition, face verification, data-driven modeling of 3D objects, and video prediction.

In the second part of the talk, I will describe my work on learning deep representations from multiple heterogeneous input modalities. In multimodal representation learning, it is important to capture high-level associations between multiple data modalities with a compact set of latent variables; in particular, reasoning robustly and effectively about missing modalities at test time remains a challenge. I will present advances in multimodal deep learning, with applications to challenging problems in audio-visual recognition, robotic perception, and visual-textual recognition. In particular, I will discuss my recent work on a new multimodal deep learning method whose learning objective explicitly encourages cross-modal associations, which provides theoretical guarantees and sheds light on how to effectively learn shared deep representations from heterogeneous multimodal data. Finally, I will also describe my recent work on learning joint embeddings of images and text for fine-grained recognition and zero-shot learning.

Bio: Honglak Lee is an Assistant Professor of Computer Science and Engineering at the University of Michigan, Ann Arbor. He received his PhD from the Computer Science Department at Stanford University in 2010, advised by Prof. Andrew Ng. His research focuses on deep learning and representation learning, spanning unsupervised and semi-supervised learning, supervised learning, transfer learning, structured prediction, graphical models, and optimization. His methods have been successfully applied to computer vision and other perception problems. He has received best paper awards at ICML and CEAS. He has served as a guest editor of the IEEE TPAMI Special Issue on Learning Deep Architectures, and as an area chair for ICML, NIPS, ICCV, AAAI, IJCAI, and ICLR. He received the Google Faculty Research Award (2011) and the NSF CAREER Award (2015), and was selected by IEEE Intelligent Systems as one of "AI's 10 to Watch" (2013).
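As a concrete, heavily simplified illustration of the grouped-latent-variable idea from the first part of the abstract above: the sketch below is a toy autoencoder, not the deep generative models described in the talk. It splits the latent code into two groups and combines them through a factored multiplicative ("higher-order") interaction; every name, dimension, and architectural choice here is an assumption for illustration only.

# Toy sketch: two groups of latent units combined via a factored
# multiplicative interaction. Illustrative only; not the speaker's model.
import torch
import torch.nn as nn

class TwoFactorAutoencoder(nn.Module):
    def __init__(self, x_dim=784, h_id=64, h_pose=64, rank=128):
        super().__init__()
        self.enc_id = nn.Linear(x_dim, h_id)      # group 1: e.g., identity
        self.enc_pose = nn.Linear(x_dim, h_pose)  # group 2: e.g., pose
        # Project each group into a shared factor space, combine
        # multiplicatively, then decode back to the input space.
        self.f_id = nn.Linear(h_id, rank, bias=False)
        self.f_pose = nn.Linear(h_pose, rank, bias=False)
        self.dec = nn.Linear(rank, x_dim)

    def forward(self, x):
        id_code = torch.sigmoid(self.enc_id(x))
        pose_code = torch.sigmoid(self.enc_pose(x))
        # Element-wise product implements the higher-order interaction
        # between the two groups of hidden units.
        factors = self.f_id(id_code) * self.f_pose(pose_code)
        return self.dec(factors), id_code, pose_code

With weak supervision of the kind the abstract alludes to (e.g., image pairs known to share identity but differ in pose), one could additionally penalize differences in id_code across such pairs, encouraging each group to specialize to a single factor of variation.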
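Similarly, the cross-modal association objective mentioned in the second part of the abstract can be illustrated with a symmetric margin-based ranking loss over a shared image-text embedding space. This is a generic sketch, not the specific method with theoretical guarantees referred to in the talk; the encoders, margin, and feature dimensions are placeholder assumptions.

# Generic sketch of a joint image-text embedding with a loss that
# explicitly encourages cross-modal associations. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=300, emb_dim=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, emb_dim)
        self.txt_proj = nn.Linear(txt_dim, emb_dim)

    def forward(self, img_feats, txt_feats):
        # L2-normalize so dot products are cosine similarities.
        z_img = F.normalize(self.img_proj(img_feats), dim=1)
        z_txt = F.normalize(self.txt_proj(txt_feats), dim=1)
        return z_img, z_txt

def cross_modal_ranking_loss(z_img, z_txt, margin=0.2):
    # Pull matched image-text pairs together and push mismatched pairs
    # apart, symmetrically in both retrieval directions.
    sim = z_img @ z_txt.t()          # pairwise similarities (batch x batch)
    pos = sim.diag().unsqueeze(1)    # similarities of the matched pairs
    cost_txt = (margin + sim - pos).clamp(min=0)      # image -> wrong text
    cost_img = (margin + sim - pos.t()).clamp(min=0)  # text -> wrong image
    off_diag = 1 - torch.eye(sim.size(0), device=sim.device)
    return ((cost_txt + cost_img) * off_diag).sum() / sim.size(0)

# Usage with random stand-in features (real inputs would be CNN image
# features and word or sentence embeddings):
model = JointEmbedding()
z_img, z_txt = model(torch.randn(32, 2048), torch.randn(32, 300))
loss = cross_modal_ranking_loss(z_img, z_txt)
loss.backward()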
