Mohamed Abouelenien

Mohamed Abouelenien’s areas of interest broadly cover data science topics, including applied machine learning, computer vision, and natural language processing. He established the Affective Computing and Multimodal Systems (ACMS) Lab, which focuses on modeling human behavior and developing multimodal approaches for different applications. He has worked on a number of projects in these areas, including multimodal deception detection, multimodal sensing of drivers’ alertness levels and thermal discomfort, distraction detection, circadian rhythm modeling, emotion and stress analysis, automated scoring of students’ progression, sentiment analysis, ensemble learning, and image processing. His research has been funded by Ford Motor Company, Educational Testing Service (ETS), the Toyota Research Institute (TRI), and Procter & Gamble (P&G). Abouelenien has published in top IEEE, ACM, Springer, and SPIE venues. He has also served as a reviewer for IEEE Transactions and Elsevier journals and as a program committee member for multiple international conferences.

Jeong Joon Park

My research focuses on 3D reconstruction and generative models. I use neural and physical 3D representations to generate realistic 3D objects and scenes, with a current focus on large-scale, dynamic, and interactive 3D scene generation. These generative models will be greatly useful for content creators, such as game and film studios, and for training autonomous agents in virtual environments. In my research, I frequently use and adapt generative modeling techniques such as auto-decoders, GANs, and diffusion models.

In my project “DeepSDF,” I proposed a new representation for 3D generative models that made a breakthrough in the field. The question it answers is: “What should a 3D model generate? Points, meshes, or voxels?” In the DeepSDF paper, I proposed that we should generate a “function,” represented as a neural network, that takes a 3D coordinate as input and outputs a field value at that coordinate. This neural coordinate-based representation is memory-efficient, differentiable, and expressive, and it is at the core of the enormous progress our community has made in 3D generative modeling and reconstruction.
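As a rough illustration, the sketch below shows what such a coordinate-based network can look like: a small PyTorch MLP that maps a latent shape code plus a 3D coordinate to a signed-distance value. The layer sizes, activations, and latent-code setup are illustrative assumptions, not the published DeepSDF architecture.

```python
import torch
import torch.nn as nn

class DeepSDFSketch(nn.Module):
    """Coordinate-based network: (latent code, xyz) -> signed distance."""

    def __init__(self, latent_dim=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Tanh(),  # SDF value squashed to [-1, 1]
        )

    def forward(self, latent, xyz):
        # latent: (N, latent_dim) per-shape code; xyz: (N, 3) query points
        return self.net(torch.cat([latent, xyz], dim=-1))

# Query the learned field at arbitrary points; the shape's surface is the
# zero level set {x : f(x) = 0}, extractable with e.g. marching cubes.
model = DeepSDFSketch()
code = torch.randn(4, 256)         # latent shape codes (auto-decoder style)
points = torch.rand(4, 3) * 2 - 1  # coordinates in [-1, 1]^3
sdf = model(code, points)          # (4, 1) signed-distance values
```

Because the shape lives in the network’s weights rather than in a fixed grid, the field can be queried at any resolution, which is where the memory efficiency comes from.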

3D faces with appearance and geometry generated by our AI model

There are two contributions I would like to make. First, I would like to enable AI generation of large-scale, dynamic, and interactive 3D worlds, which will benefit entertainment, autonomous agent training (robotics and self-driving), and various other scientific fields such as 3D medical imaging. Second, I would like to devise a new, more efficient neural network architecture that better mimics the human brain. Current AI models are highly inefficient in how they learn from data (they require huge numbers of labels) and are difficult to train continuously or from verbal and visual instructions. I would like to develop new architectures and learning methods that address these limitations.

Hun-Seok Kim

Hun-Seok Kim is an associate professor at the University of Michigan, Ann Arbor. His research focuses on system analysis, novel algorithms, and efficient VLSI architectures for low-power/high-performance wireless communication, signal processing, computer vision, and machine learning systems.


HTNN (Heterogeneous Transform Domains Neural Network) is a new class of transform-domain deep neural networks, where convolution operations are replaced by element-wise multiplications in heterogeneous transform domains. To reduce network complexity, this framework learns sparse-orthogonal weights in heterogeneous transform domains, co-optimized with a hardware-efficient accelerator architecture to minimize the overhead of handling sparse weights. Furthermore, sparse-orthogonal weights are non-uniformly quantized with canonical-signed-digit (CSD) representations to substitute multiplications with simpler additions. The proposed approach reduces complexity by a factor of 4.9–6.8× without compromising DNN accuracy compared to equivalent CNNs that employ sparse (pruned) weights.
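To make the transform-domain idea concrete, here is a toy NumPy sketch (my own illustration, not code from the HTNN paper) using the FFT, where an element-wise multiplication in the transform domain exactly computes a circular convolution:

```python
import numpy as np

def circular_conv2d(x, w):
    """Direct spatial-domain circular convolution (reference implementation)."""
    n, m = x.shape
    out = np.zeros_like(x)
    for i in range(n):
        for j in range(m):
            for a in range(n):
                for b in range(m):
                    out[i, j] += x[a, b] * w[(i - a) % n, (j - b) % m]
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))  # input feature map
w = rng.standard_normal((8, 8))  # kernel, zero-padded to the input size

# Transform-domain version: a single element-wise product in the frequency
# domain replaces the entire spatial convolution (convolution theorem).
transform_domain = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(w)))

assert np.allclose(circular_conv2d(x, w), transform_domain)
```

HTNN goes further than this FFT toy: it learns sparse-orthogonal weights across heterogeneous transforms, and the CSD quantization of the element-wise weights turns each remaining multiplication into a few shifts and additions.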

Jason Mars

By |

Jason Mars is a professor of computer science at the University of Michigan, where he directs Clarity Lab, one of the best places in the world to be trained in A.I. and system design. Jason is also co-founder and CEO of Clinc, a cutting-edge A.I. startup building advanced conversational A.I.

Jason has devoted his career to solving difficult real-world problems, building some of the world’s most sophisticated scalable systems for A.I., computer vision, and natural language processing. Prior to joining the University of Michigan, Jason was a professor at UCSD. He also worked at Google and Intel.

Jason’s work constructing large-scale A.I. and deep learning-based systems and technology has been recognized globally and continues to have a significant impact on industry and academia. Jason holds a PhD in Computer Science from UVA.