Explore ARC

Bryan R. Goldsmith


Bryan R. Goldsmith, PhD, is Assistant Professor in the department of Chemical Engineering within the College of Engineering at the University of Michigan, Ann Arbor.

Prof. Goldsmith’s research group utilizes first-principles modeling (e.g., density-functional theory and wave-function-based methods), molecular simulation, and data analytics tools (e.g., compressed sensing, kernel ridge regression, and subgroup discovery) to extract insights into catalysts and materials for sustainable chemical and energy production and to help create a platform for their design. For example, the group has exploited subgroup discovery as a data-mining approach to help find interpretable local patterns, correlations, and descriptors of a target property in materials-science data. They have also been using compressed sensing techniques to find physically meaningful models that predict the properties of perovskite (ABX3) compounds.

Prof. Goldsmith’s areas of research encompass energy research, materials science, nanotechnology, physics, and catalysis.

A computational prediction for a group of gold nanoclusters (global model) could miss patterns unique to nonplanar clusters (subgroup 1) or planar clusters (subgroup 2).


 

Nils G. Walter


Nils G. Walter, PhD, is the Francis S. Collins Collegiate Professor of Chemistry, Biophysics and Biological Chemistry, College of Literature, Science, and the Arts and Professor of Biological Chemistry, Medical School, at the University of Michigan, Ann Arbor.

Nature and nanotechnology alike employ nanoscale machines that self-assemble into structures of complex architecture and functionality. Fluorescence microscopy offers a non-invasive tool to probe, and ultimately dissect and control, these nanoassemblies in real time. In particular, single-molecule fluorescence resonance energy transfer (smFRET) allows us to measure distances at the 2-8 nm scale, whereas complementary super-resolution localization techniques based on Gaussian fitting of imaged point spread functions (PSFs) measure distances in the 10 nm and longer range. In terms of Big Data analysis, we have developed a method for the intracellular single-molecule, high-resolution localization and counting (iSHiRLoC) of microRNAs (miRNAs), a large group of gene silencers with profound roles in our body, from stem cell development to cancer. Microinjected, singly fluorophore-labeled, functional miRNAs are tracked at super-resolution within individual diffusing particles. Observed mobility and mRNA-dependent assembly changes suggest the existence of two kinetically distinct assembly processes. We are currently feeding these data into a single-molecule systems biology pipeline to bring into focus the unifying molecular mechanism of such a ubiquitous gene regulatory pathway. In addition, we are using cluster analysis of smFRET time traces to show that large RNA processing machines such as single spliceosomes – responsible for the accurate removal of all intervening sequences (introns) in pre-messenger RNAs – work as biased Brownian ratchet machines. On the opposite end of the application spectrum, we utilize smFRET and super-resolution fluorescence microscopy to monitor enhanced enzyme cascades and nanorobots engineered to self-assemble and function on DNA origami.
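The smFRET distance measurement rests on the standard relation E = 1 / (1 + (r/R0)^6) between FRET efficiency E and donor-acceptor distance r. The sketch below simply inverts that relation; the Förster radius R0 = 6 nm is an illustrative value within the 2-8 nm range quoted above, not a parameter from this work.

```python
def fret_distance(E, R0=6.0):
    """Donor-acceptor distance (nm) from FRET efficiency E,
    via E = 1 / (1 + (r/R0)^6).  R0 is the Foerster radius;
    the default 6.0 nm is an illustrative value only."""
    return R0 * ((1.0 - E) / E) ** (1.0 / 6.0)

# At E = 0.5 the recovered distance equals the Foerster radius
print(round(fret_distance(0.5), 3))   # -> 6.0
```

The sixth-power dependence is what makes smFRET so sensitive near R0: efficiencies above 0.5 correspond to distances shorter than R0, and vice versa.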

Artistic depiction of the SiMREPS platform we are building for the direct single molecule counting of miRNA biomarkers in crude biofluids (Johnson-Buck, A. et al. Kinetic fingerprinting to identify and count single nucleic acids. Nat Biotechnol 33, 730-732 (2015)).


Jie Shen


One of my research interests is the digital diagnosis of material damage based on sensors, computational science, and numerical analysis with large-scale 3D computed tomography data: (1) establishment of a multi-resolution transformation rule of material defects; (2) design of an accurate digital diagnosis method for material damage; (3) reconstruction of defects in material domains from X-ray CT data; and (4) parallel computation of material damage. My team has also conducted a series of studies on improving the quality of large-scale laser scanning data in reverse engineering and industrial inspection: (1) detection and removal of non-isolated outlier data clusters; (2) accurate correction of surface data noise in polygonal meshes; and (3) denoising of two-dimensional geometric discontinuities.
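A common baseline for the outlier removal task described above is statistical outlier removal on a point cloud: discard points whose mean distance to their k nearest neighbours is anomalously large. The sketch below uses synthetic data and is illustrative only, not the team's published method.

```python
import numpy as np

def remove_outliers(points, k=8, n_std=2.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbours exceeds the cloud-wide average by more
    than n_std standard deviations."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)    # column 0 is the self-distance
    keep = mean_knn <= mean_knn.mean() + n_std * mean_knn.std()
    return points[keep]

rng = np.random.default_rng(1)
surface = rng.normal(0.0, 0.05, size=(200, 3))   # dense "scan" cluster
noise = rng.uniform(5.0, 6.0, size=(5, 3))       # isolated outlier cluster
cloud = np.vstack([surface, noise])
cleaned = remove_outliers(cloud)
print(len(cloud), len(cleaned))
```

This brute-force O(n^2) distance matrix is fine for a sketch; production pipelines for large-scale scan data would use a spatial index (k-d tree, octree) instead.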

Another research focus is the information fusion of large-scale data from autonomous driving. Our research is funded by the China Natural Science Foundation, with a focus on (1) laser-based perception in degraded visual environments, (2) 3D pattern recognition with dynamic, incomplete, noisy point clouds, (3) real-time image processing algorithms in degraded visual environments, and (4) brain-computer interfaces to predict the state of drivers.

Processing and Analysis of 3D Large-Scale Engineering Data


Emanuel Gull


Professor Gull works in the general area of computational condensed matter physics with a focus on the study of correlated electronic systems in and out of equilibrium. He is an expert on Monte Carlo methods for quantum systems and one of the developers of the diagrammatic ‘continuous-time’ quantum Monte Carlo methods. His recent work includes the study of the Hubbard model using large cluster dynamical mean field methods, the development of vertex function methods for optical (Raman and optical conductivity) probes, and the development of bold line diagrammatic algorithms for quantum impurities out of equilibrium. Professor Gull is involved in the development of open source computer programs for strongly correlated systems.

Quantum impurities are small confined quantum systems coupled to wide leads. An externally applied time-dependent magnetic field induces a change in the population of spins on the impurity, leading to time-dependent switching behavior. The system's equations of motion are determined by a many-body quantum field theory and solved using a diagrammatic Monte Carlo approach. The computations were performed at Columbia University and the University of Michigan.


Venkat Raman


Prof. Raman’s work focuses on the simulation of large-scale combustion systems – aircraft engines, stationary power turbines, hypersonic engines – with the goal of advancing computation-aided systems design. This involves large-scale computations accounting for the detailed behavior of the chaotic turbulent flow in these systems, combined with enabling science in computational chemistry and algorithms. One aspect of his research is the prediction of rare events that lead to catastrophic system failure (such as engine failure or flight crash). This work also involves understanding uncertainty in models and streamlining knowledge in the form of mathematical models.


Paul Zimmerman


Paul Zimmerman, PhD, is Assistant Professor of Chemistry, College of Literature, Science, and the Arts, at the University of Michigan, Ann Arbor.

The Zimmerman research group is a pioneer in computational chemical reaction discovery, and has developed algorithms that map out complex multi-component, multi-elementary-step reaction mechanisms without reliance on prior chemical knowledge. These methods, having low computational cost compared to other quantum chemical techniques, allow a large amount of detailed and accurate chemical information to be processed and exploited. These data present an opportunity for the application of statistical methods, specifically from machine learning, to determine the key chemical features that enable chemical reactions to occur. Machine learning not only provides a detailed analysis of how chemical processes work, but also provides a rapid, predictive method to determine the rate and selectivity of novel chemical reaction sequences. Ongoing work in the Zimmerman group takes advantage of tools such as kernel ridge regression, logistic regression, k-nearest neighbors, and genetic algorithms to transform chemical reaction data into predictive tools applicable to many types of chemistry.
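The k-nearest-neighbors organization of reaction data can be illustrated with a toy majority-vote classifier. The descriptors below (imagined as, e.g., a barrier height and a bond-length change) and the reactive/non-reactive labels are synthetic stand-ins, not the group's dataset of ~700 reactions.

```python
import numpy as np

def knn_predict(X_train, y_train, X_new, k=3):
    """Majority-vote k-nearest-neighbour classifier (Euclidean metric)."""
    d = np.linalg.norm(X_new[:, None, :] - X_train[None, :, :], axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]       # indices of k closest points
    votes = y_train[nearest]
    return (votes.mean(axis=1) >= 0.5).astype(int)

# Hypothetical 2D reaction descriptors; 1 = reaction proceeds, 0 = does not
rng = np.random.default_rng(2)
X0 = rng.normal([2.0, 0.2], 0.3, size=(50, 2))   # non-reactive cluster
X1 = rng.normal([0.5, 1.0], 0.3, size=(50, 2))   # reactive cluster
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

query = np.array([[0.6, 0.9], [2.1, 0.1]])       # two unseen "reactions"
pred = knn_predict(X, y, query)
print(pred)   # -> [1 0]
```

Because k-NN makes no parametric assumption about the decision boundary, it is a natural first pass for organizing reaction data before fitting more structured models.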

Successful k-NN organization of reaction data involving nearly 700 unique chemical reactions.


John Kieffer


John Kieffer, PhD, is Professor of Materials Science and Engineering in the College of Engineering at the University of Michigan, Ann Arbor.

Prof. Kieffer’s research is premised on the view that the development of novel materials is an enabling factor for the advancement of technology. To accelerate the conception, fabrication, and deployment of materials with specific functionalities, his group pursues a simulation-based predictive design approach, i.e., they devise the methodology, computational framework, and workflow, and apply these tools to develop new materials for energy applications. The research repertoire includes first-principles quantum mechanical calculations for the prediction of the electronic structure and charge carrier mobility in organic molecules, reactive molecular dynamics simulations to study the self-assembly behavior of these molecules, and hybrid Monte Carlo/molecular dynamics techniques to investigate structural developments and processes that occur on long time scales. To validate simulation-based predictions, the group also carries out experimental measurements of structural dynamics and molecular transport phenomena using dielectric impedance spectroscopy and inelastic light scattering. For the latter they established a unique resource for concurrent Raman and Brillouin light scattering measurements, allowing them to simultaneously monitor the chemistry and visco-elastic properties of reacting systems at the nano-scale and in situ, without mechanical contact. Finally, they fabricate nano-porous hybrid organic-inorganic materials, including aerogels, using sol-gel synthesis techniques. Current projects include:

  • Design of organic molecular systems with specific electronic properties, long-range order, and high charge carrier mobility for photovoltaic, sensor, and light emission applications
  • Development of solid state electrolytes for lithium battery applications
  • Fabrication of light-weight high-strength composite materials
  • Investigation of interfacial structures and phenomena pertaining to electronic and thermal transport processes, rheology, mechanical strength, toughness, and adhesion.

Jeffrey C. Lagarias


Jeffrey C. Lagarias is the Harold Mead Stark Collegiate Professor of Mathematics in the College of Literature, Science, and the Arts at the University of Michigan, Ann Arbor.

Prof. Lagarias’ research interests are diverse. His initial training was in analytic and algebraic number theory. After receiving his PhD in 1974, he worked at Bell Laboratories and AT&T Labs until 2003, on problems in many pure and applied fields. Besides number theory, Prof. Lagarias has made contributions in harmonic analysis (wavelets and fractals), mathematical optimization (interior point methods), discrete geometry (tilings and quasicrystals), ergodic theory, low-dimensional topology (complexity of unknotting), and theoretical computer science.

At Michigan Prof. Lagarias has been active in the number theory group over the last few years, with additional work in other fields. His last 25 postings on the arXiv were in: Number Theory (16), Dynamical Systems (3), Classical Analysis and ODEs (3), Metric Geometry (1), Optimization and Control (1), and Spectral Theory (1). His doctoral students typically work on their own topics. Some have worked on topics in number theory: integer factorial ratios, character sum estimates, and Diophantine equations with two separated variables; others have worked on topics in discrete geometry: packings of regular tetrahedra and rigidity of circle configurations.

 

Judy Jin


Judy Jin, PhD, is Professor of Industrial and Operations Engineering in the College of Engineering at the University of Michigan, Ann Arbor.

Prof. Jin’s research focuses on the development of new data fusion methodologies for improving system operation and quality, with an emphasis on fusing data and engineering knowledge collected from disparate sources by integrating multidisciplinary methods. Her research has been widely applied in both the manufacturing and service industries by providing techniques for knowledge discovery and risk-informed decision making. Key research issues being pursued include:

  1. Advanced quality control methodologies for system monitoring, diagnosis, and control with temporally and spatially dense operational/sensing data.
  2. Multi-scale data transforms and high-order tensor data analysis for modeling, analysis, classification, and inference on multistream sensing signals.
  3. Optimal sensor distribution and hierarchical variable selection methods for system abnormality detection and sensor fusion decisions, which integrate the causal probability network model, statistical change detection, set-covering algorithms, and hierarchical lasso regression.
  4. A unified approach for variation reduction in multistage manufacturing processes (MMPs) using a state space model, which blends control theory with advanced statistics for MMP sensing, monitoring, diagnosis, and control, together with integrative design of process tolerance and maintenance policy considering the interaction between product quality and tool reliability.
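The hierarchical lasso regression mentioned in item 3 builds on ordinary lasso variable selection. The sketch below solves a plain lasso by cyclic coordinate descent on synthetic data; it is illustrative only, not Prof. Jin's hierarchical formulation, and the orthonormal design is chosen so the selection outcome is unambiguous.

```python
import numpy as np

def soft_threshold(z, g):
    """Soft-thresholding operator S(z, g) = sign(z) * max(|z| - g, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Lasso via cyclic coordinate descent (columns of X unit-norm)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            beta[j] = soft_threshold(X[:, j] @ r_j, lam)
    return beta

rng = np.random.default_rng(3)
n, p = 100, 10
Q, _ = np.linalg.qr(rng.normal(size=(n, p)))     # orthonormal design columns
y = 5.0 * Q[:, 0] - 4.0 * Q[:, 3] + rng.normal(0.0, 0.05, n)

beta = lasso_cd(Q, y, lam=0.5)
selected = np.nonzero(np.abs(beta) > 1e-8)[0]
print(selected)   # only the two truly active variables survive
```

The L1 penalty zeroes out the eight irrelevant coefficients while shrinking the two true ones by roughly `lam`; this sparsity is what makes lasso-type penalties attractive for sensor and variable selection.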

Data science applications: (a) Smart manufacturing with sensor fusion, process monitoring, diagnosis and control (e.g., metal forming including stamping, forging, casting and rolling), assembly, ultrasonic welding, photovoltaic thin film deposition. (b) Travel time estimation and traffic prediction for intelligent transportation systems. (c) Multi-stream data analysis of human motion/vehicle crash testing data for improving vehicle design and safety. (d) Risk informed decision support for healthcare and clinical decisions. (e) Customer behavior modeling for fraud detection in healthcare and telecommunication. (f) Human decision-making behavior modeling in a dynamic/emergency environment.


Alfred Hero


Alfred O. Hero, PhD, is the R. Jamison and Betty Williams Professor of Engineering at the University of Michigan and co-Director of the Michigan Institute for Data Science.

The Hero group focuses on building foundational theory and methodology for data science and engineering. Data science is the methodological underpinning for data collection, data management, data analysis, and data visualization. Lying at the intersection of mathematics, statistics, computer science, information science, and engineering, data science has a wide range of applications in areas including: public health and personalized medicine, brain sciences, environmental and earth sciences, astronomy, materials science, genomics and proteomics, computational social science, business analytics, computational finance, information forensics, and national defense. The Hero group is developing theory and algorithms for data collection, analysis, and visualization that use statistical machine learning and distributed optimization. These are being applied to network data analysis, personalized health, multi-modality information fusion, data-driven physical simulation, materials science, dynamic social media, and database indexing and retrieval. Several thrusts are being pursued:

  1. Development of tools to extract useful information from high dimensional datasets with many variables and few samples (large p, small n). A major focus here is on the mathematics of “big data” that can establish fundamental limits, helping data analysts “right size” their sample for reliable extraction of information. Areas of interest include: correlation mining in high dimension, i.e., inference of correlations between the behaviors of multiple agents from limited statistical samples, and dimensionality reduction, i.e., finding low dimensional projections of the data that preserve the information in the data that is relevant to the analyst.
  2. Data representation, analysis, and fusion on non-linear, non-Euclidean structures. Examples of such data include: data that come in the form of a probability distribution or histogram (lying on a hypersphere with the Hellinger metric); data that are defined on graphs or networks (combinatorial non-commutative structures); and data on spheres with point symmetry group structure, e.g., quaternion representations of orientation or pose.
  3. Resource-constrained, information-driven adaptive data collection. We are interested in sequential data collection strategies that utilize feedback to successively select among a number of available data sources in such a way as to minimize energy, maximize information gains, or minimize delay to decision. A principal objective has been to develop good proxies for the reward or risk associated with collecting data for a particular task (detection, estimation, classification, tracking). We are developing strategies for model-free empirical estimation of surrogate measures including Fisher information, Rényi entropy, mutual information, and Kullback-Leibler divergence. In addition, we are quantifying the loss of plan-ahead sensing performance due to the use of such proxies.
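Correlation mining from limited samples (thrust 1) can be sketched as thresholding a sample correlation matrix to form a sparse graph; with few samples, separating real correlations from sampling noise is the central difficulty. The data, planted correlation, and threshold below are all synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 50, 6                                  # few samples, several variables
Z = rng.normal(size=(n, p))
Z[:, 1] = 0.9 * Z[:, 0] + 0.1 * rng.normal(size=n)   # one planted correlation

R = np.corrcoef(Z, rowvar=False)              # p x p sample correlation matrix

# Screen: keep only variable pairs whose |sample correlation| clears a threshold
thresh = 0.5
edges = [(i, j) for i in range(p) for j in range(i + 1, p)
         if abs(R[i, j]) > thresh]
print(edges)
```

For independent variables the sample correlation fluctuates at scale roughly 1/sqrt(n), so as p grows at fixed n the number of spurious edges exceeding any fixed threshold grows too; quantifying that phase transition is the kind of fundamental limit referred to above.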
Correlation mining pipeline transforms raw high dimensional data (bottom) to information that can be rendered in interpretable sparse graphs and networks, simple screeplots, and denoised images (top). The pipeline controls data collection, feature extraction and correlation mining by integrating domain information and its assessed value relative to the desired task (on left) and accounting for constraints on data collection budget and uncertainty bounds (on right).
