Amal Alhosban


Amal Alhosban, PhD, is Assistant Professor of Computer Science at the University of Michigan-Flint. She received her Ph.D. in Computer Science from Wayne State University in 2013. Her research focuses on the Semantic Web, fault management, and wireless networks.

Murali Mani


Murali Mani, PhD, is Associate Professor of Computer Science at the University of Michigan, Flint.

The significant research problems Prof. Mani is investigating include big data management, big data analytics and visualization, provenance, query processing of encrypted data, event stream processing, XML stream processing, data modeling using XML schemas, and effective computer science education. In addition, he has worked in industry on clickstream analytics (2015) and on web search engines (1999-2000). Prof. Mani’s significant publications are listed on DBLP at: http://dblp.uni-trier.de/pers/hd/m/Mani:Murali.

Illustrating how our SMART system effectively integrates big data processing and data visualization to enable big data visualization. The left side shows a typical data visualization scenario, where the different analysts are using their different visualization systems. These visualization systems can provide interactive visualizations but cannot handle the complexities of big data. They interact with a distributed data processing system that can handle the complexities of big data. The SMART system improves the user experience by carefully sending additional data to the visualization system in response to a request from an analyst so that future visualization requests can be answered directly by the visualization system without accessing the data processing system.
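The over-fetching idea described in the caption above can be sketched in a few lines. All names here (the class, the margin parameter, the toy backend) are illustrative assumptions, not the actual SMART implementation: when the visualization system requests a data range, a middleware layer fetches a widened superset from the slow distributed store, so nearby follow-up requests are answered locally.

```python
# Sketch of range-based prefetching for interactive visualization
# (illustrative names, not the actual SMART system).

class PrefetchingCache:
    def __init__(self, backend, margin=0.5):
        self.backend = backend          # slow distributed data store
        self.margin = margin            # fraction of the range to over-fetch
        self.cached_range = None        # (lo, hi) currently held locally
        self.cached_rows = []

    def query(self, lo, hi):
        # Serve from the local cache when the request is fully covered.
        if self.cached_range and self.cached_range[0] <= lo and hi <= self.cached_range[1]:
            return [r for r in self.cached_rows if lo <= r[0] <= hi]
        # Otherwise over-fetch a widened range from the backend.
        pad = (hi - lo) * self.margin
        new_lo, new_hi = lo - pad, hi + pad
        self.cached_rows = self.backend(new_lo, new_hi)
        self.cached_range = (new_lo, new_hi)
        return [r for r in self.cached_rows if lo <= r[0] <= hi]

# Toy backend: rows keyed by timestamp 0..99; `calls` counts round trips.
calls = []
def backend(lo, hi):
    calls.append((lo, hi))
    return [(t, t * 2) for t in range(100) if lo <= t <= hi]

cache = PrefetchingCache(backend)
cache.query(40, 50)   # miss: backend fetches the widened range 35..55
cache.query(42, 48)   # hit: answered locally, no backend round trip
```

After the first request, the second (narrower) request never touches the backend, which is the interactivity gain the caption describes.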


 

Mark Allison


Mark Allison, PhD, is Assistant Professor of Computer Science in the Department of Computer Science, Engineering and Physics at the University of Michigan-Flint.

Dr. Allison’s research pertains to the autonomic control of complex cyber-physical systems utilizing software models as first-class artifacts. Domains being explored are microgrid energy management and unmanned aerial vehicles (UAVs) in swarms.

 

Matthew Kay


Matthew Kay, PhD, is Assistant Professor of Information, School of Information and Assistant Professor of Electrical Engineering and Computer Science, College of Engineering, at the University of Michigan, Ann Arbor.

Prof. Kay’s research includes work on communicating uncertainty, usable statistics, and personal informatics. People are increasingly exposed to sensing and prediction in their daily lives (“how many steps did I take today?”, “how long until my bus shows up?”, “how much do I weigh?”). Uncertainty is both inherent to these systems and usually poorly communicated. To build understandable data presentations, we must study how people interpret their data and what goals they have for it, which informs the way that we should communicate results from our models, which in turn determines what models we must use in the first place. Prof. Kay tackles these problems using a multi-faceted approach, including qualitative and quantitative analysis of behavior, building and evaluating interactive systems, and designing and testing visualization techniques. His work draws on approaches from human-computer interaction, information visualization, and statistics to build information visualizations that people can more easily understand along with the models to back those visualizations.
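One common way to make predictive uncertainty countable, in the spirit of the "how long until my bus shows up?" example above, is to summarize a continuous predictive distribution as a small set of equally likely quantile values. This sketch is illustrative only (the distribution and numbers are assumptions, not a specific system of Prof. Kay's):

```python
# Summarizing a predictive distribution as 20 equally likely outcomes,
# so a reader can count chances rather than interpret a density curve.
import statistics

# Assumed predictive distribution: bus arrival in minutes,
# normal with mean 10 and standard deviation 2 (illustrative).
dist = statistics.NormalDist(mu=10, sigma=2)

# 20 equally likely outcomes: each "dot" represents a 1-in-20 chance.
dots = [dist.inv_cdf((i + 0.5) / 20) for i in range(20)]

# A rider can now ask: "in how many of 20 equally likely futures
# is the bus here within 12 minutes?"
by_12 = sum(d <= 12 for d in dots)
```

Presenting the prediction as "17 out of 20 chances" rather than a probability density is one concrete instance of tailoring the model's output to how people actually interpret their data.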

 

Emily Mower Provost


Prof. Mower Provost's research in the CHAI lab focuses on emotion modeling (classification and perception) and assistive technology (bipolar disorder and aphasia).

Behavioral Signal Processing Approach to Modeling Human-centered Data


Mingyan Liu


Mingyan Liu, PhD, is Professor of Electrical Engineering and Computer Science, College of Engineering, at the University of Michigan, Ann Arbor.

Prof. Liu’s research interest lies in optimal resource allocation, sequential decision theory, online and machine learning, performance modeling, analysis, and design of large-scale, decentralized, stochastic and networked systems, using tools including stochastic control, optimization, game theory and mechanism design. Her most recent research activities involve sequential learning, modeling and mining of large scale Internet measurement data concerning cyber security, and incentive mechanisms for inter-dependent security games. Within this context, her research group is actively working on the following directions.

1. Cyber security incident forecast. The goal is to predict an organization’s likelihood of having a cyber security incident in the near future using a variety of externally collected Internet measurement data, some of which capture active maliciousness (e.g., spam and phishing/malware activities) while others capture more latent factors (e.g., misconfiguration and mismanagement). While machine learning techniques have been extensively used for detection in the cyber security literature, using them for prediction has rarely been done. This is the first study on the prediction of broad categories of security incidents on an organizational level. Our work to date shows that with the right choice of feature set, highly accurate predictions can be achieved with a forecasting window of 6-12 months. Given the increasing number of high-profile security incidents (Target, Home Depot, JP Morgan Chase, and Anthem, just to name a few) and the social and economic costs they inflict, this work will have a major impact on cyber security risk management.

2. Detecting propagation in temporal data and its application to identifying phishing activities. Phishing activities propagate from one network to another in a highly regular fashion, a phenomenon known as fast-flux, though how the destination networks are chosen by the malicious campaign remains unknown. An interesting challenge arises as to whether one can use community detection methods to automatically extract those networks involved in a single phishing campaign; the ability to do so would be critical to forensic analysis. While there have been many results on detecting communities defined as subsets of relatively strongly connected entities, the phishing activity exhibits a unique propagating property that is better captured using an epidemic model. By using a combination of epidemic modeling and regression we can identify this type of propagating community with reasonable accuracy; we are working on alternative methods as well.

3. Data-driven modeling of organizational and end-user security posture. We are working to build models that accurately capture the cyber security postures of end-users as well as organizations, using large quantities of Internet measurement data. One domain concerns how software vendors disclose security vulnerabilities in their products, how they deploy software upgrades and patches, and in turn, how end users install these patches; all these elements combined lead to a better understanding of the overall state of vulnerability of a given machine and how that relates to user behaviors. Another domain concerns the interconnectedness of today’s Internet, which implies that what we see from one network is inevitably related to others. We use this connection to gain better insight into the conditions of not just a single network viewed in isolation, but multiple networks viewed together.
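The propagation idea in item 2 can be illustrated with a toy sketch. Everything here is an illustrative assumption rather than the group's actual method: a campaign that hops between networks at a regular interval leaves evenly spaced first-seen times, and we recover its members as the longest chain of onsets fitting that regular spacing, separating them from unrelated background activity.

```python
# Toy propagating-community extraction from onset times
# (illustrative heuristic, not the published epidemic/regression method).

# First-seen time of phishing activity per network.
campaign = {f"net{i}": 10.0 + 5.0 * i for i in range(8)}   # regular 5s hops
background = {f"bg{i}": t
              for i, t in enumerate([2.5, 18.5, 33.5, 61.5, 77.5, 91.5])}
onsets = {**campaign, **background}

def extract_propagating_community(onsets, hop=5.0, tol=0.4):
    """Return the longest chain of networks whose onsets are ~`hop` apart."""
    times = sorted(onsets.items(), key=lambda kv: kv[1])
    best = []
    for i, (seed, t0) in enumerate(times):
        chain, last = [seed], t0
        for net, t in times[i + 1:]:
            if abs((t - last) - hop) <= tol:
                chain.append(net)
                last = t   # extend the chain one hop further
        if len(chain) > len(best):
            best = chain
    return best

members = extract_propagating_community(onsets)
```

The chain seeded at the campaign's first network recovers all eight campaign members while the irregular background onsets are left out; an epidemic model plays the analogous role of "expected next-hop timing" in the real setting.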

A predictive analytics approach to forecasting cyber security incidents. We start from Internet-scale measurement on the security postures of network entities. We also collect security incident reports to use as labels in a supervised learning framework. The collected data then goes through extensive processing and domain-specific feature extraction. Features are then used to train a classifier that generates predictions when we input new features, on the likelihood of a future incident for the entity associated with the input features. We are also actively seeking to understand the causal relationship among different features and the security interdependence among different network entities. Lastly, risk prediction helps us design better incentive mechanisms which is another facet of our research in this domain.
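The supervised pipeline described above (features extracted from measurement data, incident reports as labels, a trained classifier producing risk predictions) can be sketched end to end with synthetic data. The feature names, toy labeling rule, and plain logistic-regression fit are all illustrative assumptions, not the study's actual feeds or models.

```python
# End-to-end sketch of supervised incident forecasting on synthetic data.
import math
import random

random.seed(0)

def make_org():
    mismanagement = random.random()    # latent-factor feature (e.g., misconfiguration)
    maliciousness = random.random()    # active-maliciousness feature (e.g., spam)
    # Toy ground truth: poor hygiene plus malicious signals -> incident.
    label = 1 if mismanagement + maliciousness > 1.0 else 0
    return [1.0, mismanagement, maliciousness], label   # leading 1.0 is the bias term

train = [make_org() for _ in range(300)]

# Plain logistic regression fit by stochastic gradient descent (stdlib only).
w = [0.0, 0.0, 0.0]
for _ in range(200):
    for x, y in train:
        p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        for i in range(3):
            w[i] += 0.1 * (y - p) * x[i]

def predict(x):
    # Risk prediction for a new entity from its input features.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

held_out = [make_org() for _ in range(100)]
accuracy = sum(predict(x) == y for x, y in held_out) / len(held_out)
```

Real deployments replace the toy generator with Internet-scale measurement features and incident-report labels, and typically use richer models, but the train-on-labeled-entities, predict-on-new-entities structure is the same.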


Necmiye Ozay


Necmiye Ozay, PhD, is Assistant Professor of Electrical Engineering and Computer Science, College of Engineering, at the University of Michigan, Ann Arbor.

Prof. Ozay and her team develop the scientific foundations and associated algorithmic tools for compactly representing and analyzing heterogeneous data streams from sensor/information-rich networked dynamical systems. They take a unified dynamics-based and data-driven approach to the design of passive and active monitors for anomaly detection in such systems. Dynamical models naturally capture temporal (i.e., causal) relations within data streams. Moreover, one can use hybrid and networked dynamical models to capture, respectively, logical relations and interactions between different data sources. They study structural properties of networks and dynamics to understand fundamental limitations of anomaly detection from data. By recasting the information extraction problem as a networked hybrid system identification problem, they bring to bear tools from computer science, systems and control theory, and convex optimization to efficiently and rigorously analyze and organize information. The applications include diagnostics, anomaly and change detection in critical infrastructure such as building management systems, transportation and energy networks.
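A minimal sketch of the dynamics-based monitoring idea: identify a model from data, then flag time steps whose prediction residual is unusually large. The scalar linear model, the injected fault, and the threshold below are illustrative assumptions; the actual work concerns far richer hybrid and networked models.

```python
# Residual-based passive monitoring with an identified scalar linear model.

# Nominal stream generated by the dynamics x[t+1] = 0.9 * x[t].
xs = [10.0]
for _ in range(30):
    xs.append(0.9 * xs[-1])

# Inject an anomaly (e.g., a corrupted sensor reading) at step 15.
xs[15] += 4.0

# Least-squares estimate of `a` in x[t+1] = a * x[t] from consecutive
# pairs -- system identification in its simplest scalar form.
pairs = list(zip(xs[:-1], xs[1:]))
a_hat = sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)

# Passive monitor: flag steps where the one-step prediction is far off.
threshold = 1.0
anomalies = [t + 1 for t, (x, y) in enumerate(pairs)
             if abs(y - a_hat * x) > threshold]
```

The monitor flags both the corrupted step and the step after it (the model predicts the next value from the corrupted one), which is exactly the temporal/causal structure that a dynamical model contributes over a purely static detector.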

Luis E. Ortiz


Luis Ortiz, PhD, is Assistant Professor of Computer and Information Science, College of Engineering and Computer Science, at the University of Michigan-Dearborn.

The study of large complex systems of structured strategic interaction, such as economic, social, biological, financial, or large computer networks, provides substantial opportunities for fundamental computational and scientific contributions. Prof. Ortiz’s research focuses on problems emerging from the study of systems involving the interaction of a large number of “entities,” a way of abstractly and generally capturing individuals, institutions, corporations, biological organisms, or even the individual chemical components of which they are made (e.g., proteins and DNA). Current technology has facilitated the collection and public availability of vast amounts of data, particularly capturing system behavior at fine levels of granularity. His group studies behavioral data of a strategic nature at big data levels. One of their main objectives is to develop computational tools for data science, and in particular for learning large-population models from such big sources of behavioral data, which can later be used to study, analyze, predict, and alter future system behavior at a variety of scales, and thus improve the overall efficiency of real-world complex systems (e.g., the smart grid, social and political networks, interdependent security and defense systems, and microfinance markets, to name a few).

Jie Shen


Jie Shen, PhD, is Professor of Computer and Information Science at the University of Michigan, Dearborn.

Prof. Shen’s research interests are in the digital diagnosis of material damage based on sensors, computational science, and numerical analysis with large-scale 3D computed tomography data: (1) establishment of a multi-resolution transformation rule of material defects, (2) design of an accurate digital diagnosis method for material damage, (3) reconstruction of defects in material domains from X-ray CT data, and (4) parallel computation of material damage. His team has also conducted a series of studies on improving the quality of large-scale laser scanning data in reverse engineering and industrial inspection: (1) detection and removal of non-isolated outlier data clusters, (2) accurate correction of surface data noise in polygonal meshes, and (3) denoising of two-dimensional geometric discontinuities.
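One standard building block in this line of work is flagging outliers in scan data by their distance to neighboring points. The sketch below is an illustrative assumption (a generic k-nearest-neighbor distance filter on a toy point cloud), not Prof. Shen's published algorithms:

```python
# k-NN distance filter for outliers in a 3D point cloud (toy sketch).
import math

def knn_outliers(points, k=3, factor=2.0):
    """Flag points whose mean k-NN distance exceeds `factor` x the median."""
    def mean_knn_dist(p):
        dists = sorted(math.dist(p, q) for q in points if q is not p)
        return sum(dists[:k]) / k
    scores = [mean_knn_dist(p) for p in points]
    median = sorted(scores)[len(scores) // 2]
    return [p for p, s in zip(points, scores) if s > factor * median]

# Dense surface patch (0.1 spacing) plus two isolated noise points.
surface = [(x * 0.1, y * 0.1, 0.0) for x in range(10) for y in range(10)]
noise = [(5.0, 5.0, 3.0), (-4.0, 2.0, 2.0)]
outliers = knn_outliers(surface + noise)
```

Points on the dense surface have small neighbor distances and survive, while the two isolated points are flagged. Handling *non-isolated* outlier clusters, as in the work above, requires going beyond this per-point score, since a tight cluster of bad points can have small internal neighbor distances.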

Processing and Analysis of 3D Large-Scale Engineering Data
