Dr. Abney has pursued research in natural language understanding and natural language learning, including information extraction, biomedical text processing, integrating text analysis into web search, robust and rapid partial parsing, stochastic grammars, spoken-language information systems, extraction of linguistic information from scanned page images, dependency-grammar induction for low-resource languages, and semisupervised learning.
I develop fast, principled methods for exploring and understanding one or more massive graphs. Beyond fast algorithmic methodologies, my research contributes graph-theoretic ideas and models, as well as real-world applications, in two main areas: (i) single-graph exploration, which includes graph summarization and inference, and (ii) multiple-graph exploration, which includes summarization of time-evolving graphs, graph similarity, and network alignment. My research is applied mainly to social, collaboration, and web networks, as well as brain connectivity graphs.
Dr. Lee’s research interests lie in machine learning and its applications to artificial intelligence. In particular, he focuses on deep learning and representation learning, which aim to learn abstract representations of data through hierarchical, compositional structure. His research also spans related topics, such as graphical models, optimization, and large-scale learning. Specific application areas include computer vision, audio recognition, robotics, text modeling, and healthcare.
I primarily work on developing scalable parallel algorithms to solve large scientific problems, in collaboration with teams from several different disciplines and application areas. I am most interested in algorithms that emphasize in-memory approaches. In another line of research I have developed serial algorithms for nonparametric regression, a flexible form of regression that assumes only a general shape, such as upward, rather than a parametric form such as linear. It can be applied to a range of learning and classification problems, such as taxonomy trees. I also work on adaptive learning, designing efficient sampling procedures.
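The "upward" shape constraint mentioned above is the setting usually called isotonic regression. As a standard illustration (not the author's own method), the classic pool-adjacent-violators algorithm finds the non-decreasing sequence closest in least squares to the observations:

```python
def isotonic_fit(y):
    """Pool Adjacent Violators: least-squares fit constrained to be non-decreasing.

    Standard textbook algorithm, shown only to illustrate shape-constrained
    (nonparametric) regression; each block stores [sum, count] so its mean
    is sum / count.
    """
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # Merge backwards while the previous block's mean exceeds the current one's
        # (cross-multiplied to avoid division).
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    # Expand each merged block back to its constituent points.
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out

print(isotonic_fit([1, 3, 2, 4]))  # → [1.0, 2.5, 2.5, 4.0]
```

The dip at `2` violates monotonicity, so the algorithm pools it with the preceding `3` and replaces both with their mean, 2.5; no parametric form (e.g., a line) is ever assumed.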
My research focus is machine learning, and I like to explore connections between optimization, statistics, and economics. Increasingly, we apply ML technologies in settings where their underlying assumptions do not hold; for example, when the data are collected adaptively from self-interested agents. In such scenarios one must work with new tools that account for incentives and strategic behavior.