The goal of my research is to leverage network analysis techniques to uncover how the brain mediates sex hormone influences on gendered behavior across the lifespan. Specifically, my data science research concerns the creation and application of person-specific connectivity analyses, such as unified structural equation models, to time series data; these are intensive longitudinal data, including functional neuroimages, daily diaries, and observations. I then use these data science methods to investigate the links between androgens (e.g., testosterone) and estradiol at key developmental periods, such as puberty, and behaviors that typically show sex differences, including aspects of cognition and psychopathology.
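Person-specific time-series models of this kind include lagged effects, where a variable at one time point predicts itself (or another variable) at the next. As a minimal illustration of that lagged-effects component, the sketch below estimates a lag-1 autoregressive coefficient for a single person's daily-diary series via ordinary least squares; the data and the variable name `daily_mood` are illustrative, not actual study data, and a full unified structural equation model would estimate contemporaneous and lagged paths among many variables at once.

```python
# Minimal sketch: estimate a lag-1 (autoregressive) coefficient for one
# person's daily-diary time series via ordinary least squares.
# The series below is illustrative, not real hormone or behavior data.

def lag1_coefficient(series):
    """OLS slope of x[t] on x[t-1]: beta = cov(x_lag, x) / var(x_lag)."""
    x_lag = series[:-1]
    x_now = series[1:]
    n = len(x_lag)
    mean_lag = sum(x_lag) / n
    mean_now = sum(x_now) / n
    cov = sum((a - mean_lag) * (b - mean_now) for a, b in zip(x_lag, x_now))
    var = sum((a - mean_lag) ** 2 for a in x_lag)
    return cov / var

daily_mood = [3.1, 3.4, 3.2, 3.8, 3.6, 3.9, 4.1, 3.7, 4.0, 4.2]
beta = lag1_coefficient(daily_mood)
print(round(beta, 3))
```

A positive coefficient here indicates day-to-day carryover; fitting the same model separately for each participant is what makes the analysis person-specific.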
My major research revolves around understanding children's and adolescents' pathways into adulthood and the role of the educational system in this process. I analyze academic and psychosocial development from a life-span perspective, relying exclusively on longitudinal data collected over longer periods of time (e.g., from middle school to young adulthood). The hierarchical structure of the school system (student/classroom/school/district/state/nation) requires statistical tools that can handle this kind of nested data.
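The standard tools for such nested data are multilevel (random-intercept) models, whose core idea is partial pooling: each classroom's estimate is shrunk toward the overall mean in proportion to how little data it contributes. The sketch below illustrates that idea in plain Python; the shrinkage weight n / (n + k) is a simplified stand-in, since real multilevel software estimates it from variance components, and the class names and scores are made up.

```python
# Minimal sketch of partial pooling, the core idea behind the
# random-intercept multilevel models used for students nested in
# classrooms. The shrinkage weight n / (n + k) is illustrative; real
# software estimates it from variance components.

def pooled_means(scores_by_class, k=5.0):
    """Shrink each classroom mean toward the grand mean.

    Small classrooms (little data) are pulled strongly toward the
    grand mean; large classrooms keep most of their own mean.
    """
    all_scores = [s for scores in scores_by_class.values() for s in scores]
    grand_mean = sum(all_scores) / len(all_scores)
    shrunk = {}
    for cls, scores in scores_by_class.items():
        n = len(scores)
        class_mean = sum(scores) / n
        w = n / (n + k)  # illustrative shrinkage weight
        shrunk[cls] = w * class_mean + (1 - w) * grand_mean
    return shrunk

example = {"class_a": [70, 75, 80, 85, 90, 72, 78, 83], "class_b": [95, 60]}
result = pooled_means(example)
print(result)
```

Note how the two-student classroom's estimate moves most of the way toward the grand mean, while the eight-student classroom's barely moves; this is why multilevel models give more stable estimates for small groups than separate per-classroom averages.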
My research includes work on communicating uncertainty, usable statistics, and personal informatics. People are increasingly exposed to sensing and prediction in their daily lives (“how many steps did I take today?”, “how long until my bus shows up?”, “how much do I weigh?”). Uncertainty is both inherent to these systems and usually poorly communicated. To build understandable data presentations, we must study how people interpret their data and what goals they have for it, which informs the way that we should communicate results from our models, which in turn determines what models we must use in the first place. I tackle these problems using a multi-faceted approach, including qualitative and quantitative analysis of behavior, building and evaluating interactive systems, and designing and testing visualization techniques. My work draws on approaches from human-computer interaction, information visualization, and statistics to build information visualizations that people can more easily understand along with the models to back those visualizations.
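One concrete way such a system can communicate uncertainty rather than hide it is to report a quantile interval instead of a single point prediction. The sketch below shows the idea for the bus-arrival example; the sample values are illustrative stand-ins for draws from a predictive model or historical arrival data, not any real transit feed.

```python
# Minimal sketch: instead of reporting a single predicted bus arrival
# time, report a quantile interval that communicates the prediction's
# uncertainty. The samples (minutes until arrival) are illustrative.

def quantile(sorted_xs, q):
    """Linear-interpolation quantile of a pre-sorted list."""
    pos = q * (len(sorted_xs) - 1)
    lo = int(pos)
    frac = pos - lo
    if lo + 1 < len(sorted_xs):
        return sorted_xs[lo] * (1 - frac) + sorted_xs[lo + 1] * frac
    return sorted_xs[lo]

samples = sorted([4.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 9.0, 11.0, 14.0])
low, mid, high = (quantile(samples, q) for q in (0.1, 0.5, 0.9))
print(f"Bus likely in {mid:.0f} min (80% interval: {low:.0f}-{high:.0f} min)")
```

Whether users actually interpret an interval like "5-11 min" correctly is itself an empirical question, which is why the modeling choice and the presentation choice have to be studied together.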
Dr. Teasley’s research has focused on issues of collaboration and learning, looking specifically at how sociotechnical systems can be used to support effective collaborative processes and successful learning outcomes. As Director of the LED Lab, she leads learning analytics-based research to investigate how instructional technologies and digital media are used to innovate teaching, learning, and collaboration. The LED Lab is committed to making a significant contribution to scholarship about learning at Michigan and in the broader field as well, by building an empirical evidentiary base for the design and support of technology-rich learning environments.
I developed LectureTools with NSF support in response to a need to increase opportunities for student participation in large lecture courses. It was subsequently spun off campus with NSF SBIR funding and was acquired by Echo360, which has incorporated it into its Active Learning Platform (ALP). ALP collects data on how students behave before, during, and after class, including how many slides they view, how many notes they type, how many questions they answer, and how many gradable questions they answer correctly, as well as what questions they pose and how often they indicate confusion.
These unique data are used to understand how student participation relates to exam grades and to build models that forecast, far earlier in the semester, which students will have trouble in a class. My goal is to combine data from ALP with other data sets to ascertain which participation data, if any, best predict student success.
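Before any modeling, raw clickstream events have to be aggregated into per-student participation features of the kind listed above. The sketch below shows one minimal way to do that in Python; the event names, fields, and records are hypothetical and do not reflect the actual ALP schema.

```python
# Minimal sketch: turn raw clickstream events into per-student
# participation features (slides viewed, notes typed, questions
# answered, correct answers, confusion flags). Event names and fields
# are hypothetical, not the actual ALP data format.
from collections import Counter, defaultdict

events = [
    {"student": "s1", "type": "slide_view"},
    {"student": "s1", "type": "slide_view"},
    {"student": "s1", "type": "note_typed"},
    {"student": "s2", "type": "question_answered", "correct": True},
    {"student": "s1", "type": "question_answered", "correct": False},
    {"student": "s2", "type": "confusion_flag"},
]

features = defaultdict(Counter)
for ev in events:
    features[ev["student"]][ev["type"]] += 1
    if ev.get("correct"):
        features[ev["student"]]["correct_answers"] += 1

print(dict(features["s1"]))
```

Feature tables like this, one row per student per week, are what would then be joined with other data sets and fed into the forecasting models.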
Dr. Abney has pursued research in natural language understanding and natural language learning, including information extraction, biomedical text processing, integrating text analysis into web search, robust and rapid partial parsing, stochastic grammars, spoken-language information systems, extraction of linguistic information from scanned page images, dependency-grammar induction for low-resource languages, and semisupervised learning.
The basis of my work is to make the often invisible traces created by students' interactions with learning technologies available to instructors, technology solutions, and students themselves. This often requires creating novel educational technologies designed from the outset with detailed tracking of user activity. Coupled with machine learning and data mining techniques (e.g., classification, regression, and clustering methods), clickstream data from these technologies are used to build predictive models of student success and to better understand how technology affords benefits in teaching and learning. I am interested in teaching and learning at broad scale through Massive Open Online Courses (MOOCs), the use of predictive models to understand student success, and the analysis of educational discourse and student writing.
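As a concrete instance of the classification step, the sketch below trains a logistic-regression model by gradient descent on toy participation features. The feature names, data, and hyperparameters are all illustrative, not drawn from any real course; in practice a library implementation would be used, but the plain-Python version makes the mechanics visible.

```python
# Minimal sketch of the predictive-modeling step: logistic regression
# fit by gradient descent on toy participation features
# ([slides_viewed, questions_answered], scaled to 0-1). All data and
# feature names are illustrative, not real course data.
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Return weights w and bias b fit by per-example gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Predicted probability of success for one student's features."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z))

# toy data: [slides_viewed, questions_answered] -> passed (1) or not (0)
X = [[0.1, 0.0], [0.2, 0.1], [0.8, 0.9], [0.9, 0.7], [0.3, 0.2], [0.7, 0.8]]
y = [0, 0, 1, 1, 0, 1]
w, b = train_logistic(X, y)
print(predict(w, b, [0.85, 0.8]))  # high participation -> high probability
```

The practical value of such a model lies less in the final accuracy number than in how early in the term it can flag students who may need support.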
My lab creates systems that use a combination of both human and machine computation to solve problems quickly and reliably. We have introduced the idea of continuous real-time crowdsourcing, as well as the ‘crowd agent’ model, which uses computer-mediated groups of people submitting input simultaneously to create a collective intelligence capable of completing tasks better than any constituent member.