BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MIDAS - ECPv5.1.5//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:MIDAS
X-ORIGINAL-URL:https://midas.umich.edu
X-WR-CALDESC:Events for MIDAS
BEGIN:VTIMEZONE
TZID:America/Detroit
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20200308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20201101T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Detroit:20200921T160000
DTEND;TZID=America/Detroit:20200921T170000
DTSTAMP:20210417T235945Z
CREATED:20200731T200036Z
LAST-MODIFIED:20200928T141544Z
UID:35527-1600704000-1600707600@midas.umich.edu
SUMMARY:MIDAS Seminar Series Presents: Ryan Adams - Princeton University
DESCRIPTION:Ryan Adams\nProfessor of Computer Science\, Princeton University\nDirector\, Undergraduate Certificate in Statistics and Machine Learning\, Princeton University\nSome New Ideas for Unbiased Gradient Estimation in Optimization\nOptimization is at the heart of machine learning\, and gradient computation is central to many optimization techniques. Stochastic optimization\, in particular\, has taken center stage as the principal method of fitting many models\, from deep neural networks to variational Bayesian posterior approximations. Generally\, one uses data subsampling to efficiently construct unbiased gradient estimators for stochastic optimization\, but this is only one possibility. In this talk\, I will discuss two alternative approaches to constructing unbiased gradient estimates. The first approach uses randomized truncation of objective functions defined as loops or limits. Such objectives arise in settings ranging from hyperparameter selection\, to fitting parameters of differential equations\, to variational inference using lower bounds on the log-marginal likelihood. The second approach revisits the Jacobian accumulation problem at the heart of automatic differentiation\, observing that it is possible to collapse the linearized computational graph of\, e.g.\, deep neural networks\, in a randomized way such that less memory is used but little performance is lost. These projects are joint work with students Alex Beatson\, Deniz Oktay\, Joshua Aduol\, Nick McGreivy\, and collaborators at Toronto and Tsinghua.\nBio: Ryan Adams is a machine learning researcher and Professor of Computer Science at Princeton University. Ryan completed his Ph.D. in physics under David MacKay at the University of Cambridge\, where he was a Gates Cambridge Scholar and a member of St. John’s College. Following his Ph.D.\,
  Ryan spent two years as a Junior Research Fellow at the University of Toronto\, as part of the Canadian Institute for Advanced Research. From 2011 to 2016\, he was an Assistant Professor at Harvard University in the School of Engineering and Applied Sciences. In 2015\, Ryan sold the company he co-founded\, Whetlab\, to Twitter\, and he spent three years in industry at Twitter and Google before joining the faculty at Princeton in 2018. Ryan has won paper awards at ICML\, UAI\, and AISTATS\, and has received the DARPA Young Faculty Award and the Alfred P. Sloan Fellowship. He also co-hosted the popular Talking Machines podcast.\n
URL:https://midas.umich.edu/event/midas-seminar-series-presents-ryan-adams-princeton-university/
CATEGORIES:featured,Featured Events,MIDAS Weekly Seminar Series
END:VEVENT
END:VCALENDAR