
EE Seminars

Learn Policy Optimally via Efficiently Utilizing Data


Date:  Fri, September 20, 2019
Time:  11:30am - 12:30pm
Location:  Holmes Hall 389
Speaker:  Dr. Lin Yang

Abstract: 

Recent years have witnessed increasing empirical successes in reinforcement learning. Nevertheless, it is ironic that many theoretical problems in this field are not well understood, even in the most basic settings. For instance, the optimal sample and time complexities of policy learning in a finite-state Markov decision process (MDP) remain unclear. Given a state-transition sampler, we develop a novel algorithm that learns an approximately optimal policy in near-optimal time and using a minimal number of samples. The algorithm makes updates by processing samples in a “streaming” fashion, which requires little memory and naturally adapts to large-scale data. Our result resolves the long-standing open problem on the sample complexity of Markov decision processes and provides new insights into how to use data efficiently in learning and optimization. Using this result as a starting point, we show that we can additionally reduce the dimensionality of an MDP via linear embedding, and do so optimally.
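For readers unfamiliar with the generative-model setting described above, the sketch below illustrates the basic idea of learning a policy from a state-transition sampler by folding samples into running estimates one at a time. It is a plain Monte Carlo value-iteration sketch under assumed names (`sampler`, `rewards`, `n_states`, `n_actions`), not the speaker's near-optimal algorithm.

```python
# Minimal sketch, NOT the speaker's algorithm: approximate value iteration for a
# finite MDP using only a generative state-transition sampler, with each sample
# folded into a running mean and then discarded ("streaming" style).
# All names below (sampler, rewards, etc.) are illustrative assumptions.
import numpy as np

def sampled_value_iteration(sampler, rewards, n_states, n_actions,
                            gamma=0.9, n_iters=50, samples_per_pair=100):
    """sampler(s, a) -> next state s', drawn from the unknown transition kernel.
    rewards[s, a]   -> known (or separately estimated) expected reward."""
    q = np.zeros((n_states, n_actions))
    for _ in range(n_iters):
        v = q.max(axis=1)                      # greedy value estimate
        q_new = np.zeros_like(q)
        for s in range(n_states):
            for a in range(n_actions):
                # Streaming Monte Carlo estimate of E[V(s') | s, a]:
                # only the running mean is kept in memory.
                acc = 0.0
                for k in range(1, samples_per_pair + 1):
                    s_next = sampler(s, a)
                    acc += (v[s_next] - acc) / k
                q_new[s, a] = rewards[s, a] + gamma * acc
        q = q_new
    policy = q.argmax(axis=1)                  # approximately optimal policy
    return q, policy
```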

Bio:

Dr. Lin Yang is an Assistant Professor in the Electrical and Computer Engineering Department at UCLA. Prior to this, he was a postdoctoral researcher at Princeton University. He obtained two Ph.D. degrees simultaneously, in Computer Science and in Physics & Astronomy, from Johns Hopkins University in 2017, and a bachelor's degree from Tsinghua University. His research focuses on developing fast algorithms for large-scale optimization and machine learning. His algorithms have been applied to real-world problems, including accelerating astrophysical discoveries and improving network security. He has published numerous papers in top computer science conferences, including NeurIPS, ICML, STOC, and PODS. At Johns Hopkins, he was a recipient of the Dean Robert H. Roy Fellowship.


