
EE Seminars

Almost Sure and High Probability Guarantees for SGD with Random Reshuffling


Date:  Mon, December 18, 2023
Time:  11:00am - 12:00pm
Location:  Holmes Hall 389
Speaker:  Professor Xiao Li, The Chinese University of Hong Kong

Abstract

We study the stochastic gradient method with random reshuffling (RR) for smooth nonconvex optimization problems. Although this method is widely used in practice, e.g., for training neural networks, its convergence behavior is well understood only in a few limited settings. We first conduct a novel convergence analysis for the non-descent RR method with diminishing step sizes based on the Kurdyka-Łojasiewicz (KL) inequality, which generalizes the standard KL framework. In particular, we show that RR converges almost surely at a rate depending on the KL exponent. Additionally, by studying the concentration properties of RR's sampling procedure, we establish a new high-probability sample complexity guarantee for RR, which effectively characterizes the efficiency of a single RR execution. Our numerical experiments support the theoretical findings.
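For readers unfamiliar with the algorithm under study, the following is a minimal sketch of SGD with random reshuffling: each epoch draws a fresh permutation of the component functions and sweeps through them once, without replacement, with a diminishing step size. The function names (sgd_random_reshuffling, grad_i, step_size) and the least-squares test problem are illustrative assumptions, not the speaker's implementation.

```python
import numpy as np

def sgd_random_reshuffling(grad_i, x0, n_samples, n_epochs, step_size):
    """Sketch of SGD with random reshuffling (RR).

    grad_i(x, i)  -- gradient of the i-th component function at x (assumed signature).
    step_size(t)  -- diminishing step size used throughout epoch t (assumed).
    """
    x = x0.copy()
    rng = np.random.default_rng(0)
    for t in range(n_epochs):
        # Reshuffle: draw a fresh permutation each epoch, then visit every
        # component exactly once (sampling without replacement).
        perm = rng.permutation(n_samples)
        alpha = step_size(t)
        for i in perm:
            x -= alpha * grad_i(x, i)
    return x

# Illustrative use on least squares: f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2
rng = np.random.default_rng(1)
A, b = rng.standard_normal((100, 5)), rng.standard_normal(100)
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
x_hat = sgd_random_reshuffling(grad_i, np.zeros(5), 100, 200, lambda t: 1.0 / (t + 10))
```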

Biography 

Xiao Li has been an Assistant Professor at the School of Data Science, The Chinese University of Hong Kong, Shenzhen, since the summer of 2020. He completed his Ph.D. at The Chinese University of Hong Kong between 2016 and 2020 under the supervision of Professors Thierry Blu and Anthony Man-Cho So, and his undergraduate studies at Zhejiang University of Technology from 2012 to 2016.

Professor Xiao Li works at the intersection of continuous optimization, machine learning, and signal processing. His primary interest lies in designing and analyzing deterministic and stochastic optimization algorithms, with an emphasis on the nonconvex and nonsmooth formulations that arise in machine learning and signal processing.
