Online Learning and Prediction with Eventual Almost Sure Guarantees
Date: Wed, June 16, 2021
Time: 1:00pm - 3:00pm
Location: online, email for details
Speaker: Changlong Wu, PhD candidate, advisor: Dr. Narayana Prasad Santhanam
In this dissertation, we introduce a general prediction paradigm in which a learner predicts properties of an underlying model, and of future samples from that model, based on the observations seen so far. The prediction game proceeds in an online fashion for infinitely many steps, with the sample size growing as new observations arrive. After each prediction, the predictor incurs a binary (0-1) loss. The probability model generating the samples is unknown, except that it belongs to a known class of models. The goal of the learner is to incur loss 1 (i.e., make an error) only finitely many times with probability 1 under the generating model, no matter which model in the known class is in effect.
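To make the setup concrete, the following is a minimal sketch of the prediction game described above. The names (prediction_game, predictor, loss, horizon) are illustrative placeholders, not taken from the dissertation, and a simulation can of course only run for a finite horizon.

```python
from typing import Callable, Iterator


def prediction_game(samples: Iterator[float],
                    predictor: Callable[[list[float]], object],
                    loss: Callable[[object, list[float]], int],
                    horizon: int) -> list[int]:
    """Run the game for `horizon` steps and return the sequence of 0-1 losses.

    The eventual-almost-sure goal: whenever `samples` is drawn from a model in
    the known class, the loss sequence contains only finitely many 1's with
    probability 1 (here we can only observe a finite prefix).
    """
    history: list[float] = []
    losses: list[int] = []
    for _ in range(horizon):
        prediction = predictor(history)            # predict from past observations
        history.append(next(samples))              # a new sample arrives
        losses.append(loss(prediction, history))   # binary loss for this round
    return losses
```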
Any model class and loss pair admitting a predictor that makes only finitely many errors in this fashion is called eventually almost surely (or e.a.s., for short) predictable. The main contributions of this dissertation are general characterizations of e.a.s.-predictable class and loss pairs. Using these characterizations, we establish tight necessary and sufficient conditions for a wide range of prediction problems to be e.a.s.-predictable, including problems in, e.g., hypothesis testing, online learning, and risk management theory. Moreover, our results establish striking connections between e.a.s.-predictability and regularization of the class.
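As a toy illustration of e.a.s.-predictability (an assumed example, not one drawn from the dissertation), take the class of i.i.d. Bernoulli(p) sources with p != 1/2 and the task of predicting whether p > 1/2. Thresholding the empirical mean errs only finitely often with probability 1 by the strong law of large numbers, so this class and loss pair is e.a.s.-predictable:

```python
import random

def empirical_mean_predictor(history: list[int]) -> bool:
    """Predict 'p > 1/2' iff the empirical mean so far exceeds 1/2."""
    if not history:
        return True                      # arbitrary prediction before any data
    return sum(history) / len(history) > 0.5


def simulate(p: float, horizon: int, seed: int = 0) -> int:
    """Count the errors (loss 1) incurred over a finite horizon."""
    rng = random.Random(seed)
    truth = p > 0.5
    history: list[int] = []
    errors = 0
    for _ in range(horizon):
        prediction = empirical_mean_predictor(history)
        history.append(1 if rng.random() < p else 0)
        errors += int(prediction != truth)
    return errors


if __name__ == "__main__":
    # Errors may accumulate early, but the total stays finite almost surely.
    print(simulate(p=0.6, horizon=10_000))
```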
We then go beyond e.a.s.-predictability by introducing several variants of the eventual almost sure guarantee. In particular, we say that a class and loss pair is e.a.s.-learnable if it is e.a.s.-predictable and, in addition, the last error can be identified with any given confidence using a universal stopping rule. We provide general characterizations of e.a.s.-learnability that are tight in many natural settings.
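Continuing the Bernoulli toy example, one hedged sketch of how a stopping rule might certify the last error (again an assumed illustration, not the dissertation's construction) uses a time-uniform Hoeffding confidence sequence: stop once the interval around the empirical mean excludes 1/2 and commit to the induced answer, which is then correct with probability at least 1 - delta, so no further errors occur after the stopping time.

```python
import math
import random

def stop_and_commit(p: float, delta: float, max_steps: int, seed: int = 0):
    """Return (stopping time, committed guess for 'p > 1/2') or (None, None).

    Hypothetical stopping rule: with probability at least 1 - delta the
    committed guess is correct, so a learner that follows it makes no errors
    after the stopping time.
    """
    rng = random.Random(seed)
    total = 0
    for n in range(1, max_steps + 1):
        total += 1 if rng.random() < p else 0
        mean = total / n
        # Anytime-valid Hoeffding half-width, allocating delta_n = delta / (n * (n + 1)).
        eps = math.sqrt(math.log(2 * n * (n + 1) / delta) / (2 * n))
        if abs(mean - 0.5) > eps:
            return n, mean > 0.5
    return None, None                    # did not stop within max_steps


if __name__ == "__main__":
    print(stop_and_commit(p=0.6, delta=0.05, max_steps=100_000))
```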