Beyond Worst-Case in Machine Learning: The Oracle Model

Date
-
Speaker
Alon Gonen
Place
BIU Engineering Building 1103, Room #329
Abstract

In recent years there has been an increasing gap between the success of machine learning algorithms and our ability to explain their success theoretically.
Namely, many of the problems that are solved to a satisfactory degree of precision are computationally hard in the worst case. Fortunately, reasonable assumptions about the data often help us get around these worst-case impediments and allow us to rigorously analyze heuristics that are used in practice.
In this talk I will advocate a complementary approach: instead of explicitly characterizing some desired "niceness" properties of the data, we assume access to an optimization oracle that solves a relatively simpler task. This allows us to identify the sources of hardness and extend our theoretical understanding to new domains. Furthermore, we will show that seemingly innocent (and arguably justifiable) modifications to the oracle can lead to tractable reductions and even bypass hardness results.
We demonstrate these ideas using the following results: i) an efficient algorithm for non-convex online learning using an optimization oracle; ii) a faster boosting algorithm using a "simple" weak learner; iii) an efficient reduction from online to private learning.
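To make the oracle-access viewpoint concrete, here is a minimal, hypothetical Python sketch in the spirit of result (i): a Follow-the-Perturbed-Leader style online learner over a finite decision set that never inspects the (possibly non-convex) losses directly, and instead queries an offline optimization oracle on the history observed so far. The function names, the perturbation distribution, and the toy losses are illustrative assumptions, not code from the talk or the underlying papers.

    import numpy as np

    def offline_oracle(past_losses, perturbation, decision_set):
        # Hypothetical offline optimization oracle: return the decision that
        # minimizes the cumulative past loss plus a linear perturbation term.
        scores = [sum(loss(x) for loss in past_losses) + perturbation @ x
                  for x in decision_set]
        return decision_set[int(np.argmin(scores))]

    def perturbed_leader(loss_stream, decision_set, eta=0.1, seed=0):
        # Follow-the-Perturbed-Leader sketch: one oracle call per round;
        # the learner interacts with the losses only through the oracle.
        rng = np.random.default_rng(seed)
        dim = len(decision_set[0])
        history, total_loss = [], 0.0
        for loss in loss_stream:
            # Fresh signed-exponential perturbation each round (an assumption).
            sigma = rng.exponential(1.0 / eta, dim) * rng.choice([-1.0, 1.0], dim)
            x = offline_oracle(history, sigma, decision_set)
            total_loss += loss(x)   # suffer the loss of the committed decision
            history.append(loss)    # only then is the loss revealed
        return total_loss

    # Toy usage: vertices of the unit square and arbitrary non-convex losses.
    decisions = [np.array(v, dtype=float) for v in [(0, 0), (0, 1), (1, 0), (1, 1)]]
    losses = [lambda x, t=t: float(np.sin(3 * x[0] + t) * np.cos(2 * x[1] - t))
              for t in range(50)]
    print(perturbed_leader(losses, decisions))

The point of the sketch is the interface: all problem-specific computation is delegated to the offline oracle, so the online-learning analysis can rest on the availability of such an oracle rather than on convexity or other structural assumptions about the losses.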

Based on joint works with Naman Agarwal, Noga Alon, Elad Hazan, and Shay Moran.

Bio:
Alon Gonen is a postdoctoral researcher at the University of California, San Diego. Prior to that he was a postdoctoral researcher at Princeton University. He completed his MSc and PhD at the Hebrew University under the supervision of Shai Shalev-Shwartz. His research focuses on theoretical aspects of machine learning.

Last Updated Date: 23/12/2019