Generalization Performance of Stochastic Gradient Methods

Contact: Saeed Masiha

The first goal of this project is to use different notions of algorithmic stability to study the generalization behavior of stochastic first-order optimization methods, and in particular the effect of variance reduction and adaptation on the generalization error. The second goal is to study whether SGD can be seen as performing regularized empirical risk minimization, i.e., implicit regularization, a popular theory for why SGD generalizes so well.
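For concreteness, one standard notion of stability (among the several the project would compare) is the uniform stability of Bousquet and Elisseeff: a (possibly randomized) algorithm A is \beta-uniformly stable if, for every pair of training sets S, S' of size n differing in a single example,

    \sup_{z} \; \mathbb{E}_{A}\!\left[ \ell(A(S), z) - \ell(A(S'), z) \right] \le \beta,

and uniform stability directly bounds the expected generalization gap,

    \left| \mathbb{E}_{S, A}\!\left[ R(A(S)) - \hat{R}_S(A(S)) \right] \right| \le \beta,

where R and \hat{R}_S denote the population and empirical risk. Hardt, Recht, and Singer ("Train faster, generalize better", 2016) showed that SGD on smooth losses is uniformly stable with \beta decreasing in n; one direction here is to ask how variance reduction and adaptive step sizes change such bounds. For the second goal, a canonical example of implicit regularization is overparameterized least squares: gradient descent initialized at zero converges to the minimum-\ell_2-norm interpolating solution, so the iterates behave as if regularized even though no explicit regularizer is present.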

Requirements:

Good knowledge of machine learning theory

Strong Python programming skills

Experience training deep neural networks with ML libraries such as PyTorch

If interested, please send your CV and a transcript of your grades to mohammadsaeed.masiha@epfl.ch