
Tuesday, January 9th, 2018

14:30, room to be specified (see location)
Note: unusual room (next building)

Michèle Sébag & Marc Schoenauer

(TAU team)

Title: Stochastic Gradient Descent: Going As Fast As Possible But Not Faster


Abstract

When applied to training deep neural networks, stochastic gradient
descent (SGD) often exhibits steady progression phases, interrupted by
catastrophic episodes in which the loss and gradient norm explode. A
possible mitigation of such events is to slow down the learning process.

This paper presents a novel approach to controlling the SGD learning
rate, based on two statistical tests. The first, aimed at fast learning,
compares the momentum of the normalized gradient vectors to that of
random unit vectors, and accordingly gracefully increases or decreases
the learning rate. The second is a change-point detection test, aimed at
detecting catastrophic learning episodes; upon its triggering, the
learning rate is instantly halved.
The combined ability to speed up and slow down the learning rate
allows the proposed approach, called SALeRa, to learn as fast as
possible but not faster. Experiments on real-world benchmarks show that
SALeRa performs well in practice, and compares favorably to the state of
the art.
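
For readers curious about the mechanics, below is a minimal sketch in
Python/NumPy of a SALeRa-style learning-rate controller, not the
authors' implementation. The adaptation factor, the Page-Hinkley
parameters (ph_delta, ph_threshold), and the use of a single sampled
random unit vector as the momentum baseline (the paper may compare
against an expected norm instead) are all illustrative assumptions.

    import numpy as np

    class SALeRaSketch:
        """Illustrative SALeRa-style learning-rate controller (a sketch;
        hyperparameter names and values are assumptions)."""

        def __init__(self, lr=0.01, dim=1000, beta=0.9,
                     adapt=1.1, ph_delta=0.005, ph_threshold=10.0):
            self.lr = lr
            self.beta = beta              # momentum factor for gradient directions
            self.adapt = adapt            # multiplicative LR adaptation factor (assumed)
            self.m = np.zeros(dim)        # momentum of normalized gradients
            self.m_rand = np.zeros(dim)   # momentum of random unit vectors (baseline)
            # Page-Hinkley change-point statistics on the loss
            self.ph_delta = ph_delta
            self.ph_threshold = ph_threshold
            self.loss_mean = 0.0
            self.ph_sum = 0.0
            self.ph_min = 0.0
            self.t = 0

        def _random_unit(self, dim):
            v = np.random.randn(dim)
            return v / np.linalg.norm(v)

        def update(self, grad, loss):
            self.t += 1
            # Test 1: compare the momentum of normalized gradients to that
            # of random unit vectors. When successive gradient directions
            # agree, ||m|| exceeds the random baseline: speed up; otherwise
            # slow down, gracefully in both cases.
            g = grad / (np.linalg.norm(grad) + 1e-12)
            self.m = self.beta * self.m + (1 - self.beta) * g
            self.m_rand = (self.beta * self.m_rand
                           + (1 - self.beta) * self._random_unit(grad.size))
            if np.linalg.norm(self.m) > np.linalg.norm(self.m_rand):
                self.lr *= self.adapt
            else:
                self.lr /= self.adapt

            # Test 2: Page-Hinkley change-point detection on the loss; a
            # detected upward jump in its mean signals a catastrophic
            # episode, and the learning rate is instantly halved.
            self.loss_mean += (loss - self.loss_mean) / self.t
            self.ph_sum += loss - self.loss_mean - self.ph_delta
            self.ph_min = min(self.ph_min, self.ph_sum)
            if self.ph_sum - self.ph_min > self.ph_threshold:
                self.lr *= 0.5
                self.ph_sum = self.ph_min = 0.0  # reset the detector
            return self.lr

In a training loop, one would call update(grad, loss) after each
minibatch (with dim equal to the number of parameters) and use the
returned learning rate for the next step; the asymmetry between graceful
multiplicative adaptation and instant halving mirrors the "as fast as
possible but not faster" idea in the abstract.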



Contact: guillaume.charpiat at inria.fr
All TAU seminars: here
