Tuesday, 10th of September, 14h (room R2014, 660 building)
Agnostic Feature Selection

Unsupervised feature selection is mostly assessed in a supervised
learning setting, based on whether the selected features allow efficient
prediction of the (unknown) target variable.
Another setting is proposed in this paper: the selected features aim to
efficiently recover the whole dataset. The proposed algorithm, called
AgnoS, combines an AutoEncoder with structural regularizations to
sidestep the combinatorial optimization problem at the core of feature
selection. The extensive experimental validation of AgnoS on the
scikit-feature benchmark suite demonstrates its merits compared to the
state of the art, both in terms of supervised learning and data compression.
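To illustrate the setting, here is a minimal sketch of feature selection via a regularized autoencoder. All names, sizes, and the L1 penalty on the encoder's input weights are hypothetical stand-ins for AgnoS's structural regularizations; the point is only that features whose encoder weights survive the penalty are the ones needed to reconstruct the whole dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 5 observed features driven by 2 latent factors
# (features 2-4 are redundant combinations of features 0-1).
n, d, k = 200, 5, 2
Z = rng.normal(size=(n, 2))
X = np.column_stack([Z[:, 0], Z[:, 1], Z[:, 0], Z[:, 1], Z[:, 0] + Z[:, 1]])
X += 0.01 * rng.normal(size=X.shape)

# Linear autoencoder X ~ X @ E @ D, trained by gradient descent.
# The L1 penalty on E (a hypothetical stand-in for the structural
# regularization) pushes the rows of unneeded input features toward zero.
E = rng.normal(scale=0.1, size=(d, k))   # encoder
D = rng.normal(scale=0.1, size=(k, d))   # decoder
lr, lam = 0.01, 0.05
for _ in range(2000):
    H = X @ E
    R = H @ D - X                        # reconstruction residual
    gE = X.T @ (R @ D.T) / n + lam * np.sign(E)
    gD = H.T @ R / n
    E -= lr * gE
    D -= lr * gD

# Score each feature by the norm of its encoder row: high-scoring
# features are those the autoencoder relies on to recover the dataset.
scores = np.linalg.norm(E, axis=1)
ranking = np.argsort(-scores)
```

Ranking features by how much the regularized encoder uses them replaces the combinatorial search over feature subsets with a single continuous optimization, which is the kind of relaxation the abstract alludes to.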
Learning with Random Learning Rates

In neural network optimization, the learning rate of the
gradient descent strongly affects performance. This prevents reliable out-
of-the-box training of a model on a new problem. We propose the All
Learning Rates At Once (Alrao) algorithm for deep learning architectures:
each neuron or unit in the network gets its own learning rate, randomly
sampled at startup from a distribution spanning several orders of mag-
nitude. The network becomes a mixture of slow and fast learning units.
Surprisingly, Alrao performs close to SGD with an optimally tuned learn-
ing rate, for various tasks and network architectures. In our experiments,
all Alrao runs were able to learn well without any tuning.
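The core mechanism can be sketched in a few lines: each unit of a layer draws its own learning rate, log-uniformly over several orders of magnitude, and keeps it for the whole run. The layer sizes and the [1e-5, 1e1] range below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy layer with 3 output units and 4 inputs.
n_in, n_out = 4, 3
W = rng.normal(size=(n_out, n_in))

# One learning rate per unit, sampled once at startup, log-uniformly
# over several orders of magnitude (here 1e-5 to 1e1).
unit_lr = 10.0 ** rng.uniform(-5, 1, size=(n_out, 1))

def sgd_step(W, grad, unit_lr):
    # Per-unit SGD: row i of W (the weights of unit i) moves at its
    # own sampled rate, making the layer a mixture of slow and fast units.
    return W - unit_lr * grad

grad = rng.normal(size=W.shape)
W_new = sgd_step(W, grad, unit_lr)
```

Because some units learn fast and others barely move, the network as a whole behaves like an ensemble over learning rates, which is why no per-problem tuning is needed.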
Contact: guillaume.charpiat at inria.fr