Seminar14022017


February 14th, 2017

14:30, Shannon amphitheatre (building 660):

Victor Berger (Thales Services, ThereSIS)


Title: VAE/GAN as a generative model


Abstract:


We investigate the problem of data generation, i.e., the unsupervised training of a model to generate samples from a distribution generalizing a dataset. We follow the approach of [1], which combines the Variational Autoencoder (VAE) model [2] with the well-known Generative Adversarial Network (GAN) [3]. As observed in the recent literature, training a GAN model is tedious and subject to instability in the optimization process. We reproduce the results of [1] and explore different architectures and techniques for taming these instabilities.

In this presentation, we first introduce the VAE and GAN models. Then we detail the approach of [1] and provide experimental results in favor of the following conclusions: combining VAE and GAN stabilizes the training and induces smoothness in the latent space of the generative network, while keeping the sharpness of the generated images.
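
For concreteness, below is a minimal sketch (in PyTorch-style Python, not the speaker's code) of the VAE/GAN combination of [1]: the VAE decoder doubles as the GAN generator, and the reconstruction error is measured in the feature space of the discriminator (the "learned similarity metric"). All network sizes, layer choices, and loss weightings are illustrative assumptions.

```python
# Minimal VAE/GAN sketch; MLP architectures, 784-dim inputs and unit loss
# weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, x_dim=784, z_dim=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    """VAE decoder, which also plays the role of the GAN generator."""
    def __init__(self, x_dim=784, z_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim), nn.Sigmoid())

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, x_dim=784):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.logit = nn.Linear(256, 1)

    def forward(self, x):
        f = self.features(x)
        # Logit for the GAN game, features for the learned similarity metric.
        return self.logit(f), f

def vaegan_losses(x, enc, dec, disc):
    # VAE part: encode, reparameterize, decode; also sample from the prior.
    mu, logvar = enc(x)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
    x_rec = dec(z)
    x_gen = dec(torch.randn_like(z))

    # KL divergence between q(z|x) and the standard normal prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()

    logit_real, f_real = disc(x)
    logit_rec, f_rec = disc(x_rec)
    logit_gen, _ = disc(x_gen)

    # Reconstruction error in discriminator feature space, not pixel space.
    rec = F.mse_loss(f_rec, f_real)

    # Standard GAN cross-entropy losses on real, reconstructed and prior samples.
    ones, zeros = torch.ones_like(logit_real), torch.zeros_like(logit_real)
    d_loss = (F.binary_cross_entropy_with_logits(logit_real, ones)
              + F.binary_cross_entropy_with_logits(logit_rec, zeros)
              + F.binary_cross_entropy_with_logits(logit_gen, zeros))
    g_loss = (F.binary_cross_entropy_with_logits(logit_rec, ones)
              + F.binary_cross_entropy_with_logits(logit_gen, ones))

    # Per-network objectives; gradient routing (detaching, separate optimizers)
    # is omitted for brevity.
    return kl + rec, rec + g_loss, d_loss  # encoder, decoder/generator, discriminator
```

Measuring reconstruction in the discriminator's feature space rather than pixel space is what lets the hybrid keep sharp samples while retaining the VAE's smooth latent space, as stated in the conclusions above.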

[1] Larsen, A. B. L., Sønderby, S. K., Larochelle, H., & Winther, O. (2015). Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300.
[2] Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
[3] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems (pp. 2672-2680).




Contact: guillaume.charpiat at inria.fr
