February 14th, 14:00, Shannon amphitheatre (building 660) (see location):
Victor Berger (Thales Services, ThereSIS)
Title: VAE/GAN as a generative model
We investigate the problem of data generation, i.e., the unsupervised training of a model to generate samples from a distribution that generalizes a dataset. We use an approach [1] that combines the Variational Autoencoder (VAE) model [2] with the well-known Generative Adversarial Network (GAN) [3]. As observed in the recent literature, training a GAN model is tedious and subject to instability in the optimization process. We reproduce results from [1] and explore different architectures and techniques for taming these instabilities.
In this presentation, we first introduce the VAE and GAN models. We then detail the approach of [1] and provide experimental results supporting the following conclusions: combining VAE and GAN stabilizes training and induces smoothness in the latent space of the generative network, while preserving the sharpness of the generated images.
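As background, the VAE/GAN objective of [1] combines three ingredients: the VAE's KL regularizer toward a standard normal prior, a reconstruction error measured in the discriminator's feature space (the "learned similarity metric"), and the usual GAN adversarial loss. The following is a minimal NumPy sketch of these loss terms, not an implementation of the full training loop; all function names are illustrative and the encoder/decoder/discriminator networks themselves are omitted:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # KL(q(z|x) || N(0, I)) for a diagonal-Gaussian encoder output
    # (closed form, summed over latent dimensions)
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def feature_reconstruction_loss(feat_real, feat_recon):
    # "learned similarity": squared error between discriminator
    # features of the real image and of its reconstruction
    return np.mean((feat_real - feat_recon) ** 2)

def gan_discriminator_loss(d_real, d_fake):
    # standard binary cross-entropy GAN loss on sigmoid outputs in (0, 1)
    eps = 1e-8  # numerical safety for log
    return -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

# Toy illustration with random stand-ins for network outputs:
rng = np.random.default_rng(0)
mu, log_var = rng.normal(size=8), rng.normal(size=8)
total = (kl_to_standard_normal(mu, log_var)
         + feature_reconstruction_loss(rng.normal(size=16), rng.normal(size=16))
         + gan_discriminator_loss(np.array([0.9]), np.array([0.1])))
```

In the actual scheme of [1], each network (encoder, decoder, discriminator) is updated with its own weighted combination of these terms rather than a single summed loss.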
[1] Larsen, A. B. L., Sønderby, S. K., Larochelle, H., & Winther, O. (2015). Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300.
[2] Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
[3] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems (pp. 2672-2680).
Contact: guillaume.charpiat at inria.fr