Thursday, 12th of September, 14h30 (room R2014, 660 building) (see location)
From abstract items to latent space to observed data and back: Compositional Variational Auto-Encoder
Conditional Generative Models are now acknowledged as an essential
tool in Machine Learning. This paper focuses on their control.
While many approaches aim at disentangling the data through the
coordinate-wise control of their latent representations, another
direction is explored in this paper. The proposed CompVAE handles
data with a natural multi-ensemblist structure (i.e. that can naturally
be decomposed into elements). Derived from Bayesian variational
principles, CompVAE learns a latent representation leveraging both
observational and symbolic information. A first contribution of the
approach is that this latent representation supports a
compositional generative model, amenable to multi-ensemblist
operations (addition or subtraction of elements in the composition).
This compositional ability is enabled by the invariance and generality
of the whole framework w.r.t. respectively, the order and number of the
elements. The second contribution of the paper is a proof of concept on
synthetic 1D and 2D problems, demonstrating the efficiency of the proposed approach.
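The multi-ensemblist idea described above can be illustrated with a toy sketch (our own illustration, not the paper's actual model): if each element of a composition gets its own latent code and the composition's code is a permutation-invariant aggregate such as a sum, then adding or subtracting elements becomes simple vector arithmetic in latent space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one latent code per abstract element.
LATENT_DIM = 4
element_codes = {name: rng.normal(size=LATENT_DIM) for name in "abc"}

def compose(elements):
    """Latent code of a multiset of elements; a sum is invariant to both
    the order and (unboundedly) the number of elements."""
    return sum((element_codes[e] for e in elements), np.zeros(LATENT_DIM))

z_ab = compose(["a", "b"])
z_abc = z_ab + element_codes["c"]                       # add element "c"
z_b = z_abc - element_codes["a"] - element_codes["c"]   # remove "a" and "c"

assert np.allclose(compose(["b", "a"]), z_ab)  # order invariance
assert np.allclose(z_b, element_codes["b"])    # subtraction recovers {b}
```

In the actual CompVAE, a decoder would map such aggregated codes back to observed data; the sketch only shows why a permutation-invariant aggregation makes ensemblist operations natural.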
Overview and unifying conceptualization of Automated Machine Learning
We introduce a novel, generic mathematical formulation of AutoML, resting on formal definitions of hyperparameter optimization (HPO) and meta-learning. In light of this formulation, we decompose various algorithms and show that HPO does not address the AutoML problem any more than "classical" machine learning algorithms do, while meta-learning does. Other branches of machine learning, such as transfer learning and ensemble learning, are also reviewed, re-formulated and unified. Our framework thus allows us to systematically classify methods and provides formal tools that facilitate theoretical developments and future empirical research. A brief survey of existing methods indicates that these tools already yield useful insights.
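The contrast drawn here between HPO and meta-learning can be made concrete with a toy sketch (our own illustration, not the paper's formalism): HPO selects hyperparameters for one given task, whereas a meta-learning flavour of the same search optimizes performance in expectation over a distribution of tasks.

```python
def validation_error(hp, task):
    # Stand-in for "train with hp, then evaluate": a toy quadratic
    # whose minimum depends on the task.
    return (hp - task["optimum"]) ** 2

def hpo(task, candidates):
    """Classical HPO: best hyperparameter for a single task."""
    return min(candidates, key=lambda hp: validation_error(hp, task))

def meta_learn(tasks, candidates):
    """Meta-learning flavour: one choice good on average over tasks."""
    return min(candidates,
               key=lambda hp: sum(validation_error(hp, t) for t in tasks) / len(tasks))

tasks = [{"optimum": o} for o in (0.2, 0.5, 0.8)]
grid = [i / 10 for i in range(11)]
print(hpo(tasks[0], grid))      # → 0.2, tuned to one task
print(meta_learn(tasks, grid))  # → 0.5, tuned in expectation over tasks
```

The sketch mirrors the point of the abstract: the per-task search cannot by itself say anything about new tasks, while the expectation over a task distribution is exactly what a meta-level formulation optimizes.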
AutoCV Challenges Analysis
We present the results of the recent AutoCV and AutoCV2 challenges. These two competitions share a similar design and both search for a fully automated, general solution for classification tasks in computer vision, with an emphasis on any-time performance. Winning solutions adopted deep learning techniques such as AutoAugment, MobileNet and ResNet, which helped achieve both good any-time performance and good final performance. Almost no (leaderboard) over-fitting was observed, which suggests that the winning solutions could be applied to any computer vision classification task. Together with the release of the open-sourced winning solutions, we also describe an expanding repository of uniformly formatted datasets that serve to benchmark both tasks and algorithms, and enable research on meta-learning.
Contact: guillaume.charpiat at inria.fr
All TAU seminars: here