List of TAO Seminars (reverse chronological order)

Seminars


This page lists upcoming and past seminars, and provides links to the presentations that you may have missed. Click on a presentation title for the abstract.

Alert emails are sent to the TAU team and to the announcement mailing list tau-seminars, to which anyone can subscribe by clicking here. NB: if you do not receive a confirmation by email when you try to subscribe, please contact me directly.
Some of these presentations are organized with the GT Deep Net; to subscribe to the related announcement mailing list, click there.
From October 2020, the seminars will take place on Tuesday afternoons at 14h30, online at https://bbb2.lri.fr/b/gui-hfj-kdg
The presentations are recorded and available here.



2020

November

  • Tuesday, 24th of November, 15h, online : Ievgen Redko (Data Intelligence team at Hubert Curien laboratory, University Jean Monnet of Saint-Etienne): Deep Neural Networks Are Congestion Games: From Loss Landscape to Wardrop Equilibrium and Beyond
    Abstract
The theoretical analysis of deep neural networks (DNN) is arguably among the most challenging research directions in machine learning (ML) right now, as it requires scientists to lay novel statistical learning foundations to explain their behaviour in practice. While some success has been achieved recently in this endeavour, the question of whether DNNs can be analyzed using tools from other scientific fields outside the ML community has not received the attention it may well have deserved. In this paper, we explore the interplay between DNNs and game theory (GT), and show how one can benefit from classic, readily available results from the latter when analyzing the former. In particular, we consider the widely studied class of congestion games, and illustrate their intrinsic relatedness to both linear and non-linear DNNs and to the properties of their loss surface. Beyond retrieving the state-of-the-art results from the literature, we argue that our work provides a very promising novel tool for analyzing DNNs, and support this claim by proposing concrete open problems that, when solved, can significantly advance our understanding of DNNs.
  • Monday, 2nd of November, 14h30, online : Pierre Jobic (TAU/BioInfo): Demography Inference with deep learning on sets with attention mechanisms in population genetics (recording)
    Abstract
Demography inference from population genetics data has long been dominated by the famous likelihood-free inference algorithm: Approximate Bayesian Computation (ABC). However, recent deep learning architectures relying only on raw genomic SNP data show promising inference of effective population sizes at specific time steps. I will present our new architecture, MixAttSPIDNA. It is based on the team's previous architecture SPIDNA2, which is permutation invariant, and adds an attention mechanism that improves feature expressivity. We will analyze its performance and compare it with previous methods; MixAttSPIDNA outperforms both SPIDNA and ABC. (A schematic sketch of a permutation-invariant attention encoder follows below.)
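As an illustration of the kind of architecture this talk describes, here is a minimal PyTorch sketch of a permutation-invariant attention encoder over raw SNP matrices. All names, dimensions, and the output parametrization are illustrative assumptions, not the actual MixAttSPIDNA architecture.

```python
import torch
import torch.nn as nn

class SetAttentionEncoder(nn.Module):
    """Permutation-invariant encoder: rows (individuals) are exchangeable."""
    def __init__(self, n_snps, d_model=64, n_heads=4, n_outputs=21):
        super().__init__()
        self.embed = nn.Linear(n_snps, d_model)    # per-individual embedding
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, n_outputs)  # e.g. population sizes over time

    def forward(self, x):                 # x: (batch, n_individuals, n_snps)
        h = self.embed(x)
        h, _ = self.attn(h, h, h)         # self-attention mixes individuals
        h = h.mean(dim=1)                 # mean pooling => permutation invariance
        return self.head(h)

model = SetAttentionEncoder(n_snps=400)
x = torch.randint(0, 2, (8, 50, 400)).float()    # 8 datasets, 50 haplotypes each
print(model(x).shape)                            # torch.Size([8, 21])
```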

October

  • Tuesday, 20th of October : CANCELLED
  • Thursday, 15th of October, 14h30, online : Giancarlo Fissore (TAU): Relative gradient optimization of the Jacobian term in unsupervised deep learning (recording; see the sketch after this list)
    Abstract
    Learning expressive probabilistic models correctly describing the data is a ubiquitous problem in machine learning. A popular approach for solving it is mapping the observations into a representation space with a simple joint distribution, which can typically be written as a product of its marginals — thus drawing a connection with the field of nonlinear independent component analysis. Deep density models have been widely used for this task, but their likelihood-based training requires estimating the log-determinant of the Jacobian and is computationally expensive, thus imposing a trade-off between computation and expressive power. In this work, we propose a new approach for exact likelihood-based training of such neural networks. Based on relative gradients, we exploit the matrix structure of neural network parameters to compute updates efficiently even in high-dimensional spaces; the computational cost of the training is quadratic in the input size, in contrast with the cubic scaling of the naive approaches. This allows fast training with objective functions involving the log-determinant of the Jacobian without imposing constraints on its structure, in stark contrast to normalizing flows. An implementation of our method can be found at https://github.com/fissoreg/relative-gradient-jacobian
  • Tuesday, 13th of October, 14h30, online : Ahmed Skander Karkar (Criteo): A Principle of Least Action for the Training of Neural Networks (recording, slides; see the sketch after this list)
    Abstract
Neural networks have been achieving high generalization performance on many tasks despite being highly over-parameterized. Since classical statistical learning theory struggles to explain this behavior, much effort has recently been focused on uncovering the mechanisms behind it, in the hope of developing a more adequate theoretical framework and gaining better control over the trained models. In this work, we adopt an alternative perspective, viewing the neural network as a dynamical system displacing input particles over time. We conduct a series of experiments and, by analyzing the network's behavior through its displacements, we show the presence of a low-kinetic-energy displacement bias in the transport map of the network, and link this bias with generalization performance. From this observation, we reformulate the learning problem as follows: find neural networks that solve the task while transporting the data as efficiently as possible. This novel formulation allows us to provide regularity results for the solution network, based on Optimal Transport theory. From a practical viewpoint, it also leads to a new learning algorithm, which automatically adapts to the complexity of the given task and yields networks with high generalization ability even in low-data regimes.
  • Tuesday, 6th of October, 14h30, online : Pierre Wolinski (University of Oxford): Initializing a neural network on the edge of chaos (slides; see the sketch after this list)
    Abstract
How does one correctly initialize the weights and biases of an infinitely wide and deep neural network? Glorot and Bengio (2010), then He et al. (2015), proposed a simple answer based on preserving the variance of the preactivations during the forward pass. Afterwards, Poole et al. introduced the concept of "Edge of Chaos" in the paper "Exponential expressivity in deep neural networks through transient chaos" (2016), proposing another definition of a "correct" initialization. Instead of looking at the variance of the preactivations, they considered the evolution of the correlation between two inputs during the forward pass. This new point of view led to finer results, such as evidence of a phase-transition-like phenomenon depending on the initialization distribution. Moreover, we are now able to predict the typical depth at which information can be propagated or backpropagated at initialization. Since the theoretical results on the Edge of Chaos rely on an infinite-width assumption, links have been drawn with the Neural Tangent Kernel (NTK).
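For the Fissore talk above, here is a minimal NumPy sketch of the relative-gradient trick for a single linear layer z = Wx with a standard Gaussian base density. It shows how right-multiplying the Euclidean gradient by WᵀW removes the matrix inversion coming from the log-determinant term; the dimension and learning rate are illustrative assumptions, and the linked paper and repository give the real method.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lr = 5, 1e-3
W = rng.standard_normal((d, d)) / np.sqrt(d)

def relative_gradient_step(W, x):
    z = W @ x                  # forward pass through the linear layer
    g = -z                     # score of the N(0, I) base: d/dz log p(z) = -z
    # Euclidean gradient of the log-likelihood: g x^T + W^{-T};
    # the W^{-T} term (from log|det W|) costs O(d^3) to compute.
    # Right-multiplying by W^T W turns it into g z^T W + W: no inversion,
    # and computing g (z^T W) as two matrix-vector products is O(d^2).
    return W + lr * (np.outer(g, z @ W) + W)

for _ in range(5000):          # maximize the likelihood of toy Gaussian data
    W = relative_gradient_step(W, rng.standard_normal(d))
```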
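For the Karkar talk, a hedged sketch of a least-action-style regularizer on a toy residual network: the displacement each block applies is treated as a velocity whose kinetic energy is penalized alongside the task loss. The architecture, the penalty weight `lam`, and the data are illustrative assumptions, not the authors' exact algorithm.

```python
import torch
import torch.nn as nn

blocks = nn.ModuleList([nn.Sequential(nn.Linear(16, 16), nn.Tanh())
                        for _ in range(6)])
head = nn.Linear(16, 2)
lam = 1e-2                               # penalty weight (assumed)

def forward_with_action(x):
    action = 0.0
    for block in blocks:                 # h_{l+1} = h_l + f_l(h_l)
        v = block(x)                     # displacement ("velocity") of the inputs
        action = action + (v ** 2).sum(dim=1).mean()  # kinetic energy of the move
        x = x + v
    return head(x), action

x, y = torch.randn(32, 16), torch.randint(0, 2, (32,))
logits, action = forward_with_action(x)
loss = nn.functional.cross_entropy(logits, y) + lam * action
loss.backward()                          # task loss + least-action penalty
```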
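And for the Wolinski talk, a small numerical sketch of the Poole et al. edge-of-chaos criterion for a tanh network: iterate the variance map to its fixed point q*, then evaluate chi1 = sigma_w^2 E[phi'(sqrt(q*) u)^2], where chi1 = 1 separates the ordered and chaotic phases. The grid of sigma_w values and the sigma_b below are illustrative choices.

```python
import numpy as np

u = np.random.default_rng(0).standard_normal(100_000)  # for Gaussian averages

def chi1(sigma_w, sigma_b=0.05, iters=100):
    q = 1.0
    for _ in range(iters):               # iterate the variance map to q*
        q = sigma_w**2 * np.mean(np.tanh(np.sqrt(q) * u)**2) + sigma_b**2
    phi_prime = 1 - np.tanh(np.sqrt(q) * u)**2         # tanh'(x) = 1 - tanh(x)^2
    return sigma_w**2 * np.mean(phi_prime**2)

for sw in (0.5, 1.0, 1.5, 2.0):
    print(f"sigma_w = {sw}: chi1 = {chi1(sw):.3f}")    # < 1 ordered, > 1 chaotic
```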
...

April


March

February

  • Friday, 28th of February, 11h: Rémi Flamary (Univ. Côte d'Azur): Optimal transport: Gromov-Wasserstein divergence and extensions
  • Friday, 28th of February, 15h: [FormalDeep] Julien Girard (TAU/CEA-list) will present the paper Beyond the Single Neuron Convex Barrier for Neural Network Certification
  • Friday, 14th of February, 11h: Stéphane Rivaud (Sony): Perceptual GAN for audio synthesis

January




2019

December


November


October


September


July


June


May


April


March


February


January



2018

December


November

  • Thursday, 22nd of November, 11h11 (usual room R2014): Adrian Alan Pol (CERN): Machine Learning applications to CMS Data Quality Monitoring
  • Thursday, 15th of November, 11h11 (usual room R2014): Philippe Esling (IRCAM): Artificial creative intelligence: variational inference and deep learning for modeling musical creativity slides

October


September


June


May


April


March

  • March, Tuesday 27th: Nizam Makdoud (TAU team): Intrinsic Motivation, Exploration and Deep Reinforcement Learning
  • March, Tuesday 20th: Hugo Richard (Parietal/TAU teams, INRIA): Data based analysis of visual cortex using deep features of videos (more information...)
  • March, Tuesday 13th: David Rousseau (Laboratoire de l'Accélérateur Linéaire (LAL), Orsay): TrackML : The High Energy Physics Tracking Challenge (more information...)
  • March, Tuesday 6th: Ulisse Ferrari (Institut de la Vision): Neuroscience & big-data: Collective behavior in neuronal ensembles (more information...)
  • March, Friday 2nd: François Landes (IPhT): Physicists using and playing with Machine Learning tools: two examples (more information...)

February

  • February, Tuesday 27th: Wendy Mackay (INRIA/LRI ExSitu team): Human-Computer Partnerships: Leveraging machine learning to empower human users (more information...)
  • February, Tuesday 20th: Jérémie Sublime (ISEP): Unsupervised learning for multi-source applications and satellite image processing (more information...)
  • February, Friday 16th: Rémi Leblond (INRIA Sierra team): SeaRNN: training RNNs with global-local losses (more information...)
  • February, Tuesday 13th: Zoltan Szabo (CMAP & DSI, École Polytechnique): Linear-time Divergence Measures with Applications in Hypothesis Testing (more information...)


January

  • January, Tuesday 23rd (usual room 2014): Olivier Goudet & Diviyan Kalainathan (TAU): End-to-end Causal Generative Neural Networks (more information...)
  • January, Friday 19th, whole day (IHES): Saclay plateau statistics/mathematics/computer science workshop (more information...)
  • January, Tuesday 9th (room 435, "salle des thèses", building 650): Michèle Sébag & Marc Schoenauer (TAU): Stochastic Gradient Descent: Going As Fast As Possible But Not Faster (more information...)

2017

December

  • December, Tuesday 19th, 14:30 (room 455, building 650): Antonio Vergari (LACAM, University of Bari 'Aldo Moro', Italy): Learning and Exploiting Deep Tractable Probabilistic Models (more information...)
  • December, Wednesday 13th, 14:30 (room 445, building 650): Robin Girard (Mines ParisTech Sophia-Antipolis): Data mining and optimisation challenges for the energy transition (more information...)
  • December, first week: break (NIPS)

November

  • November, Wednesday 22nd, 14:30 (room 2014): Marylou Gabrié (ENS Paris, Laboratoire de Physique Statistique): Mean-Field Framework for Unsupervised Learning with Boltzmann Machines (more information...)
  • November, Friday 17th, 11:00 (Shannon amphitheatre): [ GT DeepNet ] Levent Sagun (IPHT Saclay): Over-Parametrization in Deep Learning (more information...)
  • November, Wednesday 15th, 14:30 (room 2014): Diviyan Kalainathan & Olivier Goudet (TAU): Causal Generative Neural Networks (more information...)
  • November, Thursday 9th, 11:00 (Shannon amphitheatre): Claire Monteleoni (CNRS-LAL / George Washington University): Machine Learning Algorithms for Climate Informatics, Sustainability, and Social Good (more information...)

October

  • October, Tuesday 24th, 14:30 (Shannon amphitheatre): Benjamin Guedj (MODAL team, Inria Lille): A quasi-Bayesian perspective to NMF: theory and applications (more information...)
  • October, Wednesday 18th, 14:30 (room 2014): Théophile Sanchez (TAU): End-to-end Deep Learning Approach for Demographic History Inference (more information...)
  • October, Wednesday 11th, 14:00 (room 2014): Victor Estrade (TAU): Robust Deep Learning : A case study (more information...)
  • October, Wednesday 4th, 14:30 (room 2014): Hugo Richard (Parietal/TAU): Data-based alignment of brain fMRI images (more information...)

September

  • September, Tuesday 19th, 11:00 (Shannon amphitheatre): Carlo Lucibello (Politecnico di Torino): Probing the energy landscape of Artificial Neural Networks (more information...)

July

  • July, Tuesday 4th, from 11:00 to 13:00 (Shannon amphitheatre): presentation of Brice Bathellier's team + MLspike by Thomas Deneux (more information...)

June

  • June, Friday 30th, 14:30 (room 2014): internship presentations by Giancarlo Fissore: Learning dynamics of Restricted Boltzmann Machines, and by Clément Leroy: Free Energy Landscape in a Restricted Boltzmann Machine (RBM) (more information...)
  • June, Thursday 29th, 14:30 (Shannon amphitheatre): [ GT DeepNet ] Alexandre Barachant: Information Geometry: A framework for manipulation and classification of neural time series (more information...)
  • June, Tuesday 27th, 14:30 (room 2014): Réda Alami and Raphaël Féraud (Orange Labs): Memory Bandits: A Bayesian Approach for the Switching Bandit Problem (more information...)
  • June, Monday 12th, 14:30 (Shannon amphitheatre): [ GT DeepNet ] Romain Couillet (Centrale-Supélec): A Random Matrix Framework for BigData Machine Learning (more information...)

May

  • May, Wednesday 24th, 16:00 (room 2014): Priyanka Mandikal (TAU): Anatomy Localization in Medical Images using Neural Networks (more information...)

April

  • April, Friday 28th, 14:30 (Shannon amphitheatre): [ GT DeepNet ] Jascha Sohl-dickstein (Google Brain): Deep Unsupervised Learning using Nonequilibrium Thermodynamics (more information...)
  • April, Tuesday 3rd: Thomas Schmitt: RecSys challenge 2017 (more information...)

March

  • March, Thursday 2nd, 14:30 (Shannon amphitheatre): Marta Soare (Aalto University): Sequential Decision Making in Linear Bandit Setting (more information...)

February

  • February 22nd, 11h: Enrico Camporeale (CWI): Machine learning for Space-Weather forecasting
  • February, Thursday 16th (Shannon amphi.), 14h30: [ GT DeepNet ] Corentin Tallec: Unbiased Online Recurrent Optimization (more information...)
  • February 14th (Shannon amphi.), 14h: [ GT DeepNet ] Victor Berger (Thales Services, ThereSIS): VAE/GAN as a generative model (more information...)

January

  • January 25th, 10h30: Romain Julliard (Muséum National d'Histoire Naturelle): 65 Millions d'Observateurs (more information...)
  • January 24th: Daniela Pamplona (Biovision team, INRIA Sophia-Antipolis / TAO): Data Based Approaches in Retinal Models and Analysis (more information...)



2016


November

  • November 30th: Martin Riedmiller (Google DeepMind). Deep Reinforcement learning for learning machines (more information...)
  • November 29th: Amaury Habrard (Universite Jean Monnet de Saint-Etienne). Domain Adaptation with Optimal Transport: Mapping Estimation and Theory (more information...)
  • November 24th: [ GT DeepNet ] Rico Sennrich (University of Edinburgh). Neural Machine Translation: Breaking the Performance Plateau (more information...)

June

  • June 28th: Lenka Zdeborova (CEA, IPhT). Solvable models of unsupervised feature learning (slides: LRI_matrix_fact.pdf)

May

  • May 3rd: Emile Contal (ENS-Cachan). The geometry of Gaussian processes and Bayesian optimization (slides: slides_semstat16.pdf)

April

  • April 26th: Marc Bellemare (Google DeepMind). Eight Years of Research with the Atari 2600 (more information...)
  • April 12th: Mikael Kuusela (EPFL). Shape-constrained uncertainty quantification in unfolding elementary particle spectra at the Large Hadron Collider (more information...)

March

  • March 22nd: Matthieu Geist (Supélec Metz): Reductions from inverse reinforcement learning to supervised learning (more information...)
  • March 15: Richard Wilkinson (University of Sheffield): Using surrogate models to accelerate parameter estimation for complex simulators (more information...)
  • March 1st: Pascal Germain (Université Laval, Québec): A Representation Learning Approach for Domain Adaptation (more information...)

February


January

  • January 26th: Laurent Massoulié: Models of collective inference (more information...).
  • January 19th: Sébastien Gadat: Regret bounds for Narendra-Shapiro bandit algorithms (more information...).


2015

December



November


  • November 19th: Phillipe Sampaio: A derivative-free trust-funnel method for constrained nonlinear optimization (more information...).


October



  • October 20th: Jean Lafond: Low Rank Matrix Completion with Exponential Family Noise (more information...).

  • October 13th
    • Flora Jay: Inferring past and present demography from genetic data (more information...).
    • Marcus Gallagher: Engineering Features for the Analysis and Comparison of Black-box Optimization Problems and Algorithms (more information...).



September


  • Sept. 28th
    • Olivier Pietquin, Approximate Dynamic Programming for Two-Player Zero-Sum Markov Games (slides: OlivierPietquin_ICML15.pdf)
    • François Laviolette, Domain Adaptation (slides soon)

July




June


  • June 15th: Claire Monteleoni: Climate Informatics: Recent Advances and Challenge Problems for Machine Learning in Climate Science
  • June 2nd: Robyn Francon: Reversing Operators for Semantic Backpropagation

May

  • May 18th: Andras Gyorgy: Adaptive Monte Carlo via Bandit Allocation

April


  • April 28th: Vianney Perchet: Optimal Sample Size in Multi-Phase Learning (more information...)
  • April 27th: Hédi Soula, TBA
  • April 21st: Gregory Grefenstette, INRIA Saclay: Personal semantics (more information...)
  • April 7th: Paul Honeine: Tackling two major challenges in kernel-based learning: the pre-image problem and online learning (more information...)

March

  • March 31st: Bruno Scherrer (Inria Nancy): Non-Stationary Modified Policy Iteration (more information...)
  • March 24th: Christophe Schülke (ESPCI): Community detection with modularity: a statistical physics approach (more information...)
  • March 10th: Balazs Kegl: Rapid Analytics and Model Prototyping (more information...)

February

  • February 24th: Madalina Drugan (Vrije Universiteit Brussel, Belgium): Multi-objective multi-armed bandits (more information...)
  • February 20th: Holger Hoos (University of British Columbia, Canada): MSR seminar (see the slides)
  • February 17th: Aurélien Bellet: The Frank-Wolfe Algorithm: Recent Results and Applications to High-Dimensional Similarity Learning and Distributed Optimization (more information...)
  • February 10th: Manuel Lopes (slides: 15interlearnteach.pdf)

January

  • January 27th: Raphaël Bailly: Tensor factorization for multi-relational learning (more information...)
  • January 13th: Francesco Caltagirone: On convergence of Approximate Message Passing (talk_Caltagirone.pdf)
  • January 6th: Emilie Kaufmann: Bayesian and frequentist strategies for sequential resource allocation (Emilie_Kauffman.pdf)

2014

November

  • November 4th: Joaquin Vanschoren: OpenML: Networked science in machine learning

October

  • Oct. 28th,
    • Antoine Bureau, "Bellmanian Bandit Network"
This paper presents a new reinforcement learning (RL) algorithm called Bellmanian Bandit Network (BBN), where action selection in each state is formalized as a multi-armed bandit problem. The first contribution lies in the definition of an exploratory reward inspired by the intrinsic motivation criterion from [1], combined with the RL reward. The second contribution is to use a network of multi-armed bandits to achieve convergence toward the optimal Q-value function. The BBN algorithm is comparatively validated against [1]. (A toy sketch of the per-state bandit idea appears after this list.)
References:
[1] Manuel Lopes, Tobias Lang, Marc Toussaint, and Pierre-Yves Oudeyer. Exploration in model-based reinforcement learning by empirically estimating learning progress. In Neural Information Processing Systems (NIPS), 2012.

    • Basile Mayeur
Abstract:
Taking inspiration from inverse reinforcement learning, the proposed Direct Value Learning for Reinforcement Learning (DIVA) approach uses light priors to generate inappropriate behaviors, and uses the corresponding state sequences to directly learn a value function. When the transition model is known, this value function directly defines a (nearly) optimal controller. Otherwise, the value function is extended to the (state, action) space using off-policy learning.
The experimental validation of DIVA on the Mountain Car problem shows the robustness of the approach compared to SARSA, under the assumption that the target state is known. Lighter assumptions are considered in the Bicycle problem, showing the robustness of DIVA in a model-free setting.

    • Thomas Schmitt, "Exploration / exploitation: a free energy-based criterion"
We investigate a new strategy, based on a free-energy criterion, to solve the exploration/exploitation dilemma. Our strategy promotes exploration using an entropy term. (A minimal sketch follows below.)
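To make the per-state bandit idea of the BBN abstract concrete, here is a speculative Python sketch: each state keeps a UCB-style bandit whose exploration bonus is added to the usual Q-values, with a standard Bellman backup. The environment, bonus form, and all constants are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))
counts = np.ones((n_states, n_actions))      # visit counts per (state, action)
gamma, alpha, c = 0.95, 0.1, 1.0             # all constants are assumptions

def select_action(s):
    bonus = c * np.sqrt(np.log(counts[s].sum()) / counts[s])  # UCB-style bonus
    return int(np.argmax(Q[s] + bonus))      # one bandit per state

def update(s, a, r, s_next):
    counts[s, a] += 1
    target = r + gamma * Q[s_next].max()     # standard Bellman backup
    Q[s, a] += alpha * (target - Q[s, a])

s = 0
for _ in range(5000):                        # toy chain-walk environment
    a = select_action(s)
    s_next = min(s + 1, n_states - 1) if a % 2 else max(s - 1, 0)
    r = 1.0 if s_next == n_states - 1 else 0.0
    update(s, a, r, s_next)
    s = 0 if s_next == n_states - 1 else s_next   # restart at the goal
```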
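Similarly, a minimal sketch of one reading of the free-energy criterion in the last abstract: the policy maximizing E[r] + T * H(pi) is a softmax over estimated action values, so the temperature T directly controls the exploration/exploitation trade-off. The values and temperature below are illustrative assumptions.

```python
import numpy as np

def free_energy_policy(q, T=0.5):
    z = np.exp((q - q.max()) / T)     # stabilized softmax over action values
    return z / z.sum()                # maximizes E[r] + T * entropy(pi)

pi = free_energy_policy(np.array([1.0, 0.8, 0.2]))
action = np.random.default_rng(0).choice(pi.size, p=pi)  # exploratory draw
```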


September

  • Sept. 29th, Rich Caruana

Old seminars
