The page below lists upcoming and past seminars, and provides links to the presentations you may have missed.
Alert emails are sent to the lists tao at lri.fr and tao-seminar at lri.fr.
You may also be interested in the Séminaires d'apprentissage et de statistique de l'Université Paris-Saclay.
Google Calendar · iCal format · Google group
All seminars below take place on Tuesdays at 14h30 in room 2014, unless specified otherwise.
Some are recorded and available here.
- Thursday, March 2nd, 14:30 (Shannon amphitheatre): Marta Soare (Aalto University): Sequential Decision Making in Linear Bandit Setting (more information...)
- February 22nd, 11h: Enrico Camporeale (CWI): Machine learning for Space-Weather forecasting
- Thursday, February 16th, 14h30 (Shannon amphi.): [ GT DeepNet ] Corentin Tallec: Unbiased Online Recurrent Optimization (more information...)
- February 14th (Shannon amphi.), 14h: [ GT DeepNet ] Victor Berger (Thales Services, ThereSIS): VAE/GAN as a generative model (more information...)
- January 25th, 10h30: Romain Julliard (Muséum National d'Histoire Naturelle): 65 Millions d'Observateurs (more information...)
- January 24th: Daniela Pamplona (Biovision team, INRIA Sophia-Antipolis / TAO): Data Based Approaches in Retinal Models and Analysis (more information...)
- November 30th: Martin Riedmiller (Google DeepMind). Deep Reinforcement learning for learning machines (more information...)
- November 29th: Amaury Habrard (Universite Jean Monnet de Saint-Etienne). Domain Adaptation with Optimal Transport: Mapping Estimation and Theory (more information...)
- November 24th: [ GT DeepNet ] Rico Sennrich (University of Edinburgh). Neural Machine Translation: Breaking the Performance Plateau (more information...)
- June 28th: Lenka Zdeborova (CEA,Ipht). Solvable models of unsupervised feature learning LRI_matrix_fact.pdf
- May 3rd: Emile Contal (ENS-Cachan). The geometry of Gaussian processes and Bayesian optimization. slides_semstat16.pdf
- April 26: Marc Bellemare (Google DeepMind). Eight Years of Research with the Atari 2600 (more information...)
- April 12: Mikael Kuusela (EPFL). Shape-constrained uncertainty quantification in unfolding elementary particle spectra at the Large Hadron Collider (more information...)
- March 22nd: Matthieu Geist (Supélec Metz): Reductions from inverse reinforcement learning to supervised learning (more information...)
- March 15: Richard Wilkinson (University of Sheffield): Using surrogate models to accelerate parameter estimation for complex simulators (more information...)
- March 1st: Pascal Germain (Université Laval, Québec): A Representation Learning Approach for Domain Adaptation (more information...)
- February 9th: François Dufour (INRIA Bordeaux) (more information...)
- January 26th: Laurent Massoulié: Models of collective inference (more information...)
- January 19th: Sébastien Gadat: Regret bounds for Narendra-Shapiro bandit algorithms (more information...)
- December 15th: Joon Kwon: Sparse Regret Minimization (more information...)
- November 19th: Phillipe Sampaio: A derivative-free trust-funnel method for constrained nonlinear optimization (more information...).
- October 27: Audrey Durand: Bandits for healthcare (more information...).
- October 20th: Jean Lafond: Low Rank Matrix Completion with Exponential Family Noise (more information...).
- October 13th
- Sept. 28th
- Olivier Pietquin, Approximate Dynamic Programming for Two-Player Zero-Sum Markov Games OlivierPietquin_ICML15.pdf
- Francois Laviolette, Domain Adaptation (slides soon)
- July 2nd: Alaa Saade: MaCBetH: Matrix Completion with the Bethe Hessian (more information...)
- June 15th: Claire Monteleoni: Climate Informatics: Recent Advances and Challenge Problems for Machine Learning in Climate Science
- June 2nd: Robyn Francon: Reversing Operators for Semantic Backpropagation
- May 18th: Andras Gyorgy: Adaptive Monte Carlo via Bandit Allocation
- April 28th: Vianney Perchet: Optimal Sample Size in Multi-Phase Learning (more information...)
- April 27th:Hédi Soula, TBA
- April 21st: Gregory Grefenstette (INRIA Saclay): Personal semantics (more information...)
- April 7th: Paul Honeine: Tackling two major challenges in kernel-based learning: the pre-image problem and online learning (more information...)
- March 31st: Bruno Scherrer (Inria Nancy): Non-Stationary Modified Policy Iteration (more information...)
- March 24th: Christophe Schülke (ESPCI): Community detection with modularity: a statistical physics approach (more information...)
- March 10th: Balazs Kegl: Rapid Analytics and Model Prototyping (more information...)
- February 24th: Madalina Drugan (Vrije Universiteit Brussel, Belgium): Multi-objective multi-armed bandits (more information...)
- February 20th: Holger Hoos (University of British Columbia, Canada): MSR seminar - see the slides
- February 17th: Aurélien Bellet: The Frank-Wolfe Algorithm: Recent Results and Applications to High-Dimensional Similarity Learning and Distributed Optimization (more information...)
- February 10th: Manuel Lopes (15interlearnteach.pdf)
- January 27th: Raphaël Bailly: Tensor factorization for multi-relational learning (more information...)
- January 13th : Francesco Caltagirone: On convergence of Approximate Message Passing (talk_Caltagirone.pdf)
- January 6th : Emilie Kaufmann: Bayesian and frequentist strategies for sequential resource allocation (Emilie_Kauffman.pdf)
- November 4th: Joaquin Vanschoren: OpenML: Networked science in machine learning
- Oct. 28th,
- Antoine Bureau, "Bellmanian Bandit Network"
1. Manuel Lopes, Tobias Lang, Marc Toussaint, and Pierre-Yves Oudeyer. Exploration in model-based reinforcement learning by empirically estimating learning progress. In Neural Information Processing Systems (NIPS), 2012.
- Basile Mayeur
Taking inspiration from inverse reinforcement learning, the proposed Direct Value Learning for Reinforcement Learning (DIVA) approach uses light priors to generate inappropriate behaviors, and uses the corresponding state sequences to directly learn a value function. When the transition model is known, this value function directly defines a (nearly) optimal controller. Otherwise, the value function is extended to the (state, action) space using off-policy learning.
The experimental validation of DIVA on the Mountain Car problem shows the robustness of the approach compared to SARSA, under the assumption that the target state is known. Lighter assumptions are considered in the Bicycle problem, demonstrating the robustness of DIVA in a model-free setting.
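The value-learning step described in the abstract can be illustrated with a toy sketch: trajectories known to end in failure are converted into regression targets (a state k steps before the failure gets value -gamma^k, so states closer to failure are valued lower), and a linear value function is fitted by least squares. The function names and the target scheme are assumptions made for this sketch, not the authors' implementation.

```python
import numpy as np

def value_targets(states, gamma=0.95):
    """Value targets for a trajectory known to end in failure:
    a state k steps before the terminal failure gets -gamma**k,
    so the value decreases as failure approaches."""
    T = len(states)
    return np.array([-(gamma ** (T - 1 - t)) for t in range(T)])

def fit_linear_value_function(trajectories, gamma=0.95):
    """Least-squares fit of V(s) = w . s over failure trajectories."""
    X = np.vstack([np.asarray(traj, dtype=float) for traj in trajectories])
    y = np.concatenate([value_targets(traj, gamma) for traj in trajectories])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Example: two 1-D failure trajectories drifting toward a bad region
trajs = [[[3.0], [2.0], [1.0]], [[4.0], [2.5], [0.5]]]
w = fit_linear_value_function(trajs, gamma=0.9)
```

With a known transition model, acting greedily with respect to such a fitted value function would play the role of the controller mentioned above; in the model-free case the abstract instead extends the values to (state, action) pairs via off-policy learning.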
- Thomas Schmitt, "Exploration / exploitation: a free energy-based criterion"
- Oct. 14th, Holger Hoos (slides attached)
- Sept. 29th, Rich Caruana