
Reservoir Computing


INRIA-TAO Ludovic Arnold, Nicolas Bredeche, Fei Jiang, Cédric Hartland, Hélène Paugam-Moisy, Julien Perez, Marc Schoenauer, Michèle Sebag, Sébastien Rebecchi, Sylvain Chevallier
INRIA-Alchemy Hugues Berry, Benoît Siri, Olivier Temam
LIMSI-PS Philippe Tarroux, Jean-Sylvain Liénard, Mathieu Dubois
Other external participants: Pierre Yger (UNIC, Gif), Yann Chevaleyre (LAMSADE, Paris-Dauphine)

Research Themes

Reservoir Computing is a generic name that appeared at the end of 2006, gathering new paradigms for computing with large recurrent neural networks with random connectivity, such as:
- ESN = Echo State Networks (Jaeger, 2001)
- LSM = Liquid State Machines (Maass, Natschläger, Markram, 2002)

Since they are non-linear dynamical systems, recurrent networks are both highly powerful and very hard to tame. Highly powerful for solving complicated engineering tasks: controlling and modeling complex dynamical systems, temporal pattern recognition, speech recognition, autonomous robotics. Very hard to tame as long as the designer aims to control all the clockwork of the network dynamics when designing efficient learning rules.

The trick of Reservoir Computing is to build the internal network - the so-called "reservoir" - as a large random structure with sparse connectivity, to take advantage of its free self-organization, and to train only a set of "read-out" neurons with simple rules, in order to extract relevant information from the reservoir dynamics. The neurons involved in such structures can be traditional units (threshold, sigmoid) as well as spiking neurons, which take into account the precise timing of spike firing and draw their inspiration from cognitive processing and memory in the brain.

Understanding and designing efficient models and methods for Reservoir Computing is a research topic at the border of machine learning (rethinking learning rules with a temporal dimension) and complex systems (optimizing the emergence of a complex dynamical behavior from the structure of the reservoir). Coupling Reservoir Computing and Evolutionary Computation is also a topic of interest for the present working group.
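The scheme above can be sketched in a few lines of numpy. This is a minimal illustrative ESN, not the group's own code: the reservoir is a sparse random recurrent matrix (rescaled to spectral radius below 1, a common sufficient condition for the echo state property), and only the linear read-out is trained, here with ridge regression on a toy next-step sine prediction task. All sizes and constants (200 units, 10% connectivity, ridge 1e-6) are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir: large random recurrent matrix with sparse connectivity.
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W[rng.random((n_res, n_res)) > 0.1] = 0.0            # keep ~10% of the connections
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # spectral radius 0.9 < 1

def run_reservoir(u_seq):
    """Drive the (fixed, untrained) reservoir and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
t = np.arange(0, 30, 0.1)
u = np.sin(t)
X = run_reservoir(u[:-1])   # reservoir states, one row per time step
y = u[1:]                   # next-step targets

# The "easy rule": train only the linear read-out, by ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

The point of the sketch is that all the recurrent weights stay random and fixed; learning reduces to a single linear regression, which is what makes the approach tractable despite the complexity of the reservoir dynamics.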


The Reservoir Computing SIG meets every two weeks. See the TAO calendar and the private page (restricted access) for details.

  • 16/10/2008 - ESN, LSM or other: which structure and learning rule? For which task?
  • 30/10/2008 - Article: "An experimental unification of reservoir computing methods", D. Verstraeten, B. Schrauwen, M. D'Haene, D. Stroobandt (2007) - Neural Networks, 20:391-403, read by H. Paugam-Moisy
  • 12/11/2008 - Invited talk: "Cost-efficiency of human brain networks" - D. Meunier (BMU, Cambridge, UK)
  • 04/12/2008 - Article: "On computational power and the order-chaos phase transition in reservoir computing", B. Schrauwen, L. Büsing, R. Legenstein, NIPS'2008, read by M. Dubois (LIMSI)
  • 22/01/2009 - Discussion about learning rules used for LSM in literature: p-delta rule for a robot to learn a "gamma" trajectory?
  • 28/01/2009 - Simulators in use at TAO: RC toolbox from Ghent (B) team, presented by F. Jiang; Jaeger's toolbox for ESN, presented by J. Perez
  • 12/02/2009 - An introduction to the notion of "Deep Networks", based on article "Reducing the Dimensionality of Data with Neural Networks", G.E. Hinton and R.R. Salakhutdinov (2006) - Science, 313:504-507, read by H. Paugam and M. Sebag.
  • 05/03/2009 - Article: "A unified architecture for natural language processing: Deep neural networks with multitask learning", R. Collobert and J. Weston, ICML'2008, read by G. Wisniewski and A. Allauzen (LIMSI)
  • 20/03/2009 - Article: "A statistical analysis of information processing properties of lamina-specific cortical microcircuit models", S. Haeusler and W. Maass (2007) - Cerebral Cortex, 17:149-162, read by H. Berry (INRIA-Alchemy)
  • 17/04/2009 - An introduction to "Information Bottleneck" - L. Arnold. Discussion: links with deep networks?
  • 14/05/2009 - Deep Belief Networks: How Deep Are They? (based on Hinton's papers) - L. Arnold
  • 04/06/2009 - Polychronous groups for information coding (based on papers with Régis Martinez - LIRIS, Lyon) - H. Paugam-Moisy
  • 17/09/2009 - Efficient Topology Optimization for Deep Neural Networks (based on Master M2R report) - L. Arnold
  • 01/10/2009 - Article: "Reservoir Computing approaches to recurrent neural network training", M. Lukoševičius, H. Jaeger (2009) - Computer Science Review, 3:127-149, read by H. Paugam-Moisy
  • 29/10/2009 - Deep Belief Networks and Vision (based on papers by Lee, Ng et al.) - L. Arnold
  • 12/03/2010 - A spiking neuron approach for the division of labor in a swarm of foraging agents - S. Chevallier
  • 03/06/2010 - Curriculum Learning, based on an article by Bengio et al. (2009) - P. Allegraud
  • 17/06/2010 - Distributed event-driven simulator of spiking neuron networks and biologically inspired modeling - A. Mouraud
  • 25/06/2010 - Bayesian Magic and the Indian Buffet Process - L. Arnold
  • 09/09/2010 - Unsupervised learning of visual features through STDP (based on papers by T. Masquelier and S. Thorpe) - Read by S. Chevallier - Discussion: a possible deep spiking network approach?
  • 30/09/2010 - Near optimal signal recovery from random projections: universal encoding strategies? (Candès and Tao, 2006) - Read by S. Rebecchi
  • 03/12/2010 - Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization (Duarte-Carvajalino and Sapiro, 2009) - Read by S. Rebecchi
  • 06/10/2011 - Towards deep sparse neural networks - S. Rebecchi
  • 08/12/2011 - Sparsity is natural in RBM - L. Arnold; Neural networks and sparsity - S. Rebecchi; Discussion on stacking sparse codes
  • 09/02/2012 - Semantic place recognition using tiny images and deep belief networks - A. Hasasneh


L. Arnold, H. Paugam-Moisy, M. Sebag. (2010). Optimisation de la Topologie pour les Réseaux de Neurones Profonds. RFIA'2010. January, Caen, France.

R. Martinez, H. Paugam-Moisy. (2009) Algorithms for Structural and Dynamical Polychronous Groups Detection. Int. Conf. on Artificial Neural Networks (ICANN'09), Springer, LNCS 5769, pp. 75-84. September, Limassol, Cyprus.

F. Jiang, H. Berry and M. Schoenauer. (2008) Supervised and Evolutionary Learning of Echo State Networks. 10th International Conference on Parallel Problem Solving From Nature, PPSN-2008, September, Dortmund, Germany.

F. Jiang, H. Berry and M. Schoenauer. (2008) Unsupervised Learning of Echo State Networks: Balancing the Double Pole. Genetic and Evolutionary Computation Conference, GECCO-2008, July, Atlanta, GA, USA.

H. Paugam-Moisy, R. Martinez, S. Bengio. (2008) Delay learning and polychronization for reservoir computing. Neurocomputing 71(7-9):1143-1158. Elsevier.

F. Jiang, H. Berry and M. Schoenauer. (2007) Optimizing the topology of complex neural networks. European Conference on Complex Systems (ECCS'07), October, Dresden, Germany.

C. Hartland, N. Bredeche. (2007) Using Echo State Networks for Robot Navigation Behavior Acquisition. 4th IEEE International Conference on Robotics and Biomimetics (ROBIO 2007). pp.201-206. Sanya, China.

A. Devert, N. Bredeche, M. Schoenauer. (2007) Unsupervised Learning of Echo State Networks: A Case Study in Artificial Embryogeny. 8th Int. Conf. on Artificial Evolution (EA'07), Springer, LNCS 4926, pp. 278-290, Tours, France.

P. Series and P. Tarroux (1999) Synchrony and delay activity in cortical column models. Neurocomputing 26-27:505-510. Elsevier.

Contributors to this page: srebecchi, sebag, cSylvain, hpaugam, rros and evomarc.
Page last modified on Wednesday 08 of February, 2012 14:09:19 CET by srebecchi.