Crossing the Chasm
Participants
Alejandro Arbelaez, Luis Da Costa, Alvaro Fialho, Youssef Hamadi, Nikolaus Hansen, Balázs Kégl, Róbert Busa-Fekete, Philippe Rolet, Marc Schoenauer, Michèle Sebag
Research Themes
Many forefront techniques in both Machine Learning and Stochastic Search have been very successful in solving difficult real-world problems. However, applying them to newly encountered problems, or even to new instances of known problems, remains a challenge even for experienced researchers in the field, not to mention newcomers, even when they are skilled scientists or engineers from other areas. Theory and/or practical tools are still missing to help these techniques cross the chasm (in the sense of Geoffrey A. Moore's book about the diffusion of innovation).
The difficulties faced by users arise mainly from the wide range of algorithm and/or parameter choices involved in these approaches, and from the lack of guidance on how to select them. Moreover, state-of-the-art approaches to real-world problems tend to be bespoke, problem-specific methods that are expensive to develop and maintain.
More specifically, the following research conducted in TAO is concerned with Crossing the Chasm:
- Adaptive Operator Selection, or how to adapt the mechanism that chooses among the different variation operators in Evolutionary Algorithms. We have proposed two original features:
- Using a Multi-Armed Bandit algorithm for operator selection (GECCO'08)
- Using extreme values rather than averages as the reward for operators (PPSN'08)
- Ongoing work is investigating how to combine the above ideas with the Compass approach of our colleagues at Angers University (J. Maturana, F. Saubion: A Compass to Guide Genetic Algorithms. PPSN 2008: 256-265)
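The two features above can be combined in a few lines. The sketch below is a minimal illustration, not the GECCO'08/PPSN'08 implementation: a UCB-style bandit chooses among variation operators, and each operator is credited with the extreme (maximum) fitness improvement it recently produced rather than its average. All names and constants are ours.

```python
import math

class ExtremeValueBandit:
    """UCB-style multi-armed bandit for operator selection, crediting each
    operator with the extreme (maximum) fitness improvement observed in a
    sliding window rather than the average improvement (illustrative sketch)."""

    def __init__(self, n_ops, window=50, c=2.0):
        self.n_ops = n_ops
        self.c = c                                  # exploration constant
        self.window = window                        # credit memory length
        self.counts = [0] * n_ops                   # times each operator was applied
        self.rewards = [[] for _ in range(n_ops)]   # recent improvements per operator

    def select(self):
        """Pick an operator by the UCB rule applied to extreme-value credits."""
        total = sum(self.counts) + 1
        best, best_score = 0, -float("inf")
        for op in range(self.n_ops):
            if self.counts[op] == 0:
                return op                           # try every operator once first
            credit = max(self.rewards[op])          # extreme value, not the mean
            score = credit + self.c * math.sqrt(math.log(total) / self.counts[op])
            if score > best_score:
                best, best_score = op, score
        return best

    def update(self, op, improvement):
        """Record the fitness improvement produced by applying operator `op`."""
        self.counts[op] += 1
        self.rewards[op].append(max(0.0, improvement))
        if len(self.rewards[op]) > self.window:
            self.rewards[op].pop(0)
```

With the extreme-value credit, an operator that rarely but occasionally produces large jumps can dominate one that reliably produces small gains, which is the behaviour the PPSN'08 reward was designed to capture.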
- Adaptation for Continuous Optimization: building on the well-known Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which adapts the covariance matrix of the Gaussian mutation of an Evolution Strategy based on the path followed by the evolution, several improvements and generalizations have been proposed:
- an adaptive mechanism applicable to any distribution-based search strategy was proposed at PPSN'08. The mechanism renders the underlying search strategy rotationally invariant and facilitates adaptation to non-separable problems; on non-separable, badly scaled problems, the performance of many search algorithms can improve by orders of magnitude.
- a version of CMA with linear computational complexity and linear space requirements was proposed at PPSN'08 (compared to quadratic for the original algorithm). In high-dimensional search spaces (dimension larger than one hundred), the new variant can be advantageous not only on cheap-to-evaluate search problems but even on very expensive non-separable ones.
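The full CMA-ES update is beyond the scope of this page, but the underlying principle, adapting the search distribution from observed success, can be illustrated with a much simpler relative: a (1+1)-ES with 1/5th-success-rule step-size adaptation. This sketch adapts only a scalar step size, not a full covariance matrix; names and constants are ours, not taken from the PPSN'08 papers.

```python
import math
import random

def one_plus_one_es(f, x0, sigma0=1.0, iters=800, seed=1):
    """Minimal (1+1)-ES with 1/5th-success-rule step-size adaptation.
    On success the step size grows; on failure it shrinks by a smaller
    factor, so the success rate equilibrates near 1/5 (illustrative
    sketch of distribution adaptation, far simpler than CMA-ES)."""
    rng = random.Random(seed)
    x, fx, sigma = list(x0), f(x0), sigma0
    for _ in range(iters):
        # Sample one offspring from the current Gaussian mutation distribution.
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= 1.5            # success: enlarge the step size
        else:
            sigma *= 1.5 ** -0.25   # failure: shrink it (equilibrium at ~1/5 successes)
    return x, fx
```

CMA-ES generalizes this idea by adapting a full covariance matrix from the evolution path, which is what makes it invariant to rotations of the search space.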
- Meta-parameter tuning for Machine Learning Algorithms: non-parametric learning algorithms usually require the tuning of hyper-parameters that determine the complexity of the learning machine. Tuning these parameters is usually done manually based on (cross-)validation schemes. The goal of this theme is to develop principled methods to carry out this optimization task automatically using global optimization algorithms. This theme is part of the MetaModel project.
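The objective being optimized here is the cross-validated error as a function of the hyper-parameters. As a minimal stand-in for the global optimization algorithms mentioned above, the sketch below tunes the single hyper-parameter k of a toy 1-D k-nearest-neighbour regressor by exhaustive search over candidates; all function names are ours.

```python
def knn_predict(train, x, k):
    """1-D k-nearest-neighbour regression: average the targets of the
    k training points closest to x."""
    neigh = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in neigh) / len(neigh)

def cv_error(data, k, folds=5):
    """Mean squared error of k-NN under `folds`-fold cross-validation:
    each fold is held out once while the rest serves as training data."""
    err, n = 0.0, len(data)
    for i in range(folds):
        valid = data[i::folds]
        train = [p for j, p in enumerate(data) if j % folds != i]
        err += sum((knn_predict(train, x, k) - y) ** 2 for x, y in valid)
    return err / n

def tune_k(data, candidates):
    """Pick the hyper-parameter k with the lowest cross-validated error.
    A global optimizer would replace this exhaustive scan for expensive
    models or multi-dimensional hyper-parameter spaces."""
    return min(candidates, key=lambda k: cv_error(data, k))
```

For one integer hyper-parameter the scan is cheap; the research theme targets the harder case where each cross-validation is expensive and the hyper-parameter space is continuous and multi-dimensional.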
- Learning Heuristics Choice in Constraint Programming: several heuristics have been proposed for choosing which branch to explore next in Constraint Programming algorithms. The idea we are exploring is to learn which one is best given the characteristics of the current node of the search tree (e.g., domain sizes, number of still-unsatisfied constraints, etc.).
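The ingredients of such a scheme can be sketched as follows: extract numeric features of the current node, then score each candidate branching heuristic with a model whose weights would be learned offline from solver traces. Everything below (feature set, heuristics, linear scoring) is an illustrative assumption, not the method under development.

```python
def node_features(domains, unsatisfied):
    """Features of the current search node: mean and minimum remaining
    domain size, plus the number of still-unsatisfied constraints
    (an illustrative feature set). `domains` maps variables to sets."""
    sizes = [len(d) for d in domains.values()]
    return {
        "mean_dom": sum(sizes) / len(sizes),
        "min_dom": min(sizes),
        "open_constraints": unsatisfied,
    }

def min_domain(domains):
    """One candidate branching heuristic: branch on the unassigned
    variable with the smallest remaining domain (first-fail)."""
    return min((v for v in domains if len(domains[v]) > 1),
               key=lambda v: len(domains[v]))

def choose_heuristic(features, weights):
    """Score each heuristic with a linear model on the node features and
    return the best-scoring one; in the envisioned system the weights
    would be learned from past search performance."""
    def score(w):
        return sum(w.get(f, 0.0) * x for f, x in features.items())
    return max(weights, key=lambda h: score(weights[h]))
```

The learning problem is then to fit, per heuristic, a mapping from node features to expected subtree cost, so that the cheapest heuristic is chosen at every node.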
- Active Learning, or how to choose the next sample depending on previously seen examples.
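The classic textbook illustration of the gain: to learn a 1-D threshold concept, always querying the most informative point (the middle of the still-unresolved region) needs O(log n) labels where passive labelling needs O(n). A minimal sketch, with names of our own choosing:

```python
def active_learn_threshold(pool, oracle):
    """Active learning of a 1-D threshold concept: labels are 0 left of
    an unknown threshold and 1 at or right of it. Querying the midpoint
    of the unresolved region at each step is binary search, so the
    number of label queries is logarithmic in the pool size."""
    pts = sorted(pool)
    lo, hi = 0, len(pts)           # the label boundary lies in pts[lo:hi]
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2       # most informative point to query next
        queries += 1
        if oracle(pts[mid]) == 1:  # pts[mid] is at or right of the threshold
            hi = mid
        else:
            lo = mid + 1
    return lo, queries             # index of the first point labelled 1
```

Realistic active-learning settings (noisy labels, multi-dimensional hypotheses) need uncertainty- or disagreement-based query criteria, but the exponential saving over passive sampling is the same motivation.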
- Designing Problem Descriptors is a longer-term goal: being able to accurately describe a given problem (or instance) will allow us to then learn from extensive experiments which algorithms/parameters are good for classes of instances, or even for individual instances (see e.g. F. Hutter, Y. Hamadi, H. H. Hoos, and K. Leyton-Brown: Performance Prediction and Automated Tuning of Randomized and Parametric Algorithms, CP'2006).
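Once such descriptors exist, even a very simple learner closes the loop: given a new instance's descriptor vector, reuse whatever worked on the most similar past instance. The nearest-neighbour rule below is only an illustrative baseline (the cited CP'2006 work builds proper regression models of runtime), and the descriptor format is assumed.

```python
def select_algorithm(descriptor, history):
    """Pick an algorithm for a new instance by finding the most similar
    past instance (squared Euclidean distance between descriptor
    vectors) and reusing the algorithm that performed best on it.
    `history` is a list of (descriptor, best_algorithm) pairs collected
    from past experiments (format assumed for illustration)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, best = min(history, key=lambda rec: dist(rec[0], descriptor))
    return best
```

In practice descriptors must be normalized and weighted, and predicting expected performance per algorithm (rather than copying a single neighbour) is more robust; the hard open problem is designing descriptors that make similar instances behave similarly.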
Related projects
- The goal of the Microsoft-INRIA Joint Lab project on Adaptive Search for E-Science is precisely to automate the parameter tuning of search algorithms.
- EvoTest is a European STREP that started in September 2006, dealing with the evolutionary generation of test data. TAO's contribution is to provide the Evolution Engine for the framework, including on the one hand the GUIDE interface for easy EA design, and on the other hand automatic parameter tuning facilities added to it.
- MetaModel (Advanced methodologies for modeling interdependent systems - applications in experimental physics) is an ANR "jeune chercheur" project.