
Seminar19122017

Tuesday, December 19th, 2017

14:30 (room 455, building 650)
Note: unusual room (next building)

Antonio Vergari

(LACAM, Department of Computer Science, University of Bari 'Aldo Moro', Italy)

Title: Learning and Exploiting Deep Tractable Probabilistic Models


Abstract

Making use and sense of the unlabeled data that surrounds us is one of the challenges we face as scientists, data science practitioners, or decision makers. Fitting a probabilistic model to recover the probability distribution that generated the data, i.e., performing density estimation, is one of the key ways to tackle this problem in Machine Learning.
The central task for these estimators is to perform inference, that is, to reason about the possible and uncertain configurations of our world.

Tractable probabilistic models (TPMs) allow several classes of probabilistic queries to be computed in a feasible way, i.e., in time polynomial in the number of random variables considered.
In this talk, the focus is on Sum-Product Networks (SPNs), recently proposed TPMs that are expressive enough to model complex probability distributions, offer tractable inference for a wide range of query types, and can be learned efficiently in an unsupervised way. Moreover, an SPN can be learned once and exploited many times: reused to answer several inference query types ad libitum, and employed for predictive tasks as well.
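To make the tractability claim concrete, the following is a minimal, illustrative sketch of bottom-up SPN evaluation over two binary variables. The structure, class names, and parameters are invented for illustration, not taken from the talk; the point is that both joint and marginal queries reuse the same single bottom-up pass, with unobserved leaves simply evaluating to 1.

```python
class Leaf:
    """Bernoulli leaf over one variable; unobserved evidence marginalizes it out."""
    def __init__(self, var, p):
        self.var, self.p = var, p
    def value(self, evidence):
        x = evidence.get(self.var)
        if x is None:            # variable not in evidence: sums out to 1
            return 1.0
        return self.p if x == 1 else 1.0 - self.p

class Product:
    """Product node: multiplies child values (independent scopes)."""
    def __init__(self, children):
        self.children = children
    def value(self, evidence):
        v = 1.0
        for c in self.children:
            v *= c.value(evidence)
        return v

class Sum:
    """Sum node: weighted mixture of child values."""
    def __init__(self, weighted_children):  # list of (weight, child)
        self.weighted_children = weighted_children
    def value(self, evidence):
        return sum(w * c.value(evidence) for w, c in self.weighted_children)

# A toy SPN: a 2-component mixture of product distributions over X1, X2.
spn = Sum([
    (0.6, Product([Leaf("X1", 0.9), Leaf("X2", 0.2)])),
    (0.4, Product([Leaf("X1", 0.3), Leaf("X2", 0.8)])),
])

joint = spn.value({"X1": 1, "X2": 0})   # P(X1=1, X2=0) = 0.456
marginal = spn.value({"X1": 1})         # P(X1=1) = 0.66, X2 summed out in the same pass
```

Both queries cost one linear traversal of the network, which is what makes marginal and conditional inference tractable in SPNs.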

In particular, we show how SPNs can be extended and learned on mixed domains (comprising continuous, discrete, and categorical features) without requiring users to specify a parametric form for the random variables involved, nor for their interactions. These Mixed SPNs still allow a wide range of queries to be answered in tractable time, overcoming many of the limitations that classical mixed probabilistic models face and ultimately paving the way towards automating density estimation.

As an additional way to exploit SPNs, we show how such TPMs can be naturally extended to Representation Learning by equipping them with encoding and decoding routines that operate on both categorical and continuous embeddings.
By exploiting MPE inference, one can obtain rich representations that prove highly effective when employed to encode the input features, the output features, or both in a Multi-Label Classification scenario.
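As a rough illustration of the encoding idea: replacing each sum with a max (as in MPE-style evaluation) lets every sum node report which child it selected, and the collected indices form a categorical code for the input. The nested-tuple SPN encoding below and all names are hypothetical, a sketch of the principle rather than the method presented in the talk.

```python
def max_eval(node, evidence, emb):
    """Max-evaluate an SPN given as nested tuples; record each sum node's
    argmax child index in `emb`, yielding a categorical embedding."""
    kind = node[0]
    if kind == "leaf":                      # ("leaf", var, p): Bernoulli leaf
        _, var, p = node
        x = evidence.get(var)
        if x is None:                       # unobserved: marginalized out
            return 1.0
        return p if x == 1 else 1.0 - p
    if kind == "prod":                      # ("prod", [children])
        v = 1.0
        for c in node[1]:
            v *= max_eval(c, evidence, emb)
        return v
    # ("sum", [(weight, child), ...]): take the max branch instead of the sum
    scores = [w * max_eval(c, evidence, emb) for w, c in node[1]]
    best = max(range(len(scores)), key=scores.__getitem__)
    emb.append(best)                        # one categorical code per sum node
    return scores[best]

# Toy 2-component mixture over binary X1, X2 (parameters are illustrative).
spn = ("sum", [
    (0.6, ("prod", [("leaf", "X1", 0.9), ("leaf", "X2", 0.2)])),
    (0.4, ("prod", [("leaf", "X1", 0.3), ("leaf", "X2", 0.8)])),
])

emb = []
max_eval(spn, {"X1": 1, "X2": 1}, emb)
# emb now holds one index per sum node: the selected mixture component
```

A decoder can invert such codes by following the recorded branches back down to the leaves, which is what makes the encoding/decoding routines mentioned above natural for SPNs.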



Contact: guillaume.charpiat at inria.fr
All TAU seminars: here

Page last modified on Monday 18 of December, 2017 16:08:43 CET by guillaume.