Bézout-Facebook scientific workshop

November 4th, 2019 — A one-day “meeting with the Industry” of the Bézout Labex is organized on the Marne-la-Vallée campus. It will be an opportunity to hear talks from researchers at Facebook AI Research Paris and from researchers at the Labex Bézout, on the topic of machine learning.
Talks will be in English.

Scientific coordinator: Laurent Najman, LIGM

For organizational reasons, registration is free but mandatory, as the number of places is limited. Lunch will be offered, subject to availability, on a “first come, first served” basis (meaning you need to register as soon as possible if you want to eat).

A confirmation email will be sent for the registration, and one later for the lunch.
The registration link is here.

Program

  • 09h30 – 09h50 — Welcome coffee
  • 09h50 – 10h00 — Introduction – Eric Colin de Verdière & Laurent Najman
  • 10h00 – 11h00 — Mathieu Aubry
  • 11h00 – 12h00 — Olivier Teytaud
  • 12h00 – 13h30 — Lunch
  • 13h30 – 14h30 — Camille Couprie
  • 14h30 – 15h30 — François-Xavier Vialard
  • 15h30 – 15h45 — Coffee break
  • 16h00 – 17h30 — Short talks from the Labex
    • Romuald Elie
    • Zineb Belkacemi and Inass Sekkat
    • Giovanni Chierchia and Benjamin Perret

The meeting will take place in the auditorium of bibliothèque Georges Perec.

Abstracts

Mathieu Aubry / LIGM
Deep Learning on historical images
I will first present how deep learning can be used effectively to predict the price of artworks from a large collection of auction history, and what we can learn from the predictions. I will then present deep learning approaches developed with Xi Shen and Alexei A. Efros specifically to work with historical and artistic images. The two main challenges we face in many digital humanities problems are the absence of supervised training data and the diversity of the modalities of the depictions (e.g. drawings, oil paintings and photographs). We thus developed an unsupervised domain adaptation approach. I will present in detail how it allows us to recognize watermarks at scale and discover near-duplicate patterns in large collections of artworks.
More information and results about this approach are available at http://imagine.enpc.fr/~shenx/Watermark/ and http://imagine.enpc.fr/~shenx/ArtMiner/

Olivier Teytaud / Facebook AI Research Paris
Zero learning
We present recent research trends in (possibly partially observable) Markov Decision Processes, combining Monte Carlo Tree Search and Deep Learning with zero learning.
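For context, “zero learning” refers to methods that learn tabula rasa, without human game data, by combining a learned policy and value network with Monte Carlo Tree Search. The sketch below is purely illustrative (not the speaker's code): the Node fields, the PUCT constant and the toy priors are placeholder choices.

import math

class Node:
    """Search-tree node; the fields mirror the usual N, W, P statistics."""
    def __init__(self, prior):
        self.prior = prior        # P(s, a): prior from the policy network
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # W(s, a)
        self.children = {}        # action -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_action(node, c_puct=1.5):
    """PUCT rule: pick the child maximizing Q + U, where U favours
    high-prior, rarely visited actions."""
    total = sum(child.visits for child in node.children.values())
    def score(item):
        _, child = item
        u = c_puct * child.prior * math.sqrt(total + 1) / (1 + child.visits)
        return child.q() + u
    return max(node.children.items(), key=score)[0]

def backup(path, value):
    """Propagate the value estimate (from the value network) up the visited path."""
    for node in reversed(path):
        node.visits += 1
        node.value_sum += value
        value = -value   # two-player, zero-sum sign flip

# Toy usage with made-up priors over three actions.
root = Node(prior=1.0)
root.children = {a: Node(prior=p) for a, p in [("a", 0.5), ("b", 0.3), ("c", 0.2)]}
best = select_action(root)
backup([root, root.children[best]], value=0.1)
print("selected:", best)

In AlphaZero-style systems, such selection and backup steps are applied repeatedly down the tree, the leaf value comes from the network, and the visit counts at the root provide the training targets for the policy.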

Camille Couprie / Facebook AI Research Paris
Image generative modeling for future prediction or inspirational purposes 
Generative models, and in particular adversarial ones, are becoming prevalent in computer vision: they enhance artistic creation, inspire designers, and prove useful in semi-supervised learning and robotics applications.
An important prerequisite towards intelligent behavior is the ability to anticipate future events.
Predicting the appearance of future video frames is a proxy task towards this ability. We will present how generative adversarial networks (GANs) can help, as well as novel approaches that predict in higher-level feature spaces such as semantic segmentations. In a second part, we will see how to develop the ability of GANs to deviate from training examples and generate novel images. Finally, since a limitation of GANs is that they produce raw images of low resolution, we present solutions to produce vectorized results.
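As background on the adversarial setup, here is a minimal, illustrative training step in PyTorch, assuming tiny fully connected networks and a random stand-in batch; none of it corresponds to the models or data of the talk.

import torch
import torch.nn as nn

# Tiny, made-up generator and discriminator (placeholders only).
latent_dim, data_dim, batch = 16, 64, 32
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(batch, data_dim)        # stand-in for a batch of real samples
fake = G(torch.randn(batch, latent_dim))   # generated samples

# Discriminator step: push real towards label 1 and fake towards label 0.
loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: try to fool the (updated) discriminator.
loss_g = bce(D(fake), torch.ones(batch, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
print(f"loss_d={loss_d.item():.3f}  loss_g={loss_g.item():.3f}")

Frame-prediction variants keep the same two-player objective but condition the generator on past frames, or on their higher-level features such as semantic segmentations, instead of a purely random latent input.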

François-Xavier Vialard / LIGM
New losses based on optimal transport and metric learning estimation in image registration.
This talk presents two different topics:
(1) In the first part, we present a new divergence between probability measures based on entropic regularized optimal transport. We show the well-posedness of this loss and its computational feasibility based on the Sinkhorn algorithm. We then explain the interest of this loss in machine learning and shape matching.
Reference: Interpolating between Optimal Transport and MMD using Sinkhorn Divergences, AISTATS 2019.
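For reference, the Sinkhorn divergence of the cited paper debiases the entropic optimal transport cost with two self-transport terms; in standard notation (a sketch of the definition, not quoted from the paper):

\[
\mathrm{OT}_\varepsilon(\alpha,\beta) = \min_{\pi \in \Pi(\alpha,\beta)} \int c \,\mathrm{d}\pi + \varepsilon\, \mathrm{KL}(\pi \,\|\, \alpha \otimes \beta),
\qquad
S_\varepsilon(\alpha,\beta) = \mathrm{OT}_\varepsilon(\alpha,\beta) - \tfrac{1}{2}\,\mathrm{OT}_\varepsilon(\alpha,\alpha) - \tfrac{1}{2}\,\mathrm{OT}_\varepsilon(\beta,\beta).
\]

As ε → 0, S_ε recovers the optimal transport cost, and as ε → ∞ it tends to an MMD, which is exactly the interpolation in the title of the reference.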
(2) The second part is about metric estimation in diffeomorphic (medical) image registration. Most standard methods rely on a user-defined smoothing operator that encodes the smoothness of the deformation. Departing from end-to-end deep learning frameworks, we use a shallow network to adapt the metric to the data.
Reference: Metric Learning for Image Registration, CVPR 2019.

Romuald Elie / LAMA
Reinforcement learning for Mean Field Games
Learning by experience in Multi-Agent Systems (MAS) is a difficult and exciting task, due to the lack of stationarity of the environment, whose dynamics evolve as the population learns. In order to design scalable algorithms for systems with a large population of interacting agents (e.g. swarms), we will focus on Mean Field Multi-Agent Systems, where the number of agents is asymptotically infinite. After a short introduction to the corresponding Mean Field Game environment, we will investigate the quality of the learned Nash equilibrium in this context, using fictitious play algorithms. The theoretical results are illustrated with numerical experiments in a continuous action-space environment, where the approximate best response of the iterative fictitious play scheme is computed with a deep RL algorithm. The talk is mainly based on joint work with Julien Pérolat (DeepMind), Mathieu Laurière (Princeton), Matthieu Geist (Google) and Olivier Pietquin (Google).
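As a rough sketch of fictitious play in this setting (one common formulation, not necessarily the exact scheme of the talk), the iteration alternates a best response against the time-averaged population distribution with an update of that average:

\[
\pi_k \in \arg\max_{\pi} \, J\big(\pi ;\, \bar\mu_{k-1}\big),
\qquad
\bar\mu_k = \frac{k-1}{k}\,\bar\mu_{k-1} + \frac{1}{k}\,\mu_{\pi_k},
\]

where \(\mu_{\pi_k}\) denotes the population distribution induced by the best response \(\pi_k\); in the experiments mentioned in the abstract, the arg max is only computed approximately, by a deep RL agent.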

Zineb Belkacemi (Sanofi and CERMICS) and Inass Sekkat (CERMICS)
Interactions between machine learning and computational statistical physics
We present two projects:
– one where machine learning techniques are used in molecular dynamics, namely to reduce the dimensionality of biomolecular systems through the use of autoencoders, which infer reaction coordinates along which free energies are computed to bias the sampling of the configurational space (a minimal sketch is given after this list);
– one where molecular dynamics techniques are used in machine learning, namely adaptive Langevin dynamics used to reduce the bias due to mini-batching in large-scale Bayesian inference.
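A minimal, illustrative sketch of the autoencoder idea from the first project: the layer sizes, the random stand-in configurations and the plain reconstruction loss are placeholder choices, not the authors' setup.

import torch
import torch.nn as nn

# Placeholder dimensions: n_features coordinates per configuration, 1-D bottleneck.
n_features, bottleneck = 30, 1
encoder = nn.Sequential(nn.Linear(n_features, 32), nn.Tanh(), nn.Linear(32, bottleneck))
decoder = nn.Sequential(nn.Linear(bottleneck, 32), nn.Tanh(), nn.Linear(32, n_features))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

configs = torch.randn(1024, n_features)  # stand-in for sampled molecular configurations
for _ in range(200):
    z = encoder(configs)                             # low-dimensional representation
    loss = ((decoder(z) - configs) ** 2).mean()      # reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()

# The learned map encoder(x) plays the role of a candidate reaction coordinate xi(x);
# the free energy along xi would then be estimated and used to bias further sampling.
xi = encoder(configs).detach()
print("reaction coordinate range:", xi.min().item(), xi.max().item())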

Giovanni Chierchia and Benjamin Perret / LIGM
Ultrametric Fitting by Gradient Descent
We study the problem of fitting an ultrametric distance to a dissimilarity graph in the context of hierarchical cluster analysis. Standard hierarchical clustering methods are specified procedurally, rather than in terms of the cost function to be optimized. We aim to overcome this limitation by presenting a general optimization framework for ultrametric fitting. Our approach consists of modeling the latter as a constrained optimization problem over the continuous space of ultrametrics. In doing so, we can leverage the simple, yet effective, idea of replacing the ultrametric constraint with an equivalent min-max operation injected directly into the cost function. The proposed reformulation leads to an unconstrained optimization problem that can be efficiently solved by gradient descent methods. The flexibility of our framework allows us to investigate several cost functions, following the classic paradigm of combining a data fidelity term with a regularization term. While we provide no theoretical guarantee to find the global optimum, the numerical results obtained over a number of synthetic and real datasets demonstrate the good performance of our approach with respect to state-of-the-art agglomerative algorithms. This makes us believe that the proposed framework sheds new light on the way to design a new generation of hierarchical clustering methods.
This talk is based on a paper accepted to NeurIPS 2019.
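To illustrate the min-max operation mentioned in the abstract (a sketch under our own simplifications, not the authors' implementation): on a graph with edge weights w, the closest ultrametric from below assigns to each pair of vertices the minimum, over all connecting paths, of the maximum edge weight along the path; injecting this operator into the cost is what allows plain gradient descent on w.

import numpy as np

def subdominant_ultrametric(w):
    """w: symmetric matrix of edge weights (use np.inf for missing edges).
    Returns u with u[i, j] = min over paths from i to j of the max edge weight,
    computed by a Floyd-Warshall-style pass in the (min, max) algebra."""
    u = w.copy()
    n = len(u)
    for k in range(n):
        u = np.minimum(u, np.maximum(u[:, k:k + 1], u[k:k + 1, :]))
    np.fill_diagonal(u, 0.0)
    return u

# Toy dissimilarity graph on 4 points.
w = np.array([[0., 1., 4., 5.],
              [1., 0., 2., 6.],
              [4., 2., 0., 3.],
              [5., 6., 3., 0.]])
print(subdominant_ultrametric(w))

On this toy input, the output coincides with the single-linkage (subdominant) ultrametric of the dissimilarities.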