PGMODays 2022

 

This year, PGMO celebrates its 10th anniversary!

Two days with plenary lectures by invited speakers, a review of the PGMO Program, the PGMO PhD award ceremony, parallel sessions on PGMO's main topics, and a special evening event!

 

PGMODays 2022

29 and 30 November 2022

 

EDF Lab Paris-Saclay, Palaiseau

 

PGMODAYS PROGRAM

 

Booklet of Abstracts

SPEAKERS 2022

 

  • Bolte Jérôme, Université Toulouse 1 Capitole, France: Website
  • Cardaliaguet Pierre, Université Paris Dauphine, France: Website
  • Cornuéjols Gérard, Carnegie Mellon University, USA: Website
  • Gondzio Jacek, The University of Edinburgh, Scotland, UK: Website
  • Ljubić Ivana, ESSEC, France: Website
  • Sagastizábal Claudia, University of Campinas, Brazil: Website

Bolte Jérôme

A Glance at Nonsmooth Automatic Differentiation

I will recall the fundamental role of nonsmooth automatic differentiation as the core learning mechanism in modern AI. I will then show how a recent theory we developed with E. Pauwels, «conservative gradients», helps to understand fundamental phenomena such as the convergence of learning phases in deep learning, the optimization of learning parameters, the nonsmooth cheap gradient principle, and the differentiation of algorithms.

Cardaliaguet Pierre

Recent aspects of mean field control

Mean field control consists in the optimal control of a large population of devices: it is motivated, for instance, by the optimal control of a fleet of vehicles or of drones, or the optimal charging of batteries. Very often the system is simplified into the optimal control of a partial differential equation of McKean-Vlasov type. In this talk I will present a series of works (obtained with S. Daudin, J. Jackson and P. Souganidis) in which we investigate quantitatively the validity of this approximation.

Cornuéjols Gérard

Dyadic Linear Programming

A rational number is dyadic if it is an integer multiple of 1/2^k for some nonnegative integer k. Dyadic numbers are important for numerical computations because they have a finite binary representation, and therefore they can be represented exactly on a computer in floating-point arithmetic. When real or rational numbers are approximated by dyadic numbers on a computer, approximation errors may propagate and accumulate throughout the computations. So it is natural to ask when linear programs have dyadic optimal solutions. We address this question in joint work with Ahmad Abdi, Levent Tunçel, and Bertrand Guenin.
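As a small illustration of the definition (a sketch of my own, not part of the speaker's material): a rational written in lowest terms is dyadic exactly when its reduced denominator is a power of two, which is easy to test.

    from fractions import Fraction

    def is_dyadic(x: Fraction) -> bool:
        # A reduced rational p/q is dyadic iff q is a power of two.
        q = x.denominator
        return q & (q - 1) == 0  # bit trick: powers of two have a single set bit

    print(is_dyadic(Fraction(3, 8)))  # True:  3/8 = 3 * (1/2^3)
    print(is_dyadic(Fraction(1, 3)))  # False: 1/3 has no finite binary expansion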

Gondzio Jacek

Applications of Interior Point methods: from Sparse Approximations to Discrete Optimal Transport

A variety of problems in modern applications of optimization require the selection of a 'sparse' solution, a vector with preferably few nonzero entries. Such problems may originate from very different applications in computational statistics, signal or image processing, compressed sensing, finance, machine learning and discrete optimal transport, to mention just a few. Sparse approximation problems are often solved with dedicated and highly specialised first-order methods of optimization.

In this talk I will argue that these problems may be solved very efficiently by more reliable optimization techniques which involve some use of (inexact) second-order information, as long as this is combined with appropriately chosen iterative techniques of linear algebra, such as methods from the Krylov-subspace family. Two particular classes of methods, the Newton Conjugate Gradient and the Interior Point Method, will be interpreted as suitable homotopy-type approaches and will be applied to solve problems arising from: compressed sensing, multi-period portfolio optimization, classification of data coming from functional Magnetic Resonance Imaging, restoration of images corrupted by Poisson noise, classification via regularized logistic regression, and discrete optimal transport. In all these cases, the performance of the proposed methods will be compared against competitive first-order methods. Computational evidence will be provided to demonstrate that specialized second-order methods compare favourably and often outperform the cutting-edge first-order methods.
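To make the general idea concrete, here is a minimal Newton Conjugate Gradient sketch for one of the applications listed above, l2-regularized logistic regression. It is only an illustration of the technique under my own assumptions (full Newton steps, no line search, hand-rolled CG), not the speaker's implementation: the Newton system is solved inexactly by conjugate gradients using only Hessian-vector products.

    import numpy as np

    def cg(A_mv, b, tol=1e-8, max_iter=200):
        # Conjugate gradients for A x = b, with A given only as a matrix-vector product.
        x = np.zeros_like(b)
        r = b - A_mv(x)
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = A_mv(p)
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    def newton_cg_logreg(X, y, lam=1e-2, iters=20):
        # Inexact Newton for l2-regularized logistic regression (labels y in {0, 1}).
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(iters):
            prob = 1.0 / (1.0 + np.exp(-(X @ w)))          # predicted probabilities
            grad = X.T @ (prob - y) / n + lam * w
            D = prob * (1.0 - prob) / n                    # diagonal Hessian weights
            hess_mv = lambda v: X.T @ (D * (X @ v)) + lam * v
            w = w + cg(hess_mv, -grad)                     # inexact Newton step via CG
        return w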

For more details, see:

[1] V. De Simone, D. di Serafino, J. Gondzio, S. Pougkakiotis, and M. Viola, Sparse approximations with interior point methods, SIAM Review (accepted 24 Nov 2021). https://arxiv.org/abs/2102.13608

[2] F. Zanetti and J. Gondzio, A sparse interior point method for linear programs arising in discrete optimal transport, Tech Report (22 June 2022). https://arxiv.org/abs/2206.11009

 

Ljubić Ivana

Bilevel Optimization Under Uncertainty

Bilevel optimization problems can serve as a powerful tool for modeling hierarchical decision-making processes. The resulting problems are highly challenging to solve, both in theory and in practice. Fortunately, there have been significant algorithmic advances in the field of bilevel optimization, so that we can solve much larger and also more complicated problems today than was possible two decades ago.

This results in more and more challenging bilevel problems that researchers try to solve today. In this talk I will give an overview of one of these more challenging classes of bilevel problems: bilevel optimization under uncertainty. We will discuss classical ways of addressing uncertainties in bilevel optimization using stochastic or robust techniques. Moreover, the sources of uncertainty in bilevel optimization can be much richer than for usual single-level problems, since not only the problem's data but also the (observation of the) decisions of the two players can be subject to uncertainty. Thus, we will also discuss bilevel optimization under limited observability, problems considering only near-optimal decisions, and intermediate solution concepts between the optimistic and pessimistic cases.
The talk is based on the article by Yasmine Beck, Ivana Ljubić, and Martin Schmidt, A Survey on Bilevel Optimization Under Uncertainty, https://optimization-online.org/2022/06/8963/

Sagastizábal Claudia Alejandra

Some splitting methods from a nonsmooth optimizer's perspective

For large-scale optimization, popular approaches such as the ADMM and the Progressive Hedging algorithm exploit separable structures by solving individual sub-problems in parallel, which are then coordinated by performing a simple algebraic step (typically, a projection onto a linear subspace).
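A toy sketch of this pattern (my own illustrative example with made-up quadratic scenario sub-problems, not the speakers' code): sub-problems are solved independently and coordinated by averaging, i.e. a projection onto the non-anticipativity subspace.

    import numpy as np

    def progressive_hedging(c, rho=1.0, iters=50):
        # Toy scenario sub-problems: min_x 0.5*(x - c_s)^2, coupled by the
        # non-anticipativity constraint "all scenario decisions are equal".
        c = np.asarray(c, dtype=float)
        x = c.copy()                     # per-scenario decisions
        xbar = x.mean()                  # projection onto the consensus subspace
        w = np.zeros_like(c)             # multipliers for non-anticipativity
        for _ in range(iters):
            # parallelizable augmented sub-problems (closed form in this toy case)
            x = (c - w + rho * xbar) / (1.0 + rho)
            xbar = x.mean()              # the simple algebraic coordination step
            w += rho * (x - xbar)        # multiplier update
        return xbar

    print(progressive_hedging([1.0, 2.0, 6.0]))  # approaches the consensus value 3.0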

While parallelism is the strength of all Douglas-Rachford-like splitting methods, their weak points are the adjustment of a certain proximal parameter and the lack of a fully implementable stopping test. These difficulties, which date back to Spingarn's method of partial inverses, stem from the very design of these approaches, which were created to find zeroes of operators.

We discuss how to endow some splitting methods with an optimization perspective that introduces knowledge about the objective function throughout the iterative process. The resulting new family of methods à la bundle incorporates a projective step to coordinate parallel information while allowing the variation of the proximal parameters and the construction of a reliable stopping test. A Bundle Progressive Hedging algorithm, derived from the general theory, illustrates the interest of the proposal. Credit to co-authors will be given during the talk.
