
    PGMODays

This year, PGMO celebrates its 10th anniversary!

Two days of plenary lectures, sessions of talks on current PGMO topics, the awarding of the 2022 PhD prize, and a special 10th-anniversary evening.

     

SAVE THE DATES

    PGMODays 2022

29 and 30 November 2022

    EDF'Lab, Palaiseau

EDF'Lab access map

     

Program and information to come


PGMODAYS abstract submission process

Submissions on all aspects of optimization and operations research, theoretical or applied, and their interfaces with data science are welcome.

     

Click for more information about the submission process.

     


     

Speakers 2022

• Bolte Jérôme, Université Toulouse 1 Capitole, France: Website
• Cardaliaguet Pierre, Université Paris Dauphine, France: Website
• Cornuéjols Gérard, Carnegie Mellon University, USA: Website
• Gondzio Jacek, The University of Edinburgh, Scotland, UK: Website
• Ljubić Ivana, ESSEC, France: Website
• Sagastizábal Claudia, University of Campinas, Brazil: Website

    Cardaliaguet Pierre

    Recent aspects of mean field control

Mean field control consists in the optimal control of a large population of devices: it is motivated, for instance, by the optimal control of a fleet of vehicles or a fleet of drones, or by the optimal charging of batteries. Very often the system is simplified into the optimal control of a partial differential equation of McKean-Vlasov type. In this talk I will present a series of works (obtained with S. Daudin, J. Jackson and P. Souganidis) in which we investigate quantitatively the validity of this approximation.
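For orientation, a schematic mean field control problem (generic notation, not taken from the abstract) seeks a control \(\alpha\) minimizing

\[
J(\alpha) = \mathbb{E}\left[\int_0^T L\big(X_t,\alpha_t,\mathcal{L}(X_t)\big)\,dt + g\big(X_T,\mathcal{L}(X_T)\big)\right],
\qquad
dX_t = b\big(X_t,\alpha_t,\mathcal{L}(X_t)\big)\,dt + \sigma\,dW_t,
\]

where \(\mathcal{L}(X_t)\) denotes the law of the state \(X_t\). The McKean-Vlasov character lies precisely in the dependence of the cost and dynamics on this law, which arises as the limit of a large population of interacting devices.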

    Gondzio Jacek

    Applications of Interior Point methods: from Sparse Approximations to Discrete Optimal Transport

A variety of problems in modern applications of optimization require the selection of a 'sparse' solution, a vector with preferably few nonzero entries. Such problems may originate from very different applications in computational statistics, signal or image processing, compressed sensing, finance, machine learning and discrete optimal transport, to mention just a few. Sparse approximation problems are often solved with dedicated and highly specialised first-order methods of optimization.

In this talk I will argue that these problems may be solved very efficiently by more reliable optimization techniques that make some use of (inexact) second-order information, as long as this is combined with appropriately chosen iterative techniques of linear algebra, such as methods from the Krylov-subspace family. Two particular classes of methods, the Newton Conjugate Gradient method and the Interior Point Method, will be interpreted as suitable homotopy-type approaches and will be applied to solve problems arising from: compressed sensing, multi-period portfolio optimization, classification of data coming from functional Magnetic Resonance Imaging, restoration of images corrupted by Poisson noise, classification via regularized logistic regression, and discrete optimal transport. In all these cases, the performance of the proposed methods will be compared against competitive first-order methods. Computational evidence will be provided to demonstrate that specialized second-order methods compare favourably with and often outperform the cutting-edge first-order methods.
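For concreteness, a canonical sparse approximation problem of the kind mentioned above is the \(\ell_1\)-regularized least-squares problem (a generic formulation given here for orientation; see [1] for the precise settings treated in the talk)

\[
\min_{x\in\mathbb{R}^n}\ \tfrac12\,\|Ax-b\|_2^2 + \tau\,\|x\|_1,
\]

where \(\tau>0\) controls the sparsity of the solution. The standard splitting \(x=u-v\) with \(u,v\ge 0\) turns the nonsmooth \(\ell_1\) term into a linear objective over the nonnegative orthant, which is exactly the setting in which interior point methods apply.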

For more details, see:

[1] V. De Simone, D. di Serafino, J. Gondzio, S. Pougkakiotis, and M. Viola, Sparse approximations with interior point methods, SIAM Review (accepted 24 Nov 2021). https://arxiv.org/abs/2102.13608

[2] F. Zanetti and J. Gondzio, A sparse interior point method for linear programs arising in discrete optimal transport, Tech Report (22 June 2021). https://arxiv.org/abs/2206.11009

     

    Ljubic Ivana

    Bilevel Optimization Under Uncertainty

Bilevel optimization problems can serve as a powerful tool for modeling hierarchical decision-making processes. The resulting problems are highly challenging to solve---both in theory and practice. Fortunately, there have been significant algorithmic advances in the field of bilevel optimization, so that today we can solve much larger and more complicated problems than was possible two decades ago.

This results in more and more challenging bilevel problems that researchers try to solve today. In this talk I will give an overview of one of these more challenging classes: bilevel optimization under uncertainty. We will discuss classical ways of addressing uncertainties in bilevel optimization using stochastic or robust techniques. Moreover, the sources of uncertainty in bilevel optimization can be much richer than for usual single-level problems, since not only the problem's data but also the (observation of the) decisions of the two players can be subject to uncertainty. Thus, we will also discuss bilevel optimization under limited observability, the area of problems considering only near-optimal decisions, and intermediate solution concepts between the optimistic and pessimistic cases.

The talk is based on the article by Yasmine Beck, Ivana Ljubić, and Martin Schmidt, A Survey on Bilevel Optimization Under Uncertainty: https://optimization-online.org/2022/06/8963/
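For orientation, a generic bilevel problem (schematic notation, not taken from the survey) reads

\[
\min_{x\in X,\;y}\ F(x,y)
\quad\text{s.t.}\quad
y \in \arg\min_{y'}\ \{\, f(x,y') : g(x,y') \le 0 \,\},
\]

where the leader chooses \(x\) anticipating an optimal response \(y\) of the follower. Uncertainty may enter through the data \(F\), \(f\), \(g\), or through the players' (possibly imperfect) observation of each other's decisions; and when the follower's argmin is not a singleton, the optimistic and pessimistic cases differ in which response the leader may assume.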

Sagastizábal Claudia Alejandra

    Some splitting methods from a nonsmooth optimizer's perspective

For large-scale optimization, popular approaches such as the ADMM and the Progressive Hedging algorithm exploit separable structures by solving in parallel individual sub-problems, which are then coordinated by performing a simple algebraic step (typically, a projection onto a linear subspace).

While parallelism is the strength of all Douglas-Rachford-like splitting methods, their weak points are the adjustment of a certain proximal parameter and the lack of a fully implementable stopping test. These difficulties, which date back to Spingarn's method of partial inverses, stem from the very design of these approaches, which were created to find zeroes of operators.

We discuss how to endow some splitting methods with an optimization perspective that introduces knowledge about the objective function throughout the iterative process. The resulting new family of methods à la bundle incorporates a projective step to coordinate parallel information while allowing the variation of the proximal parameters and the construction of a reliable stopping test. A Bundle Progressive Hedging algorithm, derived from the general theory, illustrates the interest of the proposal. Credit to co-authors will be given during the talk.
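To fix ideas, the ADMM mentioned above, in its standard scaled form (generic notation, given for orientation only), addresses \(\min_{x,z}\ f(x)+g(z)\) subject to \(Ax+Bz=c\) by alternating

\[
\begin{aligned}
x^{k+1} &= \arg\min_x\ f(x) + \tfrac{\rho}{2}\,\|Ax + Bz^k - c + u^k\|_2^2,\\
z^{k+1} &= \arg\min_z\ g(z) + \tfrac{\rho}{2}\,\|Ax^{k+1} + Bz - c + u^k\|_2^2,\\
u^{k+1} &= u^k + Ax^{k+1} + Bz^{k+1} - c,
\end{aligned}
\]

where \(\rho>0\) is precisely the proximal parameter whose adjustment the abstract identifies as a weak point. The \(x\)-update decomposes into independent sub-problems when \(f\) is block-separable (this is where parallelism enters in Progressive Hedging-type schemes), while the multiplier update \(u^{k+1}\) plays the role of the simple algebraic coordination step.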
