PGMODays

    Tuesday 28 November & Wednesday 29 November 2023

    EDF Lab Paris-Saclay, Palaiseau

    How to get there

    PGMODAYS BOOKLET OF ABSTRACTS

    Download Booklet

    PGMODAYS PROGRAM

    The program is online

    Program

    INVITED SPEAKERS

    DROBINSKI Philippe

    CNRS, École Polytechnique, Institut Pierre Simon Laplace

    GOEMANS Michel

    Massachusetts Institute of Technology

    GRIGORI Laura

    École Polytechnique Fédérale de Lausanne and Paul Scherrer Institut, Switzerland

    PALAGI Laura

    Università di Roma La Sapienza

    DROBINSKI Philippe

    Integration of climate variability and climate change in renewable energy planning

    The trajectory outlined in the Paris Agreement to keep global warming below 2 °C dictates not only the timing but also the speed at which our energy system must be transformed to decarbonize energy production. Complying with the Paris Agreement requires reducing the carbon content of energy by about 75%, and therefore a rapid transition from fossil-based production to production based on low-carbon technologies, among them renewable energies. The variability of the climate itself induces fluctuating or even intermittent production of variable renewable energy (solar, wind, marine), challenging the balance of the electricity grid. In this context, to speak of energy transition is to face the problem of increasing the penetration of low-carbon energy production while limiting its variability, so as to ensure socio-technical feasibility and economic viability. The problem is not simple, and the delicate balance between urgency (drastically reducing emissions) and utopia (choosing a strategy for low-carbon energies and analyzing its opportunities and obstacles) needs to be clearly defined.

    GOEMANS Michel

    Submodular Functions on Modular Lattices

    Submodular set functions are central to discrete optimization and are ubiquitous in many areas, including machine learning. To some extent, submodular functions can be viewed as the analog of convex functions in the continuous setting. In this talk, I will first introduce and present basic properties of submodular functions and algorithms for associated optimization problems. I will then discuss extensions to the much less studied setting of submodular functions on lattices, especially modular lattices. This perspective will allow us to derive a surprising (and easy) result, connecting two problems whose solutions are seemingly unrelated.
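
    As background for readers new to the topic (this definition is standard and is not part of the abstract itself): a set function f on the subsets of a finite ground set V is submodular if

        f(A) + f(B) \ge f(A \cup B) + f(A \cap B) \qquad \text{for all } A, B \subseteq V,

    or, equivalently, if it has the diminishing-returns property

        f(A \cup \{e\}) - f(A) \ge f(B \cup \{e\}) - f(B) \qquad \text{for all } A \subseteq B \subseteq V,\ e \in V \setminus B.

    On a general lattice, the first inequality is stated with join and meet in place of union and intersection, which is the setting of the talk.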

    GRIGORI Laura

    Randomization techniques for solving large scale linear algebra problems

    In this talk we discuss recent progress in using randomization for solving large scale linear problems. We present a randomized version of the Gram-Schmidt process for orthogonalizing a set of vectors and its usage in the Arnoldi iteration. This leads to introducing new Krylov subspace methods for solving large scale linear systems of equations and eigenvalue problems. The new methods retain the numerical stability of classic Krylov methods while reducing communication and being more efficient on modern massively parallel computers.
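
    To make the sketching idea concrete, here is a minimal toy illustration in Python (an assumption-laden sketch, not the algorithm presented in the talk; the function name and parameters are illustrative). The columns of W are orthogonalized with respect to the inner product induced by a random Gaussian sketch Theta, so projection coefficients and norms are computed in a small k-dimensional space:

        import numpy as np

        def randomized_gram_schmidt(W, k, seed=0):
            # Toy sketched Gram-Schmidt: orthogonalize the columns of W with
            # respect to <x, y> = (Theta x) . (Theta y) for a k x n Gaussian
            # sketch Theta with k << n.  Illustrative only.
            rng = np.random.default_rng(seed)
            n, m = W.shape
            Theta = rng.standard_normal((k, n)) / np.sqrt(k)
            Q = np.zeros((n, m))   # well-conditioned basis (not orthonormal)
            S = np.zeros((k, m))   # S = Theta @ Q, maintained incrementally
            R = np.zeros((m, m))
            for i in range(m):
                w = W[:, i]
                s = Theta @ w              # sketch the incoming vector
                r = S[:, :i].T @ s         # coefficients from the sketch space
                q = w - Q[:, :i] @ r       # one high-dimensional update
                sq = Theta @ q
                nrm = np.linalg.norm(sq)   # sketched norm for normalization
                Q[:, i], S[:, i] = q / nrm, sq / nrm
                R[:i, i], R[i, i] = r, nrm
            return Q, R

        W = np.random.default_rng(1).standard_normal((5000, 30))
        Q, R = randomized_gram_schmidt(W, k=300)
        print(np.linalg.norm(W - Q @ R))   # factorization exact up to rounding
        print(np.linalg.cond(Q))           # typically close to 1

    Here Theta @ Q is nearly orthonormal while Q itself is only well conditioned, which is the property the randomized Arnoldi and Krylov variants mentioned above exploit.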

    PALAGI Laura

    Ease-controlled Random Reshuffling Gradient Algorithms for nonconvex finite sum optimization

    We consider minimizing the sum of a large number of smooth, non-convex functions, a problem typically encountered when training deep neural networks on huge datasets.
    We propose ease-controlled modifications of the traditional online gradient schemes, either incremental gradient (IG) or random reshuffling (RR) methods, which converge to stationary points under weak and basic assumptions. Indeed, besides the compactness of level sets, we require only Lipschitz continuity of the gradients of the component functions.
    The algorithmic schemes control the IG/RR iteration by using a watchdog rule and a derivative-free line search that activates only sporadically to guarantee convergence. The schemes also allow controlling the update of the learning rate used in the main IG/RR iteration, avoiding preset rules and thus overcoming another tricky aspect of implementing online methods. We also propose a variant that further reduces the number of objective function evaluations.
    We performed extensive computational tests using different deep network architectures and a benchmark of large datasets of varying sizes, and we compared performance with state-of-the-art optimization methods for ML. The tests show that the schemes are efficient, in particular when dealing with ultra-deep architectures.
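
    For context, the following Python sketch implements only the plain random reshuffling baseline that the proposed schemes build on; the ease-controlled ingredients (watchdog rule, derivative-free line search, controlled learning-rate updates) are deliberately omitted, and all names are illustrative:

        import numpy as np

        def random_reshuffling(component_grads, x0, lr=0.1, epochs=100, seed=0):
            # Plain RR: one pass per epoch over a freshly shuffled order of
            # the component gradients; IG would use a fixed order instead.
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            for _ in range(epochs):
                for i in rng.permutation(len(component_grads)):
                    x = x - lr * component_grads[i](x)
            return x

        # Toy finite sum: f(x) = sum_i 0.5 * (x - c_i)^2, minimized at mean(c).
        c = [1.0, 2.0, 3.0, 4.0]
        grads = [lambda x, ci=ci: x - ci for ci in c]
        print(random_reshuffling(grads, x0=0.0))  # hovers near the minimizer 2.5

    With a fixed learning rate the iterates only reach a neighborhood of a stationary point; controlling that trade-off without preset step-size rules is precisely what the ease-controlled modifications address.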

    Laura Palagi, laura.palagi@uniroma1.it
    Department of Computer, Control and Management Engineering,
    Sapienza University of Rome, Italy

    Joint work with
    Corrado Coppola, corrado.coppola@uniroma1.it
    Giampaolo Liuzzi, giampaolo.liuzzi@uniroma1.it
    Ruggiero Seccia, ruggiero.seccia@uniroma1.it
