About

  • I am a postdoc in the Scool team at Inria Lille, working under the supervision of Philippe Preux and Bruno Raffin.
  • Currently, I am working on Continuous Time Reinforcement Learning (CTRL) and how to combine it with Scientific Machine Learning (SciML) methods. Compared to Discrete Time Reinforcement Learning (DTRL), CTRL treats time as continuous: the dynamics of the system are expressed as an ODE (Ordinary Differential Equation) for deterministic environments and as an SDE (Stochastic Differential Equation) for stochastic environments. The value function (a measure of the quality of a policy) is then characterized by the Hamilton-Jacobi-Bellman (HJB) equation, a PDE (Partial Differential Equation) that replaces the Bellman equation of discrete time; it is written out after this list. Despite promising results on simple use cases, CTRL methods do not yet match the performance of DTRL algorithms in the general case. However, the emerging SciML trend of combining neural networks with differential equations, such as physics-informed neural networks (PINNs) and Neural ODEs, is bringing new tools to address CTRL.
  • Before that, I did my PhD in Computer Science at the University of Bordeaux and Inria Bordeaux, working on topics at the intersection of High Performance Computing (HPC) and Artificial Intelligence (AI). My main objective was to study how to train deep neural networks efficiently in terms of memory usage. My thesis, Memory Saving Strategies for Deep Neural Network Training, was supervised by Olivier Beaumont and Alexis Joly; a small sketch of one such strategy follows after this list.
  • Research interests: Efficient Training of Neural Networks, Reinforcement Learning, Deep Learning, AI for Science
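
For a concrete sense of what the HJB equation looks like, here is its standard form for a deterministic, infinite-horizon discounted problem (a textbook statement, not specific to my work), where $V$ is the value function, $f$ the dynamics, $r$ the reward, and $\rho > 0$ the discount rate:

$$
\rho\, V(x) = \max_{a} \Big[ r(x, a) + \nabla V(x) \cdot f(x, a) \Big]
$$

Taking the limit of vanishing time steps in the discrete-time Bellman equation $V(s) = \max_a \big[ r(s, a) + \gamma V(s') \big]$ recovers this form.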

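One family of memory saving strategies studied in my thesis is activation checkpointing (also called rematerialization). The sketch below is a minimal, illustrative example using PyTorch's `torch.utils.checkpoint`, not code from the thesis; the toy model, its width, and its depth are made up for the demonstration:

```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

class CheckpointedMLP(nn.Module):
    """Toy MLP whose hidden blocks are recomputed during the backward
    pass instead of keeping their activations in memory."""

    def __init__(self, width: int = 512, depth: int = 8):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(width, width), nn.ReLU())
            for _ in range(depth)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            # checkpoint() frees the block's intermediate activations after
            # the forward pass and recomputes them during backward, trading
            # extra compute for a lower peak memory footprint.
            x = checkpoint(block, x, use_reentrant=False)
        return x

model = CheckpointedMLP()
out = model(torch.randn(32, 512, requires_grad=True))
out.sum().backward()  # activations are recomputed block by block here
```

The interesting question behind such strategies is which activations to keep and which to recompute, so that training fits a given memory budget at minimal extra compute.
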
Check out my CV if you want to learn more.

News
