Luc Pommeret

Profile picture of Luc Pommeret, graduate student in Machine Learning and Logic

Research Engineer | Machine Learning and Logic | LISN

Contact: Email address

Professional Experience

LISN (Interdisciplinary Laboratory of Digital Sciences)

Research Engineer

under the supervision of Sophie Rosset and Thomas Gerald

  • Research on the use of atomic propositions in natural language processing (NLP)
  • Development of evaluation tools for atomic proposition-based systems
  • Improvement of information retrieval through proposition atomization

Previously at LISN:

Final year internship | April 2025 - October 2025 (6 months)

under the supervision of Sophie Rosset, Sahar Ghannay and Christophe Servan

  • Evaluation and improvement of a retrieval-augmented generation (RAG) dialogue and question-answering system using open-source LLMs
  • Implementation of an atomization step to improve retrieval
  • State-of-the-art review of atomic propositions in NLP and training of a propositioner

LISN Internship Report

IRIF (Institute for Research in Fundamental Computer Science)

Research Assistant Intern | April 2024 - September 2024

under the supervision of Michel de Rougemont

  • Study of emergent capabilities in large language models (LLMs)
  • Study of the impact of noise in training data on LLM performance
  • Study of the internal representations of transformers that play chess and tic-tac-toe, using probing techniques and sparse autoencoders (SAEs)

IRIF Internship Report

Publications & Talks

JDSE 2025 - AtomicEval: Evaluation Framework for Atomic Proposition Autonomy with French Propositioner

September 25-26, 2025 — Université Paris-Saclay

Luc Pommeret, Sophie Rosset, Christophe Servan, Sahar Ghannay.

[HAL] [PDF]

PFIA 2024 - Exploring Emergent Skills with Chess-GPT

July 5, 2024 — La Rochelle, France

In this work, I explore the emergent capabilities of large language models (LLMs) applied to chess, focusing on a model called ChessGPT. I examine three main properties: move legality, puzzle solving, and chessboard position prediction. My experiments introduce noise into the training data to observe its impact on the model's performance. The results suggest that adding noise can improve certain capabilities of the model, particularly move legality. This work also raises interesting questions about the model's internal representation of the game and opens avenues for future research, notably the study of the Platonic representation hypothesis in this context.