Open to opportunities

Modeling human behavior to build safer AI

Research scientist with 10+ years bridging cognitive science and machine learning. Currently building multimodal AI systems for social robots. Previously: healthcare ML and affective computing research.

Where cognition meets computation

For over a decade, I've been fascinated by one question: how do we build AI systems that truly understand humans? My journey started with a PhD in affective computing at Sorbonne University, where I developed models to recognize and synthesize social behaviors in virtual agents.

Since then, I've applied this lens to healthcare (predicting patient behavior with interpretable ML at Semeia), human-AI collaboration (modeling rapport at Inria), and now social robotics (building multimodal perception systems at Enchanted Tools).

I believe the path to beneficial AI runs through understanding human cognition—and making AI systems interpretable enough that we can verify they're aligned with our values.

10+
Years in ML Research
12
Publications
3
Research Labs
1
Social Robot

Building at the intersection

Sep 2024 — Jan 2025
AI Researcher – Pathology Detection
e-sensia — Paris

Developed multimodal AI systems for emergency pathology detection from speech and video, combining fine-tuned speech and audio models (Whisper, Wav2Vec2, EnCodec) with raw-signal processing and synchronized visual analysis. Led preparation of competitive innovation funding applications (BPI France Pionnier IA, i-Lab; CIR technical dossiers) and collaborated with clinical partners to validate models on real-world emergency medical data. This short-term engagement concluded due to a mismatch between my seniority and the early-stage startup's organizational structure.

2022 — 2025
Multimodal ML Expert
Enchanted Tools — Paris

Leading ML systems for the Mirokai social robot. Integrating VLMs, LLMs, and speech models for natural human-robot interaction. Building agentic pipelines with safety constraints. Managing a team of ML engineers.

2021 — 2022
Postdoctoral Research Scientist
Inria COML, Justine Cassell's team — Paris

Developed interpretable deep learning models to predict rapport in human interactions from multimodal cues. Built a reusable toolkit for multimodal feature extraction and analysis.

2018 — 2021
Research Scientist
Semeia — Paris

Built predictive models on French National Health Data. Improved medication adherence prediction from 60% to 90%. Implemented SHAP and LIME explanations for clinical interpretability. Published at NeurIPS and MICCAI.

2014 — 2018
PhD Researcher
Sorbonne University / ISIR / Telecom ParisTech

Thesis on multimodal social signal analysis for affective virtual agents. Visiting scholar at USC ICT. Developed computational frameworks for extracting and recognizing social behaviors.

What I work on

🧠

Affective Computing

Understanding and modeling human emotions, social signals, and interpersonal dynamics through computational methods.

🔍

AI Interpretability

Making ML models transparent and explainable, from SHAP/LIME applications in healthcare to mechanistic understanding of neural networks.

🎯

Multimodal Learning

Integrating vision, speech, and language understanding for robust human-AI interaction in embodied systems.

🤖

Social Robotics

Building AI systems that interact safely and naturally with humans, with appropriate social behaviors and safety constraints.

⚖️

AI Alignment

Applying insights from human behavior modeling to build AI systems that remain aligned with human values and intentions.

🏥

Healthcare ML

Predictive models for patient behavior, treatment adherence, and care pathways with clinical interpretability.

Research outputs

View all on Google Scholar →

Let's build something meaningful

Interested in AI safety, human-AI interaction, or interpretability research? I'm always open to conversations about research collaborations or opportunities.