Research scientist with 10+ years bridging cognitive science and machine learning. Currently building multimodal AI systems for social robots. Previously healthcare ML and affective computing research.
For over a decade, I've been fascinated by one question: how do we build AI systems that truly understand humans? My journey started with a PhD in affective computing at Sorbonne University, where I developed models to recognize and synthesize social behaviors in virtual agents.
Since then, I've applied this lens to healthcare (predicting patient behavior with interpretable ML at Semeia), human-AI collaboration (modeling rapport at Inria), and now social robotics (building multimodal perception systems at Enchanted Tools).
I believe the path to beneficial AI runs through understanding human cognition—and making AI systems interpretable enough that we can verify they're aligned with our values.
Developed multimodal AI systems for emergency pathology detection from speech and video, combining fine-tuned audio models (Whisper, Wav2Vec2, EnCodec) with raw signal processing and synchronized visual analysis. Led preparation of competitive innovation funding applications (BPI France Pionnier IA, i-Lab; CIR technical dossiers) and collaborated with clinical partners to validate models on real-world emergency medical data. The engagement was short-term, concluding due to a mismatch with the early-stage startup's organizational maturity.
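To give a concrete flavor of the audio branch, here is a minimal sketch of scoring a speech clip with a Wav2Vec2 classification head; the checkpoint name, two-class label set, and dummy waveform are illustrative assumptions, not the production pipeline.

```python
# Illustrative sketch: one audio building block of a speech pathology
# detector. Checkpoint and label set are assumptions for the example;
# a real system would fine-tune the head on labeled clinical audio.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-base", num_labels=2  # e.g. pathology vs. no pathology
)
model.eval()

waveform = torch.randn(16000)  # placeholder for 1 s of 16 kHz speech
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # per-class probabilities for the clip
```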
Leading ML systems development for the Mirokai social robot. Integrating VLMs, LLMs, and speech models for natural human-robot interaction. Building agentic pipelines with safety constraints. Managing a team of ML engineers.
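A minimal sketch of the safety-constraint idea, assuming a whitelist-style gate between the language-model planner and the robot; the action vocabulary, speed cap, and dispatch stub are hypothetical placeholders, not Mirokai's actual control stack.

```python
# Sketch: every action proposed by the agentic pipeline passes a safety
# gate before reaching the robot. Actions and limits are hypothetical.
from typing import Any

ALLOWED_ACTIONS = {"speak", "wave", "navigate_to", "pick_up"}
MAX_SPEED = 0.5  # m/s cap on navigation commands

def execute_if_safe(action: str, params: dict[str, Any]) -> str:
    if action not in ALLOWED_ACTIONS:
        return f"blocked: '{action}' is outside the allowed action set"
    if action == "navigate_to" and params.get("speed", 0.0) > MAX_SPEED:
        return "blocked: requested speed exceeds the safety cap"
    # ...dispatch to the real robot control stack here...
    return f"executed {action}({params})"

print(execute_if_safe("wave", {}))
print(execute_if_safe("navigate_to", {"target": "kitchen", "speed": 1.2}))
```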
Developed interpretable deep learning models to predict rapport in human interactions from multimodal cues. Built reusable toolkit for multimodal feature extraction and analysis.
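As a toy illustration of the modeling setup, the sketch below fuses pre-extracted audio and facial features into a scalar rapport score; the feature dimensions (88 eGeMAPS-style audio functionals, 68 facial landmarks as 136 coordinates), the late-fusion architecture, and all names are assumptions for the example, not the toolkit's actual design.

```python
# Toy late-fusion rapport predictor over pre-extracted multimodal
# features; dimensions and architecture are illustrative assumptions.
import torch
import torch.nn as nn

class RapportFusion(nn.Module):
    def __init__(self, audio_dim: int = 88, visual_dim: int = 136, hidden: int = 64):
        super().__init__()
        self.audio = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.visual = nn.Sequential(nn.Linear(visual_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 1)  # scalar rapport score

    def forward(self, audio_feats, visual_feats):
        fused = torch.cat([self.audio(audio_feats), self.visual(visual_feats)], dim=-1)
        return self.head(fused)

model = RapportFusion()
scores = model(torch.randn(4, 88), torch.randn(4, 136))  # batch of 4 interaction windows
print(scores.shape)  # torch.Size([4, 1])
```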
Built predictive models on the French national health database. Improved medication adherence prediction from 60% to 90%. Implemented SHAP/LIME for clinical interpretability. Published at NeurIPS and MICCAI.
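The interpretability work followed the standard SHAP pattern of attributing each prediction to individual input features; here is a minimal sketch on synthetic data, where the model, features, and labels are stand-ins for the clinical setup.

```python
# Minimal SHAP sketch: per-patient feature attributions for a tree-based
# adherence classifier. Data and model are synthetic stand-ins.
import shap
import xgboost
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=12, random_state=0)
model = xgboost.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(shap_values[0])  # signed contribution of each feature to one prediction
```

In a clinical context, summaries like shap.summary_plot(shap_values, X) make these attributions reviewable by clinicians rather than only by ML engineers.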
Thesis on multimodal social signal analysis for affective virtual agents. Visiting scholar at USC ICT. Developed computational frameworks for extracting and recognizing social behaviors.
Understanding and modeling human emotions, social signals, and interpersonal dynamics through computational methods.
Making ML models transparent and explainable, from SHAP/LIME applications in healthcare to mechanistic understanding of neural networks.
Integrating vision, speech, and language understanding for robust human-AI interaction in embodied systems.
Building AI systems that interact safely and naturally with humans, with appropriate social behaviors and safety constraints.
Applying insights from human behavior modeling to build AI systems that remain aligned with human values and intentions.
Predictive models for patient behavior, treatment adherence, and care pathways with clinical interpretability.