Announcements

Mar, 2026 ICLR 2026 Workshops: We will present our work on self-distillation [1, 2] with oral presentations at the SPOT workshop, LLA workshop, and TTU workshop, and as a spotlight at the RSI workshop. We will also present majority voting for code generation at the TTU workshop.
Feb, 2026 We release three papers on self-distillation: [1] on self-distillation from demonstrations enabling continual learning, [2] on reinforcement learning via self-distillation, and [3] on online self-distillation from raw user interactions. Read more.
Jan, 2026 ICLR 2026: Specialization after Generalization: Towards Understanding Test-Time Training in Foundation Models has been accepted!
Nov, 2025 Gave an invited lecture on test-time training at EPFL. Slides are here.
Sep, 2025 NeurIPS 2025: DISCOVER: Automated Curricula for Sparse-Reward Reinforcement Learning has been accepted! We will also present our work towards understanding the effectiveness of test-time training in foundation models with an oral presentation at the CCFM workshop.
Jul, 2025 COLM 2025: Local Mixtures of Experts: Essentially Free Test-Time Training via Model Merging has been accepted! We will also present our work on test-time scaling via prefix-confidence at the SCALR workshop.
May, 2025 ICML 2025: Active Fine-Tuning of Multi-Task Policies has been accepted! We will also present our work on test-time offline RL at the PUT workshop and our work on curricula for sparse-reward RL at the EXAIT workshop.
Feb, 2025 Very excited to share notes on Probabilistic AI that I have been writing with Andreas Krause!
Jan, 2025 ICLR 2025: Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs has been accepted!
Jan, 2025 AISTATS 2025: LITE: Efficiently Estimating Gaussian Probability of Maximality has been accepted!
Oct, 2024 NeurIPS 2024: Our work Transductive Active Learning: Theory and Applications has been accepted! We will also present our work on efficiently learning at test-time with LLMs with an oral presentation at the FITML workshop.
Jun, 2024 ICML 2024: Our work on Transductive Active Learning with Application to Safe Bayesian Optimization has been accepted as an oral presentation (top 5%) at the ARLET workshop.
Mar, 2024 ICLR 2024: Our work on Active Few-Shot Fine-Tuning has been accepted at the BGPT workshop!
Feb, 2024 I received the ETH Medal for my Master’s thesis on transductive active learning 🎉! Big thanks to my incredible collaborators Bhavya Sukhija, Lenart Treven, Yarden As, and Andreas Krause.