Jonas Hübotter

Doctoral Researcher at ETH Zurich. I work on Local Learning and Active Fine-Tuning.


I am a doctoral researcher in the Learning and Adaptive Systems Group at ETH Zurich, working with Andreas Krause. Prior to this, I obtained a Master’s degree in Theoretical Computer Science and Machine Learning from ETH Zurich and a Bachelor’s degree in Computer Science and Mathematics from the Technical University of Munich. Previously, as an intern at Citadel Securities, I worked with Guillaume Basse and Sören Künzel on time-series prediction. I am a recipient of the ETH Medal.

My research aims to improve the performance of foundation models by using tools from active learning for few-shot learning, active inference, and adaptive computation. Beyond this, I have broad interests, including (approximate) probabilistic inference, optimization, and online learning.

Always feel free to reach out to me with things you find exciting.

Contact: jhuebotter@ethz.ch · Google Scholar · GitHub · LinkedIn

Announcements

Feb, 2025 Very excited to share notes on Probabilistic AI that I have been writing with Andreas Krause!
Jan, 2025 ICLR 2025: Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs has been accepted!
Oct, 2024 NeurIPS 2024: Our work Transductive Active Learning: Theory and Applications was accepted! We will also present our work on efficiently learning at test-time with LLMs with an oral presentation at the Fine-Tuning in Modern ML workshop.
Jun, 2024 ICML 2024: Our work on Transductive Active Learning with Application to Safe Bayesian Optimization was accepted as an oral presentation (top 5%) at the Aligning RL Experimentalists and Theorists workshop.
Mar, 2024 ICLR 2024: Our work on Active Few-Shot Fine-Tuning was accepted at the Bridging the Gap Between Practice and Theory in Deep Learning workshop!

Selected Publications

  1. ICLR 2025 Best Paper
    Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs
    Jonas Hübotter, Sascha Bongni, Ido Hakimi, and 1 more author
    In International Conference on Learning Representations, 2025
    Best Paper Award at NeurIPS Workshop on Fine-Tuning in Modern Machine Learning, 2024.
  2. NeurIPS 2024 Oral
    Transductive Active Learning: Theory and Applications
    Jonas Hübotter, Bhavya Sukhija, Lenart Treven, and 2 more authors
    In Advances in Neural Information Processing Systems, 2024
    Oral Presentation at ICML Workshop on Aligning Reinforcement Learning Experimentalists and Theorists, 2024.

Talks

  • Efficiently Learning at Test-Time with LLMs via Transductive Active Learning
    Invited Talk, Trillion Parameter Consortium (TPC) Seminar Series, 5 Mar 2025.
  • Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs (recording, slides)
    Contributed Talk, NeurIPS Workshop on Fine-Tuning in Modern Machine Learning, Vancouver, 14 Dec 2024.
  • Interview with Machine Learning Street Talk (MLST) podcast, Nov 2024.
  • Transductive Active Learning for Fine-Tuning Large (Language) Models (slides)
    Invited Talk, Machine Learning and Modelling Seminar, Czech Academy of Sciences, Prague, 21 Nov 2024.
  • Efficiently Learning at Test-Time with LLMs (recording, slides)
    Invited Talk, Zurich AI Meetup, Zurich, 3 Dec 2024.
    Invited Talk, Tufa Labs AI Meetup, Zurich, 29 Oct 2024.
  • Transductive Active Learning with Application to Safe Bayesian Optimization (recording, slides)
    Contributed Talk, ICML Workshop on Aligning Reinforcement Learning Experimentalists and Theorists, Vienna, 26 Jul 2024.
  • Active Fine-Tuning of Large Neural Networks (slides)
    Contributed Talk, Machine Learning Seminar, ETH Zurich, 18 Apr 2024.

Supervision

I have had the privilege of advising several BSc and MSc students during their theses and semester projects. Some of these projects have led to publications.

  • Nicolas Menet: Efficiently Estimating Gaussian Probability of Maximality (with Parnian Kassraie, AISTATS 2025)
  • Sascha Bongni: Active Fine-Tuning of Large Language Models (ICLR 2025)
  • Pablo Lahmann: Safe Control as Inference (with Yarden As)
  • Anh Duc Nguyen: Safe Bayesian Optimization without Regret

You can find a list of our research group's potential projects here. If you want to work with me, please send me an email describing your area of interest. Please also attach your CV and up-to-date transcripts.