Jonas Hübotter

Doctoral Researcher at ETH Zurich. I work on Active Fine-Tuning and Local Learning.


I am a doctoral researcher in the Learning and Adaptive Systems Group at ETH Zurich, working with Andreas Krause. Prior to this, I obtained a Master’s degree in Theoretical Computer Science and Machine Learning from ETH Zurich and a Bachelor’s degree in Computer Science and Mathematics from the Technical University of Munich. As an intern at Citadel Securities, I worked with Guillaume Basse and Sören Künzel on time-series prediction. I am a recipient of the ETH Medal.

My research aims to improve the performance of foundation models by applying tools from active learning to few-shot learning, active inference, and adaptive computation. Beyond this, I have broad interests including (approximate) probabilistic inference, optimization, and online learning.

Always feel free to reach out to me with things you find exciting.

Contact: jhuebotter@ethz.ch | Google Scholar | GitHub | LinkedIn

Announcements

Sep, 2024 Our work Transductive Active Learning: Theory and Applications was accepted at NeurIPS 2024!
Jun, 2024 Our work on Transductive Active Learning with Application to Safe Bayesian Optimization was accepted as an oral presentation (top 5%) at the Workshop on Aligning Reinforcement Learning Experimentalists and Theorists at ICML 2024!
Mar, 2024 Our work on Active Few-Shot Fine-Tuning was accepted at the Workshop on Bridging the Gap Between Practice and Theory in Deep Learning at ICLR 2024!

Selected Publications

  1. Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs
    Jonas Hübotter, Sascha Bongni, Ido Hakimi, and 1 more author
    arXiv preprint arXiv:2410.08020, 2024
  2. Active Fine-Tuning of Generalist Policies
    Marco Bagatella, Jonas Hübotter, Georg Martius, and 1 more author
    arXiv preprint arXiv:2410.05026, 2024
  3. NeurIPS Oral
    Transductive Active Learning: Theory and Applications
    Jonas Hübotter, Bhavya Sukhija, Lenart Treven, and 2 more authors
    In NeurIPS, 2024
    Oral Presentation at the ICML Workshop on Aligning Reinforcement Learning Experimentalists and Theorists, 2024.

Talks

  • Invited Talk
    Machine Learning and Modelling Seminar, Charles University Prague, Nov 2024.
    Transductive Active Learning for Fine-Tuning Large (Language) Models.
  • Contributed Talk
    ICML Workshop on Aligning Reinforcement Learning Experimentalists and Theorists, Vienna, Jul 2024.
    Transductive Active Learning with Application to Safe Bayesian Optimization, slides.

Supervision

I have had the privilege of advising several BSc and MSc students during their theses and semester projects.

  • Nicolas Menet: Efficiently Estimating Gaussian Probability of Maximality (with Parnian Kassraie)
  • Sascha Bongni: Active Fine-Tuning of Large Language Models
  • Pablo Lahmann: Safe Control as Inference (with Yarden As)
  • Anh Duc Nguyen: Safe Bayesian Optimization without Regret

You can find a list of potential projects here. If you are interested in working with me, please reach out.

Fun Projects

ActiveFT: A PyTorch Library for Active Fine-Tuning

Efficiently fine-tune large neural networks through intelligent active data selection.

SOCO: A Rust Library for Smoothed Online Convex Optimization

Algorithms for online convex optimization with an associated cost for movement in the decision space. Useful for resource allocation, contextual sequence prediction, portfolio management, and object tracking.
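To illustrate the setting SOCO addresses: in smoothed online convex optimization, each round incurs both a hitting cost for the chosen point and a cost for moving in the decision space. Below is a minimal, hypothetical Python sketch (not SOCO's Rust API) of a simple memoryless strategy on a one-dimensional grid; the function name and movement penalty are illustrative assumptions.

```python
def smoothed_greedy(costs, beta=1.0, x0=0.0, grid=None):
    """One-dimensional sketch of a memoryless strategy for smoothed
    online convex optimization: each round, pick the point minimizing
    the current hitting cost plus a quadratic movement penalty.
    (Illustrative only; not the algorithms implemented in SOCO.)"""
    if grid is None:
        grid = [i / 100 for i in range(-500, 501)]  # candidate decisions in [-5, 5]
    x_prev, decisions, total_cost = x0, [], 0.0
    for f in costs:
        # x_t = argmin_x f_t(x) + beta * (x - x_{t-1})^2, searched over the grid
        x_t = min(grid, key=lambda x: f(x) + beta * (x - x_prev) ** 2)
        total_cost += f(x_t) + abs(x_t - x_prev)  # hitting cost + movement cost
        decisions.append(x_t)
        x_prev = x_t
    return decisions, total_cost

# Example: hitting costs whose minimizers jump between +1 and -1;
# the movement penalty keeps the decisions from fully chasing them.
costs = [lambda x, c=c: (x - c) ** 2 for c in (1.0, -1.0)]
decisions, total_cost = smoothed_greedy(costs, beta=1.0)
```

The damped decisions (0.5, then -0.25, rather than jumping to the minimizers 1.0 and -1.0) show the trade-off between tracking the current cost and limiting movement, which is what makes this setting useful for resource allocation and tracking problems.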

Solutions to Algorithms Lab 2021

Solutions to a wide variety of competitive-programming-style problems.

Plaain

A serverless web app to organize and stream media from anywhere.

bifolia

Website of a landscape architecture firm.