ICML 2026 Workshop

Continual Adaptation at Scale:
Towards Sustainable AI

Hybrid Event · Date TBA

About

Training foundation models is currently so costly that only a few organizations can afford it. The immense data, compute, and energy demands are increasingly unsustainable. Continual adaptation offers a viable alternative, in which AI models learn quickly and continually through everyday interactions, much like humans and animals.

Unfortunately, foundation models lack this rapid adaptability: new behavior can be induced by prompting or fine-tuning, but there is no easy way to quickly shape their behavior, for instance to permanently add, remove, or modify their skill set in a sustainable way. This workshop aims to discuss new research directions that will enable fast continual adaptation at scale and drive more sustainable AI.

Key Research Directions

Invited Speakers

Razvan Pascanu

Google DeepMind / MILA

Sara Hooker

Adaption Labs

Bing Liu

University of Illinois at Chicago

Stephanie Chan

Google DeepMind

Colin Raffel

University of Toronto / Vector Institute / Hugging Face

Jaehong Yoon

Nanyang Technological University

Schedule

All times are local to the venue.

Time Type Activity
08:00 – 08:10 Opening Opening Remarks
08:10 – 08:50 Invited Talk Razvan Pascanu
08:50 – 09:35 Oral Session Oral Session 1
09:35 – 10:15 Invited Talk Colin Raffel
10:15 – 11:00 Break Poster Session 1 & Coffee
11:00 – 11:40 Invited Talk Sara Hooker
11:40 – 12:20 Invited Talk Jaehong Yoon
12:20 – 13:30 Break Lunch Break
13:30 – 14:10 Invited Talk Bing Liu
14:10 – 14:55 Oral Session Oral Session 2
14:55 – 15:40 Break Poster Session 2 & Coffee
15:40 – 16:20 Invited Talk Stephanie Chan
16:20 – 16:30 Awards Best Paper Awards
16:30 – 17:00 Panel Speaker Panel

Call for Papers

We invite submissions on the following topics:

Important Dates

Submission Guidelines

Submitted papers consist of a main body of up to 4 pages, followed by unlimited pages for references and an appendix, all in a single file. For formatting details, please see the LaTeX style files (link coming soon). All submissions must be made via OpenReview (link coming soon) and must be anonymized; non-anonymized submissions will be automatically rejected, as will any submission whose main body exceeds the 4-page limit.

Organizers

Ghada Sokar

Google DeepMind

Gintare Karolina Dziugaite

Google DeepMind

Emtiyaz Khan

RIKEN-AIP

Rupam Mahmood

University of Alberta

Martin Mundt

University of Bremen

Daniel Marczak

Warsaw University of Technology