| 📑 Paper | 🐱 Github Repo |
Shengchao Hu<sup>1,2</sup>, Yuhang Zhou<sup>1</sup>, Ziqing Fan<sup>1,2</sup>, Jifeng Hu<sup>3</sup>, Li Shen<sup>4</sup>, Ya Zhang<sup>1,2</sup>, Dacheng Tao<sup>5</sup>
<sup>1</sup> Shanghai Jiao Tong University, <sup>2</sup> Shanghai AI Laboratory, <sup>3</sup> Jilin University, <sup>4</sup> Sun Yat-sen University, <sup>5</sup> Nanyang Technological University.
Training a generalizable agent to continually learn a sequence of tasks from offline trajectories is a natural requirement for long-lived agents, yet remains a significant challenge for current offline reinforcement learning (RL) algorithms. Specifically, an agent must be able to rapidly adapt to new tasks using newly collected trajectories (plasticity), while retaining knowledge from previously learned tasks (stability). However, systematic analyses of this setting are scarce, and it remains unclear whether conventional continual learning (CL) methods are effective in continual offline RL (CORL) scenarios.
In this study, we develop the Offline Continual World benchmark and demonstrate that traditional CL methods struggle with catastrophic forgetting, primarily due to the unique distribution shifts inherent to CORL scenarios. To address this challenge, we introduce CompoFormer, a structure-based continual transformer model that adaptively composes previous policies via a meta-policy network. Upon encountering a new task, CompoFormer leverages semantic correlations to selectively integrate relevant prior policies alongside newly trained parameters, thereby enhancing knowledge sharing and accelerating the learning process. Our experiments reveal that CompoFormer outperforms conventional CL methods, particularly in longer task sequences, showcasing a promising balance between plasticity and stability.
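To make the composition idea above concrete, here is a minimal, hypothetical sketch of the core mechanism: given a new task, attention weights derived from task-embedding similarity select how much each frozen prior policy contributes, and the result is blended with the newly trained policy. All names, the similarity measure, and the fixed 50/50 blend are illustrative assumptions, not the actual CompoFormer implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

class PolicyCompositionSketch:
    """Hypothetical sketch of adaptive policy self-composition:
    attention over frozen prior policies, weighted by how similar
    their task embeddings are to the new task's embedding."""

    def __init__(self, prior_policies, prior_task_embs):
        self.prior_policies = prior_policies            # list of callables: obs -> action
        self.prior_task_embs = np.stack(prior_task_embs)  # shape (K, d)

    def act(self, obs, new_task_emb, new_policy):
        # Semantic correlation between the new task and each learned task.
        sims = self.prior_task_embs @ new_task_emb      # shape (K,)
        weights = softmax(sims)                         # attention over prior policies
        prior_action = sum(w * p(obs) for w, p in zip(weights, self.prior_policies))
        # Blend composed prior knowledge with the freshly trained parameters
        # (equal mixing here is an assumption for illustration).
        return 0.5 * prior_action + 0.5 * new_policy(obs)
```

Because the prior policies stay frozen and only the attention and new parameters are learned, relevant past knowledge is reused (stability) while the new head adapts to the new task (plasticity).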
Download the MT50 dataset via this Google Drive link.
Once your environment is ready, you can run the following script with the corresponding dataset folder:
```bash
bash runs/cw10.sh  # cw10
```

If you find this work relevant to your research or applications, please feel free to cite our work!
```bibtex
@inproceedings{CompoFormer,
  title={Continual Task Learning through Adaptive Policy Self-Composition},
  author={Hu, Shengchao and Zhou, Yuhang and Fan, Ziqing and Hu, Jifeng and Shen, Li and Zhang, Ya and Tao, Dacheng},
  year={2024},
}
```
This repo benefits from DT and FACIL. Thanks for their wonderful work!

