Weighted Ensemble Models Are Strong Continual Learners

Imad Eddine Marouf, Subhankar Roy, Enzo Tartaglione, Stéphane Lathuilière

Abstract


"In this work, we study the problem of continual learning (CL) where the goal is to learn a model on a sequence of tasks, under the assumption that the data from the previous tasks becomes unavailable while learning on the current task data. CL is essentially a balancing act between learning on the new task (plasticity) and maintaining the performance on the previously learned concepts (stability). To address the stability-plasticity trade-off, we propose to perform weight-ensembling of the model parameters of the previous and current tasks. This weighted-ensembled model, which we call Continual Model Averaging (or CoMA), attains high accuracy on the current task by leveraging plasticity, while not deviating too far from the previous weight configuration, ensuring stability. We also propose an improved variant of CoMA, named Continual Fisher-weighted Model Averaging (or CoFiMA), that selectively weighs each parameter in the weights ensemble by leveraging the Fisher information of the weights of the model. Both variants are conceptually simple, easy to implement, and effective in attaining state-of-the-art performance on several standard CL benchmarks. Code is available at: https://github.com/IemProg/CoFiMA."
