What does over-parametrisation risk in continual learning?


In continual learning, over-parametrisation can increase the risk of catastrophic forgetting: the model's tendency to lose previously learned information when it is adapted to new data or tasks. Larger models may forget more severely, as they struggle to balance retaining essential knowledge with incorporating new information.
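The effect can be seen even in a deliberately tiny setting. The sketch below is purely illustrative (the one-parameter model, targets, and learning rate are assumptions, not from the source): a model trained to convergence on task A, then fine-tuned naively on a conflicting task B, ends up with a large error on task A again.

```python
# Minimal illustrative sketch of catastrophic forgetting (assumed toy setup):
# a single-parameter model trained sequentially on two conflicting tasks.

def train(w, target, steps=100, lr=0.1):
    """Gradient descent on the squared error (w - target)**2."""
    for _ in range(steps):
        w -= lr * 2 * (w - target)
    return w

w = 0.0
w = train(w, target=1.0)           # task A: w converges to ~1.0
loss_a_before = (w - 1.0) ** 2     # near zero after task A training

w = train(w, target=-1.0)          # task B: w converges to ~-1.0
loss_a_after = (w - 1.0) ** 2      # task A performance is destroyed (~4.0)

print(round(loss_a_before, 6), round(loss_a_after, 6))  # → 0.0 4.0
```

Because the two tasks demand incompatible parameter values, naive sequential training simply overwrites the earlier solution; nothing in the objective rewards remembering task A.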

Naive sequential training, in which the model is simply fine-tuned on each new task, therefore tends to overwrite previously learned signals. This motivates strategies that explicitly preserve or rehearse important information, such as replaying stored examples or protecting important parameters, while managing the additional computational and memory costs such techniques incur[1].
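One such strategy is rehearsal. The sketch below extends the same assumed one-parameter toy setting (not a method from the source): while training on task B, stored task-A examples are replayed so the model minimises a joint objective rather than overwriting task A entirely. The memory cost here is storing the replayed target.

```python
# Illustrative rehearsal sketch (assumed toy setup, not the source's method):
# gradient descent on the mean of the new-task and replayed-task losses.

def train_with_replay(w, new_target, replay_target, steps=200, lr=0.1):
    """Gradient descent on 0.5*((w - new)**2 + (w - replay)**2)."""
    for _ in range(steps):
        grad = (w - new_target) + (w - replay_target)
        w -= lr * grad
    return w

w = 1.0  # parameter after learning task A (target 1.0)
w = train_with_replay(w, new_target=-1.0, replay_target=1.0)

loss_a = (w - 1.0) ** 2  # degraded but preserved (naive training gives ~4.0)
loss_b = (w + 1.0) ** 2
print(round(loss_a, 3), round(loss_b, 3))  # → 1.0 1.0
```

With conflicting tasks the joint optimum is a compromise (here w ≈ 0), so some task-A error remains, but the earlier signal is no longer wiped out; this trade between retention and plasticity, and the cost of storing replay data, is exactly what practical continual learning methods must manage.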