Abstract
We present a new method for addressing the challenge of continual learning, wherein an agent must adapt to new tasks while maintaining high performance on previously learned tasks. To accomplish this, an agent must identify previously acquired information that generalizes to the new task while also adapting its internal model to learn information that is specific to the new task. Our approach is based on neurogenesis: the targeted addition of new neurons to a previously trained neural network. To our knowledge, we are the first to leverage probabilistic programming within the framework of evolutionary computation to optimize the growth of neural networks for continual learning. Through a series of experiments, we show that our approach consistently finds better-performing solutions than genetic algorithms, and does so faster.