Deep learning, powered by overparameterised Deep Neural Networks (DNNs), has seen a surge of interest in recent years. Although these networks are often pruned to a fraction of their original size after training, the Lottery Ticket Hypothesis (LTH) suggests that they already contain sparser subnetworks that are just as trainable. This paper presents a new evolutionary algorithm, Neuroevolutionary Ticket Search (NeTS), which finds such efficient subnetworks in feed-forward and convolutional DNN architectures. Tested on common benchmark datasets, NeTS prunes DNNs before significant gradient-descent training takes place, yielding notable performance benefits.
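The abstract describes NeTS only at a high level: an evolutionary search for sparse, trainable subnetworks carried out before full training. As a rough illustration of that general idea, the sketch below evolves binary pruning masks over a fixed initialisation, scoring each mask by accuracy after a short burst of SGD. Everything here, the short-SGD fitness proxy, the `mutate` operator, and the population settings, is an illustrative assumption and not the paper's actual NeTS implementation.

```python
# Hypothetical sketch of an evolutionary "ticket search" over pruning masks.
# NOT the authors' NeTS; the fitness proxy and operators are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic binary-classification task.
X = rng.normal(size=(256, 20))
y = (X[:, :2].sum(axis=1) > 0).astype(float)

# Fixed initialisation of a one-hidden-layer MLP (the "dense" network).
W1 = rng.normal(scale=0.3, size=(20, 32))
W2 = rng.normal(scale=0.3, size=(32, 1))
init = (W1.copy(), W2.copy())

def forward(w1, w2, x):
    h = np.maximum(x @ w1, 0.0)             # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ w2)))  # sigmoid output

def fitness(mask, steps=30, lr=0.1):
    """Proxy fitness: accuracy after a short burst of SGD on masked weights."""
    w1, w2 = init[0] * mask[0], init[1] * mask[1]
    for _ in range(steps):
        h = np.maximum(X @ w1, 0.0)
        p = 1.0 / (1.0 + np.exp(-(h @ w2)))
        g_out = (p - y[:, None]) / len(X)    # dLoss/dlogit for BCE
        g_w2 = h.T @ g_out
        g_h = g_out @ w2.T
        g_h[h <= 0] = 0.0                    # ReLU backward
        g_w1 = X.T @ g_h
        w1 -= lr * g_w1 * mask[0]            # keep pruned weights at zero
        w2 -= lr * g_w2 * mask[1]
    preds = forward(w1, w2, X)[:, 0] > 0.5
    return (preds == y.astype(bool)).mean()

def random_mask(shape, sparsity):
    """Binary mask keeping each weight with probability (1 - sparsity)."""
    return (rng.random(shape) > sparsity).astype(float)

def mutate(mask, rate=0.02):
    """Flip a small fraction of bits in each layer's mask."""
    return tuple(np.where(rng.random(m.shape) < rate, 1.0 - m, m) for m in mask)

# Evolutionary loop: elitist selection over a population of masks.
sparsity, pop_size, gens = 0.8, 12, 15
pop = [tuple(random_mask(w.shape, sparsity) for w in init) for _ in range(pop_size)]
for _ in range(gens):
    scored = sorted(pop, key=fitness, reverse=True)
    elites = scored[: pop_size // 4]
    pop = elites + [mutate(elites[i % len(elites)]) for i in range(pop_size - len(elites))]

best = max(pop, key=fitness)
kept = sum(m.sum() for m in best) / sum(m.size for m in best)
print(f"best-mask accuracy: {fitness(best):.3f}, weights kept: {kept:.0%}")
```

Scoring each mask by a brief SGD burst is one plausible proxy for the "trainability" that a lottery-ticket search needs to measure; the paper's actual fitness function, selection scheme, and variation operators may differ substantially.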
