The success of deep learning rests in part on our ability to train models quickly using GPU or TPU hardware accelerators. Markov Brains, which are also a form of neural network, could benefit from such acceleration as well. However, Markov Brains are optimized using genetic algorithms, which place even higher demands on the acceleration hardware: not only do the network's inputs and outputs need to be communicated, but new network configurations also have to be loaded and tested repeatedly in large numbers. FPGAs are a natural substrate on which to implement Markov Brains, which are already composed of deterministic logic gates. Here, a Markov Brain hardware accelerator is implemented and tested, showing that a Markov Brain can be computed within a single clock cycle, the ultimate hardware acceleration. How current FPGA designs and their supporting development toolchains act as limiting factors, and whether a size-speed trade-off lies ahead, are explored here as well.
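To make the computational model concrete, the following is a minimal sketch of a single Markov Brain update step in software. All names and the data layout are illustrative assumptions, not the paper's implementation: each gate is a deterministic lookup table mapping the bits it reads to the bits it writes, and writes to the same node are combined with OR. It is exactly this lookup-table structure that maps naturally onto FPGA logic elements.

```python
# Illustrative sketch (assumed structure, not the paper's code): a Markov Brain
# as a set of deterministic lookup-table gates over a vector of binary nodes.

def step(state, gates):
    """One brain update. state: list of 0/1 node values.
    Each gate is (input_ids, output_ids, table), where table maps the
    packed input bits to packed output bits."""
    nxt = [0] * len(state)
    for in_ids, out_ids, table in gates:
        # Pack the gate's input bits into an integer index into its table.
        idx = 0
        for i in in_ids:
            idx = (idx << 1) | state[i]
        out_bits = table[idx]
        # Unpack output bits; overlapping writes are OR-combined.
        for j, o in enumerate(out_ids):
            nxt[o] |= (out_bits >> (len(out_ids) - 1 - j)) & 1
    return nxt

# Example: one 2-input/1-output gate computing XOR of nodes 0 and 1 into node 2.
xor_gate = ([0, 1], [2], [0b0, 0b1, 0b1, 0b0])
print(step([1, 0, 0], [xor_gate]))  # -> [0, 0, 1]
```

Because every gate is a fixed truth table, the whole update is pure combinational logic, which is why, as the abstract notes, an FPGA can evaluate the entire brain in a single clock cycle.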

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.