In this article, we present a solution to the maximum clique problem using a gradient-ascent learning algorithm for the Hopfield neural network. The method yields a near-optimum parallel algorithm for finding a maximum clique. The Hopfield network is first run to generate a near-maximum clique; the connection weights are then modified in a gradient-ascent direction so that the network can escape from the near-maximum clique toward a larger, possibly maximum, clique. The proposed parallel algorithm is tested on two types of random graphs and on benchmark graphs from the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS). The simulation results show that the proposed learning algorithm can find good solutions in reasonable computation time.
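To make the idea concrete, the sketch below shows one possible reading of the approach: a discrete Hopfield network whose weights penalize selecting non-adjacent vertices, followed by a gradient-ascent perturbation of the weights at each converged state so the network can leave that local optimum. The penalty constants A and B, the update schedule, the feasibility-repair step, and the perturbation step size are illustrative assumptions and not the authors' published formulation or parameters.

```python
# Hedged sketch: Hopfield-style maximum clique search with a gradient-ascent
# weight perturbation to escape local optima. Constants and the repair step
# are assumptions for illustration, not the paper's exact algorithm.
import itertools
import random


def hopfield_max_clique(n, edges, restarts=20, seed=0):
    rng = random.Random(seed)
    adj = [[False] * n for _ in range(n)]
    for i, j in edges:
        adj[i][j] = adj[j][i] = True

    A, B = 1.0, 2.0  # assumed reward per vertex / penalty per non-adjacent pair
    # Hopfield weights: penalize simultaneously active non-adjacent vertices.
    w = [[0.0 if (i == j or adj[i][j]) else -B for j in range(n)] for i in range(n)]
    bias = [A] * n
    best = set()

    for _ in range(restarts):
        x = [rng.randint(0, 1) for _ in range(n)]
        for _ in range(200):  # asynchronous updates until convergence
            changed = False
            for i in rng.sample(range(n), n):
                net = sum(w[i][j] * x[j] for j in range(n)) + bias[i]
                new = 1 if net > 0 else 0
                if new != x[i]:
                    x[i], changed = new, True
            if not changed:
                break
        # Keep only a feasible clique: drop vertices not adjacent to all others.
        active = {i for i in range(n) if x[i]}
        clique = {i for i in active if all(adj[i][j] for j in active if j != i)}
        if len(clique) > len(best):
            best = set(clique)
        # Gradient ascent on the energy with respect to the weights
        # (dE/dw_ij = -x_i x_j) raises the energy of the current state so it is
        # no longer a stable minimum, letting the next run escape it.
        for i, j in itertools.combinations(clique, 2):
            w[i][j] -= 0.1
            w[j][i] -= 0.1
    return best


if __name__ == "__main__":
    # Small example graph: vertices {0, 1, 2} form the maximum clique.
    edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4)]
    print(hopfield_max_clique(5, edges))
```

In this reading, decreasing the weights between vertices of the current clique is what "modify weights in a gradient-ascent direction" could mean, since it increases the Hopfield energy of the converged state; the actual update rule used by the authors is available only in the full paper.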
