A strategy for finding approximate solutions to discrete optimization problems with inequality constraints using mean field neural networks is presented. The constraints x ≤ 0 are encoded by xΘ(x) terms in the energy function, where Θ is the Heaviside step function. A careful treatment of the mean field approximation for the self-coupling parts of the energy is crucial, and results in an essentially parameter-free algorithm. This methodology is extensively tested on knapsack problems of sizes up to 10³ items. The algorithm scales like NM for problems with N items and M constraints. Comparisons are made with an exact branch-and-bound algorithm when this is computationally possible (N ≤ 30). The quality of the neural network solutions consistently lies above 95% of the optimal ones at a significantly lower CPU expense. For the larger problem sizes the algorithm is compared with simulated annealing and a modified linear programming approach. For "nonhomogeneous" problems these produce good solutions, whereas for the more difficult "homogeneous" problems the neural approach is a winner with respect to solution quality and/or CPU time consumption. The approach is of course also applicable to other problems of similar structure, such as set covering.
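To make the idea concrete, here is a minimal mean field annealing sketch for the 0-1 knapsack problem (maximize c·x subject to A·x ≤ b, x ∈ {0,1}^N) with the xΘ(x) penalty encoding. This is an illustration under stated assumptions, not the authors' implementation: the function and parameter names (`mf_knapsack`, `T0`, `anneal`, `sweeps`, `alpha`) and the annealing schedule and penalty weight are hypothetical choices. The self-coupling is handled as the abstract suggests, by computing each unit's local field as the energy difference between v_i = 1 and v_i = 0 with unit i excluded from the constraint loads.

```python
import numpy as np


def theta_penalty(x):
    """Penalty term x*Theta(x): zero when the constraint x <= 0 holds,
    growing linearly with the violation otherwise."""
    return x * (x > 0)


def mf_knapsack(c, A, b, T0=10.0, anneal=0.95, sweeps=100, alpha=None, seed=0):
    """Mean field annealing sketch for the 0-1 knapsack problem:
    maximize c.x subject to A.x <= b, x in {0,1}^N, with A of shape (N, M).
    T0, anneal, sweeps and the penalty weight alpha are illustrative
    assumptions, not values from the paper."""
    c, A, b = np.asarray(c, float), np.asarray(A, float), np.asarray(b, float)
    rng = np.random.default_rng(seed)
    N = len(c)
    if alpha is None:
        alpha = c.max()                      # heuristic penalty scale (assumed)
    v = 0.5 + 0.01 * (rng.random(N) - 0.5)   # mean field variables near 1/2
    T = T0
    loads = A.T @ v                          # current constraint loads, shape (M,)
    for _ in range(sweeps):
        for i in rng.permutation(N):
            loads -= A[i] * v[i]             # exclude unit i: removes self-coupling
            # Local field = E(v_i = 1) - E(v_i = 0), evaluated exactly rather
            # than as a naive gradient of the penalty term.
            dE = -c[i] + alpha * np.sum(
                theta_penalty(loads + A[i] - b) - theta_penalty(loads - b))
            v[i] = 1.0 / (1.0 + np.exp(np.clip(dE / T, -50.0, 50.0)))  # sigmoid
            loads += A[i] * v[i]             # restore loads with the updated v_i
        T *= anneal                          # geometric annealing schedule
    return (v > 0.5).astype(int)             # sharpen to a binary solution
```

Because the loads are maintained incrementally, each unit update costs O(M) and a full sweep costs O(NM), consistent with the NM scaling quoted in the abstract.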
