NEURAL NETWORKS FOR OPTIMIZATION PROBLEMS WITH INEQUALITY CONSTRAINTS - THE KNAPSACK-PROBLEM

Cited by: 55
Authors
OHLSSON, M
PETERSON, C
SODERBERG, B
DOI
10.1162/neco.1993.5.2.331
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
A strategy for finding approximate solutions to discrete optimization problems with inequality constraints using mean field neural networks is presented. The constraints x ≤ 0 are encoded by xΘ(x) terms in the energy function. A careful treatment of the mean field approximation for the self-coupling parts of the energy is crucial, and results in an essentially parameter-free algorithm. This methodology is extensively tested on the knapsack problem of size up to 10^3 items. The algorithm scales like NM for problems with N items and M constraints. Comparisons are made with an exact branch and bound algorithm when this is computationally possible (N ≤ 30). The quality of the neural network solutions consistently lies above 95% of the optimal ones at a significantly lower CPU expense. For the larger problem sizes the algorithm is compared with simulated annealing and a modified linear programming approach. For "nonhomogeneous" problems these produce good solutions, whereas for the more difficult "homogeneous" problems the neural approach is a winner with respect to solution quality and/or CPU time consumption. The approach is of course also applicable to other problems of similar structure, like set covering.
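
To make the xΘ(x) penalty encoding concrete, the following is a minimal mean-field-annealing sketch in Python for a 0/1 knapsack instance with M inequality constraints. It is an illustration of the general idea in the abstract, not the authors' exact algorithm: the update rule, the penalty weight alpha, and the annealing schedule (T0, Tfac, sweeps) are assumptions introduced here, whereas the paper reports an essentially parameter-free treatment of the self-coupling terms.

import numpy as np

def mfa_knapsack(c, a, b, T0=10.0, Tfac=0.95, sweeps=200, alpha=None, rng=None):
    """Mean-field-annealing sketch for a 0/1 knapsack instance:
    maximize c @ x subject to a @ x <= b, x_i in {0, 1}.
    Constraint violations are penalized by terms alpha * y * Theta(y),
    with y = a @ x - b, in the spirit of the abstract; alpha and the
    annealing schedule are illustrative assumptions."""
    if rng is None:
        rng = np.random.default_rng(0)
    N = len(c)
    if alpha is None:
        alpha = float(np.max(c))                 # assumed penalty scale
    v = 0.5 + 0.01 * rng.standard_normal(N)      # mean-field variables in (0, 1)

    def penalty(load):
        over = load - b                          # positive entries are violated constraints
        return alpha * np.sum(np.where(over > 0.0, over, 0.0))   # y * Theta(y)

    T = T0
    for _ in range(sweeps):
        for i in rng.permutation(N):
            load_wo_i = a @ v - a[:, i] * v[i]   # constraint load with unit i removed
            # local field: E(x_i = 0) - E(x_i = 1), other units at their mean-field values
            u = c[i] - (penalty(load_wo_i + a[:, i]) - penalty(load_wo_i))
            v[i] = 1.0 / (1.0 + np.exp(-u / T))  # sigmoid mean-field update
        T *= Tfac
    return (v > 0.5).astype(int)                 # threshold to a binary configuration

Because the thresholded configuration can still violate a constraint, a practical run would end with a feasibility check (and, if needed, a simple repair such as dropping items from overloaded constraints); the paper itself assesses solution quality against exact branch and bound for N ≤ 30 and against simulated annealing and linear programming for larger sizes.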
Pages: 331-339
Page count: 9