The maximum clique problem (MCP) is a classic graph optimization problem with many real-world applications. It is NP-complete and computationally intractable even to approximate within certain absolute performance bounds. In this paper, we present the design of a new discrete competitive Hopfield neural network for finding near-optimum solutions to the MCP. Other competitive Hopfield-style networks have been proposed for the MCP; however, recent results have shown that these models can produce inaccurate results and oscillatory behavior during convergence. As a result, the network sometimes fails to converge to a clique of the considered graph, and this error usually grows with the size and density of the graph. In contrast, we propose appropriate dynamics for a binary competitive Hopfield network that always generate local/global minimum points corresponding to maximal/maximum cliques of the considered graph. Moreover, an optimal modelling of the network is developed, resulting in a fast two-level winner-take-all scheme. Extensive simulation runs show that our network outperforms the other competitive neural approaches in both solution quality and computation time.

Scope and purpose

The MCP is a classic problem in computer science and graph theory, and is known to be NP-complete. Although many algorithms have been proposed for the MCP, their exponentially increasing computation time usually prohibits solving large-scale instances. The MCP is related to many real-life problems, including classification theory, coding theory, project selection, fault tolerance, computer vision, signal transmission theory, VLSI circuit design, information retrieval, biological systems and economics. Hence, simple algorithms that yield acceptable solutions sufficiently fast are quite important for such practical problems.
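To make the objects concrete: a clique is a set of vertices that are pairwise adjacent, and a maximal clique is one that cannot be extended by any further vertex (a maximum clique is a largest such set). The following minimal Python sketch checks these properties on an adjacency matrix; the helper names and the small example graph are our own illustration, not taken from the paper:

```python
from itertools import combinations

def is_clique(adj, nodes):
    """True if every pair of selected vertices is adjacent."""
    return all(adj[u][v] for u, v in combinations(nodes, 2))

def is_maximal_clique(adj, nodes):
    """A clique is maximal if no outside vertex is adjacent to all members."""
    if not is_clique(adj, nodes):
        return False
    others = set(range(len(adj))) - set(nodes)
    return all(any(not adj[w][v] for v in nodes) for w in others)

# Example graph: triangle {0, 1, 2} plus a pendant vertex 3 attached to 2.
adj = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]
```

Here `{0, 1, 2}` is a maximal (indeed maximum) clique, while `{0, 1}` is a clique but not maximal, since vertex 2 could still be added.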
A very powerful neural approach to the MCP has been presented by Lee and Takefuji, based on a competitive Hopfield-type network called the maximum neural network. Lee and Takefuji showed that the maximum neural network solves large-scale MCPs in a reasonable computation time where the best-known algorithms cannot. However, several authors have questioned the convergence properties of Takefuji's model. Consistent with these analyses, our simulation results on the MCP show that the percentage of randomly generated initial states that do not converge to a maximal clique of the graph ranges between 8% and 62%, and this error increases with the size and density of the graph. In contrast, we present in this paper a competitive Hopfield model that always guarantees convergence to a maximum/maximal clique of the graph. Experimental results on the MCP show that our network is computationally superior to the parallel maximum neural network in both solution quality and computation time on a conventional sequential machine; moreover, this superiority increases with the size and density of the graph. (C) 2002 Elsevier Science Ltd. All rights reserved.
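The flavor of a competitive, winner-take-all style search for a maximal clique can be conveyed with a plain greedy sketch: vertices compete for membership, the least-supported vertex is evicted until the selected set is a clique, and compatible vertices are then re-admitted. This is only an illustrative analogue under our own simplifying assumptions, not the paper's Hopfield dynamics or its two-level winner-take-all scheme:

```python
import random

def wta_maximal_clique(adj, seed=0):
    """Greedy winner-take-all-style sketch: evict the vertex with the least
    support (fewest neighbours inside the current set) until the set is a
    clique, then greedily add back any fully compatible vertex.
    Always terminates at a maximal clique of the graph."""
    rng = random.Random(seed)
    n = len(adj)
    selected = set(range(n))
    # Phase 1: competition. While the selection is not a clique, the vertex
    # with the smallest in-set support "loses" and is removed.
    while True:
        support = {v: sum(adj[v][u] for u in selected if u != v)
                   for v in selected}
        if all(support[v] == len(selected) - 1 for v in selected):
            break  # pairwise adjacent: the selection is a clique
        loser = min(selected, key=lambda v: (support[v], rng.random()))
        selected.remove(loser)
    # Phase 2: expansion. Re-admit any vertex adjacent to every selected
    # vertex, which guarantees the final clique is maximal.
    for v in range(n):
        if v not in selected and all(adj[v][u] for u in selected):
            selected.add(v)
    return sorted(selected)

# Same example graph: triangle {0, 1, 2} with pendant vertex 3 attached to 2.
adj = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]
```

On this graph the sketch evicts the pendant vertex 3 first (it has the least support) and settles on the maximal clique `[0, 1, 2]`. Unlike the network proposed in the paper, a greedy rule of this kind carries no guarantee of solution quality beyond maximality.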