This paper discusses the trade-off between accuracy, reliability, and computing time in the binary-encoded genetic algorithm (GA) used for global optimization over continuous variables. An experimental study is performed on a large set of analytical test functions. We first show the limitations of the "standard GA", which generally requires a high computing time yet exhibits a low success rate due to premature convergence. We then point out the disappointing effect of carefully choosing and tuning the "classical" GA parameters, such as the coding scheme and the mutation or crossover operators. Indeed, Gray coding and double crossover helped improve speed, but did not solve the problem of an overly homogeneous population. To fight the premature convergence of the GA, we finally emphasize two decisive alterations made to the algorithm: an adaptive reduction of the definition interval of each variable and the use of a scale factor in the calculation of the crossover probabilities. The enhanced GA thus obtained is discussed in detail and intensively tested on more than 20 test functions of 1-20 variables. (C) 2000 Elsevier Science Ltd. All rights reserved.
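To make the abstract's ingredients concrete, the sketch below illustrates a generic binary-encoded GA with Gray decoding and a periodic, adaptive reduction of each variable's definition interval around the best point found so far. It is not the authors' exact algorithm: the test function, tournament selection, all parameter values, the shrink factor and schedule, and the clamping of the reduced intervals are illustrative assumptions, and the scale factor applied to the crossover probabilities is not shown.

```python
# Minimal sketch (assumptions throughout): binary-encoded GA with Gray decoding
# and adaptive reduction of each variable's definition interval around the
# current best point. Not the paper's exact method.
import random

N_BITS = 16            # bits per variable (assumption)
POP_SIZE = 50
N_GEN = 200
P_CROSS = 0.8
P_MUT = 1.0 / N_BITS
SHRINK = 0.5           # interval-reduction factor, applied every 50 generations (assumption)

def gray_to_int(bits):
    """Decode a Gray-coded bit list (MSB first) to an integer."""
    value = bits[0]
    out = value
    for b in bits[1:]:
        value ^= b
        out = (out << 1) | value
    return out

def decode(chromosome, bounds):
    """Map each N_BITS slice of the chromosome into its variable's interval."""
    xs = []
    for i, (lo, hi) in enumerate(bounds):
        bits = chromosome[i * N_BITS:(i + 1) * N_BITS]
        frac = gray_to_int(bits) / (2 ** N_BITS - 1)
        xs.append(lo + frac * (hi - lo))
    return xs

def sphere(xs):
    """Simple analytical test function (minimization), used here as a stand-in."""
    return sum(x * x for x in xs)

def run_ga(bounds):
    n_vars = len(bounds)
    pop = [[random.randint(0, 1) for _ in range(n_vars * N_BITS)]
           for _ in range(POP_SIZE)]
    best, best_f = None, float("inf")
    for gen in range(N_GEN):
        fitness = [sphere(decode(ind, bounds)) for ind in pop]
        for ind, f in zip(pop, fitness):
            if f < best_f:
                best, best_f = decode(ind, bounds), f
        # Binary tournament selection, one-point crossover, bit-flip mutation.
        new_pop = []
        while len(new_pop) < POP_SIZE:
            a, b = (min(random.sample(range(POP_SIZE), 2), key=lambda i: fitness[i])
                    for _ in range(2))
            p1, p2 = pop[a][:], pop[b][:]
            if random.random() < P_CROSS:
                cut = random.randrange(1, len(p1))
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for j in range(len(child)):
                    if random.random() < P_MUT:
                        child[j] ^= 1
                new_pop.append(child)
        pop = new_pop[:POP_SIZE]
        # Adaptive interval reduction (assumed schedule): periodically shrink each
        # variable's definition interval around the best point found so far,
        # clamped so the new interval stays inside the current one.
        if (gen + 1) % 50 == 0 and best is not None:
            bounds = [(max(lo, x - SHRINK * (hi - lo) / 2),
                       min(hi, x + SHRINK * (hi - lo) / 2))
                      for (lo, hi), x in zip(bounds, best)]
    return best, best_f

if __name__ == "__main__":
    x_best, f_best = run_ga([(-5.12, 5.12)] * 3)
    print("best point:", x_best, "value:", f_best)
```

The interval reduction concentrates the fixed-length binary coding on an ever smaller region, which is one way to recover precision and diversity lost to premature convergence; the specific shrink factor and trigger used here are placeholders, not values from the paper.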