Automatic speech dialogue systems are becoming common. In order to assess their performance, a large sample of real dialogues has to be collected and evaluated. This process is expensive, labor intensive, and prone to errors. To alleviate this situation we propose a user simulation that conducts dialogues with the system under investigation. Using stochastic modeling of real users, we can both debug and evaluate a speech dialogue system while it is still in the lab, thus substantially reducing the amount of field testing with real users.
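As an illustration of the idea (not the paper's actual implementation), the following minimal Python sketch shows a stochastic simulated user that samples a user dialogue act in response to each system act from conditional probabilities, which in practice would be estimated from a corpus of real user dialogues. All act names, probabilities, and the toy system policy are hypothetical.

```python
# Minimal sketch of a stochastic user simulator (illustrative assumptions only).
import random

# P(user_act | system_act); values here are made up, in practice they would be
# estimated from transcribed real-user dialogues.
USER_MODEL = {
    "greet":    [("provide_goal", 0.7), ("silence", 0.2), ("out_of_domain", 0.1)],
    "ask_slot": [("provide_slot", 0.8), ("provide_goal", 0.1), ("silence", 0.1)],
    "confirm":  [("affirm", 0.75), ("deny", 0.2), ("silence", 0.05)],
    "close":    [("bye", 1.0)],
}

def simulate_user_turn(system_act, rng=random):
    """Sample a user dialogue act given the system's last act."""
    acts, weights = zip(*USER_MODEL[system_act])
    return rng.choices(acts, weights=weights, k=1)[0]

def run_dialogue(system_policy, max_turns=20):
    """Run one simulated dialogue against a system policy, i.e. a callable
    mapping the dialogue history to the next system act."""
    history = []
    for _ in range(max_turns):
        system_act = system_policy(history)
        user_act = simulate_user_turn(system_act)
        history.append((system_act, user_act))
        if system_act == "close":
            break
    return history

if __name__ == "__main__":
    # A trivial scripted system policy, used here only for demonstration.
    script = ["greet", "ask_slot", "confirm", "close"]
    policy = lambda history: script[min(len(history), len(script) - 1)]
    for turn in run_dialogue(policy):
        print(turn)
```

Running many such simulated dialogues lets one log failures and compute evaluation statistics without recruiting real users for every test cycle.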