The UMass Mobile Robot Project is investigating the problem of intelligent navigation of an autonomous robot vehicle. Model-based processing of visual sensory data is the primary mechanism used for controlling movement through the environment, measuring progress toward a given goal, and avoiding obstacles. Goal-oriented navigation takes place through a partially modeled, unchanging environment that contains no unmodeled obstacles; this simplified environment provides a foundation for research in more complicated domains. The navigation system integrates perception, planning, and execution of actions. Of particular importance is that the planning processes are reactive and reason about landmarks that should be perceived at various stages of task execution. Correspondences between image features and expected landmark locations are used at several levels of abstraction to ensure proper plan execution. This system and some experiments that demonstrate the performance of its components are described. © 1990 IEEE
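The abstract does not describe how landmark correspondences are computed or used; the following is a minimal Python sketch, under our own assumptions, of one way predicted landmark locations could be checked against detected image features to monitor plan execution. All names here (Landmark, match_landmarks, plan_step_verified, the pixel-distance and coverage thresholds) are illustrative and are not taken from the paper.

```python
import math
from dataclasses import dataclass


@dataclass
class Landmark:
    """A modeled landmark with its predicted image location (pixels)."""
    name: str
    u: float
    v: float


def match_landmarks(expected, detected, max_pixel_dist=15.0):
    """Greedily pair each expected landmark with the nearest detected
    image feature within a pixel threshold; return matches and misses.

    expected: list[Landmark] predicted from the world model and pose estimate
    detected: list[(u, v)] feature locations extracted from the current image
    """
    remaining = list(detected)
    matches, misses = [], []
    for lm in expected:
        best, best_d = None, max_pixel_dist
        for feat in remaining:
            d = math.hypot(feat[0] - lm.u, feat[1] - lm.v)
            if d < best_d:
                best, best_d = feat, d
        if best is None:
            misses.append(lm)
        else:
            matches.append((lm, best, best_d))
            remaining.remove(best)
    return matches, misses


def plan_step_verified(expected, detected, min_fraction=0.5):
    """Declare the current plan step verified if enough expected
    landmarks are found near their predicted image locations."""
    matches, _ = match_landmarks(expected, detected)
    return len(matches) >= min_fraction * max(len(expected), 1)
```

In this hypothetical scheme, a failure of `plan_step_verified` would signal the reactive planner that the expected landmarks were not seen where the partial world model predicted them, prompting re-planning or re-localization; the actual mechanism used in the paper may differ.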