Awareness of the viewer's gaze position in a virtual environment can lead to significant savings in scene processing if fine-detail information is presented "just in time", only at locations corresponding to the participant's gaze, i.e., in a gaze-contingent manner. This paper describes the evolution of a gaze-contingent video display system, "gcv". Gcv is a multithreaded, real-time program that displays digital video while simultaneously tracking a subject's eye movements. Because gcv treats the eye tracker as an ordinary positional sensor, its architecture shares many similarities with contemporary virtual environment system designs. Performance of the present system is evaluated in terms of (1) eye tracker sampling latency and video transfer rates, and (2) measured eye tracker accuracy and slippage. The programming strategies developed for incorporating the viewer's point-of-regard are independent of proprietary eye tracking equipment and are applicable to general gaze-contingent virtual environment designs.