A knowledge-based system has been developed for segmenting computed tomography (CT) images. Its modular architecture includes an anatomical model, an image processing engine, an inference engine and a blackboard. The model contains a priori knowledge of the size, shape, X-ray attenuation and relative position of anatomical structures. This knowledge is used to constrain low-level segmentation routines. Model-derived constraints and segmented image objects are both transformed into a common feature space and posted on the blackboard. The inference engine then matches image objects to model objects based on these constraints. The transformation to feature space allows the knowledge and image data representations to remain independent. Thus a high-level model can be used, with knowledge stored in a frame-based semantic network. This modularity and explicit representation of knowledge allow for straightforward system extension. We initially demonstrate an application to lung segmentation in thoracic CT, with subsequent extension of the knowledge base to include tumors within the lung fields. The anatomical model was later augmented with basic brain anatomy, including the skull and blood vessels, to allow automatic segmentation of vascular structures in CT angiograms for 3-D rendering and visualization.
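To make the feature-space matching step concrete, the sketch below illustrates one way model-derived constraints and segmented image regions could be expressed in a common feature space and matched by simple constraint checking. It is not the authors' implementation: the feature names (area, mean attenuation, relative position), the constraint ranges, and the matching rule are illustrative assumptions only.

```python
# Minimal sketch (not the paper's implementation) of matching segmented image
# regions to anatomical model objects in a shared feature space. Feature names,
# constraint ranges, and the matching rule are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Features:
    """Common feature space shared by model constraints and image regions."""
    area_mm2: float      # size
    mean_hu: float       # X-ray attenuation (Hounsfield units)
    centroid_y: float    # relative vertical position (0 = top, 1 = bottom)


@dataclass
class ModelObject:
    """A priori anatomical knowledge expressed as feature ranges."""
    name: str
    area_range: tuple    # (min, max) expected area
    hu_range: tuple      # (min, max) expected attenuation
    y_range: tuple       # (min, max) expected relative position


def satisfies(model: ModelObject, f: Features) -> bool:
    """True if a region's features fall within all of the model's constraints."""
    return (model.area_range[0] <= f.area_mm2 <= model.area_range[1]
            and model.hu_range[0] <= f.mean_hu <= model.hu_range[1]
            and model.y_range[0] <= f.centroid_y <= model.y_range[1])


def match(blackboard_models, blackboard_regions):
    """Label each segmented region with the first model object it satisfies."""
    labels = {}
    for region_id, feats in blackboard_regions.items():
        labels[region_id] = next(
            (m.name for m in blackboard_models if satisfies(m, feats)), None)
    return labels


if __name__ == "__main__":
    # Hypothetical lung-field constraint and two segmented regions.
    lung = ModelObject("lung", area_range=(5e3, 3e4),
                       hu_range=(-950, -500), y_range=(0.2, 0.8))
    regions = {
        1: Features(area_mm2=1.8e4, mean_hu=-820, centroid_y=0.5),  # lung-like
        2: Features(area_mm2=2.0e3, mean_hu=40, centroid_y=0.5),    # soft tissue
    }
    print(match([lung], regions))  # {1: 'lung', 2: None}
```

Because both sides of the match are expressed as features rather than pixels or frames, the anatomical knowledge representation and the low-level segmentation routines can evolve independently, which is what permits the extensions to lung tumors and cerebral vasculature described above.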