This paper presents an adaptive medical image visualization system based on a hierarchical neural network structure and intelligent decision fusion. The system consists of a feature generator that uses both histogram and spatial information computed from a medical image, a wavelet transform that compresses the feature vector, a competitive-layer neural network that clusters images into subclasses, a bi-modal linear estimator and an RBF-network-based non-linear estimator for each subclass, and an intelligent decision fusion process that integrates the estimates from both estimators across subclasses to compute the final display parameters. Both estimators can adapt to new types of medical images simply by being trained on those images, making the algorithm adaptive and extensible. The large training image set is hierarchically organized for efficient user interaction and effective re-mapping of the width/center settings in the training set. The system adapts by modifying the width/center values in the training data through a width/center mapping function, which is estimated from the new width/center settings of a few representative images adjusted by the user. While the RBF-network-based estimator performs very well for images similar to those in the training set, the bi-modal linear estimator provides reasonable estimates for a wide range of images, including those not represented in the training set. The decision fusion step therefore makes the final display-parameter estimates accurate for trained images and robust for unknown images. The algorithm has been tested on a wide range of MR images and has shown satisfactory results. Although the algorithm is comprehensive, its execution time remains within a reasonable range.
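To make the estimate-and-fuse stage concrete, the following is a minimal sketch in Python/NumPy of one plausible realization, assuming hypothetical pre-trained parameters (subclass centroids, RBF centers and weights, and linear coefficients). The function names, the Haar-based feature compression, and the confidence-weighted fusion rule are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only: parameter names and the fusion weighting are
# assumptions, not the paper's exact method.
import numpy as np

def histogram_features(image, bins=64):
    """Normalized intensity histogram as a simple feature vector."""
    hist, _ = np.histogram(image, bins=bins, range=(image.min(), image.max()))
    return hist / hist.sum()

def haar_compress(v, levels=2):
    """Crude Haar-wavelet approximation to shrink the feature vector."""
    for _ in range(levels):
        v = (v[0::2] + v[1::2]) / np.sqrt(2.0)
    return v

def rbf_estimate(x, centers, weights, sigma=1.0):
    """Normalized-RBF-network estimate of the (width, center) pair."""
    d2 = ((centers - x) ** 2).sum(axis=1)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))
    return phi @ weights / (phi.sum() + 1e-12)   # weights: (n_centers, 2)

def linear_estimate(x, A, b):
    """Linear (width, center) estimate; the robust fallback for unseen images."""
    return A @ x + b

def fuse(x, centroids, rbf_params, lin_params):
    """Weight the RBF estimate by similarity to the nearest subclass centroid."""
    d = np.linalg.norm(centroids - x, axis=1)
    k = d.argmin()
    conf = np.exp(-d[k])        # near 1 when the image resembles training data
    wc_rbf = rbf_estimate(x, *rbf_params[k])     # rbf_params[k]: (centers, weights)
    wc_lin = linear_estimate(x, *lin_params)     # lin_params: (A, b)
    return conf * wc_rbf + (1.0 - conf) * wc_lin

# Usage (shapes only; all parameters are hypothetical and would come from
# training the subclass estimators):
#   x = haar_compress(histogram_features(mr_slice))
#   width, center = fuse(x, centroids, rbf_params, lin_params)
```

The design point this sketch captures is the one the paper argues for: the RBF estimate dominates when an image falls close to a trained subclass, while the linear estimate takes over for unfamiliar images, yielding accuracy on trained data and robustness elsewhere.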