The probabilistic neural network (PNN) represents an interesting parallel implementation of a Bayes strategy for pattern classification. Its training phase consists of generating, for each training pattern, a new neuron whose weights equal the pattern's components. This noniterative training procedure is extremely fast, but it leads to a very large number of neurons when large data sets are available. This letter proposes a modified version of the PNN learning phase that considerably simplifies the network structure by applying vector quantization to the learning data.
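
The idea can be illustrated with a minimal sketch (not the letter's actual implementation): a PNN scores each class by a Parzen sum of Gaussian kernels centered on its stored patterns, and the pattern layer is shrunk by replacing each class's training set with a small codebook of quantized prototypes. The use of k-means as the vector quantizer, the kernel width `sigma`, and all names below are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means vector quantizer: returns k codebook vectors for X.
    (Illustrative choice of quantizer, not necessarily the letter's.)"""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pattern to its nearest codebook vector.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

def pnn_predict(x, codebooks, sigma=0.5):
    """Bayes-style decision: pick the class whose (quantized) prototypes
    give the largest average Gaussian-kernel response at x."""
    scores = []
    for centers in codebooks:
        d2 = ((centers - x) ** 2).sum(1)
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean())
    return int(np.argmax(scores))

# Toy two-class example: each class is quantized to 3 prototypes instead
# of keeping one pattern-layer neuron per training sample (100 each).
rng = np.random.default_rng(1)
class0 = rng.normal(0.0, 0.3, size=(100, 2))
class1 = rng.normal(2.0, 0.3, size=(100, 2))
codebooks = [kmeans(class0, 3), kmeans(class1, 3)]
print(pnn_predict(np.array([0.1, -0.1]), codebooks))  # near class 0
print(pnn_predict(np.array([2.1, 1.9]), codebooks))   # near class 1
```

The pattern layer thus holds 6 neurons rather than 200, while the decision rule retains the PNN's kernel-density form.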