In previous work we reported high classification rates for learning vector quantization (LVQ) networks trained to classify phoneme tokens shifted in time. It has since been shown that the framework of minimum classification error (MCE) and generalized probabilistic descent (GPD) treats LVQ as a special case of a general method for gradient descent on a rigorously defined classification loss that closely reflects the misclassification rate. This framework allows us to extend LVQ into a prototype-based minimum error classifier (PBMEC) suited to the classification of various speech units that the original LVQ formulation could not handle. Speech categories are represented using a prototype-based multi-state architecture incorporating a dynamic time warping procedure. We present results for the difficult E-set task and for isolated word recognition over a 5240-word vocabulary, which reveal clear performance gains from using PBMEC. In addition, we discuss smoothing of the loss function from the perspective of increasing classifier robustness.
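As a point of reference, the loss smoothing mentioned above can be illustrated with the general MCE/GPD formulation. The expressions below are a sketch in generic MCE notation (the symbols $g_j$, $d_k$, $\eta$, $\gamma$, and $\theta$ are standard MCE quantities, not values drawn from this paper): a misclassification measure $d_k$ compares the discriminant score of the correct class against a smooth average over competing classes, and a sigmoid maps it to a differentiable loss approximating the 0/1 error count,
\[
d_k(x; \Lambda) = -g_k(x; \Lambda) + \frac{1}{\eta}\log\!\left[ \frac{1}{M-1} \sum_{j \neq k} e^{\eta\, g_j(x; \Lambda)} \right],
\qquad
\ell_k(x; \Lambda) = \frac{1}{1 + e^{-\gamma\, d_k(x; \Lambda) + \theta}},
\]
where $g_j(x; \Lambda)$ is the discriminant score of class $j$ under classifier parameters $\Lambda$, $M$ is the number of classes, and $\eta \to \infty$ recovers the single best competing class. The smoothness parameter $\gamma$ governs the trade-off relevant to the robustness discussion: a large $\gamma$ drives the loss toward a hard error count, while a smaller $\gamma$ yields a smoother, more forgiving objective for gradient descent.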