A statistical quantization model is used to analyze the effects of quantization when digital techniques are used to implement a real-valued feedforward multilayer neural network. In this analysis we introduce a parameter, the effective nonlinearity coefficient, that is important in studying the quantization effects. We develop general statistical formulations, as functions of the quantization parameters, of the performance degradation of the neural network caused by quantization. Our formulations predict (as one may intuitively expect) that the network's performance degradation worsens as the number of bits decreases; that changing the number of hidden units in a layer has no effect on the degradation; that, for a fixed effective nonlinearity coefficient and number of bits, increasing the number of layers worsens the degradation; and that the number of bits in successive layers can be reduced if the neurons of the lower layer are nonlinear.
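The qualitative trend that degradation worsens as the number of bits decreases can be illustrated empirically. The sketch below is not the paper's statistical model: the uniform quantizer, the random tanh network, and all sizes are illustrative assumptions chosen only to show the effect of weight and activation quantization on a full-precision reference output.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits, x_max=1.0):
    # Uniform quantizer clipped to [-x_max, x_max] (an illustrative
    # choice, not the quantization scheme analyzed in the paper).
    step = 2.0 * x_max / (2 ** bits)
    return np.clip(np.round(x / step) * step, -x_max, x_max)

def forward(x, weights, quant_bits=None):
    # Feedforward pass through tanh layers; optionally quantize
    # weights and activations to the given number of bits.
    a = x
    for W in weights:
        if quant_bits is not None:
            W = quantize(W, quant_bits)
            a = quantize(a, quant_bits)
        a = np.tanh(a @ W)
    return a

# Random 3-layer network (hypothetical sizes) and random inputs.
sizes = [8, 16, 16, 4]
weights = [rng.normal(0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
x = rng.uniform(-1, 1, (1000, sizes[0]))

ref = forward(x, weights)  # full-precision reference output
for bits in (4, 6, 8, 10):
    err = np.mean((forward(x, weights, bits) - ref) ** 2)
    print(f"{bits} bits: mean squared degradation = {err:.2e}")
```

Running the sketch shows the mean squared deviation from the full-precision output shrinking as the word length grows, consistent with the first prediction above.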