In this paper we examine the key features of simple neural networks and their application to pattern recognition. Beginning with a three-layer backpropagation network, we analyze the mechanisms of pattern classification and relate the numbers of input, output, and hidden nodes to the features and parameters of the problem. In particular, each hidden neuron corresponds to a discriminant in the input space. We point out that the interactions among the number of discriminants, the size and distribution of the training set, and the numerical magnitudes involved make it very difficult to provide precise guidelines. We found that the shape of the threshold function plays a major role both in pattern recognition and in quantitative prediction and interpolation; tuning its sharpness parameter can have a significant effect on network performance, a feature that is currently under-utilized in many applications. For some applications a linear discriminant is a poor choice. As an example, the conventional BPN finds it almost impossible to separate concentric semicircles or "onion rings", which are readily separated if a quadratic discriminant is used. Our work suggests that different basis functions will perform differently in different applications. Four types of basis functions were discussed in this study, including two which, to the best of our knowledge, have not been previously published.
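To illustrate the sharpness parameter, a common convention writes the threshold function as a logistic with a gain term, f(x) = 1/(1 + exp(-beta*x)). The sketch below is our illustration under that assumption, not code from the study; the name `beta` and the sample values are arbitrary. It shows how a large gain approaches a hard step suited to crisp classification, while a small gain gives the smooth, graded response on which quantitative prediction and interpolation rely.

```python
import numpy as np

def sigmoid(x, beta=1.0):
    """Logistic threshold function with a tunable sharpness (gain) beta.

    Large beta approaches a hard step; small beta gives a smooth,
    near-linear response around the origin."""
    return 1.0 / (1.0 + np.exp(-beta * x))

x = np.linspace(-2.0, 2.0, 9)
for beta in (0.5, 1.0, 10.0):
    print(f"beta={beta:5}:", np.round(sigmoid(x, beta), 3))
```

Running the loop makes the trade-off visible: at beta = 10 the outputs are essentially 0 or 1 everywhere except very near the decision boundary, while at beta = 0.5 they vary gradually across the whole input range.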
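The "onion rings" example can also be made concrete with a small experiment. One common way to realize a quadratic discriminant is to feed a single unit a quadratically expanded input, so that the squared radius (x² + y²) becomes available to it. The sketch below is our illustration under that assumption; the ring radii, noise level, iteration count, and learning rate are arbitrary choices, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def ring(radius, n=200, noise=0.05):
    """Sample n noisy points from a circle of the given radius."""
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    r = radius + rng.normal(0.0, noise, n)
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

inner, outer = ring(1.0), ring(2.0)          # two concentric "onion rings"
X = np.vstack((inner, outer))
y = np.concatenate((np.zeros(len(inner)), np.ones(len(outer))))

# Quadratic expansion: augment (x1, x2) with x1^2, x2^2, x1*x2 so a
# single unit can form a quadratic discriminant in the original space.
Z = np.column_stack((X, X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]))

# Train one logistic unit on the augmented features by gradient descent.
w, b = np.zeros(Z.shape[1]), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))   # sigmoid activation
    grad = p - y                             # d(cross-entropy)/d(logit)
    w -= 0.1 * (Z.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

pred = ((Z @ w + b) > 0.0).astype(float)
print("accuracy with quadratic features:", (pred == y).mean())  # ~1.0
```

Dropping the three squared-term columns from Z reduces the unit to a single linear discriminant, and its accuracy falls to near chance: no line through the plane can separate one ring from the other, which mirrors the difficulty of the conventional BPN noted above.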