Accuracy, robustness, and minimality are fundamental issues in system-level design; such properties are generally associated with constraints limiting the feasible model space. This paper focuses on the optimal selection of feedforward neural networks under accuracy, robustness, and minimality constraints. Model selection with respect to accuracy can be carried out within the theoretical framework delineated by the final prediction error (FPE), the generalization error estimate (GEN), the generalized prediction error (GPE), and the network information criterion (NIC), or by cross-validation-based techniques. Robustness is an appealing feature, since a robust application degrades gracefully in performance when affected by perturbations in its structural parameters (e.g., those associated with faults or finite-precision representations). Minimality is related to model selection and aims to reduce the computational load of the solution (and, in a digital implementation, silicon area and power consumption). A novel sensitivity analysis derived from the FPE selection criterion is suggested to quantify the relationship between performance loss and robustness; based on the definitions of weak and acute perturbations, we introduce two criteria for estimating the robustness degree of a neural network. Finally, by ranking the features of the obtained models, we identify the best constrained neural network.
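To make the accuracy-driven selection concrete, the following sketch applies Akaike's classical FPE formula, FPE = MSE · (N + p)/(N − p) for N training samples and p free parameters, to a set of hypothetical candidate networks (the hidden-unit counts, MSE values, and parameter counts below are invented for illustration and are not results from the paper):

```python
def fpe(train_mse: float, n_samples: int, n_params: int) -> float:
    """Akaike's final prediction error: the training MSE inflated by a
    complexity penalty (assumes n_samples > n_params)."""
    return train_mse * (n_samples + n_params) / (n_samples - n_params)

# Hypothetical candidates: (hidden units, training MSE, parameter count).
candidates = [
    (2, 0.080, 13),
    (5, 0.050, 31),
    (10, 0.048, 61),
]

n_samples = 200
scores = {h: fpe(mse, n_samples, p) for h, mse, p in candidates}
best = min(scores, key=scores.get)  # network minimizing the FPE estimate
```

Note how the 10-hidden-unit network, despite its marginally lower training MSE, is penalized for its parameter count; the criterion thus trades accuracy against minimality.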
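The robustness notion above can be sketched as a Monte Carlo estimate of the performance loss under random parameter perturbations. The toy linear model, the synthetic data, and the relative-noise perturbation scheme below are all assumptions for illustration; they stand in for a trained network and do not reproduce the paper's specific weak/acute perturbation criteria:

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(w, X, y):
    # Toy linear model standing in for a trained network's forward pass.
    return float(np.mean((X @ w - y) ** 2))

# Hypothetical "trained" parameters and validation data.
X = rng.normal(size=(100, 5))
w_star = rng.normal(size=5)
y = X @ w_star                      # noiseless targets: baseline error is zero
baseline = mse(w_star, X, y)

def robustness_loss(w, X, y, sigma, trials=200):
    """Mean performance degradation under random weight perturbations of
    relative magnitude sigma (a generic Monte Carlo estimate)."""
    losses = []
    for _ in range(trials):
        dw = sigma * np.abs(w) * rng.standard_normal(w.shape)
        losses.append(mse(w + dw, X, y) - baseline)
    return float(np.mean(losses))

small = robustness_loss(w_star, X, y, sigma=0.01)   # weak perturbation level
large = robustness_loss(w_star, X, y, sigma=0.10)   # stronger perturbation level
```

A model whose loss grows slowly as sigma increases degrades gracefully in the sense described above; comparing such curves across candidate networks gives one way to rank their robustness.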