A new numerical algorithm is presented to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallaxes. The procedure, based on maximum-likelihood estimation, retrieves unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples affected by selection biases in apparent magnitude and color. It makes full use of low-accuracy and negative parallaxes and can incorporate censoring of reported parallax values. Accurate error estimates are derived for each fitted parameter. Our algorithm allows an a posteriori check of whether the fitted model is a good representation of the observations. An objective test is devised to detect and reject outliers within given confidence limits, and the corresponding criterion is built consistently into the evaluation of the likelihood function. The procedure is described in general terms and applied to both real and simulated data. From the Vyssotsky sample of K and M disk dwarfs we derive, using our method, the calibration M_V = (10.78 +/- 0.02) + (5.86 +/- 0.07)[(R - I) - 1] over the Kron (R - I) color range [0.35, 1.3], with an intrinsic rms magnitude dispersion of the sequence sigma_M = 0.45 +/- 0.02 mag. Part of this dispersion may reflect the metallicity distribution of the sample, which is not considered in this paper.
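To illustrate the kind of likelihood such a calibration maximizes, the sketch below fits a mean relation M_V = a + b[(R - I) - 1] and an intrinsic dispersion sigma_M by marginalizing over the unknown true absolute magnitude of each star. It is a minimal sketch, assuming Gaussian parallax errors; the function and variable names (neg_log_likelihood, m_app, plx_mas, ...) are our own illustration, and it deliberately omits the normalization correcting for selection in apparent magnitude and color as well as the outlier criterion, both of which the full method includes.

    import numpy as np
    from numpy.polynomial.hermite import hermgauss
    from scipy.optimize import minimize

    X, W = hermgauss(40)  # Gauss-Hermite nodes/weights for the M-integral

    def neg_log_likelihood(theta, m_app, color, plx_mas, plx_err_mas):
        # theta = (a, b, log_sigma_M); mean relation M = a + b*((R-I) - 1)
        a, b, log_s = theta
        s = np.exp(log_s)                      # intrinsic dispersion sigma_M (mag)
        mu = a + b * (color - 1.0)             # mean absolute magnitude per star
        # Quadrature nodes over the true absolute magnitude: (n_nodes, n_stars)
        M = mu + np.sqrt(2.0) * s * X[:, None]
        # True parallax (mas) implied by the distance modulus m - M
        plx_true = 10.0 ** (0.2 * (M - m_app) + 2.0)
        # Gaussian measurement model in parallax space: low-accuracy and even
        # negative observed parallaxes contribute naturally, with no truncation
        z = (plx_mas - plx_true) / plx_err_mas
        dens = np.exp(-0.5 * z**2) / (np.sqrt(2.0 * np.pi) * plx_err_mas)
        # Marginalize over M: integral of N(M; mu, s) * dens dM by quadrature
        like = (W[:, None] * dens).sum(axis=0) / np.sqrt(np.pi)
        return -np.sum(np.log(np.clip(like, 1e-300, None)))

    # Hypothetical usage, given data arrays m_app, color, plx_mas, plx_err_mas:
    # fit = minimize(neg_log_likelihood, x0=[10.8, 5.9, np.log(0.5)],
    #                args=(m_app, color, plx_mas, plx_err_mas),
    #                method="Nelder-Mead")
    # a, b, sigma_M = fit.x[0], fit.x[1], np.exp(fit.x[2])

Working in parallax space, rather than inverting parallaxes to distances, is what allows negative and low-accuracy measurements to be retained without bias, as claimed above.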