The computation of effect sizes is a key feature of meta-analysis. In treatment outcome meta-analyses, the standardized mean difference statistic on posttest scores (d) is usually the effect size statistic used. However, when primary studies do not report the statistics needed to compute d, many methods for estimating d from other data have been developed. Little is known about the accuracy of these estimates, yet meta-analysts frequently use them on the assumption that they are estimating the same population parameter as d. This study investigates that assumption empirically. On a sample of 140 psychosocial treatment or prevention studies from a variety of areas, the present study shows that these estimates yield results that are often not equivalent to d in either mean or variance. The frequent mixing of d and other estimates of d in past meta-analyses, therefore, may have led to biased effect size estimates and inaccurate significance tests.
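For reference, the standardized mean difference on posttest scores is conventionally defined as the difference between treatment and control group means divided by a pooled standard deviation; a standard formulation is sketched below (the abstract does not specify the exact estimator used, so the pooled-SD form shown here is an assumption):

\[
d = \frac{\bar{X}_T - \bar{X}_C}{s_p},
\qquad
s_p = \sqrt{\frac{(n_T - 1)\,s_T^2 + (n_C - 1)\,s_C^2}{n_T + n_C - 2}}
% illustrative pooled-SD form; T = treatment group, C = control group
\]

Here \(\bar{X}\), \(s^2\), and \(n\) denote the group mean, variance, and sample size. Estimates of d derived from other reported statistics (e.g., t values, p values, or dichotomized outcomes) may not share this statistic's mean or sampling variance, which is the equivalence the study examines.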