There is evidence that computer-aided diagnosis (CAD) can improve radiologists' performance. One potential drawback of CAD, however, is that a computer-detected false positive may induce a false positive by the radiologist. To examine this issue, we performed two experiments: one comparing radiologists' false positives with those of the computer, and one measuring radiologists' ability to discriminate between the computer's true- and false-positive detections. In the first experiment, radiologists were shown 50 mammograms; on each film they were asked to indicate three regions that could contain clustered microcalcifications and, on a 100-point scale, to rate their confidence that microcalcifications were present in each region. In the second experiment, the radiologists were shown regions of interest, printed on film, each containing either a computer-detected true cluster or a computer-detected false positive, and rated their confidence that actual clustered microcalcifications were present. The false positives made by the computer and those made by the radiologists overlapped in fewer than 1% of cases. Furthermore, ROC analysis showed that the radiologists were able to discriminate between the computer's true- and false-positive detections.
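
To make the ROC result concrete, the sketch below illustrates one standard way to compute an empirical ROC area from confidence ratings like those collected in the second experiment: the area under the empirical ROC curve equals the probability that a randomly chosen true cluster receives a higher rating than a randomly chosen false positive (the Mann-Whitney equivalence). This is only an illustrative sketch; the ratings and function names are hypothetical, and the study's actual ROC curve fitting procedure is not specified in the abstract.

```python
def empirical_auc(true_ratings, false_ratings):
    """Empirical ROC area: P(true rated above false) + 0.5 * P(tie)."""
    wins = ties = 0
    for t in true_ratings:
        for f in false_ratings:
            if t > f:
                wins += 1
            elif t == f:
                ties += 1
    return (wins + 0.5 * ties) / (len(true_ratings) * len(false_ratings))

# Hypothetical confidence ratings on the study's 100-point scale.
ratings_true_clusters = [85, 90, 70, 95, 60, 80]    # computer true positives
ratings_false_positives = [20, 35, 50, 10, 40, 30]  # computer false positives

print(f"AUC = {empirical_auc(ratings_true_clusters, ratings_false_positives):.2f}")
# An AUC near 1.0 means the readers readily separate the two groups;
# an AUC of 0.5 means no discrimination.
```

Under this convention, discrimination between computer true and false positives corresponds to an ROC area significantly greater than 0.5.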