The reliability of peer review of scientific documents and the evaluative criteria scientists apply to judge the work of their peers are critically re-examined, with special attention to the consistently low levels of reliability that have been reported. Referees of grant proposals agree much more about what is unworthy of support than about what has scientific value. For manuscript submissions, this seems to depend on whether a discipline (or subfield) is general and diffuse (e.g., cross-disciplinary physics, general fields of medicine, cultural anthropology, social psychology) or specific and well defined (e.g., nuclear physics, medical specialty areas, physical anthropology, behavioral neuroscience). In the former, there is likewise substantially more agreement on rejection than on acceptance; in the latter, both the wide differential in manuscript rejection rates and the high correlation between referee recommendations and editorial decisions suggest that reviewers and editors agree more on acceptance than on rejection. Several suggestions are made for improving the reliability and quality of peer review. Further research is needed, especially in the physical sciences.