Background: Control chart methodology has been widely touted for monitoring and improving quality in the health care setting. P charts and U charts are frequently recommended for rate and ratio statistics, but their practical value in infection control may be limited because they (1) are not risk-adjusted and (2) perform poorly with small denominators. The Standardized Infection Ratio is a statistic that overcomes both these obstacles: it is risk-adjusted, and it effectively increases denominators by combining data from multiple risk strata into a single value.

Setting: The AICE National Database Initiative is a voluntary consortium of US hospitals ranging in size from 50 to 900 beds. The infection control professional at each hospital submits monthly risk-stratified data for surgical site infections, ventilator-associated pneumonia, and central line-associated bacteremia.

Methods: Run charts were constructed for 51 hospitals submitting data between 1996 and 1998. Traditional hypothesis tests (P values < .05) flagged 128 suspicious points, and participating infection control professionals investigated and categorized each flag as a "real problem" or "background variation." This gold standard was used to compare the performance of 5 unadjusted and 11 risk-adjusted control charts.

Results: Unadjusted control charts (C, P, and U charts) performed poorly. Flags based on traditional 3-sigma limits suffered from sensitivity < 50%, whereas 2-sigma limits suffered from specificity < 50%. Risk-adjusted charts based on the Standardized Infection Ratio performed much better. The most consistent and useful control chart was the mXmR chart. Under optimal conditions, this chart achieved sensitivity and specificity > 80% and a receiver operating characteristic area of 0.84 (P < .00001).

Conclusions: These findings suggest a specific statistic (the Standardized Infection Ratio) and specific techniques that could make control charts valuable and practical tools for infection control.
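
To illustrate how the Standardized Infection Ratio pools data across risk strata, the following is a minimal sketch of the conventional SIR calculation (observed infections divided by the expected count summed over all strata). The stratum labels, reference rates, and counts are hypothetical and are not taken from the AICE database; the abstract does not specify the risk-stratification scheme used.

```python
# Minimal sketch of a Standardized Infection Ratio (SIR) calculation.
# The strata, reference rates, and counts below are hypothetical examples.

def standardized_infection_ratio(strata):
    """Compute SIR = observed infections / expected infections, where the
    expected count pools every risk stratum:
    expected = sum(denominator_i * reference_rate_i)."""
    observed = sum(s["observed"] for s in strata)
    expected = sum(s["denominator"] * s["reference_rate"] for s in strata)
    if expected == 0:
        raise ValueError("Expected count is zero; SIR is undefined.")
    return observed / expected

# One month of risk-stratified surgical site infection data (hypothetical):
# each stratum has a small denominator on its own, but the SIR combines
# them into a single, risk-adjusted value.
month = [
    {"observed": 0, "denominator": 12, "reference_rate": 0.01},  # risk index 0
    {"observed": 1, "denominator": 20, "reference_rate": 0.03},  # risk index 1
    {"observed": 2, "denominator": 8,  "reference_rate": 0.07},  # risk index 2
]

print(f"SIR = {standardized_infection_ratio(month):.2f}")  # 3 / 1.28 = 2.34
```

Note that the denominator of the SIR is an expected count rather than a raw number of procedures or device-days, which is what allows small strata to be combined without losing the risk adjustment.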
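
The study scores each chart's flags against the investigators' gold standard ("real problem" vs "background variation"). The sketch below shows one way such a comparison yields sensitivity and specificity; the data and the resulting figures are illustrative only and are not the study's analysis code or results.

```python
# Sketch: scoring a control chart's flags against the gold standard.
# The flag vector and gold-standard labels below are hypothetical.

def sensitivity_specificity(flags, real_problem):
    """flags[i] is True if the control chart flagged point i;
    real_problem[i] is True if investigators judged it a real problem."""
    tp = sum(f and r for f, r in zip(flags, real_problem))
    fn = sum((not f) and r for f, r in zip(flags, real_problem))
    fp = sum(f and (not r) for f, r in zip(flags, real_problem))
    tn = sum((not f) and (not r) for f, r in zip(flags, real_problem))
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical example: 10 investigated points.
chart_flags   = [True, True, False, True,  False, False, True, True, False, False]
gold_standard = [True, True, True,  False, False, False, True, True, False, False]

sens, spec = sensitivity_specificity(chart_flags, gold_standard)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")  # 80%, 80% in this toy case
```

Sweeping the flagging threshold (for example, the sigma multiple on the control limits) and recomputing these two quantities at each setting is what traces out the receiver operating characteristic curve whose area is reported in the Results.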