Comparisons of Methods for Multiple Hypothesis Testing in Neuropsychological Research

Cited: 151
Authors
Blakesley, Richard E. [1 ]
Mazumdar, Sati [1 ,2 ]
Dew, Mary Amanda [2 ,3 ,4 ]
Houck, Patricia R. [2 ]
Tang, Gong [1 ]
Reynolds, Charles F., III [2 ]
Butters, Meryl A. [2 ]
Affiliations
[1] Univ Pittsburgh, Dept Biostat, Pittsburgh, PA 15260 USA
[2] Univ Pittsburgh, Sch Med, Dept Psychiat, Pittsburgh, PA 15260 USA
[3] Univ Pittsburgh, Dept Epidemiol, Pittsburgh, PA 15260 USA
[4] Univ Pittsburgh, Dept Psychol, Pittsburgh, PA 15260 USA
Keywords
multiple hypothesis testing; correlated outcomes; familywise error rate; p value adjustment; neuropsychological test performance data; BONFERRONI PROCEDURE; ADJUSTMENT METHODS; CLINICAL-TRIALS;
DOI
10.1037/a0012850
Chinese Library Classification
B849 [Applied Psychology];
Subject Classification Code
040203 ;
Abstract
Hypothesis testing with multiple outcomes requires adjustments to control Type I error inflation, which reduces power to detect significant differences. Maintaining the prechosen Type I error level is challenging when outcomes are correlated. This problem concerns many research areas, including neuropsychological research, in which multiple, interrelated assessment measures are common. Standard p value adjustment methods include Bonferroni-, Sidak-, and resampling-class methods. In this report, the authors aimed to develop a multiple hypothesis testing strategy to maximize power while controlling Type I error. The authors conducted a sensitivity analysis, using a neuropsychological dataset, to offer a relative comparison of the methods, and a simulation study to compare the robustness of the methods with respect to varying patterns and magnitudes of correlation between outcomes. The results lead them to recommend the Hochberg and Hommel methods (step-up modifications of the Bonferroni method) for mildly correlated outcomes and the step-down minP method (a resampling-based method) for highly correlated outcomes. The authors note caveats regarding the implementation of these methods using available software.
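As a concrete illustration of two of the adjustment classes the abstract compares, the sketch below implements the single-step Bonferroni adjustment and the step-up Hochberg adjustment (the latter matches R's `p.adjust(..., method = "hochberg")`). This is a minimal sketch for illustration only, not the authors' code; the four p values are hypothetical, and the resampling-based step-down minP method is omitted because it requires the raw data, not just the p values.

```python
def bonferroni_adjust(pvals):
    """Single-step Bonferroni: multiply each p value by the number of tests, cap at 1."""
    m = len(pvals)
    return [min(1.0, m * p) for p in pvals]

def hochberg_adjust(pvals):
    """Hochberg step-up adjustment.

    Sort the p values ascending; the i-th smallest gets multiplier (m - i + 1),
    then a running minimum is taken starting from the largest p value.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, ascending p
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m - 1, -1, -1):        # step up from the largest p value
        idx = order[rank]
        candidate = (m - rank) * pvals[idx]  # multiplier is 1 for the largest p
        running_min = min(running_min, candidate, 1.0)
        adjusted[idx] = running_min
    return adjusted

# Four hypothetical p values, e.g. from correlated neuropsychological outcomes
p = [0.010, 0.040, 0.030, 0.005]
print(bonferroni_adjust(p))  # each p scaled by m = 4, capped at 1
print(hochberg_adjust(p))    # never larger than the Bonferroni values
```

Because Hochberg is a step-up procedure, its adjusted p values are always less than or equal to Bonferroni's, which is why the abstract recommends it over plain Bonferroni when outcomes are only mildly correlated.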
Pages: 255 - 264
Page count: 10