IRT Modeling of Tutor Performance to Predict End-of-Year Exam Scores

Cited by: 13
Authors
Ayers, Elizabeth [1 ]
Junker, Brian [1 ]
Affiliations
[1] Carnegie Mellon Univ, Dept Stat, Pittsburgh, PA 15213 USA
Keywords
cognitive modeling; Bayesian inference; intelligent tutoring systems; item response theory; reliability
DOI
10.1177/0013164408318758
CLC Classification: G44 [Educational Psychology]
Discipline Codes: 0402; 040202
Abstract
Interest in end-of-year accountability exams has increased dramatically since the passage of the No Child Left Behind Act in 2001. With this increased interest comes a desire to use student data collected throughout the year to estimate student proficiency and predict how well students will perform on end-of-year exams. This article uses student performance on the Assistment System, an online mathematics tutor, to show that replacing percentage correct with an Item Response Theory estimate of student proficiency leads to better-fitting prediction models. In addition, it uses other tutor performance metrics to further increase prediction accuracy. Prediction error bounds are also calculated to provide an absolute standard against which the models can be compared.
Pages: 972-987
Page count: 16
References (30 total)
[1] [Anonymous], WINBUGS BAYESIAN INF
[2] [Anonymous], CONQUEST GEN ITEM RE
[3] [Anonymous], 2021, Bayesian Data Analysis
[4] ANOZIE N, 2006, P AAAI WORKSH ED DAT, P1
[5] AYERS E, 2006, P ED DAT MIN AM ASS, P14
[6] Bishop JH. The effect of curriculum-based external exit exam systems on student achievement [J]. JOURNAL OF ECONOMIC EDUCATION, 1998, 29(02): 171-182
[7] Cronbach LJ, 1951, PSYCHOMETRIKA, V16, P297
[8] Embretson SE. Generating items during testing: Psychometric issues and models [J]. PSYCHOMETRIKA, 1999, 64(04): 407-433
[9] FAROOQUE P, 2005, BEHAV SKILLS MCAS AS
[10] Feng MY, 2006, LECT NOTES COMPUT SC, V4053, P31