Different Approaches to Assessing the Quality of Explanations Following a Multiple-Document Inquiry Activity in Science

Cited by: 33
Authors
Wiley J. [1 ]
Hastings P. [2 ]
Blaum D. [3 ]
Jaeger A.J. [1 ]
Hughes S. [2 ]
Wallace P. [3 ]
Griffin T.D. [1 ]
Britt M.A. [3 ]
Affiliations
[1] University of Illinois at Chicago, Chicago, IL
[2] DePaul University, Chicago, IL
[3] Northern Illinois University, DeKalb, IL
Funding
U.S. National Science Foundation
Keywords
Automatic assessment; Causal relations; Causal structure; Explanations; Machine learning; Mental models; Natural language processing;
DOI
10.1007/s40593-017-0138-z
Abstract
This article describes several approaches to assessing student understanding using written explanations that students generate as part of a multiple-document inquiry activity on a scientific topic (global warming). The current work attempts to capture the causal structure of student explanations as a way to detect the quality of the students’ mental models and understanding of the topic by combining approaches from Cognitive Science and Artificial Intelligence, and applying them to Education. First, several attributes of the explanations are explored by hand coding and leveraging existing technologies (LSA and Coh-Metrix). Then, we describe an approach for inferring the quality of the explanations using a novel, two-phase machine-learning approach for detecting causal relations and the causal chains that are present within student essays. The results demonstrate the benefits of using a machine-learning approach for detecting content, but also highlight the promise of hybrid methods that combine ML, LSA and Coh-Metrix approaches for detecting student understanding. Opportunities to use automated approaches as part of Intelligent Tutoring Systems that provide feedback toward improving student explanations and understanding are discussed. © 2017, International Artificial Intelligence in Education Society.
Pages: 758–790 (32 pages)
References (71 in total)
[1]  
Bejan C.A., Hathaway C., UTD-SRL: A pipeline architecture for extracting frame semantic structures. Proceedings of the 4th International Workshop on Semantic Evaluations (pp. 460–463). Prague, Czech Republic: Association for Computational Linguistics, (2007)
[2]  
Bennington B.J., The carbon cycle and climate change, (2009)
[3]  
Braaten M., Windschitl M., Working toward a stronger conceptualization of scientific explanation for science education, Science Education, 95, pp. 639-669, (2011)
[4]  
Braten I., Stromso H.I., Britt M.A., Trust matters: examining the role of source evaluation in students’ construction of meaning within and across multiple texts, Reading Research Quarterly, 44, pp. 6-28, (2009)
[5]  
Britt M.A., Aglinskas C., Improving students’ ability to use source information, Cognition and Instruction, 20, pp. 485-522, (2002)
[6]  
Britt M.A., Wiemer-Hastings P., Larson A., Perfetti C.A., Automated feedback on source citation in essay writing, International Journal of Artificial Intelligence in Education, 14, pp. 359-374, (2004)
[7]  
Chklovski T., Pantel P., VerbOcean: Mining the web for fine-grained semantic verb relations, (2004)
[8]  
Condon W., Large-scale assessment, locally-developed measures, and automated scoring of essays: fishing for red herrings?, Assessing Writing, 18, 1, pp. 100-108, (2013)
[9]  
Crossley S.A., Kyle K., McNamara D.S., The tool for the automatic analysis of text cohesion (TAACO): automatic assessment of local, global, and text cohesion, Behavior Research Methods, pp. 1-11, (2015)
[10]  
Crossley S.A., McNamara D.S., Cohesion, coherence, and expert evaluations of writing proficiency. Proceedings of the 32nd annual conference of the Cognitive Science Society, pp. 984-989, (2010)