A task-oriented non-interactive evaluation methodology for information retrieval systems

Cited by: 21
Author
Reid J. [1 ]
Institution
[1] Department of Computer Science, Queen Mary and Westfield College, University of London
Source
Information Retrieval | 2000 / Vol. 2 / No. 1
Keywords
Nature of relevance; Task framework; Test collection; User-centred evaluation
DOI
10.1023/A:1009906420620
Abstract
Past research has identified many different types of relevance in information retrieval (IR). So far, however, most evaluation of IR systems has been through batch experiments conducted with test collections containing only expert, topical relevance judgements. Recently, there has been some movement away from this traditional approach towards interactive, more user-centred methods of evaluation. However, these are expensive for evaluators in terms both of time and of resources. This paper describes a new evaluation methodology, using a task-oriented test collection, which combines the advantages of traditional non-interactive testing with a more user-centred emphasis. The main features of a task-oriented test collection are the adoption of the task, rather than the query, as the primary unit of evaluation and the naturalistic character of the relevance judgements. © 2000 Kluwer Academic Publishers.
Pages: 115-129
Page count: 14
References
3 items
[1] Barry C.L., Schamber L., Users' criteria for relevance evaluation: A cross-situational comparison, Information Processing and Management, 34, pp. 219-236, (1998)
[2] Bates M., Information search tactics, Journal of the American Society for Information Science, 30, pp. 205-214, (1979)
[3] Beaulieu M., Robertson S.E., Rasmussen E.M., Evaluating interactive systems in TREC, Journal of the American Society for Information Science, 47, pp. 85-94, (1996)