Assessing professional competence: from methods to programmes

Cited by: 746
Authors
van der Vleuten, CPM [1]
Schuwirth, LWT [1]
Affiliation
[1] Univ Maastricht, Dept Educ Dev & Res, NL-6200 MD Maastricht, Netherlands
Keywords
education; medical; undergraduate; methods; standards; educational measurement; professional competence
DOI
10.1111/j.1365-2929.2005.02094.x
Chinese Library Classification (CLC)
G40 [Education]
Subject classification codes
040101; 120403
Abstract
INTRODUCTION We use a utility model to illustrate, firstly, that selecting an assessment method involves context-dependent compromises and, secondly, that assessment is not a measurement problem but an instructional design problem, comprising educational, implementation and resource aspects. In the model, assessment characteristics are weighted differently depending on the purpose and context of the assessment.

EMPIRICAL AND THEORETICAL DEVELOPMENTS Of the characteristics in the model, we focus on reliability, validity and educational impact, and argue that they are not inherent qualities of any instrument. Reliability depends not on structuring or standardisation but on sampling. Key issues concerning validity are authenticity and the integration of competencies. Assessment in medical education addresses complex competencies and thus requires quantitative and qualitative information from different sources as well as professional judgement. Adequate sampling across judges, instruments and contexts can ensure both validity and reliability. Despite recognition that assessment drives learning, this relationship has been little researched, possibly because of its strong context dependence.

ASSESSMENT AS INSTRUCTIONAL DESIGN If assessment is to stimulate learning and requires adequate sampling, in authentic contexts, of the performance of complex competencies that cannot be broken down into simple parts, we need to shift from individual methods to an integral programme intertwined with the education programme. We therefore need an instructional design perspective.

IMPLICATIONS FOR DEVELOPMENT AND RESEARCH Programmatic instructional design hinges on a careful description and motivation of choices, whose effectiveness should be measured against the intended outcomes. We should not evaluate individual methods, but rather provide evidence of the utility of the assessment programme as a whole.
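The utility model mentioned in the abstract is usually cited in the multiplicative form van der Vleuten proposed in 1996, in which an assessment's utility is the product of its characteristics, each carrying a weight that reflects the purpose and context of the assessment. A minimal sketch of that formulation, assuming the commonly listed components (reliability R, validity V, educational impact E, acceptability A and cost-efficiency C) and rendering the weights as exponents, which is our notation rather than the paper's:

U = R^{w_R} \cdot V^{w_V} \cdot E^{w_E} \cdot A^{w_A} \cdot C^{w_C}

The multiplicative structure expresses the compromise the abstract describes: a method that scores zero on any one characteristic has no utility overall, while the weights w shift the emphasis among characteristics for a given assessment purpose and context.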
Pages: 309-317
Number of pages: 9