Clinical work sampling: A new approach to the problem of in-training evaluation

Cited by: 53
Authors
Turnbull J. [1 ]
MacFadyen J. [1 ]
Van Barneveld C. [1 ]
Norman G. [2 ]
Affiliations
[1] Department of Medicine, University of Ottawa, Ottawa, Ont.
[2] Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ont.
Keywords
Clinical work sampling; In-training evaluation
DOI
10.1046/j.1525-1497.2000.06099.x
Abstract
OBJECTIVE: Existing systems of in-training evaluation (ITE) have been criticized as unreliable and invalid methods for assessing student performance during clinical education. The purpose of this study was to assess the feasibility, reliability, and validity of a clinical work sampling (CWS) approach to ITE. This approach focused on (1) basing performance data on observed behaviors, (2) using multiple observers and occasions, (3) recording data at the time of performance, and (4) providing a feasible system for receiving feedback. PARTICIPANTS: Sixty-two third-year University of Ottawa students were assessed during their 8-week internal medicine inpatient experience. MEASUREMENTS AND MAIN RESULTS: Four performance rating forms (Admission Rating Form, Ward Rating Form, Multidisciplinary Team Rating Form, and Patient's Rating Form) were introduced to document student performance. Voluntary participation rates varied (12%-64%), and patients were excluded from the analysis because of their low response rate (12%). The mean number of evaluations per student per rotation (19) exceeded the number needed to achieve sufficient reliability. Reliability coefficients were high for the Ward Form (.86) and the Admission Form (.73) but not for the Multidisciplinary Team Form (.22). There was an examiner effect (rater leniency), but it was small relative to real differences between students. The correlation between the Ward Form and the Admission Form was high (.47), while their correlations with the Multidisciplinary Team Form were lower (.37 and .26, respectively). The CWS approach to ITE was judged content valid by expert judges. CONCLUSIONS: The collection of ongoing performance data was reasonably feasible, reliable, and valid.
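The claim that 19 evaluations per rotation "exceeded the number needed to achieve sufficient reliability" is conventionally projected with the Spearman-Brown prophecy formula. A minimal sketch, assuming a hypothetical single-rating reliability of .30 and a target composite reliability of .80 (neither figure is taken from the paper):

```python
import math

def spearman_brown(r_single: float, k: int) -> float:
    """Projected reliability of the mean of k ratings, each with
    single-rating reliability r_single (Spearman-Brown prophecy)."""
    return k * r_single / (1 + (k - 1) * r_single)

def ratings_needed(r_single: float, target: float) -> int:
    """Smallest number of ratings whose average reaches the target reliability."""
    k = target * (1 - r_single) / (r_single * (1 - target))
    return math.ceil(k)

# Hypothetical values for illustration only:
print(ratings_needed(0.30, 0.80))            # minimum ratings under these assumptions
print(round(spearman_brown(0.30, 19), 2))    # projected reliability of 19 ratings
```

Under these illustrative assumptions, roughly ten ratings would suffice, so an observed mean of 19 evaluations would comfortably exceed the requirement.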
Pages: 556-561
Page count: 5