R.P. Lin, C. Martinez, B.M. Abell, P.P. Lin, J. Lee — George Washington University School of Medicine and Health Sciences, Department of Surgery, Washington, DC, USA
Introduction: Standardized multiple-choice knowledge examinations have long been used to assess knowledge during clerkships. Recent trends have shifted reliance from these objective examinations toward competency-based evaluations and entrustable professional activities (EPAs). The purpose of this study was to determine the relationship of subjective evaluations and EPA evaluations with an in-house case-based examination and a standardized knowledge examination (NBME) during the third-year surgery clerkship.
Methods: A retrospective cross-sectional study of third-year student grades from April 2022 to May 2023 was performed (n = 169). Subjective evaluations and EPA evaluations (EPAs 1, 2, 3, and 6) were based on 5-point scales. The subjective evaluations used a uniform evaluation form shared across all clerkships at the study institution, while the EPA evaluations used a previously published scale of entrustability. Correlations among subjective evaluations, EPA evaluations, the case-based examination, and the NBME examination were determined using Pearson's correlation coefficient.
Results: There was a moderate correlation between student performance on the in-house case-based examination and the NBME examination (r = 0.38). When students were stratified by performance on the in-house examination, those who scored below the mean on the in-house examination showed a strong correlation with poorer NBME performance (r = 0.76), with average NBME scores approximately 8.5 points below the national mean. Subjective clinical evaluations demonstrated moderate correlation with the case-based examination (r = 0.43) but correlated poorly with the NBME examination (r = 0.11). Higher EPA scores correlated strongly with the in-house examination (ρ = 0.66).
Conclusion: A variety of assessment tools were used to evaluate the summative performance of surgery clerkship students. Subjective clinical evaluations correlated poorly with a standardized knowledge-based examination but correlated better with a case-based examination. Examinations that better align with the open-ended questioning and clinical discussions taking place during clinical care may better reflect student performance and application of knowledge. Similarly, EPA scores, which assess the degree of independence in clinical care, correlated well with the case-based examination. These data suggest that assessments which simulate the clinical environment may address some of the shortcomings of multiple-choice examinations.