Improving Medical Student Performance on Clinical Skills Exams

Date

2019-04-01

Abstract

BACKGROUND: Clinical Skills Exams were created to determine the readiness of medical students to enter residency. These exams place students in a simulated patient encounter and grade them on their proficiency in handling that encounter. The National Board of Medical Examiners (NBME) instituted the United States Medical Licensing Examination Step 2 Clinical Skills (CS), a Clinical Skills Exam, as part of the pathway to physician licensure.

LOCAL PROBLEM: UT Southwestern (UTSW) has noticed a rise in the number of students failing CS. Because the grading of CS is confidential, it has been difficult for UTSW to assess which students are at risk of failure. Through its school-administered clinical skills examination, the Objective Structured Clinical Exam (OSCE), UTSW was able to correlate poor performance on that exam with an increased likelihood of failing CS, a correlation that has also been demonstrated in other studies. However, information on which factors lead to improved student performance on a clinical skills exam was lacking.

METHODS: We reviewed 236 student records for the class of 2020 to ascertain which extracurricular clinical experiences students had completed in advance of the exam, and we used bivariate and multivariate analysis to determine which of these experiences significantly affected a student's OSCE score. The OSCE uses the same published grading criteria as the CS and is treated by the university as a proxy for the CS. Students are graded in three categories: Integrated Clinical Encounter (ICE), Spoken English Proficiency (SEP), and Communication and Interpersonal Skills (CIS). They are graded by their standardized patients and by faculty members overseeing the exam, and the scores for each encounter are averaged to produce a student's final score. For bivariate analysis we grouped the scores into three categories, High Pass, Low Pass, and Fail, and analyzed the number of students in each category. We also used a separate computerized exam, the Clinical Data Interpretation (CDI) Exam, to ensure a representative sample. Box plots, chi-square tests, and multivariate analysis were used to analyze the data: box plots to examine the distribution of the data and provide a starting point for analysis, chi-square analysis to identify which intervention had the most significant effect on OSCE scores, and multivariate analysis to search for interactions between the interventions and to check for a linear relationship between MOSCE and OSCE scores.

INTERVENTIONS: Mock OSCE (MOSCE), Student-Run Free Clinic (SRFC) volunteering, and the Longitudinal Outpatient Orientation Clerkship (LOOC).

RESULTS: Because SEP scores were well above 90% for all students and no significant findings emerged in the initial box plots, SEP was dropped from further analyses. The box plots suggested a positive association between the OSCE CIS and ICE subcomponent scores and taking the MOSCE, and this association proved statistically significant on linear regression and multivariable analyses. There was no significant association between OSCE performance and participation in an SRFC by chi-square analysis; this result may reflect insufficient study power.

CONCLUSION: Based on these results, taking the Mock OSCE is associated with improved student performance in both the CIS and ICE subcomponents of the OSCE. In light of these findings, we suggested making the Mock OSCE more widely available to all UTSW students for the 2018 administration of the exam.
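As a rough illustration of the analysis pipeline described in METHODS, the sketch below shows how grouped OSCE scores could be cross-tabulated against MOSCE participation with a chi-square test, and how a multivariate linear model could assess the interventions jointly. This is a minimal sketch only: the file name, column names, and pass/fail cutoffs are hypothetical placeholders, not the study's actual data dictionary or grading thresholds.

```python
# Illustrative sketch only: the file "osce_records.csv", the column names,
# and the score cutoffs below are assumptions, not the study's actual data.
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

df = pd.read_csv("osce_records.csv")  # one de-identified row per student

# Group overall OSCE scores into Fail / Low Pass / High Pass for the
# bivariate analysis, using placeholder cutoffs.
df["osce_group"] = pd.cut(
    df["osce_total"],
    bins=[float("-inf"), 60, 75, float("inf")],
    labels=["Fail", "Low Pass", "High Pass"],
)

# Chi-square test: does the distribution of score groups differ between
# students who did and did not take the Mock OSCE?
table = pd.crosstab(df["osce_group"], df["took_mosce"])
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")

# Multivariate linear model of the ICE subcomponent score on all three
# interventions, estimating each effect while adjusting for the others.
model = smf.ols("osce_ice ~ took_mosce + srfc_volunteer + looc", data=df).fit()
print(model.summary())
```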

Subjects

Clinical Competence, Educational Measurement, Problem-Based Learning, Students, Medical
