Browsing by Subject "Educational Measurement"
Now showing 1 - 7 of 7
Item
The 18-Month Curriculum: The Impact of Mixed Learning Clerkships (2022-05)
Kwon, Adelaide Jiwon; Sendelbach, Dorothy; Faulkner, Christopher; Tessnow, Alex
BACKGROUND: UT Southwestern implemented a new, shortened, 18-month pre-clerkship curriculum in the Fall of 2015 in order to extend the clerkship phase from 1 year to 18 months, beginning mid-second year. Under this schedule, MS2 and MS3 students are on the same clerkships simultaneously from January to June, creating opportunities for informal near-peer learning. Each clerkship handles this mixing differently: some create mixed-year teams, others group students with their same-year peers, while still others do not create teams at all. OBJECTIVE: We are interested in determining the impact of mixing students on team dynamics, narrative assessments, and grades. HYPOTHESIS: Upperclassmen and underclassmen participating on the same clerkships in a mixed learning environment will report predominantly positive experiences but will have significant differences in their grades and narrative assessments compared to each other. METHODS: A three-pronged approach was taken, examining qualitative student perspectives, qualitative attending perspectives, and quantitative scores on de-identified student evaluations. In the first arm of the study, students in the classes of 2020 and 2021 were sent a voluntary survey seeking comments on their experiences on mixed clerkships and asking them to indicate their interest in participating in a follow-up 30-minute focus group interview. Four purposeful focus groups of 4-5 students, two groups from each class, were interviewed to learn more about perceived advantages and disadvantages of mixed clerkships. Interviews were audio-recorded, transcribed, and coded for themes. Transcripts were validated by the interviewees and de-identified prior to analysis.
In the second and third arms of the study, student evaluations from 2019 in the Internal Medicine, Pediatrics, and Psychiatry clerkships were gathered and de-identified. From these, 120 narrative assessments were randomly selected for qualitative analysis. Final grades and four scored skills were selected for quantitative analysis. Finally, de-identified quantitative data from 2015-2016, prior to the implementation of the 18-month curriculum, were analyzed for additional insights. RESULTS: In the first arm of the study, interviewees reported overall positive experiences with mixed clerkships. Common perceived advantages for MS2s paired with MS3s included being able to ask "dumb" questions without being judged; being taught "practical" skills such as how to use the electronic medical record (EMR) and how to write a note; having fears allayed through candid discussions; and having a sense of camaraderie. Common advantages for MS3s paired with MS2s included satisfaction in teaching MS2s and being motivated by MS2 enthusiasm. Perceived disadvantages were mentioned less often but still present, mostly centering on fears of being compared with more experienced students by evaluators. In the second and third arms of the study, the mixed clerkships of Pediatrics and Psychiatry tended to show more significant differences between MS2 and MS3 evaluations than the non-mixed clerkship of Internal Medicine. However, this pattern was highly nuanced, and there was some, but not consistent, division between differences observed in "learned" skills such as history-taking or note-writing and "inherent" skills such as professionalism or confidence. Comparison with students on the old curriculum also revealed significant differences across MS2 and MS3 cohorts, suggesting both positive and negative effects of the mixed learning environment on student evaluations.
CONCLUSION: Overall, students reported more positive experiences and perceived advantages on mixed clerkships than negative experiences and disadvantages. However, attendings reported a higher proportion of significant differences between less and more experienced students on mixed clerkships. The contribution of cross-class comparison, as opposed to other factors such as natural student improvement, to these differences remains unclear; on balance, the mixed clerkships resulting from the 18-month curriculum appear to have had a generally positive impact despite some negative effects. Further research, including other assessments of educational outcomes such as shelf exam scores as well as cross-clerkship comparisons, is needed to provide a fuller picture of the impact of these mixed clerkships.

Item
Extended Time as a Testing Accommodation for Students with Disabilities (2014-12-22)
Jennings, Caroline R.; Stavinoha, Peter L.; Silver, Cheryl H.; Holland, Alice A.
The most commonly requested, and most frequently granted, testing accommodation is extended time. However, the literature concerning the utility, fairness, and students’ perceptions of extended time as a testing accommodation for students with disabilities is sparse and inconclusive. Existing research has focused on K-12, college, and graduate-level students, the majority of them diagnosed with learning disabilities. Some data suggest a benefit of extended time for students with disabilities; however, the majority of data suggest a benefit to all students regardless of disability status. As requests for extended time increase, there is growing concern regarding the fairness of extended time as a testing accommodation. Additionally, students’ perceptions and feelings regarding extended time as a testing accommodation vary greatly.
In addition to a summary of these relevant issues, areas for future research are presented to inform clinical practice and ensure that educators, policy makers, and practitioners are meeting the rehabilitative and academic needs of students with disabilities.

Item
Formative Feedback Passport: A Tool to Engage Students in Reflecting on and Incorporating Mid-Point Feedback (2022-05)
Postma, Heather Elizabeth; Abraham, Reeni; Collins, Sarah; Brinker, Stephanie
BACKGROUND: The importance of formative feedback in undergraduate medical education is widely appreciated; however, such feedback has historically been inadequate. The formative feedback passport (FF tool) was developed and implemented in the Internal Medicine (IM) clerkship at UT Southwestern to address these concerns. OBJECTIVE: This dissertation examines the FF tool's role in facilitating an optimal formative feedback process. METHODS: Initially, a pre-intervention study was conducted in which randomly selected, de-identified FF tools from the 2018-2019 IM clerkship were analyzed for content and quality of student reflection. Results revealed that student reflection on feedback was insufficient, with transformative, thoughtful reflection seldom occurring. In response to these findings, the FF tool's instructions were augmented with a module to encourage critical reflection. Subsequently, a post-intervention study was conducted in which randomly selected, de-identified FF tools from the 2020-2021 IM clerkship were analyzed in the same manner as in the pre-intervention study. RESULTS: 149 feedback tips from 50 FF tools were included in the pre-intervention study and 162 feedback tips from 54 FF tools were included in the post-intervention study, for a total of 311 feedback tips from 104 FF tools analyzed. In both studies, most of the feedback tips aligned with established standards for resident and faculty evaluators.
In addition, the mean scores for each REFLECT Rubric criterion were significantly higher in the post-intervention group than in the pre-intervention group. CONCLUSION: The FF tool is an invaluable electronic feedback tool in the IM clerkship at UT Southwestern. It enables documentation of formative feedback and promotes critical student engagement, reflection upon feedback, and the development of plans to implement feedback for improvement. Study results demonstrate that students receive formative feedback from attendings, residents, and patients that is appropriate in content, and that the intervention improved student engagement and the quality of reflection on feedback using the FF tool.

Item
How Learning Strategies and Academic Parameters Predict Medical Student Success (2024-01-30)
O'Connell, Michael; Badia, Rohit; Tellez, Juan; Cook, Grayden; Sachs, Arlene
BACKGROUND: Scores on the Learning and Study Strategies Inventory (LASSI), an assessment of academic skills, have been shown in previous studies to differ significantly among U.S. medical students according to their performance on various examinations during the preclerkship curriculum. This study aimed to evaluate LASSI and other early academic performance markers for predicting the likelihood of shelf exam underperformance in third-year medical students. METHODS: A retrospective analysis of student-specific demographic information and medical school exam performance for 220 medical students from the University of Texas Southwestern was performed. Students were categorized based on underperformance (score below the 25th percentile) on each NBME shelf exam, and statistical analysis was performed to identify predictors of shelf underperformance. RESULTS: For predicting Surgery shelf underperformance, pre-clerkship final exam average (PCA), STEP 1, and LASSI Time Management (TMT) were statistically significant in univariate analysis.
For Internal Medicine, significant univariate predictors were PCA, STEP 1, LASSI Attitude (ATT), Test Strategies (TST), and TMT; for Pediatrics, PCA and STEP 1 quartile; for Obstetrics-Gynecology, PCA, STEP 1, and LASSI Anxiety (ANX), with ANX an independent predictor on multivariate analysis; for Neurology, PCA, STEP 1, LASSI ANX, Information Processing (INP), TST, and average LASSI, with PCA, LASSI Concentration (CON), TMT, and ANX independent predictors on multivariate analysis; for Family Medicine, PCA, STEP 1, LASSI ANX, TST, and Using Academic Resources (UAR), with PCA an independent predictor on multivariate analysis; for Psychiatry, only STEP 1 was significant. CONCLUSION: In contrast to previous studies, no single LASSI scale was significantly associated with underperformance on all 7 NBME shelf exams. Univariate analysis identified several LASSI scales that correlated with NBME underperformance, but the marked inter-clerkship heterogeneity makes use of these scales in early academic intervention impractical. Conversely, PCA was found to be strongly associated with shelf exam underperformance.

Item
Identifying Predictors of Performance on USMLE® Step 1 (2017-03-27)
Shah, Sachin; Sendelbach, Dorothy; Mihalic, Angela; Sachs, Arlene
BACKGROUND: USMLE® Step 1 is considered by residency program directors to be one of the most important factors in selecting medical students for interviews, so it is important for both students and medical schools to maximize scores. However, despite its importance, little research has examined indicators of performance on the exam. OBJECTIVE: The goals of this study are: 1) to determine whether data available prior to admission can predict Step 1 performance, 2) to evaluate whether success in pre-clinical courses at UT Southwestern correlates with success on Step 1, 3) to analyze survey data and determine correlations between study resources/strategies and Step 1 scores, and 4) to develop a mathematical model to flag students at risk of scoring poorly on Step 1.
METHODS: This study utilized data from the UT Southwestern Medical School Class of 2018 (n=238). First, undergraduate GPA, MCAT® scores, and demographic information were correlated with Step 1 scores and medical school grades to determine if pre-admissions factors could predict Step 1 or medical school performance. Second, medical school exam scores from the second pre-clinical year and NBME® Comprehensive Basic Science Self-Assessment (CBSSA) scores were analyzed to determine their correlation with Step 1 scores. Third, the class was surveyed during a dedicated six-week study period before the exam as well as after the exam, and results were analyzed to determine how students prepared. Each question from the surveys was correlated with exam scores to identify which factors led to higher test scores. Finally, the factors with the highest correlations to Step 1 scores were used to develop a mathematical model to predict Step 1 scores using multiple linear regression. This model was then tested to determine its effectiveness at identifying at-risk students. RESULTS: MCAT® biological and physical sciences scores and undergraduate GPA had moderate correlation with Step 1 scores (both R² = 0.10) and weak correlation with medical school grades (R² = 0.060 and 0.058, respectively). Of all factors studied, initial CBSSA scores had the highest predictive value for Step 1 scores (R² = 0.60). Cumulative medical school grades were also highly correlated with Step 1 scores (R² = 0.52). The weekly pre-exam surveys indicate that each successive week of study produced smaller gains in points on Step 1, and most students did not see significant point increases after 6 weeks of study. The post-exam survey shows that students who answered more than 4000 practice questions scored an average of 254 ± 5.3 (95% CI), whereas those who answered fewer than 1700 questions scored an average of 230 ± 10.6 (95% CI).
There was no significant difference between those who studied 4 versus 7 dedicated weeks, or those who studied fewer than 250 hours versus more than 600 hours during the dedicated study time. However, students who started to study 6 months prior to the exam scored 252 ± 6.9 (95% CI), whereas students who waited until the dedicated preparation time scored 237 ± 3.6 (95% CI). A model incorporating pre-admissions and medical school performance factors was developed that accounted for 64% of the score variability, with a standard error of ±8.87. When used to identify those at risk of scoring below 220, the model had a sensitivity of 81% and a specificity of 86%. CONCLUSION: The study finds that MCAT® scores and undergraduate GPA are mediocre determinants of medical school and USMLE performance and should be used cautiously in the admissions process. Conversely, medical school grades and CBSSA scores are very accurate predictors of Step 1 scores, and students who do well on medical school exams generally do well on Step 1. Students should plan five to six weeks for dedicated study, but start reviewing material as early as possible and focus on question-based resources. The model developed from these data can be a useful tool to identify at-risk students for early intervention. This project demonstrates that objective data analysis can be used to guide students toward optimal preparation for Step 1, as well as to identify those at risk of performing poorly on the exam.

Item
Improving Medical Student Performance on Clinical Skills Exams (2019-04-01)
Thomas, Areon Scott; Wagner, James; Lysikowski, Jerzy; Reed, W. Gary
BACKGROUND: Clinical Skills Exams were created in an effort to determine the readiness of medical students to enter residency. These exams place students in a simulated patient encounter and grade them on their proficiency in handling that encounter.
The National Board of Medical Examiners (NBME) instituted the United States Medical Licensing Exam Step 2 Clinical Skills (CS), a Clinical Skills Exam, as part of the pathway to licensing physicians. LOCAL PROBLEM: UT Southwestern (UTSW) has noticed a rise in the number of students failing CS. The grading of CS is confidential; therefore, it has been difficult for UTSW to assess which students are at risk of failure. Through the initiation of a school-administered clinical skills examination, the Objective Structured Clinical Exam (OSCE), UTSW was able to correlate poor performance on that exam with an increased likelihood of failure on CS. This correlation has also been demonstrated in other studies. However, information on what factors lead to improved student performance on a clinical skills exam was lacking. METHODS: We reviewed 236 student records for the class of 2020 to ascertain what extracurricular clinical experiences students had completed in advance of the exam. We used bivariate and multivariate analysis to determine which of these experiences significantly impacted a student's OSCE score. The OSCE uses the same published grading criteria as the CS and is treated by the university as a proxy for the CS. Students are graded in three categories: Integrated Clinical Encounter (ICE), Spoken English Proficiency (SEP), and Communication and Interpersonal Skills (CIS). They are graded by their standardized patients and by faculty members overseeing the exam. The scores for each encounter are then averaged together to create a student's final score. We grouped the scores into three categories for bivariate analysis: High Pass, Low Pass, and Fail. We then analyzed the number of students that fell into each category. We also used a separate computerized exam, the Clinical Data Interpretation (CDI) Exam, to ensure a representative sample. Box plots, chi-square tests, and multivariate analysis were used to analyze our data.
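The bivariate chi-square step on the three outcome categories can be sketched as follows. The counts below are invented for illustration (they are not the study's data): the code builds a 2x3 contingency table of OSCE outcome category by Mock OSCE participation, derives expected counts from the marginal totals, and computes the chi-square statistic.

```python
# Hedged sketch of a chi-square test of independence on a 2x3 table of
# OSCE outcome (High Pass / Low Pass / Fail) by MOSCE participation.
# All counts are hypothetical, chosen only to make the arithmetic concrete.
observed = {
    "MOSCE":    {"High Pass": 60, "Low Pass": 30, "Fail": 5},
    "no MOSCE": {"High Pass": 40, "Low Pass": 45, "Fail": 15},
}

rows = list(observed)
cols = ["High Pass", "Low Pass", "Fail"]
total = sum(observed[r][c] for r in rows for c in cols)
row_tot = {r: sum(observed[r][c] for c in cols) for r in rows}
col_tot = {c: sum(observed[r][c] for r in rows) for c in cols}

# Chi-square statistic: sum of (O - E)^2 / E over all cells, where the
# expected count E = row total * column total / grand total.
chi2 = sum(
    (observed[r][c] - row_tot[r] * col_tot[c] / total) ** 2
    / (row_tot[r] * col_tot[c] / total)
    for r in rows
    for c in cols
)

df = (len(rows) - 1) * (len(cols) - 1)  # df = 2 for a 2x3 table
CRITICAL_05 = 5.991                     # chi-square critical value at alpha=0.05, df=2
print(f"chi2={chi2:.2f}, df={df}, significant={chi2 > CRITICAL_05}")
```

With real data one would typically report an exact p-value (e.g. via `scipy.stats.chi2_contingency`); comparing against the tabled critical value keeps this sketch dependency-free.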
We chose box plots to examine the distribution of the data and to give us a starting point for analysis. From there, chi-square analysis provided information on which intervention had the most significant effect on OSCE scores. Finally, multivariate analysis was performed to search for interactions between the interventions and to check for a linear relationship between MOSCE and OSCE scores. INTERVENTIONS: Mock OSCE (MOSCE), Student-Run Free Clinic (SRFC) volunteering, and the Longitudinal Outpatient Orientation Clerkship (LOOC). RESULTS: As SEP scores were well above 90% for all students and no significant findings were discovered in our initial box plots, SEP was dropped from further analyses. Our box plots suggested a positive association between taking the MOSCE and OSCE CIS and ICE subcomponent scores. This association proved to be statistically significant by linear regression and multivariable analyses. There was no significant association between OSCE performance and participation in an SRFC by chi-square analysis; this result may reflect insufficient study power. CONCLUSION: Based on these results, it appears that taking the Mock OSCE is associated with improved student performance on both the CIS and ICE subcomponents of the OSCE. In light of these findings, we suggested making the Mock OSCE more widely available to all UTSW students for the 2018 administration of the exam.

Item
The teaching and evaluation of clinical competence: an undergraduate medical education perspective (2015-01-30)
Wagner, James M.