R. Nunez¹, E. Farah¹, J. Hazen¹, S. Castillo¹, A. Abreu¹, A. Guzzetta¹, D.J. Scott¹, G. Sankaranarayanan¹, H. Zeh¹, P.M. Polanco¹; ¹University of Texas Southwestern Medical Center, Department of Surgery, Dallas, TX, USA
Introduction: As the adoption of robotic surgery continues to grow, the collaboration, expertise, and skills of surgical teams become crucial to ensuring good patient outcomes. The bedside assistant has an increasingly autonomous role, performing critical aspects of the operation that can affect surgical outcomes. To date, there is no standardized proficiency-based curriculum or skill assessment tool for robotic bedside assistance training. We aimed to assess the effectiveness of a novel video-based assessment (VBA) tool for evaluating bedside assist performance and to compare trainee performance to an expert benchmark.
Methods: General surgery interns underwent a 2-hour simulated training session focused on robotic bedside assistance. Before and after the training, participants performed a set of robot docking and instrument setup drills. Their performances were video-recorded and evaluated using a newly established grading rubric comprising detailed assessments across multiple domains. Participants' knowledge was also assessed with a 20-item multiple-choice test covering console and bedside assistance. Six experienced surgical physician assistants were assessed with the same instruments to establish an expert benchmark.
Results: Twenty (n = 20) general surgery interns participated in the study; 55% were female (n = 11) and 45% were White (n = 9). Most participants regarded the course as "vital" for their training (n = 15, 75%), and the majority reported high confidence in their bedside assist skills after the simulated training (n = 15, 75%). On video review, the mean performance score (range 5–30) increased from 14.3 to 22.9 (p < 0.001). Similarly, performance on the 20-item multiple-choice test improved from an average of 9.4 correct answers to 16.3 (p < 0.001). However, while interns' post-training bedside performance did not reach the expert benchmark (p < 0.001), there was no significant difference in theoretical knowledge compared with the experts (p = 0.327).
Conclusion: The robotic bedside assist simulation course led to significant improvement in both the theoretical knowledge and the practical skills of interns. Our study supports the validity of our novel VBA tool for evaluating bedside assist skills, discriminating between expert and novice performance, and detecting improvement after deliberate training.