01.21 Establishment of Inter-Rater Reliability in Developing a Video Review Tool for the Trauma Bay

M. Sturdevant1, L. Tien1, N. Owings1, B. Adam1, A. Lee1, A. Abuzeid1, J. Mckenzie1, B. Ange1, C. Blades1, E. Mabes1  1Medical College of Georgia, Department of Surgery, Augusta, GA, USA

Introduction:
The installation of cameras in our trauma bays necessitated the development of a tool to analyze resuscitation flow, communication, and systems-based issues. Trauma video review has been implemented to guide quality and performance improvement projects, streamlining resuscitations and identifying issues that can be addressed and tracked. Before such a tool can be validated, inter-rater reliability (IRR) must be established across video reviewers. Six trauma attendings reviewed videos using a standardized survey containing both subjective and objective questions, and the tool was revised during a period of "teaching" in which faculty learned how to perform the surveys. This pilot study was conducted to establish our IRR as a first step in validating our Trauma Video Review Tool.

Methods:
The Trauma Video Review Tool was designed to assess the trauma team's adherence to our updated institutional guidelines in performing the steps of the primary and secondary surveys during a trauma activation, with consideration for appropriate evaluation adjuncts, clinical decision making, communication, and situational awareness. Survey questions contain validity measurements and use small rating scales to reduce ambiguity and improve validity among raters. Agreement among raters was evaluated on a scale from poor to good agreement.
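The abstract does not state the formula used; for context, the standard definition of Fleiss' kappa (the IRR statistic reported in the Results) is

\kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e}, \qquad \bar{P} = \frac{1}{N}\sum_{i=1}^{N} P_i, \qquad \bar{P}_e = \sum_{j=1}^{k} p_j^{2},

where N is the number of rated items, k the number of response categories, P_i the proportion of agreeing rater pairs on item i, and p_j the overall proportion of ratings falling in category j. The poor-to-good language used here is consistent with Fleiss' conventional benchmarks: kappa below 0.40 indicates poor agreement, 0.40 to 0.75 fair to good agreement, and above 0.75 excellent agreement.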

Results:
Six videos were reviewed; all were level 1 trauma activations, and the majority (85%) involved blunt mechanisms. Twenty-eight evaluations were completed. Percent agreement among the six video raters was calculated, and IRR was determined using Fleiss' kappa. All videos showed fair to good agreement with the exception of video 2, which scored a kappa of 0.149. There was a weakly positive linear correlation of Fleiss' kappa scores across videos reviewed over time (Figure 1).
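As an illustrative sketch only (the abstract does not state what software was used to compute the statistic), Fleiss' kappa can be obtained from a raw item-by-rater matrix with the Python statsmodels package; the rating values below are hypothetical:

# Illustrative sketch only: the abstract does not describe how Fleiss'
# kappa was computed. This uses statsmodels; the ratings are hypothetical.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical data: rows = survey items for one video, columns = the six
# attending raters, values = the category each rater chose on a small scale.
ratings = np.array([
    [1, 1, 1, 2, 1, 1],
    [0, 0, 1, 0, 0, 0],
    [2, 2, 2, 2, 1, 2],
    [1, 1, 1, 1, 1, 1],
])

# Convert raw ratings (items x raters) into counts (items x categories).
table, _categories = aggregate_raters(ratings)

# Fleiss' kappa: chance-corrected agreement across all raters and items.
kappa = fleiss_kappa(table, method='fleiss')
print(f"Fleiss' kappa = {kappa:.3f}")

The aggregate_raters step converts raw category assignments into the item-by-category count table that fleiss_kappa expects.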

Conclusion:
This pilot study evaluated the development of the video review tool and the IRR among the attending video raters. We expect that evaluation and scoring of videos will guide institutional quality improvement projects to streamline resuscitations. With the implementation of trauma protocols, this review tool will serve as a valuable device to validate the effectiveness of the enforced protocols and to provide important information on systems-based issues.