75.02 Eye-tracking in Educational Assessment: An Automated Procedure to Define Dynamic Areas of Interest

E. Fichtel1, J. Park2, S. Parker3, N. Lau1, S. D. Safford2
1Virginia Tech, Grado Department of Industrial and Systems Engineering, Blacksburg, VA, USA; 2Virginia Tech Carilion School of Medicine, Surgery, Roanoke, VA, USA; 3Virginia Tech Carilion Research Institute, Roanoke, VA, USA

Introduction:
Quality of assessment in medical education impacts training efficiency and patient outcomes. Eye tracking has demonstrated potential to provide unobtrusive and valid assessment of surgical skills by highlighting where experts and trainees focus during critical periods of surgical procedures. The locations of expert eye gazes can be used to define Areas of Interest (AOIs), which can serve as evaluation criteria for where novices should focus. That is, eye tracking provides a means to determine whether novices observe the same fields as experts. However, because expert eye gazes shift constantly over the course of a procedure, defining AOIs can be time consuming and unnecessarily subjective when commercial software relies on the evaluator to specify the AOIs manually. To improve eye-tracking assessment, we developed a procedure, easily automated with a common scripting language (e.g., R, Python), for defining dynamic AOIs for data analysis.
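As a minimal illustration of this evaluation criterion (not the study's implementation), checking whether a novice's gaze sample falls inside an expert-defined circular AOI reduces to a distance comparison; the function name and arguments below are assumptions for illustration.

```r
# Minimal sketch (assumed names, not the study's script): does a novice gaze sample
# fall inside a circular AOI centered on the expert's gaze at the same time point?
gaze_in_aoi <- function(gaze_x, gaze_y, aoi_x, aoi_y, aoi_radius) {
  sqrt((gaze_x - aoi_x)^2 + (gaze_y - aoi_y)^2) <= aoi_radius
}
```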

Methods:
The procedure for generating dynamic AOIs was developed with eye-gaze samples collected from three expert surgeons viewing videos of laparoscopic cholecystectomy on a computer. Raw data on when (i.e., timestamps) and where (i.e., coordinates) expert gazes fell on the monitor were exported, and the dynamic AOIs were defined using the R statistical software. The R script removed invalid data (e.g., eye gaze outside the monitor) and executed a loop to specify a circular AOI for every predefined time interval. The center of each AOI was placed at the expert's eye-gaze location, and its size was set to 3 degrees of visual angle. The R script output a text file that was imported into commercial software for quantitative eye-gaze analysis. At this exploratory stage, we performed an ANOVA to test whether eye-gaze agreement among the three expert surgeons would be lower for 10 videos with adverse events than for 9 videos without.
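The following R sketch illustrates the kind of procedure described above. The column names, sampling interval, monitor dimensions, viewing distance, and pixel density are assumptions for illustration; they are not the exact parameters of the study's script.

```r
library(dplyr)  # assumed available for data wrangling

# Convert a visual angle (degrees) to pixels; viewing distance and pixel density are assumed values.
deg_to_px <- function(deg, dist_cm = 60, px_per_cm = 38) {
  2 * dist_cm * tan((deg / 2) * pi / 180) * px_per_cm
}

# Define one circular AOI per time interval from expert gaze samples.
# Assumed columns: timestamp (ms), x, y (pixel coordinates on the monitor).
define_dynamic_aois <- function(gaze, interval_ms = 100,
                                monitor_w = 1920, monitor_h = 1080) {
  radius_px <- deg_to_px(3) / 2  # AOI diameter of 3 degrees of visual angle
  gaze %>%
    filter(x >= 0, x <= monitor_w, y >= 0, y <= monitor_h) %>%  # drop gaze outside the monitor
    mutate(interval = floor(timestamp / interval_ms)) %>%       # bin samples into time intervals
    group_by(interval) %>%
    summarise(center_x = mean(x), center_y = mean(y),           # AOI centered on mean gaze position
              radius = radius_px, .groups = "drop")
}

# The resulting table can be written out for import into commercial analysis software, e.g.:
# write.table(define_dynamic_aois(gaze), "dynamic_aois.txt", row.names = FALSE, sep = "\t")
```

The exploratory comparison could then be run on per-video agreement scores with a call such as aov(agreement ~ adverse_event, data = agreement_by_video), where the data frame and column names are again hypothetical.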

Results:
This procedure created dynamic AOIs that closely resembled the heat map of expert eye gazes in the commercial eye-gaze analysis software (Figure 1), lending credibility to the validity of the procedure. Further, the ANOVA indicated significantly lower agreement among experts for videos with adverse events (F(1, 35)=10.02, p=.003), suggesting the dynamic AOIs were sensitive to changes in complexity across surgeries.

Conclusion:
Our method of automatically generating dynamic AOIs can reduce the labor and subjectivity involved when evaluators define AOIs manually for analysis. Future work will introduce additional dynamic AOI shapes to reflect the complex surgical environment. Our method should improve the efficiency, sensitivity, and reliability of analyzing eye gaze in dynamic surgical environments.