56.08 Ultrasound-Guided Central Venous Catheter Insertion Is Safer, but in Whose Hands? A Meta-Analysis

C. J. Lee1, R. S. Chamberlain1,2,3  1Saint Barnabas Medical Center, Surgery, Livingston, NJ, USA 2New Jersey Medical School, Surgery, Newark, NJ, USA 3St. George’s University School Of Medicine, St. George’s, St. George’s, Grenada

Introduction:  Real-time ultrasound guidance for the placement of central venous catheters (CVCs) is purported to increase placement success and to reduce complications, but in whose hands? This meta-analysis assesses all available evidence comparing landmark-guided (LG) with ultrasound-guided (UG) CVC insertion with regard to success, safety/complications, and the experience of the operator placing the CVC.

Methods:  A comprehensive literature search of Medline, PubMed, and the Cochrane Central Register of Controlled Trials was performed. Seventeen prospective, randomized controlled trials that compared UG with LG techniques of CVC placement and specified operator experience were identified. Data were extracted on study design, study size, operator experience, rate of successful catheter placement, number of attempts to success, and rate of accidental arterial puncture. A meta-analysis was constructed to analyze the data.

Results: 17 trials with a total of 3,686 subjects were included. 1,684 CVCs were placed by LG and 1,822 by UG. Among the UG group, 910 CVCs were placed by junior operators (<5 years of experience) and 912 by senior operators. In the LG group, 754 CVCs were placed by junior operators and 930 by senior operators. UG CVC insertion was associated with a 14% increase in the likelihood of successful CVC placement for junior operators (risk ratio (RR), 1.14; 95% CI, 1.08-1.21) and an 8% increase for senior operators (RR, 1.08; 95% CI, 1.02-1.14). The mean number of needle attempts until CVC placement was lower in the UG group (standardized difference in means, -0.85; 95% CI, -1.18 to -0.51); however, no significant difference was seen between junior and senior operators. UG was associated with a 49% decrease in the likelihood of accidental arterial puncture among junior operators (RR, 0.51; 95% CI, 0.28-0.95) and an 86% decrease among senior operators (RR, 0.14; 95% CI, 0.07-0.26). A statistically significant difference between junior and senior operators was observed only with regard to accidental arterial puncture (p=0.004).
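
The pooled ratios above follow the standard log-scale construction. As a sketch (with hypothetical success counts, since the abstract reports only group sizes and pooled estimates), a risk ratio and its Wald 95% CI can be computed as:

```python
import math

def risk_ratio_ci(e1, n1, e2, n2, z=1.96):
    """Risk ratio of group 1 vs group 2 with a Wald 95% CI on the log scale.
    e = number of successful placements, n = group size."""
    rr = (e1 / n1) / (e2 / n2)
    se = math.sqrt(1 / e1 - 1 / n1 + 1 / e2 - 1 / n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# hypothetical counts for UG vs LG junior operators (NOT the study's data)
rr, lo, hi = risk_ratio_ci(e1=870, n1=910, e2=630, n2=754)
```

With these illustrative counts the point estimate happens to fall near the reported junior-operator RR of 1.14; an actual meta-analysis would additionally weight each trial's log(RR) by its inverse variance before pooling.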

Conclusion: UG CVC placement in adult patients is effective in improving the overall success rate and decreasing both the number of attempts until cannulation and the arterial puncture rate, regardless of operator experience level. Accidental arterial punctures during UG CVC placement were lower in both groups, but a significantly greater benefit was observed among senior operators compared to junior operators. UG CVC placement improves success rate, efficiency, and safety, even among those most comfortable or confident with traditional landmark-guided techniques.

56.09 Monitoring the Autonomic Nervous Activities to Evaluate the Mental Workload of Surgical Operations

K. Yamanouchi1, N. Hayashida1,2, S. Kuba1, C. Sakimura1, F. Fujita1, K. Kanetaka1, M. Takatsuki1, T. Kuroki1, N. Takamura2, S. Eguchi1  1Nagasaki University, Department Of Surgery, Nagasaki, Nagasaki, Japan 2Nagasaki University, Department Of Global Health, Medicine And Welfare, Atomic Bomb Disease Institute, Nagasaki, Nagasaki, Japan

Introduction:  Surgeons sometimes experience tension and stress for long periods of time when performing operations. Heart rate variability (HRV) is the beat-to-beat variation in heart rate, and it has been used as a parameter of mental load, for example, to assess pilots’ stress levels during flight. The purpose of this study was to evaluate the mental workload of surgeons before, during, and after surgical operations, especially during (1) pancreatoduodenectomy (PD), performed by open or laparoscopic methods, and (2) living donor liver transplantation (LDLT), both of which are complex and usually take a long time to complete. Additionally, the parameters were compared across various phases of the operations.

Methods:  We studied 2 surgeons who each had more than 20 years of experience. Before, during, and after the operations, the surgeons wore a small monitoring device, which included a bi-axial accelerometer (ACM), thermometer, ECG, central processing unit (CPU), and memory IC. Using the frequency-domain method with the ECG, we measured the high-frequency component (HF; 0.15 to 0.4 Hz), which represents parasympathetic activity, and the low-frequency (LF; 0.04 to 0.15 Hz)/HF ratio, which represents sympathetic activity, as well as the heart rate.
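
As an illustration of the frequency-domain method described above (a minimal stdlib sketch, not the monitoring device's actual processing; the 4 Hz resampling rate and the synthetic heart-rate series are assumptions), LF and HF band powers and their ratio can be computed with a plain DFT:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` in the [f_lo, f_hi) Hz band via a direct DFT."""
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]  # remove the DC component
    power = 0.0
    for k in range(1, n // 2):
        if f_lo <= k * fs / n < f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

fs, n = 4.0, 400  # 4 Hz evenly resampled tachogram over a 100 s window (assumed)
# synthetic series: a 0.10 Hz (LF) and a 0.30 Hz (HF) oscillation around 70 bpm
hr = [70 + 2.0 * math.sin(2 * math.pi * 0.10 * i / fs)
         + 1.0 * math.sin(2 * math.pi * 0.30 * i / fs) for i in range(n)]

lf = band_power(hr, fs, 0.04, 0.15)  # sympathetic + parasympathetic band
hf = band_power(hr, fs, 0.15, 0.40)  # parasympathetic band
lf_hf = lf / hf
```

Here the LF amplitude is twice the HF amplitude, so the LF/HF ratio comes out near 4; in practice the irregular RR-interval series must first be interpolated onto an even time grid before the transform.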

Results: (1) In all 5 cases of PD, the HF components were significantly lower and the LF/HF ratios higher during the operation than before the operation (181.4 vs 479.5, p<0.01; 9.1 vs 2.9, p<0.01). When we reviewed the fluctuations of the values during and after the operation, the HF did not drop in any of these cases. (2) Out of the 4 LDLT procedures, the HF was lower in 2 and the LF/HF higher in 3 during the operation compared with before the operation. These sympathetic nerve-predominant changes in status did not return to baseline for at least one hour after surgery. LDLT is associated with risks of bleeding due to the patients’ liver failure or portal hypertension and consists of several potentially fatal steps, i.e., removal of the whole liver; reconstruction of the hepatic veins, portal veins, hepatic artery, and bile duct; and reperfusion of the implanted liver. When we compared the values across these phases of LDLT, in all cases the HF was significantly lower and/or the LF/HF significantly higher during the reconstruction of vessels or bile ducts than during the removal of the liver.

Conclusion: While sympathetic nerve activity increased during surgery in most cases compared with preoperative levels, the autonomic nerve status did not return to baseline for quite a while after surgery. Moreover, different procedures during operations induce diverse autonomic nerve changes in surgeons. Thus, monitoring changes in autonomic nervous activity could provide a powerful tool for the objective evaluation of the mental workload that operations impose on surgeons.


56.10 More Than A “Camera Holder”: Teaching Laparoscopic Skills To Students During The Surgery Clerkship

P. I. Abbas1,2, J. G. Holder-Haynes1, D. J. Taylor1, B. G. Scott1, M. L. Brandt1,2, B. J. Naik-Mathuria1,2  1Baylor College Of Medicine, Michael E. DeBakey Department Of Surgery, Houston, TX, USA 2Texas Children’s Hospital, Division Of Pediatric Surgery, Houston, TX, USA

Introduction:

The majority of general surgery operations are now performed laparoscopically. Students are usually relegated to holding the camera, a primarily passive learning experience, as opposed to the skills they might actively learn during open procedures. We introduced laparoscopic skills sessions for medical students during their first surgery rotation in order to provide hands-on experience. We hypothesized that the exposure and the ability to practice during the sessions would improve basic laparoscopic skills and increase student interest in a surgical career.

Methods:

All students on the core general surgery rotation attended 2 separate 1-hour sessions in the surgical simulation lab. Instruction was provided by Department of Surgery faculty members. Surveys were used to assess prior exposure to laparoscopic surgery and video games (VG) as well as interest in a surgical career before and after the course. Course effectiveness was assessed by a pre- and post-instruction laparoscopic peg transfer exercise.

Results:

One hundred and one students participated in the course. Eighty-two students had documented pre- and post-instruction peg transfer times. There was an overall improvement in median transfer times after instruction (pre 63 sec (IQR 46-84.5) vs. post 50.5 sec (IQR 39-65.2), p<0.001). When stratified by gender, men (n=40) had faster median pre-intervention peg transfer times (65 sec (IQR 51-88)) than women (n=61) (81 sec (IQR 65-98)) (p=0.030). However, both genders had equivalent post-instruction transfer times (men 48 sec (IQR 36-61) vs women 51.3 sec (IQR 43.2-68.3), p=0.478). Similarly, students with prior VG use had a faster pre-intervention peg transfer time (72 sec (IQR 52-88)) than those without VG use (90 sec (IQR 74-149)) (p=0.002). However, post-instruction times were similar (VG 48.5 sec (IQR 39.5-60.5) vs no VG 58.2 sec (IQR 45.6-73), p=0.183). There was no difference in transfer time between students with exposure to <5 laparoscopic cases (79 sec (IQR 57.8-97.5)) and students with exposure to 5-10 laparoscopic cases (81 sec (IQR 56.3-88)) (p=0.139). Fifty students completed both pre- and post-course surveys. There was no significant increase (pre 24% vs post 34%, p=0.29) or decrease (pre 32% vs post 22%, p=0.13) in interest in a surgical career after the course.
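
The median/IQR summaries above can be reproduced from raw transfer times with the standard quartile computation; a sketch with made-up values (the study's raw data are not given in the abstract):

```python
import statistics

def median_iqr(times):
    """Median and (Q1, Q3), the summary used for the peg-transfer times."""
    q1, q2, q3 = statistics.quantiles(times, n=4)  # 'exclusive' method by default
    return q2, (q1, q3)

# hypothetical seconds-to-transfer values for ten students
pre = [41, 46, 52, 58, 63, 66, 70, 77, 84, 90]
med, (q1, q3) = median_iqr(pre)
```

Note that `statistics.quantiles` defaults to the "exclusive" method; other software (and possibly the study's package) may interpolate quartiles slightly differently.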

Conclusion:

A laparoscopic course for medical students on their first surgery rotation is effective in improving laparoscopic skills. Although male gender and video game use may be associated with better intrinsic skills, instruction and practice allow female students and non-video game users to “catch up.” There was a possible trend toward a change in student perception of a career in surgery. A larger sample size would be required to determine whether learning laparoscopic skills during the core surgery clerkship truly increases interest in surgery as a career.

54.09 Neighborhood Socioeconomic Status Predicts Violent Injury Recidivism

V. E. Chong1, W. S. Lee1, G. P. Victorino1  1UCSF-East Bay, Surgery, Oakland, CA, USA

Introduction:  Measures of individual socioeconomic status, such as income, educational attainment, employment level, and insurance status, are known to correlate with violent injury recidivism. Accordingly, tertiary violence prevention programs at many trauma centers target these areas to help violently injured patients avoid recurrent violent victimization. A person’s environment may also influence their risk of being involved in violence, and as such, neighborhood socioeconomic status may affect the likelihood of recurrent injury. As this potential link has yet to be fully studied, we conducted a review of victims of interpersonal violence treated at our trauma center, hypothesizing that the median income of their neighborhood of residence predicts recurrent violent victimization.

Methods:  We conducted a retrospective analysis of our urban trauma center’s registry, identifying patients who were victims of interpersonal violence from 2005-2010. We included patients ages 12-24, as this is the age of eligibility for our hospital’s violence intervention program. We focused on this age group because we currently have the resources to further address their needs. Patients who died from their trauma were excluded. Recurrent episodes of violent injury were identified, with follow-up through 2012. Median income for the patient’s zip code of residence was derived from US census estimates and divided into quartiles. Multivariate logistic regression was conducted to evaluate predictors of violent injury recidivism.


Results: During the study period, 1,888 patients presented to our trauma center as a result of interpersonal violence. Mechanism of injury included blunt assault (n=451; 24%), stabbing (n=266; 14%), and gunshot wound (n=1171; 62%). We identified 162 recidivist events (8.6%). Neighborhood median income ranged from $25,818 to $137,770. Univariate analysis was performed, and multivariate logistic regression confirmed the following factors as independent predictors of violent injury recidivism: male gender (OR=2.54 [1.33, 4.86]; p=0.005), black race (OR=2.14 [1.16, 3.93]; p=0.01), and the two lowest neighborhood median income quartiles, < $37,609 (OR=1.7 [1.15, 2.51]; p=0.008) and $37,609 to $40,062 (OR=1.85 [1.13, 3.02]; p=0.01). 
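
The odds ratios above are adjusted estimates from the multivariate model; for intuition about the arithmetic, an unadjusted odds ratio and Wald 95% CI from a single 2x2 table (hypothetical counts, not the registry data) look like:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI for a 2x2 table:
    a/b = exposed recidivists/non-recidivists, c/d = unexposed counterparts."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical: 60/440 recidivists in the lowest income quartile
# versus 102/1286 in the remaining quartiles (illustrative only)
or_, lo, hi = odds_ratio_ci(60, 440, 102, 1286)
```

The adjusted estimates in the abstract additionally control for the other covariates (gender, race), which this single-table sketch cannot do.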


Conclusion: For young patients who present with violent injury, the median income of their neighborhood of residence is independently correlated with their risk of recidivism. Low neighborhood socioeconomic status may be associated with a patient’s disrupted sense of safety after violent injury and may reflect a shortage of the resources needed to help patients avoid future violence. As such, tertiary violence prevention programs aimed at reducing violent injury recidivism should include services at both the individual level (e.g., job training and educational support) and the neighborhood level (e.g., advocacy efforts focused on community safety and social justice).


54.10 Endovascular Repair of Ruptured Abdominal Aortic Aneurysms Improves Hospital Resource Utilization

A. Cha1, V. Dombrovskiy1, N. Nassiri1, R. Shafritz1, S. Rahimi1  1Robert Wood Johnson – UMDNJ, Vascular Surgery, New Brunswick, NJ, USA

Introduction: Endovascular repair has shown improved early survival and decreased postoperative complications in treating ruptured abdominal aortic aneurysms. However, less is understood about hospital resource utilization for endovascular compared to open surgical repair of ruptured abdominal aortic aneurysms. The purpose of this study was to evaluate and compare hospital length of stay and cost for these two procedures.

Methods: Patients with ruptured abdominal aortic aneurysms who underwent endovascular repair or open surgery were identified from 2005-2009 MedPAR files. Because hospital length of stay and cost were not normally distributed and were highly skewed, each parameter estimate is presented as a median, and intergroup comparisons were performed with the Wilcoxon rank-sum test.
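
The Wilcoxon rank-sum comparison used here can be sketched in a few lines (normal approximation with average ranks for ties; a simplified illustration rather than the authors' statistical package, and the LOS values below are hypothetical):

```python
import math

def average_ranks(values):
    """1-based ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2.0  # mean of rank positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def rank_sum_z(x, y):
    """Z statistic of the Wilcoxon rank-sum test (no tie-variance correction)."""
    n1, n2 = len(x), len(y)
    r = average_ranks(list(x) + list(y))
    w = sum(r[:n1])                               # rank sum of group x
    mu = n1 * (n1 + n2 + 1) / 2.0                 # expectation under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (w - mu) / sigma

# hypothetical skewed hospital LOS (days): endovascular vs open repair
z = rank_sum_z([4, 5, 6, 6, 7, 30], [7, 8, 9, 9, 11, 45])
```

Rank-based tests like this one are the reason the abstract reports medians rather than means: a few very long, skewed stays (the 30- and 45-day outliers above) barely move the rank sums.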

Results: Among 8,480 selected Medicare patients, 1,939 underwent endovascular repair and 6,541 underwent open surgery. Total hospital and ICU (intensive care unit) lengths of stay after endovascular repair (6 days and 2 days, respectively) were significantly shorter than after open surgery (9 days and 4 days; P<0.0001 for both). Total hospital cost for all services provided to the beneficiary during the hospital stay was greater after endovascular repair than after open surgery ($100,875 vs $89,035; P<0.0001), but ICU cost for endovascular repair ($5,516) was significantly lower than for open surgery ($8,600; P<0.0001). The following costs were also lower in the endovascular repair group than in the open surgery group: anesthesia services ($1,673 vs $1,866; P=0.0033), laboratory costs ($6,736 vs $10,396; P<0.0001), and pharmaceutical costs ($5,619 vs $11,026; P<0.0001). However, the cost of medical/surgical supplies among patients with endovascular repair ($36,710) was significantly greater than among those with open surgery ($8,766; P<0.0001), which resulted in the more expensive total hospital cost for endovascular repair.

Conclusion: Compared to open surgery, use of the endovascular technique for repair of ruptured abdominal aortic aneurysms significantly reduces total hospital and ICU length of stay, which can increase hospital capacity. Because of the high cost of supplies for endovascular repair, this procedure slightly increases total hospital cost, but it provides financial savings for many services related to the hospital stay.

55.01 Using Human Factor Analysis to Improve Perioperative Handoffs Following Congenital Heart Surgery

A. Phillips1, V. Olshove1, S. Sheth1, E. Zahn1, C. Chrysostomou1, K. Catchpole1, B. L. Gewertz1  1Cedars-Sinai Medical Center, Surgery, Los Angeles, CA, USA

Introduction:  The transfer of patients from the operating room to the intensive care unit is a critical time and requires the coordination of many personnel and devices as well as the accurate exchange of patient information. We used human factors analytic techniques to improve specific elements of handoffs (task design and sequencing, teamwork, communication, and workspace). We measured the impact of the changed handoff protocol on disturbances in care (“flow disruptions”, FD) and clinical outcomes. We hypothesized that compliance with an enriched handoff process (EHP) would improve efficiency and reduce disruptions in patient care.

Methods: In a prospective, interrupted time-series observational study in the Congenital Heart Intensive Care Unit (CHPICU), Cardiothoracic Intensive Care Unit, and Neonatal Intensive Care Unit, twenty-nine consecutive patients were studied in two distinct intervals: Group 1 patients (n=16) were admitted with the traditional handoff process, while Group 2 patients (n=13) were cared for in the CHPICU after the EHP was developed. Through direct observation, 5 categories of FD were assessed (organization, teamwork, patient factors, equipment, communication). In addition, 5 key clinical outcomes were evaluated 6 hrs after admission to the ICU (vital signs, bleeding, general condition, respiratory status, and drugs).

Results: Time (minutes) to complete the handoff was less in Group 2 (23.5±7.5) than in Group 1 (35±14.6, p<0.05). FD were reduced after implementation of the EHP (Group 1, 1.5±1.2 per patient; Group 2, 0.8±1.2 per patient; p=0.12), with reductions in FD due to personnel (31.3% to 15.4%), patient factors (12.5% to 7.7%), and equipment (43.8% to 38.5%). Communication FD were 12.5% in both groups. The incidence of all negative clinical outcomes per patient at 6 hrs post admission was reduced after implementation of the EHP, from 0.9±0.8 to 0.2±0.4, p<0.05.

Conclusion: Developing a standardized and structured perioperative handoff process specific to each institution can improve care. The use of an enriched handoff process reduces the time to complete the handoff, reduces flow disruptions, and leads to a reduction in negative clinical outcomes at 6 hours. Uniform compliance remains a challenge and merits continued attention, as do equipment and communication flow disruptions during the handoff.


55.02 Describing Surgeons’ Attitudes and Beliefs towards Preoperative Advanced Care Planning Discussions

G. Awad Elkarim1,2, K. M. Devon1,3,4, L. Gotlib Conn2, B. Henry1,2, M. F. McKneally1, A. B. Nathens1,2  1University of Toronto, Toronto, Ontario, Canada 2Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada 3Women’s College Hospital, Toronto, Ontario, Canada 4University Health Network, Toronto, Ontario, Canada

Introduction:  Patients undergoing high-risk surgery may experience major, life-threatening complications in the intraoperative or postoperative period. Previous literature suggests that preoperative discussions of patients’ values and preferences for goals of care may prevent postoperative conflicts between surrogate decision makers and the surgical care team and ensure patients’ autonomy. Surgeons’ beliefs and attitudes towards preoperative advanced care planning (ACP) discussions are not known.

Methods:  A purposive sample of surgeons who perform high-risk operations (mortality > 1%) at academic hospitals was interviewed using a semi-structured questionnaire. Representation from several surgical divisions revealed common themes and variations. We interviewed participants until theoretical saturation was achieved (n=13). Interview transcripts were initially coded independently using the grounded theory approach and constant comparison. Codes were reviewed collaboratively, a coding scheme was established, and transcripts were re-coded based on the coding scheme. Codes were analyzed for themes, trends, and variations.

Results: Of the 13 surgeons interviewed, only two asked their patients preoperatively whether they had expressed their values and preferences for goals of care to their family. We identified six common challenges to having preoperative ACP conversations with high-risk surgical patients: (1) the anticipated low likelihood of a prolonged stay in the intensive care unit and of the need for life-sustaining treatments; (2) uncertainty in predicting the course of recovery in the postoperative phase of care in complex cases; (3) the perceived psychological burden of the conversation preoperatively; (4) balancing an optimistic presentation of surgical management against the perceived negativity of advanced care planning; (5) patients wanting to focus on the positives; and (6) preoperative discussions overwhelming patients. We also identified three elements that would facilitate this conversation preoperatively: (a) the patient has had an end-of-life discussion before or has had previous experience with high-risk surgery; (b) the higher the risk of surgery, the easier it is to segue into an ACP discussion; and (c) the older the patient, the more cognizant they are of the concept of death. Most surgeons indicated that they rely on the family to reflect patients’ best interests and that preoperative ACP conversations may not be necessary or beneficial for all high-risk surgical patients, particularly for surgeries where the chance of mortality is < 10%.

Conclusion: Very few surgeons explicitly ask their patients whether they have expressed their preferences for goals of care to family members. There are many challenges and few enablers to initiating a preoperative ACP discussion. Surgeons weigh the risks and benefits of having this conversation preoperatively and make the decision accordingly in the best interests of their patients.

55.03 Evaluating The Impact Of A Dedicated Rounding Surgeon On Patient Care

P. I. Abbas1,2, I. J. Zamora1,2, S. C. Elder2, T. C. Lee1,2, J. G. Nuchtern1,2  1Baylor College Of Medicine, Michael E. DeBakey Department Of Surgery, Houston, TX, USA 2Texas Children’s Hospital, Division Of Pediatric Surgery, Houston, TX, USA

Introduction: The implementation of stricter resident work hour restrictions has increased the need for faculty involvement to maintain continuity of patient care. To address this need, a surgeon of the week (SOW) program was initiated at our institution in July 2013 to assign an attending surgeon to round on all pediatric surgery patients on the academic service. The purpose of this study was to assess the impact of the surgical rounder on care provided on the surgical floor.

Methods: This is a retrospective study comparing data before and after implementation of the SOW system. The goal was to assess the impact of the SOW in three categories: patient care, patient safety, and nursing satisfaction. Administrative data were collected from July to December 2013 (post-implementation) and compared to July to December 2012 (pre-implementation). A 10-point Likert scale survey was administered to the daytime nursing/ancillary staff on the pediatric surgical floor to gauge the perceived effect of the SOW system in three areas of interest: nursing satisfaction, perceived patient satisfaction, and nursing-to-physician communication. Data are reported as means.

Results:  As background, 6,752 separate inpatient encounters were documented at our institution for the entire 2013 fiscal year, with 5,541 operations performed at the main campus. When analyzing pre- and post-implementation data, total inpatient E&M encounters increased after SOW implementation (3,482 vs 4,000, respectively), as did work RVUs. The total number of patient complaints decreased after implementation of the SOW, with the reduction in patient safety reports accounting for the majority of the total decrease [Table].

Twenty of the daytime nursing staff on the pediatric surgical ward completed the satisfaction survey, 12 of whom had been employed prior to the initiation of the SOW. The overall satisfaction score for the SOW program among all 20 participants was 8.3. When the 12 employees who had experience before the SOW program were analyzed, there was a noted increase in satisfaction after the SOW program in all three areas [Table]. The Cronbach’s alpha coefficient was 0.95 for the overall survey, 0.94 for nursing satisfaction, 0.95 for perceived patient satisfaction, and 0.95 for nursing-to-physician communication.
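
Cronbach's alpha, reported above as a measure of the survey's internal consistency, can be sketched as follows (hypothetical 10-point ratings, not the actual survey responses):

```python
import statistics

def cronbach_alpha(items):
    """items: one list of scores per survey item (respondents in the same order).
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items)
    n = len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n)]
    sum_item_var = sum(statistics.pvariance(item) for item in items)
    return k / (k - 1) * (1 - sum_item_var / statistics.pvariance(totals))

# hypothetical ratings from five nurses on three 10-point survey items
items = [[8, 9, 7, 8, 9],
         [7, 9, 6, 8, 9],
         [8, 8, 7, 9, 9]]
alpha = cronbach_alpha(items)
```

Values above roughly 0.9, like those reported, indicate that the items move together closely enough to be summarized as a single scale.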

Conclusion: Initiation of the SOW program has improved patient care through the presence of a dedicated rounding surgeon, as evidenced by a decrease in patient safety complaints. Additionally, the SOW program increased work RVUs, led to an improved work environment with increased nursing staff satisfaction, and enhanced nursing-physician communication.


55.04 A Statewide Analysis of Specialized Care for Pediatric Appendicitis

L. R. Putnam1,3,4, L. K. Nguyen2, K. P. Lally1,3,4, L. Franzini2, K. Tsao1,3,4, M. T. Austin1,3,4  1University Of Texas Health Science Center At Houston, Department Of Pediatric Surgery, Houston, TX, USA 2University Of Texas School Of Public Health, Department Of Management, Policy, And Community Health, Houston, TX, USA 3Center For Surgical Trials And Evidence-based Practice, Houston, TX, USA 4Children’s Memorial Hermann Hospital, Houston, TX, USA

Introduction:

Appendicitis is the most common surgical disease in children, yet few data exist to support care by pediatric surgeons or within children’s hospitals (specialized care). We hypothesized that children treated with specialized care are younger, have more severe disease, and experience equal or better outcomes than children treated by general surgeons or in non-children’s hospitals.

Methods:

A retrospective cohort study of Texas Blue Cross/Blue Shield (BCBS) claims data from 2008-2012 was performed of all children (<18 years) who underwent appendectomy for acute appendicitis. Children’s hospitals were identified based on the number and/or percentage of pediatric admissions over a five-year period. Hospital length of stay (LOS) was used as a surrogate for disease severity; complicated appendicitis was defined as LOS ≥ 3 days. Primary outcomes included adverse events, 30-day readmissions, and LOS. Chi-square tests, Student’s t-tests, and multivariate regression were performed.

Results:

Of the 3,886 pediatric appendectomies, 894 (23%) were performed by pediatric surgeons and 1,558 (40%) were performed in children’s hospitals. Children treated by pediatric surgeons were younger (10.9 ± 0.1 vs 12.0 ± 0.1 years, p<0.01), had more severe disease (48% vs 42% complicated, p=0.01), underwent fewer computed tomography (CT) exams (64% vs 71%, p<0.01), and were hospitalized longer (3.6 ± 3.3 vs 3.0 ± 2.1 days, p<0.01). Similarly, children treated in children’s hospitals were younger (11.1 ± 3.4 vs 12.2 ± 3.4 years, p<0.01), underwent fewer CTs (59% vs 77%, p<0.01), were more likely to undergo laparoscopic appendectomy (82% vs 75%, p<0.01), and were hospitalized longer (3.3 ± 2.8 vs 3.0 ± 2.2 days, p<0.01). On multivariate analysis, specialized care did not predict adverse events or 30-day readmissions. However, longer hospital LOS was associated with treatment by pediatric surgeons (+0.5 days, p<0.01) or within a children’s hospital (+0.2 days, p<0.01).

Conclusion:

Privately-insured children in Texas treated for appendicitis by pediatric surgeons or in children’s hospitals were younger, had more severe disease, and were hospitalized longer but were less likely to undergo preoperative CT. While outcomes were similar between children receiving specialized and non-specialized care, there is likely a role for specialized care for younger children and for those with more severe disease.  Future studies should evaluate costs and efficiency of specialized care given fewer CTs but longer LOS for children.

55.05 Surgical Antibiotic Prophylaxis Requires More Than Operating Room Interventions

C. M. Chang1,6, L. R. Putnam2,5,6, J. M. Podolnick1,5, S. Sakhuja1,5, M. Matuszczak3,6, M. T. Austin2,5,6, L. S. Kao4,5, K. P. Lally3,5,6, K. Tsao2,5,6  1University Of Texas Health Science Center At Houston, Medical School, Houston, TX, USA 2University Of Texas Health Science Center At Houston, Department Of Pediatric Surgery, Houston, TX, USA 3University Of Texas Health Science Center At Houston, Department Of Pediatric Anesthesia, Houston, TX, USA 4University Of Texas Health Science Center At Houston, Department Of General Surgery, Houston, TX, USA 5Center For Surgical Trials And Evidence-based Practice, Houston, TX, USA 6Children’s Memorial Hermann Hospital, Houston, TX, USA

Introduction:

Proper prophylactic antibiotic administration includes adherence to all of its components: appropriate administration, type, dose, timing, and redosing. We previously demonstrated that 52% of operations had at least one incorrectly performed component. In response, a multiphase, multifaceted prophylactic antibiotic program was created, with the hypothesis that overall adherence to institutional prophylaxis guidelines would increase.

Methods:

From 2011-2014, three 10-month interventional periods were conducted, which implemented adoption of Surgical Care Improvement Project-based pediatric antibiotic prophylaxis guidelines (2011), integration of checkpoints into the pre-incision surgical safety checklist and creation of a computerized physician order entry module (2012), and assignment of responsibility for administration to anesthesiology (2013); audit/feedback was performed throughout. Following each period, an 8-week direct-observational assessment was performed. Perioperative factors that may influence guideline adherence, including wound class, surgical specialty, patient weight, and anesthesia provider, were analyzed. Spearman’s rank correlation and chi-squared analyses were performed; p<0.05 was considered significant.

Results:

1,052 operations were observed. Prophylactic antibiotics were indicated in 629 (60%), of which 625 (99%) received them. Conversely, antibiotics were not indicated in 421 cases (40%), of which 358 (85%) did not receive antibiotics. For cases requiring antibiotics, adherence to all four administration components remained unchanged (54% to 55%, p=0.99). Only redosing significantly improved (7% to 53%, p=0.02), whereas correct type declined (98% to 70%, p<0.01, Table). This decline was mostly attributable to two surgeons who were unaware of the updated 2013 institutional guidelines but utilized an acceptable antibiotic; otherwise, correct type and overall adherence in 2014 would have been 89% and 72%, respectively. Adherence to guidelines did not differ significantly based on ASA class, surgical specialty, patient weight, anesthesia provider, or surgical wound class (all p>0.05).
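
The proportion comparisons above rest on the 2x2 chi-squared test; a minimal sketch of its arithmetic (with hypothetical counts chosen for illustration, not the study's actual denominators):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]
    (no continuity correction); compare against 3.84 for p<0.05 at 1 df."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# hypothetical: redosing correct in 7 of 100 cases in one period
# versus 53 of 100 in another (illustrative counts only)
chi2 = chi2_2x2(7, 93, 53, 47)
```

The statistic is zero when the two periods have identical proportions and grows with both the size of the difference and the sample size, which is why the same percentage change can be significant or not depending on the denominators.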

Conclusions:

Despite multiple interventions to improve antibiotic prophylaxis, overall adherence did not significantly increase. Although most interventions were directed at point of administration in the operating room, proper dissemination of institutional guidelines remains a challenge. Future strategies will require additional educational initiatives as well as a perioperative approach towards process standardization to improve adherence to antibiotic prophylaxis administration.

55.06 Surgical Never Events and Contributing Human Factors

C. A. Thiels1, J. M. Nienow1, T. M. Lal1, J. M. Aho1, K. S. Pasupathy1, T. I. Morgenthaler1, R. R. Cima1, R. C. Blocker1, S. Hallbeck1, J. Bingener1  1Mayo Clinic, Rochester, MN, USA

Introduction: National Quality Forum never events continue to occur despite highly trained teams following procedures designed to avoid these harms. We report the first prospective analysis of human factors contributing to surgical and procedural never events using a rigorous, validated Human Factors Analysis and Classification System (HFACS).


Methods: From 8/2009 to 12/2013, all surgical and procedural never events (i.e., retained foreign object (RFO), wrong site/side procedure, wrong implant, wrong procedure) underwent systematic error causation analysis by a team consisting of the patient’s care providers, patient safety specialists, and surgical leadership as soon as the event was discerned. During the root-cause analysis meeting, contributing human factors were categorized using HFACS. Factors were analyzed according to incident type and Reason’s 4 levels of error causation (actions, organizational influences, supervision, preconditions for actions). The causes were further categorized into subgroups (e.g., action into compliance versus error; error into decision- or perception-based) and finally into one of 159 subcategories of contributing factors (e.g., confirmation bias, communication failure), termed nano-codes.

Results: During the 4.5-year period, over half a million procedures were performed and 62 never events were evaluated using HFACS. Concurrent with countermeasures, event frequency decreased from 2010 to 2013. A total of 603 contributing nano-codes were identified, grouped by four major error causes (Table 1). The relative contribution of identified error causes to each type of never event revealed that actions (n=251) and preconditions to actions (n=281) accounted for the majority of nano-codes across all four types of never events. The most common action nano-codes were ‘failed to understand’ (n=33, decision error) and ‘confirmation bias’ (n=32, perception error). The most common precondition nano-codes were ‘channeled attention on a single issue’ (n=28, cognitive factor) and ‘inadequate communication’ (n=25, team resource management). As subgroups, cognitive factors, decision errors, technology, and communication predominated as underlying causes of surgical and procedural never events.

Conclusion: To our knowledge, this is the first report of a validated human factors analysis applied prospectively to surgical and procedural never event review. Targeting interventions to address cognitive factors and team resource management as well as perceptual biases may reduce decision errors and will likely be most successful in further improving patient safety.  The results delineate targets to further reduce never events from our healthcare system.

55.07 Predicting Observation Status for Older Patients with Small Bowel Obstruction

N. Goel1, L. Gutnik1, G. Reznor1, D. Rivera Morales1, A. Salim1, M. J. Zinner1, Z. R. Cooper1  1Brigham And Women’s Hospital,Trauma, Burns And Surgical Critical Care,Boston, MA, USA

Introduction:

The Two-Midnight Rule, established by the Centers for Medicare and Medicaid Services (CMS) in 2013, states that inpatient tests, treatments, and services are eligible for payment only if the hospital stay crosses two midnights (MN). Therefore, to avoid non-payment from Medicare, it is critically important for surgeons to correctly identify patients who will stay less than 2 MN (observation status) at the time of admission. Small bowel obstruction (SBO) is the most common cause of surgical admission, with estimated costs of over $1 billion per year. The purpose of this study was to identify patient factors predictive of a hospital stay of less than 2 MN among patients > 65 years old who present to the emergency room with SBO.

Methods:

We performed a retrospective review of patients > 65 years old admitted to a tertiary academic medical center with SBO from 2006-2013. We used data from the Research Patient Data Registry (RPDR) to identify patients with SBO using ICD-9 codes (560.0 intussusception, 560.1 paralytic ileus, 560.81 adhesions (intestinal/peritoneal), 560.8 other specified obstruction, 560.9 other unspecified obstruction). We used chart review to collect patient demographics (age, race, sex, zip code) and clinical characteristics at the time of presentation to the emergency room (neoplasm, inflammatory bowel disease (IBD), recent surgery, laboratory values, and radiographic findings including prior anastomosis, adhesions, mass or tumor, or other). The unit of analysis was the hospital admission for SBO. Chi-square tests were used to compare differences between patients admitted for < 2 MN and ≥ 2 MN, and multivariate logistic regression was used to identify predictors of admission for < 2 MN.

Results:

Among 855 older patients admitted with SBO, 39 (4.8%) stayed < 2 MN. Compared with a longer hospital stay, a stay < 2 MN was associated with age 65-74 years (71.8% vs. 53.1%, p=0.03), IBD (10.3% vs. 1.6%, p=0.006), and prior anastomosis on CT (15.4% vs. 3.2%, p=0.001). However, only a minority of patients staying < 2 MN had any of these characteristics. In multivariate logistic regression, age 65-74 (OR 2.10, 95% CI 1.02-4.32), IBD (OR 6.07, 95% CI 1.76-20.91), and prior anastomosis (OR 4.67, 95% CI 1.72-12.60) were predictive of a stay < 2 MN.
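As a concrete illustration of the arithmetic behind such associations, the sketch below computes an unadjusted odds ratio and Wald 95% CI from a 2×2 table. The cell counts are hypothetical reconstructions from the reported IBD percentages (roughly 10.3% of the 39 short-stay patients vs. 1.6% of the remaining 816), not study data, and the abstract's ORs are multivariate-adjusted, so the values differ.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts reconstructed from the reported percentages:
# IBD present in ~4 of 39 patients staying < 2 MN vs. ~13 of 816 staying longer.
or_, lo, hi = odds_ratio_ci(4, 35, 13, 803)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

With these reconstructed counts the crude OR lands near 7, in the same direction as the adjusted OR of 6.07 reported above.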

Conclusion:

Approximately 5% of older patients previously admitted for SBO would not meet current CMS criteria for admission. Although age 65-74 years, presence of IBD, and obstruction due to prior anastomosis predict a stay < 2 MN, most patients who stay ≥ 2 MN also have these characteristics. Useful identifiers of older patients with SBO who will stay < 2 MN are elusive. Under the CMS rules, hospitals are at high risk of reduced payment for some older patients admitted with SBO.

55.08 1-Year Outcomes For Medicaid vs. Non-Medicaid Patients After Laparoscopic Roux-en-Y Gastric Bypass

E. Y. Chen1, B. Fox1, A. Suzo2, S. A. Jolles1, J. A. Greenberg1, G. M. Campos1, M. J. Garren1, L. M. Funk1  1University Of Wisconsin,Department Of Surgery,Madison, WI, USA 2Ohio State University,Department Of Surgery,Columbus, OH, USA

Introduction:

The Medicaid system pays for nearly half of the obesity-related medical costs in the U.S. with 45 states providing bariatric surgery coverage to varying degrees. Given that new Medicaid enrollments have reached nearly seven million people since passage of the Affordable Care Act in 2010, understanding bariatric surgery outcomes and costs for Medicaid patients is critical. The purpose of this study is to compare one-year surgical outcomes and costs between Medicaid and non-Medicaid patients who underwent laparoscopic Roux-en-Y gastric bypass surgery.

Methods:

Our study is a retrospective review that included all patients who underwent a primary laparoscopic Roux-en-Y gastric bypass from January 1, 2010 to June 1, 2013 at the University of Wisconsin Hospital and Clinics (220 patients). Of these patients, 33 Medicaid patients were identified and matched with 99 non-Medicaid patients (1:3 study design). Ninety-day and one-year outcomes and complications were extracted from electronic medical records. One-year facility costs (inpatient, outpatient, and emergency department) were obtained from the UW information technology division. Fisher’s exact tests and Student’s t-tests or Wilcoxon rank-sum tests were used to compare categorical and continuous variables, respectively.

Results:

Medicaid patients were younger (age 39.0 vs. 48.7; p<0.001) but had similar preoperative body mass indices (49.6 vs. 47.1; p=0.09) and similar preoperative comorbidities, with the exception of hyperlipidemia (24.2% vs. 50.5%; p=0.01), when compared to non-Medicaid patients (Table 1). Length of stay (2.2 vs. 2.3 days; p=1.00) and 90-day overall complication rates (42.4% vs. 31.3%; p=0.29) were similar between Medicaid and non-Medicaid patients, respectively. Emergency department visits (48.2% vs. 27.4%; p=0.06) and hospital readmissions (37.0% vs. 14.7%; p=0.01) were more common for Medicaid patients. Medicaid patients had less overall excess body weight loss (50.7% vs. 65.6%; p=0.001) but similar rates of comorbidity resolution one year following surgery. Median overall costs during the one-year follow-up period were similar between Medicaid and non-Medicaid patients ($21,160 vs. $24,215; p=0.92). There were no deaths during the one-year follow-up period.
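A minimal sketch of the kind of Fisher's exact comparison named in the Methods, applied to the readmission rates above. The cell counts are hypothetical, chosen only to approximate the reported 37.0% vs. 14.7% rates for the 33 and 99 matched patients (exact counts are not given in the abstract), so the resulting p-value is illustrative rather than the study's.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 readmission table (rows: Medicaid, non-Medicaid;
# columns: readmitted, not readmitted). Counts approximate the reported
# 37.0% vs. 14.7% rates; exact denominators are not given in the abstract.
table = [[12, 21],
         [15, 84]]
odds_ratio, p = fisher_exact(table)
print(round(odds_ratio, 2), round(p, 3))
```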

Conclusion:

One-year outcomes following laparoscopic Roux-en-Y gastric bypass were largely similar between Medicaid patients and non-Medicaid patients at our institution. Emergency department visits and readmissions were more common for Medicaid patients, but this did not translate into increased costs for the Medicaid system. Concern for increased overall costs may not be a valid justification for state Medicaid programs to deny their patients bariatric surgery coverage.

55.10 Inappropriate warfarin use in trauma: Time for a safety initiative

H. H. Hon1, A. Elmously1, C. D. Stehly1,2, J. Stoltzfus3, S. P. Stawicki1,2, B. A. Hoey1  1St. Luke’s University Health Network,Regional Level I Trauma Center,Bethlehem, PA, USA 2St. Luke’s University Health Network,Department Of Research & Innovation,Bethlehem, PA, USA 3St. Luke’s University Health Network,The Research Institute,Bethlehem, PA, USA

Introduction: Warfarin continues to be widely prescribed in the United States for a variety of conditions. Several studies have demonstrated that pre-injury warfarin may worsen outcomes in trauma patients. We hypothesized that a substantial proportion of our trauma population was receiving pre-injury warfarin for inappropriate indications and that a significant number of such patients had subtherapeutic or supratherapeutic international normalized ratios (INR). Our secondary aim was to determine if pre-injury warfarin is associated with increased mortality.

Methods: A 10-year retrospective review of registry data from a Level I trauma center was conducted between 2004 and 2013. Abstracted variables included patient age, Injury Severity Score (ISS), Abbreviated Injury Score (AIS) for head, mortality, hospital length of stay (HLOS), indication(s) for anticoagulant therapy, admission Glasgow Coma Scale, and admission INR determinations. Warfarin indication(s) and suitability were verified using the most recent American College of Chest Physicians (ACCP) guidelines. Inappropriate warfarin administration was defined as use inconsistent with ACCP guidelines. For outcome comparisons, a random sample of trauma patients who were not taking warfarin was designated as the "control group". Additionally, severe traumatic brain injury (sTBI) was defined as AIS head ≥ 4. Statistical analyses were conducted using the chi-square and Mann-Whitney rank-sum tests, as appropriate.

Results: A total of 21,136 blunt trauma patients were evaluated by the trauma service during the study period. Of these, 1,481 (7%) were receiving warfarin prior to injury, with an additional 1,947 (~10% of the non-warfarin sample) designated as "non-warfarin controls". According to the ACCP guidelines, 264/1,481 (17.8%) patients in the warfarin group were receiving anticoagulation inappropriately. Moreover, >63% of the patients were outside of the intended therapeutic window with regard to their INR (41.1% subtherapeutic, 22.2% supratherapeutic). Overall, median HLOS was greater in patients taking pre-injury warfarin (4 days vs. 2 days, p<0.01). Mortality was higher in the warfarin group (6.1%, or 91/1,481) vs. the control group (2.6%, or 50/1,947; p<0.01). Patients with sTBI (AIS head ≥ 4) had significantly greater mortality in the warfarin group (26.4%, or 56/212) vs. the control group (14.9%, or 22/148; p<0.01).
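The overall mortality comparison above can be reproduced from the reported counts with a standard chi-square test on a 2×2 contingency table; a sketch, assuming `scipy` is available:

```python
from scipy.stats import chi2_contingency

# Mortality counts from the Results: 91/1,481 warfarin vs. 50/1,947 controls.
table = [[91, 1481 - 91],    # warfarin group: died, survived
         [50, 1947 - 50]]    # control group: died, survived
chi2, p, dof, expected = chi2_contingency(table)
print(round(chi2, 1), dof, p < 0.01)
```

The test is highly significant, consistent with the p<0.01 reported in the abstract.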

Conclusion: A significant number of trauma patients admitted to our institution were noted to take warfarin for inappropriate indications. Moreover, a significant proportion of patients taking warfarin had either subtherapeutic or supratherapeutic INRs. Pre-injury warfarin was associated with increased mortality and HLOS in this study, especially in the subset of patients with sTBI. National safety initiatives directed at appropriate initiation and management of warfarin are necessary.

56.01 Operative Performance: Quantifying The Surgeon’s Response to Tissue Characteristics

A. D. D’Angelo1, D. N. Rutherford1,2, R. D. Ray1, A. Mason2, C. M. Pugh1  1University Of Wisconsin,Department Of Surgery, School Of Medicine And Public Health,Madison, WI, USA 2University Of Wisconsin,Department Of Kinesiology, School Of Education,Madison, WI, USA

Introduction: Objective measures of surgical skills are needed. Motion capture technology allows for quantification of psychomotor performance. The study aim was to evaluate how tissue characteristics influence psychomotor performance during a suturing task. Our hypothesis was that participants would alter their technique to improve performance with each subsequent stitch.

Methods: Surgical attendings (N=6), residents (N=4) and medical students (N=5) performed three interrupted sutures on three simulated tissue types: foam (dense connective tissue), rubber balloons (artery) and tissue paper (friable tissue). An optical motion tracking system captured performance data from participants’ bilateral hand movements. In prior studies, these three tissue types were able to differentiate experience level using path length and total procedure time. Additionally, longer path lengths and suture time helped to confirm that suturing on tissue paper (friable tissue) was the more difficult task compared to foam and balloon. In the current study, path length and suture time were stratified by stitch number to investigate changes in performance with each subsequent stitch. Repeated measures ANOVA was used to evaluate for main effects of stitch order on path length and suture time and interactions between stitch order, material and experience.
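For readers unfamiliar with the stratified analysis, the toy sketch below implements a one-way repeated-measures ANOVA by hand (subjects × stitch number) using invented path-length data for five hypothetical participants; the study's actual data, design (which includes material and experience interactions), and F values differ.

```python
import numpy as np
from scipy.stats import f as f_dist

# Rows = hypothetical participants, columns = stitches 1..3 (made-up path lengths)
data = np.array([
    [10, 8, 7],
    [12, 9, 8],
    [11, 9, 7],
    [13, 10, 9],
    [9, 8, 6],
], dtype=float)

n_subj, n_cond = data.shape
grand = data.mean()
ss_total = ((data - grand) ** 2).sum()
ss_cond = n_subj * ((data.mean(axis=0) - grand) ** 2).sum()   # between stitches
ss_subj = n_cond * ((data.mean(axis=1) - grand) ** 2).sum()   # between subjects
ss_error = ss_total - ss_cond - ss_subj                       # residual

df_cond, df_error = n_cond - 1, (n_subj - 1) * (n_cond - 1)
F = (ss_cond / df_cond) / (ss_error / df_error)
p = f_dist.sf(F, df_cond, df_error)
print(round(F, 1), p < 0.001)   # → 76.0 True
```

Partitioning out the subject sum of squares is what distinguishes the repeated-measures design from an ordinary one-way ANOVA on the same numbers.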

Results: When participants sutured the tissue paper (friable tissue), they changed their path length in a linear fashion, with the first stitch on the tissue paper having the longest path length and the third stitch the shortest (F(4,44)=4.64, p=.003) (Table 1). Similarly, it took participants less time to perform subsequent stitches in the tissue paper, with the first stitch taking the most time and the third stitch the least (F(4,44)=5.14, p=.017). Participants did not change their path lengths or suture times when placing the first through third stitches in the foam and balloon materials (p=.910, p=.769). The interaction of experience level and stitch order was non-significant for both path length and suture time (F(4,22)=1.18, p=.345; F(4,22)=1.28, p=.316).

Conclusion: This study demonstrates quantifiable real-time adaptation by participants to material characteristics while performing a suturing task. Participants had longer path lengths and took more time to place sutures in the friable tissue paper. Importantly, they improved their performance on these measures with each subsequent stitch, indicating changes in psychomotor planning or performance. This adaptation did not occur with the less difficult suturing tasks. Motion capture technology is a promising method for investigating surgical performance and understanding how surgeons adapt to operative complexity.

56.02 Design of Tissue Force Feedback Knot-Tying Simulator and Performance Metrics for Deliberate Practice

J. Hsu1, J. R. Korndorffer2, K. M. Brown1  1University Of Texas Medical Branch,Surgery,Galveston, TX, USA 2Tulane University School Of Medicine,Surgery,New Orleans, LA, USA

Introduction: Surgical residents develop technical skills at variable rates, often depending on the random mix of cases encountered. One such skill is tying secure knots without exerting excessive force. Deliberate practice allows more uniform training; however, there are no widely used methods to provide immediate feedback on exerted force as part of deliberate practice training to proficiency. This study describes the design of a simulator that uses a force sensor to measure instantaneous forces exerted on a blood vessel analog during knot tying, the development of expert-derived performance goals, and an assessment of novice performance relative to these goals.

Methods: The knot-tying simulator consists of Silastic tubing suspended horizontally at a fixed height in a confined space to simulate tying deep in the abdomen, with a Vernier Dual-Range Force Sensor attached perpendicularly to the tubing. Vessel ligations were performed on the tubing with 3-0 silk sutures at a fixed offset from the sensor location to measure the vertical forces exerted. The offset was corrected by adjusting the sensor reading according to a standard curve created beforehand with a spring scale exerting known fixed forces at fixed offset locations from the sensor. Six experts (surgical faculty and senior residents) and 8 novices (junior residents) were recruited to each perform 10 vessel ligations (2 square knots each) with two-handed and one-handed techniques. The data were recorded in Vernier Logger Pro software. Internal consistency of each individual’s 10 peak force measurements was tested with Cronbach’s α. Average peak force for novices was compared to experts using Student’s t-test.
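Cronbach's α for a set of repeated trials can be computed directly from the variance decomposition. The sketch below is illustrative only: the `cronbach_alpha` helper and the toy matrix are not study data, and it assumes the abstract's framing of each individual's trials as "items" scored across participants.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha. Rows = participants, columns = repeated trials
    treated as 'items' (one common way to frame trial-to-trial consistency)."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]                              # number of trials/items
    item_vars = x.var(axis=0, ddof=1).sum()     # sum of per-trial variances
    total_var = x.sum(axis=1).var(ddof=1)       # variance of participants' totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: 4 hypothetical participants x 3 trials with perfectly consistent
# rank ordering across trials, so alpha comes out at its ceiling of 1.0.
toy = [[1, 2, 3],
       [2, 3, 4],
       [3, 4, 5],
       [4, 5, 6]]
print(round(cronbach_alpha(toy), 3))   # → 1.0
```

Noisier trial-to-trial data would pull α below 1; values near the abstract's 0.95 indicate highly consistent peak-force measurements.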

Results: Cronbach’s α was 0.95. The expert group on average exerted a significantly lower peak force than novices and had less variation in peak force across the 10 trials when performing two-handed (0.71±0.36 N vs 1.06±0.4 N, p<0.01) and one-handed (0.82±0.11 N vs 1.41±0.36 N, p<0.01) vessel ligations. Interestingly, 3 of the 8 novices performed at force levels equivalent to the expert group average.

Conclusion: We have created a simulator that consistently records peak force data for an individual tying surgical knots and discriminates between novices and experts. Although the expert group performed vessel ligations with significantly lower peak force and less variation than the novice group, there were individuals within the novice group who performed at the expert level. This is consistent with the conceptual framework of milestones and suggests that the skill of gentle knot tying can be measured and may develop at different chronological stages of training in different individuals. This simulator can be used as part of a deliberate practice curriculum with immediate feedback.

56.03 Novel Use of Google Glass for Wireless Vital Sign Monitoring During Surgical Procedures

C. A. Liebert1, M. A. Zayed3, J. Tran1, J. N. Lau1, O. Aalami2  1Stanford University School Of Medicine,Department Of Surgery, Division Of General Surgery,Palo Alto, CA, USA 2Stanford University School Of Medicine And Palo Alto Veteran’s Affairs Health Care System,Department Of Surgery, Division Of Vascular Surgery,Palo Alto, CA, USA 3Washington University School Of Medicine,Department Of Surgery, Section Of Vascular Surgery,St. Louis, MO, USA

Introduction: Real-time monitoring of patient vital signs is essential to ensure patient safety during surgical procedures requiring conscious sedation, such as bronchoscopy, endoscopy, chest tube placement, and endovascular interventions. Google Glass, with its novel head-mounted display and controls, provides a hands-free platform for enhanced monitoring of vital-sign parameters, particularly when safe multi-tasking is required and an anesthesiologist is not present.

Methods: In this randomized-controlled pilot study with crossover design, PGY-1 through PGY-5 surgery residents (n=14) at an academic institution participated in two standardized simulated bedside procedure scenarios. Scenario 1 involved thoracostomy tube placement using a SimMan® platform, and Scenario 2 involved bronchoscopy using a hybrid CAE EndoVR/SimMan® platform. Traditional vital sign monitors were available 90° from the operative field during all procedures, and residents were additionally randomized with or without continuous wireless vital sign streaming to Google Glass. Time to recognition of pre-programmed vital sign deterioration was recorded. User feedback was collected by survey after completion of the study.

Results: Live streaming of vital signs to Google Glass resulted in a trend towards earlier recognition of critical vital sign changes in both scenarios. During bronchoscopy, the experimental group used traditional monitors 90.3% less (p=0.003), yet recognized critical desaturation 8.8 seconds earlier than the controls (64.6s vs. 73.4s, p=ns). Similarly, during thoracostomy tube placement, the experimental group spent 70.8% (p=0.01) less time looking away from the procedural field, yet recognized severe hypotension 10.5 seconds earlier than the controls (31.3s vs. 42.8s, p=ns). Additionally, hypotension was less severe at time of recognition in the experimental group (+7.7 mmHg, p=0.08). The majority of participants ‘agreed’ or ‘strongly agreed’ that Google Glass increased their situational awareness (64%), was helpful in monitoring vital signs (86%), was easy to use in the procedural setting (93%), and has potential to improve patient safety (85%). The majority of participants ‘agreed’ or ‘strongly agreed’ that they would consider using Google Glass in their future surgical practice.

Conclusions: This randomized-controlled pilot study demonstrates the utility and feasibility of Google Glass for real-time vital sign streaming. Using Google Glass significantly decreased the amount of time spent looking away from the procedural field. There was a trend towards earlier time to recognition of vital sign deterioration in both scenarios, but the small sample size limits the statistical significance of this finding. This study provides preliminary evidence that the novel head-mounted platform of Google Glass can be used in the clinical setting to enhance procedural situational awareness and patient safety.

56.04 A Simulation-Enhanced Hand-off Curriculum Improves Resident Knowledge and Confidence

R. L. Hoffman1, M. Gupta1, R. R. Kelz1, J. Pascual1  1Hospital Of The University Of Pennsylvania,Philadelphia, PA, USA

Introduction: Hand-offs in surgery are a frequent and necessary part of patient care. The ACGME has mandated that all residency programs incorporate hand-off training. While simulation has been shown to be effective for technical skills and team training, it is unknown whether simulation enhances the teaching of hand-off skills. The aim of this study was to determine if the use of simulation in hand-off training is advantageous.

Methods: The implementation of a combined didactic and simulation-based hand-off training curriculum for new interns at a large, urban academic institution was evaluated using knowledge, confidence, and satisfaction assessments. Interns were divided into two groups based on the educational program: simulation + didactics (SIM) and didactics only (DID). All interns attended a 90-minute interactive didactic session reviewing literature, best practices, and video-recorded examples of hand-offs (written and verbal). Before and after the didactic session, each SIM resident was given 10 minutes to review two patient cases and 3 minutes to perform a verbal hand-off to a resident actor. Each intern completed a self-assessment confidence survey using a 5-point Likert scale along with a multiple-choice knowledge-based examination at the start and finish of the educational program. At the conclusion of the training, a satisfaction survey was administered. Paired t-tests were used to compare pre- and post-program results within each group.

Results: Forty-two interns participated in the hand-off curriculum, 21 SIM and 21 DID. Both the DID and SIM groups reported increased confidence with giving verbal hand-offs (DID before: 3.11, after: 3.68, p<0.01; SIM before: 3.05, after: 3.57, p=0.02) and with knowledge of fact inclusion (DID before: 3.16, after: 3.63; SIM before: 3.00, after: 3.67; both p<0.01). The SIM group reported improved confidence with receiving verbal hand-offs (before: 3.05, after: 3.81, p<0.01). Both groups reported a moderate decrease in anxiety associated with giving and receiving hand-offs, but only the DID group reported a significant decrease in anxiety when receiving hand-offs (before: 2.84, after: 3.68, p<0.01). There was a significant improvement in overall scores on the knowledge-based examination in both groups (DID before: 17.38, after: 18.95; SIM before: 15.71, after: 18.29; p<0.01). The SIM group improved by 16% and the DID group by 9%. Ninety-five percent of interns were satisfied or extremely satisfied with the curriculum, and 62% were highly likely to continue to use the curriculum tools in practice.

Conclusion: The use of a structured hand-off curriculum was successful in improving confidence, decreasing anxiety, and increasing knowledge in the process of delivering verbal patient hand-offs. The value of simulation integrated into a hand-off curriculum needs further exploration, but simulation may not provide significant advantages over a well-planned interactive didactic session.

56.05 Development of a proficiency-based virtual reality training curriculum for laparoscopic appendectomy

P. Sirimanna1, R. Aggarwal2, M. A. Gladman1  1University Of Sydney,Sydney Medical School – Concord,Sydney, NSW, Australia 2McGill University,Department of Surgery,Montreal, QC, Canada

Introduction: Laparoscopic appendectomy (LA) is the commonest emergency surgical procedure and is associated with significant variation in practice and outcomes. Further, this index procedure has a significant learning curve and is often a surgical trainee’s first experience of laparoscopy. Virtual reality (VR) simulation allows supplementary technical skill acquisition outside of the operating room (OR), with demonstrable translational benefits to actual OR performance. Surprisingly, training in LA has received little attention. Thus, we aimed to validate a VR simulator model of LA as a training and assessment tool and to obtain benchmarks of proficient skill in order to develop a proficiency-based VR simulation technical skills training curriculum for LA.

Methods: 10 experienced (performed >50 LAs), 8 intermediate (10-30 LAs) and 20 inexperienced (<10 LAs) operators were recruited. Using a high-fidelity VR laparoscopic surgical simulator (LAP Mentor™), experienced and intermediate operators performed 2 repetitions of 5 guided procedural modules that navigate users through the component steps of LA using a variety of surgical techniques, as well as an unguided module that allows a full LA to be practiced without direction. Inexperienced operators performed 10 repetitions of these modules to facilitate learning curve analysis. Performance was assessed using simulator-derived metrics of economy of motion, number of movements, path length, idle time and task time. Construct validity was determined by comparing these metrics between the 3 groups for each module. Validated modules were used for curriculum construction, as were proficiency benchmarks for each metric (set as the median values obtained in the ‘experienced’ group) to provide performance goals.

Results: Guided modules demonstrated construct validity, as evidenced by statistically significant differences between the 3 groups in number of movements, path length, idle time and task time (p<0.05), with statistically significant learning curves that plateaued between the 6th and 9th sessions (p<0.01). Two of the 5 guided modules also exhibited construct validity for economy of motion. The unguided full LA module demonstrated construct validity for economy of motion (5.00 vs 7.17 vs 7.84, p<0.01), number of movements (1101 vs 690.5 vs 532, p<0.01), path length (1797.08 cm vs 1573.51 cm vs 1315.09 cm, p<0.01), idle time (325.43 s vs 160.44 s vs 118.45 s, p<0.01) and task time (864.49 s vs 477.2 s vs 352.12 s, p<0.01), with learning curves that failed to plateau. Examples of proficiency criteria used for curriculum construction include: task time (352 s), path length (1315 cm), and number of movements (532) for the unguided module.

Conclusions: A VR simulator model of LA was demonstrated to be a valid training and assessment tool. Consequently, the first evidence-based technical skills training curriculum for LA was constructed, facilitating trainees’ acquisition of proficiency.

56.06 Position Clustering: A Novel Approach to Quantifying Laparoscopic Port Placement

D. N. Rutherford1, A. D. D’Angelo1, C. Kwan1, P. B. Barlow1, C. M. Pugh1  1University Of Wisconsin,Department Of Surgery, School Of Medicine And Public Health,Madison, WI, USA 2University Of Wisconsin,Department Of Kinesiology, School Of Education,Madison, WI, USA

Introduction: Port placement is critical for surgical performance. Quantification of port selection may improve assessment of laparoscopic skill. The study aim was to use a novel objective method to assess port placement.

Methods: Twenty PGY 2-3 surgery residents from Midwestern programs completed a port placement assessment, indicating locations for 5mm and 12mm ports on images of four hernias: epigastric, right lower quadrant (RLQ) incisional, right inguinal, and umbilical. Three of the four possible hernias were randomly selected for each assessment. The researchers then created a two-dimensional coordinate grid and assigned coordinates to port locations on each image. Inguinal hernia responses were subdivided into total extra-peritoneal (TEP), trans-abdominal pre-peritoneal (TAPP) and other approaches, given differences in port placement for these procedures. Hierarchical cluster analysis (HCA) was used to group clusters of 5mm and 12mm port locations for each hernia. Clusters were defined using Ward’s linkage method on squared Euclidean distances. The distances from port cluster centers to the edges of the hernias were then calculated.
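The clustering step described above can be sketched with SciPy's hierarchical clustering: Ward linkage over 2-D coordinates, labels cut at a fixed number of clusters, and cluster-center-to-edge distances. The coordinates and the hernia-edge point below are hypothetical, standing in for digitized port marks.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical 2-D port coordinates (cm): three marks near (1, 1)
# and three near (5, 5), standing in for residents' port choices.
ports = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1],
                  [5.0, 5.0], [5.1, 4.9], [4.8, 5.2]])

# Ward linkage; merge heights reflect the increase in within-cluster
# variance, which is how Ward's method uses squared Euclidean distances.
Z = linkage(ports, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")   # cut into 2 clusters

# Distance from each cluster center to a hypothetical hernia-edge point.
hernia_edge = np.array([2.0, 2.0])
for c in sorted(set(labels)):
    center = ports[labels == c].mean(axis=0)
    dist = float(np.linalg.norm(center - hernia_edge))
    print(c, center.round(2), round(dist, 2))
```

Here the two tight groups of marks are recovered as two clusters, and each cluster center's Euclidean distance to the edge point is the quantity plotted in the study's Figure 1.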

Results: All participants completed the port placement assessment (epigastric (N=6), right umbilical (N=14), right inguinal (N=20) and RLQ incisional (N=20)). Figure 1 depicts distances from cluster center to hernia edge. For 5mm ports, the amount of variance in port-to-hernia distance differed significantly between hernias (Levene's test of variance, F(6,28)=4.029, p<.005). The 'other' approach for the inguinal hernia contained the most variance, whereas the epigastric had the least. Residents who chose methods other than TEP or TAPP for the right inguinal hernia showed port placements similar to those observed for the RLQ incisional hernia. Multiple positions for 12mm ports were consistently either near (7.98-8.75 cm) or far (11.7-15.3 cm) from the hernia edge. For 12mm ports, the amount of variance in port-to-hernia distance also differed significantly between hernias (Levene's test of variance, F(6,28)=2.83, p=.028). Both inguinal TAPP and TEP had the least variance and the incisional hernia had the most.

Conclusion: HCA was successfully used to classify variability in port placement. The amount of variability in port distance to hernia edge differed depending on the type of hernia. Interestingly, participants who did not use the TEP or TAPP approach demonstrated clusters similar to the RLQ incisional hernia repair, despite different anatomic considerations with these repairs. Future work will focus on employing cluster analysis to investigate expert-novice differences and other factors central to port placement, including between-port distance and port pattern angles.