37.04 Results of Non-operative Management of Acute Limb Ischemia in Infants

S. Wang1, A. R. Gutwein1, N. A. Drucker1, R. L. Motaganahalli1, M. C. Dalsing1, B. W. Gray1, M. P. Murphy1, G. W. Lemmon1  1Indiana University School Of Medicine,Indianapolis, IN, USA

Objective:

Acute limb ischemia (ALI) in infants poses a challenge to the clinician secondary to poor operative outcomes, limb loss risk, and life-long morbidity.  This retrospective study reviews a 10-year institutional experience with non-operative management of ALI in infants.


Methods:

Infants (age ≤ 12 months) diagnosed with ALI by duplex and treated with initial non-operative management at a tertiary care dedicated children’s hospital were identified via vascular laboratory lower extremity arterial duplex records. Patient demographics, injury characteristics, treatment given, and outcomes were abstracted via chart review and presented using descriptive statistics. Continuous variables are presented as mean ± standard deviation.


Results:    

During the study period, a total of 25 (28% female) infant patients were diagnosed with ALI. The average age for this cohort was 3.5 ± 3.2 months. The majority of cases were secondary to iatrogenic injury (88%) from arterial cannulation (Table). Injury sites were concentrated in the lower extremities (84%) as compared with the upper. Absence of Doppler signals was noted in 64% of infants, while limb cyanosis was observed in 60% at the time of presentation.


Infants were initially treated with anticoagulation (80%) when possible. Two patients failed non-operative management and required thrombolysis secondary to progression of thrombus burden while anticoagulated. There were no major (above-ankle) amputations at 30 days. Three deaths occurred within 30 days; all were unrelated to limb ischemia. In the 30-day survivors, overall duration of follow-up was 52.1 ± 37.7 months. One infant required above-knee amputation six weeks after diagnosis, resulting in an overall limb salvage rate of 96%. Long-term morbidity included two patients with a chronic wound of the affected limb and one patient with limb length discrepancy. No subjects reported claudication at the latest follow-up appointment. Additionally, all patients were independently ambulatory except for one female who was using a walker with leg braces.


Conclusions:

In contrast to the adult population, ALI in infants can be managed non-operatively with anticoagulation. Long-term follow-up continues to demonstrate excellent functional results and minimal disability.

37.01 Mortality for Surgical Repair is Lower than for Ligation in Patients with Portal Vein Injury

J. Sabat1, T. Tan2, B. Hoang2, P. Hsu2, Q. Chu3  1Banner University Medical Center/ University of Arizona,Tucson, AZ, USA 2University Of Arizona,Tucson, AZ, USA 3Louisiana State University Health Sciences Center,New Orleans, LA, USA

Introduction:
Portal vein injury is uncommon, and the optimal treatment is controversial. We compared the outcomes of ligation versus repair of portal vein injury utilizing the National Trauma Data Bank (NTDB).

Methods:
All adult patients who suffered portal vein injury were identified from the NTDB (2002-2014) by International Classification of Diseases, Ninth Revision (ICD-9) diagnosis codes. Patients were stratified by treatment modality into observation, ligation, and surgical repair using ICD procedure codes. Outcomes including hospital mortality, bowel resection, and length of stay (LOS) were compared between ligation and surgical repair by two-sample t-test or χ2 test as appropriate. Multivariable analyses were performed with logistic regression.

Results:
Among 752 patients with portal vein injury, 345 patients (45.9%) were observed, 103 patients (13.7%) had ligation, and 304 (40.4%) underwent surgical repair. Over 95% were from penetrating trauma, and overall mortality was 49%. Age, gender, injury severity score (ISS), Glasgow Coma Scale (GCS), presenting blood pressure, and heart rate were similar between the groups that underwent ligation and surgical repair. Hospital mortality (59.2% vs. 47.7%, p=.08), bowel resection (1.9% vs. 1%, p=.55), and LOS (12.5 vs. 15 days, p=.08) were also comparable between ligation and repair in univariable analysis. In multivariable analysis, hospital mortality was significantly lower for surgical repair compared with ligation (OR 0.63, 95% CI 0.40-0.99, p=.04).

Conclusion:
Portal vein injury is caused predominantly by penetrating trauma and is associated with significant mortality and morbidity. Surgical repair is associated with significantly lower mortality than ligation of the portal vein and should be performed if feasible.

37.02 Impact of Glucose Control and Regimen on Limb Salvage in Patients Undergoing Vascular Intervention

J. L. Moore1, Z. Novak1, M. Patterson1, M. Passman1, E. Spangler1, A. W. Beck1, B. J. Pearce1  1University Of Alabama at Birmingham,Division Of Vascular Surgery And Endovascular Therapy,Birmingham, Alabama, USA

Introduction:

Studies have demonstrated a correlation between levels of glycosylated hemoglobin (HbA1c) in diabetic patients and the incidence of both peripheral artery disease (PAD) and lower extremity amputation (AMP). However, the impact of glucose control on outcomes in patients undergoing open or endovascular PAD treatment has not been examined. The purpose of this study is to assess the effect of HbA1c and medication regimen on amputation-free survival (AFS) in patients undergoing treatment for limb salvage.

Methods:

Limb salvage patients with a baseline HbA1c within one month of treatment were identified from a prospectively maintained vascular registry queried from 2010-17.  The hospital EMR was cross-referenced to identify patients with HbA1c measured within 3 months of the index procedure.  Patient records were examined and instances of AMP, type of treatment (ENDO v OPEN), demographics, co-morbidities, and diabetic glycemic control modalities were analyzed.  Diagnosis of diabetes was determined by a combination of HbA1c, physician diagnosis, and usage of diabetic medications.

Results:

Our query found 306 eligible limbs for analysis. Worse AFS was associated with diabetes (82.6%, p=0.002), non-white race (56.5%, p=0.006), insulin-only diabetic control (52.2%, p<0.001), post-operative creatinine >1.3 mg/dL (38.0%, p<0.001), and dialysis (26.1%, p<0.001). HbA1c was not significantly associated with AFS. Survival analysis (Kaplan-Meier plots) revealed that a diagnosis of diabetes was significantly associated with worse AFS in the entire cohort (log-rank p=0.011) [Graph 1] as well as in the critical limb ischemia subgroup (Rutherford >3) (log-rank p=0.049; not pictured). Logistic regression demonstrated an association of age (p=0.040, AOR=1.027), post-operative creatinine level (p=0.003, AOR=1.247), non-white race (p=0.048, AOR=0.567), and insulin-only diabetic control (p=0.002, AOR=2.535) with worse AFS across all limbs surveyed.

Conclusion:

Diabetic patients on an insulin-only regimen have significantly worse AFS than non-diabetic patients or those on an insulin-sensitizing regimen. This may represent a surrogate for disease severity, but the type of medication may present a modifiable risk factor to improve limb salvage.

36.09 Opioid Prescribing vs. Consumption in Patients Undergoing Hiatal Hernia Repair

A. A. Mazurek1, A. A. Brescia1, R. Howard1, A. Schwartz1, K. Sloss1, A. Chang1, P. Carrott1, J. Lin1, W. Lynch1, M. Orringer1, R. Reddy1, P. Lagisetty2, J. Waljee1, M. Englesbe1, C. Brummett1, K. Lagisetty1  1University Of Michigan,Department Of Surgery,Ann Arbor, MI, USA 2Ann Arbor VA,Division Of General Internal Medicine And Center For Clinical Management And Research,Ann Arbor, MI, USA

Introduction:  Recent studies have demonstrated a high prevalence of excessive opioid prescribing after surgery, and the incidence of persistent opioid use is among the highest after thoracic surgery. Procedure-specific prescribing guidelines have been shown to reduce excessive prescribing in certain health systems; however, this has not been studied within thoracic surgery. There are few data available on how many opioids patients actually take compared with how many they are prescribed following surgery. To establish evidence-based guidelines to reduce excessive prescribing, this study compared postoperative opioid prescribing dosages to actual usage following open and laparoscopic hiatal hernia repair (HHR).

Methods:  Retrospective chart review was performed on 119 patients who underwent open (transthoracic and transabdominal) or laparoscopic HHR between January and December 2016 and received an opioid prescription after surgery. The patient cohort consisted of opioid-naïve patients, defined as individuals not using opioids at the time of surgery. Patients underwent a telephone survey regarding postoperative opioid use. The amount of opioid prescribed was quantified in oral morphine equivalents (OME) to adjust for varying potencies between medications. Descriptive statistics (median and interquartile range, IQR) were used to summarize variables. Mann-Whitney U tests were used to compare the OME prescribed vs. actual patient use within the patient cohort.
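As a concrete illustration of this standardization step, the sketch below converts prescriptions to OME and runs the prescribed-versus-used comparison; the oxycodone (1.5x) and hydrocodone (1.0x) factors match the equivalences quoted in these abstracts, while the function, names, and example values are hypothetical rather than the authors' actual analysis code.

```python
from scipy.stats import mannwhitneyu

# Oral morphine equivalents (OME) per mg of drug; the oxycodone and
# hydrocodone factors follow the equivalences quoted in these abstracts.
ORAL_OME_FACTOR = {"oxycodone": 1.5, "hydrocodone": 1.0, "morphine": 1.0}

def prescription_to_ome(drug: str, dose_mg: float, quantity: int) -> float:
    """Convert one prescription to total mg OME."""
    return dose_mg * quantity * ORAL_OME_FACTOR[drug]

# e.g., 40 tablets of 5 mg oxycodone -> 300 mg OME
assert prescription_to_ome("oxycodone", 5, 40) == 300.0

# Hypothetical prescribed vs. consumed totals, compared as in the Methods.
prescribed = [300, 225, 375, 300, 250, 420]
consumed = [150, 25, 300, 100, 75, 225]
stat, p = mannwhitneyu(prescribed, consumed, alternative="two-sided")
```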

Results: 91 opioid-naïve patients (37 open HHR; 54 laparoscopic HHR) were surveyed, with a response rate of 69% (n=63; 27 open, 36 lap). Mean age was 59 ± 14 years and the cohort was 65% female. Median follow-up time was 305 days (IQR 209-463). The overall median prescription size was 300 mg OME (IQR 225-375) and median patient use was 150 mg OME (IQR 25-300) (p<0.0001). Following open HHR, median prescription size was 350 mg OME (IQR 250-420) and median patient use was 225 mg OME (IQR 105-300) (p=0.001). Following laparoscopic HHR, median prescription size was 270 mg OME (IQR 200-350) and median patient use was 106 mg OME (IQR 6-295) (p<0.0001). In comparing open vs. laparoscopic HHR, significantly more OME were prescribed for open (p=0.01), with a difference in median patient use that did not reach statistical significance (p=0.08).

Conclusion: Patients use far fewer opioids than they are prescribed following open and laparoscopic HHR. While there is excess prescribing in both cohorts, laparoscopic procedures tended to have a greater difference in amount prescribed versus actual usage. These findings may be used to develop guidelines that better standardize postoperative prescribing to reduce overprescribing. 

36.10 Trends in Postoperative Opioid Prescription Size: 2010 – 2014

J. Hur1, J. S. Lee1, H. M. Hu1, M. P. Klueh1, R. A. Howard1, J. V. Vu1, C. M. Harbaugh1, C. M. Brummett2, M. J. Englesbe1, J. F. Waljee1  1University Of Michigan,Department Of Surgery,Ann Arbor, MI, USA 2University Of Michigan,Department Of Anesthesiology,Ann Arbor, MI, USA

Introduction:
Despite growing concerns about the dangers of prescription opioids, deaths from opioid overdoses have increased in recent years, reaching over 33,000 fatalities in 2015. Surgeons play a key role in this epidemic, providing 10% of opioid prescriptions in the United States. In this context, it is unclear how opioid prescribing by surgeons has changed during this time period. In this study, we examined trends in postoperative opioid prescription size over time. We hypothesized that postoperative opioid prescription size would increase during this time period.

Methods:
Using a nationwide dataset of employer-based insurance claims, we identified opioid-naive patients who underwent laparoscopic cholecystectomy, breast procedures (lumpectomy and mastectomy), or wide local excision from 2010 – 2014. Opioid prescriptions were obtained from pharmacy claims and converted to oral morphine equivalents (OMEs) for comparison. Our primary outcome measure was the size of the first opioid prescription between the day of surgery and 14 days after discharge. We calculated the mean prescription size with 95% confidence intervals for each year and procedure type. Mean prescription sizes were compared using t-tests.

Results:
In this cohort, 134,085 opioid-naïve patients underwent surgery during the study period. Of these patients, 108,893 (81.2%) underwent laparoscopic cholecystectomy (mean age 46 ± 15 years; 71.1% female); 19,199 (14.3%) underwent breast procedures (mean age 58 ± 12 years, 99.8% female); and 5,993 (4.5%) underwent wide local excision (mean age 55 ± 14 years, 45.1% female). Figure 1 shows the mean opioid prescription size by year and procedure type. For laparoscopic cholecystectomy, opioid prescriptions markedly increased in size from 230 OMEs in 2010 (equivalent to 46 tablets of 5 mg hydrocodone) to 475 OMEs in 2014 (equivalent to 95 tablets of 5 mg hydrocodone). This increase was statistically significant (p<0.001). Prescription size for breast procedures also increased significantly from 228 OMEs to 394 OMEs (p<0.001). For wide local excision, prescription size increased from 200 OMEs to 277 OMEs, but this difference was not statistically significant (p=0.10).

Conclusion:
For opioid-naïve patients undergoing common elective surgical procedures, opioid prescription size continued to increase from 2010 – 2014, reaching the equivalent of almost 100 tablets of 5 mg hydrocodone in 2014. Given recent studies showing most surgical patients require only 10 – 15 tablets of 5 mg hydrocodone, surgeons should focus on tailoring opioid prescriptions to better match actual patient requirements. 

36.08 Development & Usage of a Computerized Simulation Model to Improve Operating Room Efficiency

L. H. Stevens1,2, N. Walke2, J. Hobbs2, T. Bell1, K. Boustany2, B. Zarzaur1  1IU School Of Medicine,General Surgery,Indianapolis, IN, USA 2IU Health,Perioperative Services,Indianapolis, IN, USA

Introduction:
Efficient usage of the operating rooms is crucial to a hospital’s mission and survival. Traditionally, the allocation of operating room (OR) block time to surgeons has been heavily influenced by historical usage patterns (which may no longer be relevant), local politics, and organizational culture instead of data-driven analysis of the most efficient OR allocation. We created a computerized simulation model of the ORs to drive more rational and efficient utilization. This model provides the ability to test proposed changes in block allocation, demonstrate the impact of those changes to the surgeons, and thus gain surgeons’ buy-in to the proposed changes before implementation.

Methods:
A discrete-event, adaptive, complex-system computerized simulation model was created based on big-data analysis of 3 years of historical OR data and an industrial engineering work-flow analysis of a 600-bed level-1 trauma hospital with 30 operating rooms. Data elements included: admission type, case urgency, number of cases by surgical specialty, equipment utilized, case duration, personnel required, and patient flow within the perioperative department (from patient check-in to discharge from the recovery room). The simulator provides the ability to model changes in OR block allocation by the full day or half day, create specialty-specific blocks, open OR blocks as “first-come, first-served,” set aside OR blocks for urgent or emergent cases, and/or close OR blocks, and then measure the impact of these changes on OR utilization and throughput. The simulator can test up to 8 different block allocation scenarios at a time and runs each scenario 10 times to assess the total and mean OR utilization over a month.
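To make the simulation approach concrete, the toy model below schedules a day of cases across a bank of ORs on a first-come, first-served basis and reports mean utilization. Every parameter (case count, the log-normal duration model, the 600-minute block day) is invented for illustration; the authors' simulator is far richer, covering urgency classes, staffing, and recovery-room flow.

```python
import heapq
import random

def simulate_day(n_ors: int = 30, n_cases: int = 90,
                 day_minutes: float = 600.0, seed: int = 0) -> float:
    """One day of first-come, first-served scheduling; returns mean OR utilization."""
    rng = random.Random(seed)
    # Min-heap of (time the OR becomes free, OR id); all ORs open at t=0.
    free_at = [(0.0, i) for i in range(n_ors)]
    heapq.heapify(free_at)
    busy_minutes = [0.0] * n_ors
    for _ in range(n_cases):
        duration = rng.lognormvariate(4.5, 0.5)  # right-skewed, ~90-min cases
        t, or_id = heapq.heappop(free_at)        # next OR to free up (FCFS)
        busy_minutes[or_id] += duration
        heapq.heappush(free_at, (t + duration, or_id))
    return sum(busy_minutes) / (n_ors * day_minutes)

# As in the Methods, run a scenario several times (here 10) and average.
print(sum(simulate_day(seed=s) for s in range(10)) / 10)
```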

Results:
Using actual OR case volumes, case urgencies, and specialty mix, the simulator was used to contrast the OR utilization achieved by the historical specialty-based OR block allocation (scenario #1) with total elimination of all specialty block allocation, making every OR open for elective scheduling on a “first-come, first-served” basis (scenario #2). Having all ORs open for “first-come, first-served” scheduling resulted in significantly higher total and mean OR utilization (total OR utilization: scenario 1 = 2,051.9 hours vs. scenario 2 = 2,236.4 hours, p=0.02; mean OR utilization: scenario 1 = 68.4% vs. scenario 2 = 74.5%, p=0.02).

Conclusion:
The usage of a computerized simulator of the ORs provides surgical leaders with a virtual laboratory to test experimental OR allocation scenarios that could increase OR utilization but would be far too radical to implement without the surgeons’ buy-in. Surgeon buy-in and implementation of new approaches to OR allocation are enhanced by this data-driven approach.

36.07 ED Utilization as a Comprehensive Healthcare Metric after Lumbar Spine Surgery

L. M. Pak1, M. A. Chaudhary1, N. K. Kwon1, T. P. Koehlmoos2, A. H. Haider1, A. J. Schoenfeld1  2Uniformed Services University Of The Health Sciences,Department Of Preventive Medicine,Bethesda, MD, USA 1Brigham And Women’s Hospital,Center For Surgery And Public Health,Boston, MA, USA

Introduction: Post-discharge emergency department (ED) visits represent a significant clinical event for patients and are an important quality metric in healthcare. The national volume of lumbar spine surgeries has risen dramatically and represents an increasingly large proportion of healthcare costs. Quantifying the use of the ED post-discharge and identifying factors that increase ED utilization are critical in evaluating current hospital practices and addressing deficiencies in patient care.

Methods: This study utilized claims from patients insured through TRICARE, the insurance plan of the Department of Defense. TRICARE data were queried for the years 2006-2014 for patients aged 18-64 years who had undergone one of three common lumbar spine surgery procedures (discectomy, spine decompression, spine fusion). Patient demographics, treatment characteristics, and follow-up information were abstracted from the claims data. Sponsor rank was used as a proxy for socio-economic status. Utilization of the ED at 30 and 90 days was the primary outcome. Multivariable logistic regression was used to identify independent factors associated with 30- and 90-day ED utilization following a lumbar spine procedure.

Results: In the period under study, 48,485 patients met inclusion criteria. Fifteen percent of patients (n=7,183) presented to the ED within 30 days post-discharge. The 30-day readmission rate was 5% (n=2,344). By 90 days post-discharge, 30% of patients (n=14,388) presented to an ED. The 90-day readmission rate was 8% (n=3,842). The overall 30-day and 90-day complication rates were 6% (n=2,802) and 8% (n=4,034), respectively. Following multivariable testing, female sex, increased Charlson comorbidity index, lower socio-economic status, fusion-based spine procedures, length of stay, and complications were associated with ED utilization within 30 and 90 days (Table). Dependent beneficiary status was associated with 90-day ED utilization only (OR 1.050, 95% CI 1.020-1.081).

Conclusion: Within 30- and 90-days after lumbar spine surgery, 15% and 30% of patients, respectively, sought care in the ED. However, only one-third of these patients had a complication recorded during the same period, and even fewer were subsequently readmitted. These findings suggest a high rate of unnecessary ED utilization. We have identified several characteristics associated with the risk of ED utilization, which may present viable targets for intervention in the peri-operative period. 

36.04 Surgical Procedures in Health Professional Shortage Areas: Impact of a Surgeon Incentive Payment Plan

A. Diaz1, E. Schneider1, J. Cloyd1, T. M. Pawlik1  1Ohio State University,Columbus, OH, USA

Introduction:  The American College of Surgeons has predicted a physician shortage in the US with a particular deficiency in general surgeons. Any shortage in the surgical workforce is likely to impact underserved areas. The Affordable Care Act (ACA) established a Centers for Medicare & Medicaid Services (CMS) based 10% reimbursement bonus for general surgeons in Health Professional Shortage Areas (HPSAs). We sought to assess the impact of the ACA Surgery Incentive Payment (SIP) on surgical procedures performed in HPSAs.

Methods:  Hospital utilization data from the California Office of Statewide Health Planning and Development between January 1, 2006 and December 31, 2015 were used to categorize hospitals according to HPSA location. A difference-in-differences analysis was used to measure the effect of the SIP on year-to-year differences for in- and out-patient surgical procedures by hospital type pre- (2006-2010) versus post- (2011-2015) SIP implementation.
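The difference-in-differences estimate reduces to the interaction term of a regression over the two-by-two design (HPSA status by pre/post period). A minimal sketch, with invented hospital-year counts and hypothetical column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical hospital-year procedure counts; hpsa = 1 for HPSA hospitals,
# post = 1 for years after SIP implementation (2011-2015).
df = pd.DataFrame({
    "procedures": [980, 1010, 955, 990, 430, 425, 470, 495],
    "hpsa":       [0,   0,    0,   0,   1,   1,   1,   1],
    "post":       [0,   1,    0,   1,   0,   1,   0,   1],
})
# The hpsa:post interaction coefficient is the difference-in-differences
# estimate of the SIP effect on procedure volume.
fit = smf.ols("procedures ~ hpsa * post", data=df).fit()
print(fit.params["hpsa:post"])
```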

Results: Among 409 hospitals, two hospitals performed surgery in a designated HPSA. Both HPSA hospitals were located in a rural area, were non-teaching, and had <500 beds. The number of total surgical procedures was similar at both non-HPSA (Pre: n=2,106,048 vs. Post: n=2,121,550) and HPSA (Pre: n=8,734 vs. Post: n=8,776) hospitals. Over the time period examined, inpatient (IP) procedures decreased (non-HPSA, Pre: 933,388 vs. Post: 890,322; HPSA, Pre: 5,166 vs. Post: 4,301), while outpatient (OP) procedures increased (non-HPSA, Pre: 1,172,660 vs. Post: 1,231,228; HPSA, Pre: 3,568 vs. Post: 4,475) (all p<0.05). Post-SIP implementation, surgical procedures performed at HPSA hospitals markedly increased compared with non-HPSA hospitals (IP non-HPSA: -625 vs. HPSA: 363; OP non-HPSA: -111 vs. HPSA: 482) (both p<0.05). Of note, while the number of ORs increased over time among non-HPSA hospitals (Pre: n=3,042 vs. Post: n=3,206, p<0.05), OR numbers remained stable at HPSA hospitals (Pre: n=16 vs. Post: n=17). To estimate population-level effects of the SIP, a difference-in-differences model was used to adjust for cluster-related changes, as well as preexisting differences among non-HPSA and HPSA hospitals. Using this approach, the impact of the SIP on surgical procedure volume among HPSA relative to non-HPSA hospitals was noted to be considerable (Figure 1).

Conclusion:  CMS SIP implementation was associated with a significant increase in the number of surgical procedures performed at HPSA hospitals relative to non-HPSA hospitals, essentially reversing the trend from negative to positive. Further analyses are warranted to determine whether bonus payment policies actually help to fill a need in underserved areas or whether incentives simply shift procedures from non-HPSA to HPSA hospitals.

36.05 An Analysis of Preoperative Weight Loss and Risk in Bariatric Surgery

L. Owei1, S. Torres Landa1, C. Tewksbury1, V. Zoghbi1, J. H. Fieber1, O. E. Pickett-Blakely1, D. T. Dempsey1, N. N. Williams1, K. R. Dumon1  1Hospital Of The University Of Pennsylvania,Gastrointestinal Surgery,Philadelphia, PA, USA

Introduction:

Preoperative weight loss theoretically reduces the risk of surgical complications following bariatric surgery. Current guidelines have focused on preoperative weight loss as an important element of patient care and, for some payers, a requirement for prior authorization. However, the association between preoperative weight loss and surgical complications remains unclear. The purpose of this study is to test the hypothesis that preoperative weight loss lowers operative risk in bariatric surgery.

Methods:

We conducted a retrospective analysis using the inaugural (2015) American College of Surgeons Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) dataset. Only patients who had primary laparoscopic gastric bypass, open gastric bypass, or laparoscopic sleeve gastrectomy were included. Patients were stratified into 4 groups by percent preoperative total body weight (TBW) loss. Univariate analyses were performed. Logistic regression was also used to determine the association between preoperative weight loss and surgical outcomes (mortality, reoperation, readmission, and intervention) with adjustment for potential confounders.
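The four-group stratification corresponds to fixed, left-closed bins of percent TBW loss; a minimal pandas sketch with hypothetical values is shown below (the binned factor would then enter the logistic regression with the <1% group as the reference level):

```python
import numpy as np
import pandas as pd

# Hypothetical percent preoperative TBW loss for six patients.
pct_tbw_loss = pd.Series([0.4, 1.5, 3.2, 7.1, 5.0, 0.9])

# Left-closed bins reproduce the groups in the Methods:
# <1%, 1-2.99%, 3-5.99%, and >=6%.
tbw_group = pd.cut(
    pct_tbw_loss,
    bins=[-np.inf, 1.0, 3.0, 6.0, np.inf],
    right=False,
    labels=["<1%", "1-2.99%", "3-5.99%", ">=6%"],
)
print(tbw_group.value_counts())
```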

Results:

A total of 120,283 patients were included in the analysis, with a mean age of 44.6 (±12.0) years; 78.7% were female. Procedures were laparoscopic sleeve gastrectomy (69.0%), laparoscopic gastric bypass (30.3%), and open gastric bypass (1.2%). Of the total number of patients, 25% had <1% preoperative TBW loss, 22% had 1-2.99%, 29% had 3-5.99%, and 24% had ≥6%. When stratified by percent TBW loss, significant differences were found in age, sex, race, co-morbidities, smoking, and ASA classification (p<0.05). Using the <1% preoperative TBW loss group as a reference, logistic regression revealed that a TBW loss of ≥3% was associated with a significant decrease in operative (30-day) mortality (p = 0.012). Preoperative weight loss in excess of 6% TBW was not associated with a further decrease in operative mortality. There was no significant association between percent TBW loss and reoperation, readmission, or intervention within 30 days of operation (Table 1).

Conclusion:

A preoperative reduction of at least 3% of TBW is associated with a significant reduction in operative mortality following bariatric surgery. These results suggest that a modest preoperative weight loss may substantially reduce operative mortality risk in this population. Further studies are needed to elucidate the association between preoperative weight loss and other outcome measures (reoperation, readmission, intervention).


**The ACS MBSAQIP and the participating centers are the source of the data and are not responsible for the validity of the data or the conclusions.

36.06 Practices and Barriers Regarding Transitions of Care for Postoperative Opioid Prescribing

M. P. Klueh1, J. S. Lee1, K. R. Sloss1, L. A. Dossett1, M. J. Englesbe1, C. M. Brummett2, J. F. Waljee1  1University Of Michigan,Department Of Surgery,Ann Arbor, MI, USA 2University Of Michigan,Department Of Anesthesiology,Ann Arbor, MI, USA

Introduction:
Persistent opioid use is common following surgery, even among previously opioid-naïve patients. To date, it remains unclear how clinicians coordinate opioid prescribing for patients who require ongoing opioid analgesics after routine postoperative care is complete. To better understand these transitions of care, we conducted a qualitative study of surgeons and primary care physicians to describe practices and barriers for opioid prescribing in surgical patients who develop new persistent opioid use.

Methods:
We conducted face-to-face interviews with 11 physicians at a single academic healthcare system using a semi-structured interview guide. Participants included surgeons (n=4 resident surgeons; n=4 attending surgeons) and primary care physicians (n=3 attending physicians). We developed open-ended questions to describe the clinical course of patients after surgery, practices and attitudes for postoperative opioid prescribing, and the transition to chronic pain management. Interviews (15-30 minutes) were audiotaped, transcribed verbatim, and independently coded for a priori and emergent themes using the constant comparative method. Open and axial coding were applied using narrative analysis.

Results:
Table 1 summarizes key themes in transitions of care for postoperative opioid prescribing. Participants reported a wide range of underlying causes for the need to transition patients to chronic pain management, including provider confidence, signs of addiction, and time from operation. Practices for transitioning care ranged from passive transitions with no closed loop communication, to active transitions with continued follow-up to ensure the patient had transitioned to another physician for pain management. Barriers to transitioning care included a lack of standardized practices, lack of time, and limited access to pain specialists.

Conclusion:
Surgeons and primary care physicians describe varying practices and barriers for transitions of care in patients who develop new persistent opioid use after surgery. These findings may help identify interventions to improve coordination of care for these vulnerable patients.

36.02 WHO Surgical Safety Checklist Modification: Do Changes Emphasize Communication and Teamwork?

I. Solsky1, J. Lagoo1, W. Berry1,2, J. Baugh3, L. A. Edmondson1, S. Singer2, A. B. Haynes1,4,5  1Ariadne Labs,Boston, MA, USA 2Harvard School Of Public Health,Department Of Health Policy And Management,Boston, MA, USA 3University Of California – Los Angeles,Department Of Emergency Medicine,Los Angeles, CA, USA 4Harvard School Of Medicine,Surgery,Brookline, MA, USA 5Massachusetts General Hospital,Department Of Surgery,Boston, MA, USA

Introduction:  Adopted by thousands of hospitals globally, the World Health Organization’s (WHO) Surgical Safety Checklist is meant to be modified to best serve local practice, but little is known about the types of changes that are made. The goal of this study is to provide a descriptive analysis of the extent and content of checklist modification.

Methods:  Non-subspecialty surgical checklists in English were obtained through an online search along with targeted requests sent to hospitals. A detailed coding scheme was created to capture modifications to checklist content and formatting. Overall checklist information was collected, such as the total number of lines of text and the team members explicitly mentioned. Information was also collected on modifications made to individual items and on which items were most frequently deleted. New items added were also captured. Descriptive statistics were performed.

Results: 161 checklists from 17 US states (n=116) and 11 countries (n=45) were analyzed. Every checklist was modified. Compared to the WHO checklist, those in our sample contained more lines of text (median: 63 (IQR: 50-73; Range: 14-216) vs. 56) and more items (36 (IQR: 30-43; Range: 14-80) vs. 28). Checklists added a median of 13 new items (IQR: 8-21, Range: 0-57). Items most frequently added referenced implants/special equipment (added by 83.23% of checklists), DVT prophylaxis/anticoagulation (74.53%), patient positioning (62.73%), and an opportunity to voice questions/concerns (55.28%). Despite increasing in size, checklists removed a median of 5 WHO items (IQR: 2-8; Range: 0-19). The most frequently removed items were the pulse oximeter check (removed in 75.16% of checklists), the articulation of patient-specific concerns from the nurse (47.83%) or anesthetist (38.51%), and the surgeon-led discussion of anticipated blood loss (45.96%) or case duration (42.24%), the latter 4 items comprising part of the WHO checklist’s 7-item “Anticipated Critical Events” section, which is intended for the exchange of critical information. The surgeon was not explicitly mentioned as participating in any part of the checklist in 14.29% of checklists; the anesthesiologist/CRNA in 14.91%, the circulator in 9.94%, and the scrub in 77.64%.

Conclusion: As encouraged by the WHO, checklists are highly modified. However, many are enlarged with additional lines and items that may not prompt discussion or encourage teamwork.  Of particular concern is the frequent removal of items from the WHO’s “Anticipated Critical Events” section, which is central to the checklist’s efforts to prevent complications by giving all team members an opportunity to voice concerns together. Leadership involved in checklist creation should ensure that checklists can be easily implemented, are inclusive of all team members, and promote a culture of safety. Further research is needed to assess the clinical impact of checklist modifications.


36.03 Crash Telemetry-Based Injury Severity Prediction Outperforms First Responders in Field Triage

K. He1, P. Zhang1,2, S. C. Wang1,2  2International Center Of Automotive Medicine,Ann Arbor, MI, USA 1University Of Michigan,Department Of Surgery,Ann Arbor, MI, USA

Introduction:

Early identification of severely injured patients in Motor Vehicle Collisions (MVC) is crucial. Mortality in this population is reduced by one quarter if these patients are directed to a level I trauma center rather than a non-trauma center. The Centers for Disease Control and Prevention (CDC) Guidelines for Field Triage of Injured Patients recommend that occupants at 20% or greater risk of Injury Severity Score (ISS) 15+ be urgently transported to a trauma center. With the increasing availability of vehicle telemetry technology, there is great potential for advanced automatic collision notification (AACN) systems to improve trauma outcomes by detecting patients at risk for severe injury and facilitating early transport to trauma centers. We compared first responder field triage to a real-world field test of our updated injury severity prediction (ISPv2) algorithm using crash outcomes data from General Motors vehicles equipped with OnStar.

Methods:

We performed a literature search to determine the sensitivity of first responder identification of ISS 15+ MVC occupants. We used National Automotive Sampling System Crashworthiness Data System (NASS-CDS) data from 1999-2013 to construct a functional logistic regression model predicting the probability that one or more occupants in a non-rollover MVC would have ISS 15+ injuries. Variables included principal direction of force, change in velocity, multiple vs. single impacts, presence of older occupants (≥55 years old), presence of female occupants, belt use, and vehicle type. We validated our model using 2008-2011 crash data from Michigan vehicles with AACN capabilities identified from OnStar records. We confirmed telemetry crash data sent from the vehicles using police crash reports. We obtained medical records and imaging data for patients transported from the scene for evaluation and treatment. ISS was assumed to be ≤15 for MVC occupants not transported for medical assessment. We used our ISPv2 algorithm and transmitted telemetry data to predict the probability that an occupant had ISS 15+ injuries and compared our prediction to the observed injuries for each occupant and each vehicle.
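In spirit, the ISP step is a logistic regression from crash telemetry features to the probability of ISS 15+, thresholded at the CDC's 20% triage cutoff. The sketch below uses invented feature encodings and training rows purely for illustration; the actual ISPv2 model was fit to NASS-CDS 1999-2013 data using the variables listed above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented feature rows: [delta_v_kph, multiple_impacts, occupant_55plus,
#                         female_occupant, unbelted, frontal_pdof]
X = np.array([
    [45, 1, 0, 1, 1, 1],
    [20, 0, 0, 0, 0, 1],
    [60, 1, 1, 1, 1, 0],
    [15, 0, 1, 0, 0, 1],
    [55, 0, 0, 1, 1, 0],
    [25, 1, 0, 0, 1, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = some occupant sustained ISS 15+

model = LogisticRegression().fit(X, y)
p_iss15 = model.predict_proba(X)[:, 1]
# CDC field-triage rule: recommend a trauma center at >=20% predicted risk.
send_to_trauma_center = p_iss15 >= 0.20
```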

Results:
Recent studies have found field triage to be 50-66% sensitive in identifying ISS 15+ occupants. Our study population included 924 occupants in 836 crash events. The median age was 41 years, 57% were female, 21% were right-sided passengers, and 1.2% experienced an ISS 15+ injury. Our ISPv2 model was 72.7% sensitive (ISPv2 ≥0.2 when ISS 15+) and 93% specific (ISPv2 <0.2 when ISS ≤15) for identifying seriously injured MVC patients.

Conclusion:
Our second-generation ISP algorithm was highly specific and more sensitive than current field triage in identifying MVC patients at risk for ISS 15+ injuries. This real-world field study shows that telemetry data transmitted before dispatch of emergency medical services are superior in selecting patients who require urgent transfer to trauma centers.

35.09 The Significance of Laparoscopic Bursectomy Via an Outside Bursa Omentalis Approach in Gastric Cancer

L. Zou1, B. Zheng1, L. Zou1  1Guangdong Provincial Hospital Of Chinese Medicine,Department Of Gastrointestinal Surgery,Guangzhou, GUANGDONG, China

Introduction:

This study aimed to compare the safety, feasibility, and short-term effects of laparoscopic bursectomy and D2 radical gastrectomy (LBDRG) with those of laparoscopic D2 radical gastrectomy (LDRG) in advanced gastric cancer (AGC).

Methods:

We retrospectively analyzed data on 68 consecutive patients undergoing LBDRG via an outside bursa omentalis approach (OBOA) from August 2012 to December 2014. The surgical outcomes of patients who underwent LBDRG were matched and compared with those of patients who underwent classic LDRG in our department during the same period.

Results:

The clinicopathological characteristics were similar between the two groups following matching. Although the mean operative time was longer in the LBDRG group than in the LDRG group (323.4±20.70 min vs. 288.5±21.76 min; p<0.05), the number of lymph nodes dissected was significantly greater in the LBDRG group than in the LDRG group (30.49±5.41 vs. 23.2±4.87; p<0.05). Additionally, there was no significant difference in the rate of local recurrence or metastases within the median two-year follow-up between the LBDRG group (5.9% [4/68]) and the LDRG group (8.8% [6/68]). 

Conclusion:
These results suggest that LBDRG via an OBOA is technically safe and feasible for AGC patients, and that its short-term oncological effects are equal to those of LDRG.

35.10 Isolated Blunt Traumatic Brain Injury is Associated with Fibrinolysis Shutdown

J. Samuels1, E. Moore2, A. Banerjee1, C. Silliman3, J. Coleman1, G. Stettler1, G. Nunns1, A. Sauaia1  1University Of Colorado Denver,Department Of General Surgery,Aurora, CO, USA 2Denver Health Medical Center,Department Of Surgery,Aurora, CO, USA 3Children’s Hospital Colorado,Pediatrics-Heme Onc And Bone Marrow Transplantation,Aurora, CO, USA

Introduction:

While trauma-induced coagulopathy (TIC) contributes to mortality in seriously injured patients, the additive effect of traumatic brain injury (TBI) remains unclear. Prior studies have suggested TBI initiates an exaggerated bleeding diathesis with decreased clot formation and increased clot degradation in the initial post-injury phase. However, this coagulation phenotype has not been assessed using comprehensive coagulation assays, such as thrombelastography (TEG). Such an assessment is urgently needed given the growing practice of empiric anti-fibrinolytic therapy. Therefore, the purpose of this study is to define the coagulation phenotypes of patients with TBI, compared with other injury patterns, as measured by TEG as well as conventional coagulation tests (CCT).

Methods:

The TAP (Trauma Activation Protocol) database is a prospective assessment of TIC in all patients meeting criteria for trauma activation at a level I trauma center. Patients were categorized into three groups: 1) isolated TBI (I-TBI): AIS head ≥3, ED GCS ≤8, and AIS ≤2 for all other body regions; 2) TBI with polytrauma (TBI+Torso): AIS head ≥3 and at least one AIS ≥3 for other regions; and 3) non-TBI (I-Torso): AIS head <3 and at least one AIS ≥3 for other regions. Phenotype frequency was compared using the chi-square test. Significance was declared at p<0.05.
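The phenotype assignment implied by the Methods is a thresholding of the TEG Ly30 value, roughly as sketched below; the shutdown cutoff (Ly30 < 0.9%) is stated in the Results, while the 3% hyperfibrinolysis cutoff is an assumed convention not given in this abstract.

```python
def fibrinolysis_phenotype(ly30_pct: float, hyper_cutoff: float = 3.0) -> str:
    """Classify a TEG Ly30 value (% clot lysis at 30 min) into a phenotype.

    The <0.9% shutdown cutoff is from the Results; the 3% hyperfibrinolysis
    cutoff is an assumption, not stated in this abstract.
    """
    if ly30_pct < 0.9:
        return "shutdown"
    if ly30_pct >= hyper_cutoff:
        return "hyperfibrinolysis"
    return "physiologic"

# Phenotype frequencies across the three injury groups would then be
# compared with a chi-square test, as described above.
print(fibrinolysis_phenotype(0.2), fibrinolysis_phenotype(1.5), fibrinolysis_phenotype(7.0))
```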

Results:

There were 186 qualified patients, 38 with I-TBI, 55 with TBI+Torso, and 93 non-TBI patients enrolled between 2013 and 2016. Arrival SBP was higher for I-TBI (138 mmHg) compared to I-Torso (108 mmHg), but there were no significant differences in signs of shock (lactate, base deficit). Also, no differences existed between the three groups’ INRs, PTTs, or TEG measurements (ACT, Angle, and MA).

The distribution of fibrinolysis phenotypes is depicted in Figure 1. I-TBI and TBI+Torso had a significantly higher incidence of fibrinolysis shutdown (rTEG Ly30 <0.9%) compared with I-Torso (p=0.045), and this persisted when comparing only patients in shock (base deficit ≥6), with a third of patients in the I-TBI group demonstrating shutdown (p<0.01). Hyperfibrinolysis occurred in a minority of patients (≤33%) in all three groups. Nearly 50% of patients with shock demonstrated shutdown after experiencing a TBI with other injuries.

Conclusion:

Historically, TBI has been associated with a coagulopathy characterized by hyperfibrinolysis. In contrast, this study found that TBI (isolated or with other injuries) was associated with fibrinolysis shutdown, with only a minority of patients demonstrating hyperfibrinolysis. With the growing use of empiric tranexamic acid (TXA), these data suggest that TXA should be given only when indicated by point-of-care testing.


36.01 Opioid Prescribing Habits of Pediatric Versus General Surgeons Following Laparoscopic Appendectomy

M. R. Freedman-Weiss1, A. S. Chiu1, S. L. Ahle1, R. A. Cowles1, D. E. Ozgediz1, E. R. Christison-Lagay1, D. G. Solomon1, M. G. Caty1, D. H. Stitelman1  1Yale University School Of Medicine,Department Of Surgery, Section Of Pediatric Surgery,New Haven, CT, USA

Introduction:

Prescribing opioids requires balancing their role as a tool to reduce pain against their nature as an addictive drug with a propensity to cause suffering. Adolescents who use prescription opioids have an increased risk for future drug abuse and overdose, making them a high-risk population. Appendectomy is one of the most common operations, often requires narcotic analgesia, and is performed by both pediatric and general surgeons. The opioid prescribing patterns of these two provider groups have not yet been compared; we hypothesize that pediatric surgery providers prescribe fewer opioids for adolescents than do general surgery providers.

Methods:

A retrospective chart review was conducted across a single health system consisting of four hospitals. All laparoscopic appendectomies performed between January 1, 2016 and August 14, 2017 on patients aged 7-20 were included for analysis. Cases coded for multiple procedures or identified as converted to open were excluded.

The primary outcome measure was the amount of narcotic prescribed postoperatively. To standardize different formulations and types of analgesia prescribed, prescriptions were converted into Morphine Milligram Equivalents (MME). For reference, one 5 mg pill of oxycodone equals 7.5 MME. Patients were further grouped into quartiles based on the amount of narcotic prescribed, with the top quartile classified as “high prescribing.” Logistic regression was performed evaluating odds of high prescribing, and incorporated patient weight, gender, race, insurance status, and service provider type (pediatric vs. general surgery).
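The “high prescribing” flag amounts to marking the top quartile of the MME distribution, which pandas computes directly; the values below are hypothetical, and the resulting flag would serve as the outcome in the logistic regression just described.

```python
import pandas as pd

# Hypothetical postoperative prescriptions, already converted to MME
# (one 5 mg oxycodone tablet = 7.5 MME, so 20 tablets = 150 MME).
mme = pd.Series([45.0, 73.6, 93.2, 109.6, 130.1, 150.0, 52.5, 225.0])

quartile = pd.qcut(mme, 4, labels=["Q1", "Q2", "Q3", "Q4"])
high_prescribing = quartile == "Q4"  # outcome for the logistic regression
print(mme[high_prescribing])
```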

Results:

A total of 336 pediatric laparoscopic appendectomies were analyzed, 148 by general surgeons and 188 by pediatric surgeons. Pediatric surgeons prescribed less narcotic than general surgeons overall (73.6 MME vs. 109.6 MME, p<0.001). For patients under the age of 13, there was no significant difference between pediatric (46.6 MME) and general surgeons (48.0 MME, p=0.8921). However, for the 13-20 age group, pediatric surgeons prescribed 28% less narcotic than general surgeons (93.2 MME vs. 130.1 MME, p<0.0001).

Regression analysis of patients aged 13-20 demonstrated that heavier weight (120-159 lbs vs. <120 lbs: OR 4.6, 95% CI 1.4-15.2; ≥160 lbs vs. <120 lbs: OR 5.5, 95% CI 1.5-20.3) and being cared for by a general surgery service (vs. pediatric surgery: OR 5.2, 95% CI 2.2-12.1) were associated with high prescribing.

Conclusion:

After a laparoscopic appendectomy in a single hospital system, general surgeons prescribe significantly larger amounts of narcotic to adolescent patients than do pediatric surgeons. Although both provider types practice weight-based prescribing, even when controlling for weight, general surgeons are significantly more likely to be high prescribers. One substantial and modifiable contributor to the opioid epidemic is the amount of opioid prescribed, thus highlighting the need for education and guidelines on this topic.

35.08 Clinical Impact of Genetic Alterations According to Primary Tumor Sidedness in Colorectal Cancer

Y. Shimada1, Y. Tajima1, M. Nagahashi1, H. Ichikawa1, M. Nakano1, H. Kameyama1, J. Sakata1, T. Kobayashi1, Y. Takii2, S. Okuda3, K. Takabe4,5, T. Wakai1  1Niigata University Graduate School Of Medical And Dental Sciences,Division Of Digestive And General Surgery,Niigata, Japan 2Niigata Cancer Center Hospital,Department Of Surgery,Niigata, Japan 3Niigata University Graduate School Of Medical And Dental Sciences,Division Of Bioinformatics,Niigata, Japan 4Roswell Park Cancer Institute,Breast Surgery,Buffalo, NY, USA 5University At Buffalo Jacobs School Of Medicine And Biomedical Sciences,Department Of Surgery,Buffalo, NY, USA

Introduction: Right-sided colorectal cancer (RCRC), which is derived from the midgut, has different molecular and biological characteristics compared with left-sided colorectal cancer (LCRC), which is derived from the hindgut. Recently, several unplanned retrospective analyses revealed differences between RCRC and LCRC in prognosis and response to targeted therapy. We hypothesized that primary tumor sidedness is a surrogate for the non-random distribution of genetic alterations, and is a simple and useful biomarker in patients with Stage IV CRC. To test this hypothesis, we investigated genetic alterations using comprehensive genomic sequencing (CGS), and analyzed the clinical impact of primary tumor sidedness in patients with Stage IV CRC.

Methods:  One hundred eleven Stage IV CRC patients with either RCRC or LCRC were analyzed. We investigated genetic alterations using a 415-gene panel, which includes the genetic alterations associated with resistance to anti-EGFR therapy. Differences in clinicopathological characteristics and genetic alterations between RCRC and LCRC were analyzed using Fisher’s exact test. Differences in response to targeted therapies and the clinical significance of residual tumor status were analyzed between RCRC and LCRC using the log-rank test.

Results: Thirty-four patients (31%) and 77 patients (69%) had RCRC and LCRC, respectively. Histopathological grade 3 was significantly associated with RCRC (P = 0.042). Pulmonary metastasis was significantly associated with LCRC (P = 0.012), and peritoneal metastasis was significantly associated with RCRC (P = 0.002). Regarding residual tumor status, R0 resection of both primary and metastatic lesions showed significantly better overall survival compared with R2 resection in both RCRC and LCRC (P = 0.026 and 0.002, respectively). Regarding genetic alterations, RCRC had more genetic alterations associated with resistance to anti-EGFR therapy (BRAF, ERBB2, FGFR1, KRAS, PIK3CA, PTEN) compared with LCRC (P = 0.040). In 73 patients who received anti-VEGF therapy, there was no significant difference in progression-free survival (PFS) between RCRC and LCRC (P = 0.866). Conversely, in 47 patients who received anti-EGFR therapy, RCRC showed significantly worse PFS than LCRC (P = 0.019).

Conclusion: RCRC is more likely to have the genetic alterations associated with resistance to anti-EGFR therapy compared with LCRC, and shows resistance to anti-EGFR therapy. Primary tumor sidedness is a surrogate for non-random distribution of molecular subtypes in CRC.

35.06 Is Cervical Magnetic Resonance Imaging for Cervical Spine Clearance Justified After Negative CT?

R. Kang1, C. Ingersol1, K. Herzing1, A. P. Ekeh1  1Wright State University,Surgery,Dayton, OH, USA

Introduction:
Computed tomography of the cervical spine (CS CT) is utilized widely in the evaluation of moderately to severely injured patients. In neurologically intact patients with imaging negative for injuries but with persistent midline neck tenderness, a variety of protocols for further evaluation have been adopted by trauma centers, including the use of magnetic resonance imaging (MRI). The necessity and cost of this modality have been questioned in the presence of a negative high-quality CS CT. We sought to ascertain changes in clinical management in this population of patients after a protocol change at a Level I Trauma Center.

Methods:
Data were retrospectively collected for patients seen at a Level 1 Trauma Center between Dec 2014 and Jan 2015. Patients were identified through the trauma registry and cross-referenced with a database from the radiology department. All patients who obtained either a CS CT, an MRI, or both a CS CT and MRI during the specified period were identified. For our analysis, only patients who received both a CS CT and an MRI, with persistent neck pain and no neurological deficits, were selected. The charts of these patients were reviewed for demographic and clinical data, including: age, gender, mechanism of injury, diagnosis on admission, length of hospital stay, length of ICU stay, injury severity score (ISS), results of the CS CT, and results of the MRI. This study followed a policy change on the trauma service in which patients with persistent tenderness and a negative CS CT were sent for MRI, and the use of flexion-extension films was discontinued.

Results:
In the two years studied, 485 patients were identified who obtained a CS CT (n=142), an MRI (n=46), or both a CS CT and MRI (n=260). Among the patients who received both a CS CT and an MRI, the mean age was 50.7 years and 64.2% were male. Motor vehicle crashes (MVCs) (41.5%), falls (37.3%), and auto vs. motorcycle crashes (5.4%) were the most common etiologies. Of the 260 patients who received both a CS CT and an MRI, 72 (27.7%) had additional findings on MRI not seen on CT. In these patients with additional MRI findings, there was no intervention in 69.4%, surgery in 26.3%, and outpatient follow-up in 4.2%. In all 72 of these cases, the findings on MRI did not change management. When comparing patients who had a difference between their CS CT and MRI and those who did not, there was no significant difference in age, length of hospital stay, length of ICU stay, or ISS. There was also no significant difference in mechanism of injury or diagnosis on admission.

Conclusion:
The optimal management of neurologically intact patients with persistent neck pain following a negative CS CT remains controversial. In patients with a negative CS CT and persistent neck pain, MRI added little clinical value with no additional change in clinical management in any of the patients who had additional findings. A clear role for MRI in this population needs to be defined by well-designed prospective studies. 

35.07 Comparable Outcomes after Liver Transplantation with and without Chronic Portal Vein Thrombosis

K. Phelan1, C. Kubal1, J. Fridell1, R. Mangus1  1Department Of Surgery,Division Of Transplantation,Indianapolis, IN, USA

Introduction: Optimal portal flow is crucial to successful liver transplantation. Portal vein thrombosis (PVT), when present, is associated with increased risk of early mortality and graft failure [1]. At our center, an aggressive approach towards PVT was utilized to improve post-transplant outcomes. This study reports outcomes of liver transplantation in patients with pre-transplant PVT.

Methods: All records for liver transplants over a 15-year period at a single center were reviewed and data extracted. PVT was identified on pre-transplant imaging and was documented in patient charts. Cavernous transformation, main portal vein thrombus, and thrombus of either the splenic vein or the superior mesenteric vein extending into the confluence were considered PVT. Patient and graft survival were the primary endpoints.

Surgical techniques: Depending on the extent of PVT, various surgical approaches were used. In the majority of cases, extensive portal thromboendovenectomy was performed intraoperatively. When optimal portal flow was not established, superior/inferior mesenteric venous bypass was utilized. Patients with extensive porto-mesenteric thrombosis were listed for a backup multivisceral transplant, which was performed if intraoperative attempts at liver transplant failed [2]. Post-transplant anticoagulation was utilized routinely for 3 to 6 months when complete clearance of the PVT was not achieved intraoperatively. Efforts were made not to use expanded criteria donor (ECD) liver allografts when significant PVT was present.

Results: There were 246 patients (12%) with pre-transplant PVT. Of those, 191 (78%) had thrombus in the main portal vein. Cavernous transformation existed in 2% of all patients with PVT. Patient demographic and clinical factors associated with PVT were year of transplant, number of days on the waiting list, race, and a primary diagnosis of fatty liver disease. Transplants with and without PVT had comparable graft loss at 7 and 90 days (3% vs. 3%, p=0.78; 7% vs. 7%, p=0.83). Patient and graft survival at 1 year for PVT and no PVT were 89% vs. 88% (p=0.66) and 89% vs. 90% (p=0.93). Cox regression showed comparable long-term graft survival for transplants with PVT (66% versus 64% at 10 years; p=0.64).

Conclusion: With an aggressive approach towards PVT, excellent early and long-term outcomes can be achieved after liver transplantation.


35.05 Discontinuation of Surgical vs Non-Surgical Clinical Trials: An Analysis of 88,498 Trials

T. J. Mouw1, S. W. Hong1, S. Sarwar1, A. E. Fondaw2, A. Walling3, M. Al-Kasspooles1, P. J. DiPasco1  1University Of Kansas Medical Center,General Surgery,Kansas City, KS, USA 2University Of Kansas School of Medicine – Kansas City, Kansas City, KS, USA 3University Of Kansas School of Medicine – Wichita, Family and Community Medicine, Wichita, KS, USA

Introduction:
Early trial discontinuation is a complex issue with both financial and ethical implications. It has been previously reported that over 20% of surgical trials are discontinued prematurely, and many of those that reach completion never publish their results. Previous studies have been limited in scope owing to the need for manual review of selected trials. To date there has been no broad analysis comparing surgical and non-surgical registered clinical trials.

Methods:
The US National Institutes of Health registry at clinicaltrials.gov was accessed 7/7/17 and all US trials from 2005-2017 were downloaded by status (completed, ongoing, and discontinued). An algorithm was developed to automatically assign trials as “surgical” or “non-surgical” based on trial type and inclusion of surgical keywords generated from a list of 10,000 trial titles and descriptions. The algorithm was validated by testing a subset of trials against a team of blinded residents and medical students. A primary analysis was conducted of all US trials based on the assigned designation of surgical or non-surgical per trial status. Significance was established via two-tailed z-test. The reasons for discontinuation between surgical and non-surgical trials were examined and tabulated. A multiple logistic regression using SPSS version 20.0 was performed to assess the impacts of trial design, characteristics, and funding sources on trial discontinuation and completion.
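A toy version of the automatic assignment step is sketched below: a trial is labeled “surgical” when its title or description matches a surgical keyword. The keyword pattern and function are illustrative only; the authors derived their keyword list from 10,000 trial titles and descriptions and also used trial type.

```python
import re

# Illustrative keyword pattern, standing in for the study's derived list.
SURGICAL_KEYWORDS = re.compile(
    r"\b(surgery|surgical|resection|laparoscop\w*|operative|anastomosis)\b",
    re.IGNORECASE,
)

def classify_trial(title: str, description: str) -> str:
    """Assign a registered trial as 'surgical' or 'non-surgical' by keyword match."""
    text = f"{title} {description}"
    return "surgical" if SURGICAL_KEYWORDS.search(text) else "non-surgical"

assert classify_trial("Laparoscopic vs open hepatectomy", "") == "surgical"
assert classify_trial("Statin adherence in primary care", "") == "non-surgical"
```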

Results:

The database search yielded 82,719 non-surgical and 5,779 surgical trials after automatic assignment. The algorithm for assignments had an overall accuracy of 87.99% (95% CI 86.85-89.13%) and was associated with a +LR of 6.09 and a -LR of 0.093.

Significant differences were observed in trial status (Non-surg vs surg: Completed: 55.51% vs 39.49%, Ongoing: 33.42% vs 44.54%, and Discontinued: 11.07% vs 15.97%, p <0.001 each). Industry was more likely to fund non-surgical trials (44.00% vs 32.50%, p <0.001). Surgical trials were more likely to discontinue due to poor recruitment (44.65% vs 34.74% p<0.001). Industry funding was associated with increased discontinuation (OR 1.63 p<0.001). This remained true for the surgical subset of trials funded by industry (OR 1.25 p=0.041). Reaching enrollment and/or phase 1, reporting results, and NIH funding were all protective against discontinuation while randomization had no effect.

Conclusion:
Surgical trials are less likely to reach completion than non-surgical trials. This study establishes industry funding as a contributory factor in trial discontinuation. However, it is not clear whether this is due to forces in play after trial initiation or to the relative exclusivity of selection criteria among the different trial sponsors. Poor study recruitment is a major cause of early trial discontinuation, and surgical trials are more susceptible to it than non-surgical trials.

35.03 Impact of Hospital Volume on Outcomes of Laparoscopic versus Open Hepatectomy for Liver Cancer

S. W. De Geus1, G. G. Kasumova1, T. E. Sachs1, O. Akintorin1, S. Ng1, D. McAneny1, J. F. Tseng1  1Boston University,Surgery,Boston, MA, USA

Introduction:  Previous investigators have suggested that laparoscopic liver resection may be superior to an open operation based on studies at high-volume centers; however, the applicability of these findings remains unclear. This study investigates whether hospital volume is a factor in determining the short- and long-term outcomes of laparoscopic versus open hepatectomy for liver cancer.

Methods:  The National Cancer Database (NCDB) was queried for patients who underwent open or laparoscopic hepatectomy, without transplantation, for liver cancer from 2010-2013. Institutions were defined as either low-volume hospitals (LVH, ≤11 operations/year) or high-volume hospitals (HVH, >11 operations/year). For the entire cohort and within each category, positive margin rate, 30-day mortality, readmissions, prolonged hospital stay (≥14 days), and overall survival were compared between patients who had laparoscopic and open resections, using multivariate logistic regression and Kaplan-Meier methods.

Results: 2,867 patients underwent hepatectomy for liver cancer. Overall, 612 (21.4%) of resections were performed laparoscopically. After adjustment for covariates, resections for liver cancer at an HVH were significantly associated with lower positive-margin rates (HVH vs. LVH: 8.3% vs. 11.0%; adjusted odds ratio [AOR], 0.744; p=0.0413) and lower 30-day mortality (HVH vs. LVH: 3.5% vs. 6.2%; AOR, 0.646; p=0.0375). However, no significant differences were observed between HVHs and LVHs regarding readmissions (HVH vs. LVH: 4.6% vs. 4.8%; AOR, 1.039; p=0.8482), prolonged hospital stay (HVH vs. LVH: 9.2% vs. 8.8%; AOR, 1.065; p=0.6648), or overall survival (HVH: log-rank p=0.1405; LVH: log-rank p=0.2322). Multivariate regressions showed that in both HVH and LVH, laparoscopic resections were not significantly associated with positive margins (HVH: AOR, 1.246; p=0.4176; LVH: AOR, 0.991; p=0.9627), 30-day mortality (HVH: AOR, 0.755; p=0.5456; LVH: AOR, 1.037; p=0.8808), readmission (HVH: AOR, 0.834; p=0.6297; LVH: AOR, 0.698; p=0.2302), prolonged hospital stay (HVH: AOR, 0.626; p=0.1172; LVH: AOR, 0.886; p=0.5766), or overall survival (HVH: log-rank p=0.1405; LVH: log-rank p=0.2322) when compared with open resections.

Conclusion: Although outcomes after major operations are influenced by various factors beyond hospital volume alone, the results of this study suggest that patients with liver cancer are at higher risk of positive resection margins and 30-day mortality if they are treated at an LVH rather than an HVH. However, at both high- and low-volume hospitals, laparoscopic resections of liver cancer were associated with surgical and oncologic outcomes similar to those of open operations. Although residual selection bias regarding the MIS vs. open approach must be acknowledged, our data suggest that laparoscopic liver resection, when feasible, is a reasonable approach across hospital volume strata.