27.05 A Shared Decision Approach to Chronic Abdominal Pain Based on Cine-MRI

B. A. Van Den Beukel1, S. Van Leuven1, M. Stommel1, C. Strik1, M. A. IJsseldijk1, F. Joosten2, H. Van Goor1, R. P. Ten Broek1  1Radboud University Medical Center,General Surgery,Nijmegen, GELDERLAND, Netherlands 2Rijnstate Hospital,Department Of Radiology,Arnhem, GELDERLAND, Netherlands

Introduction:
Chronic abdominal pain develops in 18-40% of patients who have undergone abdominal surgery. Adhesions are associated with chronic post-operative pain; however, diagnosis and treatment are controversial. In this study we evaluate long-term pain and healthcare utilization in a prospective cohort of patients who underwent adhesion mapping by cine-MRI, with subsequent treatment determined through a shared decision-making approach.

Methods:
Patients with chronic post-operative abdominal pain and suspicion for causative adhesions underwent evaluation with cine-MRI. When adhesions were present on cine-MRI, the individualized risks and benefits of adhesiolysis were discussed in a shared decision-making process. Patients who elected to undergo adhesiolysis received an anti-adhesion barrier. Pain and healthcare utilization were evaluated by questionnaire at follow-up.

Results:
106 patients were recruited, with a median of 19 (range 6-47) months’ follow-up. 79 patients had adhesions on cine-MRI; 45 underwent an operation, while 34 elected not to pursue surgical intervention. 27 patients had no adhesions on cine-MRI, of whom five chose to proceed with diagnostic laparoscopy. The response rate to the follow-up questionnaire was 86.8%. In the operative group (group 1), 80.0% of 45 responders reported long-term improvements in pain, compared to 42.9% (difference 37.1%; 95% confidence interval (CI): 14.4%-55.9%) in patients with adhesions on cine-MRI who declined surgery (28 responders, group 2), and 26.3% (difference 53.7%; 95% CI: 27.3%-70.8%) in patients with no adhesions on cine-MRI who declined laparoscopy (19 responders, group 3). Consultation of medical specialists was significantly lower in group 1 compared to groups 2 and 3 (35.7% vs. 65.2% vs. 58.8%; P=0.023).
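
The reported differences and confidence intervals are consistent with the Newcombe hybrid score method for a difference of two independent proportions. As a check, a minimal sketch (assuming responder counts of 36/45 for group 1, 12/28 for group 2, and 5/19 for group 3, which reproduce the reported 80.0%, 42.9%, and 26.3%):

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a single proportion k/n."""
    p = k / n
    center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return center - half, center + half

def newcombe_diff_ci(k1, n1, k2, n2, z=1.96):
    """Newcombe hybrid score CI for the difference p1 - p2."""
    p1, p2 = k1 / n1, k2 / n2
    l1, u1 = wilson_ci(k1, n1, z)
    l2, u2 = wilson_ci(k2, n2, z)
    d = p1 - p2
    lower = d - sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = d + sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return d, lower, upper

# Group 1 (surgery, 36/45 improved) vs. group 2 (declined surgery, 12/28)
d, lo, hi = newcombe_diff_ci(36, 45, 12, 28)
print(f"difference {d:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
# -> difference 37.1%, 95% CI 14.4% to 55.9%, matching the abstract;
#    newcombe_diff_ci(36, 45, 5, 19) likewise gives 53.7% (27.3%-70.8%)
```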

Conclusion:
We demonstrate long-term pain relief in two-thirds of patients with chronic pain caused by adhesions, using cine-MRI and a shared decision-making process. Long-term improvement of pain was achieved in 80% of patients who underwent surgery with concurrent application of an anti-adhesion barrier.
 

27.04 Impact of Surgery Start Time on Roux-en-Y Gastric Bypass and Sleeve Gastrectomy Length of Stay

A. Suzo1, B. Needleman1, K. Perry1, S. Noria1  1Ohio State University Wexner Medical Center,General & GI Surgery/Surgery/Medicine,Columbus, OH, USA

Introduction:

Methods for decreasing length of stay (LOS) in surgical patients typically involve analysis of intraoperative and/or postoperative care. This study aims to investigate the impact of surgery start time and floor admission time on hospital length of stay after Roux-en-Y gastric bypass and sleeve gastrectomy.

Methods:
All patients who underwent index laparoscopic Roux-en-Y gastric bypass (RYGB) or sleeve gastrectomy (SG) at The Ohio State University Wexner Medical Center during FY2014 were identified. Preoperative, intraoperative, hospital stay, and postoperative data (demographics, comorbid conditions, hospital progress timestamps, and complications) were obtained from the electronic medical record. Mann-Whitney tests were used to assess the associations of incision time and of time to admission to the hospital floor with hospital length of stay (LOS).

Results:
A total of 291 patients were identified and included in the analyses. Of these, 174 patients underwent SG while 117 underwent RYGB. Patients with a first-start case had a shorter hospital LOS than those whose case started later in the day; however, the difference did not reach significance for either SG (2.0 vs 2.6 days; p = 0.092) or RYGB (2.8 vs 2.9 days; p = 0.653). Interestingly, patients admitted to the bariatric surgery unit from the post-anesthesia care unit (PACU) before 1:00 pm had significantly shorter LOS than those admitted after 1:00 pm for SG (1.7 vs 2.6 days; p = 0.0006), but not for RYGB (2.5 vs 3.0 days; p = 0.359). This trend persisted for admission from the PACU before 2:00 pm for SG (1.9 vs 2.6 days; p = 0.0235) versus RYGB (2.7 vs 3.0 days; p = 0.8723). However, when the analysis was extended to 3:00 pm, there was no significant difference in LOS for SG (2.0 vs 2.6 days; p = 0.2127) or RYGB (2.6 vs 3.1 days; p = 0.3977).

Conclusion:
Early arrival to the hospital floor is associated with a significantly shorter hospital length of stay for patients undergoing SG. This suggests that strategic scheduling of common surgical procedures may be used to improve patient outcomes and decrease LOS.

 

27.03 A Novel Stepwise-Regression Model to Predict Lymphedema Risk After Lymph Node Dissection for Melanoma

A. Kelsall1, T. Novice1, B. Chang1, K. Ogu1, J. Noda1, R. Hoogmoed1, J. Loh1, H. Cheriyan1, C. Ky1, M. Martin1, B. Sunkara1, M. S. Cohen1  1University Of Michigan,Department Of Surgery,Ann Arbor, MI, USA

Introduction: Secondary lymphedema is a significant complication after axillary (ALND) or inguinal lymphadenectomy (ILND) in melanoma patients. While prior studies have noted pre- and postoperative risk factors, no clinically useful statistical tool exists for predicting the likelihood of a patient developing lymphedema following ALND or ILND. In this context, we used data from the largest single-institution cohort of melanoma patients who underwent ALND or ILND to develop a lymphedema risk assessment tool, and we tested its efficacy in predicting the development of lymphedema in another cohort of patients undergoing similar procedures.

Methods: We retrospectively reviewed our prospective database of melanoma patients undergoing either ALND or ILND. The model cohort (N=524; 302 ALND, 222 ILND) contained patients undergoing procedures between June 2005 and June 2015. The test cohort (N=98; 53 ALND, 45 ILND) contained patients between November 2015 and June 2016. Patients having bilateral lymphadenectomy, iliac dissections, or preoperative chemotherapy were excluded. Demographic and clinical data were collected from the electronic medical record (EMR). We used stepwise logistic regression to model the impact of various preoperative and postoperative risk factors on the likelihood of developing lymphedema in the model cohort. These models were then used to calculate risk scores for patients in the test cohort; procedure-specific tercile thresholds were used to assign patients to “low”, “moderate”, and “high” risk categories, and clinical outcome incidence was used to evaluate the accuracy of the risk prediction tool.
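
The tercile stratification step can be sketched as below. This is a minimal illustration, not the authors' code: the feature names are hypothetical placeholders, stepwise selection is assumed to have already reduced the predictor set, and the function would be run separately for ALND and ILND so the thresholds remain procedure-specific.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical, pre-selected predictors; column names are placeholders.
FEATURES = ["ever_smoker", "stage", "pvd"]

def fit_and_stratify(model_df, test_df, outcome="lymphedema"):
    """Fit a logistic model on the model cohort, then bin test-cohort
    risk scores into low/moderate/high using model-cohort terciles."""
    clf = LogisticRegression().fit(model_df[FEATURES], model_df[outcome])
    model_risk = clf.predict_proba(model_df[FEATURES])[:, 1]
    cuts = np.quantile(model_risk, [1 / 3, 2 / 3])  # tercile thresholds
    test_risk = clf.predict_proba(test_df[FEATURES])[:, 1]
    labels = np.select(
        [test_risk <= cuts[0], test_risk <= cuts[1]],
        ["low", "moderate"],
        default="high",
    )
    return pd.Series(labels, index=test_df.index, name="risk_group")
```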

Results: Key preoperative factors (ever smoking, stage, and peripheral vascular disease) were included in the preoperative risk estimation model. Three additional postoperative factors (number of nodes dissected, adjuvant therapy, and 30-day non-lymphedema complications) were added when estimating risk postoperatively. In the test cohort, “low”, “moderate”, and “high” risk patient groups experienced significantly different (each p<0.01) incidences of lymphedema when estimating risk both preoperatively and postoperatively (see table 1). The postoperative and preoperative models were equally able to predict and stratify the incidence of lymphedema.

Conclusion: Our findings demonstrate that this novel risk assessment tool can use either pre- or postoperative factors to reasonably predict the risk of developing secondary lymphedema in melanoma patients undergoing either an ILND or ALND. Such a risk-stratification tool may provide the patient and surgical team with important information for surgical decision-making and discussion.

 

27.02 MELD-Na Score as a Predictor of Anastomotic Leak in Elective Colorectal Surgery

K. Coakley1, S. Sarasani1, T. Prasad1, S. Steele2, I. Paquette3, B. Heniford1, B. Davis1  2Case Western Reserve University School Of Medicine,Department Of Surgery,Cleveland, OH, USA 3University Of Cincinnati,Department Of Surgery,Cincinnati, OH, USA 1Carolinas Medical Center,GI And Minimally Invasive Surgery,Charlotte, NC, USA

Introduction:
The Model for End-Stage Liver Disease-Sodium (MELD-Na) score has been studied extensively in patients with cirrhosis awaiting liver transplantation. Because of the simplicity of the scoring system, there has been interest in applying MELD-Na to predict patient outcomes in non-cirrhotic surgical patients, and it has been shown to predict postoperative morbidity and mortality after elective colon cancer surgery. Our aim was to assess the utility of MELD-Na for predicting anastomotic leak in all types of elective colorectal cases.
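
For context, MELD-Na is a closed-form function of four routine labs. A short sketch of the commonly cited formulation (Kim et al.), with the conventional bounds on the inputs, is below; whether exactly these bounds were applied in this study is an assumption.

```python
import math

def meld_na(bilirubin, inr, creatinine, sodium):
    """MELD-Na per the commonly cited Kim et al. formulation (assumed here).
    Bilirubin and creatinine in mg/dL, sodium in mEq/L."""
    bili = max(bilirubin, 1.0)             # conventional floors/caps
    inr = max(inr, 1.0)
    crea = min(max(creatinine, 1.0), 4.0)
    na = min(max(sodium, 125.0), 137.0)
    meld = (3.78 * math.log(bili) + 11.2 * math.log(inr)
            + 9.57 * math.log(crea) + 6.43)
    return meld + 1.32 * (137 - na) - 0.033 * meld * (137 - na)

# Example: mildly deranged labs land in the 10-14 stratum discussed below
print(round(meld_na(bilirubin=1.2, inr=1.1, creatinine=1.0, sodium=132)))  # 13
```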

Methods:

The ACS NSQIP Targeted Colectomy database was queried (2012 – 201) for all elective colorectal procedures in patients without ascites.  Leak rates were compared by MELD-Na score using Chi-square tests and multivariate logistic regression analysis.

Results:
We identified 44,540 elective colorectal cases (mean age 60.5±14.4 years, mean BMI 28.8±6.6, 52% female), of which 70% were colectomies and 30% proctectomies. A laparoscopic approach was used in 64.72% of cases, while 35.3% were open. The overall complication and mortality rates were 21% and 0.7%, respectively, with a total anastomotic leak rate of 3.4%. Overall, 98% had a preoperative MELD-Na score between 10 and 20. Incremental increases in MELD-Na score (10-14, 15-19, and ≥20) were associated with an increased leak rate, specifically in proctectomies (3.9% vs 5.1% vs 10.7%; p<0.028). A MELD-Na score ≥20 carried an increased leak rate compared with MELD-Na 10-14 (OR 1.627; 95% CI 1.015-2.607). A MELD-Na increase from 10-14 to 15-19 increased overall mortality (OR 5.22; 95% CI 3.55-7.671). In all elective colorectal procedures, for every one-point increase in MELD-Na score, anastomotic leak (OR 1.04; 95% CI 1.006-1.07), mortality (OR 1.24; 95% CI 1.20-1.27), and overall complications (OR 1.10; 95% CI 1.09-1.12) increased. MELD-Na was an independent predictor of anastomotic leak in proctectomies when controlling for gender, steroid use, smoking, approach, operative time, preoperative chemotherapy, and Crohn's disease (OR 1.06; 95% CI 1.002-1.122).

Conclusion:
MELD-Na is an independent predictor of anastomotic leak in proctectomies. Anastomotic leak risk increases with increasing MELD-Na in elective colorectal resections, as do 30-day mortality and overall complication rates. As the MELD-Na score increases above 20, restorative proctectomy carries a 10% rate of anastomotic leak.

27.01 Transgastric pancreatic necrosectomy – expedited return to pre-pancreatitis health

M. M. Dua1, D. J. Worhunsky1, L. Malhotra1, W. G. Park1, J. A. Norton1, G. A. Poultsides1, B. C. Visser1  1Stanford University,Palo Alto, CA, USA

Introduction: The best operative strategy for necrotizing pancreatitis remains controversial. Traditional surgical necrosectomy is associated with significant morbidity; operative debridement contributes to the substantial risk of pancreatic and bowel fistulae, which are associated with recurrent hospitalizations and long-term support to manage pain or nutritional requirements. Minimally invasive endoscopic and percutaneous strategies typically require multiple procedures and a prolonged hospital course. We developed a transgastric approach to pancreatic necrosectomy to overcome the shortcomings of the other techniques described.

Methods:  Patients with walled-off, retrogastric pancreatic necrosis who underwent transgastric necrosectomy (TN) during 2009-2016 were retrospectively reviewed. Open TN is performed via an anterior gastrotomy to debride the pancreas through a wide cystgastrostomy in the posterior wall. Laparoscopic TN involves endoscopic insufflation of the stomach for placement of transgastric ports for operative debridement. The cystgastrostomy is left open in both types of TN to allow ongoing internal drainage of necrosis. Endpoints included postoperative complications and mortality.

Results: Forty-four patients underwent TN (9 open, 35 laparoscopic). Operative indications included persistent unwellness (n=26), infection (n=14), pseudoaneurysm hemorrhage failing embolization (n=3), and worsening sepsis (n=1). The median preoperative APACHE II score for the total cohort was 6 (range 0-27); however, disease severity was higher in the open TN group than in the laparoscopic TN group (APACHE II score 12 vs 5, p = 0.03), resulting in a longer length of stay (LOS 11 vs 7 days, open vs laparoscopic, respectively; p = 0.01). Clinical outcomes for the total cohort are represented in the attached table. A majority of the cohort (74%) experienced no (n=23) or minor (n=10) complications. Six patients had postoperative bleeding; 5 required embolization, and there was one death. No patient required more than one operative debridement; five patients required percutaneous drainage for residual collections. There were no postoperative fistulae or wound complications.

Conclusion: The transgastric approach to pancreatic necrosectomy allows for effective debridement with a single definitive operation and minimizes the morbidity associated with prolonged drainage, fistulae and wound complications. When anatomically suitable, the transgastric approach (whether laparoscopic or open) is an effective strategy that expedites return to pre-pancreatitis health and offers significant benefits in the recovery of these patients.   

26.10 Early Identification of Deep Space Infection in Colorectal Surgery

J. R. Bergquist1,2, C. B. Storlie2, K. L. Mathis1, J. C. Boughey3, D. A. Etzioni4, E. B. Habermann2, R. R. Cima1  1Mayo Clinic,Division Of Colon And Rectal Surgery,Rochester, MN, USA 2Mayo Clinic,Robert D And Patricia E Kern Center For The Science Of Health Care Delivery,Rochester, MN, USA 3Mayo Clinic,Department Of Surgery,Rochester, MN, USA 4Mayo Clinic In Arizona,Colon And Rectal Surgery,Phoenix, AZ, USA

Introduction: Key drivers of colorectal surgical site infection (C-SSI) occurrence are institution-specific, and early identification of patients who will develop C-SSI requiring readmission remains challenging. We developed an analytic tool that utilizes institution-specific data for C-SSI screening and treatment during the index hospitalization.

Methods: Elective colorectal resections from institutional ACS-NSQIP datasets (2006-2014) at two locations were included. A Bayesian-Probit regression model with multiple imputation (BPMI) via a Dirichlet process handled missing data. The baseline for comparison was a multivariate logistic regression model (GLM) with indicator variables for missing data (e.g., adding a “missing” level to factors) and stepwise variable selection. Out-of-sample performance was evaluated with receiver operating characteristic (ROC) and net reclassification improvement (NRI) analysis of 10-fold cross-validated samples. The primary endpoint was C-SSI requiring hospital readmission.
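
The BPMI model itself is beyond a short sketch, but the baseline comparator, a logistic model with an explicit "missing" level per factor, is easy to illustrate. Dataframe and column names below are hypothetical, categorical predictors are assumed, and stepwise selection is omitted.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def baseline_glm(train, test, outcome="cssi_readmit"):
    """Baseline GLM with a 'missing' indicator level for each factor,
    mirroring the abstract's comparator (categorical predictors assumed)."""
    X_train = pd.get_dummies(train.drop(columns=[outcome]), dummy_na=True)
    X_test = pd.get_dummies(test.drop(columns=[outcome]), dummy_na=True)
    # Align test columns to the training design matrix
    X_test = X_test.reindex(columns=X_train.columns, fill_value=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, train[outcome])
    probs = clf.predict_proba(X_test)[:, 1]
    return roc_auc_score(test[outcome], probs), probs
```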

Results: Among 2376 resections, the deep/organ-space C-SSI rate was 4.6% (N=108; Figure, patients 3 and 4). Among patients developing C-SSI, N=65 (60.1%) were discharged prior to clinical diagnosis (Figure, patient 3). The tool identified N=15 (23.1%) of these patients prior to discharge (3 requiring re-operation), with a 10% false-alarm rate. Among patients clinically diagnosed with C-SSI prior to discharge (patient 4), the tool identified C-SSI a mean of 4.5 days prior to clinical identification. Tool performance generated ROC=0.77 and NRI=21.7%, demonstrating high predictive accuracy. When applied to independent validation data (N=478 cases, N=20 SSI), the tool identified during hospitalization 40% of patients discharged and then readmitted with C-SSI (ROC=0.75; NRI=8.4%).

Conclusion: Identification of C-SSI prior to clinical presentation can facilitate early intervention, potentially reducing morbidity, re-admission, and re-operation. Our tool correctly identified a substantial proportion of patients who were discharged and readmitted with C-SSI in two independent datasets. This institutionally generic analytic tool can improve outcomes and reduce costs associated with readmission and late C-SSI identification.

 

26.09 Use of Dual Lumen VV ECMO for Neonatal and Pediatric Patients in a Tertiary Care Children’s Hospital

J. L. Carpenter1, Y. R. Yu1, D. L. Cass1, O. O. Olutoye1, J. A. Thomas2, C. Burgman2, C. J. Fernandes3, T. C. Lee1  1Texas Children’s Hospital,Department Of Surgery,Houston, TX, USA 2Texas Children’s Hospital,Critical Care Section, Department Of Pediatrics,Houston, TX, USA 3Texas Children’s Hospital,Neonatology Section, Department Of Pediatrics,Houston, TX, USA

Introduction:

Recent advances in extracorporeal membrane oxygenation (ECMO) have led to increased use of venovenous (VV) ECMO in the neonatal and pediatric patient population, yet there are few data on outcomes related to this technology. Reported complications of dual-lumen VV ECMO have included the need for venoarterial (VA) conversion and cardiac perforation. We present the evolution and experience of neonatal and pediatric VV ECMO at a tertiary care institution.

Methods:

Records of NICU and PICU patients who received ECMO support from 01/2005 to 07/2016 were reviewed. Comparison groups were defined by cannulation mode and indication for ECMO. Analyses of survival to discharge, complications (metabolic, hemorrhagic, neurologic, renal, cardiovascular, pulmonary, infectious, and mechanical), and decannulation rate were performed with χ2 tests. Kaplan-Meier analysis was used to compare survival by indication for therapy.

Results:

A total of 160 patients (105 NICU, 55 PICU), ages 0 days to 19 years, required 13 ± 11 days of ECMO. Indications were sepsis (8%) and cardiorespiratory failure (92%), of which 44% (n=64) had diaphragmatic hernia. VV ECMO was the primary cannulation mode in 83 patients, with a survival of 64%. VA ECMO was used in 77 patients, with 54% survival. Nine VV patients (11%) required VA conversion. VV cannulas were placed percutaneously in 45% of patients (n=37), with 16 placed via an existing central line. Ten VV patients were extubated to spontaneous respirations while on ECMO; three survived to discharge. Overall, 74% of patients (n=118) were successfully decannulated and 57% survived to discharge. Since 2010, the frequency of VV cannulation has increased from 50% to 85%, while the mortality rate has remained unchanged. VA ECMO was associated with a significantly higher rate of acute intracranial hemorrhage than VV (28% vs 9%, p=0.003). There were no differences in survival (p=0.52), complications (p=0.40), or re-operation rate (p=0.85) between the VV and VA groups. There were no cardiac injuries with the use of double-lumen VV cannulas. Survival by ECMO indication is shown in Figure 1. There was a significant difference in overall survival (p=0.002); septic patients had a median survival of 20 days, whereas patients with cardiorespiratory failure had a median survival of 129 days.

Conclusion:

VV ECMO cannulation is associated with a lower rate of intracranial hemorrhage and may be the preferred first-line mode of ECMO support for cardiorespiratory failure. VV ECMO can be an effective mode of support in both the NICU and PICU populations, though conversion to VA ECMO may occasionally be necessary.

 

26.08 The Effect of Resident Involvement on Perioperative Outcomes in Bariatric Surgeries

J. Kudsi1, K. Hayes1, R. Amdur1, P. Lin1, K. Vaziri1  1George Washington University,Department Of General Surgery,Washington, DISTRICT OF COLUMBIA, USA

Introduction:

Current surgical residency training is based on a model of graduated responsibility, with greater responsibility given according to an individual trainee's ability. To assess the effect of this model on care of the bariatric surgery patient, we studied the impact of resident involvement in bariatric surgery stratified by level of training. The aim of this study is to assess the impact of resident involvement on perioperative outcomes in bariatric procedures, including sleeve gastrectomy (SG) and gastric bypass (GB).
 

Methods:

A four-year retrospective review (2006-2010) of the ACS-NSQIP database identified 19,616 laparoscopic/open GB (87.8%) and 2,730 laparoscopic/open SG (12.2%) cases. All concurrent procedures were excluded except EGD, liver biopsy, wedge liver biopsy, and laparoscopic biopsy. Other exclusions included cases with both laparoscopic and open procedures, emergency cases, cases missing PGY level, and those with a PGY level higher than 5.

Pre-treatment patient characteristics and outcomes were compared across levels of the resident variable: juniors PGY 1-3 (J), seniors PGY 4-5 (S), and no resident (N), using chi-square tests for categorical variables and analysis of variance for continuous variables.

Thirteen composite outcomes were compared: wound (superficial surgical site infection, deep wound infection, organ space infection, dehiscence), pulmonary (pneumonia, prolonged intubation, reintubation), sepsis/septic shock, deep venous thrombosis/pulmonary embolism (DVT/PE), bleeding, cardiac (MI, cardiac arrest), renal (AKI, dialysis), urinary tract infection (UTI), operative time >4 h, length of stay (LOS) >3 days, return to OR, and mortality. Analyses controlled for confounding variables.
 

Results:

There were 19,616 GB (87.8%) and 2,730 SG (12.2%) cases. The surgical assist distribution was: junior 3,554 (15.9%), senior 5,406 (24.2%), no resident 13,386 (59.9%).

Cases that included senior residents more often involved African American patients than cases with no resident or junior residents. Non-resident cases had slightly lower BMI, fewer non-independent patients, and less COPD than cases that involved residents. Cases with junior residents had the highest rate of dyspnea.

There was no difference in mortality. Senior residents had significantly worse outcomes compared to junior and non-residents in LOS >3 days (S 12.4%, J 10.9%, N 8.3%; P<.0001), wound complications (S 3.6%, J 2.8%, N 2.7%; P=.007), pulmonary complications (S 1.2%, J 0.7%, N 0.9%; P=.033), sepsis/septic shock (S 1.3%, J 1.1%, N 0.8%; P=.0022), cardiac events (S 0.33%, J 0.14%, N 0.12%; P=.005), and OR time >4 h (S 4.16%, J 4.11%, N 1.6%; P<.0001). Junior residents had worse outcomes in renal complications (S 0.46%, J 0.51%, N 0.23%; P=.006).

Conclusion:

Bariatric procedures with senior resident assistance have worse outcomes when compared to junior or non-resident assistance. These results suggest the need for further evaluation of the graduated responsibility model in bariatric operative education.
 

26.06 Symptomatic Hematomas Following Cervical Exploration: A Comparative Analysis over 40 Years

A. Jyot1, T. Pandian1, M. H. Zeb1, N. D. Naik1, A. Chandra1, F. J. Cardenas1, M. Mohan1, E. H. Buckarma1, D. R. Farley1  1Mayo Clinic,General Surgery,Rochester, MINNESOTA, USA

Introduction: Cervical hematoma is a highly dreaded complication of cervical exploration and poses a unique challenge due to the combination of its rarity and associated high mortality. Rising endocrine case volumes over the last decade underscore the need for better understanding of this life threatening condition. We previously reported on the incidence (0.31%) and outcomes of this complication from 1976-2000 (Study A). Given that many of these operations are now performed through smaller incisions and as outpatients, we aim to analyze the complication rate and possible changes in trends from 2001-2015 (Study B).

Methods: A retrospective case-control study of 10,138 patients undergoing thyroidectomy or parathyroidectomy from 2001 to 2015 was conducted. Cases were matched 1-to-1 for gender, age, and type and year of operation. Univariate analyses were performed to assess baseline discrepancies between study groups, followed by conditional logistic regression to identify perioperative risk factors.
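
For 1:1 matched case-control data, conditional logistic regression stratifies on the matched pair. A minimal sketch, assuming a dataframe with one row per patient, a pair_id linking each case to its control, and hypothetical predictor names:

```python
from statsmodels.discrete.conditional_models import ConditionalLogit

def matched_risk_factors(df):
    """Conditional logistic regression for 1:1 matched case-control data;
    'pair_id' identifies each matched case-control pair (names are
    placeholders, not the study's actual variables)."""
    exog = df[["anticoagulation", "antiplatelet", "reoperative_neck"]]
    model = ConditionalLogit(df["hematoma"], exog, groups=df["pair_id"])
    return model.fit().summary()
```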

Results: Thirty-two hematomas requiring re-exploration were identified (Study B incidence = 0.30%; Study A incidence = 0.31%). There were 24 women and 8 men (mean age 58.4±17.1 years), undergoing thyroidectomy (22), parathyroidectomy (8), or both procedures (2). No perioperative risk factors for developing a cervical hematoma were identified. Most hematomas (n=18, 56%) presented within 6 hours of wound closure, while 7 (22%) presented between 7 and 24 hours and 7 (22%) beyond 24 hours. This was in contrast to Study A, where the most common time of presentation was beyond 6 hours (43%). Neck swelling was the most common presenting symptom (n=22, 69%), followed by neck pain (n=8, 25%), respiratory distress (n=6, 19%), dysphagia (n=6, 19%), and wound discharge (n=4, 13%). At re-exploration, 19 (60%) hematomas were found deep and 13 (40%) superficial to the strap muscles. The bleeding source was identified in 24 (75%) cases (11 arterial, 8 venous, 3 diffuse oozing, and 2 with oozing and venous bleeding). In the study and control groups, vocal cord paralysis/voice change (25 vs. 22, p=0.171), followed by hypocalcemia (5 vs. 3, p=0.708), were the most common complications; however, no difference reached statistical significance. Mean hospital stay was longer in patients requiring cervical re-exploration (3.1 vs. 1.6 days, p=0.005).

Conclusion: The frequency of cervical hematomas has remained unaltered over 4 decades. Failure to define a high-risk population in the current study highlights the need for meticulous hemostasis. With increasing outpatient neck surgery, scrutiny prior to dismissal and clear patient education regarding symptomatic cervical hematomas are imperative.

 

 

26.05 Exponential Decay Modeling Can Define Parameters of the Weight Loss Trajectory After Gastric Bypass

E. S. Wise1,2, J. Felton1, M. D. Kligman1  1University Of Maryland Medical Center,Department Of Surgery,Baltimore, MD, USA 2Vanderbilt University Medical Center,Department Of Surgery,Nashville, TN, USA

Introduction:

Laparoscopic Roux-en-Y gastric bypass (LRYGB) is a well-described operation that produces durable and clinically significant weight loss. While factors influencing future weight loss have been studied, temporal patterns of weight loss are less well described. We test the hypotheses that postoperative weight loss conforms to exponential decay and, subsequently, that three-month weight loss can help characterize a patient's weight loss trajectory.

Methods:

A retrospective analysis of 1,097 consecutive LRYGB patients at a single institution over a ten-year period provided the data necessary to generate postoperative weight loss curves. Using pre- and postoperative BMI data, the mean and standard deviation of postoperative BMI as a function of time were obtained, with multiple linear and nonlinear fits tested for optimal conformity. The highest- and poorest-performing patients at three-month follow-up were stratified by their cumulative rate of weight loss into two strata: <0.3% EBMIL/day (n = 102) and >0.5% EBMIL/day (n = 191). Exponential decay rate constants (λ) were generated for each group, allowing optimized estimation of the time until half of the weight loss is complete (t1/2) as well as the plateau BMI (BMIf). Linear regression analysis was used to interrogate the association of λ, calculated at three months and normalized using a BMIf of 25 kg/m² (λn,3 mo.), with %EBMIL at 2-3 years, a surrogate for actual BMIf.
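
The one-phase model referenced here has the closed form BMI(t) = BMIf + (BMI0 - BMIf)e^(-λt), so t1/2 = ln 2/λ; the whole-cohort fit reported below (λ×10³ = 7.3 per day) gives ln 2/0.0073 ≈ 95 days, matching the stated t1/2. A minimal fitting sketch on toy data (values illustrative only, not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def bmi_decay(t, bmi_f, lam, bmi_0=45.0):
    """One-phase exponential decay from bmi_0 toward plateau bmi_f."""
    return bmi_f + (bmi_0 - bmi_f) * np.exp(-lam * t)

# Days since surgery and observed BMI (toy data for illustration only)
t = np.array([0, 30, 90, 180, 365, 730], dtype=float)
bmi = np.array([45.0, 42.1, 38.5, 35.0, 32.3, 31.6])

# Fit plateau BMI and rate constant; bmi_0 stays at its default
(bmi_f, lam), _ = curve_fit(bmi_decay, t, bmi, p0=[31.0, 0.007])
print(f"plateau BMI = {bmi_f:.1f}, lambda (x10^3) = {lam * 1e3:.1f}/day, "
      f"t1/2 = {np.log(2) / lam:.0f} days")
```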

Results:

For the entire cohort, one-phase exponential decay provided the best fit for the weight loss function over time (n = 1,097, r = .43, λ (×10³) = 7.3, t1/2 = 95 days, BMIf = 31.4). Patients who performed poorly at three months (<0.3% EBMIL/day: n = 102, r = .49, λ (×10³) = 6.4, t1/2 = 108 days, BMIf = 38.9) had a smaller λ and a higher BMIf than those who performed optimally (>0.5% EBMIL/day: n = 191, r = .62, λ (×10³) = 9.4, t1/2 = 74 days, BMIf = 26.6; Figure 1A). Normalized rate constants calculated from three-month weight loss (λn,3 mo.) demonstrated a significant correlation with %EBMIL at 2-3 years (n = 428, P < .001, r = .28, B = -4.1 %EBMIL per unit increase in normalized λ; Figure 1B).

Conclusion:

We demonstrate that weight loss after LRYGB conforms to exponential decay. This finding implies that the weight loss trajectory is governed by a patient-specific rate constant and a plateau BMI. Further studies are necessary to characterize the patient- and institution-specific factors that contribute to these parameters. However, we find that patient performance at three months, as captured by λn,3 mo., is a significant predictor of long-term weight loss, in accordance with our hypothesis.

 

26.03 Health Care Consumption and Sick Leave for Persistent Abdominal Pain after Cholecystectomy

S. Z. Wennmacker1, M. G. Dijkgraaf4, G. P. Westert3, J. P. Drenth2, C. J. Van Laarhoven1, P. R. De Reuver1  1Radboud University Medical Center,Surgery,Nijmegen, , Netherlands 2Radboud Univeristy Medical Center,Gastroenterology And Hepatology,Nijmegen, , Netherlands 3Radboud University Medical Center,Scientific Institute For Quality Of Healthcare (IQ Healthcare),Nijmegen, , Netherlands 4Academic Medical Center,Clinical Research Unit,Amsterdam, , Netherlands

Introduction: Annually, 800,000 cholecystectomies are performed in the United States and 22,000 in the Netherlands. The estimated cost of a cholecystectomy in the Netherlands is around €4,000. Gallbladder removal for symptomatic gallstones appears to be ineffective in terms of pain relief in up to 40% of patients. Although several studies have reported on persistent abdominal pain after cholecystectomy, there is no literature on the actual burden of persistent pain to the health care system. The aim of this study is to determine health care consumption and the related costs in patients with persistent abdominal pain after cholecystectomy.

Methods: All 146 patients of a previous prospective multicenter cohort study who reported persistent abdominal pain 24 weeks after cholecystectomy between June 2012 and June 2014 were included in this study. Health care consumption was assessed in February 2016 using the Patients' Experience of Surgery Questionnaire (PESQ), the Medical Consumption Questionnaire (iMCQ), and patients' medical records. Sick leave and productivity loss of (un)paid work were assessed with the Productivity Cost Questionnaire (iPCQ). Costs were calculated according to the Dutch “Guideline for performing economic evaluations in health care” and reported in euros.

Results: The response rate was 85% (124/146 patients), after a mean follow-up of 31.0 months (SD 6.5) after surgery. A total of 55.6% (n=69) of patients sought additional care for persistent abdominal pain after cholecystectomy: 30.6% received primary care, 37.1% received secondary care, 16% visited the emergency department, and 8.9% were admitted to hospital. Diagnostic procedures were performed in 33.9% (n=42) of patients, which revealed gallstone- or surgery-related causes in nine patients; in 20 patients another diagnosis was found. Additional treatment included medication in 17.7% (n=22) of patients (10% used analgesics, 9.6% used proton pump inhibitors). Additional interventions were performed in 7 patients (5.6%). Estimated mean medical costs for persistent abdominal pain since cholecystectomy were €1,239 (SD €3,573) per patient. Subsequent mean costs of sick leave and productivity loss of (un)paid work were €727 (SD €2,163) per patient.

Conclusion: Due to persistent abdominal pain after cholecystectomy, 55% of the patients needed additional health care, and one third of the patients underwent additional diagnostic procedures. Postoperative medical costs and costs of sick leave and productivity loss in patients with persistent abdominal pain are up to 50% of the initial costs of the cholecystectomy.

 

26.02 Timed Stair-Climbing is a Surrogate for Sarcopenia Measurements for Predicting Surgical Outcomes

S. Baker1, M. Waldrop1, J. Swords1, T. Wang1, M. J. Heslin1, C. M. Contreras1, S. Reddy1  1University Of Alabama at Birmingham,Surgical Oncology,Birmingham, Alabama, USA

Introduction: Estimating sarcopenia by measuring psoas muscle density (PMD) has been advocated as a method to accurately predict post-operative morbidity. This method is cumbersome and not feasible for a busy surgeon to use in practice. We have previously demonstrated that a simple timed stair climb (TSC) outperforms the ACS NSQIP Surgical Risk Calculator in predicting morbidity and hospital length of stay. The aim of the present study is to determine whether the TSC is a marker of axial muscle strength and can be used to replace PMD measurements in predicting morbidity.

Methods: From March 2014 to May 2015, 298 patients attempted a TSC prior to undergoing elective abdominal surgery. PMD was measured at the L3 vertebra using preoperative CT scans obtained within 30 days of surgery. Ninety-day complications were assessed using the Accordion Severity Grading System. Multivariable analysis was performed to identify preoperative risk factors associated with operative morbidity.

Results: A grade 2 or higher complication occurred in 72 patients (24.2%), with 8 deaths (2.7%). There was an inverse relationship between PMD and TSC time (Figure, P<0.0001) and a direct relationship between TSC time and complications (Figure, P=0.04). On multivariable analysis, only decreasing PMD (P=0.018) and increasing TSC time (P=0.026) were predictive of post-operative morbidity. Areas under the receiver operating characteristic curves demonstrated no difference between TSC and PMD in predicting complications (AUC 0.72 vs. 0.70, P=0.49). Both TSC and PMD were superior to the ACS NSQIP Risk Calculator (AUC 0.55, both P<0.05).

Conclusions: Both TSC and PMD are excellent predictors of post-operative morbidity in this population. TSC appears to be a surrogate for axial muscle strength measured by PMD. TSC is an easy tool to administer in lieu of PMD when considering patient outcomes.

26.01 Surgeons Overestimate Post-operative Complications and Death

K. Pei1, J. Healy1, K. A. Davis1  1Yale University School Of Medicine,Surgery,New Haven, CT, USA

Introduction:

 

Assessing post-operative morbidity and mortality is largely based on experience and published statistics. Projections of complications and death have critical implications for counseling patients preoperatively, particularly for challenging patients. We hypothesize that resident and attending surgeons overestimate complications and death after surgery for complex surgical patients. 

Methods:

General surgery residents and attending surgeons at an urban, tertiary, academic medical center were invited to participate in an online assessment. Seven complex clinical scenarios were presented to participants via anonymous online modules. For each scenario, participants estimated the likelihood of any morbidity, mortality, surgical site infection, pneumonia, and cardiac complications on a 0-100% scale. Scenarios were representative of a diverse general surgery practice, including colectomy, duodenal ulcer repair, inguinal hernia repair, exploration for perforated viscus, small bowel resection, cholecystectomy, and mastectomy. Participant responses were compared to risk-adjusted outcome estimates from the American College of Surgeons National Surgical Quality Improvement Program (NSQIP) online calculator. Responses were reported as means with 95% confidence intervals; differences between participant responses and NSQIP estimates were reported as absolute percentage differences of the mean. This study was approved by the institutional Human Investigation Committee.

Results:

101 residents and 48 attending surgeons (trained in general surgery) were invited. The overall response rate was 73.8%. For all 7 clinical scenarios, there was no significant difference between resident and attending estimates of morbidity or mortality. There was significant variation in estimates among participants, with wide 95% confidence intervals. Overall, mean participant estimates exceeded NSQIP estimates by 25.8-30% for morbidity and mortality.

Conclusion:

General surgery residents and attending surgeons did not differ significantly in their estimates of post-operative complications and death; however, both groups grossly overestimated risks in complex surgical patients. These results demonstrate broad variance in, and near-universal overestimation of, predicted surgical risk when compared to national risk-adjusted models.

21.12 Prognostic impact of pancreastatin following chemoembolization for neuroendocrine tumors

D. S. Strosberg1, J. Onesti4, N. Saunders3, G. Davidson1, M. Shah5, M. Dillhoff1, C. Schmidt1, M. Bloomston2, L. A. Shirley1  1The Ohio State University Wexner Medical Center,Surgical Oncology,Columbus, OH, USA 221st Century Oncology,Fort Meyers, FL, USA 3Emory University School Of Medicine,Atlanta, GA, USA 4Mercy Health Grand Rapids,Grand Rapids, MI, USA 5The Ohio State University Wexner Medical Center,Medical Oncology,Columbus, OH, USA

Introduction: Transarterial chemoembolization (TACE) is a viable treatment option for patients with metastatic neuroendocrine tumors (NETs) to control tumor progression and palliate symptoms of hormone excess.  Pancreastatin, a split product of chromogranin, has been shown to correlate with survival in patients with NETs. The objective of this study was to investigate the prognostic impact of pancreastatin levels in patients with metastatic NETs treated with TACE.

Methods: Patients with metastatic NETs treated with TACE at a single institution from 2000 to 2013 were analyzed. Clinical variables were analyzed with chi-square, Fisher's exact, or independent t-tests as appropriate. Kaplan-Meier curves for overall survival (OS) were compared using log-rank testing.

Results: 188 patients underwent TACE for metastatic NETs during the study period. An initial pancreastatin level greater than 5000 pg/mL correlated with worse OS from the time of first TACE (median OS 58.5 vs 22.1 months, p<0.001). A decrease in pancreastatin levels by 50% or more after TACE treatment correlated with improved OS (median OS 53.8 vs 29.9 months, p=0.032). Patients with carcinoid syndrome were more likely to have a subsequent increase in pancreastatin after an initial post-TACE drop (78.1% vs 55.2% of patients, p=0.002). Patients who had an increase in pancreastatin levels after an initial post-TACE drop were also more likely to have liver progression on axial imaging (70.7% vs 40.7%, p=0.005) and to need repeat TACE (21.1% vs 6.7%, p=0.009).

Conclusion: For patients with liver metastases from NETs, measurement of pancreastatin levels can be useful at several points during TACE treatment. Extremely high levels prior to TACE predict poor outcomes, significant drops in pancreastatin after TACE correlate with improved survival, and a rise in levels after an initial drop may predict progressive liver disease requiring repeat TACE. As such, pancreastatin levels should be measured throughout the TACE treatment period.

 

21.10 Asymptomatic Screening in Trauma Patients Reduces Risk for Pulmonary Embolism

D. Koganti1, A. Johnson1, S. Stake1, A. Wallace1, S. Cowan1, J. Marks1, M. Cohen1  1Thomas Jefferson University,Surgery,Philadelphia, PA, USA

Introduction:
Now that deep vein thrombosis (DVT) is linked to reimbursement and publicly reported metrics, hospitals are pressuring trauma programs to discourage lower extremity (LE) venous duplex ultrasounds (VDUS) in asymptomatic patients. Current evidence is ambiguous and controversial. We aimed to evaluate whether LE VDUS screening at our institution reduces the risk of pulmonary embolism (PE).

Methods:
Patients admitted to an urban level-1 trauma center between 2005 and 2015 were retrospectively reviewed, excluding patients with a length of stay (LOS) <4 days. We propensity-matched screened to unscreened patients based on gender, transfer status, spinal procedure, spinal cord injury, and spinous, femur, pelvis, tibia, and upper extremity fractures. In the matched samples, we performed chi-squared analysis to determine the association of screening with PE, the absolute risk reduction, and the number needed to screen.

Results:
Of the 11,280 trauma patients admitted, 5,611 met the LOS criterion. Of these, 2,687 (48%) underwent asymptomatic LE VDUS screening. Propensity matching identified 1,915 unscreened patients with a similar risk profile. The rate of PE was significantly higher in the matched unscreened sample [1.72% (n=33) vs 0.45% (n=12), p<0.001; Figure]. The absolute risk reduction was 1.28%, so the number needed to screen to prevent one PE is 78 high-risk patients.
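
The number needed to screen follows directly from the absolute risk reduction; using the event counts above:

```python
# PE events / patients in the matched samples (figures from the results)
p_unscreened = 33 / 1915   # 1.72%
p_screened = 12 / 2687     # 0.45%

arr = p_unscreened - p_screened   # absolute risk reduction
nns = 1 / arr                     # number needed to screen
print(f"ARR = {arr:.2%}, NNS = {nns:.0f}")   # ARR = 1.28%, NNS = 78
```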

Conclusion:
The data demonstrate a significant risk reduction for pulmonary embolism in propensity-matched patients at our institution over a 10-year period. The screened patients still had a higher risk-factor profile than the matched cohort, suggesting that the actual risk reduction may be even greater than 1.28%. These data can help define the best population for routine screening and determine the cost-effectiveness of screening programs.
 

21.09 Impact of Pain after Trauma on Long Term Patient Reported Outcomes

J. P. Herrera-Escobar1, M. Apoj2, G. Kasotakis2,3, A. J. Rios-Diaz1, E. Lilley1, J. Appelson1, B. Gabbe4, K. Brasel5, E. Schneider1, H. Kaafarani6, G. Velmahos6, A. Salim1, A. H. Haider1  1Brigham And Women’s Hospital,Boston, MA, USA 2Boston University,Boston, MA, USA 3Boston Medical Center,Boston, MA, USA 4Monash University,Melbourne, VIC, Australia 5Oregon Health And Science University,Portland, OR, USA 6Massachusetts General Hospital,Boston, MA, USA

Introduction: The Institute of Medicine has recently called for incorporating long-term patient-reported outcomes (PROs) to improve trauma care quality and increase patient-centeredness. To understand the value of collecting post-discharge PROs, we sought to describe the burden of self-reported pain at 6 and 12 months after injury and to determine its association with important PROs such as post-traumatic stress disorder (PTSD), return to work, and new need for assistance at home.

Methods: Trauma patients with an ISS ≥9 were identified retrospectively using the institutional trauma registries of two Level I trauma centers and contacted 6 or 12 months post-injury for a telephone interview evaluating PRO measures: trauma-specific and health-related quality of life (T-QoL and Short Form-12 [SF-12]), PTSD screening (Breslau), return to work, and residential status. Multivariable logistic regression models, clustered by facility and adjusted for confounders (age, sex, ISS, and length of stay), were used to obtain the odds of a positive PTSD screen, not returning to work, and new need for assistance at home in trauma patients who reported pain on a daily basis compared to those who did not.
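
One plausible reading of "clustered by facility" is a GEE logistic model with an exchangeable working correlation; a minimal sketch, with hypothetical variable names:

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def pain_ptsd_or(df):
    """Odds ratio for a positive PTSD screen among patients with daily
    pain, adjusting for the abstract's confounders and clustering by
    facility (variable names are placeholders)."""
    model = smf.gee(
        "ptsd_positive ~ daily_pain + age + female + iss + los",
        groups="facility",
        data=df,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    res = model.fit()
    return np.exp(res.params["daily_pain"])
```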

Results: We conducted 305 interviews: 141 at 6 months and 164 at 12 months after injury. 48%/52% (6/12 months) reported pain on a daily basis, 24%/19% took pain medications daily, 47%/48% were limited by pain in the things they were able to do, and 36%/37% reported that pain moderately, quite a bit, or extremely interfered with their normal work. There were no differences in age, gender, ISS, mechanism of injury, or length of stay between patients with and without daily pain at 6 and 12 months (all p>0.05). Compared to patients without pain, patients with pain at 6 months were more likely to screen positive for PTSD (OR 1.92 [1.28-2.88]), to require assistance at home without prior need (OR 2.86 [1.82-4.48]), and not to have returned to work (OR 2.85 [2.46-3.30]). Similarly, at 12 months, patients with pain had higher odds of a positive PTSD screen (OR 5.95 [3.87-9.15]) and of requiring assistance at home without prior need (OR 5.53 [1.48-20.71]); however, we did not find a statistically significant difference in return to work compared to patients without pain (OR 2.35 [0.79-7.03]).

Conclusion: A substantial burden of self-reported pain after trauma is not being captured by current trauma registries, which are limited to outcomes at discharge. Pain after trauma is associated with poor PROs, including positive PTSD screening, delayed return to work, and new need for assistance at home. Inclusion of long-term PROs in trauma registries will enable quality improvement that is more inclusive of all aspects of recovery after an injury.

 

20.11 Safety of Early Venous Thromboembolism Prophylaxis for Isolated Blunt Splenic Injury: A TQIP Study

B. Lin1, K. Matsushima1, L. De Leon1, G. Recinos1, A. Piccinini1, E. Benjamin1, K. Inaba1, D. Demetriades1  1University Of Southern California,Acute Care Surgery,Los Angeles, CALIFORNIA, USA

Introduction:

Non-operative management (NOM) has become the standard of care in hemodynamically stable patients with blunt splenic injury. Due to the potential risk of bleeding, there are no widely accepted guidelines for an optimal and safe timeframe for the initiation of venous thromboembolism (VTE) prophylaxis in patients undergoing NOM. The purpose of this study was to explore the association between the timing of VTE prophylaxis initiation and NOM failure rate in isolated blunt splenic injury. 

Methods:

After approval by the institutional review board, we utilized the American College of Surgeons (ACS) Trauma Quality Improvement Program (TQIP) database (2013-2014) to identify adult patients (≥18 years) who underwent NOM for isolated blunt splenic injuries (Grade III/IV/V). Patients were excluded if they expired within 24 hours of admission or required surgical management of splenic injury within 12 hours after admission. Failure of NOM was defined as any splenic surgeries after 12 hours of admission. The incidence of overall NOM failure was compared between two groups: 1) VTE prophylaxis <48 hours after admission (early prophylaxis group), and 2) VTE prophylaxis ≥48 hours (late prophylaxis group). Similarly, we compared the incidence of NOM failure after the initiation of VTE prophylaxis between the early and late prophylaxis group. Multiple logistic regression analysis was performed for NOM failure adjusting for clinically important covariates including the timing of VTE prophylaxis initiation. 

Results:

A total of 816 patients met the inclusion criteria: median age 34 years (IQR 23-52), 67% male, median ISS 13 (IQR 10-17), and 679 patients (83.2%) with severe splenic injury (Grade IV/V). Of the patients who met the inclusion criteria, VTE prophylaxis was not administered in 525 patients (64.3%), whereas VTE prophylaxis was given <48 hours and ≥48 hours after admission in 144 and 147 patients, respectively. Among patients who received VTE prophylaxis, angioembolization of the spleen was performed in 30 patients (10.3%). The overall NOM failure rate was 13.4% (39/291). While the overall NOM failure rate was significantly lower in the early group than in the late prophylaxis group (4.9% vs. 21.8%, p<0.001), there was no significant difference in the NOM failure rate after the initiation of VTE prophylaxis between the two groups (3.5% vs. 3.4%, p=1.00). In multiple logistic regression analysis, early initiation of VTE prophylaxis was not significantly associated with NOM failure (OR 1.19, 95% CI 0.31-4.51, p=0.80).

Conclusion:

Our results suggest that early initiation of VTE prophylaxis (<48 hours) does not increase the risk of NOM failure in patients with isolated splenic injury. Further prospective study to validate the safety of early VTE prophylaxis is warranted.   
 

20.07 Failure to Rescue Following Cytoreductive Surgery and Hyperthermic Intraperitoneal Chemotherapy

K. Li1, A. A Mokdad1, M. Augustine1, S. Wang1, M. Porembka1, A. Yopp1, R. Minter1, J. Mansour1, M. Choti1, P. Polanco1  1University Of Texas Southwestern Medical Center,Division Of Surgical Oncology,Dallas, TX, USA

Introduction: Cytoreductive surgery with hyperthermic intraperitoneal chemotherapy (CRS/HIPEC) has been shown to significantly improve the survival of selected patients with peritoneal carcinomatosis (PC). However, this invasive procedure can result in significant morbidity and mortality. Using a national cohort of patients, this study aims to identify perioperative patient characteristics predictive of failure to rescue (FTR), defined as mortality following postoperative complications from CRS/HIPEC.

Methods: Patients who underwent CRS/HIPEC between 2005 and 2013 were identified in the American College of Surgeons National Surgical Quality Improvement Program dataset (NSQIP). Patients who suffered any post-operative complication were identified. Major complications were defined as those corresponding to Clavien-Dindo grade III or IV. Failure to rescue (FTR) was defined as 30-day mortality in the setting of a treatable complication. Patients who suffered FTR were compared against those who survived a complication (non-FTR) using patient characteristics, pre-operative clinical information, types of resections, and severity of complication. Univariable comparisons were conducted using the Wilcoxon rank-sum test for continuous variables and Fisher's exact test for categorical variables. Predictors of FTR were identified using a multivariable logistic regression model.

Results: From the NSQIP database, 915 eligible CRS/HIPEC cases were identified in the study period. Overall, 382 patients (42%) developed postoperative complications and constituted the study population. A total of 88 (10%) patients suffered one or more major complications. Seventeen patients died following a complication, for an FTR rate of 4%. Age, gender, and race were similar between the FTR and non-FTR groups. Colorectal cancer was the most common diagnosis in both the FTR and non-FTR groups (35% vs 25%, respectively). Rates of multi-visceral resection were also similar (88% vs 86%, p=1.00). FTR patients were more likely than non-FTR patients to have dependent functional status (18% vs 2%, p=0.01), to have ASA class 4 status (29% vs 8%, p=0.01), to develop three or more complications (65% vs 24%, p<0.01), and to suffer a major complication (94% vs 20%, p<0.01). Independent predictors of FTR were a major complication (odds ratio [OR] 66.0, 95% confidence interval [CI] 8.4-516.6), dependent functional status (OR 5.9, 95% CI 0.8-41.9), and ASA class 4 (OR 13.4, 95% CI 1.2-146.8). Procedure type and diagnosis were not predictive of FTR.

Conclusion: Morbidity associated with CRS/HIPEC is comparable to that of other complex surgical procedures, with an acceptably low rate of death in this national cohort. Dependent functional status and ASA class 4 are patient factors predictive of FTR. These patients have a prohibitively high risk of 30-day mortality following postoperative complications and should be considered ineligible for CRS/HIPEC.

20.04 Preoperative Enteral Access is not Requisite Prior to Multimodality Treatment of Esophageal Cancer

T. K. Jenkins4, A. N. Lopez4, G. A. Sarosi1,2, K. Ben-David3, R. M. Thomas1,2  1University Of Florida,Department Of Surgery,Gainesville, FL, USA 2North Florida/South Georgia Veterans Health System,Department Of Surgery,Gainesville, FL, USA 3Mount Sinai Medical Center,Department Of Surgery,Miami Beach, FL, USA 4University Of Florida,College Of Medicine,Gainesville, FL, USA

Introduction: While prior research has shown that preoperative (preop) enteral access to support nutrition prior to esophagectomy is feasible and safe, controversy exists regarding its necessity, as subjective dysphagia is a poor indicator of the need for enteral access. We hypothesized that patients who underwent preop enteral access prior to esophagectomy for cancer fared no better than those who had surgical enteral access performed at the time of esophagectomy.

Methods: An IRB-approved retrospective database of patients undergoing esophagectomy for esophageal malignancy from 2007-2014 was established. Clinicopathologic factors were recorded, including preop enteral access, weight change, nutritional labs, preop cancer stage, operative details, and perioperative complications.

Results: One hundred fifty-six patients were identified, of whom 99 (63.5%) received preop chemoradiation (cXRT) prior to esophagectomy. Since preop cXRT can influence perioperative nutrition, this group comprised the study cohort. Fifty (50.5%) underwent preop enteral access [esophageal stent (1), gastrostomy (14), jejunostomy (32), nasoenteric (1), combination (2); “access group”] prior to cXRT, followed by esophagectomy and feeding jejunostomy unless one was pre-existing. There was no difference in demographics, preop tumor staging, or operative details between the access and non-access groups. No difference was noted between the access and non-access groups in subjective dysphagia [n=43 (86%) vs 37 (75.5%), respectively; p=0.2] or mean preop serum albumin (g/dL) [3.9 (range 3.1-4.5) vs 4 (range 3.3-6.4), respectively; p=0.2]. To account for potential cXRT delays, median time from diagnosis to surgery was compared and did not differ between the access and non-access groups (126 vs 126 days, p=0.5). Comparing weight loss from 6 months preop to surgery, the access group had a mean 5.2% weight loss (range -29.4% to +6.6%) vs a 4.5% reduction (range -19.4% to +68.2%) in the non-access group (p=0.8). Additionally, mean weight change from 6 months preop to 6 months postop was similar in the access vs non-access groups [-11.2% (range -44% to +5.3%) vs -15.4% (range -34.1% to -1.4%), respectively; p=0.1]. Complication rates between the access and non-access groups (64% vs 51%, respectively; p=0.2) were likewise similar. In patients with reported dysphagia, there was no difference in weight change from 6 months preop to 6 months postop between the access and non-access groups (-11% vs -15.2%; p=0.1).

Conclusions: Despite the bias of establishing enteral access prior to preop cXRT for esophageal malignancy in candidates for esophagectomy, there was no difference in weight change, preop albumin, or complication rates in patients who had preop enteral access versus those who did not. Patients with esophageal malignancy should therefore proceed directly to appropriate neoadjuvant and surgical therapy with enteral access performed at the time of definitive resection or reserved for those with obstruction confirmed on endoscopy.

17.20 Chemotherapy versus Chemoradiotherapy for Resected Pancreatic Cancer: Defining the Optimal Regimen

C. Mosquera1, L. M. Hunter1, T. L. Fitzgerald1  1East Carolina University Brody School Of Medicine,Surgical Oncology,Greenville, NC, USA

Introduction: Postoperative adjuvant therapy for pancreatic adenocarcinoma has engendered significant controversy and is the subject of yet-to-be-completed clinical trials. Pending these trials, data to guide optimal patient management are needed.

Methods: Patients with resected adenocarcinomas of the pancreas undergoing surgery only, postoperative chemotherapy (CT), or chemoradiotherapy (CRT) from 2004-2013 were identified using the NCDB.

Results: A total of 26,821 patients were included. A majority were male (50.6%), white (86.3%), stage II (82.4%), and had a Charlson comorbidity score of 0 (67.7%). On univariate analysis, receipt of adjuvant therapy was most strongly associated with younger age, race, insurance status, lymphatic invasion, high grade, and size >2 cm (p<.0001). On multivariate analysis, the associations persisted for age <50 years (OR 5.24), lymphatic invasion (OR 1.60), high grade (OR 1.37), and size >2 cm (OR 1.35), but not for race or insurance status. On univariate survival analysis, patients who received adjuvant therapy had greater median survival than those treated with surgery alone (22.0 vs 18.2 months, p<.0001). On multivariate survival analysis, adjuvant therapy remained associated with survival (1.39, p<.0001). A total of 16,549 patients received postoperative CT or CRT. On univariate analysis, patients who were older, had negative margins, or had Medicare were most likely to receive CT (p<.0001). On multivariate analysis, age and negative margins remained significant (OR 1.73, p<.0001); lymphatic invasion, tumor size, and insurance status did not. When survival analysis was restricted to those receiving CT or CRT, the longest median survival was seen with low grade (31.2 months, p<.0001) and size <2 cm (33.1 months, p<.0001). Patients who received CRT had longer median survival than those who received CT (22.9 vs 21.8 months, p=.0001). On multivariate analysis, CT vs CRT (HR 1.14), high grade (HR 1.64), positive margins (HR 1.53), and lymphatic invasion (HR 1.50) continued to be associated with diminished survival (p<.0001).

Conclusion: Postoperative adjuvant therapy is associated with a 40% improvement in survival. In contrast to other studies, these data suggest a modest survival advantage for combination therapy compared with CT alone.