72.06 Stricter ioPTH Criterion for Successful Parathyroidectomy in Stage III CKD patients with pHPT

S. Liu1, A. Yusufali1, R. Teo1, M. Mao1, Z. F. Khan1, J. C. Farra1, J. I. Lew1  1University Of Miami Leonard M. Miller School Of Medicine,Division Of Endocrine Surgery, DeWitt Daughtry Family Department Of Surgery,Miami, FL, USA

Introduction:
The effect of altered parathormone (PTH) metabolism in renal insufficiency on intraoperative parathormone (ioPTH) monitoring during parathyroidectomy (PTX) for primary hyperparathyroidism (pHPT) remains unclear. A stricter >50% ioPTH drop with return to normal range criterion, rather than the classic >50% ioPTH drop criterion alone, may be needed to achieve optimal operative success in this patient population with renal disease. This study compares operative outcomes using classic and stricter >50% ioPTH drop criteria in patients with mild or moderate renal insufficiency undergoing PTX guided by ioPTH monitoring for pHPT.

Methods:
A retrospective review of prospectively collected data in 605 patients undergoing PTX guided by ioPTH monitoring for pHPT was performed. All patients had elevated calcium and PTH levels, with ≥6 months of follow-up and a mean follow-up of 45 months. Glomerular filtration rate (GFR) was estimated with the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation. The National Kidney Disease Outcomes Quality Initiative (KDOQI) staging was used to define the stages of CKD based on estimated GFR (eGFR): Stage I, normal or high GFR (GFR >90 ml/min); Stage II, mild CKD (GFR 60-89 ml/min); and Stage III, moderate CKD (GFR 45-59 ml/min). Patients with overt secondary hyperparathyroidism (CKD Stage IV and V) were excluded. Patients were further subdivided into those meeting the >50% ioPTH drop criterion alone (classic) and those meeting a >50% ioPTH drop with return to normal range (<65 pg/mL) criterion (stricter). Operative outcomes, including rates of operative success, failure, recurrence, bilateral neck exploration (BNE), and multiglandular disease (MGD), were compared across the three CKD groups.
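As a concrete illustration of the definitions above, the following minimal Python sketch computes an estimated GFR with the 2009 CKD-EPI creatinine equation, assigns the CKD stage groups used in this study, and classifies a case against the classic and stricter ioPTH criteria. Function names, example values, and the assumption that creatinine is reported in mg/dL are illustrative, not taken from the study data.

```python
# Hypothetical helpers illustrating the study definitions; not the authors' code.

def ckd_epi_egfr(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """2009 CKD-EPI creatinine equation, mL/min/1.73 m^2 (assumes creatinine in mg/dL)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141 * min(ratio, 1) ** alpha * max(ratio, 1) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

def ckd_stage(egfr: float) -> str:
    """CKD stage groups as defined in this study (Stage IV/V patients were excluded)."""
    if egfr > 90:
        return "I"
    if egfr >= 60:
        return "II"
    if egfr >= 45:
        return "III"
    return "IV/V (excluded)"

def iopth_criteria(baseline_pth: float, final_pth: float) -> dict:
    """Classic: >50% ioPTH drop. Stricter: >50% drop AND return to normal range (<65 pg/mL)."""
    classic = final_pth < 0.5 * baseline_pth
    return {"classic": classic, "stricter": classic and final_pth < 65}

# Example: a Stage III CKD patient whose ioPTH falls 55% but does not normalize
print(ckd_stage(ckd_epi_egfr(1.1, 68, female=True, black=False)))   # "III"
print(iopth_criteria(baseline_pth=180, final_pth=81))               # classic True, stricter False
```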

Results:
Of 605 patients, 38% (230/605) had normal renal function or Stage I CKD, 44% (268/605) had Stage II CKD, and 18% (107/605) had Stage III CKD. In patients with Stage I and II CKD, there were no statistically significant differences in rates of operative success, failure, recurrence, BNE, or MGD between patients meeting the classic criterion and those meeting the stricter criterion. In Stage III CKD patients, however, there was a significant difference in operative success between patients who had a >50% ioPTH drop alone and those who had a >50% ioPTH drop with return to normal range (92% vs. 100%, respectively, p<0.05). There was no difference in recurrence, BNE, or MGD rates in Stage III CKD patients between those meeting the classic >50% ioPTH drop criterion and those meeting the stricter criterion.

Conclusion:
PTX guided by ioPTH monitoring using the classic >50% ioPTH drop criterion achieves high operative success in patients with normal renal function and Stage I or II CKD. In patients with Stage III CKD, however, a stricter criterion of >50% ioPTH drop with return to normal range may lead to improved operative success and should be used during PTX in this patient population with renal disease.
 

72.04 Adrenal Incidentaloma Follow-up is Influenced by Patient, Radiological and Medical Provider Factors

D. I. Maher1, E. Williams1, S. Grodski1,2, J. W. Serpell1,2, J. C. Lee1,2  1Alfred Hospital,Monash University Endocrine Surgery Unit,Melbourne, VIC, Australia 2Monash University,Department Of Surgery,Melbourne, VIC, Australia

Introduction: The majority of adrenal incidentalomas (AI) are benign, although some are large, functional, or malignant and may require surgery. Therefore, all require follow-up. This case-control study aims to determine the pattern of AI follow-up in a level 1 trauma centre in Melbourne, Australia, focussing on the factors that influence whether follow-up is facilitated.

Methods: Patients with CT-detected AIs between January 2010 and September 2015 at The Alfred Hospital were included. Case files were identified using a key word search of electronic CT reports. Patients were excluded if the primary purpose of the CT was to investigate adrenal disease, or if the patient had a history of a known adrenal mass. Cases were assessed by two authors and reviewed for demographics, managing unit, CT indication and findings, and follow-up arrangements. To consistently determine whether “follow-up” occurred, a strict definition of the term was applied. Statistical analysis with the t-test, chi-squared test, and logistic regression was performed in Stata SE v14, with a p-value of < 0.05 set as significant.

Results: A total of 38 848 chest and abdominal CTs were performed in the study period, yielding 804 patients with AIs who met inclusion criteria (mean age 65, 58 % male). The mean size of AI was 23 mm. Univariate analysis demonstrated that follow-up was more likely to occur in younger patients (mean age 62 vs 66, p < 0.001); in larger lesions (mean size 26 mm vs 21 mm, p < 0.001); if the CT suggested follow-up (p < 0.001); or if the CT report suggested a diagnosis (p < 0.001). Follow-up arrangements were most likely to be made by the trauma unit (39 %, p = 0.01).

A multivariable analysis supported the significance of these findings and indicated that the CT report and managing unit strongly influence follow-up rates. When a diagnosis was suggested by the CT report, follow-up was more likely to be facilitated (odds ratio 0.63, 95 % CI 0.45 – 0.88; p < 0.01). Additionally, more cases in the follow-up group had a follow-up recommendation in the CT report (2.88, 1.95 – 4.26; p < 0.01). A large difference in the frequency of follow-up was noted between the Trauma Unit and other units (1.77, 1.09 – 2.89; p < 0.02). This difference is possibly due to the introduction of a dedicated adrenal lesion protocol.

Follow-up arrangements were made for 245 cases (30 %). In 36 % of these cases (N = 88) follow-up occurred at The Alfred Hospital. Seven cases (8 %) required surgical intervention. Histopathology confirmed four adrenal cortical tumours, two metastatic melanomas and one phaeochromocytoma. 

Conclusions: This study highlights that AI follow-up is often overlooked, and that approaches need to be developed to ensure that all cases receive the review they require. This study demonstrates that follow-up is influenced by patient, radiological and medical provider factors. An adrenal lesion follow-up protocol may improve follow-up rates, but requires further research.

72.03 Clinical Significance of BRAF Non-V600E Mutations in Colorectal Cancer

Y. Shimada1, Y. Tajima1, M. Nagahashi1, H. Ichikawa1, M. Nakano1, H. Kameyama1, J. Sakata1, T. Kobayashi1, Y. Takii2, S. Okuda3, K. Takabe4,5, T. Wakai1  1Niigata University Graduate School Of Medical And Dental Sciences,Division Of Digestive And General Surgery,Niigata, , Japan 2Niigata Cancer Center Hospital,Department Of Surgery,Niigata, , Japan 3Niigata University Graduate School Of Medical And Dental Sciences,Division Of Bioinformatics,Niigata, , Japan 4Roswell Park Cancer Institute,Breast Surgery,Buffalo, NY, USA 5University At Buffalo Jacobs School Of Medicine And Biomedical Sciences,Department Of Surgery,Buffalo, NY, USA

Introduction: Recent advances in comprehensive genomic sequencing (CGS) enable detection of not only the BRAF V600E mutation but also BRAF non-V600E mutations in a single assay. While BRAF V600E mutation in colorectal cancer has been shown to confer poor prognosis and poor response to anti-EGFR therapy, the clinical significance of BRAF non-V600E mutations has not been fully investigated. The present work aimed to describe the clinicopathological characteristics and clinical outcomes of BRAF non-V600E mutant-type tumors compared with BRAF wild-type and BRAF V600E mutant-type tumors.

Methods: One hundred eleven Stage IV colorectal cancer (CRC) patients were analyzed. We investigated genetic alterations using a 415-gene panel, which includes BRAF V600E and non-V600E mutations. Differences in clinicopathological characteristics and genetic alterations were analyzed among BRAF wild-type, BRAF V600E mutant-type, and BRAF non-V600E mutant-type using Fisher's exact test. Overall survival (OS) and progression-free survival (PFS) in response to targeted therapies were analyzed among the 3 groups using the log-rank test.

Results: CGS revealed that 98 patients (88%), 7 patients (6%), and 6 patients (6%) were BRAF wild-type, BRAF V600E mutant-type, and BRAF non-V600E mutant-type, respectively. The BRAF non-V600E variants in the 6 patients were as follows: G469A; G469A and V502I; D594G; I326V; N581Y; and D594G. BRAF V600E mutant-type tumors were more frequently right-sided, histopathological grade 3, mucinous type, and associated with multiple peritoneal metastases distant from the primary lesion. BRAF non-V600E mutant-type tumors were more frequently left-sided, non-mucinous type, and associated with bilateral multiple lung metastases. While BRAF V600E mutant-type showed significantly worse OS than BRAF wild-type and non-V600E mutant-type (P < 0.001 and P = 0.038, respectively), BRAF non-V600E mutant-type showed no significant difference compared with BRAF wild-type. Two of the 6 patients with BRAF non-V600E mutations underwent R0 resection and showed no evidence of disease at final follow-up. In 47 patients who received anti-EGFR therapy, BRAF V600E mutant-type showed significantly worse PFS than BRAF wild-type (P = 0.013), whereas BRAF non-V600E mutant-type showed no significant difference compared with BRAF wild-type. In 73 patients who received anti-VEGF therapy, there was no significant difference in PFS among the 3 groups.

Conclusion: BRAF non-V600E mutant-type demonstrates different clinicopathological characteristics and clinical outcome from BRAF V600E mutant-type. Further preclinical and clinical investigations are needed to clarify the role of BRAF non-V600E mutation in colorectal cancer.

 

72.01 PTEN Mutation Is Associated With Worse Prognosis In Stage III Colorectal Cancer

Y. Tajima1, Y. Shimada1, M. Nagahashi1, H. Ichikawa1, H. Kameyama1, M. Nakano1, J. Sakata1, T. Kobayashi1, H. Nogami2, S. Maruyama2, Y. Takii2, S. Okuda3, K. Takabe4,5, T. Wakai1  1Niigata University Graduate School Of Medical And Dental Sciences,Division Of Digestive And General Surgery,Niigata, NIIGATA, Japan 2Niigata Cancer Center Hospital,Department Of Surgery,Niigata, NIIGATA, Japan 3Niigata University Graduate School Of Medical And Dental Sciences,Division Of Bioinformatics,Niigata, NIIGATA, Japan 4Roswell Park Cancer Institute,Breast Surgery,Buffalo, NEW YORK, USA 5The State University Of New York,Department Of Surgery, University At Buffalo Jacobs School Of Medicine And Biomedical Sciences,Buffalo, NEW YORK, USA

Introduction:
The PI3K/AKT/mTOR pathway is involved in cell proliferation and is frequently activated in many human cancers. PTEN, by contrast, is a tumor suppressor gene that inhibits PI3K-initiated signaling. Loss of PTEN occurs in various tumor types and is associated with progression and worse prognosis. However, the association between PTEN mutation and prognosis in colorectal cancer (CRC) remains unclear. Our aim was to analyze the clinical impact of PTEN mutation in patients with colorectal cancer.

Methods:
Two hundred one Stage I–IV CRC patients who underwent colorectal resection were analyzed. We investigated genetic alterations associated with CRC using a 415-gene panel. The association between PTEN mutation status and clinicopathological characteristics was analyzed using Fisher's exact test. The association between PTEN mutation status and relapse-free survival (RFS) was analyzed using the log-rank test and a Cox proportional hazards model.

Results:
Fifty-five (27%) of 201 patients had PTEN mutation. Tumor diameter < 50 mm, lymphatic invasion, venous invasion, distant metastasis, poorly differentiated cluster grade 2/3, and Ki67 < 60% were significantly associated with PTEN mutation (P < 0.001, P = 0.036, P = 0.003, P = 0.002, P = 0.009, and P = 0.003, respectively). Univariate analysis showed that PTEN mutation was significantly associated with worse RFS in patients with Stage III CRC (P = 0.002) (Fig. 1). By contrast, PTEN mutation was not significantly associated with RFS in patients with Stage I/II CRC or with overall survival in patients with Stage IV CRC (Fig. 1). Of the 415 genes, 18 were mutated in more than 10% of patients with Stage III CRC: ACVR2A (10.9%), APC (71.7%), BRAF (10.9%), BRCA2 (10.9%), CDH1 (10.9%), CIC (10.9%), ERBB2 (13.0%), FAT1 (10.9%), FBXW7 (23.9%), KRAS (41.3%), PIK3CA (15.2%), PTEN (15.2%), RNF43 (19.6%), SMAD2 (10.9%), SMAD4 (21.7%), SPEN (13.0%), STK11 (15.2%), and TP53 (78.3%). Among the 18 genes, PTEN mutation was significantly associated with CIC mutation (P = 0.020). Univariate analysis in patients with Stage III CRC showed that RFS was significantly worse with mucinous type, CIC mutation, ERBB2 mutation, and PTEN mutation (P = 0.005, P = 0.009, P = 0.037, and P = 0.002, respectively). Multivariate analysis in patients with Stage III CRC showed that only PTEN mutation was significantly associated with RFS (hazard ratio 6.18, 95% confidence interval 1.63–23.5, P = 0.007).

Conclusion:
PTEN mutation was associated with worse prognosis in Stage III CRC. We speculate that PTEN is one of the potential driver genes in CRC.
 

72.02 Unplanned Reoperation in Patients Undergoing Surgery for Rectal Cancer

L. V. Saadat1, A. C. Fields1, H. Lyu1, R. D. Urman1, E. E. Whang1, J. Goldberg1, R. Bleday1, N. Melnitchouk1  1Brigham And Women’s Hospital,Boston, MA, USA

Introduction: The rate of unplanned reoperation can provide information about surgical quality and the incidence and management of postoperative complications. There has been a paucity of studies assessing reoperation rates after rectal cancer surgery and the morbidity after such procedures remains largely unknown. The goal of this study was to determine the factors associated with unplanned reoperation following low anterior resection (LAR) and abdominoperineal resection (APR) for patients with rectal cancer and outcomes following these reoperations. 

 

Methods: The American College of Surgeons National Surgical Quality Improvement Program database was used to identify patients who underwent elective LAR and APR for rectal cancer from 2012-2014. The primary outcomes were 30-day reoperation rates and postoperative complications. Bivariate and multivariate analyses were conducted to assess risk factors for reoperation.

 

Results: A total of 11,297 patients were identified; 7,714 patients underwent LAR and 3,583 patients underwent APR. 454 LAR patients (5.9%) and 289 APR patients (8.1%) required reoperation within 30 days of their index operation. The most common reasons for reoperation were infection, bleeding, and bowel obstruction. The mean time to reoperation was 10.6 days and 13.1 days for LAR and APR, respectively. Multivariate analysis revealed that female sex (OR: 1.5, 95%CI: 1.19-2.01, p value: 0.001), poor functional status (OR: 2.2, 95%CI: 1.03-4.50, p value: 0.04), operation time (OR: 1.001, 95%CI: 1.00-1.002, p value: 0.01), and low preoperative albumin (OR: 0.79, 95%CI: 0.62-0.99, p value: 0.04) were independent risk factors for reoperation after LAR. Smoking (OR: 1.7, 95%CI: 1.2-2.4, p value: 0.001), COPD (OR: 1.8, 95%CI: 1.1-3.1, p value: 0.03), poor functional status (OR: 2.1, 95%CI: 1.1-4.3, p value: 0.032), operation time (OR: 1.003, 95%CI: 1.002-1.004, p value: <0.001), low preoperative albumin (OR: 0.69, 95%CI: 0.53-0.90, p value: 0.007), and laparoscopic approach (OR: 1.5, 95%CI: 1.1-2.1, p value: 0.02) were independent risk factors for reoperation after APR. Postoperative complication rates were high for those undergoing reoperation, and patients were significantly more likely to have a non-home discharge (p<0.001) if reoperation took place.

 

Conclusion: Reoperation following LAR and APR for rectal cancer is not uncommon. This study highlights the common causes of reoperation, potentially modifiable preoperative risk factors for reoperation, and the morbidity associated with such operations.

 

 

71.09 Trends in Liver Transplantation with Older Liver Donors in the United States

C. E. Haugen1, X. Luo1, C. Holscher1, J. Garonzik-Wang1, M. McAdams-DeMarco1,2, D. Segev1,2  1Johns Hopkins University School Of Medicine,Surgery,Baltimore, MD, USA 2Johns Hopkins University Bloomberg School Of Public Health,Epidemiology,Baltimore, MD, USA

Introduction:  As the United States population ages, older liver donors (OLDs) represent a potential expansion of the donor pool. Historically, grafts from OLDs have been associated with poor outcomes and higher rates of discard, but some recent studies reported equivalent outcomes to grafts from younger donors. We hypothesized that there would be increased use of grafts from OLDs over time and sought to identify trends in demographics, discard, and outcomes of OLDs. 

Methods: We identified OLDs (aged ≥70 years) and liver-only OLD graft recipients from 1/1/2003-12/31/2016 in the Scientific Registry of Transplant Recipients. We studied temporal changes in OLD graft characteristics, utilization, and recipient characteristics. The Cuzick test for trend was used to evaluate OLD graft use over time. Cox proportional hazards models were used to estimate adjusted mortality and graft loss for OLD graft recipients.
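A minimal sketch of the adjusted survival comparison described above, using the lifelines library; the CSV file, column names, and covariate list are hypothetical stand-ins for the SRTR fields, not the authors' actual variables.

```python
# Sketch only: era-adjusted mortality for OLD graft recipients (hypothetical columns).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("old_graft_recipients.csv")  # hypothetical recipient-level extract

# Transplant era indicators, with 2003-2006 as the implicit reference category
df["era_2007_2012"] = df["transplant_year"].between(2007, 2012).astype(int)
df["era_2013_2016"] = df["transplant_year"].between(2013, 2016).astype(int)

cph = CoxPHFitter()
cph.fit(
    df[["time_to_death_yrs", "died", "era_2007_2012", "era_2013_2016",
        "recipient_age", "meld_at_transplant", "cold_ischemia_hrs"]],
    duration_col="time_to_death_yrs",
    event_col="died",
)
cph.print_summary()  # exp(coef) for era_2013_2016 near 0.40 would mirror the reported aHR
```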

Results: Since 2003, 3350 transplants with OLD grafts have been performed. However, the annual percentage of OLD transplants among all adult liver transplants decreased from 6.0% to 3.2% (p=0.001), and 12-25% of all recovered OLD grafts were discarded annually. Recently transplanted OLD grafts were more likely to have shorter cold ischemia times and to come from non-Caucasian donors with higher BMI and anoxia or head trauma as the cause of death, compared with OLDs in 2003. Recent OLD recipients were more likely to be older and less likely to be listed as Status 1 or to receive shared organs. Graft and patient survival for recipients of OLD grafts have improved since 2003 (Figure). For recipients of OLD grafts from 2013-2016, mortality was 60% lower (aHR: 0.40, 95% CI: 0.31-0.52, p<0.001) and all-cause graft loss was 55% lower (aHR: 0.45, 95% CI: 0.36-0.57, p<0.001) than for OLD recipients between 2003-2006.

Conclusion: Up to 25% of OLDs are discarded annually across the US, and the number of OLD transplants performed has been decreasing. However, there is a significant improvement in graft and patient survival for OLD recipients since 2003. Particularly in the setting of an aging population, these trends in improved outcomes can guide OLD use and decrease OLD discard to possibly expand the donor pool. 
 

71.10 Survival after the Introduction of the Lung Allocation Score In Simultaneous Lung-Liver Recipients

K. Freischlag2, M. S. Mulvihill1, P. M. Schroder1, B. Ezekian1, S. Knechtle1  1Duke University Medical Center,Surgery,Durham, NC, USA 2Duke University Medical Center,School Of Medicine,Durham, NC, USA

Introduction:
The optimal management of patients with combined lung and liver failure is uncertain. Simultaneous lung and liver transplantation (LLT) confers a survival benefit over remaining on the waitlist for transplantation. In 2005, the lung allocation score (LAS) was introduced and significantly reduced waitlist and overall mortality in single-organ lung transplantation. The current system for simultaneous LLT generally matches a recipient with a donor based on his or her LAS, which often results in a relatively low MELD score at transplantation compared with liver-alone transplantation. However, the current impact of the LAS on LLT is unknown. To ascertain whether the current lung allocation system has improved survival in this cohort, we studied LLT before and after the introduction of the LAS.

Methods:
The OPTN/UNOS STAR file was queried for adult recipients of simultaneous LLT. Demographic characteristics were subsequently generated and examined. Kaplan-Meier analysis with the log-rank test compared survival between groups. A hazard ratio was generated based on the presence of LAS alone.
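A minimal sketch of the Kaplan-Meier and log-rank comparison described above, using the lifelines library. The file, column names, and the assumed LAS implementation date used to split the cohorts are illustrative, not drawn from the STAR file itself.

```python
# Sketch only: survival before vs. after LAS introduction (hypothetical columns).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("llt_recipients.csv", parse_dates=["transplant_date"])  # hypothetical extract
cutoff = pd.Timestamp("2005-05-04")  # assumed LAS implementation date
pre, post = df[df["transplant_date"] < cutoff], df[df["transplant_date"] >= cutoff]

km = KaplanMeierFitter()
ax = km.fit(pre["survival_yrs"], pre["died"], label="Pre-LAS").plot_survival_function()
km.fit(post["survival_yrs"], post["died"], label="Post-LAS").plot_survival_function(ax=ax)

result = logrank_test(pre["survival_yrs"], post["survival_yrs"],
                      event_observed_A=pre["died"], event_observed_B=post["died"])
print(result.p_value)  # the abstract reports p=0.0073 for this comparison
```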

Results:
A total of 64 simultaneous LLT recipients were identified as suitable for analysis. Of those, 10 underwent transplant prior to the introduction of the LAS. Comparatively, those without a LAS had a higher mean FEV1 (48.22 vs 29.56, p=0.012), higher mean creatinine at transplant (1.22 vs 0.73, p=0.001), a higher percentage diagnosed with primary pulmonary hypertension (40% vs 0%, p=0.004), and an earlier mean transplant year (1999.4 vs 2011.17, p<0.001). Survival was significantly lower in the LLT cohort before the introduction of the LAS compared with the cohort after the LAS (Figure 1; 1-year: 50.0% vs 83.3%, 5-year: 40.0% vs 67.5%, 10-year: 20.0% vs 55.6%, p=0.0073). Presence of a LAS was a predictor of decreased mortality (OR 0.051, 95% CI 0.006-0.436, p=0.007).

Conclusion:
LLT is a rare procedure, and most national analyses have included patients from before and after the introduction of the LAS. Our study shows that survival in combined lung-liver transplantation significantly improved after the introduction of the lung allocation score, and the presence of a LAS was a predictor of decreased mortality. While many factors contributed to the changes in mortality, the cohorts before and after the introduction of the LAS differ significantly and should be treated as such when conducting future studies in simultaneous thoracic and abdominal organ allocation.
 

71.08 Kidney Paired Donation Programs Don't Become Concentrated with Highly Sensitized Candidates Over Time

C. Holscher1, K. Jackson1, A. Thomas1, C. Purcell2, M. Ronin2, A. Massie1, J. Garonzik Wang1, D. Segev1  1Johns Hopkins University,Baltimore, MD, USA 2National Kidney Registry,New York, NY, USA

Introduction: In order to utilize a willing but incompatible living donor, transplant candidates must either proceed with incompatible living donor kidney transplantation or attempt to find a more compatible match using kidney paired donation (KPD). For the latter, the benefit of a "better" match must be balanced against the morbidity and mortality associated with increased dialysis time while searching for a match. A common criticism of KPD registries is that "easy-to-match" candidates match and leave the registry pool quickly, creating a registry pool concentrated with difficult-to-match patients and making future KPD matches challenging. We hypothesized that, given alternative treatments such as deceased donor kidney transplant priority and desensitization, this concern is no longer warranted.

Methods: We studied 1894 registrants to the National Kidney Registry (NKR), the largest KPD registry in the United States (US), between 2011 and 2015. Candidates were considered a part of the KPD registry pool for the year they registered, and remained in the pool after registration until they matched into a KPD transplant or were removed from the registry for other reasons such as death, receipt of a deceased donor kidney transplant, or incompatible living donor transplant. The prevalent composition of the NKR pool was compared across years, comparing by age, gender, race/ethnicity, body mass index (BMI), ABO blood type, and panel reactive antibody (PRA) categories. Comparisons were made with chi-square, ANOVA, and t-tests, as appropriate.  
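The year-over-year comparisons named above could be run along the following lines; a simplified sketch with illustrative column names rather than the NKR data dictionary.

```python
# Sketch only: is the prevalent pool composition changing across registry years?
import pandas as pd
from scipy import stats

pool = pd.read_csv("nkr_pool_by_year.csv")  # hypothetical candidate-year records

# Chi-square: does the PRA category mix differ by year?
pra_table = pd.crosstab(pool["year"], pool["pra_category"])
chi2, p_pra, dof, _ = stats.chi2_contingency(pra_table)

# ANOVA: does mean candidate age differ by year?
ages_by_year = [grp["age"].dropna() for _, grp in pool.groupby("year")]
f_stat, p_age = stats.f_oneway(*ages_by_year)

print(f"PRA mix across years: p={p_pra:.3f}; age across years: p={p_age:.3f}")
```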

Results: Candidates were median age 50 (IQR: 38-60) years, 48% female, 66% white, and had a median BMI of 27 (23-31). Overall, 59% of candidates had blood type O, 24% had blood type A, 15% had blood type B, and 2% had blood type AB. The mean PRA was 53 with 29% having a PRA of 0, 29% having a PRA of 1-79, 18% having a PRA 80-97, and 24% having a PRA 98 or higher. Notably, there were no statistically significant differences by year in age, gender, race, BMI, blood type, or PRA. Further, there were no statistically significant changes by year in the composition of the pool by PRA category (Figure).  

Conclusion: In the largest KPD registry in the US, there is no evidence that KPD registrants have become more difficult to match over time. This should encourage continued enrollment of incompatible donor/recipient pairs in KPD registries to facilitate compatible transplantation.

71.06 Initial Experience with Closed Incision Negative Pressure Wound Therapy in Kidney Transplant Patients

C. J. Lee1, G. Sharma1, C. Blanco1, A. Bhargava1, S. Patil1, F. Weng2, S. Geffner1,2, H. Sun1,2  1Saint Barnabas Medical Center,Department Of Surgery,Livingston, NJ, USA 2Saint Barnabas Medical Center,Renal And Pancreas Transplant Division,Livingston, NJ, USA

Introduction: Although the kidney is the most commonly transplanted organ, renal transplantation still has several complications, one of the most frequent being surgical site infection (SSI). This study examines the effectiveness of closed incision negative pressure wound therapy (NPWT) and antimicrobial foam dressings in preventing SSI in kidney transplant patients.

Methods: A retrospective chart review was performed on 78 patients who received a kidney transplant from June 2015 to August 2015. The recipients were divided into high-risk (body mass index over 30, on hemodialysis over 5 years, or pre-transplant diabetes mellitus) and low-risk groups. High-risk patients were dressed with closed incision NPWT, and low-risk patients with antimicrobial foam. All patients received antibiotics within 30 minutes of the incision. The incision was closed in multiple layers and the skin was closed with surgical staples. Both dressings were applied intraoperatively and removed 5 to 7 days postoperatively. Wounds were evaluated until 30 days postoperatively. The primary end point of the study was SSI as defined by the Centers for Disease Control and Prevention. Univariate analysis was performed using the chi-square test for categorical and continuous variables. A P value of .05 or less was considered statistically significant. The SPSS 20.0 statistical package was used for data analysis.

Results: NPWT was used in 39 patients, of whom 17 received a living donor kidney transplant (LDKT) and 22 received a deceased donor kidney transplant (DDKT). Antimicrobial foam dressing was applied in 39 patients, of whom 21 received a LDKT and 18 received a DDKT. Eight patients in the antimicrobial foam group underwent re-operation, none for wound infection. Six patients in the NPWT group underwent re-operation, 2 of them for wound infection. The SSI rate in high-risk patients at the same institution in 2014 was 22%, which was reduced to 12.8% in patients from June to August 2015, after implementation of the NPWT dressing. The SSI rate in low-risk patients in 2014 was 6%, which was reduced to 0% after implementation of the antimicrobial foam dressing. On univariate analysis, BMI, diabetes, and peri-operative transfusion differed significantly between the two groups. On multivariate analysis, there were no independent factors associated with wound infection (Table 1).

Conclusion: Based on our initial experience, closed incision NPWT and antimicrobial foam dressings, in addition to standard perioperative measures, have shown encouraging results in reducing surgical site infections for high- and low-risk renal transplant recipients.

 

71.07 Early Hypertension, Diabetes, and Proteinuria After Kidney Donation: A National Cohort Analysis

C. Holscher1, S. Bae1, M. Henderson1, S. DiBrito1, C. Haugen1, A. Muzaale1, A. Massie1, J. Garonzik Wang1, D. Segev1  1Johns Hopkins University,Baltimore, MD, USA

Introduction:  Living kidney donors (LKDs) are at greater risk of end stage renal disease (ESRD) than the general population. While late post-donation ESRD is more likely due to hypertension (HTN) or diabetes (DM), early post-donation ESRD is often secondary to glomerulonephritis and is associated with proteinuria. Better understanding of the prevalence of and risk factors for early post-donation proteinuria, HTN, and DM will improve LKD follow-up care.

Methods: Using SRTR data, we identified 41,260 LKDs who underwent donor nephrectomy from 2008-2014, with follow-up data included through 2017. Given the high loss to follow-up (59% missing proteinuria, 33% missing HTN, and 31% missing DM), sensitivity analyses were performed using inverse probability weighting (IPW) and multiple imputation by chained equations (MICE). Multiple logistic regression models were used to compare risk factors for proteinuria, HTN, and DM.
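The two sensitivity analyses could be sketched as below; this is a simplified, single-imputation illustration (a full MICE analysis would pool estimates across several imputations), and the file and column names are hypothetical, not SRTR fields.

```python
# Sketch only: handling missing 2-year follow-up with IPW and iterative imputation.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer

lkd = pd.read_csv("lkd_followup.csv")  # hypothetical donor-level extract
covars = ["age", "bmi", "sbp", "female", "related_to_recipient"]  # assumed complete

# Inverse probability weighting: model the probability that 2-year proteinuria status
# was ascertained, then weight observed donors by the inverse of that probability.
observed = lkd["proteinuria_2yr"].notna().astype(int)
pscore = LogisticRegression(max_iter=1000).fit(lkd[covars], observed).predict_proba(lkd[covars])[:, 1]
weights = 1.0 / pscore[observed == 1]
ipw_prev = (lkd.loc[observed == 1, "proteinuria_2yr"] * weights).sum() / weights.sum()

# Chained-equations imputation (one imputed dataset shown for brevity)
imputed = pd.DataFrame(
    IterativeImputer(random_state=0).fit_transform(lkd[covars + ["proteinuria_2yr"]]),
    columns=covars + ["proteinuria_2yr"],
)
mice_prev = imputed["proteinuria_2yr"].mean()

print(f"IPW prevalence ~{ipw_prev:.1%}; imputed prevalence ~{mice_prev:.1%}")
```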

Results: Among LKDs at two years post-donation, 1.70% (1.55-1.86%) had HTN, 0.09% (0.06-0.13%) had DM, and 5.47% (5.11-5.84%) had proteinuria. Sensitivity analyses revealed similar estimates of HTN and DM, but slightly higher estimates of proteinuria [6.44% (6.09-6.84%) with IPW and 6.72% (6.35-7.23%) with MICE]. HTN was more likely in older (per 10 years, aOR: 1.49, 1.34-1.66), more obese (per 5 BMI units, aOR: 1.34, 1.17-1.53), and hypertensive (per 10 mmHg, aOR: 1.45, 1.35-1.56) LKDs. HTN was less likely in LKDs who had donated more recently (per year, aOR: 0.94, 0.90-1.00), were female (aOR: 0.78, 0.63-0.97), were Hispanic/Latino (reference: white, aOR: 0.64, 0.43-0.94), or were not related to the recipient (aOR: 0.73, 0.58-0.93). DM was more likely in LKDs who were Hispanic/Latino (aOR: 3.85, 1.39-10.64) or had higher BMIs (aOR: 1.93, 1.13-3.28). Proteinuria was more likely in LKDs who had higher BMIs (aOR: 1.23, 1.12-1.36) and in African American (aOR: 1.85, 1.48-2.32) and Hispanic/Latino (aOR: 1.51, 1.21-1.88) LKDs relative to white LKDs. Proteinuria was less likely in LKDs who were older (aOR: 0.83, 0.77-0.90), female (aOR: 0.76, 0.64-0.89), or not related to their recipient (aOR: 0.83, 0.70-0.99) (Table).

Conclusion: The low early post-nephrectomy prevalence of HTN, DM, and proteinuria in LKDs is reassuring and suggests risk of ESRD is limited to a small proportion of LKDs. Improved understanding of which LKDs are at risk for these conditions might improve pre-donation risk stratification and counseling as well as post-donation prevention of ESRD. 

71.05 Arterial, but Not Venous, Reconstruction Increases Morbidity and Mortality in Pancreaticoduodenectomy

S. L. Zettervall1, J. Holzmacher1, T. Ju1, G. Werba1, B. Huysman1, P. Lin1, A. Sidawy1, K. Vaziri1  1George Washington University School Of Medicine And Health Sciences,Surgery,Washington, DC, USA

Introduction:  Vascular reconstruction during pancreaticoduodenectomy is increasingly utilized to improve pancreatic cancer resectability. However, very few multi-institutional studies have evaluated the morbidity and mortality of arterial and venous resection with reconstruction during this procedure.

Methods:  A retrospective analysis of prospectively collected data was performed utilizing the targeted pancreas module of the National Surgical Quality Improvement Program (NSQIP) for patients undergoing pancreaticoduodenectomy from 2012-2014. Demographics, comorbidities, and 30-day outcomes were compared among patients who underwent venous or arterial reconstruction and no vascular reconstruction. Multivariate analysis was utilized to adjust for differences in demographic and operative characteristics.

Results: 3002 patients were included in NSQIP in the time period studied: 384 with venous reconstruction, 52 with arterial reconstruction, and 2566 without vascular reconstruction. Patients who underwent both venous and arterial reconstruction were excluded (N=81). Compared to patients without reconstruction, those with venous reconstruction had more congestive heart failure (0.2% vs. 1.8%, P < .01), and those with arterial reconstruction had higher rates of pulmonary disease (11.5% vs. 4.5%, P = 0.02). Pre-operative chemotherapy was more common in both venous (34% vs 12%, P < .01) and arterial reconstruction (21% vs 12%, P < .04). On multivariate analysis, there was no increase in morbidity or mortality following venous reconstruction compared with no reconstruction. In contrast, on multivariate analysis, arterial reconstruction was associated with increased 30-day mortality (odds ratio (OR) 6.7, 95% confidence interval (CI) 1.8-25). Morbidity was also increased, as reflected by return to the operating room (OR 4.5, 95% CI 1.5-15), pancreatic fistula (OR 4.4, 95% CI 1.7-11), and reintubation (OR 3.9, 95% CI 1.1-14).

Conclusion: These findings suggest that venous reconstruction may be considered to improve survival in patients previously thought unresectable due to venous involvement. Careful consideration should be given prior to arterial reconstruction, given the significant increase in perioperative complications and 30-day mortality.

 

71.04 Significance of repeat hepatectomy for intrahepatic recurrence of HCC within Milan criteria

T. Gocho1, Y. Saito1, M. Tsunematsu1, R. Maruguchi1, R. Iwase1, J. Yasuda1, F. Suzuki1, S. Onda1, T. Hata1, S. Wakiyama1, Y. Ishida1, K. Yanaga1  1Jikei University School Of Medicine,Department Of Surgery,Minato-ku, TOKYO, Japan

Introduction: The standard treatment strategy for intrahepatic recurrence (IHR) of hepatocellular carcinoma (HCC) within the Milan criteria (MC) after primary hepatic resection differs between Western countries and Japan. In Western countries, salvage liver transplantation (ST) is reported to have good results, whereas in Japan repeat hepatectomy is usually the treatment of choice for patients with good hepatic reserve. The aim of this study is to evaluate the prognostic impact of IHR of HCC within MC and to identify factors related to IHR within MC.

Methods: Between April 2003 and December 2015, 218 patients were treated with primary resection for HCC at Jikei University Hospital. Of those, 118 patients who developed IHR were retrospectively reviewed, and the significance of the following clinicopathological factors was assessed: patient factors (age, sex, viral status, background liver), primary and recurrent tumor factors (size, number, macroscopic portal vein invasion), treatment modality, and 5-year overall survival after recurrence (5-y OS).

Results: Median age was 68 years (range 29-90) and 107 patients (91%) were male. Sixty-eight patients (58%) developed IHR within MC, and 37 of these patients (54%) were treated with repeat hepatectomy. With a median follow-up period of 64.6 months, IHR within MC showed significantly better 5-y OS (74%) compared with IHR beyond MC (22%) (p < 0.001). The 5-y OS of patients with IHR within MC treated with repeat hepatectomy was 85%, which was better than the reported 5-y OS of ST. By univariate analysis, patients with IHR within MC had higher rates of HBV positivity (p = 0.034), tumor size more than 5 cm (p < 0.001), and macroscopic PV invasion (p = 0.041). By multivariate analysis, independent prognostic factors consisted of tumor size more than 5 cm (p = 0.041), macroscopic PV invasion (p = 0.027), and repeat hepatectomy (p < 0.001).

Conclusion: In selected patients, IHR within MC after primary liver resection for HCC can be treated with repeat hepatectomy with good outcomes compared with ST.

71.03 So Many Pancreatic Cystic Neoplasms, So Little Known About Their Natural History

F. F. Yang1, M. M. Dua2, P. J. Worth2, G. A. Poultsides3, J. A. Norton3, W. G. Park4, B. C. Visser2  1Stanford University,School Of Medicine,Palo Alto, CA, USA 2Stanford University,Hepatobiliary & Pancreatic Surgery,Palo Alto, CA, USA 3Stanford University,Surgical Oncology,Palo Alto, CA, USA 4Stanford University,Gastroenterology & Hepatology,Palo Alto, CA, USA

Introduction: Pancreatic cystic neoplasms (PCNs) are a frequent incidental finding on imaging performed for indications unrelated to the pancreas. Guidelines for management of PCNs are largely based on surgical series; important aspects of their natural history are still unknown. The purpose of this study was to characterize which PCNs can be safely observed.

Methods: A retrospective study was performed of patients who either underwent immediate resection of a PCN (within 6 weeks of presentation) or were observed with at least two imaging studies between 2004 and 2014. Descriptive statistics and multiple logistic regression analyses were performed to determine predictors of premalignancy and malignancy.

Results: Of the 1151 patients in this study, 66 (5.7%) underwent immediate surgery, while 1085 patients underwent surveillance with a median follow-up of 15.5 months (mean 24.7, SD 25.6). Of the observed patients, 183 (16.9%) demonstrated radiographic progression, while the majority (83.1%) did not progress. Eighty-four (7.6%) of the observed patients eventually underwent surgery for concerning features, with a median of 8.0 months until resection (mean 18.1, SD 26.1). The risk of malignancy among patients undergoing immediate surgery was 65%. The risk of developing malignancy during the first 12 months of surveillance was 5.3%, while the risk of malignancy decreased with surveillance time (Table).

Multiple logistic regression demonstrated that among all patients, jaundice (OR=36.3, 95% CI=5.96-221, p<0.0001), initial cyst size >3.0 cm (OR=5.14, 95% CI=1.13-23.5, p=0.035), solid component (OR=2.96, 95% CI=1.04-8.42, p=0.042), and main pancreatic duct (MPD) dilation >5 mm (OR=4.18, 95% CI=1.18-14.9, p=0.27) were independent predictors of premalignancy or malignancy. Among observed patients, jaundice (OR=13.9, 95% CI=1.48-130.3, p=0.021), unintentional weight loss (OR=8.03, 95% CI=1.59-40.5, p=0.012), radiographic progression (OR=3.42, 95% CI=1.28-7.91, p=0.004), and MPD dilation >5 mm (OR=4.99, 95% CI=1.24-20.0, p=0.023) were independent predictors of premalignancy or malignancy.

Conclusion: Relatively few pancreatic cystic lesions progress to malignancy during surveillance, especially beyond a time frame of one year. However, the risk of transformation does persist after 5 years of follow-up. This understanding of the natural history, predictors of malignancy, and especially the timeframe of transformation of PCN to either carcinoma-in-situ or invasive adenocarcinoma is important for counseling of patients undergoing surveillance.

71.01 Damage Control Pancreatic Débridement: Salvaging the Most Severely Ill

T. K. Maatman1, A. Roch1, M. House1, A. Nakeeb1, E. Ceppa1, C. Schmidt1, K. Wehlage1, R. Cournoyer1, N. Zyromski1  1Indiana University School Of Medicine,Indianapolis, IN, USA

Introduction:  Damage Control Laparotomy is a widely accepted practice in trauma surgery. We have applied this approach selectively to severely ill patients requiring open pancreatic débridement. Damage Control Débridement (DCD) is a novel, staged approach to pancreatic débridement; we sought to evaluate outcomes associated with this technique.

Methods: A retrospective review was performed of 75 consecutive patients undergoing open pancreatic débridement between 2006 and 2016. Data were prospectively collected in our institutional Necrotizing Pancreatitis Database. Twelve patients undergoing DCD were compared with 63 undergoing single-stage débridement (SSD). Two-sample t-tests and Pearson correlations or Fisher's exact tests were performed to analyze the bivariate relationships between DCD and suspected factors defined pre- and post-operatively. P-values of <0.05 were accepted as statistically significant.

Results: Patients treated by DCD were more severely ill globally. DCD patients had a higher incidence of preoperative organ failure and need for ICU admission, higher APACHE II scores (Table), and more profound malnutrition (albumin DCD=1.9 g/dL vs. SSD=2.5 g/dL; p=0.03). Indications for DCD included hemodynamic compromise (n=4), medical coagulopathy (n=4), or a combination (n=4). Six of 12 DCD patients required more than one subsequent débridement prior to definitive abdominal closure (mean number of total débridements=2.6; range 2-4). Length of stay (DCD=43.8 vs. SSD=17.1, p<0.01) and ICU stay (DCD=20.8 vs. SSD=5.9, p<0.01) were longer in DCD patients. However, no difference was seen in the rate of readmission (DCD=42%, SSD=41%, p=0.90) or repeat intervention (any: DCD=58%, SSD=33%, p=0.10; endoscopic: DCD=17%, SSD=11%, p=0.59; percutaneous drain: DCD=42%, SSD=19%, p=0.09; return to OR after abdominal closure: DCD=0%, SSD=13%, p=0.20). The DCD group had a lower rate of pancreatic fistula (DCD=33%, SSD=65%, p=0.04). Overall mortality was 2.7%; no significant difference in mortality was observed between DCD (8%) and SSD (2%), p=0.19.

Conclusion: Despite having substantially more severe acute illness, necrotizing pancreatitis patients treated with damage control débridement had morbidity and mortality equivalent to those undergoing elective single-stage pancreatic débridement. Damage control débridement is an effective technique with which to salvage severely ill necrotizing pancreatitis patients.

 

71.02 Rapid Growth Speed of Cyst Was a Predictive Factor for Malignant Intraductal Papillary Mucinous Neoplasms

K. Akahoshi1, N. Chiyonobu1, H. Ono1, Y. Mitsunori1, T. Ogura1, K. Ogawa1, D. Ban1, A. Kudo1, M. Tanabe1  1Tokyo Medical And Dental University,Hepato-Biliary-Pancreatic Surgery,Bunkyo-ku, Tokyo, Japan

Introduction:
Intraductal papillary mucinous neoplasms (IPMN) are cystic tumors of the pancreas with the ability to progress to invasive cancer, and they are being discovered with increasing frequency. Currently, the timing of surgical treatment is determined based on the international consensus guidelines. However, pre-operative risk stratification for malignant IPMN is still difficult, and novel predictors of malignant potential are needed.

Methods:
This is a retrospective, single-center study of 81 IPMN patients who underwent surgical resection between 2005 and 2015. Clinical and pathological data were collected and analyzed, and differences between benign and malignant IPMN were compared. Malignant IPMN was defined as the presence of high-grade dysplasia or invasive cancer based on pathological diagnosis of the resected specimen.

Results:
Of the 81 patients, 46 had benign IPMN (low- to intermediate-grade dysplasia) and 35 had malignant IPMN. Malignant IPMN was present in 28% of patients with branch duct type (10/36), 55% with combined type (17/31), and 57% with main duct type (8/14). Fifty-nine percent (24/41) of patients with high-risk stigmata and 27% (10/37) with worrisome features had malignant IPMN. High-risk stigmata significantly correlated with malignant potential (p=0.006). Next, cyst growth speed was measured in branch duct type and combined type patients with at least 2 contrast-enhanced imaging studies. Average cyst growth speed in benign and malignant IPMN patients was 0.979±1.796 mm/year and 6.933±2.958 mm/year, respectively (p<0.001).
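The growth-speed measure used above is simply the change in cyst diameter between contrast-enhanced studies divided by the elapsed time; a toy illustration (the dates and sizes below are invented) is shown here.

```python
# Toy illustration of cyst growth speed in mm/year; example values are hypothetical.
from datetime import date

def growth_speed_mm_per_year(size0_mm: float, date0: date,
                             size1_mm: float, date1: date) -> float:
    years = (date1 - date0).days / 365.25
    return (size1_mm - size0_mm) / years

# A branch-duct IPMN growing from 18 mm to 32 mm over two years: ~7 mm/year,
# in the range reported above for the malignant group (6.9 mm/year on average).
print(growth_speed_mm_per_year(18, date(2013, 4, 1), 32, date(2015, 4, 1)))
```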

Conclusion:
Rapid cyst growth speed was a predictive factor for malignant IPMN, as were high-risk stigmata. Evaluation of cyst growth speed would help optimize the treatment strategy for IPMN patients.
 

70.10 A Decade of Components Separation Technique: An Increasing Trend in Open Ventral Hernia Repair

M. R. Arnold1, J. Otero1, K. A. Schlosser1, A. M. Kao1, T. Prasad1, A. Lincourt1, P. D. Colavita1, B. T. Heniford1  1Carolinas Medical Center,General Surgery,Charlotte, NC, USA

Introduction:

Components separation technique (CST), a complex surgical adjunct to ventral hernia repair (VHR), was originally described by Ramirez et al. more than 25 years ago. Reports of CST have increased over the last several years, but no studies to date have examined the trends in CST utilization and associated complications over time. The purpose of this study is to examine trends in CST over the past 10 years.

Methods:

The ACS-NSQIP database was used to identify open VHR with components separation from 2005 to 2014. Preoperative risk factors, operative characteristics, outcomes, and morbidity trends were compared. Univariate analysis of outcomes and morbidity was performed, and multivariate analysis was performed to control for potential confounding variables.

Results:

A total of 129,532 patients underwent open VHR during the study period. CST was performed as part of 8,317 ventral hernia repairs. Use of CST increased from 39 cases in 2005 to 2,275 cases in 2014 (2.6% vs. 10.2%; p<0.0001). Over the past decade, preoperative smoking and dyspnea significantly decreased (p<0.05). Decreases were seen in superficial wound infection (10.3% vs. 5.7%; p<0.05), deep wound infection (7.7% vs. 3.4%; p<0.05), all wound-related complications (18.0% vs. 10.2%; p<0.05), minor complications (18.0% vs. 13.4%; p<0.0001), and major complications (25.6% vs. 12.8%; p<0.0001). Hospital length of stay decreased (11.0 vs. 6.3; p<0.05), and hospital readmissions decreased from 2011 to 2014 (14.4% vs. 11.1%; p<0.05). There was no significant change in thirty-day mortality (range 0-1.42%; p=0.28). Multivariate regression was performed to control for pre-operative comorbidities; there was an overall decrease in wound dehiscence for 2011 (OR 0.3, 95% CI 0.1-0.9), 2012 (OR 0.2, 95% CI 0.1-0.7), 2013 (OR 0.2, 95% CI 0.0-0.6), and 2014 (OR 0.2, 95% CI 0.1-0.7). There was no significant change in major or minor complications, wound infection, or mortality.

Conclusion:

CST in VHR has significantly increased in frequency over the past 10 years. As experience with CST has increased, there has been a significant decrease in the rate of associated complications. When adjusted for preoperative risk factors, the risk of wound dehiscence decreased; however, the rate of other complications remained unchanged. This suggests that preoperative patient optimization and improvements in patient selection and modifiable risk factors, such as smoking, rather than changes in surgical technique, have led to improved outcomes. Due to the limitations of the NSQIP database, changes in chronic disease management, such as diabetes, may be overlooked. Additional studies are needed to further elucidate the reasons for this decrease in complications.

 

70.08 Don’t Get Stuck. A Quality Improvement Project to Reduce Perioperative Blood-Borne Pathogen Exposure

J. P. Gurria1,3, H. Nolan1, S. Polites1, K. M. Arata4, A. Muth5, L. Phipps4, R. A. Falcone1,2,3  1Cincinnati Children’s Hospital Medical Center,General And Thoracic Surgery,Cincinnati, OH, USA 2University Of Cincinnati,Surgery,Cincinnati, OH, USA 3Cincinnati Children’s Hospital Medical Center,Trauma Surgery,Cincinnati, OH, USA 4Cincinnati Children’s Hospital Medical Center,Operating Room,Cincinnati, OH, USA 5Cincinnati Children’s Hospital Medical Center,Occupational Safety And Environmental Health,Cincinnati, OH, USA

Introduction:
Blood-borne pathogen exposure (BBPE) among health care employees represents a significant safety and resource burden, with over 380,000 events reported annually across hospitals in the United States. The perioperative environment is a high-risk area for BBPE, and efforts to reduce exposures are not well defined. The incidence of blood-borne pathogen infections among patients is often under-appreciated, and therefore so is the risk of BBPE. A multi-disciplinary group of nurses, surgical technologists, surgeons, and employee health specialists worked collaboratively to develop and implement a BBPE prevention bundle to reduce exposures and therefore the overall number of Occupational Safety and Health Administration (OSHA) recordable cases.

Methods:
A BBPE prevention bundle including mandatory double gloving, a safety zone on the sterile field, engineered sharps injury prevention devices on all needles, and clear communication around the passing of sharps was created in an evidence-based, collaborative fashion at our institution and implemented for all perioperative staff. After an initial pilot period from January through July 2016 with one surgical division, the bundle was spread throughout our entire perioperative system. We monitored exposures as both days between exposures and total number of exposures, comparing the baseline period of fiscal year (FY) 2015 to the post-implementation periods of FY 2016 and FY 2017. Analysis by specialty, role, location, type of injury, and timing was performed.

Results:
During the study period the number of surgical procedures remained relatively constant (35,000/FY). During the pre-implementation period, 45 OSHA recordable cases were reported. During the implementation year (FY16), cases decreased to 38 (a 15% decrease), while in the late post-implementation period of FY17 only 22 cases were reported (an additional 42% decrease), for a total decrease of 52% (p<0.022). The mean number of days between injuries significantly increased from 2.5 to 16.3 over the study period (Figure 1). For FY17, the main cause and type of BBPE was a needle stick while suturing (63%). By role, clinical fellows and attendings combined had the most injuries (54.6%). Among divisions, pediatric surgery (18%), operating room staff (18%), and orthopedics (13.6%) had the most events.

Conclusion:
A comprehensive and multi-disciplinary approach to employee safety, focused on reduction of BBPE, resulted in a significant progressive annual decrease in injuries among perioperative staff. Efforts to change the culture of safety and implement a successful bundle in a complex environment benefited from the support and diversity of a widely representative team.
 

70.09 The Association Between Travel Distance, Institutional Volume, and Outcomes for Rectal Cancer Patients

M. Cerullo1, M. Turner1, M. A. Adam1, Z. Sun1, J. Migaly1, C. Mantyh1  1Duke University Medical Center,Department Of Surgery,Durham, NC, USA

Introduction: Although the relationship between surgical volume and outcomes has been well studied, travel to higher-volume centers remains a significant barrier for patients. While travel to higher-volume centers is associated with improved outcomes for other cancers, it remains unclear whether it is associated with improved 30- and 90-day mortality, better long-term survival, or a higher likelihood of undergoing appropriate surgery in patients with rectal cancer.

Methods: Patients with rectal adenocarcinoma (stages I-III) with a single primary tumor were identified in the 2011-2014 National Cancer Database (NCDB). Patients were further categorized into quartiles by distance traveled, while centers were categorized into volume quartiles. Multivariable logistic regression and Cox proportional hazards regression models were used to compare outcomes between patients in the highest quartile of travel distance treated at the highest volume centers (travel) with patients in the lowest quartile of travel distance treated at the lowest volume centers (local).
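A minimal sketch of the quartile assignment and group definitions described above, with hypothetical column names standing in for the NCDB fields.

```python
# Sketch only: travel-distance quartiles per patient and volume quartiles per center.
import pandas as pd

pts = pd.read_csv("ncdb_rectal.csv")  # hypothetical patient-level extract

# Quartiles of travel distance across patients
pts["distance_q"] = pd.qcut(pts["travel_distance_miles"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

# Volume quartiles across centers, merged back onto patients
center_vol = pts.groupby("facility_id").size().rename("cases")
vol_q = pd.qcut(center_vol, 4, labels=["Q1", "Q2", "Q3", "Q4"]).rename("volume_q")
pts = pts.merge(vol_q, left_on="facility_id", right_index=True)

# Comparison groups used in the regression models
travel = pts[(pts["distance_q"] == "Q4") & (pts["volume_q"] == "Q4")]
local = pts[(pts["distance_q"] == "Q1") & (pts["volume_q"] == "Q1")]
print(len(travel), len(local))
```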

Results: In total, 3,088 patients in the lowest quartile of travel distance treated at low-volume centers and 3,071 patients in the highest quartile of travel distance treated at high-volume centers were identified. Overall, 38.2% of patients had stage III disease (35.3% of short-distance/low-volume patients vs. 41.1% of greater-distance/high-volume patients, p<0.001), and 63.6% received neoadjuvant radiation (57.7% of short-distance/low-volume patients vs. 69.6% of greater-distance/high-volume patients, p<0.001). After adjustment for disease severity and receipt of adjuvant therapy, patients who traveled greater distances to high-volume centers had 71% lower 30-day and 61% lower 90-day mortality (30-day: OR=0.29, 95% CI: 0.15–0.57, p<0.001; 90-day: OR=0.39, 95% CI: 0.24–0.62, p<0.001), as well as a lower risk of overall mortality (hazard ratio=0.78, 95% CI: 0.68–0.88, p<0.001). These patients also were more likely to have adequate lymph node harvest (OR=1.83, 95% CI: 1.64–2.05, p<0.001) and less likely to have positive margins (OR=0.76, 95% CI: 0.59–0.96, p=0.02). However, these patients also had 42% greater odds of being readmitted after surgery (OR: 1.42, 95% CI: 1.14–1.75, p=0.001).

Conclusion: Traveling greater distances to high-volume centers is associated with lower 30- and 90-day mortality, lower overall risk of death, and better pathologic surrogates for appropriate surgery in rectal cancer patients. As patients may often be faced with choosing between obtaining care at local lower-volume centers and traveling to higher-volume centers, these findings provide an impetus for facilitating travel to higher-volume centers.

70.06 Neoadjuvant Radiation for Locally Advanced Colon Cancer: A Good Idea for a Bad Problem?

A. T. Hawkins1, T. M. Geiger1, M. Ford1, R. L. Muldoon1, B. Hopkins1, L. A. Kachnic1, S. Glasgow2  1Vanderbilt University Medical Center,Nashville, TN, USA 2Washington University,Colon And Rectal Surgery,St. Louis, MO, USA

Introduction: Compared with lower tumor stages, resection of locally advanced colon cancer (LACC) is associated with poor survival. Few strategies are available to address this disparity, and data on the ability of neoadjuvant radiation therapy (NRT) to improve resectability and survival are lacking. We hypothesized that NRT would result in increased R0 resection rates, decreased multivisceral resection rates, and improved overall survival in patients with LACC.

Methods: The National Cancer Database (NCDB 2004-2014) was queried for adults with clinical evidence of LACC (defined as clinically evident T4 disease prior to surgery) who underwent curative resection. Patients with metastatic disease or in whom clinical staging data were unavailable were excluded. Patients were divided into two groups: those who underwent NRT and those who did not. Bivariate and multivariable analyses were used to examine the association between NRT and R0 resection rate, multivisceral resection, and overall survival.

Results: 15,207 patients with clinical T4 disease who underwent resection were identified over the study period, of whom 195 (1.3%) underwent NRT. The majority of patients in the NRT group received 4500 cGy of radiation in 25 fractions over five weeks (range: 3900-5040 cGy). The rate of NRT utilization did not change by year. Factors associated with the use of NRT included younger age, male gender, private insurance, lower Charlson comorbidity score, and treatment at an academic/research program. NRT was associated with superior R0 resection rates (NRT 87.2% vs. no NRT 79.8%; p=0.009) but not with lower multivisceral resection rates (NRT 45.6% vs. no NRT 21.5%; p<0.001). Five-year overall survival was increased in the NRT group (NRT 62.0% vs. no NRT 45.7%; p<0.001; see Figure). The benefit of NRT persisted in a Cox proportional hazards multivariable model containing a number of confounding variables including comorbidity, multivisceral resection, and postoperative chemotherapy (OR 1.30; 95% CI 1.01-1.69; p=0.04).

Conclusion: Younger age, male gender, private insurance, lower Charlson comorbidity score, and treatment at an academic/research program were associated with increased rates of NRT utilization. Although radiation is rarely used in locally advanced colon cancer, this NCDB analysis suggests that the use of neoadjuvant radiation for clinical T4 disease may be associated with superior R0 resection rates and improved overall survival.  NRT should be considered on a case-by-case basis in locally advanced colon cancer. Further research is necessary to identify patients that will benefit the most from this approach. 

 

70.07 The Effect of Trainee Involvement on Patient Outcomes in General Surgery Cases Over Time

T. Feeney2, J. Havens1  1Brigham And Women’s Hospital,Trauma,Boston, MA, USA 2Harvard School Of Public Health,Boston, MA, USA

Introduction: Resident duty hour reform was implemented in 2003 and further modified in 2011. The effect of these changes on patient outcomes remains unclear. We investigated the effect of resident involvement in surgical procedures on outcomes over time since these changes were implemented. We hypothesized that there has been no change in outcomes since implementation of the resident work hour restrictions.

Methods: We utilized the ACS-NSQIP database (2005-2012). General surgery procedures were identified by Current Procedural Terminology code and were restricted to patients aged ≥18 years. Using 2005/2006 as the reference period, logistic and linear regression analyses were performed.
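The reference-period modeling could look like the sketch below, using statsmodels formulas with 2005/2006 pooled as the baseline category; the file and variable names are illustrative rather than actual NSQIP field names.

```python
# Sketch only: year effects relative to a pooled 2005/06 reference period.
import pandas as pd
import statsmodels.formula.api as smf

cases = pd.read_csv("nsqip_gs_2005_2012.csv")  # hypothetical analytic file
cases["year_grp"] = cases["op_year"].replace({2005: "2005/06", 2006: "2005/06"}).astype(str)

# Logistic model: odds of major morbidity by year in attending-only cases
logit = smf.logit(
    "major_morbidity ~ C(year_grp, Treatment(reference='2005/06'))",
    data=cases[cases["trainee_present"] == 0],
).fit()

# Linear model: mean operative time by year in cases with a trainee
ols = smf.ols(
    "op_time_min ~ C(year_grp, Treatment(reference='2005/06'))",
    data=cases[cases["trainee_present"] == 1],
).fit()

print(logit.summary())
print(ols.summary())
```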

Results: There were 422,733 procedures analyzed. In the attending-only group, there was no difference in 2012 in the odds of major morbidity (odds ratio (OR) 0.67, 95% confidence interval (CI) 0.35-1.31; p=0.247), overall morbidity (OR 0.86, 0.63-1.18; p=0.354), or all-cause mortality (OR 0.40, 0.09-1.87; p=0.246). In cases that included a trainee, there was no change over the same time period in the odds of major morbidity (OR 1.42, 0.77-2.62; p=0.264), overall morbidity (OR 1.12, 0.81-1.53; p=0.495), or all-cause mortality (OR 2.20, 0.46-10.49; p=0.322). There was an increase in mean operative time in attending-only cases from 2005 to 2012 of 14.7 min (10.8-18.6; p<0.001), but a decrease of 7.48 min (3.9-11.0; p<0.001) in mean operative time in cases that included a trainee.

Conclusion: Between 2005 and 2012, there were no changes in the odds of overall complications, major complications, or all-cause mortality in surgeries involving attending surgeons only or those involving trainees. There was, however, a significant change in mean operative time in both groups: attending surgeons operating alone had an increase in operative time, while cases that included a trainee had a decrease. These data suggest that while operative time has changed, surgical outcomes for patients did not change between 2005 and 2012.