29.02 National Evaluation of High Ratio Massive Transfusion in the Trauma Quality Improvement Program

B. R. Stultz1, D. Milia1, T. Carver1, C. Dodgion1  1Medical College Of Wisconsin, Milwaukee, WI, USA

Introduction:  Forty percent of trauma-related in-hospital deaths involve massive hemorrhage. Recent studies have demonstrated that high ratio massive transfusions are associated with improved outcomes, but this has not been evaluated in a national cohort. The purpose of this study was to use the Trauma Quality Improvement Program (TQIP) database to determine the adoption and efficacy of high ratio transfusions.

Methods:  A retrospective analysis of adult massive transfusions from 2013-2015 within the TQIP database was performed. Massive transfusion was defined as ≥4 and ≥10 units of packed red blood cells (PRBC) at 4 and 24 hours, respectively. High ratio transfusion (HRT) was defined as plasma:platelets:PRBC ≥1:1:2. Multivariable logistic regression was used to evaluate morbidity and mortality of patients receiving HRT with ratios of ≥1:1:1 and ≥1:1:2. Rate of HRT adoption was evaluated by year.
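As a hypothetical sketch (not the authors' TQIP analysis code), the two unit-based definitions above reduce to simple checks; the function names and example unit counts below are invented for illustration:

```python
def is_massive_transfusion(prbc_4h, prbc_24h):
    """Massive transfusion per the abstract: >=4 units PRBC at 4 h
    or >=10 units PRBC at 24 h."""
    return prbc_4h >= 4 or prbc_24h >= 10

def is_high_ratio(plasma, platelets, prbc):
    """HRT: plasma:platelets:PRBC of at least 1:1:2, i.e. at least
    one unit of plasma and of platelets per two units of PRBC."""
    if prbc == 0:
        return False
    return plasma / prbc >= 0.5 and platelets / prbc >= 0.5

# Example: 6 units plasma, 6 units platelets, 10 units PRBC meets 1:1:2.
print(is_massive_transfusion(prbc_4h=2, prbc_24h=10))  # True
print(is_high_ratio(plasma=6, platelets=6, prbc=10))   # True
print(is_high_ratio(plasma=3, platelets=6, prbc=10))   # False
```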

Results: 20,009/689,072 (2.8%) patients underwent a massive transfusion at 318 level I & II trauma centers. The median age was 38, with a median injury severity score of 26. Seventy-six percent were male, and median LOS was 11 days. Thirty-three percent suffered penetrating injuries, 61% underwent operative intervention, and 44% underwent HRT. The overall mortality rate was 31%. A ratio of ≥1:1:1 decreased mortality (OR 0.88, p=0.012), with greater improvement if met at 4 hours (OR 0.61, p<0.01). A ratio of ≥1:1:2 decreased mortality at 24 hours (OR 0.71, p<0.0001) but not at 4 hours. Patients receiving HRT had an increased risk of complications (OR 1.5, p<0.0001). In 2015, patients were significantly more likely (OR 1.2, p≤0.0001) to receive HRT than in previous years.

Conclusion: Adoption of HRT is increasing significantly over time, with early ratios of 1:1:1 or better conferring the greatest mortality benefit. However, substantial room for improvement remains, since more than half of patients did not reach the high ratio transfusion threshold.

 

28.08 Bowel Preparation with Antibiotics Decreases Surgical-Site Infection for Both Left & Right Colectomy

A. J. Hjelmaas1, A. Kanters1, R. Anand1, J. Cedarbaum1, Y. Chen1, L. Ly1, N. Kamdar1, D. Campbell1, S. Hendren1, S. Regenbogen1  1University Of Michigan, Michigan Medicine, Ann Arbor, MI, USA

Introduction:
Despite recent studies demonstrating the effectiveness of mechanical bowel preparation with oral antibiotics for decreasing rates of surgical site infection (SSI) after colectomy, practice remains inconsistent, with particular controversy over the role of bowel preparation in right-sided resections. Bacterial concentration and stool solidity generally increase with progression through the colon, and there persists a belief that bowel preparation is needed only for left-sided resections. To understand whether the efficacy of bowel preparation is heterogeneous, we evaluated rates of SSI by the anatomy of resection and type of bowel preparation.

Methods:
We conducted a retrospective cohort study of patients who underwent elective colorectal resection with anastomosis and without stoma between 2012 and 2015, using prospectively collected data from the Michigan Surgical Quality Collaborative (MSQC), a statewide consortium encompassing 73 community, academic, and tertiary hospitals. MSQC nurse reviewers collect a variety of colectomy-specific processes of care, including the type of bowel preparation: mechanical preparation with antibiotics, mechanical preparation without antibiotics, or no bowel preparation. We categorized resections by type of anastomosis according to CPT code (ileocolic [IC], colo-colonic [CC], or colorectal [CR]), then compared the incidence of SSI between bowel preparation subtypes. We compared adjusted rates of SSI using logistic regression, including known patient-specific risk factors for SSI.

Results:
A total of 6192 patients were included in the study. 1134 underwent IC anastomosis, 3537 underwent CC anastomosis, and 1521 underwent CR anastomosis. Adjusted comparisons are shown in the Figure. For all cases, adjusted rates of SSI were 8.3% for no bowel preparation, 7.1% for mechanical preparation, and 4.6% for mechanical preparation with antibiotics (p<0.001). For right-sided colectomy, the adjusted rates of postoperative SSI were 11.1%, 5.4%, and 5.1% for no prep, mechanical prep, and mechanical prep with antibiotics, respectively (p=0.005).

Conclusion:
As in previous studies, we find overall rates of SSI are lowest when mechanical preparation is used in conjunction with oral antibiotics. Contrary to the assumption that bowel preparation is unnecessary for right colectomy, we found that bowel preparation led to significantly fewer SSIs even among resections with ileocolic anastomosis. This finding will reinvigorate efforts in our statewide collaborative to encourage bowel preparation with antibiotics for all colorectal resections. 
 

28.09 Patient-Reported Health Literacy Scores Associated With Readmissions Following Surgery

S. Baker1,2, L. Graham1,2, E. Dasinger1,2, T. Wahl1,2, J. Richman1,2, L. Copeland3, E. Burns4, J. Whittle4, M. Hawn5, M. Morris1,2  1University Of Alabama at Birmingham, Birmingham, AL, USA 2VA Birmingham Healthcare System, Birmingham, AL, USA 3VA Central Western Massachusetts Health Care System, Leeds, MA, USA 4Milwaukee VA Medical Center, Milwaukee, WI, USA 5VA Palo Alto Healthcare Systems, Palo Alto, CA, USA

Introduction: Hospital readmissions following surgery can be expensive and taxing on patients. Identifying modifiable factors that predict readmission would benefit both patients and healthcare systems. We hypothesized that patients with lower health literacy (HL) would be more likely to be readmitted to the hospital following surgery.

Methods: We enrolled 734 patients undergoing general, vascular, or thoracic surgery at 4 Veterans Affairs (VA) Medical Centers from August 2015 to June 2017. Patients were eligible if their post-operative hospital stay was more than 48 hours and they were discharged alive. Trained interviewers assessed patients’ overall health on the day of discharge using the Veterans Health Survey (VR12) Physical and Mental Component Scores (PCS; MCS). Health literacy was assessed by the 3-question Chew Health Literacy Questionnaire (HLQ), and the quality of the discharge transition by the Care Transition Measure (CTM-15). Patients were followed for 30 days post-discharge for readmission or emergency department (ED) use. A follow-up telephone interview at day 30 identified readmissions to non-VA hospitals. The HLQ score summed three 5-point items (range 0-12); scores of 0-3 indicated adequate health literacy, while scores of 4-12 indicated marginal or possibly inadequate health literacy. Bivariate and multivariable analyses examined associations between HL and each outcome, 30-day readmission or ED use. Logistic regression models adjusted for clinical and demographic covariates.
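The HLQ scoring rule above can be sketched as follows. This is an illustration of the stated cutoffs only; the assumption that each of the three items is coded 0-4 is ours, not stated in the abstract:

```python
def hlq_category(item_scores):
    """Sum three Chew HLQ items (assumed coding 0-4 each, total 0-12);
    totals of 0-3 indicate adequate health literacy, 4-12 indicate
    marginal or possibly inadequate health literacy."""
    if len(item_scores) != 3 or any(not 0 <= s <= 4 for s in item_scores):
        raise ValueError("expected three items scored 0-4")
    return "adequate" if sum(item_scores) <= 3 else "marginal/inadequate"

print(hlq_category([0, 1, 1]))  # adequate (total 2)
print(hlq_category([2, 2, 1]))  # marginal/inadequate (total 5)
```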

Results: At the time of discharge, 33% of patient responses were consistent with inadequate HL (HL-low, n=245). Patients with adequate HL (HL-high) had better overall physical and mental health than patients with HL-low (PCS 32.0 vs. 29.5, p=0.01; MCS 49.7 vs 45.7, p<0.01) and reported higher-quality discharge transitions (CTM-15 mean: 3.3 vs 3.2, p<0.01). The overall 30-day readmission rate was 16% (n=124); however, it was 14% for patients with HL-high compared to 21% for those with HL-low (p<0.01). After adjusting for overall health (VR12), patients with HL-low were 1.5 times more likely to be readmitted than those with HL-high (OR=1.5, 95% CI=1.0-2.2); patterns of ED use were similar (OR for HL-low=1.38; 95% CI=0.95-2.01). Among the HL items, patients who reported (1) always having difficulty understanding written information were 2.8 times more likely to be readmitted (95% CI=1.0-2.3), (2) not always being confident filling out medical forms were 1.6 times more likely to be readmitted (95% CI=1.1-2.4), and (3) ever requiring help to read hospital materials were 1.5 times more likely to be readmitted (95% CI=1.2-6.5).

Conclusion: Low health literacy is common among VA surgery patients and is an important contributor to readmission. Future work should focus on early identification of inadequate HL and the development of interventions to educate and empower this vulnerable population prior to discharge.

28.10 Parathyroidectomy is Underutilized in the Treatment of Primary Hyperparathyroidism in Veterans

E. A. Alore1, J. W. Suliburk1, D. J. Ramsey2, C. J. Balentine3, K. I. Makris1,4  1Baylor College Of Medicine, Michael E. DeBakey Department Of Surgery, Houston, TX, USA 2Michael E. DeBakey Veterans Affairs Medical Center, Health Services Research And Development Center Of Innovation, Center For Innovations In Quality, Effectiveness And Safety, Houston, TX, USA 3University of Alabama at Birmingham, Department Of Surgery, Birmingham, AL, USA 4Michael E. DeBakey Veterans Affairs Medical Center, Operative Care Line, Division of General Surgery, Houston, TX, USA

Introduction:
Untreated hyperparathyroidism significantly impairs quality of life and incurs substantial costs to both patients and health care systems. Parathyroidectomy is the only cure for primary hyperparathyroidism (pHPT), yet single-institution and regional studies suggest it is underutilized. The purpose of our study was to assess the utilization of parathyroidectomy for pHPT within a national population sample. We hypothesized that parathyroidectomy is underutilized in the treatment of pHPT.

Methods:
We performed a retrospective search of all patients within the national VA corporate data warehouse from 2000 to 2010. Adults with pHPT were identified using a validated algorithm requiring all of the following criteria: elevated serum parathyroid hormone (PTH) level (>88 pg/mL), elevated serum calcium level (>10.5 mg/dL), and serum creatinine <2.5 mg/dL. Patients with secondary or tertiary hyperparathyroidism were excluded based on serum creatinine ≥2.5 mg/dL, history of dialysis, or prior renal transplantation. Rates of parathyroidectomy were calculated among patients with pHPT. A reverse stepwise logistic regression, using p>0.2 as the criterion for removal from the model, was used to identify predictors of parathyroidectomy.
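A minimal sketch of the screening criteria above. This is illustrative only; the study applied a validated algorithm to the VA corporate data warehouse, and the function below is an invented simplification of the stated thresholds:

```python
def meets_phpt_criteria(pth_pg_ml, calcium_mg_dl, creatinine_mg_dl,
                        on_dialysis=False, renal_transplant=False):
    """Screening rule per the abstract: PTH > 88 pg/mL, calcium > 10.5
    mg/dL, creatinine < 2.5 mg/dL, and no dialysis or renal transplant
    history (markers of secondary/tertiary hyperparathyroidism)."""
    if on_dialysis or renal_transplant:
        return False
    return (pth_pg_ml > 88
            and calcium_mg_dl > 10.5
            and creatinine_mg_dl < 2.5)

print(meets_phpt_criteria(pth_pg_ml=120, calcium_mg_dl=11.2,
                          creatinine_mg_dl=1.1))  # True
```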

Results:
Of 383,701 patients with hypercalcemia, 80,250 (20.9%) were tested for PTH. A total of 21,465 patients met diagnostic criteria for pHPT across the VA system during the study period. An average of 1,951 patients (0.03%) per year were diagnosed with pHPT out of an average of 6,997,378 patients treated at the VA per year. Of all patients with pHPT, only 1,679 (7.8%) underwent parathyroidectomy. In a subgroup analysis of the 1,501 patients with pHPT presenting with serum Ca >11.5 mg/dL (an established indication for parathyroidectomy), only 301 (16.7%) underwent parathyroidectomy. On reverse stepwise logistic regression, significant predictors of parathyroidectomy included a documented ICD-9 diagnosis of hyperparathyroidism in the medical record, higher serum calcium level, history of kidney stones, osteoporosis, younger age, and normal eGFR (Table 1).

Conclusion:
Despite being the only definitive treatment for pHPT, parathyroidectomy is extraordinarily underutilized nationally within the VA, even when a clear indication for operation exists. Further studies are needed to identify the underlying reasons for this underutilization and to guide corrective interventions.

28.07 Less Telemetry, Better Outcomes: University Network Implementation Of SafetyNet Monitoring System

A. Cipriano1, C. Roscher2, A. Carmona2, J. Rowbotham3, S. P. Stawicki1  1St. Luke’s University Health Network, Department Of Surgery, Bethlehem, PA, USA 2St. Luke’s University Health Network, Department Of Anesthesiology, Bethlehem, PA, USA 3St. Luke’s University Health Network, Quality Resource Department, Bethlehem, PA, USA

Introduction: Effective detection of clinical patient deterioration (CPD) on medical-surgical units (MSU) continues to pose a significant challenge. Inefficient utilization of hospital resources negatively affects institutional quality, safety, and finances. The goal of this study is to evaluate a pilot implementation of pulse oximetry-based SafetyNet monitoring system (SNMS) as a method of resource-efficient CPD detection on MSU at a tertiary referral center. We hypothesized that the deployment of SNMS will be associated with improved detection of CPD and fewer intensive care unit (ICU) transfers.

Methods: This is a post-hoc, IRB-exempt analysis of a quality improvement initiative designed to increase CPD detection and prevent unplanned ICU transfers on our orthopedic MSU through the use of SNMS (Masimo, Irvine, CA). Concurrently, we sought to reduce telemetry overutilization. The primary outcome was the ICU transfer rate, with telemetry utilization and non-clinical alarm burden (NCAB) as secondary outcomes. We compared study outcomes on two adjacent MSUs (P8 and P9). The P8 unit served as a control during both the pre- and post-SNMS implementation periods (i.e., the SNMS was deployed only on P9). Quality reporting methods, Fisher’s exact test, and the Mann-Whitney U-test were used to compare pre-/post-SNMS periods, with significance set at α=0.05.

Results: Study duration was 30 months (Jan-Dec 2015 "pre-implementation" and Jan 2016-Jun 2017 "post-implementation"). We examined 21,189 patient-days on the P9 MSU (11,702 and 9,487 pre-/post-intervention, respectively) and 23,388 patient-days on the P8 MSU (13,616 and 9,772 pre-/post-intervention). Median case-mix index (CMI) was higher for P9 than P8 throughout the study (2.08 [IQR 1.98-2.17] vs 1.67 [IQR 1.64-1.76], respectively). SNMS implementation was associated with a significant reduction of ICU transfers from P9: median ICU transfers per 1,000 pt-days declined from 11.7 pre-SNMS to 8.8 post-SNMS (Fig 1, p<0.03). Median telemetry utilization per 1,000 pt-days declined from 20.8 pre-SNMS to 16.5 post-SNMS (p<0.01). Targeted staff training and "sensor off" delay implementation resulted in a significant reduction in NCAB, from 72.3 to 36.5 pages/device (p<0.01).

Conclusion: Implementation of SNMS was associated with a 25% reduction in ICU transfers per 1,000 pt-days on our P9 MSU. At the same time, median telemetry utilization on the P9 MSU was reduced by 21%. In addition, we reduced the number of non-clinical nursing notifications by 50% through the combination of staff education and "sensor off" delay implementation. Given its success, this pilot program is undergoing active expansion.

28.06 Impact of a Continuous Local Anesthetic Pain Ball on Post-operative Pain in Kidney Transplant Recipients

E. M. Betka1, J. Ortiz2, M. Rees2, S. Spetz1, P. Samenuk1, L. Eitniear1  1University Of Toledo Medical Center, Pharmacy, Toledo, OH, USA 2University Of Toledo Medical Center, Transplant Surgery, Toledo, OH, USA

Introduction:  The management of post-operative pain is critical for improving patient satisfaction, speeding recovery, and reducing hospital length of stay. In 2016, the American Pain Society (APS) and the American Society of Anesthesiologists (ASA) jointly published the first evidence-based guidelines for the management of post-operative pain. The APS/ASA strongly recommend a multimodal approach to post-operative pain management, a recommendation supported by high-quality evidence. This approach has proven efficacious for procedures involving the abdominal wall; however, the data remain conflicting for kidney transplant recipients.

Methods:  This retrospective cohort study was approved by the Institutional Review Board at The University of Toledo Medical Center (UTMC). Patients 18 years and older admitted to UTMC from July 1, 2006 through July 30, 2016 who underwent kidney transplantation were included. Patients received one of the following post-operative pain management regimens: the standard of care (SOC), consisting of intravenous (IV) and/or oral (PO) opioids, with or without the addition of a local anesthetic pain ball (LAPB). The primary outcome was cumulative opioid requirement in IV morphine equivalents at 48 hours following transplantation. Secondary outcomes included post-operative pain scores at 24 and 48 hours following transplantation and hospital length of stay.

Results: Information on baseline characteristics and study endpoints was collected for 102 patients. Propensity scores were used to match patients on the following confounders: age, sex, race, body mass index, baseline opiate use, previous abdominal procedure, repeat transplantation, type of transplant, and intra-operative morphine requirements. After matching, 38 subjects remained in each group. The median (IQR) IV morphine equivalent dose at 48 hours was 19.85 mg (14.85, 42.18) in the SOC group and 17.65 mg (4.95, 30.56) in the LAPB group (p=0.120). There was no significant difference in median pain scores at 24 hours (p=0.059) or 48 hours (p=0.139) between those receiving the LAPB and those receiving the SOC. There was also no significant effect on hospital length of stay for the LAPB versus the SOC (3 days versus 4 days, respectively; p=0.449).

Conclusion: This study demonstrates that the addition of a LAPB to IV/PO opioids did not reduce post-operative opioid requirements in kidney transplant recipients. The literature regarding the use of LAPB for abdominal procedures outside of kidney transplantation remains positive; however, the literature supporting its use in kidney transplant recipients remains controversial. The data obtained from this study support the need for further randomized controlled trials that tightly control for confounders likely to affect post-operative opioid utilization.

 

28.04 Improvements in Surgical Mortality: The Roles of Complications and Failure to Rescue

B. T. Fry1,2, J. R. Thumma2, J. B. Dimick2,3  3University Of Michigan, Department Of Surgery, Ann Arbor, MI, USA 1University Of Michigan, Medical School, Ann Arbor, MI, USA 2University Of Michigan, Center For Healthcare Outcomes And Policy, Ann Arbor, MI, USA

Introduction: Surgical mortality has declined considerably over the last decade. While most hospitals have reduced mortality to some degree, much can be learned from how hospitals with the largest reductions achieved their improvement. Specifically, the roles of reducing complications and improving rescue from complications once they occur (known as failure to rescue or FTR) remain unclear. This study sought to understand which of these factors plays a larger role in reducing surgical mortality.

Methods: Using Medicare Provider Analysis and Review files, we performed a retrospective, longitudinal cohort study of patients who underwent abdominal aortic aneurysm (AAA) repair, pulmonary resection, colectomy, and pancreatectomy. We then calculated hospital-level risk- and reliability-adjusted rates of 30-day mortality, serious complications, and FTR for these patients in two time periods: 2005-2006 and 2013-2014 (n=699,771 patients). Serious complications were defined as the presence of one or more of eight complications plus a procedure-specific length of stay greater than the 75th percentile. FTR was defined as death in a patient with at least one serious complication. Hospitals were stratified into quintiles by change in mortality over time, with average rates of 30-day mortality, serious complications, and FTR reported for each quintile. Variance partitioning was used to determine the relative contributions of changes in complication and FTR rates to the observed changes in hospital-level surgical mortality between time periods.
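The FTR definition above lends itself to a direct calculation. The sketch below uses a tiny invented cohort, not the Medicare data; it only illustrates that FTR is conditioned on having a serious complication:

```python
# Hypothetical five-patient cohort with (serious complication, died) flags.
patients = [
    {"serious_complication": True,  "died": True},
    {"serious_complication": True,  "died": False},
    {"serious_complication": True,  "died": False},
    {"serious_complication": False, "died": False},
    {"serious_complication": False, "died": False},
]

# FTR rate: deaths among patients with at least one serious complication,
# divided by the number of patients with a serious complication.
with_complication = [p for p in patients if p["serious_complication"]]
ftr_rate = sum(p["died"] for p in with_complication) / len(with_complication)
complication_rate = len(with_complication) / len(patients)

print(f"complication rate: {complication_rate:.0%}")  # 60%
print(f"FTR rate: {ftr_rate:.1%}")                    # 33.3%
```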

Results: After stratifying hospitals by change in mortality between the two periods, the top 20% of hospitals decreased mortality by 3.4 percentage points (8.9% to 5.5%, p<0.001), complications by 1.8 points (15.2% to 13.4%, p<0.001), and FTR by 7.4 points (25.8% to 18.4%, p<0.001). In contrast, the bottom 20% of hospitals actually increased mortality by 1.1 points (6.9% to 8.0%, p<0.001), complications by 0.9 points (14.6% to 15.5%, p<0.001), and FTR by 0.6 points (22.1% to 22.7%, p<0.001). When examining the factors most associated with reductions in mortality, we found that decreased FTR explained 69% of the improvement in hospitals’ mortality rates over time, whereas decreased complication rates accounted for only 6% of this improvement.

Conclusion: Hospitals with the largest reductions in surgical mortality achieved these improvements largely through reducing FTR rates and not by reducing serious complication rates. This suggests that hospitals aiming to reduce surgical mortality should engage in efforts focused on improving rescue from serious complications.  

28.05 Inconsistent Benchmarking by Mortality vs Readmission: Implications for Medicare Payment Metrics

C. K. Zogg1,2,3, Z. G. Hashmi3, J. R. Thumma2, A. M. Ryan2, J. B. Dimick2  1Yale University School Of Medicine, New Haven, CT, USA 2University Of Michigan, Center For Healthcare Outcomes And Policy, Ann Arbor, MI, USA 3Brigham And Women’s Hospital, Center For Surgery And Public Health, Boston, MA, USA

Introduction: Since passage of the 2010 Affordable Care Act (ACA), the Centers for Medicare & Medicaid Services have begun to tie surgical reimbursement to hospital performance on 30-day mortality and readmission rates. Under this system, there remain concerns that some high-performing hospitals with lower 30-day mortality may incur higher readmission rates simply by saving lives, creating the potential for reimbursement strategies to unfairly penalize hospitals that provide superior care. The objective of this study was to determine whether benchmarking results are similar when hospitals are profiled on 30-day mortality versus readmission rates.

Methods:  Older adults (≥65y) presenting for 3 common operations (elective colectomy, CABG, AAA repair) were identified using 2013-2014 100% Medicare fee-for-service claims. Each hospital was benchmarked on each outcome using risk-adjusted observed-to-expected (O/E; the current Medicare standard) and shrinkage-adjusted (SA) rates (multilevel modeling that accounts for variability due to hospitals with small sample size). These estimates were then used to generate hospital performance profiles, which were compared using: 1) linear regression with weighted correlation coefficients, 2) concordance among high/average/low performers, with thresholds set at ±1 SD above/below the mean, and 3) magnitude of difference in quintile rank.
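The O/E benchmarking with ±1 SD performance thresholds can be illustrated as follows. The hospital counts are hypothetical; the actual analysis derived expected deaths from patient-level risk-adjustment models:

```python
import statistics

# Hypothetical (observed deaths, expected deaths) per hospital.
hospitals = {
    "A": (12, 10.0),
    "B": (8, 10.0),
    "C": (10, 10.0),
    "D": (15, 10.0),
    "E": (5, 10.0),
}

# Observed-to-expected mortality ratio for each hospital.
oe = {h: obs / exp for h, (obs, exp) in hospitals.items()}
mean, sd = statistics.mean(oe.values()), statistics.stdev(oe.values())

def performance(ratio):
    """Flag hospitals beyond +/-1 SD of the mean O/E ratio."""
    if ratio < mean - sd:
        return "high performer"  # fewer deaths than expected
    if ratio > mean + sd:
        return "low performer"
    return "average"

labels = {h: performance(r) for h, r in oe.items()}
print(labels)  # E is a high performer, D a low performer
```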

Results: Little to no correlation was found between mortality and readmission (Figure); for colectomy, r=0.110; κ=0.002, p=0.111. Only 26.4% (707/2673) of hospitals performing colectomies had identical rankings on both metrics (CABG 24.8%, AAA 26.2%). Four percent had completely discordant ranks (CABG 12.9%, AAA 12.5%), an inverse association that became significant (r=-0.241) and markedly more pronounced (25.0%) among high-risk patients with LOS ≥30d. SA demonstrated similar results. Discrepancies between mortality and readmission ranks were most pronounced among large hospitals (4-quintile difference vs no difference, ≥400 beds: 21.5 vs 17.9%, p=0.014), those with more surgical admissions (highest quartile: 32.3 vs 29.3%, p<0.001), and those lacking certification from organizations such as the Joint Commission and Council of Teaching Hospitals but with a larger resident role, more complex case mix, and fewer RNs/bed (p≤0.013 for each).

Conclusion: Mortality and readmission benchmarking do not identify high-quality hospitals in the same way, creating a dichotomy between the standards used to determine Medicare reimbursement. Benchmarking that reflects multiple aspects of quality is needed to avoid inconsistently penalizing large, outlying, teaching hospitals that provide high-quality care as measured by mortality.

28.01 The Impact of ERAS protocol on Urinary Tract Infections after Free Flap Breast Reconstruction

B. Sharif-Askary1, R. Zhao1, S. Hollenbeck1  1Duke University Medical Center, Division Of Plastic And Reconstructive Surgery, Durham, NC, USA

Introduction:  Hospitals are evaluated for quality based on a number of metrics, including the occurrence of complications. Recently, our hospital instituted an Enhanced Recovery After Surgery (ERAS) protocol for patients undergoing free flap breast reconstruction. Urinary tract infections (UTIs) are among the most common healthcare-associated infections, with the majority occurring after prolonged urinary catheterization. The ERAS protocol calls for early removal of urinary catheters. In this study, we compared the rate of UTI in patients who underwent traditional recovery after surgery (pre-ERAS) to those enrolled in the ERAS protocol. We hypothesized that early catheter removal would decrease the rate of UTI in patients undergoing breast reconstruction with free flaps.

Methods:  We retrospectively reviewed the charts of 238 patients who underwent free flap breast reconstruction. We initiated the ERAS protocol in May of 2015. This study includes patients seen between March 2012 and June 2017 to capture both pre- and post-ERAS cohorts. UTI was defined using the American College of Surgeons NSQIP definition. Statistical analyses were conducted using SPSS software (Version 24.0, IBM Corp). We compared the incidence of UTI before and after ERAS initiation using a logistic regression while controlling for age, BMI, rate of diabetes and length of surgery.

Results: There were 160 patients evaluated prior to ERAS implementation and 78 patients in the post-ERAS group. The overall incidence of UTI for all patients who underwent free flap reconstruction was 4.6%. Comparing the pre-ERAS and post-ERAS groups, there were no significant differences in mean age, BMI, or length of surgery; however, the rate of diabetes was higher in the pre-ERAS group than in the post-ERAS group (11% vs. 4%, p=0.04, t-test). Post-ERAS patients had a significantly higher rate of UTI than pre-ERAS patients when controlling for age, BMI, diabetes, and length of surgery (10.3% vs. 1.9%, p=0.008, OR=6.72). Of post-ERAS patients found to have a post-operative UTI, 25% had bacteria on pre-operative urinalysis.

Conclusion: Contrary to our hypothesis, we found that the rate of UTI was significantly higher in post-ERAS patients. Further analysis is needed to determine the cause of this finding, which may include the need for re-catheterization after early catheter removal. Based on these findings, we suggest individualized decision-making within the ERAS protocol regarding the timing of urinary catheter removal.

 

28.02 A Comparison of Operative Approaches in Diverticulitis Requiring Urgent Intervention

C. E. Cauley1, Z. Fong1,2, D. Chang2, H. Kunitake1, R. Ricciardi1, L. Bordeianou1  1Massachusetts General Hospital, Department Of Surgery, Boston, MA, USA 2Massachusetts General Hospital, Codman Center For Clinical Effectiveness In Surgery, Boston, MA, USA

Introduction: Guidelines from the American Society of Colon and Rectal Surgeons support the use of sigmoid resection and primary anastomosis with proximal diversion as a safe option for hemodynamically stable patients with perforated diverticulitis (including feculent peritonitis) at the surgeon’s discretion.  However, there are concerns regarding the broad implementation of these guidelines across hospitals with varying expertise. This study evaluates and compares the outcomes of primary anastomosis with proximal diversion versus end colostomy and Hartmann’s pouch for patients with perforated diverticulitis at National Surgical Quality Improvement Program facilities.

Methods:   We abstracted data from the National Surgical Quality Improvement Program participant user file. Patients who underwent emergent colectomy for perforated diverticulitis between 1/1/2005 and 12/31/2015 were identified. To confirm purulent or feculent diverticulitis, we excluded patients with a wound classification of 1 or 2. Outcomes of patients who underwent primary anastomosis with proximal diversion were compared to those treated with end colostomy, before and after propensity score matching (caliper width 0.03). Factors associated with mortality, reoperation, and infection were also determined using logistic regression modeling.
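Propensity score matching with a caliper, as described above, can be sketched as a greedy 1:1 nearest-neighbor match. The scores below are invented; a real analysis would first estimate each patient's propensity by regressing treatment assignment on the confounders:

```python
CALIPER = 0.03  # maximum allowed propensity-score distance, per the abstract

def caliper_match(treated, controls, caliper=CALIPER):
    """Greedy 1:1 nearest-neighbor matching within a caliper.
    treated/controls: dicts of id -> propensity score.
    Returns a list of (treated_id, control_id) pairs."""
    pairs, available = [], dict(controls)
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        # Closest remaining control by propensity score.
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]  # each control is used at most once
    return pairs

treated = {"t1": 0.41, "t2": 0.55, "t3": 0.90}
controls = {"c1": 0.40, "c2": 0.56, "c3": 0.70}
pairs = caliper_match(treated, controls)
print(pairs)  # t3 (score 0.90) finds no control within the caliper
```

Unmatched patients (like t3 here) are dropped from the matched cohort, which is how matching can shrink 102 patients to 38 per group as reported in abstract 28.06.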

Results: 5,254 patients requiring emergent colectomy for perforated diverticulitis were identified: 4,261 (81.1%) with end colostomy and 993 (18.9%) with primary anastomosis and proximal diversion. The rate of primary anastomosis with proximal diversion rose from 10.3% in 2005 to 19.2% in 2015. Median hospital stay was the same (10 [7-15] days for primary anastomosis with diversion vs. 10 [7-14] days for end colostomy, p=0.8). The reoperation rate was statistically similar (8.7% for end colostomy vs. 7.1% for primary anastomosis with diversion, p=0.1), as was the mortality rate (8.1% for end colostomy vs. 6.5% for primary anastomosis with diversion, p=0.08). After propensity score matching, surgical outcomes remained similar, with equivalent mortality (8.2% vs 7.0%). Multivariable logistic regression analysis revealed that operation type (primary anastomosis or Hartmann resection) was not associated with reoperation, postoperative infection, or mortality (Figure).

Conclusions: Our data demonstrate no difference in surgical outcomes for perforated diverticulitis patients treated with primary anastomosis and proximal diversion as compared to traditional Hartmann resection. These findings indicate that guidelines for perforated diverticulitis in hemodynamically stable patients may be safely and broadly applied to institutions participating in the National Surgical Quality Improvement Program.

28.03 Mapping Trauma Outcomes: The Road to Zero Preventable Trauma Deaths

Z. G. Hashmi1,2, M. P. Jarman1, T. Uribe-Leitz1, J. W. Scott1, N. R. Udyavar1, J. Havens1, A. Salim1, A. H. Haider1  1Brigham And Women’s Hospital, Boston, MA, USA 2Sinai Hospital Of Baltimore, Department Of Surgery, Baltimore, MD, USA

Introduction:  The recent National Academies of Sciences, Engineering and Medicine (NASEM) report states that 20,000-30,000 trauma deaths could be prevented each year if all patients were to receive the highest quality of trauma care. While this burden has been quantified, the nationwide geographic distribution of these preventable trauma deaths remains unknown. Knowing where these deaths occur in each state is important to appropriately allocate resources for the optimal care of the injured. The objective of this study is to identify the geographic distribution of preventable trauma deaths for the state of Florida.

Methods:  Adult trauma patients (age ≥16) with blunt/penetrating injury in the Healthcare Cost and Utilization Project (HCUP) Florida State Inpatient Database (SID), 2010-2014, were included. Hospitals were linked to the United States Office of Management and Budget-defined Core Based Statistical Areas (CBSAs) using the American Hospital Association supplemental file. CBSAs are distinct geographic units with a high level of socioeconomic integration, allowing appropriate population-level comparisons. Preventable deaths were defined as lives that could have been saved if patients had been treated at the best-performing CBSA quintile for in-hospital mortality. We performed hierarchical logistic regression using an empirical Bayes approach to generate reliability-adjusted in-hospital mortality rates for each CBSA. These rates were then used to benchmark each CBSA into a performance quintile. Next, generalized linear modeling was used to calculate the relative risk (RR) of mortality at each quintile, relative to the best-performing CBSA quintile. This RR was then used to calculate the number of preventable deaths at each CBSA quintile compared to the best performers.
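Two quantitative steps above, reliability adjustment and the preventable-death calculation, can be sketched as follows. All inputs and the shrinkage constant k are invented for illustration; the study estimated shrinkage from a hierarchical model rather than a fixed k:

```python
def reliability_adjusted_rate(observed_rate, overall_rate, n, k=50):
    """Empirical-Bayes-style shrinkage: a unit's observed rate is pulled
    toward the overall mean in proportion to its volume-dependent
    reliability weight w = n / (n + k), where k is an assumed
    signal-to-noise constant."""
    w = n / (n + k)
    return w * observed_rate + (1 - w) * overall_rate

def preventable_deaths(deaths, relative_risk):
    """Deaths in excess of what the best-performing quintile's mortality
    implies: deaths * (1 - 1/RR), for RR > 1 versus the best quintile."""
    return 0.0 if relative_risk <= 1 else deaths * (1 - 1 / relative_risk)

# A 100-patient unit with 8% observed mortality vs. a 5% overall rate
# is shrunk toward the mean:
print(round(reliability_adjusted_rate(0.08, 0.05, n=100), 4))  # 0.07
# 200 deaths at a quintile with RR 1.25 vs. the best quintile:
print(round(preventable_deaths(200, 1.25), 1))  # 40.0
```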

Results: A total of 405,126 patients representing 33 CBSAs were included. Overall, 15.8% (1319/8344) of all trauma deaths were deemed preventable. Most of these deaths [78.6% (1037/1319)] occurred at the two worst-performing CBSA-quintiles. Figure 1 demonstrates that while most of the burden appeared to be concentrated in south/central Florida, isolated areas outside of this cluster were also identified. Separate benchmarking for older trauma patients (≥65 years) demonstrated a higher proportion of preventable deaths (27.6%) versus younger patients (8.3%). 

Conclusion: Preventable trauma deaths have a heterogeneous geographic distribution and disproportionately affect certain patient populations. This study shows the feasibility of mapping these deaths using state-specific data. Similar nationwide mapping can offer a unique insight for regional prioritization of quality improvement and resource allocation to achieve the NASEM goal of “Zero Preventable Deaths After Injury.”

27.09 Defining Surgeon Volume Threshold for Improved Outcomes From Minimally Invasive Colectomy

M. A. Adam1, D. Becerra1, M. C. Turner1, C. R. Mantyh1, J. Migaly1  1Duke University Medical Center,Department Of Surgery,Durham, NC, USA

Introduction: The association between surgeon volume and improved outcomes for minimally invasive colectomy (MIC) has been established. However, a definition of a high-volume MIC surgeon remains unclear. We aimed to determine the number of MIC per surgeon per year that is associated with the lowest risk of postoperative complications. 

Methods: Adult patients undergoing MIC were identified from the HCUP-National Inpatient Sample (2008-2009). Multivariable logistic regression with restricted cubic splines was used to examine the association between the number of annual MIC/surgeon and risk of complications.
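Restricted cubic splines let the volume-outcome curve bend at interior knots while forcing it to be linear in the tails, which is what makes them suitable for finding a volume threshold. A minimal sketch of the standard (Harrell-style) spline basis; the knot locations here are hypothetical, and the authors' actual model specification is not given beyond the abstract:

```python
def pos3(u):
    """Truncated cubic term (u)_+^3."""
    return max(u, 0.0) ** 3

def rcs_terms(x, knots):
    """Nonlinear basis terms of a restricted cubic spline (Harrell
    parameterization). With k knots this returns k-2 terms; the model
    also includes x itself. The construction forces the fitted curve
    to be linear beyond the outermost knots."""
    t, tk, tk1 = knots, knots[-1], knots[-2]
    return [
        pos3(x - t[j])
        - pos3(x - tk1) * (tk - t[j]) / (tk - tk1)
        + pos3(x - tk) * (tk1 - t[j]) / (tk - tk1)
        for j in range(len(knots) - 2)
    ]

# Hypothetical knots for annual surgeon volume (e.g., 10th/50th/90th pctile)
knots = [3.0, 10.0, 35.0]
basis_at_20 = [20.0] + rcs_terms(20.0, knots)  # covariates entering the model
```

These basis columns would then be fed to a logistic regression alongside the case-mix adjusters, and the fitted curve inspected for the volume beyond which complication risk stops falling.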

Results: 6554 patients were identified; 51% had a diagnosis of colon cancer. Overall, 20% experienced a postoperative complication and 0.5% died in hospital. Median surgeon volume was 10 cases/year. After adjustment for case and procedure mix, the likelihood of experiencing a complication decreased with increasing surgeon volume up to 20 MIC cases/year (p<0.01) (Figure). The vast majority of patients (70%) underwent surgery by low-volume surgeons (<20 cases/year). Patients treated by low-volume surgeons were more likely to experience conversion to open colectomy (0.8% vs. 0.3%), postoperative complications (21% vs. 17%), prolonged hospital length of stay (6 vs. 5 days), and higher inflation-adjusted hospital costs ($12,669 vs. $11,752) (all p<0.01). 

Conclusion:  This study identifies a surgeon volume threshold (>20 cases/year) that is associated with improved patient outcomes from minimally invasive colectomy. Identifying a threshold number of cases defining a high-volume MIC surgeon is important, as it has implications for quality improvement, criteria for referral and reimbursement, and surgical education.

 

27.10 Postoperative Morbidity Independently Predicts Cancer-Related Survival in Peritoneal Metastases

H. A. Choudry1, Y. Shuai2, J. F. Pingpank1, M. P. Holtzman1, S. S. Ahrendt1, H. L. Jones1, L. Ramalingam1, A. H. Zureikat1, H. J. Zeh1, D. L. Bartlett1  1University Of Pittsburgh Medical Center,Surgical Oncology,Pittsburgh, PA, USA 2University Of Pittsburgh Cancer Institute,Biostatistics Facility,Pittsburgh, PA, USA

Introduction: Postoperative morbidity may negatively impact cancer-related outcomes by inducing a pro-tumorigenic environment and preventing the timely initiation of postoperative systemic therapy. We hypothesized that postoperative morbidity would predict cancer-related survival, independent of tumor histology, grade, extent of disease, and other comorbidities. 

Methods: We addressed our hypothesis using a prospective database of 1296 patients with peritoneal metastases undergoing complex surgical resections associated with high rates of postoperative morbidity and long-term cancer-related mortality. We graded all postoperative morbidity using the Clavien-Dindo grading system. The Kaplan-Meier method was used to estimate survival. Multivariate analyses identified factors associated with survival and with postoperative morbidity.

Results: Cytoreductive surgery and hyperthermic intraperitoneal chemoperfusion were performed for peritoneal metastases from cancers of the appendix (50%), colorectum (30%), ovary (8%) and mesothelioma (12%). Tumor burden assessed by median peritoneal carcinomatosis index (PCI) was 16, and optimal cytoreduction (residual tumor < 2.5mm) was achieved in 93% of patients. Major postoperative morbidity (Clavien-Dindo grades 3-5) occurred in 24% of patients and long-term cancer-related mortality was 53%, after a median follow-up of 55 months. Median progression-free survival and overall survival calculated from surgery were 15 and 39 months, respectively. In a multivariate Cox proportional hazards model, major postoperative morbidity (Clavien-Dindo grades 3/4) was an independent negative predictor of survival (HR 1.4), along with non-appendiceal primary histology, higher tumor grade, higher PCI, incomplete cytoreduction, higher age-adjusted Charlson comorbidity index, and recurrent symptomatic disease at presentation. Patients with grade 3/4 postoperative morbidity were 1.6/2.5 times more likely to die of their cancer than those with no postoperative complications. Using a multivariate logistic regression model, independent predictors of major postoperative morbidity included higher preoperative ASA (American Society of Anesthesiologists) physical status classification, longer operative time, higher PCI, and non-appendiceal primary histology. 

Conclusion: In our experience, postoperative morbidity independently predicted cancer-related survival, regardless of comorbidities, tumor type, extent, grade, and completeness of surgery. Future work will focus on the mechanisms underlying this phenomenon. Moreover, the extent of surgical resection required to clear the disease played a dominant role in predicting the occurrence of postoperative morbidity. Ongoing studies will address optimization of selection criteria and perioperative management strategies that may reduce postoperative morbidity in these patients, who frequently require lengthy procedures and multi-visceral resections. 

 

27.08 Predictive Value of Leukocyte and Platelet-derived Ratios in Locally Advanced Rectal Adenocarcinoma

W. H. Ward1, A. C. Esposito2, N. Goel1, K. J. Ruth3, E. R. Sigurdson1, J. E. Meyer5, C. S. Denlinger4, J. M. Farma1  1Fox Chase Cancer Center,Department Of Surgical Oncology,Philadelphia, PA, USA 2Temple University,Lewis Katz School Of Medicine,Philadelphia, PA, USA 3Fox Chase Cancer Center,Biostatistics And Bioinformatics Facility,Philadelphia, PA, USA 4Fox Chase Cancer Center,Department Of Hematology/Oncology,Philadelphia, PA, USA 5Fox Chase Cancer Center,Department Of Radiation Oncology,Philadelphia, PA, USA

Introduction:
Although advances in the multidisciplinary treatment of locally advanced rectal cancer have improved survival, there is variability in response to therapy. In addition to tumor biology, host factors and immunologic capacity may play a role. Given the morbidity of therapy and risk of recurrence, there is much interest in preoperative identification of predictive biomarkers.  In colorectal cancer, recent data suggest the utility of the lymphocyte-to-monocyte ratio (LMR), neutrophil-to-lymphocyte ratio (NLR), and platelet-to-lymphocyte ratio (PLR) in predicting survival. The aim of our study was to examine these factors in our rectal cancer patients and determine whether any association exists between these ratios and overall survival.  

Methods:
Using data from a prospectively maintained database at our tertiary referral center, a query was completed for all patients with clinical stage II to III rectal adenocarcinoma who underwent comprehensive treatment from 2002-2016.  We included patients with full demographic, staging, treatment, and survival data who had a complete blood count collected prior to neoadjuvant chemoradiation (pre-CRT) and again prior to surgery (post-CRT).  LMR, NLR, and PLR were calculated for the pre-CRT and post-CRT time points.  Potential cutpoints associated with overall survival (OS) differences were determined using the maxstat R package, which identifies optimal cutpoints while controlling for repeat testing (p<0.05). Survival curves based on these cutpoints were compared using log-rank tests and were adjusted for age and stage using Cox regression.
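The cutpoint search can be thought of as a scan over candidate thresholds, keeping the threshold that maximizes the two-sample log-rank statistic. This toy sketch mirrors what maxstat does conceptually; the R package additionally adjusts the p-value for the multiplicity of candidate cutpoints, which is omitted here, and the quantile restriction bounds are assumptions:

```python
def logrank_chi2(times_a, events_a, times_b, events_b):
    """Two-sample log-rank chi-square statistic (event indicator 1=death)."""
    rows = ([(t, e, 0) for t, e in zip(times_a, events_a)] +
            [(t, e, 1) for t, e in zip(times_b, events_b)])
    obs = exp = var = 0.0
    for t in sorted({t for t, e, _ in rows if e}):        # distinct event times
        n_a = sum(1 for tt, _, g in rows if tt >= t and g == 0)  # at risk, A
        n_b = sum(1 for tt, _, g in rows if tt >= t and g == 1)  # at risk, B
        d_a = sum(1 for tt, e, g in rows if tt == t and e and g == 0)
        d = sum(1 for tt, e, _ in rows if tt == t and e)
        n = n_a + n_b
        obs += d_a
        exp += d * n_a / n
        if n > 1:
            var += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    return (obs - exp) ** 2 / var if var > 0 else 0.0

def best_cutpoint(marker, times, events, lo=0.1, hi=0.9):
    """Scan candidate cutpoints (restricted to inner quantiles, as maxstat
    does) and return (cutpoint, statistic) maximizing the log-rank stat."""
    vals = sorted(set(marker))
    best = (None, -1.0)
    for c in vals[int(lo * len(vals)):int(hi * len(vals))]:
        lo_i = [i for i, m in enumerate(marker) if m <= c]
        hi_i = [i for i, m in enumerate(marker) if m > c]
        if not lo_i or not hi_i:
            continue
        stat = logrank_chi2([times[i] for i in lo_i], [events[i] for i in lo_i],
                            [times[i] for i in hi_i], [events[i] for i in hi_i])
        if stat > best[1]:
            best = (c, stat)
    return best
```

On synthetic data where the marker cleanly separates short from long survival, the scan recovers the true threshold; on real data the multiplicity-adjusted p-value is what guards against overfitting the cutpoint.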

Results:
A total of 146 patients were included.  Cutpoints were significantly associated with OS for pre-CRT ratios but not for post-CRT ratios.  Within the pre-treatment group, “low” (<2.86) LMR was associated with decreased OS (log-rank p=0.004, 5 year OS= 69% [95%CI 54%-80%]) compared to “high” (>2.86) LMR (5 year OS= 86% [95%CI 75%-93%]).  In the same group, “high” NLR (>4.47) was associated with decreased OS (log-rank p<0.001), and “high” PLR (>203.6) was associated with decreased OS (log-rank p<0.001).  With adjustment for age and final pathologic stage, the associations of NLR and PLR with OS retained their statistical significance (p=0.017 and p=0.005, respectively), and the association of LMR and OS had borderline statistical significance (p=0.075).

Conclusion:
If obtained prior to the start of neoadjuvant chemoradiation, LMR, NLR, and PLR values are predictive of 5-year OS in patients with locally advanced rectal adenocarcinoma.  Following the administration of neoadjuvant therapy, these ratios lose their predictive ability.  Further confirmation of the value of these ratios in larger datasets will be important.
 

27.07 Post-Discharge Opioid Utilization after Colorectal Surgery is Modified by ERAS Pathways

K. E. Hudak1, L. E. Goss2, R. K. Burton3, P. K. Patel1, E. A. Dasinger2, G. D. Kennedy2, J. A. Cannon2, M. S. Morris2, J. S. Richman2, D. I. Chu2  1University Of Alabama at Birmingham,School Of Medicine,Birmingham, Alabama, USA 2University Of Alabama at Birmingham,Department Of Gastrointestinal Surgery,Birmingham, Alabama, USA 3University Of Alabama at Birmingham,School Of Public Health,Birmingham, Alabama, USA

Introduction:  The excess utilization of opioids after surgery is common and may contribute to the national opioid epidemic. Enhanced Recovery After Surgery (ERAS) pathways have been shown to decrease in-hospital opioid utilization, but their effect on post-discharge opioid utilization is unclear. We hypothesized that patients undergoing ERAS for colorectal surgery would have decreased opioid utilization on discharge and at one-year post-discharge.

Methods:  A single-institution ERAS database was used to identify all patients undergoing colorectal surgery in 2015. ERAS patients were then matched by sex, race, age, indication, and procedure with pre-ERAS patients from 2013-14 to create a comparison group. Patient/procedure-level characteristics were included. Excluded were patients who died within one year of surgery, long-term dependent opioid users, and opioid users above the 99th percentile of oral morphine equivalents (OME). Outcomes evaluated included OME at discharge, total OME within 1-year, OME per pill (OME/P) at discharge, opioid type, and pill counts. Variables with p<0.05 on bivariate comparisons were included in adjusted linear models.
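The outcome definitions above reduce to simple arithmetic over each discharge prescription. A sketch using standard CDC-style morphine-milligram-equivalent conversion factors; the abstract does not state the exact conversion table used, so these factors are assumptions:

```python
# CDC-style oral morphine-equivalent factors per mg (assumed; the
# abstract's exact conversion table is not given)
OME_PER_MG = {"morphine": 1.0, "tramadol": 0.1,
              "hydrocodone": 1.0, "oxycodone": 1.5}

def discharge_ome(prescriptions):
    """prescriptions: iterable of (drug, mg_per_pill, n_pills).
    Returns (total OME, OME per pill) for a discharge prescription set."""
    total = sum(OME_PER_MG[drug] * mg * n for drug, mg, n in prescriptions)
    pills = sum(n for _, _, n in prescriptions)
    return total, (total / pills if pills else 0.0)
```

For example, a 30-pill oxycodone 5 mg prescription is 225 OME at 7.5 OME per pill, while 30 pills of tramadol 50 mg is 150 OME at 5 OME per pill, which illustrates how a shift toward tramadol can lower OME/P even when total OME rises.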

Results: Of 395 patients included in this study, 89.6% were prescribed an opioid on discharge. Pre-ERAS (n=197) and ERAS (n=198) patients were similar by matched characteristics and smoking status, ASA class, hypertension and diabetes. Compared to pre-ERAS patients, ERAS patients had more minimally invasive surgeries (43.4% vs. 32.5%), more ostomies (38.9% vs. 25.9%) and had lower rates of baseline opioid use (15.2% vs. 29.4%) (p<0.03). More ERAS patients were discharged with no opioids compared to pre-ERAS patients (13.1% vs. 7.6%, p=0.07). Among those discharged with opioids, ERAS patients received an average of 403 OME and 60.6 pills vs. 343 OME and 46.9 pills for pre-ERAS (p<0.03 for all). However, the OME/P at discharge was significantly lower for ERAS (6.9 vs. 7.6, p<0.01), which remained after adjustment for covariate differences (7.0 vs 7.9, p=0.01). ERAS patients used more low-OME medications, such as tramadol (35.9% vs. 0%, p<0.001) and were prescribed fewer high-OME medications containing hydrocodone or oxycodone (37.9% vs. 72%, p<0.01). At one-year post-discharge, ERAS patients received fewer additional high-OME prescriptions (34.3% vs. 43.7%, p<0.01).

Conclusion: ERAS modifies post-discharge opioid utilization for patients undergoing colorectal surgery. On discharge, more patients undergoing ERAS required no opioids, and at one year, ERAS patients required fewer opioid prescriptions. While ERAS patients discharged with opioids did receive more OMEs overall, these OMEs were distributed over more pills, and ERAS patients actually received more low-potency (low-OME) pills, accounting for a lower OME/P ratio. These findings suggest a potential role for ERAS in reducing post-discharge opioid utilization and an additional need to standardize post-discharge prescribing patterns.

 

27.04 A Preoperative Prediction Model for Risk of Multiple Admissions after Colon Cancer Surgery

J. H. Fieber1, C. E. Sharoky1, K. Collier1, R. L. Hoffman1, C. Wirtalla1, E. C. Paulson1, G. C. Karakousis1, R. R. Kelz1  1University Of Pennsylvania,Department Of Surgery,Philadelphia, PA, USA

Introduction: Colon cancer treatment commonly has a profound impact on patients’ and caregivers’ ability to maintain their involvement in the work force, potentially leading to loss of insurance and income. The use of medical services, including multiple hospital admissions [MuAdmin], contributes to time lost at work. We developed a simplified model to predict preoperative risk of MuAdmin amongst patients undergoing colon resection to help patients prepare for treatment and to guide improvement efforts.

Methods: Patients ≥18 years old with colon cancer who underwent elective surgical resection without postoperative complications, identified in discharge claims from California and New York (2008-2011), were included. The primary outcome, MuAdmin, was defined as an admission count at or above the 90th percentile following resection. Logistic regression models were developed to identify factors predictive of MuAdmin. A weighted point system was developed using beta-coefficients (β) (p<0.05). A point value of 1 was assigned to β<0.5, 2 was assigned to 0.5≤β<1, and 3 was assigned to β≥1. A random sample of 75% of the data was used for model development, leaving a 25% sample for validation.

Results: A total of 14,805 patients underwent colon surgery, with 27.3% requiring at least 1 admission. MuAdmin, defined as ≥2 admissions following resection, impacted 9.7% of patients.  The statistically significant predictors of MuAdmin were Elixhauser comorbidity index ≥3 (β=0.30), metastasis (β=0.96), payer system (Medicare β=0.25, Medicaid β=0.58), and the number of prior admissions in the year before resection (1: β=0.43, 2: β=0.54, 3: β=1.45). Scores ranged from 0-8. Scores ≤1 carried a <7% risk of MuAdmin, scores of 2-5 a 10-21% risk, and scores ≥6 a >30% risk. Our prediction model accurately stratified patients by the likelihood of MuAdmin. [Table 1: Observed and Predicted Rates of MuAdmin following Colon Cancer Resection by Risk Score]
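The point assignment is a direct thresholding of the beta coefficients reported above. A sketch using those coefficients (the variable names are ours):

```python
def beta_to_points(beta):
    """Point weighting from the abstract: 1 point for beta < 0.5,
    2 points for 0.5 <= beta < 1, 3 points for beta >= 1."""
    return 3 if beta >= 1.0 else (2 if beta >= 0.5 else 1)

# Significant predictors and betas reported in the Results section
betas = {"elixhauser_ge3": 0.30, "metastasis": 0.96,
         "medicare": 0.25, "medicaid": 0.58,
         "prior_adm_1": 0.43, "prior_adm_2": 0.54, "prior_adm_3": 1.45}
points = {name: beta_to_points(b) for name, b in betas.items()}

# Maximum achievable score (payer categories and prior-admission counts
# are mutually exclusive): 1 + 2 + 2 + 3 = 8, matching the 0-8 range
max_score = (points["elixhauser_ge3"] + points["metastasis"]
             + max(points["medicare"], points["medicaid"])
             + max(points["prior_adm_1"], points["prior_adm_2"],
                   points["prior_adm_3"]))
```

A patient's risk score is then the sum of points for the predictors present, looked up against the observed MuAdmin rates in Table 1.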

Conclusion: Following discharge after resection of colon cancer, almost a third of patients are admitted at least once and nearly 10% require 2 or more admissions in the year following surgery.  A simple, preoperative clinical model can predict the likelihood of multiple admissions in patients anticipating resection.  This information can assist patients and caregivers in managing time off from work to minimize the threat of unemployment and financial hardship.

27.05 Fecal Microbiota Transplant Protocol Implementation: A Community-Based University Hospital Experience

R. Duarte-Chavez2, T. R. Wojda1,3, B. Geme1, G. Fioravanti2, S. P. Stawicki3  1St. Luke’s University Health Network,Division Of Gastroenterology,Bethlehem, PA, USA 2St. Luke’s University Health Network,Department Of Internal Medicine,Bethlehem, PA, USA 3St. Luke’s University Health Network,Department Of Surgery,Bethlehem, PA, USA

Introduction: Clostridium difficile (CD) is a serious and increasingly prevalent healthcare-associated infection. The pathogenesis of CD infection (CDI) involves the acquisition of CD together with a disruption of the native gut flora. Antibiotics are a major risk factor, although other contributors have been identified. Management combines discontinuation of the offending agent, initiation of CD-specific antibiotic(s), probiotic use, fecal microbial transplantation (FMT), and surgery as the “last resort” option. The aim of this study is to review short-term clinical results following the implementation of an FMT protocol at our community-based university hospital.

Methods: Following IRB / Infection Control Committee approvals, an FMT protocol was implemented for patients with CDI. Prospective tracking of FMT procedures (Jul 1, 2015-Feb 1, 2017) was conducted using the REDCap™ data capture system. Indications for FMT included: (a) ≥3 CDI recurrences; (b) ≥2 hospital admissions with severe CDI; or (c) first episode of complicated CDI (CCDI). Risk factors for CDI and treatment failure were assessed. Patients were followed for ≥3 months to assess cure/failure, relapse, and side effects. Frozen 250 mL FMT samples were acquired from OpenBiome (Somerville, MA). After 4 hrs of thawing, the liquid suspension was applied colonoscopically, from terminal ileum to mid-transverse colon. Recorded data included disease severity (Hines VA CDI Severity Score, HVCSS), concomitant medications, number of FMT treatments, non-FMT therapies, cure rates, and mortality.

Results: Thirty-five patients (mean age 58.5 yrs, 69% female) received FMT, with primary cure in 30 (86%) cases. Within this sub-group, 2/30 (6.7%) patients recurred and were subsequently cured with long-term oral vancomycin (OV). Among 5/35 (14%) primary FMT failures, 3 (60%) were cured with long-term OV and 2 (40%) required colectomy. For the 7 patients who either failed FMT or recurred, long-term OV was curative in all but 2 cases (Fig 1). For patients with severe CDI (HVCSS ≥3), primary / secondary cure rates were 6/10 (60%) and 8/10 (80%), respectively. Patients with CCDI (n=4) had higher HVCSS (4 vs 3) and mortality of 25%. The most commonly reported side effect of FMT was loose stools.

Conclusion: This study supports the efficacy and safety of FMT in the setting of CDI, with primary (86%) and secondary (71%) non-surgical cure rates being consistent with previous reports. The potential role of opioid use as a modulator of CDI warrants further study.

27.06 Colon Cancer Stages I-III: Why Roam When You Can Resect Near Home?

O. K. Jawitz1, M. Turner1, M. Adam1, C. Mantyh1, J. Migaly1  1Duke University Medical Center,Department Of Surgery,Durham, NC, USA

Introduction:
Cancer resections performed at high-volume colorectal surgery centers are associated with improved post-operative outcomes, including fewer complications such as anastomotic leak, as well as increased survival. It is not known whether patients who do not live in proximity to high-volume centers benefit from traveling to these institutions as opposed to receiving their care at local, low-volume centers. 

Methods:
The 2006-2014 National Cancer Database (NCDB) was queried for patients with pathologic stage I-III colon adenocarcinoma who underwent cancer treatment at a single center. Travel distances to treatment centers were calculated. Matching the first and fourth quartiles of travel distance with first and fourth quartiles of institution surgical volume established short distance/low-volume (local) and long distance/high-volume (travel) cohorts. The primary outcome of interest was overall survival compared between the local and travel cohorts. Secondary outcomes included incidence of positive resection margins, adequate lymph node harvesting, length of stay, readmission rates, 30-day and 90-day mortality, and use of adjuvant chemotherapy.
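Travel distance in registry studies like this is typically computed as the straight-line (great-circle) distance between patient and facility coordinates, often ZIP-code centroids. The abstract does not state which method was used, so this haversine sketch is an assumption about the calculation:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points
    (haversine formula; illustrative, e.g. between ZIP centroids)."""
    R = 3958.8  # mean Earth radius in miles
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * R * asin(sqrt(a))
```

For instance, New York City to Philadelphia comes out at roughly 80 miles, comfortably in the "travel" cohort's ≥21.8-mile range.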

Results:
A total of 33,339 patients met inclusion criteria, including 18,163 patients who traveled ≤2.6 miles to centers that performed ≤34 resections per year (local) and 15,176 patients who traveled ≥21.8 miles to centers that performed ≥83 resections annually (travel). In unadjusted analysis, patients in the travel cohort had lower rates of positive resection margins (3.3% vs. 5.0%, p<0.001), more frequently had adequate lymph node harvests (88.9% vs. 79.2%, p<0.001), and had lower 30-day (2.4% vs. 4.0%, p<0.001) and 90-day mortality (4.0% vs. 6.6%, p<0.001). On multivariable logistic regression analysis adjusting for patient demographic, tumor, and facility characteristics, traveling longer distances to high-volume centers remained an independent predictor of improved overall survival (hazard ratio 0.84, p<0.001) and of the secondary outcomes of adequate lymph node harvesting (OR 0.48, p<0.001), negative resection margins (OR 0.65, p<0.001), lower readmission rates (OR 0.84, p<0.001), 30-day mortality (OR 0.75, p<0.001), and 90-day mortality (OR 0.74, p<0.001). 

Conclusion:
For patients with stage I-III colon cancer who do not live in proximity to high-volume colorectal surgery centers, traveling to these institutions as opposed to receiving treatment at local low-volume centers conveys a postoperative survival advantage. Additionally, rates of adequate oncologic resections and readmission are superior to those who seek care locally. Patients with stage I-III colon cancer should be encouraged to undergo surgical resection at high-volume centers, even if this involves traveling outside of their local region. 
 

27.03 Colorectal Surgical Site Infection Prevention Kits Prior to Elective Colectomy Improve Outcomes

S. E. Deery1, S. T. McWalters2, S. R. Reilly3, H. N. Milch1, D. W. Rattner1, E. A. Mort3,4, D. C. Hooper5, M. G. Del Carmen1,6, L. G. Bordeianou1  1Massachusetts General Hospital,Colorectal Center,Boston, MA, USA 2Massachusetts General Hospital,Edward P. Lawrence Center For Quality And Safety,Boston, MASSACHUSETTS, USA 3Massachusetts General Hospital,Department Of Patient Safety And Quality,Boston, MASSACHUSETTS, USA 4Massachusetts General Hospital,Department Of General Internal Medicine,Boston, MASSACHUSETTS, USA 5Massachusetts General Hospital,Division Of Infectious Diseases,Boston, MASSACHUSETTS, USA 6Massachusetts General Hospital,Department Of Obstetrics And Gynecology,Boston, MASSACHUSETTS, USA

Introduction:
Patient compliance with preoperative mechanical and antibiotic bowel preparation, skin washes, carbohydrate loading, and avoidance of fasting are key components of successful colorectal ERAS and surgical site infection (SSI)-reduction programs. In July 2016, we began a quality improvement project distributing a free SSI Prevention KIT (SSIPK) containing patient instructions (Figure), mechanical and oral bowel preparation, chlorhexidine washes, and carbohydrate drink to all patients scheduled for elective colectomy, with the goal of improving patient compliance and rates of SSI.

Methods:
This was a prospective data audit of our first 221 SSIPK+ patients, who were compared to historical controls (SSIPK-) of 1,760 patients undergoing elective colectomy (1/1/13–3/31/17). A 1:1 propensity score match accounted for nonrandom treatment assignment, and the chi-square test was used to compare matched patients’ compliance and complications.
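Conceptually, the matching step pairs each SSIPK+ patient with the historical control whose estimated propensity score is closest. A greedy 1:1 nearest-neighbor sketch; the caliper width, and the assumption that scores have already been estimated (e.g. by logistic regression on the listed covariates), are ours:

```python
def greedy_match(treated_scores, control_scores, caliper=0.05):
    """Greedy 1:1 nearest-neighbor propensity match.  Each treated unit is
    paired with the closest unused control within the caliper; returns a
    list of (treated_index, control_index) pairs."""
    used = set()
    pairs = []
    # Matching order can affect greedy results; here treated units are
    # processed in ascending score order.
    for i, ps in sorted(enumerate(treated_scores), key=lambda x: x[1]):
        best = None
        for j, pc in enumerate(control_scores):
            if j in used:
                continue
            d = abs(ps - pc)
            if d <= caliper and (best is None or d < best[1]):
                best = (j, d)
        if best is not None:
            used.add(best[0])
            pairs.append((i, best[0]))
    return pairs
```

Unmatched treated patients (no control within the caliper) are dropped, which is consistent with the cohort shrinking from 221 SSIPK+ patients to 219 matched pairs.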

Results:
SSIPK+ (N=219) and SSIPK- (N=219) matched patients were statistically identical on demographics, comorbidities, BMI, surgical indication, surgeon, and procedure. SSIPK+ patients had higher compliance with mechanical (95% vs. 71%, P < 0.001) and oral antibiotic (94% vs. 27%, P < 0.001) bowel preparation. This translated into lower overall SSI rates (5.9% vs. 11.4%, P = 0.04). SSIPK+ patients also had lower rates of anastomotic leak (2.7% vs. 6.8%, P = 0.04), prolonged postoperative ileus (5.9% vs. 14.2%, P < 0.01), and unplanned intubation (0% vs. 2.3%, P = 0.02). Furthermore, SSIPK+ patients had a shorter mean hospital length of stay (3.1 vs. 5.4 days, P < 0.01) and fewer unplanned readmissions (5.9% vs. 14.6%, P < 0.001). There were no differences in rates of postoperative pneumonia, urinary tract infection, Clostridium difficile colitis, sepsis, or death. 

Conclusion:
Provision of a free-of-charge SSIPK improves patient compliance with preoperative instructions, which is associated with significantly lower rates of surgical site infections, lower rates of prolonged postoperative ileus, and shorter hospital stays with fewer readmissions. Widespread utilization of such a kit could therefore lead to significantly improved outcomes and lower costs.
 

27.01 Impact of Mental Health Diagnoses and Treatment on Outcomes after Colorectal Cancer Surgery

C. G. Ratcliff1,4,5, N. N. Massarweh2,5, S. Sansgiry5,6, L. Dindo1,5, H. Yu5,6, D. H. Berger2,5,7, J. A. Cully1,5  1Baylor College Of Medicine,Department Of Psychiatry & Behavioral Sciences,Houston, TX, USA 2Baylor College Of Medicine,Department Of Surgery,Houston, TX, USA 4Sam Houston State University,Department Of Psychology,Huntsville, TX, USA 5Michael E. DeBakey Veterans Affairs Medical Center,Houston, TX, USA 6Baylor College Of Medicine,Department Of Medicine,Houston, TX, USA 7Baylor St. Luke’s Medical Center,Houston, TX, USA

Introduction:  Data regarding the impact of mental health (MH) diagnosis and treatment on postoperative outcomes are evolving.  Presently, little is known about the prevalence and effect of MH treatment on outcomes following surgery for colorectal cancer (CRC).

Methods:  We identified 58,961 Veterans who underwent CRC surgery from 2000-2014 using the Veterans Affairs (VA) Surgical Quality Improvement Program (VASQIP) linked to the VA Corporate Data Warehouse to identify MH diagnoses and services. Multivariable logistic regression adjusting for clinical and demographic factors was used to evaluate the association between a MH diagnosis (depression, anxiety, PTSD, bipolar, psychotic, personality, cognitive, or substance use disorder) documented in the 30d prior to surgery and the occurrence of 1+ postoperative complication (POCOMP), 90d readmission (90dReadm), and length of stay (LOS). The impact of MH treatment (psychiatric medication and psychotherapy [meds+therapy], psychiatric medication alone [meds alone], psychotherapy alone [therapy alone], or no treatment) within the 30d prior to surgery was also examined.

Results: Within the cohort, 9,029 (15%) had a MH diagnosis (depression = 2,738 [30%], anxiety = 942 [10%], PTSD = 1,762 [20%], bipolar = 505 [6%], psychotic = 679 [8%], cognitive = 239 [3%], personality = 105 [1%], substance use = 4,579 [51%]). Among Veterans with a MH diagnosis, 136 (2%) received meds+therapy, 4,157 (46%) meds alone, 308 (3%) therapy alone, and 4,428 (49%) no treatment during the 30d before surgery.

POCOMP occurred in 30% and 90dReadm in 23% of Veterans. Median LOS was 8d (IQR 6). MH diagnosis was associated with greater odds of POCOMP (OR: 1.10, CI: 1.05-1.16), 90dReadm (OR: 1.10, CI: 1.04-1.16), and longer LOS (OR: 1.42, CI: 1.09-1.86) compared to no MH diagnosis.

Veterans with a MH diagnosis who received no preoperative MH treatment (OR: 1.08, CI: 1.00-1.15) or meds alone (OR: 1.15, CI: 1.07-1.24) had greater odds of POCOMP relative to Veterans without MH diagnosis. Similarly, Veterans with a MH diagnosis who received no preoperative MH treatment (OR: 1.13, CI: 1.04-1.21) or meds alone (OR: 1.15, CI: 1.07-1.24) had greater odds of 90dReadm relative to Veterans without MH diagnosis. Finally, Veterans with a MH diagnosis who received meds alone had longer LOS relative to Veterans without MH diagnosis (OR: 1.96, CI: 1.35-2.85). Odds of POCOMP, 90dReadm, and longer LOS for Veterans with a MH diagnosis who received meds+therapy or therapy alone did not statistically differ from Veterans without MH diagnoses.

Conclusion: MH diagnoses are associated with postoperative complications and readmissions among Veterans who undergo CRC surgery. Provision of preoperative psychotherapy, alone or in combination with psychiatric medication, may help mitigate the adverse effect of psychiatric conditions. Since few Veterans receive adequate preoperative MH treatment, screening for these psychiatric risk factors may be warranted.