35.10 Isolated Blunt Traumatic Brain Injury is Associated with Fibrinolysis Shutdown

J. Samuels1, E. Moore2, A. Banerjee1, C. Silliman3, J. Coleman1, G. Stettler1, G. Nunns1, A. Sauaia1  1University Of Colorado Denver,Department Of General Surgery,Aurora, CO, USA 2Denver Health Medical Center,Department Of Surgery,Aurora, CO, USA 3Children’s Hospital Colorado,Pediatrics-Heme Onc And Bone Marrow Transplantation,Aurora, CO, USA

Introduction:

While trauma-induced coagulopathy (TIC) contributes to mortality in seriously injured patients, the additive effect of traumatic brain injury (TBI) remains unclear. Prior studies have suggested that TBI initiates an exaggerated bleeding diathesis with decreased clot formation and increased clot degradation in the initial post-injury phase. However, this coagulation phenotype has not been assessed using comprehensive coagulation assays, such as thrombelastography (TEG); such an assessment is urgently needed given the growing practice of empiric anti-fibrinolytic therapy. Therefore, the purpose of this study was to define the coagulation phenotypes of patients with TBI compared to other injury patterns, as measured by TEG as well as conventional coagulation tests (CCT).

Methods:

The TAP (Trauma Activation Protocol) database is a prospective assessment of TIC in all patients meeting criteria for trauma activation at a level I trauma center. Patients were categorized into three groups: 1) Isolated TBI (I-TBI): AIS head ≥3 and ED GCS≤8 and AIS ≤2 for all other body regions; 2) TBI with polytrauma (TBI+Torso): AIS head≥3 and at least one AIS≥3 for other regions; and 3) Non-TBI (I-Torso): AIS head <3 and at least one AIS≥3 for other regions. Phenotype frequency was compared using the Chi-square test. Significance was declared at P<0.05.
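As an illustration, the three-way grouping above can be expressed as a simple decision rule (a hypothetical sketch; the function and variable names are ours, not from the TAP database):

```python
def classify_injury_group(ais_head, ed_gcs, other_region_ais):
    """Assign a patient to one of the three study groups.

    ais_head: AIS score for the head region
    ed_gcs: Glasgow Coma Scale score on ED arrival
    other_region_ais: list of AIS scores for all non-head body regions

    Returns None for patients who fit none of the three categories.
    """
    max_other = max(other_region_ais) if other_region_ais else 0
    if ais_head >= 3 and ed_gcs <= 8 and max_other <= 2:
        return "I-TBI"        # isolated TBI
    if ais_head >= 3 and max_other >= 3:
        return "TBI+Torso"    # TBI with polytrauma
    if ais_head < 3 and max_other >= 3:
        return "I-Torso"      # torso injury without TBI
    return None
```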

Results:

There were 186 qualified patients, 38 with I-TBI, 55 with TBI+Torso, and 93 non-TBI patients enrolled between 2013 and 2016. Arrival SBP was higher for I-TBI (138 mmHg) compared to I-Torso (108 mmHg), but there were no significant differences in signs of shock (lactate, base deficit). Also, no differences existed between the three groups’ INRs, PTTs, or TEG measurements (ACT, Angle, and MA).

The distribution of fibrinolysis phenotypes is depicted in Figure 1. I-TBI and TBI+Torso had a significantly higher incidence of fibrinolysis shutdown (rTEG Ly30 <0.9%) compared to I-Torso (p=0.045), and this persisted when comparing only patients in shock (base deficit ≥6), with a third of patients in the I-TBI group demonstrating shutdown (p<0.01). Hyperfibrinolysis occurred in a minority of patients (≤33%) in all three groups. Nearly 50% of patients in shock demonstrated shutdown after experiencing a TBI with other injuries.
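For clarity, the phenotype assignment can be sketched as a threshold rule. The <0.9% shutdown cutoff comes from the abstract; the 3% hyperfibrinolysis threshold is an assumed value from the broader TEG literature, not stated here:

```python
def fibrinolysis_phenotype(ly30_percent):
    """Classify an rTEG LY30 value (percent clot lysis at 30 minutes)."""
    if ly30_percent < 0.9:
        return "shutdown"           # cutoff stated in the abstract
    if ly30_percent >= 3.0:
        return "hyperfibrinolysis"  # assumed literature cutoff
    return "physiologic"
```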

Conclusion:

Historically, TBI has been associated with a coagulopathy characterized by hyperfibrinolysis. In contrast, this study found that TBI (isolated or with other injuries) was associated with fibrinolysis shutdown, with only a minority of patients demonstrating hyperfibrinolysis. With the growing use of empiric tranexamic acid (TXA), these data suggest that TXA should be given only when indicated by point-of-care testing.

 

36.01 Opioid Prescribing Habits of Pediatric Versus General Surgeons Following Laparoscopic Appendectomy

M. R. Freedman-Weiss1, A. S. Chiu1, S. L. Ahle1, R. A. Cowles1, D. E. Ozgediz1, E. R. Christison-Lagay1, D. G. Solomon1, M. G. Caty1, D. H. Stitelman1  1Yale University School Of Medicine,Department Of Surgery, Section Of Pediatric Surgery,New Haven, CT, USA

Introduction:

Opioid prescribing must balance the role of opioids as a tool to reduce pain against their addictive potential and propensity to cause harm. Adolescents who use prescription opioids are at increased risk for future drug abuse and overdose, making them a high-risk population. Appendectomy is one of the most common operations, often requires narcotic analgesia, and is performed by both pediatric and general surgeons. The opioid prescribing patterns of these two provider groups have not yet been compared; we hypothesized that pediatric surgery providers prescribe fewer opioids for adolescents than do general surgery providers.

Methods:

A retrospective chart review was conducted across a single health system consisting of four hospitals. All laparoscopic appendectomies performed between January 1, 2016 and August 14, 2017 on patients aged 7-20 were included for analysis. Any case coded for multiple procedures or identified as converted to open was excluded.

The primary outcome measure was the amount of narcotic prescribed postoperatively. To standardize the different formulations and types of analgesia prescribed, prescriptions were converted into Morphine Milligram Equivalents (MME). For reference, one 5 mg pill of oxycodone equals 7.5 MME. Patients were further grouped into quartiles based on the amount of narcotic prescribed, with the top quartile classified as “high prescribing.” Logistic regression evaluating the odds of high prescribing was performed, incorporating patient weight, gender, race, insurance status, and service provider type (pediatric vs. general surgery).
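The MME standardization can be illustrated with a short sketch (the conversion factors shown are widely used oral morphine-equivalent values; the study's exact conversion table is not given in the abstract):

```python
# Oral morphine-equivalent conversion factors (widely used values;
# oxycodone's factor of 1.5 reproduces the abstract's example:
# one 5 mg pill = 5 * 1.5 = 7.5 MME).
CONVERSION_FACTOR = {"oxycodone": 1.5, "hydrocodone": 1.0, "morphine": 1.0}

def prescription_mme(drug, mg_per_pill, pill_count):
    """Total Morphine Milligram Equivalents for one prescription."""
    return CONVERSION_FACTOR[drug] * mg_per_pill * pill_count

# e.g., a prescription of twenty 5 mg oxycodone pills:
total = prescription_mme("oxycodone", 5, 20)  # 150.0 MME
```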

Results:

A total of 336 pediatric laparoscopic appendectomies were analyzed, 148 by general surgeons and 188 by pediatric surgeons. Pediatric surgeons prescribed less narcotic than general surgeons overall (73.6 MME vs. 109.6 MME, p<0.001). For patients under the age of 13, there was no significant difference between pediatric (46.6 MME) and general surgeons (48.0 MME, p=0.8921). However, for the 13-20 age group, pediatric surgeons prescribed 28% less narcotic than general surgeons (93.2 MME vs. 130.1 MME, p<0.0001).

Regression analysis of patients 13-20 demonstrated that heavier weight (120-159 lbs vs. <120 lbs: OR 4.6, 95% CI [1.4-15.2]; ≥160 lbs vs. <120 lbs: OR 5.5, 95% CI [1.5-20.3]) and being cared for by a general surgery service (vs. pediatric surgery: OR 5.2, 95% CI [2.2-12.1]) were associated with high prescribing.

Conclusion:

After a laparoscopic appendectomy in a single hospital system, general surgeons prescribe significantly larger amounts of narcotic to adolescent patients than do pediatric surgeons. Although both provider types practice weight-based prescribing, even when controlling for weight, general surgeons are significantly more likely to be high prescribers. One substantial and modifiable contributor to the opioid epidemic is the amount of opioid prescribed, thus highlighting the need for education and guidelines on this topic.

35.08 Clinical Impact of Genetic Alterations According to Primary Tumor Sidedness in Colorectal Cancer

Y. Shimada1, Y. Tajima1, M. Nagahashi1, H. Ichikawa1, M. Nakano1, H. Kameyama1, J. Sakata1, T. Kobayashi1, Y. Takii2, S. Okuda3, K. Takabe4,5, T. Wakai1  1Niigata University Graduate School Of Medical And Dental Sciences,Division Of Digestive And General Surgery,Niigata, , Japan 2Niigata Cancer Center Hospital,Department Of Surgery,Niigata, , Japan 3Niigata University Graduate School Of Medical And Dental Sciences,Division Of Bioinformatics,Niigata, , Japan 4Roswell Park Cancer Institute,Breast Surgery,Buffalo, NY, USA 5University At Buffalo Jacobs School Of Medicine And Biomedical Sciences,Department Of Surgery,Buffalo, NY, USA

Introduction: Right-sided colorectal cancer (RCRC), which is derived from the midgut, has different molecular and biological characteristics compared with left-sided colorectal cancer (LCRC), which is derived from the hindgut. Recently, several unplanned retrospective analyses revealed differences between RCRC and LCRC in prognosis and response to targeted therapy. We hypothesized that primary tumor sidedness is a surrogate for the non-random distribution of genetic alterations, and is a simple and useful biomarker in patients with Stage IV CRC. To test this hypothesis, we investigated genetic alterations using comprehensive genomic sequencing (CGS) and analyzed the clinical impact of primary tumor sidedness in patients with Stage IV CRC.

Methods: One hundred eleven Stage IV CRC patients with either RCRC or LCRC were analyzed. We investigated genetic alterations using a 415-gene panel, which includes the genetic alterations associated with resistance to anti-EGFR therapy. Differences in clinicopathological characteristics and genetic alterations between RCRC and LCRC were analyzed using Fisher’s exact test. Differences in response to targeted therapies and the clinical significance of residual tumor status were analyzed between RCRC and LCRC using the log-rank test.

Results: Thirty-four patients (31%) and 77 patients (69%) had RCRC and LCRC, respectively. Histopathological grade 3 was significantly associated with RCRC (P = 0.042). Pulmonary metastasis was significantly associated with LCRC (P = 0.012), and peritoneal metastasis with RCRC (P = 0.002). Regarding residual tumor status, R0 resection of both primary and metastatic lesions showed significantly better overall survival compared with R2 resection in both RCRC and LCRC (P = 0.026 and 0.002, respectively). Regarding genetic alterations, RCRC had more genetic alterations associated with resistance to anti-EGFR therapy (BRAF, ERBB2, FGFR1, KRAS, PIK3CA, PTEN) compared with LCRC (P = 0.040). Among 73 patients who received anti-VEGF therapy, there was no significant difference in progression-free survival (PFS) between RCRC and LCRC (P = 0.866). Conversely, among 47 patients who received anti-EGFR therapy, RCRC showed significantly worse PFS than LCRC (P = 0.019).

Conclusion: RCRC is more likely to have the genetic alterations associated with resistance to anti-EGFR therapy compared with LCRC, and shows resistance to anti-EGFR therapy. Primary tumor sidedness is a surrogate for non-random distribution of molecular subtypes in CRC.
 

35.06 Is Cervical Magnetic Resonance Imaging for Cervical Spine Clearance Justified After Negative CT?

R. Kang1, C. Ingersol1, K. Herzing1, A. P. Ekeh1  1Wright State University,Surgery,Dayton, OH, USA

Introduction:
CT of the cervical spine (CS CT) is utilized widely in the evaluation of moderately to severely injured patients. In neurologically intact patients with imaging negative for injuries but persistent midline neck tenderness, trauma centers have adopted a variety of protocols for further evaluation, including the use of magnetic resonance imaging (MRI). The necessity and cost of this modality have been questioned in the presence of a negative high-quality CS CT. We sought to ascertain changes in clinical management in this population of patients after a protocol change at a Level I trauma center.

Methods:
Data were retrospectively collected for patients seen at a Level I trauma center between Dec 2014 and Jan 2015. Patients were identified through the trauma registry and cross-referenced with a database from the radiology department. All patients who obtained a CS CT, an MRI, or both during the specified period were identified. For our analysis, only patients who received both a CS CT and an MRI, with persistent neck pain and no neurological deficits, were selected. The charts of these patients were reviewed for demographic and clinical data, including age, gender, mechanism of injury, diagnosis on admission, length of hospital stay, length of ICU stay, injury severity score (ISS), results of the CS CT, and results of the MRI. This study followed a policy change on the trauma service in which patients with persistent tenderness and a negative CS CT were sent for MRI, and the use of flexion-extension films was discontinued.

Results:
In the two years studied, 485 patients were identified who obtained a CS CT (n = 142), an MRI (n = 46), or both a CS CT and MRI (n = 260). Of the patients who received both a CS CT and an MRI, the mean age was 50.7 years and 64.2% were male. Motor vehicle crashes (MVCs) (41.5%), falls (37.3%), and auto versus pedestrian and motorcycle crashes (5.4%) were the most common etiologies. Of the 260 patients who received both a CS CT and an MRI, 72 (27.7%) had additional findings on MRI not seen on CT. Among these patients with additional MRI findings, there was no intervention in 69.4%, surgery in 26.3%, and outpatient follow-up in 4.2%. In all 72 of these cases, the findings on MRI did not change management. When comparing patients who had a discrepancy between their CS CT and MRI with those who did not, there was no significant difference in age, length of hospital stay, length of ICU stay, or ISS. There was also no significant difference in mechanism of injury or diagnosis on admission.

Conclusion:
The optimal management of neurologically intact patients with persistent neck pain following a negative CS CT remains controversial. In patients with a negative CS CT and persistent neck pain, MRI added little clinical value with no additional change in clinical management in any of the patients who had additional findings. A clear role for MRI in this population needs to be defined by well-designed prospective studies. 

35.07 Comparable Outcomes after Liver Transplantation with and without Chronic Portal Vein Thrombosis

K. Phelan1, C. Kubal1, J. Fridell1, R. Mangus1  1Department Of Surgery,Division Of Transplantation,Indianapolis, IN, USA

Introduction: Optimal portal flow is crucial to successful liver transplantation. Portal vein thrombosis (PVT), when present, is associated with increased risk of early mortality and graft failure [1]. At our center, an aggressive approach towards PVT was utilized to improve post-transplant outcomes. This study reports outcomes of liver transplantation in patients with pre-transplant PVT.

Methods: All records for liver transplants over a 15-year period at a single center were reviewed and data extracted. PVT was identified on pre-transplant imaging and documented in patient charts. Cavernous transformation, main portal vein thrombus, and thrombus of either the splenic vein or superior mesenteric vein extending into the confluence were considered PVT. Patient and graft survival were the primary endpoints.

Surgical techniques: Depending on the extent of PVT, various surgical approaches were used. In the majority of cases, extensive portal thromboendovenectomy was performed intraoperatively. When optimal portal flow was not established, superior/inferior mesenteric venous bypass was utilized. Patients with extensive porto-mesenteric thrombosis were listed for back-up multivisceral transplant, which was performed if intraoperative attempts at liver transplant failed [2]. Post-transplant anticoagulation was utilized routinely for 3 to 6 months when complete clearance of the PVT was not achieved intraoperatively. Efforts were made not to use expanded criteria donor (ECD) liver allografts when significant PVT was present.

Results: There were 246 patients (12%) with pre-transplant PVT. Of those, 191 (78%) had thrombus in the main portal vein. Cavernous transformation existed in 2% of all patients with PVT. Patient demographic and clinical factors associated with PVT were year of transplant, number of days on the waiting list, race, and a primary diagnosis of fatty liver disease. Transplants with PVT had graft loss comparable to those without PVT at 7 and 90 days (3% vs. 3%, p=0.78; 7% vs. 7%, p=0.83). Patient and graft survival at 1 year for PVT vs. no PVT were 89% vs. 88% (p=0.66) and 89% vs. 90% (p=0.93), respectively. Cox regression showed comparable long-term graft survival for transplants with PVT (66% versus 64% at 10 years; p=0.64).

Conclusion: With an aggressive approach towards PVT, excellent early and long-term outcomes can be achieved after liver transplantation.

 

35.05 Discontinuation of Surgical vs Non-Surgical Clinical Trials: An Analysis of 88,498 Trials

T. J. Mouw1, S. W. Hong1, S. Sarwar1, A. E. Fondaw2, A. Walling3, M. Al-Kasspooles1, P. J. DiPasco1  1University Of Kansas Medical Center,General Surgery,Kansas City, KS, USA 2University Of Kansas School of Medicine – Kansas City, Kansas City, KS, USA 3University Of Kansas School of Medicine – Wichita, Family and Community Medicine, Wichita, KS, USA

Introduction:
Early trial discontinuation is a complex issue with both financial and ethical implications.  It has been previously reported that over 20% of surgical trials are discontinued prematurely and that many of those which reach completion are never published. Previous studies have been limited in scope owing to the need for manual review of selected trials. To date there has been no broad analysis comparing surgical and non-surgical registered clinical trials.

Methods:
The US National Institutes of Health registry at clinicaltrials.gov was accessed 7/7/17, and all US trials from 2005-2017 were downloaded by status (completed, ongoing, and discontinued). An algorithm was developed to automatically assign trials as “surgical” or “non-surgical” based on trial type and the inclusion of surgical keywords generated from a list of 10,000 trial titles and descriptions. The algorithm was validated by testing a subset of trials against a team of blinded residents and medical students. A primary analysis of all US trials was conducted based on the assigned surgical/non-surgical designation and trial status. Significance was established via two-tailed z-test. The reasons for discontinuation of surgical and non-surgical trials were examined and tabulated. Multiple logistic regression using SPSS version 20.0 was performed to assess the impact of trial design, characteristics, and funding sources on trial discontinuation and completion.
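The keyword-assignment step might look like the following (a minimal sketch; the keyword set here is illustrative only, not the study's actual list, which was generated from 10,000 trial titles and descriptions):

```python
# Illustrative subset of surgical keywords; the real list was
# generated empirically from trial titles and descriptions.
SURGICAL_KEYWORDS = {"surgery", "surgical", "resection", "laparoscopic",
                     "anastomosis", "transplant"}

def classify_trial(title, description):
    """Label a registered trial 'surgical' if any keyword appears.

    The study's actual algorithm also incorporated trial type; this
    sketch shows only the keyword-matching step.
    """
    text = f"{title} {description}".lower()
    words = set(text.replace(",", " ").replace(".", " ").split())
    return "surgical" if words & SURGICAL_KEYWORDS else "non-surgical"
```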

Results:

The database search yielded 82,719 non-surgical and 5,779 surgical trials after automatic assignment. The assignment algorithm had an overall accuracy of 87.99% (95% CI 86.85-89.13%), with a positive likelihood ratio (+LR) of 6.09 and a negative likelihood ratio (-LR) of 0.093.
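The abstract reports the likelihood ratios without the underlying sensitivity and specificity; as a reminder, they are derived as follows (the input values below are illustrative only, not the study's):

```python
def likelihood_ratios(sensitivity, specificity):
    """Compute +LR and -LR from sensitivity and specificity (as fractions)."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Illustrative inputs, not the study's actual operating point:
lr_pos, lr_neg = likelihood_ratios(0.90, 0.85)
```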

Significant differences were observed in trial status (Non-surg vs surg: Completed: 55.51% vs 39.49%, Ongoing: 33.42% vs 44.54%, and Discontinued: 11.07% vs 15.97%, p <0.001 each). Industry was more likely to fund non-surgical trials (44.00% vs 32.50%, p <0.001). Surgical trials were more likely to discontinue due to poor recruitment (44.65% vs 34.74% p<0.001). Industry funding was associated with increased discontinuation (OR 1.63 p<0.001). This remained true for the surgical subset of trials funded by industry (OR 1.25 p=0.041). Reaching enrollment and/or phase 1, reporting results, and NIH funding were all protective against discontinuation while randomization had no effect.

Conclusion:
Surgical trials are less likely to reach completion than non-surgical trials. This study establishes industry funding as a contributory factor in trial discontinuation. However, it is not clear whether this is due to forces in play after trial initiation or to the relative exclusivity of selection criteria among the different trial sponsors. Poor recruitment is a major cause of early trial discontinuation, and surgical trials are more susceptible to it than non-surgical trials.

35.03 Impact of Hospital Volume on Outcomes of Laparoscopic versus Open Hepatectomy for Liver Cancer

S. W. De Geus1, G. G. Kasumova1, T. E. Sachs1, O. Akintorin1, S. Ng1, D. McAneny1, J. F. Tseng1  1Boston University,Surgery,Boston, MA, USA

Introduction:  Previous investigators have suggested that laparoscopic liver resection may be superior to an open operation based on studies at high-volume centers; however, the applicability of these findings remains unclear. This study investigates whether hospital volume is a factor in determining the short- and long-term outcomes of laparoscopic versus open hepatectomy for liver cancer.

Methods:  The National Cancer Database (NCDB) was queried for patients who underwent open or laparoscopic hepatectomy, without transplantation, for liver cancer from 2010 to 2013. Institutions were defined as either low-volume hospitals (LVH, ≤11 operations/year) or high-volume hospitals (HVH, >11 operations/year). For the entire cohort and within each volume category, positive margin rate, 30-day mortality, readmissions, prolonged hospital stay (≥14 days), and overall survival were compared between patients who had laparoscopic and open resections, using multivariate logistic regression and Kaplan-Meier methods.

Results: 2,867 patients underwent hepatectomy for liver cancer. Overall, 612 (21.4%) of resections were performed laparoscopically. After adjustment for covariates, resection at a HVH was significantly associated with lower positive-margin rates (HVH vs. LVH: 8.3% vs. 11.0%; adjusted odds ratio [AOR], 0.744; p=0.0413) and lower 30-day mortality (HVH vs. LVH: 3.5% vs. 6.2%; AOR, 0.646; p=0.0375). However, no significant differences were observed between HVHs and LVHs in readmissions (4.6% vs. 4.8%; AOR, 1.039; p=0.8482) or prolonged hospital stay (9.2% vs. 8.8%; AOR, 1.065; p=0.6648). Multivariate regression showed that in both HVHs and LVHs, laparoscopic resection was not significantly associated with positive margins (HVH: AOR, 1.246; p=0.4176; LVH: AOR, 0.991; p=0.9627), 30-day mortality (HVH: AOR, 0.755; p=0.5456; LVH: AOR, 1.037; p=0.8808), readmission (HVH: AOR, 0.834; p=0.6297; LVH: AOR, 0.698; p=0.2302), prolonged hospital stay (HVH: AOR, 0.626; p=0.1172; LVH: AOR, 0.886; p=0.5766), or overall survival (HVH: log-rank p=0.1405; LVH: log-rank p=0.2322) when compared to open resection.

Conclusion: Although outcomes after major operations are influenced by various factors beyond hospital volume alone, the results of this study suggest that patients with liver cancer are at higher risk of positive resection margins and 30-day mortality if they are treated at a LVH rather than a HVH. However, at both high- and low-volume hospitals, laparoscopic resections of liver cancer were associated with surgical and oncologic outcomes similar to those of open operations. Although residual selection bias regarding the MIS vs. open approach must be acknowledged, our data suggest that laparoscopic liver resection, when feasible, is a reasonable approach across hospital volume strata.

 

35.04 Adequacy of Daily Enoxaparin After Colorectal Surgery: An Examination of Anti-Factor Xa Levels

C. J. Pannucci1, K. I. Fleming1, A. Prazak2, C. Bertolaccini2, B. Pickron3  1University Of Utah,Division Of Plastic Surgery,Salt Lake City, UT, USA 2University Of Utah,Department Of Pharmacy,Salt Lake City, UT, USA 3University Of Utah,Department Of Surgery,Salt Lake City, UT, USA

Introduction:
Colorectal surgery patients, particularly those with malignancy, are known to be at increased risk for post-operative venous thromboembolism (VTE).  Current recommendations support enoxaparin prophylaxis to minimize the risk of peri-operative VTE.  While enoxaparin 40 mg once daily is a commonly prescribed prophylactic dose, whether this dose provides adequate anticoagulation remains unknown; this is relevant because inadequate enoxaparin dosing has been associated with downstream VTE events in other surgical populations.  We examined anti-Factor Xa (aFXa) levels, a marker of the degree of anticoagulation, in response to enoxaparin 40 mg once daily among a prospectively recruited cohort of colorectal surgery patients.

Methods:
Colorectal surgery patients were prospectively enrolled into this clinical trial (NCT02704052).  Patients received enoxaparin 40mg once daily, initiated at 6-18 hours after their surgical procedure.  Peak and trough aFXa levels were drawn, with goals of 0.3-0.5 IU/mL and 0.1-0.2 IU/mL, respectively; these ranges have been shown to maximize VTE risk reduction while minimizing bleeding risk.  We examined the proportion of patients with in and out of range aFXa in response to enoxaparin 40mg once daily and the impact of patient weight on rapidity of enoxaparin metabolism.
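The peak-level categories used in the Results can be written as a simple bucketing rule (a sketch with our own function name; the thresholds are the goal ranges stated above):

```python
def categorize_peak_afxa(peak_iu_per_ml):
    """Bucket a peak anti-Factor Xa level against the 0.3-0.5 IU/mL goal range."""
    if peak_iu_per_ml < 0.3:
        return "inadequate"
    if peak_iu_per_ml > 0.5:
        return "over-anticoagulated"
    return "in range"
```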

Results:
To date, 39 colorectal surgery patients who received enoxaparin 40mg once daily have been enrolled.  One patient had post-operative rectal bleeding requiring enoxaparin cessation prior to aFXa lab draws.  63.2% of patients (n=24) had inadequate peak aFXa levels (<0.3 IU/mL) in response to enoxaparin 40mg once daily.  28.9% of patients (n=11) had in range peak aFXa levels (0.3-0.5 IU/mL) and 7.9% of patients (n=3) were over-anticoagulated (>0.5 IU/mL).  Patient weight was associated with rapidity of enoxaparin metabolism (r2=0.41).  Among 22 patients who had trough levels drawn, 81.8% (n=18) had an undetectable trough level at 12 hours—thus the majority of patients actually receive no chemical prophylaxis for 12 hours per day. 

Conclusion:
Based on pharmacodynamics, enoxaparin 40mg once daily is inadequate for the majority of colorectal surgery patients.  For a medication that is administered daily, four out of five colorectal surgery patients receive no detectable anticoagulation for 12 hours per day.  This study plans to continue patient accrual for one year, with the goal of correlating aFXa with clinically relevant endpoints including 90-day VTE and 90-day bleeding.  As patient weight predicts rapidity of enoxaparin metabolism, a weight-based enoxaparin dosing strategy might be more appropriate.
 

35.02 Quantitative Measure of Intestinal Permeability Correlates with Sepsis

S. A. Angarita2, T. A. Russell2, P. Ruchala3, S. Duarte2, I. A. Elliott2, J. P. Whitelegge3, A. Zarrinpar1  1University Of Florida,Surgery,Gainesville, FL, USA 2University Of California – Los Angeles,Surgery,Los Angeles, CA, USA 3University Of California – Los Angeles,Pasarow Mass Spectrometry Laboratory,Los Angeles, CA, USA

Introduction: Loss of intestinal barrier integrity plays a key role in the development and perpetuation of disease states such as inflammatory bowel disease and celiac disease. It is also crucial to the onset of sepsis and multiple organ failure in situations of intestinal hypoperfusion, including trauma and major surgery, or in the setting of abnormal blood flow such as portal hypertension. A variety of tests have been developed to assess intestinal epithelial cell damage, intestinal tight junction status, and the consequence of intestinal barrier integrity loss, i.e., increased intestinal permeability.  These methods suffer from a lack of sensitivity, prolonged specimen collection periods, or high expense. We have developed a technique to measure the concentration of the nonabsorbable food dye FD&C Blue #1 in blood and sought to apply this technique to assess its utility in measuring intestinal barrier function in humans.

Methods:  Four healthy volunteers and ten subjects in the intensive care unit were recruited in accordance with an IRB-approved protocol. Subjects were given 0.5 mg/kg Blue #1 orally or per nasogastric tube as an aqueous solution of diluted food coloring (0.5 mg/mL). Five blood specimens were drawn per subject (5 mL/draw) at 0 hours (prior to dosing), 1 hour, 2 hours, 4 hours, and 8 hours. The plasma was then extracted with an acidified mixture of isopropanol and acetonitrile. The organic extracts were analyzed by high-performance liquid chromatography/mass spectrometry for the presence of the unmodified dye.
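The dosing arithmetic implied by the protocol is straightforward (a sketch; the names are ours):

```python
DOSE_MG_PER_KG = 0.5      # oral Blue #1 dose from the protocol
SOLUTION_MG_PER_ML = 0.5  # concentration of the diluted food coloring

def dye_dose(weight_kg):
    """Return (dose in mg, volume of solution in mL) for a patient weight."""
    dose_mg = weight_kg * DOSE_MG_PER_KG
    volume_ml = dose_mg / SOLUTION_MG_PER_ML
    return dose_mg, volume_ml

# A 70 kg patient receives 35 mg of dye in 70 mL of solution.
```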

Results: This study was performed in two phases. Phase one established the lower limit of detection and measured baseline intestinal absorption of Blue #1 in four healthy subjects; we found no detectable absorption. In Phase two, ten patients in the intensive care unit were recruited. Six met criteria for septic shock (identified by a vasopressor requirement to maintain a mean arterial pressure of 65 mm Hg or greater in the absence of a hypovolemic or cardiogenic etiology). The septic patients demonstrated significantly greater absorption of Blue #1 at 2 hours and 8 hours.

Conclusion: We have developed a novel, easy-to-use method to measure intestinal permeability. The method utilizes a food grade non-absorbable dye that can be detected by mass spectrometry analysis of patient blood at multiple time points following oral consumption. This method would allow for the measurement of the intestinal permeability of patients at risk for sepsis, organ failure, or other conditions where loss of function of the intestinal barrier could lead to adverse symptoms or secondary effects.

34.10 Non-invasive Fibrosis Marker Impacts the Mortality after Hepatectomy for Hepatoma among US Veterans

F. B. Maegawa1,2, L. Shehorn3, J. B. Kettelle1,2, T. S. Riall2  1Southern Arizona VA Health Care System,Department Of Surgery,Tucson, AZ, USA 2University Of Arizona,Department Of Surgery,Tucson, AZ, USA 3Southern Arizona VA Health Care System,Department Of Nursing,Tucson, AZ, USA

Introduction:
The clinical role of non-invasive fibrosis markers (NIFM) in the mortality of patients undergoing hepatectomy for hepatocellular carcinoma (HCC) is not well established. We investigated the long-term impact of NIFM on mortality after hepatectomy for HCC.

Methods:
This analysis utilized the Department of Veterans Affairs Corporate Data Warehouse database between 2000-2012. The severity of hepatic fibrosis was determined by the AST-platelet ratio index (APRI) and the Fibrosis-4 score (FIB-4). Kaplan-Meier survival and Cox proportional hazard regression methods were utilized for analysis. 
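Both indices are computed from routine laboratory values; the standard published formulas (not restated in the abstract) are:

```python
import math

def apri(ast_iu_l, ast_uln_iu_l, platelets_10e9_l):
    """AST-to-platelet ratio index; >1 is the significant-fibrosis cutoff used here."""
    return (ast_iu_l / ast_uln_iu_l) * 100 / platelets_10e9_l

def fib4(age_years, ast_iu_l, alt_iu_l, platelets_10e9_l):
    """Fibrosis-4 score; >3.25 is the significant-fibrosis cutoff used here."""
    return (age_years * ast_iu_l) / (platelets_10e9_l * math.sqrt(alt_iu_l))
```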

Results:
Mean age, MELD score, and BMI were 65.6 (SD ± 9.4) years, 9 (SD ± 3.1), and 28 (SD ± 4.9) kg/m2, respectively. Most of the patients were white (64.5%), followed by black (27.6%). The most common operation was partial lobectomy (56.5%), followed by right hepatectomy (28.7%). Of 475 veterans who underwent hepatectomy for HCC, 26.3% had significant fibrosis by APRI (index >1) and 29.2% by FIB-4 (score >3.25). Long-term survival among veterans with APRI >1 was significantly worse than among those with a normal index: Kaplan-Meier analysis revealed a median survival of 2.76 vs. 4.38 years (log-rank p = 0.0018). In contrast, the FIB-4 score was not associated with worse survival; median survival for FIB-4 >3.25 vs. a normal score was 3.28 vs. 4.22 years (log-rank p = 0.144). Unadjusted Cox proportional hazard regression showed that APRI >1 was associated with increased mortality (HR 1.45; 95% CI 1.14-1.84). After adjusting for age, race, BMI, and MELD score, APRI remained associated with increased mortality (HR 1.36; 95% CI 1.02-1.82). FIB-4 was not associated with increased mortality in either unadjusted or adjusted analysis (HR 1.19; 95% CI 0.94-1.50 and HR 1.29; 95% CI 0.96-1.72, respectively).

Conclusion:
APRI can be used as a preoperative tool to predict long-term mortality after hepatectomy, refining the selection criteria for liver resection for HCC. These results suggest patients with APRI > 1 are likely to benefit from other curative therapies, such as transplantation.
 

35.01 Triple-drug Therapy to Prevent Pancreatic Fistula in Patients with a High Drain Amylase Level

T. Adachi1, S. Ono1, T. Adachi1, M. Yamashita1, T. Hara1, A. Soyama1, M. Hidaka1, K. Kanetaka1, M. Takatsuki1, S. Eguchi1  1Nagasaki University,Department Of Surgery,Nagasaki, , Japan

Introduction: A high drain amylase level is a well-known predictor of the development of a pancreatic fistula (PF) after pancreaticoduodenectomy (PD), and any sign of PF following a PD warrants immediate preventive measures. We aimed to determine the efficacy of a triple-drug therapy (TDT) regimen using gabexate mesilate, octreotide, and carbapenem antibiotics to prevent PF in patients with a high drain amylase level on postoperative day 1 (POD1) after PD.

Methods: We enrolled 183 patients who had undergone a PD since 2007. Patients were divided into two groups based on study period. The former period group (2007-2011, n = 81) included patients who received no particular treatment even when their drain amylase level on POD1 was high (≥10,000 IU/L). The latter period group (n = 102) comprised patients treated under a protocol in which a high drain amylase level on POD1 triggered TDT [gabexate mesilate 600 mg/day by continuous intravenous infusion (civ.), octreotide 150 µg/day civ., and carbapenem 1.0 g/day IV] along with fasting for one week. All other postoperative management, including the day of drain removal (POD5), was the same across the study period. The primary endpoint was the incidence of PF [grade B or higher as defined by the International Study Group of Pancreatic Fistula (ISGPF) criteria].
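The trigger for TDT can be expressed as a one-line rule (a sketch; the threshold is the POD1 cutoff stated above):

```python
TDT_THRESHOLD_IU_L = 10_000  # POD1 drain amylase cutoff from the study

def tdt_indicated(pod1_drain_amylase_iu_l):
    """True when the POD1 drain amylase level triggers triple-drug therapy."""
    return pod1_drain_amylase_iu_l >= TDT_THRESHOLD_IU_L
```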

Results: The incidence of PF among all enrolled patients was 10.9%. The incidence of a high drain amylase level (≥ 10,000 IU/L) on POD1 was 11.1% in the former group and 17.6% in the latter group. The incidence of PF in patients whose drain amylase level was not high (< 10,000 IU/L) was equivalent between the two groups (8.2% vs. 4.8%, p = 0.36); however, among patients with a high drain amylase level, the incidence of PF was markedly lower in the latter group, reflecting effective prevention by TDT (88.9% vs. 11.1%, p < 0.001).

Conclusion: TDT is an effective treatment strategy to prevent PF even in patients with a high drain amylase level after a PD procedure.
 

34.09 Variations in Demographics and Outcomes for Extracorporeal Membrane Oxygenation in the US: 2008-2014

K. L. Bailey1, Y. Seo1, E. Aguayo1, V. Dobaria1, Y. Sanaiha1, R. J. Shemin1, P. Benharash1  1David Geffen School Of Medicine, University Of California At Los Angeles,Division Of Cardiac Surgery,Los Angeles, CA, USA

Introduction:

Extracorporeal membrane oxygenation (ECMO) is increasingly used as a life-sustaining measure in patients with acute or end-stage cardiac and/or respiratory failure. We aimed to analyze the national trends in cost and clinical outcomes for venoarterial and venovenous ECMO. We further assessed whether variations in the utilization of ECMO exist based on geography and hospital size. 

Methods:

All adult ECMO patients in the 2008-2014 National Inpatient Sample (NIS) were analyzed. NIS is an all-payer inpatient database that estimates more than 35 million annual U.S. hospitalizations. Patient demographics, hospital characteristics, and outcomes including mortality, cost, and length of stay were evaluated using non-parametric tests for trends.

Results:

An estimated 18,685 adult ECMO patients were categorized by indication: 8,062 (43.2%) respiratory failure, 7,817 (41.8%) postcardiotomy, 1,198 (6.4%) lung transplant, 903 (4.8%) cardiogenic shock, and 706 (3.8%) heart transplant. Annual ECMO admissions increased significantly from 1,137 in 2008 to 5,240 in 2014 (P<0.001). The respiratory failure group showed the greatest increase, from 416 cases in 2008 to 2,400 cases in 2014 (P=0.003). Average cost and length of stay for overall admissions increased significantly, from $125,000 ± $12,457 to $178,677 ± $8,948 (P=0.013) and from 21.8 to 24.0 days (P=0.04), respectively. Elixhauser comorbidity scores increased from 3.17 to 4.14 over the study period. Mortality decreased from 61.4% to 46.0% among total admissions (P<0.001) and across all indications except cardiogenic shock and heart transplantation. The heart transplant group had the highest percentage of neurologic complications (14.9%). ECMO admissions increased persistently at hospitals in the South, West, and Midwest (P<0.001, P<0.001, and P=0.002, respectively), with the South showing the largest fractional growth. While ECMO was utilized more frequently at medium and large hospitals (P<0.001), a smaller fraction of cases was performed at large centers in more recent years.

Conclusion:

The past decade has seen an exponential growth of ECMO at medium and large hospitals in multiple regions of the US, paralleling a significant improvement in outcomes across cardiac and respiratory indications. This is despite a higher risk profile of patients being placed on ECMO in more recent times. Developments in ECMO technology and care of critically ill patients are likely responsible for greater survival and longer lengths of stay. The rapid growth of this technology and costs of care warrant further standardization in order to achieve optimal outcomes in the present era of value-based healthcare delivery.

34.08 Prolonged Post-Discharge Opioid Use After Liver Transplantation

D. C. Cron1, H. Hu1, J. S. Lee1, C. M. Brummett2, J. F. Waljee1, M. J. Englesbe1, C. J. Sonnenday1  2University Of Michigan Medical School,Anesthesiology,Ann Arbor, MI, USA 1University Of Michigan Medical School,Surgery,Ann Arbor, MI, USA

Introduction:
Prolonged opioid use following surgical procedures is common. End-stage liver disease is associated with painful comorbidities, and liver transplant recipients may be at risk of postoperative prolonged opioid use. We studied the incidence and predictors of prolonged opioid use following hospital discharge after liver transplantation. 

Methods:
Using a national dataset of employer-based insurance claims, we identified 1,821 adults who underwent liver transplantation between 12/2009 and 8/2015. Prolonged opioid use was defined as filling an opioid prescription within two weeks of post-transplant hospital discharge and also filling ≥1 opioid prescription between 90 and 180 days post-discharge. We stratified our analysis by preoperative opioid use status: opioid-naïve, chronic opioid use (≥120 days' supply in the year before transplant, or ≥3 opioid prescriptions in the 3 months before surgery), and intermittent use (all other non-chronic use). We also investigated demographics, comorbidities, liver disease etiology, and hospital length of stay (LOS) as potential predictors of prolonged use. We used multivariable logistic regression to compute the covariate-adjusted incidence of prolonged opioid use.
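The study's two key classifications — prolonged post-discharge use and the preoperative use strata — follow directly from the definitions above. The sketch below is illustrative only; the day-level encoding of prescription fills and the function names are assumptions, not the study's analysis code:

```python
def prolonged_opioid_use(fill_days_post_discharge: list) -> bool:
    """Prolonged use per the abstract: a fill within 14 days of discharge
    AND at least one fill between 90 and 180 days post-discharge."""
    early = any(d <= 14 for d in fill_days_post_discharge)
    late = any(90 <= d <= 180 for d in fill_days_post_discharge)
    return early and late

def preop_use_category(days_supply_past_year: int, rx_past_90_days: int) -> str:
    """Preoperative strata: chronic (>=120 days' supply in the prior year or
    >=3 prescriptions in the prior 3 months), naive (no use), else intermittent."""
    if days_supply_past_year >= 120 or rx_past_90_days >= 3:
        return "chronic"
    if days_supply_past_year == 0 and rx_past_90_days == 0:
        return "naive"
    return "intermittent"
```

Note that a patient who fills early but not late (or vice versa) does not meet the prolonged-use definition; both conditions are required.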

Results:
In the year before liver transplantation, 55% of patients were opioid-naïve, 34% had intermittent use, and 11% had chronic use. Overall, 47% of transplant recipients filled an opioid within 2 weeks of hospital discharge, and 19% of all patients had prolonged use. The adjusted rate of prolonged opioid use was 8-fold higher among preoperative chronic opioid users compared to opioid-naïve (61% vs. 8%, P<0.001, Figure). Among preoperatively opioid-naïve patients, predictors of prolonged post-transplant opioid use included: hospital LOS <21 days (Odds ratio [OR]=1.93, P=0.013) and any psychiatric comorbidity (OR=1.8, P=0.030). Age, gender, insurance type, medical comorbidities, and liver disease etiology were not predictive of prolonged opioid use.

Conclusion:
Opioid use remains common beyond 90 days after hospital discharge following liver transplantation, with particularly high rates among preoperative chronic opioid users. Close outpatient follow-up and coordination of care are necessary post-transplant to optimize pain control and decrease rates of prolonged opioid use.
 

34.07 Comparison of Premature Death from Firearms versus Motor Vehicles in Pediatric Patients.

J. D. Oestreicher1,2, W. Krief1,2, N. Christopherson3,6, C. J. Crilly5, L. Rosen4, F. Bullaro1,2  1Steven And Alexandra Cohen Children’s Medical Center,Pediatric Emergency Medicine,New Hyde Park, NY, USA 2Hofstra Northwell School Of Medicine,Pediatrics,Hempstead, NY, USA 3Northwell Health Trauma Institute,Manhasset, NY, USA 4Feinstein Institute For Medical Research,Biostatistics,Manhasset, NY, USA 5Hofstra Northwell School Of Medicine,Hempstead, NY, USA 6Steven And Alexandra Cohen Children’s Medical Center,New Hyde Park, NY, USA

Introduction:
Gun violence is the second leading cause of pediatric trauma death, behind only motor vehicles. Though federally funded scientific data have driven life-saving policy on issues from lead poisoning to SIDS, few data exist on pediatric gun violence. While Congress spends $240 million annually on traffic safety research, it explicitly bans research on gun violence, despite the fact that, when adults are included, guns and cars kill the same number of Americans annually. Therefore, we sought to describe demographic and clinical characteristics of pediatric firearm and motor vehicle injuries and compare their impact on years of potential life lost (YPLL). We hypothesized that these two mechanisms have a similar impact on premature death, highlighting this staggering disparity in research funding.

Methods:
We analyzed data from the National Trauma Data Bank (NTDB) for patients ≤21 years of age presenting to a participating emergency department (ED) with a pediatric firearm (PF) or pediatric motor vehicle (PMV) event from 2009 through 2014. We examined demographic and clinical characteristics of PF and PMV cases using descriptive statistics. The Cochran-Armitage test was used to trend PF cases over time. YPLL was calculated for PF and PMV cases using 75 years of age as the reference. Because the large sample size yielded p<0.0001 for all comparisons, clinical rather than statistical significance was assessed.
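The YPLL calculation described above is simple arithmetic against the 75-year reference age. A minimal sketch, with the per-10-ED-visits normalization the Results section reports (function names are illustrative):

```python
def ypll(ages_at_death: list, reference_age: float = 75.0) -> float:
    """Years of potential life lost: sum of (reference age - age at death),
    counting only deaths that occur before the reference age."""
    return sum(max(0.0, reference_age - age) for age in ages_at_death)

def ypll_per_10_visits(ages_at_death: list, total_ed_visits: int) -> float:
    """Normalize total YPLL to the per-10-ED-visits scale used in the abstract."""
    return ypll(ages_at_death) / total_ed_visits * 10
```

A death at age 17.9 (the mean PF age) contributes 57.1 years; a death at or beyond 75 contributes zero.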

Results:
A total of 1,047,018 pediatric ED visits were identified, with 5.7% PF cases and 27.8% PMV cases. There was a significant decline in PF cases from 2009 (6.2%) to 2014 (5.3%). Demographics for PF cases were as follows: mean age of 17.9 years, 89.0% male, 60.0% African American, 16.9% Hispanic. For PMV: mean age of 15.5 years, 60.6% male, 60.3% Caucasian, and 16.5% Hispanic. PF cases were more likely to die in the ED or hospital (12.5% vs 3.2%), less likely to be transferred to a different hospital (2.5% vs 3.9%), and had similar admission rates (77.5% vs 78.3%) and median lengths of stay (2.0 days). Assault accounted for 79.3% of PF cases, self-inflicted injuries for 4.8%, and accidental injuries for 11.7%. Self-inflicted PF cases had a higher median Injury Severity Score (13) than assault (9) or accidental (4) cases and were more likely to die (40.2% vs 11.4% vs 6.7%). Accidental PF cases tended to be younger (15.7 years) than assault (18.2 years) and self-inflicted (17.8 years) cases. Among all pediatric ED visits, YPLL from a PF case was 4.1 per 10 visits and, for PMV, 5.4 per 10 visits.

Conclusion:
Motor vehicles and firearms each remain a major cause of premature death. For traumatized children who are brought to an ED, four children die from a gun for every five who die from a motor vehicle, leading to similar and profound YPLL. An evidence-based approach has saved millions of lives from motor vehicle crashes; the same federal funding and research should be directed at the epidemic of pediatric firearm injury.
 

34.04 Hemodialysis Predicts Poor Outcomes after Infrapopliteal Endovascular Revascularization

C. W. Hicks1, J. K. Canner2, K. Kirkland2, M. B. Malas1, J. H. Black1, C. J. Abularrage1  1Johns Hopkins University School Of Medicine,Division Of Vascular Surgery,Baltimore, MD, USA 2Johns Hopkins University School Of Medicine,Center For Surgical Trials And Outcomes Research,Baltimore, MD, USA

Introduction:

Hemodialysis (HD) has been shown to be an independent predictor of poor outcomes after femoropopliteal revascularization procedures in patients with critical limb ischemia (CLI). However, HD patients tend to have isolated infrapopliteal disease. We aimed to compare outcomes for HD versus non-HD patients following infrapopliteal open lower extremity bypass (LEB) and endovascular peripheral vascular interventions (PVI).

Methods:

Data from the Society for Vascular Surgery Vascular Quality Initiative database (2008-2014) were analyzed. All patients undergoing infrapopliteal LEB or PVI for rest pain or tissue loss were included. One-year primary patency (PP), secondary patency (SP), and major amputation outcomes were analyzed for HD vs. non-HD stratified by treatment approach using both univariable and multivariable analyses.

Results:

A total of 1,688 patients were included: 348 undergoing LEB (HD=44 vs. non-HD=304) and 1,340 undergoing PVI (HD=223 vs. non-HD=1,117). Patients on HD more frequently underwent revascularization for tissue loss (89% vs. 77%, P<0.001) and more frequently had ≥2 comorbidities (91% vs. 76%, P<0.001). Among patients undergoing LEB, one-year PP (66% vs. 69%) and SP (71% vs. 78%) were similar for HD vs. non-HD (P≥0.25), but major amputations occurred more frequently in the HD group (27% vs. 14%; P=0.03). Among patients undergoing PVI, one-year PP (70% vs. 78%) and SP (82% vs. 90%) were lower and major amputations more frequent (27% vs. 10%) for HD patients (all, P<0.001). After correcting for baseline differences between groups, outcomes were similar for HD vs. non-HD patients undergoing LEB (P≥0.21) but persistently worse for HD patients undergoing PVI (all, P≤0.007) (Table).

Conclusion:

Hemodialysis is an independent predictor of poor patency and a higher risk of major amputation following infrapopliteal endovascular revascularization procedures for the treatment of critical limb ischemia. The use of endovascular interventions in these higher-risk patients is not associated with improved limb salvage outcomes and may be an inappropriate use of healthcare resources.

34.05 Cognitive Impairment and Graft Loss in Kidney Transplant Recipients

J. M. Ruck1, A. G. Thomas1, A. A. Shaffer1,2, C. E. Haugen1, H. Ying1, F. Warsame1, N. Chu2, M. C. Carlson3,4, A. L. Gross2,4, S. P. Norman5, D. L. Segev1,2, M. McAdams-DeMarco1,2  1Johns Hopkins University School Of Medicine,Department Of Surgery,Baltimore, MD, USA 2Johns Hopkins School Of Public Health,Department Of Epidemiology,Baltimore, MD, USA 3Johns Hopkins School Of Public Health,Department Of Mental Health,Baltimore, MD, USA 4Johns Hopkins University Center On Aging And Health,Baltimore, MD, USA 5University Of Michigan,Department Of Internal Medicine, Division Of Nephrology,Ann Arbor, MI, USA

Introduction:  Cognitive impairment is common in patients with end-stage renal disease and impairs adherence to complex treatment regimens. Given the complexity of immunosuppression regimens following kidney transplantation, we hypothesized that cognitive impairment might be associated with an increased risk of all-cause graft loss among kidney transplant (KT) recipients. 

Methods:  Using the Modified Mini-Mental State (3MS) examination, we measured global cognitive function in a prospective cohort of 864 KT candidates (8/2009-7/2016). We estimated the association between pre-KT cognitive impairment and graft loss, using hybrid registry-augmented Cox regression to adjust for confounders precisely estimated in the Scientific Registry of Transplant Recipients (N=101,718). We compared the risk of graft loss between KT recipients with vs. without any cognitive impairment (3MS<80) and those with vs. without severe cognitive impairment (3MS<60), stratified by the type of transplant (living donor KT (LDKT) or deceased donor KT (DDKT)). We extrapolated estimates of the prevalence of any cognitive impairment and of severe cognitive impairment in the national kidney transplant recipient population using predictive mean matching and multiple imputation by chained equations.
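The two 3MS cutoffs above can be expressed as a small classifier. This is a sketch for illustration only; the function name and labels are assumptions, not from the study:

```python
def cognitive_impairment_category(three_ms_score: float) -> str:
    """3MS cutoffs from the abstract: <60 severe impairment,
    <80 any impairment, otherwise no impairment."""
    if three_ms_score < 60:
        return "severe"
    if three_ms_score < 80:
        return "any"
    return "none"
```

Note that "severe" is a subset of "any" in the study's comparisons; the classifier returns the most specific stratum.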

Results: The prevalence of any cognitive impairment in this 864-patient multicenter cohort was 6.7% among LDKT recipients and 12.4% among DDKT recipients, extrapolating nationally to 8.1% of LDKT recipients and 13.8% of DDKT recipients. LDKT recipients with any cognitive impairment had a higher risk of graft loss than recipients without any cognitive impairment (5-year graft loss: 45.5% vs. 10.6%, p<0.01; aHR: 3.28, 95% CI: 1.26–8.51, p=0.02); those with severe impairment had a risk of similar magnitude that was not statistically significant (aHR: 2.79, 95% CI: 0.74–10.61, p=0.1). DDKT recipients with any cognitive impairment had no increase in graft loss vs. those without any cognitive impairment, but those with severe cognitive impairment had a higher risk of graft loss (5-year graft loss: 53.0% vs. 24.0%, p=0.04; aHR: 2.97, 95% CI: 1.38–6.29, p<0.01). 

Conclusion: Cognitive impairment is common among both LDKT and DDKT recipients in the United States. Given these associations between cognitive impairment and graft loss, pre-KT screening for impairment is warranted to identify and more carefully follow higher-risk KT recipients. 

 

34.06 Lymph Node Ratio Does Not Predict Survival after Surgery for Stage-2 (N1) Lung Cancer in SEER

D. T. Nguyen2, J. P. Fontaine1,2, L. Robinson1,2, R. Keenan1,2, E. Toloza1,2  1Moffitt Cancer Center,Department Of Thoracic Oncology,Tampa, FL, USA 2University Of South Florida Health Morsani College Of Medicine,Tampa, FL, USA

Introduction:   Stage-2 non-small-cell lung cancers (NSCLC) include T1N1M0 and T2N1M0 tumors in the current Tumor-Nodal-Metastases (TNM) classification and are usually treated surgically with lymph node (LN) dissection and adjuvant chemotherapy.  Multiple studies report a high lymph node ratio (LNR), defined as the number of positive LNs divided by the total LNs resected, as a negative prognostic factor in NSCLC patients with N1 disease who underwent surgical resection with postoperative radiation therapy (PORT).  We sought to determine whether a higher LNR predicts worse survival after lobectomy or pneumonectomy in NSCLC patients (pts) with N1 disease who never received PORT.

Methods:   Using Surveillance, Epidemiology, and End Results (SEER) data, we identified pts who underwent lobectomy or pneumonectomy with LN excision (LNE) for T1N1 or T2N1 NSCLC from 1988-2013.  We excluded pts who had radiation therapy, multiple primary NSCLC tumors, or zero or an unknown number of LNs resected.  We included pts with adenocarcinoma (AD), squamous cell (SQ), neuroendocrine (NE), or adenosquamous (AS) histology.  The log-rank test was used to compare Kaplan-Meier survival of pts with LNR <0.125 vs. 0.125-0.5 vs. >0.5, stratified by surgical type and histology.
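The LNR definition and the three strata used in the analysis reduce to a short computation. A minimal sketch (names illustrative; the guard on zero or unknown resected nodes mirrors the exclusion criterion above):

```python
def lymph_node_ratio(positive_nodes: int, total_nodes_resected: int) -> float:
    """LNR = number of positive lymph nodes / total lymph nodes resected."""
    if total_nodes_resected <= 0:
        # Patients with zero or an unknown number of resected nodes were excluded.
        raise ValueError("total resected nodes must be positive")
    return positive_nodes / total_nodes_resected

def lnr_group(lnr: float) -> str:
    """Strata used in the abstract: <0.125, 0.125-0.5, >0.5."""
    if lnr < 0.125:
        return "low"
    if lnr <= 0.5:
        return "intermediate"
    return "high"
```

For example, 2 positive nodes out of 16 resected gives an LNR of exactly 0.125, which falls in the middle stratum.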

Results:  Of 3,452 pts, 2,666 (77.2%) had lobectomy and 786 (22.8%) had pneumonectomy.  There were 1,935 AD pts (56.1%), 1,308 SQ pts (37.9%), 67 NE pts (1.9%), and 141 AS pts (4.1%).  For the entire cohort, 1,082 pts (31.3%) had LNR <0.125, 1,758 pts (50.9%) had LNR 0.125-0.5, and 612 pts (17.7%) had LNR >0.5.  There were no significant differences in 5-yr survival among the 3 LNR groups for the entire population (p=0.551).  After lobectomy, 854 pts (32.0%) had LNR <0.125, 1,357 pts (50.9%) had LNR 0.125-0.50, and 455 pts (17.1%) had LNR >0.5.  After pneumonectomy, 228 pts (29.0%) had LNR <0.125, 401 pts (51.0%) had LNR 0.125-0.5, and 157 pts (19.9%) had LNR >0.5.  There was no significant difference in 5-yr survival among the 3 LNR groups in either lobectomy pts (p=0.576) or pneumonectomy pts (p=0.212).  When stratified by histology, we did not find any significant difference in 5-yr survival among the 3 LNR groups in AD pts (p=0.284), SQ pts (p=0.908), NE pts (p=0.065), or AS pts (p=0.662).  There were no differences in 5-yr survival between lobectomy and pneumonectomy pts at LNR <0.125 (p=0.945), LNR 0.125-0.5 (p=0.066), or LNR >0.5 (p=0.39).

Conclusion:  Patients with lower LNR did not have better survival than those with higher LNR in either lobectomy or pneumonectomy pts.  Lower LNR also did not predict better survival in each histology subgroup.  These findings question the prognostic value of LNRs in NSCLC patients with N1 disease after lobectomy or pneumonectomy without PORT and suggest further evaluation of LNRs as a prognostic factor.

34.03 Trends in Opioid Prescribing in Open and Minimally Invasive Thoracic Surgery Patients, 2008-2014

K. A. Robinson1, J. D. Phillips2, D. Agniel3, I. Kohane3, N. Palmer3, G. A. Brat1,3  1Beth Israel Deaconess Medical Center,Surgery,Boston, MA, USA 2Dartmouth-Hitchcock,Thoracic Surgery,Lebanon, NH, USA 3Harvard Medical School,Biomedical Informatics,Boston, MA, USA

Introduction:
The US is facing an opioid epidemic with an increasing number of abuse, misuse, and overdose events. As a major group of prescribers, surgeons must understand the impact that post-surgical opioids have on the long-term outcomes of their patients. Previous work has demonstrated that approximately 6% of opioid-naïve patients have new persistent opioid use postoperatively (Brummett et al., 2017). In thoracic surgery, postoperative pain has been a significant determinant of morbidity. It is generally accepted that video-assisted or minimally invasive approaches allow patients to recover faster and with less postoperative pain. However, recent literature has been unable to show a significant difference in chronic pain after minimally invasive versus open thoracotomy (Brennan & Ph, 2017). In this study, we aimed to identify whether postoperative opioid prescribing differs between patients undergoing minimally invasive and open thoracic surgery.

Methods:
In a de-identified administrative and pharmacy database of over 1.4 million opioid-naïve surgical patients for the years 2008-2014, we retrospectively identified patients undergoing minimally invasive versus open thoracic surgery based on their ICD coding and compared opioid prescribing and postoperative misuse codes between these cohorts.

Results:
1,907 minimally invasive (MIS) and 2,081 open thoracic surgery cases were identified from CPT cohorts. During the study years, the average daily morphine milligram equivalents prescribed decreased for both open and MIS thoracic cases (Figure 1a). However, over the same period, the duration of opioids prescribed after minimally invasive thoracic surgery did not significantly change; in fact, prescription duration trended toward an increase for both open and MIS thoracic surgery (Figure 1b).

Conclusion:
Previous work has demonstrated that increasing the duration of opioid prescribed after surgery is a stronger predictor of opioid misuse than dosage prescribed. By prolonging the length of exposure to opioid medications, prescribers may not be reducing the risk of misuse in their patients. Furthermore, we observed that open and MIS patients were prescribed approximately the same daily dose. This suggests that postoperative prescribing behavior for pain is not defined by the surgery performed. 
 

34.01 Impact of Functional PET Imaging on the Surgical Treatment of Neuroblastoma

W. Hsu1, W. Hsu1  1National Taiwan University Hospital,Division Of Pediatric Surgery, Department Of Surgery,Taipei, ., Taiwan

Introduction:

Gross total resection (GTR) of neuroblastoma (NB) could be predicted by imaging-defined risk factors (IDRFs) on CT/MR images but might also be confounded by other biological features. This study aims to investigate the complementary role of positron emission tomography (PET) scans in predicting GTR of NB in addition to IDRFs.

Methods:

From 2007 to 2014, diagnostic PET scans with 18F-fluorodeoxyglucose (FDG) and 18F-fluoro-dihydroxyphenylalanine (FDOPA) were performed in 42 children with NB at National Taiwan University Hospital, Taipei, Taiwan. The extent of tumor resections was correlated with clinical features and imaging findings. 

Results:

Among 42 NB patients with diagnostic FDG and FDOPA PET images (median age, 2.0 [0.5–4.9] years; male:female, 28:14), 8 patients had primary tumors that responded completely to induction chemotherapy and were excluded from the analysis. Of the remaining 34 patients, 27 (79.4%) achieved GTR of the primary tumor, including 9 patients (26.5%) at the first operation and 18 patients (52.9%) at the best subsequent operation(s), while the other 7 patients (20.6%) had only partial resection. Based on the primary tumors' maximal standardized uptake value (SUVmax) on PET scans, we found that the SUVmax ratio between FDG and FDOPA (G:D) correlated positively with Hexokinase 2 (HK2; P = 0.002) gene expression and negatively with Dopa decarboxylase (DDC; P = 0.03) gene expression. Tumors with a higher-than-median G:D ratio (G:D ≥ 1.4), indicating a "glycolytic" phenotype with less catecholaminergic differentiation, were also correlated with poor-risk genomic types (P < 0.001) and a lower probability of GTR (56% vs. 100%; P = 0.007). The G:D ratio also complemented the anatomical IDRFs from CT/MRI in predicting GTR (GTR rate, 46% vs. 100% among the 20 patients with IDRFs; P = 0.04). However, neither GTR nor IDRF per se was associated with survival outcome.
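The G:D ratio and its dichotomization at the cohort median (1.4) reduce to a one-line computation per tumor. A hedged sketch with illustrative names, not the study's analysis code:

```python
def gd_ratio(suvmax_fdg: float, suvmax_fdopa: float) -> float:
    """G:D = SUVmax(FDG) / SUVmax(FDOPA) of the primary tumor."""
    return suvmax_fdg / suvmax_fdopa

def glycolytic_phenotype(suvmax_fdg: float, suvmax_fdopa: float,
                         cutoff: float = 1.4) -> bool:
    """A higher-than-median G:D ratio (>= 1.4) flags the 'glycolytic'
    phenotype associated with a lower probability of GTR."""
    return gd_ratio(suvmax_fdg, suvmax_fdopa) >= cutoff
```

The cutoff of 1.4 is the median of this particular 34-patient cohort, so it is exposed as a parameter rather than hard-coded.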

Conclusion:

NB tumors with higher FDG uptake and lower FDOPA uptake at diagnosis were associated with a lower likelihood of GTR. Incorporating functional PET imaging may help develop a more tailored, risk-directed surgical strategy for NB patients.

 

34.02 Rate of Secondary Interventions After Open Versus Endovascular AAA Repair

H. Krishnamoorthi1,3, H. Jeon-Slaughter2,4, A. Wall1, S. Banerjee2,4, B. Ramanan1,3, C. Timaran1,3, J. G. Modrall1,3, S. Tsai1,3  1VA North Texas Health Care System,Vascular Surgery,Dallas, TX, USA 2VA North Texas Health Care System,Cardiology,Dallas, TX, USA 3University Of Texas Southwestern Medical Center,Vascular Surgery,Dallas, TX, USA 4University Of Texas Southwestern Medical Center,Cardiology,Dallas, TX, USA

Introduction:  While the long-term durability and improved perioperative outcomes of endovascular AAA repair (EVAR) have been demonstrated, some studies have suggested an increased rate of secondary interventions compared with open AAA repair. More recent data suggest that rates between the two modalities may be similar. We investigated the rate of secondary intervention in patients undergoing elective EVAR or open AAA repair and the effect of AAA size in these two groups of patients.

Methods:  A retrospective, single-institution review was conducted between January 2003 and December 2012. Secondary intervention was defined as any intervention within 30 days of the procedure or an AAA repair-related procedure after 30 days, which included repair of endoleaks and incisional hernia repair. Cochran-Mantel-Haenszel statistics were conducted to examine associations between AAA size and need for secondary interventions over 10 years.

Results: A total of 342 patients underwent elective AAA repair: 274 underwent elective EVAR and 68 underwent open AAA repair.  The mean age was 69±9 years for patients treated with EVAR and 67±7 years for patients treated with open AAA repair. The mean follow-up period was 49 months post-EVAR (standard deviation 29 months) and 78 months post-open repair (standard deviation 46 months).  The rate of secondary intervention was significantly lower in the EVAR group than in the open AAA repair group (14.9% vs. 27.9%, p=0.004). The most common secondary intervention was repair of type II endoleak (n=14, 5.1%) after EVAR and incisional hernia repair (n=4, 5.9%) after open AAA repair. Of the 274 EVAR patients, 133 (48.5%) died during the study period, as did 34 (50%) of the 68 open AAA repair patients.  Need for secondary intervention was not associated with long-term mortality in either the EVAR or the open repair group (p=0.11 and p=0.87, respectively).  Furthermore, in both groups, AAA size was not associated with the rate of secondary intervention.

Conclusion: The rate of secondary intervention in patients treated with EVAR is significantly lower than in patients treated with open AAA repair.  However, secondary intervention is not associated with long-term survival in either group.