35.04 Adequacy of Daily Enoxaparin After Colorectal Surgery: An Examination of Anti-Factor Xa Levels

C. J. Pannucci1, K. I. Fleming1, A. Prazak2, C. Bertolaccini2, B. Pickron3  1University Of Utah,Division Of Plastic Surgery,Salt Lake City, UT, USA 2University Of Utah,Department Of Pharmacy,Salt Lake City, UT, USA 3University Of Utah,Department Of Surgery,Salt Lake City, UT, USA

Introduction:
Colorectal surgery patients, particularly those with malignancy, are known to be at increased risk for post-operative venous thromboembolism (VTE).  Current guidelines recommend enoxaparin prophylaxis to minimize risk for peri-operative VTE.  While enoxaparin 40mg once daily is a commonly prescribed prophylactic dose, whether this dose provides adequate anticoagulation remains unknown; this is relevant because an inadequate enoxaparin dose has been associated with downstream VTE events in other surgical populations.  We examined anti-Factor Xa (aFXa) levels, a marker of anticoagulant effect, in response to enoxaparin 40mg once daily among a prospectively recruited cohort of colorectal surgery patients. 

Methods:
Colorectal surgery patients were prospectively enrolled into this clinical trial (NCT02704052).  Patients received enoxaparin 40mg once daily, initiated 6-18 hours after their surgical procedure.  Peak and trough aFXa levels were drawn, with goals of 0.3-0.5 IU/mL and 0.1-0.2 IU/mL, respectively; these ranges have been shown to maximize VTE risk reduction while minimizing bleeding risk.  We examined the proportion of patients with in-range and out-of-range aFXa levels in response to enoxaparin 40mg once daily and the impact of patient weight on the rapidity of enoxaparin metabolism.
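
As an aside for the reader, the in-range/out-of-range classification above reduces to simple threshold logic.  The sketch below is illustrative only and is not part of the study protocol; the function name and example values are hypothetical.

```python
# A minimal sketch, assuming simple threshold logic; not study code.

PEAK_GOAL = (0.3, 0.5)    # IU/mL, goal peak range from the abstract
TROUGH_GOAL = (0.1, 0.2)  # IU/mL, goal trough range from the abstract

def classify_afxa(level, goal):
    """Return 'low', 'in range', or 'high' for an aFXa level vs. a goal range."""
    lo, hi = goal
    if level < lo:
        return "low"
    if level > hi:
        return "high"
    return "in range"

# Example: a peak of 0.22 IU/mL falls below the 0.3-0.5 IU/mL goal.
print(classify_afxa(0.22, PEAK_GOAL))    # -> low
print(classify_afxa(0.15, TROUGH_GOAL))  # -> in range
```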

Results:
To date, 39 colorectal surgery patients who received enoxaparin 40mg once daily have been enrolled.  One patient had post-operative rectal bleeding requiring enoxaparin cessation prior to aFXa lab draws.  Of the remaining 38 patients, 63.2% (n=24) had inadequate peak aFXa levels (<0.3 IU/mL) in response to enoxaparin 40mg once daily, 28.9% (n=11) had in-range peak aFXa levels (0.3-0.5 IU/mL), and 7.9% (n=3) were over-anticoagulated (>0.5 IU/mL).  Patient weight was associated with the rapidity of enoxaparin metabolism (r2=0.41).  Among 22 patients who had trough levels drawn, 81.8% (n=18) had an undetectable trough level at 12 hours; thus, the majority of patients actually receive no detectable chemical prophylaxis for 12 hours per day. 

Conclusion:
Based on pharmacodynamics, enoxaparin 40mg once daily is inadequate for the majority of colorectal surgery patients.  For a medication that is administered daily, four out of five colorectal surgery patients receive no detectable anticoagulation for 12 hours per day.  This study plans to continue patient accrual for one year, with the goal of correlating aFXa with clinically relevant endpoints including 90-day VTE and 90-day bleeding.  As patient weight predicts rapidity of enoxaparin metabolism, a weight-based enoxaparin dosing strategy might be more appropriate.
 

35.02 Quantitative Measure of Intestinal Permeability Correlates with Sepsis

S. A. Angarita2, T. A. Russell2, P. Ruchala3, S. Duarte2, I. A. Elliott2, J. P. Whitelegge3, A. Zarrinpar1  1University Of Florida,Surgery,Gainesville, FL, USA 2University Of California – Los Angeles,Surgery,Los Angeles, CA, USA 3University Of California – Los Angeles,Pasarow Mass Spectrometry Laboratory,Los Angeles, CA, USA

Introduction: Loss of intestinal barrier integrity plays a key role in the development and perpetuation of disease states such as inflammatory bowel disease and celiac disease. It is also crucial to the onset of sepsis and multiple organ failure in situations of intestinal hypoperfusion, including trauma and major surgery, or in the setting of abnormal blood flow such as portal hypertension. A variety of tests have been developed to assess intestinal epithelial cell damage, intestinal tight junction status, and the consequences of intestinal barrier integrity loss, i.e., increased intestinal permeability.  These methods suffer from a lack of sensitivity, a prolonged period of specimen collection, or high expense. We have developed a technique to measure the concentration of the nonabsorbable food dye FD&C Blue #1 in the blood and sought to assess its utility in measuring intestinal barrier function in humans.

Methods:  Four healthy volunteers and ten subjects in the intensive care unit were recruited in accordance with an IRB-approved protocol. Subjects were given 0.5 mg/kg Blue #1 orally or per nasogastric tube as an aqueous solution of diluted food coloring (0.5 mg/mL). Five blood specimens were drawn per subject (5 mL/draw): 0 hours (prior to dose), 1 hour, 2 hours, 4 hours, and 8 hours. The plasma was then extracted with an acidified mixture of isopropanol and acetonitrile. The organic extracts were then analyzed by high performance liquid chromatography/mass spectrometry looking for the presence of the unmodified dye.
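
The dosing arithmetic above is worth making explicit: at 0.5 mg/kg from a 0.5 mg/mL solution, the administered volume in mL equals body weight in kg.  A minimal sketch, not study code; the function name is hypothetical.

```python
def blue1_dose(weight_kg, dose_mg_per_kg=0.5, conc_mg_per_ml=0.5):
    """Return (dose in mg, volume in mL) for the oral Blue #1 challenge."""
    dose_mg = weight_kg * dose_mg_per_kg
    volume_ml = dose_mg / conc_mg_per_ml
    return dose_mg, volume_ml

# Example: a 70 kg subject receives 35 mg of dye in 70 mL of solution.
print(blue1_dose(70))  # -> (35.0, 70.0)
```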

Results: This study was performed in two phases. Phase 1 sought to establish the lower limit of detection and to measure the baseline/normal intestinal absorption of Blue #1. To do so, four healthy subjects were recruited; we found no detectable absorption. In Phase 2, ten patients in the intensive care unit were recruited. Six patients met criteria for septic shock (identified by a vasopressor requirement to maintain a mean arterial pressure of 65 mm Hg or greater in the absence of a hypovolemic or cardiogenic etiology). The septic patients demonstrated significantly greater absorption of Blue #1 at 2 hours and 8 hours.

Conclusion: We have developed a novel, easy-to-use method to measure intestinal permeability. The method utilizes a food grade non-absorbable dye that can be detected by mass spectrometry analysis of patient blood at multiple time points following oral consumption. This method would allow for the measurement of the intestinal permeability of patients at risk for sepsis, organ failure, or other conditions where loss of function of the intestinal barrier could lead to adverse symptoms or secondary effects.

34.10 Non-invasive Fibrosis Marker Impacts the Mortality after Hepatectomy for Hepatoma among US Veterans

F. B. Maegawa1,2, L. Shehorn3, J. B. Kettelle1,2, T. S. Riall2  1Southern Arizona VA Health Care System,Department Of Surgery,Tucson, AZ, USA 2University Of Arizona,Department Of Surgery,Tucson, AZ, USA 3Southern Arizona VA Health Care System,Department Of Nursing,Tucson, AZ, USA

Introduction:
The clinical role of non-invasive fibrosis markers (NIFM) on the mortality of patients undergoing hepatectomy for hepatocellular carcinoma (HCC) is not well established. We investigate the long-term impact of NIFM on mortality after hepatectomy for HCC. 

Methods:
This analysis utilized the Department of Veterans Affairs Corporate Data Warehouse database between 2000-2012. The severity of hepatic fibrosis was determined by the AST-platelet ratio index (APRI) and the Fibrosis-4 score (FIB-4). Kaplan-Meier survival and Cox proportional hazard regression methods were utilized for analysis. 
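
For readers unfamiliar with the two markers, both are computable from routine labs.  The sketch below uses the published definitions of APRI and FIB-4 together with the cutoffs used in this study; variable names and example values are illustrative, not study data.

```python
import math

# APRI and FIB-4 per their published definitions. Units: AST/ALT in U/L,
# platelets in 10^9/L, age in years.

def apri(ast, ast_uln, platelets):
    """AST-to-platelet ratio index; >1 marked significant fibrosis in this study."""
    return (ast / ast_uln) * 100 / platelets

def fib4(age, ast, alt, platelets):
    """Fibrosis-4 score; >3.25 marked significant fibrosis in this study."""
    return (age * ast) / (platelets * math.sqrt(alt))

# Hypothetical example: age 65, AST 80 U/L (ULN 40), ALT 64 U/L, platelets 120.
print(round(apri(80, 40, 120), 2))      # -> 1.67 (above the >1 cutoff)
print(round(fib4(65, 80, 64, 120), 2))  # -> 5.42 (above the >3.25 cutoff)
```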

Results:
Mean age, MELD score, and BMI were 65.6 (SD: ± 9.4) years, 9 (SD: ± 3.1), and 28 (SD: ± 4.9) kg/m2, respectively. Most of the patients were white (64.5%), followed by black (27.6%). The most common operation was partial lobectomy (56.5%), followed by right hepatectomy (28.7%). Out of 475 veterans who underwent hepatectomy for HCC, 26.3% had significant fibrosis utilizing APRI (index >1) and 29.2% utilizing FIB-4 (score > 3.25). The long-term survival among veterans with APRI > 1 was significantly worse compared to those with a normal index. Kaplan-Meier survival analysis revealed a median survival of 2.76 vs 4.38 years, respectively (Log-Rank: p < 0.0018). In contrast, the FIB-4 score was not associated with worse survival. Median survival among veterans with FIB-4 > 3.25 compared to those with a normal score was 3.28 vs 4.22 years, respectively (Log-Rank: p = 0.144). Unadjusted Cox proportional hazard regression showed that APRI >1 is associated with increased mortality (HR: 1.45; 95% CI 1.14 – 1.84). After adjusting for age, race, BMI and MELD score, APRI remained associated with increased mortality (HR: 1.36, 95% CI: 1.02 – 1.82). FIB-4 was not associated with increased mortality in either unadjusted or adjusted analyses (HR: 1.19; 95% CI: 0.94 – 1.50 and HR: 1.29; 95% CI: 0.96 – 1.72, respectively).

Conclusion:
APRI can be used as a preoperative tool to predict long-term mortality after hepatectomy, refining the selection criteria for liver resection for HCC. These results suggest patients with APRI > 1 are likely to benefit from other curative therapies, such as transplantation.
 

35.01 Triple-drug Therapy to Prevent Pancreatic Fistula in Patients with a High Drain Amylase Level

T. Adachi1, S. Ono1, T. Adachi1, M. Yamashita1, T. Hara1, A. Soyama1, M. Hidaka1, K. Kanetaka1, M. Takatsuki1, S. Eguchi1  1Nagasaki University,Department Of Surgery,Nagasaki, Japan

Introduction: A high drain amylase level is a well-known predictive marker for the development of a pancreatic fistula (PF) after pancreaticoduodenectomy (PD). Any sign of PF following a PD warrants immediate preventive measures to avoid the development of PF. We aimed to determine the efficacy of a triple-drug therapy (TDT) regimen using gabexate mesilate, octreotide, and carbapenem antibiotics to prevent PF in patients showing a high drain amylase level on postoperative day 1 (POD1) after PD.

Methods: We enrolled 183 patients who had undergone a PD since 2007. Patients were divided into two groups based on the study period. The former period group (2007-2011, n = 81) included patients in whom no particular treatment was administered even if their drain amylase level on POD1 was high (≥10,000 IU/L). In the latter period group (n = 102), patients with a high drain amylase level on POD1 received TDT [gabexate mesilate 600 mg/day continuous intravenous (civ.), octreotide 150 µg/day civ., and carbapenem 1.0 g/day IV] along with fasting for one week. All other postoperative management, including the day of drain removal (POD5), was the same across both periods. The primary endpoint was the incidence of PF [grade B or higher, as defined by the International Study Group of Pancreatic Fistula (ISGPF) criteria]. 

Results: The incidence of PF among all enrolled patients was 10.9%. The incidence of a high drain amylase level (≥10,000 IU/L) on POD1 was 11.1% in the former group and 17.6% in the latter group. The incidence of PF in patients whose drain amylase level was not high (<10,000 IU/L) was equivalent between the two groups (8.2 vs. 4.8%, p = 0.36); however, among patients with a high drain amylase level, the incidence of PF was markedly lower in the latter group, indicating that TDT effectively prevented PF (88.9 vs. 11.1%, p < 0.001).

Conclusion: TDT is an effective treatment strategy to prevent PF even in patients with a high drain amylase level after a PD procedure.
 

34.09 Variations in Demographics and Outcomes for Extracorporeal Membrane Oxygenation in the US: 2008-2014

K. L. Bailey1, Y. Seo1, E. Aguayo1, V. Dobaria1, Y. Sanaiha1, R. J. Shemin1, P. Benharash1  1David Geffen School Of Medicine, University Of California At Los Angeles,Division Of Cardiac Surgery,Los Angeles, CA, USA

Introduction:

Extracorporeal membrane oxygenation (ECMO) is increasingly used as a life-sustaining measure in patients with acute or end-stage cardiac and/or respiratory failure. We aimed to analyze the national trends in cost and clinical outcomes for venoarterial and venovenous ECMO. We further assessed whether variations in the utilization of ECMO exist based on geography and hospital size. 

Methods:

All adult ECMO patients in the 2008-2014 National Inpatient Sample (NIS) were analyzed. The NIS is an all-payer inpatient database that yields estimates for more than 35 million U.S. hospitalizations annually. Patient demographics, hospital characteristics, and outcomes including mortality, cost, and length of stay were evaluated using non-parametric tests for trends.

Results:

An estimated 18,685 adult ECMO patients nationally were categorized by indication: 8,062 (43.2%) respiratory failure, 7,817 (41.8%) postcardiotomy, 1,198 (6.4%) lung transplant, 903 (4.8%) cardiogenic shock, and 706 (3.8%) heart transplant patients. Annual ECMO admissions increased significantly from 1,137 in 2008 to 5,240 in 2014 (P<0.001). The respiratory failure group showed the greatest increase, from 416 cases in 2008 to 2,400 cases in 2014 (P=0.003). Average cost and length of stay for overall admissions increased significantly from $125,000 ± $12,457 to $178,677 ± $8,948 (P=0.013) and from 21.8 to 24.0 days (P=0.04), respectively. Elixhauser scores measuring comorbidities increased from 3.17 to 4.14 over the study period. Mortality decreased from 61.4% to 46.0% among total admissions (P<0.001) and among all indications except cardiogenic shock and heart transplantation. The heart transplant group had the highest percentage of neurologic complications (14.9%). ECMO admissions exhibited a persistent increase at hospitals in the South, West, and Midwest (P<0.001, P<0.001, and P=0.002, respectively), with the South having the largest fractional growth. While ECMO was utilized more frequently at medium and large hospitals (P<0.001), a smaller fraction of cases was performed at large centers in more recent years. 

Conclusion:

The past decade has seen an exponential growth of ECMO at medium and large hospitals in multiple regions of the US, paralleling a significant improvement in outcomes across cardiac and respiratory indications. This is despite a higher risk profile of patients being placed on ECMO in more recent times. Developments in ECMO technology and care of critically ill patients are likely responsible for greater survival and longer lengths of stay. The rapid growth of this technology and costs of care warrant further standardization in order to achieve optimal outcomes in the present era of value-based healthcare delivery.

34.08 Prolonged Post-Discharge Opioid Use After Liver Transplantation

D. C. Cron1, H. Hu1, J. S. Lee1, C. M. Brummett2, J. F. Waljee1, M. J. Englesbe1, C. J. Sonnenday1  2University Of Michigan Medical School,Anesthesiology,Ann Arbor, MI, USA 1University Of Michigan Medical School,Surgery,Ann Arbor, MI, USA

Introduction:
Prolonged opioid use following surgical procedures is common. End-stage liver disease is associated with painful comorbidities, and liver transplant recipients may be at risk of postoperative prolonged opioid use. We studied the incidence and predictors of prolonged opioid use following hospital discharge after liver transplantation. 

Methods:
Using a national dataset of employer-based insurance claims, we identified N=1,821 adults who underwent liver transplantation between 12/2009 and 8/2015. Prolonged opioid use was defined as filling an opioid prescription within two weeks of post-transplant hospital discharge and also filling ≥1 opioid prescription between 90-180 days post-discharge. We stratified our analysis by preoperative opioid use status: opioid-naïve, chronic opioid use (≥120 days supply in the year before transplant, or ≥3 opioid prescriptions in the 3 months before surgery), and intermittent use (all other non-chronic use). We also investigated demographics, comorbidities, liver disease etiology, and hospital length of stay (LOS) as potential predictors of prolonged use. We used multivariate logistic regression to compute covariate-adjusted incidence of prolonged opioid use. 
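
The cohort definitions above translate directly into code.  The sketch below is a minimal illustration under those stated definitions; the field names and example values are hypothetical, not the study's data model.

```python
def preop_opioid_status(days_supply_past_year, fills_past_90d):
    """Chronic: >=120 days supply in the prior year OR >=3 fills in the prior 3 months."""
    if days_supply_past_year >= 120 or fills_past_90d >= 3:
        return "chronic"
    if days_supply_past_year > 0 or fills_past_90d > 0:
        return "intermittent"
    return "naive"

def prolonged_use(fill_days_post_discharge):
    """Prolonged: a fill within 14 days of discharge AND a fill 90-180 days out."""
    early = any(d <= 14 for d in fill_days_post_discharge)
    late = any(90 <= d <= 180 for d in fill_days_post_discharge)
    return early and late

print(preop_opioid_status(130, 1))  # -> chronic
print(prolonged_use([3, 120]))      # -> True (fills at days 3 and 120)
```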

Results:
In the year before liver transplantation, 55% of patients were opioid-naïve, 34% had intermittent use, and 11% had chronic use. Overall, 47% of transplant recipients filled an opioid within 2 weeks of hospital discharge, and 19% of all patients had prolonged use. The adjusted rate of prolonged opioid use was 8-fold higher among preoperative chronic opioid users compared to opioid-naïve (61% vs. 8%, P<0.001, Figure). Among preoperatively opioid-naïve patients, predictors of prolonged post-transplant opioid use included: hospital LOS <21 days (Odds ratio [OR]=1.93, P=0.013) and any psychiatric comorbidity (OR=1.8, P=0.030). Age, gender, insurance type, medical comorbidities, and liver disease etiology were not predictive of prolonged opioid use.

Conclusion:
Opioid use remains common beyond 90 days after hospital discharge following liver transplantation, with particularly high rates among preoperative chronic opioid users. Close outpatient follow-up and coordination of care are necessary post-transplant to optimize pain control and decrease rates of prolonged opioid use.
 

34.07 Comparison of Premature Death from Firearms versus Motor Vehicles in Pediatric Patients

J. D. Oestreicher1,2, W. Krief1,2, N. Christopherson3,6, C. J. Crilly5, L. Rosen4, F. Bullaro1,2  1Steven And Alexandra Cohen Children’s Medical Center,Pediatric Emergency Medicine,New Hyde Park, NY, USA 2Hofstra Northwell School Of Medicine,Pediatrics,Hempstead, NY, USA 3Northwell Health Trauma Institute,Manhasset, NY, USA 4Feinstein Institute For Medical Research,Biostatistics,Manhasset, NY, USA 5Hofstra Northwell School Of Medicine,Hempstead, NY, USA 6Steven And Alexandra Cohen Children’s Medical Center,New Hyde Park, NY, USA

Introduction:
Gun violence is the second leading cause of pediatric trauma death, after only motor vehicles. Though federally funded scientific data have driven life-saving policy on issues from lead poisoning to SIDS, there remains little data on pediatric gun violence. While Congress spends $240 million annually on researching traffic safety, it explicitly bans research on gun violence despite the fact that, with the inclusion of adults, guns and cars kill the same number of Americans annually. Therefore, we sought to describe demographic and clinical characteristics of pediatric firearm and motor vehicle injuries and compare their impact on years of potential life lost (YPLL). We hypothesized that these two mechanisms have similar impact on premature death, thus highlighting this staggering disparity in research.

Methods:
We analyzed data from the National Trauma Data Bank (NTDB) in patients ≤21 years of age presenting to a participating emergency department (ED) with a pediatric firearm (PF) or pediatric motor vehicle (PMV) event from 2009 through 2014. We examined demographic and clinical characteristics of PF and PMV cases using descriptive statistics. The Cochran-Armitage test was used to trend PF cases over time. YPLL was calculated for PF and PMV cases, using 75 years of age as the reference. Because the large sample size yielded p<0.0001 for all comparisons, clinical rather than statistical significance was assessed.
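
The YPLL metric is a simple transform of age at death against the 75-year reference: each death contributes max(0, 75 − age).  A minimal sketch with hypothetical ages, not study data:

```python
REFERENCE_AGE = 75  # years, per the abstract

def ypll(ages_at_death):
    """Years of potential life lost: sum of max(0, 75 - age) over deaths."""
    return sum(max(0, REFERENCE_AGE - age) for age in ages_at_death)

# Example: deaths at ages 16 and 18 contribute 59 + 57 = 116 years.
print(ypll([16, 18]))  # -> 116
```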

Results:
A total of 1,047,018 pediatric ED visits were identified, with 5.7% PF cases and 27.8% PMV cases. There was a significant decline in PF cases from 2009 (6.2%) to 2014 (5.3%). Demographics for PF cases were as follows: mean age of 17.9 years, 89.0% male, 60.0% African American, 16.9% Hispanic. For PMV: mean age of 15.5 years, 60.6% male, 60.3% Caucasian, and 16.5% Hispanic. PF cases were more likely to die in the ED or hospital (12.5% vs 3.2%), less likely to be transferred to a different hospital (2.5% vs 3.9%), and had similar admission rates (77.5% vs 78.3%) and median lengths of stay (2.0 days). Assault accounted for 79.3% of PF cases; self-inflicted, 4.8%; and accidental, 11.7%. Self-inflicted PF cases had a higher median Injury Severity Score (13) than assault (9) or accidental (4) and were more likely to die (40.2% vs 11.4% vs 6.7%). Accidental PF cases tended to be younger (15.7 years) as compared to assault (18.2 years) and self-inflicted (17.8 years). Among all pediatric ED visits, YPLL from a PF case was 4.1 per 10 visits and, for PMV, 5.4 per 10 visits.

Conclusion:
Motor vehicles and firearms each remain a major cause of premature death. For traumatized children who are brought to an ED, four children die from a gun for every five who die from a motor vehicle, leading to similar and profound YPLL. An evidence-based approach has saved millions of lives from motor vehicle crashes; the same federal funding and research should be directed at the epidemic of pediatric firearm injury.
 

34.04 Hemodialysis Predicts Poor Outcomes after Infrapopliteal Endovascular Revascularization

C. W. Hicks1, J. K. Canner2, K. Kirkland2, M. B. Malas1, J. H. Black1, C. J. Abularrage1  1Johns Hopkins University School Of Medicine,Division Of Vascular Surgery,Baltimore, MD, USA 2Johns Hopkins University School Of Medicine,Center For Surgical Trials And Outcomes Research,Baltimore, MD, USA

Introduction:

Hemodialysis (HD) has been shown to be an independent predictor of poor outcomes after femoropopliteal revascularization procedures in patients with critical limb ischemia (CLI). However, HD patients tend to have isolated infrapopliteal disease. We aimed to compare outcomes for HD versus non-HD patients following infrapopliteal open lower extremity bypass (LEB) and endovascular peripheral vascular interventions (PVI).

Methods:

Data from the Society for Vascular Surgery Vascular Quality Initiative database (2008-2014) were analyzed. All patients undergoing infrapopliteal LEB or PVI for rest pain or tissue loss were included. One-year primary patency (PP), secondary patency (SP), and major amputation outcomes were analyzed for HD vs. non-HD stratified by treatment approach using both univariable and multivariable analyses.

Results:

A total of 1,688 patients were included: 348 undergoing LEB (HD=44 vs. non-HD=304) and 1,340 undergoing PVI (HD=223 vs. non-HD=1,117). Patients on HD more frequently underwent revascularization for tissue loss (89% vs. 77%, P<0.001) and more frequently had ≥2 comorbidities (91% vs. 76%, P<0.001). Among patients undergoing LEB, one-year PP (66% vs. 69%) and SP (71% vs. 78%) were similar for HD vs. non-HD (P≥0.25), but major amputations occurred more frequently in the HD group (27% vs. 14%; P=0.03). Among patients undergoing PVI, one-year PP (70% vs. 78%) and SP (82% vs. 90%) were lower and major amputations more frequent (27% vs. 10%) for HD patients (all P<0.001). After correcting for baseline differences between groups, outcomes were similar for HD vs. non-HD patients undergoing LEB (P≥0.21), but persistently worse for HD patients undergoing PVI (all P≤0.007) (Table).

Conclusion:

Hemodialysis is an independent predictor of poor patency and a higher risk of major amputation following infrapopliteal endovascular revascularization procedures for the treatment of critical limb ischemia. The use of endovascular interventions in these higher-risk patients is not associated with improved limb salvage outcomes and may be an inappropriate use of healthcare resources.

34.05 Cognitive Impairment and Graft Loss in Kidney Transplant Recipients

J. M. Ruck1, A. G. Thomas1, A. A. Shaffer1,2, C. E. Haugen1, H. Ying1, F. Warsame1, N. Chu2, M. C. Carlson3,4, A. L. Gross2,4, S. P. Norman5, D. L. Segev1,2, M. McAdams-DeMarco1,2  1Johns Hopkins University School Of Medicine,Department Of Surgery,Baltimore, MD, USA 2Johns Hopkins School Of Public Health,Department Of Epidemiology,Baltimore, MD, USA 3Johns Hopkins School Of Public Health,Department Of Mental Health,Baltimore, MD, USA 4Johns Hopkins University Center On Aging And Health,Baltimore, MD, USA 5University Of Michigan,Department Of Internal Medicine, Division Of Nephrology,Ann Arbor, MI, USA

Introduction:  Cognitive impairment is common in patients with end-stage renal disease and impairs adherence to complex treatment regimens. Given the complexity of immunosuppression regimens following kidney transplantation, we hypothesized that cognitive impairment might be associated with an increased risk of all-cause graft loss among kidney transplant (KT) recipients. 

Methods:  Using the Modified Mini-Mental State (3MS) examination, we measured global cognitive function in a prospective cohort of 864 KT candidates (8/2009-7/2016). We estimated the association between pre-KT cognitive impairment and graft loss, using hybrid registry-augmented Cox regression to adjust for confounders precisely estimated in the Scientific Registry of Transplant Recipients (N=101,718). We compared the risk of graft loss between KT recipients with vs. without any cognitive impairment (3MS<80) and those with vs. without severe cognitive impairment (3MS<60), stratified by the type of transplant (living donor KT (LDKT) or deceased donor KT (DDKT)). We extrapolated estimates of the prevalence of any cognitive impairment and of severe cognitive impairment in the national kidney transplant recipient population using predictive mean matching and multiple imputation by chained equations.
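
The 3MS strata above (any impairment <80, severe <60) amount to a simple classifier.  The sketch below is illustrative; the function name is hypothetical, and note that the study's "any impairment" group (3MS<80) includes the severe subgroup, whereas the function returns the most specific stratum.

```python
def impairment_stratum(score_3ms):
    """Return the most specific stratum for a 3MS score (0-100 scale)."""
    if score_3ms < 60:
        return "severe impairment"  # subset of "any impairment" in the study
    if score_3ms < 80:
        return "any impairment"
    return "no impairment"

print(impairment_stratum(72))  # -> any impairment
```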

Results: The prevalence of any cognitive impairment in this 864-patient multicenter cohort was 6.7% among LDKT recipients and 12.4% among DDKT recipients, extrapolating nationally to 8.1% among LDKT recipients and 13.8% among DDKT recipients. LDKT recipients with any cognitive impairment had higher graft loss risk than recipients without any cognitive impairment (5-year graft loss: 45.5% vs. 10.6%, p<0.01; aHR: 3.28, 95% CI: 1.26-8.51, p=0.02); those with severe impairment had a risk of similar magnitude that was not statistically significant (aHR: 2.79, 95% CI: 0.74-10.61, p=0.1). DDKT recipients with any cognitive impairment had no increase in graft loss vs. those without any cognitive impairment, but those with severe cognitive impairment had higher graft loss risk (5-year graft loss: 53.0% vs. 24.0%, p=0.04; aHR: 2.97, 95% CI: 1.38-6.29, p<0.01). 

Conclusion: Cognitive impairment is common among both LDKT and DDKT recipients in the United States. Given these associations between cognitive impairment and graft loss, pre-KT screening for impairment is warranted to identify and more carefully follow higher-risk KT recipients. 

 

34.06 Lymph Node Ratio Does Not Predict Survival after Surgery for Stage-2 (N1) Lung Cancer in SEER

D. T. Nguyen2, J. P. Fontaine1,2, L. Robinson1,2, R. Keenan1,2, E. Toloza1,2  1Moffitt Cancer Center,Department Of Thoracic Oncology,Tampa, FL, USA 2University Of South Florida Health Morsani College Of Medicine,Tampa, FL, USA

Introduction:   Stage-2 nonsmall-cell lung cancers (NSCLC) include T1N1M0 and T2N1M0 tumors in the current Tumor-Nodal-Metastases (TNM) classification and are usually treated surgically with lymph node (LN) dissection and adjuvant chemotherapy.  Multiple studies report a high lymph node ratio (LNR), which is the number of positive LNs divided by total LNs resected, as a negative prognostic factor in NSCLC patients with N1 disease who underwent surgical resection with postoperative radiation therapy (PORT).  We sought to determine whether a higher LNR predicts worse survival after lobectomy or pneumonectomy in NSCLC patients (pts) with N1 disease who never received PORT.

Methods:   Using Surveillance, Epidemiology, and End Results (SEER) data, we identified pts who underwent lobectomy or pneumonectomy with LN excision (LNE) for T1N1 or T2N1 NSCLC from 1988-2013.  We excluded pts who had radiation therapy, multiple primary NSCLC tumors, or zero or an unknown number of LNs resected.  We included pts with Adenocarcinoma (AD), Squamous Cell (SQ), Neuroendocrine (NE), or Adenosquamous (AS) histology.  Log-rank test was used to compare Kaplan-Meier survival of pts who had LNR <0.125 vs. 0.125-0.5 vs. >0.5, stratified by surgical type and histology.
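
The LNR computation and the three strata above are straightforward to express.  A minimal sketch with hypothetical counts, not study code:

```python
def lnr(positive_nodes, total_nodes):
    """LNR = positive lymph nodes / total lymph nodes resected (total > 0)."""
    return positive_nodes / total_nodes

def lnr_group(ratio):
    """Map an LNR to the three strata used in this analysis."""
    if ratio < 0.125:
        return "LNR < 0.125"
    if ratio <= 0.5:
        return "LNR 0.125-0.5"
    return "LNR > 0.5"

# Example: 3 positive of 12 resected nodes -> 0.25, the middle stratum.
print(lnr_group(lnr(3, 12)))  # -> LNR 0.125-0.5
```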

Results:  Of 3,452 pts, 2666 (77.2%) had lobectomy and 786 (22.8%) had pneumonectomy.  There were 1935 AD pts (56.1%), 1308 SQ pts (37.9%), 67 NE pts (1.9%), and 141 AS pts (4.1%).  When comparing all 3 LNR groups for the entire cohort, 1082 pts (31.3%) had LNR <0.125, 1758 pts (50.9%) had LNR 0.125-0.5, and 612 pts (17.7%) had LNR >0.5.  There were no significant differences in 5-yr survival among the 3 LNR groups for the entire population (p=0.551).  After lobectomy, 854 pts (32.0%) had LNR <0.125, 1357 pts (50.9%) had LNR 0.125-0.50, and 455 pts (17.1%) had LNR >0.5.  After pneumonectomy, 228 pts (29.0%) had LNR <0.125, 401 pts (51.0%) had LNR 0.125-0.5, and 157 pts (19.9%) had LNR >0.5.  There was no significant difference in 5-yr survival among the 3 LNR groups in either lobectomy pts (p=0.576) or pneumonectomy pts (p=0.212).  When stratified by histology, we did not find any significant difference in 5-yr survival among the 3 LNR groups in AD pts (p=0.284), SQ pts (p=0.908), NE pts (p=0.065), or AS pts (p=0.662).  There were no differences in 5-yr survival between lobectomy vs. pneumonectomy pts at LNR <0.125 (p=0.945), at LNR 0.125-0.5 (p=0.066), or at LNR >0.5 (p=0.39).

Conclusion:  Patients with lower LNR did not have better survival than those with higher LNR in either lobectomy or pneumonectomy pts.  Lower LNR also did not predict better survival in each histology subgroup.  These findings question the prognostic value of LNRs in NSCLC patients with N1 disease after lobectomy or pneumonectomy without PORT and suggest further evaluation of LNRs as a prognostic factor.

34.03 Trends in Opioid Prescribing From Open and Minimally Invasive Thoracic Surgery Patients 2008-2014

K. A. Robinson1, J. D. Phillips2, D. Agniel3, I. Kohane3, N. Palmer3, G. A. Brat1,3  1Beth Israel Deaconess Medical Center,Surgery,Boston, MA, USA 2Dartmouth-Hitchcock,Thoracic Surgery,Lebanon, NH, USA 3Harvard Medical School,Biomedical Informatics,Boston, MA, USA

Introduction:
The US is facing an opioid epidemic, with increasing numbers of abuse, misuse, and overdose events. As a major group of prescribers, surgeons must understand the impact that post-surgical opioids have on the long-term outcomes of their patients. Previous work has demonstrated that approximately 6% of opioid-naïve patients have new persistent opioid use postoperatively (Brummett et al., 2017). In thoracic surgery, postoperative pain has been a significant determinant of morbidity. It is generally accepted that video-assisted or minimally invasive approaches allow patients to recover faster and with less postoperative pain. However, recent literature has been unable to show a significant difference in chronic pain after minimally invasive versus open thoracotomy (Brennan & Ph, 2017). In this study, we aimed to identify whether there was a difference in postoperative opioid prescribing between patients undergoing minimally invasive and open thoracic surgery.

Methods:
Using a de-identified administrative and pharmacy database of over 1.4 million opioid-naïve surgical patients for the years 2008-2014, we retrospectively identified patients undergoing minimally invasive versus open thoracic surgery based upon ICD coding and compared opioid prescribing and post-operative misuse codes between the two cohorts.

Results:
1907 minimally invasive (MIS) and 2081 open thoracic surgery cases were identified from CPT cohorts. During the years of the study, the average daily morphine milligram equivalents prescribed decreased for both open and MIS thoracic cases (Figure 1a). However, during this same time period, the duration of opioids prescribed after minimally invasive thoracic surgery did not significantly change. In fact, prescription duration trended upward for both open and MIS thoracic surgery (Figure 1b).

Conclusion:
Previous work has demonstrated that increasing the duration of opioid prescribed after surgery is a stronger predictor of opioid misuse than dosage prescribed. By prolonging the length of exposure to opioid medications, prescribers may not be reducing the risk of misuse in their patients. Furthermore, we observed that open and MIS patients were prescribed approximately the same daily dose. This suggests that postoperative prescribing behavior for pain is not defined by the surgery performed. 
 

34.01 Impact of Functional PET Imaging on the Surgical Treatment of Neuroblastoma

W. Hsu1, W. Hsu1  1National Taiwan University Hospital,Division Of Pediatric Surgery, Department Of Surgery,Taipei, Taiwan

Introduction:

Gross total resection (GTR) of neuroblastoma (NB) could be predicted by imaging-defined risk factors (IDRFs) on CT/MR images but might also be confounded by other biological features. This study aims to investigate the complementary role of positron emission tomography (PET) scans in predicting GTR of NB in addition to IDRFs.

Methods:

From 2007 to 2014, diagnostic PET scans with 18F-fluorodeoxyglucose (FDG) and 18F-fluoro-dihydroxyphenylalanine (FDOPA) were performed in 42 children with NB at National Taiwan University Hospital, Taipei, Taiwan. The extent of tumor resections was correlated with clinical features and imaging findings. 

Results:

Among 42 NB patients with diagnostic FDG and FDOPA PET images (median age, 2.0 [0.5–4.9] years; male:female, 28:14), 8 patients had primary tumors that responded completely to induction chemotherapy and were excluded from the analysis. Of the remaining 34 patients, 27 (79.4%) achieved GTR of the primary tumor, including 9 patients (26.5%) at the first operation and 18 patients (52.9%) at the best subsequent operation(s), while the other 7 patients (20.6%) had only partial resection. Based on the primary tumors’ maximum standardized uptake value (SUVmax) on PET scans, we found that the SUVmax ratio between FDG and FDOPA (G:D) was positively correlated with Hexokinase 2 (HK2; P = 0.002) gene expression but negatively correlated with Dopa decarboxylase (DDC; P = 0.03) gene expression levels. Tumors with a higher-than-median G:D ratio (G:D ≥ 1.4), indicating a “glycolytic” phenotype with less catecholaminergic differentiation, were also correlated with poor-risk genomic types (P < 0.001) and a lower probability of GTR (56% vs. 100%; P = 0.007). Using the G:D ratio to predict GTR also complemented the anatomical IDRFs from CT/MRI (GTR rate, 46% vs. 100% among 20 patients with IDRF; P = 0.04). Yet, neither GTR nor IDRF per se was associated with survival outcome.

Conclusion:

NB tumors with higher FDG uptake and lower FDOPA uptake at diagnosis were associated with a lower likelihood of GTR. The incorporation of functional PET imaging may help develop a more tailored, risk-directed surgical strategy for NB patients.

 

34.02 Rate of Secondary Interventions After Open Versus Endovascular AAA Repair

H. Krishnamoorthi1,3, H. Jeon-Slaughter2,4, A. Wall1, S. Banerjee2,4, B. Ramanan1,3, C. Timaran1,3, J. G. Modrall1,3, S. Tsai1,3  1VA North Texas Health Care System,Vascular Surgery,Dallas, TX, USA 2VA North Texas Health Care System,Cardiology,Dallas, TX, USA 3University Of Texas Southwestern Medical Center,Vascular Surgery,Dallas, TX, USA 4University Of Texas Southwestern Medical Center,Cardiology,Dallas, TX, USA

Introduction:  While the long-term durability and improved peri-operative outcomes of endovascular AAA repair (EVAR) have been demonstrated, some studies have suggested an increased rate of secondary interventions compared with open AAA repair. More recent data suggest that rates between the two modalities may be similar. We investigated the rate of secondary intervention in patients undergoing elective EVAR or open AAA repair and the effect of AAA size in these two groups of patients.

Methods:  A retrospective, single-institution review was conducted between January 2003 and December 2012. Secondary intervention was defined as any intervention within 30 days of the procedure or an AAA repair-related procedure after 30 days, which included repair of endoleaks and incisional hernia repair. Cochran-Mantel-Haenszel statistics were conducted to examine associations between AAA size and need for secondary interventions over 10 years.

Results: A total of 342 patients underwent elective AAA repair: 274 underwent EVAR and 68 underwent open AAA repair.  The mean age of patients treated with EVAR was 69±9 years, while the mean age of patients treated with open AAA repair was 67±7 years. The mean follow-up period was 49 months post-EVAR (standard deviation 29 months) and 78 months post-open repair (standard deviation 46 months).  The rate of secondary intervention was significantly lower in the EVAR group compared with the open AAA repair group (14.9% vs 27.9%, p=0.004). The most common secondary intervention was repair of type II endoleak (n=14, 5.1%) after EVAR and incisional hernia repair (n=4, 5.9%) after open AAA repair. Of the 274 EVAR patients, 133 (48.5%) died during the study period, while 34 (50%) of the 68 open AAA repair patients died.  Need for secondary intervention was not associated with long-term mortality in either the EVAR or the open repair group (p=0.11 and p=0.87, respectively).  Furthermore, in both the open repair and EVAR groups, AAA size was not associated with the rate of secondary intervention.

Conclusion: The rate of secondary intervention in patients treated with EVAR is significantly lower than in patients treated with open AAA repair.  However, secondary intervention is not associated with long-term survival in either group.
 

33.10 Medical Optimization Prior to Surgery Improves Outcomes but is Underutilized

I. L. Leeds1, J. K. Canner1, F. Gani1, P. M. Meyers1, E. R. Haut1, J. E. Efron1, F. M. Johnston1  1Johns Hopkins University School Of Medicine,Department Of Surgery,Baltimore, MD, USA

Introduction:  Preoperative comorbidities can have substantial effects on operative risk and outcomes. The modifiability of these risks remains poorly understood. The purpose of this study was to evaluate the impact of non-surgeon preoperative comorbidity optimization on short-term postoperative outcomes.

Methods: Patients with employer-sponsored commercial insurance undergoing a colectomy (ICD-9 codes: 17.3x, 45.7x, 45.8x, 48.5) were identified in the Truven Health MarketScan database (2010-2014). Patients were included if they could be matched to a preoperative surgical clinic visit within 90 days of an operative intervention by the same surgeon. The time interval between the surgical visit and the colectomy was defined as the “potential preoperative optimization period.” In this time interval, patients were defined as “optimized” if they were seen by an appropriate non-surgeon for at least one of their preexisting comorbidities (e.g., primary care or endocrinology visit for a diabetic patient). Propensity score matching with 1:1 nearest-neighbor matching with replacement was performed prior to regression analysis to account for between-group covariate extremes. Bivariate analysis and multivariable logistic regression were then performed.
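
As a sketch of the matching step described above, the snippet below implements 1:1 nearest-neighbor propensity-score matching with replacement on synthetic data.  The study's actual covariates and software are not stated, so everything here is an assumption for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Synthetic data: X = covariates, treated = True if "optimized".
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
treated = rng.integers(0, 2, size=500).astype(bool)

# 1. Estimate the propensity score P(treated | X).
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 2. For each treated unit, find the nearest control on the propensity score.
#    "With replacement" means a control can be matched to multiple treated units.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
control_indices = np.flatnonzero(~treated)[idx.ravel()]

matched_pairs = list(zip(np.flatnonzero(treated), control_indices))
print(len(matched_pairs), "matched pairs")
```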

Results: We identified 16,279 eligible colectomy episodes, of which 3,940 (24.2%) were in patients with at least one clinically significant comorbidity. 64.8% of patients with comorbidities were medically optimized prior to surgery. 2,545 medically optimized patients were matched to 1,388 non-optimized controls. Operative indications included neoplasm (50.5%) and diverticulitis (32.6%). The optimized subgroup was significantly older, more likely to be male, more comorbid at baseline by Charlson score, and more likely to reside in the northeastern United States.

 

Medically optimized patients had a lower risk of complications (29.9% vs. 33.7%, p=0.014) driven largely by fewer postoperative gastrointestinal, renal, hepatic, wound, and septic complications. Multivariable logistic regression controlling for patient demographics, operative indication, and Charlson Comorbidity Index demonstrated that patients optimized prior to surgery had a 15% lower odds (OR 95% CI = 0.73-0.99, p=0.036) of having a complication compared with non-optimized patients. The median increase in preoperative costs for optimized patients was $1,519 (p<0.001) while the median increased total cost with a complication was $18,941 (p<0.001).

Conclusion: Many surgical patients do not receive focused preoperative care for their medical comorbidities. Patients who receive comorbidity-associated nonsurgical care prior to an operation have better short-term surgical outcomes. The individual costs of medical optimization are much less than the cost of a surgical complication. These findings support further prospective study of whether patients undergoing high-risk surgery can benefit from more intensive preoperative optimization.

33.09 Laparoscopic Cholecystectomy Is Safe Both Day and Night

E. S. Tseng1, J. Imran1, J. Byrd1, I. Nassour1, S. S. Luk1, M. Choti1, M. Cripps1  1University Of Texas Southwestern Medical Center,Dallas, TX, USA

Introduction: The acute care surgical model has increased the ability to perform non-elective laparoscopic cholecystectomies (LC) during day and night hours. Despite the potential to reduce hospital length of stay (LOS) and improve operating room usage, it is reported that performing LC at night leads to increased rates of complications and conversion to open. We hypothesize that it is safe to perform LC at night in appropriately selected patients.

Methods:  We performed a retrospective review of over 5,200 non-elective LCs performed in adults at a large urban tertiary referral hospital between April 2007 and February 2015. We dichotomized the cases as either day (case started between 7am-6:59pm) or night (case started between 7pm-6:59am). Univariate analysis was performed using Mann-Whitney U, chi-squared, and Fisher's exact tests.
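
The day/night dichotomization is a simple function of case start hour.  A minimal sketch; the timestamps are hypothetical, not study data.

```python
from datetime import datetime

def case_shift(start):
    """Day: 07:00-18:59 start; night: 19:00-06:59 start, per the definition above."""
    return "day" if 7 <= start.hour < 19 else "night"

print(case_shift(datetime(2014, 3, 1, 8, 30)))   # -> day
print(case_shift(datetime(2014, 3, 1, 23, 15)))  # -> night
```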

Results: A total of 5206 patients underwent LC, with 4628 during the day and 576 at night. There was no difference in age; body mass index (BMI); ASA class; race; insurance type; pregnancy rate; history of hypertension, diabetes, or renal failure; or white blood cell count. However, patients who underwent LC during the day were more likely to have presented with obstructive biliary complications of cholelithiasis, as evidenced by higher median total bilirubin (0.6 [0.4, 1.3] vs. 0.5 [0.3, 1.0] mg/dL, p = 0.002) and lipase (33 [24, 56] vs. 30 [22, 42] U/L, p < 0.001). Operatively, there was no difference in case length, estimated blood loss, rate of conversion to open, biliary complications, LOS after operation, unanticipated return to the hospital in 60 days, or 60-day mortality. There were significant differences in median LOS before surgery (1 [1, 2] vs. 1 [0, 2] days, p < 0.001) and median total LOS (3 [2, 4] vs. 2 [1, 3] days, p < 0.001), with day patients spending more time in the hospital compared to night patients. Logistic regression examining the effects of ASA class, total bilirubin, lipase, BMI, and day vs. night status on the likelihood of biliary complications showed that none of these factors was statistically significant.

Conclusion: In this center with an acute care surgery service, it is safe to perform LC during the day or at night. The comparable complication rates and shorter LOS justify performing LC at any hour.

 

33.08 ED Visits After Joint Arthroplasty: Appropriate Outpatient Care Decreases Utilization

M. A. Chaudhary1, L. M. Pak1, D. Sturgeon1, T. P. Koehlmoos2, A. H. Haider1, A. J. Schoenfeld1  1Brigham And Women’s Hospital,Center For Surgery And Public Health,Boston, MA, USA 2Uniformed Services University Of The Health Sciences,Bethesda, MD, USA

Introduction:
Emergency department (ED) visits after elective surgical procedures are not only a significant quality-of-care indicator but also a potential target for interventions to reduce healthcare costs.  With the volume of hip and knee arthroplasties soaring past 1 million annually, investigation of patterns of ED utilization in patients undergoing these procedures becomes critical. The objective of this study was to evaluate the patterns and predictors of 30- and 90-day ED utilization in a national sample of total hip arthroplasty (THR) and total knee arthroplasty (TKR) patients.

Methods:
The military health insurance database, TRICARE (2006-2014), was queried for patients aged 18-64 years who underwent THR or TKR. Patient demographics, clinical characteristics, and environment-of-care information were abstracted. Sponsor rank was used as a proxy for socio-economic status. The outcome of interest was ED utilization. Multivariable logistic regression models were used to identify predictors of 30- and 90-day ED utilization.

Results:
Among the 44,557 patients included in the analysis, 14,187 (31.8%) underwent THR and 30,370 (68.2%) underwent TKR. Forty-nine percent and 70% of patients received orthopedic outpatient care within 30 and 90 days of discharge, respectively. The proportions of patients who presented to the ED within 30 and 90 days were 24% and 35%, respectively. The most common primary ICD-9 diagnoses associated with post-discharge ED visits were “Care involving other physical therapy” (V57.1) (17.6%), “Pain in joint” (719.46) (6.3%), “Aftercare of joint replacement” (V54.81) (5.1%), and “Encounter for therapeutic drug monitoring” (V58.83) (4.8%). In the risk-adjusted analysis, lower socio-economic status, longer LOS, comorbid conditions, and complications were associated with higher odds of ED utilization, while orthopedic outpatient care was associated with lower odds of ED utilization (Table).

Conclusion:
Almost one-third of patients present to the ED within 90 days of THR or TKR. Lower socio-economic status, longer LOS, and the presence of comorbid conditions and complications were associated with increased ED visits, whereas orthopedic outpatient visits were associated with decreased ED visits. We conclude that appropriate outpatient care may reduce ED utilization after THR and TKR.
 

33.07 Characterizing Surgeon Prescribing Practices and Opioid Use after Outpatient General Surgery

J. R. Imbus1, J. L. Philip1, J. S. Danobeitia1, D. F. Schneider1, D. Melnick1  1University Of Wisconsin,Surgery,Madison, WI, USA

Introduction: Surgeons typically prescribe opioids for patients undergoing outpatient general surgery operations, yet opioid prescribing practices are not standardized. Excess opioid supply in the community leads to abuse and diversion. Identifying patient and operative characteristics associated with postoperative opioid use could reduce overprescribing, and optimize prescribed quantity to patient need. Our aim was to characterize prescribing practices and opioid use after common outpatient general surgery operations, and to investigate predictors of opioid amount used.

Methods: We developed a postoperative pain questionnaire for adult patients undergoing outpatient inguinal hernia repair (IHR), laparoscopic cholecystectomy (LC), breast lumpectomy ± sentinel lymph node biopsy, and umbilical hernia repair (UHR) at our institution. This facilitated a retrospective review of patients undergoing operations from January to May 2017, excluding those with postoperative complications. We collected opioid prescription data, operative details, and patient characteristics. All opioids were standardized to morphine milligram equivalents (MME) and reported as the corresponding number of 5mg hydrocodone pills for interpretability. Multivariable linear regression was used to investigate factors associated with opioid use.
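
As an illustration of the MME standardization described above, the sketch below uses published CDC oral MME conversion factors (an assumption, since the abstract does not list the factors it used) and reports totals as 5mg hydrocodone pill equivalents (5 MME per pill).

```python
# CDC oral MME conversion factors for a few common opioids (assumed here).
MME_FACTOR = {"hydrocodone": 1.0, "oxycodone": 1.5, "morphine": 1.0,
              "codeine": 0.15, "tramadol": 0.1}

def total_mme(drug, strength_mg, quantity):
    """Total morphine milligram equivalents for one prescription."""
    return MME_FACTOR[drug] * strength_mg * quantity

def as_hydrocodone_5mg_pills(mme):
    """Express an MME total as the equivalent count of 5mg hydrocodone pills."""
    return mme / 5.0

# Example: 20 tablets of oxycodone 5 mg = 150 MME = 30 hydrocodone-5mg pills.
mme = total_mme("oxycodone", 5, 20)
print(mme, as_hydrocodone_5mg_pills(mme))  # -> 150.0 30.0
```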

Results: The 374 eligible cases included 114 (30.6%) unilateral and 59 (15.8%) bilateral IHRs, 90 (24%) LCs, 17 (4.6%) lumpectomies, 33 (8.9%) lumpectomies with sentinel node biopsy, and 60 (16.1%) UHRs. Forty-eight providers prescribed six different opioids. There was variation in prescribed quantity for all procedures, ranging from zero to 80 pills. Median numbers of pills prescribed vs taken were 20 vs 5.5 for unilateral IHR, 20 vs 4 for bilateral IHR, 20 vs 10 for LC, 10 vs 1 for lumpectomy, 20 vs 2 for lumpectomy with sentinel node biopsy, and 20 vs 5 for UHR. Most patients (86%) were over-prescribed. Nearly all (95%) patients took 30 or fewer pills. Twenty-four percent of patients took zero pills.

Univariate analysis showed operation type (p<.001), age (p<.001), body mass index (p<0.01), chronic pain history (p<0.01), and pre-operative opioid use (p<0.01) to be associated with the amount of MME taken. On multivariable analysis, there was a significant relationship between opioid use and age (p<0.001), with 16-34% less MME taken for every ten-year increase in age. Patients who underwent LC took more than twice as much opioid as patients undergoing UHR (p<0.05). Opioid amount taken was independently associated with opioid amount prescribed (p<0.001), with patients taking 24% more MME for every additional ten pills prescribed.

Conclusion: Marked variation exists in opioid type and amount prescribed, and most patients receive more opioids than they consume. Higher prescription amounts contribute to more opioid use, and certain patient subsets may be more (LC) or less (elderly) likely to use opioids postoperatively.

33.05 Underuse of Post-Discharge Venous Thromboembolism Prophylaxis After Abdominal Surgery for Cancer

J. W. McCullough1, J. Schumacher1, D. Yang1, S. Fernandes-Taylor1, E. Lawson1  1University Of Wisconsin,Madison, WI, USA

Introduction:
The efficacy and safety of post-discharge venous thromboembolism (VTE) prophylaxis for patients undergoing major abdominal surgery for cancer have been demonstrated in numerous studies, and such prophylaxis has been recommended by multiple national organizations over the past decade. Our objective was to identify factors associated with post-discharge VTE prophylaxis after major abdominal surgery for cancer and quantify associated costs to patients and insurers.

Methods:
Adult patients undergoing a major abdominal surgical procedure (colectomy, proctectomy, pancreatectomy, hepatectomy, gastrectomy, or esophagectomy) for cancer in 2012-2015 were identified in the Marketscan® databases, which include comprehensive claims for a nationwide cohort of patients. Patients on anticoagulation preoperatively or with a VTE diagnosis prior to discharge were excluded. Use of post-discharge VTE prophylaxis and associated costs for the 28 days following surgery were assessed. Multivariable logistic regression, including demographics, comorbidities and surgical factors, assessed predictors of receipt of post-discharge VTE prophylaxis.

Results:
Of 23,509 patients undergoing major abdominal surgery for cancer, 5.6% received post-discharge VTE prophylaxis. The median cost to payers was $378 (interquartile range $212-$579), while patient out-of-pocket costs were $10 (interquartile range $5-$32). Receipt of post-discharge VTE prophylaxis by procedure and associated costs are displayed in the table. Compared to colectomy, patients undergoing proctectomy and pancreatectomy had significantly higher risk-adjusted odds of receiving post-discharge VTE prophylaxis (OR 1.7, p=0.01 and OR 2.1, p<0.01, respectively). Patients undergoing open procedures (OR 1.4, p<0.01) had higher odds of receiving prophylaxis, as did patients with obesity (OR 1.3, p<0.01), congestive heart failure (OR 1.5, p<0.01), or metastatic disease (OR 1.5, p<0.01). In contrast, patients with anemia were significantly less likely to receive prophylaxis (OR 0.85, p=0.02). There were no significant differences in rates of post-discharge VTE prophylaxis observed between insurance plan types. However, significant variation was observed by region, with patients in the South and West regions less likely to receive post-discharge VTE prophylaxis.

Conclusion:
The vast majority of patients undergoing major abdominal surgery for cancer do not receive post-discharge VTE prophylaxis. This is despite a decade of strong recommendations for post-discharge VTE prophylaxis from national organizations. These findings suggest that substantial efforts are needed in order to change clinical practice and increase prescribing of post-discharge VTE prophylaxis for patients undergoing major abdominal surgery for cancer.
 

33.06 Surgeon Annual and Cumulative Volume Variably Predict Outcomes of Complex Hepatobiliary Procedures

M. M. Symer1, L. Gade3, A. Sedrakyan2, H. Yeo1,2  1Weill Cornell Medical College,Surgery,New York, NY, USA 2Weill Cornell Medical College,Healthcare Policy,New York, NY, USA 3NewYork-Presbyterian / Queens,Surgery,New York, NY, USA

Introduction: There is a strong volume-outcome relationship in pancreatectomy, but whether the same relationship exists for other complex hepatopancreatobiliary (HPB) procedures is not known. The role of surgeon experience is clearly important, but whether it should be defined by cumulative volume or a more contemporaneous measure like annual volume is unclear. We compared the outcomes of surgeons across the spectrum of experience to better define the volume-outcome relationship in complex HPB surgery. 

Methods: We identified all patients undergoing major elective HPB operations in New York State from 2000 to 2014 using the Statewide Planning and Research Cooperative Database. Major resections such as liver lobectomy, proximal pancreatectomy, as well as bile duct resection and complex repair were included, while wedge resections, distal pancreatectomy, and percutaneous or endoscopic procedures were excluded. In-hospital mortality and perioperative outcomes were compared across four categories of surgeons based on high or low annual and high or low cumulative operative volume. Median volume was used as the cut-point for high vs. low categories.
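
The four experience categories follow directly from the median cut-points described above.  The sketch below is illustrative: the data are synthetic, and whether surgeons exactly at the median fall into "high" or "low" is not specified in the abstract.

```python
import numpy as np

# Synthetic per-surgeon volumes (not study data).
rng = np.random.default_rng(1)
cumulative = rng.integers(1, 400, size=200)  # career operations per surgeon
annual = rng.integers(1, 60, size=200)       # operations per year

cum_med, ann_med = np.median(cumulative), np.median(annual)

def category(cum, ann):
    """Classify a surgeon as e.g. 'HCHA' = high cumulative / high annual volume."""
    c = "HC" if cum > cum_med else "LC"
    a = "HA" if ann > ann_med else "LA"
    return c + a

print(category(150, 30), category(40, 10))  # e.g. -> HCHA LCLA
```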

Results: 13,236 operations performed by 893 surgeons were included in the study. Median cumulative volume was 89 operations, and median annual volume was 21 operations. Similar numbers of procedures were performed by low cumulative/low annual (LCLA) volume surgeons and high cumulative/high annual (HCHA) volume surgeons (6106 vs. 6176 operations). HCHA surgeons treated slightly older patients than LCLA surgeons (63.0y vs. 61.1y, p<0.01). HCHA surgeons also treated fewer Medicaid (5.6% vs. 10.0%, p<0.01) or Black patients (5.2% vs. 10.2%, p<0.01). HCHA surgeons performed many more minimally invasive procedures (15.2% of HCHA operations vs. 5.7% of LCLA operations, p<0.01). Mortality was lowest for HCHA and highest for LCLA surgeons (1.6% vs. 3.7%, p<0.01). Adjusted odds of in-hospital mortality were lower only for those patients undergoing surgery by HCHA volume surgeons (OR 0.47, 95% CI 0.32-0.67), but not HCLA volume surgeons (OR 0.58, 95% CI 0.28-1.20), or LCHA volume surgeons (OR 0.82, 95% CI 0.44-1.53). 30d major events (e.g. stroke, shock), reoperation, and readmission were not affected by cumulative or annual experience. 

Conclusion: In this large New York State-based study of complex, elective HPB operations, only surgeons with high cumulative and high annual volume had improved in-hospital mortality. In isolation, neither high cumulative volume nor high annual volume was associated with improved outcomes. Racial and socioeconomic disparities in access to high-volume care persist. Interventions to regionalize complex surgical care should account for these distinctions.

 

33.03 Sarcopenia Predicts Mortality Following Above Knee Amputation for Critical Limb Ischemia

D. Strosberg1, T. Yoo1, K. Lecurgo1, M. J. Haurani1  1Ohio State University,Division Of Vascular Surgery / Department Of Surgery,Columbus, OH, USA

Introduction:
Sarcopenia, the measurement of muscle decline, has been shown to be an independent predictor of performance status and mortality in the cancer and trauma literature. Others have applied frailty scores and other measures to predict outcomes after surgical procedures, but these require information that is not always readily available in the electronic health record. Total psoas muscle area (TPA) normalized for body surface area (TPA/m^2) can quickly be assessed with most modern image viewing software available to surgeons.  There are also no accepted guidelines for what constitutes sarcopenia in patients with critical limb ischemia.  The objectives of this study were to evaluate the feasibility of easily calculating TPA/m^2 and to determine whether a lower TPA/m^2 predicted mortality in patients undergoing above knee amputation (AKA) for critical limb ischemia. 

Methods:
We evaluated patients who underwent AKA between July 2013 and July 2016 at a single institution. Patients with abdominal/pelvic computed tomography (CT) scans within 3 months of their amputation were included.  Total psoas muscle area (TPA) was manually measured at L3 and then normalized for body surface area (TPA/m^2), calculated using height and weight from the anesthesiology records at the time of surgery. We defined sarcopenia as a TPA/m^2 in the lowest quartile of our cohort. Univariate analysis was used to test for differences in mortality between sarcopenic and non-sarcopenic patients undergoing AKA for critical limb ischemia. 
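
As a sketch of the normalization above: the abstract does not name the BSA formula used, so Mosteller's formula is assumed here purely for illustration, and the example values are hypothetical.

```python
import math

def bsa_mosteller(height_cm, weight_kg):
    """Body surface area (m^2) by Mosteller's formula (an assumed choice)."""
    return math.sqrt(height_cm * weight_kg / 3600)

def normalized_tpa(tpa_mm2, height_cm, weight_kg):
    """TPA/m^2: psoas area at L3 (mm^2) per m^2 of body surface area."""
    return tpa_mm2 / bsa_mosteller(height_cm, weight_kg)

# Example: TPA 2100 mm^2 at 175 cm, 80 kg -> BSA ~1.97 m^2, TPA/m^2 ~1065 mm2/m2.
print(round(normalized_tpa(2100, 175, 80)))  # -> 1065
```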

Results:
97 patients underwent AKA, of whom 48 had a CT scan that met inclusion criteria. Total mortality was 44% (21 patients), with a median survival of 90 days (range 1-648 days).  35 patients (70%) were cleared for prosthetic use; however, only 5 patients (10%) were noted to be using a prosthesis on follow-up, and 13 patients (26%) were ambulatory with or without a prosthetic at their last clinic visit. 4 patients (8%) required revision of their residual limb. Mean TPA/m^2 was 1156.3 mm2/m2 (range 372.7-2572.5 mm2/m2). When comparing the demographics of the amputees in the lowest TPA/m^2 quartile, there were no differences in age (63 vs. 59 years, P=0.1) or discharge status (21% vs. 33% discharged home, P=0.5). The mortality rate of patients in the lowest TPA/m^2 quartile (372.7-781.1 mm2/m2) was significantly higher at 62% (8 patients), compared to 35% (13 patients) (P=0.04).

Conclusion:
CT imaging was available in roughly half of patients, making TPA/m^2 measurement possible in this subset of patients undergoing AKA.  Patients with a low TPA/m^2 have a significantly higher mortality rate following AKA for critical limb ischemia, despite no differences in age or discharge status. Psoas muscle mass may be used as a predictive indicator of mortality risk, and patients should be counseled accordingly prior to AKA.