48.08 Vascular Access Modifies the Protective Effect of Obesity on Hemodialysis Survival

M. B. Malas1, I. J. Arhuidese1, T. Obeid1, U. Qazi1, C. Abularrage1, I. Botchey1, J. H. Black1, T. Reifsnyder1  1Johns Hopkins Medical Institutions,Department Of Surgery, Division Of Vascular Surgery,Baltimore, MD, USA

Introduction: The protective effect of obesity on the survival of patients undergoing hemodialysis for end-stage renal disease, described as the obesity paradox, has previously been established. Increased survival benefits have also been ascribed to permanent modes of hemodialysis access (fistula/graft) compared with catheter at first hemodialysis. The purpose of this study was to evaluate the impact of incident hemodialysis access type on the obesity paradox.

Methods: We conducted a retrospective study of all patients with end-stage renal disease in the United States Renal Database System who initiated hemodialysis between 2006 and 2010 without prior renal replacement therapy. Relative mortality within categories of body mass index (BMI) as well as modes of hemodialysis access (fistula/graft vs. catheter) was quantified using multivariable Cox proportional hazard models. Multivariable logistic regression was employed to compare vascular access utilization between the BMI categories. Interaction terms were employed to assess the modifier effect of hemodialysis access type on the association between BMI and survival.
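The interaction analysis described in the Methods can be written schematically as a Cox model of the following form (a sketch; the exact covariate coding and adjustment set are not given in the abstract):

```latex
h(t \mid X_i) = h_0(t)\,\exp\!\big(\beta_1\,\mathrm{BMI}_i + \beta_2\,\mathrm{Access}_i + \beta_3\,(\mathrm{BMI}_i \times \mathrm{Access}_i) + \gamma^{\top} Z_i\big)
```

where $\mathrm{Access}_i$ is 1 for fistula/graft and 0 for catheter, $Z_i$ collects the remaining adjustment covariates, and a significant $\beta_3$ indicates that access type modifies the BMI-survival association.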

Results: There were 510,000 dialysis initiates in the study cohort: 83% via catheter, 14% via fistula, and 3% via graft. Mortality was significantly lower for patients initiating hemodialysis with permanent forms of access compared with catheter (aHR 0.68, 95% CI 0.67-0.69, p<0.001). Higher BMI categories were associated with lower mortality, as shown in Table 1. Patients in the higher BMI categories were also more likely to initiate hemodialysis via permanent modes of access (Table 1). The interaction term for the modifier effect of vascular access method on the association between BMI and mortality was significant (p<0.001).

Conclusion: We have shown that the widely popularized protective effect of increased BMI on survival in hemodialysis patients is significantly influenced by the method of hemodialysis access. Thus, the obesity paradox is in part accounted for by hemodialysis access type. Catheters, with their attendant complications and higher mortality, are used more often among patients in the lower BMI categories than among patients in higher BMI categories. There remains a critical need to increase permanent access utilization at incident hemodialysis so as to improve survival irrespective of BMI status.

 

48.09 Factors Associated with Ischemic Colitis in Contemporary Aortic Surgery

J. C. Iannuzzi1, F. J. Fleming1, K. N. Kelly1, K. Noyes1, J. R. Monson1, M. J. Stoner2  1University Of Rochester,Surgical Health Outcomes & Research Enterprise, Department Of Surgery,Rochester, NY, USA 2University Of Rochester,Vascular Surgery,Rochester, NY, USA

Introduction:

While large clinical databases in surgery have been useful for defining mortality and overall morbidity, vascular-specific morbidity has been absent, representing a major gap in quality reporting for vascular surgery. Ischemic colitis (IC) following abdominal aortic aneurysm repair is a potentially devastating complication, yet few data exist, in part due to its relative infrequency, leaving surgeons and patients subject to historical dogmatic practices.

 

Methods:

NSQIP vascular procedure-specific data were abstracted from the 2011-2012 NSQIP participant user file. Open abdominal aortic aneurysm repair (OPEN) and endovascular abdominal aortic aneurysm repair (EVAR) cases were evaluated for factors associated with IC. Bivariate analysis identified candidate variables, and subsequent manual stepwise binary logistic regression assessed factors with p<0.1 for association with the primary end point of IC. Factors meeting p<0.05 were retained in the model.
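The odds ratios reported in the Results are obtained by exponentiating fitted logistic-regression coefficients and their Wald confidence limits. A minimal sketch of that conversion follows; the β and SE values here are hypothetical, chosen only to mirror the magnitude of the OPEN-vs.-EVAR estimate, not taken from the study's fitted model:

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Exponentiate a logistic-regression coefficient and its
    Wald confidence limits to obtain OR with (lower, upper) 95% CI."""
    or_ = math.exp(beta)
    lo = math.exp(beta - z * se)
    hi = math.exp(beta + z * se)
    return or_, lo, hi

# Hypothetical coefficient and standard error (illustrative only)
or_, lo, hi = odds_ratio_ci(beta=0.81, se=0.20)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # → OR 2.25 (95% CI 1.52-3.33)
```

The same exponentiation applies to every OR/CI pair quoted in the Results.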

 

Results:

Overall, 3,734 cases were analyzed, comprising 949 OPEN and 2,785 EVAR cases. The IC rate was 2.4% (n=88) [OPEN: 5.6% (n=53), EVAR: 1.6% (n=35); p<0.001]. On multivariable analysis, OPEN was associated with 2.25 times the odds of IC compared with EVAR (Table). Other risk factors included male sex, renal insufficiency, operative time, and preoperative blood transfusion. On multivariable analysis of OPEN alone, only supraceliac clamping was additionally associated with increased IC risk (OR: 2.62, CI: 1.26-5.04, p=0.004). Inferior mesenteric artery (IMA) management was not associated with IC risk. Procedural factors associated with IC after EVAR on multivariable analysis included ruptured aneurysm with hypotension (OR: 21.39, CI: 9.44-48.49, p<0.001), ruptured aneurysm without hypotension (OR: 10.74, CI: 3.86-29.83, p<0.001), iliac-branched device (OR: 2.71, CI: 1.22-5.99, p<0.001), prior abdominal surgery (OR: 2.39, CI: 1.14-5.04, p=0.022), and renal stenting (OR: 3.25, CI: 1.29-8.22, p=0.013). IC itself was associated with longer ICU stay, increased hospital stay, and overall major complications.

 

Conclusion:

Procedural data on abdominal aortic aneurysm repair demonstrate that the OPEN approach carries over twice the adjusted risk of IC compared with EVAR. This study describes, for the first time, the association of renal insufficiency with IC. Procedure-specific data also demonstrated significantly increased risk, particularly when patients present with rupture, which increases IC risk ten- to twenty-fold depending on the presence of hypotension. These findings help in risk-stratifying patients for postoperative IC.

48.10 Transfusion During Amputation Is Associated with Increased Risk of Pneumonia and Thromboembolism and Longer Length of Stay

T. Tan1, W. W. Zhang1, M. Eslami2, A. Coulter1, D. V. Rybin2, G. Doros2, A. Farber2  1Louisiana State University Health Shreveport,Vascular And Endovascular Surgery,Shreveport, LA, USA 2Boston Medical Center,Vascular And Endovascular Surgery,Boston, MA, USA

Introduction

We evaluated the outcomes of patients undergoing major lower extremity amputation who received packed red blood cell transfusion.
Methods
Using the NSQIP (2005-2011), we examined 5,739 above-knee (AKA) and 6,725 below-knee amputations (BKA). Patients were stratified by perioperative (preoperative, intraoperative, and postoperative) transfusion. Outcomes included perioperative mortality, surgical site infection (SSI), myocardial infarction (MI), thromboembolism, and hospital length of stay (LOS). Patients who received transfusion were cohort-matched for risk-adjusted comparisons, using age, smoking, diabetes, cardiac disease, renal failure, ASA classification, functional status, indication, and procedure (AKA vs. BKA), with those who were not transfused. Multivariable logistic and gamma regression were used to examine associations between transfusion and outcomes.
Results
There were 12,464 amputations in the study cohort, and 2,133 patients (17%) required transfusion. 8,205 amputations (66%) were performed for critical limb ischemia, and overall 30-day mortality was 9%. In both crude and matched cohorts, transfusion was associated with a higher risk of pneumonia (crude: 6.1% vs. 3%, p<.001; matched: 5.9% vs. 3.7%, p<.001) and thromboembolism (2.5% vs. 1.6%, p=.003; 2.5% vs. 1.4%, p=.002) and with longer LOS (18±19 vs. 13.6±14.3 days, p<.001; 17.8±18.4 vs. 14.2±14.5 days, p<.001). In multivariable analysis of the crude cohort, transfusion was associated with a higher risk of perioperative pneumonia (OR 1.6, 95% CI 1.3-2, p<.001), thromboembolism (OR 1.6, 95% CI 1.1-2.4, p=.03), and longer LOS (OR 1.3, 95% CI 1.1-1.2, p<.0001).
Conclusions
Patients who receive perioperative transfusion during major limb amputation have a higher risk of perioperative pneumonia and thromboembolism and a longer hospital length of stay. Further study is required to clarify the role of transfusion during lower extremity amputation.

 

49.05 Clinical comparison of laparoscopic and open liver resection after propensity matching selection

M. Meguro1, T. Mizuguchi1, M. Kawamoto1, S. Ota1, M. Ishii1, T. Nishidate1, K. Okita1, Y. Kimura1, K. Hirata1  1Sapporo Medical University School Of Medicine,Department Of Surgery, Surgical Oncology And Science,Sapporo, HOKKAIDO, Japan

Introduction: The number of laparoscopic liver resections (LR) has been increasing in recent years. Although the short-term safety of LR is comparable to that of classical open liver resection (OR), long-term prognosis after LR versus OR has not yet been elucidated. We therefore retrospectively analyzed consecutive patients who underwent liver resection and selected matched pairs from the LR and OR groups using propensity score analysis. The aim of this study was to identify any prognostic difference between LR and OR in hepatocellular carcinoma (HCC) patients who underwent initial liver resection.

Methods: Between January 2003 and June 2011, 260 consecutive HCC patients (LR: n=60; OR: n=200) were enrolled in this study. Propensity scores were calculated for each patient in the two groups using the following 10 covariates: age, gender, tumor size, number of tumors, vascular invasion, poor versus non-poor differentiation, serum total bilirubin, serum albumin, PT, and ICGR15. One-to-one matching was carried out by propensity score analysis with the greedy method. Survival curves were estimated by the Kaplan-Meier method and compared with the log-rank test.
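The one-to-one greedy matching step can be sketched as follows. This is an illustrative pure-Python implementation of greedy nearest-neighbor matching on the propensity score with a caliper; the study's actual matching software and caliper width are not stated in the abstract, so the caliper of 0.05 and the toy scores below are assumptions:

```python
def greedy_match(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching on propensity score.

    treated, controls: lists of (id, propensity_score) pairs.
    Each control is used at most once; pairs whose scores differ
    by more than the caliper are left unmatched.
    Returns a list of (treated_id, control_id) pairs.
    """
    pairs = []
    available = dict(controls)  # id -> propensity score
    # Process treated subjects in a fixed order (here: by score)
    for tid, tscore in sorted(treated, key=lambda x: x[1]):
        if not available:
            break
        # nearest remaining control by absolute score distance
        cid = min(available, key=lambda c: abs(available[c] - tscore))
        if abs(available[cid] - tscore) <= caliper:
            pairs.append((tid, cid))
            del available[cid]  # greedy: control consumed
    return pairs

# Toy data: (patient id, propensity score) -- illustrative only
lap = [("L1", 0.30), ("L2", 0.55), ("L3", 0.90)]
open_ = [("O1", 0.32), ("O2", 0.50), ("O3", 0.58), ("O4", 0.20)]
print(greedy_match(lap, open_))  # → [('L1', 'O1'), ('L2', 'O3')]
```

Note that greedy matching, unlike optimal matching, never revisits a pairing: here "L3" goes unmatched because no control within the caliper remains.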

Results: In the full analysis set (n=260), age and the proportion of female patients differed significantly between the groups. Tumor factors such as size, number of tumors, vascular invasion, and frequency of poor differentiation were significantly more favorable, intraoperative blood loss was significantly lower, and operation time was significantly shorter in the LR group. Recurrence-free survival (RFS) and postoperative overall survival (OS) in the LR group (Fig. 1a and 1b) were significantly longer than in the OR group (P=0.048 and 0.004, respectively). After propensity score matching, 35 patients from each group were selected and analyzed as the matched set. Intraoperative bleeding in the LR group was significantly lower than in the OR group (p=0.002), although the other clinical variables were generally consistent between the two groups. In addition, there was no significant difference in RFS or OS (Fig. 1c and 1d; P=0.954 and 0.672, respectively).

Conclusions: When the prognoses of HCC patients after initial liver resection were compared between LR and OR after matching patient background factors, including tumor factors and liver function, no significant differences in RFS or OS were observed, demonstrating non-superiority of LR. LR therefore appears to offer oncological curability comparable to that of classical OR in terms of long-term prognosis.

49.07 Sublay Versus Underlay in Open Ventral Hernia Repair: A Multi-Institutional Risk-Adjusted Comparison

J. Holihan1, I. L. Bondre1, E. P. Askenasy2, J. A. Greenberg3, J. Keith6, R. G. Martindale5, J. S. Roth4, C. J. Wray1, L. S. Kao1, M. K. Liang1  1University Of Texas Health Science Center At Houston,Houston, TX, USA 2Baylor College Of Medicine,Houston, TX, USA 3University Of Wisconsin,Madison, WI, USA 4University Of Kentucky,Lexington, KY, USA 5Oregon Health And Science University,Portland, OR, USA 6University Of Iowa,Iowa City, IA, USA

Introduction:
The ideal location for mesh placement in open ventral hernia repair (OVHR) remains under debate. Current trends favor underlay (intra-peritoneal) or sublay (retro-muscular or pre-peritoneal) placement, with onlay and inlay repairs being largely abandoned. We hypothesized that in patients undergoing OVHR, sublay versus underlay placement of mesh results in fewer recurrences and surgical site infections (SSI).

Methods:
A multi-institution retrospective study was performed of all patients who underwent OVHR from 2010-2011. All patients with mesh placed in a sublay or underlay position and with at least 1 month of clinical follow-up were included. The primary outcome was SSI as defined by the Centers for Disease Control and Prevention (CDC). The secondary outcome was hernia recurrence (assessed by clinical examination or radiographic diagnosis). Multivariable analysis was performed using backwards stepwise logistic regression adjusting for variables selected a priori (ASA class, smoking, BMI, acute repair, primary versus incisional hernia, prior VHR, wound class, fascial release, fascial closure, and mesh type, biologic versus synthetic). Data were also analyzed using inverse probability weighting, which corrects for selection bias and missing data.
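The inverse-probability-weighting step reweights each patient by the inverse of the estimated probability of the treatment actually received, so that each treatment group resembles the full cohort. A minimal sketch with toy data follows; the study's propensity model and covariates are not specified in the abstract, so the probabilities below are invented for illustration:

```python
def ipw_rates(records):
    """IPW-weighted event rates.

    records: list of (event, exposed, p_exposed), where p_exposed is
    the modeled probability of receiving the exposed treatment.
    Weight = 1/p for exposed patients, 1/(1-p) for unexposed.
    Returns (weighted rate among exposed, weighted rate among unexposed).
    """
    num = [0.0, 0.0]  # weighted event counts [unexposed, exposed]
    den = [0.0, 0.0]  # total weights        [unexposed, exposed]
    for event, exposed, p in records:
        w = 1.0 / p if exposed else 1.0 / (1.0 - p)
        g = 1 if exposed else 0
        den[g] += w
        num[g] += w * (1 if event else 0)
    return num[1] / den[1], num[0] / den[0]

# Toy cohort: (SSI event, underlay mesh?, P(underlay | covariates))
toy = [
    (True,  True,  0.8), (False, True,  0.5),
    (False, False, 0.4), (True,  False, 0.2),
]
underlay_rate, sublay_rate = ipw_rates(toy)
```

Comparing the two weighted rates mimics the adjusted SSI comparison reported in the Results, with patients who were unlikely to receive the treatment they got contributing more weight.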

Results:
Of 328 patients followed for a median (range) of 17.2 (1.0-50.2) months, 97 (29.6%) had a sublay repair. The unadjusted rates of SSI and recurrence were lower for the sublay group than for the underlay group (Table). Underlay repair had more superficial, mesh/deep, and organ/space SSIs. On multivariable analysis, underlay was associated with an increased risk of SSI compared with sublay (OR 2.5, 95% CI 1.1-5.2). There was no statistically significant difference in hernia recurrence between the two techniques (OR 1.0, 95% CI 0.5-2.1). Using inverse probability weighting, sublay placement of mesh had a 9.3% (4.2-14.4%) rate of SSI, and underlay placement had a 22.3% (15.2-29.4%) rate of SSI. There was no difference in recurrence between the two techniques (underlay 18.5%, CI 8.9-28.3%; sublay 15.3%, CI 7.0-23.7%).

Conclusion:
In this multi-center, risk-adjusted study, sublay repair was associated with fewer SSIs than underlay repair; however, there was no difference in rates of hernia recurrence.  In the absence of a randomized trial or more rigorous data, sublay mesh placement should be considered whenever possible for open ventral hernia repairs. 
 

49.08 Voice Messaging System Associated With Improved Survival In Patients With Hepatocellular Carcinoma

A. Mokdad1, A. Singal1, J. Mansour1, G. Balch1, M. Choti1, A. Yopp1  1University Of Texas Southwestern Medical Center,Surgery Oncology,Dallas, TX, USA

Introduction: Hepatocellular carcinoma (HCC) treatment involves multiple specialties, risking delayed treatment and worse outcomes. The aim of this study was to evaluate outcomes following implementation of a voice messaging system (VMS) designed to reduce delays in treatment following HCC diagnosis.

Methods: A retrospective study of HCC patients was conducted in an outpatient safety-net hospital between February 2008 and January 2012. In February 2010, VMS notification of HCC to the ordering physician and downstream treating physicians was implemented. Patients were divided into 1) pre-intervention: diagnosis in the two years prior to implementation, or failure of notification after implementation; and 2) post-intervention: diagnosis in the two years following implementation. Demographics, tumor characteristics, treatment, and survival were compared.

Results: Ninety-seven patients were diagnosed with HCC: 51 in the pre-intervention group and 46 in the post-intervention group. The main etiology of chronic liver disease was HCV infection, and no differences in symptoms, liver dysfunction, tumor characteristics, or treatment were observed between groups. Time from diagnosis to clinic contact (0.5 vs. 2.9 months, p=0.003) and time from detection to treatment (2.2 vs. 5.5 months, p=0.005) were significantly shorter following VMS implementation. BCLC A status (HR 3.4, 95% CI 2-6), treatment (HR 2.0, 95% CI 1-4), and VMS (HR 1.7, 95% CI 1-3) were independently associated with improved overall survival. Patients diagnosed following VMS implementation had a median survival of 31.7 months compared with 15.7 months (p=0.008).

Conclusion: Implementation of a VMS reduced time to treatment and time to the initial clinic visit. The reduction in time to treatment was associated with improved outcome independent of tumor stage, underlying liver function, and treatment.

 

47.01 Age-related Mortality in Blunt Traumatic Hemorrhagic Shock: the Killers and the Life Savers

J. O. Hwabejire1, C. Nembhard1, S. Siram1, E. Cornwell1, W. Greene1  1Howard University College Of Medicine,Surgery,Washington, DC, USA

Introduction:
Hemorrhagic shock (HS) is the leading treatable cause of trauma death, but data on the association between age and mortality in this condition are sparse. We examined the relationship between age and mortality and identified predictors of mortality in HS.

Methods:
The Glue Grant database was analyzed. Patients aged ≥16 years who sustained blunt traumatic HS were initially stratified into 8 age groups (16-24, 25-34, 35-44, 45-54, 55-64, 65-74, 75-84, and ≥85) in order to identify the mortality inflection point. For subsequent analyses, patients were stratified into Young (16-44), Middle Age (45-64), and Elderly (≥65) groups. Multivariable analysis was then used to determine predictors of mortality by group.
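The inflection-point search amounts to tabulating mortality by age band and locating the first band whose rate exceeds the cohort's overall rate. The sketch below uses the band rates and the 16% overall mortality reported in the Results (not patient-level Glue Grant data), and the "first band above overall mortality" criterion is an assumed operationalization of "inflection point":

```python
# Reported mortality (%) by age band; overall cohort mortality was 16%
band_mortality = [
    ("16-24", 13.0), ("25-34", 11.9), ("35-44", 11.9),
    ("45-54", 15.6), ("55-64", 15.7), ("65-74", 20.3),
    ("75-84", 38.2), ("85+",   51.6),
]

def inflection_band(bands, overall):
    """Return the first age band whose mortality exceeds the
    overall rate, or None if no band does."""
    for name, rate in bands:
        if rate > overall:
            return name
    return None

print(inflection_band(band_mortality, overall=16.0))  # → 65-74
```

The first band exceeding the 16% overall rate is 65-74, consistent with the 65-year inflection point identified in the study.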

Results:
1,976 patients were included, 66% male and 89% white, with overall mortality of 16%. Mortality by initial age group was as follows: 16-24 (13.0%), 25-34 (11.9%), 35-44 (11.9%), 45-54 (15.6%), 55-64 (15.7%), 65-74 (20.3%), 75-84 (38.2%), and ≥85 (51.6%), delineating 65 years as the mortality inflection point. Overall, 55% were Young, 30% Middle Age, and 15% Elderly. In the Young group, survivors had lower emergency room (ER) lactate (4.4±2.5 vs. 8.0±4.3, p<0.001), Marshall's multiple organ dysfunction score (MODS; 4.8±2.4 vs. 6.8±4.1, p<0.001), and Injury Severity Score (ISS; 32±13 vs. 39±14, p<0.001) than non-survivors. Predictors of mortality included MODS (OR: 1.93, CI: 1.62-2.30, p<0.001), ER lactate (OR: 1.14, CI: 1.02-1.27, p=0.022), ISS (OR: 1.06, CI: 1.03-1.09, p<0.001), and cardiac arrest (OR: 10.60, CI: 3.05-36.86, p<0.001). In the Middle Age group, survivors had lower MODS (5.0±2.3 vs. 7.3±4.2, p<0.001) and higher ER mean arterial pressure (74±41 mmHg vs. 63±43 mmHg, p=0.023) and were less likely than non-survivors to undergo craniotomy (4% vs. 10%, p=0.025) or thoracotomy (8% vs. 26%, p<0.001). Predictors of mortality in this group included MODS (OR: 1.38, CI: 1.24-1.53, p<0.001), cardiac arrest (OR: 12.24, CI: 5.38-27.81, p<0.001), craniotomy (OR: 5.62, CI: 1.93-16.37, p=0.002), and thoracotomy (OR: 2.76, CI: 1.28-5.98, p=0.010). In the Elderly group, survivors were slightly younger (74±7 vs. 78±7, p<0.001), had lower MODS (5.3±2.1 vs. 6.6±3.0, p<0.001), received a higher volume of prehospital hypertonic saline (1.97±0.16 L vs. 1.83±0.38 L, p=0.002), and were less likely to undergo laparotomy (26% vs. 63%, p<0.001). Predictors of mortality in this group included age (OR: 1.07, CI: 1.02-1.13, p=0.005), MODS (OR: 1.47, CI: 1.26-1.72, p<0.001), laparotomy (OR: 2.04, CI: 1.02-4.08, p=0.045), and cardiac arrest (OR: 11.61, CI: 4.35-30.98, p<0.001).

Conclusion:
In blunt HS, mortality increases with age, with an inflection point at 65 years. MODS and cardiac arrest uniformly predicted mortality across all age groups. Open fixation of non-femur bone was uniformly protective against mortality across all age groups. Craniotomy and thoracotomy were associated with mortality in the Middle Age group, whereas laparotomy was associated with mortality in the Elderly group.
 

47.02 Serum Transthyretin is a Predictor of Clinical Outcomes in Critically Ill Trauma Patients

V. Cheng1, K. Inaba1, T. Haltmeier1, A. Gutierrez1, S. Siboni1, E. Benjamin1, L. Lam1, D. Demetriades1  1University Of Southern California,Division Of Trauma And Surgical Critical Care, Department Of Surgery, LAC+USC Medical Center,Los Angeles, CA, USA

Introduction:
In surgical patients, a low preoperative serum transthyretin (TTR) level is associated with significantly longer hospital and intensive care unit (ICU) stays and higher infectious complication and mortality rates. However, the predictive value of TTR levels on outcomes after major trauma has not yet been studied.

Methods:
After IRB approval, a retrospective analysis was conducted on critically ill trauma patients admitted to the Surgical ICU at the LAC+USC Medical Center between January 2008 and May 2014.  The study included all patients who underwent a surgical procedure for trauma and had their TTR measured ≤24 hours after ICU admission.  Outcome metrics included hospital length of stay (LOS), ICU LOS, ventilator days (VD), infectious complication rate, and mortality rate.  Significance of TTR on outcome metrics was determined using univariable (Mann-Whitney U test and Fisher’s exact test) and multivariable (linear and binary logistic regressions) analyses.  In univariable analysis, patients were stratified into two TTR groups: Normal (≥19 mg/dL) and Low (<19 mg/dL).  In multivariable analysis, TTR level was maintained as a continuous variable.

Results:
348 patients met inclusion criteria (median age 36 years, 79.6% male, median Injury Severity Score 17, 71.0% blunt trauma).  The Normal and Low TTR groups consisted of 189 (54.3%) and 159 (45.7%) patients, respectively.  Compared to the Normal TTR group, the Low TTR group was associated with longer hospital LOS (median: 17 vs. 9 days, p < 0.001), longer ICU LOS (6 vs. 4 days, p < 0.001), increased VD (1 vs. 0 days, p < 0.001), higher infectious complication rates (45.3% vs. 20.1%, p < 0.001), and higher mortality rates (17.0% vs. 7.4%, p = 0.007).  Even after adjusting for age, sex, and Injury Severity Score in multivariable regression analyses, TTR level was a significant independent predictor of clinical outcomes.  Lower TTR levels were associated with longer hospital LOS (p < 0.001), longer ICU LOS (p = 0.005), increased VD (p = 0.018), higher infectious complication rates (p < 0.001), and higher mortality rates (p = 0.017).

Conclusion:
In critically ill trauma patients, low serum TTR level is associated with longer hospital LOS, longer ICU LOS, increased VD, higher infectious complication rates, and higher mortality rates.  These results warrant prospective validation of the utility of TTR levels as an outcome predictor for critically ill trauma patients.
 

47.03 Will I miss an aneurysm? The role of CTA in traumatic subarachnoid hemorrhage

K. J. Balinger1, A. Elmously1, B. A. Hoey1, C. D. Stehly1,2, S. P. Stawicki1,2, M. E. Portner1  1St Luke’s University Health Network,Level I Regional Trauma Center,Bethlehem, PA, USA 2St. Luke’s University Health Network,Department Of Research & Innovation,Bethlehem, PA, USA

Introduction: Computed tomographic angiography (CTA) tends to be overutilized in patients with traumatic subarachnoid hemorrhage (tSAH) to rule out occult aneurysmal rupture and arteriovenous malformation (AVM). We hypothesized that two specific categories of patients with tSAH are at increased risk for aneurysm/AVM and warrant targeted CTA use: (a) patients "found down" with an unknown mechanism of injury and (b) those with central subarachnoid hemorrhage (CSH), i.e., blood in the subarachnoid cisterns and Sylvian fissures.

Methods: A retrospective analysis was performed on trauma patients with blunt head injury and tSAH who underwent CTA of the brain between January 2008 and December 2012 at a Level I Regional Trauma Center. Variables utilized in the current analysis included patient demographics, injury mechanism and severity (ISS), Glasgow Coma Scale (GCS), CTA and related radiographic studies, as well as operative interventions.  The principal outcome measure was "confirmed diagnosis" of a ruptured aneurysm/AVM.  Independent sample t-test and chi square test were used for univariate analyses. Logistic regression was utilized in multivariate analyses. Statistical significance was set at alpha = 0.05.

Results: Out of 617 patients with tSAH, 186 underwent CTA. The mean age of the study group was 57 years, with 64% of patients male. Mean GCS on presentation was 11±5.0, with a mean ISS of 20±11.5. CTA scans were positive in 23/186 cases (12.3%), with an aneurysm found in 21 patients and an AVM in 2 patients. Findings were felt to be incidental in 15/23 patients with "positive" CTA. Among the 14/186 patients (7.5%) who were "found down," none had an aneurysm or AVM. A total of 8 patients had a ruptured aneurysm, with 5/8 (62.5%) presenting after a fall and 3/8 (37.5%) after an MVC. All 8 patients with aneurysmal rupture (100%) had CSH. None of the 81 patients with only peripheral SAH had a ruptured aneurysm/AVM. Multivariate regression analysis demonstrated that suprasellar cistern hemorrhage on CT is independently associated with aneurysm rupture (OR: 6.39; CI: 1.32-30.8). Patients with a ruptured aneurysm had a significantly higher mean arterial pressure (MAP) on presentation (mean, 116±7 mmHg) than those without an aneurysm/AVM (mean, 104±18 mmHg; p<0.005). Of the 8 patients with a ruptured aneurysm, 6 underwent neurosurgical clipping or coiling, 1 underwent ventriculostomy, and 1 underwent craniotomy for evacuation of hemorrhage.

Conclusion: These preliminary data support a more selective approach to screening CTAs in patients with tSAH. CTA should be utilized in those patients with CSH regardless of mechanism of injury.  A more selective approach should be considered in those patients with only peripheral SAH. Overall cost savings would be significant.
 

47.04 CANNABIS USE HAS NEGLIGIBLE EFFECTS AFTER SEVERE INJURY

K. R. AbdelFattah1, C. R. Edwards1, M. W. Cripps1, C. T. Minshall1, H. A. Phelan1, J. P. Minei1, A. L. Eastman1  1University Of Texas Southwestern Medical Center,Burns, Trauma, And Critical Care Surgery,Dallas, TX, USA

Introduction: Since 1996, 22 states have legalized medical marijuana (MJ) use and two have legalized recreational use. With more states considering legislation to legalize the drug, emergency responders and facilities receiving these patients need to understand its impact on acute injuries. The effects of MJ use on injured patients have not been thoroughly evaluated. Our group sought to evaluate the effects of cannabis use at the time of severe injury on hospital course and patient outcomes.

Methods:  A retrospective chart review was undertaken at an urban Level 1 Trauma Center covering a two-year period. Patients presenting with an ISS>16 were divided into four groups based on urine drug screen results. Negative urine drug screen patients represented our control group.  Positive subjects were subdivided into marijuana-only (MO), other-drugs only (OD), and mixed-use (MU) groups.  These groups were compared for differences in presenting characteristics, hospital length of stay, ICU stays, ventilator days, and death.

Results: 8,441 subjects presented during the study period, of whom 2,134 had drug testing performed. Of these, 843 (40%) had an ISS>16, with 347 (41%) having negative tests (NEG). Among those testing positive, 70 (14%) were positive for marijuana only (MO), 325 (65%) for drugs other than marijuana (OD), and 103 (21%) showed mixed use (MU). Alcohol levels were higher in the MO group than in any other group (p<0.05). No differences were seen in presenting GCS, ICU/hospital length of stay, ventilator days, or blood administration when comparing the MO group with the NEG group. Significant differences were found between the OD group and the NEG/MO/MU groups for presenting GCS (OD 9.7 vs. NEG 11.9, MO 12.4, MU 10.7; p<0.05), ICU days (OD 6.0 vs. NEG 4.7, MO 4.6, MU 3.7; p<0.05), hospital days (OD 14.2 vs. NEG 12.0, MO 12.0, MU 10.5; p<0.05), and hospital charges (OD $182k vs. NEG $147k, MO $157k, MU $132k; p<0.05).

Conclusion: Cannabis users suffering severe injury demonstrated no acute detrimental outcomes in this study compared with non-drug users. With regard to presenting GCS, ICU/hospital length of stay, and hospital charges, marijuana use, alone or in combination with other drugs, appeared more similar to the NEG group than to the OD group.

 

47.05 Pre-Hospital Care And Transportation Times Of Pediatric Trauma Patients

C. J. Allen2, J. P. Meizoso2, J. Tashiro1, J. J. Ray2, C. I. Schulman2, H. L. Neville1, J. E. Sola1, K. G. Proctor2  1University Of Miami,Pediatric Surgery,Miami, FL, USA 2University Of Miami,Trauma And Critical Care,Miami, FL, USA

Introduction: Trauma is the leading cause of death and morbidity in children in the US. Aggressive efforts have been made to improve emergency medical transportation of injured children to major trauma centers. Still, controversy exists as to whether pre-hospital care improves outcomes or simply delays necessary immediate transportation. We hypothesized that at a large Level I trauma center with a mature pre-hospital network, pre-hospital care of severely injured children does not influence transportation time.

Methods: From January 2000 to December 2012, consecutive pediatric admissions (≤17 y) at a Level I trauma center were retrospectively reviewed for demographics, mechanism of injury (MOI), mode of transportation, transportation times, pre-hospital interventions, injury severity score (ISS), length of stay (LOS), and survival. We analyzed pre-hospital interventions and compared transport times in survivors and non-survivors, as the latter cohort represents the most severely injured. Parametric data are presented as mean±standard deviation and nonparametric data as median (interquartile range).

Results: 1,878 admitted patients were transported via emergency medical services (EMS). Age was 11±6 y, with 70% male and 50% black; 76% sustained blunt injuries, with an ISS of 13±12. Of these, 31% required operative intervention; LOS was 7±12 days and mortality 3.6%. Pre-hospital care, transport times, and ISS were compared between survivors and those who died in hospital (Table). There were no significant differences in EMS scene-to-hospital arrival times between those with and without on-scene shock (27 (15) min vs. 27 (15) min, p=NS) or between those who did and did not require on-scene intubation (32 (14) min vs. 27 (15) min, p=NS).

Conclusion: In the most severely injured children, those with ultimately fatal injuries, there are significantly increased rates of pre-hospital interventions, but on-scene and transportation times are not prolonged. There is no difference in pre-hospital transportation times between those with and without on-scene shock, or those requiring on-scene intubation. These results support the concept that pre-hospital interventions by skilled EMS are not associated with prolonged transportation times of critically injured pediatric trauma patients.

47.06 Trends in 1029 Trauma Deaths at a Level 1 Trauma Center

B. T. Oyeniyi1, E. E. Fox1, M. Scerbo1, J. S. Tomasek1, C. E. Wade1, J. B. Holcomb1  1University Of Texas Health Science Center At Houston,Acute Care Surgery/Surgery,Houston, TX, USA

Introduction: Over the last decade, the age of trauma patients and injury mortality have increased. At the same time, we have implemented many interventions focused on improved hemorrhage control. The objective of our study was to analyze the temporal distribution of trauma-related deaths, the factors that characterize that distribution, and how those factors have changed over time at our Level 1 trauma center.

Methods: The trauma registry, weekly Morbidity & Mortality reports, and electronic medical records at Memorial Hermann Hospital in Houston, TX were reviewed. Patients with primary burn injuries and pediatric patients (age <16) were excluded. Two time periods (2005-2006 and 2012-2013) were included in the analysis. Baseline characteristics and time and cause of death were recorded. Mortality rates were directly adjusted for age, gender, and mechanism of injury. Results are expressed comparing 2005-2006 with 2012-2013. The Mann-Whitney and chi-square tests were used to compare variables between periods, with significance set at the 0.05 level.
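Direct adjustment, as used here, weights each stratum-specific mortality rate by a standard population's stratum proportions so the two periods are compared on a common case mix. The following is a worked toy example; the strata and weights below are illustrative inventions, not the study's actual age/gender/mechanism strata:

```python
def directly_adjusted_rate(stratum_rates, standard_weights):
    """Directly standardized rate: sum over strata of
    (stratum-specific rate x standard-population weight).
    Weights must sum to 1."""
    assert abs(sum(standard_weights.values()) - 1.0) < 1e-9
    return sum(stratum_rates[s] * w for s, w in standard_weights.items())

# Toy strata (age x mechanism) with crude mortality for each period
rates_2005 = {"young_mvc": 0.05, "old_fall": 0.12}
rates_2012 = {"young_mvc": 0.04, "old_fall": 0.10}
# Combined-cohort composition used as the standard population
weights = {"young_mvc": 0.6, "old_fall": 0.4}

early = directly_adjusted_rate(rates_2005, weights)  # 0.05*0.6 + 0.12*0.4
late = directly_adjusted_rate(rates_2012, weights)   # 0.04*0.6 + 0.10*0.4
```

Because both periods are weighted to the same standard composition, the difference between `early` and `late` reflects changes in stratum-specific mortality rather than the shift toward older, fall-injured patients.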

Results: 7,080 patients, including 498 deaths, were examined in the early time period, while 8,767 patients, including 531 deaths, were reviewed in the recent period. The median age increased 6 years between the two groups, with a similar increase in those who died, from 46 (28-67) to 53 (32-73) years (p<0.01). In patients who died, no differences by gender, race, or ethnicity were observed. Fall-related deaths increased from 20% to 28% (p<0.01), while deaths due to motor vehicle collisions decreased from 39% to 25% (p<0.01). Deaths associated with hemorrhage decreased from 36% to 25% (p<0.01). 26% of all deaths (including dead on arrival, DOA) occurred within one hour of hospital arrival, while 59% occurred within 24 hours; these proportions were similar across time periods. Unadjusted overall mortality dropped from 7.0% to 6.1% (p=0.01) and in-hospital mortality (excluding DOA) dropped from 6.0% to 5.0% (p<0.01). Adjusted overall mortality dropped 24%, from 7.6% (95% CI: 6.9-8.2) to 5.8% (95% CI: 5.3-6.3), and adjusted in-hospital mortality decreased 30%, from 6.6% (95% CI: 6.0-7.2) to 4.7% (95% CI: 4.2-5.1).

Conclusion: Although US data show a 20% increase in the death rate due to trauma over a similar time period, this single-site study demonstrated a significant reduction in adjusted overall and in-hospital mortality. It is possible that concentrated efforts on improving resuscitation, together with multiple other hemorrhage control interventions, resulted in the observed reduction in hemorrhage-related mortality. Most trauma deaths continue to be concentrated very soon after injury. We observed an aging trauma population and an increase in deaths due to falls. These changing factors provide guidance on potential future prevention and intervention efforts.


47.07 The Economic Burden Of Care For Severe Work-Related Injuries In A Level-One Trauma Referral Centre

C. T. Robertson-More1, B. Wells1,2, D. Nickerson3, A. Kirkpatrick1,2, C. Ball1,2  1University Of Calgary,General Surgery,Calgary, AB, Canada 2University Of Calgary,Trauma Surgery,Calgary, AB, Canada 3University Of Calgary,Plastic Surgery,Calgary, AB, Canada

Introduction: Work-related injuries (WRI) are common and represent a significant logistical and economic burden to health care systems. It is also possible that insurers and/or public health care systems do not account for the potentially higher cost of caring for these patients when compared to patients with non-WRI (NWRI). The primary aim of this study was to evaluate the demographics, volume, costs and outcomes associated with WRI at a high volume trauma center.

Methods: The Alberta Trauma Registry and clinical information system were used to perform a retrospective cohort study describing all patients with severe WRI (ISS>12) admitted to a high volume, tertiary care trauma referral center between April 1995 and March 2013. Patients who died in the emergency department were excluded. Standard statistical methodology was utilized (p<0.05).

Results: Of 14,964 total trauma admissions, 1,270 (8.5%) were for severe WRI. Overall, the patients' mean age was 45 years, with a male-to-female ratio of 2.8:1 and a mean Injury Severity Score (ISS) of 22.7. Blunt (94%), penetrating (4%), and burn (2%) injury mechanisms were observed. Compared to patients with NWRI, the WRI group was significantly younger (41 vs. 46 years, 95% CI: -5.7 to -3.9 yrs), more often male (94% vs. 72%, p<0.05), and had fewer pre-injury comorbidities (p<0.05). Although the groups displayed statistically equivalent ISS, the WRI group had a greater length of stay in the intensive care unit (2.8 vs. 2.3 days, 95% CI: 0.06 to 0.86 days), longer mechanical ventilation (2.2 vs. 1.8 days, 95% CI: 0.08 to 0.68 days), and a higher mean number of surgical/operative procedures (0.86 vs. 0.67 per patient, 95% CI: 0.11 to 0.27). In contrast, significantly fewer patients with WRI died while in hospital (8% vs. 12%, p<0.05). Consequently, more patients with WRI were discharged home without support services (62% vs. 57%, p<0.05) and significantly fewer were transferred to long-term care facilities (0.5% vs. 1%, p<0.05). The acute care economic burden of patients with WRI was significantly higher (p<0.05); increased costs were related to care in the intensive care unit (p<0.05) and operating theatre (p<0.05), as well as physician compensation (p<0.05).

Conclusion: Patients with WRI admitted to our trauma center were younger, had fewer comorbidities, were more likely to be male, and had significantly higher utilization of acute care resources despite a similar ISS when compared to those with NWRI. These increased costs and the resulting economic burden in critical care, operative, and physician-based services are not recovered from workplace insurers in public health care systems.

47.08 Successful Observation of Small Traumatic Pneumothoraces in Patients Requiring Aeromedical Transfer

N. Lu1, C. Ursic1, H. Penney1,2, S. Steinemann1, S. Moran1  1University Of Hawaii,Department Of Surgery,Honolulu, HI, USA 2University Of Hawaii,Department Of Radiology,Honolulu, HI, USA

Introduction: With the widespread use of computed tomography (CT) imaging, the occult pneumothorax (PTX) has become a common finding. It has been shown that it is safe to monitor occult PTX in stable patients, even if they are on positive pressure ventilation. Observation of occult PTX without chest tube placement has been supported for those seen on CT to be <7 mm, measured perpendicularly from the lung to the chest wall. However, patients transported by air are not optimally monitored and are not in the care of practitioners skilled in thoracostomy tube placement.

Methods: We undertook a retrospective chart review of patients with traumatic PTX who were transported by air over the course of three years (2010-2012) to a level II trauma center that serves 1.3 million people. Occult PTX was defined as a pneumothorax that was not visible on chest radiograph (CXR) but was visible on CT imaging. Patients who did not have an overt PTX or a clinical reason for immediate chest tube placement were divided into two groups: those with PTX <7 mm and those with PTX >7 mm.

Results: From 2010 to 2012, 66 patients were transferred with a total of 83 PTX. Eleven PTX in 8 patients were treated with chest tubes placed for clinical reasons, such as CPR or needle decompression in the field. For 11 PTX, no information about the pre-transport CXR was available or the PTX could not be measured on CT. Eleven overt PTX were treated with thoracostomy tubes. Of the 10 large occult (>7 mm) PTX, 8 were treated with thoracostomy tubes and two were observed during transport. Of the 39 small (<7 mm) PTX, 19 were treated with thoracostomy tubes (15 ventilated, 4 not ventilated) and 20 were observed during transport (5 ventilated, 15 not ventilated). Of all patients without thoracostomy tubes prior to transport, 3 had tubes placed on arrival. One was placed in a patient whose repeat CXR showed the PTX (no longer occult), though the patient was stable. One was placed in a patient whose follow-up CT showed expansion to 8 mm and who was to be intubated for an operation. One was placed in a patient with a pre-transfer PTX >7 mm and copious subcutaneous emphysema, which expanded en route. There were 15 total complications of thoracostomy tube placement: 13 tubes were malpositioned and two were associated with empyema requiring thoracoscopic drainage.

Conclusion: Patients with small PTX can safely be transported by air without thoracostomy tubes. Only one of 20 patients sent without a chest tube required immediate chest tube placement on arrival and, in retrospect, placement of a tube prior to transport would have been recommended due to the size of the PTX and the amount of subcutaneous air. Mechanical ventilation prompted more thoracostomy tube placements. In addition, observation may reduce complications of chest tube placement (malposition, infection, increased number of CXRs, increased hospital length of stay, and delay in returning home). Further studies with larger numbers of patients are warranted.

47.09 Unplanned Intensive Care Unit Admissions Following Trauma

J. A. Rubano1, J. A. Vosswinkel1, J. E. McCormack1, E. C. Huang1, M. Paccione1, R. S. Jawa1  1Stony Brook University Medical Center,Trauma,Stony Brook, NY, USA

Introduction: Unplanned intensive care unit (UP-ICU) admission is a key quality measure of the American College of Surgeons Committee on Trauma. We sought to evaluate the frequency, timing, risk factors, and morbidity associated with unplanned ICU admission following acute traumatic injury.

Methods: A retrospective analysis of a state-designated level I trauma center's registry was performed. All adult trauma admissions from January 2007 through December 2013 were considered. Burns, isolated hip fractures, field/emergency department intubations, and patients taken directly to the operating room were excluded. Univariate and multivariate statistical analyses were performed; p ≤ 0.05 was considered significant.

Results: Of 5465 patients meeting study criteria, 85.2% required no ICU (NO-ICU) stay, 10.9% had planned (PL-ICU) admission, and 3.9% were UP-ICU admissions. Patient demographics are presented in the table. UP-ICU admissions more frequently had ≥2 National Trauma Data Standard comorbid conditions (65.1%) than NO-ICU (33.2%) and PL-ICU admissions (47.2%), p<0.05. Median length of stay prior to UP-ICU admission was significantly longer than prior to PL-ICU admission (2 days, IQR 0-4 vs. 0 days, IQR 0-0). UP-ICU admissions had significantly more frequent strokes (2.4% vs. 0.5%), MI (14.2% vs. 4.0%), respiratory failure (10.9% vs. 1.7%), pneumonia (30.2% vs. 9.9%), renal failure (7.6% vs. 2.7%), sepsis (10.9% vs. 2.9%), and DVT/PE (11.8% vs. 5.2%) as compared to PL-ICU admissions. Rates of these complications in the NO-ICU group were each ≤1.1% and correspondingly significantly lower than in the UP-ICU group. Finally, UP-ICU patients had higher mortality (18.4%) than the NO-ICU (0.49%, p<0.001) or PL-ICU admission groups (5.71%, p<0.001). In subsequent multivariate logistic regression, risk factors for unplanned ICU admission were respiratory failure (odds ratio 3.74, 95% confidence interval 1.62-8.63), PE/DVT (2.27, 1.23-4.18), MI (1.98, 1.05-3.74), and pneumonia (2.60, 1.66-4.08). Age, presence of ≥2 comorbidities, sepsis, and stroke were not risk factors. ISS was slightly negatively associated with UP-ICU admission (OR 0.97, 95% CI 0.95-0.99).
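As a hedged illustration of how odds ratios and their 95% confidence intervals are derived, the sketch below computes an unadjusted OR with a Wald interval from a 2x2 table. The counts are toy values, not the registry's data, and the abstract's ORs come from multivariable regression, which adjusts for covariates simultaneously:

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Hypothetical counts: complication present/absent by admission group
or_, lo, hi = odds_ratio_wald_ci(20, 80, 10, 90)
# or_ is 2.25 for these toy counts; lo and hi bracket it
```

A CI whose lower bound stays above 1 (as for the respiratory failure and pneumonia ORs above) is what marks a factor as a significant risk at the 0.05 level.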

Conclusion: Unplanned ICU admission is an infrequent but morbid event. It is associated with a threefold increase in mortality as compared to planned ICU admission.  A slightly lower ISS in UP-ICU would be expected as these patients were not directly admitted to the ICU.  Earlier identification of risk factors may decrease unplanned ICU admission.

47.10 Analysis of the Coagulation System in Burn Patients: Perhaps Not As Simple As INR

S. Tejiram1, K. Brummel-Ziedins3, T. Orfeo3, S. Butenas3, B. Hamilton2, J. Marks2, L. Moffatt2, J. Shupp1,2  1MedStar Washington Hospital Center,The Burn Center, Department Of Surgery,Washington, DC, USA 2MedStar Health Research Institute,Firefighters’ Burn And Surgical Research Laboratory,Washington, DC, USA 3University Of Vermont,Department Of Biochemistry,Colchester, VT, USA

Introduction: While a body of literature exists on coagulopathy in trauma patients, understanding of abnormal coagulation in burn patients is limited. Studies have shown alterations in antithrombin, protein C and S levels after burn, but controversy remains over whether burn injury induces coagulopathy. There is no consensus on whether burn patients with variable injury severity are at risk for hyper- or hypocoagulation. Coagulation is a complex process that is frequently assessed only by laboratory values such as PT, PTT, and INR. These measurements do not account for clotting factor dynamics or clot characteristics. Real time assessment of a patient’s coagulation profile may help clinicians better understand the pathophysiology underlying abnormal coagulation in burn patients. Here, we monitored clotting factor levels in a pilot group of burn patients for 96 hours after admission to study potential perturbations in the coagulation system and help elucidate potentially meaningful dynamics in coagulation after burn injury not indicated by INR alone.

Methods: Nine thermally injured patients with total body surface area injuries of 25% or greater who presented to a verified burn center between 2013 and 2014 were included for analysis. Citrated plasma was collected at admission and at regular intervals over a 96 hour period. Clinical laboratory information, specifically PT, PTT, and INR, collected over the same time was compared to levels of factors II, IIa, V, VII, VIII, IX, IXa, X, XI, XIa, antithrombin, and tissue factor pathway inhibitor measured in plasma.

Results: Of the patients profiled, 4 died and 5 survived. Seven patients had factor VIII levels beyond the upper limit of the normal range upon admission; four of these had factor VIII levels 2- to 3-fold higher than normal. Over the subsequent 24 hours, all patients experienced an initial decrease in factor VIII levels to the normal range before increasing again above it. Factor IX was also elevated approximately 1.5 times normal levels upon admission in all patients and remained above the normal range in all but 2 patients. Conversely, factor VII levels decreased below the normal range in 3 patients after 24 hours. Only 4 patients had antithrombin levels in the normal range upon admission, and all patients had antithrombin levels below the normal range shortly thereafter for the subsequent 96 hours. Three patients showed an increase in INR and PTT beyond the normal range; clinical laboratory values of INR and PTT remained within normal limits (INR < 1.3 and PTT 23-45 s) for all other patients during this 96-hour period.

Conclusion: Dynamic changes in clotting factor levels follow immediately after thermal injury that may not be detected by monitoring of INR and PTT alone. These changes may be important in early identification of coagulopathy in this patient population, which to date is poorly characterized. Further study is warranted to explore the scope of abnormal coagulation in burn patients.


48.01 Blood Transfusion & Adverse Surgical Outcomes – the Good, the Bad, & the Ugly

M. Hochstetler1, S. P. Saha1, J. Martin1, A. Mahan1, V. Ferraris1  1University Of Kentucky Chandler Medical Center,Surgery,Lexington, KY, USA

Introduction:
Every experienced surgeon has a patient whose life was saved by a blood transfusion (the GOOD).  On the other hand, an overwhelming amount of evidence suggests that perioperative blood transfusion translates into adverse surgical outcomes (the BAD).  We wondered what patient characteristics, if any, can explain this clinical dichotomy with certain patients benefiting from transfusion while others are harmed by this intervention. 

Methods:
We queried the NSQIP database containing patient information entered between 2010 and 2012 in order to identify mortality and morbidity differences between patients receiving blood transfusion within 72 hours of their operative procedure and those who did not receive any blood. We calculated the relative risk of developing a serious complication or of having operative mortality in propensity-matched patients with equivalent risk of receiving a blood transfusion.
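The relative risk comparison in matched cohorts can be sketched as follows. This is a simplified illustration, not the authors' analysis, and the counts are hypothetical rather than NSQIP values:

```python
def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Risk ratio: adverse-outcome rate in transfused patients divided by
    the rate in matched non-transfused patients (toy counts only)."""
    risk_exposed = events_exposed / n_exposed
    risk_unexposed = events_unexposed / n_unexposed
    return risk_exposed / risk_unexposed

# Hypothetical matched cohorts of equal size and transfusion propensity
rr = relative_risk(40, 1000, 5, 1000)  # roughly 8 with these toy counts
```

Matching on the propensity to receive blood is what makes such a ratio interpretable: without it, sicker patients would be over-represented among the transfused, confounding the comparison.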

Results:
There were 470,407 patients in the study group. Of these, 32,953 patients (7.0%) received at least one blood transfusion within 72 hours of operation. The transfusion rates in patients having operative mortality or serious morbidity were 11.3% and 55.4%, compared to rates of 1.3% and 0% in survivors of operation without complications (both p < 0.001). When patients were divided into deciles of increasing risk of operative mortality or serious morbidity, those at the highest risk of death or serious complications had a non-significant risk of harm from blood transfusion, while patients in the lowest risk deciles had an 8- to 10-fold increased risk of major adverse events associated with transfusion (Figure).

Conclusions:
We found that high-risk patients do not incur significant risk from blood transfusion, but the lowest-risk patients have an 8- to 10-fold excess risk of adverse outcomes when they receive a blood transfusion (the UGLY). We speculate that careful preoperative assessment of transfusion risk, and intervention based on this assessment, could minimize operative morbidity and mortality, especially since the lowest-risk patients are more likely to have elective operations, allowing time for therapeutic interventions to improve risk profiles.

48.02 Evaluation of the Accuracy of Endoscopic Ultrasound for Nodal Staging in Esophageal Cancer

V. Bianco1, K. S. Mehta1, M. Sablowsky1, W. E. Gooding1, J. D. Luketich1, A. Pennathur1  1University Of Pittsburgh Medical Center,Department Of Cardiothoracic Surgery,Pittsburgh, PA, USA

Introduction:
The accurate staging of esophageal cancer is important for both prognostic and therapeutic decisions, as well as for evaluating the results of treatment. Of particular importance is the presence of nodal metastases, which has a major impact on prognosis and therapeutic approach. Endoscopic ultrasound (EUS) is an important and increasingly used clinical staging modality for the pretreatment evaluation of nodal status in esophageal cancer. Recently, the new AJCC 7th edition staging system, which incorporates the number of positive nodes in the nodal (N) staging, has been adopted. While the accuracy of EUS nodal staging has been well studied under the previous AJCC staging version, there are limited studies evaluating the accuracy of EUS in N staging using the recently revised esophageal cancer staging system (AJCC 7th edition). The objective of this study was to analyze the accuracy of EUS in nodal staging of esophageal cancer using this most recent staging system.

Methods:
We reviewed the records of 172 patients who had undergone esophagectomy without neoadjuvant treatment and collected data that included both clinical and pathological staging. The preoperative N stage was acquired from EUS, and the pathological stage was assigned based on the esophagectomy specimen. Staging data were recorded for each patient based on the specific criteria of the 7th edition. The accuracy of EUS in identifying nodal disease was evaluated.

Results:
A total of 172 patients (mean age 67; 137 male, 35 female) underwent esophagectomy for esophageal cancer. The median number of nodes resected was 20 per patient. Under the AJCC 7th edition, only 80 of 172 (46%) patients had the N stage correctly classified by EUS (Table). The remaining 92 of 172 (54%) patients were misclassified (31% understaged, 23% overstaged).

Conclusion:
Our results indicate a substantial reduction in the accuracy of EUS nodal staging under the AJCC 7th edition. These findings suggest that further advancement is necessary for accurate preoperative clinical staging of nodal disease. Potential solutions may include refinement of EUS technology with routine use of EUS-FNA, evaluation of other staging modalities such as laparoscopic staging, and the use of molecular staging with validated biomarkers.

48.03 Decade-long Trends of Survival and Cost for Extracorporeal Life Support: Results From a Modern Series

E. B. Pillado1, R. Kashani1, H. Wu1, S. Grant1, C. Hershey1, R. Shemin1, P. Benharash1  1David Geffen School Of Medicine, University Of California At Los Angeles,Division Of Cardiac Surgery,Los Angeles, CA, USA

Introduction: Extracorporeal membrane oxygenation (ECMO) has been used to support patients with advanced cardiac and/or pulmonary failure. More recently, venoarterial (VA) ECMO has been used as an adjunct to CPR, which has increased the number of patients on extracorporeal support. With an increase in ECMO utilization worldwide and the need for a cost efficient healthcare system, the present study aimed to evaluate patient outcomes and hospital costs at our institution.

Methods: A retrospective review of the UCLA Health Extracorporeal Life Support Organization (ELSO) database was performed to identify adult patients who underwent VA-ECMO between 2004 and 2014. Our institutional Society of Thoracic Surgeons database was used to extract the volume and type of adult cardiac surgeries, defined as procedures requiring cardiopulmonary bypass as well as heart transplants, during the same period. Publicly available cost data were obtained for our institution for ECMO services, exclusive of bed cost. Stata 12.1 (StataCorp, College Station, TX) was used to perform regression analyses between groups.

Results: Of the 263 patients (33% female) who underwent venoarterial ECMO, 117 (44%) were weaned, 55 (21%) were bridged to transplantation or a mechanical assist device, and 91 (35%) expired while on ECMO. The average time on ECMO was 5.3±0.3 days and the mean age was 50.3±1.2 years. ECMO procedural volume increased by 27% annually. Successful weaning from ECMO showed a non-significant trend toward improvement over the study period (43% in 2004 to 69% in 2013, p=0.17). The average cost per patient was $36,669 (±13,951) in 2004 and $32,776 (±15,658) in 2014 (p=0.083). During the same period, there were also significant changes in the volume of cardiac transplants and the total number of cases at our institution (heart transplants, p=0.046; total cardiac surgeries, p=0.001; VA-ECMO patients, p=0.016) (Figure 1).

Conclusion: We have demonstrated a disproportionate increase in VA-ECMO volume when compared to our institutional volume of cardiac surgical and transplant procedures. With more widespread use of ECMO, hospital costs have increased over the past decade while the cost per patient has remained relatively constant; this may be explained by shorter periods on ECMO for each patient. The high institutional burden of ECMO and increasing procedural volumes mandate better selection criteria and ECMO protocols in order to maintain a cost-efficient healthcare system.


48.04 Virtual HLA Crossmatching As A Means To Safely Expedite The Transplantation Of Shipped-in Pancreata

B. C. Eby1, T. M. Ellis2, R. R. Redfield1, G. Leverson1, J. S. Odorico1  1University Of Wisconsin,Division Of Transplantation/Department Of Surgery/School Of Medicine And Public Health,Madison, WI, USA 2University Of Wisconsin,Department Of Pathology And Laboratory Medicine,Madison, WI, USA

Introduction: Cold ischemia time (CIT) accumulates with shipped-in pancreata and limits utilization and outcomes. Flow cytometric HLA crossmatching (FXM) is used to assess histocompatibility between pancreas and recipient before transplantation. Waiting for a FXM, which is typically performed once the organ arrives, prolongs the CIT. A ‘virtual crossmatch’ (VXM) can be performed before transport of the organ using the results of the single antigen bead (SAB) Luminex assay, thereby allowing assessment of recipient donor-specific antibodies. This study investigates whether it is acceptable and safe to proceed with transplantation of shipped-in pancreata based solely on VXM results rather than waiting for FXM results.

Methods: We retrospectively reviewed outcomes of pancreas transplants (n=153 patients) performed from June 2010 to December 2013. Comparisons were made between three patient groups: 1) shipped-in pancreas, VXM only (n=39); 2) shipped-in pancreas, VXM and FXM (n=12); and 3) local pancreas, VXM and FXM (n=102). Graft survival, patient survival, and CIT were determined.

Results: There were 51 shipped-in pancreata: 39 transplants were performed based on the results of a VXM alone, and 12 were based on FXM results. Transplants that proceeded based on VXM results had a FXM performed retrospectively. Donor and recipient demographics, immunosuppression regimens, and surgical parameters were comparable between groups with the exception of transplantation type: shipped-in pancreata were primarily solitary pancreas (SP) transplants, whereas the local group was primarily SPK transplants (87% SP in the shipped-in VXM-only group vs. 100% SP in the shipped-in VXM + FXM group vs. 13% SP in the local group, p < 0.001). Graft survival, death-censored graft survival, and patient survival did not differ between groups. CIT was shorter in the local group than in either of the shipped-in groups (15.9 h shipped-in VXM only vs. 17.5 h shipped-in VXM + FXM vs. 13.2 h local, p < 0.001). CIT was also compared among pancreata originating from similar destinations: for pancreata shipped in from UNOS regions 3 and 4, proceeding to surgery without a FXM saved 5.1 hours (95% CI 3.25-6.98, p = 0.0001). In no circumstance was the retrospective FXM positive in the VXM-negative cases that proceeded to transplantation.

Conclusion: VXM enables the transplant center to proceed to surgery without waiting for a FXM. For shipped-in pancreata, CIT can thereby be minimized without adversely affecting graft or patient survival. This policy, if widely adopted, could increase utilization of pancreas grafts from greater distances. In addition, such a policy is applicable to other transplanted organs.