78.09 Stop Flying the Patients! Evaluation of the Overutilization of Helicopter Transport of Trauma Patients

C. R. Horwood1, C. G. Sobol1, D. Evans1, D. Eiferman1  1The Ohio State University,Department Of Trauma And Critical Care,Columbus, OH, USA

Introduction: On average, helicopter transport costs $6,000 more than ground transportation of a trauma patient. Air transport has the theoretical advantage of allowing patients to receive injury treatment more promptly. However, there are no defined criteria for identifying which patients require expedited transport. The primary study objective was to evaluate the appropriateness of helicopter transport, determined by whether operative care was required within 1 hour of transfer, at an urban level 1 trauma center.

Methods: All trauma patients transported by helicopter from January 2015 to December 2017 to an urban level 1 trauma center, from referring hospitals or the scene, were retrospectively analyzed. The entire cohort was reviewed for level of trauma activation, disposition from the trauma bay, and median time to procedure. A subgroup analysis compared patients who required a procedure within 1 hour of transport with the remainder of the helicopter-transported cohort. Data were analyzed using summary statistics, the chi-square test, and the Mann-Whitney test where appropriate.
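The group comparisons described above are standard nonparametric tests. As an illustrative, non-authoritative sketch (hypothetical counts and ISS values, not the study data), they could be run in Python with SciPy as follows:

```python
# Illustrative sketch only: hypothetical data, not the study dataset.
import numpy as np
from scipy import stats

# 2x2 contingency table: rows = procedure within 1 hour (yes/no),
# columns = penetrating vs blunt mechanism (counts are made up).
table = np.array([[18, 56],
                  [32, 466]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# Mann-Whitney U test comparing ISS between the two transport groups.
iss_within_1h = np.array([22, 17, 29, 34, 25, 19])       # hypothetical ISS values
iss_remainder = np.array([9, 5, 13, 10, 8, 14, 6, 11])
u_stat, p_mw = stats.mannwhitneyu(iss_within_1h, iss_remainder,
                                  alternative="two-sided")

print(f"chi-square p={p_chi:.3f}, Mann-Whitney p={p_mw:.3f}")
```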

Results: A total of 1,590 patients were transported by helicopter. Only 32% (n=507) were level 1 activations, 60% (n=962) were level 2 activations, and 8% (n=121) were not a trauma activation upon arrival. Thirty-nine percent of patients (n=612) were admitted directly to the floor from the trauma bay, and 16% (n=249) required only observation or were discharged home after helicopter transfer. Roughly one third of the entire study cohort (36%, n=572) required any procedure, with a median time to procedure of 31.5 hours (IQR 54.4). Of these, 13% (n=74) required a procedure within 1 hour of helicopter transport. There was a significant difference in median ISS for patients who required a procedure within 1 hour of transport (median 22, IQR=27) versus the remainder of the helicopter-transported cohort (median 9, IQR=12) (p<0.001). Had patients been driven by ground transport rather than flown, the average distance would have been 67.0 miles (SD±27.9) with an estimated travel time of 71.5 minutes (SD±28.4) for patients who required a procedure within 1 hour, compared to 61.6 miles (SD±30.9) with an estimated 66.1 minutes (SD±30.8) for the remainder of the cohort (p=0.899 and p=0.680, respectively). In the group who required a procedure within 1 hour, 24.3% of patients had a penetrating injury compared to 6.4% for the remainder of the cohort (p<0.001).

Conclusion: This analysis demonstrates that helicopter transport was unnecessary for the vast majority of trauma patients, as they did not meet criteria for Level 1 trauma activation and did not require emergent interventions to treat their injuries. However, there was a significant difference in ISS and type of injury for patients who required a procedure within one hour of transport. Stricter selection criteria are needed to determine which patients should be transported by helicopter.

78.08 Variability of Radiological Grading of Blunt Cerebrovascular Injuries in Trauma Patients

A. K. LaRiccia1,2, T. W. Wolff1,2, M. O’Mara1, T. V. Nguyen1, J. Hill1, D. J. Magee4, R. Patel4, D. W. Hoenninger4, M. Spalding1,3  1Ohiohealth Grant Medical Center,Trauma And Acute Care Surgery,Columbus, OH, USA 2Ohiohealth Doctors Hospital,Surgery,Columbus, OH, USA 3Ohio University Heritage College of Osteopathic Medicine,Dublin, OH, USA 4Ohiohealth,Columbus Radiology,Columbus, OH, USA

Introduction: Blunt cerebrovascular injury (BCVI) occurs in 1-2% of all blunt trauma patients. Computed tomographic angiography of the neck (CTAn) has become commonplace for diagnosis and severity determination of BCVIs. Management often escalates with injury grade, and inaccurate grading can lead to both under- and over-treatment of these injuries. Several studies have investigated the sensitivity of CTAn; however, inter-reader reliability remains poorly understood. In this study, we determined the extent of variability in BCVI grades among neuro-radiologists interpreting CTAn in traumatically injured patients.

Methods: This was a retrospective review of trauma patients with a BCVI reported on initial CTAn imaging who were admitted to an urban Level I trauma center from January 2012 to December 2017. Patients were randomly assigned for CTAn re-evaluation by two of three blinded, independent neuro-radiologists. The evaluations were compared, and the variability among BCVI grades was measured using the coefficient of unalikeability (u), which quantifies variability for categorical variables on a scale of 1-100, where higher values indicate more unalike data. Inter-reader reliability of the radiologists was calculated using weighted Cohen’s kappa (k).
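For readers unfamiliar with these two metrics, the sketch below shows one way they could be computed in Python. This is not the authors' code: the grades are hypothetical, and the unalikeability formula used here (u = (1 − Σpᵢ²) × 100) is the standard definition, which may differ in detail from the authors' implementation.

```python
# Illustrative sketch only: hypothetical grades, not the study data.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def unalikeability(grades):
    """Coefficient of unalikeability, scaled to 0-100:
    (1 - sum of squared category proportions) * 100."""
    _, counts = np.unique(np.asarray(grades), return_counts=True)
    p = counts / counts.sum()
    return (1.0 - np.sum(p ** 2)) * 100

# BCVI grades assigned to the same injury by the original read and two re-reads.
per_injury_grades = [(1, 1, 2), (2, 2, 2), (3, 1, 4), (4, 4, 4)]
u_values = [unalikeability(g) for g in per_injury_grades]

# Inter-reader reliability between the two blinded neuro-radiologists
# (linearly weighted Cohen's kappa, since grades are ordinal).
reader_a = [1, 2, 2, 3, 4, 1, 2]
reader_b = [2, 2, 1, 4, 4, 1, 3]
kappa = cohen_kappa_score(reader_a, reader_b, weights="linear")

print(u_values, round(kappa, 2))
```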

Results: In total, 228 BCVIs in 217 patients were analyzed. Seventy-six (33%) involved the carotid vessels, 144 (63%) involved only vertebral vessels, and 8 (4%) involved both. The initial grades consisted of 71 (31%) grade 1, 74 (32%) grade 2, 26 (11%) grade 3, 57 (25%) grade 4, and 0 grade 5. Interpretation variability was present in 93 (41%) of all BCVIs. Initial grade 1 injuries had the lowest occurrence of uniform consensus (u = 1) with a mean of 31% among all interpretations (see figure). Grade 4 injuries had the highest consensus (92%). Grade 2 and 3 injuries had a mean consensus of 63% and 61%, respectively. Total variability of grade interpretations (u = 100) occurred most frequently with grade 3 BCVIs (21%). No significant differences were found between carotid and vertebral injuries. Weighted Cohen’s k calculations had a mean of 0.07, indicating poor reader agreement. Treatment recommendations would have been affected in 30% of these patients, with the treatment scope downgraded in 22% and upgraded in 8%.

Conclusion: Our study revealed variability in the initial radiological grade interpretation of BCVI in more than a third of patients, with poor reader agreement. The reliability of CTAn interpretation of BCVI grades is not uniform, potentially leading to undertreatment in 8% of patients and worse neurologic outcomes. Comparison with the variability of digital subtraction angiography may be beneficial to further understand the complexity of BCVI radiologic injury grading.

78.07 Does Time Truly Heal All? A Longitudinal Analysis of Recovery Trajectories One Year After Injury

A. Toppo6,7, J. P. Herrera-Escobar6, R. Manzano-Nunez6, J. B. Chang3, K. Brasel2, H. M. Kaafarani3, G. Velmahos3, G. Kasotakis5, A. Salim1, D. Nehra1, A. H. Haider1,6  1Brigham And Women’s Hospital,Division Of Trauma, Burn, & Surgical Critical Care,Boston, MA, USA 2Oregon Health And Science University,Department Of Surgery,Portland, OR, USA 3Massachusetts General Hospital,Division Of Trauma, Emergency Surgery, & Surgical Critical Care,Boston, MA, USA 5Boston University School Of Medicine,Division Of Acute Care, Trauma Surgery, & Surgical Critical Care,Boston, MA, USA 6Brigham And Women’s Hospital,Center For Surgery And Public Health,Boston, MA, USA 7Tufts University School Of Medicine,Boston, MA, USA

Introduction: We are increasingly aware that trauma patients who survive to hospital discharge often suffer significant long-term consequences of their injury, including physical disability, psychological disturbance, chronic pain, and overall reduced quality of life. The recovery trajectory of traumatically injured patients is less well understood. In this study, we aim to describe the recovery trajectories of moderately-to-severely injured patients from 6 to 12 months after injury.

Methods: Adult trauma patients with moderate-to-severe injuries (ISS ≥ 9) admitted to one of three Level 1 Trauma Centers in Boston between 2016 and 2018 were contacted by phone at 6 and 12 months post-injury. Patients were asked to complete the 12-item Short-Form Health Survey (SF-12) to assess physical health, mental health, social functioning, and bodily pain, a validated Trauma Quality of Life (TQoL) questionnaire, and a screen for PTSD. This information was linked to the index hospitalization through the trauma registry. A longitudinal analysis evaluated the change in outcomes between 6 and 12 months post-injury. Outcomes were also evaluated for gender and age subgroups (young, <65 years of age; old, ≥65 years of age).

Results: A total of 271 patients completed the phone screen at both 6 and 12 months post-injury. Overall, physical health improved significantly from six to twelve months post-injury (p < 0.001) but still remained well below the population norm (Figure 1A). Conversely, mental health was similar to the population norm at both 6 and 12 months post-injury (Figure 1B). The elderly exhibited better social functioning than the young at both time points and remained within population norms. Young males experienced a significant improvement in social functioning over time, reaching the population norm by 12 months post-injury. Young females, in contrast, demonstrated no improvement in social functioning over time and remained well below population norms even 12 months post-injury (Figure 1C). Overall, 50% of patients reported daily pain at 6 months post-injury, and 75% of these patients continued to have daily pain at 12 months post-injury. Looking at the SF-12 pain scores, only young females experienced significant improvement in bodily pain scores over time (Figure 1D). PTSD screens were positive for 20% of patients at 6 months post-injury, and 76% of these still screened positive at 12 months.

Conclusion: The recovery trajectories of trauma patients between 6 and 12 months post-injury are not encouraging with minimal to no improvement in overall physical health, mental health, social functioning, and chronic pain. These recovery trajectories deserve further study so that appropriate post-discharge support services can be developed.

78.06 Outcomes in Trauma Patients with Behavioral Health Disorders

M. Harfouche1, M. Mazzei1, J. Beard1, L. Mason1, Z. Maher1, E. Dauer1, L. Sjoholm1, T. Santora1, A. Goldberg1, A. Pathak1  1Temple University,Trauma,Philadelphia, PA, USA

Introduction: The relationship between behavioral health disorders (BHDs) and outcomes after traumatic injury is not well understood, and the data are evolving. The objective of this study was to evaluate the association between BHDs and outcomes such as mortality, length of stay (LOS), and inpatient complications in the trauma patient population.

Methods: We performed a review of the Trauma Quality Improvement Program (TQIP) database from 2013 to 2016, comparing patients with and without a BHD. Patients were classified as having a BHD if they had a comorbidity listed as a psychiatric disorder, alcohol abuse, drug abuse, dementia, or attention deficit hyperactivity disorder (ADHD). Psychiatric disorders included major depressive disorder, bipolar disorder, schizophrenia, anxiety/panic disorder, borderline or antisocial personality disorder, and/or adjustment disorder/post-traumatic stress disorder. Descriptive statistics were performed, and multivariable regression examined mortality, LOS, and inpatient complications. Statistics were performed using Stata/IC v15.

Results: In the study population, 254,882 (25%) patients were reported to have a BHD. Of these, psychiatric disorders were most prevalent at 38.3% (n=97,668), followed by alcohol abuse (33.3%, n=84,845), substance abuse (26.4%, n=67,199), dementia (20.2%, n=51,553), and ADHD (1.7%, n=4,301). There was no difference in age between the groups (mean 44.1 v 44.3 years in the BHD v non-BHD groups); however, the BHD group was more likely to be female (38.4% v 37.4%, OR 1.04, CI 1.03-1.05, p<0.001). Overall mortality was lower in the BHD group (OR 0.81, CI 0.79-0.83, p<0.001) when controlling for age, gender, race, injury severity score, and non-BHD comorbidities such as stroke, chronic obstructive pulmonary disease, congestive heart failure, diabetes, and hypertension. Within the BHD group, patients with dementia had an increased likelihood of mortality when controlling for other risk factors (OR 1.62, CI 1.56-1.69, p<0.001). LOS was 8.4 days (s=0.02) for patients with a BHD versus 7.3 days (s=0.01) for patients without a BHD (p<0.001). Comorbid BHD was significantly associated with any inpatient complication (OR 1.19, CI 1.18-1.20, p<0.001). Select complications are presented in Table 1.

Conclusion: Trauma patients with a BHD have a lower overall mortality risk when compared to those without a BHD. However, subgroup analysis revealed that among patients with a BHD, those with dementia have an increased mortality risk. BHD increased risk for any inpatient complication overall and prolonged the LOS.  Further study is needed to define and understand the risk factors for these associations.

 

78.05 Elderly Falls Hotspots – A Novel Design for Falls Research and Strategy-Implementation Programs

S. Hawkins1, L. Khoury1, V. Sim1, A. Gave1, M. Panzo1, S. M. Cohn1  1Staten Island University Hospital-Northwell Health,Surgery,Staten Island, NY, USA

Introduction:
Falls in the elderly remain a growing public health burden despite decades of research on a variety of falls-prevention strategies. This trend is likely due to current strategies capturing only a limited proportion of those in the community at risk for falls, and a new approach focused on wider community-based dissemination of falls-prevention strategies is called for. We created a model that identifies high-risk areas, or “hot-spots,” for falls, thereby defining community-based study populations for subsequent falls-reduction strategy and implementation research.

Methods:
We queried the trauma registry of a level 1 trauma center serving a largely captured trauma population in a dense urban-suburban setting. We extracted the residential addresses of all patients aged 60 and over who were admitted with a fall mechanism between 2014 and 2017. We used geographic information systems software to map the addresses to census zones and generated a heat map representing the fall density within each zone in our region.
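Once addresses have been geocoded and assigned to census tracts in GIS software, the hotspot definition reduces to a simple density calculation. A minimal sketch, assuming hypothetical tract-level counts and areas (not the registry data):

```python
# Illustrative sketch only: the address-to-tract assignment is assumed to have
# been done upstream in GIS software; counts and areas below are hypothetical.
import pandas as pd

tracts = pd.DataFrame({
    "tract_id": ["A", "B", "C", "D"],
    "falls_60plus_2014_2017": [12, 95, 140, 30],   # registry fall admissions
    "area_sq_km": [0.8, 0.9, 1.0, 0.5],
})

tracts["falls_per_sq_km"] = tracts["falls_60plus_2014_2017"] / tracts["area_sq_km"]
tracts["hotspot"] = tracts["falls_per_sq_km"] > 80   # density threshold from the abstract

print(tracts[["tract_id", "falls_per_sq_km", "hotspot"]])
```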

Results:
The area is served by two trauma centers that capture nearly all of the trauma volume of a region with a population of nearly half a million. The county is divided into 107 populated census tracts that range from 0.3 to 1.1 square km. The incidence of falls in the elderly was consistent over the 4 years of study throughout the populated census zones within the hospital’s catchment area. The density of residents who presented to the trauma center with a fall mechanism ranged from less than 1 to 180 per sq km. There were 6 census zones with falls density above 80, which can be considered “hot-spots” for falls risk (see Figure). These zones are similar with respect to land use, population, and demographics.

Conclusion:
Using geographic information systems with trauma registry data, we identified discrete geographic regions with a higher density of elderly falls. These “hot-spots” will be the target of future community-directed falls-reduction strategy and implementation research.
 

78.04 Early versus late venous thromboembolism: a secondary analysis of data from the PROPPR trial

S. P. Myers1, J. B. Brown1, X. Chen1, C. E. Wade2,3,4, J. C. Cardenas2,3, M. D. Neal1  1University Of Pittsburgh,Division Of Trauma And General Surgery, Department Of Surgery,Pittsburgh, PA, USA 2McGovern Medical School at UTHealth,Division Of Acute Care Surgery, Department Of Surgery, McGovern School Of Medicine,Houston, TX, USA 3McGovern Medical School at UTHealth,Center For Translational Injury Research,Houston, TX, USA 4McGovern Medical School at UTHealth,Center For Translational And Clinical Studies,Houston, TX, USA

Introduction: Venous thromboembolic events (VTE) are common after severe injury, but factors predicting their timing remain incompletely understood. As the balance between hemorrhage and thrombosis is dynamic during a patient’s hospital course, early and late VTE may be physiologically discrete processes. We conducted a secondary analysis of the Pragmatic, Randomized Optimal Platelet and Plasma Ratios (PROPPR) trial hypothesizing that risk factors would differ between early and late VTE.

Methods: A threshold separating early and late events was determined by cubic spline analysis of the VTE distribution. Univariate analysis assessed the association of delayed resuscitation with early or late VTE. Multinomial regression was used to analyze the association of clinical variables with early or late VTE compared to no VTE, adjusting for predetermined confounders including mortality, demographics, injury mechanism/severity, blood products, hemostatic adjuncts, and comorbidities. Serially collected coagulation assays were analyzed for differences that might distinguish early and late VTE from no VTE.
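A minimal sketch of a multinomial model of early versus late versus no VTE is shown below. The data, variable names, and confounder set are hypothetical stand-ins, not the PROPPR dataset or the authors' model specification.

```python
# Illustrative sketch only: hypothetical data and variable names.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(40, 15, n),
    "femur_fracture": rng.integers(0, 2, n),
    "plasma_units": rng.poisson(4, n),
    # 0 = no VTE, 1 = early VTE (<=12 days), 2 = late VTE (>12 days)
    "vte_class": rng.choice([0, 1, 2], size=n, p=[0.85, 0.10, 0.05]),
})

X = sm.add_constant(df[["age", "femur_fracture", "plasma_units"]])
model = sm.MNLogit(df["vte_class"], X).fit(disp=0)

# Exponentiated coefficients: relative risk ratios for early and late VTE
# versus the no-VTE reference category.
print(np.exp(model.params))
```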

Results: After plotting VTE distribution over time, cubic spline analysis established a threshold at 12 days corresponding to a change in odds of early versus late events (Figure 1). Multinomial regression revealed differences between early and late VTE.  Variables associated with early but not late VTE included older age (RR 1.03; 95%CI 1.01, 1.05; p=0.01), femur fracture (RR 2.96; 95%CI 0.99, 8.84; p=0.05), chemical paralysis (RR 2.67; 95%CI 1.20, 5.92; p=0.02), traumatic brain injury (RR 14.17; 95%CI 0.94, 213.57; p=0.05), and plasma transfusion (RR 1.13; 95%CI 1.00, 1.28, p=0.05). In contrast, late VTE events were predicted by vasopressor use (RR 4.49; 95%CI 1.24, 16.30; p=0.02) and ICU length of stay (RR 1.11; 95%CI 1.02, 1.21; p=0.02). Sepsis increased risk of early (RR 3.76, 95% CI 1.71, 8.26; p<0.01) and late VTE (5.91; 95% CI 1.46, 23.81; p=0.01). Coagulation assays also differed between early and late VTE. Prolonged lag time (RR 1.05, 95% CI 0.99, 1.1; p=0.05) and time to peak thrombin generation (RR 1.03; 95% CI 1.00, 1.06; p=0.02) were associated with increased risk of early VTE alone. Delayed resuscitation approaching ratios of 1:1:1 for plasma, platelets, and red blood cells among patients randomized to 1:1:2 therapy was a risk factor for late (RR 6.69; 95% CI 1.25, 35.64; p=0.03) but not early VTE.

Conclusion: There is evidence to support that early and late thromboembolic events may differ in their pathophysiology and clinically relevant risk factors. Defining chronologic thresholds and clinical markers associated with temporal trends in VTE distribution may allow for a more individualized approach to thromboprophylaxis.

 

78.03 Association of TXA with VTE in Trauma Patients: A Preliminary Report of an EAST Multicenter Study

L. Rivas1, M. Vella8, J. Pascual8, G. Tortorello8, D. Turay9, J. Babcock9, A. Ratnasekera4, A. H. Warner6, D. R. Mederos5, J. Berne5, M. Mount2, T. Schroeppel7, M. Carrick3, B. Sarani1  1George Washington University School Of Medicine And Health Sciences,Surgery,Washington, DC, USA 2Spartanburg Medical Center,Surgery,Spartanburg, SC, USA 3Plano Medical Center,Surgery,Plano, TX, USA 4Crozier Keystone Medical Center,Surgery,Chester, PA, USA 5Broward Health Medical Center,Surgery,Fort Lauderdale, FL, USA 6Christiana Care Medical Center,Surgery,Newark, DE, USA 7University of Colorado Colorado Springs,Surgery,Colorado Springs, CO, USA 8University Of Pennsylvania,Surgery,Philadelphia, PA, USA 9Loma Linda University School Of Medicine,Surgery,Loma Linda, CA, USA

Introduction: Tranexamic acid (TXA) is an anti-fibrinolytic agent that lowers the mortality of injured patients who are bleeding or at risk of bleeding. It is commonly used in trauma centers as an adjunct to massive transfusion protocols in the management of bleeding patients. However, its potent antifibrinolytic activity may result in an increased risk of venous thromboembolism (VTE). We hypothesized that the incidence of VTE was greater in injured persons receiving TXA along with massive transfusion.

Methods: A multicenter, retrospective study was performed. Inclusion criteria were age 18 years or older and receipt of 10 or more units of blood in the first 24 hours after injury. Exclusion criteria were death within 24 hours, pregnancy, and routine ultrasound surveillance for possible asymptomatic deep venous thrombosis (DVT). Patients were divided into 2 cohorts based on whether or not they received TXA. Incidence of VTE was the primary outcome. Secondary outcomes included myocardial infarction (MI), stroke (CVA), and death. Multivariate logistic regression analysis was performed to control for demographic and clinically significant variables. A power analysis using expected DVT and PE rates based on prior studies found that a total of 830 patients would be needed to detect a statistically significant difference with a minimum power of 80%.
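The sample-size calculation described above can be reproduced in outline with a two-proportion power analysis. The event rates below are placeholders, not the rates the investigators actually assumed.

```python
# Illustrative sketch only: assumed event rates are hypothetical placeholders.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_dvt_txa, p_dvt_no_txa = 0.16, 0.09      # assumed DVT rates in the two cohorts
effect_size = proportion_effectsize(p_dvt_txa, p_dvt_no_txa)

analysis = NormalIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size,
                                   alpha=0.05, power=0.80, ratio=1.0)
print(f"~{2 * n_per_group:.0f} total patients needed")
```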

Results: A total of 269 patients fulfilled criteria, 124 (46%) of whom received TXA. No difference was noted in age (31 v 29, p=0.81), injury severity score (29 v 27, p=0.47), or mechanism of injury (62% penetrating v 61% blunt, p=0.81). Patients who received TXA had a significantly lower systolic blood pressure on arrival (90 mmHg vs 107 mmHg, p=0.002). Incidence of VTE did not differ between patients who received TXA and those who did not (DVT: 16% vs 13%, p=0.48; PE: 8% vs 6%, p=0.55). There was no difference in CVA or MI. There was no difference in mortality on multivariate analysis (OR 0.67, CI 0.30-1.12).

Conclusion: This preliminary report did not find an association between TXA and VTE or other prothrombotic complications. It remains to be seen whether more subtle differences between groups will become manifest when study accrual is complete.

 

78.02 Multicenter observational analysis of soft tissue infections: organisms and outcomes

A. Louis1, S. Savage2, W. Li2, G. Utter3, S. Ross4, B. Sarani5, T. Duane6, P. Murphy7, M. Zielinski8, J. Tierney9, T. Schroeppel10, L. Kobayashi11, K. Schuster12, L. Timsina2, M. Crandall1  1University of Florida College of Medicine Jacksonville,Surgery,Jacksonville, FL, USA 2Indiana University School Of Medicine,Surgery,Indianapolis, IN, USA 3University Of California – Davis,Surgery,Sacramento, CA, USA 4Cooper University Hospital,Surgery,Camden, NJ, USA 5George Washington University School Of Medicine And Health Sciences,Surgery,Washington, DC, USA 6JPS Health Network,Surgery,Fort Worth, TX, USA 7University of Western Ontario,Surgery,London, ON, Canada 8Mayo Clinic,Surgery,Rochester, MN, USA 9University Of Colorado Denver,Surgery,Aurora, CO, USA 10University of Colorado,Surgery,Colorado Springs, CO, USA 11University Of California – San Diego,Surgery,San Diego, CA, USA 12Yale University School Of Medicine,Surgery,New Haven, CT, USA

Introduction: Skin and soft tissue infections (STIs) run the spectrum from mild cellulitis to life-threatening necrotizing infections. The severity of illness may be affected by a variety of factors, including the organism involved and patient comorbidities. The American Association for the Surgery of Trauma (AAST) has spent the last five years developing grading scales for impactful Emergency General Surgery (EGS) diseases, including STIs. The purpose of this study was to characterize patient and infection factors associated with increasing severity of STI using the AAST EGS grading scale.

Methods: This study was a retrospective multi-institutional study, with each of 12 centers contributing 100 patients to the data set. Patient demographics, comorbidities, and infection data were collected for each patient, as were outcomes including management strategy, mortality, and hospital and intensive care unit (ICU) length of stay (LOS). Data were compared using Student’s t-test and the Wilcoxon rank-sum test where appropriate. Simple and multivariable logistic regression, as well as ANOVA, were also used in the analysis.

Results: A total of 1,140 patients were included in this analysis. The mean age of the cohort was 53 years (SD 19), and 68% of the patients were male. Hospital stay and mortality risk increased with STI grade (Table 1). The only statistically significant difference was noted between Group 3 and Group 5 (p=0.002). Higher-grade EGS STIs were significantly associated with infection by Gram-positive organisms (GPC) compared to Gram-negative rods (GNR) (OR 0.09, 95% CI 0.06-0.14, p<0.001 for Grade 5). Polymicrobial infections were also significantly more common with higher-grade STI (compared to STI Grade 1: Grade 2 OR 2.29 (95% CI 1.18-4.41); Grade 3 OR 5.11 (95% CI 3.12-8.39); Grade 4 OR 4.28 (95% CI 2.49-7.35); Grade 5 OR 2.86 (95% CI 1.67-4.87); all p-values <0.001). GPC infections were associated with significantly more surgical debridements per patient (GNR 1.64 (SD 1.83) versus GPC 2.37 (SD 2.7), p<0.001). There were no significant differences in the preponderance of organisms by region of the country except in Canada, which had a significantly higher incidence of GNRs compared to GPCs.

Conclusion: This study provides additional insight into the nature of STIs. Higher-grade STIs are dominated by GPCs, which also require more aggressive surgical debridement. Understanding the natural history of these life-threatening infections will allow centers to plan their operative and antibiotic approach more effectively.

 

78.01 Attenuation of a Subset of Protective Cytokines Correlates with Adverse Outcomes After Severe Injury

J. Cai1, I. Billiar2, Y. Vodovotz1, T. R. Billiar1, R. A. Namas1  1University Of Pittsburgh,Pittsburgh, PA, USA 2University Of Chicago,Chicago, IL, USA

Introduction: Blunt trauma elicits a complex, multi-dimensional inflammatory response that is intertwined with late complications such as nosocomial infection and multiple organ dysfunction. Among multiple presenting factors (e.g., age and gender), the magnitude of injury severity appears to have the greatest impact on the inflammatory response, which in turn correlates with clinical trajectories in trauma patients. However, a relatively limited number of inflammatory mediators have been characterized in human trauma. Here, we sought to characterize the time-course changes in 31 cytokines and chemokines in a large cohort of blunt trauma patients and to analyze differences as a function of injury severity.

Methods: Using clinical and biobank data from 472 blunt trauma patients who were admitted to the intensive care unit (ICU) and survived to discharge, three groups were identified based on injury severity score (ISS): mild (ISS 1-15, n=180), moderate (ISS 15-24, n=170), and severe (ISS ≥25, n=122). Three samples were obtained from all patients within the first 24 h and then daily up to day 7 post-injury. Thirty-one cytokines and chemokines were assayed using Luminex™ and analyzed using the Kruskal-Wallis test (P<0.05). Principal component analysis (PCA) was used to define the principal characteristics/drivers of the inflammatory response in each group.
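A minimal sketch of the two analysis steps named above (Kruskal-Wallis comparison across severity groups and PCA across the mediator panel), using simulated values rather than the Luminex data:

```python
# Illustrative sketch only: simulated mediator values, not the Luminex data.
import numpy as np
from scipy.stats import kruskal
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
mild = rng.lognormal(2.0, 0.5, 180)
moderate = rng.lognormal(2.1, 0.5, 170)
severe = rng.lognormal(2.5, 0.5, 122)

# Kruskal-Wallis test across the three injury-severity groups for one mediator.
h_stat, p_value = kruskal(mild, moderate, severe)

# PCA across the full panel (rows = samples, columns = 31 cytokines/chemokines)
# to identify the principal drivers of the early inflammatory response.
X = rng.lognormal(2.0, 0.5, size=(472, 31))
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=3).fit(X_scaled)

print(p_value, pca.explained_variance_ratio_)
```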

Results: The severe group had significantly longer ICU and hospital stays, more days on mechanical ventilation, and a higher prevalence of nosocomial infection (47%) when compared to the mild and moderate groups (16% and 24%, respectively). Time-course analysis of biomarker trajectories showed that 21 inflammatory mediators were significantly higher in the severe group upon admission and over time versus the mild and moderate groups. However, 8 inflammatory mediators (IL-22, IL-9, IL-33, IL-21, IL-23, IL-17E/25, IP-10, and MIG) were significantly attenuated during the initial 16 h post-injury in the severe group when compared to the mild and moderate groups. PCA suggested that the circulating inflammatory response during the initial 16 h in the mild and moderate groups was characterized primarily by IL-13, IL-1β, IL-22, IL-9, IL-33, and IL-4. Interestingly, beyond 16 h post-injury, IL-4, IL-17A, IL-13, IL-9, IL-1β, and IL-7 were the primary characteristics of the inflammatory response in the severe group.

Conclusion: These findings suggest that severe injury is associated with an early suppression of a subset of cytokines known to be involved in tissue protection and regeneration (IL-22, IL-33, IL-25 and IL-9), lymphocyte differentiation (IL-21 and IL-23) and cell trafficking (CXC chemokines) post-injury which in turn correlates with adverse clinical outcomes. Therapies targeting the immune response after injury may need to be tailored differently based on injury severity and could be personalized by the measurement of inflammatory biomarker patterns.

 

77.10 Kidney Donor Contrast Exposure and Recipient Clinical Outcomes

S. Bajpai1, W. C. Goggins1, R. S. Mangus1  1Indiana University School Of Medicine,Surgery / Transplant,Indianapolis, IN, USA

Introduction:
The use of contrast media in hospital procedures has been increasing since their initial use in 1923. Despite developments in utility and safety over time, contrast media have been associated with kidney injury in exposed patients. Several studies have investigated contrast-induced nephropathy (CIN) in hospitalized patients and in kidney recipients post-transplant. However, few studies connect kidney donor contrast exposure to kidney transplant recipient outcomes. This study reviews all deceased kidney donors at a single center over a 15-year period to determine whether donor contrast exposure results in CIN in the donor or is associated with delayed graft function or graft survival in the transplant recipient.

Methods:
The records of all deceased donor kidney transplants were reviewed. Donor initial, peak, and last serum creatinine levels were recorded. Recipient renal function was recorded, including delayed graft function, creatinine clearance at one year, and graft survival at 36 months. Donor contrast exposure was recorded and generally included computed tomography studies and angiograms. Contrast dosing was not available, so exposure was recorded as the number of contrasted studies received by the donor.

Results:
The records of 1585 deceased donor kidney transplants were reviewed. Complete donor records were available for 1394 (88%). Fifty-one percent of donors received at least one contrast study (38% one study, 12% two studies, 1% three studies). Donor contrast exposure was not associated with any significant change in donor pre-procurement serum creatinine levels. Post-transplant, donor contrast exposure was not associated with risk of delayed graft function (4% for all groups), nor with kidney graft survival at 7, 30, or 90 days. Creatinine clearance at 1 year was equivalent across the study groups. Cox regression analysis demonstrated slightly higher graft survival at 36 months post-transplant for donor grafts that had been exposed to contrast (p=0.02).

Conclusion:
These results fail to demonstrate any negative effect of donor contrast administration on early or late kidney graft function in a large number of exposed patients over a long time period. They include donor kidneys exposed to as many as 3 contrasted studies prior to graft procurement. Long-term survival was higher in donor grafts exposed to any contrast; this finding may be related to more desirable donors undergoing more extensive pre-donation testing.
 

77.08 Racial Disparities in Access to Kidney Transplantation: Insights from the Modern Era

K. Covarrubias1, K. R. Jackson1, J. H. Chen1, C. M. Holscher1, T. Purnell1, A. B. Massie1, D. L. Segev1, J. M. Garonzik-Wang1  1The Johns Hopkins University School Of Medicine,Surgery,Baltimore, MD, USA

Introduction: One goal of the new Kidney Allocation System (KAS) was to increase access to deceased donor kidney transplantation (DDKT) for racial and ethnic minorities, who prior to KAS had lower DDKT rates than Whites. Early studies after KAS implementation reported narrowing disparities in DDKT rates for Black and Hispanic candidates; however, it is unclear if these changes have translated into long-term equivalent DDKT rates for racial and ethnic minorities.

Methods: We studied 270,722 DDKT candidates using SRTR data from 12/4/2011–12/3/2014 (‘pre-KAS’) and 12/4/2014–12/3/2017 (‘post-KAS’), analyzing DDKT rates for Black, Hispanic, and Asian candidates using negative binomial regression, adjusting for candidate characteristics. We first determined whether DDKT rates for each race/ethnicity had improved post-KAS compared to pre-KAS, and then whether these changes had resulted in equivalent DDKT rates for minorities compared to Whites. We then calculated the cumulative incidence of DDKT for each race using a competing-risk framework.
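A minimal sketch of a negative binomial rate model in which exponentiated coefficients are read as adjusted incidence rate ratios; the candidate-level data and covariates are simulated, and the authors' actual model specification may differ.

```python
# Illustrative sketch only: simulated candidate-level data and variable names.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "black":    rng.integers(0, 2, n),
    "post_kas": rng.integers(0, 2, n),
    "age":      rng.normal(50, 13, n),
    "years_at_risk": rng.uniform(0.5, 3.0, n),
})
df["ddkt_count"] = rng.poisson(0.2 * df["years_at_risk"])

X = sm.add_constant(df[["black", "post_kas", "age"]])
nb = sm.GLM(df["ddkt_count"], X,
            family=sm.families.NegativeBinomial(),
            exposure=df["years_at_risk"]).fit()

# Exponentiated coefficients are adjusted incidence rate ratios (aIRR).
print(np.exp(nb.params))
```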

Results: Post-KAS, Black candidates had an increased DDKT rate compared to pre-KAS (adjusted incidence rate ratio [aIRR] 1.13, 95% CI 1.01-1.25, p=0.03). However, there was no post-KAS change in the DDKT rate for Hispanic candidates (aIRR 0.96, 95% CI 0.83-1.11, p=0.6) and a decrease in the DDKT rate for Asian candidates (aIRR 0.79, 95% CI 0.66-0.94, p=0.009). Relative to White candidates, KAS resulted in a similar DDKT rate for Black candidates (aIRR 0.99, 95% CI 0.87-1.12, p=0.9), but a decreased DDKT rate for Hispanic (aIRR 0.74, 95% CI 0.56-0.98, p=0.04) and Asian (aIRR 0.72, 95% CI 0.56-0.93, p=0.01) candidates. The range of the likelihood of DDKT at 3 years for a given racial/ethnic minority narrowed post-KAS (range 28.7-32.4%) compared to pre-KAS (range 27.0-34.3%). The 3-year cumulative incidence of DDKT improved post-KAS for Black (pre-KAS: 29.5%; post-KAS: 34.9%) and Hispanic candidates (pre-KAS: 27.0%; post-KAS: 30.5%). However, the 3-year cumulative incidence of DDKT remained similar for Asian candidates (pre-KAS: 29.0%; post-KAS: 28.7%), while it decreased for White candidates (pre-KAS: 34.3%; post-KAS: 31.6%).

Conclusion: KAS has produced sustained improvements in DDKT rates for Black candidates, but not for Hispanic or Asian candidates. Nevertheless, the cumulative incidence of DDKT has become more similar post-KAS. While KAS has been successful in improving access to DDKT for Blacks, further work is necessary to identify methods to improve DDKT rates for Hispanic and Asian candidates.

 

77.07 Earlier is better: Evaluating the timing of tracheostomy after liver transplantation

R. A. Jean1, S. M. Miller1, A. S. Chiu1, P. S. Yoo1  1Yale University School Of Medicine,Department Of Surgery,New Haven, CT, USA

Introduction: Morbidity and mortality are relatively high following liver transplantation. Furthermore, severe pulmonary complications progressing to respiratory failure, though rare, are associated with increased postoperative mortality and prolonged hospitalization. Although these cases may require tracheostomy, there is uncertainty regarding how soon this should be pursued. The purpose of this study is to quantify the comparative effectiveness of early versus late tracheostomy in postoperative liver transplant patients in relation to in-hospital mortality and length of stay.

Methods: The National Inpatient Sample (NIS) dataset between 2000 and 2014 was queried for discharges of adult patients who underwent both orthotopic liver transplant (OLT) and post-transplant tracheostomy (PTT). Patients receiving tracheostomy by post-transplant day 14 were classified as “early” tracheostomies, while those receiving it after day 14 were classified as “late.” In-hospital mortality was compared between groups using adjusted logistic regression models. Cox proportional hazards regression was used to model the impact of early tracheostomy on post-tracheostomy length of stay (PTLOS), accounting for the competing risk of inpatient mortality.
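A minimal sketch of the Cox step, using the lifelines library and simulated data; a cause-specific formulation (censoring in-hospital deaths when modeling time to discharge alive) is shown here as one way to handle the competing risk, and may not match the authors' exact approach.

```python
# Illustrative sketch only: simulated data; a cause-specific Cox model is one
# (not the only) way to account for the competing risk of in-hospital death
# when modeling time to discharge alive.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 800
df = pd.DataFrame({
    "early_trach": rng.integers(0, 2, n),
    "cci_3plus":   rng.integers(0, 2, n),
    "los_after_trach": rng.exponential(35, n).round() + 1,
    "died_inpatient":  rng.integers(0, 2, n),
})
# Event of interest = discharged alive; in-hospital deaths are censored.
df["discharged_alive"] = 1 - df["died_inpatient"]

cph = CoxPHFitter()
cph.fit(df[["early_trach", "cci_3plus", "los_after_trach", "discharged_alive"]],
        duration_col="los_after_trach", event_col="discharged_alive")
cph.print_summary()   # exp(coef) for early_trach ~ daily rate of discharge alive
```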

Results: There were 2,149 weighted discharges after OLT and PTT during the study period, of which 783 (36.4%) had tracheostomy performed by post-transplant day 14 and were classified as “early.” Patients receiving early PTT were more likely to have a Charlson Comorbidity Index (CCI) of 3+ compared to those receiving late PTT (early 71.1% vs late 60.0%, p=0.04), but there were otherwise no significant baseline differences between groups. Despite this increased comorbidity, early PTT had significantly lower in-hospital mortality (early 26.4% vs late 36.7%, p=0.01). Unadjusted median PTLOS was 31 days (IQR 20-48 days) for early PTT versus 39 days (IQR 23-61 days) for late PTT (p=0.03). In adjusted logistic regression, early PTT was associated with 37% decreased odds of in-hospital mortality in comparison to late PTT (OR 0.63, p=0.04). Furthermore, after accounting for the competing risk of mortality, early tracheostomy was associated with a 41% higher daily rate of discharge alive during the post-transplant hospitalization (HR 1.41, p<0.0001).

Conclusion: Among patients with OLT, early PTT, despite being performed on patients with significantly higher comorbidity scores, was associated with lower in-hospital mortality, shorter PTLOS, and quicker discharge alive. These results support our hypothesis that among patients with respiratory failure after OLT, early consideration of PTT may portend more favorable outcomes than a delayed approach.

 

77.06 Impact of Donor Diabetes on the Survival of Lung Transplant Patients

A. L. Mardock1, S. E. Rudasill1, Y. Sanaiha1, H. Khoury1, H. Xing1, J. Antonios2, P. Benharash1  1David Geffen School Of Medicine, University Of California At Los Angeles,Cardiothoracic Surgery,Los Angeles, CA, USA 2University Of California – Los Angeles,Los Angeles, CA, USA

Introduction: Diabetes mellitus is among several factors considered when assessing the suitability of donated organs for transplantation. Currently, lungs from diabetic donors (LDDs) may be allocated to any eligible recipient. The present study utilized a national database to assess the impact of donor diabetes on the longevity of lung transplant recipients.

Methods:  This retrospective study of the United Network for Organ Sharing (UNOS) database analyzed all adult lung transplant recipients from June 2006-December 2015. Donor and recipient demographics including the presence of diabetes were used to create a multivariable model. The primary outcome was five-year mortality, with hazard ratios assessed using multivariable Cox regression analysis. Survival curves were calculated using the Kaplan-Meier method.

Results: Of the 17,843 lung transplant recipients analyzed, 1,203 (12.2%) received LDDs. Recipients of LDDs were more likely to be female (44.1 vs. 40.2%, p<0.01) and to have mismatched race (47.5 vs. 42.1%, p<0.01), but were otherwise comparable to recipients of non-diabetic lungs. Relative to non-diabetic donors, diabetic donors were older (46.5 vs. 33.6 years, p<0.01), more likely to be female (48.3 vs. 39.1%, p<0.01), and more likely to have a history of smoking (12.2 vs. 9.8%, p<0.01), hypertension (74.6 vs. 19.0%, p<0.01), and a higher BMI (28.6 vs. 25.7, p<0.01). Multivariable analysis revealed receipt of an LDD to be an independent predictor of mortality at five years (HR 1.16 [1.04-1.29], p<0.01), especially when transplanted into diabetes-free recipients (HR 1.24 [1.11-1.40], p<0.01). Transplantation of LDDs into diabetic recipients showed no independent association with five-year mortality (HR 0.81 [0.63-1.06], p=0.12).

Conclusion: Significantly higher five-year mortality was seen in patients receiving LDDs, particularly among non-diabetic recipients. However, patients with diabetes at the time of transplant who received LDDs saw no decrement in survival compared to those receiving non-diabetic lungs. Therefore, matching non-diabetic recipients to non-diabetic donors may confer a survival benefit and should be considered in lung allocation algorithms.
 

77.05 A 15-Year Experience with Renal Transplant in Undocumented Immigrants: An Argument For Transplant

M. Janeway2, A. Foster1, S. De Gue2, K. Curreri2, T. Dechert2, M. Nuhn2  1Boston University School of Medicine,Boston, MA, USA 2Boston Medical Center,Department Of Surgery,Boston, MA, USA

Introduction: The health and financial benefits of renal transplant are well demonstrated, yet transplantation in undocumented immigrants remains rare, and little published data exist on outcomes in this population. We investigated whether undocumented immigrants have outcomes after renal transplant similar to those of documented recipients.

Methods:  We retrospectively analyzed records of adult renal transplant recipients at our academic medical center between 2002 and 2016. Primary endpoints were recipient and graft survival. Secondary endpoints were delayed graft function (DGF), acute rejection, and post-transplant complications. Patients were matched 1:1 using a propensity score matching model based on age, sex, race, type of donor (living vs. cadaveric), and cause of end-stage renal disease.
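A minimal sketch of 1:1 propensity score matching, using simulated covariates and greedy nearest-neighbor matching on the estimated propensity score (with replacement, no caliper); the authors' matching algorithm may differ.

```python
# Illustrative sketch only: hypothetical covariates, not the study records.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(4)
n = 181
df = pd.DataFrame({
    "undocumented": np.r_[np.ones(44), np.zeros(n - 44)].astype(int),
    "age":          rng.normal(50, 14, n),
    "living_donor": rng.integers(0, 2, n),
    "female":       rng.integers(0, 2, n),
})

# Propensity score: probability of being undocumented given the covariates.
covs = ["age", "living_donor", "female"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covs], df["undocumented"])
df["ps"] = ps_model.predict_proba(df[covs])[:, 1]

# Greedy 1:1 nearest-neighbor match (with replacement) on the propensity score.
treated, control = df[df["undocumented"] == 1], df[df["undocumented"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_controls = control.iloc[idx.ravel()]
print(len(treated), "undocumented matched to", len(matched_controls), "documented")
```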

Results: We identified 44 undocumented and 137 documented patients. Undocumented patients were younger and more likely to receive a living-donor kidney. Unadjusted survival rates were comparable between undocumented and documented recipients at 1 year (97% vs. 96%) and 3 years (96% vs. 96%), as were graft survival at 1 year (92% vs. 93%) and 3 years (87% vs. 86%) and post-transplant complications (44% vs. 41%). After matching, documentation status was not significantly associated with graft survival at one year (OR=1.50, 95% CI [0.27, 9.50], p=0.6669) or three years (OR=1.33, 95% CI [0.30, 5.88], p=0.7039), DGF (OR=1.62, 95% CI [0.57, 4.59], p=0.3632), acute rejection (OR=1.58, 95% CI [0.25, 10.00], p=0.6265), transplant complications (OR=1.62, 95% CI [0.68, 3.84], p=0.2752), or post-transplant CKD (OR=0.60, 95% CI [0.20, 1.80], p=0.3598).

Conclusion: Documentation status is not associated with adverse renal transplant outcomes in this small study. Given these outcomes data, we feel transplant centers should consider renal transplant for undocumented patients.

 

77.04 Going beyond MELD: A data-driven mortality predictor for liver transplantation waiting list

G. Nebbia2, E. R. Dadashzadeh1,3, C. Rieser3, S. Wu1  1University Of Pittsburgh,Department Of Biomedical Informatics,Pittsburgh, PA, USA 2University Of Pittsburgh,Intelligent Systems Program,Pittsburgh, PA, USA 3University Of Pittsburgh,Department Of Surgery,Pittsburgh, PA, USA

Introduction: Since 2002, the liver allocation policy for adults has been based on the Model for End-stage Liver Disease (MELD). While MELD was not originally created for this purpose, given its ability to predict short-term mortality, it has served as an urgency-based mechanism for organ allocation. Aiming to improve on the MELD criteria, the purpose of this study was to investigate a data-driven approach using machine learning (ML) techniques to build a predictor of mortality for patients awaiting liver transplantation.

Methods: We retrospectively used the Scientific Registry of Transplant Recipients (SRTR) dataset, which included patients waitlisted for liver transplantation from 1985 to 2017, and divided it into three survival cohorts (3, 12, and 24 months) including 88,758, 63,205, and 53,361 patients, respectively. We applied three ML algorithms (logistic regression, random forests, and neural networks) to predict survival for each cohort, training each ML model on 30 clinical factors such as functional status, additional laboratory values, diagnosis, blood type, BMI, and MELD itself. We removed patients with substantial missing data in these factors, yielding final cohorts of 25,560, 17,295, and 14,203 patients, respectively. For each cohort, 75% of the data were used for training and the remaining, unseen 25% for testing, with prediction performance measured by the area under the ROC curve (AUC). We analyzed each cohort as a whole and also grouped patients by specific diagnosis category for sub-group analysis. The diagnosis categories analyzed were Acute Hepatic Necrosis, Cholestatic Liver Disease/Cirrhosis, Malignant Neoplasm, Metabolic Disease, Non-Cholestatic Cirrhosis, and Other. AUCs of different models were compared by the DeLong test to assess statistical significance.
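A minimal sketch of the cohort-level modeling and AUC comparison, with simulated data standing in for SRTR and without the DeLong test; it illustrates a MELD-only baseline versus multivariable ML models, not the authors' pipeline.

```python
# Illustrative sketch only: simulated waitlist data standing in for SRTR.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n, n_features = 25_000, 30                 # ~3-month cohort size, 30 clinical factors
X = rng.normal(size=(n, n_features))
meld = X[:, 0] * 6 + 20                    # pretend column 0 drives a MELD-like score
y = (rng.random(n) < 1 / (1 + np.exp(-(meld - 25) / 4))).astype(int)  # 3-month death

X_tr, X_te, y_tr, y_te, meld_tr, meld_te = train_test_split(
    X, y, meld, test_size=0.25, random_state=0)

auc_meld = roc_auc_score(y_te, meld_te)    # MELD-only baseline
auc_lr = roc_auc_score(
    y_te, LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
auc_rf = roc_auc_score(
    y_te, RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
print(auc_meld, auc_lr, auc_rf)
```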

Results: MELD alone reached an AUC of 0.87, 0.78, and 0.75 for the 3-, 12-, and 24-month cohorts, respectively. Logistic regression reached AUCs of 0.89, 0.83, and 0.82, while the other two ML models performed comparably. All AUC improvements over the MELD baseline were statistically significant (p<0.05). In sub-group analyses, the AUCs of diagnosis-specific models showed consistent improvement across sub-groups; in particular, the largest increase in AUC was achieved on the 24-month cohort for the diagnoses of Malignant Neoplasm, Non-Cholestatic Cirrhosis, and Metabolic Disease.

Conclusion: This study shows that data-driven ML modeling outperforms the MELD criteria in predicting mortality for patients awaiting liver transplantation. We see a larger improvement (0.82 vs 0.75) when predicting longer survival (24 months) and a smaller improvement (0.89 vs 0.87) at 3 months. More promisingly, the improvement across different sub-groups indicates that ML models may be particularly beneficial to certain groups of patients with specific diagnoses, potentially enabling precision prediction of survival in stratified patient cohorts.

 

77.03 Effect of Kidney Allocation System Policy on Transplant Rates Across UNOS Regions in the US

A. C. Perez-Ortiz1,2, E. Heher1, N. Elias1  1Massachusetts General Hospital,Transplant Center,Boston, MA, USA 2Yale University School Of Public Health,New Haven, CT, USA

Introduction:
The new Kidney Allocation System (KAS) aimed to improve transplantation rates and to address other core needs of deceased donor (DD) kidney allocation. Three years after implementation, the regional effects of KAS have not been well studied. Since the United States (US) is heterogeneous, particular states might have experienced significant improvements compared to others. We aimed to test whether such regional differences existed after KAS implementation.

Methods:
We abstracted regional and state DD data from the Organ Procurement and Transplantation Network, end-stage renal disease prevalence from the US Renal Data System, and data from the US Census, and constructed Poisson regression models to estimate kidney transplant incidence ratios (IRs) by region compared to the national average between 2012 and 2017. We also tested the additive effect of KAS policy by average marginal effects (AMEs), specifically in the post-implementation period (2015-2017) regionally, and plotted our findings in a 50-state choropleth map in which lighter colors represent regions with the greatest improvement and the darkest colors represent null effects post-KAS.
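A minimal sketch distinguishing the two effect scales described above, multiplicative incidence ratios (exponentiated Poisson coefficients) and additive average marginal effects, on simulated state-level data rather than the registry sources named above:

```python
# Illustrative sketch only: simulated state-year counts and covariates.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 50 * 6   # hypothetical state-year rows, 2012-2017
df = pd.DataFrame({
    "post_kas":  rng.integers(0, 2, n),
    "esrd_prevalence_per_100k": rng.normal(200, 40, n),
    "population": rng.uniform(0.5e6, 30e6, n),
})
df["dd_kidney_txs"] = rng.poisson(0.00003 * df["population"])

X = sm.add_constant(df[["post_kas", "esrd_prevalence_per_100k"]])
res = sm.Poisson(df["dd_kidney_txs"], X, exposure=df["population"]).fit(disp=0)

print(np.exp(res.params))            # incidence ratios (multiplicative effect)
print(res.get_margeff().summary())   # average marginal effects (additive effect)
```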

Results:
The impact of KAS differed across regions in two ways. First, the multiplicative effect of KAS post-implementation, measured by IRs, significantly increased the base rate by factors of 1.16 and 1.09 in regions 6 and 10, respectively. Second, the additive effect of KAS (2015 onward), measured by AMEs, significantly improved the expected mean number of transplants in regions 2, 3, 4, 5, and 9 (Figure). KAS was most impactful in Southern states, where both IRs and AMEs were higher than the national average. In comparison, the Midwest and Northwest regions had the lowest AMEs.

Conclusion:
KAS had a greater impact on deceased donor kidney transplantation in Southern states than in other regions of the US. The regional effect of KAS warrants exploration, specifically to identify the characteristics driving the increase in transplantation so that public policies can be improved accordingly.
 

77.02 Survival Benefit After Liver Transplant in the Post-MELD Era: 15 Year Analysis of 100,000 Patients

T. A. Russell1, D. Graham1, V. Agopian2, J. DiNorcia2, D. Markovic3, S. Younan2, D. Farmer2, H. Yersiz2, R. Busuttil2, F. Kaldas2  1University Of California – Los Angeles,General Surgery,Los Angeles, CA, USA 2University Of California – Los Angeles,Liver & Pancreas Transplantation,Los Angeles, CA, USA 3University Of California – Los Angeles,Biomathematics,Los Angeles, CA, USA

Introduction: Annually, fewer than 60% of waitlisted patients receive liver transplantation (LT), resulting in over 2,000 waitlist deaths. Historically, the minimum threshold for survival benefit (SB) with LT has been a Model for End-Stage Liver Disease (MELD) score of 15. Limited organ availability and geographic disparities require examination of the relative LT-SB in the post-MELD era to ensure optimization of lives saved.

Methods: All waitlisted adults from 2/2002 to 3/2017 (excluding Status-1A and MELD-exception candidates) within the United Network for Organ Sharing (UNOS) database were included. Patients were followed from the time of listing to 3 months post-transplant or waitlist removal. Survival time was accrued to MELD categories according to score changes over time. LT-SB hazard ratios were computed comparing waitlist to post-LT survival for the entire cohort, by UNOS region, and by era (2002-2006, 2007-2011, 2012-2017). The threshold for SB was defined by an HR <1.0, indicating a survival benefit for receiving LT compared with remaining on the waitlist.

Results: 107,503 patients were waitlisted, and 46,249 underwent LT. By era, the 3-month LT-SB threshold was at MELD 19, 20-23, and 20-23 (Figure 1). All UNOS regions had a common 3-month LT-SB threshold of MELD 21-29 for the entire study period. At the time of LT, 10,899 patients (24%) had a MELD of 15-20, while 3,756 (8.1%) had a MELD <15. Fifty percent (n=1,891) of LTs for MELD <15 were performed in 3 of the 11 UNOS regions.

Conclusion: The 3-month LT-SB threshold of MELD >20 suggests an increase from the previously established score of 15, yet patients continue to undergo LT at MELD scores even below 15 in donor-rich regions. These findings highlight the potential to save more lives by allocating organs to higher-acuity patients at increased risk of 3-month pre-transplant mortality.

 

77.01 Persistent Gender Disparities in Access to Kidney Transplantation

C. Holscher1, C. Haugen1, K. Jackson1, A. Kernodle1, S. Bae1, J. Garonzik Wang1, D. Segev1  1Johns Hopkins University School Of Medicine,Baltimore, MD, USA

Introduction: While national policies direct organ allocation for waitlisted candidates, the decision to list a candidate for transplantation is made at the center- and patient-level. Historically, women have had decreased access to kidney transplantation (KT). We sought to investigate if gender disparities in access to KT have improved over time. 

Methods: To explore temporal trends in access to KT, we studied 1,511,863 adults (age 18-99) with incident end-stage renal disease (ESRD) using the United States Renal Data System (USRDS) from 2000 to 2015. We divided the study period into four eras and compared characteristics of patients who were and were not listed for transplantation (Chi-square and Student’s t tests), and tested if waitlisting changed over time (Cuzick test of trend).  We used Cox regression to determine the association between era and access to transplantation while controlling for candidate factors.  As a sensitivity analysis to determine whether a differential risk of death before waitlisting impacted our inferences, we used a competing risk regression using the Fine and Gray method with a 5% random sample.
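A minimal sketch of a Cox model with a gender-by-era interaction term for time from incident ESRD to waitlisting, using the lifelines library and simulated data rather than USRDS records:

```python
# Illustrative sketch only: simulated incident-ESRD data; the interaction term
# mirrors the era-by-gender test described above.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 5000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "era":    rng.integers(0, 4, n),          # four 4-year eras, 2000-2015
    "age":    rng.normal(60, 15, n),
    "time_to_waitlist_yrs": rng.exponential(4, n),
    "waitlisted": rng.integers(0, 2, n),
})
df["female_x_era"] = df["female"] * df["era"]

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_waitlist_yrs", event_col="waitlisted")
cph.print_summary()   # exp(coef) for female, era, and the interaction term
```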

Results: The proportion of ESRD patients who were subsequently waitlisted decreased over time (13.2% in 2000-2003 to 8.7% in 2012-2015, p<0.001). Compared to those who were never waitlisted, waitlist registrants were less likely to be female (37% vs 45%, p<0.001), were younger (mean 50 vs. 66 years, p<0.001), were more likely to be African American (32% vs 28%, p<0.001), more likely to be Hispanic (20% vs. 13%, p<0.001), and more likely to have private insurance (38% vs. 17%) or be uninsured (13% vs 6%, p<0.001). After controlling for age, race, ethnicity, and prior insurance, men had similar access to KT over time (per 4-year era, aHR 1.00, 95% CI 1.00-1.01, p=0.6), while women had less access (aHR 0.80, 95% CI 0.20-0.81, p<0.001), a disparity that worsened with time (interaction p<0.001) (Figure). For context, in 2000-2003 women were 20% less likely to be waitlisted for kidney transplant (aHR 0.80, 95% CI 0.78-0.82, p<0.001), while in 2012-2015 this worsened to 22% less likely (aHR 0.78, 95% CI 0.76-0.80, p<0.001). Our sensitivity analysis using competing risk regression also showed persistent gender disparities in waitlisting.

Conclusion: Despite decades of studies showing that women have less access to kidney transplantation, gender disparities in access to KT have not improved over time; rather, they have worsened. Further focus and novel interventions are needed to improve access for female KT candidates.

 

76.10 Molecular Profiling and Mitotic Rate in Cutaneous Melanoma

K. Liang1, G. Gauvin1, E. O’Halloran1, D. Mutabdzic1, C. Mayemura1, E. McGillivray1, K. Loo1, A. Olszanski2, S. Movva2, M. Lango1, H. Wu3, B. Luo4, J. D’Souza5, S. Reddy1, J. Farma1  1Fox Chase Cancer Center,Department Of Surgical Oncology,Philadelphia, PA, USA 2Fox Chase Cancer Center,Department Of Hematology/Oncology,Philadelphia, PA, USA 3Fox Chase Cancer Center,Department Of Pathology,Philadelphia, PA, USA 4Fox Chase Cancer Center,Molecular Diagnostics Laboratory,Philadelphia, PA, USA 5Fox Chase Cancer Center,Molecular Therapeutics Program,Philadelphia, PA, USA

Introduction: Mitotic rate (MR) is a measure of tumor cellular proliferation in melanoma and has been associated with the tumor's likelihood of metastasizing. Although a higher mitotic rate is associated with worse prognosis, the specific genetic mutations associated with MR are less well known. In this study, we examine the relationship between mitotic rate and genetic mutations in melanoma using next-generation sequencing (NGS) technology.

Methods: A retrospective chart review was conducted of all melanoma patients who underwent NGS and had pathology reports with documented mitotic rates at an NCI-designated cancer center. We compared tumors with no mitoses versus those with ≥5 mitoses/mm2. Groups were compared using chi-squared tests and linear regression models.
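A minimal sketch of the adjusted odds-ratio calculation reported in the Results (TP53 controlling for KRAS and FGFR3), using simulated mutation calls rather than the NGS panel data:

```python
# Illustrative sketch only: hypothetical mutation calls, not the NGS data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 239
df = pd.DataFrame({
    "tp53":  rng.integers(0, 2, n),
    "kras":  rng.integers(0, 2, n),
    "fgfr3": rng.integers(0, 2, n),
})
df["high_mitotic_rate"] = rng.integers(0, 2, n)   # >5 mitoses/mm2 (hypothetical)

X = sm.add_constant(df[["tp53", "kras", "fgfr3"]])
logit = sm.Logit(df["high_mitotic_rate"], X).fit(disp=0)
print(np.exp(logit.params))   # adjusted odds ratios, e.g., for TP53 mutation
```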

Results: Between 1997 and 2018, 239 melanoma patients had NGS performed and were included in this study. The median age of the study group was 64, and 62% were male. Primary tumor locations were trunk (n=70), lower extremity (n=59), upper extremity (n=50), head and neck (n=31), mucosal (n=10), genital (n=5), and other (n=14). Pathological staging included stage I (n=25), stage II (n=64), stage III (n=109), stage IV (n=20), and unknown (n=21). Only 5 patients had 0 mitoses/mm2, while 104 patients had ≥5 mitoses/mm2. Out of a total of 380 mutations, the most common were mutations in BRAF (18%, n=69) or NRAS (14%, n=53), but these were not associated with mitotic rate. Mutations in ERBB4, PIK3CA, and SMAD genes were protective against a high mitotic rate and were associated with 0 mitoses/mm2 (p=0.009, 0.002, and 0.044, respectively). Higher mitotic rates, greater than 5/mm2, were associated with mutations in TP53 (p=0.015), KRAS (p=0.002), and FGFR3 (p=0.048). Only three patients had mutations in all three of these genes; these patients had 8, 9, and 20 mitoses/mm2 on final pathology. After controlling for mutations in KRAS and FGFR3, a mutation in TP53 was associated with 2.74-fold increased odds of having more than 5 mitoses/mm2 (95% CI 1.15-6.52, p=0.023).

Conclusion: Mitotic rate is an important prognostic indicator in melanoma. Our data demonstrate that certain genetic mutations such as TP53, FGFR3, and KRAS are associated with higher mitotic rate while other mutations, including ERBB4, PIK3CA, and SMAD4 are more frequently found in patients with no mitoses. Further studies are needed to determine whether next generation sequencing can be used to predict more aggressive tumors so that treatment and surveillance can be better tailored to these patients.

 

76.09 Disconnected Pancreatic Duct Syndrome: Spectrum of Operative Management

T. K. Maatman1, A. M. Roch1, M. A. Heimberger1, K. A. Lewellen1, R. Cournoyer1, M. G. House1, A. Nakeeb1, E. P. Ceppa1, C. Schmidt1, N. J. Zyromski1  1Indiana University School Of Medicine,Surgery,Indianapolis, IN, USA

Introduction: Disconnected pancreatic duct syndrome (DPDS), complete discontinuity of the pancreatic duct with a viable but undrained tail, is a relatively common complication following necrotizing pancreatitis (NP). DPDS presents a complex and heterogeneous problem to the clinician; decision-making must consider the presence of sinistral portal hypertension, a variable volume of disconnected pancreatic remnant, and timing relative to definitive management of pancreatic necrosis. Treatment commonly falls to the surgeon; however, limited information is available to guide operative strategy. The aim of this study was to evaluate outcomes after operative management of DPDS.

Methods: An institutional necrotizing pancreatitis database was queried to identify patients with DPDS requiring operative management. When feasible, an internal drainage procedure was performed. In the presence of sinistral portal hypertension, a small-volume disconnected pancreatic remnant, or concurrent infected necrosis requiring débridement, distal pancreatectomy with or without splenectomy (DPS/DP) was performed. Descriptive statistics were applied; median (range) values are reported unless otherwise specified.

Results: Among 647 NP patients treated between 2005 and 2017, DPDS was diagnosed in 289 (45%). Operative management was required in 211 patients; 78 patients were managed non-operatively or died of NP prior to DPDS intervention. Median EBL was 250 mL (range 10-5000). Median follow-up was 19 months (range 1-158). In 21 patients (10%), pancreatic débridement and external drainage resulted in subsequent fistula closure without need for further intervention. The remaining 185 patients underwent operation as definitive therapy: internal drainage was performed in 99 and DPS/DP in 86. Time from NP diagnosis to operation was 108 days (range 5-2439). Morbidity was 53% (Table 1). Length of stay was 8 days (range 3-65). Readmission was required in 49 patients (23%). Post-operative mortality was 1.9%. Deaths were caused by ruptured splenic artery pseudoaneurysm (1), intra-operative cardiac event (1), and progressive organ failure following concomitant enterocutaneous fistula (2). Repeat pancreatic intervention was required in 23 patients (11%) at a median of 407 days (range 119-2947); initial management had been internal drainage in 18 and DPS in 5. Salvage pancreatectomy was performed in 10 patients, and the remaining 13 patients were managed with endoscopic therapy.

Conclusion: DPDS is a common yet extremely challenging consequence of necrotizing pancreatitis. Patient selection is critical, as perioperative morbidity and mortality are substantial. Appropriate operation requires complex decision-making; however, it provides durable long-term therapy in nearly 90% of patients.