79.07 Validation of a Tool: Bystander’s Self-efficacy to Provide Aid to Victims of Traumatic Injury

S. Speedy1, L. Tatebe1, B. Wondimu1, D. Kang1, F. Cosey-Gay2, M. Swaroop1  1Feinberg School Of Medicine – Northwestern University,Chicago, IL, USA 2University Of Chicago,Chicago, IL, USA

Introduction: Violent traumatic injury is a leading cause of death among people aged 1-44 years. Violence disproportionately affects socioeconomically disadvantaged neighborhoods. Increasing self-efficacy, an individual’s belief in his or her ability to achieve a goal, among community members in these neighborhoods reduces the rate of violence. Furthermore, bystanders are more likely to intervene and provide assistance to victims if they feel they possess the skills to provide aid. Our aim was to develop and validate a survey tool to assess lay persons’ self-efficacy to intervene and provide first aid to victims of traumatic injury.

Methods: A survey tool for measuring first aid self-efficacy among lay persons was constructed for an evidence-based trauma first responders course (TFRC), TRUE (Trauma Responders Unified to Empower) Communities. It was developed using focus groups with community members, input from field experts, and Bandura’s guide for constructing self-efficacy scales. The tool contained seven questions measuring self-efficacy and one personal safety question. Community members living on the South Side of Chicago who participated in a 3-hour TFRC completed the tool immediately following the course (n=459) and at 6-month follow-up (n=46). Reliability testing using Spearman correlation was undertaken to examine internal consistency. Validation of the tool was conducted using the Wilcoxon signed rank test and a repeated measures mixed effects model.
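For readers who want to reproduce this style of reliability and validation analysis, a minimal sketch in Python is shown below; the dataframe layout, column names, and file name are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch of the reliability and validation analyses described above,
# assuming a long-format DataFrame with one row per participant per time point
# (hypothetical columns: subject, time in {"pre", "post", "6mo"}, q1..q7, age,
# gender, race). None of these names come from the study dataset.
import pandas as pd
from scipy.stats import spearmanr, wilcoxon
import statsmodels.formula.api as smf

df = pd.read_csv("tfrc_survey_long.csv")                 # hypothetical file

# Reshape one question to wide form to pair pre- and post-course responses
wide = df.pivot(index="subject", columns="time", values="q1").dropna()

# Reliability: Spearman correlation between pre- and post-course responses
rho, p_rho = spearmanr(wide["pre"], wide["post"])

# Validation: Wilcoxon signed-rank test on paired pre/post scores
stat, p_wilcoxon = wilcoxon(wide["pre"], wide["post"])

# Repeated-measures mixed-effects model across the three time points,
# adjusted for demographics, with a random intercept per participant
model = smf.mixedlm("q1 ~ C(time) + age + C(gender) + C(race)",
                    data=df, groups=df["subject"]).fit()
print(rho, p_rho, p_wilcoxon)
print(model.summary())
```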

Results: Spearman correlations between pre-course and immediate post-course surveys were of moderate magnitude for all seven self-efficacy survey questions (r = 0.35 to 0.41, p < 0.001). The signed rank test confirmed that all self-efficacy questions measuring willingness to intervene and empowerment increased significantly immediately following the course (p < 0.001). A repeated measures mixed effects model demonstrated a significant increase in all self-efficacy questions across the three time points (pre-course, immediately post-course, and 6 months post-course) when adjusted for age, gender, race, and course (p < 0.001). The personal safety question measuring fear of self-injury while aiding victims was the only survey question that did not change significantly immediately post-course or at 6-month follow-up.

Conclusions: The TFRC survey tool is a reliable and valid instrument for measuring bystanders’ self-efficacy to provide first aid to trauma victims. Perception of personal safety may not necessarily be affected by educational interventions. The tool will be useful to researchers and educators interested in teaching bystanders how to provide first aid to victims of traumatic injury and in developing interventions to improve empowerment.

79.06 Validation of the American Association for the Surgery of Trauma Grade for Mesenteric Ischemia

M. C. Hernandez1, H. Saleem1, E. J. Finnesgard1, N. Prabhakar1, J. M. Aho1, A. K. Knight1, D. Stephens1, K. B. Wise1, M. D. Sawyer1, H. J. Schiller1, M. D. Zielinski1  1Mayo Clinic,Surgery,Rochester, MN, USA

Introduction: Acute mesenteric ischemia (AMI) is a lethal and variable disease without uniform severity reporting. The American Association for the Surgery of Trauma (AAST) developed an Emergency General Surgery (EGS) grading system for AMI, where grade I represents low disease severity and grade V severe disease, in order to standardize risk assessment. We aimed to validate this system by stratifying patients using the AAST EGS grade, hypothesizing that disease severity would correspond with clinical outcomes.

Methods: A retrospective, single-institution review of adults with AMI was performed (2013-2017). Preoperative, procedural, and postoperative data were abstracted. Univariate comparisons of imaging and operative grades and covariates were performed, and a multivariate analysis evaluated factors independently associated with 30-day mortality (odds ratios ± 95% confidence intervals).

Results: There were 230 patients; 137 (60%) were female. AMI etiologies included: hypovolemia (137, 60%), thrombosis/atherosclerosis (68, 30%), and embolism (25, 10%). The imaging AAST EGS grades were I (108, 47%), II (38, 17%), III (53, 23%), IV (24, 10%), and V (7, 3%). Compared to patients who received an operation, patients managed non-operatively (91, 40%) demonstrated a lesser imaging grade (1 [1-2] vs 2 [1-3]) and the etiology was more commonly (75% vs 50%; both p<0.05). Increased imaging grade was associated with diminished systolic blood pressure and increased serum lactate concentrations but not with other physiologic or demographic covariates (Table 1). The type of operation (laparotomy, laparoscopy, conversion to open), need for multiple operations, open abdomen therapy, bowel resection, intensive care management, and 30-day mortality were associated with increasing imaging grade (Table 1). After adjustment for age, sex, AAST EGS grade, operation type, qSOFA score, and etiology, the following factors were independently associated with 30-day mortality: age 1.02 (95% CI 1.0-1.05), imaging grade I (reference), grade II 2.6 (1.01-6.9), grade III 3.1 (1.3-7.4), grade IV 6.4 (1.9-12.2), grade V 16.6 (2.4-21.3), and increasing qSOFA 2.9 (1.9-4.5). Operative AAST EGS grade was similar to preoperative imaging AAST EGS grade (Spearman correlation 0.88, p=0.0001).

Conclusion: The AAST EGS grade, used as a surrogate for AMI disease severity, demonstrated incrementally greater odds of 30-day mortality. Decreasing blood pressure and increasing lactate correlated with increasing AAST EGS grade. Operative approach was also associated with AAST EGS grade, with few patients receiving vascular interventions at higher grades. The AAST EGS grade for AMI is valid and may be used as a benchmarking tool based on these disease severity definitions.

 

79.05 Application of Artificial Intelligence Developed Point of Care Imaging Clinical Decision Support

R. A. Callcut1,2, M. Girard2, S. Hammond2, T. Vu2,3, R. Shah2,3, V. Pedoia2,3, S. Majumdar2,3  1University Of California – San Francisco,Surgery,San Francisco, CA, USA 2University Of California – San Francisco,Center For Digital Health Innovation,San Francisco, CA, USA 3University Of California – San Francisco,Radiology,San Francisco, CA, USA

Introduction: Chest X-rays (CXRs) are the most common imaging modality used worldwide. The time from image ascertainment until review creates an inherent delay in identification of potentially life-threatening findings. Bedside-deployed, or point-of-care (POC), tools developed from neural networks have the potential to speed the time to clinician recognition of critical findings. This study applies neural networks to CXRs to automate the detection of pneumoperitoneum (PP).

Methods: We utilized a multi-step deep learning pipeline to create a clinical decision support system for the detection of pneumoperitoneum under the right diaphragm. 5528 training and 1368 validation images were used to train a U-Net to initially segment the right and left lungs. By combining the lung segmentation with simple, rule-based algorithms, we generated a region of interest in the original image where important features for positive-case detection were likely to be found (Figure 1a). Two readers blindly read images in a second clinical dataset (1821 CXRs total, with 771 positive for PP) to classify PP presence or absence. Images were then divided randomly into 75% training, 15% validation, and 10% testing sets. With the cropped, full-resolution images of the region of interest (Figure 1b), a DenseNet neural network classifier was trained to identify PP.
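A minimal sketch of a two-stage pipeline of this kind is shown below, assuming a PyTorch implementation; the abstract does not name a framework, and the library choices, label convention, crop heuristic, and helper function are illustrative assumptions rather than the authors' code.

```python
# Sketch of a two-stage pipeline of the kind described above: a U-Net for lung
# segmentation, a rule-based crop around the right hemidiaphragm, and a
# DenseNet classifier on the cropped region.
import torch
import torch.nn as nn
import segmentation_models_pytorch as smp          # assumed U-Net implementation
from torchvision.models import densenet121

# Stage 1: U-Net that segments background, right lung, and left lung
unet = smp.Unet(encoder_name="resnet34", in_channels=1, classes=3)

def crop_right_subdiaphragm(cxr, lung_mask):
    """Hypothetical rule-based step: keep a band spanning the inferior border
    of the right lung, where free air under the diaphragm would appear."""
    ys, xs = torch.nonzero(lung_mask == 1, as_tuple=True)   # assumes label 1 = right lung
    top, bottom = ys.min(), ys.max()
    left, right = xs.min(), xs.max()
    band = int(0.25 * (bottom - top))                        # heuristic band height
    return cxr[..., bottom - band // 2: bottom + band, left:right]

# Stage 2: DenseNet classifier on the cropped, full-resolution region of interest
classifier = densenet121(weights=None)
classifier.features.conv0 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                      padding=3, bias=False)  # grayscale input
classifier.classifier = nn.Linear(classifier.classifier.in_features, 2)
```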

Results: The AUROC for training was 0.99, validation 0.95, and testing 0.96 (Figure 1c).  This yielded a specificity of 94% in the validation group and the results remained consistent in the testing set (92%).  Overall, the accuracy for detection of PP exceeded 90% in the validation group and was confirmed to be excellent in the testing set (92%).

Conclusion: This work demonstrates the potential power of integrating Artificial Intelligence into POC clinical decision support tools. These algorithms are highly accurate and specific and could yield earlier clinician recognition of potentially life-threatening findings.

79.04 The Influence of Healthcare Resource Availability on Amputation Rates in Texas

J. Cao1, S. Sharath1, N. Zamani1, N. R. Barshes1  1Baylor College Of Medicine,Division Of Vascular Surgery And Endovascular Therapy,Houston, TX, USA

Introduction:  Amputation rates in Texas are high, and racial disparities continue to affect leg amputation rates. Targeted interventions aimed at reducing health disparities may benefit patients in high-need, low-resource areas, and reduce gaps in care.

Methods:  We collated 2005-2009 data on 254 Texas counties from three sources: Texas Inpatient Public Use Data File, Health Resources and Services Administration, and the County Health Rankings and Roadmaps. The primary outcome measure was the number of non-traumatic, lower-extremity amputations. Counties with greater than 11 leg amputations per 100,000 patients per year were designated as “hotspot” counties. Population-adjusted linear and logistic regressions identified factors that could explain increasing amputations among Texas counties.
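A minimal sketch of how the hotspot designation and population-adjusted regressions could be set up is shown below; the file name and all column names are hypothetical, and the model specification is an assumption based on the variables named in the text.

```python
# Sketch of the county-level hotspot designation and regressions described
# above, assuming a DataFrame with hypothetical columns: amputations,
# population, pcp_per_100k, dual_enrollment, er_foot_visits, pct_diabetes,
# pct_black (none of these names come from the study data).
import pandas as pd
import statsmodels.formula.api as smf

counties = pd.read_csv("texas_counties_2005_2009.csv")   # hypothetical file

# Hotspot: more than 11 leg amputations per 100,000 residents per year
rate = counties["amputations"] / counties["population"] * 100_000 / 5  # 5 study years
counties["hotspot"] = (rate > 11).astype(int)

# Population-adjusted linear model for amputation counts, with the
# dual-enrollment-by-ER-visit interaction suggested by the results
lin = smf.ols("amputations ~ dual_enrollment * er_foot_visits + pcp_per_100k"
              " + population", data=counties).fit()

# Logistic model for hotspot status
logit = smf.logit("hotspot ~ pct_diabetes + pct_black + population",
                  data=counties).fit()
print(lin.summary())
print(logit.summary())
```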

Results: We identified 33 counties in Texas as “hotspot” counties. Hotspot counties had fewer healthcare resources and lower healthcare utilization. Dual Medicare/Medicaid enrollment and ER visits for foot complications are each associated with more amputations. In the presence of more ER visits, greater dual enrollment decreases total associated amputations (coefficient = -1.21 × 10^-6, P<0.001). In counties with more than 70% rural communities, additional primary care providers decreased the total associated amputations (coefficient = -0.004, P=0.022). Populations in hotspot counties consisted of more people with diabetes (OR = 1.49, P<0.001) and more people categorized as Black (OR = 1.09, P=0.007).

Conclusion: Healthcare availability plays a critical role in decreasing peripheral arterial disease (PAD)-related amputations. Insurance enrollment and improved access to primary care providers may help reduce PAD-associated leg amputations. Strategic resource allocation may promote further reductions in PAD-associated amputations.

 

79.03 Assessing Fatigue Recovery in Trauma Surgeons Utilizing Actigraphy Monitors

Z. C. Bernhard1,2, T. W. Wolff1,3, B. L. Lisjak1, I. Catanescu1, E. Baughman1,4, M. L. Moorman1,4, M. C. Spalding1,4  1OhioHealth Grant Medical Center,Division Of Trauma And Acute Care Surgery,Columbus, OHIO, USA 2West Virginia School of Osteopathic Medicine,Lewisburg, WEST VIRGINIA, USA 3OhioHealth Doctors Hospital,Department Of Surgery,Columbus, OHIO, USA 4Ohio University Heritage College of Osteopathic Medicine,Athens, OHIO, USA

Introduction: Mental fatigue is a psychobiological state caused by prolonged periods of demanding cognitive activity. For over 20 years, the relationship between mental fatigue and physical performance has been extensively researched by the US military, the transportation industry, and other high-risk occupations. This is a growing area of interest within the medical community, yet there remain relatively few investigations specifically pertaining to surgeons. This study sought to quantify and evaluate fatigue and recovery time following 24-hour call among trauma surgeons to serve as a starting point in optimizing staffing and scheduling. We expected that more sleep both during and after call, prior to the next normal circadian sleep cycle, would lead to faster recovery times.

Methods:  This was a prospective analysis of trauma surgeons employed at an urban, Level 1 trauma center. Readiband actigraphy monitors (Fatigue Science, Vancouver, BC), incorporating a validated Sleep, Activity, Fatigue, and Task Effectiveness model, were used to track sleep/wake cycles over a 30-day period. Recovery time was measured as the time required during the post-call period for the surgeon to return to his or her pre-call 24-hour mean alertness level. Three groupings were identified based on recovery time: rapid (0-6 hours), intermediate (6-18 hours), and extended (>18 hours). Tri-linear regression analysis was performed to assess the correlation between recovery time and on-call, post-call, and combined sleep quantities.
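A minimal sketch of the recovery-time grouping and sleep/recovery regressions is shown below, assuming that "tri-linear regression" refers to separate linear fits for on-call, post-call, and combined sleep; the dataframe layout, file name, and column names are illustrative assumptions.

```python
# Sketch of the recovery-time grouping and sleep/recovery regressions described
# above, assuming a DataFrame with one row per 24-hour call shift and
# hypothetical columns: recovery_hr, sleep_on_call_hr, sleep_post_call_hr.
import pandas as pd
import statsmodels.formula.api as smf

shifts = pd.read_csv("call_shifts.csv")               # hypothetical file

# Recovery-time groups: rapid (0-6 h), intermediate (6-18 h), extended (>18 h)
shifts["group"] = pd.cut(shifts["recovery_hr"],
                         bins=[0, 6, 18, float("inf")],
                         labels=["rapid", "intermediate", "extended"],
                         include_lowest=True)

# Separate linear fits of recovery time on on-call, post-call, and combined sleep
shifts["sleep_combined_hr"] = shifts["sleep_on_call_hr"] + shifts["sleep_post_call_hr"]
for sleep in ["sleep_on_call_hr", "sleep_post_call_hr", "sleep_combined_hr"]:
    fit = smf.ols(f"recovery_hr ~ {sleep}", data=shifts).fit()
    print(sleep, round(fit.rsquared, 2), fit.pvalues[sleep])
```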

Results: Twenty-seven 24-hour call shifts among 8 trauma surgeons (6 males, 2 females) were identified and analyzed. Mean age was 41.0 ± 5.66. Mean work hours per week was 54.7 ± 13.5, mean caffeinated drinks per day was 3.19 ± 1.90, and mean hours of exercise per week was 4.0 ± 2.5. Six call shifts met rapid criteria, 11 shifts intermediate, and 10 shifts extended, with mean recovery times of 0.49 ± 0.68, 8.86 ± 2.32, and 24.93 ± 7.36 hours, respectively. Table 1 shows the mean alertness levels and sleep quantities for each group. Statistically significant and moderate positive correlations were found between recovery time and the amount of sleep achieved on-call (p=0.0001; R2=0.49), post-call (p=0.0013; R2=0.49) and combined (p<0.0001; R2=0.48).

Conclusion: This early analysis indicates that increased sleep quantities achieved on-call, post-call, and combined are partially indicative of quicker recovery time in surgeons following 24-hour call shifts, thus serving as a viable starting point to optimize trauma surgeon staffing and scheduling. Further studies to validate these findings and evaluate the impact of additional sleep components, such as number of awakenings, should be undertaken.

 

79.01 The Impact of Prehospital Whole Blood on Arrival Physiology, Shock, and Transfusion Requirements

N. Merutka1, J. Williams1, C. E. Wade1, B. A. Cotton1  1McGovern Medical School at UT Health,Acute Care Surgery,Houston, TEXAS, USA

Introduction: Several US trauma centers have begun incorporating uncrossmatched, group O whole blood into civilian trauma resuscitation. Our hospital has recently added this product to our aeromedical transport services. We hypothesized that patients receiving whole blood in the field would arrive at the emergency department with improved vital signs, improved lactate and base deficit, and would receive fewer transfusions following arrival when compared to patients receiving prehospital component transfusions.

Methods: In November 2017, we added low-titer group O whole blood (WB) to each of our helicopters, alongside the existing RBCs and plasma. We collected information on all trauma patients receiving prehospital uncrossed, emergency release blood products between 11/01/17 and 07/31/18. Patients were divided into those who received any prehospital WB and those who received only RBCs and/or plasma (COMP). Initial field vital signs, arrival vital signs, arrival laboratory values, and ED and post-ED blood products were captured. Statistical analysis was performed using STATA 12.1. Continuous data are presented as medians (25th-75th IQR) with comparisons performed using the Wilcoxon rank-sum test. Categorical data are reported as proportions and tested for significance using Fisher’s exact test. Following univariate analyses, a multivariate model was created to evaluate post-arrival blood products, controlling for injury severity score, field vital signs, and age.
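A minimal sketch of the univariate comparisons and the adjusted model is shown below; the file name and column names are hypothetical, and the model terms are assumptions based on the covariates listed in the text.

```python
# Sketch of the univariate comparisons and adjusted model described above,
# assuming a DataFrame with hypothetical columns: group ("WB"/"COMP"),
# ed_blood_units, field_sbp, fast_positive, iss, age.
import pandas as pd
from scipy.stats import ranksums, fisher_exact
import statsmodels.formula.api as smf

pts = pd.read_csv("prehospital_blood.csv")            # hypothetical file
wb, comp = pts[pts.group == "WB"], pts[pts.group == "COMP"]

# Continuous data: Wilcoxon rank-sum comparison between cohorts
stat, p_sbp = ranksums(wb["field_sbp"], comp["field_sbp"])

# Categorical data: Fisher's exact test on positive field FAST exams
table = pd.crosstab(pts["group"], pts["fast_positive"])
odds, p_fast = fisher_exact(table.values)

# Multivariable linear model for ED transfusions, controlling for ISS,
# field vitals, and age
model = smf.ols("ed_blood_units ~ C(group) + iss + field_sbp + age",
                data=pts).fit()
print(p_sbp, p_fast)
print(model.params)
print(model.conf_int())
```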

Results: 174 patients met criteria, with 98 receiving prehospital WB and 63 receiving COMP therapy. 116 WB units were transfused in the prehospital setting. Of those receiving prehospital WB, 84 (82%) received 1 U and 14 (12%) received 2 U. There was no difference in age, sex, race, or injury severity scores between the two groups. While field pulse was similar (WB: median 117 vs. COMP: 114; p=0.649), WB patients had lower field systolic pressures (median 101 vs. 125; p=0.026) and were more likely to have a positive field FAST exam (37% vs. 20%; p=0.053). On arrival, however, WB patients had lower pulse and higher systolic pressures than COMP patients (TABLE). There was no difference in arrival base excess and lactate values (TABLE). However, WB patients received fewer ED and post-ED blood transfusions than the COMP group. A multivariate linear regression model demonstrated that field WB was associated with a reduction in ED blood transfusions (corr. coef. -10.8, 95% C.I. -19.0 to -2.5; p=0.018).

Conclusion: Prehospital WB transfusion is associated with improved arrival physiology with similar degrees of shock compared to COMP-treated patients. More importantly, WB patients received fewer transfusions after arrival than their COMP counterparts.

78.10 Oral Nutrition for Patients Undergoing Tracheostomy: The Use of an Aggressive Swallowing Program

J. Wisener1,2, J. Ward2, C. Boardingham2, P. P. Yonclas1,2, D. Livingston1,2, S. Bonne1,2, N. E. Glass1,2  1Rutgers New Jersey Medical School,Trauma Surgery,Newark, NJ, USA 2University Hospital,Trauma Surgery,Newark, NJ, USA

Introduction:
The insertion of a tracheostomy is thought to compromise protective swallowing mechanisms, leading to aspiration and dysphagia. Consequently, clinicians are reluctant to allow oral nutrition for patients with tracheostomies and continue nasoenteric tube feeds. To maximize the number of patients receiving oral nutrition and to minimize aspiration, we began an aggressive swallowing program led by dedicated speech and language pathologists (SLP) using fiberoptic endoscopic evaluation of swallowing (FEES). We hypothesized that, despite the presence of a tracheostomy, most patients could be safely fed orally and that this approach is optimal for this patient population.

Methods:
A retrospective chart review was performed of all trauma patients who underwent a tracheostomy between 7/1/2016 and 6/30/2018. Data collected included demographics, injury severity, time to tracheostomy, and ICU and hospital lengths of stay. The time to SLP evaluation and FEES, as well as the outcomes of those assessments, were also captured.

Results:
115 patients underwent a tracheostomy during this period, with 90 (78%) evaluated by SLP. 72 (80%) underwent FEES, and 53 (76%) of those passed and were allowed oral nutrition. 11 (61%) of the 18 patients seen by SLP and not evaluated by FEES had swallowing evaluated by another method, and 5 of those were allowed to eat. 40 patients (55%) passed their first FEES. Among those who failed, 21 (66%) underwent a second FEES approximately a week later, and 10 (48%) passed. The total success rate for patients undergoing SLP ± FEES was 70% (58/83). The number of days between tracheostomy and first FEES did not differ significantly between groups (11 vs 15, p=0.486). The median time to passing FEES was 13 days [IQR 7, 20.5]. Patients who passed FEES were younger (42 vs 55 years, p=0.005) and had more severe injuries (ISS 20 vs 14, p=0.03) compared to those who did not pass FEES. Both groups had similar ICU and hospital lengths of stay (32 vs 31 days, p=0.95 and 43 vs 36 days, p=0.14). 12 patients underwent PEG placement prior to SLP evaluation, 7 of whom passed their FEES and were fed orally. There were few instances of documented aspiration among orally fed patients (3/55).

Conclusion:
Over two-thirds of trauma patients who have undergone a tracheostomy can safely take oral nutrition. Aggressive use of SLP and FEES allows oral nutrition and less use of nasoenteric tubes and gastrostomies, which likely improves patient satisfaction. Failure to pass a FEES within the first 2 attempts provides an objective indication for a gastrostomy tube. As patients who failed FEES were older, age may be a factor in the decision for earlier gastrostomy tube placement. In conclusion, oral nutrition is not only possible but preferable in trauma patients undergoing tracheostomy, and all eligible patients should be evaluated by FEES.

78.09 Stop Flying the Patients! Evaluation of the Overutilization of Helicopter Transport of Trauma Patients

C. R. Horwood1, C. G. Sobol1, D. Evans1, D. Eiferman1  1The Ohio State University,Department Of Trauma And Critical Care,Columbus, OH, USA

Introduction: On average, helicopter transport costs $6,000 more than ground transportation of a trauma patient. Air transport has the theoretical advantage of allowing patients to receive injury treatment more promptly. However, there are no defined criteria for which patients require expedited transport. The primary study objective was to evaluate the appropriateness of helicopter transport, determined by the need for operative care within 1 hour of transfer, at an urban Level 1 trauma center.

Methods: All trauma patients transported by helicopter from January 2015 to December 2017 to an urban Level 1 trauma center, from referring hospitals or the scene, were retrospectively analyzed. The entire cohort was reviewed for level of trauma activation, disposition from the trauma bay, and median time to procedure. A subgroup analysis compared patients who required a procedure within 1 hour of transport with the remainder of the cohort transported via helicopter. Data were analyzed using summary statistics, the chi-square test, and the Mann-Whitney test where appropriate.

Results: A total of 1,590 patients were transported by helicopter. Only 32% (n=507) were level 1 activations, 60% (n=962) were level 2 activations, and 8% (n=121) were not a trauma activation upon arrival. 39% of patients (n=612) were admitted directly to the floor from the trauma bay, and 16% (n=249) required only observation or were discharged home after helicopter transfer. Roughly one-third of the entire study cohort (36%, n=572) required any procedure, with a median time to procedure of 31.5 hours (IQR 54.4). Of these, 13% (n=74) required a procedure within 1 hour of helicopter transport. There was a significant difference in median ISS for patients who required a procedure within 1 hour of transport (median 22, IQR=27) vs the remainder of the cohort transported via helicopter (median 9, IQR=12) (p-value<0.001). Had patients been driven by ground transport rather than helicopter, the average distance was 67.0 miles (SD±27.9) with an estimated travel time of 71.5 minutes (SD±28.4) for patients who required a procedure within 1 hour, compared to 61.6 miles (SD±30.9) and an estimated 66.1 minutes (SD±30.8) for the remainder of the cohort (p-value=0.899 and p-value=0.680, respectively). In the group who required a procedure within 1 hour, 24.3% of patients had a penetrating injury compared to 6.4% for the remainder of the cohort (p-value<0.001).

Conclusion: This analysis demonstrates that helicopter transport was not necessary for the vast majority of trauma patients, as they did not meet Level 1 trauma activation criteria and did not require emergent interventions to treat injuries. However, there was a significant difference in ISS and type of injury for patients who required a procedure within one hour of transport. Stricter selection is necessary to determine which patients should be transported by helicopter.

78.08 Variability of Radiological Grading of Blunt Cerebrovascular Injuries in Trauma Patients

A. K. LaRiccia1,2, T. W. Wolff1,2, M. O’Mara1, T. V. Nguyen1, J. Hill1, D. J. Magee4, R. Patel4, D. W. Hoenninger4, M. Spalding1,3  1Ohiohealth Grant Medical Center,Trauma And Acute Care Surgery,Columbus, OH, USA 2Ohiohealth Doctors Hospital,Surgery,Columbus, OH, USA 3Ohio University Heritage College of Osteopathic Medicine,Dublin, OH, USA 4Ohiohealth,Columbus Radiology,Columbus, OH, USA

Introduction:  Blunt cerebrovascular injury (BCVI) occurs in 1-2% of all blunt trauma patients. Computed tomographic angiography of the neck (CTAn) has become commonplace for diagnosis and severity determination of BCVIs. Management often escalates with injury grade, and inaccurate grading can lead to both under- and over-treatment of these injuries. Several studies have investigated the sensitivity of CTAn; however, its inter-reader reliability remains poorly understood. In this study, we determined the extent of variability in BCVI grades among neuro-radiologists’ interpretations of CTAn in traumatically injured patients.

Methods:  This was a retrospective review of trauma patients with a BCVI reported on initial CTAn imaging, admitted to an urban, Level I trauma center from January 2012 to December 2017. Patients were randomly assigned for CTAn re-evaluation by two of three blinded, independent neuro-radiologists. The evaluations were compared, and the variability among the BCVI grades was measured using the coefficient of unalikeability (u), which quantifies variability for categorical variables on a scale of 1-100, where the higher the value, the more unalike the data. Inter-reader reliability of the radiologists was calculated using the weighted Cohen’s kappa (k).
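A minimal sketch of both variability measures is shown below, assuming the initial read plus the two blinded re-reads are pooled per injury. The coefficient of unalikeability is implemented in its standard pairwise form and rescaled to 0-100, and linear kappa weights are used; the study's exact scaling convention and weighting scheme are not stated in the abstract, so both are assumptions.

```python
# Sketch of the variability metrics described above.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def unalikeability(grades):
    """Percentage of pairwise disagreements among the readings of one BCVI."""
    pairs = list(combinations(grades, 2))
    return 100 * sum(a != b for a, b in pairs) / len(pairs)

# Example: initial grade plus two blinded re-reads for a single injury
print(unalikeability([1, 2, 2]))     # partial disagreement
print(unalikeability([3, 3, 3]))     # uniform consensus -> 0

# Inter-reader reliability between the two blinded neuro-radiologists,
# weighted for the ordinal nature of BCVI grades (illustrative grades only)
reader_a = [1, 2, 4, 3, 2]
reader_b = [2, 2, 4, 1, 3]
print(cohen_kappa_score(reader_a, reader_b, weights="linear"))
```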

Results: In total, 228 BCVIs in 217 patients were analyzed. Seventy-six (33%) involved the carotid vessels, 144 (63%) involved only vertebral vessels, and 8 (4%) involved both. The initial grades consisted of 71 (31%) grade 1, 74 (32%) grade 2, 26 (11%) grade 3, 57 (25%) grade 4, and 0 grade 5. Interpretation variability was present in 93 (41%) of all BCVIs. Initial grade 1 injuries had the lowest occurrence of uniform consensus (u = 1) with a mean of 31% among all interpretations (see figure). Grade 4 injuries had the highest consensus (92%). Grade 2 and 3 injuries had a mean consensus of 63% and 61%, respectively. Total variability of grade interpretations (u = 100) occurred most frequently with grade 3 BCVIs (21%). No significant differences were found between carotid and vertebral injuries. Weighted Cohen’s k calculations had a mean of 0.07, indicating poor reader agreement. Treatment recommendations would have been affected in 30% of these patients, with the treatment scope downgraded in 22% and upgraded in 8%.

Conclusion: Our study revealed variability in the initial radiological grade interpretation of BCVI in more than a third of patients, with poor reader agreement. The reliability of CTAn interpretation of BCVI grades is not uniform, potentially leading to under-treatment in 8% and worse neurologic outcomes. Comparisons with variability in digital subtraction angiography may be beneficial to further understand the complexity of BCVI radiologic injury grading.

78.07 Does Time Truly Heal All? A Longitudinal Analysis of Recovery Trajectories One Year After Injury

A. Toppo6,7, J. P. Herrera-Escobar6, R. Manzano-Nunez6, J. B. Chang3, K. Brasel2, H. M. Kaafarani3, G. Velmahos3, G. Kasotakis5, A. Salim1, D. Nehra1, A. H. Haider1,6  1Brigham And Women’s Hospital,Division Of Trauma, Burn, & Surgical Critical Care,Boston, MA, USA 2Oregon Health And Science University,Department Of Surgery,Portland, OR, USA 3Massachusetts General Hospital,Division Of Trauma, Emergency Surgery, & Surgical Critical Care,Boston, MA, USA 5Boston University School Of Medicine,Division Of Acute Care, Trauma Surgery, & Surgical Critical Care,Boston, MA, USA 6Brigham And Women’s Hospital,Center For Surgery And Public Health,Boston, MA, USA 7Tufts University School Of Medicine,Boston, MA, USA

Introduction:  We are increasingly aware that trauma patients who survive to hospital discharge often suffer significant long-term consequences of their injury, including physical disability, psychological disturbances, chronic pain, and overall reduced quality of life. The recovery trajectory of traumatically injured patients is less well understood. In this study, we aim to describe the recovery trajectories of moderate-to-severely injured patients from 6 to 12 months after injury.

Methods:  Adult trauma patients with moderate-to-severe injuries (ISS ≥ 9) admitted to one of three Level 1 Trauma Centers in Boston between 2016 and 2018 were contacted by phone at 6 and 12 months post-injury. Patients were asked to complete the 12-item Short-Form Health Survey (SF-12) to assess physical health, mental health, social functioning, and bodily pain, a validated Trauma Quality of Life (TQoL) questionnaire, and a screen for post-traumatic stress disorder (PTSD). This information was linked to the index hospitalization through the trauma registry. A longitudinal analysis was conducted to evaluate the change in outcomes between 6 and 12 months post-injury. Outcomes were also evaluated for gender and age (young < 65 years of age, old ≥ 65 years of age) subgroups.

Results: A total of 271 patients completed the phone screen at both 6 and 12 months post-injury. Overall, physical health improved significantly from six to twelve months post-injury (p < 0.001), but still remained well below the population norm (Figure 1A). Conversely, mental health was similar to the population norm at both 6 months and 12 months post-injury (Figure 1B). The elderly exhibited better social functioning than the young at both time points and remained within population norms. Young males experienced a significant improvement in social functioning over time, getting to the population norm by 12 months post-injury. Young females in contrast demonstrated no improvement in social functioning over time and remained well below population norms even 12 months post-injury (Figure 1C). Overall, 50% of patients reported having pain daily at 6 months post-injury and 75% of these patients continued to have daily pain 12 months post-injury. Looking at the SF-12 pain scores, only young females experienced significant improvement in bodily pain scores over time (Figure 1D). PTSD screens were positive for 20% of patients 6 months post-injury, and 76% still screened positive at 12 months.

Conclusion: The recovery trajectories of trauma patients between 6 and 12 months post-injury are not encouraging with minimal to no improvement in overall physical health, mental health, social functioning, and chronic pain. These recovery trajectories deserve further study so that appropriate post-discharge support services can be developed.

78.06 Outcomes in Trauma Patients with Behavioral Health Disorders

M. Harfouche1, M. Mazzei1, J. Beard1, L. Mason1, Z. Maher1, E. Dauer1, L. Sjoholm1, T. Santora1, A. Goldberg1, A. Pathak1  1Temple University,Trauma,Philadelphia, PA, USA

Introduction:  The relationship between behavioral health disorders (BHDs) and outcomes after traumatic injury is not well understood, and the data are evolving.  The objective of this study was to evaluate the association between BHDs and outcomes such as mortality, length of stay (LOS), and inpatient complications in the trauma patient population.

Methods:  We performed a review of the Trauma Quality Improvement Program (TQIP) database from the years 2013 to 2016, comparing patients with and without a BHD.  Patients were classified as having a BHD if they had a comorbidity listed as a psychiatric disorder, alcohol abuse, drug abuse, dementia, or attention deficit hyperactivity disorder (ADHD).  Psychiatric disorders included major depressive disorder, bipolar disorder, schizophrenia, anxiety/panic disorder, borderline or antisocial personality disorder, and/or adjustment disorder/post-traumatic stress disorder.  Descriptive statistics were performed, and multivariable regression examined mortality, LOS, and inpatient complications. Statistics were performed using Stata/IC v15.

Results: In the study population, 254,882 (25%) patients were reported to have a BHD. Of these, psychiatric disorders were most prevalent at 38.3% (n=97,668), followed by alcohol abuse (33.3%, n=84,845), substance abuse (26.4%, n=67,199), dementia (20.2%, n=51,553), and ADHD (1.7%, n=4,301).  There was no difference in age between the groups (mean 44.1 v 44.3 in the BHD v non-BHD groups); however, the BHD group was more likely to be female (38.4% v 37.4%, OR 1.04, CI 1.03-1.05, p<0.001).  The overall mortality was lower in the BHD group (OR 0.81, CI 0.79-0.83, p<0.001) when controlling for age, gender, race, injury severity score, and non-BHD comorbidities such as stroke, chronic obstructive pulmonary disease, congestive heart failure, diabetes, and hypertension. Within the BHD group, patients with dementia had an increased likelihood of mortality when controlling for other risk factors (OR 1.62, CI 1.56-1.69, p<0.001). LOS was 8.4 days (s=0.02) for patients with a BHD versus 7.3 days (s=0.01) for patients without a BHD (p<0.001). Comorbid BHD was significantly associated with any inpatient complication (OR 1.19, CI 1.18-1.20, p<0.001). Select complications are presented in Table 1.

Conclusion: Trauma patients with a BHD have a lower overall mortality risk when compared to those without a BHD. However, subgroup analysis revealed that among patients with a BHD, those with dementia have an increased mortality risk. BHD increased risk for any inpatient complication overall and prolonged the LOS.  Further study is needed to define and understand the risk factors for these associations.

 

78.05 Elderly Falls Hotspots – A Novel Design for Falls Research and Strategy-Implementation Programs

S. Hawkins1, L. Khoury1, V. Sim1, A. Gave1, M. Panzo1, S. M. Cohn1  1Staten Island University Hospital-Northwell Health,Surgery,Staten Island, NY, USA

Introduction:
Falls in the elderly remain a growing public health burden despite decades of research on a variety of falls-prevention strategies. This trend is likely due to current strategies capturing only a limited proportion of those in the community at risk for falls. A new approach to falls prevention focused on wider community-based dissemination of falls-prevention strategies is called for. We created a model of falls that identifies high-risk areas, or “hot-spots,” for fall risk, identifying community-based study populations for subsequent falls-reduction strategy and implementation research.

Methods:
We queried the trauma registry of a Level 1 trauma center representing a relatively complete capture of the trauma population in a dense urban-suburban setting. We extracted the resident addresses of all patients aged 60 and over who were admitted with a fall mechanism over the period 2014 to 2017. We used geographic information systems software to map the addresses to census zones and generated a heat map representing the fall density within each zone in our region.
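A minimal sketch of the mapping step is shown below using geopandas; the file names, the assumption that addresses are already geocoded to points, and the use of a metre-based projection are all illustrative, not details from the study.

```python
# Sketch of the geocoded-address-to-census-zone aggregation described above.
import geopandas as gpd

falls = gpd.read_file("elderly_falls_points.geojson")     # one point per fall (hypothetical)
tracts = gpd.read_file("census_tracts.shp").to_crs(falls.crs)

# Assign each fall to the census tract containing it
joined = gpd.sjoin(falls, tracts, how="inner", predicate="within")

# Fall density per square kilometre for each tract (heat-map values);
# assumes a projected, metre-based CRS so that geometry.area is in m^2
counts = joined.groupby("index_right").size()
tracts["falls_per_sqkm"] = counts / (tracts.geometry.area / 1e6)
tracts["hotspot"] = tracts["falls_per_sqkm"] > 80          # threshold from the study

tracts.plot(column="falls_per_sqkm", legend=True)          # simple heat map
```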

Results:
The area is served by two trauma centers that capture nearly all of the trauma volume of a region with a population of nearly half a million. The county is divided into 107 populated census tracts that range from 0.3 to 1.1 square km. The incidence of falls in the elderly was consistent over the 4 years of study throughout the populated census zones within the hospital’s catchment area. The density of residents who presented to the trauma center with a fall mechanism ranged from less than 1 to 180 per sq km. There were 6 census zones with a falls density above 80 per sq km, which can be considered “hot-spots” for falls risk (see Figure). These zones are similar with respect to land use, population, and demographics.

Conclusion:
Using Geographic Information Systems with trauma registry data identified discrete geographic regions with a higher density of elderly falls. These “hot-spots” will be the target of future community-directed falls-reduction strategy and implementation research.

78.04 Early versus late venous thromboembolism: a secondary analysis of data from the PROPPR trial

S. P. Myers1, J. B. Brown1, X. Chen1, C. E. Wade2,3,4, J. C. Cardenas2,3, M. D. Neal1  1University Of Pittsburgh,Division Of Trauma And General Surgery, Department Of Surgery,Pittsburgh, PA, USA 2McGovern Medical School at UTHealth,Division Of Acute Care Surgery, Department Of Surgery, McGovern School Of Medicine,Houston, TX, USA 3McGovern Medical School at UTHealth,Center For Translational Injury Research,Houston, TX, USA 4McGovern Medical School at UTHealth,Center For Translational And Clinical Studies,Houston, TX, USA

Introduction: Venous thromboembolic events (VTE) are common after severe injury, but factors predicting their timing remain incompletely understood. As the balance between hemorrhage and thrombosis is dynamic during a patient’s hospital course, early and late VTE may be physiologically discrete processes. We conducted a secondary analysis of the Pragmatic, Randomized Optimal Platelet and Plasma Ratios (PROPPR) trial, hypothesizing that risk factors would differ between early and late VTE.

Methods:  A threshold for early and late events was determined by cubic spline analysis of the VTE distribution. Univariate analysis determined the association of delayed resuscitation with early or late VTE. Multinomial regression was used to analyze the association of clinical variables with early or late VTE compared to no VTE, adjusting for predetermined confounders including mortality, demographics, injury mechanism/severity, blood products, hemostatic adjuncts, and comorbidities. Serially collected coagulation assays were analyzed for differences that might distinguish between early VTE, late VTE, and no VTE.
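A minimal sketch of the spline-based threshold exploration and the multinomial model is shown below; the dataframe layout, file name, column names, and covariate list are assumptions drawn loosely from the variables named in the text.

```python
# Sketch of the threshold-finding and multinomial model described above,
# assuming a DataFrame with hypothetical columns: vte_day (day of the event,
# NaN if none), vte_group (0 = no VTE, 1 = early, 2 = late), and adjustment
# covariates such as age, iss, femur_fracture, tbi, plasma_units,
# vasopressors, icu_los, sepsis.
import numpy as np
import pandas as pd
from scipy.interpolate import UnivariateSpline
import statsmodels.formula.api as smf

pts = pd.read_csv("proppr_secondary.csv")                 # hypothetical file

# Cubic spline over the distribution of VTE timing; the change point near
# day 12 is read off the smoothed curve rather than computed automatically.
days, counts = np.unique(pts["vte_day"].dropna(), return_counts=True)
smooth = UnivariateSpline(days, counts, k=3)
grid = np.linspace(days.min(), days.max(), 200)
curve = smooth(grid)

# Multinomial regression: early-vs-none and late-vs-none, adjusted for
# predetermined confounders (relative risk ratios = exp of coefficients)
mn = smf.mnlogit("vte_group ~ age + iss + femur_fracture + tbi + "
                 "plasma_units + vasopressors + icu_los + sepsis",
                 data=pts).fit()
print(np.exp(mn.params))
```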

Results: After plotting VTE distribution over time, cubic spline analysis established a threshold at 12 days corresponding to a change in odds of early versus late events (Figure 1). Multinomial regression revealed differences between early and late VTE.  Variables associated with early but not late VTE included older age (RR 1.03; 95%CI 1.01, 1.05; p=0.01), femur fracture (RR 2.96; 95%CI 0.99, 8.84; p=0.05), chemical paralysis (RR 2.67; 95%CI 1.20, 5.92; p=0.02), traumatic brain injury (RR 14.17; 95%CI 0.94, 213.57; p=0.05), and plasma transfusion (RR 1.13; 95%CI 1.00, 1.28, p=0.05). In contrast, late VTE events were predicted by vasopressor use (RR 4.49; 95%CI 1.24, 16.30; p=0.02) and ICU length of stay (RR 1.11; 95%CI 1.02, 1.21; p=0.02). Sepsis increased risk of early (RR 3.76, 95% CI 1.71, 8.26; p<0.01) and late VTE (5.91; 95% CI 1.46, 23.81; p=0.01). Coagulation assays also differed between early and late VTE. Prolonged lag time (RR 1.05, 95% CI 0.99, 1.1; p=0.05) and time to peak thrombin generation (RR 1.03; 95% CI 1.00, 1.06; p=0.02) were associated with increased risk of early VTE alone. Delayed resuscitation approaching ratios of 1:1:1 for plasma, platelets, and red blood cells among patients randomized to 1:1:2 therapy was a risk factor for late (RR 6.69; 95% CI 1.25, 35.64; p=0.03) but not early VTE.

Conclusion: There is evidence to support that early and late thromboembolic events may differ in their pathophysiology and clinically relevant risk factors. Defining chronologic thresholds and clinical markers associated with temporal trends in VTE distribution may allow for a more individualized approach to thromboprophylaxis.

 

78.03 Association of TXA with VTE in Trauma Patients: A Preliminary Report of an EAST Multicenter Study

L. Rivas1, M. Vella8, J. Pascual8, G. Tortorello8, D. Turay9, J. Babcock9, A. Ratnasekera4, A. H. Warner6, D. R. Mederos5, J. Berne5, M. Mount2, T. Schroeppel7, M. Carrick3, B. Sarani1  1George Washington University School Of Medicine And Health Sciences,Surgery,Washington, DC, USA 2Spartanburg Medical Center,Surgery,Spartanburg, SC, USA 3Plano Medical Center,Surgery,Plano, TX, USA 4Crozier Keystone Medical Center,Surgery,Chester, PA, USA 5Broward Health Medical Center,Surgery,Fort Lauderdale, FL, USA 6Christiana Care Medical Center,Surgery,Newark, DE, USA 7University of Colorado Colorado Springs,Surgery,Colorado Springs, CO, USA 8University Of Pennsylvania,Surgery,Philadelphia, PA, USA 9Loma Linda University School Of Medicine,Surgery,Loma Linda, CA, USA

Introduction: Tranexamic acid (TXA) is an anti-fibrinolytic agent that lowers the mortality of injured patients who are bleeding or at risk of bleeding. It is commonly used in trauma centers as an adjunct to massive transfusion protocols in the management of bleeding patients. However, its potent antifibrinolytic activity may result in an increased risk of venous thromboembolism (VTE). We hypothesized that the incidence of VTE events would be greater in injured persons receiving TXA along with massive transfusion.

Methods:  A multicenter, retrospective study was performed. Inclusion criteria were age 18 years or older and receipt of 10 or more units of blood in the first 24 hours after injury. Exclusion criteria included death within 24 hours, pregnancy, and routine ultrasound surveillance for possible asymptomatic deep venous thrombosis (DVT). Patients were divided into 2 cohorts based on whether or not they received TXA. Incidence of VTE was the primary outcome. Secondary outcomes included myocardial infarction (MI), stroke (CVA), and death. Multivariate logistic regression analysis was performed to control for demographic and clinically significant variables. A power analysis using expected DVT and PE rates based on prior studies found that a total of 830 patients were needed to detect a statistically significant difference with a minimum power of 80%.
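A minimal sketch of this kind of two-proportion power calculation is shown below; the baseline and expected VTE rates used here are purely illustrative, since the abstract reports only the resulting target of 830 patients.

```python
# Sketch of a two-proportion power analysis of the type described above.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_no_txa, p_txa = 0.10, 0.17          # illustrative DVT rates, not study values
effect = proportion_effectsize(p_txa, p_no_txa)

# Solve for the per-group sample size at alpha = 0.05 and 80% power
n_per_group = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                           power=0.80, ratio=1.0)
print(round(n_per_group))             # patients needed in each cohort
```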

Results: 269 patients fulfilled criteria, 124 (46%) of whom received TXA. No difference was noted in age (31 v 29, p=0.81), injury severity score (29 v 27, p=0.47), or mechanism of injury (62% penetrating v 61% blunt, p=0.81). Patients who received TXA had significantly lower systolic blood pressure on arrival (90 mmHg vs 107 mmHg, p=0.002). The incidence of VTE did not differ between patients who received TXA and those who did not (DVT: 16% vs 13%, p=0.48; PE: 8% vs 6%, p=0.55). There was no difference in CVA or MI. There was no difference in mortality on multivariate analysis (OR 0.67, CI 0.30-1.12).

Conclusion: This preliminary report did not find an association between TXA and VTE or other prothrombotic complications. It remains to be seen whether more subtle differences between the groups will become apparent when study accrual is complete.

 

78.02 Multicenter observational analysis of soft tissue infections: organisms and outcomes

A. Louis1, S. Savage2, W. Li2, G. Utter3, S. Ross4, B. Sarani5, T. Duane6, P. Murphy7, M. Zielinski8, J. Tierney9, T. Schroeppel10, L. Kobayashi11, K. Schuster12, L. Timsina2, M. Crandall1  1University of Florida College of Medicine Jacksonville,Surgery,Jacksonville, FL, USA 2Indiana University School Of Medicine,Surgery,Indianapolis, IN, USA 3University Of California – Davis,Surgery,Sacramento, CA, USA 4Cooper University Hospital,Surgery,Camden, NJ, USA 5George Washington University School Of Medicine And Health Sciences,Surgery,Washington, DC, USA 6JPS Health Network,Surgery,Fort Worth, TX, USA 7University of Western Ontario,Surgery,London, ON, Canada 8Mayo Clinic,Surgery,Rochester, MN, USA 9University Of Colorado Denver,Surgery,Aurora, CO, USA 10University of Colorado,Surgery,Colorado Springs, CO, USA 11University Of California – San Diego,Surgery,San Diego, CA, USA 12Yale University School Of Medicine,Surgery,New Haven, CT, USA

Introduction:  Skin and soft tissue infections (STIs) run the spectrum from mild cellulitis to life-threatening necrotizing infections.  The severity of illness may be affected by a variety of factors, including the organism involved and patient comorbidities.  The American Association for the Surgery of Trauma (AAST) has spent the last five years developing grading scales for impactful Emergency General Surgery (EGS) diseases, including STIs.  The purpose of this study was to characterize patient and infection factors associated with increasing severity of STI using the AAST EGS grading scale.

Methods:  This was a retrospective multi-institutional study, with each of 12 centers contributing 100 patients to the data set.  Patient demographics, comorbidities, and infection data were collected on each patient, as were outcomes including management strategies, mortality, and hospital and intensive care unit (ICU) length of stay (LOS).  Data were compared using Student’s t-test and the Wilcoxon rank-sum test where appropriate.  Simple and multivariate logistic regression, as well as ANOVA, were also used in the analysis.

Results: 1,140 patients were included in this analysis.  The mean age of the cohort was 53 years (SD 19), and 68% of the patients were male.  Hospital stay and mortality risk increased with STI grade (Table 1).  The only statistical difference was noted between Group 3 and Group 5 (p=0.002).  Higher EGS grade STIs were significantly associated with infection by Gram-positive organisms (GPC) when compared to Gram-negative rods (GNR) (OR 0.09, 95% CI 0.06-0.14, p<0.001 for Grade 5).  Polymicrobial infections were also significantly more common with higher grade STI (compared to STI Grade 1: Grade 2 OR 2.29 (95% CI 1.18-4.41); Grade 3 OR 5.11 (95% CI 3.12-8.39); Grade 4 OR 4.28 (95% CI 2.49-7.35); Grade 5 OR 2.86 (95% CI 1.67-4.87)); all p-values were less than 0.001.  GPC infections were associated with significantly more surgical debridements per patient (GNR 1.64 (SD 1.83) versus 2.37 (SD 2.7), p < 0.001).  There were no significant differences in the preponderance of organisms based on region of the country, except in Canada, which had a significantly higher incidence of GNRs compared to GPCs.

Conclusion: This study provides additional insight into the nature of STIs.  Higher grade STIs are dominated by GPCs, which also require more aggressive surgical debridement.  Understanding the natural history of these life-threatening infections will allow centers to plan their operative and antibiotic approach more effectively.

 

78.01 Attenuation of a Subset of Protective Cytokines Correlates with Adverse Outcomes After Severe Injury

J. Cai1, I. Billiar2, Y. Vodovotz1, T. R. Billiar1, R. A. Namas1  1University Of Pittsburgh,Pittsburgh, PA, USA 2University Of Chicago,Chicago, IL, USA

Introduction: Blunt trauma elicits a complex, multi-dimensional inflammatory response that is intertwined with late complications such as nosocomial infection and multiple organ dysfunction. Among multiple presenting factors (age and gender), the magnitude of injury severity appears to have the greatest impact on the inflammatory response, which in turn correlates with clinical trajectories in trauma patients. However, a relatively limited number of inflammatory mediators have been characterized in human trauma.  Here, we sought to characterize the time-course changes in 31 cytokines and chemokines in a large cohort of blunt trauma patients and analyze the differences as a function of injury severity.

Methods: Using clinical and biobank data from 472 blunt trauma patients who were admitted to the intensive care unit (ICU) and survived to discharge, three groups were identified based on injury severity score (ISS): Mild (ISS: 1-15, n=180), Moderate (ISS: 15-24, n=170), and Severe (ISS: ≥25, n=122). Three samples within the first 24 h were obtained from all patients and then daily up to day 7 post-injury. Thirty-one cytokines and chemokines were assayed using Luminex™ and analyzed using the Kruskal–Wallis test (P<0.05). Principal component analysis (PCA) was used to define the principal characteristics/drivers of the inflammatory response in each group.
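A minimal sketch of the group comparison and PCA steps is shown below; the dataframe layout, file name, and mediator column names are illustrative assumptions, not the study's actual data structure.

```python
# Sketch of the group comparison and PCA described above, assuming a DataFrame
# with one row per sample, an iss_group column, an hours_post_injury column,
# and one column per mediator (hypothetical names such as IL_22, IP_10, MIG).
import pandas as pd
from scipy.stats import kruskal
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("cytokine_timecourse.csv")               # hypothetical file
mediators = [c for c in df.columns if c.startswith("IL_") or c in ("IP_10", "MIG")]

# Kruskal-Wallis test across mild / moderate / severe groups for each mediator
for m in mediators:
    groups = [g[m].dropna() for _, g in df.groupby("iss_group")]
    stat, p = kruskal(*groups)
    print(m, round(p, 4))

# PCA on standardized mediator levels within the first 16 h post-injury
early = df[df["hours_post_injury"] <= 16][mediators].dropna()
pca = PCA(n_components=2).fit(StandardScaler().fit_transform(early))
loadings = pd.DataFrame(pca.components_, columns=mediators).T.abs()
print(loadings.nlargest(6, 0))        # top mediators driving the first component
```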

Results: The severe group had statistically significantly longer ICU and hospital stays, more days on mechanical ventilation, and a higher prevalence of nosocomial infection (47%) when compared to the mild and moderate groups (16% and 24%, respectively). Time course analysis of biomarker trajectories showed that 21 inflammatory mediators were significantly higher in the severe group upon admission and over time vs the mild and moderate groups. However, 8 inflammatory mediators (IL-22, IL-9, IL-33, IL-21, IL-23, IL-17E/25, IP-10, and MIG) were significantly attenuated during the initial 16 h post-injury in the severe group when compared to the mild and moderate groups. PCA suggested that the circulating inflammatory response during the initial 16 h in the mild and moderate groups was characterized primarily by IL-13, IL-1β, IL-22, IL-9, IL-33, and IL-4. Interestingly, beyond 16 h post-injury, IL-4, IL-17A, IL-13, IL-9, IL-1β, and IL-7 were the primary characteristics of the inflammatory response in the severe group.

Conclusion: These findings suggest that severe injury is associated with an early suppression of a subset of cytokines known to be involved in tissue protection and regeneration (IL-22, IL-33, IL-25 and IL-9), lymphocyte differentiation (IL-21 and IL-23) and cell trafficking (CXC chemokines) post-injury which in turn correlates with adverse clinical outcomes. Therapies targeting the immune response after injury may need to be tailored differently based on injury severity and could be personalized by the measurement of inflammatory biomarker patterns.

 

77.10 Kidney Donor Contrast Exposure and Recipient Clinical Outcomes

S. Bajpai1, W. C. Goggins1, R. S. Mangus1  1Indiana University School Of Medicine,Surgery / Transplant,Indianapolis, IN, USA

Introduction:
The use of contrast media in hospital procedures has been increasing since its initial use in 1923. Despite developments in utility and safety over time, contrast media has been associated with kidney injury in exposed patients. Several studies have investigated contrast-induced nephropathy (CIN) in hospital patients and kidney recipients post-transplant. However, there are few studies that connect kidney donor contrast exposure to kidney transplant recipient outcomes. This study reviews all deceased kidney donors at a single center over a 15-year period to determine if donor contrast exposure results in CIN in the donor, or is associated with delayed graft function or graft survival in the transplant recipient.

Methods:
The records of all deceased donor kidney transplants were reviewed. Donor initial, peak, and last serum creatinine levels were recorded. Recipient renal function was recorded, including delayed graft function, creatinine clearance at one year, and 36-month graft survival. Donor contrast exposure was recorded and generally included computed tomography studies and angiograms. Contrast dosing was not available, so exposure was recorded as the number of contrasted studies received by the donor.

Results:
The records of 1585 deceased donor kidney transplants were reviewed. Complete donor records were available for 1394 (88%). Fifty-one percent of donors received at least one contrast study (38% had 1 study, 12% had 2 studies, and 1% had 3 studies). Donor contrast exposure was not associated with any significant changes in donor pre-procurement serum creatinine levels. Post-transplant, donor contrast exposure was not associated with the risk of delayed graft function (4% for all), nor with kidney graft survival at 7, 30, or 90 days. Creatinine clearance at 1 year was equivalent for the study groups. Cox regression analysis demonstrated slightly higher graft survival at 36 months post-transplant for donor grafts that were exposed to contrast (p=0.02).

Conclusion:
These results fail to demonstrate any negative effect of donor contrast administration on early and late kidney graft function in a large number of exposed patients over a long time period. These results included donor kidneys exposed to as many as 3 contrasted studies prior to graft procurement. Long-term survival was higher in donor grafts exposed to any contrast. This finding may be related to more desirable donors undergoing more extensive pre-donation testing.
 

77.08 Racial Disparities in Access to Kidney Transplantation: Insights from the Modern Era

K. Covarrubias1, K. R. Jackson1, J. H. Chen1, C. M. Holscher1, T. Purnell1, A. B. Massie1, D. L. Segev1, J. M. Garonzik-Wang1  1The Johns Hopkins University School Of Medicine,Surgery,Baltimore, MD, USA

Introduction: One goal of the new Kidney Allocation System (KAS) was to increase access to deceased donor kidney transplantation (DDKT) for racial and ethnic minorities, who prior to KAS had lower DDKT rates than Whites. Early studies after KAS implementation reported narrowing disparities in DDKT rates for Black and Hispanic candidates; however, it is unclear if these changes have translated into long-term equivalent DDKT rates for racial and ethnic minorities.

Methods: We studied 270,722 DDKT candidates using SRTR data from 12/4/2011–12/3/2014 (‘pre-KAS’) and 12/4/2014–12/3/2017 (‘post-KAS’), analyzing DDKT rates for Black, Hispanic, and Asian candidates using negative binomial regression, adjusting for candidate characteristics. We first determined whether DDKT rates for each race/ethnicity had improved post-KAS compared to pre-KAS, and then whether these changes had resulted in equivalent DDKT rates for minorities compared to Whites. We then calculated the cumulative incidence of DDKT for each race using a competing-risk framework.
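A minimal sketch of the adjusted-rate and competing-risk analyses is shown below; the candidate-level dataframe, file name, column names, and covariates are assumptions for illustration, and lifelines' Aalen-Johansen estimator is used as one possible competing-risk implementation.

```python
# Sketch of the rate and cumulative-incidence analyses described above,
# assuming a candidate-level DataFrame with hypothetical columns: ddkt_events,
# person_years, race, era ("pre"/"post"), age, blood_type, cpra,
# years_on_list, event_code (1 = DDKT, 2 = death/removal).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import AalenJohansenFitter

cands = pd.read_csv("srtr_candidates.csv")                # hypothetical file

# Adjusted DDKT rates via negative binomial regression with a log person-time offset
nb = smf.negativebinomial(
    "ddkt_events ~ C(race) * C(era) + age + blood_type + cpra",
    data=cands, offset=np.log(cands["person_years"])).fit()
print(np.exp(nb.params))                                   # incidence rate ratios

# Cumulative incidence of DDKT, treating death/removal as a competing risk
ajf = AalenJohansenFitter()
black = cands[cands["race"] == "Black"]
ajf.fit(black["years_on_list"], black["event_code"], event_of_interest=1)
print(ajf.cumulative_density_.tail())                      # cumulative incidence curve
```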

Results: Post-KAS, Black candidates had an increased DDKT rate compared to pre-KAS (adjusted incidence rate ratio [aIRR]: 1.13, 95% CI 1.01-1.25, p=0.03). However, there were no post-KAS changes in DDKT rates for Hispanic (aIRR: 0.96, 95% CI 0.83-1.11, p=0.6) and a decrease in DDKT rates for Asian candidates (aIRR: 0.79, 95% CI 0.66-0.94, p=0.009). Relative to White candidates, KAS resulted in a similar DDKT rate for Black candidates (aIRR: 0.99, 95% CI 0.87-1.12, p=0.9), but a decreased DDKT rate for Hispanic (aIRR: 0.74, 95% CI 0.56-0.98, p=0.04) and Asian (aIRR: 0.72, 95% CI 0.56-0.93, p=0.01) candidates. The range of likelihood of DDKT at 3 years for a given racial/ethnic minority decreased post-KAS (range 28.7-32.4%) compared to pre-KAS (range 27.0-34.3%). The 3-year cumulative incidence of DDKT improved post-KAS for Black (pre-KAS: 29.5%; post-KAS: 34.9%) and Hispanic candidates (pre-KAS: 27.0%; post-KAS: 30.5%). However, the 3-year cumulative incidence of DDKT remained similar for Asian candidates (pre-KAS: 29.0%; post-KAS: 28.7%), while it decreased for White candidates (pre-KAS: 34.3%; post-KAS: 31.6%).

Conclusion: KAS has produced sustained improvements in DDKT rates for Black candidates, but not for Hispanic or Asian candidates. Nevertheless, the cumulative incidence of DDKT has become more similar across racial/ethnic groups post-KAS. While KAS has been successful in improving access to DDKT for Black candidates, further work is necessary to identify methods to improve DDKT rates for Hispanic and Asian candidates.

 

77.07 Earlier is better: Evaluating the timing of tracheostomy after liver transplantation

R. A. Jean1, S. M. Miller1, A. S. Chiu1, P. S. Yoo1  1Yale University School Of Medicine,Department Of Surgery,New Haven, CT, USA

Introduction: Morbidity and mortality are relatively high following liver transplantation. Furthermore, severe pulmonary complications progressing to respiratory failure, though rare, are associated with increased postoperative mortality and prolonged hospitalization. Although these cases may require tracheostomy, there is uncertainty regarding how soon this should be pursued. The purpose of this study is to quantify the comparative effectiveness of early versus late tracheostomy in postoperative liver transplant patients in relation to in-hospital mortality and length of stay.

Methods:  The National Inpatient Sample (NIS) dataset between 2000 and 2014 was queried for discharges among adult patients who underwent both orthotopic liver transplant (OLT) and post-transplant tracheostomy (PTT). Patients receiving tracheostomy by post-transplantation day 14 were classified as “early” tracheostomies, while those receiving tracheostomy after day 14 were classified as “late.” In-hospital mortality was compared between groups using adjusted logistic regression models. Cox proportional hazards regression was used to model the impact of early tracheostomy on post-tracheostomy length of stay (PTLOS), accounting for the competing risk of inpatient mortality.

Results: There were 2,149 weighted discharges after OLT and PTT during the study period, of which 783 (36.4%) had tracheostomy performed by post-transplant day 14 and were classified as “early.” Patients receiving early PTT were more likely to have a Charlson Comorbidity Index (CCI) score of 3+ compared to those receiving late PTT (early 71.1% vs late 60.0%, p=0.04), but there were otherwise no significant baseline differences between the groups. Despite this increased comorbidity, early PTT had significantly lower in-hospital mortality (early 26.4% vs late 36.7%, p=0.01). Unadjusted median PTLOS was 31 days (IQR 20-48 days) for early PTT versus 39 days (IQR 23-61 days) for late PTT (p=0.03). In adjusted logistic regression, early PTT was associated with 37% decreased odds of in-hospital mortality in comparison to late PTT (OR 0.63, p=0.04). Furthermore, after accounting for the competing risk of mortality, early tracheostomy had a 41% higher daily rate of discharge alive during the post-transplant hospitalization (HR 1.41, p<0.0001).

Conclusion: Among patients with OLT, early PTT, despite being performed on patients with significantly higher comorbidity scores, was associated with lower in-hospital mortality, lower PTLOS, and quicker discharge alive. These results support our hypothesis that among patients with respiratory failure after OLT, early consideration of PTT may portend more favorable outcomes than a delayed approach.

 

77.06 Impact of Donor Diabetes on the Survival of Lung Transplant Patients

A. L. Mardock1, S. E. Rudasill1, Y. Sanaiha1, H. Khoury1, H. Xing1, J. Antonios2, P. Benharash1  1David Geffen School Of Medicine, University Of California At Los Angeles,Cardiothoracic Surgery,Los Angeles, CA, USA 2University Of California – Los Angeles,Los Angeles, CA, USA

Introduction:  Diabetes mellitus is among several factors considered when assessing the suitability of donated organs for transplantation. Currently, lungs from diabetic donors (LDDs) may be allocated to all eligible recipients. The present study utilized a national database to assess the impact of donor diabetes on the longevity of lung transplant recipients.

Methods:  This retrospective study of the United Network for Organ Sharing (UNOS) database analyzed all adult lung transplant recipients from June 2006 to December 2015. Donor and recipient demographics, including the presence of diabetes, were used to create a multivariable model. The primary outcome was five-year mortality, with hazard ratios assessed using multivariable Cox regression analysis. Survival curves were calculated using the Kaplan-Meier method.
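A minimal sketch of the survival analyses is shown below using lifelines; the file name, column names, and covariate list are illustrative assumptions based on the variables mentioned in the abstract.

```python
# Sketch of the survival analyses described above, assuming a recipient-level
# DataFrame with hypothetical columns: years_to_event, death (1/0),
# donor_diabetes, recipient_diabetes, recipient_age, donor_age, race_mismatch.
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

recips = pd.read_csv("unos_lung_recipients.csv")          # hypothetical file

# Multivariable Cox model for five-year mortality
cph = CoxPHFitter()
cph.fit(recips[["years_to_event", "death", "donor_diabetes", "recipient_age",
                "recipient_diabetes", "donor_age", "race_mismatch"]],
        duration_col="years_to_event", event_col="death")
cph.print_summary()                                        # hazard ratios and CIs

# Kaplan-Meier curves stratified by donor diabetes status
kmf = KaplanMeierFitter()
for label, grp in recips.groupby("donor_diabetes"):
    kmf.fit(grp["years_to_event"], grp["death"], label=f"donor diabetes={label}")
    kmf.plot_survival_function()
```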

Results: Of the 17,843 lung transplant recipients analyzed, 1,203 (12.2%) received LDDs. Recipients of LDDs were more likely to be female (44.1 vs. 40.2%, p<0.01) and to have mismatched race (47.5 vs. 42.1%, p<0.01), but were otherwise comparable to recipients of non-diabetic lungs. Relative to non-diabetic donors, diabetic donors were older (46.5 vs. 33.6 years, p<0.01), more likely to be female (48.3 vs. 39.1%, p<0.01), more likely to have a history of smoking (12.2 vs. 9.8%, p<0.01) and hypertension (74.6 vs. 19.0%, p<0.01), and had a higher BMI (28.6 vs. 25.7, p<0.01). Multivariable analysis revealed LDDs to be an independent predictor of mortality at five years (HR 1.16 [1.04-1.29], p<0.01), especially when transplanted to diabetes-free recipients (HR 1.24 [1.11-1.40], p<0.01). Transplantation of LDDs to diabetic recipients showed no independent association with five-year mortality (HR 0.81 [0.63-1.06], p=0.12).

Conclusion: Significantly higher five-year mortality was seen in patients receiving LDDs, particularly among non-diabetic recipients. However, patients with diabetes at the time of transplant who received LDDs saw no decrement in survival compared to those receiving non-diabetic lungs. Therefore, matching non-diabetic recipients to non-diabetic donors may confer a survival benefit and should be considered in lung allocation algorithms.