17.19 Back to the Basics: Trauma Team Assessment and Decision Making is Associated with Improved Outcomes

M. A. Vella1, R. Dumas1,2, K. Chreiman1, M. Subramanian1, M. Seamon1, P. Reilly1, D. Holena1  1University Of Pennsylvania,Traumatology, Surgical Critical Care And Emergency Surgery,Philadelphia, PA, USA 2University Of Texas Southwestern Medical Center,General And Acute Care Surgery,Dallas, TX, USA

Introduction:  Teamwork and decision making are critical elements of trauma resuscitation. While assessment instruments such as the non-technical skills (NOTECHS) tool have been developed, their correlation with patient outcomes is unclear. Using emergency department thoracotomy (EDT) as a model, we sought to describe the distribution of NOTECHS scores during resuscitations. We hypothesized that patients undergoing EDT whose resuscitations had better scores would be more likely to have return of spontaneous circulation (ROSC).

Methods:  Continuously recording video was used to review all captured EDTs during the study period. We used a modification of the NOTECHS instrument to measure 6 domains (leadership, cooperation/resource management, communication/interaction, assessment/decision making, situation awareness/coping with stress, and safety) on a 3-point scale (1 = best, 2 = average, 3 = worst). For each resuscitation, an overall NOTECHS score (6-18 points) was calculated. The primary outcome metric was ROSC. Associations between demographic, injury, and NOTECHS variables and ROSC were examined using univariate regression analysis.
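As a rough illustration of the scoring arithmetic (a minimal sketch in Python with hypothetical domain labels, not the authors' instrument), the overall score is simply the sum of the six domain ratings:

```python
# Minimal sketch of tallying the modified NOTECHS score described above.
# Each domain is rated 1 (best) to 3 (worst), so the total ranges from
# 6 to 18 and lower is better.
DOMAINS = (
    "leadership",
    "cooperation_resource_management",
    "communication_interaction",
    "assessment_decision_making",
    "situation_awareness_coping_with_stress",
    "safety",
)

def notechs_total(ratings: dict) -> int:
    """Sum the six 1-3 domain ratings into an overall 6-18 score."""
    total = 0
    for domain in DOMAINS:
        score = ratings[domain]
        if score not in (1, 2, 3):
            raise ValueError(f"{domain}: rating must be 1, 2, or 3, got {score}")
        total += score
    return total

# Example: a resuscitation rated 'best' on assessment/decision making
# and 'average' elsewhere.
example = {domain: 2 for domain in DOMAINS}
example["assessment_decision_making"] = 1
print(notechs_total(example))  # 11
```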

Results: 61 EDTs were captured during the study period. 19 patients (31%) had ROSC and 42 (69%) did not. The median NOTECHS score for all resuscitations was 9 [IQR 8-11]. As demographic and injury data (age, gender, mechanism, signs of life) were not associated with ROSC in univariate analysis, they were not considered for inclusion in a multivariable regression model of NOTECHS scores and ROSC. The association between overall NOTECHS score and ROSC did not reach statistical significance (p=0.09), but examination of the individual components of the NOTECHS score (Table 1) demonstrated that, compared to resuscitations with “average” (2) or “worst” (3) scores on “Assessment and Decision Making,” resuscitations with a “best” score were 5.3 times more likely to lead to ROSC (OR 5.3, CI 1.2-31.9, p=0.017).

Conclusion: While the association between overall NOTECHS scores and ROSC did not reach statistical significance, the assessment and decision making component did. In patients arriving in cardiac arrest who undergo EDT, better team performance is associated with improved rates of ROSC. Future analysis of the timing and quality of elements of resuscitation using video review may elucidate the mechanistic underpinnings of these findings.

 

17.18 Level 1 Trauma Surgeon Staffing: Is more really better?

A. Ansari1, A. Kothari1, E. Eguia1, M. Anstadt1, R. Gonzalez1, F. Luchette1, P. Patel1  1Loyola University Chicago Stritch School Of Medicine,Department Of Surgery,Maywood, IL, USA

Introduction:
Trauma is the fourth leading cause of death in the United States. Care in level 1 trauma centers is associated with improved outcomes, and the determinants of this relationship continue to be studied. The objective of this study was to determine whether the number of trauma surgeons on staff at level 1 trauma centers impacted outcomes.

Methods:
This study utilized data from the American College of Surgeons (ACS) National Trauma Data Bank (NTDB) for the years 2013-2016. Inclusion criteria were all patients presenting to a Level 1 trauma center with severe traumatic injuries, defined as an Injury Severity Score (ISS) of 15 or greater. The primary outcome was patient survival. A multivariable logistic regression model was constructed to estimate the adjusted effect of trauma surgeon staffing on the primary outcome.
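The adjusted analysis could be sketched as follows (Python/statsmodels, with hypothetical patient-level column names such as survived, high_staff, iss, age, sex, and race; an illustration of the modeling approach, not the authors' NTDB code):

```python
# Sketch of a multivariable logistic regression of survival on surgeon
# staffing, adjusted for injury severity, age, sex, and race.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_staffing_model(df: pd.DataFrame) -> pd.DataFrame:
    """Return odds ratios with 95% CIs for each model term."""
    model = smf.logit(
        "survived ~ high_staff + iss + age + C(sex) + C(race)", data=df
    ).fit()
    ors = np.exp(model.params).rename("OR")
    ci = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
    return pd.concat([ors, ci], axis=1)

# Usage (assuming `encounters` is the analytic DataFrame):
# print(fit_staffing_model(encounters).loc["high_staff"])
```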

Results:
A total of 180,999 encounters were included in this study. Injured patients who received care at a trauma center with fewer than 4 staff surgeons had a mortality of 16.0%, versus 12.4% at centers with 4 or more surgeons (P=0.01). After controlling for injury severity, age, sex, and race, the odds ratio for mortality was 0.70 (95% CI 0.53-0.92) for high-staffing compared to low-staffing centers. Secondary outcomes, including length of stay, ventilator time, and ICU length of stay, did not differ based on trauma center staffing.

Conclusion:
Current ACS requirements for trauma surgeon staffing at Level 1 trauma centers mandate a minimum of one trauma surgeon per center. Based on our evaluation, there appears to be improvement in clinical outcomes when a center has 4 or more trauma surgeons on staff. This warrants further evaluation of the requirements for trauma surgeon staffing at level 1 trauma centers.
 

17.17 The Current Composition and Depth of Massive Transfusion Protocols at US Level-1 Trauma Centers

J. Williams1, C. E. Wade1, B. A. Cotton1  1McGovern Medical School at UTHealth,Acute Care Surgery,Houston, TX, USA

Introduction: Recent guidelines from the American College of Surgeons Trauma Quality Improvement Program (TQIP) and the Eastern Association for the Surgery of Trauma (EAST) have made several recommendations for optimal resuscitation and transfusion of the bleeding patient. These guidelines were developed to improve outcomes in this patient population by reducing variation in massive transfusion protocols (MTP) across institutions, including the recommendation to transfuse products in a ratio approximating 1:1:1 (plasma:platelets:red blood cells). However, there are few data showing how well these guidelines have been implemented. Moreover, given the concern for supporting care during mass casualty events, there are no data evaluating the depth of product availability at these centers. The purpose of this study was to evaluate existing MTPs and on-hand blood products at academic level-1 trauma centers (TC) throughout the US and describe current practices.

Methods:  Trauma directors at the 25 busiest US level-1 TCs were asked to complete an anonymous survey regarding their MTPs and a cross-sectional survey of on-hand blood products. Continuous data are presented as medians with interquartile range (IQR, 25th-75th percentile). Categorical data are reported as proportions.

Results: Responses were obtained from 17 TCs, with all centers having an MTP in place. The median number of trauma admissions for calendar year 2016 for responding TCs was 2838 (IQR 1813-4888), with a median number of 54 MTP patients (IQR 38-107). 76% of responding TCs report using a 1:1 ratio of plasma:red blood cells for trauma resuscitation. 82% of responding TCs are using platelets either in their first or subsequent MTP coolers, with 58% of TCs reporting platelet use in their first MTP cooler. The most commonly reported transfusion ratio of platelets:plasma:RBCs was 1:1:1, with 35% of TCs using this ratio in their first MTP cooler, and 47% for subsequent MTP coolers. Additionally, 89% of TCs report using viscoelastic testing to guide resuscitation efforts. TABLE depicts median on-hand blood products across the 17 centers.

Conclusion: This study provides a snapshot of current MTP practices at busy level-1 trauma centers throughout the US. Although all surveyed programs have an MTP in place, variation exists in the ratio of blood products used despite clear recommendations from recent guidelines. Additionally, there is great variation in the quantity of blood products on hand at TCs, especially with regard to platelets. Further analysis is needed to understand how differences in MTPs affect patient outcomes.
 

17.16 Surgical Critical Care Billing at the End of Life: Are We Recognizing Our Own Efforts?

S. J. Zolin1,2, J. Bhangu1,2, B. T. Young1,2, S. Posillico1,2, H. Ladhani1,2, J. Claridge1,2, V. P. Ho1,2  1Case Western Reserve University School Of Medicine,Cleveland, OH, USA 2MetroHealth Medical Center,Division Of Trauma, Critical Care, Burns, And Emergency General Surgery,Cleveland, OH, USA

Introduction:
Practitioners in the intensive care unit (ICU) not only provide physiologic support to severely injured patients, but also spend time counseling families and providing primary palliative care services, including goals of care conversations and symptom palliation. It is unclear whether ICU physicians account for these services consistently in their critical care billing and documentation (CCBD). We analyzed CCBD practices for moribund trauma patients cared for in the ICU of an academic level 1 trauma center, hypothesizing that CCBD would be inconsistent despite the critically ill status of these patients near the end of life.

Methods:
An analysis of all admitted adult trauma patients who died between 12/2014 and 12/2017 was performed to evaluate the presence of CCBD on the day prior to death and on the day of death. CCBD was defined as the critical care time documented in daily ICU progress notes. Age, injury severity score (ISS), race, code status at time of death, and family meetings discussing prognosis and/or goals of care held within one day of death were recorded. Patients already designated as comfort care prior to the day of analysis were not considered eligible for CCBD, and patients who died within 24 hours of arrival were excluded. Multivariate logistic regression was used to determine patient factors associated with CCBD.

Results:
A total of 134 patients met study criteria. 71.6% were male and 87.3% were white. The median age was 69 (IQR 58-82) and the median ISS was 26 (IQR 20-33). 82.1% had a family meeting within 1 day of death, and 76.5% were made comfort care prior to death. Of patients eligible for CCBD, 42.5% had no CCBD on the day prior to death and 59.3% had no CCBD on the day of death, corresponding to lost potential hospital compensation in excess of $30,000. For the day prior to death, a family meeting within 1 day of death was associated with an increased likelihood of CCBD (p = 0.011), while increasing age was associated with a decreased likelihood of CCBD (p = 0.008).

Conclusion:
In critically ill trauma patients near death, CCBD was inconsistent, representing an opportunity for improvement. Family meetings within 1 day of death were frequent and were associated with CCBD, suggesting that additional time spent with patients and families in end of life conversations may lead to more consistent CCBD. Given the downstream impacts of CCBD on health systems, further investigation into the mechanisms and generalizability of these findings is needed.
 

17.15 Implementation of a Bedside ICU Visual Clinical Decision Support Tool Reduces Acute Kidney Injury

J. E. Baker1, C. A. Droege1, J. A. Johannigman1, J. B. Holcomb2, T. A. Pritts1, M. D. Goodman1  1University of Cincinnati,Department Of Surgery,Cincinnati, OH, USA 2The University of Texas,Department Of Surgery,Houston, TX, USA

Introduction:
Acute kidney injury (AKI) is a secondary insult in critical illness commonly associated with increased morbidity and mortality. Determining the onset and extent of AKI remains challenging. We hypothesized that a visual clinical decision support tool with validated AKI staging and recognition would help identify patients transitioning into different stages of injury severity. 

Methods:
A commercially available bedside clinical surveillance and decision support dashboard system was implemented in 12 of the 34 beds in a surgical intensive care unit (SICU) at an academic level I trauma center. An automated AKI bundle based on the Kidney Disease: Improving Global Outcomes (KDIGO) staging criteria was used to aid in identification of patients in the various AKI stages. A pre- and post-implementation analysis was performed on patients in SICU beds with (WDB) and without (WODB) the dashboard to assess the impact of the bundle on identification of patients with AKI and minimization of ongoing renal dysfunction. Data from five months prior to and fourteen months after implementation were compared. Patients with known chronic or end-stage renal disease were excluded.
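For orientation, a simplified sketch of the creatinine-based KDIGO staging that such a bundle might apply is shown below (Python; urine-output criteria and the 48-hour/7-day windowing are omitted, and this is not the vendor's actual rule set):

```python
# Simplified creatinine-based KDIGO AKI staging (sketch only).
def kdigo_stage(baseline_scr: float, current_scr: float,
                abs_rise_48h: float = 0.0, on_rrt: bool = False) -> int:
    """Return an approximate KDIGO AKI stage (0 = no AKI) from serum
    creatinine values in mg/dL."""
    ratio = current_scr / baseline_scr
    if on_rrt or ratio >= 3.0 or current_scr >= 4.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or abs_rise_48h >= 0.3:
        return 1
    return 0

print(kdigo_stage(baseline_scr=0.9, current_scr=1.5))  # ratio ~1.7 -> stage 1
```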

Results:
A total of 2813 patients were included: 988 WDB patients and 1825 WODB patients. Age and gender were similar in each group both before and after implementation. Overall AKI incidence was reduced in the WDB group after implementation (28.8% vs. 22.4%, pre vs. post; p=0.04). Individual KDIGO stages of AKI were reduced in the WDB group post-implementation, but none of these reductions reached statistical significance. By contrast, in the WODB group there were no differences in overall AKI incidence or individual KDIGO stages before versus after implementation. ICU and hospital lengths of stay (LOS) were similar in all patients and on subgroup analysis between individual KDIGO stages. No difference in mortality was demonstrated between the WDB and WODB cohorts.

Conclusion:
Implementation of a bedside visual clinical decision support tool was associated with a statistically significant decrease in overall AKI incidence in patients with the bedside dashboard. We did not find a difference in LOS or mortality, but this initial retrospective study may be underpowered to detect these changes. Nevertheless, integration of an AKI bundle within this tool in SICU patients may improve clinicians' real-time identification of AKI and facilitate implementation of therapies to improve quality of care.
 

17.14 A COMPARISON OF TWO THROMBOEMBOLIC PROPHYLAXIS REGIMENS WITH LOW MOLECULAR WEIGHT HEPARIN IN TRAUMA

M. Jackson1, M. S. O’Mara1, A. Vang1, P. Beery1, M. Bonta1, M. C. Spalding1  1OhioHealth/Grant Medical Center,Trauma And Acute Care Surgery,Columbus, OH, USA

Introduction:

Trauma patients are at increased risk for the development of venous thromboembolic events (VTE). Controversy remains regarding the adequate dosing regimen of low molecular weight heparin (LMWH, enoxaparin) for thromboprophylaxis in trauma patients. We hypothesized that 30 mg enoxaparin twice daily is superior to 40 mg enoxaparin once daily in both safety and effectiveness.

Methods:

A retrospective controlled cohort study was performed of trauma patients who received prophylactic enoxaparin before and after a protocol dosing change. The screening criteria for clinically significant VTE were constant throughout both study periods. Patients in the pre-protocol change cohort received 40 mg enoxaparin once daily, while those in the post-protocol change cohort received 30 mg twice daily. A sample of 950 patients in each treatment group was estimated to provide at least 80% statistical power to detect a difference between the reported VTE rates of 2.9% and 1.1%, based on a two-sided chi-square test comparing two independent groups with a Type I error of 0.05. Demographics, risk factors, and incidences of VTE events were compared between the two cohorts.
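The stated sample size can be reproduced approximately with the standard normal-approximation formula for comparing two independent proportions (a sketch of the calculation, not the authors' code):

```python
# Sample size per group to detect 2.9% vs 1.1% VTE rates with two-sided
# alpha = 0.05 and 80% power, using the normal approximation to the
# chi-square test for two proportions.
from math import ceil, sqrt
from scipy.stats import norm

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_a = norm.ppf(1 - alpha / 2)          # 1.96 for alpha = 0.05
    z_b = norm.ppf(power)                  # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

print(n_per_group(0.029, 0.011))  # 949, consistent with the ~950 per cohort cited
```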

Results:

2638 patients were initially analyzed and 1900 met inclusion criteria: 950 patients in the pre-protocol change cohort and 950 in the post-protocol change cohort. The demographics of the two groups were similar. The once-daily cohort experienced a VTE rate of 4.1% (39 events), while the twice-daily cohort experienced a VTE rate of 3.7% (35 events) (p=0.64). When the groups were corrected for variability by logistic regression, there remained no difference in VTE rate (p=0.60).

Conclusion:

The 30 mg enoxaparin twice-daily and 40 mg enoxaparin once-daily dosing regimens did not differ in the incidence of clinically significant VTE in these cohorts. Both dosing regimens were effective for VTE prophylaxis in trauma patients.
 

17.13 A Multicenter Study of Nutritional Adequacy in Neonatal and Pediatric Extracorporeal Life Support

K. Ohman1, H. Zhu3, I. Maizlin4,10, D. Henry5, R. Ramirez6, L. Manning7, R. F. Williams7,8, Y. S. Guner6,11, R. T. Russell4,10, M. T. Harting5,9, A. M. Vogel2,3  1Washington University,Surgery,St. Louis, MO, USA 2Baylor College Of Medicine,Surgery,Houston, TX, USA 3Texas Children’s Hospital,Surgery,Houston, TX, USA 4The Children’s Hospital Of Alabama,Surgery,Birmingham, AL, USA 5Children’s Memorial Hermann Hospital,Surgery,Houston, TX, USA 6Children’s Hospital of Orange County,Surgery,Orange, CA, USA 7LeBonheur Children’s Hospital,Surgery,Memphis, TN, USA 8University Of Tennessee Health Science Center,Surgery,Memphis, TN, USA 9McGovern Medical School at UTHealth,Pediatric Surgery,Houston, TX, USA 10University Of Alabama at Birmingham,Surgery,Birmingham, AL, USA 11University Of California – Irvine,Surgery,Orange, CA, USA

Introduction:  Extracorporeal life support (ECLS) provides life-saving treatment for critically ill neonates and children. Malnutrition in critically ill patients is extremely common and is associated with increased morbidity and mortality. The purpose of this study is to describe practice patterns of parenteral (PN) and enteral (EN) nutrition and nutritional adequacy in neonates and children receiving ECLS. We hypothesize that nutritional adequacy is highly variable, overall nutritional adequacy is poor, and enteral nutrition is underutilized compared to parenteral nutrition.

Methods:  An IRB-approved, retrospective study of neonates and children (age <18 years) receiving ECLS at 5 centers from 2012 to 2014 was performed. Demographic, clinical, and outcome data were analyzed. Continuous variables are presented as median [IQR]. Adequate nutrition was defined as meeting 66% of daily caloric goals during ECLS support.

Results: 283 patients were identified; the median age was 12 days [3 days, 16.4 years] and 47% were male. ECLS categories were neonatal respiratory 33.9%, neonatal cardiac 25.1%, pediatric respiratory 17.7%, and pediatric cardiac 23.3%. The predominant mode was venoarterial (70%). Mortality was 41%. Pre-ECLS enteral and parenteral nutrition was present in 80% and 71.5% of patients, respectively. The median caloric and protein goals for the population were 90 kcal/kg [70, 100] and 3 g/kg [2, 3], respectively. Figure 1 shows caloric and protein nutritional adequacy relative to goal for the population over the duration of ECLS. The median percentage of days with adequate caloric and protein nutrition was 50% [0, 78] and 67% [22, 86], respectively. The median percentage of days with adequate caloric and protein nutrition by the enteral route alone was 22% [0, 65] and 0% [0, 50], respectively. Gastrointestinal complications occurred in 19.7% of patients, including hemorrhage (4.2%), ileus (3.2%), enterocolitis (2.5%), intraabdominal hypertension or compartment syndrome (0.7%), perforation (0.4%), and other (11%).

Conclusion: Although nutritional adequacy in neonates and children receiving ECLS improves over the course of the ECLS run, the use of enteral nutrition remains low despite relatively infrequent gastrointestinal complications.

 

17.12 CT Scan Analysis Indicates Nutritional Status in Trauma Patients

F. Cai1, J. C. Lee2, E. J. Matta4, C. E. Wade1,3, S. D. Adams1,3  1McGovern Medical School,Surgery,Houston, TX, USA 2Memorial Hermann Hospital,Clinical Nutrition,Houston, TX, USA 3Center for Translational Injury Research,Houston, TX, USA 4McGovern Medical School,Diagnostic Radiology,Houston, TX, USA

Introduction:
More than 2 million people are hospitalized in the US annually for traumatic injuries. These patients are at risk for malnutrition due to prolonged preoperative fasting and minimal intake from ileus or intestinal injury, while their injuries increase metabolic demands. The gold standard for diagnosing malnutrition is a dietician interview and physical exam assessing the ASPEN/AND malnutrition consensus criteria. Weight loss and loss of muscle mass and fat are commonly used as indicators, along with calorie intake history; however, this assessment requires time, resources, and training. Given the prevalence and accessibility of CT imaging in trauma admissions, morphometric analysis has the potential to serve as an indicator of admission nutritional status. We hypothesized that admission CT scans can identify individuals at high risk of being malnourished on arrival, and that this early identification can target them for aggressive nutrition supplementation.

Methods:
We performed a retrospective review of adult (>15 years) patients with traumatic injuries admitted to our level I trauma center. We included patients with admission abdominal CT scans and a dietician nutritional assessment within 3 days. Patients were stratified by gender, age (Young <65 years, Older ≥65 years), and nutritional status, designated as non-malnourished (NM) or moderate-severe malnourished (MSM). CT images were analyzed using Aquarius TeraRecon software to calculate the average psoas area at the level of the L4-L5 intervertebral disc. Statistical significance was determined by stepwise selection modeling and set at p<0.05.
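The selection step could look roughly like the forward stepwise sketch below (Python/statsmodels, scored by AIC as one common variant rather than the authors' exact procedure; column names such as malnourished, psoas_area_cm2, sex, and age are hypothetical):

```python
# Illustrative forward stepwise selection for a logistic model of
# moderate-severe malnutrition, adding the candidate term that most
# improves AIC at each step.
import statsmodels.formula.api as smf

CANDIDATES = ("psoas_area_cm2", "C(sex)", "age", "age:psoas_area_cm2")

def forward_stepwise(df, outcome="malnourished", candidates=CANDIDATES):
    selected, best_aic = [], float("inf")
    improved = True
    while improved:
        improved, best_term = False, None
        for term in (c for c in candidates if c not in selected):
            formula = f"{outcome} ~ " + " + ".join(selected + [term])
            aic = smf.logit(formula, data=df).fit(disp=0).aic
            if aic < best_aic:
                best_aic, best_term, improved = aic, term, True
        if improved:
            selected.append(best_term)
    return selected, best_aic
```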

Results:
Images were analyzed in 120 patients, of whom 58% were male. The mean age was 53.6 ± 21.6 years and 37% were Older (n=44). The median average psoas area in NM Young males (n=47) was 18.6 cm2, compared to 12.9 cm2 in the MSM. For Young females (n=29), the medians were 10.6 cm2 in the NM and 9.2 cm2 in the MSM. Among the older population, Older males (n=23) had a median of 12.1 cm2 in the NM and 9.7 cm2 in the MSM, and Older females (n=21) had a median of 8.4 cm2 in the NM and 6.6 cm2 in the MSM. (Interquartile ranges are shown in the box plot.) With stepwise selection modeling, we found that gender and psoas size each had a significant effect on nutritional status. The age-by-psoas-size interaction demonstrated an effect on nutritional status but did not reach significance.

Conclusion:
Our data show that average psoas area is significantly decreased in patients diagnosed with malnutrition. Gender was also significantly associated with the risk of malnutrition. In trauma patients with admission CT scans, psoas area analysis can potentially be used to trigger a more aggressive nutrition supplementation plan upon admission, even before dietician assessment.

 

17.11 Intravenous Lidocaine as an Analgesic Adjunct in Trauma Patients: Is It Safe?

H. L. Warren2, P. Faris1, D. H. Bivins2, E. R. Lockhart2, R. Muthukattil2, D. Lollar1,2  1Virginia Tech Carilion School of Medicine,Roanoke, VA, USA 2Carilion Clinic,Roanoke, VA, USA

Introduction: Pain control in patients suffering traumatic injury can be challenging. Exposure to opioid pain medications can lead to prolonged dependence; therefore, regimens that reduce the amount of opioid analgesia are needed. We have identified no data regarding the use of intravenous lidocaine (IVL) in trauma populations. We sought to explore the safety of IVL in these patients.

 

Methods: We performed a single-institution retrospective review of trauma patients receiving IVL from 6/30/16-6/30/17. We extracted data on demographics, pre-admission substance use, injury severity, in-hospital analgesic use, PT/OT participation rates, and side-effect events. The lidocaine group was compared with a non-lidocaine control (C) group matched on age, sex, race, and ISS. Patients with a length of stay <24 hours were excluded from the control group.

Results: 81 patients received IVL and were compared to 89 controls. Age, sex, race, and ISS did not differ. Significantly more patients receiving IVL had a history of narcotic and polysubstance use (p<0.01). Mortality was the same (p=1.0). Hospital length of stay was longer in the IVL group (7.5 vs 11.8 days, p=0.01). 38 of 81 patients received a bolus and all patients received a drip; the mean rate was 1.47 mg/hr. Duration of therapy was 1-41 days; the mode was 3 days. 28 side-effect events occurred in 23 of 81 IVL patients (28.4%). The most common side effect was delirium (14/28). The rate of side effects was higher in the elderly cohort (7/13, 53.8%) than in the adult cohort (16/68, 23.5%). There was no relationship between side effects and blood lidocaine levels. Side effects resolved with cessation of the medication. Side effects occurred in 5 of 89 control patients (5.6%).

Conclusion: Side effects of IVL were common but resolved with cessation of IVL. No mortality was attributed to IVL. With careful monitoring, IVL may be a useful adjunct for patients requiring high narcotic use. Use of IVL in elderly patients requires caution. These results should be clarified with prospective evaluation.

 

17.09 The Optimal Length of Stay for 90-day Readmissions after Surgeries in Tricare

T. Andriotti1, E. Goralnick1, M. Jarman1, M. A. Chaudhary1, L. Nguyen3, P. Learn2, A. Haider1, A. Schoenfeld1  1Harvard Medical School,Surgery,Boston, MA, USA 2Uniformed Services University Of The Health Sciences,Surgery,Bethesda, MD, USA 3Harvard Medical School,Vascular And Endovascular Surgery,Boston, MA, USA

Introduction:

Healthcare performance evaluators have prioritized reduction of length of stay (LOS) and readmissions as important measures of quality in health care. However, these two measures represent competing demands, as decreased LOS may result in increased unplanned readmissions. Our objective was to assess the optimal LOS that leads to the lowest readmission risk after discharge from elective total knee arthroplasty.

Methods:

A retrospective, open cohort study was performed using claims from Tricare, the Department of Defense's health insurance program, to identify all eligible adult patients (18-64 years) discharged after elective total knee arthroplasty from 2006-2014. To estimate the LOS associated with the lowest 90-day readmission risk, a generalized additive model with spline regression was fit to estimate the predicted risk of readmission (Graph 1), adjusted for age, sex, military rank as a proxy for socioeconomic status, any complications during the hospital stay, and Charlson comorbidity score. Readmissions included stays for all unplanned causes, identified by the principal diagnosis of the index (i.e., initial) inpatient stay within 90 days after discharge from elective total knee arthroplasty.
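A rough sketch of the LOS-risk curve estimation is shown below (Python/statsmodels, using a B-spline basis for LOS inside a logistic model as a simplification of the penalized GAM described; column names such as readmit_90d, los, age, sex, mil_rank, complication, and charlson are hypothetical):

```python
# Flexible (spline) modeling of 90-day readmission risk as a function of LOS,
# adjusted for the covariates named above.
import pandas as pd
import statsmodels.formula.api as smf

def predicted_risk_by_los(df: pd.DataFrame) -> pd.Series:
    model = smf.logit(
        "readmit_90d ~ bs(los, df=4) + age + C(sex) + C(mil_rank)"
        " + complication + charlson",
        data=df,
    ).fit(disp=0)
    # Predicted risk across discharge days 1-8 for an otherwise 'average' profile.
    grid = pd.DataFrame({
        "los": range(1, 9),
        "age": df["age"].mean(),
        "sex": df["sex"].mode()[0],
        "mil_rank": df["mil_rank"].mode()[0],
        "complication": 0,
        "charlson": df["charlson"].median(),
    })
    return pd.Series(model.predict(grid).values, index=grid["los"])
```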

Results:

11,517 patients (6,910 women and 4,607 men) with a mean [SD] age of 56.94 [6.34] years underwent the procedure within the study time frame. 50.14% were white, 1.37% Asian, 9.31% black, 0.66% American Indian, 3.12% other, and 35.39% unknown. The median LOS was 3 days (IQR: 2-3 days), and the main causes of 90-day readmission were post-operative infection (0.81%), mechanical complication of other internal orthopedic devices (0.41%), and knee lymphedema (0.31%). The lowest risk of 90-day readmission was observed in patients discharged on postoperative day 1 (POD-1) (risk = 4.8%). Moreover, as LOS increased, the risk of readmission increased significantly, up to 9.73% for patients discharged on POD-8 (p=.0004).

Conclusion:

Reducing LOS to one day may not increase readmission risk in patients whose clinical condition allows discharge on POD-1 after elective total knee arthroplasty. Hence, orthopedists may consider discharging patients in good post-surgical condition as soon as one day after elective total knee arthroplasty.

17.08 PREDICTORS FOR DIRECT ADMISSION TO THE OPERATING ROOM IN SEVERE TRAUMA

D. Meyer1, M. McNutt1, C. Stephens2, J. Harvin1, R. Cabrera4, L. Kao1, B. Cotton1,3, C. Wade3, J. Love1  1McGovern Medical School at UTHealth,Acute Care Surgery/Surgery/McGovern Medical School,Houston, TX, USA 2McGovern Medical School at UTHealth,Trauma Anesthesiology/Anesthesiology/McGovern Medical School,Houston, TX, USA 3McGovern Medical School at UTHealth,Center For Translational Injury Research/Surgery/McGovern Medical School,Houston, TX, USA 4Memorial Hermann Hospital,LifeFlight,Houston, TX, USA

Introduction:  Many trauma centers utilize protocols for expediting critical trauma patients directly from the helipad to the OR. Used judiciously, bypassing the ED can decrease resource utilization and the time to definitive hemorrhage control. However, criteria vary by center, rely heavily on physician gestalt, and lack evidence to support their use. With prehospital ultrasound and base excess increasingly available, opportunities may exist to identify risk factors for emergency surgery in severe trauma.

Methods: All highest-activation trauma patients transported by air ambulance between 1/1/16 and 7/30/17 were included retrospectively. Transfer, CPR, and isolated head trauma patients were excluded. Patients were dichotomized into two groups based on ED time: those spending <30min who underwent emergency surgery by the trauma team and those spending >60min. Prehospital and ED triage data were used to calculate univariable and multivariable odds ratios.

Results: 435 patients met enrollment criteria over the study period. 76 (17%) spent <30min in the ED before undergoing emergency surgery (median age 31y [21-45], 82% male, 41% penetrating). 359 (83%) patients spent >60min (median age 35y [21-48], 74% male, 15% penetrating). HR, SBP, and BE values were similar in the two groups. Mortality was higher in the <30min group (32% vs 9%, p<0.001). Compared to the >60min group, the <30min group was more likely to have: (1) penetrating trauma with SBP<80mmHg or BE<-16 (OR 15.02, 95% CI 4.64-48.61); (2) penetrating trauma with positive FAST (OR 27.54, 95% CI 9.00-84.28); or (3) blunt trauma with a positive FAST and SBP<80mmHg or BE<-10 (OR 11.98, 95% CI 4.03-35.63). Collectively, these criteria predicted 39 (51%) of the <30min group.

Conclusion: Both blunt and penetrating trauma patients with positive FAST and profound hypotension or acidosis were much more likely to require emergency surgery within 30 minutes of hospital presentation and may not benefit from time spent in the emergency department.

 

17.07 Investigation of the Reliability of EMS Triage Criteria in a Level 1 Trauma Center

R. L. Dailey1, M. Hutchison2, C. Mason2, K. Kimbrough2, B. Davis2, A. Bhavaraju2, R. Robertson2, K. Sexton2, J. Taylor2, B. Beck2  1University of Arkansas for Medical Sciences,College of Medicine,Little Rock, AR, USA 2University of Arkansas for Medical Sciences,Trauma Surgery,Little Rock, AR, USA

Introduction: EMS triage criteria determine whether a patient receives the appropriate level of care. Increased mortality has been associated with triage of severely injured patients to hospitals that cannot provide definitive care, resulting in inter-hospital transfer (Nirula et al.). The limited available research indicates that EMS criteria are relatively insensitive for identifying seriously injured patients (Newgard; van Rein et al.). We hypothesized that trauma triage category would correlate with ISS. 

Methods: This is a retrospective observational study of trauma patients transported to the state’s only level 1 trauma center. 

Results: After excluding four patients for lack of assignment of a trauma triage category and 16 patients whose chief complaint was neither blunt nor penetrating, 516 patients underwent final analysis. Additionally, patients missing ISS, NISS, or TRISS scores were excluded from analyses involving those categories. When compared across trauma triage categories of minor (mn), moderate (md), and major (mj), ISS > 15 (p < .0001), mortality (p < .0001), and GCS category (p < .0001) differed significantly by chi-square test for independence. Likewise, when compared across trauma triage categories, ISS (mj: 16 ± 15.0, md: 9.4 ± 7.4, mn: 6.2 ± 6.1; p < .0001), NISS (mj: 21.6 ± 20.2, md: 11.7 ± 9.3, mn: 8.1 ± 8.3; p < .0001), and TRISS (mj: 0.81 ± 0.32, md: 0.98 ± 0.041, mn: 0.98 ± 0.028; p < .0001) differed significantly by ANOVA. Tukey post hoc analysis revealed significant differences (p < .0001) between the major and moderate and the major and minor categories in ISS, NISS, and TRISS, whereas the differences between the moderate and minor categories were not significant for ISS (p = 0.0129), NISS (p = 0.0415), and TRISS (p = 0.9998). Percentages of patients discharged by emergency department services were as follows: mj: 18.9%, md: 19.2%, mn: 25.3%. 

Conclusion: Results indicate that the moderate and minor trauma triage categories are similar across ISS, NISS, and TRISS, implying a lack of sensitivity in the criteria to distinguish these categories. The scores (ISS, NISS, and TRISS) were more differentiated between the major and moderate and the major and minor categories. Results suggest that three categories of trauma triage may not be needed, or that additional parameters are needed to better define the moderate and minor triage categories. In response to this study and the findings of Newgard and van Rein et al., future research should focus on improving prehospital trauma triage protocols. 
 

17.06 Dedicated Intensivist Staffing Decreases Ventilator Days and Tracheostomy Rates in Trauma Patients

J. D. Young1, K. Sexton1, A. Bhavaraju1, M. K. Kimbrough1, B. Davis1, D. Crabtree1, N. Saied1, J. Taylor1, W. Beck1  1University of Arkansas for Medical Sciences,Division Of Acute Care Surgery/Department Of Surgery,Little Rock, AR, USA

Introduction:  Various physician staffing models exist for providing care to trauma patients requiring intensive care unit (ICU) care. Our institution went from an open ICU to a closed ICU in August 2017. In the closed ICU, primary responsibility for the care of trauma patients was directed by board-certified/eligible surgical intensivists. We hypothesized that this would decrease respiratory failure requiring tracheostomy in trauma patients.

Methods: After IRB approval, a retrospective review was performed of all patients in our trauma registry with more than 1 ventilator day (2,206 total patients). We then examined all National Trauma Data Standard (NTDS) variables and procedures, including tracheostomy.

Results: There was no difference observed in gender, race, or mortality rates.  The open ICU was noted to have had a higher percentage of penetrating trauma (21.4% vs 13.4%, P = .0019).  The following data were observed.

Conclusion: A closed, surgical intensivist-run ICU resulted in statistically significant differences not only in tracheostomy rates but also in ICU length of stay, hospital length of stay, and ventilator days. These changes were achieved while caring for a significantly sicker patient population, as evidenced by a higher Injury Severity Score (ISS).

 

17.05 Patients with Gunshot Wounds to the Torso Differ in Risk of Mortality Depending on Treating Hospital

A. Grigorian1, J. Nahmias1, T. Chin1, E. Kuncir1, M. Dolich1, V. Joe1, M. Lekawa1  1University of California, Irvine,Surgery,Orange, CA, USA

Introduction: The care provided and resulting outcomes may differ for patients with a gunshot wound (GSW) treated at an American College of Surgeons Level-I trauma center compared to a Level-II center. In addition, there has recently been an increase in the non-operative management (NOM) of GSWs in the right upper quadrant or those with a tangential trajectory. Previous studies have had conflicting results when comparing the risk of mortality in patients with GSWs treated at Level-I and Level-II centers; however, the populations studied were restricted geographically. We hypothesized that patients presenting after a GSW to the torso at a Level-I center would have a shorter time to surgical intervention (exploratory laparotomy or thoracotomy) compared to a Level-II center in a national database. We also hypothesized that patients with GSWs managed operatively at a Level-I center would have a lower risk of mortality.

Methods: The Trauma Quality Improvement Program (2010-2016) was queried for patients presenting to a Level-I or Level-II trauma center after a GSW. Patients with an Abbreviated Injury Scale grade >1 for the head, neck, or extremities were excluded to select for patients with injuries to the torso. A multivariable logistic regression analysis was performed.

Results: Of 17,965 patients with GSWs, 13,812 (76.8%) were treated at a Level-I center and 4,153 (23.2%) at a Level-II center. There was no difference in the median injury severity score (ISS) (14, p=0.55). The Level-I cohort had a higher rate of laparotomy (38.9% vs. 36.5%, p<0.001) with a shorter median time to laparotomy (49 vs. 55 minutes, p<0.001), but no difference in the rate of (p=0.14) or time to thoracotomy (p=0.62). GSW patients at a Level-I center managed with laparotomy (11.5% vs. 13.8%, p=0.02) or thoracotomy (50.8% vs. 61.5%, p=0.01) and those with NOM (12.8% vs. 14.0%, p=0.04) had a lower rate of mortality. After adjusting for covariates, only patients undergoing thoracotomy (OR=0.67, CI=0.47-0.95, p=0.02) or those with NOM (OR=0.85, CI=0.74-0.98, p=0.03) at a Level-I center had a lower risk of death compared to Level-II.

Conclusion: Despite a similar ISS, patients presenting after GSWs to the torso at a Level-I center underwent laparotomy in a shorter time than those treated at a Level-II center, and although they showed a trend toward lower mortality risk, this was not statistically significant. Patients with GSWs managed with thoracotomy or with NOM at a Level-I center had a lower risk of mortality compared to a Level-II center. Future prospective studies examining variations in practice, available resources, and surgeon experience are warranted to account for these differences and to determine the optimal pre-hospital trauma designation for this population.

 

17.04 Evaluating Failure-to-Rescue as a Center-Level Metric in Pediatric Trauma

L. W. Ma1, B. P. Smith1, J. S. Hatchimonji1, E. J. Kaufman1, C. E. Sharoky1, D. N. Holena1  1University Of Pennsylvania,Philadelphia, PA, USA

Introduction:  Failure-to-rescue (FTR) is defined as death after a complication and has been used to evaluate quality of care in adult patients after injury. The role of FTR as a quality metric in pediatric populations is unknown. The aim of this study was to define the relationship between rates of mortality, complications, and FTR at centers managing pediatric (<18 years of age) trauma in a nationally representative database. We hypothesized that centers with high mortality would have higher FTR rates but complication rates would be similar between high- and low-mortality centers.

 

Methods:  We performed a retrospective cohort study of the 2016 National Trauma Data Bank. We included patients <18 years with an Injury Severity Score (ISS) of ≥9. We excluded centers with a pediatric patient volume of <50 patients or that reported no complications. We calculated the complication, FTR, mortality, and precedence (the proportion of deaths preceded by a complication) rates for each center and then divided the centers into tertiles of mortality. We compared complication and FTR rates between high and low tertiles of mortality using the Kruskal-Wallis test.
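The center-level metrics and tertile comparison might be computed roughly as follows (Python/pandas, with hypothetical columns center_id, died, and complication; a sketch rather than the authors' code):

```python
# Per-center mortality, complication, failure-to-rescue (FTR), and precedence
# rates, followed by a Kruskal-Wallis comparison across mortality tertiles.
import pandas as pd
from scipy.stats import kruskal

def center_metrics(df: pd.DataFrame) -> pd.DataFrame:
    g = df.groupby("center_id")
    out = pd.DataFrame({
        "mortality": g["died"].mean(),
        "complication_rate": g["complication"].mean(),
        # FTR: deaths among patients who had at least one complication
        "ftr": g.apply(lambda x: x.loc[x["complication"] == 1, "died"].mean()),
        # Precedence: share of deaths preceded by a complication
        "precedence": g.apply(lambda x: x.loc[x["died"] == 1, "complication"].mean()),
    })
    out["mortality_tertile"] = pd.qcut(out["mortality"], 3, labels=["low", "mid", "high"])
    return out

def compare_across_tertiles(metrics: pd.DataFrame, column: str):
    groups = [grp[column].dropna() for _, grp in metrics.groupby("mortality_tertile")]
    return kruskal(*groups)
```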

 

Results: In total, we included 25,792 patients from 171 centers in the study. Patients were 67% male and 65% white, with a median age of 10 (IQR 5-15), a median ISS of 10 (IQR 9-17), a median GCS motor score of 6 (IQR 6-6), and a median systolic blood pressure of 120 (IQR 109-132). Overall, 948 patients had at least one complication, for an overall complication rate of 4% (center-level range 0-19%), while 47 patients died after a complication, for an overall FTR rate of 5% (center-level range 0-60%). High-mortality centers had both higher FTR rates (8% vs 0.5%, p = .013) and higher complication rates (5% vs 3%, p = .011) than lower-mortality hospitals. The overall precedence rate was 15%, with a median center-level rate of 0% (IQR 0%-20%).

 

Conclusion: Both complication and FTR rates are low in the pediatric injury population. However, complication and FTR rates are both higher at higher-mortality centers. The low overall complication rates and precedence rates likely limit the utility of FTR as a valid center-level metric in this population, but further investigation into individual FTR cases may reveal important opportunities for improvement.

 

17.03 Optimizing Lower Extremity Duplex Ultrasound Screening After Injury

J. E. Baker1, G. E. Niziolek1, N. Elson1, A. Pugh1, V. Nomellini1, A. T. Makley1, T. A. Pritts1, M. D. Goodman1  1University Of Cincinnati,Department Of Surgery,Cincinnati, OH, USA

Introduction:
Venous thromboembolism (VTE) remains a significant cause of morbidity and mortality after traumatic injury. Multiple assessment strategies have been developed to determine which patients may benefit from lower extremity duplex ultrasound (LEDUS) screening for deep vein thrombosis (DVT). We hypothesized that screening within 48 hours of admission and in patients with a Risk Assessment Profile (RAP) score ≥8 would result in fewer LEDUS screening exams and a shorter time to VTE diagnosis without increasing the rate of VTE-related complications. 

Methods:
A retrospective review was conducted on trauma patients admitted from 7/1/2014-6/30/2015 and 7/1/2016-6/30/2017. In 2014-2015, patients with a RAP score ≥5 underwent weekly screening LEDUS exams starting on hospital day 4. By 2016-2017, the protocol had been changed to screen patients with a RAP score ≥8 by hospital day 2. Patients were identified based on these criteria, and demographic data, injury characteristics, LEDUS exam findings, chemoprophylaxis type, and time of initial administration were collected.
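The two screening rules compared here reduce to simple eligibility checks (a toy illustration with hypothetical inputs; the RAP score calculation itself is not reproduced):

```python
# Old (2014-2015) vs. new (2016-2017) LEDUS screening eligibility rules.
def eligible_old_protocol(rap_score: int, hospital_day: int) -> bool:
    """RAP >= 5, weekly screening starting on hospital day 4."""
    return rap_score >= 5 and hospital_day >= 4 and (hospital_day - 4) % 7 == 0

def eligible_new_protocol(rap_score: int, hours_since_admission: float) -> bool:
    """RAP >= 8, first screening exam within 48 hours of admission."""
    return rap_score >= 8 and hours_since_admission <= 48

print(eligible_old_protocol(rap_score=6, hospital_day=4))            # True
print(eligible_new_protocol(rap_score=6, hours_since_admission=24))  # False (RAP < 8)
```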

Results:
In 2014-2015, a total of 3920 patients underwent evaluation by the trauma team, while in 2016-2017 a total of 4213 patients underwent trauma evaluation (Table). Fewer LEDUS exams were performed in 2016-2017. Patients who underwent screening LEDUS exams in 2016-2017 had significantly higher RAP and ISS scores. No significant difference was seen in the number of patients presenting with DVT or pulmonary embolism (PE) between the two cohorts. DVTs were most often identified on the first LEDUS exam in both cohorts. Among patients in whom a DVT was diagnosed on screening LEDUS exam, the 2016-2017 cohort had a significantly higher RAP score (12 vs. 10), a shorter time to first duplex (1 vs. 3 days), and a shorter time to DVT diagnosis (2 vs. 4 days). There was no significant difference in the time to initiation of VTE prophylaxis, the number of DVTs found, the type of DVTs found, or the treatment of the DVTs. In patients found to have a PE, there were no significant differences in RAP score, time to VTE prophylaxis, time to PE, percentage of patients with both a DVT and a PE, or reasons for duplex performance across cohorts.

Conclusion:
After changing the LEDUS screening criteria to a RAP score ≥8 with screening within 48 hours of admission, fewer duplexes were performed and the majority of DVTs were found earlier, without a difference in DVT location or PE incidence. Refinement of lower extremity duplex ultrasound screening protocols decreases over-utilization of hospital resources without compromising patient outcomes.
 

17.02 Are We Failing Trauma Patients with Serious Mental Illness? A Survey of Level 1 Trauma Centers

D. Ortiz1, J. V. Barr1, J. A. Harvin1, M. K. McNutt1, L. Kao1, B. A. Cotton1  1McGovern Medical School at UTHealth,Acute Care Surgery,Houston, TX, USA

Introduction: Psychiatric illness is an independent risk factor for trauma and recidivism. Budget cuts have steadily decreased funding for public hospitals and have resulted in states closing public psychiatric inpatient beds. It is unclear if and how these trends have affected resources for trauma patients with preexisting mental illness. The purpose of this study was to gauge perceptions of needed and currently available resources for this patient population.

Methods:  A 10-question survey was developed to capture the volume of psychiatric patients, available psychiatric services, and perceived need for resources. The questions were inspired by discussions with three independent psychiatrists with trauma patient practices. The survey was peer reviewed and modified by two separate trauma researchers. It was sent to 27 trauma surgery colleagues at different Level-1 trauma centers across the United States using a SurveyMonkey email link. Responses were anonymous and descriptive analyses were performed.

Results: 22 of 27 surgeons responded (81% response rate). Of the responding centers, 10 (47.6%) admitted 1-5 patients with preexisting serious mental illness weekly, while 6 (27.3%) and 5 (22.7%) admitted 6-10 and >10, respectively; one center did not respond to this question. 14 of 22 (63.6%) reported having acute situational support services available for trauma patients. Ten (45.5%) respondents did not know how many psychiatry consultants were available at their institution, while a single center had one consultant available; 6 (27.3%) and 5 (22.7%) had 2-4 and 5 or more consultants, respectively. Twelve (54.6%) surgeons reported having no designated outpatient follow-up for acute or chronic psychiatric issues for trauma patients, while 2 (9.1%) did not know. Sixteen (72.7%) stated that expanded psychiatric services are needed at their trauma center, while 4 (18.8%) did not know, one (4.55%) said no, and one (4.55%) had not thought about it. The final question allowed respondents to choose multiple areas of perceived need for improvement in psychiatric care for the trauma patient (Table).

Conclusion: Trauma patients frequently present with preexisting serious mental illness. Over half of the surveyed surgeons reported having no outpatient follow-up for these patients, and almost three quarters perceived the need for expansion of psychiatric services. Strikingly, many respondents were unaware of the psychiatric resources available at their centers, while a few had not thought about the challenges in treating this vulnerable patient population. In addition to a lack of resources, these findings highlight an overlooked gap in high quality, patient-centered trauma care. 

 

17.01 To Close or Not to Close – Skin Management after Trauma Laparotomy

J. Woloski1, S. Wei1, G. E. Hatton1, J. A. Harvin1, C. E. Wade1, C. Green1, V. T. Truong1, C. Pedroza1, L. S. Kao1  1McGovern Medical School at UTHealth,Trauma Surgery,Houston, TX, USA

Introduction:  Skin management after fascial closure may influence the risk of superficial surgical site infection (SSSI) development, which occurs in up to 25% of patients after emergent trauma laparotomy. Leaving skin open is thought to decrease SSSI risk, but increases wound care burden and results in poor cosmesis. Given the lack of high-quality evidence guiding skin management after trauma laparotomy, it is unknown whether skin incisions are being closed or left open appropriately. We aimed to characterize skin management in adult trauma laparotomy patients and to determine whether skin closure strategy is associated with SSSI.

Methods:  We performed a retrospective cohort study of a trauma laparotomy database between 2011 and 2017 at a high-volume, level-1 trauma center. SSSI diagnoses were determined by chart review according to the Centers for Disease Control and Prevention definition. Patients who never achieved fascial closure and those who died prior to the first recorded SSSI (on postoperative day 2) were excluded. Open versus closed skin management was determined by reviewing operative reports. Open skin entailed use of gauze packing or a wound VAC, and closed skin entailed closure with staples (with or without wicks) or sutures. Univariate and multivariable analyses were performed. The multivariable model included the variables that generated the best area under the curve (AUC). Inverse probability weighted propensity scores (IPWPS) were used to compare patients' predicted probability of open versus closed skin management with the skin management strategy they actually received.
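The propensity-weighting step could be sketched as below (Python/statsmodels, with hypothetical column names such as skin_closed, sssi, damage_control, wound_class, bowel_resection, and bmi; an illustration of inverse probability weighting, not the authors' model):

```python
# Inverse-probability-of-treatment weights from a propensity model for skin
# closure, then IPW-weighted SSSI rates by skin-management strategy.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def ipw_weights(df: pd.DataFrame) -> pd.Series:
    ps_model = smf.logit(
        "skin_closed ~ damage_control + C(wound_class) + bowel_resection + bmi",
        data=df,
    ).fit(disp=0)
    p_closed = ps_model.predict(df)
    w = np.where(df["skin_closed"] == 1, 1.0 / p_closed, 1.0 / (1.0 - p_closed))
    return pd.Series(w, index=df.index, name="ipw")

def weighted_sssi_rates(df: pd.DataFrame) -> pd.Series:
    df = df.assign(ipw=ipw_weights(df))
    return df.groupby("skin_closed").apply(
        lambda g: np.average(g["sssi"], weights=g["ipw"])
    )
```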

Results: Of 1322 patients, 309 (23%) received open skin management, while 1013 (77%) had skin closure. The overall SSSI rate was 6%. On univariate analysis, there was no significant difference in the development of SSSI between the open and closed skin groups (8% versus 6%, p = 0.12). On adjusted analysis, damage control laparotomy, wound class 2, skin closure, large bowel resection, and higher body mass index were significantly associated with SSSI (Table). Skin closure carried 3-fold higher odds of SSSI development. IPWPS assignment showed that 75% of patients with closed skin had a propensity score of >0.9 for skin closure. In contrast, 11% of patients with open skin had a propensity score of <0.1 for skin closure.

Conclusion: Even though the rate of SSSI was only 6%, almost 25% of trauma patients had initial open skin management. Although there was consistency in the use of skin closure based on patient and wound characteristics, skin closure was associated with higher odds of SSSI. Better predictive models are needed to accurately stratify patients’ risk for SSSI after emergent trauma laparotomy to determine optimal skin management strategy.

102.20 Atraumatic Splenic Rupture: Does It Mandate Intervention? A Case Series and Review of the Literature

A. Rogers1, L. Sadri1, V. Eddy2, O. Kirton1, T. Vu1  1Abington Jefferson Health,Department Of Surgery,Abington, PA, USA 2Maine Medical Center,Department Of Surgery,Portland, ME, USA

Introduction:
Management of acute splenic trauma and injury has been well studied, and national trauma societies have published guidelines to support clinical decision making. Meanwhile, splenic “injury” not associated with trauma is confined to the realm of case reports and anecdote. Most cases discussing the management of “atraumatic splenic injury” focus on an underlying diseased spleen and advocate for aggressive management. We aim to better define the literature and propose a guideline for the management of splenic injury in non-trauma patients.

Methods:
We reviewed a series of 5 cases from two institutions over a two-year period, focusing on patient presentation, hemodynamic stability, underlying disease, choice of management, and ultimate outcome. We then conducted a review of the available literature regarding the management of atraumatic splenic rupture and injury, focusing on operative (splenectomy) compared with non-operative (embolization or expectant management) treatment strategies.

Results:
Each case we reviewed was handled differently, with significant variation at the discretion of the attending surgeon. Treatment ranged from ICU admission with serial exams and laboratory studies to splenectomy. There appeared to be a mild correlation between initial presentation and imaging results and more aggressive management; however, these variations did not appear to alter ultimate patient outcome.

Conclusion:
The management of splenic injury in the absence of trauma, or in the diseased spleen, is poorly studied and lacks any standardization or existing guidelines. Based on our review of cases at our two institutions, we propose that splenic injury in the diseased spleen with minimal to no preceding trauma can be safely managed in a manner similar to that of an acute injury associated with a traumatic event. 
 

102.19 The Hazards of Ingesting Wire Grill-Brush Bristles: Optimizing Prevention, Diagnosis and Management.

K. A. Calabro1,2, J. Y. Zhao2, E. A. Bowdish1,2, C. M. Harmon1,2, K. Vali1,2  1John R. Oishei Children’s Hospital,Department Of Pediatric Surgery,Buffalo, NY, USA 2University at Buffalo Jacobs School of Medicine and Biomedical Sciences,Department Of Surgery,Buffalo, NY, USA

Introduction:
Accidental wire grill-brush ingestion is a largely unrecognized threat to children. Injuries affect multiple organ systems, resulting in morbidity and even mortality. We sought to review the available literature to characterize wire grill-brush injury.

Methods:
A review of Ovid MEDLINE®, PubMed, Google Scholar, and two injury databases, the National Electronic Injury Surveillance System (NEISS) and the Safer Products (SP) government database, was conducted by two independent auditors. The literature search was performed using the terms “bristle brush,” “grill brush,” and “wire brush.” The injury database search required that each event be linked with one of the following codes: (41) ingestion, or (56) foreign body, (0) internal, (88) mouth, or (89) neck, (480) household cleaning products, (837) wire unspecified, (3218) charcoal or wood-burning grills, (3229) electric grills, (3248) gas or LP grills or stoves, (3230) kerosene grills or stoves, (3233) other grills or stoves, (3249) grills not specified. Variables of interest included common symptomatology, associated foods, time to presentation, and treatment course.

Results:
A total of 92 cases of wire grill-brush injury were identified: 43 from the literature review, 35 from NEISS, and 14 from SP. The combined case list was reviewed and data were extracted. Complete case information was missing in a majority of patients, but in general, genders were affected equally and 10% of patients were under 19 years of age. The most common associated foods were hamburgers and grilled chicken. The main diagnostic imaging tests were CT scan (38%) and plain radiography (29.3%). Of the 58 cases with known treatment, 22.4% required intervention using a combination of laryngoscopy, endoscopy, and surgery. Operative management alone was used in 23 (39.7%), whereas 6 (10.3%) were treated by laryngoscopy alone and 6 (10.3%) by endoscopy alone. The majority of cases with known timing (18, 58.0%) presented more than 24 hours after suspected ingestion, and 7 (22.6%) presented more than 1 week after suspected ingestion. Injuries involving the head and neck were more frequent (53.2%) than abdominal injuries (23.9%), and a significant proportion of injury sites were unknown/unlisted (22.8%). Neck exploration occurred in 6.8%, abdominal surgery (laparoscopy or laparotomy) in 29.3%, and laryngoscopy or endoscopy in 27.5%, and 3.4% required multiple operative procedures that resulted in failed retrieval.

Conclusions:
Wire grill-brush-associated injuries are variable and often present with a significant delay after presumed ingestion. Diagnostic imaging modalities are quite variable, and a significant proportion of patients treated for ingestion require operative intervention. More information is needed to better characterize rare but perhaps underappreciated injuries stemming from wire grill-brush ingestion, and to better inform prevention strategies.