18.09 The relationship between bonding social capital and long-term outcomes after traumatic injury

R. Manzano-Nunez1, J. P. Herrera-Escobar1, A. Toppo3, D. Blake2, N. Levy-Carrick5, T. DeRoon-Cassini4, G. Velmahos6, G. Kasotakis2, A. Salim1, A. Haider1, D. Nehra1  1Brigham And Women’s Hospital,Surgery,Boston, MA, USA 2Boston University,School Of Medicine,Boston, MA, USA 3Tufts Medical Center,Boston, MA, USA 4University Of Wisconsin,Madison, WI, USA 5Brigham And Women’s Hospital,Psychiatry,Boston, MA, USA 6Massachusetts General Hospital,Surgery,Boston, MA, USA

Introduction:  Social capital (SC) refers to the quality and quantity of social relations. Bonding SC refers to close relationships between family members or good friends. An individual's health outcomes are known to be closely associated with individual-level SC, but the effect of SC after trauma is not known. We aimed to determine the association between individual-level bonding SC and long-term outcomes after injury.

Methods: Adult trauma patients with an ISS ≥9 admitted to one of three Level I trauma centers between 2015 and 2018 were contacted by phone 6 and 12 months post-injury. Patients were asked to complete an interview to assess their health-related quality of life (HRQoL) using the 12-item Short Form Survey (SF-12) and the validated Trauma Quality of Life (TQoL) survey. As part of the TQoL survey, all patients were asked to respond on a 5-point Likert scale to the following question: “My injuries have negatively changed my relationships with my family, friends, or intimate partner.” We used responses to this question as a proxy for bonding SC. Respondents who chose strongly agree/agree were considered to have weak bonding SC, and those who chose strongly disagree/disagree were considered to have strong bonding SC. Multivariable adjusted linear regression models were used to determine independent cross-sectional associations between weak bonding SC and HRQoL. For the subset of patients who completed the survey at both 6 and 12 months post-injury, longitudinal analyses were conducted to evaluate recovery trajectories.
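
As a rough illustration of this analysis plan (not the authors' actual code), the sketch below dichotomizes the Likert item into weak/strong bonding SC and fits an adjusted linear model for an SF-12 score; the dataset, column names, and covariate set are assumptions.

```python
# Hypothetical sketch: dichotomize the Likert item into weak/strong bonding SC
# and fit an adjusted linear model for the SF-12 mental component score.
# All column names and the covariate list are assumed, not from the study.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sf12_6month.csv")  # hypothetical 6-month follow-up dataset

# Likert item: "My injuries have negatively changed my relationships..."
weak = {"strongly agree", "agree"}
strong = {"strongly disagree", "disagree"}
df = df[df["likert_response"].isin(weak | strong)].copy()   # drop neutral responses
df["weak_bonding_sc"] = df["likert_response"].isin(weak).astype(int)

# Multivariable linear regression; covariates are illustrative only.
model = smf.ols(
    "sf12_mcs ~ weak_bonding_sc + age + C(sex) + iss",
    data=df,
).fit()
print(model.summary())   # coefficient on weak_bonding_sc ~ adjusted beta
```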

Results: A total of 609 patients completed the phone screen at 6 months post-injury. Of these, 480 were classified as having strong bonding SC and 129 as having weak bonding SC. Patients with weak bonding SC were significantly younger [52±21.2 vs 61±19.7 years; p<0.0001] and had a significantly higher ISS [15.1±7 vs 13.7±6.8; p=0.03]. On multivariable linear regression, weak bonding SC was an independent predictor of both worse mental (β=-14.34, 95% CI: -16.57 to -12.11; p<0.001) and physical health (β=-6, 95% CI: -8.32 to -3.69; p<0.001) 6 months post-injury. Of the 609 patients recruited, 229 were successfully followed at 12 months post-injury; of these, 175 were classified as having strong bonding SC and 54 as having weak bonding SC. Longitudinal analyses showed significant differences in recovery trajectories between individuals with strong and weak bonding SC with respect to both mental (p=0.005) and physical health (p=0.037) (Figure).

Conclusion:  Identifying individuals with deficient close social relationships during their follow-up after trauma may help guide interventions to improve their long-term recovery.

 

18.08 The Association of ABO Blood Groups and Trauma Outcomes

M. W. Sauder1,2, T. Wolff1,3, M. C. Spalding1,2, U. B. Pandya1,2  3OhioHealth Doctors Hospital,Department Of Surgery,Columbus, OHIO, USA 1OhioHealth Grant Medical Center,Division Of Trauma And Acute Care Surgery,Columbus, OH, USA 2Ohio University,Heritage College Of Osteopathic Medicine,Dublin, OH, USA

Introduction:
Certain ABO blood types have been identified as risk factors for a variety of disease processes including acute respiratory distress syndrome, acute kidney injury, myocardial infarction, and venous thromboembolism. However, there is a relative paucity of literature regarding the implications of ABO blood type on characteristics and outcomes of traumatically injured patients. A recent study concluded that blood type O was associated with higher mortality in severely injured patients in Japan. The purpose of this study was to determine the association of ABO blood types with outcomes in traumatically injured patients in the United States.

Methods:
This retrospective study evaluated all category 1 and 2 trauma alerts at an urban, Level 1 trauma center from January 1, 2017 through December 31, 2017. Data were obtained from the institutional trauma database and electronic medical record. Patients were excluded if they were pregnant, less than 16 years old, or if blood type data were unavailable. Recorded variables included ABO blood group, mortality, Injury Severity Score (ISS), race, ventilator days, transfusion requirements, massive transfusion protocol activation, injury type, mechanism of injury, and complications. Data analysis was performed using descriptive statistics, including chi-squared and analysis of variance (ANOVA) calculations.

Results:
A total of 3,779 patients met inclusion criteria. The proportions of ABO blood types represented in our sample were not significantly different from published national averages. Likewise, no significant differences in age, gender, or ISS were present between blood types. Blood type AB was associated with a statistically significant increase in mortality rate in severely injured (ISS>15) Caucasian patients compared to non-AB blood types (39% vs. 16%; p=0.01). This relationship was not present among African-American patients (p=0.37). Neither race exhibited differences in hospital length of stay, intensive care unit length of stay, or ventilator days.

Conclusion:
Blood type AB is associated with increased mortality in severely injured Caucasian patients. This is in contrast to findings in Japanese and African American patients. Though this requires further validation, there is a potential correlation between ABO blood type, ethnicity, and trauma outcomes.

18.07 Chemoprophylaxis and Venous Thromboembolism in Traumatic Brain Injury at Different Trauma Centers

E. O. Yeates1, A. Grigorian1, S. D. Schubl1, C. M. Kuza2, V. Joe1, M. Lekawa1, B. Borazjani1, J. Nahmias1  2University Of Southern California,Anesthesia,Los Angeles, CA, USA 1University Of California – Irvine,General Surgery,Orange, CA, USA

Introduction: Patients presenting after severe traumatic brain injury (TBI) are at an increased risk of developing venous thromboembolism (VTE). Due to concerns of worsening intracranial hemorrhage, some clinicians are hesitant to start VTE chemoprophylaxis in this high-risk population. We hypothesized that American College of Surgeons (ACS) verified Level-I trauma centers are more likely to start VTE chemoprophylaxis in adults with severe TBI, compared to Level-II centers. Additionally, we hypothesized that Level-I centers would start VTE chemoprophylaxis earlier and have a lower risk of VTE.

Methods: The Trauma Quality Improvement Program (2010-2016) database was queried for patients with a severe abbreviated injury scale (AIS) grade (>3) of the head. Those who died within 24 hours or had an AIS grade of 6 were excluded. Patients were compared based on the treating hospital: Level-I versus Level-II. A multivariable logistic regression analysis was performed.

Results: From 204,895 patients with severe TBI, 143,818 (70.2%) were treated at a Level-I center and 61,077 (29.8%) at a Level-II center. Compared to severe TBI patients treated at a Level-II center, those at a Level-I center had a lower rate of midline shift >5 mm (26.2% vs. 27.6%, p=0.01), but higher rates of severe AIS grades for the spine (0.9% vs. 0.7%, p<0.001) and lower extremity (0.6% vs. 0.4%, p<0.001). There was no difference in total length of stay (LOS) or intensive care unit (ICU) LOS between the two groups (p>0.05). The Level-I cohort had a higher rate of using VTE chemoprophylaxis (43.2% vs. 23.3%, p<0.001) and shorter median time to chemoprophylaxis (61.9 vs. 85.9 hours, p<0.001), with a lower rate of deep vein thrombosis (4.9% vs. 5.8%, p<0.001) compared to Level-II centers. After controlling for covariates, Level-I centers had a higher likelihood of starting VTE chemoprophylaxis (OR=2.47, CI=2.39-2.56, p<0.001), but had no difference in the risk of VTE (p=0.41) compared to Level-II centers.

Conclusion: ACS Level-I trauma centers were found to be more than twice as likely to start VTE chemoprophylaxis and administered it nearly 24 hours sooner than Level-II centers. However, this did not translate to a decreased risk of VTE events at Level-I centers compared to Level-II centers on multivariable analysis. Future prospective studies are warranted to evaluate the timing, safety, and efficacy of early VTE chemoprophylaxis in severe TBI patients.
 

18.06 A Novel Method of Quantifying Pain Associated with Rib Fractures

P. Farley1, R. L. Griffin2, J. O. Jansen3, P. L. Bosarge3  1University Of Alabama at Birmingham,Emergency Medicine,Birmingham, Alabama, USA 2University Of Alabama at Birmingham,Epidemiology,Birmingham, Alabama, USA 3University Of Alabama at Birmingham,Acute Care Surgery,Birmingham, Alabama, USA

Introduction:

Rib fractures are a major problem characterized by pain, which is poorly understood. Measuring total pain experience, taking into account the duration as well as intensity, could facilitate comparisons of different treatments. The aim of this study was to evaluate the feasibility of quantifying pain reported over the course of an admission and to identify factors associated with more pain in patients with rib fractures.

Methods:

Patients admitted to a Level-I academic trauma center with rib fractures or flail chest between 2015 and 2017 were included. For each patient, the maximum pain score (verbal or non-verbal) was determined for each hospital day. Total pain reported was defined as the area under the curve (AUC) of the maximum daily pain scores plotted against time, calculated using the trapezoidal rule. A general linear model was used to determine demographic, injury, and clinical predictors of the pain AUC. In a post-hoc analysis, models were stratified by Injury Severity Score categories (i.e., 1-8, 9-14, 16-75) to determine whether predictors differed by injury severity.
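
The AUC metric itself is straightforward to compute; a minimal sketch, with invented scores and hypothetical column names, might look like this:

```python
# Minimal sketch of the total-pain metric: take the maximum pain score per
# hospital day and integrate it over the admission with the trapezoidal rule.
# The example data and column names are hypothetical.
import numpy as np
import pandas as pd

scores = pd.DataFrame({
    "hospital_day": [1, 1, 2, 2, 3, 4],
    "pain_score":   [8, 6, 7, 9, 5, 3],   # 0-10 verbal/non-verbal scale
})

daily_max = (scores.groupby("hospital_day")["pain_score"]
                   .max()
                   .sort_index())

# Area under the curve of daily max pain vs. hospital day (trapezoidal rule).
pain_auc = np.trapz(daily_max.values, x=daily_max.index.values)
print(pain_auc)   # 19.5 for the toy data above
```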

Results:

We identified 3,713 patients. Increased pain experience was observed for those aged 40-59 years compared to those 18-39 years (B=6.1, p=0.002); ISS 9-14 (B=11.5, p<0.001) and ISS ≥16 (B=36.9, p<0.0001) compared to ISS 1-8; patients with flail chest compared to those with multiple rib fractures (B=17.1, p<0.001); and patients who underwent rib fixation (B=20.7, p=0.004). Decreased pain experience was observed for male gender (B=-3.7, p=0.032) and blunt mechanism of injury (B=-13.7, p<0.0001). Associations were broadly similar when the analysis was stratified by overall injury severity, though blunt mechanism was not associated with pain among ISS 1-8 and age ≥60 was associated with decreased pain among those with ISS 16-75 (B=-7.4, p=0.026).

Conclusion:

This study demonstrates the feasibility and usefulness of measuring patients' total pain experience over the duration of their admission. Reported pain, a product of both pain and analgesia, is a subjective but highly relevant measure of patients' experience. Our study identifies a number of predictive factors, some expected (such as overall injury severity, reflecting additional injuries) and some unexpected. Increased overall pain experience following fixation may be the result of severe pain prior to intervention. We are planning further work to advance the concept of total reported pain experience.
 

18.04 Novel Oral Anticoagulants vs LMWH for Thromboprophylaxis in Operative Pelvic Fractures

M. Khan1, J. Con1, F. Jehan1, R. Latifi1  1Westchester Medical Center,Surgery,Valhalla, NY, USA

Background: Pelvic fractures have been identified as a risk factor for venous thromboembolic (VTE) complications. Recent literature shows the superiority of novel oral anticoagulants (NOACs) over low molecular weight heparin (LMWH) for thromboprophylaxis in orthopedic patients. The aim of our study was to evaluate the impact of NOACs vs. LMWH on outcomes in patients with operative pelvic fractures.

Methods: We performed a 2-year (2015-2016) analysis of the ACS-TQIP database. We included all adult patients with isolated blunt pelvic fractures who were managed operatively and received post-operative thromboprophylaxis with either LMWH or a NOAC (factor Xa inhibitor or direct thrombin inhibitor). Patients were stratified into two groups based on the type of thromboprophylactic agent (NOACs vs. LMWH) and were matched in a 1:2 ratio for demographics, admission vitals, injury parameters, hospital stay, facility, and timing of initiation of thromboprophylaxis. Primary outcomes were rates of DVT and/or PE. Secondary outcomes were pRBC transfusions and intervention for hemorrhage control after initiation of thromboprophylaxis.
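
One plausible way to reproduce a 1:2 match of this kind (the abstract does not specify the matching algorithm) is greedy nearest-neighbor matching without replacement on a propensity score; the dataset, variable names, and covariates below are assumptions.

```python
# Hedged sketch of a 1:2 NOAC-to-LMWH match on an estimated propensity score.
# Column names are hypothetical; "noac" is assumed coded 1 = NOAC, 0 = LMWH.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pelvic_fracture_cohort.csv")  # hypothetical TQIP extract

# Propensity to receive a NOAC, estimated from the matching variables.
ps_model = smf.logit(
    "noac ~ age + C(sex) + iss + sbp_admit + hospital_los + hours_to_ppx",
    data=df,
).fit(disp=0)
df["ps"] = ps_model.predict(df)

treated = df[df["noac"] == 1]
controls = df[df["noac"] == 0].copy()

matched_ids = []
for _, row in treated.iterrows():
    # Greedy nearest-neighbor match without replacement: 2 controls per case.
    nearest = (controls["ps"] - row["ps"]).abs().nsmallest(2).index
    matched_ids.extend(nearest)
    controls = controls.drop(nearest)

matched = pd.concat([treated, df.loc[matched_ids]])
print(matched["noac"].value_counts())   # expect a 1:2 treated:control ratio
```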

Results: We analyzed 11,219 patients with pelvic fractures. A total of 3,529 patients with isolated pelvic fractures were included, of whom 708 were matched (NOACs: 236; LMWH: 472). Mean age was 61±12 years and median ISS was 12 [10-16]. Matched groups were similar in demographics, vitals, injury parameters, hospital stay, and timing of initiation of thromboprophylaxis. Overall, 5.8% of patients developed DVT and 1.8% developed PE. Patients who received NOACs were less likely to develop DVT than those who received LMWH (2.9% vs. 7.2%, p=0.01). There was no difference in PE (1.6% vs. 1.9%, p=0.28) between the two groups. Similarly, there was no difference in post-prophylaxis blood product transfusion or post-prophylaxis intervention for hemorrhage control (Table 1).

Conclusion: In patients with operative pelvic fractures, thromboprophylaxis with a novel oral anticoagulant is associated with a lower rate of DVT. There was no association between the type of thromboprophylactic agent and PE. Further prospective clinical trials should evaluate the role of NOACs for thromboprophylaxis in high-risk trauma patients.

18.03 Repeat CT Head is Not Indicated for Patients on Novel Anticoagulants with Mild TBI

C. Cohan1, G. Beattie1, D. Dominguez1, M. Cochran1, B. Palmer1, G. P. Victorino1  1University of California San Francisco – East Bay,Department Of Surgery,Oakland, CA, USA

Introduction:  Guidelines regarding the management of anticoagulated patients suffering mild blunt traumatic brain injury (TBI) are unclear. Repeat CT head (CTH) is commonly performed on anticoagulated patients after an initial negative CTH to assess for delayed intracranial hemorrhage (ICH-d). The reported incidence of ICH-d in anticoagulated patients is 0.51-6%. Current literature focuses on patients taking prehospital warfarin. Since the approval of several novel oral anticoagulants (NOACs), including rivaroxaban, apixaban, and dabigatran, there has been a steady increase in trauma patients presenting on these medications. The rate of ICH-d in this patient population is unknown. We hypothesized that the incidence of ICH-d in patients on NOACs would be low, similar to that in patients on warfarin, and that routine repeat CTH after an initial negative CTH in patients on NOACs is not indicated.

Methods:  We performed a retrospective chart review of all adult patients on anticoagulation presenting to our level I trauma center for blunt trauma between February 2016 and May 2018. We excluded patients who had a positive initial CTH, who did not have a repeat CTH within 24 hours of the initial scan, or who had a GCS <13 on arrival. CTH was repeated 4-12 hours after the initial CTH. Clinical outcomes including ICH-d, discharge GCS, neurosurgical intervention, readmission, and death were assessed. Comparisons were made using a chi-squared test or ANOVA. Data are presented as mean ± standard error of the mean.

Results: A total of 218 patients met inclusion criteria, with an average initial GCS of 14.7 ± 0.03. The following groups were evaluated: warfarin only (n=133), NOAC only (n=68), and anticoagulation (warfarin or NOAC)-ASA combination regimen (n=17). The average INR for each group was 2.7 ± 0.24, 1.4 ± 0.03, and 2.2 ± 0.26, respectively (p<0.01). Average age in years for each group was 78.3 ± 1.1, 75.7 ± 0.86, and 72.9 ± 3.0, respectively (p=0.2). The overall incidence of ICH-d after an initial negative CTH was 2.5%. The incidence in the NOAC-only group was 1.5% (1/68) vs. 2.3% (3/133) in the warfarin-only group (p=0.71). In the warfarin group, 66% (2/3) of patients with ICH-d had a supratherapeutic INR on presentation. There were no ICH-d events in the anticoagulation-ASA combination group.

Conclusion: To our knowledge, this is the largest study of patients on novel anticoagulants to assess ICH-d in mild TBI. We found that the incidence rates of ICH-d are similar between patients taking NOACs and warfarin. In the one patient with ICH-d on novel anticoagulation, no neurosurgical intervention, decline in GCS, readmission or death occurred. Our findings suggest there is no indication for repeat imaging in patients on novel anticoagulation presenting with mild TBI. Limiting unnecessary imaging in this substantial and growing population may save time, reduce costs, and improve allocation of resources. 

18.02 Role of Extracorporeal Membrane Oxygenation (ECMO) In Trauma Patients: A Five-Year Analysis

W. Chen1, M. Zeeshan1, E. Zakaria1, N. Kulvatunyou1, T. O’Keeffe1, L. Gries1, A. Tang1, A. Northcutt1, B. Joseph1  1University Of Arizona,Trauma And Acute Care Surgery,Tucson, AZ, USA

Introduction:
Extracorporeal membrane oxygenation (ECMO) has been utilized in neonatal respiratory distress syndrome, but its role in trauma is evolving. The aim of our study was to evaluate survival and trends of utilization, and to identify factors associated with mortality, after ECMO in trauma patients over a 5-year period.

Methods:
We performed a 5-year (2008-2012) review of all trauma patients in the NTDB and included all patients who underwent ECMO. Our primary outcome measures were trends of utilization and survival after ECMO over the 5-year study period. Secondary outcome measures were in-hospital complications and factors associated with mortality after ECMO. Regression and trend analyses were performed.

Results:
Of the 808,211 trauma patients identified, 179 underwent ECMO. Mean age was 33±15 years, 80% were male, and median ISS was 22 [13-33]. The mechanism of injury (MOI) was blunt in 82%. Overall mortality was 34% (penetrating: 21% vs. blunt: 37%, p=0.03), and 31% (56/179) were discharged to a SNF. The utilization of ECMO increased from 13.9 to 32.2 per 100,000 trauma admissions (p=0.01) while the mortality rate decreased from 46.2% to 24% (p<0.001) during the study period (Figure 1). Of patients who received ECMO, 47.5% had ARDS, 45.7% had pneumonia, 32.1% had cardiac arrest, 22.8% developed AKI, and 14.8% developed sepsis. On regression analysis, increasing age (OR: 1.12 [1.03-1.25]) and ISS (OR: 1.17 [1.03-1.26]) were independently associated with mortality, while centers performing multiple ECMO runs were associated with lower mortality (OR: 0.55 [0.12-0.82]).

Conclusion:
Two-thirds of patients who underwent ECMO survived. The use of ECMO in trauma patients has increased while the mortality rate has declined. Patients who undergo ECMO at trauma centers that frequently perform ECMO tend to do better. Further studies are required to better define the role of ECMO in trauma and to identify the subset of patients who may benefit from this procedure.
 

18.01 Are We Out of the Woods Yet? The Aftermath of Resuscitative Thoracotomy

J. L. Fitch1,3,4, S. Dieffenbaugher2, M. McNutt2, C. C. Miller1, D. J. Wainwright2, J. Villarreal1, Q. Zhang1, G. Hall1, S. Gordy1, J. Ward1, C. Wilson1, J. Suliburk1, M. A. Davis1, S. R. Todd1  1Baylor College Of Medicine,General Surgery,Houston, TX, USA 2McGovern Medical School at UTHealth,General Surgery,Houston, TX, USA 3Naval Medical Center Portsmouth,General Surgery,Portsmouth, VA, USA 4Uniformed Services University Of The Health Sciences,General Surgery,Bethesda, MD, USA

Introduction:  Survival following traumatic cardiac arrest is low, but resuscitative thoracotomy (RT) is lifesaving for select patients. Data exist on those who are likely to survive RT but are limited regarding hospital course and prognosis following admission to the intensive care unit (ICU). The objective of this study was to describe the hospital course and prognosis for RT survivors admitted to the ICU.

Methods:  This was a retrospective review of all adult trauma patients who underwent RT following traumatic arrest at the only two Level I trauma centers serving our metropolitan area. Data evaluated included patient demographics, injury characteristics, hospital course, and outcome.

Results:  Over the 66 months ending June 2017, there were 52,624 trauma activations at the two centers. Of these, 298 (0.6%) patients underwent RT, and 96 (32%) survived to ICU admission. Among these initial survivors, the mean age was 35.8±14.5 years, 79 (82%) were male, 36 (38%) sustained blunt trauma, and the mean injury severity score was 32.3±13.7. Sixty-seven percent of deaths in the ICU occurred within the first 24 hours of admission, and 90% of those alive at day 21 survived to discharge. Of those admitted to the ICU, 22% of blunt and 34% of penetrating trauma patients survived to discharge. The mean ICU length of stay (LOS) for survivors was 24.1±17.9 days, while the mean hospital LOS was 43.9±32.1 days. Survivors averaged 1.9±1.5 complications, most commonly acute kidney injury, deep surgical site infection, and deep vein thrombosis. Twenty-four of 28 patients surviving to discharge went home or to a rehabilitation center.

Conclusion:  Survival following RT is 9.4%, but there is an increased likelihood of survival with each day the patient remains alive. Families should be counseled to expect a long hospital course with a high likelihood of complications. The overall prognosis for survivors of RT may not be as bleak as previously assumed.

 

17.20 ROTEM as a Predictor of Mortality in Trauma Patients with an Injury Severity Score Greater than 15.

A. R. Smith1, S. Karim1, R. J. Reif1, W. C. Beck1, J. R. Taylor1, B. Davis1, A. Bhavaraju1, M. K. Kimbrough1, K. W. Sexton1  1University Of Arkansas for Medical Sciences,Department Of Surgery, Division Of Trauma And Acute Care,Little Rock, AR, USA

Introduction:  The Injury Severity Score (ISS) is an important tool for grading the severity of injury in trauma patients. Major trauma is commonly defined using an ISS threshold of 15 and has been shown to correlate with mortality, length of hospital stay, and the need for major surgery. Assessing hemostatic function in a timely manner is crucial for these patients in order to reduce the risk of mortality. Rotational thromboelastometry (ROTEM) is a whole-blood viscoelastic hemostasis analyzer that allows for the detection of and differentiation between coagulopathies. The purpose of our study was to retrospectively evaluate trauma patients with an ISS greater than 15 who also received ROTEM to determine whether ROTEM is a better predictor of mortality than conventional coagulation testing.

Methods:  We performed a retrospective review of all trauma patients with ROTEM and an ISS greater than 15 admitted to the emergency department between November 2015 and August 2017. A total of 301 patients met these criteria and were included in this study. Univariate and bivariate statistics were performed using JMP Pro (SAS Institute, Cary, NC). Each patient was sorted into groups based on coagulation phenotype (hypocoagulable, normal coagulation, hypercoagulable) for both ROTEM and conventional coagulation tests (partial thromboplastin time [PTT], prothrombin time [PT], international normalized ratio [INR]), and test results were compared with respect to mortality. Nominal logistic regression was performed.
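
The analysis in the abstract was run in JMP; an approximate equivalent in Python, with assumed column names, would fit separate logistic models for the ROTEM and conventional parameters:

```python
# Illustrative sketch (not the authors' JMP workflow): logistic regression of
# mortality on ROTEM parameters and on conventional coagulation tests.
# The dataset and column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rotem_iss_gt15.csv")   # hypothetical dataset

rotem_model = smf.logit(
    "died ~ aptem_ct + aptem_alpha + intem_cft", data=df
).fit(disp=0)

conventional_model = smf.logit(
    "died ~ ptt + pt + inr", data=df
).fit(disp=0)

# Compare which parameters carry significant coefficients (p-values) and how
# well each model fits (e.g., pseudo R-squared).
print(rotem_model.summary())
print(conventional_model.summary())
```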

Results: For the 301 patients included in the study, significant predictors of mortality included ROTEM APTEM Clotting Time (CT), ROTEM APTEM Alpha Angle, and ROTEM INTEM Clot Formation Time (CFT), with ROTEM APTEM CT being the most significant. On nominal logistic regression, APTEM CT (p=.007), APTEM Alpha Angle (p=.028), and INTEM CFT (p=.037) were the only significant predictors. PTT (p=.059), PT (p=.141), and INR (p=.634) were not significant predictors of mortality in this data set. 

Conclusion: ROTEM APTEM Clotting Time, which is the time from start of measurement until initiation of clotting in the presence of aprotinin, a fibrinolysis inhibitor, is a significant predictor of mortality in trauma patients with an ISS greater than 15. ROTEM APTEM Alpha Angle and ROTEM INTEM Clot Formation Time are also significant predictors of mortality, whereas conventional coagulation tests did not have a significant contribution to predicting mortality in this patient population.
 

17.19 Back to the Basics: Trauma Team Assessment and Decision Making is Associated with Improved Outcomes

M. A. Vella1, R. Dumas1,2, K. Chreiman1, M. Subramanian1, M. Seamon1, P. Reilly1, D. Holena1  1University Of Pennsylvania,Traumatology, Surgical Critical Care And Emergency Surgery,Philadelphia, PA, USA 2University Of Texas Southwestern Medical Center,General And Acute Care Surgery,Dallas, TX, USA

Introduction:  Teamwork and decision making are critical elements of trauma resuscitation. While assessment instruments such as the non-technical skills (NOTECHS) tool have been developed, their correlation with patient outcomes is unclear. Using emergency department thoracotomy (EDT) as a model, we sought to describe the distribution of NOTECHS scores during resuscitations. We hypothesized that patients undergoing EDT whose resuscitations had better scores would be more likely to have return of spontaneous circulation (ROSC).

Methods:  Continuously recording video was used to review all captured EDTs during the study period. We used a modification of the NOTECHS instrument to measure 6 domains (leadership, cooperation/resource management, communication/interaction, assessment/decision making, situation awareness/coping with stress, and safety) on a 3-point scale (1 = best, 2 = average, 3 = worst). For each resuscitation, an overall NOTECHS score (6-18 points) was calculated. The primary outcome metric was ROSC. Associations between demographic, injury, and NOTECHS variables and ROSC were examined using univariate regression analysis.
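
A small sketch of how the modified NOTECHS total and the "best" versus not-best comparison for the assessment/decision-making domain could be tabulated is shown below; the file and column names are hypothetical.

```python
# Hypothetical sketch: sum the six NOTECHS domain scores per resuscitation and
# compute the odds ratio for ROSC by "best" assessment/decision-making score.
import pandas as pd
from statsmodels.stats.contingency_tables import Table2x2

domains = ["leadership", "cooperation", "communication",
           "assessment", "awareness", "safety"]

df = pd.read_csv("edt_video_review.csv")       # one row per resuscitation
df["notechs_total"] = df[domains].sum(axis=1)   # 6 (best) to 18 (worst)

# 2x2 table: "best" (score 1) on assessment/decision making vs. ROSC (0/1).
best = (df["assessment"] == 1)
table = pd.crosstab(best, df["rosc"]).loc[[True, False], [1, 0]]
res = Table2x2(table.values)
print(res.oddsratio, res.oddsratio_confint())
```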

Results: 61 EDTs were captured during the study period. 19 patients (31%) had ROSC and 42 (69%) did not. The median NOTECHS score for all resuscitations was 9 [IQR 8-11]. As demographic and injury data (age, gender, mechanism, signs of life) were not associated with ROSC in univariate analysis, they were not considered for inclusion in a multivariable regression model of NOTECHS scores and ROSC. The association between overall NOTECHS score and ROSC did not reach statistical significance (p=0.09), but examination of the individual components of the NOTECHS score (Table 1) demonstrated that, compared to resuscitations with “average” (2) or “worst” (3) scores on “Assessment and Decision Making,” resuscitations with a “best” score were 5.3 times more likely to lead to ROSC (OR 5.3, CI 1.2-31.9, p=0.017).

Conclusion: While the association between overall NOTECHS scores and ROSC did not reach statistical significance, assessment and decision making did. In patients arriving in cardiac arrest who undergo EDT, better team performance is associated with improved rates of ROSC. Future analysis of the timing and quality of elements of resuscitation using video review may elucidate the mechanistic underpinnings of these findings.

 

17.18 Level 1 Trauma Surgeon Staffing: Is more really better?

A. Ansari1, A. Kothari1, E. Eguia1, M. Anstadt1, R. Gonzalez1, F. Luchette1, P. Patel1  1Loyola University Chicago Stritch School Of Medicine,Department Of Surgery,Maywood, IL, USA

Introduction:
Trauma is the fourth leading cause of death in the United States. Care in Level 1 trauma centers is associated with improved outcomes, and the determinants of this relationship continue to be studied. The objective of this study was to determine whether the number of trauma surgeons on staff at Level 1 trauma centers impacted outcomes.

Methods:
This study utilized data from the American College of Surgeons (ACS) National Trauma Data Bank (NTDB) for the years 2013-2016. Inclusion criteria were all patients presenting to a Level 1 trauma center with severe traumatic injuries, defined as an Injury Severity Score (ISS) of 15 or greater. The primary outcome was patient survival. A multivariable logistic regression model was constructed to estimate the adjusted effect of trauma surgeon staffing on the primary outcome.

Results:
A total of 180,999 encounters were included in this study. Injured patients who received care at a trauma center with fewer than 4 staff surgeons had a mortality of 16.0%, versus 12.4% for those at a trauma center with >4 surgeons (P=0.01). After controlling for injury severity, age, sex, and race, the odds of mortality were 0.70 (95% CI 0.53-0.92) comparing high-staff to low-staff centers. Secondary outcomes, including length of stay, ventilator time, and ICU length of stay, did not differ based on trauma center staffing.

Conclusion:
Current ACS requirements for trauma surgeon staffing at Level 1 trauma centers specify a minimum of one trauma surgeon per center. Based on our evaluation, there appears to be a clinically meaningful improvement in outcomes when a center has 4 or more trauma surgeons on staff. This warrants further evaluation of the staffing requirements for trauma surgeons at Level 1 trauma centers.
 

17.17 The Current Composition and Depth of Massive Transfusion Protocols at US Level-1 Trauma Centers

J. Williams1, C. E. Wade1, B. A. Cotton1  1McGovern Medical School at UTHealth,Acute Care Surgery,Houston, TEXAS, USA

Introduction: Recent guidelines from the American College of Surgeons Trauma Quality Improvement Program (TQIP) and the Eastern Association for the Surgery of Trauma (EAST) have made several recommendations for optimal resuscitation and transfusion of the bleeding patient. These guidelines were developed to improve outcomes in this patient population by reducing variation in massive transfusion protocols (MTP) across institutions, including the recommendation to transfuse products in a ratio approximating 1:1:1 (plasma:platelets:red blood cells). However, there are few data showing how well these guidelines have been implemented. Moreover, given the concern for supporting care during mass casualty events, there are no data evaluating the depth of product availability at these centers. The purpose of this study was to evaluate existing MTPs and on-hand blood products at academic level-1 trauma centers (TC) throughout the US and describe current practices.

Methods:  Trauma directors at the 25 busiest US level-1 TCs were asked to complete an anonymous survey regarding their MTPs and a cross-sectional survey of on-hand blood products. Continuous data are presented as medians with the 25th and 75th percentile interquartile range (IQR). Categorical data are reported as proportions.

Results: Responses were obtained from 17 TCs, all of which have an MTP in place. The median number of trauma admissions for calendar year 2016 among responding TCs was 2,838 (IQR 1,813-4,888), with a median of 54 MTP patients (IQR 38-107). 76% of responding TCs report using a 1:1 ratio of plasma:red blood cells for trauma resuscitation. 82% of responding TCs use platelets in either their first or subsequent MTP coolers, with 58% reporting platelet use in their first MTP cooler. The most commonly reported platelet:plasma:RBC transfusion ratio was 1:1:1, used by 35% of TCs in their first MTP cooler and 47% in subsequent MTP coolers. Additionally, 89% of TCs report using viscoelastic testing to guide resuscitation efforts. The Table depicts median on-hand blood products across the 17 centers.

Conclusion: This study provides a snapshot of current MTP practices at busy level-1 trauma centers throughout the US. Although all surveyed programs have an MTP in place, variation exists in the ratio of blood products used despite clear recommendations from recent guidelines. Additionally, there is great variation in the quantity of on-hand blood products at TCs, especially with regard to platelets. Further analysis is needed to understand how differences in MTPs affect patient outcomes.
 

17.16 Surgical Critical Care Billing at the End of Life: Are We Recognizing Our Own Efforts?

S. J. Zolin1,2, J. Bhangu1,2, B. T. Young1,2, S. Posillico1,2, H. Ladhani1,2, J. Claridge1,2, V. P. Ho1,2  1Case Western Reserve University School Of Medicine,Cleveland, OH, USA 2MetroHealth Medical Center,Division Of Trauma, Critical Care, Burns, And Emergency General Surgery,Cleveland, OH, USA

Introduction:
Practitioners in the intensive care unit (ICU) not only provide physiologic support to severely injured patients but also spend time counseling families and providing primary palliative care services, including goals-of-care conversations and symptom palliation. It is unclear whether ICU physicians account for these services consistently in their critical care billing and documentation (CCBD). We analyzed CCBD practices for moribund trauma patients cared for in the ICU of an academic level 1 trauma center, hypothesizing that CCBD would be inconsistent despite the critically ill status of these patients near the end of life.

Methods:
An analysis of all admitted adult trauma patients who died between 12/2014 and 12/2017 was performed to evaluate the presence of CCBD on the day prior to death and on the day of death. CCBD was defined as the critical care time documented in daily ICU progress notes. Age, injury severity score (ISS), race, code status at time of death, and family meetings discussing prognosis and/or goals of care held within one day of death were recorded. Patients already designated as comfort care prior to the day of analysis were not considered eligible for CCBD, and patients who died within 24 hours of arrival were excluded. Multivariate logistic regression was used to determine patient factors associated with CCBD.

Results:
A total of 134 patients met study criteria; 71.6% were male and 87.3% were white. The median age was 69 years (IQR 58-82) and the median ISS was 26 (IQR 20-33). 82.1% had a family meeting within 1 day of death, and 76.5% were made comfort care prior to death. Of patients eligible for CCBD, 42.5% had no CCBD on the day prior to death and 59.3% had no CCBD on the day of death, corresponding to lost potential hospital compensation in excess of $30,000. For the day prior to death, a family meeting within 1 day of death was associated with an increased likelihood of CCBD (p = 0.011), while increasing age was associated with a decreased likelihood of CCBD (p = 0.008).

Conclusion:
In critically ill trauma patients near death, CCBD was inconsistent, representing an opportunity for improvement. Family meetings within 1 day of death were frequent and were associated with CCBD, suggesting that additional time spent with patients and families in end of life conversations may lead to more consistent CCBD. Given the downstream impacts of CCBD on health systems, further investigation into the mechanisms and generalizability of these findings is needed.
 

17.15 Implementation of a Bedside ICU Visual Clinical Decision Support Tool Reduces Acute Kidney Injury

J. E. Baker1, C. A. Droege1, J. A. Johannigman1, J. B. Holcomb2, T. A. Pritts1, M. D. Goodman1  1University of Cincinnati,Department Of Surgery,Cincinnati, OHIO, USA 2The University of Texas,Department Of Surgery,Houston, TX, USA

Introduction:
Acute kidney injury (AKI) is a secondary insult in critical illness that is commonly associated with increased morbidity and mortality. Determining the onset and extent of AKI remains challenging. We hypothesized that a visual clinical decision support tool incorporating validated AKI staging and recognition would help identify patients transitioning between stages of injury severity.

Methods:
A commercially available bedside clinical surveillance and decision support dashboard system was implemented in 12 of the 34 beds in a surgical intensive care unit (SICU) at an academic level I trauma center. An automated AKI bundle based on the Kidney Disease: Improving Global Outcomes (KDIGO) criteria was used to aid in the identification of patients in various AKI stages. A pre- and post-implementation analysis was performed on patients in SICU beds with (WDB) and without the dashboard (WODB) to assess the impact of the bundle on identification of patients with AKI and minimization of ongoing renal dysfunction. Data from the five months prior to and fourteen months after implementation were compared. Patients with known chronic or end-stage renal disease were excluded.
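
For reference, a simplified, creatinine-based version of the kind of KDIGO staging rule such a bundle might encode is sketched below; the full criteria also use urine output and the 48-hour window for the 0.3 mg/dL rise, which are glossed over here, and this is not the vendor's actual logic.

```python
# Simplified KDIGO AKI staging from serum creatinine (mg/dL) only.
# Assumption-laden sketch: urine-output criteria and the 48-hour window for the
# 0.3 mg/dL rise are omitted; "on_rrt" flags renal replacement therapy.
def kdigo_stage(creatinine: float, baseline: float, on_rrt: bool = False) -> int:
    """Return an approximate KDIGO AKI stage (0-3)."""
    ratio = creatinine / baseline
    if on_rrt or ratio >= 3.0 or creatinine >= 4.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or (creatinine - baseline) >= 0.3:
        return 1
    return 0

print(kdigo_stage(2.1, 1.0))   # 2.1x baseline -> stage 2
```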

Results:
A total of 2,813 patients were included: 988 WDB patients and 1,825 WODB patients. Age and gender were similar in each group both before and after implementation. Overall AKI incidence was reduced in the WDB group after implementation (28.8% vs. 22.4%, pre vs. post; p=0.04). Individual KDIGO stages of AKI were also reduced in the WDB group post-implementation, but none of these reductions reached statistical significance. By contrast, in the WODB group there were no differences in overall AKI incidence or individual KDIGO stages before versus after implementation. ICU and hospital lengths of stay (LOS) were similar in all patients and on subgroup analysis between individual KDIGO stages. No difference in mortality was demonstrated between the WDB and WODB cohorts.

Conclusion:
Implementation of a bedside visual clinical decision support tool was associated with a statistically significant decrease in overall AKI incidence among patients with the bedside dashboard. We did not find a difference in LOS or mortality, but this initial retrospective study may be underpowered to detect these changes. Nevertheless, integration of an AKI bundle within this tool in SICU patients may increase clinicians' identification of AKI in real time and facilitate implementation of therapies to improve quality of care.
 

17.14 A COMPARISON OF TWO THROMBOEMBOLIC PROPHYLAXIS REGIMENS WITH LOW MOLECULAR WEIGHT HEPARIN IN TRAUMA

M. Jackson1, M. S. O’Mara1, A. Vang1, P. Beery1, M. Bonta1, M. C. Spalding1  1OhioHealth/Grant Medical Center,Trauma And Acute Care Surgery,Columbus, OH, USA

Introduction:

Trauma patients are at increased risk for the development of venous thromboembolic events (VTE). Controversy remains regarding the adequate dosing regimen of low molecular weight heparin (LMWH, enoxaparin) for thromboprophylaxis in trauma patients. We hypothesized that 30 mg enoxaparin twice daily is superior to 40 mg enoxaparin once daily in both safety and effectiveness.

Methods:

A retrospective controlled cohort study was performed of trauma patients who received prophylactic enoxaparin before and after a protocol dosing change. The screening criteria for clinically significant VTE were constant throughout both study periods. Patients in the pre-protocol change cohort received 40 mg enoxaparin once daily, while those in the post-protocol change cohort received 30 mg twice daily. A sample of 950 patients in each treatment group was estimated to provide at least 80% statistical power to detect a difference between the reported VTE rates of 2.9% and 1.1%, based on a two-sided chi-square test comparing two independent groups with a Type I error of 0.05. Demographics, risk factors, and incidences of VTE events were compared between the two cohorts.
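
The stated power calculation can be approximated (using an arcsine effect size for two proportions rather than the exact chi-square formulation) as follows:

```python
# Hedged check of the sample-size statement: power to detect a drop in VTE rate
# from 2.9% to 1.1% with 950 patients per group at two-sided alpha = 0.05.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.029, 0.011)      # Cohen's h for the two proportions
power = NormalIndPower().power(effect_size=effect,
                               nobs1=950, ratio=1.0, alpha=0.05,
                               alternative="two-sided")
print(round(power, 2))   # comes out at roughly 0.8 or slightly above
```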

Results:

2,638 patients were initially analyzed and 1,900 met inclusion criteria: 950 patients in the pre-protocol change cohort and 950 in the post-protocol change cohort. The demographics of the two groups were similar. The once-daily cohort experienced a VTE rate of 4.1% (39 events), while the twice-daily cohort experienced a VTE rate of 3.7% (35 events) (p=0.64). When the groups were corrected for variability by logistic regression, there remained no difference in VTE rate (p=0.60).

Conclusion:

The 30 mg twice-daily and 40 mg once-daily enoxaparin dosing regimens did not differ significantly in the incidence of clinically significant VTE. Both dosing regimens were effective for VTE prophylaxis in trauma patients.
 

17.13 A Multicenter Study of Nutritional Adequacy in Neonatal and Pediatric Extracorporeal Life Support

K. Ohman1, H. Zhu3, I. Maizlin4,10, D. Henry5, R. Ramirez6, L. Manning7, R. F. Williams7,8, Y. S. Guner6,11, R. T. Russell4,10, M. T. Harting5,9, A. M. Vogel2,3  1Washington University,Surgery,St. Louis, MO, USA 2Baylor College Of Medicine,Surgery,Houston, TX, USA 3Texas Children’s Hospital,Surgery,Houston, TX, USA 4The Children’s Hospital Of Alabama,Surgery,Birmingham, AL, USA 5Children’s Memorial Hermann Hospital,Surgery,Houston, TX, USA 6Children’s Hospital of Orange County,Surgery,Orange, CALIFORNIA, USA 7LeBonheur Children’s Hospital,Surgery,Memphis, TN, USA 8Univeristy Of Tennessee Health Science Center,Surgery,Memphis, TN, USA 9McGovern Medical School at UTHealth,Pediatric Surgery,Houston, TX, USA 10University Of Alabama at Birmingham,Surgery,Birmingham, Alabama, USA 11University Of California – Irvine,Surgery,Orange, CA, USA

Introduction:  Extracorporeal life support (ECLS) allows for life-saving treatment of critically ill neonates and children. Malnutrition in critically ill patients is extremely common and is associated with increased morbidity and mortality. The purpose of this study was to describe practice patterns of parenteral (PN) and enteral (EN) nutrition and the nutritional adequacy of neonates and children receiving ECLS. We hypothesized that nutritional adequacy is highly variable, overall nutritional adequacy is poor, and enteral nutrition is underutilized compared to parenteral nutrition.

Methods:  An IRB-approved, retrospective study of neonates and children (age <18 years) receiving ECLS at 5 centers from 2012 to 2014 was performed. Demographic, clinical, and outcome data were analyzed. Continuous variables are presented as median [IQR]. Adequate nutrition was defined as meeting 66% of daily caloric goals during ECLS support.
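
A minimal sketch of how this adequacy definition translates into a per-patient percentage of adequate days, using invented data and hypothetical column names:

```python
# Toy illustration of the adequacy metric: percent of ECLS days on which a
# patient received at least 66% of the daily caloric goal. Data are invented.
import pandas as pd

daily = pd.DataFrame({
    "patient_id":    [1, 1, 1, 2, 2],
    "kcal_received": [40, 70, 95, 20, 80],
    "kcal_goal":     [90, 90, 90, 100, 100],
})

daily["adequate"] = daily["kcal_received"] >= 0.66 * daily["kcal_goal"]
pct_adequate_days = daily.groupby("patient_id")["adequate"].mean() * 100
print(pct_adequate_days)   # per-patient percent of adequate-calorie days
```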

Results: 283 patients were identified; the median age was 12 days [3 days, 16.4 years] and 47% were male. ECLS categories were neonatal respiratory (33.9%), neonatal cardiac (25.1%), pediatric respiratory (17.7%), and pediatric cardiac (23.3%). The predominant mode was venoarterial (70%). Mortality was 41%. Pre-ECLS enteral and parenteral nutrition was present in 80% and 71.5% of patients, respectively. The median caloric and protein goals for the population were 90 kcal/kg [70, 100] and 3 g/kg [2, 3], respectively. Figure 1 shows caloric and protein nutritional adequacy relative to goals for the population over the duration of ECLS. The median percentage of days with adequate caloric and protein nutrition was 50% [0, 78] and 67% [22, 86], respectively. The median percentage of days with adequate caloric and protein nutrition by the enteral route alone was 22% [0, 65] and 0% [0, 50], respectively. Gastrointestinal complications occurred in 19.7% of patients, including hemorrhage (4.2%), ileus (3.2%), enterocolitis (2.5%), intra-abdominal hypertension or compartment syndrome (0.7%), perforation (0.4%), and other (11%).

Conclusion: Although nutritional adequacy in neonates and children receiving ECLS improves over the course of the ECLS run, the use of enteral nutrition remains low despite relatively infrequent gastrointestinal complications.

 

17.12 CT Scan Analysis Indicates Nutritional Status in Trauma Patients

F. Cai1, J. C. Lee2, E. J. Matta4, C. E. Wade1,3, S. D. Adams1,3  1McGovern Medical School,Surgery,Houston, TX, USA 2Memorial Hermann Hospital,Clinical Nutrition,Houston, TX, USA 3Center for Translational Injury Research,Houston, TX, USA 4McGovern Medical School,Diagnostic Radiology,Houston, TX, USA

Introduction:
More than 2 million people are hospitalized in the US annually for traumatic injuries. These patients are at risk for malnutrition due to prolonged preoperative fasting and minimal intake from ileus or intestinal injury, while their injuries increase metabolic demands. The gold standard for diagnosing malnutrition is a dietician interview and physical exam to assess the ASPEN/AND malnutrition consensus criteria. Weight loss and loss of muscle mass and fat are commonly used as indicators, along with calorie intake history; however, this requires time, resources, and training. Given the prevalence and accessibility of CT imaging in trauma admissions, morphometric analysis has the potential to serve as an indicator of nutritional status at admission. We hypothesized that admission CT scans can identify individuals at high risk of being malnourished on arrival, and that this early identification can target them for aggressive nutrition supplementation.

Methods:
We performed a retrospective review of adult (>15 years) patients with traumatic injuries admitted to our level I trauma center. We included patients with an admission abdominal CT scan and a dietician nutritional assessment within 3 days. Patients were stratified by gender, age (Young <65 years, Older ≥65 years), and nutritional status, designated as non-malnourished (NM) or moderate-severe malnourished (MSM). CT images were analyzed using Aquarius TeraRecon software to calculate the average psoas area at the level of the L4-L5 intervertebral disc. Statistical significance was determined by stepwise selection modeling and set at p<0.05.

Results:
Images were analyzed for 120 patients, of whom 58% were male. The mean age was 53.6 ± 21.6 years and 37% were Older (n=44). The median average psoas area in NM Young males (n=47) was 18.6 cm2, compared to 12.9 cm2 in the MSM group. For Young females (n=29), the medians were 10.6 cm2 in the NM and 9.2 cm2 in the MSM group. Among Older males (n=23), the median was 12.1 cm2 in the NM and 9.7 cm2 in the MSM group; Older females (n=21) had a median of 8.4 cm2 in the NM and 6.6 cm2 in the MSM group (IQ ranges shown in the box plot). With stepwise selection modeling, we found that gender and psoas size each had a significant effect on nutritional status. The age-by-psoas-size interaction on nutritional status did not reach significance.

Conclusion:
Our data show that average psoas area is significantly decreased in patients diagnosed with malnutrition, and gender is also significantly associated with malnutrition risk. In trauma patients with admission CT scans, psoas area analysis could potentially be used to trigger a more aggressive nutrition supplementation plan upon admission, even before dietician assessment.

 

17.11 Intravenous Lidocaine as an Analgesic Adjunct in Trauma Patients: Is It Safe?

H. L. Warren2, P. Faris1, D. H. Bivins2, E. R. Lockhart2, R. Muthukattil2, D. Lollar1,2  1Virginia Tech Carilion School of Medicine,Roanoke, VIRGINIA, USA 2Carilion Clinic,Roanoke, VIRGINIA, USA

Introduction: Pain control in patients suffering traumatic injury can be challenging. Exposure to opioid pain medications can lead to prolonged dependence; therefore, regimens that reduce the amount of opioid analgesia are needed. We have identified no data regarding the use of intravenous lidocaine (IVL) in trauma populations. We sought to explore the safety of IVL in these patients.

 

Methods: We performed a single-institution retrospective review of trauma patients receiving IVL from 6/30/16 to 6/30/17. We extracted data on demographics, pre-admission substance use, injury severity, in-hospital analgesic use, PT/OT participation rates, and side-effect events. The lidocaine group was compared with a non-lidocaine control (C) group matched on age, sex, race, and ISS. Patients with a length of stay <24 hours were excluded from the control group.

Results: 81 patients received IVL and were compared to 89 controls. Age, sex, race, and ISS did not differ between groups. Significantly more patients receiving IVL had a history of narcotic and polysubstance use (p<0.01). Mortality was the same (p=1.0). Hospital length of stay was longer in the IVL group (7.5 vs 11.8 days, p=0.01). 38 of 81 patients received a bolus and all patients received a drip; the mean infusion rate was 1.47 mg/hr. Duration of therapy ranged from 1 to 41 days, with a mode of 3 days. 28 side-effect events occurred in 23 of 81 IVL patients (28.4%), the most common being delirium (14/28). The rate of side effects was higher in the elderly cohort (7/13, 53.8%) than in the adult cohort (16/68, 23.5%). There was no relationship between side effects and blood lidocaine levels, and side effects resolved with cessation of the medication. Side effects occurred in 5 of 89 control patients (5.6%).

Conclusion: Side effects of IVL were common but resolved with cessation of IVL, and no mortality was attributed to IVL. With careful monitoring, IVL may be a useful adjunct for patients requiring high narcotic doses; its use in elderly patients requires caution. These results should be clarified with prospective evaluation.

 

17.09 The Optimal Length of Stay for 90-day Readmissions after Surgeries in Tricare

T. Andriotti1, E. Goralnick1, M. Jarman1, M. A. Chaudhary1, L. Nguyen3, P. Learn2, A. Haider1, A. Schoenfeld1  1Harvard Medical School,Surgery,Boston, MA, USA 2Uniformed Services University Of The Health Sciences,Surgery,Bethesda, MD, USA 3Harvard Medical School,Vascular And Endovascular Surgery,Boston, MA, USA

Introduction:

Healthcare performance evaluators have prioritized reduction of length of stay (LOS) and readmissions as important measures of quality in health care. However, these two measures represent competing demands, as decreased LOS may result in increased unplanned readmissions. Our objective was to assess the LOS associated with the lowest readmission risk after discharge from elective total knee arthroplasty.

Methods:

A retrospective, open-cohort study was performed using claims from Tricare, the Department of Defense's health insurance program, to identify all eligible adult patients (18-64 years) discharged after elective total knee arthroplasty from 2006-2014. To estimate the LOS associated with the lowest 90-day readmission risk, a generalized additive model with spline regression was used to estimate the predicted risk of readmission (Graph 1), adjusted for age, sex, military rank as a proxy for socioeconomic status, any complications during the hospital stay, and Charlson comorbidity score. Readmissions included stays for all unplanned causes, identified by the principal diagnosis, within 90 days after discharge from the index (i.e., initial) inpatient stay for elective total knee arthroplasty.
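
A rough analogue of this model (not the authors' exact specification) is a logistic regression of 90-day readmission on a B-spline basis for LOS plus the adjustment covariates; the dataset and variable names below are assumptions.

```python
# Assumption-laden sketch: spline-based readmission risk as a function of LOS,
# adjusted for covariates, approximating the GAM described in the abstract.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tka_tricare.csv")   # hypothetical claims extract

model = smf.logit(
    "readmit_90d ~ bs(los, df=4) + age + C(sex) + C(rank) "
    "+ complication + charlson",
    data=df,
).fit(disp=0)

# Predicted readmission risk across the observed LOS range (e.g., POD-1 to
# POD-8), averaging over the covariate distribution in the data.
los_grid = range(int(df["los"].min()), 9)
risk_by_los = {d: model.predict(df.assign(los=d)).mean() for d in los_grid}
print(risk_by_los)
```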

Results:

11,517 patients (6,910 women and 4,607 men) with a mean [SD] age of 56.94 [6.34] years underwent the procedure within the study period. 50.14% were white, 1.37% Asian, 9.31% black, 0.66% American Indian, 3.12% other, and 35.39% unknown. The median LOS was 3 days (IQR 2-3 days), and the main causes of 90-day readmission were post-operative infection (0.81%), mechanical complication of other internal orthopedic devices (0.41%), and knee lymphedema (0.31%). The lowest risk of 90-day readmission was observed in patients discharged on the first postoperative day (POD-1) (risk = 4.8%). Moreover, as LOS increased, the predicted risk of readmission increased significantly, up to 9.73% for patients discharged on the 8th day (POD-8) (p=0.0004).

Conclusion:

Reducing LOS to as little as one day may not increase the risk of readmission in patients whose clinical condition allows discharge on POD-1 after elective total knee arthroplasty. Hence, orthopedists may consider discharging patients in good post-surgical condition as soon as one day after elective total knee arthroplasty.

17.08 PREDICTORS FOR DIRECT ADMISSION TO THE OPERATING ROOM IN SEVERE TRAUMA

D. Meyer1, M. McNutt1, C. Stephens2, J. Harvin1, R. Cabrera4, L. Kao1, B. Cotton1,3, C. Wade3, J. Love1  1McGovern Medical School at UTHealth,Acute Care Surgery/Surgery/McGovern Medical School,Houston, TX, USA 2McGovern Medical School at UTHealth,Trauma Anesthesiology/Anesthesiology/McGovern Medical School,Houston, TX, USA 3McGovern Medical School at UTHealth,Center For Translational Injury Research/Surgery/McGovern Medical School,Houston, TX, USA 4Memorial Hermann Hospital,LifeFlight,Houston, TX, USA

Introduction:  Many trauma centers utilize protocols for expediting critical trauma patients directly from the helipad to the OR. Used judiciously, bypassing the ED can decrease resource utilization and the time to definitive hemorrhage control. However, criteria vary by center, rely heavily on physician gestalt, and lack evidence to support their use. With prehospital ultrasound and base excess increasingly available, opportunities may exist to identify risk factors for emergency surgery in severe trauma.

Methods: All highest-activation trauma patients transported by air ambulance between 1/1/16 and 7/30/17 were included retrospectively. Transfer, CPR, and isolated head trauma patients were excluded. Patients were dichotomized into two groups based on ED time: those spending <30min who underwent emergency surgery by the trauma team and those spending >60min. Prehospital and ED triage data were used to calculate univariable and multivariable odds ratios.
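
One of these univariable odds ratios could be computed from a 2x2 table as sketched below; the dataset, field names, and coding are hypothetical.

```python
# Hypothetical sketch of one univariable odds ratio: penetrating mechanism with
# a positive prehospital FAST vs. undergoing emergency surgery within 30 min.
import pandas as pd
from statsmodels.stats.contingency_tables import Table2x2

df = pd.read_csv("highest_activation_flights.csv")
df = df[df["ed_minutes"].lt(30) | df["ed_minutes"].gt(60)]   # drop 30-60 min
df["or_under_30"] = (df["ed_minutes"] < 30).astype(int)

criterion = (df["mechanism"] == "penetrating") & (df["fast_positive"] == 1)
table = pd.crosstab(criterion, df["or_under_30"]).loc[[True, False], [1, 0]]
res = Table2x2(table.values)
print(res.oddsratio, res.oddsratio_confint())
```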

Results: 435 patients met enrollment criteria over the study period. 76 (17%) spent <30min in the ED before undergoing emergency surgery (median age 31y [21-45], 82% male, 41% penetrating), while 359 (83%) spent >60min (median age 35y [21-48], 74% male, 15% penetrating). HR, SBP, and BE values were similar in the two groups. Mortality was higher in the <30min group (32% vs 9%, p<0.001). Compared to the >60min group, the <30min group was more likely to have: (1) penetrating trauma with SBP<80mmHg or BE<-16 (OR 15.02, 95% CI 4.64-48.61); (2) penetrating trauma with a positive FAST (OR 27.54, 95% CI 9.00-84.28); or (3) blunt trauma with a positive FAST and SBP<80mmHg or BE<-10 (OR 11.98, 95% CI 4.03-35.63). Collectively, these criteria identified 39 (51%) of the <30min group.

Conclusion: Both blunt and penetrating trauma patients with positive FAST and profound hypotension or acidosis were much more likely to require emergency surgery within 30 minutes of hospital presentation and may not benefit from time spent in the emergency department.