87.16 Effect of Bariatric Surgery on Cardiovascular Risk Factors: Single Institution Retrospective Study

M. A. Al Suhaibani1, A. Al Harbi1, A. Alshehri2, E. Alghamdi2, R. Aljohani2, S. Elmorsy3, J. Alshehri3, A. Almontashery3  1Qassim University,Medicine,Qassim,Saudi Arabia 2Umm-alqura University,Makkah,Saudi Arabia 3King Abdullah Medical City,Makkah,Saudi Arabia

Introduction:
Bariatric surgery has become increasingly common for obese individuals with or without comorbidities and can result in remission and control of cardiovascular risk factors. We aim to describe the effect of bariatric surgery on cardiovascular risk factors, including glycemic control, blood pressure, lipid profile, and body weight, at King Abdullah Medical City (KAMC) in Makkah, Saudi Arabia.

Methods:
Retrospective cohort study including all obese patients who underwent bariatric surgery at KAMC between 2013 and February 2016.

Results:
A total of 566 patients were included; 58.1% (n=329) were female, mean age was 34.8±10.2 years, and 26.9% (n=152) were smokers. Almost all underwent laparoscopic sleeve gastrectomy (95.9%, n=543). Diabetes and hypertension were the most common comorbidities (24.8%, n=118 and 18.9%, n=107, respectively). After 24 months of follow-up there was a significant reduction in mean Body Mass Index (BMI) from 47.6 kg/m2 at baseline to 31.8 kg/m2 and in glycosylated hemoglobin (HbA1c) from 6.8% to 5.7% (p<0.001). Baseline High Density Lipoprotein (HDL) (44.4 mg/dl), non-HDL cholesterol (147.1 mg/dl), and triglycerides (TAG) (123.6 mg/dl) also improved significantly, reaching 52.3 mg/dl, 137.7 mg/dl, and 78.5 mg/dl, respectively, at 12 months of post-operative follow-up (p<0.001). Of 79 hypertensive patients, 57 had remission of hypertension at last follow-up (p<0.001). Among 103 pre-operative diabetic patients, 71 (68.9%) had complete remission and 17 (16.6%) had partial remission at last follow-up (p<0.001). The predicted 10-year risk of CVD decreased significantly from 7.4% to 5.5% (p<0.001).

Conclusion:
Bariatric surgery was significantly effective in controlling obesity-related cardiovascular risk factors and decreasing the predicted 10-year risk of CVD within a maximum follow-up period of 2 years.
 

87.15 Intraoperative Perfusion Assessment of High-Risk Amputation Stumps Predicts Area of Necrotic Eschar

G. S. De Silva1, K. Saffaf1, L. A. Sanchez1, M. A. Zayed1,2  1Washington University In St. Louis,Department Of General Surgery/Division Of Vascular Surgery,St. Louis, MISSOURI, USA 2Veterans Affairs St. Louis Health Care System,St. Louis, MISSOURI, USA

Introduction:  More than 130,000 extremity amputations are performed in the U.S. per year. In 40% of patients, poor amputation site healing requires stump revision and/or re-amputation.  This contributes to added patient morbidity, disability, and healthcare costs.  We hypothesize that inadequate tissue perfusion is associated with poor amputation stump healing.  We evaluated this using non-invasive Laser-Assisted Fluorescent Angiography (LAFA; SPY Elite® system) in the peri-operative setting.

 

Materials and Methods:  A pilot group of ‘higher-risk’ patients was evaluated prospectively at the time of major lower extremity amputation. Immediately following stump creation, LAFA was performed intra-operatively. Rates of arterial inflow and peak perfusion were determined using densitometry analysis. Post-operative stump healing was serially evaluated for 4-6 weeks using a modified Bates-Jensen Wound Assessment Tool. Non-parametric Spearman correlation analysis was performed to evaluate stump perfusion and healing variables.

 

Results:  In a cohort of 8 patients (100% smokers, 75% diabetic), the least globally well-perfused stumps had the highest necrotic eschar scores (p=0.04), as well as increased volume of eschar (p=0.05). Similarly, amputation stumps with lower perfusion scores along the surgical suture line were more likely to develop a necrotic eschar (R2=0.834, p<0.05) and had increased eschar volume (R2=0.842, p<0.05). We observed no correlation between low stump perfusion scores and higher iliac and common femoral arterial runoff scores (major arterial occlusions).

 

Conclusions:  In ‘higher-risk’ patients, peri-operative perfusion assessments of amputation stumps using LAFA can help predict potential areas of necrotic eschar formation.  Intra-operative determination of areas of decreased amputation stump perfusion may encourage corrective intervention or anticipate subsequent wound care needs.

 

87.13 Sarcopenia Predicts Mortality of Trauma Patients Requiring Intensive Care

N. A. Lee1, A. Khetarpal3, L. Wolfe1, S. Demasi2, A. Stiles2, M. Aboutanos1, P. Ferrada1  1Virginia Commonwealth University,Department Of Surgery,Richmond, VA, USA 2Virginia Commonwealth University,School Of Medicine,Richmond, VA, USA 3Virginia Commonwealth University,Department Of Radiology,Richmond, VA, USA

Introduction:
As the population of the United States ages, there has been a disproportionately larger increase in the number of elderly trauma patients. Elderly patients have worse outcomes even when controlling for injury severity. Frailty, a syndrome characterized by increased vulnerability to stressors leading to functional impairment and adverse outcomes, has been found to negatively affect the outcomes of surgical and critically ill patients, but evaluating frailty in trauma patients has proven difficult. Sarcopenia, or decreased muscle mass and function, is a measurable factor associated with frailty. We hypothesize that sarcopenia can be used as a surrogate for frailty to independently predict risk of mortality.

Methods:
In this retrospective cohort study, trauma patients aged 45 years or greater requiring ICU care at our level-1 trauma center between April 2015 and January 2016 were assessed for sarcopenia by averaging the psoas muscle body cross-sectional area at the level of the L4 pedicles and dividing by the cross-sectional area of the L4 body at the same level. Patients were excluded if there was traumatic injury in the L4 area, or if no CT was obtained. This psoas:lumbar vertebral index (PLVI) was then cross-referenced with data from the local trauma registry, including outcomes, comorbidities, complications, and injury severity scores. Our primary outcome was in-hospital mortality. Statistical analyses, including the Wilcoxon rank sum test and stepwise logistic regression, were performed with SAS 9.4 with a significance level of 0.05.
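Expressed as a formula, the index described above is a simple ratio of cross-sectional areas (assuming, as is typical for this index, that the left and right psoas areas are averaged; the Methods do not state this explicitly):

    \mathrm{PLVI} = \frac{(A_{\text{psoas, left}} + A_{\text{psoas, right}})/2}{A_{\text{L4 vertebral body}}}

For example, with illustrative areas, a mean bilateral psoas area of 9.5 cm2 over an L4 body area of 10 cm2 gives a PLVI of 0.95, the cohort median reported below.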

Results:
Over the study period, 715 patients were identified, of whom 528 were eligible and assessed for sarcopenia by calculating the PLVI (median 0.95, range 0.34 – 1.91). There were 41 deaths in the study population. Patients who died had a smaller PLVI (0.79, range 0.40 – 1.28 vs 0.96, 0.34 – 1.91, p=0.0014), were older (72, 47 – 92 vs 62, 45 – 98, p<0.0001), and had higher injury severity scores (26, 1 – 45 vs 14, 1 – 75, p<0.0001). Stepwise logistic regression was performed with the comorbidities and complications that approached significance on univariate analysis. With these factors included, age was not a significant predictor of mortality; however, ISS and PLVI remained significant contributors to mortality, along with history of renal failure, malignancy, prehospital DNR status, new-onset stroke, and new-onset renal failure. The odds ratio of 0.135 (0.029 – 0.623) demonstrates a strong association between lower PLVI and mortality.

Conclusion:
Sarcopenia as measured by PLVI is an independent predictor of mortality in trauma patients older than 45 requiring critical care. This may provide an opportunity to further risk-stratify this high-risk patient population on admission.

87.12 Survival After Massive Transfusion for Trauma: The Type of Injury Matters.

D. R. Fraser1, A. Snow1, C. F. McNicoll1, P. J. Chestovich1, A. J. Chapman1, J. J. Fildes1  1University Of Nevada,Acute Care Surgery,Las Vegas, NV, USA

Introduction:

Trauma patients often receive large volume blood transfusions for life-threatening bleeding. Despite routine use of Massive Transfusion (MT) (≥ 10 units of packed red blood cells [PRBC]) in devastating injuries, the threshold for transfusion futility has yet to be established. When a trauma patient receives ≥ 10 units of PRBC, the surgeon must steward the continued resuscitative use of PRBC judiciously. Injury patterns may predict survival in MT patients and guide blood product stewardship.

 

Methods:

A retrospective review of our level 1 trauma center’s registry was conducted to identify all trauma patients that received at least one blood product between June 2010 and June 2015. All PRBC, fresh frozen plasma (FFP), cryoprecipitate, and platelets transfused during the first 24 hours of admission were tabulated. Primary outcome was 30-day survival, by Abbreviated Injury Scale (AIS) body region, for patients who received ≥ 10 units of PRBC. Secondary outcomes included 30-day survival of patients who received ≥ 15 units of PRBC, by AIS body region. Stepwise reverse logistic regression was performed in STATA version 11, with statistical significance of p<0.05.

 

Results:

We identified 435 patients who received at least one blood product during the study period; 75.4% were men, mean age was 42.8 years, and 72.4% had blunt injuries. Of these, 116 (26.7%) patients received a MT and 319 (73.3%) did not. The range of all blood products transfused was 1 to 203 units. The range for the 415 patients who received any PRBC was 1 to 80 units. The survivor with the largest PRBC transfusion received 57 units. Thirty-day survival by PRBC quantity transfused was 65.9% (n=299), 44.0% (n=91), 40.0% (n=20), and 20.0% (n=5) for groups of 1-9, 10-25, 26-50, and >50 units of PRBC transfused, respectively. The odds ratio (OR) for 30-day survival in MT patients compared to non-MT patients was 0.92 (p=0.01), after accounting for age, FFP:PRBC ratio, platelets, cryoprecipitate, and AIS body regions. Extremity (OR 4.98, p=0.002) and abdominal (OR 4.63, p=0.008) injuries correlated with improved survival in MT patients (Table 1). Chest injuries were associated with worse 30-day survival in MT patients, though not significantly (OR 0.35, p=0.07). These effects persisted for PRBC transfusions ≥ 15 units.

 

Conclusion:

Survival in trauma patients requiring PRBC transfusion worsens with increasing transfusion volume. In MT patients, abdominal and extremity injuries were associated with improved 30-day survival, while no significant difference was found for other body regions. These findings can inform the surgeon's decision-making process for trauma patients requiring MT and potentially improve the utilization of blood bank resources.

 

87.11 Epidural Anaesthesia For Traumatic Rib Fractures Is Associated With Worse Outcome: A Matched Analysis

K. M. McKendy1, K. Boulva1, L. Lee1, A. N. Beckett1, D. L. Deckelbaum1, P. Fata1, K. A. Khwaja1, D. S. Mulder2, T. S. Razek1, J. R. Grushka1  1McGill University,General And Trauma Surgery,Montreal, QC, Canada 2McGill University,Cardiothoracic Surgery,Montreal, QC, Canada

Introduction: To determine the effect of epidural anaesthesia on the incidence of respiratory complications and in-hospital mortality in adult patients with rib fractures after blunt trauma.

 

Methods: All adult patients presenting at a university-affiliated level I trauma center from 2004 to 2013 with at least one rib fracture secondary to blunt trauma were identified from a prospectively entered database. Patients who had a combined blunt-penetrating mechanism of injury, simultaneous intracranial haemorrhage or traumatic brain injury, or underwent a laparotomy or thoracotomy were excluded from the analysis. Epidurals were placed within the initial 24 hours of presentation according to the treating physician’s preferences. Main outcome measures were respiratory complications (pneumonia, DVT/PE, and respiratory failure) and 30-day in-hospital mortality. Coarsened exact matching was used to account for differences in patient-level factors (age, sex, Injury Severity Score [ISS], number of rib fractures, flail segment, bilateral rib fractures, chest tube insertion, pulmonary contusion, and year of injury) between those who received epidural anaesthesia (EPI) and those who did not (NEPI) in a one-to-one fashion. Subgroup analyses were performed based on age (≥65 and < 65 years), number of rib fractures (≥3 and <3 fractures), burden of trauma (ISS ≥16 and ISS < 16), and bilaterality of fractures. Statistical significance was defined as p<0.05.
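Coarsened exact matching, as described above, can be sketched in a few lines: continuous covariates are coarsened into strata, and EPI/NEPI patients are then matched exactly, one-to-one, within each stratum. The sketch below is illustrative only; the column names, bin edges, and covariate list are assumptions, not the study's actual matching specification.

    import pandas as pd

    def coarsened_exact_match(df, treat_col="epidural"):
        # Coarsen continuous covariates into strata (bin edges are hypothetical).
        work = df.copy()
        work["age_bin"] = pd.cut(work["age"], bins=[0, 45, 65, 120])
        work["iss_bin"] = pd.cut(work["iss"], bins=[0, 15, 24, 75])
        work["ribs_bin"] = pd.cut(work["num_rib_fx"], bins=[0, 2, 5, 30])
        strata = ["age_bin", "iss_bin", "ribs_bin", "sex", "flail_segment",
                  "bilateral_fx", "chest_tube", "pulm_contusion", "year"]
        matched = []
        for _, group in work.groupby(strata, observed=True):
            treated = group[group[treat_col] == 1]
            control = group[group[treat_col] == 0]
            n = min(len(treated), len(control))
            if n > 0:  # keep only strata that contain both exposure groups
                matched.append(treated.head(n))
                matched.append(control.head(n))
        return pd.concat(matched) if matched else work.iloc[0:0]

In the study, 204 matched pairs remained after this step; outcomes were then compared within the matched cohort.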

 

Results: A total of 1360 patients (EPI 329, NEPI 1031) met the inclusion criteria (mean age 54.2 years (SD 19.7), 68% male). Overall, the mean number of rib fractures was 4.8 (SD 3.3) ribs (21% bilateral) with a high total burden of injury (mean ISS 19.9 (SD 8.9)). The incidence of respiratory complications was 13% (180/1360) and mortality was 4% (53/1360) in the unmatched cohort. After matching, 204 EPI patients were compared to 204 NEPI patients, with no differences in demographics and traumatic characteristics. In the matched analysis, EPI patients experienced more respiratory complications (19% vs. 10%, p=0.009) but no difference in 30-day mortality (5% vs. 2%, p=0.159) compared to NEPI patients. There were also no differences in duration of mechanical ventilation (EPI 148 h (SD 167) vs. NEPI 117 h (SD 187), p=0.434) or intensive care unit length of stay (EPI 6.5 d (SD 7.6) vs. NEPI 5.8 d (SD 9.1), p=0.626). Total length of stay was higher in the EPI group (16.6 d (SD 19.6) vs. 12.7 d (SD 15.2), p=0.026). This relationship remained unchanged in all subgroup analyses.

 

Conclusion: Epidural anaesthesia was associated with increased respiratory complications, but not mortality, after traumatic rib fractures. Alternate analgesic strategies and rib fracture fixation should be further investigated for these severely injured patients.

 

 

 

 

 

87.10 Is CT Alone Sufficient to Exclude Clinically Relevant Cervical Spine Injury?

J. S. Hanna2, V. Sim2, C. King1, A. Rayner3, R. Gupta2  1Rutgers Robert Wood Johnson Medical School,Department Of Radiology,New Brunswick, NJ, USA 2Rutgers Robert Wood Johnson Medical School,Division Of Acute Care Surgery,New Brunswick, NJ, USA 3Rutgers Robert Wood Johnson Medical School,Department Of Surgery,New Brunswick, NJ, USA

Introduction:

Missed cervical spine injuries may result in devastating morbidity and mortality. Debate continues over the optimal method for identifying clinically relevant cervical spine injuries. The adequacy of computed tomography (CT) alone and the role of magnetic resonance imaging (MRI) in assessing ligamentous injury remains controversial. We hypothesize that CT scan alone is neither 100% sensitive nor specific because of equipment and provider heterogeneity.

 

Methods:

A prospectively maintained database at a level one trauma center was queried for all trauma patients with a negative cervical spine CT scan who required an MRI for persistent midline cervical tenderness (MCT) or altered mental status (AMS) between 2011 and 2016. A total of 838 patients were identified, for whom a retrospective chart review was performed to identify admission characteristics, imaging findings, and clinical management.

 

Results:

The identified cohort was composed of 649 patients with persistent MCT and 189 patients with AMS. MRI identified clinically relevant injuries not seen on CT in 5% of patients in each of the MCT and AMS groups. In the MCT group, 4% were managed with a cervical collar and 1% required surgery. In the AMS group, 4.8% were managed with a cervical collar and 0.2% with surgery. In the absence of a cervical spine fracture, the sensitivity and specificity of CT alone in detecting ligamentous injury in this cohort were 2.3% and 100%, respectively.

 

Conclusion:

These data suggest that a negative CT alone is insufficient to exclude injuries resulting in cervical spine instability. Although recent studies have suggested CT scan is 100% sensitive and specific, we believe that equipment and interpreter heterogeneity substantially decreases the sensitivity. We recommend that each institution perform an internal validation before developing protocols that rely solely on CT imaging to exclude clinically relevant cervical spine injuries.

87.09 Burn Injury in Diabetics Leads to Significant Increases in Hospital-Acquired Infection and Mortality

F. N. Williams1, P. Strassle2, S. Jones1, B. Cairns1  1UNC,Surgery/Burns,Chapel Hill, NC, USA 2UNC,Surgery,Chapel Hill, NC, USA

Introduction: Outcomes in burn patients, including mortality, are affected by the age of the patient and the extent of the burn injury. This is reflected in the revised Baux score, which is calculated by adding age plus percent total body surface area burned, plus 17 if there is an inhalational injury. The result corresponds to a predicted mortality. Tight glucose control following burn injury reduces complication and mortality rates. It remains unclear whether a pre-existing diagnosis of diabetes in burn patients influences key outcomes such as infectious complications and mortality.
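As a worked form of the score described above, with the inhalation term written as an indicator:

    \text{revised Baux score} = \text{age (years)} + \%\text{TBSA burned} + 17 \times \mathbb{1}[\text{inhalation injury}]

For example, a 60-year-old patient with a 10% TBSA burn and an inhalation injury would score 60 + 10 + 17 = 87.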

Methods: The Burn Center registry, Hospital Epidemiology database, electronic medical records, and billing data were linked. Adult patients (≥18 years old) admitted between January 1, 2004 and December 31, 2013 with a burn injury were included. Only the first hospitalization within this time frame was included. Diabetes mellitus was identified using both comorbidities listed on the burn registry and diagnostic codes attached to the inpatient hospitalization (ICD-9-CM 250). Multivariable Cox proportional hazard models were used to estimate the effect of diabetes on 60-day mortality and hospital-acquired infections (HAIs), after adjusting for patient and burn characteristics. Only patients hospitalized for 2 or more days (i.e., at risk for infection as per CDC definitions) were included in HAI analyses.

Results: 5,539 patients met the inclusion criteria, of whom 665 (11.8%) had a diabetes mellitus (DM) diagnosis. Diabetic patients were significantly more likely to be female (34.1% vs. 26.6%, p<0.0001), African American (36.9% vs. 26.0%, p<0.0001), and older (median age 56.7 years vs. 39.9 years, p<0.0001). Diabetic patients were more likely to have contact burns (8.9% vs. 4.7%, p<0.0001) and inhalational injury (11.0% vs. 8.1%, p=0.01). No difference was seen in median burn size (4.0% vs. 4.0%, p=0.44). The median revised Baux score was also higher among diabetics (64.0 vs. 47.6, p<0.0001). Patients with DM were more likely to be admitted to the ICU (14.3% vs. 10.6%, p<0.0001) and were hospitalized for a median of 11 days (interquartile range [IQR] 4 – 26), compared to a median of 7 days (IQR 2 – 13) for patients without diabetes (p<0.0001). Only 242 patients (4.4%) were hospitalized longer than 60 days and administratively censored prior to discharge or death. Overall, 243 (4.4%) died during their inpatient hospitalization. After 30 days, diabetic patients had a higher mortality risk (RD 0.03, 95% CI 0.00, 0.05) compared to non-diabetic patients, and after 60 days the difference was larger (RD 0.07, 95% CI 0.01, 0.12). Patients with DM were also significantly more likely to have an HAI by 60 days.

Conclusion: Comorbid conditions can lead to worse outcomes. Diagnosed diabetics fare worse after burn injury than matched non-diabetics.

 

87.08 Uniform Grading Of Hemorrhagic Emergency General Surgery Diseases

G. Tominaga2, J. Schulz3, R. Barbosa4,5, S. Agarwal5, G. Utter6, N. McQuay7, C. Brown8, M. Crandall1  1University Of Florida,Surgery,Jacksonville, FL, USA 2Scripps Memorial Hospital,Surgery,La Jolla, CA, USA 3Massachusetts General Hospital,Boston, MA, USA 4Pacific Surgical P.C.,Portland, OR, USA 5University Of Wisconsin,Surgery,Madison, WI, USA 6University Of California – Davis,Surgery,Sacramento, CA, USA 7Johns Hopkins University School Of Medicine,Surgery,Baltimore, MD, USA 8University Medical Center Brackenridge,Surgery,Austin, TX, USA

Introduction:  Consistent grading of Emergency General Surgery (EGS) diseases is important for comparison of outcomes and development of EGS registries.  The American Association for the Surgery of Trauma (AAST) Patient Assessment Committee has previously developed a grading system for measuring anatomic severity of 16 inflammatory/infectious EGS diseases.  The purpose of this project was to develop a uniform grading template for hemorrhagic EGS diseases cared for by acute care surgeons and apply the template to common hemorrhagic EGS diseases.

Methods:  The AAST Patient Assessment Committee reviewed the literature and examined the existing grading systems available for common hemorrhagic EGS diseases.  A uniform grading template for EGS diseases was formulated and applied to four common EGS bleeding diseases:  bleeding esophageal varices (EV), hemorrhage from colonic diverticulosis (CD), bleeding peptic ulcer disease (PUD), and ruptured abdominal aortic aneurysm (AAA).

Results: A grading template was created with Grade I – occult hemorrhage, Grade II – minimal hemorrhage with no active bleeding, Grade III – limited hemorrhage with no active bleeding, Grade IV – moderate hemorrhage with active bleeding, and Grade V – large volume hemorrhage.  The template was applied to four hemorrhagic EGS diseases as noted in the table.

Conclusion: We have developed a grading template for hemorrhagic EGS diseases and have applied it to four hemorrhagic diseases commonly managed by acute care surgeons. We believe that physiologic parameters, volume loss, and rate of bleeding are essential co-determinants of outcomes in hemorrhagic conditions. However, adding an understanding of the anatomic progression of disease may help inform treatment decisions and predict outcomes.

 

87.07 Upper Extremity DVT Following Port Insertion: What Are The Risk Factors?

O. Tabatabaie1, G. G. Kasumova1, T. S. Kent2, M. F. Eskander1, A. Fadayomi1, S. Ng1, J. F. Critchlow2, N. E. Tawa3, J. F. Tseng1  1Beth Israel Deaconess Medical Center,Surgical Outcomes Analysis & Research (SOAR),Boston, MA, USA 2Beth Israel Deaconess Medical Center,Department Of Surgery,Boston, MA, USA 3Beth Israel Deaconess Medical Center,Division Of Surgical Oncology,Boston, MA, USA

Introduction:

Totally implantable venous access devices (ports) are widely used for long-term central venous access, especially for cancer chemotherapy. Upper extremity deep vein thrombosis (U-DVT) is a reported complication of ports; however, prophylaxis remains controversial due to low event rates in the general population. The aim of this study was to determine the risk factors for U-DVT to help identify patients at increased risk who could potentially benefit from prophylaxis.

Methods:

The Healthcare Cost and Utilization Project’s Florida State Ambulatory Surgery and Services Database (SASD) was queried between 2007-2011 for patients who underwent outpatient port insertion, identified by CPT code. Patients were followed in the SASD, State Inpatient Database (SID), and State Emergency Department Database (SEDD) for U-DVT occurrence. The cohort was divided into a test cohort and a validation cohort based on the time of port placement (2009-2011 and 2007-2008 for the test and validation cohorts, respectively). A multivariable logistic regression model was developed to identify risk factors for U-DVT in patients with a port. The model was then tested on the validation cohort.
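The split-sample workflow described above (fit on the 2009-2011 test cohort, then check discrimination on the 2007-2008 validation cohort) can be sketched as follows. The predictor and outcome column names are hypothetical placeholders, not the actual SASD/SID/SEDD variables, and the C-statistic is computed as the area under the ROC curve, which is equivalent for a binary outcome.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # Hypothetical predictor names; the study's actual variables differ.
    PREDICTORS = ["age_over_65", "black_race", "readmit_30d", "hypercoagulable",
                  "esrd", "noncancer_indication", "catheter_complication"]

    def fit_and_validate(test_df: pd.DataFrame, valid_df: pd.DataFrame):
        """Fit a U-DVT risk model on the test cohort and report C-statistics."""
        model = LogisticRegression(max_iter=1000).fit(test_df[PREDICTORS], test_df["udvt"])
        c_test = roc_auc_score(test_df["udvt"],
                               model.predict_proba(test_df[PREDICTORS])[:, 1])
        c_valid = roc_auc_score(valid_df["udvt"],
                                model.predict_proba(valid_df[PREDICTORS])[:, 1])
        return c_test, c_valid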

Results:
Of the 51,049 patients identified in the test cohort, 926 (1.81%) had at least one U-DVT coded at a follow-up visit. The mean age of the test cohort was 62.3 years (SD=13.2) and there was a slight female predominance (61.94%). The median time to U-DVT development after port placement was 133 days (IQR: 47-297). Patients who had a U-DVT were more likely to be younger (61.2 vs 62.6), black (vs white, OR=1.9), a smoker (OR=1.29), or have an Elixhauser score of 1-2 (vs. 0; OR=1.22). They also had increased odds of having a history of hypercoagulability (OR=7.68), a catheter-related complication at the time of port placement (OR=3.96), autoimmune disease (OR=1.72), or a non-cancer indication for port placement (vs. genitourinary cancers; OR=2.74). Other univariate predictors were Medicaid insurance (vs. private insurance; OR=1.42), all-cause 30-day readmission (OR=2.44), previous DVT (OR=1.95), and end-stage renal disease (ESRD) (OR=4.94). All of the univariate predictors had p-values <0.05. On multivariate analysis, age >65, black race, 30-day readmission, hypercoagulability, ESRD, indication for port placement, and catheter complication were independent predictors of U-DVT (see Figure for details). C-statistics of the model for the test and validation cohorts were 0.68 and 0.66, respectively.

Conclusion:
Our model can be used to identify patients at increased risk of U-DVT after port insertion. Utility of DVT prophylaxis should be investigated in this group of patients in future prospective trials.
 

87.06 The Expedited Discharge Of Patients With Multiple Traumatic Rib Fractures Is Cost Effective

N. Fox1, M. Minarich1, M. Dalton1, K. Twaddell1, J. Hazelton1  1Cooper University Hospital,Camden, NJ, USA

Introduction: Rib fractures cause significant morbidity and mortality in trauma patients. It is well documented that optimizing pain control, mobilization, and respiratory care decreases complications. However, the impact of these interventions on hospital costs and length of stay is not well defined. We hypothesized that patients with multiple rib fractures can be discharged within three hospital days, resulting in decreased hospital costs. We also sought to identify patients who were not able to meet this discharge goal.

Methods: A retrospective review of adult patients (≥18 yrs) admitted to our Level 1 trauma center (2011-2013) with two or more rib fractures (n=202). Patients were excluded if they were intubated, admitted to the ICU, required chest tube placement, or sustained significant multi-system trauma. Demographics, clinical characteristics, hospital costs, and outcome data were analyzed. Patients discharged within three hospital days of admission were considered to have achieved expedited discharge (ED). Univariate and multivariate analyses determined predictors of failure to achieve ED. A p value of <0.05 was considered significant.

Results: Study patients (n=202) were 60 ± 19 years of age with an injury severity score (ISS) of 10 ± 5 and 4 ± 2 rib fractures. Of 202 patients, 127 (63%) achieved ED while 75 (37%) did not. No differences in chest AIS, ISS, smoking status, or history of pulmonary disease were identified between the two groups (all p>0.05). Average LOS (2 ± 1 vs. 7 ± 4 days; p<0.001) and hospital costs ($2,865 ± 1,200 vs. $6,085 ± 3,033; p<0.001) were lower in the ED group (Table 1). A lower percentage of ED patients required placement in rehabilitation facilities (6% vs. 48%; p<0.001). There were no readmissions within 30 days in either group. After controlling for potential confounding variables, multivariable logistic regression analysis revealed that advancing age (OR 1.05 per year, 1.02-1.07) independently predicted failure to achieve ED.

Conclusion: The majority of patients admitted to the hospital with multiple rib fractures can be discharged within three days. This expedited discharge results in significant cost savings for the hospital. Early identification of patients who cannot meet the goal of expedited discharge will allow for better allocation of resources.

 

87.05 Intrahepatic Balloon Tamponade for Penetrating Liver Injury: Rarely Needed but Effective

L. M. Kodadek1, W. R. Leeper2, K. A. Stevens1, A. H. Haider3, D. T. Efron1, E. R. Haut1  1Johns Hopkins University School Of Medicine,Surgery,Baltimore, MD, USA 2Schulich School Of Medicine And Dentistry,Surgery,London, ONTARIO, Canada 3Brigham And Women’s Hospital,Center For Surgery And Public Health,Boston, MA, USA

Introduction:
Severe penetrating liver injuries are associated with high rates of morbidity and mortality. The objective of this study was to describe the experience of a single urban, Level 1 trauma center with the use of intrahepatic balloon tamponade for penetrating liver injuries.

Methods:
This retrospective study queried the trauma registry for patients age 16 and older with traumatic liver injury (ICD-9 864.00-864.19) from penetrating injury undergoing exploratory laparotomy (procedure code 54.11, 54.12, 54.19) from 2000 through 2015. Operative notes were used to identify cases employing intrahepatic balloon tamponade. Charts were reviewed for patient characteristics, injury characteristics, morbidity, and in-hospital mortality. 

Results:
Of the 4,961 penetrating trauma patients admitted during the study period, 279 (5.6%) had a liver injury and underwent exploratory laparotomy. Intrahepatic balloon tamponade was attempted in 9 patients (3.2%) for liver injury secondary to gunshot wounds (8 patients) or stab wounds (1 patient). Seven cases (77.8%) utilized a Penrose drain/red rubber catheter balloon and two cases utilized a Foley catheter balloon. One patient had the balloon removed immediately for increased hemorrhage after placement. Two of the 9 patients (22%) were in arrest at the time of balloon placement and died during the index operation; both had a retrohepatic IVC injury combined with cardiopulmonary injury. Among the 7 survivors, 2 had biliary injuries requiring stents, 3 required hepatic angioembolization for definitive hemorrhage control, and 2 developed liver abscesses. One patient, temporized with balloon tamponade, ultimately required left hepatectomy.

Conclusion:
Although rarely needed, trauma surgeons must be prepared to use intrahepatic balloon tamponade as one surgical technique to control major hepatic injuries. This procedure can result in survival even after major penetrating liver injury. 
 

87.04 The Hospital Agent-based Model: Modular process modeling approach to the design of a trauma center

G. An1, G. An1  1University Of Chicago,Surgery,Chicago, IL, USA

Introduction:  The social, political and economic factors involved in becoming a Trauma Center (TC) are complex, invariably involving varied goals, expertise and expectations across a range of stakeholders. The failure to objectively optimize across these factors can have catastrophic consequences on the operations of institutions aiming to become a TC.  Operational viability must be a precondition if a hospital is to serve its community, and should represent a fundamental constraint on the planning for a TC. Traditional data-centric analyses cannot transparently generate the prospective scenarios needed to forecast the consequences of planning decisions. The generation of such scenarios requires the representation of health system population and process dynamics that: 1) allows for the modular representation of system components at varying levels of spatial-temporal granularity and 2) facilitates transparency by incorporating stakeholder involvement and interaction. Agent-based modeling has been extensively utilized to aid in decision analysis of multi-component/actor systems in business and social systems. Presented herein is an agent-based modeling framework for hospital operations that can be potentially expanded to the specifics of implementing a trauma center within an existing institution. 

Methods:  An abstracted Hospital Agent-based Model (HABM) was spatially sectioned into the emergency department (ED), radiology, the OR, the ICU, and general care units. Individual patients and healthcare providers were represented as individual computational agents located in and moving among the regions of the hospital. Patient acuity was represented by a weighted stochastic likelihood of adverse events affected by provider response. Economic costs and returns were assigned to the actions of the various agents. Simulation experiments were performed with differing trauma populations and resourcing plans to identify process bottlenecks and critical operational tipping points between viable and non-viable scenarios.
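A minimal sketch of the kind of agent-based structure described above follows; the linear ED-to-ward flow, unit names, and all numeric parameters (event probabilities, costs, bed counts) are illustrative assumptions rather than the HABM's actual specification.

    import random

    UNITS = ["ED", "radiology", "OR", "ICU", "ward"]   # abstracted hospital regions

    class Patient:
        def __init__(self, acuity):
            self.acuity = acuity        # 0-1; weights the chance of an adverse event
            self.location = "ED"        # all patients enter through the ED
            self.cost = 0.0
            self.alive = True

    class Hospital:
        def __init__(self, beds):
            # beds: capacity per unit, e.g. {"ED": 20, "radiology": 5, "OR": 4, "ICU": 10, "ward": 40}
            self.beds = beds
            self.patients = []

        def census(self, unit):
            return sum(p.location == unit for p in self.patients if p.alive)

        def step(self, arrivals):
            """Advance the simulation one time step (e.g., one hour)."""
            for _ in range(arrivals):                       # new arrivals enter the ED
                self.patients.append(Patient(acuity=random.random()))
            for p in self.patients:
                if not p.alive:
                    continue
                if random.random() < 0.01 * p.acuity:       # stochastic adverse event weighted by acuity
                    p.alive = False
                    continue
                nxt = UNITS[min(UNITS.index(p.location) + 1, len(UNITS) - 1)]
                if nxt != p.location and self.census(nxt) < self.beds[nxt]:
                    p.location = nxt                        # move downstream only if a bed is free
                p.cost += 100.0 * (1 + p.acuity)            # notional cost accrued per step

        def total_cost(self):
            return sum(p.cost for p in self.patients)

Running step() over a simulated year with different arrival patterns and bed allocations is the kind of experiment that exposes bottlenecks, for example a saturated OR backing patients up into the ED.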

Results: The HABM generated spatio-temporal dynamics that could account for diurnal, weekly, and seasonal variation in patient type and volume. Simulation outputs of patient and economic outcomes visualized decision trade-offs and the impact on non-trauma care. Robust process bottlenecks were identified in ED patient flow, non-emergent OR requirements, and surgical subspecialty resources.

Conclusion: The delivery of Trauma care is a complex, multi-factorial process that has cascading effects on hospital operations. The HABM can dynamically represent a wide range of processes and data types currently utilized in hospital operations research, and serve as a participatory, interactive platform for “virtual Kaizen” scenario exploration among stakeholders. The transparency of the underlying assumptions and expectations provided by the HABM may also serve to aid in community, policy and political engagement.

 

87.03 Eye-Tracking Devices: A Novel Communication Method for Mechanically Ventilated ICU Patients

E. Duffy1, J. Garry1, J. Vosswinkel1, D. Fitzgerald1, K. Grant1, C. Minardi1, M. Dookram1, R. S. Jawa1  1Stony Brook University Medical Center,Stony Brook, NY, USA

Introduction:  Mechanically ventilated patients cannot communicate verbally, creating challenges in addressing their needs. They must rely on alternative means for communication: writing, head nodding, communication boards (CB), etc. It has been suggested that this deficit may be addressed with eye-tracking devices (ETD), tablet-like devices that allow screen selection and enunciation of requests through eye gaze tracking. These devices have traditionally been used by patients with neurodegenerative diseases.  We hypothesized that ETDs would be useful in mechanically ventilated surgery/trauma intensive care unit (SICU) patients.

 

Methods: A prospective pilot study was conducted in a tertiary care SICU. A convenience sample was recruited over 5 weeks; the study was conducted Monday-Friday. All adult (age >18) patients expected to continuously receive mechanical ventilation for >48 hours, with a RASS score of -1 to +1, were evaluated. Exclusion criteria included TBI with GCS <15, stroke, eye injury, non-English speakers, and pregnancy. Patients were asked five basic needs questions (pain, temperature, position, suctioning) with the ETD and the CB, in random order. Patients were also prompted to communicate anything else they wished. Response accuracy was verified with head nod, hand movement, or blinking. An occupational therapist or SICU nurse served as an objective observer. Both the patient and the observer were surveyed at the end of the session regarding their experience.

 

Results: Of the 95 patients screened, 90 were excluded: mechanically ventilated <48 hours or not ventilated (n=62), TBI with GCS <15 (n=10), cognitive impairment (n=6), RASS score <-1 or >+1 (n=10), and eye impairment (n=2). Of the remaining 5 patients, 2 declined participation and 3 were enrolled. Accuracy for yes/no questions was equivalent between the ETD and the CB (both accurate in 10/12, 83% of responses), but greater with the ETD for free responses (2/2 responses for the ETD vs 2/3 for the CB). Patient preference for communication was split evenly among the three options: ETD (1), CB (1), and baseline form of communication (1). The observer preferred baseline communication (2/3 patients) to the CB (1/3 patients) and the ETD (0/3) for basic and complex communication.

 

Conclusions: Previous studies theorized that a substantial proportion of mechanically ventilated ICU patients can use ETDs. Our study found a limited proportion of eligible patients, likely due to strict inclusion/exclusion criteria. The major criteria limiting participation were short duration of mechanical ventilation and low RASS score. In terms of the optimal communication method, too few patients were enrolled to draw any definitive conclusions. As such, the protocol has been modified to include patients for whom mechanical ventilation is expected for >24 hours. Increased coordination with caregivers during sedation vacations will be pursued.

87.02 Pulse Waveform Analysis vs. Pulmonary Artery Catheterization in Orthotopic Liver Transplantation

J. M. Yee1, A. M. Strumwasser1, R. Hogen1, K. Dhanireddy2, S. Biswas1, P. J. Cobb1, D. H. Clark1  1University Of Southern California,Trauma, Acute Care Surgery, And Surgical Critical Care,Los Angeles, CA, USA 2University Of Southern California,Solid Organ Transplantation,Los Angeles, CA, USA

Introduction:
Hemodynamic monitoring in end-stage liver disease (ESLD) is controversial given difficulties in assessing volume responsiveness (VR) and cardiac function (CFx). Pulse waveform analysis (PWA) may supplant pulmonary artery catheterization (PAC) as a non-invasive modality. We hypothesize that PWA is equivalent to PAC for assessing VR and CFx post-orthotopic liver transplantation (OLT). Our specific aims were to determine if post-OLT PWA and PAC data are concordant for measures of VR and CFx, vary pre-and-post extubation, and impact cardiovascular management decisions.

Methods:
Between 2014 and 2015, simultaneous PWA and PAC data were obtained from 49 patients (303 paired measurements). Bland-Altman analysis determined variability and bias for CFx (cardiac index, CI), VR (stroke volume index, SVI), and vascular resistance (systemic vascular resistance index, SVRI). Reference ranges: CI 2.8-4.2 L/min/m2, SVI 33-47 ml/m2, SVRI 1200-2500 dynes/m2/cm5. Data were considered concordant if measurements agreed. For discordant data, cardiovascular management decisions (inotrope/pressor) were determined. Patients on post-OLT vasopressors, with vascular disease and/or ventilated < 8 ml/kg IBW.
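For reference, the Bland-Altman quantities used above (bias and 95% limits of agreement for paired measurements) reduce to a few lines; the arrays in the example are made-up paired cardiac index readings, not study data.

    import numpy as np

    def bland_altman(pwa, pac):
        """Return the bias (mean difference) and the 95% limits of agreement."""
        diff = np.asarray(pwa) - np.asarray(pac)
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        return bias, (bias - half_width, bias + half_width)

    # Example with hypothetical paired cardiac index values (L/min/m2):
    bias, limits = bland_altman([3.1, 2.9, 3.6, 4.0], [3.0, 3.2, 3.4, 4.1])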

Results:
The mean difference (ventilated) was 0.06 [-0.25, 0.37] L/min/m2, 1.34 [-1.93, 4.6] ml/m2, and 736 [584, 889] dynes/m2/cm5, and (extubated) was 0.17 [-0.2, 0.54] L/min/m2, 2.67 [-2.19, 7.52] ml/m2, and 660 [416, 904] dynes/m2/cm5 for CI, SVI, and SVRI, respectively. For CI, SVI, and SVRI respectively, 98.6%, 97.1%, and 98% of ventilated patient data and 95.1%, 95.1%, and 96.7% of extubated patient data fell within the 95% limits of agreement. For clinical interventions, PAC led to 5 unnecessary interventions whereas PWA led to 3.

Conclusion:
Comparing PAC and PWA, mean differences for CI and SVI fall within acceptable ranges of bias with a high degree of concordance, whereas SVRI data appear to have proportional variability outside of normal ranges. PWA may be used as an alternative to PAC post-OLT to assess VR and CFx.
 

87.01 The Impact of a National Sporting Event on The Epidemiology of Injury at a Regional Trauma Center

N. J. Walsh1, R. L. Lassiter1, A. Schlafstein1, P. B. Ham1, J. R. Yon2, A. Talukder1, K. F. O’Malley1, S. B. Holsten1, C. J. Mentzer1,3  1Medical College Of Georgia,Trauma/Critical Care, Department Of Surgery,Augusta, GA, USA 2Swedish Medical Center,Englewood, CO, USA 3University Of Miami,Trauma/Critical Care, Department Of Surgery,Miami, FL, USA

Introduction: The influence of mass gatherings on both local and national health systems has been described, but trauma system utilization during a nationally recognized and televised sporting event has not been reported. A single Regional Trauma Center’s (RTC) institutional database was queried over a 10-year period to elucidate the relationship between an annual mass gathering and the utilization of trauma services.

Methods: A retrospective analysis of trauma patients presenting to the RTC during the week of a large annual sporting event (ASE) between 2005 and 2014 was performed using the institution’s trauma database. For each year in the study period, the week of the event in April was compared to the corresponding weeks in March and May, which served as controls. The number of patients, mechanisms of injury (MOI), patient characteristics, and outcomes were investigated.

Results: 1,041 patients presented during the period of study. Patients during the ASE were older (mean 37.4 years vs 36.7 years, p<0.0001), had more recorded diagnoses (10.2 vs 7.0, p<0.001), lower injury severity scores (10.2 vs 10.6, p<0.001), shorter hospital length of stay (LOS) (4.7 vs 5.3 days, p<0.0001), and shorter ICU LOS (5.6 vs 5.9 days, p<0.001). There was no significant difference in the average number of adult or pediatric traumas per week, MOI, or mortality.

Conclusion: In a metro area with a population of 500,000, despite a 20-25% population increase during the Annual Sporting Event, there was no increase in the raw number of trauma hospitalizations. The injured patients had more comorbidities but sustained less severe injuries, with shorter ICU and hospital LOS.

 

86.20 Penetrating Gastric Trauma – Significance of Acid Suppression and Decompression

D. G. Davila1, A. Goldin1, B. Appel1, N. Kugler1, T. Neideen1  1Medical College Of Wisconsin,Trauma/Critical Care,Milwaukee, WI, USA

Introduction:
Penetrating gastric injuries comprise a small portion of traumatic injuries. A paucity of data exists regarding current management, including acid suppression and nasogastric (NG) decompression. 

Methods:
A single-institution retrospective review of adult patients with penetrating gastric injuries between January 2004 and December 2014 was conducted. The primary study endpoint was 30-day mortality. Secondary endpoints included organ-space infections. Patients who received >48 hours of a proton pump inhibitor or H2 blocker were considered managed with acid suppression; those with >48 hours of NG tube management were considered decompressed.

Results:
A total of 167 patients were identified, with the majority of injuries (77.2%) resulting from gunshot wounds. The cohort was predominantly (90%) male, with an average age of 30.4 years and a mean ISS of 16.5. Twenty-one patients died within 24 hours, with four additional in-hospital deaths. The liver was the most common (42%) associated injury, followed by the diaphragm and the colon. Forty-five patients had two or more operations prior to closure. A single missed gastric injury was identified on second look. There were no instances of gastric repair breakdown, and there was no difference in complication rates between one- and two-layer repairs (p=0.73). Organ-space infections were identified in 31 (21%) patients, most likely the result of an alternative source. Neither acid suppression nor NG tube decompression was significantly associated with death (p=0.29 and p=0.64, respectively) or organ-space infection (p=0.89 and p=0.11, respectively).

Conclusion:
Neither acid suppression nor NG tube decompression appears to protect against death or the infectious morbidity associated with penetrating gastric injuries.
 

86.19 Variability in the Practice of Resuscitative Thoracotomy for Trauma Patients

E. E. Lee1,2, J. K. Canner2, L. Lam1, E. R. Haut2  2Johns Hopkins Bloomberg School Of Public Health,Center For Surgical Trials And Outcomes Research,Baltimore, MD, USA 1University Southern California,General Surgery,Los Angeles, CALIFORNIA, USA

Introduction:
Resuscitative thoracotomy (RT) remains a controversial procedure, with ongoing discussion about its benefits, salvage rates, and potential risks. Despite published guidelines, there is likely wide variation in the use of this procedure, and we sought to characterize this variation nationwide.

Methods:
We performed a retrospective study using the National Trauma Data Bank (NTDB) from 2007-2014. We included all penetrating or blunt trauma patients who were potentially eligible for RT based on meeting all three of the following criteria: presentation to the ED with a heart rate of 0, a systolic blood pressure of 0, and a Glasgow Coma Scale motor score of 1. We examined variation between trauma centers in institutional rates of RT and identified factors associated with the odds of a patient receiving an RT. Statistical significance was predetermined as a p-value <0.05.

Results:
Of the 39,053 patients from 852 institutions, 4,143 (10.6%) underwent RT. Significant factors associated with a patient’s odds of receiving an RT included age, sex, race, injury severity, mechanism of injury, hospital trauma level designation, hospital teaching status, and region. Some hospital variation in the use of RT is related to patient characteristics. However, significant variation based on regional and institutional differences is also present.

Conclusion:
In order to ensure consistency in practices, standardization of indications for RT should be encouraged across the country.

86.18 The Effect of Presence of a State Trauma System on Intentional Firearm-Related Mortality Rate

C. K. Cantrell1, R. Griffin1, T. Swain1, K. Hendershot1  1University Of Alabama at Birmingham,Birmingham, Alabama, USA

Introduction:  Firearm injury is one of the leading causes of death in the United States. Many factors influence mortality from firearm injuries; two of these are time from injury to treatment and the quality of the treatment received. One recommendation that the ACS COT introduced in an attempt to decrease firearm injury fatalities, as well as fatalities from other mechanisms of injury, was for each state to unify its trauma centers and create a statewide trauma system. Illinois, in 1971, was the first state to undergo this transition. Most of these transitions have been more recent, with the percentage of states with a trauma system nearly doubling in the past 15 years while the rate of firearm incidents continues to rise.

Methods:  For this cross-sectional study, data on firearm-related intentional deaths (i.e., suicides and homicides, excluding legal intervention) were collected by state for the years 2000-2014 from the CDC’s Web-based Injury Statistics Query and Reporting System (WISQARS). For each state, the presence of a state trauma system was determined by year from state Public Health Department information. Generalized estimating equations (GEE) negative binomial regression was used to estimate rate ratios (RRs) for the association between presence of a state trauma system and the intentional firearm-related mortality rate, using the state’s population as an offset.
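A minimal sketch of the rate-ratio model described above, assuming a state-year data frame with hypothetical column names (deaths, trauma_system, state, population); availability of the negative binomial family in GEE depends on the statsmodels version, so this is illustrative rather than the authors' actual code.

    import numpy as np
    import statsmodels.api as sm

    def trauma_system_rate_ratio(df):
        """GEE negative binomial regression of intentional firearm deaths on
        trauma-system presence, clustered by state, with a log-population offset."""
        model = sm.GEE.from_formula(
            "deaths ~ trauma_system",
            groups="state",
            data=df,
            offset=np.log(df["population"]),
            family=sm.families.NegativeBinomial(),
            cov_struct=sm.cov_struct.Exchangeable(),
        )
        fit = model.fit()
        return np.exp(fit.params["trauma_system"])   # exp(coefficient) = rate ratio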

Results: The proportion of states with a state trauma system nearly doubled from 40% (n=20) in 2000 to 78% (n=39) in 2014 (see Graph 1). Overall, there was no association between presence of a state trauma system and the intentional firearm-related mortality rate (RR 0.94, 95% CI 0.81-1.09). The lack of association held for both firearm homicides (RR 0.83, 95% CI 0.63-1.07) and suicides (RR 0.98, 95% CI 0.82-1.16) and was observed across 5-year categories, though the associations for firearm homicide differed by year, with a 23% decrease in the rate observed among states with a trauma system in 2005-2009 (RR 0.77, 95% CI 0.58-1.03) and a near-null effect in 2010-2014 (RR 0.91, 95% CI 0.62-1.32). Near-null associations were observed across the board for the firearm suicide rate.

Conclusion: The lack of effect of trauma system presence on firearm suicide rate is not unexpected given the high case fatality rate of these injuries. Though presence of a state trauma system is not associated with the mortality rate, it would be of interest to determine whether the case fatality rate of intentional injury varies by presence of a trauma system.
 

86.17 Variation of Packed Red Blood Cell Unit Age in a Massively Transfused Trauma Patient Population

A. R. Jones1, R. L. Griffin2, R. Patel6, H. E. Wang5, M. B. Marques6, J. Pittet4, J. Kerby3  6University Of Alabama at Birmingham,Department Of Pathology,Birmingham, Alabama, USA 1University Of Alabama at Birmingham,School Of Nursing,Birmingham, Alabama, USA 2University Of Alabama at Birmingham,School Of Public Health,Birmingham, Alabama, USA 3University Of Alabama at Birmingham,Acute Care Surgery,Birmingham, Alabama, USA 4University Of Alabama at Birmingham,Anesthesiology & Perioperative Medicine,Birmingham, Alabama, USA 5University Of Alabama at Birmingham,Emergency Medicine,Birmingham, Alabama, USA

Introduction:  Transfusion of older (≥ 21 days) stored packed red blood cells (PRBCs) has been associated with increased trauma morbidity and mortality. The age of PRBCs used in trauma patients is unknown. We sought to determine trauma center variation in the age of PRBCs used in massive transfusion.

Methods:  We used data from the Pragmatic, Randomized Optimal Platelet and Plasma Ratios (PROPPR) trial, a 12-center randomized trial comparing 1:1:1 with 1:1:2 plasma:platelet:PRBC transfusion ratios in massively transfused (≥ 10 units PRBCs) trauma patients. We categorized PRBC age as <10, 10-14, 15-20, or ≥ 21 days. We compared the transfused PRBC age distribution between trauma centers using the Kruskal-Wallis test and examined correlations between center-level proportions of PRBC age categories using Spearman’s correlation.
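The two tests named above can be reproduced directly with SciPy; the per-center unit ages below are made-up toy values, not PROPPR data.

    import numpy as np
    from scipy import stats

    # Hypothetical storage ages (days) of transfused PRBC units at three centers.
    ages_by_center = [np.array([2, 5, 9, 25, 30]),
                      np.array([1, 3, 8, 12, 15]),
                      np.array([10, 14, 21, 28, 35])]

    # Kruskal-Wallis: does the PRBC age distribution differ across centers?
    h_stat, p_kw = stats.kruskal(*ages_by_center)

    # Spearman correlation between per-center proportions of >=21-day and <10-day units.
    prop_old = [np.mean(a >= 21) for a in ages_by_center]
    prop_fresh = [np.mean(a < 10) for a in ages_by_center]
    rho, p_spearman = stats.spearmanr(prop_old, prop_fresh)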

Results: The study centers transfused a total of 19,655 PRBC units (median 1,434 units [IQR 946, 1,974] per center). Median PRBC age was 9 days (IQR 2-22) and varied by trauma center (from 5 days [IQR 1-16] to 21 days [IQR 3-30]) (p < 0.0001). The distribution of PRBC age categories also varied across trauma centers: <10 days, 43-66%; 10-14 days, 2-20%; 15-20 days, 7-19%; ≥21 days, 5-47% (Figure 1). There were strong negative correlations between the trauma center proportions of ≥21-day and <10-day PRBCs (r = -0.89, p = 0.0001), and of ≥21-day and 10-14-day PRBCs (r = -0.96, p < 0.0001).

Conclusion: Among trauma centers participating in the PROPPR trial, there was a wide variation in the age of transfused PRBCs.

 

86.16 Fecal Diversion in Traumatic Intraperitoneal Rectal Injuries: How much is too much?

P. S. Prakash1, D. Jafari2, R. N. Smith1, C. A. Sims1  1The Hospital Of The University Of Pennsylvania,Division Of Trauma, Surgical Critical Care, And Emergency Surgery,Philadelphia, PA, USA 2The Hospital Of The University Of Pennsylvania,Department Of Emergency Medicine,Philadelphia, PA, USA

Introduction:
Traumatic intraperitoneal rectal injuries can be managed with repair or with resection and primary anastomosis, similar to colonic injuries, yet controversy still exists at the institutional level regarding the optimal management of such injuries during the initial surgical intervention. We sought to characterize the incidence of fecal diversion and the associated morbidity in the management of intraperitoneal rectal injuries.

Methods:
We conducted a retrospective cohort study at a level 1 trauma center using a prospective database from 2005-2015. Adult patients with intraperitoneal rectal injuries after blunt or penetrating trauma were included. Operative procedures were determined from review of electronic operative reports, and clinical characteristics and outcomes were compared between groups using appropriate statistical methods. Significance was defined as p < 0.05.

Results:
Overall, 24 patients were identified with an intraperitoneal rectal injury over the 10-year period. Mean age was 29.6 years (range 16-69). Twenty-one patients (87%) were male and 20 injuries (83%) were penetrating. The mean AIS was 3.58 (SD=0.58) and mean TRISS was 0.9 (SD=0.19). All patients survived to discharge. On presentation, mean GCS was 13.5 (SD=3.4), systolic pressure 129 (SD=27), and temperature 97F (SD=1.5). The mean volume of red blood cells transfused on arrival in the trauma bay was 0.7 units (range 0-5). Twenty-two patients (92%) had fecal diversion (FD), while only 2 (8%) had primary repair (PR). Of those who had FD, 18 (82%) received an end colostomy and 4 (18%) a diverting loop colostomy. Overall, 7 (32%) of the patients who underwent FD had a post-operative complication. Seventeen (77%) of the FD patients underwent colostomy reversal during a separate admission.

Conclusion:
Although the treatment strategy for colorectal trauma advanced during the latter part of the twentieth century, complication rates remain high and standard management for colorectal trauma remains controversial. Though the literature suggests that intraperitoneal rectal injuries can be managed effectively by primary repair or resection with primary anastomosis, fecal diversion appears to still dominate management strategies, despite its associated morbidity.