17.17 The Current Composition and Depth of Massive Transfusion Protocols at US Level-1 Trauma Centers

J. Williams1, C. E. Wade1, B. A. Cotton1  1McGovern Medical School at UTHealth, Acute Care Surgery, Houston, TX, USA

Introduction: Recent guidelines from the American College of Surgeons Trauma Quality Improvement Program (TQIP) and the Eastern Association for the Surgery of Trauma (EAST) have made several recommendations for optimal resuscitation and transfusion of the bleeding patient. These guidelines were developed to improve outcomes in this patient population by reducing variation in massive transfusion protocols (MTP) across institutions, including the recommendation to transfuse products in a ratio approximating 1:1:1 (plasma:platelets:red blood cells). However, there are few data showing how well these guidelines have been implemented. Moreover, given the concern for supporting care during mass casualty events, there are no data evaluating the depth of product availability at these centers. The purpose of this study was to evaluate existing MTPs and on-hand blood products at academic level-1 trauma centers (TC) throughout the US and describe current practices.

Methods:  Trauma directors at the 25 busiest US level-1 TCs were asked to complete an anonymous survey regarding their MTPs and a cross-sectional survey of on-hand blood products. Continuous data are presented as medians with the 25th and 75th percentile interquartile range (IQR). Categorical data are reported as proportions.

Results: Responses were obtained from 17 TCs, with all centers having an MTP in place. The median number of trauma admissions for calendar year 2016 for responding TCs was 2838 (IQR 1813-4888), with a median number of 54 MTP patients (IQR 38-107). 76% of responding TCs report using a 1:1 ratio of plasma:red blood cells for trauma resuscitation. 82% of responding TCs are using platelets either in their first or subsequent MTP coolers, with 58% of TCs reporting platelet use in their first MTP cooler. The most commonly reported transfusion ratio of platelets:plasma:RBCs was 1:1:1, with 35% of TCs using this ratio in their first MTP cooler, and 47% for subsequent MTP coolers. Additionally, 89% of TCs report using viscoelastic testing to guide resuscitation efforts. TABLE depicts median on-hand blood products across the 17 centers.

Conclusion: This study provides a snapshot of current MTP practices at busy level-1 trauma centers throughout the US. Although all surveyed programs have an MTP in place, variation exists in the ratio of blood products used despite clear recommendations from recent guidelines. Additionally, there is great variation in the quantity of blood products on hand at TCs, especially with regard to platelets. Further analysis is needed to understand how differences in MTPs affect patient outcomes.
 

17.16 Surgical Critical Care Billing at the End of Life: Are We Recognizing Our Own Efforts?

S. J. Zolin1,2, J. Bhangu1,2, B. T. Young1,2, S. Posillico1,2, H. Ladhani1,2, J. Claridge1,2, V. P. Ho1,2  1Case Western Reserve University School of Medicine, Cleveland, OH, USA 2MetroHealth Medical Center, Division of Trauma, Critical Care, Burns, and Emergency General Surgery, Cleveland, OH, USA

Introduction:
Practitioners in the intensive care unit (ICU) provide not only physiologic support to severely injured patients, but also spend time to counsel families and provide primary palliative care services, including goals of care conversations and symptom palliation. It is unclear whether ICU physicians account for these services consistently in their critical care billing and documentation (CCBD). We analyzed CCBD practices for moribund trauma patients cared for in the ICU of an academic level 1 trauma center, hypothesizing that CCBD would be inconsistent despite the critically ill status of these patients near the end of life.

Methods:
An analysis of all admitted adult trauma patients who died between 12/2014 and 12/2017 was performed to evaluate the presence of CCBD on the day prior to death and on the day of death. CCBD was defined as the critical care time documented in daily ICU progress notes. Age, injury severity score (ISS), race, code status at time of death, and family meetings discussing prognosis and/or goals of care held within one day of death were recorded. Patients already designated as comfort care prior to the day of analysis were not considered eligible for CCBD, and patients who died within 24 hours of arrival were excluded. Multivariate logistic regression was used to determine patient factors associated with CCBD.

Results:
A total of 134 patients met study criteria. 71.6% were male and 87.3% were white. The median age was 69 (IQR 58-82). Median ISS was 26 (IQR 20-33). 82.1% had a family meeting within 1 day of death. 76.5% were made comfort care prior to death. Of patients eligible for CCBD, 42.5% had no CCBD on the day prior to death and 59.3% had no CCBD for day of death, corresponding to lost potential hospital compensation in excess of $30,000. For the day prior to death, a family meeting within 1 day of death was associated with increased likelihood of CCBD (p = 0.011), while increasing age was associated with decreased likelihood of CCBD (p = 0.008).

Conclusion:
In critically ill trauma patients near death, CCBD was inconsistent, representing an opportunity for improvement. Family meetings within 1 day of death were frequent and were associated with CCBD, suggesting that additional time spent with patients and families in end of life conversations may lead to more consistent CCBD. Given the downstream impacts of CCBD on health systems, further investigation into the mechanisms and generalizability of these findings is needed.
 

17.15 Implementation of a Bedside ICU Visual Clinical Decision Support Tool Reduces Acute Kidney Injury

J. E. Baker1, C. A. Droege1, J. A. Johannigman1, J. B. Holcomb2, T. A. Pritts1, M. D. Goodman1  1University of Cincinnati, Department of Surgery, Cincinnati, OH, USA 2The University of Texas, Department of Surgery, Houston, TX, USA

Introduction:
Acute kidney injury (AKI) is a secondary insult in critical illness commonly associated with an increase in morbidity and mortality. Analyzing and determining the onset and extent of AKI remains challenging. We hypothesized that the use of a visual clinical decision support tool with validated staging and recognition for AKI may be helpful in identifying patients transitioning into different stages of injury severity. 

Methods:
A commercially available bedside clinical surveillance and decision support dashboard system was implemented in 12 of the 34 beds in a surgical intensive care unit (SICU) at an academic level I trauma center. An automated AKI bundle based on the Kidney Disease: Improving Global Outcomes (KDIGO) criteria stages was utilized to aid in identification of patients in various AKI stages. A pre- and post-implementation analysis was performed on patients in SICU beds with (WDB) and without the dashboard (WODB) to assess the impact of the bundle on identification of patients with AKI and minimization of ongoing renal dysfunction. Data from five months prior to and fourteen months after implementation were compared. Patients with known chronic or end-stage renal disease were excluded.
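The creatinine arm of KDIGO staging, on which such an automated bundle rests, reduces to a simple threshold rule. A simplified sketch (serum creatinine only; the urine-output and renal-replacement-therapy criteria, and the dashboard's actual logic, are omitted):

```python
def kdigo_stage(baseline_scr, current_scr):
    """Simplified KDIGO AKI stage from serum creatinine (mg/dL).

    Returns 0 (no AKI by creatinine criteria) through 3. Urine-output
    and renal-replacement criteria are intentionally omitted.
    """
    ratio = current_scr / baseline_scr
    if ratio >= 3.0 or current_scr >= 4.0:
        return 3  # stage 3: >=3.0x baseline or SCr >= 4.0 mg/dL
    if ratio >= 2.0:
        return 2  # stage 2: 2.0-2.9x baseline
    if ratio >= 1.5 or current_scr - baseline_scr >= 0.3:
        return 1  # stage 1: 1.5-1.9x baseline or >=0.3 mg/dL rise
    return 0

print([kdigo_stage(1.0, c) for c in (1.1, 1.6, 2.5, 3.5)])  # → [0, 1, 2, 3]
```

A surveillance dashboard would evaluate a rule like this on each new creatinine result and surface stage transitions at the bedside.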

Results:
A total of 2813 patients were included: 988 WDB patients and 1825 WODB patients. Age and gender were similar in each group both before and after implementation. Overall AKI incidence was reduced in the WDB group after implementation (28.8% vs. 22.4%, pre vs. post; p=0.04). Individual KDIGO stages of AKI were reduced in WDB post-implementation, but none were statistically significant. By contrast, in the WODB group there were no differences in overall AKI incidence or individual KDIGO stages when comparing before and after implementation. ICU and hospital lengths of stay (LOS) were similar in all patients and on subgroup analysis between individual KDIGO stages. No difference in mortality was demonstrated between WDB and WODB cohorts.

Conclusion:
Implementation of a bedside visual clinical decision support tool was associated with a statistically significant decrease in overall AKI incidence in patients with the bedside dashboard. We did not find a difference in LOS or mortality, but this initial retrospective study may be underpowered to detect these changes. Nevertheless, integration of an AKI bundle within this tool in SICU patients may improve clinicians’ real-time identification of AKI and facilitate implementation of therapies to improve quality of care.
 

17.14 A COMPARISON OF TWO THROMBOEMBOLIC PROPHYLAXIS REGIMENS WITH LOW MOLECULAR WEIGHT HEPARIN IN TRAUMA

M. Jackson1, M. S. O’Mara1, A. Vang1, P. Beery1, M. Bonta1, M. C. Spalding1  1OhioHealth/Grant Medical Center, Trauma and Acute Care Surgery, Columbus, OH, USA

Introduction:

Trauma patients are at increased risk for the development of venous thromboembolic events (VTE). Controversy remains regarding the adequate dosing regimen of low molecular weight heparin (LMWH, enoxaparin) for thromboprophylaxis in trauma patients. We hypothesized that 30 mg enoxaparin twice daily is superior to 40 mg enoxaparin once daily in both safety and effectiveness.

Methods:

A retrospective controlled cohort study was performed of trauma patients who received prophylactic enoxaparin before and after a protocol dosing change. The screening criteria for clinically significant VTE were constant throughout both study periods. Patients in the pre-protocol change cohort received 40 mg enoxaparin once daily, while those in the post-protocol change cohort received 30 mg twice daily. A sample of 950 patients in each treatment group was estimated to provide at least 80% statistical power to detect a difference between the reported VTE rates of 2.9% and 1.1%, based on a two-sided chi-square test with Type I error = 0.05 comparing two independent groups. Demographics, risk factors, and incidences of VTE events were compared between the two cohorts.
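The stated sample size can be reproduced with the standard normal-approximation formula for comparing two independent proportions; a minimal sketch (not necessarily the authors' actual calculation, which may have used dedicated software):

```python
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided test of two proportions
    (normal approximation, equal group sizes)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = z.inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    term = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
            + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5)
    return term ** 2 / (p1 - p2) ** 2

print(round(n_per_group(0.029, 0.011)))  # ≈ 950 per group, matching the design
```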

Results:

2638 patients were initially analyzed and 1900 met inclusion criteria: 950 patients in the pre-protocol change cohort and 950 in the post-protocol change cohort. The demographics of the two groups were similar. The once-daily cohort experienced a VTE rate of 4.1% (39 events), while the twice-daily cohort experienced a VTE rate of 3.7% (35 events) (p=0.64). When the groups were corrected for variability by logistic regression, there remained no difference in VTE rate (p=0.60).

Conclusion:

The 30 mg enoxaparin twice-daily and 40 mg enoxaparin once-daily dosing regimens did not differ significantly in the incidence of clinically significant VTE. Both dosing regimens were effective for VTE prophylaxis in trauma patients.
 

17.13 A Multicenter Study of Nutritional Adequacy in Neonatal and Pediatric Extracorporeal Life Support

K. Ohman1, H. Zhu3, I. Maizlin4,10, D. Henry5, R. Ramirez6, L. Manning7, R. F. Williams7,8, Y. S. Guner6,11, R. T. Russell4,10, M. T. Harting5,9, A. M. Vogel2,3  1Washington University, Surgery, St. Louis, MO, USA 2Baylor College of Medicine, Surgery, Houston, TX, USA 3Texas Children’s Hospital, Surgery, Houston, TX, USA 4The Children’s Hospital of Alabama, Surgery, Birmingham, AL, USA 5Children’s Memorial Hermann Hospital, Surgery, Houston, TX, USA 6Children’s Hospital of Orange County, Surgery, Orange, CA, USA 7LeBonheur Children’s Hospital, Surgery, Memphis, TN, USA 8University of Tennessee Health Science Center, Surgery, Memphis, TN, USA 9McGovern Medical School at UTHealth, Pediatric Surgery, Houston, TX, USA 10University of Alabama at Birmingham, Surgery, Birmingham, AL, USA 11University of California – Irvine, Surgery, Orange, CA, USA

Introduction:  Extracorporeal life support (ECLS) provides life-saving treatment for critically ill neonates and children. Malnutrition in critically ill patients is extremely common and is associated with increased morbidity and mortality. The purpose of this study is to describe practice patterns of parenteral (PN) and enteral (EN) nutrition and the nutritional adequacy of neonates and children receiving ECLS. We hypothesize that nutritional adequacy is highly variable and poor overall, and that enteral nutrition is underutilized compared to parenteral nutrition.

Methods:  An IRB approved, retrospective study of neonates and children (age<18 years) receiving ECLS at 5 centers from 2012 to 2014 was performed. Demographic, clinical, and outcome data were analyzed. Continuous variables are presented as median [IQR]. Adequate nutrition was defined as meeting 66% of daily caloric goals during ECLS support.

Results: 283 patients were identified; the median age was 12 days [3 days, 16.4 years] and 47% were male. ECLS categories were neonatal respiratory 33.9%, neonatal cardiac 25.1%, pediatric respiratory 17.7%, and pediatric cardiac 23.3%. The predominant mode was venoarterial (70%). Mortality was 41%. Pre-ECLS enteral and parenteral nutrition was present in 80% and 71.5% of patients, respectively. The median caloric and protein goals for the population were 90 kcals/kg [70, 100] and 3 grams/kg [2, 3], respectively. Figure 1 shows caloric and protein nutritional adequacy relative to goal for the population over the duration of ECLS. The median percent of days with adequate caloric and protein nutrition was 50% [0, 78] and 67% [22, 86], respectively. The median percent of days with adequate caloric and protein nutrition by the enteral route alone was 22% [0, 65] and 0% [0, 50], respectively. Gastrointestinal complications occurred in 19.7% of patients, including hemorrhage (4.2%), ileus (3.2%), enterocolitis (2.5%), intraabdominal hypertension or compartment syndrome (0.7%), perforation (0.4%), and other (11%).

Conclusion: Although nutritional adequacy in neonates and children who receive ECLS improves over the course of the ECLS run, the use of enteral nutrition remains low despite relatively infrequent gastrointestinal complications.

 

17.12 CT Scan Analysis Indicates Nutritional Status in Trauma Patients

F. Cai1, J. C. Lee2, E. J. Matta4, C. E. Wade1,3, S. D. Adams1,3  1McGovern Medical School, Surgery, Houston, TX, USA 2Memorial Hermann Hospital, Clinical Nutrition, Houston, TX, USA 3Center for Translational Injury Research, Houston, TX, USA 4McGovern Medical School, Diagnostic Radiology, Houston, TX, USA

Introduction:
More than 2 million people are hospitalized in the US annually for traumatic injuries. These patients are at risk for malnutrition: prolonged preoperative fasting and ileus or intestinal injury limit intake, while their injuries increase metabolic demands. The gold standard for diagnosing malnutrition is a dietician interview and physical exam to assess the ASPEN/AND malnutrition consensus criteria. Weight loss, loss of muscle mass and fat, and calorie intake history are commonly used as indicators; however, this assessment requires time, resources, and training. Given the prevalence and accessibility of CT imaging in trauma admissions, morphometric analysis has the potential to be an indicator of admission nutritional status. We hypothesized that admission CT scans can identify individuals at high risk of being malnourished on arrival, and that this early identification can target them for aggressive nutrition supplementation.

Methods:
We performed a retrospective review of adult (>15 years) patients with traumatic injuries admitted to our level I trauma center. We included patients with admission abdominal CT scans and a dietician nutritional assessment within 3 days. Patients were stratified by gender, age (Young <65 years, Older ≥65 years), and nutritional status, designated as non-malnourished (NM) or moderate-severely malnourished (MSM). CT images were analyzed using Aquarius TeraRecon software to calculate the average psoas area at the level of the L4-5 intervertebral disc. Statistical significance was determined by stepwise selection modeling and set at p<0.05.
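The morphometric measurement itself reduces to counting segmented pixels on a slice and scaling by pixel size. A minimal sketch (the function name and the numbers are illustrative, not the TeraRecon workflow):

```python
def cross_sectional_area_cm2(pixel_count, pixel_spacing_mm):
    """Area of a segmented region on one axial CT slice, in cm^2.

    pixel_count: number of pixels labeled as psoas muscle
    pixel_spacing_mm: in-plane pixel edge length in mm (assumed square)
    """
    area_mm2 = pixel_count * pixel_spacing_mm ** 2
    return area_mm2 / 100.0  # 100 mm^2 per cm^2

# Hypothetical segmentation: 2,000 psoas pixels at 0.8 mm spacing
print(cross_sectional_area_cm2(2000, 0.8))  # → 12.8
```

Averaging this value over the left and right psoas at the chosen level yields the "average psoas area" compared across groups in the Results.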

Results:
Images were analyzed in 120 patients, of whom 58% were male. The mean age was 53.6 ± 21.6 years and 37% were Older (n=44). The median average psoas area in NM Young males (n=47) was 18.6 cm2, compared to 12.9 cm2 in the MSM. For Young females (n=29), the medians were 10.6 cm2 in the NM and 9.2 cm2 in the MSM. Older males (n=23) had a median of 12.1 cm2 in the NM and 9.7 cm2 in the MSM. Older females (n=21) had a median of 8.4 cm2 in the NM and 6.6 cm2 in the MSM. (IQRs are shown in the box plot.) With stepwise selection modeling, we found that gender and psoas size each had a significant effect on nutritional status. An age-by-psoas-size interaction on nutritional status was observed but did not reach significance.

Conclusion:
Our data show that average psoas area is significantly decreased in patients diagnosed with malnutrition. Gender is also associated with a significantly increased risk of malnutrition. In trauma patients with admission CT scans, psoas area analysis can potentially be used to trigger a more aggressive nutrition supplementation plan upon admission, even before dietician assessment.

 

17.11 Intravenous Lidocaine as an Analgesic Adjunct in Trauma Patients: Is It Safe?

H. L. Warren2, P. Faris1, D. H. Bivins2, E. R. Lockhart2, R. Muthukattil2, D. Lollar1,2  1Virginia Tech Carilion School of Medicine, Roanoke, VA, USA 2Carilion Clinic, Roanoke, VA, USA

Introduction: Pain control in patients suffering traumatic injury can be challenging. Exposure to opioid pain medications can lead to prolonged dependence; therefore, regimens that reduce the amount of opioid analgesia are needed. We have identified no data regarding the use of intravenous lidocaine (IVL) in trauma populations. We sought to explore the safety of IVL in these patients.

 

Methods: We performed a single-institution retrospective review of trauma patients receiving IVL from 6/30/16-6/30/17. We extracted data on demographics, pre-admission substance use, injury severity, in-hospital analgesic use, PT/OT participation rates, and side-effect events. The lidocaine group was compared with a non-lidocaine control (C) group matched on age, sex, race, and ISS. Patients with length of stay <24 hours were excluded from the control group.

Results: 81 patients received IVL and were compared to 89 controls. Age, sex, race, and ISS did not differ. Significantly more patients receiving IVL had a history of narcotic and polysubstance use (p<0.01). Mortality was the same (p=1.0). Hospital length of stay was longer in the IVL group (7.5 vs 11.8 days, p=0.01). 38/81 patients received a bolus and all patients received a drip; the mean rate was 1.47 mg/hr. Duration of therapy ranged from 1-41 days, with a mode of 3 days. 28 side-effect events occurred in 23 of 81 IVL patients (28.4%). The most common side effect was delirium (14/28). The rate of side effects was higher in the elderly cohort (7/13, 53.8%) than in the adult cohort (16/68, 23.5%). There was no relationship between side effects and blood lidocaine levels. Side effects resolved with cessation of the medication. Side effects occurred in 5 of 89 control patients (5.6%).

Conclusion: Side effects of IVL were common but resolved with cessation of the drug. No mortality was attributed to IVL. With careful monitoring, IVL may be a useful adjunct for patients requiring high narcotic doses. Use of IVL in elderly patients requires caution. These results should be clarified with prospective evaluation.

 

17.09 The Optimal Length of Stay for 90-day Readmissions after Surgeries in Tricare

T. Andriotti1, E. Goralnick1, M. Jarman1, M. A. Chaudhary1, L. Nguyen3, P. Learn2, A. Haider1, A. Schoenfeld1  1Harvard Medical School, Surgery, Boston, MA, USA 2Uniformed Services University of the Health Sciences, Surgery, Bethesda, MD, USA 3Harvard Medical School, Vascular and Endovascular Surgery, Boston, MA, USA

Introduction:

Healthcare performance evaluators have prioritized reduction of length of stay (LOS) and readmissions as important measures of quality in health care. However, these two measures represent competing demands, as decreased LOS may result in increased unplanned readmissions. Our objective was to assess the optimal LOS, defined as that leading to the lowest readmission risk after discharge, for elective total knee arthroplasty.

Methods:

A retrospective, open cohort study was performed using claims from Tricare, the Department of Defense’s health insurance program, to identify all eligible adult patients (18-64 years) discharged after elective total knee arthroplasty from 2006-2014. To estimate the LOS associated with the lowest 90-day readmission risk, a generalized additive model with spline regression was generated to assess the predicted risk of readmission (graph 1), adjusted for age, sex, military rank as a proxy for socioeconomic status, any complications during the hospital stay, and Charlson comorbidity score. Readmissions included stays for all unplanned causes, identified by the principal diagnosis of the index (i.e., initial) inpatient stay, within 90 days after discharge from elective total knee arthroplasty.

Results:

11,517 patients (6,910 women and 4,607 men) with a mean [SD] age of 56.94 [6.34] years underwent the procedure within the study frame. 50.14% were white, 1.37% Asian, 9.31% black, 0.66% American Indian, 3.12% other, and 35.39% unknown. The median LOS was 3 days (IQR: 2-3 days), and the main causes of 90-day readmission were postoperative infection (0.81%), mechanical complication of other internal orthopedic devices (0.41%), and knee lymphedema (0.31%). The lowest risk of 90-day readmission was observed in patients discharged on postoperative day 1 (POD-1) (risk = 4.8%). Moreover, as LOS increased, the risk of readmission rose significantly, up to 9.73% for patients discharged on POD-8 (p=.0004).

Conclusion:

Reducing LOS to as little as one day may not increase the risk of readmission in patients whose clinical condition allows discharge on POD-1 after elective total knee arthroplasty. Hence, orthopedists may consider discharging patients in good post-surgical condition as soon as one day after elective total knee arthroplasty.

17.08 PREDICTORS FOR DIRECT ADMISSION TO THE OPERATING ROOM IN SEVERE TRAUMA

D. Meyer1, M. McNutt1, C. Stephens2, J. Harvin1, R. Cabrera4, L. Kao1, B. Cotton1,3, C. Wade3, J. Love1  1McGovern Medical School at UTHealth, Acute Care Surgery/Surgery/McGovern Medical School, Houston, TX, USA 2McGovern Medical School at UTHealth, Trauma Anesthesiology/Anesthesiology/McGovern Medical School, Houston, TX, USA 3McGovern Medical School at UTHealth, Center for Translational Injury Research/Surgery/McGovern Medical School, Houston, TX, USA 4Memorial Hermann Hospital, LifeFlight, Houston, TX, USA

Introduction:  Many trauma centers utilize protocols for expediting critical trauma patients directly from the helipad to the OR. Used judiciously, bypassing the ED can decrease resource utilization and the time to definitive hemorrhage control. However, criteria vary by center, rely heavily on physician gestalt, and lack evidence to support their use. With prehospital ultrasound and base excess increasingly available, opportunities may exist to identify risk factors for emergency surgery in severe trauma.

Methods: All highest-activation trauma patients transported by air ambulance between 1/1/16 and 7/30/17 were included retrospectively. Transfer, CPR, and isolated head trauma patients were excluded. Patients were dichotomized into two groups based on ED time: those spending <30min who underwent emergency surgery by the trauma team and those spending >60min. Prehospital and ED triage data were used to calculate univariable and multivariable odds ratios.

Results: 435 patients met enrollment criteria over the study period. 76 (17%) spent <30min in the ED before undergoing emergency surgery (median age 31y [21-45], 82% male, 41% penetrating). 359 (83%) patients spent >60min (median age 35y [21-48], 74% male, 15% penetrating).  HR, SBP, and BE values were similar in the two groups. Mortality was higher in <30min (32% vs 9%, p<0.001). Compared to >60min, the <30min group was more likely to have: (1) penetrating trauma with SBP<80mmHg or BE<-16 (OR 15.02, 95% CI 4.64-48.61); (2) penetrating trauma with positive FAST (OR 27.54, 95% CI 9.00-84.28); or (3) blunt trauma with a positive FAST and SBP<80mmHg or BE<-10 (OR 11.98, 95% CI 4.03-35.63). Collectively, these criteria predicted 39 (51%) of the <30min group.

Conclusion: Both blunt and penetrating trauma patients with positive FAST and profound hypotension or acidosis were much more likely to require emergency surgery within 30 minutes of hospital presentation and may not benefit from time spent in the emergency department.

 

17.07 Investigation of the Reliability of EMS Triage Criteria in a Level 1 Trauma Center

R. L. Dailey1, M. Hutchison2, C. Mason2, K. Kimbrough2, B. Davis2, A. Bhavaraju2, R. Robertson2, K. Sexton2, J. Taylor2, B. Beck2  1University of Arkansas for Medical Sciences, College of Medicine, Little Rock, AR, USA 2University of Arkansas for Medical Sciences, Trauma Surgery, Little Rock, AR, USA

Introduction: EMS triage criteria determine whether a patient receives the appropriate level of care. Increased mortality has been associated with triage of severely injured patients to hospitals that cannot provide definitive care, resulting in inter-hospital transfer (Nirula et al.). The limited existing research indicates that EMS criteria are relatively insensitive for identifying seriously injured patients (Newgard; van Rein et al.). We hypothesized that trauma triage category would correlate with ISS.

Methods: This is a retrospective observational study of trauma patients transported to the state’s only level 1 trauma center. 

Results: After excluding four patients for lack of assignment of a trauma triage category and 16 patients whose chief complaint was neither blunt nor penetrating, 516 patients underwent final analysis. Additionally, patients missing ISS, NISS, or TRISS scores were excluded from analyses involving those categories. When compared across trauma triage categories of minor (mn), moderate (md), and major (mj), ISS > 15 (p < .0001), mortality (p < .0001), and GCS category (p < .0001) differed significantly by chi-square test for independence. Likewise, ISS (mj: 16 ± 15.0, md: 9.4 ± 7.4, mn: 6.2 ± 6.1; p < .0001), NISS (mj: 21.6 ± 20.2, md: 11.7 ± 9.3, mn: 8.1 ± 8.3; p < .0001), and TRISS (mj: 0.81 ± 0.32, md: 0.98 ± 0.041, mn: 0.98 ± 0.028; p < .0001) differed significantly by ANOVA. Tukey post hoc analysis revealed significant differences (p < .0001) between the major and moderate and the major and minor categories in ISS, NISS, and TRISS, whereas the difference between the moderate and minor categories was not significant for ISS (p = 0.0129), NISS (p = 0.0415), or TRISS (p = 0.9998). Percentages of patients discharged by emergency department services were as follows: mj: 18.9%, md: 19.2%, mn: 25.3%.
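The chi-square test for independence used above can be computed directly from a contingency table of observed counts; a minimal sketch with hypothetical counts (not the study's data):

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table,
    given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical 2x2 table (e.g. ISS > 15 yes/no by two triage categories)
print(round(chi_square_stat([[10, 20], [20, 10]]), 2))  # → 6.67
```

The statistic is then compared against the chi-square distribution with (r-1)(c-1) degrees of freedom; for a 2x2 table the critical value at alpha = 0.05 is 3.84.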

Conclusion: Results indicate that the moderate and minor trauma triage categories are similar across ISS, NISS, and TRISS, implying a lack of sensitivity in the criteria to distinguish these categories. The scores (ISS, NISS, and TRISS) were more differentiated between the major and moderate and the major and minor categories. Results suggest that three trauma triage categories may not be needed, or that additional parameters are needed to better define the moderate and minor categories. In response to this study and the findings of Newgard and van Rein et al., future research should focus on improving prehospital trauma triage protocols.
 

17.06 Dedicated Intensivist Staffing Decreases Ventilator Days and Tracheostomy Rates in Trauma Patients

J. D. Young1, K. Sexton1, A. Bhavaraju1, M. K. Kimbrough1, B. Davis1, D. Crabtree1, N. Saied1, J. Taylor1, W. Beck1  1University of Arkansas for Medical Sciences, Division of Acute Care Surgery/Department of Surgery, Little Rock, AR, USA

Introduction:  Various physician staffing models exist for providing care to trauma patients requiring intensive care unit (ICU) care. Our institution went from an open ICU to a closed ICU in August 2017. The closed ICU assigned primary responsibility for the care of trauma patients to board-certified/board-eligible surgical intensivists. We hypothesized that this would decrease respiratory failure requiring tracheostomy in trauma patients.

Methods: After IRB approval, we performed a retrospective review of all patients in our trauma registry with >1 ventilator day (2,206 total patients). We then examined all National Trauma Data Set (NTDS) variables and procedures, including tracheostomy.

Results: There was no difference observed in gender, race, or mortality rates.  The open ICU was noted to have had a higher percentage of penetrating trauma (21.4% vs 13.4%, P = .0019).  The following data were observed.

Conclusion: A closed, surgical intensivist-run ICU resulted in a statistically significant difference not only in tracheostomy rates but also in ICU length of stay, hospital length of stay, and ventilator days. These changes were achieved while caring for a significantly sicker patient population, as evidenced by a higher Injury Severity Score (ISS).

 

17.05 Patients with Gunshot Wounds to the Torso Differ in Risk of Mortality Depending on Treating Hospital

A. Grigorian1, J. Nahmias1, T. Chin1, E. Kuncir1, M. Dolich1, V. Joe1, M. Lekawa1  1University of California, Irvine, Surgery, Orange, CA, USA

Introduction: The care provided and resulting outcomes may differ for patients with a gunshot wound (GSW) treated at an American College of Surgeons Level-I trauma center compared to a Level-II center. In addition, there has recently been an increase in the non-operative management (NOM) of GSWs in the right upper quadrant or those with a tangential trajectory. Previous studies have had conflicting results when comparing the risk of mortality for patients with GSWs treated at Level-I and Level-II centers; however, the populations studied were restricted geographically. We hypothesized that patients presenting after a GSW to the torso at a Level-I center would have a shorter time to surgical intervention (exploratory laparotomy or thoracotomy) than at a Level-II center in a national database. We also hypothesized that patients with GSWs managed operatively at a Level-I center would have a lower risk of mortality.

Methods: The Trauma Quality Improvement Program (2010-2016) was queried for patients presenting to a Level-I or II trauma center after a GSW. Patients with grade>1 for abbreviated injury scale of the head, neck and extremities were excluded to select for patients with injuries to the torso. A multivariable logistic regression analysis was performed.

Results: Of 17,965 patients with GSWs, 13,812 (76.8%) were treated at a Level-I center and 4,153 (23.2%) at a Level-II center. There was no difference in the median injury severity score (ISS) (14, p=0.55). The Level-I cohort had a higher rate of laparotomy (38.9% vs. 36.5%, p<0.001) with a shorter median time to laparotomy (49 vs. 55 minutes, p<0.001), but no difference in the rate (p=0.14) or time to thoracotomy (p=0.62). GSW patients at a Level-I center managed with laparotomy (11.5% vs. 13.8%, p=0.02) or thoracotomy (50.8% vs. 61.5%, p=0.01), and those with NOM (12.8% vs. 14.0%, p=0.04), had a lower rate of mortality. After adjusting for covariates, only patients undergoing thoracotomy (OR=0.67, CI=0.47-0.95, p=0.02) or those with NOM (OR=0.85, CI=0.74-0.98, p=0.03) at a Level-I center had a lower risk of death compared to Level-II.
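The adjusted odds ratios above come from the multivariable model, but the underlying unadjusted calculation from a 2×2 table is straightforward; a minimal sketch using a Wald confidence interval on the log odds ratio, with hypothetical counts rather than the study's data:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed deaths, b = exposed survivors,
    c = unexposed deaths, d = unexposed survivors."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = exp(log(odds_ratio) - z * se_log_or)
    upper = exp(log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

# Hypothetical counts, not the study's data:
or_, lo, hi = odds_ratio_ci(30, 70, 45, 55)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # → 0.52 0.29 0.94
```

An OR below 1 with a CI excluding 1, as in this hypothetical example, mirrors the pattern reported for thoracotomy and NOM at Level-I centers.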

Conclusion: Despite a similar ISS, patients presenting after GSWs to the torso at a Level-I center underwent laparotomy sooner than those treated at a Level-II center; although they showed a trend toward a lower mortality risk, this was not statistically significant. Patients with GSWs managed with thoracotomy or with NOM at a Level-I center had a lower risk of mortality compared to a Level-II center. Future prospective studies examining variations in practice, available resources, and surgeon experience are warranted to account for these differences and to determine the optimal pre-hospital trauma designation for this population.

 

17.04 Evaluating Failure-to-Rescue as a Center-Level Metric in Pediatric Trauma

L. W. Ma1, B. P. Smith1, J. S. Hatchimonji1, E. J. Kaufman1, C. E. Sharoky1, D. N. Holena1  1University Of Pennsylvania,Philadelphia, PA, USA

Introduction:  Failure-to-rescue (FTR) is defined as death after a complication and has been used to evaluate quality of care in adult patients after injury. The role of FTR as a quality metric in pediatric populations is unknown. The aim of this study was to define the relationship between rates of mortality, complications, and FTR at centers managing pediatric (<18 years of age) trauma in a nationally representative database. We hypothesized that centers with high mortality would have higher FTR rates but complication rates would be similar between high- and low-mortality centers.

 

Methods:  We performed a retrospective cohort study of the 2016 National Trauma Data Bank. We included patients <18 years with an Injury Severity Score (ISS) of ≥9. We excluded centers with a pediatric patient volume of <50 patients or that reported no complications. We calculated the complication, FTR, mortality, and precedence (the proportion of deaths preceded by a complication) rates for each center and then divided the centers into tertiles of mortality. We compared complication and FTR rates between high and low tertiles of mortality using the Kruskal-Wallis test.
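The center-level rates described above reduce to simple proportions; a minimal sketch of the calculations, using the cohort-wide counts reported in the Results section (function names are illustrative, not from the study):

```python
def complication_rate(n_complications, n_patients):
    """Proportion of patients with at least one complication."""
    return n_complications / n_patients

def ftr_rate(deaths_after_complication, n_complications):
    """Failure-to-rescue: deaths among patients who had a complication."""
    return deaths_after_complication / n_complications

def precedence_rate(deaths_after_complication, total_deaths):
    """Proportion of deaths preceded by a recorded complication."""
    return deaths_after_complication / total_deaths

# Cohort-wide counts from the Results section:
print(round(complication_rate(948, 25_792) * 100))  # → 4 (%)
print(round(ftr_rate(47, 948) * 100))               # → 5 (%)
```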

 

Results: In total, we included 25,792 patients from 171 centers in the study. Patients were 67% male, 65% white, had a median age of 10 (IQR 5-15), and had a median ISS of 10 (IQR 9-17), a median GCS motor score of 6 (IQR 6-6), and a median systolic blood pressure of 120 (IQR 109-132). Overall, 948 patients had at least one complication for an overall complication rate of 4% (center level 0-19%), while 47 patients died after a complication for an overall FTR rate of 5% (center level 0-60%). High-mortality centers had both higher FTR rates (8% vs 0.5%, p = .013) and higher complication rates (5% vs 3%, p = .011) than lower-mortality hospitals. The overall precedence rate was 15% with a median rate of 0% (IQR 0%-20%).

 

Conclusion: Both complication and FTR rates are low in the pediatric injury population. However, complication and FTR rates are both higher at higher-mortality centers. The low overall complication rates and precedence rates likely limit the utility of FTR as a valid center-level metric in this population, but further investigation into individual FTR cases may reveal important opportunities for improvement.

 

17.03 Optimizing Lower Extremity Duplex Ultrasound Screening After Injury

J. E. Baker1, G. E. Niziolek1, N. Elson1, A. Pugh1, V. Nomellini1, A. T. Makley1, T. A. Pritts1, M. D. Goodman1  1University Of Cincinnati,Department Of Surgery,Cincinnati, OH, USA

Introduction:
Venous thromboembolism (VTE) remains a significant cause of morbidity and mortality after traumatic injury. Multiple assessment strategies have been developed to determine which patients may benefit from lower extremity duplex ultrasound (LEDUS) screening for deep vein thrombosis (DVT). We hypothesized that screening within 48 hours of admission and in patients with a Risk Assessment Profile (RAP) score ≥8 would result in fewer LEDUS screening exams performed and a shorter time to VTE diagnosis without increasing the rate of VTE-related complications.

Methods:
A retrospective review was conducted on trauma patients admitted from 7/1/2014-6/30/2015 and from 7/1/2016-6/30/2017. In 2014-2015, patients with a RAP score ≥5 underwent weekly screening LEDUS exams starting on hospital day 4. By 2016-2017, the protocol had been changed to start screening patients with a RAP score ≥8 by hospital day 2. Patients were identified based on the aforementioned criteria, and demographic data, injury characteristics, LEDUS exam findings, chemoprophylaxis type, and time of initial administration were collected.

Results:
In 2014-2015, a total of 3920 patients underwent evaluation by the trauma team, while in 2016-2017, a total of 4213 patients underwent trauma evaluation (Table). Fewer LEDUS exams were performed in 2016-2017. Patients who underwent screening LEDUS exams in 2016-2017 had significantly higher RAP scores and ISS. No significant difference was seen in the number of patients presenting with DVT or pulmonary embolism (PE) between the two cohorts. DVTs were most often identified on the first LEDUS exam in both cohorts. Of patients in whom a DVT was diagnosed on screening LEDUS exam, the 2016-2017 cohort had a significantly higher RAP score (12 vs. 10), a shorter time to first duplex (1 vs. 3 days), and a shorter time to DVT diagnosis (2 vs. 4 days). There was no significant difference in the time to initiation of VTE prophylaxis, the number of DVTs found, the type of DVTs found, or the treatment of the DVTs. In patients found to have a PE, no significant differences were demonstrated in RAP score, time to VTE prophylaxis, time to PE, percentage of patients with both a DVT and a PE, or indications for the duplex exams performed.

Conclusion:
By changing LEDUS screening to patients with a RAP score ≥8 and to within 48 hours of admission, fewer duplexes were performed and the majority of DVTs were found earlier, without a difference in DVT location or PE incidence. Refinement of lower extremity duplex ultrasound screening protocols decreases over-utilization of hospital resources without compromising patient outcomes.
 

17.02 Are We Failing Trauma Patients with Serious Mental Illness? A Survey of Level 1 Trauma Centers

D. Ortiz1, J. V. Barr1, J. A. Harvin1, M. K. McNutt1, L. Kao1, B. A. Cotton1  1McGovern Medical School at UTHealth,Acute Care Surgery,Houston, TX, USA

Introduction: Psychiatric illness is an independent risk factor for trauma and recidivism. Budget cuts have steadily decreased funding for public hospitals and have resulted in states closing public psychiatric inpatient beds. It is unclear if and how these trends have affected resources for trauma patients with preexisting mental illness. The purpose of this study was to gauge perceptions of needed and currently available resources for this patient population.

Methods:  A 10-question survey was developed to capture the volume of psychiatric patients, available psychiatric services, and perceived need for resources. The questions were inspired by discussions with three independent psychiatrists with trauma patient practices. The survey was peer reviewed and modified by two separate trauma researchers. It was sent to 27 trauma surgery colleagues at different Level-1 trauma centers across the United States using a SurveyMonkey email link. Responses were anonymous and descriptive analyses were performed.

Results: 22 of 27 surgeons responded (81% response rate). Of the responding centers, 10 (47.6%) admitted 1-5 patients with preexisting serious mental illness weekly, while 6 (27.3%) and 5 (22.7%) admitted 6-10 and >10, respectively; one center did not respond to this question. Fourteen of 22 (63.6%) reported having acute situational support services available for trauma patients. Ten (45.5%) respondents did not know how many psychiatry consultants were available at their institution, while a single center had one consultant available; 6 (27.3%) and 5 (22.7%) had 2-4 and 5 or more consultants, respectively. Twelve (54.6%) surgeons reported having no designated outpatient follow-up for acute or chronic psychiatric issues for trauma patients, while 2 (9.1%) did not know. Sixteen (72.7%) stated that expanded psychiatric services are needed at their trauma center, while 4 (18.2%) did not know, one (4.55%) said no, and one (4.55%) had not thought about it. The final question allowed respondents to choose multiple areas of perceived need for improvement in psychiatric care for the trauma patient (Table).

Conclusion: Trauma patients frequently present with preexisting serious mental illness. Over half of the surveyed surgeons reported having no outpatient follow-up for these patients, and almost three quarters perceived the need for expansion of psychiatric services. Strikingly, many respondents were unaware of the psychiatric resources available at their centers, while a few had not thought about the challenges in treating this vulnerable patient population. In addition to a lack of resources, these findings highlight an overlooked gap in high quality, patient-centered trauma care. 

 

16.20 Hospital Acquired Conditions after Liver Transplantation

Z. Moghadamyeghaneh1, A. Masi1, R. W. Gruessner1  1State University of New York Downstate,Surgery,Brooklyn, NEW YORK, USA

Introduction: Hospital Acquired Conditions (HAC) are used by the Centers for Medicare and Medicaid Services to define hospital performance measures that dictate payments and penalties. However, pre-operative patient comorbidity may significantly influence HAC development.

Methods: The NIS database (2002-2014) was used to investigate HAC in patients who underwent liver transplantation. Multivariable logistic regression was used to identify risk factors for HAC.

Results: We identified a total of 15,048 patients who underwent liver transplantation during 2002-2014. Of these, 190 (1.3%) had a report of an HAC. There was a steady increase in the rate of HAC after liver transplantation in the US over the 13 years of the study (Figure 1). HAC were associated with increased mean hospitalization length (56 vs. 21 days, P<0.01) and hospital charges ($807,506 vs. $355,603, P<0.01), but not mortality (11.6% vs. 5%, AOR: 1.14, P=0.51). The most frequent HAC were vascular catheter-associated infection [121 (0.8%)], pressure ulcer stage III/IV [24 (0.2%)], catheter-associated urinary tract infection [21 (0.1%)], and falls and trauma [19 (0.1%)]. The factors most strongly correlated with HAC reflected significant pre-transplantation comorbidity: major or extreme loss of function pre-operatively (AOR: 6.39, P=0.01), high or extreme mortality risk before transplantation (AOR: 2.36, P=0.03), and preoperative weight loss (AOR: 1.76, P<0.01), along with the hospital factor of private vs. governmental hospital (AOR: 2.50, P<0.01). The hospital factors of bed size [large vs. small] (AOR: 1.46, P=0.17) and teaching vs. non-teaching status (AOR: 1.14, P=0.89) were not significantly associated with HAC.

Conclusion: The rate of HAC for liver transplantation (1.3%) is higher than the overall reported rate of HAC for GI procedures. There has been a steady increase in the rate of HAC since 2002, which may be related to the adoption of the MELD score for liver transplantation. Multiple non-modifiable patient factors (preoperative loss of function, high or extreme mortality risk, weight loss, etc.) are associated with HAC, so the rate of HAC is not a reliable measure of hospital performance. Vascular catheter-associated infection, the most common HAC after liver transplantation, may be avoidable. Considering that private hospitals have an increased HAC risk compared to governmental hospitals, improvement in such hospital settings may decrease the rate of these complications.

 

17.01 To Close or Not to Close – Skin Management after Trauma Laparotomy

J. Woloski1, S. Wei1, G. E. Hatton1, J. A. Harvin1, C. E. Wade1, C. Green1, V. T. Truong1, C. Pedroza1, L. S. Kao1  1McGovern Medical School at UTHealth,Trauma Surgery,Houston, TX, USA

Introduction:  Skin management after fascial closure may influence the risk of superficial surgical site infection (SSSI) development, which occurs in up to 25% of patients after emergent trauma laparotomy. Leaving skin open is thought to decrease SSSI risk, but increases wound care burden and results in poor cosmesis. Given the lack of high-quality evidence guiding skin management after trauma laparotomy, it is unknown whether skin incisions are being closed or left open appropriately. We aimed to characterize skin management in adult trauma laparotomy patients and to determine whether skin closure strategy is associated with SSSI.

Methods: We performed a retrospective cohort study of a trauma laparotomy database between 2011 and 2017 at a high-volume, Level-1 trauma center. SSSI diagnoses were determined by chart review according to the Centers for Disease Control and Prevention definition. Patients who never achieved fascial closure and those who died prior to the first recorded SSSI (on postoperative day 2) were excluded. Open versus closed skin management was determined by reviewing operative reports: open skin entailed use of gauze packing or a wound VAC, and closed skin entailed closure with staples (with or without wicks) or sutures. Univariate and multivariable analyses were performed; the multivariable model included the variables that generated the best area under the curve (AUC). Inverse probability weighted propensity scores (IPWPS) were used to compare each patient's predicted probability of open versus closed skin management with the skin management strategy actually received.
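The inverse probability weighting step can be sketched as follows; this is a generic illustration with made-up propensity values, not the study's model or covariates:

```python
def ipw_weights(treated, propensity):
    """Inverse-probability-of-treatment weights: a patient whose skin was
    closed (treated=1) gets weight 1/p; one left open gets 1/(1-p),
    where p is the predicted probability of skin closure."""
    return [t / p if t == 1 else 1.0 / (1.0 - p)
            for t, p in zip(treated, propensity)]

# Hypothetical example: two closed-skin patients, one open-skin patient
closed = [1, 1, 0]           # 1 = skin closed, 0 = skin left open
p_close = [0.8, 0.5, 0.2]    # predicted probability of skin closure
print(ipw_weights(closed, p_close))  # approximately [1.25, 2.0, 1.25]
```

Patients with a propensity score near 0.9 or higher for closure who were indeed closed (as 75% were) receive weights close to 1, so the weighted comparison mostly rebalances the minority of discordant cases.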

Results: Of 1322 patients, 309 (23%) received open skin management, while 1013 (77%) had skin closure. The overall SSSI rate was 6%. On univariate analysis, there was no significant difference in the development of SSSI between the open and closed skin groups (8% versus 6%, p = 0.12). On adjusted analysis, damage control laparotomy, wound class 2, skin closure, large bowel resection, and higher body mass index were significantly associated with SSSI (Table); skin closure was associated with 3-times higher odds of SSSI development. IPWPS assignment showed that 75% of patients with closed skin had a propensity score of >0.9 for skin closure. In contrast, 11% of patients with open skin had a propensity score of <0.1 for skin closure.

Conclusion: Even though the rate of SSSI was only 6%, almost 25% of trauma patients had initial open skin management. Although there was consistency in the use of skin closure based on patient and wound characteristics, skin closure was associated with higher odds of SSSI. Better predictive models are needed to accurately stratify patients’ risk for SSSI after emergent trauma laparotomy to determine optimal skin management strategy.

16.19 Optimizing Post-operative Triage after Major Surgery: A Model for Admission to Critical Care Units

F. M. Carrano1,2, Y. Fang5, D. Wang6, S. E. Sherman4, D. V. Makarov3,7, S. Cohen2, E. Newman1,2, H. Pachter2, M. Melis1,2  1VA New York Harbor Healthcare System,Department Of Surgery,New York, NY, USA 2New York University School Of Medicine, NYU Langone Medical Center,Department Of Surgery,New York, NY, USA 3New York University School Of Medicine, NYU Langone Medical Center,Department Of Urology,New York, NY, USA 4New York University School Of Medicine,Department Of Population Health,New York, NY, USA 5New Jersey Institute of Technology,Department Of Mathematical Sciences,Newark, NJ, USA 6Northwell Health,Department Of Surgery,New York, NY, USA 7VA New York Harbor Healthcare System,Department Of Urology,New York, NY, USA

Introduction:

Currently, there is a lack of standardized, evidence-based criteria to determine which patients qualify for admission to a critical care unit (ICU) after major surgery. Under-triage to a regular floor can result in failure to recognize serious post-surgical complications that could have been prevented or treated expeditiously in the appropriate setting, while over-triage can place unnecessary strain on vital healthcare resources, not to mention the cost of such miscalculations. The goal of this study was to identify objective criteria and create algorithms that may enhance post-operative triage to the appropriate level of care following major surgery.

Methods:
We performed a retrospective analysis of patients undergoing major ENT, general, urological, and vascular surgery between 2014 and 2015 at a major VA Medical Center. Necessary ICU admissions were identified on the basis of any of 15 objective clinical events commonly observed in the post-operative period (e.g., use of pressors, re-intubation, sustained hypotension, cardiac arrest). We used 83 clinical variables and risk scores (including the Charlson Comorbidity Index, Surgical Apgar Score, and Mortality Probability Model) to generate a Decision Tree Model (DTM) that would objectively establish criteria for which patients are appropriate candidates for ICU admission after surgery. The overall quality and accuracy of the model were measured by examining the test misclassification rate.

Results:
Our study included a total of 358 patients (96% male, mean age 67 years). Of those, 142 met at least one of the 15 objective criteria for ICU admission. Reliance on the DTM for post-operative triage would have resulted in under-triage in 29 patients and over-triage in 21 patients, for a total mistriage rate of 13.97%. In comparison to the mistriage rate based on clinical judgment alone (63% in our own experience), the DTM resulted in a significantly lower mistriage rate. The sensitivity and specificity of the DTM were 79.5% and 90.2%, respectively; the positive and negative predictive values were 84.3% and 87.0%, respectively. The most relevant variables within the DTM included functional status, intra-operative blood loss, intra-operative administration of blood products, presence of malignancy, and patient ethnicity.
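The accuracy figures above follow directly from the under- and over-triage counts; a minimal sketch that reproduces them to within rounding (variable names are ours, not the study's):

```python
def triage_metrics(n_total, n_need_icu, under_triage, over_triage):
    """Derive the mistriage rate, sensitivity, specificity, PPV, and NPV
    from under-triage (missed ICU needs) and over-triage counts."""
    tp = n_need_icu - under_triage           # correctly triaged to ICU
    fn = under_triage                        # needed ICU, sent to floor
    fp = over_triage                         # sent to ICU unnecessarily
    tn = (n_total - n_need_icu) - over_triage
    return {
        "mistriage": (fn + fp) / n_total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts from the Results section: 358 patients, 142 meeting ICU criteria,
# 29 under-triaged and 21 over-triaged by the decision tree model.
m = triage_metrics(358, 142, 29, 21)
print({k: round(v * 100, 1) for k, v in m.items()})
# → {'mistriage': 14.0, 'sensitivity': 79.6, 'specificity': 90.3, 'ppv': 84.3, 'npv': 87.1}
```

The small differences from the reported 79.5%, 90.2%, and 87.0% are consistent with truncation rather than rounding of the same ratios.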

Conclusion:
Use of clinical judgment alone for post-operative ICU admission after major surgery remains highly inaccurate and is associated with excessive mistriage rates. Statistical models such as the DTM have, in our hands, outperformed clinical judgment in the accuracy of post-operative triage. In the near future, such models, powered by artificial intelligence platforms, might be implemented in automated algorithms to enhance post-operative decision making.

16.18 Intra-Operative Bile Spillage as a Prognostic Factor for Gallbladder Adenocarcinoma

A. M. Blakely1, P. Wong1, P. Chu2, S. G. Warner1, G. Singh1, Y. Fong1, L. G. Melstrom1  1City Of Hope National Medical Center,Department Of Surgery,Duarte, CA, USA 2City Of Hope National Medical Center,Department Of Pathology,Duarte, CA, USA

Introduction:  Gallbladder adenocarcinoma is often incidentally identified on pathology following cholecystectomy for presumed benign indications. Intra-operative gallbladder rupture risks peritoneal seeding of disease. We hypothesized that bile spillage would be a negative prognostic factor after index cholecystectomy in patients with gallbladder adenocarcinoma.

Methods:  A retrospective chart review of all patients treated at a cancer center from 2009 to 2017 with histologically confirmed gallbladder adenocarcinoma was performed. Operative and pathology reports were compared. Patient, disease, and treatment factors were analyzed in terms of disease recurrence and overall survival.

Results: Of 79 patients with gallbladder adenocarcinoma, 66 (84%) had both operative and pathology reports available. Median patient age was 68 years (range 33 to 95), and 71.2% were female. Tumor stage was T1 in 7 (11%), T2 in 25 (38%), and T3 in 35 (53%); node stage was N0 in 22 (33%), N1+ in 26 (39%), and Nx in 18 (27%). Hepatobiliary operations performed included cholecystectomy (CCY) alone (n=34, 59%), CCY with combined or interval partial hepatectomy (n=27, 36%), and CCY with common bile duct resection (n=5, 5%). Operations were performed with palliative intent for advanced disease in 10 patients (15%). Full-thickness rupture was significantly more likely to be documented in pathology reports (n=20 of 66, 30%) than in operative reports (n=15 of 66, 23%; p<0.0001). Median recurrence-free survival was 11 months (interquartile range [IQR] 5 to 28); median overall survival was 16 months (IQR 10 to 31). Seven patients with T1 or T2 lesions had peritoneal recurrence, of whom 4 (57%) had pathology-confirmed rupture. A subset Cox proportional hazards regression of N0 and Nx patients, analyzing patient age, grade, tumor stage, and pathology-confirmed rupture (Table 1), found that only rupture was associated with overall survival at 5 years (hazard ratio 3.5, 95% confidence interval 1.1-12.1, p=0.037).

Conclusion: Surgical resection in gallbladder adenocarcinoma patients with node-negative disease limited to the gallbladder represents an opportunity for long-term survival. Rupture of the gallbladder wall during cholecystectomy risks seeding of the abdominal cavity, thereby upstaging disease and potentially diminishing overall survival. Explicit documentation of intra-operative spillage is critical, as it may have implications for outcomes as well as for consideration of up-front systemic therapy prior to definitive resection.

16.17 Surgical Judgment and Mortality: Analysis by a Critique Algorithm-Based Database and Morbidity Review

A. A. Antonacci1, S. Dechario1, G. Husk1, G. Stoffels3, C. L. Antonacci2, M. Jarrett4  1North Shore University And Long Island Jewish Medical Center,Manhasset, NY, USA 2Tulane University School Of Medicine,New Orleans, LA, USA 3Feinstein Institute for Medical Research,Manhasset, NY, USA 4Donald and Barbara Zucker School of Medicine at Northwell/Hofstra,Manhasset, NY, USA

Introduction: Morbidity and Mortality conference (MMC) review, combined with a standardized critique algorithm and relational database, provides valuable data for surgical quality. We studied the complications related to mortality and the relationship between mortality and management errors.

Methods: 68,993 procedures were performed at two university-based medical centers. We collected morbidity/mortality reports from a total of 1045 complication cases, comprising 268 with mortality and 777 without mortality. Complications, mortality, Clavien-Dindo scores, management errors, and the roles of the physician team and patient disease were studied.

Results: Eighteen of the twenty most common complications were associated with significantly higher mortality rates (p<0.0001; Table 1). In 885 cases, the physician team (41%), patient disease (26%), or both (26%) were identified as responsible for complications. Mortality rates were higher in complications that involved patient disease than in those that did not (40% vs. 7%; p<0.001). In cases with errors and one or more complications, each additional complication was associated with a 30% increase in the odds of death (p<0.0001). Almost all complications without management errors involved disease (236/244; 97%), whereas a significantly lower proportion of complications with management errors involved disease (259/641; 40%; p<0.0001). Among complications not involving patient disease, mortality related to judgment errors was significantly higher than mortality related to non-judgment errors (32% vs. 12%; p<0.0001). In contrast, mortality related to technical errors was significantly lower than mortality related to non-technical errors (11% vs. 29%; p<0.0001).
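Because the 30% figure is an odds ratio from a logistic-type model, it compounds multiplicatively with each additional complication; a short numeric illustration (the 1.30 odds ratio per complication is the only input taken from the abstract):

```python
def odds_multiplier(or_per_complication, extra_complications):
    """Multiplicative change in the odds of death for a given number of
    additional complications, under a logistic model."""
    return or_per_complication ** extra_complications

# Each extra complication multiplies the odds of death by 1.30,
# so three extra complications roughly double the odds:
print(round(odds_multiplier(1.30, 1), 2))  # → 1.3
print(round(odds_multiplier(1.30, 3), 2))  # → 2.2
```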

Conclusion:

This project demonstrates the feasibility of combining MMC review with a standardized critique algorithm-based database to provide data on the frequency of complications associated with mortality and on the significant relationship between mortality and judgment error.