12.04 Early Discussion of Goals of Care for the Dying Trauma Patient

B. T. Young1,2, J. K. Bhangu1,2, S. J. Zolin1,2, S. E. Posillico1,2, H. A. Ladhani1,2, C. W. Towe2,3, J. A. Claridge1,2, V. P. Ho1,2  1MetroHealth Medical Center, Division Of Trauma, Critical Care, Burns And Emergency General Surgery, Cleveland, OH, USA; 2Case Western Reserve University School Of Medicine, Cleveland, OH, USA; 3University Hospitals Cleveland Medical Center, Thoracic & Esophageal Surgery, Cleveland, OH, USA

Introduction:
Traumatic injury affects patients of varying age groups and comorbidity profiles and carries a significant in-hospital mortality risk of 10-20%. The American College of Surgeons palliative care guidelines recommend a formal goals-of-care conversation (GOCC) within 72 hours of admission for trauma patients with a prognosis of death, permanent disability, or uncertainty of either. At our institution, no such protocol exists, and the timing of GOCC is therefore provider dependent. We hypothesized that a GOCC within 3 hospital days (early GOCC) in moribund patients would be associated with earlier transition to comfort care status, fewer deaths during a code, and shorter duration of intensive care treatment.

Methods:
We performed a retrospective analysis of all adult primary patients of the trauma surgery service at an academic Level 1 trauma center from 12/2014 to 12/2017 who died during their index admission. Patients who died within 24 hours of admission or were transferred to another service prior to death were excluded. A GOCC was defined as any documented discussion between a physician and the patient and/or surrogate regarding prognosis or goals of care. Demographics, injury characteristics including arrival Glasgow Coma Scale (GCS), injury severity score (ISS), and abbreviated injury scale Head (AIS-Head) score, ventilator days, comfort care status, length of stay (LOS), and intensive care length of stay (ICU LOS) were collected. Bivariate analysis was performed to compare patients with early GOCC to those with later or no conversations.

Results:
177 patients met inclusion criteria. Patients were 68% male; 63% were over age 65. 90% were injured in a fall or other accident, while 10% were injured by gunshot wound or assault. Median ISS was 26 (IQR 18-32). Median LOS was 6 days (IQR 4-12), and median time to first GOCC was 2 days (IQR 1-5). 43% of patients had an early GOCC. Compared to patients with later or no GOCC, patients who received an early GOCC had lower median GCS on arrival (6 vs 13, p=0.004) but did not differ in age, gender, ISS, or AIS-Head. Early GOCC was associated with reduced hospital LOS (5 vs 11 days, p<0.001), ICU LOS (5 vs 11 days, p<0.001), ventilator days (4 vs 8.5 days, p<0.001), and deaths during a code (1.2% vs 13.2%, p=0.001). Among patients who transitioned to comfort care status (n=130, 73.4%), these transitions took place sooner for patients with early GOCC (5 vs 13 days, p<0.001).

Conclusion:
In patients who died after trauma, early GOCC was associated with reduced length of stay, ventilator days, death during a code, and time to comfort care transition. Based on these data, we recommend that early GOCC be routine in the trauma ICU setting. We plan to prospectively study a new palliative care protocol mandating early GOCC for critically injured patients at our trauma center.
 

12.03 Dual Pre-Injury Antiplatelet Therapy as a Risk Factor in Traumatic Brain Injury Patients

M. Crawford1, S. Mansour1,2, A. Tymchak1,2, A. A. Fokin1, A. Zuviv1, J. Wycech1,3, I. Puente1,2,3,4  1Delray Medical Center, Trauma Services, Delray Beach, FL, USA; 2Florida Atlantic University, College Of Medicine, Boca Raton, FL, USA; 3Broward Health Medical Center, Trauma Services, Fort Lauderdale, FL, USA; 4Florida International University, College Of Medicine, Miami, FL, USA

Introduction:
Use of antiplatelet therapy (APT) has become widespread due to an aging population and the increased incidence of cardiovascular disease. While effective at mitigating the risks of cardiovascular disease, APT has been shown to increase the risk of hemorrhage in traumatic brain injury (TBI). We hypothesized that TBI patients taking dual APT (Aspirin and Clopidogrel) would have more adverse events than TBI patients taking single APT, Aspirin (ASA) or Clopidogrel only.

Methods:
This IRB-approved retrospective cohort study included 346 TBI patients on pre-injury ASA, Clopidogrel, or both, ages 17 to 101, who were delivered to a Level 1 trauma center between 1/1/2015 and 3/30/2018. Patients were divided into 2 groups by pre-injury APT: Group A was taking either ASA only or Clopidogrel only (n=259), and Group B was taking both ASA and Clopidogrel (n=87). Patients were excluded if they were also on anticoagulants. Analyzed variables included age, Injury Severity Score (ISS), Glasgow Coma Scale (GCS) score, Rotterdam computed tomography (CT) score, Marshall CT score, incidence of intracranial hemorrhage (ICH), midline shift, platelet function and status, platelet transfusion, need for neurosurgical intervention, days of mechanical ventilation (DMV), Intensive Care Unit length of stay (ICULOS), hospital LOS (HLOS), re-admission rate, and mortality.

Results:
Compared to Group A, Group B had higher mean ISS (12.1 vs 14.1; p=0.02), incidence of ICH (84.2% vs 93.1%; p=0.04), midline shift (5.8% vs 12.6%; p=0.04), Platelet Function Assay (PFA)-100 epinephrine (173.8 vs 224.0; p=0.001), TEG-Platelet Mapping (PM) % inhibition arachidonic acid (55.2 vs 69.2; p=0.04), TEG-PM % inhibition adenosine diphosphate (ADP) (36.4 vs 51.8; p=0.003), and need for neurosurgical intervention (2.7% vs 11.5%; p=0.001) (Fig. 1).

The two groups had comparable mean age (81.8 vs 80.3), GCS (14.3 vs 13.8), Rotterdam score (2.6 vs 2.6), Marshall score (1.1 vs 1.2), platelet count on admission (208.6 vs 222.7), PFA-100 ADP (131.9 vs 163.7), Thromboelastography Maximum Amplitude (TEG-MA) (67.1 vs 67.9), Partial Thromboplastin Time (26.3 vs 25.7 seconds), Prothrombin Time (1.1 vs 1.2 seconds), incidence of platelet transfusion (42.0% vs 39.0%), DMV (5.1 vs 6.1 days), ICULOS (2.9 vs 3.9 days), HLOS (3.7 vs 4.2 days), readmission rate (5.5% vs 4.6%) and mortality (9.7% vs 12.6%), with all p>0.09.

Conclusion:
Patients on dual APT had increased platelet dysfunction, increased incidence of intracranial hemorrhage, and greater need for neurosurgical intervention compared to patients on single APT. These findings suggest that increased precautions should be taken with this category of patient.

12.02 Randomized Clinical Trial of Laypeople’s Ability to Apply Different Tourniquets after B-Con Training

J. C. McCarty1,2, J. P. Herrera-Escobar1, Z. G. Hashmi1, E. De Jager1, M. A. Chaudhary1, A. H. Haider1, C. J. Ezeibe1, E. J. Caterson2, E. Goralnick1,3  1Brigham And Women’s Hospital, Center For Surgery And Public Health, Department Of Surgery, Harvard Medical School, Boston, MA, USA; 2Brigham And Women’s Hospital, Division Of Plastic Surgery, Boston, MA, USA; 3Brigham And Women’s Hospital, Department Of Emergency Medicine, Boston, MA, USA

Introduction: Multiple national initiatives advocate for laypeople to be trained in hemorrhage control to decrease preventable deaths in trauma. The American College of Surgeons Bleeding Control Basic (B-Con) course is the most common hemorrhage control training in the world and teaches participants how to use a Combat Application Tourniquet (CAT). There are, however, multiple types of commercial tourniquets available, and programs across the country have placed different tourniquet types in publicly available bleeding control kits. We compared laypeople’s ability to apply different commercial tourniquets and to improvise a tourniquet immediately after taking the B-Con course.

 

Methods: Participants were assessed immediately after completing the B-Con course on their ability to apply different tourniquets to a HapMed trainer, a high-fidelity mannequin that simulates bleeding, which decreases as the tourniquet tightens, and records the pressure applied by the tourniquet and the estimated blood loss (EBL). Every participant applied all 5 tourniquet types in a randomized sequence: CAT, Sof-Tourniquet (Sof-T), Stretch-Wrap-And-Tuck (SWAT) tourniquet, Rapid Application Tourniquet (RAT), and an improvised tourniquet (available gauze, shoestring, stick/windlass, and belt). The primary outcome was correct tourniquet application, defined as pressure >250 mmHg and time <2 minutes. Secondary outcomes were pressure applied by the tourniquet and EBL if applied correctly. Paired univariate tests were used to compare each tourniquet type to the CAT as an internal control for all outcomes.

 

Results: 61 laypeople were evaluated. Participants correctly applied the CAT tourniquet at a significantly higher rate than all other tourniquet types (p<0.001) (Figure). For the improvised tourniquet, 11 people did not use a windlass and their success rate was 0%. The CAT applied more pressure than all other tourniquet types (mean±SD: 403.5±103.1 mmHg, p<0.001 for each comparison), followed by the Sof-T (326.8±162.2 mmHg), improvised tourniquet (180.8±169.5 mmHg), SWAT (126.0±131.9 mmHg), and RAT (107.0±120.3 mmHg). Among tourniquets applied correctly, the CAT had the lowest EBL (209.0±76.5 ml, p<0.001 for each comparison), followed by the Sof-T (248.4±108.1 ml), RAT (315.1±108.3 ml), SWAT (339.9±187.1 ml), and improvised tourniquet (477.0±174.0 ml).

 

Conclusion: B-Con effectively teaches participants only the single tourniquet type taught in the course. As new tourniquets come to market, this study raises concern that those trained today will not know how to use new devices; they learn a single skill rather than the underlying principle. This study highlights a significant limitation of current guidelines and training in an area ripe for improvement and innovation. (Clinicaltrials.gov NCT03538379)

12.01 Falls at Skilled Nursing Facilities Lead to More Serious Lower Extremity Injuries Compared to Home

B. J. Hasjim2, A. Grigorian2, C. M. Kuza3, S. Schubl2, C. Barrios2, T. Chin2, J. Nahmias2  2University Of California – Irvine, Department Of Surgery, Orange, CA, USA; 3University Of Southern California, Department Of Anesthesiology And Critical Care, Los Angeles, CA, USA

Introduction:
Nearly 60% of residents at skilled nursing facilities (SNFs) suffer a ground-level fall (GLF) at least once per year, resulting in serious injuries and increased healthcare costs. We sought to examine differences in injuries, such as traumatic brain injury (TBI) and lower extremity (LE) injury, between patients suffering GLFs at SNFs and those at residential homes. We hypothesized that SNF residents have an increased risk for serious TBI and LE injury after a GLF compared to those at home.

Methods:
The Trauma Quality Improvement Program database was used to identify patients sustaining a GLF at a private residence or SNF between 2015-2016, and the incidence of serious TBIs and LE injuries, defined as an abbreviated injury scale score >3, was determined. A multivariable logistic regression model was used to determine the risk for serious head and LE injury at SNFs vs. at home.
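
For illustration only, this kind of risk-adjusted comparison might be sketched as follows in Python; the data here are synthetic stand-ins, and the variable names and effect sizes are hypothetical, not the study's:

```python
# Hedged sketch of a multivariable logistic regression like the one described
# above. Synthetic data only; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({col: rng.integers(0, 2, n)
                   for col in ["snf", "female", "esrd", "smoker", "dementia"]})
# Simulate the outcome with an assumed SNF effect (illustrative only).
logit_p = -1.5 + 0.5 * df["snf"] + 0.2 * df["dementia"]
df["serious_le_injury"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("serious_le_injury ~ snf + female + esrd + smoker + dementia",
                  data=df).fit(disp=False)
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```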

Results:
Of 15,873 patients with GLFs, 14,306 (90.1%) fell at home and 1,567 (9.9%) fell at a SNF. Patients with GLFs at a SNF were older (median age 82 vs. 73 years, p<0.001). Compared to those at home, patients with GLFs at SNFs had a lower median injury severity score (9 vs. 10, p<0.001) and higher rates of dementia (45.5% vs. 9.1%, p<0.001), congestive heart failure (13.2% vs. 7.6%, p<0.001), diabetes (24.3% vs. 21.5%, p=0.01), and chronic obstructive pulmonary disease (13.7% vs. 11.6%, p=0.01). GLFs at SNFs resulted in a higher incidence of femur fracture (55.1% vs. 38.9%, p<0.001) but a lower incidence of TBI (28.0% vs. 33.4%, p<0.001). After adjusting for covariates (female, end-stage renal disease, smoker, dementia), patients falling at SNFs were at an increased risk of sustaining a serious LE injury (OR=1.66, CI=1.48-1.87, p<0.001) but showed no difference in serious TBI (p=0.11).

Conclusion:
Compared to patients falling at home, those falling at a SNF have a 66% higher risk for a serious LE injury but a similar risk for a serious TBI. Femur fractures were the most common orthopedic injury overall. Future studies evaluating the implementation of preventative measures, such as environmental safeguards and pharmacologic/physical therapies, to reduce LE injuries at SNFs are warranted.
 

11.20 Correlation Between Google Trends and Temporal Seasonal Injury Patterns at a Regional Trauma Center

V. Yellapu1,3, A. Gayner4, A. Green2,3, P. G. Thomas2, R. Wilde-Onia2, S. P. Stawicki2,3  1St. Luke’s University Health Network, Department Of Orthopaedics, Bethlehem, PA, USA; 2St. Luke’s University Health Network, Department Of Surgery, Bethlehem, PA, USA; 3St. Luke’s University Health Network, Department Of Research & Innovation, Bethlehem, PA, USA; 4Pennsylvania State University, State College, PA, USA

Introduction: Traumatic injuries tend to have specific incidence patterns over time. Over the past two decades, “big data” trending capabilities have become available, revealing correlations between various natural and man-made phenomena and “Internet search activity” (ISA). However, there is little information on the relationship between Google Trends (GT)-ISA and actual traumatic injury patterns. We sought to compare injury frequency data between our Level I Trauma Center (L1TC) and GT-ISA.

Methods: We surveyed >100 injury types for “seasonal trending” using GT-ISA data and identified 12 major categories (Table 1). GT-ISA frequencies were normalized (using yearly medians) and arranged by composite calendar months (2004-2017). L1TC data were similarly normalized and organized (2000-2017). For each injury category, composite monthly data were compared for correlation (Pearson), data distribution (Kolmogorov-Smirnov), and measures of central tendency. Significance was set at p<0.05.
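
As a rough illustration of the normalization and comparison steps, a Python sketch on synthetic monthly counts (the column names and distributions are hypothetical, not the authors' data):

```python
# Sketch: normalize monthly counts by yearly medians, build composite
# calendar months, then compare GT-ISA and L1TC profiles. Synthetic data.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr, ks_2samp

rng = np.random.default_rng(1)
idx = pd.MultiIndex.from_product([range(2004, 2018), range(1, 13)],
                                 names=["year", "month"])
gt = pd.DataFrame({"count": rng.poisson(50, len(idx))}, index=idx).reset_index()
l1tc = pd.DataFrame({"count": rng.poisson(10, len(idx))}, index=idx).reset_index()

def normalize_by_yearly_median(df):
    # Divide each month's count by that year's median count.
    med = df.groupby("year")["count"].transform("median")
    return df.assign(norm=df["count"] / med)

# Composite calendar months: mean normalized value per month across years.
gm = normalize_by_yearly_median(gt).groupby("month")["norm"].mean()
lm = normalize_by_yearly_median(l1tc).groupby("month")["norm"].mean()

r, p = pearsonr(gm, lm)      # correlation of monthly profiles
ks, p_ks = ks_2samp(gm, lm)  # similarity of distributions
print(f"r={r:.2f} (p={p:.2f}); KS={ks:.2f} (p={p_ks:.2f})")
```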

Results: Twelve injury categories identified as “seasonally trending” were analyzed using normalized GT-ISA versus L1TC occurrence frequencies. The raw correlation between GT-ISA/L1TC was low (r=0.253). However, significant heterogeneity was noted within the overall dataset, suggesting that specific injury types varied in both their susceptibility to “seasonal trending” and the correlation between L1TC/GT-ISA observations (Table 1).

Conclusion: L1TC/GT-ISA data correlated well for motorcycle/bicycle crashes, winter sports (except hockey), and football-related trauma. Strong correlation did not, however, universally correspond to similarity in the normalized monthly data distributions. Temporal differences between L1TC/GT-ISA for winter sports (except hockey) are likely due to geographic factors (e.g., Colorado vs. Pennsylvania) and trauma referral patterns (e.g., L1TC proximity to local winter sport locations). Further research in this area is warranted.

 

11.19 Skin-only closure in damage control laparotomy: a forgotten tool

W. M. Brigode1, M. Masteller3, K. Ansah4, J. Bean2, R. Sullivan2, A. Vafa2  1University of California – San Francisco – East Bay, Department Of Surgery, Oakland, CA, USA; 2University of Illinois at Chicago – Mount Sinai Hospital, Division Of Trauma And Critical Care Surgery, Chicago, IL, USA; 3Feinberg School Of Medicine – Northwestern University, Trauma And Critical Care, Chicago, IL, USA; 4Ross University School of Medicine, Miramar, FL, USA

Introduction: The development of damage control laparotomy has been advanced in the past several decades by the use of the Bogota Bag, Barker Pack, and Negative Pressure Wound Therapy. A skin-only sutured closure with fascia left in discontinuity has not been widely reported in the modern literature. Older sources cite wound complications and abdominal compartment syndrome as potential pitfalls of the technique. We hypothesized that, in patients closely monitored by surgeons familiar with the technique, this closure would improve the fascial closure rate without an accompanying change in mortality or wound complications.

Methods: We performed a retrospective review of consecutive patients managed with an open abdomen after trauma laparotomy at a busy, urban, Level I trauma center over a seven-year period from October 15, 2011 to June 15, 2018. Patients undergoing laparotomy for non-traumatic indications were excluded. Primary outcomes were mortality and fascial closure at the index admission. Secondary outcomes included enteral fistula development, fascial dehiscence, and skin dehiscence.

Results: During the study period, 76 patients underwent open abdominal treatment, 55 of whom underwent a total of 67 skin-only closures. Use of a skin closure at the index operation was associated with an increased fascial closure rate (62% vs 30% overall, 88% vs 50% in survivors, p<0.05 for both) without a change in mortality (p>0.05). The measured secondary outcomes were rare and statistically similar (p>0.05). While its use at the index operation was beneficial, the use of skin closure overall was safe (p>0.05 for all complications) but was not associated with an increase in fascial closure (p>0.05), likely reflecting selection bias from survivors needing salvage skin closure with a planned ventral hernia.

Conclusion: Skin-only sutured abdominal closure is a viable alternative to more commonly used open abdominal management techniques. When faced with a damage control situation at a patient’s index operation, it can increase the chances of fascial closure during the index admission without an increase in mortality or wound complications. The intrinsic apposition of the muscular abdominal wall to the skin and subcutaneous tissues may prevent loss of abdominal wall domain while allowing for the temporary increase in abdominal visceral volume associated with trauma resuscitation. When used after the index laparotomy it remains safe, but the benefit on fascial closure rate is lost. Further research should be done to elucidate the role of this forgotten technique in the trauma surgeon’s toolbox.

 

11.18 Systemic Vascular Resistance is Superior to Heart Rate & Blood Pressure in Predicting Early Sepsis

J. J. Butz1, Y. Shan1, R. Shadis1, J. Yuschak1, O. Kirton1, T. Vu1  1Abington Memorial Hospital, Trauma / Critical Care Surgery, Abington, PA, USA

Introduction: Sepsis as a disease process still requires further research into the diagnostic parameters best suited for practical clinical application. While electronic heart rate, blood pressure, and pulse oximetry monitoring are employed routinely in hospitals, room exists for additional measurements to be used in the identification, resuscitation, and monitoring phases of sepsis management.

Methods: In a limited sample population (n=9), cardiac impedance monitors were placed on patients who met SIRS criteria per the 2012 iteration of the Surviving Sepsis Campaign guidelines. Measurements of stroke volume, heart rate, cardiac output, cardiac index, blood pressure, contractility index, thoracic fluid content, ejection time, ejection fraction, cardiac work index, and systemic vascular resistance were obtained and compared before and after administration of a resuscitative two-liter crystalloid infusion.

Results: Measurements at presentation and one hour after fluid resuscitation were: heart rate (bpm) [97±15 & 93±19; p=0.23], mean arterial pressure (mmHg) [81±14 & 85±14; p=0.55], systemic vascular resistance (dyne·s/cm^5) [861±242 & 1087±424; p=0.04], afterload measured as systemic vascular resistance index (dyne·s/cm^5/m^2) [1813±369 & 2283±696; p=0.04], and left cardiac work index (kg·m/m^2) [3.6±1.5 & 3.3±1.3; p=0.69].

Conclusion: In the limited sample presented, systemic vascular resistance proved statistically more sensitive in recognizing and monitoring the resuscitative management of sepsis and septic shock than either of the conventional hemodynamic parameters, heart rate and blood pressure. As a practical application, it could be adopted in hospitals nationwide as a new vital sign of sepsis.

 

11.17 The Effect of Fluid Balance and Body Habitus on Outcomes in Septic Shock

B. Faliks1,2, D. Aronowitz1,2, V. Patel1,2, J. Nicastro1,2, R. Barrera1,2  1Northwell Health, Department Of Surgery, New Hyde Park, NY, USA; 2Donald and Barbara Zucker School of Medicine, Hempstead, NY, USA

Introduction: Early, effective fluid resuscitation is essential in counteracting the tissue hypoperfusion seen in severe sepsis and septic shock. However, recent evidence has shown a correlation between fluid overload and adverse outcomes in critically ill patients. Fluid overload has been defined in some studies as a >10% increase in admission body weight. However, there is wide variation in how fluid accumulation is recorded, reported, and interpreted, especially in relation to body habitus.

Methods: This retrospective chart review evaluated critically ill patients in septic shock on vasopressor support with an identified infective source at two large tertiary centers. Total fluid accumulation and net balance at the time of vasopressor discontinuation were recorded and reported in relation to patient weight, BMI, and BSA. The relationships between these values and mortality, ICU days, ventilator days, and hospital days were examined. Basic statistics and linear regression analysis were performed.

Results: One hundred patients with septic shock were identified. Increasing net fluid balance at the time of pressor discontinuation was significantly correlated with ICU days (R2=0.309, p=0.015), ventilator days (R2=0.250, p=0.006), and hospital days (R2=0.328, p=0.010). In addition, mortality differed significantly between those with less than the median (6L) fluid balance and those with greater than 6L balances (30% vs 54%, respectively; p=0.015).

With respect to body habitus, obese patients (BMI>30) had significantly lower fluid balances and fluid balances per BMI (161 vs 450 ml/kg/m2, p=0.0005) than nonobese patients. However, there was no significant difference in mortality between obese and nonobese patients (45% vs 33%, p=0.12). Although fluid balance per total body weight, per body surface area, and per BMI were all significantly correlated with ICU days, ventilator days, and hospital days, just as net balance was above, analyzing accumulation in this fashion did not significantly alter any of the linear regression statistics. Similarly, mortality differed significantly between those with <10% fluid balance per body weight and those in fluid overload with >10% fluid balance per kg (31% vs 59%), but this too was not appreciably different from net balance.

Conclusion: Our data support previous studies showing that greater positive fluid balances in septic shock are associated with worse outcomes, including greater mortality and more ICU, ventilator, and hospital days. Balance per BMI, balance per BSA, and balance per total body weight were each similarly correlated with adverse outcomes. However, it does not appear that reporting balances in these terms would give providers additional insight into the risk of adverse outcomes.

 

11.16 Early diagnosis and intervention for intracranial hemorrhage using a Hybrid Emergency Room

T. Nishimura1, S. Matsuyama1, S. Ishihara1, S. Nakayama1  1Hyogo Emergency Medical Center, Emergency And Critical Care, Kobe, Hyogo, Japan

Introduction: Craniotomy and hematoma evacuation remains an effective treatment for patients with loss of consciousness due to intracranial hematoma. To achieve a favorable outcome and good neurological recovery, early diagnosis and early intervention are necessary. Computed tomography (CT) is typically used to make the diagnosis once the patient’s condition has been stabilized. In March 2017, our medical center introduced a Hybrid Emergency Room (Hybrid ER) strategy; the Hybrid ER combines an emergency department resuscitation bay with a radiology suite, allowing rapid imaging soon after admission without compromising safety. We therefore hypothesized that use of the Hybrid ER for comatose patients in need of hematoma evacuation would lead to earlier diagnosis and intervention.

Methods: Patients who underwent craniotomy and evacuation of hematoma from January 2015 to June 2018 were identified from the medical record. Patients were excluded if they were under 15 years old, were transferred from another hospital, underwent a planned operation, underwent surgery after severe subarachnoid hemorrhage, underwent trepanation only, or had symptom onset after arrival.

Results: 46 patients were enrolled: 29 in the conventional (emergency room) group and 17 in the Hybrid ER group. There were 30 trauma patients and 16 non-trauma patients. Time from arrival to computed tomography differed significantly: 23.9 (8.2) min vs 10.4 (6.3) min, respectively (p<0.001). Although time from arrival to craniotomy tended to be shorter, the difference was not statistically significant: 154.4 (116.6) min vs 140 (64.5) min, respectively (p=0.64). Prognosis was also similar (6.9% vs 11.8%, p=0.57). Intubation before computed tomography, even with a Glasgow Coma Scale score under 8, occurred in 10/29 (34.5%) conventional vs 4/17 (23.5%) Hybrid ER patients (p<0.01).

Conclusion: The Hybrid ER significantly shortened the time to diagnosis; however, it did not notably improve the time to operation or patient outcomes.
 

11.15 A Rule-Based Natural Language Processing Pipeline for Anesthesia Classification from EHR Notes

A. J. Nastasi1, S. Bozkurt1, M. Manjrekar1, C. Curtin3,5, T. Hernandez-Boussard1,2,3,4  1Stanford University, Center For Biomedical Informatics Research, Palo Alto, CA, USA; 2Stanford University, Department Of Biomedical Data Science, Palo Alto, CA, USA; 3Stanford University, Department Of Surgery, Palo Alto, CA, USA; 4Stanford University, Department Of Medicine, Palo Alto, CA, USA; 5VA Palo Alto Healthcare Systems, Department Of Surgery, Palo Alto, CA, USA

Introduction:
Because different types of anesthesia vary by physiologic mechanism, they have different associations with outcomes such as postoperative pain and delirium. Despite this, in clinical practice many clinicians choose anesthesia based on personal preference, or simply on what is available, rather than considering the effects on important surgical outcomes, as strong evidence and clear guidelines are lacking. This indicates a strong need for further research into the best anesthesia type for a particular clinical situation. However, because there are no structured codes associated with anesthesia types, leveraging the immense amount of data in electronic health records (EHRs) to study the impact and outcomes associated with an anesthesia type typically requires time-consuming manual chart review, limiting our ability to study and understand the effects of anesthesia type on post-operative outcomes. To address this methodologic challenge, we hypothesized that a simple, rule-based natural language processing (NLP) pipeline could automatically classify the anesthesia used for a surgery using only free text from EHR notes.

Methods:
A rule-based NLP pipeline was developed in Python 3.6 to determine the type of anesthesia (general, regional, local) from a clinical note associated with an operation. The pipeline first pre-processed the operative notes (lowercasing, removing punctuation, etc.). Then, to extract the context of interest, the text between the anesthesia-type header and the next header was extracted, if present; if not, other parts of the report were included by checking for the presence of other headers, sentence delimiters, and the word “anesthesia” itself. For classification, extracted contexts were matched via dictionary mapping to target terms (e.g., “GET”) and their relevant anesthesia type (e.g., general anesthesia) based on a versatile lexicon built with clinical and domain knowledge. The pipeline was first tested on a sample of 100 post-operative notes from EHRs at an academic medical center, annotated by a clinician. The classifier was then improved and re-assessed over several iterations until satisfactory performance was achieved on accuracy metrics including recall, precision, and F1 score.
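
The extraction-and-dictionary-mapping logic might look like the following minimal sketch; the header regex and lexicon here are small illustrative stand-ins, not the authors' full pipeline:

```python
# Illustrative sketch of a rule-based anesthesia classifier like the one
# described above. The regex and lexicon are hypothetical subsets.
import re

LEXICON = {  # target term -> anesthesia type
    "general anesthesia": "general", "get": "general", "geta": "general",
    "spinal": "regional", "epidural": "regional", "nerve block": "regional",
    "local anesthesia": "local", "lidocaine": "local",
}

def extract_context(note: str) -> str:
    """Prefer the 'Anesthesia:' section; else any sentence mentioning it."""
    note = note.lower()
    m = re.search(r"anesthesia\s*:\s*(.*?)(?=\n[a-z ]+:|\Z)", note, re.S)
    if m:
        return m.group(1)
    return " ".join(s for s in re.split(r"[.\n]", note) if "anesthesia" in s)

def classify(note: str) -> set:
    context = extract_context(note)
    return {atype for term, atype in LEXICON.items()
            if re.search(rf"\b{re.escape(term)}\b", context)}

print(classify("Procedure: ORIF ankle.\nAnesthesia: GETA, popliteal nerve block.\n"))
# -> {'general', 'regional'} (set order may vary)
```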

Results:
On the 100 annotated validation notes, the classifier perfectly classified regional anesthesia and nearly perfectly classified general anesthesia with F1 scores of 1 and 0.99, respectively (Table). Local anesthesia was classified with a recall, precision, and F1 score of 0.89, 0.93, and 0.91, respectively. Overall, the classifier had a recall, precision, and F1 score of 0.96, 0.98, and 0.97, respectively.

Conclusion:
Our rule-based NLP classifier successfully classified unstructured free text of clinical EHR notes by type of anesthesia administered, allowing for the efficient study of the effect of anesthesia types and combinations on post-operative outcomes as well as the development of evidence-based anesthesia guidelines.
 

11.14 Method for Natural Language Processing of Semi-Structured Clinical Documentation

G. J. Eckenrode1, H. Yeo1  1Weill Cornell Medical College, Surgery, New York, NY, USA

Introduction:
Clinical documentation written in a free-text or natural-language format represents the richest source of information regarding a patient’s medical care. These documents are routinely used in small-scale research studies via manual chart review, but their utility has been limited in large-scale applications due to the difficulty of synthesizing them into tabular data. Fortunately, many clinical documents with free-text components are entered using flexible yet widely standardized structural formats. This mix of structure and free text results in a semi-structured document format that would enable automated synthesis of these documents if the structure were known. Unfortunately, no widely available method currently exists to automatically extract structural elements from the documents themselves.

Methods:
We used a database of text-based clinical documents extracted from the electronic medical record of a single institution to obtain semi-structured documents by document type. “Brief Op Note” documents were selected for the initial trial. Using Natural Language Processing (NLP) techniques, these notes were divided into their lexical components and compared with each other to find sequences that were repeated across documents. Commonly repeated sequences were then compared and grouped into clusters based on similarity. A human reviewer then assigned clinical meaning to each cluster and ensured cluster accuracy. The reviewed concept clusters were then imported, and NLP techniques were used to scan individual documents for the text associated with each clinical concept. Each document was synthesized into a human-readable report of its clinical content for human review and verification.
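
One way the repeated-sequence mining and similarity grouping could be sketched in Python (the thresholds and the greedy clustering strategy are illustrative assumptions, not the authors' implementation):

```python
# Illustrative sketch: find token sequences repeated across documents,
# then group similar sequences. Thresholds are hypothetical choices.
from collections import Counter
from difflib import SequenceMatcher

def ngrams(tokens, n):
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def frequent_sequences(notes, n=3, min_doc_frac=0.5):
    # Count each n-gram once per document; keep those in >=50% of notes.
    counts = Counter()
    for note in notes:
        counts.update(set(ngrams(note.lower().split(), n)))
    cutoff = min_doc_frac * len(notes)
    return [seq for seq, c in counts.items() if c >= cutoff]

def cluster_by_similarity(seqs, threshold=0.8):
    # Greedy single-pass clustering on string similarity.
    clusters = []
    for seq in seqs:
        for cluster in clusters:
            if SequenceMatcher(None, seq, cluster[0]).ratio() >= threshold:
                cluster.append(seq)
                break
        else:
            clusters.append([seq])
    return clusters

notes = ["brief op note procedure date: ...", "brief op note procedure name: ..."]
for cluster in cluster_by_similarity(frequent_sequences(notes)):
    print(cluster)  # each cluster would be reviewed and labeled by a human
```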

Results:
We evaluated 5000 randomly selected notes from the database. The algorithm found 16 clinical concept clusters, which were reviewed, confirmed, and assigned meaning by a human reviewer. These included expected concepts, such as “Procedure Date” and “Procedure Name”, as well as document control concepts, such as “Electronic Signatures” and “Last Updated”. 1000 of these documents were randomly selected and decomposed by the system for evaluation by a human reviewer. The appropriateness of each text-to-concept assignment was evaluated to calculate the sensitivity and specificity of topic matching. Preliminary analysis suggests a high degree of accuracy in matching document content to appropriate clinical concepts. Additional evaluations are currently being carried out for validation.

Conclusion:
Overall, this is a novel algorithm for content analysis that enables wide-scale synthesis of important clinical information from previously inaccessible portions of the medical record. These data can now be incorporated into any project requiring chart review, regardless of scale. It also has broad applicability to any task involving semi-structured documentation, including procedure notes, medical billing, quality assessment, and literature review.
 

11.13 Current State of Communication Amongst General Surgery Residents

C. E. Welch1, N. A. Royall1  1The University of Oklahoma, Department Of Surgery, School Of Community Medicine, Tulsa, OK, USA

Introduction:

A personal communication revolution has occurred with the advent of personal cellular phones and computers. Many individuals have adopted these personal communication modalities, but their healthcare applications have not been clearly defined. Although surgical residents are required to develop healthcare-specific communication skills, the role of modern communication modalities has not been addressed. The purpose of this study was to evaluate the current state of communication amongst resident physicians within a General Surgery training program. A secondary objective was to evaluate whether communication modalities have an association with resident well-being.

Methods:

A prospective study was performed of General Surgery resident physicians at a single academic institution. A survey instrument was designed and distributed to evaluate resident usage of in-person, traditional phone calls, traditional paging, text messaging, and smartphone-based application messaging. The survey included an assessment of communication modality impact on burnout using the Well-Being Index with a validated cutoff of at least 5.

Results:
We received 16 responses from the 20 surveyed residents (80%). All respondents reported owning a smartphone, with 31% (5 of 16) not personally paying for their smartphone data plans. 63% (10 of 16) reported a smartphone-related monthly data limit, of whom 25% (4 of 10) reported exceeding it within the previous year. Respondents reported a trend towards greater usage of traditional phone calls (75%), traditional paging (75%), text messaging (75%), and in-person communication (63%) compared to smartphone-based application messaging (19%). Residents reported no difference between healthcare and non-work settings, except for lower usage of traditional paging outside work (75% vs 13%; p<0.001). Residents ranked the most preferred communication modalities for the healthcare setting as in-person (81%), traditional phone calls (50%), text messaging (37.5%), traditional paging (25%), and smartphone-based application messaging (6%). In contrast, residents ranked the least preferred modalities as smartphone-based application messaging (94%), traditional paging (56%), text messaging (31%), in-person (13%), and traditional phone calls (6%). Text messaging was the most frequently used modality for communication with other residents (100%) and with surgery attendings (81%). Traditional paging was most commonly used with non-surgery attendings (56.3%), and traditional phone calls with nursing and other healthcare professionals (87.5%). Residents reported a low association between communication modality and burnout (6.7%).

Conclusion:
Several personal communication modalities have become integrated into General Surgery resident healthcare communication. This study demonstrates a need for improved training on appropriate usage of communication modalities within modern General Surgery resident training programs.

11.12 Novel Technique of Catastrophic Abdominal Wall Closure Utilizing Biologic Xenograft

Y. P. Puckett1, M. Estrada1, V. Tran1, J. Griswold1, B. Caballero1, R. Richmond1, A. Santos1, C. Ronaghan1  1Texas Tech University Health Sciences Center, Surgery, Lubbock, TX, USA

Introduction: Closure of catastrophic open abdominal wounds after damage control laparotomy presents a challenge to the surgeon. We present an alternative option for definitive fascial closure and accelerated wound healing of catastrophic open abdominal wounds utilizing a novel technique that combines a mechanical closure system with a biologic xenograft.

Methods: All patients who underwent abdominal wall closure with this novel technique between 2016 and 2018 were analyzed. An ABRA® dynamic tissue system (DTS) was placed and adjusted daily until fascial closure was achieved. ACell MatriStem® porcine urinary bladder biologic xenograft was placed in the midline wound once fascial closure was achieved. Information was abstracted on patient age, body mass index (BMI), incision length, myofascial gap size before and after DTS placement, visceral extrusion size, number of DTS adjustments, and total time to fascial closure. All procedures were performed in the operating room.

Results: Twenty-three patients were analyzed. The average patient age was 51 (range 26.0-72.0) years. Mean BMI was 35.30 (range 21.0-56.1). Caucasians comprised 60.9% of the population, Hispanics 30.4%, and African-Americans 8.7%. An ostomy was present in 47.8% of patients. The abdomen was open for an average of 9.32 (range 0-35) days prior to application of the DTS device. Delayed primary fascial closure was achieved in 100% of patients. An overall reduction in wound area was achieved in 100% of patients. No surgical site infections were observed, and no patient developed an incisional hernia or surgical site infection at one year of follow-up.

Conclusion: Utilization of DTS in conjunction with a biologic xenograft combines mechanical and biologic advantages in closing complex abdominal wounds. Further research, including a cost analysis of this procedure, is needed.

 

11.11 Unintended Consequences of a VTE Prophylaxis Order Set for Trauma Patients

S. O’Malley1, M. S. Stumpf1, G. Prellwitz2, J. Sutyak1, S. Ganai1, M. Smith1, E. Mackinney1  1Southern Illinois University School Of Medicine, Surgery, Springfield, IL, USA; 2Memorial Medical Center, Springfield, IL, USA

Introduction: Venous thromboembolism (VTE) prophylaxis in trauma patients is an important yet often lower-tier process measure due to competing priorities during trauma care. Order sets are commonly employed in electronic medical records (EMR) to streamline health care performance. A pop-up order set was implemented prompting the user to address VTE prophylaxis at admission. We hypothesized that a prompted VTE order set would decrease the average time to prophylaxis and ensure appropriate use of mechanical and chemical prophylaxis in a trauma population.

Methods: A retrospective chart review was conducted on a random number-generated sample of trauma patients pre- (Spring 2016) and post-intervention (Spring 2017) to evaluate an order set implemented in June 2016 to improve VTE prophylaxis utilization at a Level I trauma center. Exclusion criteria included trauma patients with evidence of intracranial bleeding. The quality improvement framework used Lean Six Sigma methodology. Upper control limits (UCL) are defined as 3 standard deviations above the mean. Data were analyzed using nonparametric statistical techniques, and process control charts were created.
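
For reference, the control-limit calculation behind these process control charts reduces to a one-liner; a minimal sketch in Python (the sample values are made-up placeholders, not study data):

```python
# Sketch of the control-chart limit described above: UCL = mean + 3 SD.
import statistics

def control_limits(values):
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation
    return mean, mean + 3 * sd     # center line, upper control limit

hours_to_prophylaxis = [16.6, 14.1, 26.6, 32.4, 15.1, 64.9]  # placeholder data
center, ucl = control_limits(hours_to_prophylaxis)
print(f"center line = {center:.1f} h, UCL = {ucl:.1f} h")
```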

Results: After exclusions, a total of 54 patients in 2016 and 34 patients in 2017 were studied. Median time to mechanical prophylaxis order decreased from 120.9 (interquartile range [IQR] 63.5 to 200.1) minutes to 91.8 (IQR 70.2 to 112.2) minutes (p=0.12). The UCL for admission time to mechanical prophylaxis order decreased from 28.5 to 4.2 hours. Median time from admission to receipt of chemical prophylaxis increased from 16.6 (IQR 14.1 to 26.6) hours to 32.4 (IQR 15.1 to 64.9) hours (p=0.08). The UCL for admission time to chemical prophylaxis increased from 73.6 to 99.0 hours.

Conclusion: Our data did not support the hypothesis that an order set prompt at admission would improve the timing of VTE prophylaxis. While there were no statistically significant differences between time of admission and VTE prophylaxis, variability was observed in ordering practices. Despite a trend towards a shorter time to mechanical prophylaxis order, an indicator of actual use of the order set, there was a concurrent increase in the time to and variability of chemical prophylaxis administration, suggesting bypass of the decision to place an order for chemical prophylaxis. Future areas of study include qualitative analysis of other potential sources of delay, such as click fatigue, as well as correlative studies examining the relationship of process with outcome.

 

11.09 Imaging May Partly Obviate Laparoscopy for Thoracoabdominal Stab Wounds: A Single Center Pilot Study

K. M. Galvin1, A. Grigorian1, S. D. Schubl1, V. Gabriel1, A. Anavim2, A. Rudd2, J. L. Phillips2, J. Nahmias1  1University Of California – Irvine, Trauma And Critical Care Surgery, Orange, CA, USA; 2University Of California – Irvine, Radiology, Orange, CA, USA

Introduction: The management of left thoracoabdominal stab wounds (LTASW) continues to be controversial. In hemodynamically stable patients without peritonitis, a delayed diagnostic laparoscopy to rule out diaphragmatic injury is commonly performed. We sought to determine the rate of finding no injuries on diagnostic laparoscopy in patients presenting after a LTASW with no signs of penetration to the muscle or deeper on computed tomography (CT) imaging, as determined by an attending radiologist and trauma surgeon. Our secondary aim was to analyze the accuracy of identifying the depth of stab-wound penetration (i.e., muscle or deeper) by differently trained providers: radiology attending (RA) vs. radiology resident (RR) vs. trauma attending (TA) vs. general surgery resident (GSR).

Methods: A retrospective review of trauma patients from a single Level I trauma center during a six-year period was performed. CT images were independently reviewed by a RA, a 2nd-year RR, a TA, and a 3rd-year GSR. A chi-square analysis was performed.

Results: Of 36 patients with LTASW who underwent CT imaging and later diagnostic laparoscopy, 11 (30.5%) had diaphragmatic injuries intraoperatively. Both the radiology and trauma attendings read 2 (5.5%) patients as having LTASW that did not penetrate muscle or deeper on imaging, and no diaphragm injury was found intraoperatively in these patients (p=0.33). The two attendings agreed on muscle-or-deeper penetration for 100% of the patients. The sensitivity and specificity in this limited sample were 100% and 92%, respectively, for the RA and TA. The negative predictive value of the attending read was 100%. The sensitivity and specificity were 100% and 88% for the RR, and 100% and 84% for the GSR.

Conclusion: Currently, all patients with LTASW undergo a delayed diagnostic laparoscopy to rule out diaphragm injuries. Our small single center pilot study suggests that a subset of patients with LTASW and no signs of penetration to muscle or deeper on CT imaging may not require a delayed diagnostic laparoscopy. This could lead to a decrease in hospital length of stay, health care cost, and/or complications associated with surgery. A future multicenter or prospective study appears warranted. 

 

11.08 Using Real-time Location Systems to Predict Length of Stay and Disposition after Surgery

S. Dong1, R. Barkley1, E. Levine1, M. Howard-McNatt1, P. Shen1, C. J. Clark1  1Wake Forest University School Of Medicine, General Surgery, Winston-Salem, NC, USA

Introduction:   Early mobilization is recognized as a key component of enhanced recovery pathways after surgery.  We have developed a highly reliable, real-time location system (RTLS) to monitor patient mobility after surgery.  The aim of this study is to evaluate the ability of RTLS to predict postoperative outcomes.

Methods: From Sept 2017 to May 2018, all patients hospitalized in the cancer center surgical ward at an academic medical center were monitored using a network-integrated RTLS comprising 99 sensors (4,246 total sensors system-wide). Time from surgery to first ambulatory event was measured and evaluated as a predictor of length of stay (LOS) and discharge location.

Results: 358 surgical patients were identified with a median LOS of 4.0 days (IQR 2.3-6.4) and median operative duration of 2.9 hours (IQR 1.9-3.9).  Ambulatory monitoring started a median of 3.8 hours (IQR 2.2-5.7) after surgery.  24.0% (n=86) of patients did not have an ambulatory event.  Median time to first ambulatory event was 28.6 hours (IQR 21.1-46.8).   Delay in first ambulatory event was associated with longer LOS (OR 1.84, 95% CI 1.51-2.25) and need for home health or discharge to facility (OR 1.74, 95% CI 1.32-2.29).   After adjusting for patient age, operative duration, and surgical service, first ambulatory event was still associated with LOS and disposition (p<0.001).

Conclusions: Remote monitoring of postoperative ambulation can predict increased health care resource utilization (LOS and non-home discharge) early in a surgical patient’s hospitalization. 
 

11.07 Association of Post-operative Opioid Use With Pre-Operative Opioid Exposure

A. L. Titan1, L. Graham1, T. Hernandez-Boussard1, E. Dasinger2, J. Richman2, I. Carroll1, M. Morris2, M. T. Hawn1  1Stanford University, Palo Alto, CA, USA; 2University Of Alabama at Birmingham, Birmingham, AL, USA

Introduction: The “opioid crisis” is a man-made national emergency in which over 2 million people in the United States suffer from substance use disorders related to prescription opioids. Evidence suggests inpatient use of opioids is associated with higher rates of adverse events and may impact post-discharge outcomes. The aim of this study was to understand variation in preoperative/perioperative opioid exposure and its effect on patients’ perioperative oral morphine equivalent (OME) use, pain scores, and unplanned readmissions.

Methods: National Veterans Affairs Surgical Quality Improvement Program data on inpatient general, vascular, and orthopedic surgery from 2007 to 2014 were merged with inpatient analgesia data. Trajectory analysis was used to define three distinct trends in postoperative inpatient OMEs. Bivariate statistics were used to examine characteristics of patients by OME trajectory and multivariate logistic regression was used to examine associations with pain-related readmissions.
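
The abstract does not specify the trajectory-modeling software; as a rough conceptual stand-in (not the authors' method, which would typically be a group-based trajectory model), three inpatient OME trends could be recovered by clustering per-day OME curves:

```python
# Hedged stand-in for trajectory analysis: k-means on per-day OME curves.
# The data below are synthetic; array shape and parameters are assumptions.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical array: one row per surgery, columns = daily OMEs for the
# first 5 postoperative days (padded/truncated to equal length).
daily_omes = np.random.default_rng(0).gamma(2.0, 20.0, size=(1000, 5))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(daily_omes)
for k in range(3):
    print(f"group {k}: mean {daily_omes[labels == k].mean():.1f} OMEs/day, "
          f"n={np.sum(labels == k)}")
```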

Results: Our study sample included 235,239 surgeries. 41.4% of surgeries were categorized as a low inpatient OME trajectory, receiving an average of 19.1 OMEs/day (SD 36.0); 53.2% were identified as a medium inpatient OME trajectory, receiving an average of 39.7 OMEs/day (SD 31.5); and 5.4% were categorized as high inpatient OME use, with an average of 116.1 OMEs/day (SD 51.4, Table 1). Opioid use in the prior 6 months was more frequent in the high inpatient OME group than in the medium or low use groups (72.7% vs. 44.8% and 17.8%, respectively, p<0.01), as was having an active prescription for opioids at the time of surgery (57.2% vs. 29.3% and 21.8%, p<0.01). Patients in the high OME trajectory reported higher inpatient maximum pain scores compared to patients in the lower OME trajectories (9.1, 7.0, 8.0, p<0.001) and were more likely to receive acetaminophen (40.2% vs. 27.2% and 18.4%, p<0.01) or NSAIDs (22.2% vs. 16.4% or 8.7%, p<0.01). At discharge, 65.7% of patients filled an opioid prescription; high inpatient OME trajectory patients received the highest total OMEs at discharge (825.5 vs. 404.1 and 290.8, p<0.01). Despite a more than two-fold higher OME provided at discharge, patients in the high inpatient OME trajectory had a 71% increased odds of pain-related readmission compared to the low OME trajectory (2.4%, 1.4%, 1.4%, p<0.01).

Conclusion: Preoperative and perioperative opioid use are associated with higher overall pain scores and an increased risk for pain-related readmissions. Post-operative pain management should account for opioid tolerance. Increased inpatient perioperative use of adjunct non-opioid pain medications for all patients may facilitate decreased opioid requirements at discharge.

11.06 Hypofibrinolysis as a Predictor of Anti-Xa Levels

M. L. Pickett1, L. R. Taveras1, J. B. Imran1, S. W. Ross1, T. D. Madni1, H. B. Cunningham1, S. Park1, M. Zhou1, E. Huang1, J. C. Kubasiak1, M. W. Cripps1  1University Of Texas Southwestern Medical Center, Department Of Surgery, Division Of General And Acute Care Surgery, Dallas, TX, USA

Introduction:
Novel research suggests that current low-molecular-weight heparin dosing protocols for venous thromboembolism (VTE) prophylaxis are inadequate. Reliable predictors of sub-prophylactic anti-Xa levels remain elusive. We hypothesized that hypofibrinolysis, as measured by rotational thromboelastometry (ROTEM), is associated with sub-prophylactic anti-Xa levels. We aimed to evaluate the ability of admission ROTEM to predict sub-prophylactic anti-Xa levels after enoxaparin administration.

Methods:
We performed a retrospective review of patients admitted to the Parkland SICU from July 2016 to June 2017 who received enoxaparin 40 mg twice daily (BID), had an appropriately timed draw of anti-Xa levels, and had ROTEM drawn within 24 hours of admission. Clinical and ROTEM data were collected, and subjects were grouped into prophylactic and sub-prophylactic cohorts. Hyperfibrinolysis was defined as a maximum lysis percentage (ML) >15% and hypofibrinolysis as ML <3%.
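
These fibrinolysis definitions reduce to a simple rule; a literal encoding (the label for the in-between range is a placeholder of ours, not from the abstract):

```python
# Direct encoding of the ROTEM maximum lysis (ML, %) definitions above.
def fibrinolysis_phenotype(ml_percent: float) -> str:
    if ml_percent < 3:
        return "hypofibrinolysis"
    if ml_percent > 15:
        return "hyperfibrinolysis"
    return "physiologic"  # placeholder label for the in-between range

assert fibrinolysis_phenotype(1.5) == "hypofibrinolysis"
assert fibrinolysis_phenotype(20.0) == "hyperfibrinolysis"
assert fibrinolysis_phenotype(8.0) == "physiologic"
```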

Results:
A total of 55 patients were included, and 23.6% had penetrating injuries. Hyperfibrinolysis was found in two patients (3.6%) and hypofibrinolysis in 14 (25.5%). Thirty-two patients (58.2%) had prophylactic anti-Xa levels. The prophylactic and sub-prophylactic groups were similar in age, body mass index, and injury severity score. VTE events occurred in four patients (7.2%). Hypofibrinolysis was not significantly associated with sub-prophylactic anti-Xa levels (p=0.763). In addition, VTE rates were not significantly different between groups.

Conclusion:
In this cohort, admission ROTEM could not predict sub-prophylactic anti-Xa levels after enoxaparin administration. Hypofibrinolysis was not found to be associated with sub-prophylactic anti-Xa levels or VTE events.
 

11.05 Not Further Specified: Unclassified Orthopaedic Injuries in Trauma Registries, Cause for Concern?

B. W. Oliphant1, C. Harris3, A. Cain-Nielsen2, J. Goulet1, M. Hemmila2  1University Of Michigan, Department Of Orthopaedic Surgery, Ann Arbor, MI, USA; 2University Of Michigan, Department Of Surgery, Ann Arbor, MI, USA; 3University Of Maryland, Department of Surgery, Baltimore, MD, USA

Introduction: The accuracy and completeness of data in registries is essential to making valid conclusions about outcomes. Selection of Not Further Specified (NFS) in injury coding means that the data abstractor could not identify an exact diagnosis. We hypothesized that a significant proportion of orthopaedic injuries in trauma registries are unclassified. Our primary objective was to quantify the amount and type of NFS orthopaedic injuries in trauma registries; our secondary objective was to examine factors that could contribute to these findings.

Methods: Data from the Michigan Trauma Quality Improvement Program (MTQIP) from 2011-2017 and from the National Trauma Data Bank (NTDB) from 2011-2015 were utilized. We analyzed orthopaedic injuries in these registries that were classified via the Abbreviated Injury Scale version 2005 (AIS2005). Fractures were identified via AIS2005 as either a specific injury type or as an NFS injury. They were also grouped by fracture complexity into “simple” (i.e., tibial diaphysis, proximal femur, etc.) or “complex” (i.e., pelvic ring, acetabulum, etc.). Average fracture volumes seen at these centers and trauma center level were also extracted. Linear regression was used to evaluate the effect of volume and trauma center level on NFS entries.

Results: In MTQIP, 18.5% of entries (13,116 of 70,918 fractures) were classified as NFS, with a range of 2.4-67.9% by specific fracture type. In the NTDB, 27% (342,472 of 1,269,278 fractures) were NFS, with a range of 6.0-68.5%. The NFS proportion was significantly higher for complex fractures (34.5%) than for simple fractures (9.6%) [p<0.001] in MTQIP, with similar findings in the NTDB: 41.8% complex vs. 15.7% simple [p<0.001]. In MTQIP, Level 1 trauma centers had a higher percentage of NFS injuries (21.2% vs. 16.6%) [p<0.001], while the opposite was true in the NTDB, with Level 2 centers having more NFS injuries (26.6% vs. 24.4%) [p<0.001]. Increasing fracture volume and Level 2 status were associated with a decrease in the proportion of NFS fractures recorded in both databases.

Conclusion: This analysis demonstrates that a significant amount of orthopaedic injury data found in trauma registries is incomplete and could be considered “missing.” Higher injury volume and less complicated injuries improve the recording of these data. With the increase in orthopaedic research utilizing large databases and registries, the completeness of these data is paramount to drawing accurate conclusions and subsequently driving appropriate improvements in systems, processes, and policy. Further work should delve into the reasons for these findings so that these data sources can be strengthened and become a source of reliable information.

 

11.04 Identification of Postoperative Complications Using Electronic Health Records and Machine Learning

K. Colborn1, M. Bronsert4,5, A. B. Singh5, K. Hammermeister3,4,5, W. G. Henderson1,4,5, R. A. Meguid2,4,5  1University Of Colorado Denver, Biostatistics And Informatics, Aurora, CO, USA; 2University Of Colorado Denver, Surgery, Aurora, CO, USA; 3University Of Colorado Denver, Cardiology, Aurora, CO, USA; 4University Of Colorado Denver, Adult And Child Consortium For Health Outcomes Research And Delivery Science, Aurora, CO, USA; 5University Of Colorado Denver, Surgical Outcomes And Applied Research, Aurora, CO, USA

Introduction: Population ascertainment of postoperative complications is time-consuming and expensive, as it often requires manual chart review. Using the American College of Surgeons National Surgical Quality Improvement Program (NSQIP) complication status of patients who underwent an operation at the University of Colorado Hospital, we sought to develop an algorithm for identifying patients with one or more complications using data from the electronic health record (EHR) and machine learning methodologies.

Methods: Data were split into training (operations occurring between 2013-2015) and test (operations in 2016) sets. A binomial generalized linear model with an elastic-net penalty was used to fit the model and carry out variable selection. Elastic-net penalized regression was used because it handles high-dimensional data and correlated covariates well. International Classification of Diseases codes (ICD-9 & ICD-10), Current Procedural Terminology (CPT) codes, medications, and the CPT-specific complication event rate (a value indicating the complication rate for a given CPT code estimated from the national NSQIP dataset of >5 million patients) were included as predictors. Youden’s J statistic was used to determine the optimal classification threshold.
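
A compact sketch of this modeling recipe in Python (synthetic stand-in data; the hyperparameters shown are illustrative, not the study's tuned values):

```python
# Hedged sketch: elastic-net penalized logistic regression with a
# Youden-optimal threshold, on synthetic stand-in data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

# Stand-in for the sparse ICD/CPT/medication indicator matrix (~13.5% events).
X, y = make_classification(n_samples=6840, n_features=838,
                           weights=[0.865], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=5000)
model.fit(X_train, y_train)

# Youden's J = sensitivity + specificity - 1; pick the maximizing threshold.
fpr, tpr, thresholds = roc_curve(y_train, model.predict_proba(X_train)[:, 1])
threshold = thresholds[np.argmax(tpr - fpr)]
flagged = model.predict_proba(X_test)[:, 1] >= threshold
print(f"threshold={threshold:.2f}, flagged {flagged.mean():.1%} of test cases")
```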

Results: Of 6,840 patients, 922 (13.5%) had at least one of the 18 complications tracked by NSQIP. Exactly 838 variables were initially included in the model, of which 117 had nonzero coefficients; 30 of these were ICD-9/-10 codes, 53 were CPT codes, 33 were medications and one was the CPT-specific complication event rate. The model achieved 86% specificity, 79% sensitivity, 96% negative predictive value, 46% positive predictive value, and an area under the receiver operating characteristic curve of 0.90 using a decision threshold of 0.12.

Conclusion: Using machine learning and NSQIP outcomes data, we found that a model with 117 predictors from the EHR identified complications well at our institution. This model can be used to scale-up complication surveillance beyond the limited NSQIP sampling for use at individual hospitals or entire health systems, or to estimate the impact of large-scale interventions on postoperative complication rates.