94.11 Social Media and Visual Abstracts to Disseminate Surgical Science

A. M. Ibrahim1, K. D. Lillemoe2, M. E. Klingensmith3, J. Dimick1  1University Of Michigan,Surgery,Ann Arbor, MI, USA 2Massachusetts General Hospital,Boston, MA, USA 3Washington University,St. Louis, MO, USA

Introduction: Surgical research is growing at an unprecedented rate, making it increasingly difficult for surgeons to keep up to date. In response, many academic journals have adopted social media services, including Twitter, to disseminate their publications. It remains unclear, however, which social media formats are most effective for disseminating surgical research.

Methods: More than 1.6 million impressions were analyzed from the Twitter account of Annals of Surgery between January and August of 2016. Data were obtained and merged from three sources: Twitter Analytics, Altmetric, and Symplur. We assessed three different types of tweets (Figure 1) that were introduced in a stepwise fashion over the study period, beginning with (1) title alone, then (2) title with figure, and finally (3) a visual abstract. Our primary outcomes used to measure dissemination included impressions (number of times a tweet was seen), retweets (number of times a tweet was shared), and article views (number of times a tweet led to a view of the article on the publisher's website).
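
For readers who wish to reproduce this kind of tweet-level analysis, the following is a minimal Python sketch of the merge-and-summarize step, assuming a tidy per-tweet export; the file names, column names, and "tweet_format" coding are hypothetical rather than the authors' actual data dictionary.

```python
# Hypothetical sketch of the merge-and-summarize step described above.
import pandas as pd

twitter = pd.read_csv("twitter_analytics.csv")   # impressions, retweets per tweet
altmetric = pd.read_csv("altmetric.csv")         # article-level attention data
symplur = pd.read_csv("symplur.csv")             # hashtag/engagement data

# Merge the three sources on a shared tweet identifier (assumed key).
df = (twitter
      .merge(altmetric, on="tweet_id", how="left")
      .merge(symplur, on="tweet_id", how="left"))

# Average weekly impressions, retweets, and article views by tweet format
# (title only, title + figure, visual abstract).
df["week"] = pd.to_datetime(df["tweet_date"]).dt.to_period("W")
weekly = (df.groupby(["tweet_format", "week"])[["impressions", "retweets", "article_views"]]
            .sum()
            .groupby("tweet_format")
            .mean())
print(weekly)
```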

Results: We found a strong correlation between the use of visual tweets and extent of dissemination. When only article titles were tweeted, the account averaged 40,863 impressions, 83 retweets, and 643 article views per week. After the introduction of tweets including article figures and visual abstracts, the account averaged 108,987 impressions (2.6-fold increase), 416 retweets (5.1-fold increase), and 1,092 article views (1.7-fold increase) per week. Further, individual tweets with a visual abstract achieved >15,000 impressions and >100 retweets within one week, compared with only ~4,600 impressions and ~22 retweets for tweets including only the article title. Subset analysis of selected articles with a visual abstract revealed that 65% of article views on the publisher's website were attributed to discovery through Twitter.

Conclusions: Social media, including Twitter, is an effective platform to disseminate surgical science. The use of visual abstracts was associated with higher levels of dissemination as measured by impressions, shares, and article visits on the publisher's website. All academic researchers and publishers should consider using visual design and social media to make their research more engaging and accessible.

91.06 Anatomic Location of High-Grade Dysplasia from Adenomatous Polyp of the Colon among Black Patients

P. H. Lam1, I. D. Nwokeabia2, A. C. Obirieze3, S. C. Onyewu3, B. S. Li3, N. Enwerem3, G. Ortega3, T. M. Fullum3, W. A. Frederick3, L. L. Wilson3  1Cedars-Sinai Medical Center,Los Angeles, CA, USA 2Washington University,St. Louis, MO, USA 3Howard University College Of Medicine,Washington, DC, USA

Introduction:  Black patients have the highest incidence and mortality rates of colon cancer when compared to other racial/ethnic groups. Screening rates for colon cancer are lower in black patients, and studies have shown varying anatomic locations of adenomatous polyps and colon cancers in these patients. Studying the location of these cancers within the colon could help tailor where to screen and which screening test to use. We aim to investigate the anatomic location of high-grade dysplasia from adenomatous polyp among black patients, using a national tumor registry.

Methods: The Surveillance, Epidemiology, and End Results (SEER) database from 1973 to 2008 was utilized. We identified patients with a single primary diagnosis of high-grade dysplasia arising from an adenomatous polyp of the colon using appropriate ICD-O-3 codes. Age- and gender-adjusted proportions of proximal vs. distal lesion location were derived for all patients using multivariable regression analysis.

Results: A total of 18,762 patient records, comprising 16,276 (86.8%) white and 2,486 (13.2%) black patients, met the study criteria. The incidence of high-grade dysplasia of the colon has been increasing in black patients over the last three decades (9.9%, 13.0%, and 15.1% for 1973-1989, 1990-1999, and 2000-2008, respectively).

The most common locations in the proximal and distal colon were the cecum (16.6%) and the sigmoid colon (48.5%), respectively. Most patients had distal lesions (60.0%), the majority of which occurred in the sigmoid colon. On multivariate analysis, black patients were 33% less likely than white patients to have a distal lesion (OR: 0.67, 95% CI: 0.61-0.73, p<0.001).

Black patients were more likely not to undergo surgery (4.9% vs. 3.3%) and more likely to undergo partial or hemicolectomy (38.3% vs. 33.5%) or total colectomy (1.9% vs. 1.3%), but less likely to undergo local excision with pathology (47.1% vs. 51.9%) (all p<0.001).

Conclusion: Black patients have an increasing incidence of high-grade dysplasia arising from adenomatous polyps, a lower likelihood of distal colonic lesions, and a higher likelihood of not undergoing surgery. Because proximal lesions are beyond the reach of distal-only screening modalities, these findings underscore the importance of screening this population with colonoscopy. Nonetheless, more research on the use of different screening tests, the location of cancers found, and surgical preferences among different racial/ethnic groups is needed.

87.15 Intraoperative Perfusion Assessment of High-Risk Amputation Stumps Predicts Areas of Necrotic Eschar

G. S. De Silva1, K. Saffaf1, L. A. Sanchez1, M. A. Zayed1,2  1Washington University In St. Louis,Department Of General Surgery/Division Of Vascular Surgery,St. Louis, MISSOURI, USA 2Veterans Affairs St. Louis Health Care System,St. Louis, MISSOURI, USA

Introduction:  More than 130,000 extremity amputations are performed in the U.S. per year. In 40% of patients, poor amputation site healing requires stump revision and/or re-amputation.  This contributes to added patient morbidity, disability, and healthcare costs.  We hypothesize that inadequate tissue perfusion is associated with poor amputation stump healing.  We evaluated this using non-invasive Laser-Assisted Fluorescent Angiography (LAFA; SPY Elite® system) in the peri-operative setting.

 

Materials and Methods: A pilot group of ‘higher-risk’ patients was evaluated prospectively at the time of major lower extremity amputation. Immediately following stump creation, LAFA was performed intra-operatively. Rate of arterial inflow and peak perfusion were determined using densitometry analysis. Post-operative stump healing was serially evaluated for 4-6 weeks using a modified Bates-Jensen Wound Assessment Tool. Non-parametric Spearman correlation analysis was performed to evaluate the relationship between stump perfusion and healing variables.
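
As an illustration of the correlation step described above, a minimal Python sketch using SciPy is shown below; the perfusion and eschar values are hypothetical and stand in for the study's densitometry and Bates-Jensen-derived scores.

```python
# Minimal sketch of the non-parametric correlation step, assuming per-patient
# perfusion scores and wound scores are already tabulated.
from scipy.stats import spearmanr

perfusion_score = [34, 51, 48, 29, 62, 44, 57, 39]   # hypothetical LAFA perfusion values
eschar_score    = [ 5,  2,  3,  5,  1,  3,  2,  4]   # hypothetical necrotic eschar scores

rho, p_value = spearmanr(perfusion_score, eschar_score)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```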

 

Results:  In a cohort of 8 patients (100% smoking, 75% diabetic), the least globally well-perfused stumps had the highest necrotic eschar scores (p=0.04), as well as increased volume of eschar (p=0.05).  Similarly, amputation stumps with lower perfusion scores just along the surgical suture line were more likely to also develop a necrotic eschar (R2=0.834, p<0.05), and increased eschar volume (R2=0.842, p<0.05).  We observed no correlation between low stump perfusion scores and higher iliac and common femoral arterial runoff scores (major arterial occlusions).

 

Conclusions:  In ‘higher-risk’ patients, peri-operative perfusion assessments of amputation stumps using LAFA can help predict potential areas of necrotic eschar formation.  Intra-operative determination of areas of decreased amputation stump perfusion may encourage corrective intervention or anticipate subsequent wound care needs.

 

84.20 Ventral Hernia Repair and Mesh Infection Survey

L. Knaapen1, O. Buyne1, S. Feaman4, P. Frisella4, N. Slater2, B. Matthews3, H. Van Goor1  1Radboud University Medical Center,Department Of Surgery,Nijmegen, GELDERLAND, Netherlands 2Radboud University Medical Center,Department Of Plastic And Reconstructive Surgery,Nijmegen, , Netherlands 3Carolinas Hernia Institute,Charlotte, SOUTH CAROLINA, USA 4Washington University,Department Of Surgery, Section Of Minimally Invasive Surgery,St. Louis, MISSOURI, USA

Introduction:
Choice of mesh and surgical technique in ventral hernia repair represent a major surgical challenge, especially under contaminated conditions. The aim of this survey was to present an international overview of current practice concerning ventral hernia repair under clean or contaminated conditions.

Methods:
A survey (2013-2015) was sent to surgeons worldwide who perform ventral hernia repair. The survey was designed to compare practice patterns in ventral hernia repair concerning lifestyle/pre-operative work-up, antibiotic prophylaxis, hernia repair in clean or contaminated fields, recurrence, and mesh infection.

Results:
Responders (n=417) were predominantly male (92%; n=381), aged 36-65 (84%; n=351), and practicing in North America (56%; n=234). Open repair was performed by 99% (20% at expert level) and laparoscopic repair by 77% (15% at expert level).
The majority agrees on the benefit of pre-operative work-up and lifestyle changes such as smoking cessation (80%; n=319) and weight loss (64%; n=254); however, not reaching these targets does not change the decision on whether to operate.
Common practice is to administer antibiotics at least one hour preoperatively (71%; n=295).
Synthetic (43%; n=180) and biologic (42%; n=175) mesh are used equally often in contaminated primary hernia repair.
For recurrent hernia repair, synthetic mesh (87%; n=359) is used in a clean field, and biologic mesh (53%; n=215) or no mesh (28%; n=112) in a contaminated field. American surgeons prefer biologic over synthetic mesh in a contaminated field.
Generally, percutaneous drainage and antibiotics are the first step in managing a mesh abscess, independent of the type of repair or mesh used. For synthetic mesh infection with sepsis, most respondents explant the mesh and repair with biologic mesh (54%; n=217). For mesh infection without sepsis, there is no agreement on when to explant or how to repair.

Conclusion:
The majority agrees on the benefit of pre-operative work-up, although not reaching targets rarely changes the decision to operate. Both synthetic and biologic mesh are used for primary hernia repair in a contaminated field. For recurrent hernia repair, synthetic mesh is used in a clean field, and biologic mesh or no mesh in a contaminated field.

79.10 Chemokine Ligand 5 (CXCL5) Regulates Intestinal Triglyceride Absorption After Small Bowel Resection

B. Aladegbami1, Y. Xie2, J. Guo1, C. Erwin1, N. Davidson2, B. Warner1  1Washington University,Pediatric Surgery,St. Louis, MO, USA 2Washington University,Gastroenterology,St. Louis, MO, USA

Introduction: Following massive small bowel resection (SBR), an important adaptive process occurs that is characterized by increased enterocyte proliferation, increased vessel density, and an expanded mucosal surface area. CXCL5 is a proangiogenic chemokine, and CXCL5 knockout (CXCL5-KO) mice undergo normal structural adaptation but have impaired angiogenesis and a perturbed profile of intestinal absorption. The goal of this study was to further characterize the role of CXCL5 in intestinal triglyceride absorption and to understand its effect on body composition and metabolism.

Methods: CXCL5-KO mice and C57BL/6 wild-type (WT) mice were subjected to 50% proximal small bowel resection (n=10 and 7, respectively). On post-operative day 14 (POD14), Pluronic F127 was injected via tail vein, and mice were then gavaged enterally with a mixture of 20% Intralipid, corn oil, and radioactive 3[H]-triolein to measure triglyceride (TG) absorption. Serum was collected at the 1-, 2-, and 3-hour time points, and the remnant intestine was sectioned into 9 equal segments for analysis. Indirect calorimetry was used to measure metabolic activity, and body composition was determined by MRI.

Results: CXCL5-KO mice showed a decreased level of radioactivity (reflecting TG absorption) in the serum at the 1-hour (CXCL5-KO: 5482.5±1407.1; WT: 15481±4351.6, p=0.03) and 2-hour (CXCL5-KO: 8964.7±1118.6; WT: 16413±1126.8, p=0.01) time points. Total TG increased 57-fold in WT at 3 hours, but only 37-fold in CXCL5-KO (p=0.015). In addition, CXCL5-KO mice showed a non-significant trend toward a higher level of radiolabeled TG in the intestine when compared to WT (area under the curve CXCL5-KO: 0.35; WT: 0.11, p=0.07). These findings are all consistent with a delayed TG absorption profile and a compensatory increase in production by the liver.

CXCL5-KO mice had no difference in respiratory exchange ratio, locomotion, or body composition when compared to WT either pre- or postoperatively. However, there was a consistent decrease in respiratory exchange ratio, locomotion, and lean body mass in all post-surgical mice when compared to their pre-operative levels.

Conclusion: Our results suggest a potential function of CXCL5 and/or angiogenesis in the regulation of intestinal triglyceride absorption. However, this effect does not translate to a difference in whole-body metabolism or body composition in the early postoperative period.

76.04 What Factors Influence Attending Surgeon Decisions About Resident Guidance in the Operating Room?

J. P. Fryer1, J. Bohnen2, B. George3, M. Schuller1, D. DaRosa1, L. Torbeck4, J. Mullen2, S. Meyerson1, E. Auyang5, J. Chipman6, J. Choi4, M. Choti14, E. Endean7, C. Foley8, S. Mandell9, A. Meier10, D. Smink11, K. Terhune12, P. Wise13, N. Soper1, J. Zwischenberger7, K. Lillemoe2, R. Williams4  1Northwestern University,Surgery,Chicago, IL, USA 2Massachusetts General Hospital,Surgery,Boston, MA, USA 3University Of Michigan,Surgery,Ann Arbor, MI, USA 4Indiana University,Surgery,Indianapolis, IN, USA 5University Of New Mexico,Surgery,Albuquerque, NM, USA 6University Of Minnesota,Surgery,Minneapolis, MN, USA 7University Of Kentucky,Surgery,Lexington, KY, USA 8University Of Wisconsin,Surgery,Madison, WI, USA 9University Of Washington,Surgery,Seattle, WA, USA 10State University Of New York,Surgery,Syracuse, NY, USA 11Brigham And Womens,Surgery,Boston, MA, USA 12Vanderbilt,Surgery,Nashville, TN, USA 13Washington University,Surgery,Saint Louis, MO, USA 14University Of Texas Southwestern,Surgery,Dallas, TX, USA

Introduction: Supervising residents performing operative procedures requires balancing patient safety and resident learning needs. Little is known about how faculty decide the appropriate amount of guidance to provide to a resident. This study explores the contributions of 4 possible influencing factors: resident preparedness/performance in current case, case complexity, resident level of training (post-graduate year, PGY), and prior faculty guidance behavior.

Methods: Attending surgeon guidance patterns were captured across 15 general surgery residency programs over a 9-month period (10/15-6/16). Attending surgeons rated the following for each operation: a) guidance level provided to residents, b) resident preparedness/performance (Table 1), and c) relative case complexity (easiest 1/3, median 1/3, hardest 1/3). Prior faculty guidance behavior was calculated as the average of guidance ratings on all PGY4 resident operations in the study period; faculty who had performed ≤3 procedures with PGY4 residents were excluded from this analysis. Descriptive statistics, correlational analyses, and multiple regression analyses were performed to explore the relative contribution of each factor.
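
One way to obtain the semi-partial (unique) variance contributions reported in the Results is to compare the full regression model against models that drop each factor in turn. The sketch below illustrates that approach with statsmodels; the file and variable names are placeholders, not the study's actual coding.

```python
# Hedged sketch: OLS model of guidance on the four factors, with each factor's
# squared semi-partial correlation estimated as the drop in R^2 when it is removed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("guidance_ratings.csv")  # hypothetical analytic file

full = smf.ols("guidance ~ performance + prior_guidance + complexity + pgy", data=df).fit()

predictors = ["performance", "prior_guidance", "complexity", "pgy"]
for dropped in predictors:
    reduced_terms = " + ".join(p for p in predictors if p != dropped)
    reduced = smf.ols(f"guidance ~ {reduced_terms}", data=df).fit()
    # Unique decision variance explained by the dropped factor
    print(dropped, round(full.rsquared - reduced.rsquared, 3))
```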

Results: Attending surgeon assessments were completed for 7,297 resident operative performances. In univariate analyses, faculty evaluation of resident preparedness/performance in the current case was the strongest determinant of how much guidance they felt the need to provide (r=0.69, 47.7% of decision variance). Each additional factor led to a smaller but still significant improvement in the predictability of faculty guidance decisions. The 4 factors together accounted for 54.5% of decision variance (r=0.74). Semi-partial correlations revealed that the factors accounted for the following amounts of decision variance: resident operative performance 21.8%, attending surgeon prior guidance habits 4.5%, case complexity 2.0%, and resident PGY level 0.9%. Overall, 29.3% of resident performances deemed "practice ready" occurred with "supervision only"; this increased to 75.7% when resident performances were rated "exceptional". Surprisingly, just 38.5% of senior resident cases rated in the easiest 1/3 occurred with "supervision only".

Conclusion: A resident’s real-time performance during an individual operation is the most important factor used by faculty to determine how much guidance is needed, outweighing PGY level, prior faculty guidance behavior, and case complexity. Strategies are needed to improve resident preparedness/performance on a daily basis so faculty can feel comfortable using guidance strategies that allow more operative autonomy while preserving patient safety.

 

75.04 The Impact of Research on Burnout during Surgical Training

L. C. Elmore1, D. B. Jeffe2, L. Jin1, M. M. Awad1, I. R. Turnbull1  1Washington University In St. Louis School Of Medicine,Surgery,Saint Louis, MISSOURI, USA 2Washington University In St. Louis School Of Medicine,Medicine,St. Louis, MO, USA

Introduction: Surgical residency is rigorous, and burnout has emerged as a prevalent problem among trainees. Academic surgical programs provide trainees with a dedicated research experience distinct from their clinical training. The wellbeing of residents during this time and the impact of the interruption in clinical training on resident burnout are unknown. We report an analysis of burnout in research residents and a comparative analysis of burnout in residents who completed research during their surgical training and those who did not.

Methods: From April to December 2014, an anonymous online survey was distributed to general surgery trainees via program directors. All ACGME-accredited programs were invited to participate. Demographic information, research history, program characteristics, and professional goals were measured. Our primary endpoint was burnout, which was evaluated by the Maslach Burnout Inventory (MBI) across three dimensions: emotional exhaustion (EE), depersonalization (DP), and personal accomplishment (PA). A score in the highest tertile for EE or DP and/or in the lowest tertile for PA was indicative of burnout. Tertiles were predetermined using Maslach's normative scale previously validated in healthcare workers. Descriptive statistics were used to assess frequency distributions, and chi-square tests were used to analyze categorical data.
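
The burnout rule described above can be expressed as a simple classification function. The sketch below is illustrative only; the cutoff values shown are assumptions standing in for Maslach's normative tertile boundaries, not figures taken from this study.

```python
# Illustrative sketch of the burnout definition: burnout is flagged if EE or DP
# falls in the highest tertile or PA falls in the lowest tertile.
EE_HIGH_CUTOFF = 27   # assumed lower bound of the highest EE tertile
DP_HIGH_CUTOFF = 10   # assumed lower bound of the highest DP tertile
PA_LOW_CUTOFF = 33    # assumed upper bound of the lowest PA tertile

def is_burned_out(ee: int, dp: int, pa: int) -> bool:
    """Return True if the MBI subscale scores meet the study's burnout definition."""
    return ee >= EE_HIGH_CUTOFF or dp >= DP_HIGH_CUTOFF or pa <= PA_LOW_CUTOFF

print(is_burned_out(ee=30, dp=5, pa=40))  # True: high emotional exhaustion
```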

Results: Of the 752 respondents, 88 (12%) were engaged in a dedicated research period at the time of the survey. Of the remaining 664 clinical residents, 118 (17.8%) had completed research during their residency training and 546 (82.2%) had not. Individuals who did not complete research had a higher rate of burnout than those who completed research during training and residents who were in the lab at the time of the survey (70.1% vs. 61.0% vs. 59.1%, p=0.03). Residents who did not complete research were also more likely to score in the highest tertile for EE (59.5% vs. 48.3% vs. 39.8%, p=0.004) and less likely to report high levels of personal accomplishment (42.5% vs. 54.2% vs. 64.8%, p<0.001) when compared to those who completed research during residency training and those currently in the lab. No significant differences were seen in rates of depersonalization. To compare the impact of research on residents in their terminal years of training, we evaluated burnout in clinical years four and five. There were no differences in overall rates of burnout or in rates of EE and DP between residents who did research and those who did not; however, residents who did research were more likely to report high levels of personal accomplishment (57.5% vs. 42.5%, p=0.026).

Conclusion:  Residents who have taken time away from residency for research or are actively engaged in research have lower rates of burnout than residents who have not completed research.  This finding suggests that time for dedicated academic development and independent investigation may be protective against burnout.

69.08 The Effects of Helmet Legislation on Pediatric Bicycle Injuries in Illinois

R. Weston1, C. Williams2, M. Crandall1  1University Of Florida,Surgery,Jacksonville, FL, USA 2Washington University,Emergency Medicine,St. Louis, MO, USA

Introduction:  Bicycling is one of the most popular forms of play and exercise for children in the U.S.  However, over 200,000 children per year are injured in bicycle crashes, and an estimated 22,000 pediatric bicycle-related traumatic brain injuries (TBIs) occur annually.  Bicycle helmets are known to decrease the risk of head injury, but efficacy and magnitude of effect of helmet legislation have not been fully elucidated.  

Methods:  
This was a retrospective, observational study of children under 18 who presented after a bicycle crash and were included in the Illinois Trauma Registry from 1999-2009. Demographic information, injury types, injury severity, helmet usage, and location of injury were collected. Multiple logistic regression analysis was used to quantify the independent effect of helmet usage on the likelihood of TBI and, among those with TBI, the severity of injury. Data were then compared between communities with and without helmet legislation.
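
A minimal sketch of the regression step described above is shown below, assuming a tidy registry extract with one row per crash; the file and column names are illustrative only.

```python
# Sketch of a logistic regression for TBI with odds ratios and 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("illinois_registry_bike.csv")  # hypothetical extract

model = smf.logit("tbi ~ helmet + male + age + C(race)", data=df).fit()
odds_ratios = np.exp(model.params)       # e.g., helmet OR expected to be < 1
conf_int = np.exp(model.conf_int())      # 95% CIs on the odds-ratio scale
print(pd.concat([odds_ratios, conf_int], axis=1))
```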

Results: A total of 3,080 pediatric bicycle-related crashes were identified. Children wearing helmets were less likely to sustain a TBI (OR 0.56, 95% CI 0.37-0.84). Boys were less likely to suffer a TBI (OR 0.80, 95% CI 0.67-0.97), while older children were more likely to suffer a TBI. Overall, 5.0% of patients were noted as wearing helmets. Compared with non-Hispanic white children, black and Hispanic children were less likely to wear helmets (OR 0.24, 95% CI 0.09-0.68 and OR 0.10, 95% CI 0.02-0.42, respectively). Injured children living in zip codes with helmet legislation wore helmets proportionally more often (12.2%) than the cohort overall (5.0%). There was no significant change in helmet usage between the pre- and post-legislation periods in helmet-legislation areas, or over time in non-legislation areas.

Conclusion: Rates of pediatric TBI from bicycle injury in Illinois trauma centers have not changed appreciably. There was also no statistically significant change across the years of the analysis in the total number of severe TBIs. Similar to previous studies, non-Hispanic black and Hispanic populations were much less likely to wear helmets. Children in helmet-legislation areas were significantly more likely to wear helmets across the years combined, although it is unclear how much of this is attributable to legislation versus other sociodemographic factors.

 

68.09 Guideline Adherence in Screening Mammography: Behavior Patterns in Commercially-Insured U.S. Women

J. Yu1, N. P. Carlsson1, G. A. Colditz1, M. S. Goodman1, S. Chang1, J. A. Margenthaler1  1Washington University,Surgery,St. Louis, MO, USA

Introduction:
Over 2 million women currently live with breast cancer in the United States, and the annual incidence of more than 200,000 new cases is predicted to remain constant.  Secondary prevention of breast cancer with screening mammography has become the standard of care, but recent updates in recommended screening mammography frequencies have ignited substantial controversy both among physicians and from a societal perspective.  To better understand the potential impact on patients, we assess guideline adherence in a retrospective cohort of commercially-insured U.S. women diagnosed with breast cancer.

Methods:
Using the Truven Health Analytics MarketScan® Database from 2006-2012, we conducted a retrospective review of screening mammography frequencies in women aged 40-60 during the 5 years prior to a primary breast cancer diagnosis (excluding ductal carcinoma in situ, DCIS) in 2011-2012. Patient demographics, family history, and clinical characteristics were extracted from the database, and screening adherence was defined by the interval between screening mammograms: annual (<14 months) or biennial (<26 months). Unadjusted and multivariable analyses were performed with two-sided statistical testing. Statistical significance was determined using α=0.05.
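
The adherence definitions above can be operationalized as a check on the intervals between successive screening mammograms. The sketch below is a simplified illustration, not the authors' exact algorithm; the month-to-day conversion and the example dates are assumptions.

```python
# Classify annual (<14-month intervals) and biennial (<26-month intervals) adherence
# from a list of screening mammogram dates.
from datetime import date

DAYS_PER_MONTH = 30.44  # approximate average month length

def adherence(screen_dates: list[date]) -> dict[str, bool]:
    dates = sorted(screen_dates)
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    return {
        "annual": len(dates) >= 2 and all(g < 14 * DAYS_PER_MONTH for g in gaps),
        "biennial": len(dates) >= 2 and all(g < 26 * DAYS_PER_MONTH for g in gaps),
    }

print(adherence([date(2007, 3, 1), date(2008, 4, 15), date(2009, 5, 1)]))
```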

Results:
Of 1,876 women diagnosed with breast cancer in 2011-2012, mean age at diagnosis was 53.7±4.3 years, and patients underwent an average of 5.2±2.4 mammograms (2.7±1.7 screening, 2.0±1.4 diagnostic) prior to diagnosis.  Only 16.4% were adherent to annual screening vs. 51.6% adherent to at least biennial screening.  In the adjusted multivariable analysis, odds of adherence to either annual or biennial screening were significantly increased with family history of breast cancer (OR=1.74 [95% CI=1.30-2.32]; OR=1.50, [95% CI=1.19-1.89]), decreased with higher Klabunde Charlson comorbidity score (OR=0.89 [95% CI=0.82-0.97]; OR=0.92 [95% CI=0.87-0.97]), and unaffected by insurance provider (OR=0.77 [95% CI=0.57-1.0]; OR=1.14 [95% CI=0.91-1.43]) or geographic region (OR=0.98 [95%CI=0.68-1.40]; OR=1.08 [95% CI=0.82-1.42]).

Conclusion:
Biennial screening mammography recommendations will likely result in higher rates of guideline adherence. In this retrospective cohort, more than three times as many women were adherent to biennial as to annual screening; even so, nearly 50% of commercially-insured U.S. women diagnosed with breast cancer in 2011-2012 were not adherent even to biennial screening prior to diagnosis. Further assessments of resource utilization and long-term outcomes will be critical to determine appropriate population health interventions to increase screening compliance.
 

54.17 Can Increased Trauma Mortality At Weekends Be Explained by Failure to Rescue?

D. Metcalfe1, O. A. Olufajo6, A. J. Rios-Diaz5, C. K. Zogg4, R. Chowdhury2,3, A. Haider2,3, J. M. Havens2,3, A. Salim2,3  6Washington University School Of Medicine,Department Of Surgery,St Louis, MO, USA 1University Of Oxford,Kadoorie Centre for Critical Care Research,Oxford, OXFORDSHIRE, United Kingdom 2Brigham And Women’s Hospital,Center For Surgery And Public Health,Boston, MA, USA 3Harvard Medical School,Boston, MA, USA 4Yale University School Of Medicine,New Haven, CT, USA 5Thomas Jefferson University Hospital,Department Of Surgery,Philadelphia, PA, USA

Introduction:

Previous studies have found that trauma patients admitted to US hospitals at weekends have higher odds of mortality. We hypothesized that providers respond less effectively to serious adverse events (SAEs) at weekends. The aim of this study was to determine whether the trauma “weekend effect” could be explained by differences in failure to rescue (FTR), i.e., death subsequent to an SAE.

Methods:

An observational study was undertaken using the Nationwide Inpatient Sample (NIS) 2001-2011. All inpatients with a primary injury diagnosis (ICD-9-CM 800-957) were included. The outcome measures were SAE (myocardial infarction, venous thromboembolism, acute renal failure, respiratory failure, pneumonia, bleeding), in-hospital mortality, and FTR. Logistic multivariable regression models were used to adjust for patient- (age, sex, race, payer status, Charlson score, ISS) and hospital-level (trauma designation) characteristics. Counterfactual modeling was used to explore the hypothetical effect of eliminating FTR.
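
One possible implementation of the counterfactual step, shown purely as an illustration of the idea and not as the authors' exact specification, is to predict weekend mortality after recoding weekend exposure to weekday levels among patients with an SAE; the file and column names below are hypothetical.

```python
# Hedged sketch of a counterfactual comparison of weekend mortality.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nis_trauma.csv")  # hypothetical analytic file

# Baseline adjusted model for in-hospital death.
death = smf.logit("died ~ weekend + age + female + C(race) + charlson + iss", data=df).fit()

# Counterfactual: among patients with an SAE, recode weekend exposure to 0
# (weekday-level failure-to-rescue risk), then compare predicted weekend mortality.
cf = df.copy()
cf.loc[cf["sae"] == 1, "weekend"] = 0
observed = death.predict(df)[df["weekend"] == 1].mean()
counterfactual = death.predict(cf)[df["weekend"] == 1].mean()
print(f"Observed weekend mortality {observed:.3f} vs counterfactual {counterfactual:.3f}")
```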

Results:

There were 1,727,124 individual patient records (8.5 million weighted admissions). The overall rate of SAE was 11.1% (11.3% weekend, 11.1% weekday, p<0.001), in-hospital mortality 2.3% (2.4% weekend, 2.2% weekday, p<0.001), and FTR 9.8% (10.0% weekend, 9.8% weekday, p=0.181). Weekend admission was independently associated with higher adjusted odds of SAE (aOR 1.09, 95% CI 1.08-1.11) and death (OR 1.12, 1.09-1.15) but not FTR (1.04, 1.00-1.09). Within a counterfactual model, increased weekend mortality was not reduced by eliminating FTR (aOR 1.11, 1.07-1.16).

Conclusion:

Trauma patients have higher odds of death when admitted at weekends. This finding is more likely to be explained by increased SAEs at weekends than by FTR.

 

47.09 Insurance Status and Outcomes in Pediatric Trauma

C. M. Courtney1, E. J. Onufer1, P. M. Choi1, N. A. Wilson1, A. M. Vogel1, M. S. Keller1  1Washington University,St. Louis, MO, USA

Introduction:  Healthcare disparities, based on insurance status, exist in trauma patients. We sought to determine if any disparities exist in pediatric trauma patients at our institution. Specifically, we looked at certain injury patterns and patients transferred from outside hospitals.

Methods:  A retrospective review of all pediatric trauma patients was conducted at a single, ACS and State verified Level-1 pediatric trauma center from 1/1/2009 to 12/31/2014. Patients were categorized by their insurance status [Private Insurance (PI), Medicaid (MC), or Self-Pay (SP)]. Continuous data were analyzed using analysis of variance (ANOVA). Categorical data were analyzed using chi-square test for frequencies.

Results: A total of 7,937 trauma patients were included: 3,677 with PI (46.3%), 3,725 with MC (46.9%), and 535 with SP (6.7%). Overall Injury Severity Scores (ISS) were low, and there were no statistically significant differences between groups. There were also no differences in total time spent in the Emergency Department (ED) or in the percentage of patients receiving CT scans by payer status.

We next examined management and outcomes based on insurance status. More SP patients were discharged home from the ED following evaluation compared with both PI and MC patients. The SP group had the highest mortality, followed by MC and then PI (SP: 3.4% vs. MC: 1.3% vs. PI: 0%, p < 0.0001). When specific injury patterns were analyzed, we found that the SP group had an increased incidence of penetrating injury as well as an increased chest Abbreviated Injury Score (Table 1). Mortality was also higher in SP patients suffering blunt trauma (SP: 2.1% vs. MC: 1.1% vs. PI: 0%, p < 0.0001).

A total of 4,892 patients (61.6%) were transferred from outside hospitals. In this cohort, there were no differences in ISS between insurance groups (PI: 5.4±0.1, MC: 5.6±0.1, SP: 5.7±0.5). There were also no differences in the percentage of patients receiving CT scans at outside hospitals by payer status. This persisted even when patients were stratified by ISS.

Conclusion: Insurance status does not appear to impact the frequency of initial CT evaluation, even in patients transferred from outside hospitals. However, healthcare disparities continue to exist in pediatric trauma. This is particularly true for SP patients, who are more commonly discharged home from the ED and have an increased frequency of both penetrating injury and mortality. Further research into healthcare disparities within this high-risk population may not only reduce costs but also save lives.
 

46.06 Necrotizing Enterocolitis: A Temporal and Predictive Model for Disease Development

J. Primus1, I. Caban1, M. Collins1, C. Coghill1,3, C. Roane1, M. Estes1, P. Tarr2, C. Martin1  1Children’s Hospital Of Alabama,Pediatric Surgery,Birmingham, ALABAMA, USA 2Washington University In St. Louis,Pediatric Gastroenterology,St. Louis, MISSOURI, USA 3Children’s Hospital Of Alabama,Neonatal Intesive Care,Birmingham, ALABAMA, USA

Introduction:  Necrotizing enterocolitis (NEC) is an inflammatory disorder affecting the GI tract of premature infants and is a significant cause of morbidity and mortality.  The development of a mathematical model that can predict the timing of onset in at risk infants can lead to the implementation of directed surveillance strategies and treatments early in life. 

Methods: A single-institution retrospective review was conducted using data from the Children's Hospital Neonatal Database for Children's Hospital of Alabama for the years 2010-2016. Our criteria for perforated NEC and medical NEC were based on the Vermont Oxford Network criteria. Patients were analyzed based on multiple variables, including Apgar scores (categorized as >7 versus ≤7), birth weight (categorized as extremely low birth weight, <1500 grams, versus ≥1500 grams), and gestational age (categorized as extremely preterm, <28 weeks, versus ≥28 weeks). The primary outcome was the time to the diagnosis of NEC. Analyses were done separately for those managed medically and surgically. Kaplan-Meier curves were constructed and log-rank tests were performed to compare the distributions of time to diagnosis across categories of 1- and 5-minute Apgar scores, birth weight, and gestational age. A parametric survival model was fitted separately for medically and surgically managed patients to examine the relationship between time to diagnosis and these variables, adjusted for covariates such as gender, delivery mode, and insurance type.
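
The time-to-diagnosis comparison described above can be sketched with the lifelines package as follows; the data frame, column names, and grouping variable are assumptions for illustration.

```python
# Kaplan-Meier estimate and log-rank test for time to NEC diagnosis by gestational age group.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("nec_cohort.csv")        # hypothetical: days_to_nec, diagnosed, ga_lt28

extreme = df[df["ga_lt28"] == 1]          # gestational age < 28 weeks
older = df[df["ga_lt28"] == 0]            # gestational age >= 28 weeks

kmf = KaplanMeierFitter()
kmf.fit(extreme["days_to_nec"], event_observed=extreme["diagnosed"], label="GA < 28 wk")
print(kmf.median_survival_time_)          # median time to NEC diagnosis in this group

# Log-rank comparison of the two time-to-diagnosis distributions
result = logrank_test(extreme["days_to_nec"], older["days_to_nec"],
                      event_observed_A=extreme["diagnosed"],
                      event_observed_B=older["diagnosed"])
print(result.p_value)
```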

Results: Our study included 113 de-identified neonates, all of whom developed NEC. Most were treated with surgery (n=82), and of those who underwent surgery, 29 died. Medically managed babies with gestational age <28 weeks had a longer time to diagnosis (median 42, 95% CI: 24-56) relative to those ≥28 weeks (22, 95% CI: 8-27). Similarly, babies with birth weight <1500 grams had a longer time to diagnosis (median 42, 95% CI: 24-56) relative to those ≥1500 grams (22, 95% CI: 8-27). For those surgically managed, babies with Apgar scores ≤7 had a longer time to diagnosis (median 42, 95% CI: 24-56) relative to those with scores >7 (22, 95% CI: 8-27). In the parametric survival model including all of these variables plus gender, insurance type, and delivery mode, only the 1-minute Apgar score remained a significant predictor of time to diagnosis among surgically managed patients.

Conclusion: Although counterintuitive, these data suggest that infants at high risk, including those with extremely low birth weight and extreme prematurity, may develop NEC later. The retrospective design of this study limits our ability to fully explain this observation. We speculate that this finding may be due to heightened surveillance by healthcare providers and less aggressive feeding strategies. Future studies will validate this finding in large multi-institutional administrative databases.
 

38.04 Pluralistic Ignorance And Risk Of Attrition Among Residents.

R. Panni1, M. Laurel2, K. Nandagopal3, G. Cohen3, G. M. Walton3, A. Salles1  1Washington University,Surgery,St. Louis, MO, USA 2Washington University School Of Medicine,St. Louis, MO, USA 3Stanford University,Palo Alto, CA, USA

Introduction: Attrition continues to be a major problem in general surgery residencies, with an estimated one out of five residents failing to complete training. While there are a number of reasons for this, here we examine one factor, pluralistic ignorance, and its relationship to risk of attrition among surgical residents. Pluralistic ignorance refers to the difference between one's perception of one's own experience and one's perception of the experiences of those around them. For example, in academic contexts, it is common for people to think that those around them are faring better, whether with more success, better grades, or more happiness. This feeling is often more pronounced at times of transition. In this study, we hypothesized that residents who experience greater degrees of pluralistic ignorance may be at greater risk for attrition.

Methods: Junior residents in a single general surgery residency program were surveyed on a voluntary basis for two consecutive years (2011-2012 and 2012-2013). As part of a larger study, residents were administered a questionnaire that included measures of pluralistic ignorance, with items such as the number of times per week they made any mistakes, felt down, felt bothered by blaming themselves for things, or were satisfied with their performance. Participants were then asked the same questions about a typical resident in their program. We measured risk of attrition with two items: how frequently they thought about leaving residency and how likely they thought it was that they would complete their current residency. We examined the correlations among these measures to determine whether pluralistic ignorance was related to risk of attrition.
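
Computationally, the pluralistic ignorance measure amounts to a difference score between self-ratings and ratings of the typical resident, which is then correlated with the attrition-risk items. A minimal sketch, with placeholder item names, is shown below.

```python
# Difference score for pluralistic ignorance and its Spearman correlation with
# thoughts of leaving residency.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("resident_survey.csv")  # hypothetical survey export

# Positive values = believing the typical resident is faring better than oneself.
df["pluralistic_ignorance"] = df["typical_resident_wellbeing"] - df["self_wellbeing"]

rho, p = spearmanr(df["pluralistic_ignorance"], df["thoughts_of_leaving"])
print(f"rho = {rho:.2f}, p = {p:.4f}")
```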

Results:

Thirty-six residents participated in the survey (43% response rate). We found that higher degrees of pluralistic ignorance were associated with more frequent thoughts of leaving residency (rs = 0.55, p=0.0006). The less pluralistic ignorance residents experienced, the more likely they were to intend to complete their residency (rs = -0.62, p<0.0001). Thus, pluralistic ignorance was significantly associated with both measures of risk of attrition. In regression analyses controlling for gender and post-graduate year, pluralistic ignorance significantly predicted the frequency of thoughts of leaving residency (B=0.75, t=3.35, p=0.002) and intention to complete residency (B=-0.76, t=-3.25, p=0.003).

 

Conclusion:
To our knowledge, pluralistic ignorance has not previously been examined in the context of surgical residency. Our data suggest that it may be a predictor of risk of attrition. Perhaps more importantly, pluralistic ignorance is modifiable. At the institution where this study was performed, each post-graduate year group routinely meets with a psychologist, giving residents an opportunity to discuss their struggles together and realize that others are having similar experiences. Interventions such as this may reduce pluralistic ignorance and potentially decrease the risk of attrition.

36.06 Can Emergency Department Physiologic Parameters Predict Injury Severity in Elderly Trauma Patients?

S. L. Nitzschke1, G. Barmparas2, O. Olugajo3, C. Burns1, Z. Cooper1, A. Haider1, A. Salim1  2Cedars-Sinai Medical Center,Los Angeles, CA, USA 3Washington University,St. Louis, MO, USA 1Brigham And Women’s Hospital,Trauma/Surgery/Harvard,Boston, MA, USA

Introduction: Although physiologic data have been used to help appropriately triage severely injured trauma patients of all ages, the ability of these parameters to predict injury severity among geriatric patients has not been examined. The purpose of this study was to determine whether physiologic and clinical data available at the time of triage in the emergency department (ED) are predictive of injury severity in elderly trauma patients.

Methods: The National Trauma Data Bank (2007-2011) was retrospectively queried for patients aged 65-90 with blunt trauma. Data collection included basic demographics as well as initial heart rate, systolic blood pressure, and Glasgow Coma Scale (GCS). The shock index (SI), the ratio of heart rate to systolic blood pressure, was also calculated. Our primary outcome of interest was moderate-to-severe injury, defined as an Injury Severity Score (ISS) ≥9 or head Abbreviated Injury Score (AIS) ≥3. The other outcome of interest was in-hospital mortality. Univariate and multivariable logistic regression models using different combinations of physiologic data were built, and the performance of each model was measured with the area under the receiver operating characteristic (AUROC) curve.
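
As an illustration of the triage-variable modeling described above, the sketch below computes the shock index (heart rate divided by systolic blood pressure) and summarizes a simple model's discrimination with the AUROC; the data file and column names are hypothetical.

```python
# Shock index plus a simple logistic model, evaluated by AUROC.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("ntdb_geriatric.csv")   # hypothetical extract
df["shock_index"] = df["heart_rate"] / df["sbp"]

X = df[["shock_index", "gcs"]]
y = df["severe_injury"]                  # indicator for ISS >= 9 or head AIS >= 3

model = LogisticRegression(max_iter=1000).fit(X, y)
print(roc_auc_score(y, model.predict_proba(X)[:, 1]))
```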

Results: A total of 394,727 patients met our inclusion criteria (61% had ISS ≥ 9, 21% had a head AIS ≥3, and overall mortality was 5.2%). None of our models were predictive of ISS ≥ 9 (all AUROC ≤0.5), and ED GCS was moderately predictive of head AIS ≥ 3 (AUROC of 0.639). The ED GCS was the best predictor of in-hospital mortality (AUROC of 0.717).

Conclusion: Our data show that initial physiologic and clinical data appear to have little predictive value on injury severity in an elderly patient population, but ED GCS appears adequate in predicting mortality. In order to facilitate appropriate triage of the severely injured trauma patient in this growing population, this study suggests that a new set of triage criteria may be required for the geriatric patient.

 

29.06 Impact of Lymph Node Ratio in Selecting Patients with Resected Gastric Cancer for Adjuvant Therapy

Y. Kim1, M. H. Squires2, G. A. Poultsides3, R. C. Fields4, S. M. Weber5, K. I. Votanopoulos6, D. Kooby2, D. J. Worhunsky3, L. X. Jin4, W. G. Hawkins4, A. W. Acher5, C. S. Cho5, N. Saunders7, E. A. Levine6, C. R. Schmidt7, S. K. Maithel2, T. M. Pawlik1,7  1Johns Hopkins University School Of Medicine,Baltimore, MD, USA 2Emory University School Of Medicine,Atlanta, GA, USA 3Stanford University,Palo Alto, CA, USA 4Washington University,St. Louis, MO, USA 5University Of Wisconsin,Madison, WI, USA 6Wake Forest University School Of Medicine,Winston-Salem, NC, USA 7Ohio State University,Columbus, OH, USA

Introduction: The benefit of adjuvant chemotherapy (CTx) and chemoradiation therapy (cXRT) in the treatment of resectable gastric cancer remains debated. We sought to define the clinical impact of lymph node ratio (LNR), the ratio of metastatic to total examined lymph nodes, on the relative benefit of adjuvant CTx or cXRT among patients having undergone curative-intent resection for gastric cancer.

Methods:  Using the multi-institutional U.S. Gastric Cancer Collaborative database, 769 patients with gastric adenocarcinoma who underwent curative-intent resection between 2000 and 2012 were identified. Patients with metastasis or an R2 margin were excluded. The impact of LNR on disease-free survival (DFS) among patients who received CTx or cXRT was evaluated.

Results: Median patient age was 65 years and the majority of patients were male (55.8%). Most patients underwent either subtotal (40.9%) or total gastrectomy (41.4%), with the remainder undergoing distal gastrectomy or wedge resection (17.7%). On pathology, median tumor size was 4 cm; most patients had a T3 (33.5%) or T4 (28.7%) lesion, and 60.6% had lymph node metastasis. Margin status was R0 in 92.2% of patients. A total of 361 (46.9%) patients underwent surgery alone, 257 (33.4%) received 5-FU-based cXRT, and the remaining 151 (19.6%) received CTx. Recurrence occurred in 236 (30.7%) patients. At a median follow-up of 17.2 months, median DFS was 29.0 months and 5-year DFS was 34.7%. By LNR category, 5-year DFS for patients with LNR of 0, >0-0.10, >0.10-0.25, and >0.25 was 52.2%, 40.0%, 43.0%, and 13.9%, respectively. Factors associated with worse DFS included age (hazard ratio [HR] 1.01), tumor size (HR 1.08), tumor grade (moderate/poor: HR 1.27), GE junction location (HR 1.87), T-stage (3-4: HR 2.66), and LNR (>0.25: HR 2.18) (all P<0.05). In contrast, receipt of adjuvant cXRT was associated with improved DFS in the multivariable model (vs. surgery alone: HR 0.57; vs. CTx: HR 0.45, both P<0.001). The benefit of cXRT was noted only among patients with LNR >0.25 (vs. surgery alone: HR 0.39; vs. CTx: HR 0.44, both P<0.001). In contrast, there was no DFS benefit of CTx or cXRT among patients with LNR ≤0.25 (all P>0.05) (Figure).

Conclusion: Adjuvant CTx or cXRT were utilized in over one-half of patients undergoing curative-intent resection for gastric cancer. LNR may be a useful tool to select patients for adjuvant cXRT, as the benefit of cXRT therapy was isolated to patients with higher degrees of lymphatic spread (i.e., LNR >0.25).

 

29.04 Clinicopathologic Score Predicting Lymph Node Metastasis in T1 Gastric Cancer

T. B. Tran1, D. J. Worhunsky1, M. H. Squires2, L. X. Jin3, G. Spolverato4, K. I. Votanopoulos7, C. S. Cho5, S. M. Weber5, C. Schmidt6, E. A. Levine7, R. C. Fields3, T. Pawlik4,6, S. Maithel2, J. A. Norton1, G. A. Poultsides1  2Emory University,Atlanta, GA, USA 3Washington University In St. Louis,St. Louis, MO, USA 4John Hopkins Hospital,Baltimore, MD, USA 5University Of Wisconsin,Madison, WI, USA 6The Ohio State University,Columbus, OH, USA 7Wake Forest University,Winston-Salem, NC, USA 1Stanford University,Palo Alto, CA, USA

Introduction:  While gastrectomy with D2 lymphadenectomy is considered the standard treatment for invasive gastric adenocarcinoma, endoscopic resection (ER) has been described by Asian authors in select patients with T1 gastric cancer. Accurate preoperative prediction of lymph node (LN) metastasis in this setting is critical, since ER omits LN harvest. The objective of this study is to identify preoperative predictors of LN metastasis in US patients with T1 gastric cancer.

Methods:  Patients who underwent surgical resection for T1 gastric cancer (T1a: into lamina propria or muscularis mucosa, and T1b:  into submucosa) between 2000 and 2012 in 7 US academic institutions were identified. Clinicopathologic predictors of LN metastasis were determined using univariate and multivariate logistic regression. A preoperative score was created assigning points based on each variable’s beta-coefficient.

Results: Among 965 patients with gastric cancer undergoing surgical resection, 198 (20.5%) had T1 disease confirmed on final pathology. Of those, 40 patients (20%) had LN metastasis. Independent predictors of LN involvement on multivariate analysis were poor differentiation (OR 4.5, P=0.002, beta 1.5), T1b stage (OR 4.5, P=0.02, beta 1.5), lymphovascular invasion (OR 2.8, P=0.049, beta 1.4), and tumor size >2 cm (OR 2.8, P=0.026, beta 1.0). A clinicopathologic risk score predicting LN metastasis was created, assigning 3 points for each of the first 3 variables and 2 points for the last. The performance of the score was evaluated with an ROC curve (Figure), showing excellent discrimination (AUC = 0.79) and 100% sensitivity for LN metastasis at a cutoff of greater than 3 points; that is, no patient with a score of 3 or less had LN metastasis.
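
The point score described above can be written as a short function. The sketch below is an illustration of the published weighting (3 points each for poor differentiation, T1b depth, and lymphovascular invasion; 2 points for size >2 cm), treating a total of 3 or fewer points as the low-risk group.

```python
# Clinicopathologic score for predicting lymph node metastasis in T1 gastric cancer.
def t1_ln_risk_score(poor_diff: bool, t1b: bool, lvi: bool, size_gt_2cm: bool) -> int:
    """3 points each for poor differentiation, T1b, and LVI; 2 points for size > 2 cm."""
    return 3 * poor_diff + 3 * t1b + 3 * lvi + 2 * size_gt_2cm

score = t1_ln_risk_score(poor_diff=False, t1b=True, lvi=False, size_gt_2cm=False)
print(score, "-> low risk of LN metastasis" if score <= 3 else "-> higher risk; consider gastrectomy")
```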

Conclusion: In this cohort of US patients with T1 gastric adenocarcinoma, lack of LN involvement could be predicted when none or only one of the following unfavorable factors was present: T1b stage, poor differentiation, lymphovascular invasion, or size >2 cm. For these patients, endoscopic resection may be a potential treatment option provided it can be achieved with negative margins.

 

17.18 Prognostic Effect of Lymph Node Ratio After Curative-Intent Resection for Distal Cholangiocarcinoma

A. Y. Son1, R. Shenoy1, C. G. Ethun2, G. Poultsides3, K. Idrees4, R. C. Fields5, S. M. Weber6, R. C. Martin7, P. Shen11, C. Schmidt9, S. K. Maithel2, T. M. Pawlik10, M. Melis1, E. Newman1, I. Hatzaras1  1New York University School Of Medicine,Surgery,New York, NY, USA 2Emory University School Of Medicine,Surgery,Atlanta, GA, USA 3Stanford University,Surgery,Palo Alto, CA, USA 4Vanderbilt University Medical Center,Surgery,Nashville, TN, USA 5Washington University,Surgery,St. Louis, MO, USA 6University Of Wisconsin,Surgery,Madison, WI, USA 7University Of Louisville,Surgery,Louisville, KY, USA 9Ohio State University,Surgery,Columbus, OH, USA 10Johns Hopkins University School Of Medicine,Surgery,Baltimore, MD, USA 11Wake Forest University School Of Medicine,Surgery,Winston-Salem, NC, USA

Introduction:
The ratio of metastatic to total harvested lymph nodes (LNR) is an important prognostic factor following resection of gastrointestinal malignancies. We assessed the prognostic value of LNR in patients undergoing resection for distal cholangiocarcinoma (DCC).

Methods:
Patients who underwent curative-intent resection of DCC at 10 institutions of the US Extrahepatic Biliary Malignancy Collaborative were included. Descriptive statistics were used to summarize demographic characteristics. Multivariate proportional hazards regression was used to identify factors associated with recurrence-free survival (RFS) and overall survival (OS).
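
As a sketch of the LNR calculation and the proportional hazards step, the following uses the lifelines package; the data frame, column names, and covariates are assumptions for illustration, and the 0.4 cutoff for high LNR follows the grouping reported in the Results.

```python
# LNR = positive nodes / total nodes examined, then a Cox model for overall survival.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("dcc_cohort.csv")  # hypothetical: positive_nodes, total_nodes, os_months, death, ...
df["lnr"] = df["positive_nodes"] / df["total_nodes"]
df["high_lnr"] = (df["lnr"] > 0.4).astype(int)

cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "high_lnr", "age", "margin_positive"]],
        duration_col="os_months", event_col="death")
cph.print_summary()  # hazard ratios for high LNR and covariates
```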

Results:
A total of 265 patients were included (median age 67 years; 63.4% male) and were stratified into low- (n=199), moderate-, and high-LNR (>0.4) groups. The high-LNR group was less likely to have undergone a Whipple procedure (85.4% vs. 82.9% vs. 60.0%, p<0.01) and had higher proportions of margin-positive resection (19.6% vs. 19.5% vs. 45.8%, p<0.05), poor differentiation (26.2% vs. 36.6% vs. 52.2%, p<0.05), lymphovascular invasion (44.3% vs. 74.3% vs. 88.2%, p<0.001), and perineural invasion (81.0% vs. 69.2% vs. 91.3%, p>0.05). Multivariate analysis showed high LNR to be an independent predictor of poor RFS (HR 4.6, 95% CI 1.8-11.8, p=0.001) and OS (HR 2.2, 95% CI 1.0-4.6, p<0.05) (Table 1). Rates of adjuvant chemoradiation in the low-to-moderate LNR and high-LNR groups were 61.9% and 82.6%, respectively (p=0.07). Nevertheless, stratification by LNR showed no improvement in RFS or OS with adjuvant chemoradiation in any group.

Conclusion:
LNR can be used as a prognostic factor for recurrence and survival in patients undergoing curative-intent resection for DCC. Every effort should be made to perform an oncologic resection, with negative margins and adequate lymph node harvest, as adjuvant chemoradiation does not appear to provide LNR-specific improvements in long-term prognosis.
 

10.17 Outcomes of Pediatric Firearm Injuries at Adult and Pediatric Level-1 Trauma Centers

E. J. Onufer1, P. M. Choi1, C. M. Courtney1, M. Wallendorf1, A. M. Vogel1, M. S. Keller1  1Washington University,General Surgery,St. Louis, MO, USA

Introduction:  Controversy exists regarding optimal trauma center qualifications for management of children injured by firearms. We sought to determine if outcome differences exist for these patients if managed at adult vs pediatric, American College of Surgeons (ACS)-verified Level-1 trauma centers.

Methods: We conducted a retrospective review of the 2013-2014 National Trauma Data Bank. We included all patients aged <18 years who were injured by a firearm and admitted to an ACS-verified Level-1 trauma center. Patients who died on arrival to the emergency department or were transferred were excluded. Centers were classified as freestanding Pediatric Trauma Centers (PTC), Adult Trauma Centers with Pediatric qualifications (ATC/PTC), or Adult Trauma Centers (ATC). Patients were grouped into ages ≤14 years and 15-17 years.

Results: A total of 1,866 children met inclusion criteria. Younger patients were more commonly treated at a PTC (Table 1). Across all centers, both age groups demonstrated demographic and injury severity differences. After controlling for these differences, children ≤14 years admitted to an ATC/PTC had higher adjusted odds of blood transfusion (OR 2.77; 95% CI 1.13-6.8) and laparotomy (OR 3.57; 95% CI 1.14-11.24) compared with those admitted to a PTC. Children 15-17 years of age managed at either an ATC (OR 8.87; 95% CI 3.14-25.05) or an ATC/PTC (OR 9.38; 95% CI 3.15-27.97) also had greater adjusted odds of laparotomy than those managed at a PTC. There were no differences in mortality, complications, computed tomography use, thoracotomies, length of stay (LOS), ICU LOS, or ventilator days.

Conclusion: After accounting for demographic and injury severity differences, there were no differences in outcome variables between the PTC, ATC/PTC, and ATC groups. These data support the management of children with firearm-related injuries, typically considered an “adult pattern injury”, at PTCs.

 

10.16 Is There a “Weekend Effect” in Emergency General Surgery?

D. Metcalfe1, O. A. Olufajo6, A. J. Rios-Diaz5, C. K. Zogg4, R. Chowdhury2, J. M. Havens2, A. Haider2, A. Salim2,3  6Washington University School Of Medicine,Department Of Surgery,St Louis, MO, USA 1University Of Oxford,Kadoorie Centre For Critical Care Research,Oxford, OXFORDSHIRE, United Kingdom 2Brigham And Women’s Hospital,Center For Surgery And Public Health,Boston, MA, USA 3Harvard Medical School,Boston, MA, USA 4Yale University School Of Medicine,New Haven, CT, USA 5Thomas Jefferson University Hospital,Department Of Surgery,Philadelphia, PA, USA

Introduction:  

Weekend admission is associated with increased mortality across a range of patient populations and healthcare systems. However, it is unknown whether this “weekend effect” exists in emergency general surgery (EGS). The aim of this study was to determine whether weekend admission is independently associated with serious adverse events (SAE), in-hospital mortality, or failure to rescue (FTR) in an EGS population.

 

Methods:  

An observational study was undertaken using the Nationwide Inpatient Sample (NIS) 2001-2011, the largest all-payer inpatient database in the United States, representing a 20% stratified sample of hospital admissions. The inclusion criteria were all inpatients with a primary ICD-9-CM diagnosis of acute appendicitis, abdominal cavity hernia (obstructed or strangulated), intestinal obstruction, or peritonitis. Outcomes were SAE, in-hospital mortality, and FTR (in-hospital mortality among patients who developed an SAE). Logistic multivariable regression models were used to adjust for patient- (age, sex, race, payer status, Charlson comorbidity index) and hospital-level (trauma designation, hospital bed size) characteristics.

 

Results:

There were 758,915 individual patient records (3.7 million weighted admissions). The overall rate of SAE was 10.6% (10.9% weekend, 10.5% weekday, p<0.001), in-hospital mortality 1.4% (1.4% weekend, 1.4% weekday, p=0.048), and FTR 8.7% (8.7% weekend, 8.7% weekday, p=0.819). Within logistic regression models, weekend admission was an independent risk factor for the development of SAE (aOR 1.04, 95% CI 1.02-1.06) but not for FTR (aOR 0.98, 0.91-1.05) or in-hospital mortality (aOR 1.01, 0.95-1.07).

 

Conclusion:

This study did not find any evidence of increased mortality for EGS patients admitted at the weekend.

09.03 Viscoelastic Monitoring in Pediatric Trauma: A Survey of Pediatric Trauma Society Members

R. T. Russell1, I. I. Maizlin1, A. M. Vogel2  1University Of Alabama At Birmingham, Children’s Of Alabama,Department Of General Surgery, Division Of Pediatric Surgery,BIrmingham, AL, USA 2Washington University, St. Louis Children’s Hospital,Department Of General Surgery, Division Of Pediatric Surgery,St. Louis, MO, USA

Introduction: Viscoelastic monitoring (VEM), including TEG® (thromboelastography) and ROTEM® (rotational thromboelastometry) in the setting of goal directed hemostatic resuscitation has been shown to improve outcomes in adult trauma. The American College of Surgeons (ACS) Committee on Trauma recommends that “thromboelastography should be available at Level I and Level II trauma centers”. The purpose of this study is to determine the current availability and utilization of VEM in pediatric trauma.

Methods: After IRB and Pediatric Trauma Society (PTS) approval, a survey was administered to the current members of the PTS via Survey Monkey®. The survey collected demographic information, hospital and trauma program type, volume of trauma admissions, and use and/or availability of VEM for pediatric trauma patients.

Results: We received 107 responses representing 77 unique hospitals. Survey respondents were 61% physicians, 29% nurses, 6% trauma program managers, and 4% NPs/PAs. Over half of providers worked in a freestanding children's hospital. Seventy-nine percent of respondents were from hospitals with >200 trauma admissions/year, 42% were providers at ACS Level 1 pediatric trauma centers, and 77% practiced at State Level 1 designated centers. VEM was available to 63% of providers, but only 31% employed VEM in pediatric trauma patients. Of those who had no VEM available, over 75% would utilize this technology if it were available. Most providers continue to rely on conventional coagulation tests (CCT) to monitor coagulopathy in pediatric trauma patients after admission (Figure 1).

Conclusions: While a growing body of evidence demonstrates the benefit of viscoelastic hemostatic assays in management of adult traumatic injuries, VEM during active resuscitation is infrequently used by pediatric trauma providers, even when the technology is readily available. This represents a timely and unique opportunity for quality improvement in pediatric trauma.