59.10 Center-level Pharmacologic VTE Prophylaxis Strategies and Risk of PE

S. Mason1, J. Byrne1, B. Haas1, C. Hoeft2, M. Neal2, A. Nathens1 1Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; 2American College Of Surgeons, Chicago, IL, USA

Introduction:

Pulmonary embolism (PE) is a rare but potentially fatal complication of trauma. We aimed to characterize the variation in venous thromboembolism prophylaxis (VTEP) across trauma centers, and the impact of this variation on the risk of PE.

Methods:

Data were derived from the ACS Trauma Quality Improvement Program between 2013 and 2014. Patients with blunt multi-system injury admitted for at least 72 hours were included. Patients with severe head injuries were excluded, recognizing that VTEP practices may be distinct in these patients, as were patients who never received any VTEP during admission. Centers with a median time to initiation of <48 hours were classified as early starters (ES) of VTEP. We evaluated the impact of a center's classification as ES or late starter (LS) on rates of pulmonary embolism. Multivariable marginal logistic regression modeling was performed to determine the association between center-level VTEP practices and PE after adjusting for case mix.

Results:

12,596 patients meeting inclusion criteria were identified at 177 trauma centers. The median time to VTEP across all centers was 45 (IQR 25) hours. The most commonly administered VTEP agent was low-molecular-weight heparin (82%, N=10,269); 17% (N=2,156) received unfractionated heparin. The incidence of PE was 2% (N=299) overall: 3% (N=196) in ES and 2% (N=103) in LS centers. After adjusting for case mix and center-level clustering, there was no association between PE rates and care received in an ES vs LS center (OR 0.96, 95% CI 0.69-1.32). The effect of receiving care at an ES vs LS center was not modified by choice of VTEP agent, though prophylaxis with unfractionated heparin was associated with a significant increase in PE rates (OR 1.58, 95% CI 1.09-2.28).

Conclusion:

Considerable variability in time to VTEP exists among trauma centers. No significant association between center VTEP timing and hospital-level PE rates was identified. This may reflect center-level practices not measured in TQIP, such as the use of non-pharmacologic means of VTEP, practices related to early mobility, or other as-yet-unidentified strategies that mitigate PE risk.

59.07 Does Thoracostomy Tube Position Really Matter?

N. W. Kugler1, P. Knechtges2, D. Milia1, T. W. Carver1, L. Goodman2, J. S. Paul1 1Medical College Of Wisconsin, Trauma And Critical Care, Milwaukee, WI, USA; 2Medical College Of Wisconsin, Radiology, Milwaukee, WI, USA

Introduction: Hemothorax (HTx), pneumothorax (PTx), or both (HPTx) can be managed with tube thoracostomy (TT) in the majority of cases. Improperly positioned tubes are common, with rates near 30%. Management options include observation, repositioning, replacement, additional TT placement, or early surgical intervention. This study was performed to determine whether TT position affects the rate of secondary intervention.

Methods: Using the trauma registry, a retrospective review of all adult trauma patients undergoing bedside TT placement over a 4-year period was performed. A staff radiologist classified the position of the original TT as ideal, non-ideal, or kinked based on the AP chest x-ray. Ideal TTs were apically directed, terminating in the lateral or mid thoracic cavity. Non-ideal TTs were those lying within the fissure or in a supra-diaphragmatic position. TTs with the sentinel hole outside the thoracic cavity were excluded. The primary outcome was any secondary intervention (TT replacement, additional TT placement, or surgical intervention).

Results: 486 adult trauma patients (547 hemithoraces) underwent TT placement and met inclusion criteria. Indications for placement were HPTx (37.2%), HTx (28.8%), and PTx (34.0%). The majority of patients were male (76%) and sustained blunt trauma (67.9%); median age was 41 years (IQR 26-55 years). TT positioning was ideal in 429 (78.4%) and non-ideal in 118 (21.6%) hemithoraces. The secondary intervention rate was 27.8%, including 109 (19.9%) replaced or additional TTs, 31 (5.7%) VATS, and 12 (2.2%) thoracotomies. The rate of secondary intervention for ideal vs non-ideal TT position was 25.1% vs 37.3% (p=0.009). Kinked TTs were noted in 33 (6%) hemithoraces, with a 45.5% secondary intervention rate. Given the likelihood of treatment bias, kinked TTs were removed from the final analysis; the difference in secondary intervention rates was then no longer significant (25.1% vs 34.1%, p=0.09).

Conclusion: The position of a non-kinked TT with the sentinel hole within the thoracic cavity does not affect secondary intervention rates, including the rate of surgical intervention. Inherent practice bias is apparent: with ideal tube position, surgeons were significantly more likely to proceed with early operative intervention. Given that over 20% of individuals with additional TT placement required operative intervention for definitive management, early operative intervention in the setting of a non-kinked TT provides ideal patient care.

59.08 Epidemiology and One-Year Sequelae of Acute Compartment Syndrome

D. Metcalfe1, A. Haider1, O. A. Olufajo1, M. B. Harris2, C. K. Zogg1, A. J. Rios Diaz1, M. J. Weaver2, A. Salim1 1Harvard Medical School, Center For Surgery And Public Health, Boston, MA, USA; 2Brigham & Women’s Hospital, Department Of Orthopedic Surgery, Boston, MA, USA

Introduction:

Acute compartment syndrome (CS) is an important diagnosis for general, vascular, orthopedic, and trauma surgeons. However, only single-center studies have previously described the epidemiology of CS, and the long-term outcomes for these patients have yet to be reported. In this study, we sought to describe the epidemiology of CS and the rate of subsequent limb loss using a comprehensive statewide inpatient database.

Methods:

All CS diagnoses (ICD-9-CM 729.7* and 958.9*) were extracted from the California State Inpatient Database (SID, 2007-2011), an all-payer dataset that captures 98% of hospital admissions. The SID was linked to the AHA Annual Survey Database to include hospital-level characteristics, and the 2010 US Census provided a population denominator. Patients were tracked longitudinally using a unique identifier within the SID to identify 30-day readmissions to any hospital in California and subsequent need for amputation within 12 months. Multivariable logistic regression was used to identify independent risk factors for amputation. Covariates in this model were age, race, sex, payer status, Charlson Comorbidity Index, Injury Severity Score, lower/upper limb, weekend admission, trauma center designation, hospital bed size, and teaching hospital status.

Results:

There were 6,471 CS cases (1,294 per year), corresponding to an annual incidence of 3.5 per 100,000 population. The mean age was 46.2 (SD 20.0). Patients were predominantly male (73.6%), white (58.6%), publicly insured (41.1%), and admitted to either a level 1 (44.1%) or level 2 (46.8%) trauma center. Most cases (61.0%) were secondary to trauma, and the majority of these (63.5%) were associated with fracture. Both traumatic and non-traumatic CS predominantly affected the lower limb (71.3% and 72.1%, respectively).
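The reported incidence can be sanity-checked with simple arithmetic. The sketch below assumes the 2010 US Census count for California (37,253,956) as the population denominator; the exact denominator used by the authors is not stated in the abstract.

```python
# Hypothetical sanity check of the reported annual incidence of CS.
cases = 6471                     # CS cases in the SID, 2007-2011
years = 5                        # 2007 through 2011 inclusive
ca_population_2010 = 37_253_956  # assumed denominator: 2010 US Census, California

annual_cases = cases / years
incidence_per_100k = annual_cases / ca_population_2010 * 100_000
print(f"{annual_cases:.0f} cases/year, {incidence_per_100k:.1f} per 100,000")
```

Under that assumption, the calculation reproduces the abstract's figures of roughly 1,294 cases per year and 3.5 per 100,000.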

Overall, 3,119 patients (48.2%) suffered complications, 325 (5.0%) died, 670 (10.4%) required unplanned readmission within 30 days, and 95 (1.5%) required a major amputation within 12 months of discharge. Most amputations (87.4%) occurred during subsequent admissions rather than during the acute hospitalization. Significant independent risk factors for major amputation were lower-limb CS (OR 9.0, 95% CI 3.29-24.7) and a non-traumatic cause (OR 4.72, 95% CI 2.6-8.7).

Conclusion:

CS is an infrequent but potentially devastating diagnosis that can lead to limb loss. There are significant long-term sequelae, with the majority of amputations becoming necessary after discharge from hospital.

59.09 Understanding the Burden of Traumatic Brain Injury: Incidence Beyond the ED

C. K. Zogg1, E. B. Schneider1,2,3, J. W. Scott1, M. Chaudhary1, A. J. Rios Diaz1, E. Kiwanuka5, S. Haring1,6, A. A. Shah1,3, L. L. Wolf1, J. K. Canner3, A. Salim1, E. J. Caterson5, A. H. Haider1 1Center For Surgery And Public Health, Harvard Medical School & Harvard School Of Public Health, Department Of Surgery, Brigham And Women’s Hospital, Boston, MA, USA; 2Department Of Epidemiology, Johns Hopkins Bloomberg School Of Public Health, Baltimore, MD, USA; 3Department Of Surgery, Johns Hopkins University School Of Medicine, Baltimore, MD, USA; 4Division Of General Surgery, Mayo Clinic, Scottsdale, AZ, USA; 5Division Of Plastic Surgery, Warren Alpert Medical School, Brown University, Providence, RI, USA; 6Department Of Health Policy & Management, Johns Hopkins Bloomberg School Of Public Health, Baltimore, MD, USA; 7Department Of Plastic And Reconstructive Surgery, Brigham & Women’s Hospital, Boston, MA, USA

Introduction: The CDC’s 2004 pyramid model of TBI led to recognition of deaths (tier 1), hospitalizations (tier 2), and ED presentations (tier 3) but did not account for other types of care (tier 4). There is currently no estimate of the number of TBI patients treated outside of the ED. To address this dearth of knowledge about patients who seek other types of care, the objective of this study was to describe the epidemiology of TBI among a national sample of adults who died in hospital, were admitted for inpatient hospitalization and survived, were discharged from the ED, or were treated exclusively in an outpatient setting.

Methods: The 2010-2012 MarketScan inpatient/outpatient databases (>100 commercial insurers; 56 million enrollees) were queried for adults (18-64y) with CDC-defined TBI. A restricted definition excluding optic (951.1-3), SBS (995.55), and unspecified head (959.01) injuries was also employed. To ascertain representativeness, weighted death/inpatient/ED data were extracted from the 2010-2012 NIS and NEDS. Initial patient visits from each source were categorized by the highest level of healthcare received. In MarketScan, differences relative to the outpatient setting were examined for significant associations with demographic, case-mix (CCI, LOC, head AIS, ISS, morbidity), managing-provider, and diagnosis-related factors. Annual rates per 100,000 adult enrollees/US population and estimates of the national burden were calculated by pyramid treatment-tier using stratified data from each source.

Results: The study identified 673,239 incident TBI cases in MarketScan, of which 59.9% (n=402,987) were managed in the outpatient setting. 107,876 (weighted n=536,419) inpatients and 856,864 (n=3,858,239) ED presentations were identified in the NIS and NEDS. In both, privately insured patients (68.1% of the adult US population) represented roughly half (48.7%; 47.2%) of TBI cases. Demographic and clinical factors were predominantly comparable between datasets at parallel treatment-tiers. Annual incidences are presented (table). Despite these patients being less severely injured than those in higher treatment-tiers, a 2012 outpatient incidence of 415.6 TBI per 100,000 adult enrollees suggests an annual national burden from outpatient consultations of nearly 1 million initial physician visits, >301,000 hours of primary care physician time, and thousands more hours of follow-up treatment/care.

Conclusion: The results demonstrate a larger number of patients requiring TBI assessment/care than previous estimates describe. As increased awareness of TBI leads to the development of primary-prevention measures, a better understanding of outpatient care (three-fifths of patients) and its currently unknown association with long-term consequences will be required.

59.05 Fibrinolysis Shutdown: Patient Presentation and Impact on Survival Among Severely Injured Children

R. Yanney1, H. B. Moore3, K. M. Mueck1, I. Liras1, J. C. Cardenas1, M. T. Harting1,2, B. A. Cotton1 1University Of Texas Health Science Center At Houston, Houston, TX, USA; 2Children’s Memorial Hermann Hospital, Houston, TX, USA; 3Denver Health Medical Center, Aurora, CO, USA

Introduction: Fibrinolysis is a physiologic process that attempts to maintain microvascular patency by breaking down excessive clot. Previous data in both adults and children have shown that hyperfibrinolysis (HF) is associated with a doubling of mortality. Recently, data in adults have demonstrated that fibrinolysis shutdown (SD), an acute impairment of fibrinolysis, is also associated with significant increases in mortality. The purpose of the current study was to assess (1) the incidence and presentation of fibrinolysis phenotypes in pediatric trauma patients and (2) the impact of SD on mortality among these patients.

Methods: Pediatric trauma patients (0-17 years of age) who (1) were admitted 2010-2014, (2) met highest-level trauma activation and (3) had severe anatomic injury were included in this analysis. Severe anatomic injury was defined as injury severity score (ISS) >15. Admission fibrinolysis phenotypes were defined by the clot lysis measured by TEG at 30 minutes (LY30): SD ≤ 0.8%, physiologic 0.9-2.9%, HF ≥ 3%. Massive transfusion was defined as >10 U RBC in 6 hours. Continuous and dichotomous variables were evaluated with ANOVA and Kruskal-Wallis, respectively.

Results: 489 patients met inclusion criteria. 34% demonstrated evidence of SD on arrival (38% were physiologic, 28% were HF). SD patients were older (median age 16 vs. 12; p<0.001) and more likely to be female (41 vs. 28%; p=0.044) compared to HF. While mechanism and ISS were similar between SD and HF, ISS was higher in HF than in the physiologic group (median 27 vs. 25; p=0.010). Arrival blood pressure, pulse, and GCS were similar between the three groups. SD patients were more hypocoagulable than the HF cohort by TEG k-time (median 1.5 vs. 1.3 min; p=0.029) and alpha-angle (72 vs. 73 deg; p=0.020). HF patients were more hypocoagulable than the physiologic group by TEG ACT (median 128 vs. 121 sec; p=0.002) and mA (62 vs. 64 mm; p=0.003). All other arrival labs were similar between groups. Both SD and HF patients had increased massive transfusion (12% and 10%, respectively, vs. 3%; p=0.004) and mortality rates (19% and 20%, respectively, vs. 8%; p=0.003) compared to the physiologic group (FIGURE). When the analysis was repeated among only those children 5 years of age or less, the results were similar, including higher mortality (SD: 15% vs. HF: 10% vs. physiologic: 2%; p=0.026).

Conclusion: Fibrinolysis shutdown (SD) appears to carry similar mortality risk to that of HF among severely injured children. The current study provides additional evidence of distinct phenotypes of coagulation impairment and that individualized hemostatic therapy may be required. While HF may be addressed with anti-fibrinolytics, SD presents a significant treatment challenge in the injured patient.

59.06 Comparison Of Platelet & RBC Indices After Splenectomy, Embolization, & Observation In Trauma

A. Cipriano1, U. MacBean1, B. Wernick1, R. N. Mubang1, T. R. Wojda1, S. Liu1, S. Bezner-Serres3, D. C. Evans2, B. A. Hoey1, S. Odom3, P. Thomas1, C. H. Cook3, S. P. Stawicki1 1St Luke’s University Health Network, Department Of Surgery, Bethlehem, PA, USA; 2Ohio State University, Department Of Surgery, Columbus, OH, USA; 3Beth Israel Deaconess Medical Center, Department Of Surgery, Boston, MA, USA

Introduction: The spleen is one of the most commonly injured abdominal organs in blunt trauma, yet our knowledge of post-injury hematologic parameters is incomplete. This study compares fluctuations of platelet and red blood cell (RBC) indices during the first 45 post-injury days in patients with splenic injuries who underwent clinical observation (O), embolization (E), or splenectomy (S). We hypothesized that patterns of thrombocytosis, RBC counts, and/or RBC indices (RBCI) vary across the three treatment approaches.

Methods: Following IRB approval at three institutions, a retrospective study of platelet/RBC counts and RBCI (red blood cell distribution width, RDW; mean corpuscular volume, MCV) was conducted (Mar 2000 to Dec 2014). We studied patients with admission lengths of stay >96 hours, giving representative samples for each sub-group (O, E, and S). Demographics and injury severity data were abstracted. Composite 7-period moving average graphs of platelet counts and RBC/RBCI from the time of admission to the latest available lab draw (or 45 days maximum) were constructed. Non-parametric statistical testing for any corresponding differences was then performed.
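A minimal sketch of the 7-period moving-average smoothing described above. The abstract does not specify whether the window was trailing or centered; a trailing window is assumed here, and the function name and sample values are illustrative only.

```python
def moving_average(values, window=7):
    """Trailing moving average over `window` consecutive lab draws."""
    if len(values) < window:
        return []  # not enough observations to form one full window
    return [sum(values[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(values))]

# Illustrative platelet counts (x10^3/uL) over 8 consecutive periods:
# eight observations yield two smoothed points (windows 1-7 and 2-8).
smoothed = moving_average([220, 240, 260, 280, 300, 320, 340, 360])
```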

Results: Multiple data points (n=1,110) from 75 patients (25 S, 29 E, 32 O) were analyzed for each study parameter. Median age was 41 years, median ISS was 22 (21 for S, 19 for E, 22 for O; p=NS), median GCS was 15, and 67% were male. Median splenic injury grade tracked with treatment modality (grade 4 for S, 3 for E, 2 for O). There were no differences in RBC count among the three groups (p=NS). In aggregate, RDW was greater following S (14.9%) than E (14.3%) or O (13.5%; p<0.01), with the three converging by day 45. Despite temporal variability, MCV was highest for the observation group (92.3 vs 91.1 in E and 90.6 in S, p<0.01). For platelets, there were no differences between composite values for S (mean, 378.8) and E (350.1), with both being greater than the O group (272.3, p<0.01) between days 15-41. Prior to day 15, all groups had similar platelet counts; after day 41, the O and E groups converged at levels approximating 50% of the S group.

Conclusion: This study describes important trends and patterns in RBC counts, RBCI, and platelets following splenic injuries managed with S, E, or O. Although no differences were noted in RBC counts between therapies, RDW was significantly greater following S than either E or O. MCV displayed significant temporal variability. Finally, platelet counts were similar for S and E during peak elevations (days 15-41), followed by convergence of the E and O groups at normal levels around day 45. Our results provide a foundation for further research in this still poorly explored area, with a focus on the clinical relevance of the observed patterns.

59.01 Older Adults: Partial Mitigation of Racial/Ethnic Disparities in Emergency Surgical Care?

C. K. Zogg1, W. Jiang1, A. A. Shah1,2, J. W. Scott1, E. J. Lilley1, L. L. Wolf1, M. Chaudhary1, L. M. Kodadek3, O. A. Olufajo1, D. Metcalfe1, E. B. Schneider1, Z. Cooper1, A. Salim1, A. H. Haider1 1Center For Surgery And Public Health, Harvard Medical School & Harvard School Of Public Health, Department Of Surgery, Brigham And Women’s Hospital, Boston, MA, USA; 2Division Of General Surgery, Mayo Clinic, Scottsdale, AZ, USA; 3Department Of Surgery, Johns Hopkins University School Of Medicine, Baltimore, MD, USA

Introduction: As the fastest growing segment of the US population, older adults (≥65y) represent an increasing number of patients requiring emergency general surgery (EGS) care. Efforts are needed to understand their unique health needs. Previous studies have found that universal access to insurance mitigates racial/ethnic disparities in younger adults. However, little is known about the role of race/ethnicity in post-discharge outcomes after EGS among older patients. The objective of this study was to determine whether racial/ethnic disparities in outcomes persist among Medicare beneficiaries requiring EGS consultations/care.

Methods: Seven years (2005-2011) of Medicare data were queried for patients with a primary EGS diagnosis, as defined by the AAST, who were admitted to inpatient status through the ED and had a single race/ethnicity recorded. Associations with demographic, clinical, and hospital-level factors were considered using descriptive statistics. Risk-adjusted multilevel logistic regression assessed differences in in-hospital mortality and in mortality, major morbidity, and readmission rates at 30, 90, and 180 days. Differences in discharge disposition, including utilization of hospice or rehabilitation, were also considered.

Results: A total of 7,763,510 patients were identified: 85.5% White, 9.6% Black, 2.1% Hispanic, and 1.1% Asian/Pacific Islander. Older adults were approximately evenly distributed across 5-year increments of age (15.2-19.6%) with 9.9% (n=770,560) aged ≥90y. One-fifth (22.2%) was managed in the ICU (1.0% surgical; 9.9% general), and 32.7% had a CCI ≥3. Overall risk-adjusted outcomes are presented (Table). Worse outcomes among older minority patients were not consistently found; differences in mortality decreased, while disparities in morbidity/readmission paradoxically remained. Stratification by diagnostic group and among patients restricted to octogenarian/nonagenarians – known to experience different outcomes than younger geriatric patients (65-80y) – revealed similarly mixed associations.

Conclusion: The lack of significant Black-White differences in in-hospital and 90-day mortality, together with the protective effects reported at 30 days, suggests that enhanced insurance coverage of emergency surgical care among older adult patients is associated with mitigation of racial/ethnic disparities. Akin to older trauma patients, who show lower inpatient mortality for Black versus White patients, older adult EGS patients demonstrate persistent Black-White differences in longer-term (30/90/180-day) survival that extend well beyond the index hospital stay. Ongoing efforts are needed to understand the persistence of disparities in morbidity and readmission rates.

59.02 Damage Control Cultures in Elderly Ventilated Trauma Patients: A Predictor of Mortality

A. Ko1, M. Y. Harada1, G. Barmparas1, G. M. Thomsen1, E. Smith1, T. Li1, B. J. Sun1, E. J. Ley1 1Cedars-Sinai Medical Center, Division Of Trauma And Critical Care, Los Angeles, CA, USA

Introduction: Elderly trauma patients are vulnerable to infection and at increased risk of delayed death. When ventilated, the elderly are at particularly high risk for infection, so increased culture surveillance may be indicated. How mortality is affected by early infections compared to later infections is not well described. If mortality is increased with early infections, sending empiric cultures at admission, termed damage control cultures, may allow these infections to be diagnosed and treated sooner. We sought to investigate whether an early infection predicts higher mortality in ventilated elderly trauma patients.

Methods: We conducted a retrospective review of all mechanically ventilated trauma patients age ≥ 65 years admitted between January 1, 2009 and December 31, 2013 at a Level 1 trauma center. Clinical data and sputum, blood, and urine culture results were collected. Patients with a positive culture within 4 days of admission (EARLY) were compared to those with a positive culture after 4 days (LATE).

Results: A total of 163 elderly trauma patients requiring ventilator support were identified, of whom 126 (77.3%) had cultures sent during hospitalization. Of these cultured patients, 96 (76.2%) had at least one positive culture: 51 (53.1%) were EARLY and 46 (46.9%) were LATE. The two cohorts were similar in age, gender, admission systolic blood pressure (SBP), mechanism of injury, and injury severity score (ISS). The EARLY cohort had a lower admission GCS (10.4 ± 4.6 v. 12.6 ± 3.6, p = 0.008), fewer ventilator days (6.8 ± 5.7 v. 13.9 ± 15.3, p = 0.004), shorter hospital length of stay (LOS) (14.8 ± 9.5 v. 31.4 ± 22.0, p < 0.001), and shorter intensive care unit (ICU) LOS (9.5 ± 6.6 v. 13.0 ± 9.5, p = 0.005), but higher mortality (43.1% v. 17.8%, p = 0.007). Multivariate analysis demonstrated that an early positive culture was a predictor of higher mortality (AOR 3.77, p = 0.008).

Conclusion: Early infection in mechanically ventilated elderly trauma patients is associated with high mortality. Damage control cultures in this population may identify early infections, allowing earlier treatment and potentially reduced mortality.

59.03 Location, Location, Location… Site of Rib Fracture Predicts Outcomes in Trauma Patients

E. DeSouza1, T. J. Zens1, G. Leverson1, H. Jung1, S. Agarwal1 1University Of WI School Of Medicine And Public Health, General Surgery, Madison, WI, USA

Introduction: The number of rib fractures an individual suffers has long been considered an independent predictor of morbidity and mortality; however, in previous studies all rib fractures were considered to be the same. We hypothesized that not only total number of ribs fractured, but also location of fracture is a strong predictor of patient outcomes in terms of mortality, length of stay, and discharge disposition.

Methods: An IRB-approved retrospective chart review was performed at an academic, level one trauma center. Patients who suffered traumatic rib fractures between January 2013 and April 2015 were identified by CPT codes for possible inclusion in the study. Individual computed tomography scans of the chest were reviewed and validated against staff radiology reads. The location of each rib fracture was characterized as anterior, posterior, or lateral; upper, middle, or lower; and right vs. left. Using SAS statistical software, logistic regression and ANOVA were used to examine relationships between rib fracture location and patient outcomes in terms of length of stay, ICU length of stay, discharge disposition, and overall mortality.

Results: 929 patients were initially reviewed for possible inclusion in the study and 248 were excluded. A total of 3,864 fractures were identified in the patient population. Statistically significant positive correlations were identified between length of stay and number of rib fractures in all locations, with the strongest relationships seen in the upper (0.191, p<0.0001), middle (0.185, p<0.0001), and lateral (0.189, p<0.0001) locations. Similarly, statistically significant correlations were seen between all rib fracture locations and ICU days, with the strongest relationships seen in patients with upper (0.184, p<0.0001), middle (0.189, p<0.0001), and lateral (0.205, p<0.0001) fractures. In addition, there were statistically significant associations between in-hospital mortality and lateral rib fractures, with non-survivors having on average 2 more lateral rib fractures than survivors (p=0.0192), and left-sided rib fractures, with non-survivors also having on average 2 more left-sided rib fractures than survivors (p=0.0287). A statistically significant association was seen between discharge disposition and left-sided fractures (p=0.0160) as well as lateral rib fractures (p=0.0002). There was a linear relationship between type of discharge, in terms of increasing need for support, and the number of lateral rib fractures.

Conclusion: Rib fracture location, particularly left-sided and lateral fractures, helps predict outcomes in terms of mortality and discharge disposition. This may assist in discussing care and planning resources for traumatically injured patients. A larger validation trial is needed to confirm these results.

59.04 Rapid-thromboelastography (r-TEG) provides optimal thresholds for directed resuscitation after trauma

P. M. Einersen2, M. P. Chapman1, H. B. Moore2, E. Gonzalez2, C. C. Silliman2,3, A. Banerjee1, A. Sauaia1, E. E. Moore1,2 1Denver Health Medical Center, Surgery, Aurora, CO, USA; 2University Of Colorado Denver, Surgery, Aurora, CO, USA; 3Children’s Hospital Colorado, Pediatric Hematology-Oncology, Aurora, CO, USA

Introduction: Uncontrolled hemorrhage is the leading cause of preventable death from trauma, with hemorrhagic shock accounting for approximately 50% of deaths in hospitalized patients. Massive transfusion protocols (MTPs) offer a long-proven benefit in the resuscitation of these patients, and recently the superiority of TEG-guided resuscitation over pre-determined component-ratio strategies has been established. However, optimal cutoff values for TEG-driven interventions have yet to be identified. We sought to establish optimal thresholds for r-TEG-driven resuscitation based on prospective data collected in severely injured patients at risk of trauma-induced coagulopathy.

Methods: R-TEG data were reviewed for patients from 3 randomized, prospective studies conducted at a level 1 trauma center from September 26, 2010 to June 30, 2015. Criteria for inclusion were highest-level trauma activation in patients ≥18 years of age with hypotension presumed due to acute blood loss. Patients were excluded if an r-TEG was not obtained within one hour of injury, if injuries were deemed unsalvageable, or in cases of isolated GSW to the head, pregnancy, or liver disease. Receiver operating characteristic (ROC) analysis was performed to test the predictive performance of r-TEG for substantial transfusion requirement, defined as ≥4 units of RBCs in the first hour of hospitalization. Cut-point analysis, utilizing the Youden index, distance to (0,1), and sensitivity-specificity equality, was then performed on these ROC curves to determine optimal thresholds for TEG-based resuscitation.
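As an illustration of one of the three cut-point criteria named above, here is a minimal sketch of Youden-index cut-point selection for a single variable; the function name and toy data are hypothetical, and the distance-to-(0,1) and sensitivity-specificity equality criteria, while analogous, are not shown.

```python
def youden_cutpoint(values, labels):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1,
    assuming higher values of the variable predict the positive outcome
    (e.g., ACT vs. substantial transfusion requirement)."""
    best_j, best_cut = -1.0, None
    for cut in sorted(set(values)):          # candidate thresholds
        tp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v < cut and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < cut and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 0)
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j
```

For a variable where lower values predict the outcome (e.g., MA), the comparison direction would simply be reversed.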

Results: ROC analysis of r-TEG data from 194 patients who met inclusion criteria in one of three concurrent prospective studies yielded areas under the curve (AUC) for substantial transfusion requirement of 70% or greater for ACT, α, MA, and LY30 (70%, 82%, 80%, and 71%, respectively). There was considerable overlap in the confidence intervals of all AUCs, indicating these TEG variables did not differ statistically in their predictive capacity. Optimal cut-point analysis of the resultant ROC curves was performed; for each variable, sensitivity-specificity equality yielded the most sensitive cut-point: ACT >128 sec, MA <57 mm, α <66°, and LY30 >3%.

Conclusion: Through cut-point analysis of prospective TEG data, we have identified optimal thresholds for initiating TEG-based resuscitation for ACT, MA, α, and LY30, favoring the more conservative thresholds set forth by sensitivity-specificity equality analysis. These thresholds should be validated in a prospective multicenter trial.

58.08 Shock Index Pediatric Age Adjusted (SIPA) is more Accurate than Hypotension for Trauma Activation

S. N. Acker1, B. Bredbeck1, D. A. Partrick1, A. M. Kulungowski1,2, C. C. Barnett2, D. D. Bensard1,2 1Children’s Hospital Colorado, Pediatric Surgery, Aurora, CO, USA; 2Denver Health Medical Center, Department Of Surgery, Denver, CO, USA

Introduction: The 6 criteria for highest-level trauma team activation include systolic blood pressure (SBP) <90 mmHg for adults or age-adjusted hypotension for children. These criteria aim to identify injured patients likely to require emergent intervention and, therefore, trauma team presence. We have previously demonstrated that the shock index, pediatric age-adjusted (SIPA) accurately identifies severely injured children following blunt trauma. We hypothesized that an elevated SIPA would more accurately identify injured children requiring highest-level trauma team activation and emergent intervention than age-adjusted hypotension.

Methods: We performed a retrospective review of all children age 4-16 admitted to one of two trauma centers following blunt trauma with an injury severity score >15 from 1/07 to 6/13. Indicators of the need for trauma team activation included blood transfusion, emergent operation (including craniotomy, intracranial pressure monitor or drain, chest tube, thoracotomy, angioembolization, or laparotomy), or endotracheal intubation (ETI) within 24 hours of admission. SIPA represents the maximum normal shock index (SI) for a given age group and was derived as the maximum normal HR divided by the minimum normal SBP for each age group. Cutoffs were SI >1.22 (age 4-6), >1.0 (7-12), and >0.9 (13-16). Age-adjusted hypotension cutoffs were SBP <90 (age 4-6) and SBP <100 (7-16).
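The SIPA screen described above reduces to comparing the shock index (HR/SBP) against an age-banded cutoff. A minimal sketch using the cutoffs from the abstract follows; the function name is illustrative only.

```python
def sipa_elevated(age_years, heart_rate, sbp_mmhg):
    """True if the shock index (HR/SBP) exceeds the age-adjusted SIPA cutoff:
    >1.22 (age 4-6), >1.0 (age 7-12), >0.9 (age 13-16)."""
    si = heart_rate / sbp_mmhg
    if 4 <= age_years <= 6:
        return si > 1.22
    if 7 <= age_years <= 12:
        return si > 1.0
    if 13 <= age_years <= 16:
        return si > 0.9
    raise ValueError("SIPA cutoffs are defined here only for ages 4-16")

# e.g., a 5-year-old with HR 140 and SBP 100 has SI 1.4 (>1.22): elevated SIPA
```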

Results: 559 children were included; 21% underwent operation, 37% ETI, and 14% blood transfusion within 24 hours of admission. 56/559 (10%) were hypotensive, and 150/559 (27%) had an elevated SIPA. Hypotension alone poorly predicted the need for operation (13%), ETI (17%), or blood transfusion (22%). In contrast, operation (30%), ETI (40%), and blood transfusion (53%) were more likely in children with an elevated SIPA (Table 1). Of the 25 children who required all three interventions, 3 (12%) were hypotensive at presentation while 15 (60%) had an elevated SIPA (P<0.001).

Conclusion:

Elevated SIPA is superior to age-adjusted hypotension for identifying injured children likely to require emergent operation, ETI, or early blood transfusion. Only 10% of injured children requiring early intervention demonstrated age-specific hypotension. SIPA, an age-adjusted tool employing both heart rate and SBP, tripled the number of high-risk children identified who needed intervention. Still, nearly half of significantly injured children requiring at least one acute intervention were not identified by either hypotension or SIPA. Thus SIPA, like age-adjusted hypotension, cannot be relied upon solely for trauma team activation. However, SIPA, rather than age-adjusted hypotension, should be considered as one of the six minimum criteria for trauma team activation.

58.09 Population-based Validation of a Clinical Prediction Model for Congenital Diaphragmatic Hernias

D. P. Bent1, A. Benedict1,2, J. Nelson2, H. C. Jen1,2 1Tufts University School Of Medicine,Boston, MA, USA 2Tufts Medical Center,Boston, MA, USA

Introduction: Newborns with congenital diaphragmatic hernia (CDH) can have dramatically different mortality rates depending on their presentation. A recently published clinical prediction model from the CDH Study Group (CDHSG) Registry stratified newborns with CDH into low-, intermediate-, and high-risk groups based on birth information and associated anomalies. However, this registry-based prediction model has not been validated externally. The purpose of our study was to examine the validity of this new model in a statewide cohort of newborns with CDH.

Methods: Newborns with CDH in California between January 1st, 2007 and December 31st, 2012 were extracted from the Vital Statistics and Patient Discharge Data (VS-PDD) Linked Files. Subsequent patient transfers and discharges before one year of age were tracked via a de-identified birth ID. Our primary outcome was survival to discharge during the initial phase of care. Binary independent predictors were generated from birth weight, 5-minute Apgar score, and the presence of pulmonary hypertension, major congenital cardiac anomalies, and/or chromosomal anomalies. Performance of the CDHSG clinical prediction model (i.e., the total CDH score) in our VS-PDD cohort was evaluated in terms of discrimination and calibration.

Results: There were 3,213,822 live births in California during the study period. We extracted 753 newborns with a diagnosis of CDH, an incidence of ~1/4,250 births. Forty-eight infants (6.4%) were excluded due to incomplete birth information, leaving 705 unique infants with CDH in our study cohort. The majority were male (58.6%), and the median gestational age, birth weight, and 5-minute Apgar score were 38.7 weeks, 3000 g, and 8, respectively. Pulmonary hypertension was present in 30.1%, while major cardiac and chromosomal anomalies were found in 18.4% and 7.9% of our cohort, respectively. CDH newborns in our cohort were delivered in 150 different hospitals, whereas only 28 hospitals performed CDH repairs (1-85 repairs per hospital).

The observed mortality for low-, intermediate-, and high-risk groups according to the total CDH score were 7.7%, 34.3%, and 54.7%, while the predicted mortality for these three risk groups were 4.0%, 23.2%, and 58.5%. The CDHSG model performed well in our cohort with a C-statistic of 0.741 and good calibration (Fig. 1).
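For reference, the C-statistic reported above is the probability that a randomly chosen non-survivor receives a higher predicted risk than a randomly chosen survivor, with ties counted as half. A minimal pairwise computation (illustrative only; the function name and the scores used in testing are hypothetical, not study data):

```python
def c_statistic(scores, outcomes):
    """C-statistic (equivalent to the ROC AUC): the fraction of
    (event, non-event) pairs in which the event case received the
    higher predicted risk; ties contribute one half."""
    events = [s for s, y in zip(scores, outcomes) if y == 1]
    nonevents = [s for s, y in zip(scores, outcomes) if y == 0]
    pairs = len(events) * len(nonevents)
    if pairs == 0:
        raise ValueError("need at least one event and one non-event")
    wins = sum((e > n) + 0.5 * (e == n) for e in events for n in nonevents)
    return wins / pairs
```

A value of 0.5 indicates discrimination no better than chance; the 0.741 reported here indicates good discrimination.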

Conclusion: We identified the first non-voluntary population-based cohort of CDH newborns in the United States that can be accurately risk stratified using a recent clinical prediction model derived by the CDHSG. This cohort of CDH newborns may be used in the future to investigate hospital volume-outcome relationships, identify ideal resource allocation, and guide policy development for CDH infants in the U.S.

58.10 Should Children with Perforated Appendicitis be Managed on the Pediatric Hospitalist Service?

D. Ayo1, J. Fusco2, J. Fisher1, H. Ginsburg1, K. Kuenzler1, S. Tomita1 1New York University School Of Medicine,Surgery,New York, NY, USA 2Beth Israel Medical Center,Surgery,New York, NY, USA

Purpose: The pediatric hospitalist movement is growing and has largely followed the adult hospitalist model, broadening hospitalist practice to include surgical patients despite little data on outcomes. This study compared the outcomes of a pediatric surgical condition, perforated appendicitis, in two models—one where patients are managed by pediatric hospitalists with pediatric surgeons consulting and one where patients are managed by pediatric surgeons.

Methods: We reviewed the data of patients aged under 13 with perforated appendicitis from 2002 to 2012 in two health systems—one where patients are admitted to the pediatric surgical service and one where patients are admitted to the pediatric hospitalist service with surgeons as consultants. The patients included those operated on and those managed nonoperatively. Data were analyzed for age, sex, admissions, length of stay, laboratory tests, consults (excluding pediatrics and surgery), ultrasounds, CT scans, total imaging tests, interventional radiology (IR) procedures, and PICC (peripherally inserted central catheter) lines. Continuous variables were reported as means ± standard error and compared using 2-tailed unpaired t-tests. Nonparametric variables were analyzed by Mann-Whitney U tests and reported as medians ± interquartile ranges. Categorical variables were compared using chi-square testing. Statistical significance was accepted at p < .05.

Results: 52 patients were identified in the surgery group (SG) and 19 patients were identified in the hospitalist group (HG). Treatment and outcomes related characteristics of each group are shown in Table 1. Compared to the SG, the HG patients had a statistically significant higher number of laboratory tests, consults, and imaging tests. The SG patients were more likely to have ultrasound exams while the HG patients trended toward the use of more CT scans. There was little difference in the number of patients undergoing IR procedures or PICC lines. The total length of stay was greater in the HG but this did not reach statistical significance.

Conclusions: Pediatric patients with perforated appendicitis managed by pediatric hospitalists are exposed to more laboratory tests, consults, and imaging studies, which may add to hospital costs and resource use compared to those managed by pediatric surgeons. There may be nuances in managing a surgical disease that are better appreciated by a surgeon. Also, a pediatric surgeon may be better acquainted with the use of ultrasound in appendicitis, which has consequences for radiation exposure. These findings suggest that surgical diseases such as perforated appendicitis are more effectively managed by pediatric surgeons with pediatric hospitalists as consultants.

58.05 Intraoperative Assessment of the Small Bowel: When Length Matters so Does the Method of Measurement

E. D. Muise1, J. J. Tackett1, K. A. Callender1, N. Gandotra1, M. C. Bamdad1, R. A. Cowles1 1Yale University School Of Medicine,Pediatric Surgery,New Haven, CT, USA

Introduction: Small intestinal length has functional and prognostic significance for patients with short bowel syndrome and accurate measurement of Roux-en-Y limbs is considered important. Factors such as the flexibility and elasticity of the bowel make its measurement highly subjective, but despite this, a recommended method for intestinal measurement allowing accurate comparisons between surgeons and institutions has not been defined. Operative measurement of intestinal length with silk suture, umbilical tape, straight ruler, and laparoscopic graspers has been described in a variety of surgical settings, but no comparison of the fidelity of measurement by each technique has been made. We hypothesized that techniques using silk suture and umbilical tape would yield the most consistent measurements.

Methods: This IRB-approved prospective trial enrolled 12 volunteer surgeons. Participants were asked to measure short, medium, and long segments of small intestine in a euthanized rabbit using common operating room tools (silk suture, umbilical tape, a 15 cm straight ruler, and laparoscopic Dorsey bowel graspers). Data were analyzed by ANOVA repeated measures model.

Results: Over short segments (grand mean M = 20.88 ± 1.83 cm, mean ± SEM), intestinal measurements by grasper (18.58 ± 1.96 cm) were significantly shorter than those by tape (23.52 ± 2.23 cm, p=0.002) or ruler (20.95 ± 1.83 cm, p=0.039), and not significantly different from measurement by suture (20.50 ± 1.82 cm, p=0.105). Over medium length (M = 37.33 ± 1.29 cm), measurements by grasper (34.63 ± 1.87 cm) were significantly shorter than those by suture (39.09 ± 1.19 cm, p=0.032) and tape (39.63 ± 1.88 cm, p=0.046), and measurements by ruler were also significantly shorter than by suture (35.96 ± 1.17 cm, p=0.008). Over the long segment (M = 104.04 ± 3.83 cm) no significant differences were found between measurement by suture (103.40 ± 5.45 cm), tape (109.85 ± 3.79 cm), or ruler (98.88 ± 8.34 cm). There was a significant difference in measurements taken along the mesenteric border compared with those taken along the anti-mesenteric border of the small bowel (85.33 ± 6.64 cm vs. 122.75 ± 3.83 cm, p=0.001).

Conclusion: Over short distances, measurement technique appears to matter less. However, across all three lengths measured, shorter, more rigid measurement tools such as laparoscopic graspers and straight rulers underestimate length, and these errors amplify over longer segments. Smaller variances in measurements by silk suture and umbilical tape suggest these methods are more reliable across longer distances. Finally, measurement along the mesenteric border inherently traverses a shorter distance and may minimize variation between surgeons, highlighting that this aspect of measurement is particularly important to standardize.

58.06 Small Bowel Diameter in Short Bowel Syndrome as a Predictive Factor for Achieving Enteral Autonomy

G. C. Ives1, F. R. Demehri1, R. Sanchez2, M. Barrett1, D. H. Teitelbaum1 1University Of Michigan,Department Of Surgery, Section Of Pediatric Surgery,Ann Arbor, MI, USA 2University Of Michigan,Department Of Radiology, Section Of Pediatric Radiology,Ann Arbor, MI, USA

Introduction:

Previous research has demonstrated a relationship between achieving enteral autonomy (EA) in pediatric short bowel syndrome (SBS) and the anatomy of the remaining bowel, including presence of the ileocecal valve and % predicted small bowel length (SBL) remaining. Changes in small bowel diameter (SBD) are part of a unique adaptive response that warrants further investigation. However, little is known about the natural history of SBD changes after SBS. Understanding which children develop this type of adaptation, and the factors associated with it, has important implications, as SBD has been identified as a factor in determining eligibility for an intestinal lengthening procedure (ILP). The objective of this study was to evaluate contributory factors for SBD in SBS patients and the effects of SBD on patient outcomes.

Methods:

This is a retrospective review of SBS patients (defined as dependence on parenteral nutrition (PN) for >60 days secondary to loss of intestinal length) born since 2000. We identified 30 children with adequate imaging to assess SBD at more than one time point following bowel resection and prior to any ILP. Demographic and clinical data were collected, and SBD was measured on GI contrast studies by a pediatric radiologist. Studies that demonstrated SB obstruction or stricture were excluded; diameter was measured relative to vertebral size and compared with proximate calibrated films when necessary. SBL was converted to % predicted length based on gestational age (GA). Analysis of factors associated with EA and with SBD on the last scan before end of follow-up or EA was conducted using Fisher's exact test, Student's t-test, Kaplan-Meier survival analysis, Cox regression, and linear or logistic regression, as appropriate.

Results:

30 children with a median GA of 31.5 (IQR 27-35) weeks were followed for 52.6 (28.1-80.4) months. Necrotizing enterocolitis was the most common etiology of SBS (n=13, 43%). EA was achieved in 17 (57%), while 11 (37%) remained on PN and 2 (7%) died during follow-up. Median SBD was 23.5 mm (IQR 18-42.3), and 9 children (30%) demonstrated significant dilation (SBD >40 mm). Bivariate analysis identified higher % SBL (OR 1.40, 95% CI 1.07-1.83) and lower SBD (OR 0.87, 95% CI 0.80-0.96) as predictive of achieving EA; in survival analyses, <10% SBL (p=0.024) and higher SBD (HR 0.93, 95% CI 0.88-0.98) predicted increased time to EA. Multivariate linear regression identified lower % SBL (β -1.02, 95% CI -1.58 to -0.45) and higher birth weight (β 7.32, 95% CI 2.37-12.26) as strongly predictive of higher SBD.

Conclusion:

SBD is a highly prognostic factor inversely associated with achieving EA in SBS children that can be measured in real-time to track patient progress and inform clinical decision-making.

58.07 Costs of Clostridium Difficile Infection in Pediatric Surgery: A Propensity Score Matched Analysis

A. N. Kulaylat1, A. B. Podany1, M. Twilley1, C. S. Hollenbeak1, D. V. Rocourt1, B. W. Engbrecht1, M. C. Santos1, R. E. Cilley1, P. W. Dillon1 1Penn State Hershey Medical Center,Department Of Surgery,Hershey, PA, USA

Introduction: While previous studies have evaluated risk factors for developing Clostridium difficile infection (CDI), less is known about the cost burden imposed by CDI in the hospitalized pediatric surgical population. The purpose of our analysis was to assess the occurrence of CDI across pediatric thoracic and abdominal surgeries, and characterize its influence on the cost of care.

Methods: We identified 320,096 children (ages 1-18 years) undergoing thoracic or abdominal surgery in the Kids' Inpatient Database (2003, 2006, 2009, 2012). Patients were stratified based on the development of CDI and compared using univariate statistics. Logistic regression was used to model factors associated with the development of CDI. A propensity score matched analysis was performed to evaluate the influence of CDI on mortality, length of stay (LOS), and costs in similar patient cohorts. Winsorization (at the 1st and 99th percentiles) was applied to reduce the influence of extreme outliers. National population weights were used to estimate the excess burden of CDI on these outcomes.
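Propensity score matching of the kind described above is commonly implemented as greedy 1:1 nearest-neighbor matching on estimated propensity scores, with a caliper to discard poor matches. A simplified sketch under that assumption (the Methods do not specify the exact algorithm; the function name and caliper value are illustrative), taking pre-estimated scores as input:

```python
def greedy_match(treated_ps, control_ps, caliper=0.2):
    """Greedy 1:1 nearest-neighbor matching on pre-estimated propensity
    scores. Each control is used at most once; candidate matches whose
    score difference exceeds the caliper are discarded. Returns a list
    of (treated_index, control_index) pairs."""
    available = dict(enumerate(control_ps))
    pairs = []
    # Match the hardest-to-match (highest-score) treated patients first.
    for t in sorted(range(len(treated_ps)), key=lambda i: -treated_ps[i]):
        if not available:
            break
        c = min(available, key=lambda j: abs(available[j] - treated_ps[t]))
        if abs(available[c] - treated_ps[t]) <= caliper:
            pairs.append((t, c))
            del available[c]
    return pairs
```

Outcome contrasts such as excess LOS and cost are then computed within the matched pairs rather than across the full, dissimilar cohorts.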

Results: The overall prevalence of CDI in the sampled cohort was 0.31%, with greater rates of CDI present at children’s hospitals (CH) (3.9 per 1000) compared to non-children’s hospitals (NCH) (2.9 per 1000) (p<0.001). Among both hospital types, there were increasing trends in cases of CDI over time (p<0.001). CDI was associated with younger age and increasing comorbidities (p<0.001). Following propensity score matching, the mean excess LOS and costs attributable to CDI were 7.1 days and $16,021 (p<0.001), respectively, with no significant differences observed for mortality. The estimated annual number of children affected by CDI following surgery was 1,485 (including 11 mortalities), resulting in attributable annual costs of approximately $23.8 million (2012 US$) and 10,544 days spent in the hospital.

Conclusion: CDI is a relatively uncommon but costly complication in pediatric thoracic and abdominal surgery, and is more prevalent in CH compared to NCH. Given the increasing trend of CDI among hospitalized surgical patients, there is substantial opportunity for reduction of inpatient burden and associated costs in this potentially preventable nosocomial infection.

58.03 Is Fluoroscopic Enema Reduction an Effective Initial Treatment for Intussusception in Older Children?

K. B. Savoie1,2, F. Thomas2, S. S. Nouer3, E. Y. Huang1 1University Of Tennessee Health Science Center,Surgery,Memphis, TN, USA 2University Of Tennessee Health Science Center,Biostatistics & Epidemiology,Memphis, TN, USA 3University Of Tennessee Health Science Center,Preventive Medicine,Memphis, TN, USA

Introduction: Intussusception, the most common cause of bowel obstruction in infants and toddlers, is less common in children older than 3 years. The standard of care for children under 3 years is fluoroscopic enema reduction (FER). Use of FER in older children, especially those older than 6 years, is controversial. We used the Pediatric Health Information System (PHIS) database to determine whether older children are at higher risk for operative intervention and other morbidities, such that fluoroscopic reduction should not be attempted.

Methods: The PHIS database was reviewed from 1/1/2009-6/30/2014. Patients with chronic medical conditions, including Peutz-Jeghers syndrome and Henoch-Schönlein purpura, were excluded. Individual patients were followed across admissions for 6 months from initial presentation or until bowel resection occurred. Successful FER was defined as radiologic reduction without subsequent surgery for 6 months.

Results: 7412 patients were identified. 6681 were ≤3 years. Of these, 175 (2.6%) underwent primary surgery, 1114 (16.7%) had surgery at some point after FER, and 5392 (80.7%) were successfully treated with FER. 731 patients were > 3 years: 176 (24.1%) underwent primary surgery, 105 (14.4%) had surgery after FER, and 450 (61.6%) were successfully treated with FER. In patients >3 years, length of stay (LOS) between patients who underwent primary surgery versus those who underwent surgery after FER at their initial visit was similar (Median 4 days vs 4 days, p = 0.06); those undergoing successful FER had a median LOS of 1 day (p <0.0001). The frequency of patients with a concurrent discharge diagnosis within a tumor category were similar in patients ≤3 years and patients >3 years (4.8% vs 6.3%, p = 0.07). The frequency of Meckel’s diverticulum was 2.3% in those ≤3 years and 13.5% in those >3 years (p <0.0001). There were only 3 patients who died; all were ≤3 years. In those failing FER and undergoing surgery, there was no difference in LOS between those ≤3 and >3 years (Median 3 vs 4 days, p = 0.09); <1% of older children were admitted to the intensive care unit as compared to 9.8% in those ≤3 years (p <0.01). Older age was not associated with increased risk of recurrent admission for intussusception (p = 0.58).

Conclusion: Although older children with intussusception were more likely to undergo operative intervention than younger children, PHIS data analysis suggests that fluoroscopic reduction may be a safe initial procedure for the majority of older children. A prospective observational trial would be necessary to confirm this finding, which could help inform future treatment algorithms for older pediatric patients presenting with intussusception.

58.04 Real-Time Ultrasound For Central Venous Catheter Placement In Children: A Multi-institutional Study

L. A. Gurien1, R. T. Russell2, J. Kim3, K. E. Speck4, B. W. Calder5, A. P. Rogers6, A. M. Vogel7, D. A. DeUgarte8, K. B. Savoie9, S. D. St. Peter10, D. W. Parrish11, D. H. Rothstein12, E. J. Renaud13, H. C. Jen14, X. Tang15, M. S. Dassinger1 3Duke University Medical Center,Pediatric Surgery,Durham, NC, USA 4Vanderbilt University Medical Center,Pediatric Surgery,Nashville, TN, USA 5Medical University Of South Carolina,Surgery,Charleston, SC, USA 6University Of Wisconsin,Pediatric Surgery,Madison, WI, USA 7University Of Washington,Pediatric Surgery,Seattle, WA, USA 8University Of California – Los Angeles,Pediatric Surgery,Los Angeles, CA, USA 9University Of Tennessee Health Science Center, Le Bonheur Children’s Hospital,Memphis, TN, USA 10Children’s Mercy Hospital- University Of Missouri Kansas City,Pediatric Surgery,Kansas City, MO, USA 11Children’s Hospital Of Richmond At Virginia Commonwealth University Medical Center,Richmond, VA, USA 12Women & Children’s Hospital Of Buffalo,Surgery,Buffalo, NY, USA 13Albany Medical Center,Albany, NY, USA 14Floating Hospital For Children At Tufts Medical Center,Boston, MA, USA 15University Of Arkansas For Medical Sciences,Little Rock, AR, USA 1Arkansas Children’s Hospital,Pediatric Surgery,Little Rock, AR, USA 2University Of Alabama,Children’s Of Alabama,Birmingham, AL, USA 16Pediatric Surgical Research Collaborative,N/A, N/A, USA

Introduction:

Over 225,000 central venous catheters (CVC) are placed in children in the United States annually. Government and private entities have strongly encouraged use of real-time ultrasound (RTUS) for CVC placement in adults and children. These recommendations are based almost exclusively on studies involving adult patients treated by non-surgeons. This lack of evidence involving surgically placed catheters makes such recommendations less generalizable to pediatric surgeons who, based on a recent APSA survey, preferentially access the subclavian vein (SCV). Our primary aim was to determine the frequency of RTUS use by pediatric surgeons during CVC placement in the operating room. Secondary aims included determining factors associated with RTUS use and evaluating adverse event rates compared with the landmark (LM) technique.

Methods:
A retrospective cohort study was performed for patients aged <18 years who underwent CVC placement in the operating room between 07/01/2013 and 06/30/2014 at 14 institutions. Patient demographics and operative details were collected. Mann-Whitney U and chi-square tests were performed to compare continuous and categorical variables, respectively. A logistic regression model evaluated factors associated with RTUS use. P-values <0.05 were considered significant.

Results:
1,146 patients were included, with RTUS used in 33% of attempts. The SCV (64%) was preferentially chosen over the internal jugular vein (IJV) (34%) for 1st site insertion. RTUS was less likely to be used for the SCV than for the IJV (OR=0.002; p<0.0001) and more likely to be used when coagulopathy (INR >1.5) was present (OR=11.1; p=0.03). No associations with RTUS use were found for age, BMI, previous line history, or trainee involvement. The mechanical complication rate (pneumothorax, hemothorax, arterial puncture) was 3.2%, and the overall complication rate including central line-associated bloodstream infections was 9.1%. RTUS use was associated with a higher procedure success rate on 1st site attempt but a higher risk of hemothorax compared to LM (Table). Median operative time was similar (42 vs 43 minutes; p=0.35).

Conclusion:

Pediatric surgeons preferentially choose the SCV for 1st site insertion, yet are more likely to use RTUS when placing a CVC in the IJV. RTUS was superior to LM for 1st site procedure success, yet was associated with a higher hemothorax rate. Adoption of RTUS guidelines in children would require significant practice change with unclear safety benefits. The retrospective nature of this study precludes evaluation of the quality of RTUS implementation. Prospective trials involving children treated by pediatric surgeons are needed to generate more definitive data relevant to the field of pediatric surgery.

57.10 Mental Stress of Surgeons and Residents at Daily Activities

M. Weenk1, A. P. Alken1, L. J. Engelen1, B. J. Bredie1, T. H. Van De Belt1, H. Van Goor1 1Radboudumc,Nijmegen, GELDERLAND, Netherlands

Introduction:
Surgery is a stressful profession. Stress may negatively affect surgeons' performance in the operating room, jeopardizing patient safety. Studies of stress in surgeons have focused on perceived stress during operations. Objective stress has been measured with complex methods (e.g. salivary cortisol levels) that cannot record stress in real time or over longer periods. Long-lasting stress patterns during the day and outside the operating room, and the effects of demographics and surgical factors (e.g. experience) on stress, are of interest for identifying areas and individuals for stress modulation. This pilot study aimed to continuously measure mental stress during common daily activities in surgeons and residents using a novel, easy-to-wear sensor.

Methods:
Surgeons and residents wore the HealthPatch™ (Vital Connect) continuously for 2 to 3 days including operations, outpatient clinics, ward visits, and other tasks such as administrative work. The wireless patch, which adheres to the chest skin, measures heart rate variability (HRV) and a stress level percentage derived from an algorithm combining heart rate (HR) and HRV. HRV was represented by the standard deviation of normal-to-normal inter-beat intervals (SDNN) and the ratio of low-frequency to high-frequency power (LF/HF ratio), both calculated automatically by the patch. The patch stops measuring the stress percentage during increased physical activity. Data were transmitted to the cloud by smartphone and downloaded for analysis. Subjective stress was measured before and after each operation using the State Trait Anxiety Inventory (STAI; ΔSTAI ≥1 point indicates increased stress).
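The SDNN measure named in the Methods is simply the standard deviation of the normal-to-normal inter-beat intervals; the patch's stress algorithm and its LF/HF spectral estimate are proprietary and not specified here. A minimal sketch of the SDNN definition (illustrative, with hypothetical RR-interval data in milliseconds):

```python
from math import sqrt

def sdnn(rr_ms):
    """SDNN: sample standard deviation of normal-to-normal (NN)
    inter-beat intervals, in milliseconds."""
    n = len(rr_ms)
    if n < 2:
        raise ValueError("need at least two intervals")
    mean = sum(rr_ms) / n
    return sqrt(sum((x - mean) ** 2 for x in rr_ms) / (n - 1))

def mean_hr(rr_ms):
    """Mean heart rate (beats/min) implied by the mean RR interval."""
    return 60000.0 / (sum(rr_ms) / len(rr_ms))
```

A lower SDNN during operations, as reported in the Results, reflects reduced beat-to-beat variability under mental load.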

Results:
Significant differences in SDNN, LF/HF ratio, and stress percentage were found in both surgeons and residents during operative procedures compared with baseline (p<.001; p=.01; p<.001). SDNN and stress percentage differed significantly between operations and outpatient clinics (p<.001 and p=.001), other work (p<.001 and p<.001), and ward visits (p=.006 and p<.001). Fellows and senior residents had higher stress percentages than senior surgeons during operations (p=.024 and p=.021). SDNN, LF/HF ratio, and stress percentage did not differ between surgical procedures rated as stressful (ΔSTAI ≥1) and those rated as not stressful (ΔSTAI <1).

Conclusion:
Continuous monitoring with a wearable sensor patch yields relevant data on the mental stress of surgeons and residents throughout the day. Performing an operation is mentally more stressful than other daily activities, particularly for residents and fellows. In future studies, the smart patch could be used to assess the effect of stress on skills and during training programs.

57.11 The Utility of Repeat Sestamibi Imaging in Primary Hyperparathyroidism After an Initial Negative Scan

V. D. Krishnamurthy1, S. Sound1, A. K. Okoh1, P. Yazici1, H. Yigitbas1, D. Neumann2, K. Doshi3, E. Berber1 1Cleveland Clinic,Department Of Endocrine Surgery,Cleveland, OH, USA 2Cleveland Clinic,Department Of Nuclear Medicine,Cleveland, OH, USA 3Cleveland Clinic,Department Of Endocrinology,Cleveland, OH, USA

Introduction:
There are scant data in the literature about repeated sestamibi imaging in patients with primary hyperparathyroidism (PHPT). We aimed to determine the utility of repeat sestamibi scans in these patients.

Methods:
We performed a retrospective review of patients with PHPT who underwent repeat sestamibi scans from 1996-2015 within our healthcare system. Patients underwent single-agent dual-phase 99mTc-sestamibi 'delayed' scans (DS), iodine-subtraction 99mTc-sestamibi scans (ISS), or both. Patient demographics and disease characteristics were recorded. Findings of initial and subsequent sestamibi scans were compared, followed by univariate and multivariate regression analyses to identify predictors of conversion from an initial negative to a subsequent positive scan.

Results:
We identified 133 patients who underwent repeat sestamibi scans. Sixty-three scans were initially negative (44%), of which 23 were DS (37%) and 40 were ISS (63%). Of the repeated scans, seven were DS (11%) and 56 were ISS (89%). Twenty-two patients had scans that converted to positive (35%), five on subsequent DS (8%) and 17 on subsequent ISS (27%). Initial negative DS were more likely to convert to positive on a subsequent scan than initial negative ISS (p=0.03). Scans of patients with normocalcemic or normohormonal PHPT were less likely to convert to positive than scans of patients with classic PHPT (p=0.003). Scans of asymptomatic patients demonstrated the highest rate of conversion to positive (73%, n=16), of which 80% were initial DS followed by subsequent ISS. In multivariate analysis, higher serum calcium (p=0.009) predicted conversion from a negative to a subsequent positive scan.

Conclusion:
Repeating the sestamibi scan was helpful in 35% of patients, especially when the initial exam was a delayed scan and the subsequent exam was an iodine-subtraction scan. Repeat scans of patients with classic hyperparathyroidism were more likely to convert than those of patients with variant biochemical profiles. Consideration should be given to obtaining an iodine-subtraction scan after an initial negative scan, especially in patients who are asymptomatic or have higher serum calcium levels. To our knowledge, this is the first study examining the utility of repeat sestamibi scans in patients with primary hyperparathyroidism, and its findings can be useful in planning the surgical approach.