72.01 Do 30-day Readmission Rates for Hysterectomies Vary by Site of Care For Vulnerable Patients?

A. Ranjit1, C. Zogg2, M. A. Chaudhary1, J. B. Iorgulescu3, A. S. Romano6, S. E. Little4, C. T. Witkop5, J. N. Robinson4, A. H. Haider1, S. L. Cohen4  1Brigham And Women’s Hospital,Center For Surgery And Public Health,Boston, MA, USA 2Yale University School Of Medicine,New Haven, CT, USA 3Brigham And Women’s Hospital,Pathology,Boston, MA, USA 4Brigham And Women’s Hospital,Obstetrics And Gynecology,Boston, MA, USA 5Uniformed Services University Of The Health Sciences,Preventive Medicine And Biostatistics,Bethesda, MD, USA 6VA Boston Healthcare System,Surgery-Gynecology,West Roxbury, MA, USA

Introduction:
Hysterectomy is the most common non-obstetric surgery performed on women. Previous studies have shown that “vulnerable” populations—including women who are uninsured, on Medicaid, or who come from low-income strata—tend to have higher 30-day readmission rates after hysterectomy. Whether these differences are associated with where such women receive care remains unexplained. The objectives of this study were to determine national 30-day readmission rates following hysterectomy, assess for access-based differences in readmission rates, and determine if differences vary by site of care.

Methods:
We used the Nationwide Readmissions Database to identify hysterectomies performed between January-October 2013. We categorized Medicaid, uninsured and low-income patients as “vulnerable” and ranked hospitals by the proportion of vulnerable patients that they served. Differences in 30-day readmission rates among higher (≥75th percentile) versus lower (1-74th) vulnerable patient-serving hospitals were compared using multivariable logistic regression. Models adjusted for differences in patient demographics, comorbidities, postoperative complications, resection types, hysterectomy indications and hospital characteristics. Clustering of patients within hospitals was addressed using survey weights.
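For readers unfamiliar with this type of analysis, the sketch below (Python/statsmodels) shows one way a weighted multivariable logistic model of 30-day readmission could be specified. The column names and covariate list are hypothetical placeholders, and frequency weights only approximate the NRD's survey design; this is not the authors' exact code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_readmission_model(df: pd.DataFrame):
    """Weighted logistic regression of 30-day readmission on hospital group and covariates.

    Expects one row per discharge with hypothetical columns: readmit_30d (0/1),
    high_vulnerable_hosp (0/1), age_group, comorbidity_count, complication (0/1),
    resection_type, indication, bed_size, and discharge_wt (NRD discharge weight).
    """
    model = smf.glm(
        "readmit_30d ~ high_vulnerable_hosp + C(age_group) + comorbidity_count"
        " + complication + C(resection_type) + C(indication) + C(bed_size)",
        data=df,
        family=sm.families.Binomial(),
        freq_weights=np.asarray(df["discharge_wt"]),
    ).fit()
    odds_ratio = np.exp(model.params["high_vulnerable_hosp"])
    ci_low, ci_high = np.exp(model.conf_int().loc["high_vulnerable_hosp"])
    return odds_ratio, (ci_low, ci_high)
```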

Results:

A total of 102,266 women who underwent hysterectomy were included, weighted to represent 226,137 patients nationwide. Three-fourths were aged 35-64y (n=78,582). In 2013, 491 women per 10,000 hysterectomies were readmitted within 30 days of discharge. Rates were highest for abdominal hysterectomy (5.9%), followed by robotic (4.6%), laparoscopic (3.7%) and vaginal (2.6%). The most common reason for readmission was post-operative infection (20.8%). One-third of patients (n=36,062) were categorized as vulnerable; 39.4% (n=40,212) received care at hospitals managing a higher proportion of such patients. Thirty-day readmission rates were higher among vulnerable patients (5.7% vs. 4.4%, OR[95%CI]: 1.27[1.19-1.37], p<0.001 for all) and among patients treated at hospitals managing a higher proportion of vulnerable patients (5.2% vs. 4.4%, 1.17[1.09-1.25]). When stratified by site of care, vulnerable patients had higher odds of readmission relative to non-vulnerable patients when treated at a higher (OR[95%CI]: 1.31[1.18-1.46]) versus lower (1.18[1.07-1.30]) vulnerable patient-serving hospital (a 9.2% reduction in the difference).

Conclusion:

Vulnerable patients had higher 30-day readmission rates compared to non-vulnerable patients, regardless of site of care, suggesting that additional patient-level factors are at play. Mediation of readmission differences by site of care suggests that targeted quality improvement interventions at hospitals managing higher proportions of vulnerable patients could help attenuate higher readmission rates among vulnerable women.

 

71.10 Thresholds for non-citrated r-TEG-guided resuscitation are consistent with those for citrated r-TEG.

P. Einersen1, H. Moore1, C. Silliman1, A. Banerjee1, E. E. Moore1  1University Of Colorado – Denver,Surgery,Aurora, CO, USA

Introduction:  The greatest challenge faced in the hospitalized trauma population is uncontrolled hemorrhage, which accounts for up to 40% of deaths. Our group has previously demonstrated in a prospective trial that a massive transfusion protocol (MTP) guided by thromboelastography (TEG) led to improved survival and fewer plasma and platelet transfusions compared to one guided by conventional clotting assays. We recently proposed the first system of thresholds guided by citrated rapid TEG (r-TEG). However, many centers that use TEG rely exclusively on non-citrated r-TEG data, thus creating the need for a second set of thresholds.

Methods:  Non-citrated r-TEG data were reviewed for 212 patients presenting to our Level 1 Trauma Center from 2010 to 2016. Criteria for inclusion were highest-level trauma activation in patients ≥ 18 years of age with hypotension presumed due to acute blood loss. Exclusion criteria included isolated gunshot wound to the head, pregnancy and chronic liver disease. Receiver operating characteristic (ROC) analysis was performed to test the predictive performance of r-TEG for massive transfusion requirement, defined by need for >10 units of RBCs total or death in the first six hours from injury. Cut-point analysis was then performed to determine optimal thresholds for TEG-based resuscitation.

Results: ROC analysis of r-TEG data from 212 patients who met inclusion criteria yielded areas under the curve (AUC) with respect to massive transfusion requirement greater than 70% for angle (α), maximum amplitude (MA) and percent lysis at 30 minutes (LY30) (78%, 81% and 71%, respectively) but not activated clotting time (ACT) (65%). Optimal cut-point analysis of the resultant ROC curves was performed using the sensitivity–specificity equality criterion, yielding the following cut-points: ACT >125 sec, MA <58 mm, α < 70 and LY30 >3%.
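As an illustration of the sensitivity–specificity equality rule used for the cut-points above, the following sketch computes an AUC and cut-point for a single r-TEG parameter; the function and array names are hypothetical, and the sign convention assumes that lower parameter values (e.g., MA) predict massive transfusion.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def equality_cutpoint(outcome: np.ndarray, values: np.ndarray, lower_is_worse: bool = True):
    """AUC and the cut-point where sensitivity ~= specificity for one r-TEG parameter.

    outcome: 1 = massive transfusion (>10 U RBC or death within 6 h), 0 = otherwise.
    values:  measured parameter (e.g., MA in mm); flipped if low values predict the outcome.
    """
    score = -values if lower_is_worse else values
    auc = roc_auc_score(outcome, score)
    fpr, tpr, thresholds = roc_curve(outcome, score)
    sens, spec = tpr, 1.0 - fpr
    idx = np.argmin(np.abs(sens - spec))          # sensitivity = specificity criterion
    cutpoint = -thresholds[idx] if lower_is_worse else thresholds[idx]
    return auc, cutpoint, sens[idx], spec[idx]
```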

Conclusion: Through ROC analysis of prospective non-citrated r-TEG data, we have identified optimal thresholds to guide hemostatic resuscitation using non-citrated r-TEG, which are consistent with but not identical to those previously proposed for citrated r-TEG strategies (ACT >128 sec, MA <55 mm, α < 65 and LY30 >5%). These thresholds represent the first based on an analysis of severely injured patients at high risk for trauma-induced coagulopathy and the first based on non-citrated TEG data, thus providing an important standard in the evolution of TEG-guided resuscitation.  These results should be validated in a prospective multicenter trial.

71.09 The Use of Viscoelastometry With Tissue Factor Initiators To Identify Burn Induced Coagulopathy

S. Tejiram1,2, K. E. Brummel-Ziedins3, T. Orpheo3, L. T. Moffatt2, J. W. Shupp1,2  1MedStar Washington Hospital Center,The Burn Center, Department Of Surgery,Washington, DC, USA 2MedStar Health Research Institute,Washington, DC, USA 3University Of Vermont,Department Of Biochemistry, College Of Medicine,Colchester, VT, USA

Introduction: Current studies examining the presence of an acute burn-induced coagulopathy remain discordant. These studies rely on limited clinical laboratory tests that provide only a snapshot of a patient's coagulation profile at a given time. The use of whole blood better reflects in vivo conditions because all blood components interact during testing. Both thromboelastography (TEG) and thromboelastometry (ROTEM) provide on-site evaluation of clotting performance and the potency of in situ fibrinolysis utilizing various initiators. Understanding clot dynamics following acute burn injury could better identify the presence of coagulation defects, their pathophysiology, and potential management strategies. The aim of this work was to investigate the usefulness of pure extrinsic pathway initiators, versus a combined intrinsic/extrinsic pathway activator, as an adjunct to identify coagulation defects in a prospective cohort of acutely burn-injured patients.

Methods: Twenty-eight burn-injured patients who presented to a regional burn center from 2013 to 2016 were enrolled in this prospective study. Whole blood collected from these patients at set intervals from admission to three weeks afterward underwent viscoelastic measurement by TEG and ROTEM. Data on patient demographics, injury characteristics, and clinical laboratory measures such as the international normalized ratio (INR) were also obtained. The initiator used in TEG was RapidTEG, while EXTEM and an in-house relipidated tissue factor (TF) reagent were used in ROTEM. Patients were stratified into early (≤ 72 hours) and late (> 72 hours) timepoints as well as moderate (10 – 20%) and severe (> 20%) total body surface area (TBSA) burn sizes for comparison purposes.

Results: In a majority of patients, the INR was normal within 24 hours of admission and for the study duration. In comparing early to late time points using EXTEM assays, patients in the moderate burn size group experienced significant increases in clot formation time (CFT), rate of clot formation (angle), and clot strength (MCF) over time (p < 0.05). For in-house TF assays, significant increases were also seen in MCF and fibrinolysis (LI30) over time. In-house TF assays additionally identified significant increases in clotting time (CT), CFT, and angle in the severely burned cohort compared to the moderately burned cohort (p < 0.05). RapidTEG assays in turn identified significant increases in angle and maximum amplitude (MA) over time and between burn size groups (p < 0.05).

Conclusion: Dynamic changes in coagulation were identified in patients following acute burn injury by viscoelastic measurements. This study helps further characterize the complex coagulopathic response that follows burn injury and helps identify extrinsic pathway initiators as an additional adjunct that could be used to study the hemostatic response to burn. Further work will aim to fully characterize the scope of burn induced coagulopathy and its associated outcomes.

70.10 Out-of-Pocket Payment for Surgery in Uganda: Impoverishing and Catastrophic Expenditures

G. A. Anderson1,4, L. Ilcisin4, P. Kayima2, R. Mayanja3, N. Portal Benetiz2, L. Abesiga3, J. Ngonzi3, M. Shrime4  1Massachusetts General Hospital,Surgery,Boston, MA, USA 2Mbarara University Of Science And Technology,Surgery,Mbarara, WESTERN, Uganda 3Mbarara University Of Science And Technology,Obstetrics And Gynecology,Mbarara, WESTERN, Uganda 4Harvard Medical School,Global Health And Social Medicine,Boston, MASSACHUSETTS, USA

Introduction:  All care delivered at government hospitals in Uganda is provided to patients free of charge. Unfortunately, frequent stock-outs and broken equipment require patients to pay out of pocket for medications, supplies and diagnostics. This is on top of the direct non-medical costs, which can far exceed direct medical costs. Little is known about the amount of money patients have to pay to undergo an operation at government hospitals in Uganda.

Methods:  Every patient who was discharged from Mbarara Regional Referral Hospital (MRRH) after undergoing an operation over a 3-week period in April was approached. Participants were then interviewed, using a validated tool, about their typical monthly expenditures to gauge poverty levels. Next, they were asked about the medical costs incurred during the hospitalization. An impoverishing expense was incurred if a patient spent enough money to push them into poverty. A catastrophic expense was incurred if the patient spent more than 10% of their average annual expenditures.
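The two expenditure definitions can be made concrete with a short sketch; the column names and the simple annualization of monthly expenditure are assumptions, not the study's survey instrument.

```python
import pandas as pd

def flag_expenditures(df: pd.DataFrame, poverty_line_annual: float) -> pd.DataFrame:
    """Flag catastrophic and impoverishing out-of-pocket (OOP) surgical spending.

    Assumes hypothetical columns: monthly_expenditure (typical household spending)
    and oop_payment (medical + non-medical cost of the admission), in the same
    currency as poverty_line_annual.
    """
    out = df.copy()
    annual = 12 * out["monthly_expenditure"]
    # Catastrophic: OOP payment exceeds 10% of average annual expenditure
    out["catastrophic"] = out["oop_payment"] > 0.10 * annual
    # Impoverishing: household was above the poverty line but the payment pushes it below
    out["impoverishing"] = (annual >= poverty_line_annual) & (
        annual - out["oop_payment"] < poverty_line_annual
    )
    return out
```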

Results: 41% of our patients met the World Bank’s definition of extreme poverty, compared with 33% of all Ugandans by Ministry of Finance estimates. After receiving surgical care, one quarter were pushed into poverty by Uganda’s definition and 2 out of every 3 patients became poor by the World Bank’s definition. These devastating financial impacts can also be seen in other ways. Over half of the households in our study had to borrow money to pay for care, 21% had to sell possessions and 17% lost a job as a result of the hospitalization. Only 5% of our patients received some form of charity.

Conclusion: Despite “free care,” receiving an operation at a government hospital in Uganda can result in a severe economic burden to patients and their families. The Ugandan government needs to consider alternative forms of financial protection for its citizens. If surgical care is scaled up in Uganda, the result should not be a scale-up in financial catastrophe. The Ministry of Health and the Ministry of Finance can use our results, and others like these, to help inform decisions regarding healthcare policy and resource allocation.

 

70.09 The WHO Surgical Safety Checklist in Cambodia: Understanding Barriers to Quality Improvement.

N. Y. Garland1, H. Eap2, S. Kheng2, J. Forrester1, T. Uribe-Leitz1, M. M. Esquivel1, G. Lucas2, O. Palritha2, T. G. Weiser1  1Stanford University,Department Of Surgery, Section Of Acute Care Surgery,Stanford, CA, USA 2World Mate Emergency Hospital,Battambang, BATTAMBANG, Cambodia

Introduction: The WHO Surgical Safety Checklist (SSC) has been proven to reduce postoperative morbidity and mortality; however, it remains difficult to implement, particularly in low-resource settings. We aimed to better characterize both the barriers to checklist implementation and subsequent improvement in patient safety measures by assessing compliance with specific checklist items and evaluating root causes of compliance failures. We hypothesized that a better understanding of barriers to quality improvement would lead to a more effective implementation strategy.

Methods: The SSC was introduced at a 109-bed orthopedic trauma hospital in Battambang, Cambodia. After two half-day training sessions in checklist use for the operating theatre (OT) staff, intraoperative data were collected by trained nurses, via a paper form for the first 2 months and later via a mobile REDCap data collection tool. Our tool focused on identifying performance of specific checklist items, including both communication and perioperative processes. Process-level data were compiled and presented to hospital administration and the surgical team.

Results: We collected information from direct observations of 308 surgical cases. Following initiation of the checklist, all communication elements of the checklist (discussing case length, confirming the correct patient, and estimating blood loss) were performed 100% (308/308) of the time, with the exception of team introductions, which the surgical team found unnecessary as they had a small staff and were familiar with each other. Several elements that required material resources were also performed with great consistency; for example, appropriate imaging was present in the OT during 100% (278/278) of cases. Other processes that were initially done poorly or not done at all were quickly brought to 100% compliance when resource barriers were overcome, such as the presence of a sterile indicator in instrument trays, which increased from 0% to 100% by the end of the observational period. However, complex processes that required clinical decision-making, such as antibiotic administration within 60 minutes of skin incision for clean cases, were performed inconsistently.

Conclusion: The primary barriers to checklist compliance in this low-resource setting were not communication factors or material resources, but rather inconsistently functioning processes. Complex processes that involve clinical decision-making were more difficult to perform consistently, but appear likely to improve over time with ongoing data feedback to the team. This study highlights the importance of understanding barriers to checklist compliance as part of a checklist implementation strategy.

 

70.08 Venous Thromboembolism Prophylaxis in Patients Undergoing Intracranial Pressure Monitoring is Safe

C. Luther1, A. Strumwasser1, D. Grabo1, D. Clark1, K. Inaba1, K. Matsushima1, E. Benjamin1, L. Lam1, D. Demetriades1  1University Of Southern California,Surgery – Trauma/Critical Care,Los Angeles, CA, USA

Introduction:  The use of venous thromboembolism (VTE) prophylaxis in patients with severe traumatic brain injury (TBI) and intracranial pressure monitoring (ICPM) is controversial. This study’s purpose was to determine the safety and efficacy of VTE prophylaxis in TBI patients undergoing ICPM. 

Methods:  A seven-year (2008-2015) retrospective analysis of patients undergoing ICPM at our academic Level I trauma center was conducted. Inclusion criteria were ICPM patients surviving ≥7 days who were eligible for VTE prophylaxis. Pediatric patients (<18 years) and patients with known VTE were excluded. Variables abstracted from the registry included patient demographics, age, sex, comorbidities, injury severity scores (ISS), injury profiles, Glasgow Coma Score (GCS), systolic blood pressure (SBP), ICP data, Marshall CT index, pharmacy data, and prior anticoagulant use.  Outcomes included ICP data pre/post initiation of prophylaxis, VTE incidence, hemorrhage expansion on CT, and need for neurosurgical intervention.  Data were analyzed using unpaired Student’s t-tests for continuous variables and chi-square analysis for categorical variables, with significance denoted at a p value of 0.05 or less.

Results: A total of 213 patients met inclusion criteria. Of these, 104 (49%) received VTE prophylaxis (ICPM-PPx) and 109 (51%) did not (ICPM-no PPx). Groups were matched for age (p=0.1), sex (p=0.7), admission SBP (p=0.8), GCS (p=0.7) and total injury burden (mean ISS ICPM-PPx=25±1.3 vs. ICPM-no PPx=23±1.1, p=0.2). In low head bleed severity (Marshall CT Index≤3), VTE rates (ICPM-PPx=8.3% vs. ICPM-noPPx=6.3%, p=0.7) and craniotomy rates (ICPM-PPx=21% vs. ICPM-noPPx=14%, p=0.3) were similar.  Among high-risk ICH (Marshall CT Index≥4), VTE rates (ICPM-noPPx=0% vs. ICPM-PPx=3.1%, p=0.2) and craniotomy rates (ICPM-no PPx=50.0% vs. ICPM-PPx=31.2%, p=0.1) were similar. Among patients on prophylaxis, 40 (39%) began prophylaxis with an ICPM in place (pre-d/c) while 64 (63%) began prophylaxis with the ICPM removed (post-d/c).  Among patients with an ICPM in place, mean ICP did not change appreciably with prophylaxis (mean ICP pre-d/c=12±0.6 vs. post-d/c=11±0.8 mmHg, p=0.1) and there was no difference in the need for surgical intervention (pre-d/c=7.3% vs. post-d/c=3.1%, p=0.3).  There was no difference in prophylaxis interruptions (p=0.2), duration of prophylaxis (p=0.7), dosing (p=0.9) or type of prophylaxis (p=0.6).  The proportion of increased ICH identified by CT was similar pre-d/c vs. post-d/c (10.0% vs. 10.0%, p=0.9).  Overall incidence of VTE was not significantly different (pre-d/c vs. post-d/c=14.6% vs. 9.2%, p=0.6).

Conclusion: Anticoagulant prophylaxis can be initiated safely with or without an ICP monitor in place.  Intracranial pressures do not change significantly and there is no increased need for surgical intervention.  However, the data suggest that there is no decreased incidence of VTE in ICPM patients on prophylaxis.

 

70.07 Improving Utilization of Continuous Renal Replacement Therapy in ICUs of a Large Academic Center

J. Tseng1, P. Hain1, S. Barathan1, H. Rodriguez1, T. Griner1, M. S. Rambod1, N. Parrish1, H. Sax1, R. F. Alban1  1Cedars-Sinai Medical Center,Los Angeles, CA, USA

Introduction:
Continuous Renal Replacement Therapy (CRRT) is a dialysis modality that is essential in the management of critically ill patients with renal failure. It confers the advantage of removing solutes and fluid at a slow, constant rate, with lower rates of hypotension and other adverse events. CRRT use has increased over time, particularly in surgical patients. We sought to assess current utilization patterns of CRRT at our institution and to standardize usage and improve efficiency through a collaborative approach.

Methods:
Data were collected for fiscal years 2013 to 2016 at a large urban academic medical center. A task force involving intensive care unit (ICU), nursing, and nephrology leadership was established in October 2015 with the goals of applying standardized guidelines for initiation and termination of CRRT and improving daily communication and documentation among the nephrology and ICU teams and ICU nursing staff. In addition to other measures, electronic order sets were revised to reassess the need for CRRT and its associated labs on a daily basis. Utilization data and related costs were calculated using our internal data warehouse and finance department. Fiscal year (FY) data before and after the intervention were compared.

Results:
From FY2013 to FY2016, the total volume of patients on CRRT increased by 87% (from 233 to 435), and the total number of CRRT days increased by 91% (from 1,704 to 3,257), with the large majority of patients being surgical (62%). Prior to the intervention, the median number of CRRT days per patient peaked at 8 days in FY2014 and FY2015; this decreased to 7 after our intervention in FY2016. The total direct cost of CRRT increased yearly from $2.84 million in FY2013 to $4.37 million in FY2016, while the average cost of CRRT per patient decreased from $12,167 in FY2013 to $11,548 in FY2015, and further to $10,030 in FY2016. This represented savings of $1,518 per case, for a total annualized savings of $660,220 in FY2016. In addition, case mix index increased yearly from 8.82 in FY2013 to 9.35 in FY2016.
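The annualized savings figure follows from the per-case costs and FY2016 volume reported above; a quick arithmetic check (the small gap from the reported $660,220 presumably reflects rounding of the per-patient costs):

```python
# Per-patient CRRT costs and FY2016 volume as reported above
cost_per_patient_fy2015 = 11_548
cost_per_patient_fy2016 = 10_030
patients_fy2016 = 435

savings_per_case = cost_per_patient_fy2015 - cost_per_patient_fy2016
annualized_savings = savings_per_case * patients_fy2016
print(savings_per_case)      # 1518
print(annualized_savings)    # 660330, close to the reported $660,220
```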

Conclusion:
By establishing a task force to critically review the usage of CRRT and implementing best-practice guidelines and collaboration policies, we significantly reduced the cost of CRRT per case across all ICUs at our institution, with substantial cost savings and improved documentation.
 

70.06 Burn Injury Outcomes in Patients with Preexisting Diabetic Disease

L. T. Knowlin1, B. A. Cairns1, A. G. Charles1  1University Of North Carolina At Chapel Hill,Surgery,Chapel Hill, NC, USA

Introduction: It is estimated that 486,000 people sustained burn injuries last year in the United States. Despite advancements in burn care over the last three decades, the burden of burn injury morbidity and mortality remains high, and the drivers are largely burn wound infections and sepsis. The challenge remains to prognosticate various burn injury outcomes, as current burn prediction models do not account for specific comorbidities such as diabetes. We therefore sought to examine the impact of pre-existing diabetes on burn injury outcomes.

Methods: We performed a retrospective analysis of patients admitted to a regional burn center from 2002 to 2012. Independent variables analyzed included basic demographics, burn mechanism, presence of inhalation injury, total body surface area (TBSA) burned, and pre-existing comorbidities. Bivariate analysis was performed, and Poisson regression modeling was utilized to estimate the incidence of sepsis, graft complications, and in-hospital mortality.
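A minimal sketch of an adjusted Poisson model of this kind is shown below; the column names and the robust-variance option are assumptions rather than the authors' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def diabetes_sepsis_irr(burns: pd.DataFrame):
    """Incidence rate ratio (IRR) of sepsis for preexisting diabetes, adjusted for covariates.

    Assumes one row per admission with hypothetical columns: sepsis (0/1), diabetes (0/1),
    age, sex, tbsa, inhalation_injury (0/1), and mechanism.
    """
    model = smf.glm(
        "sepsis ~ diabetes + age + C(sex) + tbsa + inhalation_injury + C(mechanism)",
        data=burns,
        family=sm.families.Poisson(),
    ).fit(cov_type="HC0")  # robust errors, often paired with Poisson models of binary outcomes
    irr = np.exp(model.params["diabetes"])
    ci = np.exp(model.conf_int().loc["diabetes"])
    return irr, tuple(ci)
```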

Results: A total of 7,640 patients were included in this study. The overall survival rate was 96%. Eight percent (n=605) had preexisting diabetes. Diabetic patients had a higher rate of sepsis (5% vs 2%), graft complications (2% vs 0.5%), and crude mortality (8% vs 4%) compared to those without diabetic disease (p < 0.001). In the adjusted Poisson regression model, the incidence of sepsis was 54% higher in patients with preexisting diabetic disease compared to those without diabetic disease (IRR=1.54, 95% CI=1.04-2.29). The risk of graft complication was twice as high (IRR=2.17, 95% CI=1.03-4.58) for patients with pre-existing diabetic disease compared to those without diabetic disease after controlling for patient demographics and injury characteristics. However, there was no significant impact of preexisting diabetic disease on in-hospital mortality.

Conclusion: Preexisting diabetes significantly increases the risk of developing sepsis and graft complications but has no significant effect on mortality in patients following burn injury. Our findings emphasize the need to incorporate comorbidities into burn outcome models, in addition to the factors currently used to prognosticate burn mortality.

 

70.05 Association between Hospital Staffing Strategies and Failure to Rescue Rates

S. T. Ward1, D. A. Campbell1, C. Friese2, J. B. Dimick1, A. A. Ghaferi1  1University Of Michigan,Department Of Surgery,Ann Arbor, MI, USA 2University Of Michigan,School Of Nursing,Ann Arbor, MI, USA

Introduction: Failure to rescue (FTR) is a widely accepted quality measure in surgery. While numerous studies have established FTR as the principal driver of postoperative mortality rates, specific determinants of FTR remain unknown. In this study we investigate hospital staffing strategies associated with FTR.

Methods: Using prospectively collected data from the Michigan Surgical Quality Collaborative (MSQC), we identified 44,567 patients who underwent major general or vascular surgery procedures between 2008-2012. Hospitals were divided into tertiles based on risk-adjusted failure to rescue rates. We then administered a hospital resource survey to surgeon champions at MSQC participating hospitals, with a response rate of 62% (32/52). Survey items included ICU staffing model (closed or open), use of board-certified intensivists, presence of surgical hospitalists and residents, overnight coverage by advanced practice providers (APPs) and a dedicated rapid response team (RRT).
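The tertile split described above amounts to a single quantile cut; a minimal sketch, assuming a hypothetical hospital-level frame with one risk-adjusted FTR rate per hospital:

```python
import pandas as pd

def assign_ftr_tertiles(hospitals: pd.DataFrame) -> pd.DataFrame:
    """Split hospitals into equal-sized tertiles by risk-adjusted failure-to-rescue rate.

    Assumes a hypothetical column risk_adj_ftr containing one rate per hospital.
    """
    out = hospitals.copy()
    out["ftr_tertile"] = pd.qcut(out["risk_adj_ftr"], q=3, labels=["low", "middle", "high"])
    return out
```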

Results: FTR rates across the tertiles were 8.9%, 16.5% and 19.9%, respectively (p<0.001). Low FTR hospitals tended to have a closed ICU staffing model (56% vs 20%, p<0.001) and a higher proportion of board-certified intensivists (88% vs 60%, p<0.001) when compared to high FTR hospitals. There was also significantly more staffing of low FTR hospitals by hospitalists (85% vs 20%, p<0.001) and residents (62% vs 40%, p<0.01). Low FTR hospitals were noted to have more overnight coverage by APPs (75% vs 45%, p<0.001) as well as a dedicated RRT (90% vs 60%, p<0.001).

Conclusion: Low FTR hospitals had significantly more staffing resources than high FTR hospitals. While hiring additional staff may be beneficial, there remain significant financial limitations for many hospitals to implement robust staffing models. As such, our ongoing work seeks to improve rescue rates with better understanding and implementation of effective hospital staffing strategies within these constraints.  

70.04 Acute Alcohol Intoxication Strongly Correlates With Polysubstance Abuse In Trauma Patients

A. Jordan1, P. Salen2, T. R. Wojda1, M. S. Cohen2, A. Hasani3, J. Luster3, H. Stankewicz2, S. P. Stawicki1  1St. Luke’s University Health Network,Department Of Surgery,Bethlehem, PENNSYLVANIA, USA 2St. Luke’s University Health Network,Department Of Emergency Medicine,Bethlehem, PENNSYLVANIA, USA 3Temple University,Department Of Surgery,Philadelphia, PA, USA

Introduction: Polysubstance abuse, defined as any combination of multiple drugs or at least one drug and alcohol, is a major public health problem. In addition to the negative impact on health and well-being of substance users, alcohol and/or drug abuse can be associated with significant trauma burden. The aim of this study was to determine if serum alcohol (EtOH) levels on initial trauma evaluation correlate with the simultaneous presence of other substances of abuse. We hypothesized that polysubstance use would be significantly more common among patients who presented to our trauma center with blood alcohol content (BAC) >0.10%.

Methods: A retrospective audit of our trauma registry (August 1998 to June 2015) was performed. Abstracted data included patient demographics, BAC determinations, all available formal determinations of urine/serum drug screening, injury mechanism and severity information, Glasgow coma scale (GCS) assessments, and 30-day mortality. Stratification of BAC was based on the 0.10% cut-off. Statistical comparisons were performed using Fisher’s exact test and chi-square test, with significance set at α=0.05.

Results: We analyzed 488 patients (76.3% male, mean age 38.7 years). Median GCS was 15 (IQR 14-15). Median ISS was 9 (IQR 5-17). Overall 30-day mortality was 2.7%, with no difference between elevated (>0.10) and normal (<0.10) EtOH groups. For the overall study sample, the median BAC was 0.10% (IQR 0-0.13). There were 284 (58.2%) patients with BAC <0.10% and 204 (41.8%) patients with BAC >0.10%. The two groups were similar in terms of mechanism of injury (both, >95% blunt).

A total of 245 patients underwent formal “tox-screen” evaluations. Of those, 31 (12.7%) were positive for marijuana, 18 (7.35%) were positive for cocaine, 28 (11.4%) for opioids, and 32 (13.1%) for benzodiazepines. Patients with BAC >0.10% on initial evaluation were significantly more likely to also have polysubstance use (e.g., EtOH + additional substance) than patients with BAC <0.10% (53/220 [24.1%] versus 16/25 [64.0%], p<0.002, Table). Among polysubstance users, BAC >0.10% was significantly associated with opioid and cocaine use (Table).
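For illustration, the comparison of polysubstance-use proportions can be run as a Fisher's exact or chi-square test on a 2x2 table; the table below is built from the 53/220 and 16/25 proportions reported above purely to show the test calls, and its row arrangement is an assumption rather than a restatement of the study table.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# 2x2 table of polysubstance use by BAC group, built from the 53/220 and 16/25
# proportions reported above (columns: polysubstance-positive, polysubstance-negative)
table = np.array([
    [53, 220 - 53],
    [16, 25 - 16],
])
odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, expected = chi2_contingency(table)
print(f"Fisher p={p_fisher:.4g}, chi-square p={p_chi2:.4g}")
```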

Conclusion: This study confirms that a significant proportion of trauma victims with an admission BAC >0.10% present with evidence of polysubstance use. Patients with BAC >0.10% were more likely to test positive for drugs of abuse (e.g., cocaine and opioids) than patients with BAC <0.10%. Our findings support the need for routine substance abuse screening in the presence of EtOH intoxication, with focus on primary identification, appropriate clinical management, and early polysubstance abuse intervention.
 

70.03 Impact of Expanded Medicaid Coverage on Hospital Length of Stay Following Injury

J. Holzmacher1, K. Townsend1, C. Seavey1, S. Gannon1, L. Collins1, R. L. Amdur1, B. Sarani1  1George Washington University School Of Medicine And Health Sciences,Surgery,Washington, DC, USA

Introduction:
Despite implementation of the Affordable Care Act (ACA), states differ regarding specific eligibility requirements, coverage, and benefits. Washington DC (DC) has the most expansive Medicaid eligibility, including coverage for undocumented immigrants and individuals at a higher federal poverty income threshold, followed by Maryland (MD), which meets ACA expansion standards, and Virginia (VA), which has not expanded Medicaid. We hypothesize that patients in DC have a shorter hospital length of stay (LOS) following injury than either MD or VA.

Methods:
A retrospective study was performed from 2013-2016 at an adult, urban trauma center that receives patients from DC, VA, and MD. Patients with private insurance were excluded. A multivariate linear model predicting LOS by insurance and state, and a model examining LOS by insurance type within states, were created after adjusting for demographics, injury severity, penetrating injury, and head and pelvis abbreviated injury scores.
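A minimal sketch of an adjusted linear LOS model of this form is shown below; the column names are hypothetical, and the insurance-by-state interaction is one plausible way, not necessarily the authors' way, of examining insurance type within states.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_los_model(trauma: pd.DataFrame):
    """Adjusted linear model of hospital length of stay (LOS).

    Assumes hypothetical columns: los, insurance, state, age, sex, iss,
    penetrating (0/1), head_ais, and pelvis_ais.
    """
    model = smf.ols(
        "los ~ C(insurance) * C(state) + age + C(sex) + iss + penetrating"
        " + head_ais + pelvis_ais",
        data=trauma,
    ).fit()
    return model
```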

Results:
A total of 2,728 patients were enrolled. Average patient age and injury severity score were 53 ± 23 years and 7 ± 6, respectively. 90% of patients sustained a blunt mechanism of injury. Overall, 36% of patients had Medicaid and 42% had Medicare insurance; 20% of the overall cohort was uninsured. 47% of patients in DC had Medicaid compared with 18% in MD and 8% in VA (p<0.0001). 39% of patients in DC had Medicare compared with 47% in MD and 43% in VA (p<0.0001).

Adjusted LOS was 1.9 days shorter for Medicaid patients in DC versus VA (p=0.003), and 0.9 days shorter in DC versus MD (p=0.02) (figure 1). Uninsured patients had 0.7 and 2.4 days shorter LOS than Medicaid in DC (p<0.0001) and VA (p=0.006), respectively, but no difference in LOS was found between states. Medicaid patients had 0.5 days shorter LOS than Medicare patients in DC (p=0.017), but 1.9 days longer LOS than Medicare patients in VA (p=0.042). There was no difference in LOS between Medicare and Medicaid patients in MD.

Conclusion:
Expanded Medicaid coverage, which includes undocumented immigrants and individuals at a higher federal poverty income threshold, is associated with shorter LOS following injury.
 

70.02 Graft Loss: Review of a Single Burn Center’s Experience and Proposal of a Graft Loss Grading Scale

L. S. Nosanov1,2, M. M. McLawhorn2, L. Hassan2, T. E. Travis1,2, S. Tejiram1,2, L. S. Johnson1,2, L. T. Moffatt2, J. W. Shupp1,2  1MedStar Washington Hospital Center,Burn Center,Washington, DC, USA 2MedStar Health Research Institute,Firefighters’ Burn And Surgical Research Laboratory,Washington, DC, USA

Introduction:  Etiologies contributing to burn graft loss are well studied, yet there exists no consensus definition of burn “graft loss”, nor a scale with which to grade severity. This study examines a single burn center’s experience with graft loss. Our institution introduced a graft loss grading scale in 2014 for quality improvement. We hypothesize that higher grades are associated with longer hospital stays and increased morbidity.

Methods:  Following IRB approval, a retrospective review was performed for all burned patients with graft loss on departmental morbidity and mortality reports 7/2014–7/2016. Duplicate entries, wounds not secondary to burns, and chronic non-healing wounds were excluded. Data abstracted from the medical record included demographics, medical history, and details of injury, surgical procedures, graft loss, and clinical outcomes including hospital and ICU lengths of stay. Graft loss grades were assigned per institutional grading scale (Table 1). Photos of affected areas were graded by two blinded surgeons, and a linear weighted κ was calculated to assess inter-rater agreement. 

Results: In the two-year study period, 50 patients with graft loss were identified. After exclusions, 43 patients were included for analysis. Mean age was 50.1 years, and the majority were male (58.1%) and African American (41.9%). Smoking (30.2%) and diabetes (27.9%) were prevalent. The most common mechanisms were flame (55.8%), scald (18.6%) and thermal (11.6%). Total body surface area (TBSA) involvement ranged from 0.5% to 51.0% (mean 11.8±12.3%). Grade 1 graft loss was documented in the chart of one patient (2.3%), Grade 2 in 15 (34.9%), Grade 3 in 12 (27.9%) and Grade 4 in 15 (34.9%). Seven patients had wound infections at diagnosis of graft loss. Reoperation was performed in 20 (46.5%). Hospital LOS ranged from 9 to 81 days (mean 27.4±16.0 days), with ICU LOS ranging from 0 to 45 days (mean 7.7±10.9 days). Hospital LOS was longer than predicted (by TBSA%) in 38 patients (88.4%). Seven patients experienced significant morbidities, including two amputations. On image review, moderate agreement was reached between blinded surgeons (κ = 0.44, 95% CI 0.11 – 0.65, p = 0.004).

Conclusion: Graft loss is a major source of morbidity in burn patients. In this cohort, reoperation was common and hospital LOS was extended. Use of a graft loss grading scale enables improved dialogue among providers and lays the foundation for improved understanding of risk factors. Results of this study will be used to guide revision of the institutional graft loss grading scale.

69.10 Both Obesity (BMI≥30) and Timing of Chemical Prophylaxis Affect Pulmonary Embolism Rate after Trauma

K. Treto1, A. Giancarelli1, K. Safcsak1, B. Hobbs1, M. Cheatham1, I. Bhullar1  1Orlando Regional Medical Center,Department Of Surgical Education,Orlando, FL, USA

Introduction:

The current guidelines from the American College of Chest Physicians (ACCP) from 2008 recommend one standard dose of chemical prophylaxis (CP) for all trauma patients regardless of weight and Body Mass Index (BMI). The utility of weight-adjusted CP for obese patients remains controversial. Earlier initiation of CP (within 24 hours of admission) may also decrease the pulmonary embolism (PE) rate. The purpose of this study was to evaluate the effect of BMI and timing of CP on PE rate in patients receiving standard chemical and mechanical prophylaxis after traumatic injury.

Methods:
The records of adult patients admitted to the trauma team from 2013-2015 were retrospectively reviewed. Regardless of BMI, standard practice at our institution was to provide a standard dose of enoxaparin 30 mg SQ every 12 hours for all trauma patients without contraindications to anticoagulation. Traumatic brain injury patients received 5000 units of SQ heparin every 8 hours instead of enoxaparin (neurosurgery preference). The timing of CP initiation, however, was not standardized and varied during the study period. Patients who did not receive any chemical prophylaxis were excluded. Patients were divided into three groups based on admission BMI: ideal weight [IW] [BMI<25], overweight [OW] [BMI 25-29.9], and obese [OB] [BMI≥30]. PE rates were compared based on weight (IW vs. OW, and IW vs. OB) and timing of CP (≤24 hours after admission [Early] vs. >24 hours after admission [Late]). Statistical analysis was performed using means, Fisher’s exact test, and Student’s t-test.

Results:
A total of 2,178 patients met the above criteria: 899 IW, 730 OW, and 549 OB patients. Compared to the IW group, the OW and OB groups both had a significantly higher PE rate (IW vs. OW, 0.2% vs 0.8%, p=0.04; IW vs. OB, 0.2% vs 1.6%, p=0.004). There was also a trend toward significance for patients who had earlier CP initiation (≤24 hours vs. >24 hours, 0.3% vs. 1.0%, p=0.07). The PE rate was then evaluated as a function of both BMI and time (Fig 1). As BMI increased, the PE rate shifted upward to a higher frequency curve over time. The OB group had the highest PE rate at each time period. The curves showed clear separation with no overlap for PE rate, with OB>OW>IW. The highest PE rate occurred in the first ten days after admission for all three groups.

Conclusion:
After traumatic injury, despite standard mechanical prophylaxis and CP, OB and OW patients had a significantly higher PE rate than IW patients. Weight-adjusted CP may need to be re-evaluated as a possible means of decreasing this complication. Early initiation of CP (≤24 hours after admission) may also play a role in decreasing PE.
 

69.09 Is There Still A Role For "Selective Management" In Penetrating Neck Trauma?

M. Gayed1, A. Ekeh1  1Wright State University,Surgery,Dayton, OH, USA

Introduction:
Penetrating injuries to the neck in hemodynamically stable patients without “hard signs” have traditionally been managed selectively with angiography, bronchoscopy, esophagography and esophagoscopy. Some recent studies have demonstrated that computed tomography (CT) angiography and physical examination can be used in lieu of these studies. We sought to identify any enduring role for these traditional studies in the era of the CT scan.

Methods:
All patients who sustained a penetrating neck injury over a 10-year period (Jan 2006 – Dec 2015) and presented to an ACS-verified Level I trauma center were identified from the trauma registry. Demographic data, surgical procedures performed, and investigative studies performed, including CT angiography of the neck, bronchoscopy, esophagoscopy, and esophagography, were noted.

Results:
Over the specified period, 171 patients were identified with penetrating neck injuries (109 stab wounds, 48 gunshot wounds, 14 other). Mean age was 37 years, average ISS was 7.3, and 83% were male. Of these, 52 patients went directly to the operating room (OR) for neck exploration without CT because of “hard signs”. CT scans were performed in 100 patients, and an additional 53 patients were taken to the OR based on the CT findings. In all, 51% of CTs were considered positive, with 17% of them demonstrating vascular injuries. Bronchoscopy was performed in 25 patients, esophagoscopy in 28, and esophagography in 9. These three studies yielded no additional findings when performed. There were no missed injuries.

Conclusion:
In hemodynamically stable penetrating neck trauma patients without hard signs, CT angiography appears to be an adequate modality for identifying injuries. Bronchoscopy, esophagoscopy and esophagography yielded no additional information. Outside of specific indications identified by physical examination or CT, these studies appear to be unnecessary in the routine management of these patients.
 

69.08 The Effects of Helmet Legislation on Pediatric Bicycle Injuries in Illinois

R. Weston1, C. Williams2, M. Crandall1  1University Of Florida,Surgery,Jacksonville, FL, USA 2Washington University,Emergency Medicine,St. Louis, MO, USA

Introduction:  Bicycling is one of the most popular forms of play and exercise for children in the U.S.  However, over 200,000 children per year are injured in bicycle crashes, and an estimated 22,000 pediatric bicycle-related traumatic brain injuries (TBIs) occur annually.  Bicycle helmets are known to decrease the risk of head injury, but efficacy and magnitude of effect of helmet legislation have not been fully elucidated.  

Methods:  
This was a retrospective, observational study of children under 18 who presented after a bicycle crash and were included in the Illinois Trauma Registry from 1999-2009.  Demographic information, injury types, injury severity, helmet usage, and location of injury data were collected. Multiple logistic regression analysis was used to quantify the independent effects of helmet usage on likelihood of TBI and, among those with TBI, the severity of injury. Data were then compared between communities with and without helmet legislation.

Results: A total of 3,080 pediatric bicycle-related crashes were identified. Children wearing helmets were less likely to sustain a TBI, OR 0.56 (95% CI 0.37-0.84). Boys were less likely to suffer a TBI, OR=0.80 (95% CI 0.67-0.97), while older children were more likely to suffer a TBI. Overall, 5.0% of patients were noted as wearing helmets. As compared to non-Hispanic white children, Black and Hispanic children were less likely to wear helmets, OR=0.24 (95% CI 0.09-0.68) and OR=0.10 (95% CI 0.02-0.42), respectively. Injured children living within zip code regions covered by helmet legislation wore helmets proportionally more often (12.2%) than the overall cohort (5.0%).  There was no significant change in helmet usage between the pre- and post-legislation periods in helmet legislation areas or over time in non-helmet legislation areas.

Conclusion: Rates of pediatric TBI from bicycle injury in Illinois trauma centers did not change appreciably over the study period. There was also no statistically significant change across the years of the analysis in the total number of severe TBIs. Similar to previous studies, non-Hispanic black and Hispanic children were much less likely to wear helmets.  Children in helmet legislation areas were significantly more likely to wear helmets across the years combined, although it is unclear how much of this is attributable to legislation versus other sociodemographic factors.

 

69.07 Access to Trauma Care: A Geospatial Analysis of Trauma System Infrastructure in California.

T. Uribe Leitz1, M. M. Esquivel1, N. Y. Garland1, L. M. Knowlton1, K. L. Staudenmayer1, D. A. Spain1, T. G. Weiser1  1Stanford University,Department Of Surgery, Section Of Acute Care Surgery,STANFORD, CA, USA

Introduction: Because of California's expanse, geography, and population distribution, critically injured patients could face challenges in accessing trauma centers. We sought to apply geospatial techniques to analyze access to trauma center care in California from a geographic perspective. Understanding how trauma centers and populations are distributed could help identify areas of particular need or areas at risk of being underserved.

Methods: We obtained information on trauma center designation and location from the American Trauma Society-Trauma Information Exchange Program (TIEP, 2014) and the California Office of Statewide Health Planning and Development (OSHPD, 2014). We analyzed time and distance to these centers, and the proportion of the population covered within 1-hour travel time. We used OpenStreetMaps to determine driving times and LandScan 2014 Global Population Database (ORNL-US DoD) to geolocate population density within the state. Data were analyzed and images generated in Redivis (Mountain View, CA), a data visualization platform.
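A simplified version of the coverage calculation is sketched below using straight-line (haversine) distances and an assumed average road speed; the study itself used OpenStreetMaps road-network travel times, which this approximation does not reproduce, and all inputs are hypothetical.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0
ASSUMED_SPEED_KMH = 60.0  # rough average road speed; the study used real road networks instead

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in decimal degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def fraction_within_one_hour(pop_cells: np.ndarray, centers: np.ndarray) -> float:
    """Share of population within roughly 1 hour of any center.

    pop_cells: rows of (lat, lon, population), e.g., LandScan-style grid cells.
    centers:   rows of (lat, lon) for Level I/II trauma centers.
    """
    covered = total = 0.0
    for lat, lon, pop in pop_cells:
        minutes = haversine_km(lat, lon, centers[:, 0], centers[:, 1]) / ASSUMED_SPEED_KMH * 60
        covered += pop if minutes.min() <= 60 else 0.0
        total += pop
    return covered / total
```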

Results: A total of 74 trauma centers were identified in California. Of these, 15 (20.3%) were Level I, 37 (50%) were Level II, 14 (18.9%) were Level III, and 8 (10.8%) were Level IV. The majority of the population, 95.2% (36.9 million people), live within 1 hour of a Level I or II trauma center (Figure 1); only 1.4 million people live beyond 1-hour access to any hospital designated as a trauma center (Level I-IV).

Conclusions: The vast majority of Californians can access a Level I or II trauma center by road within an hour, but over 1 million people live outside of a one-hour road network to a trauma facility. Geospatial analyses and visualization tools assist in the evaluation of trauma systems, help identify populations without timely access to life-saving trauma care, and can inform state EMS efforts to support trauma center designation, particularly in more remote areas, as a complement to a larger needs-based assessment of trauma systems.

 

69.06 Modeling the Need for First Responders in Rural United States

M. B. Bhatia1, M. Aranke1, T. Dang1, D. Vyas1  1Texas Tech University Health Science Center,Department Of Surgery,Lubbock, TEXAS, USA

Introduction:
Surgical intervention, especially trauma surgery, has gained traction in the public health community in recent years, with factors such as surgeon density starting to gain consideration in medical and scientific research. Similar work has yet to be completed with regard to the pre-hospital care that trauma victims receive from first responders. Multiple studies have shown that prompt, well-executed pre-hospital care by first responders can lead to a reduction in mortality, in both urban and rural settings. Even though the importance of first responders is widely agreed upon by the healthcare and public health communities, no mathematical model currently exists that gives a reliable estimate of the number of first responders a given community needs at a given point in time.

Objective:
Propose a pilot model aimed at quantifying the number of first responders needed in a particular geographic area. 

Methods:
Fifteen states that were more than 50% rural were selected. Comparisons among the states were made across 11 measures: population density, surgeon density, hospital density, median age, male-female driver ratio, drivers <25 years, drivers >75 years, passenger car mortality, pick-up & SUV mortality, motorcycle mortality, and speeding-related traffic fatality. Three states—Wyoming, Montana, and Idaho—met the target match rate of >50%. These three states were used to demonstrate the relationship between first responder density and motor vehicle accident (MVA) mortality.

Results:
Statistical data for Wyoming, Montana, and Idaho in 2010 were collected from the United States Bureau of Labor Statistics and the Highway Loss Data Institute and plotted on a scatterplot. A best-fit line was created, yielding the linear relationship modeled by the equation y = -4.473x + 27.979, where x is first responder density (per square mile) and y is motor vehicle mortality (per 100,000). Based on this equation, 3.5 first responders per square mile are needed in these states to reach the Healthy People 2020 target of 12.4 motor vehicle deaths per 100,000, assuming that the contributions of the matched factors remain constant.
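The 3.5-per-square-mile figure follows from inverting the fitted line; a quick check using the reported coefficients:

```python
# y = -4.473x + 27.979, with y = MVA deaths per 100,000 and x = first responders per square mile
slope, intercept = -4.473, 27.979
target_mortality = 12.4                               # Healthy People 2020 target
responders_needed = (target_mortality - intercept) / slope
print(round(responders_needed, 1))                    # 3.5
```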

Conclusion:
Multiple studies have shown that prompt and well-executed pre-hospital care can lead to a significant reduction in mortality, both in urban and in rural settings. The proposed model demonstrates a clear inverse relationship between first responder density and MVA mortality. Since we could not find a 100% match between any states and could only find a 64% match among three states, the resulting relationship is likely still significantly influenced by unknown confounding factors. Further modeling efforts could elucidate these factors and, ultimately, lead to better allocation of public health resources.

69.05 Self-inflicted Gunshot Wounds: Readmission Rates

C. M. Rajasingh1, L. Tennakoon1, K. L. Staudenmayer1  1Stanford University,Department Of Surgery,Palo Alto, CA, USA

Introduction:  Self-inflicted gunshot wounds (SI-GSW) are often fatal, but survivors are hospitalized for their injuries. What happens to these survivors after the initial hospitalization is not known. We hypothesized that patients who survive a SI-GSW are frequently readmitted. We also hypothesized that their readmission rates would be higher than those of patients admitted for other mechanisms of deliberate self-harm (DSH).

Methods:  This is a retrospective cohort analysis of hospital visits using the National Readmission Database (NRD) from the Healthcare Cost and Utilization Project (HCUP), Agency for Healthcare Research and Quality, 2013. The NRD is a new nationally representative sample of inpatient hospitalizations in the U.S. with an identifier that allows for linkage across hospitalizations.  We included patients with any diagnosis indicating deliberate self-harm (DSH) as coded by International Statistical Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) diagnosis codes. This group was divided into those who had SI-GSW as their mechanism for self-harm and those who did not. In order to have 6-month follow-up data, we excluded patients discharged in the second half of the calendar year. Patients who did not survive their initial hospitalization were excluded. Weighted numbers are reported below.
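A simplified sketch of the linkage logic is shown below; NRD_VisitLink and NRD_DaysToEvent are the NRD's actual linkage variables, while the remaining column names (including the precomputed dsh flag) and the filtering are assumptions, and the survey weights used in the study are omitted.

```python
import pandas as pd

def readmission_rate(nrd: pd.DataFrame) -> float:
    """Share of index DSH discharges (Jan-Jun, survived) with any later readmission that year.

    Assumes one row per discharge with columns NRD_VisitLink, NRD_DaysToEvent, DMONTH, DIED,
    plus a precomputed dsh indicator (deliberate self-harm by ICD-9-CM codes).
    """
    index_stays = nrd[(nrd["dsh"] == 1) & (nrd["DMONTH"] <= 6) & (nrd["DIED"] == 0)]
    linked = nrd.merge(
        index_stays[["NRD_VisitLink", "NRD_DaysToEvent"]],
        on="NRD_VisitLink",
        suffixes=("", "_index"),
    )
    readmits = linked[linked["NRD_DaysToEvent"] > linked["NRD_DaysToEvent_index"]]
    return readmits["NRD_VisitLink"].nunique() / index_stays["NRD_VisitLink"].nunique()
```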

Results: A total of 492 patients were admitted for SI-GSW between January and June 2013. The majority were male (N=396, 81%) and 34% (N=167) were ages 22-35. Of these patients, 156 (32%) experienced at least one readmission in 2013. The mean time to the first readmission was 72 days. The top three diagnosis group reasons for readmissions included Mental Health (31%), Injury or Poisoning (15%), and Musculoskeletal (11%).  Readmissions for self-harm were low (<5%; small numbers not reportable per HCUP publishing restrictions). When compared to those admitted for DSH by non-firearm-related mechanisms, readmission rates were not statistically different (SI-GSW vs. other DSH 32% vs. 31%, p=0.70). However, readmissions for repeat self-harm were lower for the SI-GSW cohort (SI-GSW vs. other DSH <5% vs. 8%, p<0.001). In multivariate analysis controlling for patient and injury characteristics, SI-GSW was associated with a lower odds ratio for repeat self-harm admissions compared to other forms of DSH (OR 0.28, p=0.015).

Conclusion: Readmissions after survival of a SI-GSW are frequent, indicating that the burden of these injuries can be underestimated if analyses focus only on the initial hospitalization. To our knowledge, this is the first study to describe national readmission rates after SI-GSW. Furthermore, there are differences in readmission rates for SI-GSW vs. other forms of DSH. Overall readmission rates are the same for both groups, but the odds ratio for repeat self-harm admissions is roughly 70% lower for the SI-GSW group even after controlling for severity of injury. This suggests opportunities for prevention and follow-up may differ between the two groups.

 

69.04 Are ICUs Being Over-utilized for Mild Traumatic Brain Injury?

J. L. Pringle1, A. L. Bourgon1, M. A. Chaudhary1, F. Speranza1, S. Tisherman1, A. P. Ekeh1  1Wright State University,Surgery,Dayton, OH, USA

Introduction:
Mild TBI (MTBI) accounts for 80-90% of all TBI admissions, and 46% of all TBIs seen in the ED are associated with an intracranial hemorrhage. Up to 63% of MTBI patients are admitted to the intensive care unit (ICU) even though the majority of these patients do not require critical care interventions.  We hypothesized that patients with a mild TBI and isolated intracranial hemorrhage do not require critical care interventions that necessitate admission to the intensive care unit, and instead can be monitored on an advanced care floor.

Methods:
We conducted a retrospective cohort study of the trauma registry of an American College of Surgeons (ACS) verified Level 1 trauma center from February 2013 to March 2015.  Patients with a MTBI, defined as a Glasgow Coma Scale (GCS) of 13-15, were included in the study.  The primary outcome was the need for a critical care intervention during admission. Critical care interventions were defined as mechanical ventilation, neurosurgical or neurointerventional procedures, vasopressor or inotropic use, transfusion of blood products, invasive monitoring and use of hypertonic saline solution.

Results:
A total of 250 patients were admitted to the ICU, of whom 141 (56%) underwent a critical care intervention: 68 (48%) transfusions, 35 (25%) neurological interventions, 27 (19%) mechanical ventilation, and 11 (8%) hypertonic saline.  Twenty-eight (41%) of the transfusions were platelets alone for clopidogrel and/or aspirin use or elevated platelet function tests.  The average ISS for patients admitted to the ICU was 16.2; for those patients admitted to the ICU for ≤1 day, the average ISS was 13.9.  The daily cost of ICU admission was $618, versus $297/day for admission to the advanced care units.

Conclusion:
Overall, 44% of the patients with MTBI and ICH who were admitted to the ICU did not require a critical care intervention, and most of those who did require an intervention needed only transfusion of blood products.  Based on these data, some patients with MTBI and ICH may be able to be monitored in an advanced care unit where nursing staffing is adequate to perform frequent neurological checks.  Evaluation of the patient's ISS and need for intervention may give physicians a better ability to decide what level of care these patients need, improving utilization of medical resources.
 

69.03 The Efficacy and Efficiency of Electromagnetic Enteric Feeding Tubes versus Standard Placement Methods

H. R. Shadid1, M. Keckeisen1, A. Zarrinpar1  1David Geffen School Of Medicine, University Of California At Los Angeles,Liver Transplant Unit,Los Angeles, CA, USA

Introduction:
Enteral feeding in critically ill patients has been shown to be beneficial, but reliable placement of feeding tubes into a post-pyloric position remains a challenge. The standard of care involves blind placement, followed by abdominal radiographs to confirm post-pyloric placement. Multiple attempts and radiographs are frequently needed, and placement may require costly endoscopy or fluoroscopy. Lung placement remains a serious adverse event. The Cortrak 2 device allows 3D real-time tracking of the feeding tube tip position during placement, which may reduce complications, use of radiography and fluoroscopy, and costs.

Methods:
We employed the Cortrak 2 device for feeding tube placements in 13 consecutive patients requiring enteral nutrition in a surgical ICU at a tertiary care center. Patients undergoing feeding tube placements in the preceding 7 months served as historical controls. Data were collected for the control group in a retrospective chart review, while data for the intervention group were collected prospectively. Outcome variables included: time from initial radiograph to final confirmation of post-pyloric position, the number of abdominal radiographs performed prior to confirmation, need for fluoroscopy, placement location of each attempt (stomach, proximal duodenum [D1, D2], distal duodenum [D3, D4], and jejunum), and lung placements. Cost analysis was performed to evaluate the cost effectiveness of the device.

Results:
There were 28 patients and 63 placements in the control group and 13 patients and 26 placements in the Cortrak group. Other than patient height, there were no significant differences between the two groups in terms of age, sex, weight, BMI, hiatal hernias, or previous esophageal/gastric operations. The use of Cortrak led to a decrease in time from initial radiograph to final confirmation (1813 minutes +/- 3276 v 304 minutes +/- 667, p=0.02) and a decrease in the number of radiographs (2.44 +/- 1.80 v 1.45 +/- 0.857, p=0.007). There was also a decrease in the need for fluoroscopic insertions (11 insertions/28 patients, 17.5% v 0/13, 0%, p=0.023) and lung insertions (2/28 v 0/13, p=0.36). There were 2, 11, 10, and 3 stomach, proximal duodenal, distal duodenal, and jejunal placements, respectively, compared with 9, 36, 10, and 8 in the control group (p=0.40, 0.21, 0.020, 0.88). Placements using the Cortrak device led to lower costs per patient despite Cortrak tubes being more expensive ($731 v $412, p=0.0078).

Conclusion:
In high-acuity intensive care units, use of the Cortrak device allows reliable post-pyloric enteral feeding tube placement with fewer radiographs, decreased need for fluoroscopy, and fewer lung insertions compared to blind insertion. This would result in earlier establishment of feeding and lower total costs.