37.08 Emergency General Surgery in a Low-Middle Income Healthcare Setting – Determinants of Outcomes

A. A. Shah1,6, H. Zafar6, R. Riviello3, C. K. Zogg1, M. S. Halim7, S. Zafar5, A. Latif8, Z. Rehman6, A. H. Haider1  1Johns Hopkins University School of Medicine, Center for Surgical Trials and Outcomes Research, Department of Surgery, Baltimore, MD, USA; 3Harvard School of Medicine, Center for Surgery and Public Health, Brigham and Women’s Hospital, Brookline, MA, USA; 5Howard University College of Medicine, Department of Surgery, Washington, DC, USA; 6Aga Khan University Medical College, Department of Surgery, Karachi, Sindh, Pakistan; 7Aga Khan University Medical College, Section of Critical Care, Department of Medicine, Karachi, Sindh, Pakistan; 8Johns Hopkins University School of Medicine, Department of Anesthesia, Baltimore, MD, USA

Introduction: The field of emergency general surgery (EGS) has rapidly evolved as a distinct component of acute care surgery. However, a nuanced understanding of outcome predictors in EGS has been slow to emerge, particularly in resource-constrained parts of the world. The objective of this study was to describe the disease spectrum and the risk factors associated with EGS outcomes among patients presenting to a tertiary care facility in Pakistan.

Methods:  Discharge data from a university hospital were obtained for all adult patients (≥16 years) presenting between March 2009 and April 2014 with ICD-9-CM diagnosis codes consistent with an EGS condition, as described by the American Association for the Surgery of Trauma (AAST). Multivariate analyses, accounting for age, gender, year of admission, type of admission, admitting specialty, length of stay (LOS), major complications and Charlson Comorbidity Index, were used to assess potential associations between demographic/clinical factors and all-cause mortality and major complications (pneumonia, pulmonary emboli, urinary tract infections, cerebrovascular accidents, myocardial infarcts, cardiac arrest and systemic sepsis).

Results: Records for 13,893 patients were identified. Average age was 47.2 (±16.8) years, with a male preponderance (59.9%). The majority of patients presented with an admitting diagnosis of biliary disease (20.2%) followed by soft tissue disorders (15.7%), hernias (14.9%) and colorectal disease (14.3%). The crude rates of death and complications were 2.7% and 6.6%, respectively. Increasing age was an independent predictor of death and complications. Patients admitted for resuscitation (n=225) had the highest likelihood of mortality and complications (OR [95% CI]: 229.0 [169.8-308.8], 421.0 [244.8-724.3], respectively). The median length of hospital stay was 2 (IQR: 1-5) days. Examination of the proportion of deaths over a 30-day LOS revealed a tri-modal mortality distribution that peaked on days 20, 25 and 30.

Conclusion: Patients of advanced age and those requiring resuscitation are at greater risk of both mortality and complications. This study provides an important first step toward quantifying the burden of EGS conditions in a lower-middle-income country. Data presented here will help facilitate efforts to benchmark EGS in similar and, as yet, unexplored settings.

37.09 A Propensity Score-Based Analysis of the Impact of Decompressive Craniectomy on TBI in India

D. Agarwal1, V. K. Rajajee2, D. Schoubel2, M. C. Misra1, K. Raghavendran2  1All India Institute of Medical Sciences, Apex Trauma Institute, New Delhi, India; 2University of Michigan, Ann Arbor, MI, USA

Introduction: Severe Traumatic Brain Injury (TBI) is a problem of epidemic proportion in the developing world. The use of Decompressive Craniectomy (DC) may decrease the subsequent need for resources directed at Intracranial Pressure (ICP) control and ICU length of stay, important considerations in resource-constrained environments. The impact of DC on outcomes, however, is unclear. The primary objective of the study was to determine the impact of DC on in-hospital mortality and 6-month functional outcomes following severe TBI in a resource-constrained setting at the JPNATC, AIIMS, New Delhi, India.

Methods: During a 4-year period, data were prospectively entered into a severe TBI registry. Patients aged >12 years meeting criteria for ICP monitoring (ICPM) were included. The registry was queried for known predictors of outcome in addition to use of ICPM and DC. Early DC (eDC) was defined as DC performed <48 hours from injury. Outcomes of interest were in-hospital mortality and poor 6-month functional outcome, defined as a Glasgow Outcome Scale (GOS) score <3. A propensity score-based analysis was used to examine the impact of DC on outcomes.

Results: Of 1345 patients meeting study criteria, 589 (44%) underwent DC. Following propensity-score based analysis, DC was associated with a 9.2% increase (p=0.005) in mortality and a 13.2% increase (p=0.016) in poor 6-month outcome, while eDC was associated with a 15.0% (p<0.0001) increase in mortality but not significantly associated with poor 6-month outcome (p=0.15).
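The abstract does not specify which propensity-score method was applied; as an illustrative sketch only, the inverse-probability-of-treatment-weighting (IPTW) variant of such an analysis can be written as below. All patients, propensity scores, and outcomes here are hypothetical, not the registry's data.

```python
def iptw_weight(treated: bool, propensity: float) -> float:
    """Weight each patient by the inverse probability of the treatment actually received."""
    return 1.0 / propensity if treated else 1.0 / (1.0 - propensity)

def weighted_risk(patients, treated_flag: bool) -> float:
    """Weighted outcome rate among patients with the given treatment status."""
    subset = [(iptw_weight(t, p), y) for t, p, y in patients if t == treated_flag]
    total = sum(w for w, _ in subset)
    return sum(w * y for w, y in subset) / total

# toy cohort: (underwent_DC, propensity_score, died)
cohort = [
    (True, 0.8, 1), (True, 0.6, 0), (True, 0.7, 1),
    (False, 0.3, 0), (False, 0.4, 1), (False, 0.2, 0),
]

risk_dc = weighted_risk(cohort, True)
risk_no_dc = weighted_risk(cohort, False)
print(f"weighted risk difference: {risk_dc - risk_no_dc:+.3f}")
```

The weighted difference in outcome rates between the pseudo-populations is the kind of adjusted absolute difference (e.g., "9.2% increase in mortality") that the abstract reports.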

Conclusions: The use of DC following severe TBI was associated with an increased likelihood of in-hospital death and poor 6-month functional outcome in a high-volume resource-constrained setting. Clinical trials of DC in similar settings are warranted to determine the impact of DC in severe TBI.

37.10 Indirect Costs Incurred by Patients Obtaining Free Breast Cancer Care in Haiti

K. M. O’Neill1, M. Mandigo5, R. Damuse6,7, Y. Nazaire6,7, J. Pyda4, R. Gillies7, J. G. Meara2,3,7  1University of Pennsylvania, Perelman School of Medicine, Philadelphia, PA, USA; 2Harvard School of Medicine, Brookline, MA, USA; 3Children’s Hospital Boston, Plastic Surgery, Boston, MA, USA; 4Beth Israel Deaconess Medical Center, Surgery, Boston, MA, USA; 5University of Miami, School of Medicine, Miami, FL, USA; 6Hôpital Universitaire de Mirebalais, Mirebalais, Centre, Haiti; 7Partners In Health, Boston, MA, USA

Introduction: In low- and middle-income countries (LMIC), it has been reported that 90% of patients with breast cancer present with stage III or IV disease.[i] Although many factors contribute to this phenomenon, the financial burden of seeking care incurred through indirect costs such as user fees, food, travel and lost wages is an important consideration that is often overlooked.

Methods:  In this study, we delineated the costs that Haitian patients pay out-of-pocket to seek comprehensive oncology care at Hôpital Universitaire de Mirebalais (HUM), where oncologic care is offered free of charge. In total, 61 patients were directly interviewed about associated costs during different points along the treatment cycle: (1) Diagnostic visits; (2) Chemotherapy visits (pre- and post-surgery) and (3) Surgical visit.

Results: On average, patient indirect expenses were $619.04 for diagnostic costs, $635.68 for chemotherapy and $94.33 for the surgical visit. When costs at outside facilities were included, we found that patients paid an average of $1,698.84 out-of-pocket during the course of their treatment. Comparing these expenses to patient income, we found that patients spent on average 193% (95% CI: 99%-287%) of their income on out-of-pocket expenses, with 68% of patients spending >40% of their potential income on medical expenses. When lost wages were included in the indirect costs, the average indirect cost came to $6,465 (95% CI: $1,833-$11,096). The indirect costs to the patient were on average 3.36 times higher than the direct costs to the hospital (calculated in a separate study as $1,922 per patient).
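The headline ratio can be re-derived from the figures quoted in the abstract; a quick sanity-check sketch:

```python
# Reconstructing the reported cost figures from the abstract's own numbers.
phase_costs = [619.04, 635.68, 94.33]   # diagnostics, chemotherapy, surgical visit (USD)
total_at_hum = sum(phase_costs)          # out-of-pocket costs at HUM only
total_out_of_pocket = 1698.84            # including outside facilities
outside_facility_share = total_out_of_pocket - total_at_hum

avg_indirect_with_wages = 6465           # average indirect cost incl. lost wages
direct_cost_per_patient = 1922           # direct hospital cost from the companion study
ratio = avg_indirect_with_wages / direct_cost_per_patient

print(f"costs at HUM: ${total_at_hum:.2f}")
print(f"outside facilities add: ${outside_facility_share:.2f}")
print(f"indirect/direct ratio: {ratio:.2f}")  # 3.36, matching the abstract
```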

Conclusion: Health expenditures are financially catastrophic for families throughout the world. In Haiti, 74% of people live on less than $2 per day and 65% live in extreme poverty (less than $1 per day).[ii] Given the findings in this study, it is likely that the financial burden of seeking care for breast cancer—even when that care is offered “free of charge”—may be insurmountable for the majority of patients.

[i] Fregene A & Newman LA. Breast cancer in sub-Saharan Africa: How does it relate to breast cancer in African American women? Cancer 2005;103(8):1540-50.

[ii] "Objectifs du Millénaire pour le développement: état, tendances et perspectives." Ministère de l'Économie et des Finances, Institut Haïtien de Statistique et d'Informatique. December 2009. http://www.ihsi.ht/pdf/odm/OMD_Novembre_2010.pdf Accessed June 20, 2014.

 

38.01 Joints Under Study Trial (JUST)

R. Martin1, A. Chan1  1Mount Hospital Breast Cancer Research Centre

Overview:
The JUST trial is a Phase II, randomised, double-blind, placebo-controlled study evaluating the efficacy of topical pure Emu oil for arthralgic pain related to aromatase inhibitor use in postmenopausal women with early breast cancer.

Background:
Approximately 20% of patients using aromatase inhibitors for treatment of breast cancer cease them due to arthralgia and joint pain.
Emu oil has been shown to have a topical anti-inflammatory effect in animal studies. A phase 1 trial demonstrated a 45% reduction in pain scores in 13 women applying Emu oil to affected joints after 8 weeks of treatment.

Aim:
Using Visual Analogue Scores (VAS), we aim to demonstrate an improvement in joint pain from baseline to the end of 8 weeks of treatment.

The secondary endpoint is to demonstrate an improvement in joint stiffness, as assessed by a 4-point categorical scale, from baseline to the end of 8 weeks of treatment. We will also record adverse effects related to the use of Emu oil, assess compliance with application of Emu oil, and assess overall pain at the end of 8 weeks using the Brief Pain Inventory score.

Methods:
Approximately 75 patients with joint pain subjectively worsening whilst on an aromatase inhibitor will be randomised 1:1 to receive either 250 ml of Emu oil or 250 ml of placebo oil. 1.25 ml of oil will be applied over 30 minutes to up to 3 affected joints nominated at baseline. Baseline 5-point visual analogue scores will be completed along with the Brief Pain Inventory (BPI). Daily diary entries will be checked to ensure compliance, with final VAS and BPI scores completed at 8 weeks. At 8 weeks, participants will be offered a further 8 weeks of treatment with open-label Emu oil, with VAS and BPI to be completed at the end of 16 weeks.

Accrual is expected to be complete by the end of February 2015. 72 patients will give 80% power to detect a 40% difference, allowing for a 25% placebo effect.
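The abstract does not state which sample-size formula was used, so the sketch below is purely illustrative: a standard two-proportion normal-approximation calculation, with assumed responder rates of 65% on Emu oil vs 25% on placebo (a "40% difference allowing for a 25% placebo effect" under one reading). Its result will not necessarily reproduce the trial's figure of 72, which presumably also folds in dropout and other assumptions.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per arm to compare two proportions (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided alpha
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = z_a * sqrt(2 * p_bar * (1 - p_bar)) + z_b * sqrt(p1*(1-p1) + p2*(1-p2))
    return ceil((num / (p1 - p2)) ** 2)

# hypothetical responder rates: 65% on Emu oil vs 25% placebo response
print(n_per_arm(0.65, 0.25))
```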
 

38.02 Staging Studies are of Limited Utility for Newly Diagnosed Clinical Stage I-II Breast Cancer

A. Linkugel1, J. Margenthaler1, A. Cyr1  1Washington University, General Surgery/College of Medicine, St. Louis, MO, USA

Introduction:   For patients diagnosed with clinical Stage I-II breast cancer, treatment guidelines recommend against the routine use of radiologic staging studies in the absence of signs or symptoms suggestive of distant metastasis. However, these tests continue to be used for many early-stage breast cancer patients. This study aims to determine the utilization and yield of these studies at a National Comprehensive Cancer Network (NCCN) member institution.

Methods:   Female patients presenting with AJCC 7th Edition clinical stage I-II invasive breast cancer between 1998 and 2012 at Siteman Cancer Center, an NCCN member institution, were identified in a prospectively maintained institutional surgical database. Patients treated with neoadjuvant chemotherapy were excluded. Charts were reviewed to verify clinical stage and to document staging studies performed within six months of diagnosis.  Staging studies of interest included computed tomography (CT) of the chest, abdomen, and/or pelvis, bone scan, and positron emission tomography (PET).  Results of staging studies and additional diagnostic studies or procedures were recorded.  Descriptive statistics were used for the analysis.

Results:  A total of 3291 patients were included in the analysis (2044 were stage I and 1247 were stage II). Of these, 882 (27%) received CT of the chest, abdomen, and/or pelvis; bone scan; or PET within 6 months of diagnosis. A total of 691/882 (78%) received chest CT, 705/882 (80%) abdominal/pelvic CT, 704/882 (80%) bone scan, and 70/882 (8%) PET. Of these 882 patients, 312 were stage I (15% of the stage I cohort) and 570 were stage II (46% of the stage II cohort). Of the 882 patients imaged, 194 (22%) required additional imaging (x-ray, CT, bone scan, sonogram, or PET) and/or biopsies to follow-up abnormalities seen on the staging studies. However, only 11 of those 194 (6%) were confirmed to have metastatic disease (1.2% of the 882 imaged patients, 0.33% of the total study cohort). Of these 11 patients, one was clinically stage I at presentation, and 10 were stage II. Metastatic sites identified included lung (n=3), bone (n=4), liver (n=1), and a combination of sites (n=3). Numbers of patients determined to have metastatic disease were too small for comparative analysis.
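The yields above follow directly from the reported counts; as a quick check (the 6% quoted in the text is 11/194 ≈ 5.7%, rounded):

```python
# Recomputing the diagnostic yields from the counts in the abstract.
imaged, worked_up, metastatic, cohort = 882, 194, 11, 3291

followup_rate = worked_up / imaged        # imaged patients needing further work-up
yield_of_workup = metastatic / worked_up  # confirmed metastases among those worked up
yield_of_imaging = metastatic / imaged
yield_overall = metastatic / cohort

print(f"{followup_rate:.0%}, {yield_of_workup:.1%}, "
      f"{yield_of_imaging:.1%}, {yield_overall:.2%}")
```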

Conclusions:  The identification of distant metastasis among clinical Stage I-II patients in this study was rare (0.33% of the total cohort). Even among patients judged appropriate for staging studies (CT, bone scan, and/or PET), only 1.2% were diagnosed with metastatic disease. These findings suggest that even at an NCCN member institution, staging studies are overused and lead to additional procedures in over 20% of patients.

38.03 Cancer-Directed Surgery and Conditional Survival in Advanced Stage Colorectal Cancer

L. M. Wancata1, M. Banerjee4, D. G. Muenz4, M. R. Haymart5, S. L. Wong3  1University of Michigan, Department of General Surgery, Ann Arbor, MI, USA; 3University of Michigan, Division of Surgical Oncology, Ann Arbor, MI, USA; 4University of Michigan, Department of Biostatistics, Ann Arbor, MI, USA; 5University of Michigan, Division of Metabolism, Endocrinology & Diabetes & Hematology/Oncology, Ann Arbor, MI, USA

Introduction: Though historically associated with poor survival rates, recent data demonstrate that some patients with advanced (stage IV) colorectal cancer (CRC) are surviving longer in the modern era. Contributing advances include improvements in systemic therapies and increased use of metastasectomy. Traditional survival estimates are less useful for longer-term cancer survivors, and conditional survival, or survival prognosis based on time already survived, is becoming more accepted as a means of estimating prognosis for certain subsets of patients who live beyond predicted survival times. What is unknown is how specific treatment modalities affect survival. We evaluated the use of cancer-directed surgery in patients with advanced CRC to determine its impact on long-term survival in this patient population.

Methods: We used data from the Surveillance, Epidemiology, and End Results (SEER) registry to identify 323,415 patients with CRC diagnosed from 2000-2009. The SEER program collects data on patient demographics, tumor characteristics, treatment, and survival from cancer registries across the country. This cohort represents approximately 26% of incident cases, and its demographics are comparable to those of the general US population. Conditional survival estimates by SEER stage, age and cancer-directed surgery were obtained based on a Cox proportional hazards regression model of disease-specific survival.

Results: Of the 323,415 patients studied, 64,956 (20.1%) had distant disease at the time of diagnosis. Median disease-specific survival for this cohort was just slightly over 1 year. The proportion of patients with distant disease who underwent cancer-directed surgery was 65.1% (n=42,176). Cancer-directed surgery in patients with distant disease appeared to have a significant effect on survival compared to patients who did not undergo surgery [hazard ratio 2.22 (95% CI 2.17-2.27)]. These patients had an approximately 25% improvement in conditional 5-year disease-specific survival across all age groups as compared to their counterparts who did not receive cancer-directed surgery, demonstrating sustained survival benefits for selected patients with advanced CRC who undergo resection. A significant improvement in conditional survival was observed over time, with the greatest gains in patients with distant disease compared to those with localized or regional disease (Figure).
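Conditional survival, as used here, is simply a ratio of survival probabilities: the chance of surviving an additional x years given t years already survived is S(t+x)/S(x). A minimal sketch with a made-up Weibull survival curve (shape < 1, so the hazard falls over time and conditional survival improves with time already survived, mirroring the pattern described above; the SEER estimates themselves come from a Cox model, not this toy curve):

```python
import math

def conditional_survival(S, years_survived: float, additional_years: float) -> float:
    """P(alive at t1 + t2 | alive at t1) = S(t1 + t2) / S(t1)."""
    return S(years_survived + additional_years) / S(years_survived)

# toy Weibull survival curve: S(t) = exp(-(t/scale)^shape), shape 0.5 < 1
S = lambda t: math.exp(-((t / 2.0) ** 0.5))

five_year_at_dx = S(5)                             # unconditional 5-year survival
five_year_after_two = conditional_survival(S, 2, 5)  # 5 more years, given 2 survived
print(f"{five_year_at_dx:.2f} vs {five_year_after_two:.2f}")
```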

Conclusion: Five-year disease-specific conditional survival improves dramatically over time for selected patients with advanced stage CRC who undergo cancer-directed surgery.  This information is important in determining long term prognosis associated with operative intervention and will help inform treatment planning for patients with metastatic disease.

38.04 Temporal Trends in Receipt of Immediate Breast Reconstruction

L. L. Frasier1, S. E. Holden1, T. R. Holden2, J. R. Schumacher1, G. Leverson1, B. M. Anderson3, C. C. Greenberg1, H. B. Neuman1,4  1University of Wisconsin, Wisconsin Surgical Outcomes Research Program, Department of Surgery, Madison, WI, USA; 2University of Wisconsin, Department of Medicine, Madison, WI, USA; 3University of Wisconsin, Department of Human Oncology, Madison, WI, USA; 4University of Wisconsin, Carbone Cancer Center, Madison, WI, USA

Introduction: Research suggests an inverse relationship between post-mastectomy radiation (PMRT) and immediate breast reconstruction (IR). Recent data on the effectiveness of PMRT have led to increasing use in patients at intermediate risk of recurrence (tumor ≤5 cm with 1-3 positive nodes). At the same time, significant increases in the use of IR over the last decade have been observed. We sought to determine whether the increased use of PMRT in intermediate-risk patients has led to a slower increase in rates of IR when compared to groups in whom the guidelines for PMRT have not changed.

Methods:  The SEER Database was used to identify female patients with stages I‑III breast cancer undergoing mastectomy over the decade from 2002‑2011 (n=40,889). Patients ≥ 65 were excluded due to low rates of IR (5.1%). Three patient cohorts defined by likelihood of PMRT were formed based on tumor characteristics: High Likelihood (four or more positive lymph nodes or tumors >5 cm with 1‑3 positive lymph nodes), Intermediate Likelihood (tumors ≤5 cm with 1‑3 positive lymph nodes), and Low Likelihood (tumors ≤5 cm with 0 positive nodes). Changes in IR for each of these groups over time were assessed using joinpoint regression and summarized using annual percentage change (APC), which represents the slope of the line.

Results: The overall use of reconstruction increased from 22% in 2002 to 41% in 2011. This statistically significant increase was observed across all 3 cohorts defined by the likelihood of receiving PMRT and across all ages. Receipt of IR was lower among groups with a higher likelihood of a recommendation for PMRT at the start of the study period: 14.1%, 19.4%, and 27.8% in the High, Intermediate, and Low Likelihood cohorts, respectively, in 2002. The highest-risk group demonstrated the greatest increase in receipt of IR, with an annual percentage change of 9.8%; the intermediate- and low-risk groups exhibited APCs of 6.2% and 5.9%, respectively. No group showed a significant change in APC from 2002-2011, meaning the rate of change was constant over the study period.
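Joinpoint-style trend software reports the APC as 100·(e^b − 1), where b is the slope of a log-linear fit of the rate on calendar year; a minimal sketch of that computation (the rate series below is synthetic, not the SEER data):

```python
import math

def annual_percent_change(years, rates):
    """APC from a log-linear OLS fit: log(rate) = a + b*year; APC = 100*(e^b - 1)."""
    y = [math.log(r) for r in rates]
    n = len(years)
    mx, my = sum(years) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(years, y))
             / sum((xi - mx) ** 2 for xi in years))
    return 100.0 * (math.exp(slope) - 1.0)

# synthetic series growing exactly 8% per year over 2002-2011
years = list(range(2002, 2012))
rates = [22.0 * 1.08 ** i for i in range(len(years))]
print(round(annual_percent_change(years, rates), 2))
```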

Conclusion: Rates of reconstruction have increased over the study period across tumor characteristics and are highest in patients who are least likely to receive a recommendation for PMRT. At no point did any group exhibit evidence of a decreased rate of change, despite increased indications for PMRT over this time period. In fact, rates of IR for patients at intermediate and high likelihood of receiving PMRT are increasing faster than rates for the lowest-likelihood patients. This may indicate that surgeons and radiation oncologists are becoming increasingly comfortable with immediate reconstruction in the setting of anticipated PMRT.
 

38.05 Factors Impacting Patient Compliance with Breast Cancer Screening Guidelines in the US

S. C. Pawar1, R. S. Chamberlain1  1Saint Barnabas Medical Center, Surgery, Livingston, NJ, USA

Introduction: Breast cancer screening guidelines for women between 40 and 49, 50 and 74, and over 75 years of age are variable. Controversy exists as to the effectiveness and potential risks of screening among different age groups, and important predictors of mammography remain unclear. This study sought to determine breast cancer screening rates among US women of various ages, to identify factors predictive of adherence to mammographic screening guidelines, and to determine the impact of physician recommendation.

Methods: The National Health Interview survey database was queried to identify female patients who underwent a screening mammography between 2008 and 2010. Univariate and multivariate logistic regression models were used to identify predictors of mammography. 

Results: The median age of the study cohort was 53 years. Among 11,312 women surveyed, 8,155 (72%) had undergone a mammogram. Women undergoing mammographic screening were significantly older than women who did not undergo a mammogram (53 vs 39 years; P < 0.001). Screening rates were highest among women aged 50-74 years, followed by those ≥75 years (85%), 40-49 years (77%) and <40 years (27%). 74% of the cohort were Caucasian, 18% African American, 0.8% Asian and 2% other races. 86% of women had insurance coverage for mammographic screening, while 14% lacked any insurance coverage; overall, 53% of the uninsured received a mammogram. The Northeastern region had the highest percentage of women screened; however, there were no significant geographic differences. Mammographic screening was completed by 59% of women in whom it was recommended by a physician and by 75% in whom it was recommended by their designated primary care provider (p < 0.01). The percentage of women undergoing mammographic screening dropped from 78% (2008) to 76% (2010) over the study period, and this decline was significant across women of all age groups except those <40 years. The strongest predictors of completing mammography were physician recommendation, a designated primary care provider's recommendation, adherence to annual breast examination, race/ethnicity, insurance type, and income status. The association between physician recommendation and undergoing mammography was strongest in the youngest women (OR: 20; 95% CI 15-27). Among women <40 years for whom a mammogram was recommended by a physician, 23% had a history of BRCA1/BRCA2 gene mutation, while 34% reported a family history of breast cancer.
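The odds ratios quoted above come from logistic regression models; for intuition, an unadjusted OR with a Wald 95% confidence interval can be computed from a 2×2 table as sketched below (the counts are hypothetical, not the NHIS data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR = (a*d)/(b*c) with a Wald 95% CI on the log-odds scale.
    a, b: exposed with/without outcome; c, d: unexposed with/without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: screened/not screened, with vs without a recommendation
or_, lo, hi = odds_ratio_ci(20, 10, 5, 10)
print(f"OR {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```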

Conclusion: A decrease in mammography screening among women of all age groups was observed during the study period and was most conspicuous in younger women. Explanations are likely multifactorial, but may be related to implementation of the USPSTF (United States Preventive Services Task Force) recommendations. Barriers to mammography included the absence of a physician recommendation, lack of a designated primary care provider, lack of adherence to annual breast examination, minority race/ethnicity, lower socioeconomic status, lower education level and poor insurance status. Physician recommendation was the strongest predictor of mammographic screening across all age groups, although there may be over-recommendation of mammography among those >70 years and <40 years.

 

38.06 Utilization of PET in Patients with Lung and Esophageal Cancers

M. A. Healy1, H. Yin1, R. M. Reddy1, S. L. Wong1  1University of Michigan, Department of Surgery and Center for Health Outcomes & Policy, Ann Arbor, MI, USA

Introduction: Positron Emission Tomography (PET) scans are commonly used for cancer patients as part of the staging process. PET scans are often used for surveillance without evidence that they are superior to lower cost screening scans, and there are concerns about potential overuse. We evaluated PET utilization patterns for patients with lung (LC) and esophageal (EC) cancers.

Methods: Using national Surveillance, Epidemiology, and End Results (SEER)-Medicare linked data from 2005-2009, we examined the use of PET in a cohort of patients with primary lung (n=105,697) and esophageal (n=6,961) cancers diagnosed during this period. Cancer diagnoses were identified with ICD-9 diagnosis codes: lung 162.xx and esophagus 150.xx. Diagnostic services such as PET are captured as charges, which are covered under Medicare. We examined a fee-for-service cohort, excluding patients in risk-bearing Medicare managed care plans and patients not continuously enrolled in Parts A and B. We examined the frequency and timing of PET usage, including with regard to diagnosis, treatment and cancer stage.

Results: There was similar overall utilization of PET in these groups, with 47,795 (45.2%) and 3,734 (53.6%) of lung and esophageal cancer patients, respectively, receiving at least one scan. Most patients received a first scan within 3 months of diagnosis (78.3% LC, 87.3% EC), indicating likely use for staging. Use of 2 or more scans occurred in 20,216 (19.1%) and 1,867 (26.7%) of LC and EC patients, respectively.  Additionally, 11,117 (10.5%) LC and 1,052 (15.1%) EC patients underwent 3 or more scans. Among patients with stage IV disease, 2 or more scans were performed in 4,987 (11.8%) and 382 (21%) of LC and EC patients, respectively. In this stage IV group, 2,710 (6.4%) LC and 222 (12.2%) EC underwent 3 or more scans.

For patients who underwent PET prior to chemotherapy, 10,085 (28.5%) and 781 (26.8%) of LC and EC patients received a single additional scan, 11,467 (32.3%) and 1,345 (46.2%) had 2 or more scans, and 6,774 (19.1%) and 793 (27.2%) had 3 or more scans. Total PET usage for LC was 96,475 scans and for EC was 8,223 scans.

Conclusion: Our results show that PET usage is common, with roughly half of patients with LC and EC receiving at least one scan, and many patients undergo multiple scans. A large number of patients with stage IV disease underwent 3 or more scans, and it is in these patients that the likelihood of any benefit is least. Our data support the need for continued education to avoid using PET for surveillance in these cancers, especially in patients with advanced disease. Medicare's current policy limiting routine reimbursement to 3 scans probably does not effectively curb wasteful PET usage.

35.07 Heart Rate in Pediatric Trauma: Rethink Your Strategy

J. Murry1, D. Hoang1, G. Barmparas1, A. Zaw1, M. Nuno1, K. Catchpole1, B. Gewertz1, E. J. Ley1  1Cedars-Sinai Medical Center, Los Angeles, CA, USA

Introduction: The optimal heart rate (HR) for children after trauma is based on resting values for a given age, sometimes with the use of a Broselow tape. Given that the stages of shock are based in part on HR and blood pressure, treatment plans may vary if these values are abnormal. Admission HRs for children after trauma were analyzed to determine which ranges were associated with the lowest mortality.

Methods:   The NTDB (2007-2011) was queried for all injured patients ages 1 to 14 years admitted (n = 398,544). Age groups were analyzed at ranges to match those provided by the Broselow tape (1 year, 2-3 years, 4 years, 5-6 years, 7-8 years, 9-11 years, 12-13 years).  Exclusions included any Abbreviated Injury Scale=6, Injury Severity Score=75, ED demise, or missing information.  

Results: After exclusions, admission HRs from 135,590 pediatric trauma patients were analyzed; overall mortality was 0.7% (table). At 1 year of age, the HR range with the lowest OR for mortality was 100 to 179. Starting at age 7 years, the lowest mortality was observed for the HR range 80-99.

Conclusion: The HR associated with the lowest mortality after pediatric trauma frequently differs from current standards. Starting at age 7 years, the HR range of 80 to 99 predicts lower mortality. Our data indicate that at age 7 years a child with a HR of 120 may be in stage III shock, and treatment might include admission, intravenous fluids and possibly blood products. Traditional HR ranges suggest that the normal HR for this child includes 120, and therefore aggressive treatment might not be considered. Knowing when HR is critically high or low in the pediatric trauma population might guide treatment options such as ED observation, hospital admission, ICU admission and even emergent surgery.

 

35.08 The Impact of the American College of Surgeons Pediatric Trauma Center Verification on In-Hospital Mortality

B. C. Gulack1, J. E. Keenan1, D. P. Nussbaum1, B. R. Englum1, O. O. Adibe1, M. L. Shapiro1, J. E. Scarborough1  1Duke University Medical Center, Department of Surgery, Durham, NC, USA

Introduction:  Previous studies have demonstrated improvement in the survival of pediatric trauma patients treated at American College of Surgeons (ACS) verified pediatric trauma centers.  However, it is not known whether the level of pediatric trauma center verification, Level 1 (PTC1) versus Level 2 (PTC2), has any effect on outcomes.

Methods: We performed a review of the research data set (RDS) from the National Trauma Data Bank (NTDB) from 2007-2011, including all pediatric patients less than 16 years of age who were treated at an ACS verified adult level I trauma center.  Patients were excluded if they were transferred to another facility.  Patients were subdivided on the basis of trauma center verification: PTC1, PTC2, or trauma center without a pediatric ACS verification. These groups were compared with regards to baseline demographics, injury severity, and outcomes.  Multivariable logistic regression was then performed to determine the independent association of ACS pediatric verification and in-hospital mortality.

Results: A total of 124,773 patients were included in the study, 63,746 (51.1%) of whom presented to a PTC1, while 7,562 (6.1%) presented to a PTC2 and 53,465 (42.8%) presented to a trauma center with no pediatric ACS verification. Unadjusted analysis demonstrated significant differences in in-hospital mortality at PTC1s (1.6%) compared to PTC2s (2.4%) and trauma centers without pediatric ACS verification (2.1%, p<0.001). In multivariable logistic regression, compared to hospitals without pediatric ACS verification, PTC1s had significantly reduced in-hospital mortality (Adjusted Odds Ratio [AOR] (95% Confidence Interval [CI]): 0.85 (0.73, 0.99), Figure) while PTC2s did not (AOR (95% CI): 1.07 (0.80, 1.42)).

Conclusion: Pediatric patients treated at centers verified as level I pediatric trauma centers by the ACS have significantly decreased odds of in-hospital mortality compared to those treated at non-verified centers; however, this effect is not seen for PTC2s. Further investigation is necessary to determine whether more stringent requirements are needed for PTC2 verification.

35.09 Outcomes for Burns in Children: Volume Makes a Difference

T. L. Palmieri1,2, S. Sen1,2, D. G. Greenhalgh1,2  1University of California – Davis, Sacramento, CA, USA; 2Shriners Hospitals for Children Northern California, Sacramento, CA, USA

Introduction: The relationship between center volume and patient outcomes has been analyzed for multiple conditions, including burns, with variable results. To date, studies on burn volume and outcomes have primarily addressed adults. Burned children require age specific equipment and competencies in addition to burn wound care. We hypothesized that volume of patients treated would impact outcome for burned children.

Methods: We used the National Burn Repository (NBR) release from 2000-2009 to evaluate the influence of pediatric burn volume on outcomes using mixed effect logistic regression modeling. Of the 210,683 records in the NBR over that time span, 33,115 records for children ≤18 years of age met criteria for analysis.

Results: Of the 33,115 records, 26,280 had burn sizes smaller than 10% TBSA; only 32 of these children died. The volume of children treated varied greatly among facilities. Age, total body surface area (TBSA) burn, inhalation injury, and burn center volume influenced mortality (p<0.05). An increase in median yearly admissions of 100 decreased the odds of mortality by approximately 40%. High-volume centers (admitting >200 pediatric patients/year) had the lowest mortality when adjusting for age and injury characteristics (p<0.05).
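The "~40% decrease in odds per 100 admissions" corresponds to an odds ratio of roughly 0.6 for a +100 difference in yearly volume. As a sketch of how a logistic-regression coefficient translates into such a figure (the coefficient below is hypothetical, back-derived from the 0.6 figure rather than taken from the NBR model):

```python
import math

# hypothetical per-admission log-odds coefficient from a logistic model,
# chosen so that +100 admissions corresponds to an odds ratio of 0.6
beta_per_admission = math.log(0.6) / 100

def odds_multiplier(beta: float, delta: float) -> float:
    """Multiplicative change in odds for a `delta`-unit change in the covariate."""
    return math.exp(beta * delta)

print(f"OR per +100 admissions: {odds_multiplier(beta_per_admission, 100):.2f}")
print(f"OR per +200 admissions: {odds_multiplier(beta_per_admission, 200):.2f}")
```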

Conclusion: Burn centers caring for a greater number of children had lower mortality rates.  The lower mortality of children at high-volume centers could reflect greater experience, resources, and specialized expertise in treating pediatric patients.


35.10 Mechanism and Mortality of Pediatric Aortic Injuries

J. Tashiro1, C. J. Allen1, J. Rey2, E. A. Perez1, C. M. Thorson1, B. Wang1, J. E. Sola1  1University Of Miami,Division Of Pediatric Surgery, DeWitt-Daughtry Department Of Surgery,Miami, FL, USA 2University Of Miami,Division Of Vascular And Endovascular Surgery, DeWitt-Daughtry Department Of Surgery,Miami, FL, USA

Introduction: Aortic injuries are rare, but have a high mortality rate in children and adolescents. We sought to investigate mechanisms of injury and predictors of survival.

Methods:  The Kids’ Inpatient Database was used to identify cases of thoracic and abdominal aortic injury (ICD-9-CM 901.0, 902.0) in patients aged <20 years (1997-2009). Demographic and clinical characteristics were analyzed using standard and multivariate methods. Cases were limited to emergent or urgent admissions.

Results: Overall, 468 cases were identified. Survival was 65% for the cohort, 63% for boys, and 68% for girls. Average length of stay was 10.7 ± 14.0 days, with charges of $105,110 ± $121,838. Adolescents (15-19 years) and males comprised the majority of the group (84% and 79%, respectively). Patients were predominantly Caucasian (45%) and privately insured (51%). Injuries tended to affect patients in the lowest income quartile (36%), and most presented to large (78%) or urban teaching (83%) hospitals. The most common mechanism of injury was motor vehicle-related (77%), followed by other penetrating trauma (10%) and firearm injury (8%). On logistic regression modeling, select diagnoses and procedures, along with gender, race group, payer/income status, and hospital type, were significant determinants of mortality. Boys (OR: 0.15 [95% CI: 0.05, 0.44]) and Hispanic children (OR: 0.14 [0.04, 0.55]) had lower associated mortality vs. girls and Caucasian patients, respectively. Self-pay patients (OR: 6.91 [2.01, 23.8]) had higher mortality vs. privately insured patients. Children in the lowest income quartile (OR: 15.5 [4.16, 57.6]) had higher mortality vs. highest-income patients. Patients admitted to urban non-teaching hospitals (OR: 0.13 [0.03, 0.55]) had lower mortality vs. those admitted to urban teaching hospitals. Patients with traumatic shock (OR: 47.8 [12.4, 184]) or requiring exploratory laparotomy (OR: 13.9 [2.12, 91.8]) had the lowest associated survival overall. Patients undergoing repair of vessel (OR: 0.25 [0.10, 0.62]) or resection of a thoracic vessel with replacement (OR: 0.18 [0.04, 0.73]) had higher associated survival. Survival increased over the study period between 1997 and 2009, p<0.01.

Conclusion: Motor vehicle-related injuries are the predominant mechanism of aortic injury in the pediatric population. Gender, race, payer status, income quartile, and hospital type, along with associated procedures and diagnoses, are significant determinants of mortality on multivariate analysis.

36.01 Cost-utility of prophylactic mesh relative to primary suture repair for high-risk laparotomies

J. P. Fischer1, M. N. Basta1, N. Krishnan2, J. D. Wink1, S. J. Kovach1  1University Of Pennsylvania,Division Of Plastic Surgery,Philadelphia, PA, USA 2Georgetown University Medical Center,Plastic Surgery,Washington, DC, USA

Introduction

Although hernia repair with mesh can be successful, prophylactic mesh augmentation (PMA) represents a potentially useful preventative technique to mitigate incisional hernia (IH) risk in select high-risk patients.  The efficacy, cost-benefit, and societal value of such an intervention are not clear. The aim of this study was to determine the cost-utility of using prophylactic mesh to augment midline fascial incisions.

Methods

A systematic review was performed to identify articles containing comparative outcomes for PMA and suture closure of high-risk laparotomies.  A web-based visual analog scale survey was administered to 300 nationally representative, English-speaking community members (GfK Research) to determine quality-adjusted life-years (QALYs) for 14 health states related to ventral hernia.  A decision tree model was employed to evaluate the cost-utility of PMA relative to primary suture closure after elective laparotomy.  Inputs included costs (DRG, CPT, and retail mesh costs), quality of life, and health-outcome probability estimates. The cost-effectiveness threshold was set at $50,000 per year of life gained. The authors adopted the societal perspective for cost and utility estimates, including direct hospital costs and indirect costs to society.

Results

Primary suture closure without mesh demonstrated an expected average cost of $17,182 (expected QALYs, 21.17) compared with $15,450 (expected QALYs, 21.21) for PMA. Relative to PMA, primary suture closure was associated with an ICER of -$42,444/QALY, such that PMA was more effective and less costly.  Monte Carlo sensitivity analysis demonstrated that most simulations resulted in ICERs for primary suture closure above the willingness-to-pay threshold of $50,000/QALY, supporting the finding that prophylactic mesh is superior in terms of cost-utility.  Additionally, base-rate analysis with an absolute reduction of 15% in hernia recurrence for prophylactic mesh demonstrated that the mesh could cost a maximum of $3,700 and still be cost-effective.
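As a back-of-the-envelope check (not part of the study's model), the ICER can be approximated from the rounded figures above; the small discrepancy with the reported -$42,444/QALY presumably reflects unrounded model inputs.

```python
# Rounded point estimates taken from the abstract.
cost_suture, qaly_suture = 17_182.0, 21.17  # primary suture closure
cost_pma, qaly_pma = 15_450.0, 21.21        # prophylactic mesh augmentation

def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio of strategy A versus strategy B."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)

value = icer(cost_suture, qaly_suture, cost_pma, qaly_pma)
print(round(value))  # about -43,300/QALY with these rounded inputs

# A negative ICER with higher cost and fewer QALYs means suture closure is
# dominated: PMA is both more effective and less costly.
dominated = cost_suture > cost_pma and qaly_suture < qaly_pma
print(dominated)  # True
```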

Conclusions

Cost-utility analysis of primary suture repair compared with PMA of abdominal fascial incisions demonstrates that PMA was more effective, less costly, and overall more cost-effective.  Sensitivity analysis demonstrates that PMA dominates at multiple levels of willingness-to-pay and is a potentially valuable, cost-effective, low-risk intervention to mitigate the risk of IH.

36.02 Cost-Effectiveness of Non-operative Management of Acute Uncomplicated Appendicitis

J. X. Wu1, A. J. Dawes1, G. D. Sacks1  1UCLA David Geffen School Of Medicine,Department Of General Surgery,Los Angeles, CA, USA

Introduction:  Appendectomy remains the gold standard of treatment for acute uncomplicated appendicitis. Nonetheless, there is growing evidence that non-operative management is both safe and efficacious. Non-operative management avoids the initial cost and morbidity associated with an operation but may result in longer hospital stays, increased readmissions, and a higher risk of treatment failure. We hypothesized that non-operative management of acute appendicitis would be cost-effective.

Methods:  We constructed a decision tree to compare non-operative management of acute uncomplicated appendicitis, both with and without interval appendectomy (IA), to laparoscopic appendectomy at the time of diagnosis (Fig 1). Outcome probabilities, health utilities, and direct costs were abstracted from a review of the literature, Healthcare Cost and Utilization Project data, the Medicare Physician Fee Schedule, and the American College of Surgeons National Surgical Quality Improvement Program Surgical Risk Calculator. Conservative estimates were used for operative costs and postoperative quality-adjusted life-year (QALY) reductions to favor conventional operative management. Operative management was used as the reference group for cost-effectiveness comparisons. We performed Monte Carlo simulations using one-way and probabilistic sensitivity analyses to assess model parameters.

Results: Operative management had a mean cost of $12,386. Compared to the status quo, non-operative management without IA dominated as the most cost-effective strategy, costing $1,937 less and yielding 0.04 additional QALYs. Non-operative management with IA was the least cost-effective strategy, requiring an additional $2,880 per patient with no additional health benefit. One-way sensitivity analysis revealed that operative management would become the dominant strategy if the recurrence rate of acute appendicitis after non-operative management exceeded 42%, or if the total cost of operative management could be reduced below $5,568. Probabilistic sensitivity analysis revealed that non-operative management without IA was dominant in 100% of 10,000 iterations.
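The dominance logic behind these comparisons can be sketched as follows, using the point estimates above. The strategy labels are illustrative shorthand, and incremental QALYs are taken relative to operative management as the study's reference group.

```python
# (cost per patient, incremental QALYs vs. operative management)
strategies = {
    "operative":        (12_386.0, 0.00),
    "nonop_without_ia": (12_386.0 - 1_937.0, 0.04),
    "nonop_with_ia":    (12_386.0 + 2_880.0, 0.00),
}

def dominates(a, b):
    """Strategy a dominates b if it is no more costly and no less effective,
    with at least one strict improvement (no willingness-to-pay needed)."""
    ca, ea = strategies[a]
    cb, eb = strategies[b]
    return ca <= cb and ea >= eb and (ca < cb or ea > eb)

print(dominates("nonop_without_ia", "operative"))  # True: cheaper, more QALYs
print(dominates("nonop_with_ia", "operative"))     # False: costlier, same QALYs
```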

Conclusion: Non-operative management without IA is potentially the most cost-effective treatment for healthy adults with acute uncomplicated appendicitis, and deserves serious consideration as a treatment option for a disease long thought to be definitively surgical. Further studies are necessary to better characterize the health states associated with the various treatment outcomes.


36.03 A Cost-utility Assessment of Mesh Selection in Clean and Clean-Contaminated Ventral Hernia Repair (VHR)

J. P. Fischer1, M. Basta1, J. D. Wink1, N. Krishnan2, S. J. Kovach1  1University Of Pennsylvania,Division Of Plastic Surgery,Philadelphia, PA, USA 2Georgetown University Medical Center,Plastic Surgery,Washington, DC, USA

PURPOSE

Ventral hernia is a common, challenging, and costly problem in the United States.  Mesh reinforcement can reduce recurrence, but mesh selection is poorly understood, particularly in higher-risk wounds.  Acellular dermal matrices (ADM) provide a tool for single-stage ventral hernia repair (VHR) in challenging wounds but can be associated with higher complication rates, greater cost, and poorer longevity. The aim of this study was to perform a cost-utility analysis of ADM versus synthetic mesh in clean and clean-contaminated (CC) VHR.

METHOD

A systematic review was performed to identify articles containing comparative outcomes for synthetic mesh and ADM repairs.  A web-based visual analog scale survey was administered to 300 nationally representative community members (GfK Research) to determine quality-adjusted life-years (QALYs) for several health states related to hernia repair. A decision tree was created for the reference cases (VHR with ADM or synthetic mesh) and up to six additional post-operative scenarios.  Inputs included costs (DRG, CPT, and retail mesh costs), quality of life, and health-outcome probability estimates. The cost-effectiveness threshold for the incremental cost-utility ratio (ICUR) was set at $50,000 per year of life gained.


RESULT

There was a 16% increase in the risk of a complication after VHR when using biologic mesh compared with synthetic mesh in CC fields; this increase rose to 30% in clean fields. In CC fields, biologic mesh increased the expected cost of VHR by $8,022.61 relative to synthetic mesh, with a loss in clinical efficacy of 0.47 QALYs, yielding an ICUR of -$17,000/QALY. In clean fields, there was an expected cost increase of $11,694.02 with a clinical loss of 0.51 QALYs, yielding an ICUR of -$23,000/QALY. Sensitivity analysis revealed that the recurrence rate of biologic mesh would need to fall below 5%, or the recurrence rate of synthetic mesh to exceed 23%, for biologic mesh to be cost-effective in CC fields. In clean cases, the recurrence rate of synthetic mesh would need to exceed 21% for biologic mesh to be cost-effective.
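To illustrate how one-way sensitivity thresholds of this kind are located, the sketch below scans a recurrence rate through a purely hypothetical net-monetary-benefit model. None of the inputs (costs, QALYs, recurrence penalties) come from the study, so the resulting break-even value is illustrative only.

```python
# Purely illustrative inputs -- NOT taken from the study.
WTP = 50_000.0                  # cost-effectiveness threshold, $/QALY
SYNTHETIC = (15_000.0, 20.00)   # hypothetical (expected cost, expected QALYs)
RECURRENCE_COST = 20_000.0      # hypothetical cost of repairing one recurrence
RECURRENCE_QALY_LOSS = 0.30     # hypothetical QALY loss per recurrence

def biologic_outcome(recurrence_rate, base_cost=18_000.0, base_qalys=20.10):
    """Hypothetical expected cost/QALYs for biologic mesh at a recurrence rate."""
    cost = base_cost + recurrence_rate * RECURRENCE_COST
    qalys = base_qalys - recurrence_rate * RECURRENCE_QALY_LOSS
    return cost, qalys

def cost_effective(recurrence_rate):
    """Biologic is acceptable if its net monetary benefit beats synthetic's."""
    c_b, q_b = biologic_outcome(recurrence_rate)
    c_s, q_s = SYNTHETIC
    return WTP * q_b - c_b >= WTP * q_s - c_s

# Scan recurrence rates in 0.1% steps to find the break-even point.
threshold = next(r / 1000 for r in range(1000) if not cost_effective(r / 1000))
print(f"break-even biologic recurrence = {threshold:.1%}")
```

With these made-up inputs the scan lands near 6%; the study's reported 5% threshold would emerge from its own (unpublished here) model inputs in the same way.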

CONCLUSION

This cost-effectiveness analysis of mesh selection indicates that biologic mesh is not cost-effective relative to synthetic mesh in clean or CC defects. Specifically, from a societal perspective, synthetic mesh is both cheaper and more clinically effective than biologic mesh.  Given the high prevalence of hernia and its associated cost to society, these data are critically important for improving cost-effective repair techniques, providing value-based care, and conserving healthcare resources in an ever-changing healthcare environment.

36.04 National Analysis of Cost and Resource Utilization of Expanded Criteria Donor Kidneys

C. C. Stahl1, K. Wima1, D. J. Hanseman1, R. S. Hoehn1, E. F. Midura1, I. M. Paquette1, S. A. Shah1, D. E. Abbott1  1University Of Cincinnati,Cincinnati, OH, USA

Introduction:  Despite efforts to increase the deceased donor pool by increased utilization of expanded criteria donor (ECD) kidneys, concerns have been raised about the financial impact and resource utilization of these organs.

Methods:  The Scientific Registry of Transplant Recipients database was linked to the University HealthSystem Consortium Database to identify adult deceased donor kidney transplant recipients from 2009-2012.  Patients were divided into those receiving standard criteria donor (SCD) and ECD kidneys.  Length of stay, 30-day readmission rates, discharge disposition, and delayed graft function (DGF) were used as indicators of resource utilization.  Cost was defined as reimbursement based on Medicare cost:charge ratios, and included the costs of readmission when applicable.

Results: Of the 19,529 patients in the final dataset (47.6% of the total SRTR deceased donor cohort), 3,495 (17.9%) were ECD recipients.  ECD kidneys were more likely to be transplanted into older (median age 62 vs 52), male (63.7% vs 59.3%), and diabetic (47.1% vs 31.7%) recipients; all p<0.001.  On multivariable analysis, ECD kidneys were associated with increased 30-day readmission (OR: 1.35, CI: 1.21-1.50) and DGF (OR: 1.33, CI: 1.19-1.50), but length of stay (RR: 1.03, CI: 0.97-1.09) and discharge disposition (discharge to home, OR: 1.03, CI: 0.78-1.37) were similar between cohorts.  There was no difference in total cost (transplant hospitalization plus readmission within 30 days) between ECDs and SCDs (RR: 0.97, CI: 0.93-1.00, p=0.07).

Conclusion: These data suggest that use of ECDs does not negatively impact short-term resource utilization and that ECDs can be more broadly utilized without financial consequences.


36.05 Abandoning Daily Routine Chest X-rays in a Surgical Intensive Care Unit: A Strategy to Reduce Costs

S. A. Hennessy1, T. Hranjec2, K. A. Boateng1, M. L. Bowles1, S. L. Child1, M. P. Robertson1, R. G. Sawyer1  1University Of Virginia,Department Of Surgery,Charlottesville, VA, USA 2University Of Texas Southwestern Medical Center,Department Of Surgery,Dallas, TX, USA

Introduction:  Chest x-ray (CXR) remains the most commonly used imaging modality in the Surgical Intensive Care Unit (SICU), especially in mechanically ventilated patients.  The practice of daily routine CXRs is associated with morbidity for the patient and significantly increased costs.  We hypothesized that eliminating routine daily CXRs in the SICU in favor of clinically driven on-demand CXRs would decrease cost without any change in morbidity or mortality.

Methods:  A prospective comparative quality improvement project was performed over a 6-month period at a single institution.  From November 2013 through January 2014, critically ill patients underwent daily routine CXRs (group 1).  From February through April 2014, daily routine CXRs were eliminated (group 2); ICU patients received a CXR only under an on-demand strategy, which advised imaging for significant clinical changes or post-procedure.  Patients before and after the on-demand CXR strategy were compared by univariate analysis, using parametric and non-parametric testing where appropriate.  A multivariate logistic regression was performed to identify independent predictors of mortality.

Results:  In total, 495 SICU admissions were evaluated: 256 (51.7%) in group 1 and 239 (48.3%) in group 2.  There was a significant difference in the number of CXRs per admission, with 4.2 ± 0.7 in the daily CXR group versus 1.2 ± 0.1 in the on-demand group (p<0.0001).  The mean cost per admission was $394.8 ± 47.1 in the daily CXR group versus $129.9 ± 12.5 in the on-demand group (p<0.0001), an estimated cost savings of $60,000 over the 3-month period for group 2 compared to group 1.  Decreased ICU length of stay (LOS), hospital LOS, and duration of mechanical ventilation (MV) were seen in group 2, while mortality and re-intubation rates were equivalent despite decreased imaging (Table 1).  After adjusting for age, gender, re-intubation rate, duration of MV, and APACHE III score, no difference in mortality was seen between the two groups (OR 2.2, 95% CI 0.7-6.4, p=0.15).  To further adjust for severity of illness, patients with APACHE III scores > 30 were analyzed separately.  Mortality, re-intubation rate, ICU LOS, and hospital LOS were similar between the groups, while duration of MV remained decreased (Table 1).  In high-APACHE III patients there was also a reduction in the number of CXRs per admission, from 4.5 ± 0.8 to 1.4 ± 0.2, with a cost savings of $316.6 per ICU admission.
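As an arithmetic check (not from the study), the reported ~$60,000 three-month savings is consistent with the per-admission cost means applied to group 2's admissions:

```python
# Per-admission mean CXR costs reported in the abstract.
daily_cost_per_adm = 394.8      # group 1 (daily routine CXR), $
on_demand_cost_per_adm = 129.9  # group 2 (on-demand CXR), $
group2_admissions = 239

savings_per_admission = daily_cost_per_adm - on_demand_cost_per_adm
total_savings = savings_per_admission * group2_admissions
print(round(savings_per_admission, 2))  # 264.9 per admission
print(round(total_savings))             # 63311, in line with the ~$60,000 reported
```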

Conclusion:  Use of a clinical on-demand CXR strategy led to large cost savings without an associated increase in duration of mechanical ventilation or mortality.  This safe and effective quality initiative can reduce cost without increasing adverse outcomes.


36.06 Factors Associated with Secondary Overtriage in a Statewide Rural Trauma System

J. Con1, D. M. Long1, G. Schaefer1, J. C. Knight1, K. J. Fawad1, A. Wilson1  1West Virginia University,Department Of Surgery / Division Of Trauma, Emergency Surgery And Surgical Critical Care,Morgantown, WV, USA

Introduction:
Rural hospitals have variable degrees of involvement within the nationwide trauma system because of differences in infrastructure, human resources, and operational goals.  "Secondary overtriage" describes a seemingly unnecessary transfer to another hospital, shortly after which the trauma patient is discharged home without requiring an operation.  Analysis of these occurrences is useful for gauging the efficiency of the trauma system as a whole.  Few have addressed this phenomenon, and to our knowledge, we are the first to study it in the setting of a rural state's trauma system.

Methods:
Data were extracted from a statewide trauma registry (2003-2013) for patients who 1) were discharged home within 48 h of arrival and 2) did not undergo a surgical procedure.  We then distinguished those who arrived as transfers (secondary overtriage) from those who arrived from the scene.  Factors associated with transfer were analyzed using logistic regression.  Injuries were classified based on the need for a specific consultant.  Time of arrival to the ED was analyzed in 8-hour blocks, with 7AM-3PM as the reference.

Results:
A total of 19,319 patients met our inclusion criteria, of whom 1,897 (9.8%) arrived as transfers.  The mean ISS was 3.8 ± 3 for non-transfers and 6.6 ± 5 for transfers (p<0.0001).  Descriptive analysis showed various other differences between transfers and non-transfers, owing to our large sample size.  We therefore examined the more clinically significant variables using logistic regression, controlling for age, ISS, type of injury, blood products given, time of arrival at the initial ER, and whether a CT scan was obtained initially.  Factors associated with transfer were age >65, ISS >15, transfusion of PRBCs, graveyard-shift arrival, and neurosurgical, spine, and facial injuries.  Orthopedic injuries were not associated with transfer.  Patients who had a CT scan at the initial facility were less likely to be transferred.

Conclusion:
Although transferred patients were more severely injured, injury severity was not the only factor driving the decision to transfer.  Other factors reflected the rural hospital's limited resources, including the availability of surgical specialists, blood products, and overall coverage during the graveyard shift.  More liberal use of the CT scanner at the initial facility may prevent unnecessary transfers.

36.07 Comparing Local Flaps When Treating the Infected Vascular Groin Graft Wound: A Cost-Utility Analysis

A. Chatterjee1, T. Kosowski2, B. Pyfer2, S. Maddali3, C. Fisher1, J. Tchou1  1University Of Pennsylvania,Surgery,Philadelphia, PA, USA 2Dartmouth Medical School,Surgery,Lebanon, NH, USA 3Maine Medical Center,Portland, ME, USA

Introduction:

A variety of options exist for the treatment of the infected vascular groin graft.  The vascular and plastic surgery literature reports the sartorius and rectus femoris flaps as reasonable coverage options.  Both flaps incur costs and vary in their success rates.  Our goal was therefore to perform a cost-utility analysis of the sartorius flap versus the rectus femoris flap in the treatment of an infected vascular groin graft.

Methods:

Cost-utility methodology involved a literature review compiling outcomes for each flap intervention, obtaining utility scores for complications to estimate quality-adjusted life years (QALYs), accruing costs using DRG and CPT codes for each intervention, and developing a decision tree to portray the more cost-effective strategy. Complications were divided into major and minor categories, with major including graft loss with axillary-femoral bypass, amputation, and death. Minor complications assumed graft salvage after local debridement for partial flap necrosis, seromas, and hematomas.  The upper limit for willingness-to-pay was set at $50,000.  We also performed sensitivity analysis to check the robustness of our data. Szilagyi III and Samson III and IV grades of infected groin grafts were included in our study.

Results:

Thirty-two studies were used, pooling 296 patients (234 sartorius flaps, 62 rectus femoris flaps). Decision tree analysis showed that the rectus femoris flap was the more cost-effective option (Figure).  It was the dominant treatment option: it was more clinically effective by an additional 0.30 QALYs, while the sartorius flap cost an additional $2,241.88. These results were driven substantially by the sartorius flap's 13.68% major complication rate versus 8.6% for the rectus femoris flap. One-way sensitivity analysis showed that the sartorius flap became cost-effective if its major complication rate was less than or equal to 8.89%.

Conclusion:

The rectus femoris flap in the treatment of the infected vascular groin graft is a cost-effective option compared to the sartorius flap.