38.05 Factors impacting patient compliance with breast cancer screening guidelines in the US.

S. C. Pawar1, R. S. Chamberlain1  1Saint Barnabas Medical Center, Surgery, Livingston, NJ, USA

Introduction: Breast cancer screening guidelines vary for women aged 40-49, 50-74, and over 75 years. Controversy exists regarding the effectiveness and potential risks of screening among different age groups, and important predictors of mammography remain unclear. This study sought to determine breast cancer screening rates among US women of various ages, identify factors predictive of adherence to mammographic screening guidelines, and determine the impact of physician recommendation.

Methods: The National Health Interview Survey database was queried to identify women who underwent screening mammography between 2008 and 2010. Univariate and multivariate logistic regression models were used to identify predictors of mammography.

Results: The median age of the study cohort was 53 years. Among 11,312 women surveyed, 8,155 (72%) had undergone a mammogram. Women undergoing mammographic screening were significantly older than women who did not (53 vs. 39 years; P < 0.001). Screening rates were highest among women aged 50-74 years, followed by those ≥75 years (85%), 40-49 years (77%), and <40 years (27%). Of this cohort, 74% were Caucasian, 18% African American, 0.8% Asian, and 2% other races. Insurance coverage for mammographic screening was held by 86% of women, while 14% lacked any coverage; overall, 53% of the uninsured received a mammogram. The Northeastern region had the highest percentage of women screened; however, there were no significant geographic differences. Mammographic screening was completed by 59% of women in whom it was recommended by physicians and by 75% in whom it was recommended by their designated primary care provider (p < 0.01). The percentage of women undergoing mammographic screening dropped from 78% (2008) to 76% (2010) over the study period, and this decline was significant across all age groups except women <40 years. The strongest predictors of completing mammography were physician recommendation, a designated primary care provider recommendation, adherence to annual breast examination, race/ethnicity, insurance type, and income status. The strongest association between physician recommendation and undergoing mammography was in the youngest age group (OR: 20; 95% CI 15-27). Among women <40 years for whom a mammogram was recommended by a physician, 23% had a history of BRCA1/BRCA2 gene mutation, while 34% reported a family history of breast cancer.

Conclusion: A decrease in mammography screening among women of all age groups was observed during the study period and was most conspicuous in younger women. Explanations are likely multifactorial, but may be related to implementation of the updated United States Preventive Services Task Force (USPSTF) recommendations. Identified barriers to mammography included the absence of physician recommendation, lack of a designated primary care provider, lack of adherence to annual breast examination, minority race/ethnicity, lower socioeconomic status, lower education level, and poor insurance status. Physician recommendation is the strongest predictor of mammographic screening compliance across all age groups, although mammography may be over-recommended among women >70 years and <40 years.
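The univariate odds ratios reported above follow from simple 2x2 contingency tables. A minimal Python sketch, using hypothetical counts chosen only to reproduce the OR of 20 for physician recommendation (Woolf's method for the confidence interval):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald (Woolf) 95% CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: screened vs. not screened, by physician recommendation
or_, lo, hi = odds_ratio_ci(400, 100, 50, 250)
print(f"OR {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # OR 20.0 (95% CI 13.8-29.1)
```

The multivariate models in the abstract adjust each such estimate for the other covariates; this sketch illustrates only the unadjusted calculation.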

 

38.06 Utilization of PET in Patients with Lung and Esophageal Cancers

M. A. Healy1, H. Yin1, R. M. Reddy1, S. L. Wong1  1University Of Michigan, Department Of Surgery And Center For Health Outcomes & Policy, Ann Arbor, MI, USA

Introduction: Positron emission tomography (PET) scans are commonly used as part of the staging process for cancer patients. PET scans are often used for surveillance without evidence that they are superior to lower-cost imaging, and there are concerns about potential overuse. We evaluated PET utilization patterns for patients with lung (LC) and esophageal (EC) cancers.

Methods: Using national Surveillance, Epidemiology, and End Results (SEER)-Medicare linked data from 2005-2009, we examined the use of PET in a cohort of patients with primary lung (n=105,697) and esophageal (n=6,961) cancers diagnosed during this period. Cancer diagnoses were identified with ICD-9 diagnosis codes: lung 162.xx and esophagus 150.xx. Diagnostic services such as PET are captured as charges, which are covered under Medicare. We examined a fee-for-service cohort, excluding patients in risk-bearing Medicare managed care plans and patients who were not continuously enrolled in Parts A and B. We examined the frequency and timing of PET usage in relation to diagnosis, treatment, and cancer stage.

Results: There was similar overall utilization of PET in these groups, with 47,795 (45.2%) and 3,734 (53.6%) of lung and esophageal cancer patients, respectively, receiving at least one scan. Most patients received a first scan within 3 months of diagnosis (78.3% LC, 87.3% EC), indicating likely use for staging. Use of 2 or more scans occurred in 20,216 (19.1%) and 1,867 (26.7%) of LC and EC patients, respectively.  Additionally, 11,117 (10.5%) LC and 1,052 (15.1%) EC patients underwent 3 or more scans. Among patients with stage IV disease, 2 or more scans were performed in 4,987 (11.8%) and 382 (21%) of LC and EC patients, respectively. In this stage IV group, 2,710 (6.4%) LC and 222 (12.2%) EC underwent 3 or more scans.

For patients who underwent PET prior to chemotherapy, 10,085 (28.5%) and 781 (26.8%) of LC and EC patients received a single additional scan, 11,467 (32.3%) and 1,345 (46.2%) had 2 or more scans, and 6,774 (19.1%) and 793 (27.2%) had 3 or more scans. Total PET usage for LC was 96,475 scans and for EC was 8,223 scans.

Conclusion: Our results show that PET usage is common, though only about half of patients with LC and EC received staging scans. However, many patients undergo multiple scans. A large number of patients with stage IV disease underwent 3 or more scans, and it is in these patients that the likelihood of any benefit is lowest. Our data support the need for continued education to avoid using PET for surveillance in these cancers, especially in patients with advanced disease. Medicare's current policy limiting routine reimbursement to 3 scans probably does not effectively curb wasteful PET usage.

35.07 Heart Rate in Pediatric Trauma: Rethink Your Strategy

J. Murry1, D. Hoang1, G. Barmparas1, A. Zaw1, M. Nuno1, K. Catchpole1, B. Gewertz1, E. J. Ley1  1Cedars-Sinai Medical Center, Los Angeles, CA, USA

Introduction:   The optimal heart rate (HR) for children after trauma is based on resting values for a given age, sometimes with the use of a Broselow tape. Given that the stages of shock are based in part on HR and blood pressure, treatment plans may vary if these values are abnormal. Admission HRs for children after trauma were analyzed to determine which ranges were associated with the lowest mortality.

Methods:   The NTDB (2007-2011) was queried for all injured patients aged 1 to 14 years admitted (n = 398,544). Age groups were analyzed in ranges matching those provided by the Broselow tape (1 year, 2-3 years, 4 years, 5-6 years, 7-8 years, 9-11 years, 12-13 years). Exclusions included any Abbreviated Injury Scale = 6, Injury Severity Score = 75, ED demise, or missing information.

Results:  After exclusions, admission HRs from 135,590 pediatric trauma patients were analyzed; overall mortality was 0.7% (table). At 1 year, the HR range with the lowest odds ratio for mortality was 100 to 179. Starting at age 7 years, the lowest mortality was observed for the HR range 80-99.

Conclusion:  The HR associated with the lowest mortality after pediatric trauma frequently differs from current standards. Starting at age 7 years, an HR range of 80 to 99 predicts lower mortality. Our data indicate that at age 7 years a child with an HR of 120 may be in stage III shock, and treatment might include admission, intravenous fluids, and possibly blood products. Traditional HR ranges suggest that the normal HR for this child includes 120, and therefore aggressive treatment might not be considered. Knowing when HR is critically high or low in the pediatric trauma population might guide treatment options such as ED observation, hospital admission, ICU admission, and even emergent surgery.

 

35.08 The Impact of the American College of Surgeons Pediatric Trauma Center Verification on In-Hospital Mortality

B. C. Gulack1, J. E. Keenan1, D. P. Nussbaum1, B. R. Englum1, O. O. Adibe1, M. L. Shapiro1, J. E. Scarborough1  1Duke University Medical Center, Department Of Surgery, Durham, NC, USA

Introduction:  Previous studies have demonstrated improvement in the survival of pediatric trauma patients treated at American College of Surgeons (ACS) verified pediatric trauma centers.  However, it is not known whether the level of pediatric trauma center verification, Level 1 (PTC1) versus Level 2 (PTC2), has any effect on outcomes.

Methods: We performed a review of the research data set (RDS) from the National Trauma Data Bank (NTDB) from 2007-2011, including all pediatric patients less than 16 years of age treated at an ACS-verified adult level I trauma center. Patients were excluded if they were transferred to another facility. Patients were subdivided on the basis of pediatric trauma center verification status: PTC1, PTC2, or trauma center without pediatric ACS verification. These groups were compared with regard to baseline demographics, injury severity, and outcomes. Multivariable logistic regression was then performed to determine the independent association between ACS pediatric verification and in-hospital mortality.

Results: A total of 124,773 patients were included in the study, of whom 63,746 (51.1%) presented to a PTC1, 7,562 (6.1%) to a PTC2, and 53,465 (42.8%) to a trauma center with no pediatric ACS verification. Unadjusted analysis demonstrated significant differences in in-hospital mortality at PTC1s (1.6%) compared to PTC2s (2.4%) and trauma centers without pediatric ACS verification (2.1%, p<0.001). In multivariable logistic regression, compared to hospitals without pediatric ACS verification, PTC1s had significantly reduced in-hospital mortality (Adjusted Odds Ratio [AOR] (95% Confidence Interval [CI]): 0.85 (0.73, 0.99), Figure) while PTC2s did not (AOR (95% CI): 1.07 (0.80, 1.42)).

Conclusion: Pediatric patients treated at centers verified as level I pediatric trauma centers by the ACS have significantly decreased odds of in-hospital mortality compared to those treated at non-verified centers; however, this benefit is not seen for PTC2s. Further investigation is necessary to determine whether more stringent requirements are needed for PTC2 verification.

35.09 Outcomes for Burns in Children: Volume Makes a Difference

T. L. Palmieri1,2, S. Sen1,2, D. G. Greenhalgh1,2  1University Of California – Davis, Sacramento, CA, USA 2Shriners Hospitals For Children Northern California, Sacramento, CA, USA

Introduction: The relationship between center volume and patient outcomes has been analyzed for multiple conditions, including burns, with variable results. To date, studies on burn volume and outcomes have primarily addressed adults. Burned children require age specific equipment and competencies in addition to burn wound care. We hypothesized that volume of patients treated would impact outcome for burned children.

Methods: We used the National Burn Repository (NBR) release from 2000-2009 to evaluate the influence of pediatric burn volume on outcomes using mixed effect logistic regression modeling. Of the 210,683 records in the NBR over that time span, 33,115 records for children ≤18 years of age met criteria for analysis.

Results: Of the 33,115 records, 26,280 had burn sizes smaller than 10%; only 32 of these children died. The volume of children treated varied greatly among facilities. Age, total body surface area (TBSA) burned, inhalation injury, and burn center volume influenced mortality (p<0.05). An increase in median yearly admissions of 100 decreased the odds of mortality by approximately 40%. High volume centers (admitting >200 pediatric patients/year) had the lowest mortality when adjusting for age and injury characteristics (p<0.05).

Conclusion: Burn centers caring for a greater number of children had lower mortality rates. The lower mortality of children at high volume centers could reflect greater experience, resources, and specialized expertise in treating pediatric patients.

 

35.10 Mechanism and Mortality of Pediatric Aortic Injuries.

J. Tashiro1, C. J. Allen1, J. Rey2, E. A. Perez1, C. M. Thorson1, B. Wang1, J. E. Sola1  1University Of Miami, Division Of Pediatric Surgery, DeWitt-Daughtry Department Of Surgery, Miami, FL, USA 2University Of Miami, Division Of Vascular And Endovascular Surgery, DeWitt-Daughtry Department Of Surgery, Miami, FL, USA

Introduction: Aortic injuries are rare, but have a high mortality rate in children and adolescents. We sought to investigate mechanisms of injury and predictors of survival.

Methods:  The Kids’ Inpatient Database was used to identify cases of thoracic and abdominal aortic injury (ICD-9-CM 901.0, 902.0) in patients aged <20 years (1997-2009). Demographic and clinical characteristics were analyzed using standard and multivariate methods. Cases were limited to emergent or urgent admissions.

Results: Overall, 468 cases were identified. Survival was 65% for the cohort, 63% for boys, and 68% for girls. Average length of stay was 10.7 ± 14.0 days, with mean charges of $105,110 ± $121,838. Adolescents (15-19 years) and males comprised the majority of the group (84% and 79%, respectively). Patients were predominantly Caucasian (45%) and privately insured (51%). Injuries tended to affect patients in the lowest income quartile (36%), and most presented to large (78%) or urban teaching (83%) hospitals. The most common mechanism of injury was motor vehicle-related (77%), followed by other penetrating trauma (10%) and firearm injury (8%). On logistic regression modeling, select diagnoses and procedures, along with gender, race group, payer/income status, and hospital type, were significant determinants of mortality. Boys (OR: 0.15 [95% CI: 0.05, 0.44]) and Hispanic children (OR: 0.14 [0.04, 0.55]) had lower associated mortality vs. girls and Caucasian patients, respectively. Self-pay patients (OR: 6.91 [2.01, 23.8]) had higher mortality vs. privately insured patients. Children in the lowest income quartile (OR: 15.5 [4.16, 57.6]) had higher mortality vs. highest income patients. Patients admitted to urban non-teaching hospitals (OR: 0.13 [0.03, 0.55]) had lower mortality vs. those admitted to urban teaching hospitals. Patients with traumatic shock (OR: 47.8 [12.4, 184]) or requiring exploratory laparotomy (OR: 13.9 [2.12, 91.8]) had the highest associated mortality. Patients undergoing repair of vessel (OR: 0.25 [0.10, 0.62]) or resection of thoracic vessel with replacement (OR: 0.18 [0.04, 0.73]) had lower associated mortality. Survival increased over the study period between 1997 and 2009 (p<0.01).

Conclusion: Motor vehicle-related injuries are the predominant mechanism of aortic injury in the pediatric population. Gender, race, payer status, income quartile, and hospital type, along with associated procedures and diagnoses, are significant determinants of mortality on multivariate analysis.

36.01 Cost-utility of prophylactic mesh relative to primary suture repair for high-risk laparotomies

J. P. Fischer1, M. N. Basta1, N. Krishnan2, J. D. Wink1, S. J. Kovach1  1University Of Pennsylvania, Division Of Plastic Surgery, Philadelphia, PA, USA 2Georgetown University Medical Center, Plastic Surgery, Washington, DC, USA

Introduction

Although hernia repair with mesh can be successful, prophylactic mesh augmentation (PMA) represents a potentially useful preventive technique to mitigate incisional hernia (IH) risk in select high-risk patients. The efficacy, cost-benefit, and societal value of such an intervention are not clear. The aim of this study is to determine the cost-utility of using prophylactic mesh to augment midline fascial incisions.

Methods

A systematic review was performed identifying articles containing comparative outcomes for PMA and suture closure of high-risk laparotomies. A web-based visual analog scale survey was administered to 300 nationally representative community members to determine quality-adjusted life-years (QALYs) for several health states related to hernia repair (GfK Research). A decision tree model was employed to evaluate the cost-utility of PMA relative to primary suture closure after elective laparotomy. Inputs included cost (DRG, CPT, and retail costs for mesh), quality of life, and health-outcome probability estimates. The cost-effectiveness threshold was set at $50,000 per year of life gained. The authors adopted the societal perspective for cost and utility estimates: costs included direct hospital costs and indirect costs to society, and utilities were obtained through a survey of 300 English-speaking members of the general public evaluating 14 health state scenarios relating to ventral hernia.

Results

Primary suture closure without mesh demonstrated an expected average cost of $17,182 (expected QALY 21.17) compared to $15,450 (expected QALY 21.21) for PMA. Primary suture closure was associated with an ICER of -$42,444/QALY compared to prophylactic mesh, such that PMA was both more effective and less costly. Monte Carlo sensitivity analysis demonstrated that most simulations resulted in ICERs for primary suture closure above the willingness-to-pay threshold of $50,000/QALY, supporting the finding that prophylactic mesh is superior in terms of cost-utility. Additionally, base rate analysis with an absolute reduction in hernia recurrence rate of 15% for prophylactic mesh demonstrated that mesh could cost a maximum of $3,700 and still be cost-effective.
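For readers outside health economics, the ICER above is simply the cost difference divided by the QALY difference between strategies; a negative value here means suture closure costs more while delivering fewer QALYs, i.e. PMA dominates. A quick check with the rounded figures reported above (the abstract's -$42,444/QALY presumably reflects unrounded model inputs):

```python
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio of strategy A vs. B ($/QALY)."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)

# Primary suture closure (A) vs. prophylactic mesh augmentation (B)
value = icer(17182, 21.17, 15450, 21.21)
print(round(value))  # -43300: more costly and less effective, so PMA dominates
```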

Conclusions

Cost-utility analysis of suture repair compared to PMA of abdominal fascial incisions demonstrates that PMA was more effective, less costly, and overall more cost-effective than primary suture closure. Sensitivity analysis demonstrates that PMA dominates at multiple levels of willingness-to-pay and is a potentially valuable, cost-effective, low-risk intervention to mitigate the risk of IH.

36.02 Cost-Effectiveness of Non-operative Management of Acute Uncomplicated Appendicitis

J. X. Wu1, A. J. Dawes1, G. D. Sacks1  1UCLA David Geffen School Of Medicine, Department Of General Surgery, Los Angeles, CA, USA

Introduction:  Appendectomy remains the gold standard of treatment for acute uncomplicated appendicitis. Nonetheless, there is growing evidence that non-operative management is both safe and efficacious. Non-operative management avoids the initial cost and morbidity associated with an operation, but may result in longer hospital stays, increased readmissions, and a higher risk of treatment failure. We hypothesized that non-operative management of acute appendicitis is cost-effective.

Methods:  We constructed a decision tree to compare non-operative management of acute uncomplicated appendicitis, both with and without interval appendectomy (IA), to laparoscopic appendectomy at the time of diagnosis (Fig 1). Outcome probabilities, health utilities, and direct costs were abstracted from a review of the literature, Healthcare Cost and Utilization Project data, the Medicare Physician Fee Schedule, and the American College of Surgeons National Surgical Quality Improvement Program Surgical Risk Calculator. Conservative estimates were used for operative costs and postoperative quality-adjusted life-year (QALY) reductions to favor conventional operative management. Operative management was used as the reference group for cost-effectiveness comparisons. We performed one-way and probabilistic (Monte Carlo) sensitivity analyses to assess model parameters.

Results: Operative management had a mean cost of $12,386. Compared to this status quo, non-operative management without IA dominated as the most cost-effective management strategy, costing $1,937 less and yielding 0.04 additional QALYs. Non-operative management with IA was the least cost-effective strategy, requiring an additional $2,880 per patient with no additional health benefit. One-way sensitivity analysis revealed that operative management would become the dominant strategy if the recurrence rate of acute appendicitis after non-operative management exceeded 42%, or if the total cost of operative management could be reduced below $5,568. Probabilistic sensitivity analysis revealed that non-operative management without IA was dominant in 100% of 10,000 iterations.
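The probabilistic sensitivity analysis described above can be sketched as a Monte Carlo loop: draw costs and QALYs for each strategy from assumed distributions and count how often non-operative management yields the higher net monetary benefit. All distributions and utility values below are hypothetical, only loosely anchored to the point estimates reported:

```python
import random

def nmb(cost, qaly, wtp=50_000):
    """Net monetary benefit at a willingness-to-pay threshold ($/QALY)."""
    return wtp * qaly - cost

def psa(n=10_000, seed=0):
    """Toy probabilistic sensitivity analysis (all parameters assumed)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        op_cost = rng.gauss(12_386, 1_000)     # operative management
        nonop_cost = rng.gauss(10_449, 1_000)  # $1,937 cheaper on average
        op_qaly = rng.gauss(0.96, 0.01)        # hypothetical utilities
        nonop_qaly = rng.gauss(1.00, 0.01)     # +0.04 QALY on average
        if nmb(nonop_cost, nonop_qaly) > nmb(op_cost, op_qaly):
            wins += 1
    return wins / n

print(psa())  # fraction of iterations in which non-operative care dominates
```

With distributions this tightly centered on the point estimates, the non-operative arm wins in nearly every iteration, mirroring the 100%-dominance result reported.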

Conclusion: Non-operative management without IA is potentially the most cost-effective treatment for healthy adults with acute uncomplicated appendicitis, and deserves serious consideration as a treatment option for a disease long thought to be definitively surgical. Further studies are necessary to better characterize the health states associated with the various treatment outcomes.

 

36.03 A Cost-utility Assessment of Mesh Selection in Clean and Clean-Contaminated Ventral Hernia Repair (VHR)

J. P. Fischer1, M. Basta1, J. D. Wink1, N. Krishnan2, S. J. Kovach1  1University Of Pennsylvania, Division Of Plastic Surgery, Philadelphia, PA, USA 2Georgetown University Medical Center, Plastic Surgery, Washington, DC, USA

PURPOSE

Ventral hernia is a common, challenging, and costly problem in the United States. Mesh reinforcement can reduce recurrence, but mesh selection is poorly understood, particularly in higher-risk wounds. Acellular dermal matrices (ADM) provide a tool to perform single-stage ventral hernia repairs (VHR) in challenging wounds, but can be associated with higher complication rates, cost, and poorer longevity. The aim of this study is to perform a cost-utility analysis of ADM and synthetic mesh in clean and clean-contaminated (CC) VHR.

METHODS

A systematic review was performed identifying articles containing comparative outcomes for synthetic mesh and ADM repairs. A web-based visual analog scale survey was administered to 300 nationally representative community members to determine quality-adjusted life-years (QALYs) for several health states related to hernia repair (GfK Research). A decision tree was created for the reference cases (VHR with ADM or synthetic mesh) and up to six additional post-operative scenarios. Inputs included cost (DRG, CPT, and retail costs for mesh), quality of life, and health-outcome probability estimates. The cost-effectiveness threshold was set at $50,000 per year of life gained (incremental cost-utility ratio (ICUR)).


RESULTS

There was a 16% increase in the risk of a complication after VHR when using biologic mesh compared to synthetic mesh in CC fields; this increase was 30% in clean fields. In CC fields, biologic mesh increased the expected cost of VHR by $8,022.61 relative to synthetic mesh, with a loss in clinical efficacy of 0.47 QALYs, yielding an ICUR of -$17,000/QALY. In clean fields, biologic mesh carried an expected cost increase of $11,694.02 with a clinical loss of 0.51 QALYs, yielding an ICUR of -$23,000/QALY. Sensitivity analysis revealed that the recurrence rate of biologic mesh would need to fall below 5%, or the recurrence rate of synthetic mesh to exceed 23%, for biologic mesh to be cost-effective in CC fields. In clean cases, the recurrence rate of synthetic mesh would need to exceed 21% for biologic mesh to be cost-effective.

CONCLUSION

This cost-effectiveness analysis of mesh selection indicates that biologic meshes are not cost-effective relative to synthetic mesh in clean or CC defects. Specifically, from a societal perspective, synthetic mesh is both cheaper and more clinically effective than biologic mesh. Given the high prevalence of hernia and its associated cost to society, these data are critically important for improving cost-effective repair techniques, providing value-based care, and conserving healthcare resources in an ever-changing healthcare environment.

36.04 National Analysis of Cost and Resource Utilization of Expanded Criteria Donor Kidneys

C. C. Stahl1, K. Wima1, D. J. Hanseman1, R. S. Hoehn1, E. F. Midura1, I. M. Paquette1, S. A. Shah1, D. E. Abbott1  1University Of Cincinnati, Cincinnati, OH, USA

Introduction:  Despite efforts to increase the deceased donor pool by increased utilization of expanded criteria donor (ECD) kidneys, concerns have been raised about the financial impact and resource utilization of these organs.

Methods:  The Scientific Registry of Transplant Recipients database was linked to the University HealthSystem Consortium Database to identify adult deceased donor kidney transplant recipients from 2009-2012.  Patients were divided into those receiving standard criteria donor (SCD) and ECD kidneys.  Length of stay, 30-day readmission rates, discharge disposition, and delayed graft function (DGF) were used as indicators of resource utilization.  Cost was defined as reimbursement based on Medicare cost:charge ratios, and included the costs of readmission when applicable.

Results: Of the 19,529 patients in the final dataset (47.6% of the total SRTR deceased donor cohort), 3,495 were ECD recipients (17.9%). ECD kidneys were more likely to be transplanted into older (median age 62 vs. 52 years), male (63.7% vs. 59.3%), and diabetic recipients (47.1% vs. 31.7%); all p<0.001. On multivariable analysis, ECD kidneys were associated with increased 30-day readmission (OR: 1.35, CI: 1.21-1.50) and DGF (OR: 1.33, CI: 1.19-1.50), but length of stay (RR: 1.03, CI: 0.97-1.09) and discharge disposition (discharge to home, OR: 1.03, CI: 0.78-1.37) were similar between cohorts. There was no difference in total cost (transplant hospitalization plus readmission within 30 days) between ECDs and SCDs (RR: 0.97, CI: 0.93-1.00, p=0.07).

Conclusion: These data suggest that use of ECDs does not negatively impact short-term resource utilization and that ECDs can be more broadly utilized without financial consequences.

 

36.05 Abandoning Daily Routine Chest X-rays in a Surgical Intensive Care Unit: A Strategy to Reduce Costs

S. A. Hennessy1, T. Hranjec2, K. A. Boateng1, M. L. Bowles1, S. L. Child1, M. P. Robertson1, R. G. Sawyer1  1University Of Virginia, Department Of Surgery, Charlottesville, VA, USA 2University Of Texas Southwestern Medical Center, Department Of Surgery, Dallas, TX, USA

Introduction:   Chest x-ray (CXR) remains the most commonly used imaging modality in the Surgical Intensive Care Unit (SICU), especially in mechanically ventilated patients.   The practice of daily, routine CXRs is associated with morbidity for the patient and significantly increased costs.  We hypothesized that elimination of routine daily CXRs in the SICU and integration of clinical on-demand CXRs would decrease cost without any changes in morbidity or mortality.

Methods:   A prospective comparative quality improvement project was performed over a 6-month period at a single institution. From November 2013 through January 2014, critically ill patients underwent daily routine CXRs (group 1). From February through April 2014, daily routine CXRs were eliminated (group 2); ICU patients received a CXR only under an on-demand strategy, which advised imaging for significant clinical changes or post-procedure. Patients before and after adoption of the on-demand CXR strategy were compared by univariate analysis, using parametric and non-parametric tests where appropriate. Multivariate logistic regression was performed to identify independent predictors of mortality.

Results:  In total, 495 SICU admissions were evaluated: 256 (51.7%) in group 1 and 239 (48.3%) in group 2. There was a significant difference in the number of CXRs, with 4.2 ± 0.7 per admission in the daily CXR group versus 1.2 ± 0.1 in the on-demand group (p<0.0001). The mean cost per admission was $394.8 ± 47.1 in the daily CXR group versus $129.9 ± 12.5 in the on-demand group (p<0.0001), an estimated cost savings of $60,000 over the 3-month period for group 2 compared to group 1. Decreased ICU length of stay (LOS), hospital LOS, and duration of mechanical ventilation (MV) were seen in group 2, while mortality and re-intubation rates were equivalent despite decreased imaging (Table 1). After adjusting for age, gender, re-intubation rate, duration of MV, and APACHE III score, no difference in mortality was seen between the two groups (OR 2.2, 95% CI 0.7-6.4, p=0.15). To further adjust for severity of illness, patients with APACHE III scores > 30 were analyzed separately. Mortality, re-intubation rate, ICU LOS, and hospital LOS were similar between the groups, while duration of MV was still decreased (Table 1). In high APACHE III score patients there was also a reduction in CXRs per admission, from 4.5 ± 0.8 to 1.4 ± 0.2, with a cost savings of $316.6 per ICU admission.
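The reported savings follow directly from the per-admission cost difference; a back-of-the-envelope check (the abstract's ~$60,000 estimate presumably reflects its own internal accounting):

```python
# Per-admission CXR costs reported above
daily_cost = 394.8       # daily routine CXR group, mean $/admission
on_demand_cost = 129.9   # on-demand CXR group, mean $/admission
on_demand_admissions = 239

per_admission_savings = daily_cost - on_demand_cost
total_savings = per_admission_savings * on_demand_admissions
print(round(per_admission_savings, 1), round(total_savings))  # 264.9 63311
```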

Conclusion:  Use of a clinical on-demand CXR strategy led to large cost savings without an associated increase in mechanical ventilation or mortality. This is a safe and effective quality initiative that will reduce costs without increasing adverse outcomes.

 

36.06 Factors Associated with Secondary Overtriage in a Statewide Rural Trauma System

J. Con1, D. M. Long1, G. Schaefer1, J. C. Knight1, K. J. Fawad1, A. Wilson1  1West Virginia University, Department Of Surgery / Division Of Trauma, Emergency Surgery And Surgical Critical Care, Morgantown, WV, USA

Introduction:
Rural hospitals have variable degrees of involvement within the nationwide trauma system because of differences in infrastructure, human resources, and operational goals. "Secondary overtriage" describes seemingly unnecessary transfers to another hospital, shortly after which the trauma patient is discharged home without requiring an operation. Analysis of these occurrences is useful for gauging the efficiency of the trauma system as a whole. Few have addressed this phenomenon, and to our knowledge, we are the first to study it in the setting of a rural state's trauma system.

Methods:
Data were extracted from a statewide trauma registry (2003-2013) for patients who 1) were discharged home within 48 h of arrival and 2) did not undergo a surgical procedure. We then distinguished those who arrived as transfers prior to being discharged (secondary overtriage) from those who arrived from the scene. Factors associated with transfer were analyzed using logistic regression. Injuries were classified based on the need for a specific consultant. Time of arrival to the ED was analyzed using 8-hour blocks, with 7AM-3PM as the reference.

Results:
A total of 19,319 patients fit our inclusion criteria, of whom 1,897 (9.8%) arrived as transfers. The mean ISS was 3.8 ± 3 for non-transfers and 6.6 ± 5 for transfers (p<0.0001). Descriptive analysis showed various other differences between transfers and non-transfers, owing partly to our large sample size. We therefore examined the variables of greatest clinical significance using logistic regression, controlling for age, ISS, type of injury, blood products given, time of arrival to the initial ER, and whether a CT scan was obtained initially. Factors associated with being transferred were age >65, ISS >15, transfusion of PRBCs, graveyard-shift arrival, and neurosurgical, spine, and facial injuries. Orthopedic injuries were not associated with transfer. Patients who had a CT scan at the initial facility were less likely to be transferred.

Conclusion:
Although transferred patients were more severely injured, injury severity was not the only factor driving the decision to transfer. Other factors were related to the rural hospital's limited resources, including the availability of surgical specialists, blood products, and overall coverage during the graveyard shift. More liberal use of the CT scanner at the initial facility may prevent unnecessary transfers.
 

36.07 Comparing Local Flaps When Treating the Infected Vascular Groin Graft Wound: A Cost-Utility Analysis

A. Chatterjee1, T. Kosowski2, B. Pyfer2, S. Maddali3, C. Fisher1, J. Tchou1  1University Of Pennsylvania,Surgery,Philadelphia, PA, USA 2Dartmouth Medical School,Surgery,Lebanon, NH, USA 3Maine Medical Center,Portland, MAINE, USA

Introduction:

A variety of options exist for treating the infected vascular groin graft.  The vascular and plastic surgery literature reports the sartorius and rectus femoris flaps as reasonable coverage options.  Both flap options incur cost and vary in success.  Given this, our goal was to perform a cost-utility analysis of the sartorius flap versus the rectus femoris flap in the treatment of an infected vascular groin graft.

Methods:

Cost-utility methodology involved a literature review compiling outcomes for specific flap interventions, obtaining utility scores for complications to estimate quality-adjusted life years (QALYs), accruing costs using DRG and CPT codes for each intervention, and developing a decision tree to portray the more cost-effective strategy. Complications were divided into major and minor categories, with major complications including graft loss with axillary-femoral bypass, amputation, and death. Minor complications assumed graft salvage after local debridement for partial flap necrosis, seromas, and hematomas.  The upper limit for willingness to pay was set at $50,000.  We also performed sensitivity analysis to check the robustness of our data. Szilagyi III and Samson III and IV grades of infected groin grafts were included in our study.

Results:

Thirty-two studies were used, pooling 296 patients (234 sartorius flaps, 62 rectus flaps). Decision tree analysis showed that the rectus femoris flap was the more cost-effective option (Figure).  It was the dominant treatment option: it was more clinically effective by an additional 0.30 QALYs, while the sartorius flap option cost an additional $2,241.88. A substantial contribution to these results came from the sartorius flap's 13.68% major complication rate versus an 8.6% major complication rate for the rectus femoris flap. One-way sensitivity analysis showed that the sartorius flap became a cost-effective option if its major complication rate was less than or equal to 8.89%.
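The dominance logic behind this result can be illustrated with a minimal sketch using only the point estimates reported above; the full decision tree with branch probabilities is not reproduced, and the function names are illustrative:

```python
WTP = 50_000  # willingness-to-pay upper limit, $ per QALY, as in the study

def compare(cost_a, qaly_a, cost_b, qaly_b):
    """Return 'A dominates' if strategy A is no more costly AND at least as
    effective as B (and better on at least one dimension); otherwise report
    the incremental cost-effectiveness ratio (ICER) of A versus B."""
    d_cost = cost_a - cost_b
    d_qaly = qaly_a - qaly_b
    if d_cost <= 0 and d_qaly >= 0 and (d_cost < 0 or d_qaly > 0):
        return "A dominates"
    if d_qaly == 0:
        return "A more costly with no QALY gain"
    icer = d_cost / d_qaly
    return f"ICER = ${icer:,.0f}/QALY ({'below' if icer <= WTP else 'above'} WTP)"

# Rectus femoris (A) vs sartorius (B): A gains 0.30 QALYs while B costs
# an additional $2,241.88, so A is both cheaper and more effective.
print(compare(0.00, 0.30, 2241.88, 0.00))  # A dominates
```

A dominant strategy needs no ICER against the willingness-to-pay threshold; the threshold only matters when one option is both costlier and more effective.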

Conclusion:

The rectus femoris flap in the treatment of the infected vascular groin graft is a cost-effective option compared to the sartorius flap.

 

36.08 One-Year Postoperative Resource Utilization in Sarcopenic Patients

P. S. Kirk1, J. F. Friedman1, D. C. Cron1, M. N. Terjimanian1, L. D. Canvasser1, A. M. Hammoud1, J. Claflin1, M. B. Alameddine1, E. D. Davis1, N. Werner1, S. C. Wang1, D. A. Campbell1, M. J. Englesbe1  1University Of Michigan Health System,Department Of Surgery,Ann Arbor, MI, USA

Introduction:  It is well established that sarcopenic patients are at higher risk of postoperative complications and short-term healthcare utilization. Less well understood is how these patients fare over the long term after surviving the immediate postoperative period. We explored costs over the postoperative year among sarcopenic patients.

Methods:  We identified 1,298 patients in the Michigan Surgical Quality Collaborative (MSQC) database who underwent inpatient elective surgery at the University of Michigan Health System from 2006 to 2011. Sarcopenia, defined by gender-stratified tertile of lean psoas area (LPA), was determined from preoperative CT scans using validated analytic morphomics. Data were analyzed to assess sarcopenia’s relationship to costs, readmissions, discharge location, surgical intensive care unit (SICU) admissions, hospital length of stay (LOS), and mortality. Multivariate models adjusted for patient demographics and surgical risk factors.
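The tertile-based sarcopenia definition above can be sketched as follows. The LPA values are illustrative; the actual gender-stratified cutoffs derived from analytic morphomics are not reported in the abstract:

```python
def lpa_tertile_cutoffs(lpa_values):
    """Return the two cutoffs splitting lean psoas area (LPA) values into
    tertiles; the lowest tertile defines sarcopenia in this scheme.
    (Schematic: the study derives cutoffs per sex from preoperative CTs.)"""
    ranked = sorted(lpa_values)
    n = len(ranked)
    return ranked[n // 3], ranked[2 * n // 3]

def is_sarcopenic(lpa, cutoffs):
    low, _ = cutoffs
    return lpa < low

# Gender stratification: compute cutoffs separately within each sex.
male_lpa = [18.2, 22.5, 25.1, 27.8, 30.4, 33.0]  # illustrative values, cm^2
cuts = lpa_tertile_cutoffs(male_lpa)
print(is_sarcopenic(19.0, cuts))  # True: below the lowest-tertile cutoff
```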

Results: Sarcopenia was independently associated with increased adjusted costs at 30, 90, and 365 days (p=0.001, p<0.001, and p=0.021, respectively), with a trend at 180 days (p=0.091) (Fig. 1). The difference in adjusted postsurgical costs between sarcopenic and non-sarcopenic patients increased from $5,541 at 30 days to $9,938 at one year. Sarcopenic patients were more likely to be discharged somewhere other than home (OR=4.44, CI=2.30-8.59, p<0.001) and more likely to die in the postoperative year (OR=3.24, CI=1.72-6.11, p<0.001). Sarcopenia was not an independent predictor of increased readmission rates in the postsurgical year (p=0.69).

Conclusion: Sarcopenia is a robust predictor of healthcare utilization in the first year after surgery. These patients accumulate costs at a faster rate than their non-sarcopenic counterparts. It may be appropriate to allocate additional resources to sarcopenic patients in the perioperative setting to reduce the incidence of negative postoperative outcomes.

 

33.10 Risk Stratification of Sentinel Lymph Node Positivity in Intermediate Thickness Melanoma

M. G. Peters1, E. K. Bartlett1, R. E. Roses1, B. J. Czerniecki1, D. L. Fraker1, R. R. Kelz1, G. C. Karakousis1  1Hospital Of The University Of Pennsylvania,General Surgery,Philadelphia, PA, USA

Introduction:  Patients with intermediate thickness cutaneous melanoma are routinely recommended for sentinel lymph node biopsy (SLNB) as standard of practice.  Conversely, those with thin melanoma are selectively offered the procedure given the low risk of SLN positivity in this group overall.  We sought to identify a low-risk subset of patients with intermediate thickness melanoma who, like many patients with thin melanoma, may be spared the additional LN procedure.

Methods: Demographics and histopathological characteristics of the primary tumor were reviewed for 952 patients undergoing SLNB for primary intermediate thickness cutaneous melanoma (1.01-4.00mm) treated at our institution from 1995-2011. Univariate analysis using chi-square and Wilcoxon rank-sum tests, as appropriate, was used to determine associations with SLN positivity. Factors approaching statistical significance (p<0.20) were included in a forward stepwise multivariate logistic regression.  All significant factors (p<0.05) were then included in a risk scoring system.

Results:  The rate of positive SLNB in the study cohort was 16.5% (n=157).   In univariate analyses, factors significantly associated with SLN positivity were increasing thickness (p<.001), absence of tumor-infiltrating lymphocytes (p=.043), ulceration (p=.014), lymphovascular invasion (p<.001), and the presence of microsatellites (p<.001).   Age <60 (p=.18) and the presence of mitoses (p=.071) showed a trend toward association with SLN positivity.  When all of these factors were included in a multivariate model, five were significantly associated with SLN positivity: younger age (<60 years, OR=1.52, p=.032), absence of tumor-infiltrating lymphocytes (OR=1.64, p=.02), thicker primary tumors (OR=2.6 for 1.51-2mm, OR=3.5 for 2.01-4mm, p<.001), the presence of microsatellites (OR=2.2, p=.015), and lymphovascular invasion (OR=2.1, p=.014).  These factors were used to develop a risk stratification scoring system (see Table).  The rate of positive SLN ranged from 4.6% (no factors present, score=0) to 44.0% (all factors present, score=5).
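An additive risk score of this kind can be sketched minimally, assuming one point per factor and a single thickness cutoff; the published system's exact point assignments, particularly across the two thickness strata, may differ:

```python
def sln_risk_score(age_lt_60, no_tils, thickness_mm, microsatellites, lvi):
    """Additive risk score for SLN positivity: one point per factor present
    (score range 0-5). A single thickness cutoff at 1.5 mm is assumed here;
    the published model distinguishes 1.51-2 mm from 2.01-4 mm by OR."""
    return (int(age_lt_60)
            + int(no_tils)
            + int(thickness_mm > 1.5)
            + int(microsatellites)
            + int(lvi))

# SLN-positivity rates reported at the score extremes in the abstract
rate_by_score = {0: 0.046, 5: 0.440}

s = sln_risk_score(True, True, 2.3, True, True)
print(s, rate_by_score[s])  # 5 0.44
```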

Conclusion: Patients with intermediate thickness melanoma can be risk stratified for SLN positivity using clinical and pathologic factors. While SLNB appears justified for the majority of patients with intermediate thickness melanomas, for an appreciable minority (nearly 10%) the risk of LN positivity is more similar to that of low-risk T1 (<1.0mm) melanomas.  For this subgroup of patients, SLNB can be offered selectively.

34.01 Urinary Tract Infection After Surgery for Colorectal Malignancy: Risk Factors and Complications

A. C. Sheka1, S. Tevis1, G. Kennedy1  1University Of Wisconsin,Department Of Surgery,Madison, WI, USA

Introduction: Over 4% of patients undergoing colorectal surgery develop post-operative urinary tract infection (UTI), twice the rate among patients undergoing other gastrointestinal surgery and over three times the rate among those undergoing non-gastrointestinal surgery. Surgical patients who suffer post-operative UTI have increased mortality rates, lengths of stay, and costs of care. The aim of this study was to analyze the risk factors and post-operative complications associated with UTI after surgery for colorectal malignancy.

 

Methods: The ACS-NSQIP database was queried for patients who underwent surgery for colorectal malignancy from 2005-2012. From these records, patients were identified and included in the study using International Classification of Diseases (ICD-9) and current procedural terminology (CPT) codes. Chi square analysis and Mann Whitney U test were used to identify pre-operative and intra-operative risk factors for post-operative UTI. Pre-operative and intra-operative variables found to have a p<0.1 in univariate analysis were included in a logistic regression model that was used to identify independent predictors of post-operative UTI. Chi square and Mann Whitney tests were also used to evaluate the association between UTI and post-operative outcomes.
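The two-stage variable selection described above (a univariate screen at p<0.1, then entry into the logistic model) can be sketched as follows; the variable names and p-values are illustrative, not drawn from the dataset:

```python
def screen_predictors(univariate_p, alpha=0.1):
    """Keep variables whose univariate p-value falls below the screening
    threshold (p < 0.1) for entry into the logistic regression model."""
    return [var for var, p in univariate_p.items() if p < alpha]

# Illustrative univariate p-values (the abstract reports ORs, not these)
univ = {"female_sex": 0.001, "open_procedure": 0.02,
        "smoking": 0.45, "steroid_use": 0.08}
print(screen_predictors(univ))  # ['female_sex', 'open_procedure', 'steroid_use']
```

Variables passing the screen would then be fit jointly, so each reported odds ratio is adjusted for the others.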

 

Results: A total of 47,781 patients were included in this study. The overall rate of post-operative UTI was 3.7%. Independent predictors of UTI included female sex (OR 1.66, 95% CI 1.47-1.88), open procedure (OR 1.46, 95% CI 1.28-1.67), older age (p<0.001), non-independent functional status (OR 1.51, 95% CI 1.22-1.88), steroid use for a chronic condition (OR 1.54, CI 1.13-2.10), neoadjuvant radiotherapy (OR 1.31, 95% CI 1.09-1.59), higher anesthesia class (p<0.001), and longer total operation time (p<0.001). Patients who suffered post-operative UTI had an average hospital stay five days longer than those who did not (12 vs. 7 days, p<0.001). They also had significantly higher reoperation rates (11.9% vs. 5.1%, p<0.001). Of patients with post-operative UTI, 3.3% died within 30 days of surgery, compared to 1.7% of those without UTI (p<0.001). Post-operative UTI also correlated with other complications, including sepsis, surgical site infections, and pulmonary embolism (p<0.001 for all).

 

Conclusions: Post-operative UTI in patients undergoing surgery for colorectal malignancy correlates with longer hospital stay, higher reoperation rate, and increased 30-day mortality compared to patients without UTI. It also appears that patients who contract post-operative UTI may be at increased risk of developing multiple complications. This analysis demonstrates significant benefit to laparoscopic surgery for colorectal malignancy when controlling for other factors. In addition, it identifies several risk factors that may be targeted in prospective interventions aiming to reduce complications, specifically post-operative UTI, in this population. 

34.02 Indication and Risk for Pancreaticoduodenectomy in Patients Over 80: An ACS NSQIP Study

J. R. Bergquist1,2, C. R. Shubert1,2, D. S. Ubl2, C. A. Thiels1,2, M. L. Kendrick1, M. J. Truty1, E. B. Habermann2  1Mayo Clinic,General Surgery,Rochester, MN, USA 2Mayo Clinic,Center For The Science Of Health Care Delivery,Rochester, MN, USA

Introduction: Expected mortality after elective pancreaticoduodenectomy (PD) in contemporary series is less than 5% even in older patients (>80). The perioperative risk in these older patients has not been reported with consideration of the specific indication for PD. We hypothesized that 30-day mortality, major morbidity, and prolonged length of stay (PLOS) following PD varies by diagnosis risk group in patients over 80, and that those elderly patients with high risk diagnoses may have higher than expected peri-operative risk.

Methods: ACS-NSQIP was reviewed for all PDs from 2005-2012. ICD-9 diagnoses (indication for PD) were categorized into high and low diagnosis risk groups based on incidence of 30-day major morbidity. Univariate and multivariate analyses compared PD outcomes (1) by diagnosis risk among patients over 80 and (2) by age group (80+ vs 18-79 and vs 70-79) among patients in the same diagnosis risk group.

Results: Of 7192 total patients, those with pancreas cancer (N=4200) and chronic pancreatitis (N=608) experienced similar major morbidity (p=0.64) and were grouped as “low risk”. Those with bile duct and ampullary neoplasm (N=1503), duodenal neoplasm (N=686), and neuroendocrine tumor (N=195) experienced similar major morbidity (p=0.69) and were grouped as “high risk”. The 30-day mortality risk for patients aged 80+ (N=749) undergoing PD with a high-risk diagnosis was 7.0% vs 4.1% for those with a low-risk diagnosis (p<0.001). Among patients with high-risk diagnoses, those aged 80+ had greater mortality risk (7.0%) than those 70-79 (3.9%, p=0.037) or all patients aged 18-79 (2.9%, p<0.001). Risk of major morbidity and prolonged length of stay was also increased in patients 80 and older (see table). On multivariate analysis controlling for diagnosis risk, patients over 80 had greater odds of 30-day mortality (OR 2.155, 95% CI 1.242-3.741, p=0.0063), any major complication (OR 1.658, 95% CI 1.312-2.095, p<0.001), and PLOS (OR 1.448, 95% CI 1.140-1.838, p=0.0024) when compared with patients 18-79.

Conclusion: In patients over 80 undergoing PD, high-risk diagnoses are independently associated with increased 30-day mortality compared to low-risk diagnoses and younger age groups. The risk of 30-day mortality following PD in patients 80+ with high-risk diagnoses exceeds the expected 5% threshold, whereas for those with low-risk diagnoses it does not. For 80+ patients with duodenal, neuroendocrine, or bile duct and ampullary neoplasm, pre-operative counseling and shared decision making should reflect the increased 30-day mortality risk of pancreaticoduodenectomy.

34.03 Observation of Minimally Invasive Surgery for Gastric Submucosal Tumor

Y. Shoji1, H. Takeuchi1, H. Kawakubo1, O. Goto2, R. Nakamura2, T. Takahashi2, N. Wada1, Y. Saikawa1, T. Omori1, N. Yahagi2, Y. Kitagawa1  1Keio University School Of Medicine,Department Of Surgery,Tokyo, TOKYO, Japan 2Keio University School Of Medicine,Tumor Center,Tokyo, TOKYO, Japan

Introduction:

Because gastric submucosal tumors, including gastrointestinal stromal tumors, can be treated with local resection without lymph-node dissection, laparoscopic local resection (LAP) is widely used to manage relatively small tumors less than 5 cm in diameter. To make the operation less invasive, new surgical strategies such as single-incision laparoscopic surgery (SILS), laparoscopy-endoscopy cooperative surgery (LECS), and non-exposed endoscopic wall-inversion surgery (NEWS) have been developed.

Methods:

In this study, we performed a comparative review of patient characteristics, surgical outcomes, and postoperative courses for each procedure.

Results:

From January 2004 to June 2014, 130 patients with gastric submucosal tumor underwent surgical treatment in the Department of Surgery, Keio University School of Medicine. Eighty-two patients received one of the minimally invasive procedures mentioned above: LAP 53, SILS 11, LECS 11, and NEWS 7. The other procedures were as follows: open surgery 17, hand-assisted laparoscopic surgery 6, laparoscopy-assisted proximal gastrectomy 6, laparoscopy-assisted distal gastrectomy 3, laparoscopy-assisted pylorus-preserving gastrectomy 2, endoscopic submucosal dissection 3, and other laparoscopic surgery 7. Four patients in the LAP group were excluded because of combined resection of other organs.

There were no significant differences among the groups in patient characteristics such as age, sex, and body mass index, or in the size or growth pattern of the tumor. LAP and SILS were not indicated for tumors of the esophagogastric junction (p<0.001). Mean operative duration in the LAP and SILS groups was significantly shorter than in the LECS and NEWS groups (p<0.05). There were no differences in intraoperative blood loss among the groups. The mean C-reactive protein value on the 1st postoperative day was significantly higher in the LECS group than in the other groups (p<0.05). There was no significant difference in postoperative hospitalization between the groups. There were 4 cases of postoperative complications in total (acute appendicitis, splenic vein thrombosis, stenosis, and toxicodermatitis); each of these patients recovered with conservative measures and without sequelae. All other patients were discharged after an uneventful recovery.

Conclusion:

LAP and SILS were not selected to treat tumors of the esophagogastric junction (p<0.001), in order to prevent postoperative stricture of the cardia given the relatively wide extent of resection these procedures require. On the 1st postoperative day, the CRP value, as an indicator of inflammatory reaction, was significantly higher in the LECS group (p<0.05). This is likely because LECS is the only one of these procedures in which digestive fluid is exposed to the body cavity.

The operative procedure for gastric submucosal tumor must be chosen carefully according to patient characteristics and tumor properties. NEWS, however, appears to be a widely applicable, less invasive technique that merits active adoption.

34.04 Long-term Health-Related Quality of Life After Cancer Surgery: A Prospective Study

M. C. Mason1,2, G. M. Barden1,2, N. Massarweh1,2,3, S. Sansgiry1, A. Walder1, D. L. White1, D. L. Castillo1, A. Naik1, D. H. Berger1,2,3, D. A. Anaya1,2,3  1Michael E. DeBakey Veterans Affairs Medical Center,Houston VA Center For Innovations In Quality, Effectiveness, And Safety (IQUEST),Houston, TX, USA 2Baylor College Of Medicine,Michael E. DeBakey Department Of Surgery,Houston, TX, USA 3Michael E. DeBakey Veterans Affairs Medical Center,Operative Care Line,Houston, TX, USA

Introduction: The Institute of Medicine recently emphasized the importance of patient-reported outcomes following cancer care, and their relevance for the growing geriatric population. There are limited data on the impact of cancer surgery on health-related quality of life (HRQoL) in elderly patients. The goal of our study was to examine trends over time and changes in HRQoL measures following cancer surgery, and to evaluate the effect of age and receipt of adjuvant therapy on these outcomes.

Methods:  A prospective cohort study of patients undergoing elective cancer surgery at a tertiary referral center was performed (2012-2014). Demographic, clinical, cancer, and treatment variables were recorded. Cancer-specific HRQoL was prospectively measured using the EORTC C-30 questionnaire (6 domains) at the preoperative visit and at 1 and 6 months postoperatively. The primary outcome of interest was a clinically significant drop in HRQoL, defined using the validated cutoff of a ≥10-point drop in Global Health Score (GHS) from the preoperative visit to 6 months postoperatively. Patients were categorized by age into Young (<65y) and Elderly (≥65y), and trends over time as well as changes in GHS scores were compared between groups. Univariate and multivariate logistic regression analyses were used to examine the association between age ≥65 and the primary outcome (Model 1), adjusting for receipt of adjuvant therapy (Model 2) and other important confounders (Model 3).
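The primary outcome reduces to a simple comparison against the validated cutoff; the scores below are illustrative, chosen near the cohort's baseline mean:

```python
def significant_drop(ghs_baseline, ghs_6mo, threshold=10):
    """Clinically significant HRQoL decline: a >=10-point drop in the
    EORTC C-30 Global Health Score (GHS) from baseline to 6 months."""
    return (ghs_baseline - ghs_6mo) >= threshold

print(significant_drop(67.2, 55.0))  # True: a 12.2-point drop
print(significant_drop(67.2, 64.2))  # False: within 10 points of baseline
```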

Results: A total of 236 patients were included; 177 (75%) had major surgery, 105 (44.5%) were elderly, and 73 (31%) received adjuvant therapy. For the whole cohort, the baseline mean GHS score (67.2 [±24.6]) dropped at 1 month (61.0 [±25.0]) and returned close to baseline at 6 months (64.2 [±23.4]), with no differences in trends over time between age groups. In all, 74 patients (31.4%) experienced a clinically significant drop in GHS score. Age ≥65 years was not associated with a clinically significant drop in HRQoL on univariate (Model 1: OR 1.62 [95% CI 0.93-2.82], P=0.09) or multivariate analyses (Model 2: OR 1.62 [0.93-2.83], P=0.09; Model 3: OR 1.67 [0.93-2.99], P=0.08).

Conclusions: Cancer patients overall experience a drop in HRQoL shortly after surgery (1-month), with a return close to baseline by 6 months. However, a high proportion of patients do not regain their baseline HRQoL, with almost one-third having a clinically significant drop that persists at 6 months postoperatively. Clinically significant drops in HRQoL were not associated with age ≥65 years, even among patients who received adjuvant therapy. Surgical and multimodality treatment should not be withheld from elderly patients based on concerns regarding long-term HRQoL.

34.05 Efficacy of Post-Mastectomy Radiation Therapy in the Setting of T3 Node-Negative Breast Cancer

L. Elmore1, A. D. Deshpande1, J. A. Margenthaler1  1Washington University,Surgery,St. Louis, MO, USA

Introduction: In the absence of lymph node involvement, tumor size is arguably the most important prognostic factor for women with breast cancer.  Development of an optimal adjuvant treatment regimen for women with locally-advanced node-negative breast cancer is critical due to the risk of locoregional failure.  Radiation therapy has been shown to improve locoregional control in selected populations of women with breast cancer but its efficacy in T3 node-negative breast cancer is controversial.  We investigated patterns of post-mastectomy radiation therapy (PMRT) use and the survival impact of this treatment modality in women with T3 node-negative breast cancer.

Methods: A retrospective cohort study was conducted by identifying women with T3 node-negative breast cancer in the 1988-2009 Surveillance, Epidemiology, and End Results (SEER) database.  Our primary outcome variable was breast cancer-specific mortality.  Several sociodemographic variables and tumor characteristics were obtained to evaluate patterns of adjuvant therapy use.  Survival curves were generated using the Kaplan-Meier method.  Hazard ratios were computed using Cox proportional hazards analysis.  Propensity score analysis was used to evaluate the effect of radiation on overall and breast cancer-specific mortality.
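One common form of propensity score analysis, inverse-probability-of-treatment weighting, can be sketched as follows. The abstract does not specify which variant the authors used, and the covariates and coefficients below are purely illustrative:

```python
import math

def propensity(x, beta):
    """Logistic propensity of receiving PMRT given covariates x:
    p = 1 / (1 + exp(-(b0 + b1*x1 + ...))). Coefficients are illustrative;
    in practice they come from fitting a logistic model of treatment."""
    z = beta[0] + sum(b * xi for b, xi in zip(beta[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

def iptw(treated, p):
    """Inverse-probability-of-treatment weight used to balance measured
    covariates before comparing mortality between PMRT and no-PMRT groups."""
    return 1.0 / p if treated else 1.0 / (1.0 - p)

# e.g. covariates: [younger-age indicator, grade 3 indicator]
p = propensity([1.0, 0.0], [-0.5, 0.8, 0.3])
print(round(p, 3))              # 0.574
print(round(iptw(True, p), 3))  # 1.741
```

Weighting each patient by the inverse of their treatment probability creates a pseudo-population in which PMRT receipt is independent of the measured confounders.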

Results: We identified 2874 patients with T3 node-negative breast cancer.  Within this cohort, 961 (33%) received PMRT and 1913 (67%) did not.  Statistically significant differences were seen in adjuvant radiation therapy use based on patient age, marital status, tumor grade, tumor size, and receptor status (p<0.05 for all).  Younger age at diagnosis, being married, and grade 3 tumor pathology were associated with adjuvant therapy use, while tumor size >9 cm was associated with decreased use of adjuvant radiation therapy.  Analysis of overall mortality demonstrated lower mortality in the PMRT group in unadjusted analysis (cHR 0.718; 95% CI 0.614, 0.840); however, adjusted hazard ratios demonstrated no difference in overall mortality (aHR 0.898; 95% CI 0.765, 1.054).  Unadjusted analysis of breast cancer-specific mortality demonstrated no difference between those who received PMRT and those who did not (cHR 0.834; 95% CI 0.682, 1.021).  After adjusting for potential confounders using propensity score analysis, again no significant difference in breast cancer-specific mortality was observed based on PMRT use (aHR 0.939; 95% CI 0.762, 1.157).

Conclusion: Analysis of the SEER database demonstrated that several patient and tumor characteristics are associated with use of adjuvant radiation therapy.  The results of the current study indicate that receipt of PMRT does not affect breast cancer-specific or overall survival in women with T3 node-negative breast cancer.