39.07 The Surgical Apgar Score in Major Esophageal Surgery

C. F. Janowak2, L. Taylor2, J. Blasberg1, J. Maloney1, R. Macke1  1University Of Wisconsin,Division Of Cardiothoracic Surgery,Madison, WI, USA 2University Of Wisconsin,Department Of Surgery,Madison, WI, USA

Introduction:  Most postoperative assessments and triage decisions are based on subjective evaluation of a patient’s risk factors and overall condition. The Surgical Apgar Score (SAS) is a validated prognostic tool used to predict postoperative morbidity and mortality in a wide variety of surgical patients. The esophagectomy population is a unique subset of surgical patients at high risk for postoperative complications and in need of substantial disposition resources. An objective prognostic metric is an appealing and efficient way to allocate limited care resources to the sickest postoperative patients. Although other, more complex risk calculators have been developed, the SAS is a simple, bedside-usable model that has been validated in a variety of surgical populations. We evaluated the reliability of the SAS in a major esophageal surgery population. 

Methods:  A retrospective review of a prospectively collected and internally validated database of cardiothoracic operations was performed for consecutive esophagectomies from 2009 to 2013. Basic demographics, comorbidities, postoperative complications, and intraoperative variables were collected for all patients. The primary outcomes were mortality and NSQIP-defined in-hospital major complication; secondary outcomes were prolonged length of hospital stay (LOS), defined as greater than 10 days, and postoperative disposition. We used descriptive statistics, receiver operating characteristic (ROC) curves, and Pearson chi-square analysis to assess the efficacy of the SAS in predicting primary and secondary outcomes. Preoperative comorbid conditions were also analyzed for association with postoperative outcomes using odds ratio (OR) analysis. 

Results: A total of 172 consecutive esophageal resections over four years were reviewed. Overall mortality was 5 deaths (2.9%): 4 within 30 days of surgery (1 of these after discharge) and 1 after more than 90 days of hospitalization. The SAS distribution was: SAS 9-10, n=16; SAS 7-8, n=113; SAS 5-6, n=42; and SAS ≤4, n=1. Of these patients, 34.3% had a major complication, 27.3% had a prolonged LOS, and 12.2% were discharged to a care facility other than home. No significant correlation was demonstrated between the SAS and complications, LOS, or discharge disposition, with respective ROC areas of 0.44, 0.43, and 0.44. Of the preoperative comorbid conditions analyzed, only neoadjuvant chemoradiation significantly increased the risk of any outcome, with an OR of 3.59 (95% CI 1.38-9.37, p < 0.01) for discharge to care other than home.

Conclusion: The SAS, a perioperative performance measure, does not appear to reliably predict major postoperative adverse outcomes in a major esophageal surgery population.


39.08 Transfer to Higher-Level Centers Does Not Improve Survival in Older Patients with Spinal Injuries

G. Barmparas2, Z. Cooper1, J. Havens1, R. Askari1, E. Kelly1, A. Salim1  1Brigham And Women’s Hospital,Division Of Acute Care Surgery And Surgical Critical Care-Department Of Surgery,Boston, MA, USA 2Cedars-Sinai Medical Center,Division Of Acute Care Surgery And Surgical Critical Care / Department Of Surgery,Los Angeles, CA, USA

Introduction:   As the number of injured older adults continues to rise dramatically, trauma centers are pressed to identify which older patients benefit from higher-level care. The purpose of the current investigation was to determine whether elderly patients with spinal injuries benefit from transfer to Level I or II centers.

Methods:   We used the National Trauma Data Bank (NTDB) datasets 2007-2011 to identify all patients over 65 years old with any spinal fracture or spinal cord injury from a blunt mechanism. Only centers reporting ≥80% of AIS scores and/or ≥20% of comorbidities, and/or with ≥200 subjects in the NTDB, were included. Patients transferred to Level I and II centers (TR) were compared to those admitted to Level III or other centers (NTR). Patients transferred from Level III or other centers to other acute care facilities were excluded. We used chi-square and t-tests, where appropriate, to compare patient characteristics (demographics, comorbidities, admission vital signs and GCS, injury severity) and hospital factors (teaching status, region, and availability of >10 orthopedic surgeons or neurosurgeons) between groups. We then performed logistic regression to adjust for these differences for patients with any spinal injury, with a subgroup analysis for patients with spinal cord injury. The primary outcome was in-hospital mortality. Significance was set at p < 0.01.

Results: Of 3,313,117 eligible patients, 43,637 (1.3%) met inclusion criteria: 19,588 (44.9%) in the TR group and 24,049 (55.1%) in the NTR group. The majority of patients (95.8%) had a spinal fracture without a spinal cord injury. TR patients were significantly less likely to be ≥90 years old (7.0% vs. 8.1%, p<0.01) and had higher injury severity (AIS head ≥3: 18.9% vs. 15.7%, p<0.01; AIS spine ≥3: 5.9% vs. 4.4%, p<0.01). Compared with NTR patients, TR patients were more likely to have a spinal cord injury at any level (4.7% vs. 3.1%, p<0.01) and to require a spinal surgical procedure within 48 hours of admission (4.8% vs. 2.4%, p<0.01). More TR patients required ICU admission (48.5% vs. 36.0%, p<0.01) and ventilatory support (16.1% vs. 13.3%, p<0.01). Overall mortality was 7.7% (TR 8.6% vs. NTR 7.1%, p<0.01); in the subgroup of patients with a spinal cord injury, mortality was 21.7% (TR 22.3% vs. NTR 21.0%, p<0.01). After multivariate analysis, there was no difference in adjusted mortality for patients with any spinal injury (AOR [95% CI]: 0.98 [0.89, 1.08], p=0.70) or for patients with spinal cord injury (AOR [95% CI]: 0.86 [0.62, 1.20], p=0.38) treated at higher-level centers.

Conclusion: Transfer of elderly patients with spinal injuries to higher-level trauma centers is not associated with improved survival. Further research is required in this area to identify those subgroups of elderly patients who benefit from such transfers.

39.09 Tetanus and Pertussis Vaccination in U.S. Adult Trauma Centers: Who's up to Date?

B. K. Yorkgitis1,2, G. Timoney2, P. Van Den Berg2, A. Goldberg2, A. Pathak2, A. Salim1, J. Rappold2  1Brigham And Women’s Hospital,Trauma, Burn, Surgical Critical Care,Boston, MA, USA 2Temple University,Division Of Trauma,Philadelphia, PA, USA

Introduction:  Trauma centers commonly administer tetanus prophylaxis to patients sustaining wounds. In the U.S., two vaccines are currently available for adult administration: tetanus/diphtheria toxoid (Td) and tetanus/reduced diphtheria and acellular pertussis (Tdap). The importance of Tdap lies in its protection against pertussis in addition to tetanus immunity. 
 Since the 1980s there has been a steady rise in pertussis cases, from a low of 1,010 in 1976 to a high of 48,277 in 2012.1 This epidemic rise led the Centers for Disease Control and Prevention (CDC) Advisory Committee on Immunization Practices (ACIP) to recommend the routine use of Tdap when tetanus prophylaxis is indicated. Vaccination against pertussis is paramount for prevention.

Methods:  An institutional review board-exempt, web-based national survey was emailed to adult trauma center coordinators whose addresses could be located via an internet search. Questions included level designation, number of trauma evaluations annually, zip code, hospital description (university, university-affiliated, community), and which preparation is given to adults <65 years and to those older. The aim of this study was to gather data on which vaccination is currently being given to trauma patients. At the conclusion of the survey, hyperlinks to the CDC ACIP recommendations were provided as an educational tool. 

Results: A total of 718 emails were successfully sent and 439 (61.1%) completed surveys were returned. Level 4/5 centers had the highest compliance rate for patients aged 18-65 (93%), followed by Level 2/3 (86.9%) and lastly Level 1 (56.9%). Among all centers, the use of Tdap was lower in the >65 years group. Level 2/3 trauma centers were the most compliant with this age group (60.6%), followed by Level 4/5 (57.4%) and lastly Level 1 (40.3%). 

Conclusion: With the rise in pertussis cases, vaccination remains crucial to prevention. The CDC recommendations for Tdap have existed for adults <65 years since 2005 and for those over 65 since 2012.2 Yet many adult trauma centers do not adhere to the current ACIP guidelines; in particular, Level 1 trauma centers have the lowest rate of compliance. Through this survey, centers were educated on current recommendations. Increased Tdap vaccination of trauma patients should improve protection against this virulent pathogen and decrease the incidence of disease.

1. Centers for Disease Control and Prevention. (2014). Pertussis (Whooping Cough). Retrieved from http://www.cdc.gov/pertussis/surv-reporting.html

2. Updated Recommendation for Use of Tetanus Toxoid, Reduced Diphtheria Toxoid and Acellular Pertussis (Tdap) Vaccine in Adults 65 Years and Older – Advisory Committee on Immunization Practices (ACIP), 2012. MMWR. 2012;61(25):468-70.

39.10 Comorbidity-Polypharmacy Score Predicts Readmission in Older Trauma Patients

B. C. Housley1, N. J. Kelly1, F. J. Baky1, S. P. Stawicki2, D. C. Evans1, C. Jones1  1The Ohio State University,College Of Medicine,Columbus, OH, USA 2St. Luke’s University Health Network,Department Of Research & Innovation,Bethlehem, PA, USA

Introduction:  Hospital readmissions correlate with worse outcomes and may soon lead to decreased reimbursement. The comorbidity-polypharmacy score (CPS) is the sum of the number of pre-injury medications and the number of comorbidities, and may estimate patient frailty more effectively than age does. Though CPS has previously been correlated with discharge destination and clinical outcomes, no information is currently available on the association between CPS and hospital readmission. This study evaluates that association and compares CPS with age and injury severity as predictors of readmission.

Methods:  We retrospectively evaluated all injured patients 45 years or older seen at our American College of Surgeons-verified Level 1 trauma center over a one-year period. Inmates, patients who died prior to discharge, and patients who were discharged to hospice care were excluded. Institutional trauma registry data and electronic medical records were reviewed to obtain information on demographics, injuries, pre-injury comorbidities and medications, ICU and hospital lengths of stay, and occurrences of readmission to our facility within 30 days of discharge. Kruskal-Wallis testing was used to evaluate differences between readmitted patients and those who were not, with logistic regression used to evaluate the contribution of individual risk factors for readmission.

Results: 960 patients were identified; 79 were excluded per the above criteria, and 2 more were excluded due to unobtainable medical records. 879 patients were included in the final analysis; ages ranged from 45 to 103 (median 58) years, injury severity scores (ISS) from 0 to 50 (median 5), and CPS from 0 to 39 (median 7). 76 patients (8.6%) were readmitted to our facility within 30 days of discharge. The readmitted cohort had higher CPS (median 9.5, p=0.031) and ISS (median 9, p=0.045), but no difference in age (median 59.5, p=0.646). Logistic regression demonstrated an independent association between higher CPS and increased risk of readmission, with each CPS point increasing the odds of readmission by 3.9% (p=0.01).

Conclusion: CPS is simple to calculate and, despite the assumed limited accuracy of this information early in a trauma patient’s hospitalization, appears to correlate well with 30-day readmission. Indeed, frailty as defined by CPS was a significantly stronger predictor of readmission than age. Early recognition of an elevated CPS may help optimize discharge planning and potentially decrease readmission rates in older trauma patients; larger multicenter evaluations of CPS as a readily available indicator of frailty in older patients are warranted.

36.09 The Cost of Secondary Trauma Overtriage in a Level I Trauma Center

D. A. Mateo De Acosta1, R. Asfour1, M. Gutierrez1, S. Carrie2, J. Marshall2  1University Of Illinois College Of Medicine At Peoria (UICOMP),Department Of Surgery,Peoria, IL, USA 2University Of Illinois College Of Medicine At Peoria,Division Of Trauma / Department Of Surgery,Chicago, IL, USA

Introduction:
The goal of regional trauma systems is to deliver an adequate level of care to injured patients in a timely and cost-effective manner. Inter-facility transfer of injured patients is the foundation of United States trauma systems. Patients are commonly secondarily overtriaged, delaying their definitive care and posing an unnecessary burden on the receiving institution. Reported rates of secondary overtriage range from 6.9% to 38%. The financial burden that secondary overtriage places on receiving institutions has rarely been studied. 

Methods:
We reviewed the EMR and trauma registry data of 1,200 patients transferred to our institution for traumatic injuries during a three-year period. Patients were divided into two groups: Group 1 included patients who were secondarily overtriaged, and Group 2 (control) those appropriately triaged. Secondary overtriage was defined as transfer from another hospital emergency department to our trauma service of a patient with an injury severity score (ISS) < 10 who did not require an operation and was discharged home within 48 hours of admission.

Results:
We identified 399 adult patients secondarily overtriaged to our institution, representing 31.9% of patients transferred during the study period. Common indications for transfer were torso, neurological, facial, or orthopedic trauma. The main reasons for transfer among those secondarily overtriaged were traumatic brain injury (37.4%, p<0.05) and orthopedic trauma (21.8%, p<0.05), driven by the unavailability of specialist physicians at the referring institution. Average hospital cost and reimbursement per overtriaged patient were $19,301 and $7,356.83, respectively. Cost itemization was as follows: trauma activation, $5,016.49; observation boarding, $1,741.30; radiology, $4,339.09; laboratory, $1,836.68; pharmacy, $1,256.10; and supplies, $2,431.60. Transport was by ground for 85.95% of patients and by helicopter for 14.05%. The average cost of helicopter transport was $19,535.78.

Conclusion:
Secondary trauma overtriage places a significant burden on trauma centers, with an average cost of approximately $19,301 per patient. The major reasons for transfer to our institution were traumatic brain injury and orthopedic trauma, mainly due to the unavailability of subspecialty services at the transferring institution. Education of rural trauma triage staff must continue to intensify in order to minimize secondary overtriage, expediting patient care and optimizing resource utilization.

36.10 The True Cost of Postoperative Complications For Colectomy

C. K. Zogg1, E. B. Schneider1, J. Canner1, K. S. Yemul1, S. Selvarajah1, N. Nagarajan1, F. Gani1, A. H. Haider1  1Johns Hopkins University School Of Medicine,Center For Surgical Trials And Outcomes Research, Department Of Surgery,Baltimore, MD, USA

Introduction:  In 2013, the United States spent $3.8 trillion on healthcare – a number projected to grow by 6.2% per year. Postoperative complications influence the cost of procedures, and guidelines define them as a measure of the quality of surgical care. However, their impact on procedure costs remains poorly characterized. This study explored the increased costs associated with postoperative complications for colectomy using nationally representative data.

Methods:  Data from the 2007-2011 HCUP Nationwide Inpatient Sample were queried for patients ≥18 years of age undergoing elective procedures with a primary procedure code for laparoscopic or open colectomy. Patients with a primary diagnosis of colon cancer, diverticulosis, diverticulitis, regional enteritis, ulcerative colitis, or benign neoplasm of the colon were included. Patients were assessed for isolated complications, including mechanical wound injury and infection, as well as procedural, systemic, urinary, pulmonary, gastrointestinal, and cardiovascular complications. HCUP-defined weights were used to calculate nationally representative estimates for each complication, stratified by patient-demographic and hospital-level factors. Diagnosis, procedure, and Charlson Comorbidity Index were also examined. Population-weighted crude and risk-adjusted generalized linear models (GLM) were used to assess for differences in non-routine discharge (binomial), in-hospital mortality (binomial), length of stay (gamma), and total cost (gamma) (Table).

Results: We identified 115,269 patients, of whom 20,728 (17.9%) experienced a postoperative complication. The most frequent complications were gastrointestinal (9.8%) and infectious (3.2%). Patients undergoing laparoscopic procedures experienced fewer complications, while patients with colon cancer (19.7%) and ulcerative colitis (18.7%) were at highest risk. Adjusted GLM (Table) revealed that patients with complications were >3 times more likely to be non-routinely discharged and >5 times more likely to die. They had 77-82% longer lengths of stay and incurred 70-76% higher total costs. Median costs stratified by primary diagnosis and procedure type were consistently higher among patients experiencing complications (p<0.001), with an average complication/no-complication ratio of 1.48/1.00.

Conclusion: The considerable patient- and financial-burdens associated with postoperative complications emphasize the need for systemic efforts to support quality-improvement initiatives and standardized procedures based on best evidence. Preventing or reducing postoperative complications following colectomy has the potential to dramatically reduce overall costs while improving patient-centered outcomes.

37.01 Promoting residents’ clinical reflections on medical care that seems futile by introducing a subjective but measurable perspective to improve ethically difficult decisions about gastrostomy tube placement.

L. Torregrosa1, E. Rueda2  1Xaverian University – San Ignacio Hospital, Bogotá, Colombia 2Institute of Bioethics, Xaverian University, Bogotá

Introduction:
Determining the futility of a particular intervention is not just an empirical medical assessment. Unfortunately, almost nothing in physicians’ medical training prepares them to make appropriate decisions in cases in which medical futility is at stake, and the study of the medical sciences does not teach physicians how to cope with such clinical situations either.

Making surgical decisions in situations in which a significant benefit for the patient is unclear should be a central issue for surgical teams, especially those engaged in postgraduate training. In order to prepare surgeons to make better decisions in cases in which the benefit of gastrostomy tube placement is unclear, we developed a decision tool focused on the patient’s point of view about the acceptability of the procedure in his or her own case.

Methods:
Traditionally, the decision to place a gastrostomy tube focuses mainly on patients in terminal phases of their diseases (late-stage cancer, dementia, anorexia-cachexia syndrome, permanent vegetative states) who cannot satisfy nutritional requirements by oral intake of food or nutritional supplements. This situation, which residents frequently face during surgical training, provides an ideal scenario for learning how to move beyond a narrow physiological understanding of benefit toward a broader concept of clinical benefit based on the patient’s perspective.
On the grounds of a theory of human capabilities (M. Nussbaum, A. Sen), we developed an open-ended questionnaire to register what a potential candidate for the procedure would consider valuable in his or her own case. The questionnaire was used during each clinical encounter between the physician (resident) and the patient (or his or her surrogate decision maker) to improve decision making on whether a gastrostomy tube should be placed. Perceptions of the members of the General Surgery team at Hospital San Ignacio about the utility of the tool were evaluated as well.

Results:
The physicians (attendings and residents) of the General Surgery Service and the Nutritional Support Team perceived that this tool improved their decisions on whether a gastrostomy tube should be offered. The general capacity of these teams to address medical futility across different cases also improved.

Conclusions:
Since our tool integrates both medical and non-medical dimensions into the decision-making process on whether a gastrostomy tube should be placed, it helps improve ethical reasoning among physicians (including residents) about the potential futility of such procedures, and guides the resident through the ethical reasoning required when the overall clinical benefit of a surgical intervention is uncertain.

The “human capabilities approach” (A. Sen, M. Nussbaum) was productively integrated into decision making on the acceptability of this procedure. The surgical team assessed this bedside tool as useful for facilitating decision making in cases in which the overall clinical benefit of placing a gastrostomy tube is uncertain.

37.02 Do Patients Buy-In to the Use of Postoperative Life Supporting Treatments? A Qualitative Study

M. J. Nabozny1, J. M. Kruser2, K. E. Pecanac7, E. H. Chittenden5, Z. Cooper6, N. M. Steffens1, M. F. McKneally8,9, K. J. Brasel10, M. L. Schwarze1,4  1University Of Wisconsin,Department Of Surgery,Madison, WI, USA 2Northwestern University,Department Of Medicine,Chicago, IL, USA 4University Of Wisconsin,Department Of Medical History And Bioethics,Madison, WI, USA 5Massachusetts General Hospital,Division Of Palliative Care,Boston, MA, USA 6Brigham And Women’s Hospital,Division Of Trauma, Burns, And Surgical Critical Care,Boston, MA, USA 7University Of Wisconsin,School Of Nursing,Madison, WI, USA 8University of Toronto,Department Of Surgery,Toronto, Ontario, Canada 9University of Toronto,Joint Center For Bioethics,Toronto, Ontario, Canada 10Medical College Of Wisconsin,Department Of Surgery,Milwaukee, WI, USA

Introduction: Before a major operation, surgeons generally assume that patients buy in to any life-supporting interventions that might be necessary postoperatively. How patients understand this agreement and their willingness to participate in additional treatment are unknown. The objective of this study was to characterize how patients buy in to treatments beyond the operating room and what limits they would place on additional interventions.

Methods: We performed a qualitative study of preoperative conversations between surgeons and patients at surgical practices in Toronto, ON; Boston, MA; and Madison, WI. Purposive sampling was used to identify 11 surgeons who are good communicators and routinely perform high-risk operations. Preoperative conversations between each surgeon and 3-7 of their patients were recorded (n=89). A subset of 41 patients and their family members were asked to participate in open-ended preoperative and postoperative interviews. We used qualitative content analysis to analyze the interviews and surgeon visits inductively, specifically evaluating the content of the conversation about the use of postoperative life support.

Results: Thirty-three patients and their family members participated in a preoperative interview; two of these were lost to follow-up. Patients expressed confidence that they had a common understanding with their surgeon about how they would be treated if there was a postoperative complication. However, this agreement was expressed in a variety of ways, from an explicit desire that the surgeon treat any complication to the fullest extent (“Just do what you got to do”) to a simple assumption that complications would be treated if they did occur. Most patients trusted their surgeon to intervene on their behalf postoperatively but expressed preferences for significant treatment limitations that were not discussed with their surgeon preoperatively (see Table). Furthermore, patients did not discuss their advance directive with their surgeon preoperatively but assumed it would be on file and/or that family members knew their wishes.

Conclusion: Following high-risk surgery, patients trust their surgeon to treat complications as they arise. Although patients buy in to additional postoperative intervention, they hold a broad range of preferences for treatment limitations that are not discussed with the surgeon preoperatively.

37.03 Evaluating Coercion, Pressure, and Motivation in Potential Live Kidney Donors

A. A. Shaffer1, E. A. King1, J. P. Kahn2, L. H. Erby3, D. L. Segev1  1Johns Hopkins University School Of Medicine,Department Of Surgery,Baltimore, MD, USA 2Johns Hopkins School Of Public Health,Berman Institute Of Bioethics,Baltimore, MD, USA 3Johns Hopkins School Of Public Health,Department Of Health And Behavior Sciences,Baltimore, MD, USA

Introduction:
As the shortage of donor organs persists, transplantation with living donors is increasing. Live donor kidney transplantation yields improved graft survival and recipient longevity compared with deceased-donor transplantation. However, one concern with living donation is the potential for coercion or pressure on individuals to donate when approached by a transplant candidate. Currently, there is no widely used standard test to measure donor pressure in a clinical setting. The purpose of this study was to use a novel assessment to evaluate pressure experienced by live donor candidates, determine primary motivations for considering donation, and identify demographic factors associated with increased pressure.

Methods:
We modified a psychological questionnaire of perceived coercion to generate a novel pressure assessment for potential kidney donors. The assessment comprises six questions. Each of the first four collects information on one element of the decision: the idea, influence, choice, and freedom. These are answered on a Likert scale from “Strongly Agree” to “Strongly Disagree,” converted to a numerical scale from 1 to 5. The fifth question asks the respondent to self-report perceived pressure on a scale from 1 (least pressure) to 5 (most pressure). Results of the first five questions were averaged to compute a pressure score. The sixth question qualitatively ascertains the candidate’s primary motivation for donation by asking the respondent to rank three reasons for donating in order of importance. Beginning November 25, 2013, data were prospectively collected on every individual calling our center for live donor evaluation.

Results:
Our study population included 400 potential live donors with a mean age of 41.8 years (SD=13.3). Of the participants, 58.8% were female, 72.3% were Caucasian, and 20.4% were African American. The mean pressure score was 1.1 (SD=0.3) and ranged from 1 to 4.2. Of the respondents, 79.2% had a total pressure score of 1, indicating that they answered each scaled question with the lowest pressure rating. There was no difference in mean pressure score by age, sex, race, or recipient/donor relationship type. The primary ranked motivation for donation was “I wanted to help my recipient” for 86.3% of respondents, “I wanted to give meaning to my life” for 11.3%, and “My family or friends expected me to donate” for 2.4%.

Conclusion:
Most candidates for living kidney donation (79.2%) feel little pressure from others when deciding to donate, but some (19.2%) report more than minimal pressure. Our data show that there is no clearly identifiable demographic profile for those who experience pressure. This pressure assessment can be used to identify donor candidates facing pressure early in the evaluation process, so that these concerns can be fully addressed prior to donation.

37.04 Influence of Do-Not-Resuscitate Status on Vascular Surgery Outcomes

H. Aziz1, B. C. Branco1, J. Braun1, M. Trinidad-Hernandez1, J. Hughes1, J. L. Mills1  1University Of Arizona,Tucson, AZ, USA

Introduction

Do-not-resuscitate (DNR) orders allow patients to communicate their wishes regarding cardiopulmonary resuscitation. Although DNR status may influence physician decision-making beyond resuscitation, the impact of DNR status on the outcomes of patients undergoing emergent vascular operations remains unknown. The aim of this study was to analyze the outcomes of DNR patients undergoing emergency vascular surgery.

Methods

The National Surgical Quality Improvement Program database was queried to identify all patients requiring emergency vascular surgical interventions between 2005 and 2010. Demographics, clinical data, and outcomes were extracted. Patient outcomes were compared according to DNR status and the primary outcome was mortality.

Results

Over the study period, a total of 16,678 patients underwent emergency vascular operations (10.8% of the total vascular surgery population). Of those, 548 (3.3%) had a preoperative DNR status. There were no significant differences in rates of open or endovascular repair, or in intraoperative blood requirements, between the two groups. After adjusting for differences in demographics and clinical data, DNR patients had higher rates of graft failure (8.7% vs. 2.4%, Adj. P < 0.01) and failure to wean from mechanical ventilation (14.9% vs. 9.9%, Adj. P < 0.001). DNR status was associated with a 2.5-fold increase in 30-day mortality (35% vs. 14%; 95% CI: 1.7-2.9, Adj. P < 0.001).

Conclusion

The presence of a DNR order was independently associated with mortality. Patient and family counseling on surgical expectations prior to emergent operations is warranted, as perioperative risks are significantly elevated when a DNR order exists.

37.05 Assessing Surgeon Behavior Change after Anastomotic Leak in Colon Surgery

V. V. Simianu1, A. Basu2, R. Alfonso-Cristancho3, A. D. Flaxman4, D. R. Flum1,3  1University Of Washington,Department Of Surgery,Seattle, WA, USA 2University Of Washington,Department Of Health Services,Seattle, WA, USA 3University Of Washington,Surgical Outcomes Research Center (SORCE),Seattle, WA, USA 4University Of Washington,Institute For Health Metrics And Evaluation,Seattle, WA, USA

Introduction: Breakdown of a colorectal anastomosis is a rare but potentially life-threatening complication. Pressure testing the anastomosis by submerging it in water as air is injected (leak testing) can identify leaks intraoperatively and reduces the risk of postoperative leaks by up to 50%. Surgeons hold varying opinions about the value of leak testing, and behavioral economics predicts that perceived value drives behavior. We evaluated the impact of a surgical leak on a surgeon’s leak-testing behavior in subsequent cases, to test the hypothesis that a recent leak would influence the perceived value of leak testing.

Methods: Using a prospectively gathered cohort from the Surgical Care and Outcomes Assessment Program (SCOAP) in Washington State, we quantified leak testing during elective colorectal procedures with testable anastomoses (left colectomy, low anterior resection, and total abdominal colectomy) and assessed for leak-related adverse events. We describe patterns of leak testing and leaks, stratified by surgeon volume; higher-volume surgeons were defined as those performing 5 or more procedures per year. To test the hypothesis of behavior change, we used a difference-in-difference non-parametric model to compare leak testing before and after a leak.

Results: From 2008 to 2013, surgeons performed 7,497 elective colorectal operations across 46 hospitals, with a leak rate of 2.6% (n=195). Higher-volume surgeons accounted for 83.2% of the cases (n=6,234) in the time period. The mean leak testing rate for all surgeons was 85.9%. While leaks occurred more often in untested cases (3.5% vs 2.5%, p=0.05), leak events and leak testing did not differ between lower- and higher-volume surgeons. The overall rate of leak testing increased for both lower-volume (76% to 88%, p=0.007) and higher-volume (82% to 88%, p=0.002) surgeons over the study period. Lower-volume surgeons appear to increase their testing after a leak, as shown in Table 1. However, our difference-in-difference analytic model was limited by small sample size at the individual surgeon level; several hundred unique surgeons’ data would be needed in each stratum to detect significant differences.

Conclusion: Intraoperative leak testing appears to increase the most for lower-volume surgeons who experienced a leak, suggesting that these surgeons may attribute higher value to leak testing after a leak. For higher-volume surgeons, it may be that surgeon-specific preferences and practice style are more influential in the uptake of leak testing rather than exposure to adverse events.  These insights may help in crafting quality improvement initiatives around colorectal surgery that require clinician behavior change.

37.06 Burns in Nepal: A Population Based Countrywide Assessment

S. Gupta1,2, U. Mahmood3, S. Gurung8, S. Shrestha7, A. G. Charles6, A. L. Kushner2,4, B. C. Nwomeh2,5  1University Of California, San Francisco – East Bay,Surgery,Oakland, CA, USA 2Surgeons OverSeas,New York, NY, USA 3University Of South Florida,Department Of Plastic Surgery,Tampa, FL, USA 4Johns Hopkins Bloomberg School Of Public Health,International Health,Baltimore, MD, USA 5Nationwide Children’s Hospital,Ohio State University, Pediatric Surgery,Columbus, OH, USA 6University Of North Carolina, Chapel Hill,Surgery, Trauma And Critical Care,Chapel Hill, NC, USA 7Nepal Medical College,Surgery,Kathmandu, , Nepal 8Kathmandu Medical College,Kathmandu, , Nepal

Introduction: The incidence of burns in low- and middle-income countries (LMICs) is 1.3 per 100,000 population, compared with 0.14 per 100,000 population in high-income countries, and burns rank among the top 15 leading causes of the global burden of disease. However, much of the data from LMICs are based on estimates of those presenting to a health facility and may underestimate the true prevalence of burn injury. The purpose of this study was to assess the prevalence of burn injuries at a population level in Nepal, a low-income South Asian country.

Methods: A cluster-randomized, cross-sectional, countrywide survey was administered in Nepal using the Surgeons OverSeas Assessment of Surgical Need (SOSAS) from May 25 to June 12, 2014. Fifteen of the 75 districts of Nepal were randomly chosen proportional to population. In each district, three clusters, two rural and one urban, were randomly selected. The SOSAS survey has two portions: the first collects demographic data about the household’s access to healthcare and recent deaths in the household; the second is structured anatomically and designed around a representative spectrum of surgical conditions, including burns.

Results: In total, 1,350 households comprising 2,695 individuals were surveyed, with a response rate of 97%. Fifty-five burn injuries were present in 54 individuals (2.0%, 95% CI 1.5% to 2.6%); mean age was 30.6 years (SD 2.3, 95% CI 26.0-35.2), and 52% of those injured were male. The largest proportion of burns was in the 25-54 age group (2.22%, 95% CI 1.47% to 3.22%), with those aged 0-14 having the second largest proportion (2.08%, 95% CI 1.08% to 3.60%). The upper extremity was the most common anatomic location, affected in 36.36% of burn injuries. Causes of burns included hot liquids and/or hot objects (60.38%) and open fire or explosion (39.62%). Eleven individuals with a burn had an unmet surgical need (20%, 95% CI 10.43% to 32.97%). Barriers to care included facility/personnel not available (8), fear/no trust (1), and no money for healthcare (2). Extrapolations suggest that approximately 608,605 people in Nepal have suffered a burn injury, with potentially 124,200 unable to receive appropriate care.

Conclusion: Burn injuries in Nepal appear to be primarily a disease of adults and due to scalds, contrary to the previously held belief that burn injuries occur mainly in children (0-14) and women and are due to open flames. These data suggest that the demographics and etiology of burn injuries at a population level vary significantly from hospital-level data. To tackle the burden of burn injuries, interventions from all public health domains, including education, prevention, healthcare capacity, and access to care, are needed, particularly at the community level. Increased efforts in all spheres would likely lead to a significant reduction in burn-related death and disability.

37.07 The Natural Progression of Biliary Atresia in Vietnam

M. B. Liu1, X. Hoang3, T. B. Huong3, H. Nguyen3, H. T. Le4, A. Holterman2  1Stanford University School Of Medicine,Stanford, CA, USA 2University Of Illinois College Of Medicine At Peoria,Department Of Surgery/Pediatric Surgery,Peoria, IL, USA 3National Hospital Of Pediatrics,Hepatology Department,Hanoi, HANOI, Viet Nam 4National Hospital Of Pediatrics,Hanoi, HANOI, Viet Nam

Introduction: While the natural course of biliary atresia (BA) in patients who undergo the Kasai portoenterostomy is well documented, untreated BA is uncommon in developed countries and has not been well characterized. The objectives of this study were to further characterize unoperated biliary atresia patients and their survival course in Vietnam, a developing country.

Methods: A retrospective chart review was undertaken of the demographics and clinical characteristics of patients diagnosed with biliary atresia between January 2012 and July 2013 at one hospital in Vietnam. Patients identified as unoperated biliary atresia cases were contacted to obtain survival data.

Results: A total of 84 patients (60 in 2012 and 24 in 2013) were identified as unoperated biliary atresia cases out of 178 patients diagnosed with biliary atresia within the same timeframe. Mean age at diagnosis for the unoperated BA patients was 100±84 days, with a median of 69 days. The majority (54%) presented within 2 months of life (10% within 45 days); 33%, 21%, and 12% presented after 3, 4, and 6 months of age, respectively. At presentation, mean±SD total bilirubin was 10.3±4.5 mg/dL (normal 0.1-1.0 mg/dL), ALT was 141±88 U/L (normal <42 U/L), and PELD score was 15±21 (median 10-15). The reasons for no surgical treatment were parents’ refusal of the Kasai surgery, late diagnosis, or lack of access to primary liver transplantation. Follow-up data were limited, since only 12% had at least 1 readmission at the same hospital for complications after their initial diagnosis; the remaining 88% did not return for further management. Follow-up survival and mortality data were obtained for 34 of the 84 unoperated BA cases; the remaining patients could not be contacted. Of the 34 patients, 7 (20%) were still alive as of August 2013, having survived an average of 9.5±3.6 months at the time of contact. The remaining 27 deceased patients had a median lifespan of 7.4 months.

Conclusion: Our data provide the most recent survival outcomes for patients with unoperated biliary atresia. They illustrate the multiple contributors to the significant medical burden on these patients in Vietnam, including delays in presentation, parents’ refusal of surgical treatment, and lack of access to follow-up care. Innovative non-invasive palliative therapy may be more acceptable to these families and could improve the survival and quality of life of these patients.

37.08 Emergency General Surgery in a Low-Middle Income Healthcare Setting – Determinants of Outcomes

A. A. Shah1,6, H. Zafar6, R. Riviello3, C. K. Zogg1, M. S. Halim7, S. Zafar5, A. Latif8, Z. Rehman6, A. H. Haider1  1Johns Hopkins University School Of Medicine,Center For Surgical Trials And Outcomes Research, Department Of Surgery,Baltimore, MD, USA 3Harvard School Of Medicine,Center For Surgery And Public Health, Brigham And Women’s Hospital,Brookline, MA, USA 5Howard University College Of Medicine,Department Of Surgery,Washington, DC, USA 6Aga Khan University Medical College,Department Of Surgery,Karachi, Sindh, Pakistan 7Aga Khan University Medical College,Section Of Critical Care, Department Of Medicine,Karachi, Sindh, Pakistan 8Johns Hopkins University School Of Medicine,Department Of Anesthesia,Baltimore, MD, USA

Introduction:  The field of emergency general surgery (EGS) has rapidly evolved as a distinct component of acute care surgery. However, nuanced understandings of outcome predictors in EGS have been slow to emerge, particularly in resource-constrained parts of the world.  The objective of this study was to describe the disease spectrum and risk factors associated with outcomes of EGS among patients presenting to a tertiary care facility in Pakistan. 

Methods:  Discharge data from a university hospital were obtained for all adult patients (≥16 years) presenting between March 2009 and April 2014 with ICD-9-CM diagnosis codes consistent with an EGS condition, as described by the American Association for the Surgery of Trauma (AAST). Multivariate analyses, accounting for age, gender, year of admission, type of admission, admitting specialty, length of stay (LOS), major complications and Charlson Comorbidity Index, were used to assess potential associations between demographic/clinical factors and all-cause mortality and major complications (pneumonia, pulmonary emboli, urinary tract infections, cerebrovascular accidents, myocardial infarcts, cardiac arrest and systemic sepsis).

Results: Records for 13,893 patients were identified. Average age was 47.2 (±16.8) years, with a male preponderance (59.9%). The majority of patients presented with an admitting diagnosis of biliary disease (20.2%) followed by soft tissue disorders (15.7%), hernias (14.9%) and colorectal disease (14.3%). The crude rates of death and complications were 2.7% and 6.6%, respectively. Increasing age was an independent predictor of death and complications. Patients admitted for resuscitation (n=225) had the highest likelihood of mortality and complications (OR [95% CI]: 229.0 [169.8-308.8], 421.0 [244.8-724.3], respectively). The median length of hospital stay was 2 (IQR: 1-5) days. Examination of the proportion of deaths over a 30-day LOS revealed a tri-modal mortality distribution that peaked on days 20, 25 and 30.

Conclusion: Patients of advanced age and those requiring resuscitation are at greater risk of both mortality and complications. This study provides an important first step toward quantifying the burden of EGS conditions in a lower-middle-income country. Data presented here will help facilitate efforts to benchmark EGS in similar and, as yet, unexplored settings.

37.09 A propensity score based analysis of the impact of Decompressive Craniectomy on TBI in India

D. Agarwal1, V. K. Rajajee2, D. Schoubel2, M. C. Misra1, K. Raghavendran2  1All India Institute Of Medical Sciences,Apex Trauma Institute,New Delhi, , India 2University Of Michigan,Ann Arbor, MI, USA

Introduction: Severe Traumatic Brain Injury (TBI) is a problem of epidemic proportion in the developing world. The use of Decompressive Craniectomy (DC) may decrease the subsequent need for resources directed at Intracranial Pressure (ICP) control and ICU length of stay, important considerations in resource-constrained environments. The impact of DC on outcomes, however, is unclear. The primary objective of the study was to determine the impact of DC on in-hospital mortality and 6-month functional outcomes following severe TBI in a resource-constrained setting at the JPNATC, AIIMS, New Delhi, India.

Methods: During a 4-year period, data were prospectively entered into a severe TBI registry. Patients aged >12 years meeting criteria for ICP monitoring (ICPM) were included. The registry was queried for known predictors of outcome in addition to use of ICPM and DC. Early DC (eDC) was defined as DC performed <48 hours from injury. Outcomes of interest were in-hospital mortality and poor 6-month functional outcome, defined as a Glasgow Outcome Scale (GOS) score <3. A propensity-score-based analysis was utilized to examine the impact of DC on outcomes.

Results: Of 1345 patients meeting study criteria, 589 (44%) underwent DC. Following propensity-score based analysis, DC was associated with a 9.2% increase (p=0.005) in mortality and a 13.2% increase (p=0.016) in poor 6-month outcome, while eDC was associated with a 15.0% (p<0.0001) increase in mortality but not significantly associated with poor 6-month outcome (p=0.15).

Conclusions: The use of DC following severe TBI was associated with an increased likelihood of in-hospital death and poor 6-month functional outcome in a high-volume resource-constrained setting. Clinical trials of DC in similar settings are warranted to determine the impact of DC in severe TBI.

37.10 Indirect Costs Incurred by Patients Obtaining Free Breast Cancer Care in Haiti

K. M. O’Neill1, M. Mandigo5, R. Damuse6,7, Y. Nazaire6,7, J. Pyda4, R. Gillies7, J. G. Meara2,3,7  1University Of Pennsylvania,Perelman School Of Medicine,Philadelphia, PA, USA 2Harvard School Of Medicine,Brookline, MA, USA 3Children’s Hospital Boston,Plastic Surgery,Boston, MA, USA 4Beth Israel Deaconess Medical Center,Surgery,Boston, MA, USA 5University Of Miami,School Of Medicine,Miami, FL, USA 6Hopital Universitaire Mirebalais,Mirebalais, CENTRE, Haiti 7Partners In Health,Boston, MA, USA

Introduction: In low- and middle-income countries (LMICs), it has been reported that 90% of patients with breast cancer present with stage III or IV disease.[i] Although many factors contribute to this phenomenon, the financial burden of seeking care incurred through indirect costs such as user fees, food, travel, and lost wages is an important consideration that is often overlooked.

Methods: In this study, we delineated the costs that Haitian patients pay out-of-pocket to seek comprehensive oncology care at Hôpital Universitaire de Mirebalais (HUM), where oncologic care is offered free of charge. In total, 61 patients were directly interviewed about associated costs at different points along the treatment cycle: (1) diagnostic visits, (2) chemotherapy visits (pre- and post-surgery), and (3) the surgical visit.

Results: On average, patient indirect expenses were $619.04 for diagnostic costs, $635.68 for chemotherapy, and $94.33 for the surgical visit. When costs at outside facilities were included, patients paid an average of $1,698.84 out-of-pocket during the course of their treatment. Comparing these expenses to income, patients spent on average 193% (95% CI: 99%-287%) of their income on out-of-pocket expenses, with 68% of patients spending >40% of their potential income on medical expenses. When lost wages were included, average indirect costs came to $6,465 (95% CI: $1,833-$11,096). Indirect costs to the patient were on average 3.36 times higher than the direct costs to the hospital (calculated in a separate study as $1,922 per patient).

Conclusion: Health expenditures are financially catastrophic for families throughout the world. In Haiti, 74% of people live on less than $2 per day and 65% live in extreme poverty (less than $1 per day).[ii] Given the findings in this study, it is likely that the financial burden of seeking care for breast cancer—even when that care is offered “free of charge”—may be insurmountable for the majority of patients.

[i] Fregene A, Newman LA. Breast cancer in sub-Saharan Africa: How does it relate to breast cancer in African American women? Cancer 2005;103(8):1540-50.

[ii] "Objectifs du Millenaire pour le developpement: etat, tendances et perspectives." Ministere de L’Economie et des Finances, Institut Haitien de Statistique et d’Informatique. December 2009. http://www.ihsi.ht/pdf/odm/OMD_Novembre_2010.pdf. Accessed June 20, 2014.

38.01 Joints Under Study Trial (JUST)

R. Martin1, A. Chan1  1Mount Hospital Breast Cancer Research Centre

Overview:
The JUST trial is a Phase II, randomised, double-blind, placebo-controlled study evaluating the efficacy of topical pure Emu oil for arthralgic pain related to aromatase inhibitor use in postmenopausal women with early breast cancer.

Background:
20% of patients using aromatase inhibitors for treatment of breast cancer cease them due to arthralgia and joint pain.
Emu oil has been shown to have a topical anti-inflammatory effect in animal studies. A Phase I trial demonstrated a 45% reduction in pain scores in 13 women applying Emu oil to affected joints after 8 weeks of treatment.

Aim:
Using Visual Analogue Scores (VAS), we aim to demonstrate an improvement in joint pain from baseline to the end of 8 weeks of treatment.

The secondary end point is to demonstrate an improvement in joint stiffness, as assessed by a 4-point categorical scale, from baseline to the end of 8 weeks of treatment. We will also assess adverse effects related to the use of Emu oil, compliance with application of Emu oil, and overall pain at the end of 8 weeks using the Brief Pain Inventory score.

Methods:
Approximately 75 patients with joint pain subjectively worsening whilst on an aromatase inhibitor will be randomized to receive either 250 mL Emu oil or 250 mL placebo oil on a 1:1 basis. A 1.25 mL dose of oil will be applied over 30 minutes to up to 3 affected joints nominated at baseline. Baseline 5-point visual analogue scores will be completed along with the Brief Pain Inventory (BPI). Daily diary entries will be checked to ensure compliance, with final VAS and BPI scores completed at 8 weeks. At 8 weeks, participants will be offered a further 8 weeks of treatment with open-label Emu oil, with VAS and BPI to be completed at the end of 16 weeks.

Accrual is expected to be complete by the end of February 2015. A sample of 72 patients will give 80% power to detect a 40% difference, allowing for a 25% placebo effect.

38.02 Staging Studies are of Limited Utility for Newly Diagnosed Clinical Stage I-II Breast Cancer

A. Linkugel1, J. Margenthaler1, A. Cyr1  1Washington University,General Surgery/College Of Medicine,St. Louis, MO, USA

Introduction:   For patients diagnosed with clinical Stage I-II breast cancer, treatment guidelines recommend against the routine use of radiologic staging studies in the absence of signs or symptoms suggestive of distant metastasis. However, these tests continue to be used for many early-stage breast cancer patients. This study aims to determine the utilization and yield of these studies at a National Comprehensive Cancer Network (NCCN) member institution.

Methods:   Female patients presenting with AJCC 7th Edition clinical stage I-II invasive breast cancer between 1998 and 2012 at Siteman Cancer Center, an NCCN member institution, were identified in a prospectively maintained institutional surgical database. Patients treated with neoadjuvant chemotherapy were excluded. Charts were reviewed to verify clinical stage and to document staging studies performed within six months of diagnosis.  Staging studies of interest included computed tomography (CT) of the chest, abdomen, and/or pelvis, bone scan, and positron emission tomography (PET).  Results of staging studies and additional diagnostic studies or procedures were recorded.  Descriptive statistics were used for the analysis.

Results: A total of 3,291 patients were included in the analysis (2,044 stage I and 1,247 stage II). Of these, 882 (27%) received CT of the chest, abdomen, and/or pelvis; bone scan; or PET within 6 months of diagnosis. A total of 691/882 (78%) received chest CT; 705/882 (80%), abdominal/pelvic CT; 704/882 (80%), bone scan; and 70/882 (8%), PET. Of these 882 patients, 312 were stage I (15% of the stage I cohort) and 570 were stage II (46% of the stage II cohort). Of the 882 patients imaged, 194 (22%) required additional imaging (x-ray, CT, bone scan, sonogram, or PET) and/or biopsies to follow up abnormalities seen on the staging studies. However, only 11 of those 194 (6%) were confirmed to have metastatic disease (1.2% of the 882 imaged patients, 0.33% of the total study cohort). Of these 11 patients, one was clinical stage I at presentation and 10 were stage II. Metastatic sites identified included lung (n=3), bone (n=4), liver (n=1), and a combination of sites (n=3). The number of patients found to have metastatic disease was too small for comparative analysis.

Conclusions:  The identification of distant metastasis among clinical Stage I-II patients in this study was rare (0.33% of the total cohort). Even among patients judged appropriate for staging studies (CT, bone scan, and/or PET), only 1.2% were diagnosed with metastatic disease. These findings suggest that even at an NCCN member institution, staging studies are overused and lead to additional procedures in over 20% of patients.

38.03 Cancer-Directed Surgery and Conditional Survival in Advanced Stage Colorectal Cancer

L. M. Wancata1, M. Banerjee4, D. G. Muenz4, M. R. Haymart5, S. L. Wong3  1University Of Michigan,Department Of General Surgery,Ann Arbor, MI, USA 3University Of Michigan,Division Of Surgical Oncology,Ann Arbor, MI, USA 4University Of Michigan,Department Of Biostatistics,Ann Arbor, MI, USA 5University Of Michigan,Division Of Metabolism, Endocrinology, & Diabetes & Hematology/Oncology,Ann Arbor, MI, USA

Introduction: Although advanced (stage IV) colorectal cancer (CRC) has historically been associated with poor survival rates, recent data demonstrate that some patients are surviving longer in the modern era. Contributing treatments include improved systemic therapies and increased use of metastasectomy. Traditional survival estimates are less useful for longer-term cancer survivors, and conditional survival, or survival prognosis based on time already survived, is becoming more accepted as a means of estimating prognosis for subsets of patients who live beyond predicted survival times. What remains unknown is how specific treatment modalities affect survival. We evaluated the use of cancer-directed surgery in patients with advanced CRC to determine its impact on long-term survival in this patient population.

Methods: We used data from the Surveillance, Epidemiology, and End Results (SEER) registry to identify 323,415 patients with CRC diagnosed from 2000-2009. The SEER program collects data on patient demographics, tumor characteristics, treatment, and survival from cancer registries across the country. This cohort represents approximately 26% of incident cases, and its demographics are comparable to those of the general US population. Conditional survival estimates by SEER stage, age, and cancer-directed surgery were obtained based on a Cox proportional hazards regression model of disease-specific survival.

Results: Of the 323,415 patients studied, 64,956 (20.1%) had distant disease at the time of diagnosis. Median disease-specific survival for this cohort was just over 1 year. The proportion of patients with distant disease who underwent cancer-directed surgery was 65.1% (n=42,176). Cancer-directed surgery in patients with distant disease was associated with a significant survival difference compared with no surgery [hazard ratio 2.22 (95% CI 2.17-2.27)]. These patients had an approximately 25% improvement in conditional 5-year disease-specific survival across all age groups compared with their counterparts who did not receive cancer-directed surgery, demonstrating sustained survival benefits for selected patients with advanced CRC who undergo resection. A significant improvement in conditional survival was observed over time, with the greatest gains in patients with distant disease compared with those with localized or regional disease (Figure).

Conclusion: Five-year disease-specific conditional survival improves dramatically over time for selected patients with advanced stage CRC who undergo cancer-directed surgery.  This information is important in determining long term prognosis associated with operative intervention and will help inform treatment planning for patients with metastatic disease.

38.04 Temporal Trends in Receipt of Immediate Breast Reconstruction

L. L. Frasier1, S. E. Holden1, T. R. Holden2, J. R. Schumacher1, G. Leverson1, B. M. Anderson3, C. C. Greenberg1, H. B. Neuman1,4  1University Of Wisconsin,Wisconsin Surgical Outcomes Research Program, Department Of Surgery,Madison, WI, USA 2University Of Wisconsin,Department Of Medicine,Madison, WI, USA 3University Of Wisconsin,Department Of Human Oncology,Madison, WI, USA 4University Of Wisconsin,Carbone Cancer Center,Madison, WI, USA

Introduction: Research suggests an inverse relationship between post-mastectomy radiation (PMRT) and immediate breast reconstruction (IR). Recent data on the effectiveness of PMRT have led to its increasing use in patients at intermediate risk of recurrence (tumor ≤5 cm with 1-3 positive nodes). At the same time, significant increases in the use of IR over the last decade have been observed. We sought to determine whether the increased use of PMRT in intermediate-risk patients has led to a slower increase in rates of IR compared with groups in whom the guidelines for PMRT have not changed.

Methods: The SEER database was used to identify female patients with stage I-III breast cancer undergoing mastectomy over the decade 2002-2011 (n=40,889). Patients aged ≥65 years were excluded due to low rates of IR (5.1%). Three patient cohorts defined by likelihood of PMRT were formed based on tumor characteristics: High Likelihood (four or more positive lymph nodes, or tumors >5 cm with 1-3 positive lymph nodes), Intermediate Likelihood (tumors ≤5 cm with 1-3 positive lymph nodes), and Low Likelihood (tumors ≤5 cm with 0 positive nodes). Changes in IR for each group over time were assessed using joinpoint regression and summarized using the annual percentage change (APC), which represents the slope of the trend line.

Results: The overall use of reconstruction increased from 22% in 2002 to 41% in 2011. This statistically significant increase was observed across all 3 cohorts defined by the likelihood of receiving PMRT and across all ages. Receipt of IR was lower among groups with a higher likelihood of a recommendation for PMRT at the start of the study period: 14.1%, 19.4%, and 27.8% in the High, Intermediate, and Low Likelihood cohorts, respectively, in 2002. The High Likelihood group demonstrated the greatest increase in receipt of IR, with an APC of 9.8%; the Intermediate and Low Likelihood groups exhibited APCs of 6.2% and 5.9%, respectively. No group showed a significant change in APC from 2002-2011, meaning the rate of change was constant over the study period.

Conclusion: Rates of reconstruction increased over the study period across tumor characteristics and are highest in patients least likely to receive a recommendation for PMRT. At no point did any group exhibit evidence of a decreased rate of change, despite expanded indications for PMRT over this time period. In fact, rates of IR for patients at intermediate and high likelihood of receiving PMRT are increasing faster than rates for the lowest-likelihood patients. This may indicate that surgeons and radiation oncologists are becoming increasingly comfortable with immediate reconstruction in the setting of anticipated PMRT.