38.10 Saving Your Tail: How Do We Improve Overall Survival in Anal Cancer?

C. P. Probst1, C. T. Aquina1, A. Z. Becerra1, B. J. Hensley1, K. Noyes1, M. G. Gonzalez1, A. W. Katz2, J. R. Monson1, F. J. Fleming1  1University Of Rochester Medical Center,Surgical Health Outcomes & Research Enterprise,Rochester, NY, USA 2University Of Rochester Medical Center,Department Of Radiation Oncology,Rochester, NY, USA

Introduction:
Since the 1980s, combined modality treatment with radiotherapy (RT) and multi-agent chemotherapy has replaced abdominoperineal resection as the preferred definitive treatment for anal cancer. However, there are few data regarding factors affecting long-term overall survival (OS). This study examined the effects of patient, treatment, and hospital factors, as well as year of diagnosis, on overall survival.

Methods:
Patients with clinical stage I-III squamous cell carcinoma of the anus with complete information about RT treatment were selected from the 1998-2006 National Cancer Data Base. Bivariate analyses were used to examine differences in 5-year overall survival across patient, treatment, and facility characteristics. Kaplan-Meier curves compared survival differences between patients diagnosed from 1998-2002 and those diagnosed from 2003-2006. Subsequently, factors with a p-value <0.2 were entered into a Cox Proportional Hazards model to examine factors associated with 5-year OS. Factors that did not contribute to model fit were manually removed to produce an optimized final model.
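
As a rough illustration of the modeling approach described above (bivariate screening at p<0.2 followed by a Cox proportional hazards fit), a minimal sketch follows; the data, variable names, and candidate predictors are hypothetical stand-ins for the NCDB extract and this is not the authors' code.

```python
import pandas as pd
from lifelines import CoxPHFitter  # Cox proportional hazards implementation

# Tiny synthetic stand-in for the NCDB analytic file (all values are made up).
df = pd.DataFrame({
    "months_followed": [60, 24, 60, 12, 48, 60, 36, 9, 60, 30, 54, 18],
    "died":            [0,  1,  0,  1,  1,  0,  0,  1, 0,  1,  0,  1],
    "male":            [0,  1,  1,  1,  0,  0,  1,  0, 0,  1,  1,  0],
    "suboptimal_rx":   [0,  1,  0,  1,  1,  0,  1,  1, 0,  0,  1,  1],  # chemo/RT below guideline
})

# In the actual analysis, only candidate predictors passing bivariate screening
# (p < 0.2) would be retained as covariate columns here.
cph = CoxPHFitter()
cph.fit(df, duration_col="months_followed", event_col="died")
cph.print_summary()  # hazard ratios with 95% CIs, analogous to those reported
```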

Results:
Of the 11,027 patients who met inclusion criteria, 25% were clinical stage I, 49% clinical stage II, and 26% clinical stage III. On Kaplan-Meier analysis, minimal improvements in mean overall survival were noted for those diagnosed in later years compared with earlier years. Only 40% of patients were treated with guideline-indicated multi-agent chemotherapy and a 45 Gray (Gy) RT dose. Additionally, suboptimal chemotherapy and radiation treatment were associated with reduced survival (Figure 1, p<0.001 for all comparisons). In the multivariable analysis, numerous factors had a negative impact on OS. Compared with those receiving multi-agent chemotherapy and a 45 Gy RT dose, an increased hazard of death was observed in those treated with single-agent or no chemotherapy or an RT dose less than 45 Gy (HR=1.10, 95% CI=1.05-1.16), as well as in those with both a suboptimal chemotherapy regimen and RT dose (HR=1.35, 95% CI=1.26-1.45). Compared with patients with private insurance, decreased survival was observed among those with no insurance (HR=1.12, 95% CI=1.01-1.24), Medicaid (HR=1.20, 95% CI=1.10-1.30), and Medicare (HR=1.20, 95% CI=1.13-1.26). Compared with white patients, black patients had an increased risk of death (HR=1.10, 95% CI=1.02-1.19). Male sex was also an independent predictor of poor survival (HR=1.17, 95% CI=1.12-1.23).

Conclusion:
There has been minimal improvement in anal cancer survival over time. Sixty percent of patients are still undertreated, with widespread disparity in survival across patient groups. Utilization of a multi-disciplinary tumor board for anal cancer may help improve the delivery of appropriate treatment to all patients.
 

39.01 The Pitfalls of Recreational Inguinal Herniorrhaphy

C. T. Aquina1, K. N. Kelly1, C. P. Probst1, J. C. Iannuzzi1, K. Noyes1, F. J. Fleming1, J. R. Monson1  1University Of Rochester,Surgical Health Outcomes & Research Enterprise (S.H.O.R.E.),Rochester, NY, USA

Introduction:  Although inguinal hernia repair is the most common general surgical procedure, with an estimated 750,000 repairs performed each year in the United States, there is currently little information regarding the impact of surgeon volume on outcomes following inguinal hernia repair, specifically whether higher surgeon volume is associated with lower reoperation rates or reduced resource utilization.

 

Methods:  The New York Statewide Planning and Research Cooperative System database was queried for elective outpatient open inguinal hernia repairs performed in New York State from 2001-2006 using ICD-9 and CPT codes. Low (<25 cases per year) and high (≥25 cases per year) surgeon volume were defined using the bottom tertile and upper two tertiles for the number of open inguinal hernia repairs performed per year, respectively.  Bivariate, mixed-effect Cox proportional-hazards, and negative binomial regression analyses were performed to assess factors associated with reoperation for recurrence, procedure time, and downstream total cost, calculated as the sum of total facility charges for the initial and recurrent repair.

Results:  Among 129,269 inguinal hernia repairs, the overall rate of reoperation for recurrence within 5 years was 1.7%. The median time to reoperation was 1.8 years, and 4.8% of the reoperations were emergent. Recurrent hernia repair was performed by the same surgeon in only 57% of patients. A significant inverse relationship was seen between surgeon volume and reoperation rate, procedure time, and healthcare costs (P<0.001). After controlling for surgeon, facility, operative, and patient characteristics, the differences in procedure time and downstream total cost between low-volume and high-volume surgeons were 23 minutes and $763 per patient, respectively. Of note, facility volume had no effect on reoperation rates or procedure time. If elective inguinal hernia repairs were performed by surgeons with a minimum volume of 25 repairs per year, roughly $5.2 million would be saved each year in New York State alone. Extrapolated across the United States, over $180 million could be saved annually.
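
The savings projection is simple arithmetic on the $763 per-patient cost difference; the sketch below back-calculates the case volumes implied by the reported state and national figures (these implied counts are derived here and are not numbers given in the abstract).

```python
# Back-of-the-envelope arithmetic behind the reported savings estimates.
# The implied case counts are derived here, not reported in the abstract.
cost_diff = 763                 # $ extra downstream cost per low-volume-surgeon patient
ny_savings = 5_200_000          # reported potential annual savings, New York State
us_savings = 180_000_000        # reported potential annual savings, United States

print(f"Implied low-volume repairs per year in NY: {ny_savings / cost_diff:,.0f}")
print(f"Implied low-volume repairs per year in the US: {us_savings / cost_diff:,.0f}")
```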

Conclusion:  Surgeon volume < 25 cases per year for elective open inguinal hernia repair was independently associated with higher rates of reoperation for recurrence, worse operative efficiency, and substantially higher healthcare costs. Referral to surgeons who perform at least 25 inguinal hernia repairs per year should be considered to decrease reoperation rates and unnecessary resource utilization.
 

39.02 A Screening Program to Prevent Readmission Following Colorectal Surgery

T. R. Grenda1,2, M. R. Hemmila1,2, S. L. Wong1,2, A. Mikhail2, S. E. Regenbogen1,2  1University Of Michigan,Center for Healthcare Outcomes And Policy,Ann Arbor, MI, USA 2University Of Michigan,Department Of Surgery,Ann Arbor, MI, USA

Introduction:  As healthcare reimbursement reform has increasingly penalized hospitals for unplanned readmissions, there is widespread interest in developing interventions to prevent them.  In a high-volume colorectal surgery service, we designed, implemented, and evaluated a pre-discharge screening program aimed at preventing readmission following inpatient colorectal surgery.

 

Methods:  We composed a 10-item screening tool to identify patients at increased risk for postoperative readmission. At discharge, mid-level providers or residents completed the screening, and patients who screened positive received a follow-up phone call from clinic nursing staff 48-72 hours after discharge to identify patients with problems and redirect them to early outpatient attention.  We obtained data on comorbidities and outcomes from supplemental review of the electronic medical record. Statistical analysis was performed to compare early (<7 days) and 30-day readmission rates between patients with positive and negative screens, and, among those with positive screens, between those who did and did not receive follow-up phone calls.

Results: 290 consecutive patients undergoing colorectal surgery were screened for readmission risk.  193 of these patients (66.5%) screened positive using the tool (Table 1). The 30-day readmission rate was 12.4% for patients screening positive and 3.1% for those screening negative (relative risk 4.0, p=0.009).  The screening tool had a sensitivity of 91% for early readmission and 88% for 30-day readmission.  The positive predictive value of the tool was 5.6% and 12.3% for early and 30-day readmission, respectively.  Of those patients screening positive, only 52% were successfully contacted by nursing staff for the follow-up phone call. There were no significant differences in readmission rates at either 7 days (phone call: 3.9% vs. no phone call: 7.7%, p=0.4) or 30 days (11.8% vs. 13.2%, p=0.8) associated with receiving an intervention phone call. Issues screened for during the phone call did not predict subsequent readmission.
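
The reported test characteristics follow from standard 2x2-table definitions; the sketch below reconstructs approximate counts from the reported proportions (exact readmission counts are not given in the abstract, so small rounding differences are expected).

```python
# Reconstructing the 30-day test characteristics from reported proportions
# (approximate; exact readmission counts are not stated in the abstract).
screened = 290
screen_pos = 193
screen_neg = screened - screen_pos                  # 97 screen-negative patients
readmit_pos = round(0.124 * screen_pos)             # ~24 readmitted among screen-positives
readmit_neg = round(0.031 * screen_neg)             # ~3 readmitted among screen-negatives

sensitivity = readmit_pos / (readmit_pos + readmit_neg)   # share of readmissions flagged
ppv = readmit_pos / screen_pos                             # share of positives readmitted
print(f"30-day sensitivity ≈ {sensitivity:.1%}, PPV ≈ {ppv:.1%}")   # ≈ 88.9% and 12.4%
```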

Conclusion: Our study identifies a 10-item tool with high sensitivity for detecting patients at highest risk for readmission after colorectal surgery. However, a targeted early follow-up phone call intervention did not appear to prevent readmissions. Future efforts aimed at understanding the specific factors predictive of readmission are needed to guide implementation of effective interventions to prevent postoperative readmissions.
 

39.03 Use Of Tranexamic Acid In Civilian US Trauma Centers: Results Of A National Survey

R. S. Jawa1, A. Singer2, J. E. McCormack1, C. Huang1, J. A. Vosswinkel1  1Stony Brook University Medical Center,Trauma,Stony Brook, NEW YORK, USA 2Stony Brook University Medical Center,Emergency Medicine,Stony Brook, NY, USA

Introduction:  The antifibrinolytic tranexamic acid (TXA) is listed as an essential medication by the World Health Organization, is included in the Joint Theater Trauma System, and is recommended by the Trauma Quality Improvement Program of the American College of Surgeons as part of massive transfusion guidelines. A recent major trauma study further advocated for TXA use.  However, its use in US trauma centers is unknown. We sought to determine surgeons' familiarity with TXA and their use of TXA.  We further hypothesized that military experience would be associated with greater TXA familiarity and use.

Methods:  An online survey was sent to the 1291 attending surgeon members of a national trauma organization in the spring of 2014. The survey was organized into three parts: respondent demographics, perceptions of TXA, and experience with TXA. Perceptions of TXA use were scored on a 5 point Likert scale.  Chi-squared test was used for statistical analysis and p<0.05 was considered significant.

Results: The survey was completed by 35% of those surveyed.   With regard to demographics, 81.1% had completed a critical care fellowship. Military medical experience was reported by 21.0%.  74.5% of respondents work in a Level I trauma center and 23% in a Level II trauma center.

With regard to TXA perceptions, a majority of those surveyed agreed or strongly agreed that TXA reduces bleeding (78.9%) and that a comprehensive massive transfusion protocol should include TXA (82.5%).  Furthermore, 92% of respondents are looking to national trauma organizations to develop practice guidelines for its use.

Experience with TXA was variable: 38.0% use it regularly, 24.9% use it 1-2 times per year, 12.3% use it rarely, and 24.7% have not used it.  Of those who had used TXA, 79.6% indicated that the primary indication is significant hemorrhage; 18.6% felt risk of significant bleeding was an indication.  Among respondents who did not routinely use TXA, the primary reason was a perception that TXA has uncertain clinical benefit (48.3%), followed by unfamiliarity with the drug (32.8%).  TXA unavailability in the hospital was a rare cause (3.6%); 87.2% of respondents' hospitals had TXA on formulary.  While 18.3% of surgeons with military experience had never used TXA, compared with 26.4% of those without military experience, this difference failed to reach statistical significance (p=0.11).

Conclusion: Currently, only 38% of US trauma surgeons regularly use TXA for significant traumatic hemorrhage.  The major reason for this appears to be unfamiliarity with TXA.  Military experience was not a significant predictor of TXA use in civilian US trauma centers.  The data suggest an opportunity for collaboration among members of national organizations to develop a guideline for TXA use in significant hemorrhage.
 

39.04 Surgical Volume, Post-Operative Outcomes, and Overall Patient Satisfaction

S. E. Tevis1, G. D. Kennedy1  1University Of Wisconsin,General Surgery,Madison, WI, USA

Introduction:  Patient satisfaction is an increasing area of interest due to both public reporting of results and tying of Medicare reimbursement to satisfaction scores.  While scores are adjusted for patient factors, little is known about how surgical volume and post-operative outcomes affect satisfaction with the hospital experience. 

Methods:  Hospitals participating in the University HealthSystem Consortium (UHC) database from 2011-2012 were included.  Patients were restricted to those discharged by general surgeons to isolate surgical patients.  Hospital data were paired with HCAHPS results from the Hospital Compare website.  Post-operative outcomes were dichotomized based on the median for all hospitals and stratified based on surgical volume.  Overall patient satisfaction scores, defined as the recommendation-of-hospital domain, were also dichotomized based on median scores.  Chi-square and binary logistic regression analyses were performed to evaluate whether post-operative outcomes or surgical volume more strongly influenced high patient satisfaction.

Results:  The study population consisted of 171 hospitals from the UHC database.   The median surgical volume was 6,341 annual operations.  The median complication rate was 4.15%, the median readmission rate was 10.72%, and the median mortality rate was 1.24%.  The proportion of highly satisfied patients on the recommendation-of-hospital domain ranged from 46% to 90%, with a median of 75%.  High surgical volume was a more important predictor of overall patient satisfaction than post-operative outcomes (Figure 1).  Hospitals with high surgical volume were significantly more likely to have high overall satisfaction scores than low-volume hospitals regardless of hospital complication rates (p<0.001).  Similarly, high surgical volume independently predicted satisfaction on the HCAHPS survey regardless of hospital readmission rates (p<0.001) or mortality rates (p=0.009).

Conclusion:  High surgical volume more strongly predicted overall patient satisfaction on the HCAHPS survey than post-operative outcomes.  Patients may find higher volume hospitals more generally impressive and may be less capable of identifying safe, high quality hospitals.

 

39.05 Satisfaction with Surgeon Care as Measured by S-CAHPS is Not Related to NSQIP Outcomes

R. K. Schmocker1, L. Cherney-Stafford1, E. R. Winslow1  1University Of Wisconsin,Surgery,Madison, WI, USA

Introduction: Patient satisfaction is an important component of the patient experience; however, measurement of satisfaction with surgical care has been problematic. The recently approved Consumer Assessment of Healthcare Providers and Systems Surgical Care Survey (S-CAHPS) was designed to measure the surgical patient experience. Although previous studies have suggested that satisfaction is not related to postoperative morbidity, this has largely been examined at the hospital level using administrative measures of morbidity and more global surveys. We set out to determine, at the patient level, whether the presence of NSQIP complications or other clinical variables impacts patient satisfaction on the S-CAHPS.

Methods: All patients undergoing a general surgical operation from 6/13-11/13 were sent the S-CAHPS within 3 days of discharge, with a response rate of 45.3% (456/1007). To assess the impact of malignancy on satisfaction, a subset of operative sites with a high proportion of malignant indications was used (colorectal, thyroid, breast, hepatobiliary). Retrospective chart review was conducted using NSQIP variable definitions. Major complications were defined by the presence of septic shock, cardiac arrest, stroke, ventilator use > 48 hrs, unplanned intubation, or organ space infection. Data were analyzed as a function of response to the overall surgeon-rating item, and patients who rated their surgeon as the "best possible" (topbox) were compared with those who gave lower ratings using χ2 and t-tests as appropriate.

Results: 253 patients were identified, 68% female, with a mean age of 59±16 years, BMI of 28.2±7 kg/m2, and length of stay (LOS) of 4.5±6.7 days. 79% of respondents rated the surgeon as topbox. Age, BMI, ASA class, and LOS were similar between those who rated the surgeon as topbox and those who did not. The overall NSQIP complication rate was 20% (48/243), with 23% of those (11/48) being major complications. Neither the complication rate (total or major) nor the number of complications affected satisfaction scores (Table). Similarly, a malignant indication for the operation, having an urgent operation, and being discharged to somewhere other than home were not associated with satisfaction scores.

Conclusion: Even when examined at a patient level with surgery-specific measures and outcomes, the presence of complications after an operation does not appear to impact overall patient satisfaction with surgeon care. This, in conjunction with the finding that satisfaction does not appear to be affected by other important clinical variables such as malignancy, suggests that satisfaction may be an outcome distinct from traditional measures. Further investigation into the primary determinants of this unique outcome is needed.

 

39.06 Influence of Body-Mass Index on Outcomes Following Major Resection for Cancer

C. K. Zogg1, B. Mungo2, A. O. Lidor3, M. Stem3, K. S. Yemul1, A. H. Haider1, D. Molena2  1Johns Hopkins University School Of Medicine,Center For Surgical Trials And Outcomes Research, Department Of Surgery,Baltimore, MD, USA 2Johns Hopkins University School Of Medicine,Division Of Thoracic Surgery, Department Of Surgery,Baltimore, MD, USA 3Johns Hopkins University School Of Medicine,Department Of Surgery,Baltimore, MD, USA

Introduction:  More than 1 in 3 adults in the United States, accounting for >106 million people, is obese. From a surgical perspective, the high prevalence of obesity means that operations on this population are common in everyday practice. Despite the assumption that obesity is associated with increased surgical risk, current evidence that obese patients fare worse is inconclusive. This study sought to examine associations between body-mass index (BMI) and outcomes following major resection for cancer using a nationally validated, outcomes-based database.

Methods:  Data from the 2006-2012 American College of Surgeons NSQIP were queried for patients ≥18 years of age with a primary ICD-9 cancer diagnosis and a corresponding CPT code for lung surgery, esophagectomy, hepatectomy, gastrectomy, colectomy or pancreatectomy. BMI calculated for included patients was categorized according to the World Health Organization classification (Table). Patients were compared first via single logistic regression for differences in 30-day mortality, extended length of stay (LOS), serious morbidity, overall morbidity and isolated morbid conditions among three cohorts: normal vs. (1) underweight, (2) overweight-obese I and (3) obese II-III. Similar methodology was employed using multivariate logistic regression adjusted for clinical/demographic factors and type of resection performed. Risk-adjusted, stratified analyses for each resection were also considered in addition to an overall propensity score-adjusted logistic analysis (Table).
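
For reference, the WHO grouping step amounts to binning BMI at the standard cut points; a minimal sketch with made-up example values follows (the variable names are hypothetical and not drawn from the NSQIP file).

```python
import pandas as pd

# Standard WHO BMI classes; the example BMI values below are made up.
bmi = pd.Series([17.0, 22.5, 27.8, 32.1, 37.4, 43.0])
who_class = pd.cut(
    bmi,
    bins=[0, 18.5, 25, 30, 35, 40, float("inf")],
    labels=["underweight", "normal", "overweight", "obese I", "obese II", "obese III"],
    right=False,   # each bin includes its lower bound, e.g. 25 <= BMI < 30 is overweight
)
print(list(who_class))
```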

Results: Consistent with the distribution of BMI in the United States, we identified 529,955 patients, of whom 32.06% (169,880) were normal weight, 3.45% (18,284) underweight, 32.52% (172,355) overweight, and 17.76% (93,669), 7.51% (39,820) and 4.94% (26,177) obese class I, II and III, respectively. Unadjusted, multivariate and propensity score-adjusted logistic regression found that 30-day mortality, extended LOS, and serious and overall morbidity were significantly increased in cohort 1. Overall, we did not observe worse surgical outcomes in cohort 2, although these patients had increased risk for isolated complications such as wound infection, venous thromboembolism, prolonged mechanical ventilation and renal complications. In cohort 3, obese patients experienced 3-9% increased odds of overall and serious morbidity. Analyses stratified by cancer-resection type showed similar trends.

Conclusion: Evidence-based assessment of outcomes following major resection for cancer suggests that obese patients should be treated according to optimal oncologic standards. Surgeons should not be hindered by unproven perceptions of prohibitively increased perioperative risk in this population.

39.07 The Surgical Apgar Score in Major Esophageal Surgery

C. F. Janowak2, L. Taylor2, J. Blasberg1, J. Maloney1, R. Macke1  1University Of Wisconsin,Division Of Cardiothoracic Surgery,Madison, WI, USA 2University Of Wisconsin,Department Of Surgery,Madison, WI, USA

Introduction:  Most postoperative assessments and triage decisions are based on subjective evaluation of a patient's risk factors and overall condition. The Surgical Apgar Score (SAS) is a validated prognostic tool used to predict postoperative morbidity and mortality in a wide variety of surgical patients. The esophagectomy population is a unique subset of surgical patients at high risk for postoperative complications and in need of substantial disposition resources. An objective prognostic metric is an appealing and efficient way to allocate limited care resources to the sickest postoperative patients. Although other, more complex risk calculators have been developed, the SAS is a simple model usable at the bedside that has been validated in a variety of surgical populations. We evaluated the reliability of the SAS in a major esophageal surgery population.

Methods:  A retrospective review of a prospectively collected and internally validated database of cardiothoracic operations was performed for consecutive esophagectomies from 2009 to 2013.  Basic demographics, comorbidities, post-operative complications, and intraoperative variables were collected for all patients. The primary outcomes studied were mortality and NSQIP-defined in-hospital major complication; secondary outcomes were prolonged length of hospital stay (LOS) greater than 10 days and post-operative disposition. We used descriptive statistics, receiver operating characteristics (ROC) and Pearson Chi-Square analysis to analyze primary and secondary outcome prediction efficacy of SAS.  Preoperative comorbid conditions were also analyzed for association with post-operative outcomes prognostication using odds ratio (OR) analysis. 

Results: A total of 172 consecutive esophageal resections over four years were reviewed.   Overall mortality was 5 deaths (2.9%): 4 occurred within 30 days of surgery (1 of these after discharge), and 1 occurred after 90 days of hospitalization. The overall SAS distribution was: SAS 9-10, n=16; SAS 7-8, n=113; SAS 5-6, n=42; and SAS ≤ 4, n=1. Of these patients, 34.3% had a major complication, 27.3% had a prolonged LOS, and 12.2% were discharged to a care facility other than home. No significant correlation was demonstrated between the SAS and complication, LOS, or discharge disposition, with respective ROC areas of 0.44, 0.43, and 0.44.  Of the preoperative comorbid conditions analyzed, only neoadjuvant chemoradiation significantly increased the risk of any outcome, with an OR of 3.59 (95% CI 1.38-9.37, p < 0.01) for discharge to a care facility other than home.
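
The discrimination analysis reduces to computing the area under the ROC curve for the SAS against each binary outcome; a minimal sketch with made-up data follows (illustrative only, not the authors' code; an area near or below 0.5, like the ~0.44 reported above, indicates essentially no discrimination).

```python
from sklearn.metrics import roc_auc_score

# Made-up example: SAS values and major-complication flags for a few patients.
sas = [9, 8, 7, 6, 7, 5, 8, 6, 10, 7]
major_complication = [0, 0, 1, 0, 1, 1, 0, 1, 0, 0]

# Higher SAS is intended to mean lower risk, so discrimination is judged by how
# far the area falls from 0.5 in either direction.
auc = roc_auc_score(major_complication, sas)
print(f"ROC area, SAS vs. major complication: {auc:.2f}")
```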

Conclusion: As a perioperative performance measure, the SAS does not appear to predict major postoperative adverse outcomes well in a major esophageal surgery population.

 

39.08 Transfer to Higher-Level Centers Does Not Improve Survival in Older Patients with Spinal Injuries

G. Barmparas2, Z. Cooper1, J. Havens1, R. Askari1, E. Kelly1, A. Salim1  1Brigham And Women’s Hospital,Division Of Acute Care Surgery And Surgical Critical Care-Department Of Surgery,Boston, MA, USA 2Cedars-Sinai Medical Center,Division Of Acute Care Surgery And Surgical Critical Care / Department Of Surgery,Los Angeles, CA, USA

Introduction:   As the numbers of injured elders continue to rise dramatically, trauma centers are pressed to identify which older patients benefit from higher level care.  The purpose of the current investigation was to delineate whether elderly patients with spinal injuries benefit from transfers to Level I or II centers.

Methods:   We used the National Trauma Data Bank (NTDB) datasets from 2007-2011 to identify all patients over 65 years old with any spinal fracture or spinal cord injury from a blunt mechanism. Only centers reporting ≥ 80% of AIS scores and/or ≥ 20% of comorbidities, and/or with ≥ 200 subjects in the NTDB, were included. Patients who were transferred to Level I and II centers (TR) were then compared with those who were admitted to Level III or other centers (NTR). Patients who were transferred from Level III or other centers to other acute care facilities were excluded. We used chi-square and t-tests where appropriate to compare patient characteristics (demographics, comorbidities, admission vital signs and GCS, injury severity) and hospital factors (teaching status, region, and availability of > 10 orthopedic surgeons or neurosurgeons) between groups.  We then performed logistic regression to adjust for these differences among patients with any spinal injury and performed a subgroup analysis for patients with spinal cord injury. The primary outcome was in-hospital mortality. Significance was set at p<0.01.

Results: Of 3,313,117 eligible patients, 43,637 (1.3%) met inclusion criteria: 19,588 (44.9%) in the TR group and 24,049 (55.1%) in the NTR group. The majority of patients (95.8%) had a spinal fracture without a spinal cord injury. TR patients were significantly less likely to be ≥ 90 years old (7.0% vs. 8.1%, p<0.01) and had higher injury severity (AIS head ≥ 3: 18.9% vs. 15.7%, p<0.01; AIS spine ≥ 3: 5.9% vs. 4.4%, p<0.01). Compared with NTR patients, TR patients were more likely to have a spinal cord injury at any level (4.7% vs. 3.1%, p<0.01) and to require a spinal surgical procedure within 48 hours of admission (4.8% vs. 2.4%, p<0.01). More TR patients required ICU admission (48.5% vs. 36.0%, p<0.01) and ventilatory support (16.1% vs. 13.3%, p<0.01). Overall mortality was 7.7% (TR 8.6% vs. NTR 7.1%, p<0.01), and mortality in the subgroup of patients with a spinal cord injury was 21.7% (TR 22.3% vs. NTR 21.0%, p<0.01). After multivariate analysis, there was no difference in adjusted mortality for patients with any spinal injury (AOR [95% CI]: 0.98 [0.89, 1.08], p=0.70) or for patients with spinal cord injury (AOR [95% CI]: 0.86 [0.62, 1.20], p=0.38) treated at higher-level centers.

Conclusion: Transfer of elderly patients with spinal injuries to higher-level trauma centers is not associated with improved survival. Further research is required in this area to identify those subgroups of elderly patients who benefit from such transfers.

39.09 Tetanus and Pertussis Vaccination in U.S. Adult Trauma Centers: Who's up to Date?

B. K. Yorkgitis1,2, G. Timoney2, P. Van Den Berg2, A. Goldberg2, A. Pathak2, A. Salim1, J. Rappold2  1Brigham And Women’s Hospital,Trauma, Burn, Surgical Critical Care,Boston, MA, USA 2Temple University,Division Of Trauma,Philadelpha, PA, USA

Introduction:  Trauma centers commonly administer tetanus prophylaxis to patients sustaining wounds.  In the U.S., there are currently two different vaccinations available for adult administration: tetanus/diphtheria toxoid (Td) or tetanus/reduced diphtheria and acellular pertussis (Tdap). The importance of Tdap lies in its vaccination against pertussis while providing tetanus immunity. 
Since the 1980s there has been a steady rise in pertussis cases, from a low of 1,010 in 1976 to a high of 48,277 in 2012.1  This epidemic rise led the Centers for Disease Control and Prevention (CDC) Advisory Committee on Immunization Practices (ACIP) to recommend the routine use of Tdap when tetanus prophylaxis is indicated. Vaccination against pertussis is paramount for prevention.

Methods:  An institutional review board-exempt, web-based national survey was emailed to adult trauma center coordinators whose addresses could be located via an internet search.  Questions included level designation, number of trauma evaluations annually, zip code, hospital description (university, university-affiliated, community), and which preparation is given to adults <65 years and to older adults. The aim of this study was to gather data on which vaccination is currently being given to trauma patients.  At the conclusion of the survey, hyperlinks to the CDC ACIP recommendations were provided as an educational tool.

Results: A total of 718 emails were successfully sent and 439 (61.1%) completed surveys were returned.  Level 4/5 centers had the highest compliance rate for patients aged 18-65 years (93%), followed by Level 2/3 centers (86.9%) and Level 1 centers (56.9%). Among all centers, the use of Tdap was lower in the >65 years group.  Level 2/3 trauma centers were the most compliant for this age group (60.6%), followed by Level 4/5 (57.4%) and Level 1 (40.3%) centers.

Conclusion: With the rise in pertussis cases, vaccination remains crucial to prevention.  The CDC recommendations for Tdap have existed for adults <65 years since 2005 and those over 65 years since 2012.2  Yet many adult trauma centers do not adhere to the current ACIP guidelines. In particular, Level 1 trauma centers have the lowest rate of compliance. Through this survey, centers were educated on current recommendations. Increased vaccination of trauma patients with Tdap should improve protection against this virulent pathogen and result in a decreased incidence.

1. Centers for Disease Control and Prevention. (2014). Pertussis (Whooping Cough). Retrieved from http://www.cdc.gov/pertussis/surv-reporting.html

2. Updated Recommendation for Use of Tetanus Toxoid, Reduced Diphtheria Toxoid and Acellular Pertussis (Tdap) Vaccine in Adults 65 Years and Older – Advisory Committee on Immunization Practices (ACIP), 2012. MMWR. 2012;61(25):468-70.

39.10 Comorbidity-Polypharmacy Score Predicts Readmission in Older Trauma Patients

B. C. Housley1, N. J. Kelly1, F. J. Baky1, S. P. Stawicki2, D. C. Evans1, C. Jones1  1The Ohio State University,College Of Medicine,Columbus, OH, USA 2St. Luke’s University Health Network,Department Of Research & Innovation,Bethlehem, PA, USA

Introduction:  Hospital readmissions correlate with worse outcomes and may soon lead to decreased reimbursement. The comorbidity-polypharmacy score (CPS) is the sum of the number of pre-injury medications and the number of comorbidities, and may estimate patient frailty more effectively than patient age does. Though CPS has previously been correlated with patient discharge destination and clinical outcomes, no information is currently available regarding the association between CPS and hospital readmission.  This study evaluates that association, and compares it to age and injury severity as predictors for readmission.

Methods:  We retrospectively evaluated all injured patients 45 years or older seen at our American College of Surgeons-verified Level 1 trauma center over a one-year period. Inmates, patients who died prior to discharge, and patients who were discharged to hospice care were excluded. Institutional trauma registry data and electronic medical records were reviewed to obtain information on demographics, injuries, pre-injury comorbidities and medications, ICU and hospital lengths of stay, and occurrences of readmission to our facility within 30 days of discharge. Kruskal-Wallis testing was used to evaluate differences between readmitted patients and those who were not, with logistic regression used to evaluate the contribution of individual risk factors for readmission.

Results: 960 patients were identified; 79 patients were excluded per above criteria, and 2 further were excluded due to unobtainable medical records. 879 patients were included in final analysis; their ages ranged from 45-103 (median 58) years, injury severity scores (ISS) from 0-50 (median 5), and CPS from 0-39 (median 7).  76 patients (8.6%) were readmitted to our facility within 30 days of discharge.  The readmitted cohort had higher CPS (median 9.5, p=0.031) and ISS (median 9, p=0.045), but no difference in age (median 59.5, p=0.646).  Logistic regression demonstrated independent association of higher CPS with increased risk of readmission, with each CPS point increasing the odds of readmission by 3.9% (p=0.01).
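
Because CPS is simply a sum, it can be computed directly from registry fields; the sketch below is illustrative only, with the reported 3.9% per-point odds increase applied to a hypothetical patient (the patient values and baseline odds are assumptions, not study figures).

```python
def cps(n_comorbidities: int, n_preinjury_medications: int) -> int:
    """Comorbidity-polypharmacy score: pre-injury comorbidities plus pre-injury medications."""
    return n_comorbidities + n_preinjury_medications

# Hypothetical patient: 4 comorbidities and 6 pre-injury medications -> CPS = 10.
patient_cps = cps(n_comorbidities=4, n_preinjury_medications=6)

# The abstract reports each CPS point raising the odds of 30-day readmission by 3.9%.
baseline_odds = 0.06                       # assumed odds at CPS = 0, not a study figure
odds = baseline_odds * (1.039 ** patient_cps)
print(f"CPS = {patient_cps}; illustrative readmission odds ≈ {odds:.3f}")
```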

Conclusion: CPS is simple to calculate and, despite assumed limited accuracy of this information early in a trauma patient’s hospitalization, appears to correlate well with readmissions within 30 days.  Indeed, frailty defined by CPS was a significantly stronger predictor of readmission than patient age was.  Early recognition of elevated CPS may help optimize discharge planning and potentially decrease readmission rates in older trauma patients; larger multicenter evaluations of CPS as a readily available indicator for the frailty of older patients are warranted.

36.09 The Cost of Secondary Trauma Overtriage in a Level I Trauma Center

D. A. Mateo De Acosta1, R. Asfour1, M. Gutierrez1, S. Carrie2, J. Marshall2  1University Of Illinois College Of Medicine At Peoria (UICOMP),Department Of Surgery,Peoria, IL, USA 2University Of Illinois College Of Medicine At Peoria,Division Of Trauma / Department Of Surgery,Chicago, IL, USA

Introduction:
The goal of regional trauma systems is to deliver an adequate level of care to injured patients in a timely and cost-effective manner. Inter-facility transfer of injured patients is a foundation of US trauma systems. Patients are commonly secondarily overtriaged, delaying their definitive care and placing an unnecessary burden on the receiving institution. Reported rates of secondary overtriage range from 6.9% to 38%. The financial burden that secondary overtriage places on receiving institutions has rarely been studied.

Methods:
We reviewed the EMR and trauma registry data of 1,200 patients transferred to our institution for traumatic injuries during a three-year period. Patients were divided into two groups: Group 1 included patients who were secondarily overtriaged, and Group 2 (control) included those appropriately triaged. Secondary overtriage was defined as transfer from another hospital emergency department to our trauma service of a patient with an injury severity score (ISS) < 10 who did not require an operation and was discharged home within 48 hours of admission.

Results:
We identified 399 adult patients secondarily overtriaged to our institution, representing 31.9% of those transferred to our institution during the study period. Common indications for transfer were trauma to the torso and neurological, facial or orthopedic trauma. The main reasons for transfer among those secondarily overtriaged were traumatic brain injury (37.4%, p<0.05) and orthopedic trauma (21.8%, p<0.05), driven by the unavailability of specialist physicians at the referring institution. Average hospital cost and reimbursement per overtriaged patient were $19,301 and $7,356.83, respectively. Cost itemization was as follows: trauma activation – $5,016.49, observation boarding – $1,741.3, radiology – $4,339.09, laboratory – $1,836.68, pharmacy – $1,256.1, and supplies – $2,431.6. Transport was by ground in 85.95% of patients and by helicopter in 14.05%. The average cost of helicopter transport was $19,535.78.

Conclusion:
Secondary trauma overtriage places a significant burden on trauma centers, with an average cost per patient of approximately $19,301. The major reasons for transfer to our institution were traumatic brain injury and orthopedic trauma, mainly due to the unavailability of subspecialty services at the transferring institution. Education of rural trauma triage staff must continue to intensify in order to minimize the secondary overtriage of patients, expediting their care and optimizing resource utilization.
 

36.10 The True Cost of Postoperative Complications For Colectomy

C. K. Zogg1, E. B. Schneider1, J. Canner1, K. S. Yemul1, S. Selvarajah1, N. Nagarajan1, F. Gani1, A. H. Haider1  1Johns Hopkins University School Of Medicine,Center For Surgical Trials And Outcomes Research, Department Of Surgery,Baltimore, MD, USA

Introduction:  In 2013, the United States spent $3.8 trillion on healthcare, a figure projected to grow by 6.2% per year. Postoperative complications influence the cost of procedures, and guidelines define them as a measure of the quality of surgical care. However, their impact on procedure costs remains poorly characterized. This study explored the increased costs associated with postoperative complications for colectomy using nationally representative data.

Methods:  Data from the 2007-2011 HCUP Nationwide Inpatient Sample were queried for patients ≥18 years of age undergoing elective procedures with a primary procedure code for laparoscopic or open colectomy. Patients with a primary diagnosis of colon cancer, diverticulosis, diverticulitis, regional enteritis, ulcerative colitis or benign neoplasm of the colon were included. Patients were assessed for isolated complications, including mechanical wound injury and infection as well as procedural, systemic, urinary, pulmonary, gastrointestinal and cardiovascular complications. HCUP-defined weights were used to calculate nationally representative estimates for each complication, stratified by patient-demographic and hospital-level factors. Diagnosis, procedure and Charlson Comorbidity Index were also examined. Population-weighted crude and risk-adjusted generalized linear models (GLM) were used to assess for differences in non-routine discharge (binomial), in-hospital mortality (binomial), length of stay (gamma) and total cost (gamma) (Table).

Results: We identified 115,269 patients, of whom 20,728 (17.9%) experienced a postoperative complication. The most frequent complications were gastrointestinal (9.8%) and infectious (3.2%). Patients undergoing laparoscopic procedures experienced fewer complications, while patients with colon cancer (19.7%) and ulcerative colitis (18.7%) were at the highest risk. Adjusted GLM (Table) revealed that patients with complications were >3 times more likely to be non-routinely discharged and >5 times more likely to die. They had 77-82% longer lengths of stay and incurred 70-76% higher total costs. Median costs stratified by primary diagnosis and procedure type were consistently higher among patients experiencing complications (p<0.001), with an average complication/no-complication ratio of 1.48/1.00.
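
As a rough illustration of the cost model, the sketch below fits a log-link gamma GLM with discharge weights to a synthetic data set; the variable names, covariates, and weights are simplified assumptions, not the authors' specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the weighted NIS extract (all values are made up).
rng = np.random.default_rng(0)
n = 500
nis = pd.DataFrame({
    "any_complication": rng.integers(0, 2, n),
    "age": rng.integers(18, 90, n),
    "laparoscopic": rng.integers(0, 2, n),
    "discharge_weight": np.full(n, 5.0),      # plays the role of HCUP discharge weights
})
# Costs are positive and right-skewed, hence the gamma family with a log link.
nis["total_cost"] = np.exp(9.5 + 0.6 * nis["any_complication"] + rng.normal(0, 0.3, n))

model = smf.glm(
    "total_cost ~ any_complication + age + laparoscopic",
    data=nis,
    family=sm.families.Gamma(link=sm.families.links.Log()),
    freq_weights=np.asarray(nis["discharge_weight"]),
).fit()
# The exponentiated complication coefficient approximates the relative increase in cost.
print(np.exp(model.params["any_complication"]))
```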

Conclusion: The considerable patient and financial burdens associated with postoperative complications emphasize the need for systemic efforts to support quality-improvement initiatives and standardized procedures based on best evidence. Preventing or reducing postoperative complications following colectomy has the potential to dramatically reduce overall costs while improving patient-centered outcomes.

37.01 Promoting residents’ clinical reflections on medical care that seems futile by introducing a subjective but measurable perspective to improve ethically difficult decisions about gastrostomy tube placement.

L. Torregrosa1, E. Rueda2  1Xaverian University – San Ignacio Hospital, Bogotá, Colombia 2Institute of Bioethics, Xaverian University, Bogotá

Introduction:
Determining the futility of a particular intervention is not just an empirical medical assessment. Unfortunately, almost nothing in physicians' medical training prepares them to make appropriate decisions in cases in which medical futility is at stake, and the study of the medical sciences does not teach physicians how to cope with these clinical situations either.

Making surgical decisions in situations in which a significant benefit for the patient is unclear should be a central concern for surgical teams, especially those engaged in postgraduate training. In order to prepare surgeons to make better decisions in cases in which the benefit of gastrostomy tube placement is unclear, we developed a decision tool focused on the patient's point of view about the acceptability of the procedure in his or her own case.

Methods:
Traditionally, the decision to place a gastrostomy tube focuses mainly on patients in the terminal phases of their diseases (late-stage cancer, dementia, anorexia-cachexia syndrome, permanent vegetative states) who cannot meet their nutritional requirements through oral intake of food or nutritional supplements. This situation, which residents face frequently during surgical training, provides an ideal scenario for learning how to move beyond a narrow physiological understanding of benefit to a broader concept of clinical benefit based on the patient's perspective.
On the grounds of a theory of human capabilities (M. Nussbaum, A. Sen), we developed an open-ended questionnaire to register what a potential patient for the procedure would consider valuable in his or her case. The questionnaire was used during each clinical encounter between the physician (resident) and the patient (or his or her surrogate decision maker) to improve decision making on whether a gastrostomy tube should be placed. Perceptions of the members of the general surgery team at Hospital San Ignacio about the utility of the tool were evaluated as well.

Results:
The physicians (attendings and residents) of the General Surgery Service and the Nutritional Support Team perceived that this tool improved their decisions on whether a gastrostomy tube should be offered. The teams' general capacity to address medical futility across different cases also improved.

Conclusions:
Since our tool integrates both medical and non-medical dimensions into the decision-making process on whether a gastrostomy tube should be placed, it contributes to improving ethical reasoning among physicians (including residents) about the potential futility of this procedure (and others) and guides residents through the ethical reasoning required when the overall clinical benefit of a surgical intervention is uncertain.

The "human capabilities approach" (A. Sen, M. Nussbaum) was productively integrated into decision making on the acceptability of this procedure. Indeed, the surgical team assessed this bedside tool as useful for facilitating decision making in cases in which the overall clinical benefit of placing a gastrostomy tube is uncertain.
 

37.02 Do Patients Buy-In to the Use of Postoperative Life Supporting Treatments? A Qualitative Study

M. J. Nabozny1, J. M. Kruser2, K. E. Pecanac7, E. H. Chittenden5, Z. Cooper6, N. M. Steffens1, M. F. McKneally8,9, K. J. Brasel10, M. L. Schwarze1,4  1University Of Wisconsin,Department Of Surgery,Madison, WI, USA 2Northwestern University,Department Of Medicine,Chicago, IL, USA 4University Of Wisconsin,Department Of Medical History And Bioethics,Madison, WI, USA 5Massachusetts General Hospital,Division Of Palliative Care,Boston, MA, USA 6Brigham And Women’s Hospital,Division Of Trauma, Burns, And Surgical Critical Care,Boston, MA, USA 7University Of Wisconsin,School Of Nursing,Madison, WI, USA 8University of Toronto,Department Of Surgery,Toronto, Ontario, Canada 9University of Toronto,Joint Center For Bioethics,Toronto, Ontario, Canada 10Medical College Of Wisconsin,Department Of Surgery,Milwaukee, WI, USA

Introduction: Before a major operation, surgeons generally assume that patients buy in to the life-supporting interventions that might be necessary postoperatively.  How patients understand this agreement and their willingness to participate in additional treatment are unknown.  The objective of this study was to characterize how patients buy in to treatments beyond the operating room and what limits they would place on additional interventions.

Methods: We performed a qualitative study of preoperative conversations between surgeons and patients at surgical practices in Toronto, ON, Boston, MA, and Madison, WI.  Purposive sampling was used to identify 11 surgeons who are good communicators and routinely perform high-risk operations. Preoperative conversations between each surgeon and 3-7 of their patients were recorded (n = 89).  A subset of 41 patients and their family members were asked to participate in open-ended preoperative and postoperative interviews.  We used qualitative content analysis to analyze the interviews and surgeon visits inductively, specifically evaluating the content of the conversation about the use of postoperative life support.

Results: Thirty-three patients and their family members participated in a preoperative interview, and two of these were lost to follow-up.   Patients expressed confidence that they had a common understanding with their surgeon about how they would be treated if there were a postoperative complication.  However, this agreement was expressed in a variety of ways, from an explicit desire that the surgeon treat any complication to the fullest extent ("Just do what you got to do") to a simple assumption that complications would be treated if they did occur.  Most patients trusted their surgeon to intervene on their behalf postoperatively but expressed a preference for significant treatment limitations that were not discussed with their surgeon preoperatively (see Table).  Furthermore, patients did not discuss their advance directive with their surgeon preoperatively but assumed it would be on file and/or that family members knew their wishes.

Conclusion: Following high risk surgery, patients trust their surgeon to treat complications as they arise.  Although patients buy-in to additional postoperative intervention, they note a broad range of preferences for treatment limitations which are not discussed with the surgeon preoperatively.

37.03 Evaluating Coercion, Pressure, and Motivation in Potential Live Kidney Donors

A. A. Shaffer1, E. A. King1, J. P. Kahn2, L. H. Erby3, D. L. Segev1  1Johns Hopkins University School Of Medicine,Department Of Surgery,Baltimore, MD, USA 2Johns Hopkins School Of Public Health,Berman Institute Of Bioethics,Baltimore, MD, USA 3Johns Hopkins School Of Public Health,Department Of Health And Behavior Sciences,Baltimore, MD, USA

Introduction:
As the shortage of donor organs remains an obstacle, transplantation with living donors is increasing. Live donor kidney transplantation yields improved graft survival and recipient longevity, compared with deceased donors. However, one concern with living donation is the potential risk of coercion or pressure on individuals to donate when approached by a transplant candidate. Currently, there is no widely used, standard test to measure donor pressure in a clinical setting. The purpose of this study was to use a novel assessment to evaluate pressure experienced by live donor candidates, determine primary motivations for considering donation, and identify demographic factors associated with increased pressure.

Methods:
We modified a psychological questionnaire of perceived coercion to generate a novel pressure assessment for potential kidney donors. This assessment is composed of six questions. Each of the first four questions collects information on one element of the decision, including the idea, influence, choice, and freedom. These are answered on a Likert scale from “Strongly Agree” to “Strongly Disagree,” which we convert to a numerical scale from 1 to 5. The fifth question asks the respondent to self-report perceived pressure on a scale from 1 (least pressure) to 5 (most pressure). Results of the first five questions were averaged to compute a pressure score. The sixth question qualitatively ascertains the candidate’s primary motivation for donation. This question requires the respondent to rank her or his reason for donating, in order of importance, from three options. From November 25, 2013, data were prospectively collected on every individual calling our center for live donor evaluation.
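
The scoring described above reduces to a simple average; the sketch below is a minimal illustration (the example responses are hypothetical, not data from the study).

```python
def pressure_score(likert_items, self_reported_pressure):
    """Average of the four Likert items (1 = Strongly Agree ... 5 = Strongly Disagree)
    and the self-reported pressure rating (1 = least pressure, 5 = most pressure)."""
    responses = list(likert_items) + [self_reported_pressure]
    return sum(responses) / len(responses)

# A candidate answering every item at the lowest pressure level scores exactly 1,
# as did 79.2% of respondents in the study.
print(pressure_score([1, 1, 1, 1], 1))   # -> 1.0
print(pressure_score([2, 1, 3, 1], 2))   # hypothetical mixed responses -> 1.8
```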

Results:
Our study population included 400 potential live donors with a mean age of 41.8 years (SD=13.3). Of the participants, 58.8% were female, 72.3% were Caucasian, and 20.4% were African American. The mean pressure score was 1.1 (SD=0.3) and ranged from 1 to 4.2. Of the respondents, 79.2% had a total pressure score of 1, indicating that they had answered each of the scaled questions with the lowest pressure measurement. There was no difference in mean pressure score by age, sex, race, or recipient/donor relationship type. The primary ranked motivation for donation was "I wanted to help my recipient" for 86.3%, "I wanted to give meaning to my life" for 11.3%, and "My family or friends expected me to donate" for 2.4%.

Conclusion:
Most candidates (79.2%) for living kidney donation feel little pressure from others when making the decision to donate, but some (19.2%) report higher than minimal pressure. Our data show that there is no clearly identifiable demographic profile for those who experience pressure. This pressure assessment can be used to identify donor candidates facing pressure to donate early in the evaluation process so that these concerns can be fully addressed prior to donation.
 

37.04 Influence of Do-Not-Resuscitate Status on Vascular Surgery Outcomes

H. Aziz1, B. C. Branco1, J. Braun1, M. Trinidad-Hernandez1, J. Hughes1, J. L. Mills1, J. L. Mills1  1University Of Arizona,Tucson, AZ, USA

Introduction

Do-not-resuscitate (DNR) orders allow patients to communicate their wishes regarding cardiopulmonary resuscitation. Although DNR status may influence physician decision-making beyond resuscitation, the impact of DNR status on the outcomes of patients undergoing emergent vascular operations remains unknown. The aim of this study was to analyze the outcomes of DNR patients undergoing emergency vascular surgery.

Methods

The National Surgical Quality Improvement Program database was queried to identify all patients requiring emergency vascular surgical interventions between 2005 and 2010. Demographics, clinical data, and outcomes were extracted. Patient outcomes were compared according to DNR status and the primary outcome was mortality.

Results

Over the study period, a total of 16,678 patients underwent emergency vascular operations (10.8% of the total vascular surgery population). Of those, 548 (3.3%) patients had a preoperative DNR status. There were no significant differences in rates of open or endovascular repair or in intraoperative blood requirements between the two groups. After adjusting for differences in demographics and clinical data, DNR patients were more likely to have graft failure (8.7% vs. 2.4%; Adj. P<0.01) and failure to wean from mechanical ventilation (14.9% vs. 9.9%; Adj. P<0.001). DNR status was associated with a 2.5-fold increase in 30-day mortality (35% vs. 14%; 95% CI: 1.7-2.9, Adj. P<0.001).

Conclusion

The presence of a DNR order was independently associated with mortality. Patient and family counseling on surgical expectations prior to emergent operations is warranted, as perioperative risks are significantly elevated when a DNR order exists.

37.05 Assessing Surgeon Behavior Change after Anastomotic Leak in Colon Surgery

V. V. Simianu1, A. Basu2, R. Alfonso-Cristancho3, A. D. Flaxman4, D. R. Flum1,3  1University Of Washington,Department Of Surgery,Seattle, WA, USA 2University Of Washington,Department Of Health Services,Seattle, WA, USA 3University Of Washington,Surgical Outcomes Research Center (SORCE),Seattle, WA, USA 4University Of Washington,Institute For Health Metrics And Evaluation,Seattle, WA, USA

Introduction: Breakdown of a colorectal anastomosis is a rare but potentially life-threatening complication. Pressure testing the anastomosis by submerging it in water as air is injected (leak testing) can identify leaks intra-operatively and reduces the risk of leaks after surgery by up to 50%. Surgeons have varying opinions about the value of leak testing, and the field of behavioral economics predicts that perceived value drives behavior. We evaluated the impact of having a surgical leak on a surgeon's leak-testing behavior during subsequent cases, to test the hypothesis that a recent leak would influence the perceived value of leak testing.

Methods: Using a prospectively gathered cohort from the Surgical Care and Outcome Assessment Program (SCOAP) in Washington State, we quantified leak testing during elective colorectal procedures with testable anastomoses (left colectomy, low anterior resection, and total abdominal colectomy) and assessed for adverse events related to leak. We describe patterns of leak testing and leaks, stratified by surgeon volume. Higher volume surgeons were defined as performing 5 or more procedures per year.  To test the hypothesis of behavior change, we explored a difference-in-difference non-parametric model to compare leak testing before and after a leak.

Results: From 2008 to 2013, surgeons performed 7,497 elective colorectal operations across 46 hospitals, with a leak rate of 2.6% (n=195). Higher-volume surgeons accounted for 83.2% of the cases (n=6,234) in the time period. The mean leak-testing rate for all surgeons was 85.9%. While leaks occurred more often in untested cases (3.5% vs. 2.5%, p=0.05), leak events and leak testing did not differ between lower- and higher-volume surgeons. The overall rate of leak testing increased over the study period for both lower-volume (76% to 88%, p=0.007) and higher-volume (82% to 88%, p=0.002) surgeons. Lower-volume surgeons appear to increase their testing after a leak, as shown in Table 1. However, our difference-in-difference analytic model was limited by small sample size at the individual surgeon level; several hundred unique surgeons' data would be needed in each stratum to detect significant differences.

Conclusion: Intraoperative leak testing appears to increase the most for lower-volume surgeons who experienced a leak, suggesting that these surgeons may attribute higher value to leak testing after a leak. For higher-volume surgeons, it may be that surgeon-specific preferences and practice style are more influential in the uptake of leak testing rather than exposure to adverse events.  These insights may help in crafting quality improvement initiatives around colorectal surgery that require clinician behavior change.

 

 

 

37.06 Burns in Nepal: A Population Based Countrywide Assessment

S. Gupta1,2, U. Mahmood3, S. Gurung8, S. Shrestha7, A. G. Charles6, A. L. Kushner2,4, B. C. Nwomeh2,5  1University Of California, San Francisco – East Bay,Surgery,Oakland, CA, USA 2Surgeons OverSeas,New York, NY, USA 3University Of South Florida,Department Of Plastic Surgery,Tampa, FL, USA 4Johns Hopkins Bloomberg School Of Public Health,International Health,Baltimore, MD, USA 5Nationwide Children’s Hospital,Ohio State University, Pediatric Surgery,Columbus, OH, USA 6University Of North Carolina, Chapel Hill,Surgery, Trauma And Critical Care,Chapel Hill, NC, USA 7Nepal Medical College,Surgery,Kathmandu, , Nepal 8Kathmandu Medical College,Kathmandu, , Nepal

Introduction:  The incidence of burns in low- and middle-income countries (LMICs) is 1.3 per 100,000 population, compared with an incidence of 0.14 per 100,000 population in high-income countries, and burns rank among the top 15 leading causes of the global burden of disease.  However, much of the data from LMICs are based on estimates of those presenting to a health facility and may underestimate the true prevalence of burn injury. The purpose of this study was to assess the prevalence of burn injuries at the population level in Nepal, a low-income South Asian country.

Methods:  A cluster randomized, cross sectional country wide survey was administered in Nepal using the Surgeons OverSeas Assessment of Surgical Need (SOSAS) from May 25th to June 12th, 2014.  Fifteen of the 75 districts of Nepal were randomly chosen proportional to population.  In each district, three clusters, two rural and one urban, were randomly selected.  The SOSAS survey has two portions:  the first collects demographic data about the household’s access to healthcare and recent deaths in the household; the second is structured anatomically and designed around a representative spectrum of surgical conditions, including burns.

Results:  In total, 1,350 households comprising 2,695 individuals were surveyed, with a response rate of 97%.  Fifty-five burn injuries were present in 54 individuals (2.0%, 95% CI 1.5% to 2.6%), with a mean age of 30.6 years (SD 2.3, 95% CI 26.0-35.2); 52% were male.  The largest proportion of burns was in the 25-54 age group (2.22%, 95% CI 1.47% to 3.22%), with those aged 0-14 having the second largest proportion (2.08%, 95% CI 1.08% to 3.60%).  The upper extremity was the most common anatomic location affected, accounting for 36.36% of burn injuries.  Of the burns, 60.38% were due to hot liquids and/or hot objects and 39.62% were due to an open fire or explosion.  Eleven individuals with a burn had an unmet surgical need (20%, 95% CI 10.43% to 32.97%).  Barriers to care included facility or personnel not available (8), fear or lack of trust (1), and no money for healthcare (2). Extrapolation suggests that nearly 608,605 people in Nepal have suffered a burn injury, with potentially 124,200 unable to receive appropriate care.

Conclusion:  Burn injuries in Nepal appear to be primarily a disease of adults and are mainly due to scalds, rather than the previously held belief that burn injuries occur mainly in children (0-14) and women and are due to open flames.  These data suggest that the demographics and etiology of burn injuries at the population level differ significantly from hospital-level data.   To tackle the burden of burn injuries, interventions from all public health domains, including education, prevention, healthcare capacity and access to care, need to be addressed, particularly at the community level.  Increased efforts in all spheres would likely lead to a significant reduction in burn-related death and disability.

 

37.07 The Natural Progression of Biliary Atresia in Vietnam

M. B. Liu1, X. Hoang3, T. B. Huong3, H. Nguyen3, H. T. Le4, A. Holterman2  1Stanford University School Of Medicine,Stanford, CA, USA 2University Of Illinois College Of Medicine At Peoria,Department Of Surgery/Pediatric Surgery,Peoria, IL, USA 3National Hospital Of Pediatrics,Hepatology Department,Hanoi, HANOI, Viet Nam 4National Hospital Of Pediatrics,Hanoi, HANOI, Viet Nam

Introduction: While the natural evolution of operated biliary atresia (BA) patients who undergo the Kasai portoenterostomy is well documented, untreated biliary atresia is not a common occurrence in developed countries and has not been well characterized. The objectives of this study were to further characterize unoperated biliary atresia patients and their survival course in Vietnam, a developing country.

Methods: A retrospective chart review was undertaken of the demographics and clinical characteristics of patients diagnosed with biliary atresia between January 2012 and July 2013 at one hospital in Vietnam. Patients identified as unoperated biliary atresia cases were contacted to obtain survival data.

Results: A total of 84 patients (60 in 2012 and 24 in 2013) were identified as unoperated biliary atresia cases out of the 178 patients diagnosed with biliary atresia within the same timeframe. Mean age at diagnosis for the unoperated BA patients was 100+/-84 days, with a median of 69 days. The majority (54%) presented within 2 months of life (10% within 45 days); 33%, 21% and 12% presented after 3, 4, and 6 months of age, respectively. At the time of presentation, the mean +/- SD total bilirubin was 10.3+/-4.5 mg/dL (normal 0.1-1.0 mg/dL), the mean ALT was 141+/-88 u/L (normal <42 u/L), and the mean PELD score was 15+/-21 (median 10-15). The reasons for no surgical treatment were parents' refusal of the Kasai operation, late diagnosis, or lack of access to primary liver transplantation. Follow-up data were limited, since only 12% had at least 1 readmission at the same hospital for complications after their initial diagnosis; the remaining 88% did not return for further management. Follow-up survival and mortality data were obtained for 34 of the 84 unoperated BA cases; the remaining patients could not be contacted. Of the 34 patients, 7 (20%) were still alive as of August 2013, having survived an average of 9.5+/-3.6 months at the time of contact. The remaining 27 deceased patients had a median lifespan of 7.4 months.

Conclusion: Our data provide the most recent survival outcomes for patients with unoperated biliary atresia. They illustrate the multiple contributors to the significant medical burden among patients in Vietnam, including delays in presentation, parents' refusal of surgical treatment, and lack of access to follow-up care. Innovative, non-invasive palliative therapy may be more acceptable to these families and could improve survival and quality of life for these patients.