31.08 Pre-Operative Native TEG Maximum Amplitude Predicts Massive Transfusion in Liver Transplantation

P. J. Lawson1, H. B. Moore1, G. R. Stettler1, T. J. Pshak1, T. L. Nydam1  1University Of Colorado Denver, Aurora, CO, USA

Introduction:
Pre-operative laboratory assessment of liver transplant patients' coagulation status is based on plasma-based assays such as INR. These assays partition coagulation and violate the current understanding of cell-based hemostasis. Whole blood viscoelastic assays such as thrombelastography (TEG) have been used for decades during liver transplant surgery but have not been routinely utilized for pre-operative coagulation assessment. We hypothesize that TEG is a better predictor than INR of perioperative blood product transfusion during liver transplant surgery.

Methods:
Liver transplant recipients had blood drawn prior to surgical incision and assayed with citrated native TEG. Pre-operative labs including liver function tests, coagulation assays, and complete blood counts were collected as part of the standard of care. TEG variables including R-Time, Angle, maximum amplitude (MA), and LY30, along with INR, platelet count, and MELD score, were correlated with red blood cell (RBC), plasma (FFP), cryoprecipitate (Cryo), and platelet (Plts) transfusion during the intraoperative period. Massive transfusion (MT) was defined as >10 units of RBC during surgery. A receiver operating characteristic (ROC) curve was generated to identify which pre-operative value best predicted the need for MT during surgery.
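
For illustration, a minimal Python sketch of the correlation and ROC analysis described above; the file name and columns (ma, rbc_units) are hypothetical, not the authors' dataset:

```python
# A minimal sketch of the Spearman correlation and ROC analysis, assuming a
# per-patient table with hypothetical columns ma (pre-op TEG maximum
# amplitude, mm) and rbc_units (intraoperative RBC units transfused).
import pandas as pd
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

df = pd.read_csv("preop_teg.csv")  # one row per transplant recipient

# Spearman's rho between pre-op MA and intraoperative RBC transfusion
rho, p = spearmanr(df["ma"], df["rbc_units"])
print(f"MA vs RBC: rho={rho:.3f}, p={p:.3f}")

# Massive transfusion (MT): >10 units of RBC during surgery
df["mt"] = (df["rbc_units"] > 10).astype(int)

# Lower MA predicts MT, so use -MA as the classifier score
print("MA AUC =", roc_auc_score(df["mt"], -df["ma"]))
```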

Results:
Eighteen patients were included in the analysis. The median MELD score was 29.0 (20.5-37.0), median INR 1.4 (1.3-2.8), median RBC transfused 3.5 units (0.8-11.3), and median platelet count 91 (50-111). Pre-op TEG values were: R-Time 12.3 min (7.4-12.3), Angle 52.2 degrees (38.6-56.9), MA 51.8 mm (36.5-60.8), and LY30 0.1% (0.0-1.3). A third (33%) of patients required a massive transfusion. Spearman's rho correlations between RBC transfusion and the variables of interest were: R-Time (0.1583, p=0.530), Angle (-0.4623, p=0.534), MA (-0.6268, p=0.005), LY30 (0.1184, p=0.640), INR (0.5013, p=0.422), Plts (0.5053, p=0.040), and MELD (0.4974, p=0.036). Correlation with FFP utilization was significant for MA (p=0.0081), INR (p=0.006), and Plts (p=0.044), but not for the other variables. The correlation for MA persisted for Cryo (p=0.003) and Plts (p<0.001), but that for INR did not. Pre-operative lab and TEG variables correlated with RBC use were then evaluated by ROC analysis (p=0.013): MA had an AUC of 0.941 (p<0.001), superior to the INR AUC of 0.618 (p=0.235).

Conclusion:
Preoperative prediction of blood product utilization in liver transplant surgery remains poorly defined. INR, which drives preemptive plasma transfusion before surgery, was not a good predictor of massive transfusion. Maximum clot strength (MA) measured by TEG was a superior predictor and may help guide more cost-effective blood bank preparation for this procedure, as only a third of patients required a massive transfusion.

31.07 Age is an effect modifier of racial disparities in heart transplant

H. Maredia1, M. Bowring1,2, A. Massie1,2, E. Bush1, D. Segev1,2  1Johns Hopkins University School Of Medicine, Department Of Surgery, Baltimore, MD, USA 2Johns Hopkins University School Of Public Health, Department Of Epidemiology, Baltimore, MD, USA

Introduction:  African American (AA) race is associated with poorer survival among heart transplant recipients. Even after controlling for socioeconomic and clinical characteristics, racial disparities persist and remain unexplained in the literature. Young age (<55) is associated with better survival (Kilic et al. 2012); however, the potential interaction of age with race has not been explored. We examined racial disparities in survival after heart transplant and whether age modifies the association between race and survival.

Methods:  Using the Scientific Registry of Transplant Recipients (SRTR), we performed a retrospective observational study of 29,039 adult heart transplant recipients from 1/1/2000–3/3/2016, comparing post-transplant survival in African American (AA) vs non-AA recipients stratified by age (18–30, 31–40, 41–60, and 61–80). Cox regression was used to compare mortality by race with age as an effect modifier, after adjusting for recipient and donor characteristics.
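
As a hedged sketch, the race-by-age effect modification could be fit as a Cox model with an interaction term, for example with lifelines; the file and column names (years, death, aa, age_cat) are assumptions, not SRTR variables:

```python
# Cox proportional hazards model with a race-by-age-category interaction.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("srtr_heart.csv")  # years of follow-up, death (0/1),
                                    # aa (0/1), age_cat ("18-30", ..., "61-80")

cph = CoxPHFitter()
# "aa * C(age_cat)" expands to main effects plus interaction terms, so the
# hazard ratio for AA race is allowed to differ across age categories
cph.fit(df, duration_col="years", event_col="death",
        formula="aa * C(age_cat)")
cph.print_summary()
```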

Results:  AA recipients were younger at transplant than non-AA recipients, with median (IQR) age of 51 (40–58) vs 56 (48–62) years (ranksum p<0.001). In an adjusted model, AA recipients had a 19% higher risk of death than comparable non-AA recipients (aHR 1.19, 95% CI 1.07–1.33, p=0.002). Survival differed significantly across age categories among both AA and non-AA recipients (both logrank p<0.001; Figure). The association between AA race and mortality was amplified among younger recipients (p<0.001 for race/age interaction). Among recipients aged 18–30, AA recipients were at a 2.00 (95% CI 1.60–2.48) times higher risk of death relative to non-AA recipients; however, among recipients aged 61–80, AA recipients were at a 1.25 (95% CI 1.06–1.46) times higher risk of death compared to non-AA. Among AA recipients, those aged 18–30 had the highest risk of death post-transplant, 1.82 (95% CI 1.51–2.21) times higher than the risk among AA recipients 41–60 years old.

Conclusion: Young AA recipients aged 18–30 years have the highest risk of death post-transplant relative to other age and race groups. Identifying age as an effect modifier of racial disparities will provide better prognostication for transplant candidates and inform improved surveillance of young AA transplant recipients. Further investigation into reasons for reduced survival among young AA recipients is warranted in order to identify opportunities for more effective clinical management and for the reduction of racial disparities.

31.06 Early Post-transplant Clinical Outcomes for Liver Allografts with Any Steatosis

J. P. Davis1, C. A. Kubal1, B. Ekser1, J. A. Fridell1, K. A. Thatch1, R. S. Mangus1  1Indiana University School Of Medicine, Indianapolis, IN, USA

Introduction:
Deceased donor liver transplant allografts with steatosis have an increased risk of post-transplant primary non-function and early allograft dysfunction. The degree of allograft steatosis that results in these negative outcomes is not clearly defined, largely because there is significant variability in grading graft steatosis. This study reviews a large single-center series of liver transplants in which the percent total graft steatosis from the permanent reperfusion biopsy was recorded, and measures early clinical outcomes with increasing severity of steatosis.

Methods:
The records of all liver transplants (LTs) performed over a 15-year period were reviewed. Reperfusion biopsy reports were reviewed and recorded in the transplant center research database. All biopsies were read by experienced liver pathologists from permanent sections taken at transplant. Total steatosis included both micro- and macrovesicular steatosis and was categorized into four study groups: (1) none (0%), (2) mild (1-10%), (3) moderate (11-20%), and (4) severe (>20%). Outcomes included early allograft dysfunction (EAD) and graft loss, peak alanine aminotransferase (ALT) and total bilirubin (TB) values, length of hospital stay, and change in renal function.
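
For illustration, the four steatosis groups could be derived with a simple binning step; the file and the column name total_steatosis_pct are assumed:

```python
# Illustrative binning of total graft steatosis into the four study groups.
import pandas as pd

df = pd.read_csv("reperfusion_biopsies.csv")
df["steatosis_grade"] = pd.cut(
    df["total_steatosis_pct"],
    bins=[-0.5, 0, 10, 20, 100],          # 0 | 1-10 | 11-20 | >20
    labels=["none", "mild", "moderate", "severe"])
print(df["steatosis_grade"].value_counts())
```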

Results:
Data were available for 1864 adult LT recipients, 40% of whom had some liver allograft steatosis.  Donor factors associated with increasing graft steatosis included White race, older age, and chronic alcohol abuse. Pre-procurement donor peak ALT and TB levels were no different for livers with or without steatosis. Recipient post-transplant peak ALT levels were similar to those for non-steatotic livers when there was <10% steatosis; above 10% steatosis, there was an incremental increase (p<0.001). Peak TB did not differ among the groups. EAD was likewise seen only in livers with >10% steatosis, reaching 42% in the 11-20% steatosis group and 54% in the >20% steatosis group (p<0.001). A significantly increased risk of graft loss was seen only in grafts with >20% steatosis (p<0.01 at 7 days and p=0.02 at 30 days). Steatosis >20% was also associated with an acute decrease in glomerular filtration rate (>20% drop) which persisted through day 30.

Conclusion:
Initial liver function is delayed in grafts with more than 10% steatosis, as demonstrated by high peak ALT and a 40-50% risk of early allograft dysfunction.  There is a significant risk of graft loss when total steatosis is greater than 20%. Severe steatosis is also associated with acute kidney dysfunction early post-transplant. The early graft losses seen in livers with >20% steatosis lead to a 10% reduction in survival at 10 years compared to non-steatotic grafts.
 

31.05 Incidence and impact of adverse drug events leading to hospitalization in kidney transplantation

M. Arms1, J. W. McGillicuddy1, S. N. Nadig1, D. J. Taber1  1Medical University Of South Carolina, Transplant Surgery, Charleston, SC, USA

Introduction: Long-term graft survival in kidney transplant recipients remains sub-optimal.  The impact of adverse drug events (ADEs) that contribute to hospitalization, and their role as a predominant risk factor for late graft loss, has not been well studied in this population.

Methods:  This was a retrospective longitudinal cohort study of adult solitary kidney recipients transplanted between 2005 and 2010 with follow-up through May 2016.  Patients were divided into three cohorts: no readmissions, readmissions not due to an ADE, and ADEs contributing to readmissions.  Medication regimens and progress notes were used to assess ADE contribution to hospitalization using validated methodology.  Each ADE's contribution to the readmission was categorized in terms of probability, preventability, and severity.  Predominant readmission etiologies across time, from 2005 to 2013, were compared to assess for temporal trends.

Results: 837 patients with 963 hospital readmissions were included in the study, with a total follow-up of 3,734 patient-years (26 admissions per 100 patient-years).  Of the 837 patients, 47.9% had at least one hospital readmission during follow-up; 65.0% of readmissions were deemed to have an ADE as a contributing cause.  The predominant causes of readmissions related to ADEs were non-opportunistic infections (39.6%), opportunistic infections (10.5%), acute rejection (18.1%), and acute kidney injury not related to rejection or infection (11.8%).  From 2005 to 2013, readmissions due to under-immunosuppression significantly decreased at a rate of 1.6% per year, while readmissions due to over-immunosuppression, as indicated by infection, cancer, or cytopenias, significantly increased at a rate of 2.1% per year (difference 3.7%, p=0.026, see Figure 1).  Significant risk factors for readmission related to an ADE included African American race, increased time on dialysis, increased time on the waitlist, and increased kidney donor profile index (KDPI). The only protective factor was receipt of a living donor kidney.  Delayed graft function, acute rejection, serum creatinine, graft loss, and death were all significantly higher in those with an ADE that contributed to a readmission, as compared to those with a readmission not due to an ADE or those without a readmission during follow-up (p<0.05, see Table 1).

Conclusion: These results provide novel evidence demonstrating that ADEs contribute to a substantial number of readmissions after kidney transplant, which significantly increases the risk of graft loss and death, as compared to those readmitted for other causes or those without readmissions after transplant. 

31.04 Raising Quality or Harming Access? Effects of Systems Improvement Agreements on Organ Transplantation

L. H. Nicholas1,2, D. Segev1,2  1Johns Hopkins University School Of Medicine, Baltimore, MD, USA 2Johns Hopkins School Of Public Health, Baltimore, MD, USA

Introduction: Since 2007, the Centers for Medicare and Medicaid Services (CMS) has required organ transplantation programs that perform poorly on two biannual report cards within a 30-month period to enter a Systems Improvement Agreement (SIA) to continue receiving CMS reimbursement.  Each SIA prescribes a number of steps that transplant centers must take to improve their performance. Centers are also required to notify all patients on their waitlist of the quality problems and to pay all costs associated with a patient joining the waiting list at an additional center.  While this regulatory program is designed to save lives by targeting programs with low 1-year patient and graft survival rates, the impact of SIAs on waitlist candidates is unknown.

Methods: We obtained details of all Systems Improvement Agreements between 2008 and 2014 through a Freedom of Information Act request.  We compared waitlist outcomes for patients on the waitlist before and after their centers entered SIAs using patient-level data from the Scientific Registry of Transplant Recipients.  We also compared SIA centers before and after the SIA to other poor-performing but non-sanctioned centers before and after their second instance of poor performance.  All analyses estimated multivariate regressions adjusting for patient and center characteristics, with standard errors clustered at the center level.
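
A minimal sketch of a patient-level regression with center-clustered standard errors, in the spirit of the analysis above; the file, outcome, and covariates shown are placeholders:

```python
# Linear probability model for a waitlist outcome with standard errors
# clustered at the center level.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("waitlist_periods.csv")  # one row per patient-period

model = smf.ols("left_list_for_transplant ~ post_sia + age + female + nonwhite",
                data=df)
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["center_id"]})
print(fit.summary())
```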

Results: 26 centers entered SIAs for kidney, liver, and lung transplantation during the study period.  Of the 8,506 patients on the waiting list at SIA centers, 41% were female, 64% were non-White, and 38% had private insurance.  On average, only 3% listed at an additional center during any 6-month period, and 8% left the list due to transplant.  Following an SIA, patients were 4 percentage points less likely to leave the list due to transplant (p < 0.01).  There was no change in multi-listing behavior in response to the notification letters.

Conclusion: A CMS quality improvement initiative designed to improve transplant center quality was associated with a reduction in the probability of transplant receipt among patients waiting for transplant at the targeted centers.  Letters describing quality problems and offering to pay the costs associated with moving to another center were insufficient tools to motivate patients to list at higher performing centers. 

 

31.03 Assessment of Risk Factors for Increased Resource Utilization in Kidney Transplantation

S. C. Vranian1, K. L. Covert2, C. R. Mardis3, J. W. McGillicuddy1, K. D. Chavin1, D. Dubay1, D. J. Taber1  1Medical University Of South Carolina, Division Of Transplant Surgery, Charleston, SC, USA 2Medical University Of South Carolina, Department Of Pharmacy Services, Charleston, SC, USA 3Medical University Of South Carolina, Transplant Service Line, Charleston, SC, USA

Introduction: There are limited studies seeking to identify patients at high risk for medication errors and subsequent adverse clinical outcomes in transplant. This study aimed to identify significant risk factors for deleterious outcomes in kidney transplant recipients based on drug-related problems (DRPs) and self-administered surveys.

Methods: This was a prospective observational study. Adult kidney transplant recipients with a clinic visit at our facility between Sept. and Nov. 2015 were eligible to participate. Patients were surveyed for self-reported demographics, medication adherence and health status/outlook. We assessed for associations between survey results, pharmacist-derived DRPs and health resource utilization over an 8-month follow-up period. Based on significant associations, two patient risk cohorts were identified and compared for health care utilization using Poisson regression analysis.  
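
For illustration, the encounter rates could be modeled with a Poisson regression using person-time as an offset; all names below are hypothetical, not the authors' code:

```python
# Poisson regression for encounter counts with a log person-time offset, so
# coefficients are interpretable as log incidence rate ratios.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("utilization.csv")  # encounters, followup_years, mphs, pe

fit = smf.glm("encounters ~ mphs + pe", data=df,
              family=sm.families.Poisson(),
              offset=np.log(df["followup_years"])).fit()
print(np.exp(fit.params))  # incidence rate ratios
```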

Results: 237 patients completed the survey and were included, with a mean follow-up of 8 months. Among the patient-reported data, those who received Medicaid insurance or rated their health as poor (M/PHS) were identified as a significant risk cohort. For the pharmacist assessment, patients who were receiving an incorrect medication or did not have appropriate follow-up medication monitoring were identified as a significant risk cohort (pharmacy errors [PE]). The M/PHS cohort experienced 11.4 encounters per patient-year and the PE cohort 34.2 encounters per patient-year, while the corresponding non-risk cohorts experienced 8.8 and 9.0 encounters per patient-year, respectively. Poisson regression demonstrated that the M/PHS cohort experienced 43% more total encounters (p<0.05), 31% more admissions, and 35% more outpatient transplant clinic visits (p<0.05). The PE cohort experienced 4.2 times more total encounters (p<0.05), 4.1 times more admissions (p<0.05), and 2.3 times more outpatient transplant clinic visits (p<0.05). A composite cohort comprising either M/PHS or PE patients experienced 56% more total encounters (p<0.05), 58% more admissions (p<0.05), and 41% more outpatient transplant clinic visits (p<0.05).

Conclusions: This prospective observational study identified both patient-reported and pharmacist-derived risk factors that were associated with a significant increase in health care encounters; these risks included Medicaid insurance, poor self-reported health status, medication errors and lack of proper medication monitoring. These factors increased the rate of health care encounters by 30 to 400% during an 8-month follow-up period. Further research is warranted to validate these risks, determine their impact on graft and patient survival and develop risk-mitigation strategies to improve patient care and outcomes.

 

31.02 Prospective Study of a Steroid-Free, Low Dose Tacrolimus with Everolimus Regimen in Kidney Transplant

D. Lyubashevsky1, A. Shetty2, J. Leventhal2, M. J. Ansari2, V. Mas3, J. Matthew2, L. Gallon2  1New York Medical College, Valhalla, NY, USA 2Northwestern University, Chicago, IL, USA 3Virginia Commonwealth University, Department Of Surgery, Richmond, VA, USA

Introduction:
Calcineurin inhibitors (CNIs) such as tacrolimus (FK) are considered the mainstay of immunosuppression (IS) after kidney transplantation. However, CNIs are also known to have nephrotoxic properties: potent renal vasoconstriction can lead to irreversible and progressive tubulo-interstitial injury and glomerulosclerosis. One approach to circumvent this problem is CNI minimization or 'sparing' by substitution or addition of a non-CNI immunosuppressive agent such as mycophenolate mofetil (MMF) or a mammalian target of rapamycin inhibitor (mTORi). We hypothesize that combining low-dose FK with the mTORi everolimus may improve renal allograft outcomes, namely allograft survival and estimated GFR (eGFR).

Methods:
40 adult kidney transplant recipients were randomized at the time of transplant to 2 groups: low-dose FK with everolimus, or standard-dose FK with MMF. All patients received alemtuzumab (Campath) induction and rapid steroid elimination. Everolimus levels were maintained between 3-8 ng/ml. Blood samples were taken at the time of transplant and at 3, 6, 12, 18, and 24 months post-randomization to record eGFR and serum drug concentrations. Kidney biopsy data were gathered at transplant and at 3, 12, and 24 months post-transplant. Primary outcomes were rejection-free graft survival and eGFR.

Results:
Mean follow-up time was similar between groups: 26 ± 8.5 months for FK/Everolimus and 26 ± 10.9 months for FK/MMF (p = 0.975). Baseline characteristics such as demographics, HLA match, and time on dialysis were statistically similar between the two groups. FK levels in the FK/Everolimus group (4.2 ± 1.56 ng/mL) were significantly lower (p=0.003) than in the FK/MMF group (6.45 ± 1.96 ng/mL). The FK/Everolimus group experienced 1 episode of acute rejection compared to 4 episodes in the FK/MMF group; on Cox proportional hazards survival analysis, rejection-free graft survival (Fig 1A) was statistically similar between groups. eGFRs (FK/Everolimus = 61.535 ± 3.44 mL/min/1.73 m2; FK/MMF = 59.03 ± 3.24 mL/min/1.73 m2) were statistically similar between groups (Fig 1B), as were adverse events, including proteinuria, infectious events, and death.

Conclusion:
A combination of low-dose FK with everolimus may be an alternative immunosuppressive strategy to the standard-of-care FK + MMF combination in a steroid-free maintenance IS program. The favorable impact of mTORi on T-regulatory cells may provide added protection against rejection episodes. Longer clinical follow-up, a larger sample size, and evaluation of renal biopsy data and circulating T-regulatory cell populations are warranted in future investigation.
 

31.01 Neutrophil–lymphocyte Ratio Reflects Cancer Recurrence After Liver Transplantation

T. Motomura1, T. Yoshizumi1, A. Nagatsu1, S. Itoh1, N. Harada1, N. Harimoto1, T. Ikegami1, Y. Soejima1, Y. Maehara1  1Kyushu University, Department Of Surgery And Sciences, Fukuoka, Japan

Introduction:

Although the Milan criteria (MC) have been used to select liver transplantation candidates among patients with hepatocellular carcinoma (HCC), many patients exceeding the MC have shown good prognosis. Recently, inflammation-based scores such as the preoperative neutrophil–lymphocyte ratio (NLR) and platelet–lymphocyte ratio (PLR) have received attention as predictors of patient prognosis in various malignancies, but it has yet to be clarified which is the better criterion for liver transplantation in HCC.

Methods:
We assessed outcomes in 202 patients who had undergone living-donor liver transplantation (LDLT) for HCC (Jul 1999-Sep 2015). Recurrence-free survival (RFS) was determined in patients with high (≥4) and low (<4) NLR, and with high (≥150) and low (<150) PLR; the cut-off values were determined according to previous reports. Next, levels of expression of vascular endothelial growth factor (VEGF), interleukin (IL)-8, IL-17, CD68, and CD163 were measured.
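
A hedged sketch of the Kaplan-Meier and log-rank comparison by the NLR cutoff; the file and column names are assumptions, not the authors' code:

```python
# Kaplan-Meier recurrence-free survival by NLR group plus a log-rank test.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("ldlt_hcc.csv")  # rfs_months, recurrence (0/1), nlr
high = df["nlr"] >= 4

kmf = KaplanMeierFitter()
for label, grp in [("NLR>=4", df[high]), ("NLR<4", df[~high])]:
    kmf.fit(grp["rfs_months"], event_observed=grp["recurrence"], label=label)
    # 5-year RFS estimate (60 months)
    print(label, "5-yr RFS:", float(kmf.survival_function_at_times(60).iloc[0]))

res = logrank_test(df.loc[high, "rfs_months"], df.loc[~high, "rfs_months"],
                   event_observed_A=df.loc[high, "recurrence"],
                   event_observed_B=df.loc[~high, "recurrence"])
print("log-rank p =", res.p_value)
```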

Results:
The 5-year RFS rate was significantly lower in patients with high (n=29) than with low (n=172) NLR (52.8% versus 88.9%, P<0.0001), and likewise in patients with high (n=17) than with low (n=185) PLR (55.9% versus 86.8%, P<0.05). Multivariate analysis showed that both NLR and MC significantly correlated with HCC recurrence (p=0.0009 and p=0.0001, respectively), whereas PLR did not (p=0.71). Combining NLR with the MC, the 5-year RFS rate was significantly lower in patients with high (n=11) than with low (n=57) NLR among those who exceeded the MC (19.5% versus 76.8%, P=0.0002). Tumor expression of VEGF, IL-8, IL-17, CD68, and CD163 was similar in the high and low NLR groups, but serum and peritumoral IL-17 were significantly higher in the high-NLR group (P=0.01 each). Peritumoral CD163 correlated with peritumoral IL-17-producing cells (P=0.04) and was significantly higher in the high-NLR group (P=0.005).

Conclusion:
NLR predicts outcomes after LDLT for HCC via the inflammatory tumor microenvironment. Combined with the MC, NLR may be a new criterion for selecting LDLT candidates with HCC.
 

30.10 Impact of Neoadjuvant Dose Escalation on Downstaging & Perioperative Mortality in Esophageal Cancer

S. Ji3, S. Thomas5,7, K. Anderson3, J. Frakes2, S. Roman4,7,8, J. A. Sosa4,6,7,8, T. Robinson2  1Duke University Medical Center, Durham, NC, USA 2Moffitt Cancer Center And Research Institute, Tampa, FL, USA 3Duke University Medical Center, School Of Medicine, Durham, NC, USA 4Duke University Medical Center, Department Of Surgery, Durham, NC, USA 5Duke University Medical Center, Department Of Biostatistics And Bioinformatics, Durham, NC, USA 6Duke University Medical Center, Department Of Medicine, Durham, NC, USA 7Duke Cancer Institute, Durham, NC, USA 8Duke Clinical Research Institute, Durham, NC, USA

Introduction:

The addition of neoadjuvant chemoradiation prior to resection of locally advanced esophageal cancer has been shown to improve disease-free and overall survival. However, the optimal radiation dose remains unknown, and conventional U.S. practice has been to use a higher dose (50.4 Gy) than that used in recent European trials (41.4 Gy). Our objective was to characterize current U.S. practice patterns and compare primary tumor and nodal down-staging, perioperative mortality, and overall survival as a function of total radiation dose. 

Methods:

We performed a retrospective analysis of adult patients with non-metastatic esophageal cancer diagnosed between 2004 and 2013 within the National Cancer Data Base who were treated with neoadjuvant chemoradiotherapy followed by resection. The primary outcome was overall survival. Secondary outcomes included 30- and 90-day mortality and pathologic down-staging. Univariate and multivariate analyses were used to assess the association between selected outcomes and total radiation dose (41.4, 45.0 or 50.4 Gy) after controlling for patient demographic and clinical factors.

Results:

A total of 5,835 patients met inclusion criteria: 154 (2.6%) received 41.4 Gy, 1,696 (29.1%) received 45 Gy, and 3,985 (68.3%) received 50.4 Gy. Patient demographic characteristics and comorbidities were balanced among groups. Use of 41.4 and 50.4 Gy both increased substantially during the study period (2.1% to 6.3% and 45.1% to 75.4%, respectively), while use of 45 Gy decreased (52.9% to 18.3%) (p<0.001). Compared with the 41.4 Gy group, patients receiving 45 and 50.4 Gy had higher rates of nodal down-staging (49% and 48% vs. 38%, respectively; p=0.05). Survival outcomes including 30-day, 90-day, and overall survival did not vary significantly by radiation dose; however, patients receiving 41.4 Gy had numerically lower 30-day and 90-day mortality rates (0.0% and 1.3%) compared to those receiving 45.0 Gy (2.8% and 7.0%) or 50.4 Gy (2.7% and 6.1%) (p=0.21 for 30-day and p=0.16 for 90-day mortality).

Conclusion:

To our knowledge, this study provides the first nationally representative assessment of neoadjuvant chemoradiation dose escalation practice patterns in the treatment of locally advanced esophageal cancer in the U.S. We observed no statistically significant differences in overall or short-term survival as a function of radiation dose. Although higher radiation doses were significantly associated with improved nodal down-staging, lower dose radiation exhibited a non-significant trend towards lower 30- and 90-day mortality rates. Our study lends support to neoadjuvant approaches that balance lower elective doses (41.4 Gy) to minimize toxicity while maintaining higher doses (50.4 Gy) to gross disease to maximize locoregional control. Further research is warranted to assess the impact of neoadjuvant radiation dose escalation on locoregional disease control, perioperative complications, and overall survival. 

30.09 Stromal MZB1 is a Prognostic Factor of Pancreatic Cancer Resected After Chemoradiotherapy

K. Miyake1, R. Mori1, R. Matsuyama1, Y. Homma1, A. Okayama2, Y. Ota1, K. Taniguchi1, H. Hirano2, I. Endo1  1Yokohama City University, Department Of Gastroenterological Surgery, Yokohama, Kanagawa, Japan 2Yokohama City University, Graduate School Of Medical Life Science And Advanced Medical Research Center, Yokohama, Kanagawa, Japan

Introduction: Pancreatic ductal adenocarcinoma (PDAC) is classified into three types according to resectability in the NCCN Guidelines: resectable, borderline resectable (BR), and unresectable. BR tumors invade surrounding major arteries and/or veins, so it is not easy to achieve R0 resection with upfront surgery. Recently, several studies have reported that neoadjuvant chemoradiotherapy (NACRT) for BR-PDAC improves prognosis and resectability and eradicates micrometastases. Furthermore, it is presumed that NACRT induces antitumor immunity, and the accumulation of tumor-infiltrating lymphocytes (TILs) correlates with prognosis. In our department, we have conducted clinical research on NACRT for BR-PDAC since Jan 2009 and have previously reported that high CD8+ TILs may be a predictive marker of long survival in these cases. However, the features of cases with high CD8+ TILs have not been clarified. In this study, we performed proteomic analysis to identify a predictive marker of high CD8+ TIL accumulation.

Methods: We studied 72 BR-PDAC cases resected after NACRT from Jan 2009 to Mar 2014. Three matched pairs of high CD8+ TIL cases with good prognosis and low CD8+ TIL cases with poor prognosis were selected. Shotgun proteomics was performed using the cancerous portion and tumor stroma extracted from formalin-fixed, paraffin-embedded tissue samples. For validation of identified proteins, immunohistochemistry (IHC) was performed. 44 PDAC cases treated with upfront surgery from 2006 to 2014 were evaluated for comparison. Relationships between the identified proteins and NACRT, TILs, and clinical outcomes were assessed by statistical analysis.

Results: 369 proteins were identified by shotgun proteomics, and 6 proteins showed statistically different expression. From these candidates, we selected one protein, marginal zone B and B1 cell specific protein (MZB1), which is known as a B-lineage cell specific protein. MZB1 expression was detected only in tumor stroma, and tumor cells were negative. Consistent with the proteomic analysis, IHC showed high expression of stromal MZB1 in long-survival cases with high CD8+ TILs. In the NACRT group (n=72), high expression of stromal MZB1 was positively correlated with the accumulation of CD8+ TILs (|R|=0.347, p=0.002). Patients with high accumulation of stromal MZB1 (≥207) had longer overall survival (OS) than others (3-year survival, MZB1 high vs low: 60.2% vs 28.6%, p=0.014). Among the 36 patients with high CD8+ TILs in the NACRT group, there was a statistically significant relationship between high expression of stromal MZB1 and OS (3-year survival, MZB1 high vs low: 72.9% vs 42.9%, p=0.003). In the upfront surgery group (n=44), there were no significant relationships between stromal MZB1 and the accumulation of CD8+ TILs or OS.

Conclusion: MZB1 may be a predictive marker of high CD8+ TILs and long-term survival in BR-PDAC cases resected after NACRT. Furthermore, MZB1 may have a promotive effect on antitumor immunity.

 

30.08 Malignant Large Bowel Obstruction: Is Less More?

P. J. Chung1, M. C. Smith1, H. Talus3, V. Roudnitsky2, A. Alfonso1, G. Sugiyama1  1State University Of New York Downstate Medical Center, Surgery, Brooklyn, NY, USA 2Kings County Hospital Center, Acute Care Surgery/Trauma, Brooklyn, NY, USA 3Kings County Hospital Center, Surgery, Brooklyn, NY, USA

Introduction:
Colorectal cancer is the fourth most common malignancy in the United States, with over 134,000 new cases expected in 2016. Though many of these cases are early-stage and identified on screening colonoscopy, a subset of patients is detected because they present with large bowel obstruction (LBO). These patients are likely to require urgent or emergent operative therapy. Because there are several options for managing this condition, we used a large national database to investigate the outcomes of patients who present with LBO.

Methods:
Data were collected from the Nationwide Inpatient Sample (NIS) 2010 – 2012. We included patients with a diagnosis of LBO (560.89, 560.9) and a confirmed diagnosis of colorectal cancer (153 – 154). To identify patients with average risk, we excluded patients with familial syndromes (e.g. Familial Adenomatous Polyposis), concurrent neoplasms, age <60 years, and missing race data. We calculated the Elixhauser-Van Walraven score to assess comorbidity status. We identified patients that underwent non-surgical therapy (non-invasive or invasive diagnostic modalities, with resuscitation and/or percutaneous drainage, with or without subsequent chemotherapy), diversion alone, diversion followed by either open or laparoscopic resection, colonic stenting alone, stenting followed by either open or laparoscopic resection, or either open or laparoscopic resection alone. Multiple imputation was performed. Using inpatient mortality as the outcome variable, we performed multivariable logistic regression using age, gender, race, insurance status, income status, elective procedure status, hospital size, urban vs rural hospital setting, geographic region, type of procedure performed, tumor location, presence of perforation, and Elixhauser-Van Walraven score as risk variables.
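
For illustration, a minimal sketch of the multivariable logistic model; the predictors shown are a subset of those listed, with assumed column names:

```python
# Multivariable logistic regression for inpatient mortality; exponentiated
# coefficients are odds ratios.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nis_lbo.csv")

fit = smf.logit("died ~ age + perforation + vanwalraven + C(management)",
                data=df).fit()
print(np.exp(fit.params))       # odds ratios
print(np.exp(fit.conf_int()))   # 95% confidence intervals
```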

Results:
6,308 patients met the inclusion criteria, of whom 473 (7.50%) died. The median age was 74.0 years, and 80.23% underwent an emergent procedure. After adjusting for all risk variables, age (OR 1.67 [1.39 – 2.00], p<0.0001), perforation (OR 2.85 [1.97 – 4.11], p<0.0001), Elixhauser-Van Walraven score (OR 1.97 [1.71 – 2.27], p<0.0001), and non-surgical management compared to open resection alone (OR 2.06 [1.60 – 2.65], p<0.0001) were predictive of mortality. However, laparoscopic resection compared to open was associated with a decreased risk of mortality (OR 0.33 [0.17 – 0.67], p<0.0001).

Conclusion:
In this large observational study of patients presenting with LBO due to colorectal cancer, we found that age, perforation, increasing comorbidities, and non-surgical management were associated with a significantly increased risk of mortality, while undergoing a laparoscopic compared to open resection was associated with decreased risk of mortality. Further prospective studies are warranted to study longer term outcomes and better inform operative planning, particularly as less invasive options become more widely available.
 

30.07 Impact of Peer Support on Colorectal Cancer Patients’ Adherence to Recommended Multidisciplinary Care

A. E. Kanters1, A. M. Morris1, P. H. Abrahamse2, L. Mody3, P. A. Suwanabol1  1University Of Michigan, Department Of General Surgery, Ann Arbor, MI, USA 2University Of Michigan, Center For Cancer Biostatistics, Ann Arbor, MI, USA 3University Of Michigan, Department Of Internal Medicine, Ann Arbor, MI, USA

Introduction:  Multidisciplinary care is critical for the successful treatment of Stage III colorectal cancer (CRC), yet postoperative receipt of chemotherapy remains unacceptably low for unclear reasons. Peer support, or exposure to others who have undergone similar diagnoses and treatment, has been proposed as a means to improve patient acceptance of and coping with cancer care. However, the specific impact of peer support on colorectal cancer patients’ attitudes toward and adherence to recommended chemotherapy is unknown. 

Methods:  We conducted a population-based survey of patients in the Detroit and Georgia Surveillance, Epidemiology and End Results regions after surgery for Stage III CRC between 2011 and 2013. For this study, we assessed patient-reported exposure to any peer support, adequacy of peer support, and attitudes toward chemotherapy, and analyzed their association with receipt of postoperative chemotherapy using χ2 tests.
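
A minimal example of the χ2 association test described above, with assumed survey column names:

```python
# Chi-square test of association between peer-support exposure and receipt
# of adjuvant chemotherapy, built from a 2x2 contingency table.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("crc_survey.csv")  # peer_support (0/1), chemo (0/1)
table = pd.crosstab(df["peer_support"], df["chemo"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, df={dof}, p={p:.4f}")
```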

Results: Among 1281 patient respondents (68% response rate), 56% reported exposure to some form of peer support. Exposure to peer support was associated with younger age, higher income, and having a spouse or domestic partner (p<0.001, p=0.016, and p<0.001, respectively). Exposure to any peer support was significantly associated with receipt of adjuvant chemotherapy (p<0.001), but amount or adequacy of peer support was not (p=0.74). Respondents reported that exposure to peer support had a primarily positive impact on their attitudes (e.g., 73% indicated that it helped them know what to expect). However, the few who reported a negative impact on attitudes (e.g., 11% indicated that it made them more scared or anxious about treatment) were less likely to receive chemotherapy (p=0.020). Male patients and those with lower levels of education were more likely to report that peer support helped with decision making about chemotherapy (p=0.007 and p=0.012, respectively).

Conclusion: Our study demonstrates that peer support is associated with overall higher rates of postoperative chemotherapy adherence, except in the rare instances of a negative peer support experience. These data suggest that a facilitated peer support program could positively influence treatment decision making and uptake of recommended multidisciplinary care. 

30.06 Hospital Minimally Invasive Surgery Utilization for Gastrointestinal Cancer

M. C. Mason1,2, H. S. Tran Cao2, S. S. Awad1,2, F. Farjah3, G. J. Chang4, C. Chai2, N. N. Massarweh1,2  1Michael E. DeBakey VA Medical Center, Houston VA Center For Innovations In Quality, Effectiveness, And Safety, Houston, TX, USA 2Baylor College Of Medicine, Michael E. DeBakey Department Of Surgery, Houston, TX, USA 3University Of Washington, Department Of Surgery And Surgical Outcomes Research Center, Seattle, WA, USA 4The University Of Texas MD Anderson Cancer Center, Department Of Surgical Oncology And Health Services Research, Houston, TX, USA

Introduction: Laparoscopic and robotic techniques are applied across surgical specialties. However, the extent to which these minimally invasive surgery (MIS) techniques are applied for gastrointestinal (GI) cancer resection has not been well defined and the impact of receiving care at high MIS utilizing hospitals is unclear.

Methods: Retrospective cohort study of 137,581 surgically resected esophageal, gastric, pancreatic, hepatobiliary, colon, and rectal cancer patients within the National Cancer Data Base (2010-2013). Disease-specific, reliability-adjusted MIS utilization and conversion-to-open rates were calculated for each hospital and used to stratify hospitals into quartiles. Among gastric, pancreatic, and colon patients for whom adjuvant chemotherapy (AC) was indicated, the association between days to AC and hospital MIS utilization was examined using generalized estimating equations.  The association with risk of death was evaluated with multivariable Cox regression.
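
As a sketch, the days-to-AC analysis with hospital-level clustering could be fit as a generalized estimating equation; the file and column names are hypothetical:

```python
# GEE for days to adjuvant chemotherapy with an exchangeable correlation
# structure to account for clustering of patients within hospitals.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("ncdb_gi.csv")  # days_to_ac, hosp_mis_util, hospital_id

fit = smf.gee("days_to_ac ~ hosp_mis_util + age + stage",
              groups="hospital_id", data=df,
              family=sm.families.Gaussian(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(fit.summary())
```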

Results: While disease-specific MIS use increased significantly (42.0-68.3% increase; trend test, p<0.001 for all but hepatobiliary [p=0.007]), most hospitals remained low MIS utilizers. High MIS utilization was associated with more lymph nodes examined (p<0.001 for all) and shorter length of stay (LOS) (p<0.001 for all). Among colon and rectal patients, mortality at 30 days (colon: 0.7% lowest MIS quartile vs 0.4% highest quartile, trend test p<0.001; rectal: 1.1% vs 0.8%, trend test p=0.018) and 90 days (colon: 2.6% vs 2.0%, trend test p=0.002; rectal: 2.4% vs 1.6%, trend test p=0.002) was lower at higher MIS-utilizing hospitals. Except for colon, case volume was highest at hospitals in the lowest and highest conversion-to-open quartiles. However, hospital conversion rates were not clearly associated with worse perioperative outcomes. For gastric cancer, each 10% increase in hospital MIS utilization was associated with 3.3 [95% CI 1.2-5.3] fewer days to AC initiation. While this association was not observed for pancreatic or colon patients overall, time to AC was decreased by 3.3 [0.7-5.8] days for gastric and 1.1 [0.3-2.0] days for colon patients who had open resection.  Relative to the lowest-quartile hospitals, care at higher MIS-utilizing hospitals was associated with a lower risk of death for colon (Q2: HR 0.96 [0.89-1.02]; Q3: HR 0.91 [0.86-0.98]; Q4: HR 0.87 [0.82-0.93]) and rectal cancer patients (Q2: HR 0.89 [0.76-1.05]; Q3: HR 0.84 [0.72-0.97]; Q4: HR 0.86 [0.74-0.98]).

Conclusions: Although MIS use for GI cancer has increased, most hospitals remain low utilizers. Shorter LOS at high-utilizing hospitals and the lack of a clear association between hospital conversion rates and perioperative outcomes potentially reflect the real-world effectiveness of MIS.  As data regarding MIS for GI cancer resection evolve, MIS utilization may help identify hospitals with infrastructure and care processes that can be used to facilitate multimodality cancer care.

30.05 Postoperative Outcomes From Rectal Cancer Resection in the U.S.: Still Room For Improvement

L. Gregorian1, E. Vo1, L. Haubert1,2, E. Choi1,2, S. S. Awad1,3, A. Artinyan1,2  1Baylor College Of Medicine, Houston, TX, USA 2Baylor St. Luke's Medical Center, Houston, TX, USA 3Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA

Introduction:
Colorectal cancer is a leading cause of cancer death in the US. We have previously described changes in cancer-specific rectal cancer treatment and long-term survival over the last 4 decades. The aim of our current study was to describe changes in early postoperative outcomes after curative-intent surgery for rectal cancer in the US. We hypothesized that postoperative outcomes such as length of stay (LOS), mortality, and postoperative complications have improved over time.

Methods:
The National Inpatient Sample and the Nationwide Inpatient Sample (NIS), Healthcare Cost and Utilization Project (HCUP), Agency for Healthcare Research and Quality data were queried in 5-year intervals from 1993 to 2013 for patients with rectal adenocarcinoma, older than 18 years of age, who had undergone curative-intent surgery (n=16,419). Baseline characteristics (age, gender, type of operation) and postoperative outcomes (LOS, inpatient mortality, discharge disposition, and postoperative complications) were described, and clinical/demographic characteristics and postoperative outcomes were compared by discharge year. Continuous variables were compared using 1-way analysis of variance (ANOVA) or non-parametric tests, and categorical variables were compared using the chi-square test.

Results:
The mean age of the entire population was 65.6±13.1 years; 58.7% of patients were male, and median LOS was 8 (IQR 4-11) days. Mean age at diagnosis decreased with time (from 68.3±12.1 years in 1993 to 62.6±13.0 years in 2013, p<0.001). The proportion of male patients increased in the same time period (56% to 62%, p<0.001). As in our prior study, sphincter-preserving operations increased significantly over time (51% in 1993 to 60.5% in 2013, p<0.001). During the same time period, perioperative hemorrhage and inpatient mortality decreased from 3.6% to 1.6% (p<0.001) and from 1.9% to 0.7% (p<0.001), respectively. There was no clinically significant change in the surgical site infection (SSI) rate (4.3% to 4.6%, p<0.001), whereas anastomotic leak and digestive complications increased over time (9.8% to 12.7%, p<0.001). Median LOS decreased significantly from 10 (IQR 7-13) to 6 (IQR 4-9) days (p<0.001). However, non-home discharges and home-health use increased from 8.3% to 11.4% and from 23.5% to 42.7%, respectively (p<0.001).

Conclusion:
The treatment of rectal cancer continues to evolve, with a greater emphasis on sphincter-preserving surgery, as well as decreases in perioperative hemorrhage and inpatient mortality. However, the rate of SSIs has not changed meaningfully and the risk of anastomotic and other digestive complications has increased, potentially secondary to anatomically lower pelvic anastomoses. Although LOS has decreased, there has been an increase in transitional care and home-health service needs. A shift toward organ-preserving strategies is likely necessary to further improve post-operative outcomes from rectal cancer surgery.

30.04 Age is an Important Risk Stratifier for Lymph Node Metastasis in Patients with Thin Melanoma

A. J. Sinnamon1, M. G. Neuwirth1, R. L. Hoffman1, D. E. Elder2, X. Xu2, R. R. Kelz1, R. E. Roses1, D. L. Fraker1, G. C. Karakousis1  1Hospital Of The University Of Pennsylvania, Endocrine And Oncologic Surgery, Philadelphia, PA, USA 2Hospital Of The University Of Pennsylvania, Department Of Pathology, Philadelphia, PA, USA

Introduction:
While the association of age with nodal metastases and outcomes in patients with melanoma has been recognized and variably reported upon, the influence of age on nodal positivity in patients with thin melanoma has been less well studied, limited by few events in institutional experiences.  Using a large national dataset, we studied the association of age with nodal positivity in thin melanoma and its implications for current recommendations for sentinel lymph node (SLN) biopsy in this patient population.

Methods:
Patients with clinical stage I thin melanoma (0.50-1.0 mm) diagnosed from 2010-2013 who underwent wide excision and had any lymph nodes (LNs) pathologically evaluated were identified using the National Cancer Data Base (NCDB). Nodes were defined as either positive or negative based on the presence of any metastatic disease. Age was categorized as <40 years, 40-64 years, and ≥65 years. Clinicopathologic factors associated with LN positivity were identified using the chi-square or Fisher exact method as indicated. Multivariable logistic regression was performed to identify predictors of LN positivity.

Results:
From 2010-2013, 8772 patients underwent wide excision and evaluation of regional LNs. Of these, 333 were found to have nodal spread, for an overall positivity rate of 3.8%. Median age was 56y (IQR 46-67y) in those with negative LNs and 52y (IQR 41-61y) in those with LN disease (p<0.001). By multivariable analysis, age ≥65 years, thickness ≥0.76mm, increasing Clark level, mitoses, ulceration, and acral lentiginous or epithelioid histology were independently associated with LN positivity. Age reliably stratified patients for LN positivity across other high-risk features, namely tumor depth, mitogenicity, and ulceration status (figure).  Patients <40yo with T1a tumors <0.76mm (who would not generally be recommended SLN biopsy) had a LN positivity rate of 5.56% (18/324 patients); conversely, patients ≥65yo with T1b tumors ≥0.76mm (who would generally be recommended SLN biopsy) demonstrated a LN positivity rate of 3.87% (37/956).  This pattern remained unchanged when Clark level IV/V was included as a worrisome feature in addition to mitogenicity and ulceration.

Conclusion:

Current guidelines for SLN biopsy in patients with thin melanoma, focused on tumor variables, may be too restrictive in young patients and overly permissive among patients ≥65 years using a 5 percent threshold for LN positivity; patient age should be an important factor when counseling these patients about lymph node evaluation.

30.03 Impact of Time to Surgery in Patients with Clinical Stage I-II Pancreatic Adenocarcinoma

D. S. Swords1, C. Zhang2, A. P. Presson2, M. A. Firpo1, S. J. Mulvihill1, C. L. Scaife1  1University Of Utah, Department Of Surgery, Salt Lake City, UT, USA 2University Of Utah, Study Design And Biostatistics Center, Salt Lake City, UT, USA

Introduction:  Timeliness is a domain of healthcare quality, and wait times for cancer surgery have increased in recent years. Time to surgery (TTS) from diagnosis in pancreatic adenocarcinoma (PDAC) may be delayed due to the need for biliary decompression, multi-disciplinary review, or medical optimization. Existing data on the clinical impact of TTS have been conflicting.

Methods:  The National Cancer Database was reviewed from 2004-2012 for patients undergoing upfront resection of clinical Stage I-II PDAC with data on TTS. TTS was defined as time from diagnosis to resection. Patients with TTS of 0 or > 120 days and those that received neoadjuvant therapy were excluded. Patients with unknown clinical stage were excluded if pathologic stage was III-IV. Overall survival (OS) began at time of surgery and was the primary outcome. Multivariable Cox regression with TTS modeled as a restricted cubic spline was used to evaluate the relationship between TTS and mortality in order to define TTS groups. OS was evaluated with unadjusted Kaplan-Meier analysis and multivariate Cox regression analysis. Secondary outcomes were rates of positive margins, nodal positivity, and upstaging from clinical to pathologic stage; they were examined using logistic regression models adjusted for demographic and clinical characteristics.

Results: There were 15,945 patients available for analysis. Patients with TTS ≤ 2 weeks had the highest risk of mortality with a gradual decrease to 40 days, and then a gradual increase to 120 days. We thus defined TTS as: short (1-14 days, N=5,465), medium (15-42 days, N=8,241), and long (43-120 days, N=2,239). Adjusted odds of positive margins, nodal positivity, and upstaging were not significantly different between TTS groups. On unadjusted survival analysis, short TTS patients had slightly worse survival than medium and long (P<0.001, Log-rank). Survival differences between TTS groups were most pronounced in Stage I patients; long TTS  had superior survival to medium TTS, which was superior to short TTS (P<0.001 for both, Log-rank, Figure). On multivariate Cox proportional hazard analysis, short vs. medium TTS was associated with modestly increased hazards of mortality (Hazard ratio [HR] 1.07, 95% confidence interval [CI] 1.02-1.11, P=0.003) but long vs. medium was not (HR 0.95, 95% CI 0.9-1.01, P=0.12).

Conclusion: Moderately longer TTS was not associated with worse outcomes and short TTS was associated with higher mortality, especially in Stage I disease. These findings should reassure patients and providers that reasonable delays are likely safe. However, we could not account for patients who initially were planned for resection but progressed on repeat imaging or who were unresectable on exploration.

30.02 Disparities in Managing Emotions when Facing Breast Cancer: Results of Couples Distress Screening

S. Dumitra1, V. Jones1, C. Vito1, J. Rodriguez1, C. Bitz1, E. Polamero2, M. Loscalzo2, R. Obenchain2, L. Kruper1, S. G. Warner1  1City Of Hope National Medical Center, Department Of Surgery, Duarte, CA, USA 2City Of Hope National Medical Center, Department Of Population Sciences, Duarte, CA, USA

Introduction: Distress screening and referral is now required for cancer center accreditation. Understanding patient and caregiver stress is critical to successful cancer care. This study examines the perceived emotional impact of breast cancer on both patients and partners.

Methods: From March 2011 to February 2016, patients and partners underwent an electronic 48-point distress screen during their initial surgical clinic visit. Distress was measured via self-reported concerns on a five-point Likert scale. Respondents were also asked about preferred interventions. Patient and partner ability to manage emotions was assessed in relation to education, ethnicity, fatigue, anxiety and depression using ordered logistic regression.
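
A hedged sketch of the ordered logistic model for the five-point "managing emotions" outcome; the file and variable names are illustrative, not the study's dataset:

```python
# Ordered (proportional odds) logistic regression for a 1-5 Likert outcome.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("distress_screen.csv")

model = OrderedModel(
    df["manage_emotions"],  # ordinal 1-5 Likert response; levels are
                            # internally mapped to consecutive integers
    df[["partner", "some_college", "financial_concern", "anxiety"]],
    distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())
```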

Results: Of the 665 individuals screened, 51.7% (n=344) were patients and 48.3% (n=321) were partners. Patients were more distressed than partners regarding fatigue, anxiety, depression, and worrying about the future (p<0.005). Partners requested information regarding "managing emotions" less often than patients (19.7% vs. 46.3%).  In the univariate analysis for managing emotions, being a partner was protective against self-reported distress (OR 0.49 (95% CI 0.34–0.70), p<0.001), as was holding an advanced degree (OR 0.36 (95% CI 0.14–0.93), p=0.035). In the multivariate ordered logistic regression, having at least some college remained protective against difficulty in managing emotions, while being a partner was not (OR 0.93 (95% CI 0.62–1.39), p=0.789). Financial concerns, anxiety, depression, and worrying about the future remained significantly associated with increased difficulty in managing emotions (Table 1). After correcting for known variables, partners were found to ask for information or help less than patients (OR 0.28 (95% CI 0.17–0.48), p<0.001).

Conclusion: While partners have similar concerns as patients, they do not seek information or help in managing emotions. Both patients and partners with less education and increased financial distress were more likely to report difficulty managing emotions. This study identifies groups who would benefit from supportive measures even in the absence of a request for help.

30.01 Can Medicaid Expansion Decrease Disparity in Surgical Cancer Care at High Quality Hospitals?

D. Xiao1,2,3, C. Zheng1,2,3, M. Jindal1,2,3, C. Ihemelandu1,2,3, L. Johnson1,2,3, T. DeLeire2,3, N. Shara1,2,3, W. Al-Refaie1,2,3  1MedStar Georgetown University Hospital, Washington, DC, USA 2MedStar Georgetown Surgical Outcomes Research Center, Washington, DC, USA 3Georgetown University Medical Center, Washington, DC, USA

Introduction:  Skepticism about the Medicaid program's ability to provide quality care has contributed to the debate over the Affordable Care Act's (ACA) Medicaid expansion. It is unknown whether Medicaid expansion can improve access to high-quality surgical cancer care for poor Americans. To address this gap, we examined the effects of the largest pre-ACA expansion in Medicaid eligibility, which occurred in New York in 2001. We hypothesized that this policy decreased disparities in access to surgical cancer care at high-quality hospitals (HQHs) by insurance type and by race.

Methods:  We identified 67,685 non-elderly adults 18-64 years old from the 1997-2006 New York State Inpatient Database who underwent one of nine major cancer resections. HQHs were defined as either high-volume hospitals (HVHs, assigned yearly as the hospitals with the highest procedure volumes that together treated 1/3 of all patients) or low-mortality hospitals (LMHs), whose observed-to-expected mortality ratios were <0.7. Analysis examining access to HVHs was restricted to patients undergoing procedures with a strong volume-outcome relationship (esophagus, liver, stomach, pancreas, and urinary bladder; N=10,737).

Disparity was defined as the model-adjusted difference in the percentage of patients operated on at HQHs by insurance type (Medicaid/uninsured vs privately insured) or by race (blacks vs whites). Consistent with the published literature, we combined Medicaid and uninsured patients to capture changes in access to care due to newly gained Medicaid coverage by otherwise uninsured patients. Covariates included age, sex, procedure type, and emergency admission. Levels of disparity were calculated quarterly for each comparison pair, then regressed using interrupted time series to evaluate the impact of Medicaid expansion.
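
For illustration, the quarterly disparity series could be analyzed with a segmented (interrupted time series) regression; the file, column names, and expansion-quarter index are assumptions:

```python
# Segmented regression: baseline trend (quarter), level change at the
# expansion (post), and slope change after the expansion (quarters_since).
import pandas as pd
import statsmodels.formula.api as smf

EXPANSION_Q = 16  # hypothetical index of the 2001 expansion quarter

ts = pd.read_csv("disparity_quarterly.csv")  # quarter (0..N), disparity_pct
ts["post"] = (ts["quarter"] >= EXPANSION_Q).astype(int)
ts["quarters_since"] = (ts["quarter"] - EXPANSION_Q).clip(lower=0)

fit = smf.ols("disparity_pct ~ quarter + post + quarters_since", data=ts).fit()
print(fit.summary())
```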

Results: Overall, 15.0% of our study cohort were Medicaid/self-pay and 12.1% were black. The disparity in access to HVHs by insurance type was reduced by 0.61 percentage points per quarter following the expansion (p=0.003) (Figure). Meanwhile, Medicaid/uninsured beneficiaries had similar access to LMHs as the privately insured, with no significant change detected around the expansion. Conversely, racial disparity increased by 0.86 percentage points per quarter (p<0.001) in access to HVHs (Figure) and by 0.48 percentage points per quarter (p=0.005) in access to LMHs after the expansion.

Conclusions: The pre-ACA Medicaid expansion reduced the disparity in access to surgical cancer care at HQH by insurance type. However, it was associated with an increased racial gap in access to HQH for surgical cancer care. Further investigations are needed to explore whether Medicaid expansion may aggravate racial disparity in surgical cancer care.

29.10 Does High Expression of Tumor Suppressive MicroRNA Prolong Survival in Breast Cancer?

T. Kawaguchi1, L. Yan2, Q. Qi2, S. Liu2, J. Young1, K. Takabe1  1Roswell Park Cancer Institute, Breast Surgery, Department Of Surgical Oncology, Buffalo, NY, USA 2Roswell Park Cancer Institute, Department Of Biostatistics & Bioinformatics, Buffalo, NY, USA

Introduction:
MicroRNAs (miRNAs) are noncoding RNAs of 19-25 nucleotides that exert their function by either degradation of coding mRNA or inhibition of mRNA translation. Dysregulation of miRNAs has been reported to play critical roles in carcinogenesis and progression of various types of cancer, including breast cancer (BrCa). Some miRNAs, such as miR-31, miR-126, miR-146b, miR-206, and miR-335, have been reported as tumor suppressive miRNAs targeting oncogenes. However, the clinical relevance of those reports has not yet been validated in a common large cohort that provides sufficient statistical power with proven high-quality genetic samples. In this study, we took advantage of the high-throughput data from The Cancer Genome Atlas (TCGA) as a validation cohort to evaluate the clinical relevance of these five well-known suppressive miRNAs.

Methods:
All data were obtained from The Cancer Genome Atlas (TCGA). Expression of the five suppressive miRNAs in BrCa (miR-31, miR-126, miR-146b, miR-206, and miR-335) was retrieved from the GDC data portal and analyzed using the microRNA-Seq dataset. Overall survival was compared between the high and low expression groups, determined by miRNA-specific thresholds, using the Cox proportional hazards model.

Results:
Among the 1097 breast cancer samples logged in TCGA, 1053 were found to contain both microRNA-seq data and survival data. High expression levels of miR-31 and miR-146b were associated with significantly better survival (p = 0.032 and p = 0.025, respectively), while high expression of miR-206 tended toward worse prognosis (p = 0.091). The other miRNAs of interest, miR-126 and miR-335, showed no significant survival difference between high and low expression groups. None of the miRNAs examined showed significant differences between high and low expression groups in clinicopathological factors such as staging, tumor size (T category), nodal metastasis (N category), distant metastasis (M category), and intrinsic subtype (ER, PR, and HER2 status), except for miR-126, which demonstrated a significant association with PR and HER2 status (p = 0.003 and p < 0.001, respectively).

Conclusion:
Utilizing a large dataset (TCGA) with sufficient statistical power, we found that high expression of miR-31 and miR-146b was significantly associated with better overall survival. This is in agreement with previous reports demonstrating their tumor suppressive role in BrCa. Conversely, expression of miR-126 and miR-335 showed no survival impact, and high expression of miR-206 demonstrated a trend toward worse prognosis, contrary to previous reports. We conclude that it is essential to validate the survival impact of reported miRNAs using a large publicly available database such as TCGA.
 

29.08 Outcome in Real-World Practice for Elderly Patients with Early Breast Cancer

Q. D. Chu1, P. Peddi1, M. Zhou2, K. Medeiros2, X. Wu2  1Louisiana State University Health Sciences Center, Surgery, Shreveport, LA, USA 2Louisiana State University Health Sciences Center, Biostatistics, New Orleans, LA, USA

Introduction: Clinical trials have demonstrated the efficacy of omitting radiation therapy (RT) in women ≥70 years old with hormone receptor-positive (HR+) T1 breast cancer who undergo breast-conserving surgery and receive anti-hormonal therapy. Whether such results also apply to the real-world population is unknown. We report the survival outcomes of patients who received adjuvant RT versus those who did not, using a large national clinical oncology database.

Methods: Using the National Cancer Data Base, representing about 70% of newly diagnosed cancer cases nationwide, we evaluated a cohort of 66,763 women diagnosed with breast cancer in 2004-2012 and meeting the following criteria: age ≥70 years, pathologic stage I, HR+, negative margins, and receipt of anti-hormonal therapy. Patients were stratified into two groups: (1) RT and (2) no RT. Propensity score matching was used to compensate for differences in baseline characteristics. Univariate and multivariable survival analyses with Cox proportional hazards models were employed to determine the impact of radiation therapy on overall survival (OS).
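
As a rough sketch, propensity score matching could be implemented with a logistic propensity model and 1:1 nearest-neighbor matching; this is not the authors' implementation, and all names are assumed:

```python
# 1:1 nearest-neighbor propensity score matching (with replacement) of RT
# vs no-RT patients on baseline covariates.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("ncdb_breast.csv")
covars = ["age", "comorbidity_score", "facility_type"]  # assumed numeric/coded

# Propensity: probability of receiving RT given baseline characteristics
ps_model = LogisticRegression(max_iter=1000).fit(df[covars], df["rt"])
df["ps"] = ps_model.predict_proba(df[covars])[:, 1]

treated = df[df["rt"] == 1]
control = df[df["rt"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])
```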

Results: After matching, 23,276 cases were analyzed. The 5-year OS was 85.9% for the RT group and 78.3% for the no-RT group (HR=1.59; P<0.0001). The median survival time was 114.07 months with RT and 103.56 months without RT. Significant adjusted predictors (P<0.01) of poor OS were lack of radiation, advanced age, facility type, facility location, and high comorbidity score.

Conclusion: Patients who received RT had better survival outcomes than those who did not, revealing discordance between the results of randomized trials and the real-world setting.