77.05 A 15-Year Experience with Renal Transplant in Undocumented Immigrants: An Argument For Transplant

M. Janeway2, A. Foster1, S. De Gue2, K. Curreri2, T. Dechert2, M. Nuhn2  1Boston University School of Medicine,Boston, MA, USA 2Boston Medical Center,Department Of Surgery,Boston, MA, USA

Introduction:  Health and financial benefits of renal transplant are well demonstrated, yet transplantation in undocumented immigrants remains rare, and little published data exist on outcomes in this population. We investigated whether undocumented immigrants have similar outcomes after renal transplant compared to documented recipients.

Methods:  We retrospectively analyzed records of adult renal transplant recipients at our academic medical center between 2002 and 2016. Primary endpoints were recipient and graft survival. Secondary endpoints were delayed graft function (DGF), acute rejection, and post-transplant complications. Patients were matched 1:1 using a propensity score matching model based on age, sex, race, type of donor (living vs. cadaveric), and cause of end-stage renal disease.
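
For illustration, a minimal sketch of this kind of 1:1 nearest-neighbor propensity match in Python (column names are hypothetical; this is not the study's actual code):

```python
# Minimal sketch of 1:1 propensity-score matching (hypothetical column
# names). The score models documentation status from the matching
# covariates; each undocumented patient is then greedily paired with the
# nearest unused documented patient by score.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_1_to_1(df, treat_col="undocumented",
                 covars=("age", "sex", "race", "living_donor", "esrd_cause")):
    X = pd.get_dummies(df[list(covars)], drop_first=True)
    ps = LogisticRegression(max_iter=1000).fit(X, df[treat_col]).predict_proba(X)[:, 1]
    df = df.assign(ps=ps)
    treated, control = df[df[treat_col] == 1], df[df[treat_col] == 0]
    nn = NearestNeighbors(n_neighbors=len(control)).fit(control[["ps"]])
    _, idx = nn.kneighbors(treated[["ps"]])
    used, pairs = set(), []
    for t_row, candidates in zip(treated.index, idx):
        for c in candidates:                 # nearest unused control wins
            c_idx = control.index[c]
            if c_idx not in used:
                used.add(c_idx)
                pairs.append((t_row, c_idx))
                break
    return pairs
```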

Results: We identified 44 undocumented and 137 documented patients. Undocumented patients were younger and more likely to receive a living-donor kidney. Unadjusted recipient survival was comparable between undocumented and documented recipients at 1 year (97% vs. 96%) and 3 years (96% vs. 96%), as was graft survival at 1 year (92% vs. 93%) and 3 years (87% vs. 86%), and the rate of post-transplant complications (44% vs. 41%). After matching, documentation status was not significantly associated with graft survival at one year (OR 1.50, 95% CI 0.27-9.50, p=0.6669) or three years (OR 1.33, 95% CI 0.30-5.88, p=0.7039), DGF (OR 1.62, 95% CI 0.57-4.59, p=0.3632), acute rejection (OR 1.58, 95% CI 0.25-10.00, p=0.6265), transplant complications (OR 1.62, 95% CI 0.68-3.84, p=0.2752), or post-transplant CKD (OR 0.60, 95% CI 0.20-1.80, p=0.3598).

Conclusion: In this small study, documentation status was not associated with adverse renal transplant outcomes. Given these outcome data, we believe transplant centers should consider renal transplant for undocumented patients.

 

77.04 Going beyond MELD: A data-driven mortality predictor for the liver transplantation waiting list

G. Nebbia2, E. R. Dadashzadeh1,3, C. Rieser3, S. Wu1  1University Of Pittsburgh,Department Of Biomedical Informatics,Pittsburgh, PA, USA 2University Of Pittsburgh,Intelligent Systems Program,Pittsburgh, PA, USA 3University Of Pittsburgh,Department Of Surgery,Pittsburgh, PA, USA

Introduction:  Since 2002, the liver allocation policy for adults has been based on the Model for End-stage Liver Disease (MELD). While MELD was not originally created for this purpose, its ability to predict short-term mortality has allowed it to serve as an urgency-based mechanism for organ allocation. The purpose of this study was to investigate a data-driven approach using machine learning (ML) techniques to build a mortality predictor for patients awaiting liver transplantation, with the aim of improving on the MELD criteria.

Methods: We retrospectively used the Scientific Registry of Transplant Recipients (SRTR) dataset, which included patients waitlisted for liver transplantation from 1985 to 2017, and divided it into three survival cohorts (3, 12, and 24 months) including 88,758, 63,205, and 53,361 patients, respectively. We applied three ML algorithms (Logistic Regression, Random Forests, and Neural Networks) to predict survival for each cohort, training each ML model on 30 clinical factors such as functional status, additional laboratory values, diagnosis, blood type, BMI, and MELD itself. We removed patients with substantial missing data in these factors, resulting in final cohorts of 25,560, 17,295, and 14,203 patients, respectively. For each cohort, 75% of the data were used for training and the remaining unseen 25% for testing, with prediction performance measured by the Area Under the ROC Curve (AUC). We analyzed each cohort as a whole and also grouped patients by diagnosis category for sub-group analysis: Acute Hepatic Necrosis, Cholestatic Liver Disease/Cirrhosis, Malignant Neoplasm, Metabolic Disease, Non-Cholestatic Cirrhosis, and Other. AUCs of different models were compared by the DeLong test to assess statistical significance.
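
A minimal sketch of the cohort-level comparison, assuming a cleaned numeric feature table with a 'meld' column (hypothetical names; not the SRTR pipeline, and the DeLong test would be applied separately):

```python
# Train a model on the clinical factors and compare its held-out AUC
# against MELD alone, used as a lone ranking score.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def evaluate_cohort(df, outcome="died_within_horizon"):
    X = df.drop(columns=[outcome])          # includes 'meld' plus other factors
    y = df[outcome]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              stratify=y, random_state=0)
    model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
    auc_model = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    auc_meld = roc_auc_score(y_te, X_te["meld"])  # MELD-only baseline
    return auc_model, auc_meld
```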

Results: MELD alone reached an AUC of 0.87, 0.78, and 0.75 for the 3, 12, and 24-month cohorts, respectively. Logistic Regression reached AUCs of 0.89, 0.83, and 0.82, and the other two ML models performed comparably. All AUC improvements over the MELD baseline were statistically significant (p<0.05). In sub-group analyses, the AUCs of diagnosis-specific models showed consistent improvement across the sub-groups; in particular, the largest increases in AUC were achieved on the 24-month cohort for the diagnoses of Malignant Neoplasm, Non-Cholestatic Cirrhosis, and Metabolic Disease.

Conclusion: This study shows that data-driven ML modeling outperforms the MELD criteria in predicting mortality for patients awaiting liver transplantation. We see a larger improvement (0.82 vs 0.75) when predicting longer survival (24 months) and a smaller improvement (0.89 vs 0.87) at 3 months. More promisingly, the improvement across sub-groups indicates that ML models may be particularly beneficial to certain groups of patients with specific diagnoses, potentially enabling precision prediction of survival in stratified patient cohorts.

 

77.03 Effect of Kidney Allocation System Policy on Transplant Rates Across UNOS Regions in the US

A. C. Perez-Ortiz1,2, E. Heher1, N. Elias1  1Massachusetts General Hospital,Transplant Center,Boston, MA, USA 2Yale University School Of Public Health,New Haven, CT, USA

Introduction:
The new Kidney Allocation System (KAS) aimed to improve transplantation rates and to address other core needs of deceased donor (DD) kidney allocation. Three years after implementation, the regional effects of KAS have not been well studied. Since the United States (US) is heterogeneous, particular states might have experienced significant improvements compared to others. We aimed to test whether such regional differences existed after KAS implementation.

Methods:
We abstracted regional and state data on DD from the Organ Procurement and Transplantation Network, end-stage renal disease prevalence from the US Renal Data System, and population data from the US Census, and constructed Poisson regression models to estimate kidney transplant incidence ratios (IRs) by region compared to the national average between 2012 and 2017. We also tested the additive effect of KAS policy by average marginal effects (AMEs), specifically in the post-implementation period (2015-2017) by region, and plotted our findings in a 50-state choropleth map in which lighter colors represent regions with the highest improvement and darker colors null effects post-KAS.
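
A minimal sketch of such a Poisson model with a population offset, from which both IRs (exponentiated coefficients) and AMEs can be read (hypothetical column names; not the authors' code):

```python
# Region-year transplant counts with a log-population offset: the
# exponentiated coefficients are multiplicative IRs, and get_margeff()
# returns the additive average marginal effects (AMEs).
import numpy as np
import statsmodels.api as sm

def fit_region_model(df):
    # df: one row per region-year with columns 'transplants', 'esrd_pop',
    # 'post_kas' (1 for 2015-2017), and region indicator columns.
    X = sm.add_constant(df[["post_kas"] + [c for c in df if c.startswith("region_")]])
    model = sm.Poisson(df["transplants"], X, offset=np.log(df["esrd_pop"])).fit()
    irs = np.exp(model.params)                   # incidence ratios
    ames = model.get_margeff().summary_frame()   # average marginal effects
    return irs, ames
```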

Results:
KAS impact differed across regions in two ways. First, the multiplicative effect of KAS post-implementation, measured by IRs, significantly increased the base rate by a factor of 1.16 and 1.09 in regions 6 and 10, respectively. Second, the additive effect of KAS (2015 onward), measured by AMEs, significantly improved the expected mean number of transplants in regions 2, 3, 4, 5, and 9 (Figure). KAS was most impactful in Southern states, where both IRs and AMEs were higher than the national average. In comparison, the Midwest and Northwest regions had the lowest AMEs.

Conclusion:
KAS had a greater impact in Southern states, improving deceased donor kidney transplantation compared to other regions in the US. The regional effect of KAS warrants exploration, specifically to identify characteristics driving the increase in transplantation so that public policies can be improved accordingly.
 

77.02 Survival Benefit After Liver Transplant in the Post-MELD Era: 15 Year Analysis of 100,000 Patients

T. A. Russell1, D. Graham1, V. Agopian2, J. DiNorcia2, D. Markovic3, S. Younan2, D. Farmer2, H. Yersiz2, R. Busuttil2, F. Kaldas2  1University Of California – Los Angeles,General Surgery,Los Angeles, CA, USA 2University Of California – Los Angeles,Liver & Pancreas Transplantation,Los Angeles, CA, USA 3University Of California – Los Angeles,Biomathematics,Los Angeles, CA, USA

Introduction:  Annually, fewer than 60% of waitlisted patients receive liver transplantation (LT), resulting in over 2,000 waitlist deaths. Historically, the minimum threshold for survival benefit (SB) with LT has been a Model for End-stage Liver Disease (MELD) score of 15. Limited organ availability and geographic disparities require examination of the relative LT-SB in the post-MELD era to ensure optimization of lives saved.

Methods:  All waitlisted adults from 2/2002-3/2017 (excluding Status-1A and MELD-exception candidates) in the United Network for Organ Sharing (UNOS) database were included. Patients were followed from the time of listing to 3 months post-transplant or waitlist removal. Survival time was accrued to MELD categories according to score changes over time. LT-SB hazard ratios (HRs) were computed comparing waitlist to post-LT survival for the entire cohort, by UNOS region, and by era (2002-2006, 2007-2011, 2012-2017). The threshold for SB was defined by an HR <1.0, indicating a survival benefit for receiving LT compared with remaining on the waitlist.
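
One way to implement this kind of comparison, in which a patient's MELD category and transplant status change over follow-up, is a time-varying Cox model; a minimal sketch under assumed column names (not the UNOS analysis code):

```python
# Each row covers an interval during which a patient's MELD category and
# transplant status are constant; within a MELD stratum, the hazard
# ratio on 'post_lt' estimates the LT survival benefit.
from lifelines import CoxTimeVaryingFitter

def lt_survival_benefit(intervals, meld_category):
    df = intervals[intervals["meld_cat"] == meld_category]
    ctv = CoxTimeVaryingFitter()
    ctv.fit(df[["id", "start", "stop", "post_lt", "died"]],
            id_col="id", start_col="start", stop_col="stop", event_col="died")
    return ctv.hazard_ratios_["post_lt"]   # HR < 1.0 indicates survival benefit
```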

Results: 107,503 patients were waitlisted; 46,249 underwent LT. By era, the 3-month LT-SB threshold was MELD 19, 20-23, and 20-23 (Figure 1). All UNOS regions had a common 3-month LT-SB threshold of MELD 21-29 for the entire study period. At time of LT, 10,899 patients (24%) had a MELD of 15-20, while 3,756 (8.1%) had a MELD <15. Fifty percent (n=1,891) of LTs for MELD <15 were performed in 3 of the 11 UNOS regions.

Conclusion: The 3-month LT-SB threshold of MELD >20 suggests an increase from the previously established score of 15, yet patients continue to undergo LT at MELD scores even below 15 in donor-rich regions. These findings highlight the potential to save more lives by allocating organs to higher-acuity patients at increased risk of 3-month pre-transplant mortality.

 

77.01 Persistent Gender Disparities in Access to Kidney Transplantation

C. Holscher1, C. Haugen1, K. Jackson1, A. Kernodle1, S. Bae1, J. Garonzik Wang1, D. Segev1  1Johns Hopkins University School Of Medicine,Baltimore, MD, USA

Introduction: While national policies direct organ allocation for waitlisted candidates, the decision to list a candidate for transplantation is made at the center and patient level. Historically, women have had decreased access to kidney transplantation (KT). We sought to investigate whether gender disparities in access to KT have improved over time.

Methods: To explore temporal trends in access to KT, we studied 1,511,863 adults (age 18-99) with incident end-stage renal disease (ESRD) using the United States Renal Data System (USRDS) from 2000 to 2015. We divided the study period into four eras, compared characteristics of patients who were and were not listed for transplantation (chi-square and Student's t tests), and tested whether waitlisting changed over time (Cuzick test of trend). We used Cox regression to determine the association between era and access to transplantation while controlling for candidate factors. As a sensitivity analysis to determine whether a differential risk of death before waitlisting impacted our inferences, we performed competing-risks regression using the Fine and Gray method on a 5% random sample.
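
A minimal sketch of the primary Cox model with a gender-by-era interaction (hypothetical column names; the Fine and Gray competing-risks model is not shown, as it is typically fit with dedicated routines such as R's cmprsk):

```python
# Cox regression of time to waitlisting; the interaction term tests
# whether the gender disparity changed across eras.
from lifelines import CoxPHFitter

def waitlisting_model(df):
    # df columns: 'time_to_list', 'listed' (1/0), 'female' (1/0),
    # 'era' (0-3 for the four 4-year eras), plus numeric adjustment
    # covariates; all non-duration columns enter as covariates.
    df = df.assign(female_x_era=df["female"] * df["era"])
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time_to_list", event_col="listed")
    return cph.summary.loc[["female", "era", "female_x_era"]]
```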

Results: The proportion of ESRD patients who were subsequently waitlisted decreased over time (13.2% in 2000-2003 to 8.7% in 2012-2015, p<0.001). Compared to those who were never waitlisted, waitlist registrants were less likely to be female (37% vs. 45%, p<0.001), were younger (mean 50 vs. 66 years, p<0.001), were more likely to be African American (32% vs. 28%, p<0.001), more likely to be Hispanic (20% vs. 13%, p<0.001), and more likely to have private insurance (38% vs. 17%) or be uninsured (13% vs. 6%, p<0.001). After controlling for age, race, ethnicity, and prior insurance, men had similar access to KT over time (per 4-year era, aHR 1.00, 95% CI 1.00-1.01, p=0.6), while women had less access (aHR 0.80, 95% CI 0.20-0.81, p<0.001) that worsened with time (interaction p<0.001) (Figure). For context, in 2000-2003 women were 20% less likely to be waitlisted for kidney transplant (aHR 0.80, 95% CI 0.78-0.82, p<0.001), while in 2012-2015 this worsened to 22% less likely (aHR 0.78, 95% CI 0.76-0.80, p<0.001). Our sensitivity analysis using competing-risks regression also showed persistent gender disparities in waitlisting.

Conclusion: Despite decades of studies showing that women have less access to kidney transplantation, gender disparities in access to KT have not improved over time; rather, they have worsened. Further focus and novel interventions are needed to improve access for female KT candidates.

 

76.10 Molecular Profiling and Mitotic Rate in Cutaneous Melanoma

K. Liang1, G. Gauvin1, E. O’Halloran1, D. Mutabdzic1, C. Mayemura1, E. McGillivray1, K. Loo1, A. Olszanski2, S. Movva2, M. Lango1, H. Wu3, B. Luo4, J. D’Souza5, S. Reddy1, J. Farma1  1Fox Chase Cancer Center,Department Of Surgical Oncology,Philadelphia, PA, USA 2Fox Chase Cancer Center,Department Of Hematology/Oncology,Philadelphia, PA, USA 3Fox Chase Cancer Center,Department Of Pathology,Philadelphia, PA, USA 4Fox Chase Cancer Center,Molecular Diagnostics Laboratory,Philadelphia, PA, USA 5Fox Chase Cancer Center,Molecular Therapeutics Program,Philadelphia, PA, USA

Introduction:  Mitotic rate (MR) is a measure of tumor cellular proliferation in melanoma and has been associated with the tumor's likelihood of metastasis. Although a higher mitotic rate is associated with worse prognosis, the specific genetic mutations associated with MR are less well characterized. In this study, we examine the relationship between mitotic rate and genetic mutations in melanoma using next-generation sequencing (NGS) technology.

Methods:  A retrospective chart review was conducted on all melanoma patients at an NCI-designated cancer center who underwent NGS and had pathology reports with documented mitotic rates. We compared tumors with no mitoses versus those with ≥5 mitoses/mm2. Groups were compared using chi-square tests and linear regression models.
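
A minimal sketch of the per-gene chi-square comparison (hypothetical data layout; not the study's code):

```python
# 2x2 chi-square of mutation status against the no-mitosis vs.
# >=5-mitoses/mm2 groups, one gene at a time.
import pandas as pd
from scipy.stats import chi2_contingency

def gene_vs_mitotic_rate(df, gene="TP53"):
    # df columns: '<gene>_mutated' (0/1) and 'high_mr' (1 if >=5
    # mitoses/mm2, 0 if no mitoses); intermediate tumors excluded.
    table = pd.crosstab(df[f"{gene}_mutated"], df["high_mr"])
    chi2, p, dof, _ = chi2_contingency(table)
    return p
```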

Results: Between 1997 and 2018, 239 melanoma patients had NGS performed and were included in this study. The median age of the study group was 64 years, and 62% were male. Primary tumor locations were trunk (n=70), lower extremity (n=59), upper extremity (n=50), head and neck (n=31), mucosal (n=10), genital (n=5), and other (n=14). Pathological staging included stage I (n=25), stage II (n=64), stage III (n=109), stage IV (n=20), and unknown (n=21). Only 5 patients had 0 mitoses/mm2, while 104 patients had ≥5 mitoses/mm2. Out of a total of 380 mutations, the most common were any BRAF (18%, n=69) or NRAS (14%, n=53) mutation, but these were not associated with mitotic rate. Mutations in ERBB4, PIK3CA, and SMAD genes were protective against a high mitotic rate, being associated with 0 mitoses/mm2 (p=0.009, 0.002, and 0.044, respectively). Higher mitotic rates (>5/mm2) were associated with mutations in TP53 (p=0.015), KRAS (p=0.002), and FGFR3 (p=0.048). Only three patients had mutations in all three of these genes; these patients had 8, 9, and 20 mitoses/mm2 on final pathology. After controlling for mutations in KRAS and FGFR3, a mutation in TP53 was associated with 2.74-fold increased odds of having more than 5 mitoses/mm2 (95% CI 1.15-6.52, p=0.023).

Conclusion: Mitotic rate is an important prognostic indicator in melanoma. Our data demonstrate that mutations in certain genes such as TP53, FGFR3, and KRAS are associated with a higher mitotic rate, while other mutations, including ERBB4, PIK3CA, and SMAD4, are more frequently found in patients with no mitoses. Further studies are needed to determine whether next-generation sequencing can be used to predict more aggressive tumors so that treatment and surveillance can be better tailored to these patients.

 

76.09 Disconnected Pancreatic Duct Syndrome: Spectrum of Operative Management

T. K. Maatman1, A. M. Roch1, M. A. Heimberger1, K. A. Lewellen1, R. Cournoyer1, M. G. House1, A. Nakeeb1, E. P. Ceppa1, C. Schmidt1, N. J. Zyromski1  1Indiana University School Of Medicine,Surgery,Indianapolis, IN, USA

Introduction:  Disconnected pancreatic duct syndrome (DPDS), complete discontinuity of the pancreatic duct with a viable, but undrained tail, is a relatively common complication following necrotizing pancreatitis (NP). DPDS represents a complex and heterogeneous problem to the clinician; decision-making must consider the presence of sinistral portal hypertension, a variable volume of disconnected pancreatic remnant, and timing relative to definitive management of pancreatic necrosis. Treatment commonly falls to the surgeon; however, limited information is available to guide operative strategy. The aim of this study is to evaluate outcomes after operative management for DPDS. 

Methods:  An institutional necrotizing pancreatitis database was queried to identify patients with DPDS requiring operative management. When feasible, an internal drainage procedure was performed. In the presence of sinistral portal hypertension, a small-volume disconnected pancreatic remnant, or concurrent infected necrosis requiring débridement, distal pancreatectomy with or without splenectomy (DPS/DP) was performed. Descriptive statistics were applied; median (range) values are reported unless otherwise specified.

Results: Among 647 NP patients treated between 2005 and 2017, DPDS was diagnosed in 289 (45%). Operative management was required in 211 patients; 78 patients were managed non-operatively or died of NP prior to DPDS intervention. Median estimated blood loss was 250 mL (range 10-5000). Median follow-up was 19 months (1-158). In 21 patients (10%), pancreatic débridement and external drainage resulted in subsequent fistula closure without need for further intervention. The remaining 185 patients underwent operation as definitive therapy: internal drainage in 99 and DPS/DP in 86. Median time from NP diagnosis to operation was 108 days (5-2439). Morbidity was 53% (Table 1). Length of stay was 8 days (3-65). Readmission was required in 49 patients (23%). Post-operative mortality was 1.9%. Deaths were caused by ruptured splenic artery pseudoaneurysm (1), intra-operative cardiac event (1), and progressive organ failure following concomitant enterocutaneous fistula (2). Repeat pancreatic intervention was required in 23 patients (11%) at a median of 407 days (119-2947); initial management in these patients was internal drainage in 18 and DPS in 5. Salvage pancreatectomy was performed in 10 patients, and the remaining 13 were managed with endoscopic therapy.

Conclusion: DPDS is a common yet extremely challenging consequence of necrotizing pancreatitis. Patient selection is critical, as perioperative morbidity and mortality are substantial. The appropriate operation requires complex decision-making; however, it provides durable long-term therapy in nearly 90% of patients.
 

76.08 Some is Not Better Than None: A Meta-Analysis of Total and Proximal Gastrectomy for Gastric Cancer

B. P. Stahl1, J. B. Rose1, C. M. Contreras1, M. J. Heslin1, T. N. Wang1, S. Reddy1  1University Of Alabama at Birmingham,Surgery,Birmingham, Alabama, USA

Introduction: Surgical resection is a mainstay of treatment for gastric cancer. There is significant controversy surrounding the appropriate operation to maximize oncological benefit and functional outcome for proximal gastric cancer. Some advocate total gastrectomy (TG) with Roux-en-Y esophagojejunostomy reconstruction, claiming that this operation provides optimal lymph node staging and eliminates post-operative reflux. Others favor proximal gastrectomy (PG) with esophagogastric reconstruction, hoping that the residual gastric reservoir will improve nutrition. We sought to address this question by reviewing the oncological, perioperative, and functional outcomes of patients undergoing these two operations for proximal gastric cancer.

Methods: We performed a systematic review and meta-analysis of patients undergoing TG and PG for gastric cancer using PubMed, Embase, and the Cochrane Library from 2007 to 2018 with the MeSH terms “proximal,” “total,” and “gastrectomy” in English-language publications. We identified 659 results; 359 remained after duplicates were purged. From this dataset, 23 articles were selected for the present study. Studies were evaluated for quality with the Newcastle-Ottawa scale for non-randomized evaluations and the Jadad scale for randomized controlled trials.
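
For illustration, a minimal inverse-variance fixed-effect pooling of odds ratios from per-study 2x2 tables (illustrative only; the published forest plots may use a different estimator, such as Mantel-Haenszel or a random-effects model):

```python
# Pool log odds ratios across studies with inverse-variance weights and
# return the pooled OR with a 95% confidence interval.
import numpy as np

def pooled_or(tables):
    # tables: list of (a, b, c, d) counts = (TG events, TG non-events,
    # PG events, PG non-events) for one outcome across studies.
    log_ors, weights = [], []
    for a, b, c, d in tables:
        a, b, c, d = (x + 0.5 for x in (a, b, c, d))  # continuity correction
        log_ors.append(np.log(a * d / (b * c)))
        weights.append(1.0 / (1/a + 1/b + 1/c + 1/d))  # inverse variance
    log_ors, weights = np.array(log_ors), np.array(weights)
    pooled = np.sum(weights * log_ors) / np.sum(weights)
    se = np.sqrt(1.0 / np.sum(weights))
    return np.exp(pooled), np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
```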

Results: 23 articles were included in the quantitative synthesis (17 retrospective and 6 prospective studies) with 3,227 patients (1,984 TG and 1,243 PG). Most of the studies originated from Asia (Japan 13, Korea 5, China 2, Italy 1, India 1, United States 1), with patients cared for from 1990-2012. Most patients (96%) had stage I or II gastric cancer. 30% (6/20) of the studies used perioperative chemotherapy. Median follow-up was reported in 19/23 studies (range 17-60 months). TG retrieved a larger number of lymph nodes (OR 13.11, P<0.00001; FIGURE A) and was associated with fewer anastomotic stenoses (OR 3.13, P=0.0004; FIGURE B) and fewer post-operative reflux symptoms (OR 2.72, P=0.01; FIGURE C) compared to PG. The two operations had similar complication rates (FIGURE D) and 5-year overall survival rates (FIGURE E). Mortality was similar between the two operations (PG 3.5% vs. TG 1.3%, P=0.66).

Conclusion: Although TG retrieves a greater number of lymph nodes, both operations offer similar long-term overall survival, raising the question of whether these additional distal gastric lymph nodes are important in early-stage proximal gastric cancer. PG is a safe and effective operation for early-stage proximal gastric cancer if surgeons are willing to accept higher rates of postoperative gastric reflux and anastomotic stenosis. These findings will need to be evaluated in advanced gastric cancer.

76.07 Learning Curve Bias Can Significantly Influence Results Of Surgical Randomized Controlled Trials

F. Van Workum1, G. Hannink2, C. J. Van Der Velde4, H. J. Bonenkamp1, I. D. Nagtegaal3, M. M. Rovers2, C. Rosman1  1Radboud University Medical Center,Surgery,Nijmegen, GELDERLAND, Netherlands 2Radboud University Medical Center,Evidence Based Surgery,Nijmegen, GELDERLAND, Netherlands 3Radboud University Medical Center,Pathology,Nijmegen, GELDERLAND, Netherlands 4Leiden University Medical Center,Surgery,Leiden, NOORD HOLLAND, Netherlands

Introduction:
Learning curves are often observed after the introduction of innovative surgical techniques, but there are currently no robust data suggesting that learning curves can influence the outcomes of high-quality surgical randomized controlled trials (RCTs).

Methods:
Individual patient data were acquired from the Dutch D1-D2 trial, in which 1078 patients were randomised to D1 gastrectomy (old intervention) or D2 gastrectomy (innovative intervention). This RCT concluded that postoperative complications and mortality were significantly higher in the D2 group and that the results did not support implementation of D2 resection into practice. Data from centres that included at least 10 consecutive cases (the minimum to perform meaningful trend analysis) were pooled by individual consecutive case number. Weighted moving average analysis was performed for the main outcome parameters, and incidence graphs showing trends in outcome were plotted.
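
A minimal sketch of this kind of weighted moving average trend analysis (hypothetical column names and kernel choice; not the trial's code):

```python
# Pool patients by consecutive case number within each centre, then
# smooth the binary outcome with a centre-weighted moving average to
# reveal learning-curve trends.
import numpy as np
import pandas as pd

def outcome_trend(df, outcome="complication", window=11):
    # df: one row per patient with 'centre', 'case_number' (1..n within
    # centre, centres with >=10 cases only), and a 0/1 outcome column.
    by_case = df.groupby("case_number")[outcome].mean()
    weights = np.hanning(window)          # weights peak at the window centre
    smoothed = by_case.rolling(window, center=True).apply(
        lambda x: np.average(x, weights=weights), raw=True)
    return smoothed   # plot against case number for D1 and D2 separately
```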

Results:
The incidence of postoperative death was 6% in the D1 group, with no trend observed during the trial; in the D2 group, the incidence of postoperative death decreased from 10% to 3%. The incidence of postoperative complications increased from 19% to 20% in the D1 group (no significant trend), whereas it decreased from 42% to 25% in the D2 group.

Conclusion:
This study showed significantly improving trends in the D2 group (innovative intervention) but not in the D1 group (old intervention), reflecting learning curve bias. Learning curve bias can significantly influence the results of high-quality RCTs and lead to misinterpretation of trial results. Incorporating trend analysis in RCT reporting can assist clinicians in interpreting trial outcome data. Methodology to incorporate this into the design of RCTs is proposed.
 

76.06 Assessment of a Readmission Risk Model in Cancer Patients and the Impact on Patient Care and Outcome

E. A. Armstrong1, R. K. Pickard4, K. L. Johnson4, S. Abdel-Misih5,6  4Ohio State University,Cancer Program Analytics, James Cancer Hospital And Solove Research Institute, Comprehensive Cancer Center,Columbus, OH, USA 5Ohio State University,Department Of Surgery,Columbus, OH, USA 6Ohio State University,Division Of Surgical Oncology,Columbus, OH, USA 1Ohio State University,College Of Medicine,Columbus, OH, USA

Introduction:
Hepato-pancreatico-biliary (HPB) and gastrointestinal (GI) cancer patients receiving surgical care are at significant risk for post-operative hospital readmission. At a tertiary academic center, a Readmission Risk Model was developed to identify cancer patients at increased risk for readmission, paired with a list of suggested post-discharge interventions intended to lower readmission rates for the HPB and GI Surgical Oncology (SONC) services. This study investigates the utility of these interventions in lowering patient readmission rates.

Methods:
153 patients with 163 surgical admissions related to their cancer diagnosis between September 1, 2016 and September 30, 2017 were analyzed. Patients were stratified into one of four risk categories based on variables in the established Readmission Risk Model. Chi-square analyses of readmission rates before and after implementation of the Readmission Risk Model Suggested Interventions (RRMSI) were performed for the HPB and SONC services overall and for each risk category. Chi-square analysis was also used to test for differences in readmission rates by type of surgery performed and in the number of days to readmission before and after RRMSI implementation. Additionally, compliance with each suggested intervention was analyzed using univariate analysis.

Results:
There were no significant differences in readmission rates among HPB or SONC patients before and after implementation of the RRMSI, no significant differences in readmission rates based on patient Readmission Risk Category, and no difference in readmission rates based on type of surgery performed. The median number of days to readmission was not significantly changed after the RRMSI. While "moderate risk" patients in both the pre-RRMSI and post-RRMSI groups were readmitted at rates between 0% and 14%, patients in the "high risk" pre- and post-RRMSI groups were readmitted at rates ranging between 33% and 45%. The HPB service showed a greater overall rate of compliance with the suggested interventions, ranging from 39.3% to 79.1% for individual interventions, while SONC compliance ranged from 6.7% to 70.0%.

Conclusion:
The RRMSI did not affect patient readmission rates for any analyzed group. Implementation of more robust interventions to help patients avoid readmission, along with compliance improvement strategies, should be goals for future clinical practice. Because of the discrepancy between predicted and actual readmission rates among "high risk" patients, additional studies should examine the ability of this Readmission Risk Model to accurately predict readmission rates in surgical cancer patients.
 

76.05 Features of Synchronous versus Metachronous Metastasectomy for Adrenal Cortical Carcinoma

K. M. Prendergast1, P. Marincola Smith2, C. M. Kiernan2, S. K. Maithel4, J. D. Prescott5, T. M. Pawlik5, T. S. Wang6, J. Glenn6, I. Hatzaras7, J. E. Phay8, L. A. Shirley8, R. C. Fields9, S. M. Weber10, J. K. Sicklick11, A. C. Yopp12, J. C. Mansour12, Q. Duh13, E. A. Levine14, G. A. Poultsides3, C. C. Solorzano2  1Vanderbilt University Medical Center,School Of Medicine,Nashville, TN, USA 2Vanderbilt University Medical Center,Department Of Surgery,Nashville, TN, USA 3Stanford University School of Medicine,Department Of Surgery,Palo Alto, CA, USA 4Emory University School Of Medicine,Department Of Surgical Oncology,Atlanta, GA, USA 5Johns Hopkins University School Of Medicine,Department Of Surgery,Baltimore, MD, USA 6Medical College Of Wisconsin,Department Of Surgery,Milwaukee, WI, USA 7New York University School Of Medicine,Department Of Surgery,New York, NY, USA 8Ohio State University,Department of Surgery,Columbus, OH, USA 9Washington University School of Medicine,Department Of Surgery,St. Louis, MO, USA 10University Of Wisconsin School of Medicine and Public Health,Department Of Surgery,Madison, WI, USA 11University Of California – San Diego,Department Of Surgery,San Diego, CA, USA 12University Of Texas Southwestern Medical Center,Department Of Surgery,Dallas, TX, USA 13University Of California – San Francisco,Department Of Surgery,San Francisco, CA, USA 14Wake Forest University School Of Medicine,Department Of Surgery,Winston-Salem, NC, USA

Introduction:  Adrenocortical carcinoma (ACC) is a rare and aggressive cancer, and many patients present with metastases. We describe the features of patients presenting with metastatic disease who underwent synchronous metastasectomy and contrast them with patients who underwent metastasectomy for recurrent ACC.

Methods:  Adult patients who underwent resection for metastatic ACC from 1993-2014 at 13 participating institutions of the US-ACC Group were analyzed retrospectively. Patients were categorized as “synchronous” if they underwent metastasectomy at the time of their index adrenalectomy or “metachronous” if they underwent resection for disease recurrence. Differences between groups were summarized using descriptive statistics. Factors associated with overall survival were assessed by univariate analysis.
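
A minimal sketch of the unadjusted survival comparison using Kaplan-Meier estimates and a log-rank test (hypothetical column names; not the US-ACC Group's code):

```python
# Kaplan-Meier median survival for one group, plus a log-rank test
# comparing synchronous vs. metachronous metastasectomy.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_survival(df):
    # df columns: 'months' (survival from first metastasectomy),
    # 'died' (1/0), 'synchronous' (1/0).
    sync, meta = df[df["synchronous"] == 1], df[df["synchronous"] == 0]
    km = KaplanMeierFitter().fit(sync["months"], sync["died"], label="synchronous")
    print(km.median_survival_time_)
    result = logrank_test(sync["months"], meta["months"],
                          event_observed_A=sync["died"],
                          event_observed_B=meta["died"])
    return result.p_value
```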

Results: 84 patients with ACC underwent metastasectomy: 26 (31%) synchronous and 58 (69%) metachronous. Demographics were similar between groups. Patients in the synchronous group had more right-sided (54% vs. 40%) and glucocorticoid-secreting tumors (27% vs. 16%); however, these differences were not significant (p=0.341 and p=0.304, respectively). The synchronous group had a higher percentage of T4 tumors at index resection (42% vs. 3%, p<0.001); both groups had a similar proportion of N1 disease (11% vs. 7%, p=0.734). There were no significant differences between groups in the rates of treatment with neoadjuvant chemotherapy or adjuvant chemotherapy, mitotane, or radiation. The most common site of metastasectomy in the synchronous group was liver (58%), followed by lung (23%); in the metachronous group it was local (36%), followed by multiple sites (17%). The metachronous group had prolonged median survival following index resection (86.3 vs. 17.3 months, p<0.001) and following first metastasectomy (36.9 vs. 17.3 months, p=0.007). In the synchronous group, patients with R0 resection had improved survival over patients with R1 or R2 resection (p=0.008), while margin status at metastasectomy was not significantly associated with survival in the metachronous group (p=0.452).

Conclusion: Select patients with metastatic ACC may benefit from metastasectomy. Compared to patients with metachronous metastases, those with synchronous metastases have shortened survival following metastasectomy. This study highlights the need for future studies examining differences in tumor biology that might influence treatment decisions in these two distinct patient populations.
 
 

76.04 Trends in the Use of Adjuvant Chemotherapy for High-Grade Truncal and Extremity Soft Tissue Sarcoma

M. H. Squires1, L. Suarez-Kelly1, P. Y. Yu1, T. M. Hughes1, R. Shelby1, C. G. Ethun2, T. B. Tran3, G. Poultsides3, J. Charlson4, T. Gamblin4, J. Tseng5, K. K. Roggin5, K. Chouliaras6, K. Votanopoulos6, B. A. Krasnick7, R. C. Fields7, R. E. Pollock1, V. Grignol1, K. Cardona2, J. Howard1  1Ohio State University,Division Of Surgical Oncology,Columbus, OH, USA 2Emory University School Of Medicine,Division Of Surgical Oncology,Atlanta, GA, USA 3Stanford University,Department Of Surgery,Palo Alto, CA, USA 4Medical College Of Wisconsin,Division Of Surgical Oncology,Milwaukee, WI, USA 5University Of Chicago,Department Of Surgery,Chicago, IL, USA 6Wake Forest University School Of Medicine,Department Of Surgery,Winston-Salem, NC, USA 7Washington University,Department Of Surgery,St. Louis, MO, USA

Introduction:  In the randomized controlled trial (RCT) EORTC-62931, adjuvant chemotherapy failed to show improvement in relapse-free survival (RFS) or overall survival (OS) for patients with resected high-grade soft tissue sarcoma (STS). We sought to evaluate whether the negative results of this 2012 RCT have influenced multidisciplinary treatment patterns for patients with high-grade STS undergoing resection at 7 academic referral centers.

Methods: The US Sarcoma Collaborative (USSC) database was queried to identify patients who underwent curative-intent resection of primary high-grade truncal or extremity STS from 2000-2016. Patients with recurrent tumors, metastatic disease, and those receiving neoadjuvant chemotherapy were excluded.

 

Patients were divided by treatment era into early (2000-2011, pre-EORTC trial) and late (2012-2016, post-EORTC trial) cohorts for analysis. Rates of adjuvant chemotherapy delivery, standard demographics, and clinicopathologic variables were compared between the two cohorts. Univariate and multivariate regression analyses (MVA) were used to determine factors associated with OS and RFS. 

Results: 949 patients who met inclusion criteria were identified, with 730 patients in the early cohort and 219 in the late cohort. Adjuvant chemotherapy rates were similar between the early and late cohorts (15.6% vs 14.6%; p=0.73). Patients within the early and late cohorts demonstrated similar median OS (128 mos vs median not reached [MNR], p=0.84) and RFS (107 mos vs MNR, p=0.94).

 

Receipt of adjuvant chemotherapy was associated with larger tumor size (13.6 vs 8.9cm, p<0.001), younger age (53.3 vs 63.7 yrs, p<0.001), margin-positive resection (p=0.04), and receipt of adjuvant radiation (p<0.001).

 

On MVA, risk factors associated with decreased OS (Table 1) were increasing ASA class (p=0.02), increasing tumor size (p<0.001), and margin-positive resection (p=0.01). Adjuvant chemotherapy was not associated with OS (p=0.88). Risk factors associated with decreased RFS included increasing tumor size (p<0.001) and margin-positive resection (p=0.04); adjuvant chemotherapy was not associated with RFS (p=0.22). 

Conclusion: Rates of adjuvant chemotherapy for resected high-grade truncal or extremity STS have not decreased over time within the USSC, despite RCT data suggesting a lack of efficacy. In this retrospective multi-institutional analysis, adjuvant chemotherapy was not associated with RFS or OS on multivariate analysis, consistent with the results of EORTC-62931. Rates of adjuvant chemotherapy for high-grade STS were low in both cohorts but may be influenced more by selection bias based on clinicopathologic variables such as tumor size, margin status, and patient age than by prospective, randomized data.

76.03 Predictors of narcotic requirements after cervical endocrine surgery: results of a prospective trial

L. I. Ruffolo1, K. M. Jackson1, T. Chennell1, D. M. Glover1, J. Moalem1  1University Of Rochester,Department Of Surgery,Rochester, NY, USA

Introduction:

We adopted an opt-in narcotic prescription program for patients undergoing outpatient thyroid and parathyroid surgery. All patients received preoperative bilateral cervical blocks, and perioperative pain management was at the discretion of the anesthesia and perioperative staff. Patients were discharged with acetaminophen unless they requested narcotic medications. Here we report our experience with this program, as well as the factors that correlated with patients requesting a narcotic prescription at discharge.

Methods:

We prospectively collected data on patient demographics, medical/social history, operative details, and postoperative pain medication use and prescriptions. Univariate and multivariate analyses were performed using Student's t test for continuous variables, chi-square analysis for categorical variables, and nominal logistic regression. Patients who requested narcotics at discharge were contacted at least 1 month after surgery to determine the number and disposal status of any unused narcotic tablets. This study was approved by the university's institutional review board.
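
A minimal sketch of the logistic regression step (hypothetical column names; not the study's code):

```python
# Logistic regression of narcotic request at discharge on incision
# length and history of substance abuse; exponentiated coefficients
# are odds ratios.
import numpy as np
import statsmodels.api as sm

def narcotic_request_model(df):
    X = sm.add_constant(df[["incision_cm", "substance_abuse_hx"]])
    fit = sm.Logit(df["requested_narcotics"], X).fit()
    odds_ratios = np.exp(fit.params)      # e.g., OR per cm of incision
    conf_int = np.exp(fit.conf_int())     # 95% CIs on the OR scale
    return odds_ratios, conf_int
```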

Results:

Of 219 patients who had outpatient surgery during our study, only nine (4%) requested a narcotic prescription at discharge, and none called after discharge to request an analgesic prescription. Univariate analysis demonstrated that patients with longer incisions (p=0.007) or with a history of substance abuse (p<0.001), anxiety (p=0.012), depression (p<0.001), baseline narcotic use (p=0.002), or higher post-anesthesia pain scores (p=0.003) were more likely to request narcotic medications at discharge. Multivariate logistic regression again demonstrated larger incision length (OR 42.6, CI 1.12-1612, p=0.04) and history of substance abuse (OR 52.3, CI 1.6-1713, p=0.01) as predictive of requesting narcotic prescriptions.

Patients who opted to receive a narcotic prescription received 10-20 tablets of hydrocodone-acetaminophen (mean 16.1). All of these patients used at least one of the prescribed pills, and 2 used all of their prescribed pills. In total, 74 (51%) of the 145 tablets prescribed were consumed. On follow-up call 1 month after surgery, none of the unused tablets had been disposed of; all remained in the patients' medicine cabinets.

Conclusion:

The vast majority of patients can be comfortably managed without narcotic medications after thyroid and parathyroid surgery. No patient who was discharged without narcotics called to request a prescription.  Both patient and procedural factors contribute to narcotic requirement at discharge.

Even under this paradigm, approximately half of the prescribed pain tablets were unused and retained in patients' homes. Ongoing efforts to reduce unnecessary narcotic prescriptions, as well as community educational programs and mechanisms for narcotic disposal, remain paramount for reducing the number of narcotic tablets at risk for diversion.

76.02 Comparison of Recurrence Locations of HCC after Surgical Resection and Radiofrequency Ablation

S. Liu1, R. Hu1  1National Taiwan University Hospital,Department Of Surgery,Taipei City, TAIWAN, Taiwan

Introduction:
Hepatocellular carcinoma (HCC) is characterized by a high recurrence rate and poor long-term prognosis. Few studies have described differences in HCC recurrence location across treatments. The present study aimed to compare recurrence locations after curative surgical resection versus radiofrequency ablation (RFA).

Methods:
A total of 419 patients with a single solitary HCC ≤ 5 cm who underwent curative-intent surgical resection (SR) or RFA as initial treatment between January 2013 and December 2015 were retrospectively identified. Electronic medical charts and imaging were reviewed to collect clinical characteristics and patterns of recurrence. Patients were followed through December 2017, with a mean follow-up of 35.2 months. Recurrence location was classified as recurrence only in the same or a nearby segment relative to the primary tumor, recurrence involving a distant segment, or extrahepatic recurrence. Other recurrence characteristics, including AFP at recurrence, number of recurrent tumors, time to recurrence, and major vessel invasion, were also collected. Recurrence patterns and outcomes were compared between patients who received SR and patients who received RFA.

Results:
There were 157 patients in the SR group and 262 in the RFA group. Comparing the two treatment groups, the SR group showed better overall survival (p=0.001) and disease-free survival (p<0.001). Comparing recurrence patterns, the RFA group showed a significantly higher rate of recurrence confined to the same or a nearby segment (SR 29.5% vs. RFA 60.1%, p=0.001). Multivariate analysis identified RFA as a significant risk factor for same- or nearby-segment recurrence. There was no difference in other recurrence patterns between the two groups.

Conclusion:
For HCC patients with a single solitary tumor ≤ 5 cm, RFA as initial treatment may carry a higher rate of same- or nearby-segment recurrence compared to SR as initial treatment.
 

76.01 Centralization of High-Risk Cancer Surgery Within Hospital Networks

K. H. Sheetz1, J. B. Dimick1, H. Nathan1  1University of Michigan,Surgery,Ann Arbor, MI, USA

Background:

Hospital consolidation has the potential to improve the quality of care within regional delivery networks. We evaluated the extent to which hospital networks centralize high-risk cancer surgery and whether centralization is associated with changes in short-term clinical outcomes.

Methods:

We merged results from the American Hospital Association’s annual survey on network participation with Medicare claims to identify patients undergoing surgery for pancreatic, esophageal, colon, lung, or rectal cancer between 2005 and 2014. We calculated the degree to which networks centralized each procedure by calculating the annual proportion of surgeries performed at the highest volume hospital within each network. We then estimated the independent effect of centralization on the incidence of postoperative complications, death, and failure to rescue after accounting for patient complexity, hospital volume, and overall hospital quality. 
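
A minimal sketch of the centralization measure itself (hypothetical column names; not the authors' claims pipeline):

```python
# For each network, year, and procedure, the share of cases performed
# at the network's highest-volume hospital.
import pandas as pd

def centralization(df):
    # df: one row per case with 'network', 'year', 'procedure', 'hospital'.
    counts = (df.groupby(["network", "year", "procedure", "hospital"])
                .size().rename("cases").reset_index())
    totals = counts.groupby(["network", "year", "procedure"])["cases"]
    return (totals.max() / totals.sum()).rename("centralization")
```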

Results:

The average degree of centralization varied from 25.2% (range 6.6-100%) for colectomy to 71.2% (range 8.3-100%) for pancreatectomy. Greater centralization was associated with lower rates of postoperative complications and death for lung resection, esophagectomy, and pancreatectomy. For example, there was a 1.6% (95% CI -2.3 to -0.9) absolute reduction in 30-day mortality following pancreatectomy for each 20% increase in the degree of centralization within networks. Independent of volume and hospital quality, postoperative mortality for pancreatectomy was 58% lower in the most centralized networks compared to the least centralized networks (3.7% vs. 8.9%, p<0.01). Centralization was not associated with better outcomes for colectomy or proctectomy.

Conclusions:

Greater centralization of complex cancer surgery within existing hospital networks was associated with better outcomes. As hospitals affiliate in response to broader financial and organizational pressures, these networks may also present unique opportunities to improve the quality of high-risk cancer care.

75.10 Incompatible Living Donor Kidney Transplantation in Older Recipients

J. Long1, K. Jackson1, J. Motter1, M. Waldram1, K. Covarrubias1, B. Orandi2, D. Segev1, J. Garonzik-Wang1  1Johns Hopkins University School Of Medicine,Baltimore, MD, USA 2University Of Alabama at Birmingham,Birmingham, Alabama, USA

Introduction: Older individuals represent the fastest-growing population with end-stage renal disease (ESRD) in need of a kidney transplant. While incompatible living donor kidney transplantation (ILDKT) is known to provide a survival benefit, it is unknown whether older individuals have similar post-ILDKT outcomes. Knowledge of the risk profile of older ILDKT recipients could help inform patient counseling and clinical management of this rapidly growing group of transplant recipients.

Methods: Using a 22-center ILDKT cohort linked to SRTR data, we compared post-transplant outcomes of 154 older (age >60) ILDKT recipients to 871 younger (age <60) recipients. We analyzed mortality and death-censored graft failure using multivariable Cox regression. Delayed graft function (DGF) and length of stay (LOS) were evaluated using multivariable logistic regression and negative binomial regression, respectively. All models were adjusted for the following recipient and transplant characteristics: gender, body mass index, race, blood type, years on dialysis, panel reactive antibody, donor-specific antibody, and number of human leukocyte antigen mismatches.
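
A minimal sketch of the negative binomial LOS model (hypothetical column and covariate names; not the cohort's code):

```python
# Negative binomial regression of length of stay on older-recipient
# status plus adjustment covariates; exponentiated coefficients are
# LOS ratios.
import numpy as np
import statsmodels.api as sm

def los_model(df, covariates=("older", "bmi", "years_on_dialysis", "pra")):
    X = sm.add_constant(df[list(covariates)])
    fit = sm.NegativeBinomial(df["los_days"], X).fit()
    return np.exp(fit.params["older"])    # LOS ratio for older vs. younger
```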

Results: Compared to younger recipients, older recipients were more likely to be female (81.2% vs. 64.5%, p<0.001), white (76.6% vs. 64.5%, p=0.014), and to have been on dialysis prior to transplant (21.4% vs. 10.8%, p<0.001). Older recipients were less likely to have had a prior transplant (16.2% vs. 68.4%, p<0.001). The 1-, 5-, and 10-year post-transplant survival for older recipients was 92.2%, 81.1%, and 50.0%, compared to 95.5%, 86.6%, and 64.9% for younger recipients (p=0.02). The 1-, 5-, and 10-year death-censored graft survival was 96.6%, 84.6%, and 79.0% for older recipients compared to 93.6%, 76.0%, and 56.8% for younger recipients (p=0.005) (Table 1). In adjusted models, older recipients had a 60% increased risk of mortality (adjusted hazard ratio [aHR] 1.60, 95% CI 1.10-2.34, p=0.01) and a 45% decreased risk of death-censored graft failure (HR 0.55, 95% CI 0.36-0.86, p<0.01) compared to their younger counterparts. There were no differences in LOS (adjusted LOS ratio 0.99, 95% CI 0.87-1.13, p=0.9) or DGF (adjusted odds ratio 1.09, 95% CI 0.48-2.45, p=0.8).

Conclusion: While older ILDKT recipients have worse survival compared to younger recipients, their long-term survival is still good, and they have similar LOS and DGF. Older recipients should not be denied ILDKT based on age alone.

 

75.09 Pediatric Deceased Donor Kidney Transplantation Under the New Kidney Allocation System

K. R. Jackson1, S. Zhou1, J. Ruck1, C. Holscher1, A. Massie1, A. Kernodle1, J. Glorioso1, J. Motter1, N. Desai1, D. Segev1, J. Garonzik-Wang1  1Johns Hopkins University School Of Medicine,Baltimore, MD, USA

Introduction:  The new Kidney Allocation System (KAS) has resulted in fewer pediatric kidneys being allocated to pediatric deceased donor kidney transplant (pDDKT) recipients. This has prompted concerns that post-pDDKT outcomes may worsen and led to suggestions that KAS should be modified to reverse this change.

 

Methods:  To study whether post-transplant outcomes have worsened under KAS, we used SRTR data to compare outcomes of 953 pre-KAS pDDKT (age < 18 years) recipients (12/4/2012-12/3/2014) to 934 post-KAS pDDKT recipients (12/4/2014-12/3/2016). We analyzed mortality and graft loss using Cox regression, delayed graft function (DGF) using logistic regression, and length of stay (LoS) using negative binomial regression. Multivariable models were adjusted for donor and recipient characteristics.

Results: Post-KAS recipients had longer dialysis times (median 1.26 vs. 1.07 years, p=0.02) and were more often cPRA 100% (2.0% vs. 0.1%, p=0.001). Post-KAS recipients had less graft loss than pre-KAS recipients (hazard ratio [HR] 0.54, 95% CI 0.35-0.83, p=0.005, Table), but equivalent mortality (HR 0.72, 95% CI 0.23-2.28, p=0.6), DGF (odds ratio [OR] 1.32, 95% CI 0.93-1.87, p=0.1), and LoS (LoS ratio 1.06, 95% CI 0.96-1.16, p=0.3). After adjusting for donor/recipient characteristics, there were no post-KAS differences in mortality (adjusted HR [aHR] 0.93, 95% CI 0.23-3.73, p=0.9), DGF (adjusted OR 1.37, 95% CI 0.93-2.03, p=0.1), or LoS (adjusted LoS ratio 1.04, 95% CI 0.93-1.16, p=0.5). However, post-KAS pDDKT recipients still had less graft loss (aHR 0.61, 95% CI 0.39-0.96, p=0.03).

Conclusion: Despite a decrease in pediatric donor kidneys being allocated to pDDKT recipients, there is no evidence that post-transplant outcomes have worsened for pDDKT recipients under KAS. Therefore, any KAS modification discussions should consider this context.

 

75.08 Trends in Incidence, Treatment and Survival of Gallbladder Cancer: A Nationwide Cohort Study

E. A. De Savornin Lohman1, T. De Bitter2, R. Verhoeven3, I. Nagtegaal2, C. Van Laarhoven1, R. Van Der Post2, P. De Reuver1  1Radboudumc,Surgery,Nijmegen, Netherlands 2Radboudumc,Pathology,Nijmegen, Netherlands 3Netherlands Comprehensive Cancer Organization,Eindhoven, Netherlands

Introduction:

Gallbladder cancer (GBC) is a rare but lethal malignancy, primarily diagnosed at an advanced stage unless detected incidentally after laparoscopic cholecystectomy for benign gallbladder disease. Scarce data are available on GBC treatment and survival outcomes in Western populations. Consequently, controversy exists regarding surgical and systemic treatment. Using data from the Netherlands Cancer Registry, trends in incidence, treatment and survival of GBC patients were evaluated.

Methods:
Data from 2,427 GBC patients diagnosed between 2000 and 2015 were included in this nationwide population-based study. Incidence and demographics were assessed. Treatment strategies and associated survival were analysed using Kaplan-Meier methods and propensity score matching.

Results:
Age-standardised incidence of GBC varied from 0.6 to 0.9 per 100,000 person-years and did not change significantly over time. Demographic characteristics are presented in Table 1. Most tumours (67.2%) were detected pre-operatively. Overall median survival was 5.2 months and was primarily determined by tumour stage, ranging from 36.2 months in stage I patients to 3.0 months in stage IV patients. Between 2000 and 2015, median survival improved from 4.1 to 6.6 months (p < 0.01). After propensity score matching, median survival in surgically treated stage III and IV gallbladder cancer was 7.4 months, compared to 3.3 months for non-surgically treated patients (p < 0.001). Stage II GBC patients receiving additional liver bed resection showed superior median survival compared to those who did not undergo additional surgery (46.6 vs. 21.7 months, p < 0.001). Systemic therapy in advanced-stage GBC improved median survival from 2.8 to 7.4 months.

Conclusion:

Although an increase of 2 months in overall survival was demonstrated over time, the clinical significance of this finding is debatable, and the outcome of GBC patients remains poor. A considerable, clinically relevant increase in survival was seen in two subgroups: patients with early GBC receiving additional resection and patients with advanced GBC treated with systemic therapy. More aggressive treatment strategies should be advocated, as they appear to improve the prospects of GBC patients.

75.07 Effects of Kidney Allocation System on Deceased Donor Kidney Transplant Rates Across Race/Ethnicities

A. C. Perez-Ortiz1,2, N. Elias1  1Massachusetts General Hospital,Transplant Center,Boston, MA, USA 2Yale University School Of Public Health,New Haven, CT, USA

Introduction:
The new deceased donor (DD) Kidney Allocation System (KAS) aimed to decrease racial disparity in allocation, indirectly improving DD transplant rates (TR) for non-Whites. Three years after implementation, we lack evidence of this specific efficacy. Here we assess whether waitlisting and transplant rates across races/ethnicities improved after KAS implementation.

Methods:
To assess systematic differences in US TR (1989-2017) and the effect pre-KAS (1989-2014) and post-KAS (2015-2017), we calculated the slope on a per-year basis with data from the Organ Procurement and Transplantation Network database. Using regression modeling, we then estimated the effect of race/ethnicity on kidney waitlist additions and TR across periods, adjusting for meaningful covariates. To eliminate the effect of improved deceased donation rates, we compared kidney with liver TR; a difference would indicate a positive impact of KAS. Finally, we similarly evaluated the slope of change for kidney waitlist additions.

Results:
We found three distinct periods wherein kidney TR varied for all ethnicities. Between 1989-2006 and 2015-2017, there was a significant positive slope (yearly growth in TR) of β=456.5 and β=913.7, respectively, higher in the latter period (p<0.02). However, between 2006 and 2014 there was no change in TR (β=1.6, p<0.01). Furthermore, compared to DD liver TR after KAS implementation (2015-2017), DD kidney TR increased up to 76% (p=0.02); before 2015, there were no differences between the two groups (p>0.20). Moreover, kidney TR have steadily risen for non-Whites compared to Whites (p=0.03) (Figure). KAS implementation increased the rate for non-Whites ~12-fold (β=344.8, SE=169.4) compared to Whites (β=28.4, SE=45.3) (p=0.04). Lastly, this improvement was not mirrored in waitlist additions, for which KAS did not change the slope (p=0.25).

Conclusion:
We have preliminary evidence that KAS has improved TR, especially benefiting non-Whites (Blacks, Hispanics, and Asians). This improvement is independent of organ donation rates in the same era. Waitlist additions did not change equally, arguing for improved education and other means of altering referral and listing practices.
 

75.06 Impact of Area of Deprivation Index On Hospital Readmissions after Surgery for Pancreas Cancer

A. N. Krepline1, J. Mora1, M. Aldakkak1, S. Misustin1, K. Christians1, C. N. Clarke1, B. George2, P. S. Ritch2, W. A. Hall3, B. A. Erickson3, N. Kulkarni4, A. H. Khan5, D. B. Evans1, S. Tsai1  1Medical College Of Wisconsin,Division Of Surgical Oncology,Milwaukee, WI, USA 2Medical College Of Wisconsin,Division Of Hematology And Oncology,Milwaukee, WI, USA 3Medical College Of Wisconsin,Department Of Radiation Oncology,Milwaukee, WI, USA 4Medical College Of Wisconsin,Department Of Radiology,Milwaukee, WI, USA 5Medical College Of Wisconsin,Division Of Gastroenterology And Hepatology,Milwaukee, WI, USA

Introduction: The area deprivation index (ADI) is a validated metric used to quantify socioeconomic disadvantage by neighborhood. The ADI is composed of 17 educational, employment, housing, and poverty measures abstracted from the US Census Long Form and the American Community Survey; higher ADIs signify a more disadvantaged neighborhood. We sought to examine the impact of ADI on readmission rates after surgery among patients with pancreatic cancer (PC).

Methods: Patients with resectable and borderline resectable PC treated at the Medical College of Wisconsin from 2009 to 2018 were identified. The ADI for each patient was obtained using the ZIP+4 code. Patients were dichotomized into low and high ADI categories based on the median ADI. Demographic, clinicopathologic, and readmission data were abstracted.

Results: Neoadjuvant therapy and surgery were completed in 310 patients with resectable and borderline resectable PC. The median ADI was 97.32 (IQR 17.7); 155 patients (50%) had a low ADI and 155 (50%) a high ADI. No differences were observed between groups in demographic characteristics, clinical stage, baseline carbohydrate antigen 19-9, or type of neoadjuvant therapy received. In addition, no differences were observed between the low and high ADI groups in the types of operation performed, need for vascular reconstruction, hospital length of stay, or pathologic stage. Of the 310 patients, 66 (21%) were readmitted within 90 days of surgery: 26 (17%) of the 155 patients with a low ADI and 40 (26%) of the 155 patients with a high ADI (p=0.049). In the low and high ADI groups, the most common reasons for readmission were procedure-related complications (n=16 (10%) vs. 23 (15%) patients, p=0.30) and failure to thrive (n=7 (5%) vs. 13 (8%) patients, p=0.25), respectively. For patients with low vs. high ADI, readmission occurred at a median of 19 days (IQR 14) and 27 days (IQR 24), respectively (p=0.02). In multivariable logistic regression, high ADI was associated with 1.80-fold increased odds of 90-day readmission (Table 1; 95% CI 1.02-3.16, p=0.04).

Conclusion: ADI was not associated with more advanced clinical or pathologic stage or with the operation performed. However, patients with a high ADI were at increased risk for 90-day readmission. Additional studies are needed to identify modifiable factors associated with readmission in this high-risk group.