21.01 Not all Readmissions are Created Equal – Index vs. Non-Index Readmissions After Major Cancer Surgery

S. Zafar1, A. A. Shah1, H. Channa2, L. L. Wilson4, N. Wasif3  1Howard University College Of Medicine,General Surgery,Washington, DC, USA 2Purdue University,Agricultural Economics,West Lafayette, IN, USA 3Mayo Clinic In Arizona,Surgical Oncology,Phoenix, AZ, USA 4Howard University College Of Medicine,Surgical Oncology,Washington, DC, USA

Introduction:
Hospital readmissions after major cancer surgery pose a major healthcare burden and are associated with increased costs and worse outcomes. Increasing regionalization of cancer surgery has the inadvertent potential to fragment care if readmissions occur at a different hospital from the index facility. Using a national dataset, we aimed to quantify rates of readmission to non-index hospitals after major cancer surgery and to compare outcomes between index and non-index hospital readmissions.

Methods:
We used the Nationwide Readmissions Database (2013) as our data source. All adult patients undergoing a major cancer operation (defined as esophagectomies/gastrectomies, hepatobiliary resections, pancreatectomies, colorectal resections, and cystectomies) within the first 9 months of the year were selected. Readmission characteristics including timing, cost, morbidity, and mortality were analyzed. Discharge weights were used to calculate national estimates. Adjusting for clustering by facility, we used multivariate logistic regression to identify factors associated with non-index vs. index readmissions and to identify differences in mortality, major complications, and subsequent readmissions. Generalized linear modeling was used to test for differences in length of stay (LOS) and hospital costs between the two groups.
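For illustration, the two modeling steps described above could be set up in Python with statsmodels as below; this is a minimal sketch rather than the authors' code, and the file name and column names (e.g., nonindex_readmit, hospital_id) are hypothetical.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("nrd_2013_cancer_surgery.csv")  # hypothetical patient-level extract

# Logistic regression for non-index vs. index readmission with standard errors
# clustered on the index facility.
readmit_model = smf.logit(
    "nonindex_readmit ~ age + male + procedure + comorbidities + medicaid"
    " + index_los + inhosp_complication + snf_discharge + teaching_hospital",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["hospital_id"]})
print(readmit_model.summary())

# Generalized linear model (gamma family, log link) for readmission hospital costs.
cost_model = smf.glm(
    "readmission_cost ~ nonindex_readmit + age + comorbidities",
    data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()
print(cost_model.summary())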

Results:
A total of 57,362 patients with 86,362 hospital admissions were analyzed. Overall, the 90-day readmission rate was 23.1% and the median time to readmission was 42 days (IQR 20-70 days). Weighted analysis estimated the total national cost of 90-day readmissions at $682 million. Of the 17,020 readmissions, 22.0% were to a non-index hospital. Independent factors associated with 90-day readmission to a non-index hospital included younger age, male gender, type of procedure, more comorbidities, Medicaid insurance, longer LOS, in-hospital complications, discharge to a nursing facility, and index surgery at a teaching hospital (p<0.05). Following risk adjustment, patients readmitted to non-index hospitals had 32% higher odds of mortality (OR 1.32, 95% CI: 1.03-1.70) and 26% higher odds of a major complication (OR 1.26, 95% CI: 1.10-1.43). Subsequent readmissions and hospital costs were not significantly different between the two groups.

Conclusion:
The 90-day readmission rate following major cancer surgery in the United States is 23.1%, and 22% of these readmissions are to a non-index hospital. Compared with readmission to the index hospital, readmission to a non-index hospital is associated with higher mortality and morbidity. Targeted interventions to reduce non-index readmissions may mitigate fragmentation of postoperative care and improve outcomes.
 

20.12 eCART Before the Hearse: Predicting Severe Adverse Events in Over 30,000 Surgical Inpatients

B. Bartkowiak1, A. M. Snyder1, A. Benjamin2, A. Schneider2, N. M. Twu1, M. M. Churpek1, D. P. Edelson1, K. K. Roggin2  1University Of Chicago,Department Of Medicine,Chicago, IL, USA 2University Of Chicago,Department Of Surgery,Chicago, IL, USA

Introduction:  Postoperative clinical deterioration on the wards is associated with increased morbidity, mortality, and cost.  Early warning scores (EWSs) have been developed to detect inpatient clinical deterioration and trigger rapid response activation in general ward populations, but little is known about their application to postoperative inpatients.

Methods:  We aimed to assess the accuracy of three general EWSs for predicting severe adverse events (SAEs) in postoperative inpatients. We conducted a retrospective cohort study of adult patients hospitalized on the wards following operative procedures at an academic medical center in the United States from 11/2008 to 1/2016.  We compared the Modified Early Warning Score (MEWS), National Early Warning Score (NEWS), and the electronic Cardiac Arrest Risk Triage (eCART) score. The maximum scores from postoperative ward locations were used for analysis. SAEs were defined as ICU transfer, ward cardiac arrest, or ward death in the postoperative period.  Accuracy was evaluated using the area under the receiver operating characteristic curve (AUC).  Patients with multiple operations were censored at the start of the second procedure.
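As a rough illustration of the accuracy comparison (not the study code), the AUC for each score could be computed from an admission-level table in Python; the file and column names below are hypothetical.

import pandas as pd
from sklearn.metrics import roc_auc_score

admissions = pd.read_csv("postop_ward_scores.csv")  # one row per admission (hypothetical)

# Maximum postoperative score for each EWS and a binary severe-adverse-event indicator.
for score in ["ecart_max", "news_max", "mews_max"]:
    auc = roc_auc_score(admissions["sae"], admissions[score])
    print(f"{score}: AUC = {auc:.2f}")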

Results: Of the 30,009 patient admissions included in the study, 4% (n=1,530) experienced an SAE a median of 2 days (IQR 0.3-5.6) following the procedure.  Patients who experienced an SAE reached higher maximum scores during their postoperative stay, with the following median (IQR) values: eCART, 58 (12-159) vs 12 (7-23); MEWS, 4 (3-6) vs 3 (2-3); and NEWS, 9 (7-11) vs 6 (4-7).  The accuracy for predicting the composite outcome was highest for eCART (AUC 0.80 [CI 0.79-0.81]), followed by NEWS (AUC 0.77 [CI 0.75-0.78]) and MEWS (AUC 0.75 [CI 0.74-0.77]); see figure. Of the individual vital signs and labs, high respiratory rate was the most predictive (AUC 0.70) and high temperature the least (AUC 0.48).

Conclusion: EWSs are predictive of SAEs in postoperative surgical patients. eCART is significantly more accurate in this patient population than both NEWS and MEWS. Future work validating these findings multi-institutionally and determining whether the use of eCART improves the outcomes of high-risk post-operative patients is warranted.

20.11 Safety of Early Venous Thromboembolism Prophylaxis for Isolated Blunt Splenic Injury: A TQIP Study

B. Lin1, K. Matsushima1, L. De Leon1, G. Recinos1, A. Piccinini1, E. Benjamin1, K. Inaba1, D. Demetriades1  1University Of Southern California,Acute Care Surgery,Los Angeles, CALIFORNIA, USA

Introduction:

Non-operative management (NOM) has become the standard of care in hemodynamically stable patients with blunt splenic injury. Due to the potential risk of bleeding, there are no widely accepted guidelines for an optimal and safe timeframe for the initiation of venous thromboembolism (VTE) prophylaxis in patients undergoing NOM. The purpose of this study was to explore the association between the timing of VTE prophylaxis initiation and NOM failure rate in isolated blunt splenic injury. 

Methods:

After approval by the institutional review board, we utilized the American College of Surgeons (ACS) Trauma Quality Improvement Program (TQIP) database (2013-2014) to identify adult patients (≥18 years) who underwent NOM for isolated blunt splenic injuries (Grade III/IV/V). Patients were excluded if they expired within 24 hours of admission or required surgical management of splenic injury within 12 hours of admission. Failure of NOM was defined as any splenic surgery more than 12 hours after admission. The incidence of overall NOM failure was compared between two groups: 1) VTE prophylaxis <48 hours after admission (early prophylaxis group), and 2) VTE prophylaxis ≥48 hours (late prophylaxis group). Similarly, we compared the incidence of NOM failure after the initiation of VTE prophylaxis between the early and late prophylaxis groups. Multiple logistic regression analysis was performed for NOM failure, adjusting for clinically important covariates including the timing of VTE prophylaxis initiation.
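As a simple illustration (not the authors' analysis), the comparison of NOM failure proportions between the early and late prophylaxis groups could be run with Fisher's exact test, one reasonable choice since the abstract does not name the test; the counts below are approximate values implied by the reported percentages and should be treated as hypothetical.

from scipy.stats import fisher_exact

# Rows: early (<48 h) vs. late (>=48 h) prophylaxis; columns: NOM failure vs. no failure.
table = [[7, 137],    # roughly 4.9% failure among 144 early-prophylaxis patients (approximate)
         [32, 115]]   # roughly 21.8% failure among 147 late-prophylaxis patients (approximate)
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.2g}")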

Results:

A total of 816 patients met the inclusion criteria; median age: 34 years (IQR 23-52), 67% male, median ISS: 13 (IQR 10-17), and 679 patients (83.2%) with severe splenic injury (Grade IV/V). Of these patients, VTE prophylaxis was not administered in 525 (64.3%), whereas VTE prophylaxis was given <48 hours and ≥48 hours after admission in 144 and 147 patients, respectively. Among patients who received VTE prophylaxis, angioembolization of the spleen was performed in 30 patients (10.3%). The overall NOM failure rate was 13.4% (39/291). While the overall NOM failure rate was significantly lower in the early compared with the late prophylaxis group (4.9% vs. 21.8%, p<0.001), there was no significant difference in the NOM failure rate after the initiation of VTE prophylaxis between the two groups (3.5% vs. 3.4%, p=1.00). In multiple logistic regression analysis, early initiation of VTE prophylaxis was not significantly associated with NOM failure (OR: 1.19, 95% CI 0.31-4.51, p=0.80).

Conclusion:

Our results suggest that early initiation of VTE prophylaxis (<48 hours) does not increase the risk of NOM failure in patients with isolated splenic injury. Further prospective study to validate the safety of early VTE prophylaxis is warranted.   
 

20.10 Can Medicaid Expansion Decrease Disparity in Surgical Cancer Care at High Quality Hospitals?

D. Xiao1,2,3, C. Zheng1,2,3, M. Jindal1,2,3, C. Ihemelandu1,2,3, L. Johnson1,2,3, T. DeLeire2,3, N. Shara1,2,3, W. Al-Refaie1,2,3  1MedStar Georgetown University Hospital,Washington, DC, USA 2MedStar Georgetown Surgical Outcomes Research Center,Washington, DC, USA 3Georgetown University Medical Center,Washington, DC, USA

Introduction:  Skepticism about the Medicaid program's ability to provide quality care has contributed to the debate over the Affordable Care Act's (ACA) Medicaid expansion. It is unknown whether Medicaid expansion can improve access to high-quality surgical cancer care for poor Americans. To address this gap, we examined the effects of the largest pre-ACA expansion in Medicaid eligibility, which occurred in New York in 2001. We hypothesized that this policy decreased disparities in access to surgical cancer care at high-quality hospitals (HQHs) by insurance type and by race.

Methods:  We identified 67,685 non-elderly adults (18-64 years old) from the 1997-2006 New York State Inpatient Database who underwent one of nine major cancer resections. HQHs were defined as either high-volume hospitals (HVHs, assigned yearly as the hospitals with the highest procedure volumes that together treated one-third of all patients) or low-mortality hospitals (LMHs), whose observed-to-expected mortality ratios were <0.7. The analysis of access to HVHs was restricted to patients undergoing procedures with a strong volume-outcome relationship (esophagus, liver, stomach, pancreas, and urinary bladder; N=10,737).

Disparity was defined as the model-adjusted difference in percentage of patients operated at HQH by insurance type (Medicaid/uninsured vs privately insured) or by race (blacks vs whites). Consistent with published literature, we combined Medicaid and uninsured patients to capture changes in access to care due to newly gained Medicaid coverage by an otherwise uninsured patient. Covariates included age, sex, procedure type and emergency admission. Levels of disparity were calculated quarterly for each pair of comparison, then regressed using interrupted time series to evaluate the impact of Medicaid expansion.
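For illustration, the interrupted time series step could be fit as a segmented regression on the quarterly disparity series in Python with statsmodels; this is a minimal sketch under assumed column names (quarter, post, disparity_pct), not the authors' specification.

import pandas as pd
import statsmodels.formula.api as smf

ts = pd.read_csv("quarterly_disparity.csv")  # one row per quarter (hypothetical file)

# post = 1 for quarters after the 2001 expansion; time_since = quarters elapsed since expansion.
ts["time_since"] = (ts["quarter"] - ts.loc[ts["post"] == 1, "quarter"].min()).clip(lower=0)

its = smf.ols("disparity_pct ~ quarter + post + time_since", data=ts).fit(
    cov_type="HAC", cov_kwds={"maxlags": 2}  # Newey-West errors to allow for serial correlation
)
print(its.params)  # 'time_since' captures the post-expansion change in trend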

Results: Overall, 15.0% of the study cohort were Medicaid/self-pay and 12.1% were black. The disparity in access to HVHs by insurance type was reduced by 0.61 percentage points per quarter following the expansion (p=0.003) (Figure). Meanwhile, Medicaid/uninsured beneficiaries had access to LMHs similar to that of the privately insured, and no significant change was detected around the expansion. Conversely, racial disparity increased by 0.86 percentage points per quarter (p<0.001) in access to HVHs (Figure) and by 0.48 percentage points per quarter (p=0.005) in access to LMHs after the expansion.

Conclusions: The pre-ACA Medicaid expansion reduced the disparity in access to surgical cancer care at HQH by insurance type. However, it was associated with an increased racial gap in access to HQH for surgical cancer care. Further investigations are needed to explore whether Medicaid expansion may aggravate racial disparity in surgical cancer care.

20.07 Failure to Rescue Following Cytoreductive Surgery and Hyperthermic Intraperitoneal Chemotherapy

K. Li1, A. A Mokdad1, M. Augustine1, S. Wang1, M. Porembka1, A. Yopp1, R. Minter1, J. Mansour1, M. Choti1, P. Polanco1  1University Of Texas Southwestern Medical Center,Division Of Surgical Oncology,Dallas, TX, USA

Introduction: Cytoreductive surgery with hyperthermic intraperitoneal chemotherapy (CRS/HIPEC) has been shown to significantly improve the survival of selected patients with peritoneal carcinomatosis (PC). However, this invasive procedure can result in significant morbidity and mortality. Using a national cohort of patients, this study aims to identify perioperative patient characteristics predictive of failure to rescue (FTR), defined as mortality following postoperative complications from CRS/HIPEC.

Methods: Patients who underwent CRS/HIPEC between 2005 and 2013 were identified in the American College of Surgeons National Surgical Quality Improvement Program (NSQIP) dataset. Patients who suffered any postoperative complication were identified. Major complications were defined as those corresponding to Clavien-Dindo grade III or IV. Failure to rescue (FTR) was defined as 30-day mortality in the setting of a treatable complication. Patients who suffered FTR were compared against those who survived a complication (non-FTR) using patient characteristics, preoperative clinical information, types of resections, and severity of complication. Univariable comparisons were conducted using the Wilcoxon rank-sum test for continuous variables and Fisher's exact test for categorical variables. Predictors of FTR were identified using a multivariable logistic regression model.

Results: From the NSQIP database, 915 eligible CRS/HIPEC cases were identified in the study period. Overall, 382 patients (42%) developed postoperative complications and constituted our study population. A total of 88 (10%) patients suffered one or more major complications. Seventeen patients died following a complication, amounting to an FTR rate of 4%. Patients’ age, gender, and race were similar between FTR and non-FTR groups. Colorectal cancer was the most common diagnosis in the FTR and non-FTR groups (35% vs 25%, respectively). The rates of multi-visceral resections were also similar (88% vs 86%, p=1.00). FTR patients were more likely than non-FTR patients to have dependent functional status (18% vs 2%, p=0.01), have ASA class 4 status (29% vs 8%, p=0.01), develop three or more complications (65% vs 24%, p<0.01), and suffer a major complication (94% vs 20%, p<0.01). Independent predictors of FTR were as follows: having a major complication (odds ratio [OR] 66.0, 95% confidence interval [CI] 8.4-516.6), dependent functional status (OR 5.9, 95%CI 0.8-41.9), and ASA class 4 (OR 13.4, 95%CI 1.2-146.8). Procedure type and diagnosis were not predictive of FTR.

Conclusion: Morbidity associated with CRS/HIPEC is comparable to that of other complex surgical procedures, with an acceptably low rate of death in this national cohort of patients. Dependent functional status and ASA class 4 are patient factors predictive of FTR. These patients have a prohibitively high risk of 30-day mortality following postoperative complications and should be considered ineligible for CRS/HIPEC.

20.06 Drivers of Variation in 90-day Episode Payments for Coronary Artery Bypass Grafts (CABG)

V. Guduguntla1,2,5, J. Syrjamaki1,5, C. Ellimoottil1,3,4,5, D. Miller1,3,4,5, P. Theurer1,6, D. Likosky1,7, J. Dupree1,3,4,5  1University Of Michigan,Ann Arbor, MI, USA 2University Of Michigan,Medical School,Ann Arbor, MI, USA 3Institute For Healthcare Policy And Innovation,Ann Arbor, MICHIGAN, USA 4Dow Division Of Health Services Research,Department Of Urology,Ann Arbor, MICHIGAN, USA 5Michigan Value Collaborative,Ann Arbor, MICHIGAN, USA 6The Michigan Society Of Thoracic And Cardiovascular Surgeons Quality Collaborative,Ann Arbor, MICHIGAN, USA 7Section Of Health Services Research And Quality,Department Of Cardiac Surgery,Ann Arbor, MICHIGAN, USA

Introduction:  Coronary Artery Bypass Grafting (CABG) is a common and expensive surgery. Starting in July 2017, CABG will become part of a mandatory Medicare bundled payment program, an alternative payment model in which hospitals are paid a fixed amount for the entire care process, including care for 90 days post-discharge. The specific drivers of CABG payment variation are largely unknown, and an improved understanding will be important for policy makers, hospital leaders, and clinicians.

Methods:  We identified patients undergoing CABG at 33 non-federal hospitals in Michigan from 2012 through June 2015 using data from the Michigan Value Collaborative, which includes adjusted claims from Medicare and Michigan’s largest private payer. We calculated 90-day price-standardized, risk-adjusted, total episode payments for each of these patients, and divided hospitals into quartiles based on mean total episode payments. We then disaggregated payments into four components: readmissions, professional, post-acute care, and index hospitalization. Lastly, we compared payment components across hospital quartiles and determined drivers of variation for each component.
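A minimal sketch (not the collaborative's code) of the quartile construction and payment disaggregation in Python with pandas follows; the file and column names are hypothetical.

import pandas as pd

episodes = pd.read_csv("cabg_episodes.csv")  # one row per 90-day episode (hypothetical extract)

components = ["index_hospitalization", "professional", "post_acute_care", "readmissions"]
by_hospital = episodes.groupby("hospital_id")[["total_payment"] + components].mean()
by_hospital["quartile"] = pd.qcut(by_hospital["total_payment"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

# Mean payment components by hospital spending quartile.
print(by_hospital.groupby("quartile")[components].mean())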

Results: We identified a total of 5,910 patients across 33 Michigan hospitals.  The mean age was 68 and 74% were male.  The mean 90-day episode payment for CABG was $48,571 (SD: $20,739; Range: $11,723-$356,850). The highest cost quartile had a mean total episode payment of $54,399 compared to $45,487 for the lowest cost quartile, a difference of $8,912 or 16.4%. The highest cost quartile hospitals, when compared to the lowest cost quartile hospitals, had greater readmissions (1.35x), professional (1.34x), post-acute care (1.30x), and index hospitalization payments (1.15x) (all p<0.05). The main drivers of this variation were patients with multiple readmissions, increased use of evaluation and management (E&M) services, higher utilization of inpatient rehabilitation, and diagnosis related group distribution, respectively.

Conclusions: Significant variation exists in 90-day CABG episode payments for Medicare and private payer patients in Michigan. These results have implications for policymakers implementing CABG bundled payment programs, hospital administrators leading quality improvement efforts, and clinicians caring for these patients. Specifically, hospitals and clinicians entering bundled payment programs for CABG should work to understand local sources of variation, including multiple readmissions as well as inpatient E&M and post-discharge rehabilitation services. 

20.05 Prospective Study of a Steroid-Free, Low Dose Tacrolimus with Everolimus Regimen in Kidney Transplant

D. Lyubashevsky1, A. Shetty2, J. Leventhal2, M. J. Ansari2, V. Mas3, J. Matthew2, L. Gallon2  3Virginia Commonwealth University,Department Of Surgery,Richmond, VA, USA 1New York Medical College,Valhalla, NY, USA 2Northwestern University,Chicago, IL, USA

Introduction:
Calcineurin inhibitors (CNIs) such as Tacrolimus (FK) are considered the mainstay of immunosuppression (IS) after kidney transplantation. However, CNIs are also known to have nephrotoxic properties: potent renal vasoconstriction can lead to irreversible and progressive tubulo-interstitial injury and glomerulosclerosis. One approach to circumvent this problem is CNI minimization or 'sparing' by substitution or addition of a non-CNI immunosuppressive agent such as mycophenolate mofetil (MMF) or a mammalian Target of Rapamycin inhibitor (mTORi). We hypothesize that combining low-dose FK with the mTORi Everolimus may improve renal allograft outcomes, namely allograft survival and estimated GFR (eGFR).

Methods:
Forty adult kidney transplant recipients were randomized at the time of transplant to 2 groups: low-dose FK with Everolimus, and standard-dose FK with MMF. All patients received Alemtuzumab (Campath) induction and rapid steroid elimination. Everolimus levels were maintained between 3 and 8 ng/mL. Blood samples were taken at the time of transplant and at 3, 6, 12, 18, and 24 months post-randomization to record estimated GFR (eGFR) and serum drug concentrations. Kidney biopsy data were gathered at transplant and at 3, 12, and 24 months post-transplant. Primary outcomes were rejection-free graft survival and eGFR.

Results:
Mean follow-up time was similar between groups: 26 ± 8.5 months for FK/Everolimus and 26 ± 10.9 months for FK/MMF (p = 0.975). Baseline characteristics such as demographics, HLA match, and time on dialysis were statistically similar between the two groups. FK levels in the FK/Everolimus group (4.2 ± 1.56 ng/mL) were significantly lower (p=0.003) than in the FK/MMF group (6.45 ± 1.96 ng/mL). The FK/Everolimus group experienced 1 episode of acute rejection compared to 4 episodes in the FK/MMF group; using Cox proportional hazards survival analysis, rejection-free graft survival (Fig 1A) was statistically similar between groups. eGFRs (FK/Everolimus = 61.535 ± 3.44 mL/min/1.73 m2; FK/MMF = 59.03 ± 3.24 mL/min/1.73 m2) were statistically similar between groups (Fig 1B), as were adverse events, including proteinuria, infectious events, and death.

Conclusion:
A combination of low-dose FK with Everolimus may be an alternative immunosuppressive strategy to the standard-of-care FK + MMF combination in a steroid-free maintenance IS program. The favorable impact of mTORi on T-regulatory cells may provide added protection against rejection episodes. Longer clinical follow-up, a larger sample size, and evaluation of renal biopsy data and circulating T-regulatory cell populations are warranted for future investigation.
 

20.04 Preoperative Enteral Access is not Requisite Prior to Multimodality Treatment of Esophageal Cancer

T. K. Jenkins4, A. N. Lopez4, G. A. Sarosi1,2, K. Ben-David3, R. M. Thomas1,2  1University Of Florida,Department Of Surgery,Gainesville, FL, USA 2North Florida/South Georgia Veterans Health System,Department Of Surgery,Gainesville, FL, USA 3Mount Sinai Medical Center,Department Of Surgery,Miami Beach, FL, USA 4University Of Florida,College Of Medicine,Gainesville, FL, USA

Introduction:  While prior research has shown that preoperative (preop) enteral access is feasible and safe in patients to support their nutrition prior to esophagectomy, controversy exists regarding its necessity, as subjective dysphagia is a poor indicator of need for enteral access. We hypothesized that patients who underwent preop enteral access prior to esophagectomy for cancer fared no better than those who had surgical enteral access performed at the time of esophagectomy.

Methods: An IRB approved retrospective database of patients undergoing esophagectomy for esophageal malignancy from 2007-2014 was established. Clinicopathologic factors were recorded including preop enteral access, weight change, nutritional labs, preop cancer stage, operative details, and perioperative complications.

Results: One hundred fifty-six patients were identified, of whom 99 (63.5%) received preop chemoradiation (cXRT) prior to esophagectomy. Since preop cXRT can influence perioperative nutrition, this group comprised the study cohort. Fifty (50.5%) underwent preop enteral access [esophageal stent (1), gastrostomy (14), jejunostomy (32), nasoenteric (1), combination (2); "access group"] prior to cXRT followed by esophagectomy and feeding jejunostomy unless one was pre-existing. There was no difference in demographics, preop tumor staging, or operative details between the access and non-access groups. No difference was noted between the access and non-access groups in subjective dysphagia [n=43 (86%) vs 37 (75.5%), respectively; p=0.2] or mean preop serum albumin (gm/dl) [3.9 (range 3.1-4.5) vs 4 (range 3.3-6.4), respectively; p=0.2]. To account for potential cXRT-related delays, we compared median time from diagnosis to surgery, which did not differ between the access and non-access groups (126d vs 126d, p=0.5). Comparing weight loss from 6 months preop to surgery, the access group had a mean 5.2% weight loss (range -29.4% to +6.6%) vs a 4.5% reduction (range -19.4% to +68.2%) in the non-access group (p=0.8). Additionally, mean weight change from 6 months preop to 6 months postop was similar in the access vs non-access groups [-11.2% (range -44% to +5.3%) vs -15.4% (range -34.1% to -1.4%), respectively; p=0.1].  Complication rates between the access and non-access groups (64% vs 51%, respectively; p=0.2) were likewise similar.  In patients with reported dysphagia, there was no difference in weight change from 6 months preop to 6 months postop in the access vs non-access group (-11% vs -15.2%, p=0.1, respectively).

Conclusions: Despite the bias of establishing enteral access prior to preop cXRT for esophageal malignancy in candidates for esophagectomy, there was no difference in weight change, preop albumin, or complication rates in patients who had preop enteral access versus those who did not. Patients with esophageal malignancy should therefore proceed directly to appropriate neoadjuvant and surgical therapy with enteral access performed at the time of definitive resection or reserved for those with obstruction confirmed on endoscopy.

20.02 Explicating Hospital-Based Violence Intervention Program Risk Assessment Through Qualitative Analysis

E. J. Kramer1,2, J. Dodington1, T. Henderson2, R. Dicker2, C. Juillard2  1Yale University School Of Medicine,New Haven, CT, USA 2University Of California San Francisco,Surgery,San Francisco, CA, USA

Introduction:  Violent injury is the second most common cause of death among 15-24 year-olds in the US. Up to 45% of violently injured youth return to the emergency department with a second violent injury. Hospital-based violence intervention programs (HVIPs) have been shown to reduce injury recidivism through intensive case management for victims of violence at high risk for recidivism. To date, case manager gestalt has guided identification of victims at high risk for re-injury; no validated guidelines for risk-assessment strategies in the HVIP setting have been published. We aimed to use qualitative methods to investigate the key components of risk assessments employed by case managers in a mature HVIP with demonstrated effectiveness. We propose a risk assessment model based on this qualitative analysis, combined with a literature review and a review of current best practices of established HVIPs.

Methods:  A qualitative approach was used due to the complexity and interconnectivity of inherently non-binary and socially influenced risk factors. An established academic hospital-affiliated HVIP served as the nexus for this research. Thematic saturation was reached with 11 semi-structured interviews and 2 focus groups conducted with HVIP case managers and key informants identified through snowball sampling. Interactions were audiotaped, transcribed, and analyzed by a four-member team using NVivo and the constant comparison method. The risk factors identified were used to create a set of models presented in 2 follow-up focus groups with HVIP case managers and leadership.

Results: Key themes revolved around imminent threat of violence, history of incarceration, dissociative behavior, weapons use, and pursuit of a "street"/gang lifestyle. In total, 141 potential risk factors for use in the risk assessment framework were identified. The most salient factors were incorporated into eight models that were presented to the HVIP case managers. A 29-item algorithmic structured professional judgment model was selected by focus group consensus. The model contains four categories of risk factors: high-risk (A), behavioral (B), severe conditional (C), and moderate conditional (D) factors. The presence of 1+ A factor indicates high risk, while combinations of B/C/D factors can also lead to an assessment of high risk [Fig 1].
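To make the tiered logic concrete, a decision rule of this general shape could be encoded as below; this is a hypothetical sketch, not the published 29-item instrument, and the weights and threshold are invented for illustration.

def assess_risk(a_factors: int, b_factors: int, c_factors: int, d_factors: int) -> str:
    """Return 'high' or 'standard' risk from counts of endorsed factors in each tier."""
    # Any high-risk (A) factor is sufficient on its own.
    if a_factors >= 1:
        return "high"
    # Hypothetical combination rule: severe conditional (C) factors weigh more than
    # behavioral (B) factors, which weigh more than moderate conditional (D) factors.
    weighted = 3 * c_factors + 2 * b_factors + d_factors
    return "high" if weighted >= 6 else "standard"

print(assess_risk(a_factors=0, b_factors=1, c_factors=2, d_factors=0))  # -> 'high'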

Conclusion: Qualitative methods identified four tiers of risk factors for violent re-injury that were incorporated into a proposed risk assessment instrument. Further research should assess the validity and generalizability of this instrument. A valid risk assessment instrument has the potential to identify high-risk individuals in diverse clinical settings, who may benefit from violence intervention strategies.

17.20 Chemotherapy versus Chemoradiotherapy for Resected Pancreatic Cancer: Defining the Optimal Regimen

C. Mosquera1, L. M. Hunter1, T. L. Fitzgerald1  1East Carolina University Brody School Of Medicine,Surgical Oncology,Greenville, NC, USA

Introduction:  Postoperative adjuvant therapy for pancreatic adenocarcinoma has engendered significant controversy and is the subject of yet-to-be-completed clinical trials. Pending these trials, data to guide optimal patient management are needed.

Methods:  Patients with resected adenocarcinoma of the pancreas who underwent surgery only, surgery with postoperative chemotherapy (CT), or surgery with chemoradiotherapy (CRT) from 2004-2013 were identified using the NCDB.

Results: A total of 26,821 patients were included. A majority were male (50.6%), white (86.3%), stage II (82.4%), and had a Charlson comorbidity score of 0 (67.7%). On univariate analysis, adjuvant therapy was most strongly associated with younger age, race, insurance status, lymphatic invasion, high grade, and size >2 cm (p<.0001). On multivariate analysis, the associations persisted for age <50 years (OR 5.24), lymphatic invasion (OR 1.60), high grade (OR 1.37), and size >2 cm (OR 1.35), but not race or insurance status. On univariate survival analysis, patients who received adjuvant therapy had greater median survival than those treated with surgery alone (22.0 vs 18.2 months, p<.0001). On multivariate survival analysis, adjuvant therapy remained associated with survival (1.39, p<.0001). A total of 16,549 patients received postoperative CT or CRT. On univariate analysis, patients who were older, had negative margins, and had Medicare were most likely to receive CT (p<.0001). On multivariate analysis, age and negative margins remained significant (OR 1.73, p<.0001), while lymphatic invasion, tumor size, and insurance status did not. When survival analysis was restricted to those receiving CT or CRT, the highest median survival was seen with low grade (31.2 months, p<.0001) and size <2 cm (33.1 months, p<.0001). Patients who received CRT had longer median survival than those who received CT (22.9 vs 21.8 months, p=.0001). On multivariate analysis, CT vs CRT (HR 1.14), high grade (HR 1.64), positive margins (HR 1.53), and lymphatic invasion (HR 1.50) continued to be associated with diminished survival (p<0.0001).

Conclusion: Postoperative adjuvant therapy is associated with a 40% improvement in survival. In contrast to other studies, these data suggest a modest survival advantage for combination therapy compared with CT alone.

 

17.19 Risk Factors Necessitating Early Ophthalmologic Evaluation of Facial Burns

S. J. Day1, A. Patel1, D. E. Bell1  1University Of Rochester,School Of Medicine And Dentistry,Rochester, NY, USA

Introduction:  Facial burns with ocular involvement often lead to significant morbidity. This study aimed to identify risk factors for short- and long-term ophthalmic complications in facial burn patients. Upon validation, these risk factors could distinguish patients who require ophthalmologic evaluation and prompt intervention.

Methods:  Retrospective case review was conducted of facial burn patients presenting to an American Burn Association-verified regional burn center from June 2007 to May 2016. Demographic, injury-related, and hospitalization-related variables were assessed for correlation with short- and long-term ophthalmic complications. Short-term complications included visual loss on presentation, lagophthalmos, ectropion, chemosis, ocular hypertension, chalasis, conjunctival necrosis, and orbital or periorbital infection. Long-term complications included lagophthalmos, cicatricial ectropion, exposure keratopathy, scleritis, and corneal stem cell deficiency. 

Results: From June 2007 to May 2016, 1126 facial burn patients presented to a regional burn center's inpatient and outpatient settings. One hundred thirty-seven (12.2%) had associated periorbital and orbital injuries. Of the patients with ocular burns, 66.4% were male and 75.9% were Caucasian. Average total body surface area (TBSA) burned was 9.72% (range, 0.02-75.13%), and average facial surface area burned was 1.64% (range, 0.02-6.65%). One hundred twenty patients (87.6%) received an ophthalmologic consult. Sixty patients (43.8%) developed short-term ophthalmic complications, the most common being chemosis (n = 36, 26.3%). Eight patients (5.8%) developed long-term complications, the most common being lagophthalmos (n = 4, 2.9%). Two flash burn patients (1.5%) developed cicatricial ectropion and underwent full-thickness skin grafts to the eyelids. One scald burn caused a localized corneal stem cell deficiency, which led to chronic keratitis requiring long-term steroid treatment.

 

Statistically significant risk factors (p < 0.05) for both short- and long-term ophthalmic complications included inhalation injury, higher percentage of body with 3rd degree burns, and presence of corneal injury.

 

Presence of short-term complications was significantly associated (p < 0.05) with advanced age, higher TBSA burns, and higher percentage of body with 2nd degree burns. Significant associations (p < 0.05) with the development of long-term complications included active smoker status, 3rd degree eyelid burns, periorbital edema, need for intubation, longer duration of mechanical ventilation, visual loss on presentation, chemosis, lagophthalmos, ectropion, bloodstream infection, and longer length of hospitalization. 

Conclusion: Providers should obtain early ophthalmologic evaluation for facial burn patients who present with advanced age, active smoking status, inhalation injury, 2nd and 3rd degree burns, or need for intubation. 
 

17.18 Prognostic effect of lymph node ratio after curative-intent resection for distal cholangiocarcinoma

A. Y. Son1, R. Shenoy1, C. G. Ethun2, G. Poultsides3, K. Idrees4, R. C. Fields5, S. M. Weber6, R. C. Martin7, P. Shen11, C. Schmidt9, S. K. Maithel2, T. M. Pawlik10, M. Melis1, E. Newman1, I. Hatzaras1  1New York University School Of Medicine,Surgery,New York, NY, USA 2Emory University School Of Medicine,Surgery,Atlanta, GA, USA 3Stanford University,Surgery,Palo Alto, CA, USA 4Vanderbilt University Medical Center,Surgery,Nashville, TN, USA 5Washington University,Surgery,St. Louis, MO, USA 6University Of Wisconsin,Surgery,Madison, WI, USA 7University Of Louisville,Surgery,Louisville, KY, USA 9Ohio State University,Surgery,Columbus, OH, USA 10Johns Hopkins University School Of Medicine,Surgery,Baltimore, MD, USA 11Wake Forest University School Of Medicine,Surgery,Winston-Salem, NC, USA

Introduction:
The ratio of metastatic to total harvested lymph nodes (LNR) is an important prognostic factor following resection of gastrointestinal malignancies. We assessed the prognostic value of LNR in patients undergoing resection for distal cholangiocarcinoma (DCC).

Methods:
Patients who underwent curative-intent resection of DCC at 10 institutions of the US Extrahepatic Biliary Malignancy Collaborative were included. Descriptive statistics were used to summarize demographic characteristics. Multivariate proportional hazards regression was used to identify factors associated with recurrence-free survival (RFS) and overall survival (OS).

Results:
A total of 265 patients were included (median age 67 years; 63.4% male): 199 with low LNR (LNR 0), with the remainder classified as moderate LNR (LNR up to 0.4) or high LNR (LNR >0.4). The high-LNR group was less likely to have undergone a Whipple procedure (85.4% vs. 82.9% vs. 60.0%, p<0.01) and had higher proportions of margin-positive resection (19.6% vs. 19.5% vs. 45.8%, p<0.05), poor differentiation (26.2% vs. 36.6% vs. 52.2%, p<0.05), lymphovascular invasion (44.3% vs. 74.3% vs. 88.2%, p<0.001), and perineural invasion (81.0% vs. 69.2% vs. 91.3%, p>0.05). Multivariate analysis showed high LNR to be an independent predictor of poor RFS (HR 4.6, 95%CI 1.8-11.8, p=0.001) and OS (HR 2.2, 95%CI 1.0-4.6, p<0.05) (Table 1).  Rates of adjuvant chemoradiation in the low-moderate LNR and high-LNR groups were 61.9% and 82.6%, respectively (p=0.07). Nevertheless, stratification by LNR showed no improvement in RFS or OS with adjuvant chemoradiation in either group.

Conclusion:
LNR can be used as a prognostic factor for recurrence and survival in patients undergoing curative-intent resection for DCC. Every effort should be made to perform an oncologic resection, with negative margins and adequate lymph node harvest, as adjuvant chemoradiation does not appear to provide LNR-specific improvements in long-term prognosis.
 

17.17 Cluster Analysis of Acute Type A Aortic Dissection Results in a Modified DeBakey Classification

J. L. Philip1, B. Rademacher1, C. B. Goodavish2, P. D. DiMusto3, N. C. De Oliveira2, P. C. Tang2  1University Of Wisconsin,General Surgery,Madison, WI, USA 2University Of Wisconsin,Cardiothoracic Surgery,Madison, WI, USA 3University Of Wisconsin,Vascular Surgery,Madison, WI, USA

Introduction: Acute type A aortic dissection is a surgical emergency. Traditional classification systems group dissections based on an evolving therapeutic approach. Using statistical groupings based on dissection morphology, we examined the impact of morphology on presentation and clinical outcomes.

Methods: We retrospectively reviewed 108 patients who underwent acute type A dissection repair from 2000-2016. Dissection morphology was characterized using 3-dimensional reconstructions of computed tomography scan images based on the true lumen area as a fraction of the total aortic area along the aorta. Two-step cluster analysis was performed to group the dissections.
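For illustration, the clustering step could be approximated in Python as below; this is a sketch under assumed inputs (one true-lumen-fraction column per aortic station), and it substitutes k-means with a silhouette check for the SPSS-style two-step procedure the authors describe.

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

profiles = pd.read_csv("true_lumen_fractions.csv")   # one row per patient (hypothetical file)
X = profiles.filter(like="station_").to_numpy()      # true lumen / total aortic area by station

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("silhouette:", round(silhouette_score(X, kmeans.labels_), 2))
profiles["cluster"] = kmeans.labels_                  # labels for comparing presentation and outcomes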

Results: Cluster analysis resulted in two distinct clusters (silhouette measure of cohesion and separation = 0.6). Cluster 1 (n=71, 65.7%) was characterized by a dissection extending from the ascending aorta into the abdominal aorta and iliac arteries (Table 1). Cluster 2 dissections (n=37, 34.3%) extended from the ascending aorta to the aortic arch with limited extension into the distal arch and descending thoracic aorta. Cluster 2 extends the traditional DeBakey type II definition to include dissections propagating into the arch. Cluster 1 patients had more malperfusion (P=0.002) as well as more lower extremity and abdominal pain on presentation (P<0.05). Cluster 2 patients had a greater number of diseased coronary vessels (P<0.05) and more commonly had previous percutaneous coronary intervention (P<0.05). No differences in age, gender, or other major comorbidities were noted. Cluster 1 was characterized by a smaller primary tear area (3.7 vs 6.6 cm2, P=0.009), a greater number of secondary tears (1.9 vs 0.5, P<0.001), more dissected non-coronary branches (2.9 vs 0.6, P<0.001), and a greater degree of aortic valve insufficiency (P<0.05). Operative variables including cardiopulmonary bypass and cross-clamp times as well as extent of arch repair and type of proximal operation were similar. There were no differences in postoperative complications or survival. Cluster 1 had a significantly higher rate of intervention for distal dissection complications (10% vs 0%, P=0.048).

Conclusions: This study examines clinical presentation and outcomes in acute type A dissection based on morphology using statistical categorization. Cluster 2 acute type A dissections had much less distal aortic dissection involvement and need for distal aortic intervention. Therefore, it is likely reasonable to extend the definition of DeBakey type II dissection to involvement of the distal arch and proximal descending thoracic aorta. The greater area of the primary aortic tear in Cluster 2 with rapid decompression of the false lumen may explain the lesser degree of distal aortic dissection.

17.16 Outcomes Following Cholecystectomy in Kidney Transplant Recipients

S. R. DiBrito1,2, I. Olorundare1,2, C. Holscher1,2, C. Haugen1,2, Y. Alimi2,3, D. Segev1,2  1Johns Hopkins University School Of Medicine,Transplant Surgery,Baltimore, MD, USA 2Johns Hopkins School Of Public Health,Baltimore, MD, USA 3Georgetown University Medical Center,Washington, DC, USA

Introduction:  Cholecystectomy is one of the most commonly performed operations, and kidney transplant (KT) recipients are among those who require surgical management for cholelithiasis and cholecystitis. Limited studies suggest that KT recipients have higher complication rates and longer length of stay (LOS) following general surgical procedures than non-transplant patients. This study investigates differences in complication rates, LOS, and cost between KT recipients and non-transplant patients undergoing cholecystectomy. Differences in outcomes following surgery at transplant centers vs non-transplant centers were also evaluated.

Methods:  The Nationwide Inpatient Sample was used to study 7318 adult KT recipients and 5.3 million non-transplant patients who underwent cholecystectomy between 2000-2011. Postoperative complications were defined by ICD-9 code. Complication rates, LOS, and cost were compared using hierarchical logistic regression, hierarchical negative binomial regression, and mixed-effects log-linear models, respectively. Hospitals were categorized as transplant centers if at least one transplant was performed during the follow-up period.
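A minimal sketch (not the study code) of the three outcome models in Python with statsmodels follows; column names are hypothetical, and the hospital-level hierarchy is approximated here with cluster-robust errors for the complication and LOS models and a random intercept for the cost model.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

nis = pd.read_csv("nis_cholecystectomy.csv")  # hypothetical NIS extract
nis["log_cost"] = np.log(nis["cost"])

covariates = "kt_recipient + age + female + elixhauser"
cluster = {"groups": nis["hospital_id"]}

# Any postoperative complication: logistic regression with hospital-clustered errors.
complication_model = smf.logit(f"any_complication ~ {covariates}", data=nis).fit(
    cov_type="cluster", cov_kwds=cluster
)

# Length of stay: negative binomial GLM with hospital-clustered errors.
los_model = smf.glm(
    f"los ~ {covariates}", data=nis, family=sm.families.NegativeBinomial()
).fit(cov_type="cluster", cov_kwds=cluster)

# Cost: mixed-effects log-linear model with a hospital random intercept.
cost_model = smf.mixedlm(f"log_cost ~ {covariates}", data=nis, groups=nis["hospital_id"]).fit()

for name, model in [("complications", complication_model), ("LOS", los_model), ("cost", cost_model)]:
    print(name, round(model.params["kt_recipient"], 3))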

Results: On primary admission, the mortality rate for KT recipients was significantly higher than for non-transplant patients (2.7 vs 1.2%, p<0.001). The rate of any postoperative complication was also higher (18.8 vs 13.9%, p<0.001). Following adjustment for patient- and hospital-level factors, KT recipients had increased odds of mortality (OR 2.39, 95%CI 1.66-3.44) and complications (OR 1.30, 95%CI 1.12-1.51). Median length of stay was significantly longer in KT recipients than in non-transplant patients (5 vs 3 days, p<0.001). After adjustment, the LOS ratio in KT recipients remained higher (ratio 1.23, 95%CI 1.18-1.28). Median hospital costs were significantly higher for KT recipients ($12,077 vs $9,002, p<0.001), and the higher cost ratio persisted after adjustment (ratio 1.14, 95%CI 1.10-1.18). There was no significant difference in any of the outcomes when comparing cholecystectomy performed at transplant centers vs non-transplant centers (Table 1).

Conclusion: KT recipients have higher mortality, more frequent postoperative complications, longer length of stay, and higher hospital costs than non-transplant patients undergoing cholecystectomy. Undergoing surgery at a transplant center made no difference in outcomes for either KT recipients or non-transplant patients.

 

17.15 Impact of Deceased Donor Cardiac Arrest Time on Post-Liver Transplant Graft Function and Survival

J. R. Schroering1, C. A. Kubal1, B. Ekser1, J. A. Fridell1, R. S. Mangus1  1Indiana University School Of Medicine,Indianapolis, IN, USA

Introduction:
Transplantation of liver grafts from donors who have experienced cardiopulmonary arrest is common practice. The impact of so-called donor "downtime," the cumulative time of donor asystole, has not been well described. This study reviews a large number of liver transplants at a single center to quantify the frequency and length of deceased donor arrest. Post-transplant clinical outcomes included delayed graft function, early graft loss, peak post-transplant transaminase levels, recipient length of hospital stay, and short- and long-term graft survival.

Methods:
The records of all liver transplants performed at a single center over a 15-year period were reviewed. Donor arrest included any reported asystole, and the time period of asystole was calculated from the pre-hospital reports as recorded by the onsite organ procurement coordinator.  Peak donor transaminase levels were extracted from the original on-site records. Post-transplant graft function was assessed using measured laboratory values including alanine aminotransferase (ALT), total bilirubin (TB), and international normalized ratio (INR). Cox regression was employed to assess graft survival, with a direct entry method for covariates with p<0.10.
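For illustration, the survival model could be fit with the lifelines package in Python as below; this is a sketch with hypothetical file and column names, and the univariable screen for covariates with p<0.10 is assumed to have already produced the candidate list.

import pandas as pd
from lifelines import CoxPHFitter

grafts = pd.read_csv("liver_grafts.csv")  # one row per transplant (hypothetical extract)

# Covariates retained after the p<0.10 univariable screen (hypothetical names).
candidates = ["donor_arrest", "arrest_minutes", "donor_age", "cold_ischemia_hr"]

cph = CoxPHFitter()
cph.fit(
    grafts[["graft_survival_years", "graft_loss"] + candidates],
    duration_col="graft_survival_years",
    event_col="graft_loss",
)
cph.print_summary()  # hazard ratios for graft loss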

Results:
The records for 1830 deceased donor liver transplants were reviewed. There were 521 donors (28%) who experienced cardiopulmonary arrest. The median arrest time was 21 minutes (mean 25, range 1 to 120, SD 18). The median peak pre-procurement ALT for donors with arrest was 127 U/L, compared to 39 U/L for donors with no arrest (p<0.001). Post-transplant, the peak ALT for liver grafts from donors with arrest was 436 U/L, compared to 513 U/L for grafts from donors with no arrest (p=0.09). Early allograft dysfunction occurred in 27% and 29% of arrest and no-arrest donors, respectively (p=0.70). Comparing recipients of grafts from donors with and without arrest, there was no difference in risk of early graft loss (3% vs 3%, p=0.84), recipient length of hospital stay (10 vs 10 days, p=0.76), or 30-day graft survival (94% vs 95%, p=0.60). Cox regression comparing four groups of patients (no arrest, <20 minutes of arrest, 20 to 40 minutes of arrest, and >40 minutes of arrest) demonstrated no statistical difference in survival at 10 years.

Conclusion:
These results support the routine use of deceased liver donors who experience cardiopulmonary arrest in the period immediately prior to donation. Prolonged periods of asystole were associated with higher peak elevations in ALT in the donor, but lower peak ALT elevations in the recipient. Peak TB levels were similar in the donors, but significantly lower with increasing arrest time in the recipient. There were no differences in early and late survival.  
 

17.14 Surgical Management of Adolescents and Young Adults with GIST: A Population-Based Analysis

K. E. Fero1, T. M. Coe5, J. D. Murphy4, J. K. Sicklick2  1University Of California – San Diego,School Of Medicine,San Diego, CA, USA 2University Of California – San Diego,Division Of Surgical Oncology And Department Of Surgery,San Diego, CA, USA 3University Of California – San Diego,Division Of Medical Oncology And Department Of Medicine,San Diego, CA, USA 4University Of California – San Diego,Radiation Medicine And Applied Sciences,San Diego, CA, USA 5Massachusetts General Hospital,Department Of Surgery,Boston, MA, USA

Introduction:  There is a dearth of population-based evidence regarding outcomes of the adolescent and young adult (AYA) population with gastrointestinal stromal tumors (GIST). The aim of this study is to describe a large cohort of AYA patients with GIST and investigate the impact of surgery on GIST-specific and overall survival.

Methods:  This is a retrospective cohort study of patients in the Surveillance, Epidemiology and End Results (SEER) database with histologically diagnosed GIST from 2001-2013, with follow-up through 2015. SEER is a population-based cancer registry with 18 sites covering approximately 30% of the United States. We identified 392 AYA patients among 5,765 patients with GIST; the main exposure variable was tumor resection and the primary outcome measure was mortality. Baseline characteristics were compared between AYA (13-39 years old) and older adult (OA; ≥40 years old) patients and among AYA patients stratified by operative management. Kaplan-Meier estimates were used for overall survival (OS) analyses. Cumulative incidence functions were used for GIST-specific survival (GSS) analyses. The impact of surgery on survival was evaluated with a multivariable Fine-Gray regression model.

Results: There was no significant difference between AYA and OA patients with regards to sex, race distribution, tumor size or stage. Compared to OA, more AYA patients had small intestine GISTs (35.5% vs 27.3%, P <0.01) and were managed operatively (84.7% vs 78.4%, P < 0.01). Multivariable analysis of AYA patients demonstrated that non-operative management was associated with over a 2-fold increased risk of death from GIST (SDHR 2.271; 95% CI:1.214-2.249). On subset analysis of AYA patients with tumors of the stomach and small intestine (n=349), small intestine location was associated with improved survival (OS: 91.1% vs 77.2%, P=0.01; GSS: 91.8% vs 78.0%, P<0.01). On subset analysis of AYA patients with metastatic disease (n=91), operative management was associated with improved survival (OS: 69.5% vs 53.7%, P=0.04; GSS: 71.5% vs 56.7%, P=0.03).

Conclusion: We report the first population-based analysis of GIST outcomes in the AYA population. These patients are more likely to undergo surgical management than patients in the OA cohort. Operative management is associated with improved overall and GIST-specific survival in AYA patients, including those with metastatic disease.
 

17.13 Distal Perfusion Catheters Reduce Vascular Complications Associated with Veno-Arterial ECMO Therapy.

Y. Sanaiha1, G. R. Ramos1, Y. Juo1, J. C. Jimenez1, P. B. Benharash1  1David Geffen School Of Medicine,Division Of General Surgery,Los Angeles, CA, USA

Introduction: Despite advances in cannulation technique, venoarterial (VA) extracorporeal membrane oxygenation (ECMO) is still associated with a nearly 15% incidence of acute limb ischemia (ALI) requiring interventions such as amputation. Distal perfusion catheters (DPCs) have been used to provide antegrade flow to the distal extremity and can be placed preemptively or in response to signs of limb ischemia. However, the efficacy of DPCs in reducing the incidence of vascular complications is not well established. The present study aims to evaluate the impact of DPCs on ALI incidence and mortality.

Methods: The institutional ECMO database was used to identify all adult patients who underwent VA-ECMO between January 2013 and June 2016. Demographic, technical, and clinical outcomes data were collected. Acute limb ischemia was defined as skin mottling on exam, loss of pulses, or tissue loss. Interventions included thrombectomy, fasciotomy, or amputation.

Results: During the study period, 103 adult ECMO patients met inclusion criteria and were included for analysis. Indications for ECMO were cardiogenic shock in 46.6%, cardiac arrest in 24.3%, post-cardiotomy syndrome in 16.5%, transplant rejection in 4.5%, and severe respiratory failure in 7.8%. Fifty-one patients received a DPC as a preemptive measure and 1 patient received a DPC as a therapeutic measure. Overall, 28 (34.1%) patients experienced ALI, with 15 (18.2%) patients requiring surgical intervention. Patients who received DPCs had ALI rates similar to those without a DPC (28.8% vs 29.4%, p=NS). Overall mortality with VA-ECMO cannulation was 55% over the study period, while 53% of patients with ALI did not survive to 30 days post-decannulation.

Conclusion: This study is one of the largest retrospective cohort studies examining the efficacy of DPCs in reducing the incidence of ALI. Patients receiving DPCs were found to have a similar incidence of ALI. Development of ALI was not significantly associated with increased mortality. Methods to continuously assess distal blood flow are needed even in the presence of DPCs.  Establishment of selection criteria for the use of DPCs may improve outcomes.

 

 

17.12 Long-Term Follow Up of Activity Levels and Lifestyle Choices After Esophagectomy

Z. El-Zein1, S. Barnett1, J. Lin1, W. Lynch1, R. Reddy1, A. Chang1, M. Orringer1, P. Carrott1  1University Of Michigan,Section Of Thoracic Surgery,Ann Arbor, MI, USA

Introduction: We aimed to determine if preoperative counseling for esophagectomy patients led to durable smoking cessation, routine exercise for years after surgery, and lower postoperative complication rates. 

Methods:  Of 789 patients identified as long-term survivors from a prospectively-collected esophagectomy database, 393 (49.8%) were contacted and agreed to receive a survey of long-term lifestyle changes. Of 294/393 (74.8%) patients who returned the completed survey, 239 (81.3%) underwent esophagectomy for cancer (median follow-up 5.5 years) and 55 (18.7%) for benign disease (median follow-up 9.2 years). Each group was analyzed using descriptive statistics and Chi-square where appropriate.

Results: In the cancer population, 35/239 (14.6%) were smokers at preoperative counseling and 14/35 (40%) smoked following surgery (P<0.01). With regard to exercise, 177/239 (74.1%) reported counseling preoperatively and 62/239 (25.9%) reported no counseling. The median exercise frequency for the counseled group was 5 vs. 2 days/week for the “non-counseled” group preoperatively (p<0.0001) and 4 vs. 3 days/week postoperatively (P=0.02). Currently, 117/177 (66.1%) in the counseled group exercise regularly (2+ days/week for 30+ minutes) compared to 31/62 (50%) from the “non-counseled” group (p=0.02). In the cancer population, preoperative smokers had higher pneumonia rates than non-smokers (11.4% vs. 3.9%, P=0.06). The documented exercise-counseled group had a significantly lower pneumonia rate than the non-counseled group (3.2% vs. 11.5%, P=0.01). In the benign population, there was no significant reduction in smoking rates and no significant difference in exercise frequency or complication rates between the counseled and non-counseled groups. 

Conclusion: Preoperative interventions before a major operation such as esophagectomy for cancer lead to increased activity levels, durable changes in lifestyle habits, and lower postoperative complication rates.

 

17.11 Survival outcomes in octogenarians with early stage, non-small cell lung cancer in the recent era

W. M. Whited1, E. Schumer1, J. Trivedi1, M. Bousamra1, V. Van Berkel1  1University Of Louisville,Department Of Cardiovascular And Thoracic Surgery,Louisville, KY, USA

Introduction:
This study aims to examine survival differences in elderly patients with early-stage (I and II) non-small cell lung carcinoma (NSCLC) undergoing pulmonary resection.

Methods:
The national Surveillance, Epidemiology, and End Results database was queried for lung cancer patients between 1998 and 2010.  Age at diagnosis, for all patients and for those undergoing surgical resection, was compared by year between patients <80 and >80 years.  Using Kaplan-Meier analysis, survival was compared between age subgroups (<70, 70-79, and >80 years) stratified by cancer stage for patients with early-stage NSCLC who underwent surgical resection.
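A minimal sketch (not the study code) of the stage-stratified Kaplan-Meier comparison with a log-rank test in Python with lifelines follows; the file and column names are hypothetical.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

seer = pd.read_csv("seer_early_nsclc_resected.csv")  # hypothetical SEER extract
stage1 = seer[seer["stage"] == "I"]

kmf = KaplanMeierFitter()
for group, sub in stage1.groupby("age_group"):  # '<70', '70-79', '>80'
    kmf.fit(sub["survival_months"], sub["dead"], label=group)
    print(group, "5-year survival:", kmf.predict(60))

test = multivariate_logrank_test(stage1["survival_months"], stage1["age_group"], stage1["dead"])
print("log-rank p =", test.p_value)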

Results:

A total of 41,680 patients aged 18 years or older were identified with early-stage NSCLC.  Of these, 29,580 patients underwent pulmonary resection.  The proportion of patients older than 80 among all patients who underwent resection for early-stage NSCLC has trended upward since 1998, from 180 patients (9.1%) to 278 patients (11.4%).  Survival comparison stratified by stage for patients <70, 70-79, and >80 years is shown in Figure 1.   Five-year survival for patients with stage I NSCLC, ages <70, 70-79, and >80 years, respectively, was 62%, 45%, and 28% (p<0.001).  Five-year survival for patients with stage II NSCLC, ages <70, 70-79, and >80 years, respectively, was 43%, 23%, and 17% (p<0.001). There were similar trends in survival when stratified by sex and histology.

Conclusion:

The number of elderly patients diagnosed with NSCLC is increasing, particularly those >80 years; therefore, there is an increasing number of older patients undergoing surgical resection.  While surgical resection in octogenarians for stage I, NSCLC is feasible, elderly patients have poor overall survival and should be fully informed of alternatives to surgical intervention. 

 

17.10 Living Kidney Donor-Recipient Relationship and Development of Post-donation Comorbidities

J. McLeod1, C. Carroll1, W. Summers1, C. Baxter1, R. Deierhoi1, P. MacLennan1, J. E. Locke1  1University Of Alabama,Birmingham, Alabama, USA

Introduction: Living donor selection practices aim to quantify the lifetime risk of comorbid disease development (e.g. hypertension, diabetes, kidney disease) based on candidates' pre-donation demographic and health characteristics at the time of evaluation. Studies aimed at predicting this risk have been limited by a lack of information on donor relationship and family history. The goal of this study was to better understand the relationship between donor-recipient relationship and risk of post-donation comorbid disease development.

Methods: Participants enrolled in an IRB approved study agreed to survey examination and consented for medical record abstraction, allowing for capture of baseline health characteristics and post-donation outcomes. We used descriptive statistics and logistic regression to examine the odds of comorbid disease by donor-recipient relationship (adjusted for age, ethnicity, gender). 

Results: Fifty-nine adult living kidney donors were studied; median age at donation was 43.3 years (IQR: 38.9-56.7), median age at survey was 64.5 years (IQR: 51.2-69.7), 54 were European American and 5 African American, and median follow-up was 6.6 years (IQR: 4.3-29.2). More than half of the cohort was related to their recipient (related: 67% vs. unrelated: 33%). Twenty living kidney donors developed comorbid disease over the course of the study.  Hypertension was the most common post-donation comorbidity. Interestingly, 19 of 20 post-donation comorbidities developed in related donors. Related donors had 17-fold higher odds of developing a post-donation comorbidity compared to their unrelated counterparts (adjusted odds ratio: 17.00, 95%CI: 1.84-157.08, p=0.01).

Conclusion: Donor-recipient relationship was strongly correlated with the development of post-donation comorbidities. This finding suggests the potential for an underlying genetic susceptibility carried by family members and warrants further study.