58.05 Pre-Injury Cerebral Atrophy in Traumatic Brain Injury Patients Correlates with Decreased Mortality

S. Mansour1,2, A. A. Fokin2, A. Zuviv2, J. Wycech2,3, M. Crawford2, A. Tymchak1,2,3, I. Puente1,2,3,4  1Florida Atlantic University,College Of Medicine,Boca Raton, FL, USA 2Delray Medical Center,Trauma Services,Delray Beach, FL, USA 3Broward Health Medical Center,Trauma Services,Fort Lauderdale, FL, USA 4Florida Atlantic University,College Of Medicine,Boca Raton, FL, USA

Introduction:
Geriatric patients are at greater risk of falling and sustaining traumatic brain injuries (TBI) than their non-geriatric counterparts. It has been shown that elderly patients may have cerebral atrophy that may in turn affect outcomes of intracranial hemorrhage; however, this claim has not been studied adequately in TBI patients. We hypothesized that patients with pre-injury cerebral atrophy would have lower morbidity associated with TBI because of the increased intracranial volume available for hematoma expansion.

Methods:
This IRB-approved retrospective cohort study included 346 TBI patients on pre-injury aspirin, clopidogrel, or both, between the ages of 17 and 101, who were admitted to a Level 1 trauma center between 1/1/2015 and 3/30/2018. Patients were excluded if they were also on anticoagulants. Patients were divided into 2 groups: Group A did not have cerebral atrophy (n=148) and Group B had cerebral atrophy documented on computed tomography (CT) reports (n=198). Age, Injury Severity Score (ISS), Revised Trauma Score (RTS), Glasgow Coma Scale (GCS) score, Rotterdam CT score (RCT), Marshall CT score (MCT), incidence of intracranial hemorrhage (ICH), midline shift, platelet function and status, platelet transfusion, need for neurosurgical intervention, Intensive Care Unit length of stay (ICULOS), hospital LOS (HLOS), rate of readmission, and mortality were compared.
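
For illustration, a minimal sketch of how such a two-group comparison could be run on a tabular dataset, assuming one row per patient; the file name and column names (atrophy, iss, mortality, and so on) are hypothetical stand-ins, not the study's actual variables:

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("tbi_cohort.csv")  # assumed per-patient file (illustrative only)
grp_a = df[df["atrophy"] == 0]      # Group A: no cerebral atrophy
grp_b = df[df["atrophy"] == 1]      # Group B: atrophy on CT report

continuous = ["iss", "rts", "gcs", "platelet_count", "icu_los", "hlos"]
categorical = ["ich", "midline_shift", "platelet_transfusion", "mortality"]

# Continuous variables: Welch t-test between groups
for col in continuous:
    t, p = stats.ttest_ind(grp_a[col].dropna(), grp_b[col].dropna(), equal_var=False)
    print(f"{col}: {grp_a[col].mean():.1f} vs {grp_b[col].mean():.1f}, p={p:.3f}")

# Categorical variables: chi-square test on the 2x2 contingency table
for col in categorical:
    table = pd.crosstab(df["atrophy"], df[col])
    chi2, p, dof, _ = stats.chi2_contingency(table)
    print(f"{col}: p={p:.3f}")
```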

Results:

Between Groups A and B mean values for ISS (12.9 vs 12.3), RTS (7.6 vs 7.7), GCS (14.2 vs 14.3), incidence of ICH (85.5% vs 86.9%), platelet count (215.0 vs 209.8), Platelet Function Assay (PFA)-100 epinephrine (194.7 vs 188.0), PFA-100 adenosine diphosphate (ADP) (127.0 vs 160.0), Thromboelastography Platelet Mapping (TEG-PM) % inhibition ADP (39.4% vs 39.8%), TEG-PM % inhibition of arachidonic acid (60.1% vs 56.6%), Prothrombin Time (11.5 vs 11.0 seconds), Partial Thromboplastin Time (25.7 vs 26.5 seconds), platelet transfusion (37.2% vs 44.1%), neurosurgical intervention (4.7% vs 5.1%), ICULOS (3.5 vs 2.9 days), HLOS (3.8 vs 3.8 days), and readmission rate (5.5% vs 5.1%) were similar (all p>0.07).

Compared with Group A, Group B had significantly lower RCT (2.7 vs 2.5; p=0.002), MCT (1.2 vs 1.0; p=0.002), incidence of midline shift (12.8% vs 3.5%; p=0.001), and mortality (14.2% vs 7.6%; p=0.04) (Fig. 1).

Conclusion:
Cerebral atrophy was associated with less severe intracranial damage and lower mortality in head trauma, despite injury severity similar to that of patients with no atrophy. Our findings suggest that increased intracranial volume may allow TBI patients to accommodate hematoma expansion, attenuating the clinical presentation and reducing the likelihood of adverse outcomes.

58.04 Sarcopenia and Frailty: Similar Yet Distinct Measurements in Geriatric Trauma and Emergency Surgery

H. K. Weiss1, M. Errea2, B. W. Stocker1, N. Weingarten1, K. E. Engelhardt3, B. Cook2, J. A. Posluszny2  1Feinberg School Of Medicine – Northwestern University,Chicago, IL, USA 2Northwestern University,Department Of Surgery,Chicago, IL, USA 3The Medical University of South Carolina,Department Of Surgery,Charleston, SC, USA

Introduction:  The diagnoses of sarcopenia and frailty are often intertwined: both identify less healthy patients with higher complication rates and worse outcomes. However, sarcopenia is a function of muscle mass alone, while frailty measurements take into account multiple dimensions, including patient comorbidities, emotional health, nutrition, and cognitive and physical function. We hypothesize that sarcopenia and frailty measure different forms of disability. The objective of our study is to determine the concordance of sarcopenia and frailty in geriatric trauma and emergency general surgery (TEGS) patients.

Methods:  We reviewed our QI database of geriatric (≥65 years old) TEGS patients. As part of this project, all geriatric patients are screened for frailty using the validated TEGS frailty screening tool. Patients in whom a CT of the abdomen and pelvis was obtained were identified, and the total psoas index (TPI; area of the right and left psoas muscles at L3/2) was calculated, with lower TPIs signifying sarcopenia. Patients were then compared for frailty and sarcopenia.
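
A minimal sketch of the concordance check described above, assuming a hypothetical per-patient table with tpi and frail columns; the quantile cutoff mirrors the observed frailty rate rather than a validated sarcopenia threshold:

```python
import pandas as pd
from scipy.stats import fisher_exact

df = pd.read_csv("tegs_geriatric.csv")   # assumed file: patients with abdominal/pelvic CT
cutoff = df["tpi"].quantile(0.28)        # lowest 28% of TPI, matching the frailty rate
df["sarcopenic"] = df["tpi"] <= cutoff

# Cross-tabulate CT-defined sarcopenia against the frailty screen result
table = pd.crosstab(df["sarcopenic"], df["frail"])
odds_ratio, p = fisher_exact(table)
print(table)
print(f"Fisher exact p = {p:.2f}")       # a non-significant p suggests discordance
```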

Results:  117 geriatric TEGS patients were screened for frailty. Of all patients, 78 (67%) had a CT of the abdomen and pelvis. Of these 78 patients, 22 (28%) were frail. Mean TPI for all patients was 2.34±0.77. Mean TPI was 2.34±0.81 for non-frail patients and 2.33±0.65 for frail patients (p=0.97). Since 28% of our patients were frail, we then assessed frailty in the 28% of patients (n=23) with the lowest TPI. Only 5 (23%) of these patients were frail (p=0.59). Using the lowest quintile of TPI (15 patients) to define sarcopenia, as is done in other studies, only 3 (20%) of these sarcopenic patients were frail (p=0.53).

Conclusion: Although both sarcopenia and frailty identify patients at higher risk for complications and worse surgical outcomes, sarcopenia does not specifically reflect frailty.  As such, frailty and sarcopenia are not concordant measures of illness. Further study will help elucidate the true relationship between frailty and sarcopenia and the clinical implications of sarcopenia for geriatric TEGS patients.

 

58.02 Frailty Severity Predicts Poor Outcome After First-time Lower Extremity Revascularization

L. Gonzalez1,2, M. Kassem1,2, A. Owora3, S. Cardounell1, M. Monita1, S. Brangman1, V. Gahtan1,2  1State University Of New York Upstate Medical University,Vascular And Endovascular Surgery,Syracuse, NY, USA 2Syracuse VA Medical Center,Surgery,Syracuse, NY, USA 3Syracuse University,Falk College School Of Public Health,Syracuse, NY, USA

Introduction:  Frailty severity is a predictor of poor outcome after vascular surgery. The modified frailty index (mFI) has been validated as a prognostic assessment tool in large-scale databases of patients with peripheral arterial disease. Our objectives were to determine the predictive utility of the mFI after first-time lower extremity revascularization and to identify biomarkers of frailty in patients with peripheral arterial disease. Hypotheses: (1) frailty severity is associated with adverse outcomes after revascularization and (2) select preoperative data may serve as biomarkers of frailty.

Methods:  A retrospective cohort study was performed of all first-time revascularizations [open surgery (OS) and endovascular surgery (ES)] in male veterans at a single institution (2003-2016). Multivariable logistic and Cox proportional hazard regression models were used to examine the relationship between the mFI and post-operative short-term (30-day morbidity, readmission, and re-intervention) and long-term (up to 2-year incidence of re-intervention, amputation, or mortality) outcomes, respectively. 
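
A minimal sketch of the two model families described above (logistic regression for 30-day outcomes, Cox regression for time-to-event outcomes) with the mFI as the exposure; the dataset and variable names are assumptions, not the actual VA data dictionary:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

df = pd.read_csv("revasc_cohort.csv")   # assumed one-row-per-patient dataset

# Short-term outcome: 30-day morbidity vs mFI, adjusted for age and procedure type
X = sm.add_constant(df[["mfi", "age", "endovascular"]])
logit = sm.Logit(df["morbidity_30d"], X).fit(disp=False)
print(np.exp(logit.params).round(2))    # adjusted odds ratios

# Long-term outcome: time to amputation and/or death over up to 2 years of follow-up
cph = CoxPHFitter()
cph.fit(df[["months_followup", "amp_or_death", "mfi", "age", "endovascular"]],
        duration_col="months_followup", event_col="amp_or_death")
cph.print_summary()                     # adjusted hazard ratios with 95% CIs
```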

Results: 431 patients met inclusion criteria (OS n=188; ES n=243), with a mean age of 66±9 years and a median follow-up of 16 months. Treatment groups were similar in baseline characteristics, pre-operative lab values, and polypharmacy tallies. Mean mFI was 0.39±0.16 for the OS group and 0.38±0.15 for the ES group (p=0.43). 30-day complications (aOR 4.89; 95%CI: 1.67-14.33) and early readmissions (aHR 3.32; 95%CI: 1.16-9.55) were increased in the OS group compared to the ES group. Frailty severity did not predict risk of re-intervention in either group. Kaplan-Meier analysis showed an increased risk of amputation, death, and the composite outcome of amputation and/or death in both treatment groups with increasing frailty when stratified by frailty severity (p<0.005 for all). Multivariate analysis confirmed that frailty independently predicted major amputation (aHR 2.16; 95%CI: 1.06-4.39), mortality (aHR 2.62; 95%CI: 1.17-5.88), and the composite outcome (aHR 1.97; 95%CI: 1.06-3.68) in the cohort as a whole. Hypoalbuminemia was correlated with increased mFI in the ES group (p<0.01), but showed only a trend with mFI in the OS group (p>0.05). Independent of treatment assignment and preoperative mFI, higher albumin concentration was associated with lower risk of amputation (aHR 0.58; 95%CI: 0.36-0.94) and mortality (aHR 0.45; 95%CI: 0.25-0.83). Higher hemoglobin concentration was also independently predictive of limb salvage (aHR 0.72; 95%CI: 0.62-0.84).

Conclusion: Frailty severity is predictive of short- and long-term outcomes after lower extremity revascularization. Hypoalbuminemia and anemia were associated with higher mFI and independently predicted poor outcome after revascularization, suggesting that albumin and hemoglobin concentrations may serve as true biomarkers of frailty in this population. 
 

58.01 Older Age Increases Mortality And Stroke Risk at One Year After Carotid Revascularization

S. J. Aitken1,2  1Concord Repatriation General Hospital,Institute Academic Surgery (Vascular),Sydney, NEW SOUTH WALES (NSW), Australia 2University Of Sydney,Concord Clinical School,Sydney, NSW, Australia

Introduction:
Cardiac and neurological complications following carotid revascularization have been associated with an increased risk of mortality, especially in older patients. This study reports mortality and stroke following carotid revascularization in Australia, comparing outcomes up to 1 year for those with and without complications within 30 days and in younger and older patients.

Methods:
Routinely collected hospital data on all patients in New South Wales (NSW), Australia, were linked to state-wide mortality records. All patients who underwent carotid revascularization (endarterectomy or stenting) between 2010-2012 were selected. Primary outcomes of mortality or stroke were measured at 30 days, 90 days and 1 year. Secondary outcomes were complications within 30 days, length of stay and hospital readmission within 90 days. Differences in outcomes between younger (aged less than 75 years) and older (aged 75 years and older) patients were evaluated. Complications were assessed at 30 days, including stroke and major adverse cardiac events (MACE). Outcomes were assessed with multivariable Cox regression and Kaplan-Meier survival analysis.

Results:

3008 carotid revascularization procedures were performed between 2010 and 2013; 20% were for symptomatic carotid disease (n=598). Carotid endarterectomy was the most common procedure (n=2280, 76%), with 728 patients (24%) having carotid stenting. The median age was 72 years (SD 10), with more males than females having carotid revascularization (M:F ratio 69%:31%). Mortality was 0.8% at 30 days (n=26), 1.4% at 90 days (n=43) and 3.9% at 1 year (n=112). Postoperative stroke occurred in 0.5% of patients at 30 days (n=14), 1.1% at 90 days (n=32) and 1.7% at 1 year (n=52). 17.3% of patients had a major complication within 30 days (n=522). Median length of stay was 3 days (IQR 7). 25% of patients (n=754) had a readmission for any cause within 90 days. After adjusting for age, gender and procedure type, patients aged 75 years or older were at higher mortality risk than younger patients at 1 year (HR 2.7, 95%CI 2.2-3.3, P<.0001). After the same adjustment, older patients had a higher risk of stroke at 1 year (HR 2.4, 95%CI 1.9-2.8, P<.0001) than younger patients. Stroke risk was also associated with carotid stenting and major complications. MACE occurring within 30 days predicted 1-year stroke (HR 2.1, 95%CI 1.6-2.9, P<.0001) and death (HR 2.0, 95%CI 1.5-2.7, P<.0001). Older patients had a higher incidence of MACE (IRR 1.9, 95%CI 1.4-2.6, P<.0001) and complications (IRR 1.2, 95%CI 1.1-1.4, P=.007) within 30 days than younger patients.

Conclusion:

Older age increased risk for all adverse outcomes including mortality, stroke, complications, increased length of stay and readmission. Postoperative complications also increased the risk of mortality and stroke at 1 year. Targeted strategies to improve perioperative care in older patients are required to reduce complications associated with postoperative mortality.

57.20 The Utility of ECMO After Liver Transplant: Experience at a Single High-Volume United States Center

M. E. Pulcrano1, H. J. Braun1, D. J. Weber1, B. Padilla1, N. L. Ascher1  1University Of California – San Francisco,Department Of Surgery,San Francisco, CA, USA

Introduction: Extracorporeal membrane oxygenation (ECMO) is a method to artificially support respiratory and/or cardiac function when conventional techniques fail. Liver transplant recipients constitute a special patient group at high risk for developing pathologies such as acute respiratory distress syndrome and pneumonia in the settings of resuscitation and immunosuppression, respectively. Additionally, conditions such as portopulmonary hypertension and hepatopulmonary syndrome are directly related to cirrhosis and can contribute to respiratory distress both before and after transplantation. Over the past two decades, ECMO has been described as a treatment modality for acute pulmonary and/or cardiac disease following orthotopic liver transplantation (OLT) in the adult and pediatric populations. Here we present a series of OLT adult recipients placed on ECMO after transplantation for both respiratory and cardiac indications, which constitutes the largest described review in the United States.

 

Methods: The Extracorporeal Life Support Organization database of all patients at our academic institution who had undergone ECMO cannulation between 2002 and 2018 was cross-referenced with the institution’s liver transplant patient database. Patient and disease characteristics were identified, and the types of cannulation and outcomes were described.

Results: Eight patients were identified: five men and three women aged 28-68 years. One patient was cannulated intraoperatively; in the remainder of the cohort, ECMO was initiated 3 days to 6 months after transplant. Duration of cannulation ranged from 1 day to 1.5 months. Four patients were placed on venovenous ECMO, for which the indications were hypoxemic respiratory failure (n=3) and inferior vena cava (IVC) obstruction (n=1). Four patients were placed on venoarterial ECMO for right heart failure and massive pulmonary embolism (PE). Three of eight patients survived to discharge and remain alive today. Of the deceased patients (n=5), three required ECMO for complications of pulmonary disease (portopulmonary hypertension [n=1] and hepatopulmonary syndrome [n=2]), one for a technical complication of IVC obstruction, and one for a massive PE. Of the three patients who survived to discharge, one was cannulated intraoperatively for a massive PE, one was cannulated for hypoxemic respiratory failure in the setting of massive resuscitation for bleeding and coagulopathy, and one was cannulated for right heart failure. These patients remain alive 5-13 months after transplantation.

Conclusion: ECMO is a useful modality to consider in liver transplant patients with severe cardiopulmonary failure. This adjunctive support is particularly effective if the etiology of cardiopulmonary failure is recognized promptly and is thought to be transient. This is the largest series in the United States and demonstrates a 38% survival rate, which is comparable to other reports in the literature from Asia.

57.19 Post Transplant Nutrition Status: Resolution of Hypoalbuminemia

S. Atoa1, M. A. Maluccio1, R. S. Mangus1  1Indiana University School Of Medicine,Surgery / Transplant,Indianapolis, IN, USA

Introduction:
Poor nutrition status is associated with worse post-operative outcomes including complications, prolonged hospitalization and increased costs. Malnutrition is associated with weakness and frailty. There are many ways to measure malnutrition and frailty including laboratory markers to assess protein stores, functional tests to assess strength, and body imaging to assess muscle mass. This study uses serum albumin levels, measured before and after transplant, as a marker for nutrition status. We assess the utility of various methods of pre-transplant testing to determine post-transplant nutrition status.

Methods:
Adult patients undergoing liver, kidney or pancreas transplant were included in this study. Nutrition status was assessed using weight and body mass index (BMI), a 5-meter walk test, and computed tomography assessment of muscle and fat stores. Each of these tests was then correlated with post-transplant serum albumin levels. Measures of nutrition status included a scaled scoring of core muscle mass, visceral and subcutaneous fat stores, protein and albumin serum levels, functional status, body weight and BMI.

Results:
There were 184 patients analyzed. All transplant organ groups had an acute decrease in their serum albumin level in the post-transplant period, with an improvement to the normal range between 20 and 50 days. Liver patients lagged behind the kidney and pancreas groups in resolution of hypoalbuminemia at all time points up to 40 days (p<0.01). Among liver transplant patients, the groups that experienced a better return to normal serum albumin levels included younger patients (

Conclusion:
This study identifies an acute drop in serum albumin that occurs post-transplant. Serum albumin returned to the normal range sooner in kidney recipients than in pancreas recipients, and sooner in pancreas than in liver recipients. Malnutrition is clearly worse for the liver failure patients. Those who resolved their hypoalbuminemia the fastest were younger, female, and had good physiologic strength. However, the group with the worst sarcopenia returned to normal faster than those with lesser muscle loss. These findings suggest a physiologic mechanism that stimulates normalization of serum albumin levels, a marker of improved nutrition status. This mechanism may be linked to baseline body muscle mass.
 

57.18 Obesity Is Not a Predictor of Poor Outcomes in Simultaneous Heart-Kidney Transplant

M. Gunder1, C. Biefeld1, V. Dioguardi1, K. N. Lau1, A. Di Carlo1, S. Karhadkar1  1Temple University,Surgery,Philadelphia, PA, USA

Introduction

Prior studies have demonstrated a relationship between obesity and increased risk for morbidity and mortality after kidney transplantation. Therefore, patients with a high body mass index (BMI) can be denied listing status, or have longer waitlist times. The relationship between obesity and simultaneous heart-kidney transplant (SHKT) outcomes has not been studied.

Methods

A retrospective analysis of all patients listed in the United Network for Organ Sharing database undergoing SHKT between 2006-2018 was performed. Patients were grouped into categories based on BMI: underweight (BMI 16.00-18.49), normal (BMI 18.50-24.99), overweight (BMI 25.00-29.99), or obese (BMI 30.00-60.00). Data were collected on patient outcomes including length of stay, incidence of delayed graft function (DGF), heart or kidney allograft failure, resumption of maintenance dialysis, and overall patient survival. Fisher exact tests or ANOVA were performed to assess correlation between BMI and SHKT outcomes. Kaplan-Meier analysis was used to project patient and allograft survival across BMI groups.
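
A minimal sketch of the BMI grouping and between-group testing described above, assuming a hypothetical UNOS extract; scipy's chi-square test stands in for Fisher exact on tables larger than 2x2:

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("shkt_unos.csv")   # assumed extract of SHKT recipients (illustrative)
bins = [16.0, 18.5, 25.0, 30.0, 60.0]
labels = ["underweight", "normal", "overweight", "obese"]
df["bmi_group"] = pd.cut(df["bmi"], bins=bins, labels=labels, right=False)

# ANOVA for a continuous outcome (length of stay) across the four BMI groups
groups = [g["los_days"].dropna() for _, g in df.groupby("bmi_group", observed=True)]
print(stats.f_oneway(*groups))

# Chi-square test for a categorical outcome (kidney graft failure) across BMI groups
table = pd.crosstab(df["bmi_group"], df["kidney_graft_failure"])
print("p =", stats.chi2_contingency(table)[1])
```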

Results

A total of 1054 simultaneous heart-kidney transplants were performed during the time period studied. 33 recipients were underweight, 375 were normal weight, 400 were overweight, and 246 were obese. The overall mean survival for all patients was 3238 days. Overall rate of kidney graft failure was 8.8% and of heart graft failure was 5.2%. Interestingly, a higher BMI was not associated with an increased length of stay (p=0.712), incidence of DGF (p=0.786), incidence of kidney (p=0.581) or heart (p=0.734) graft failure, or decreased overall survival (p=0.160).

Conclusion

Obesity does not correlate with adverse outcomes after simultaneous heart-kidney transplantation. A similar rate of kidney allograft failure, heart allograft failure, and patient mortality was seen in all groups. A higher BMI should not preclude potential recipients from being listed for SHKT. 

57.17 Identifying Challenges in Steroid Minimization Protocols for Kidney Transplant Immunosuppression

N. Taylor1, A. Fagenson1, K. Lau1, A. Di Carlo1, A. Diamond1, S. Karhadkar1  1Temple University,Surgery / Abdominal Organ Transplantation,Philadelphia, PA, USA

Introduction:
Corticosteroids are a pharmacological staple for prevention of graft rejection in renal transplantation. Deleterious effects of long-term steroid use have prompted attempts to implement maintenance immunosuppression protocols that minimize steroids, with little success due to higher rejection rates. Furthermore, other factors such as African American race are postulated to increase rates of rejection. Conventionally, this has resulted in reluctance to use steroid-free or steroid minimization protocols in African American patients. We sought to determine whether steroid-free or steroid minimization protocols resulted in increased negative outcomes in African American patients when compared to their non-African American counterparts.

Methods:
A retrospective review of all consecutive renal transplantation patients at an urban university hospital from 2013 through 2018 was performed. Patients were stratified by race. Continuous variables were compared using Student's independent t-tests and categorical variables were compared using chi-squared and Fisher exact tests. Patients were further grouped based on steroid minimization maintenance protocol (no more than 5 mg of prednisone per day) and steroid-free maintenance protocol.

Results:
A total of 227 kidney allograft recipients were identified: 124 (54.6%) African Americans and 103 (45.4%) non-African Americans. Significantly more non-African American patients received a steroid minimization maintenance immunosuppression protocol (n=86; 30.8% AA vs 51.6% non-AA, p=0.002). Additionally, 34 patients received a steroid-free maintenance immunosuppression protocol; there were no racial disparities in this group. Initiation of steroid minimization and steroid-free protocols was based on identification of risk factors for acute rejection such as PRA, prior renal transplant and presence of donor-specific antibodies.

Overall, the African American group had more episodes of acute rejection (22.1% AA vs 10.2% non-AA; p=0.019) for all comers, irrespective of steroid utilization. Among patients who underwent a steroid minimization protocol, there was no difference in acute rejection between the African American and non-African American groups. Among patients who underwent a steroid-free protocol, there was no difference in acute rejection or graft failure between the African American and non-African American groups.

Conclusion:

Contrary to the notion that African American patients should remain on corticosteroids during the maintenance period, our data suggest that there is no significant increase in the risk of rejection when African American patients undergo steroid minimization or steroid-free maintenance immunosuppression protocols following renal transplantation. Other factors such as drug compliance and pharmacodynamics may play a role in immune modulation in this group of patients.

57.16 Bortezomib-Based Induction Therapy Decreases Delayed Graft Function in High-Risk Kidney Transplantation

J. Lambdin1, N. Koizumi2, K. Mahendran1, J. K. Melancon1  1George Washington University School Of Medicine And Health Sciences,Transplant Institute And Division Of Transplant Surgery,Washington, DC, USA 2George Mason University,School Of Policy, Government And International Affairs,Arlington, VIRGINIA, USA

Introduction:  Over 100,000 candidates await a kidney transplant on the national United Network for Organ Sharing (UNOS) waitlist. Among these candidates, one-third have preformed antibodies against human leukocyte antigens (HLA). These patients fall under the category of high-risk kidney transplantation compared to non-sensitized patients. Patients 1) undergoing ABO-incompatible kidney transplant, 2) with donor-specific antibodies, 3) with a low level of nonspecific antibodies, or 4) with one or more prior transplants also fall into this category. These high-risk transplant recipients demonstrate higher rates of delayed graft function and graft failure.

Our objective was to demonstrate that our unique bortezomib-based desensitization protocol could be an effective induction therapy among high-risk kidney transplant candidates to reduce delayed graft function and acute antibody-mediated rejection. Our hypothesis is that the humoral response has been an underappreciated culprit in delayed graft function and an interface with cell-mediated rejection responses. We believe bortezomib's antihumoral mechanism to be an important adjunct in the immunosuppressive armamentarium. 

Methods:  A total of 126 patients underwent deceased-donor kidney transplants at our center between 01/01/2015 and 04/23/2018. Of these, 65 patients received bortezomib-based induction therapy because they were high-risk candidates. All patients were given maintenance immunosuppression with tacrolimus, mycophenolate mofetil and prednisone as per standard of care, regardless of their risk stratification. The outcomes assessed were: 1) occurrence of delayed graft function; 2) presence of protocol biopsy-proven rejection; and 3) graft survival. Using the UNOS data, we performed a propensity score analysis to extract from the SRTR national transplant database high-risk transplant recipients similar to the recipients who received bortezomib-based induction therapy at our center. Logistic regression and survival analyses were performed to compare outcomes between the intervention recipients and the matched sample. 
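
A minimal sketch of the propensity-score step described above: model the probability of receiving bortezomib-based induction from baseline covariates, then match each treated recipient to the nearest untreated recipient on that score. The covariate and outcome names are illustrative, not the actual SRTR fields:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("kidney_recipients.csv")    # assumed combined center + registry extract
covariates = ["age", "cpra", "prior_transplant", "dialysis_years"]

# 1. Propensity score: P(received bortezomib induction | baseline covariates)
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["bortezomib"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. 1:1 nearest-neighbor matching on the propensity score
treated = df[df["bortezomib"] == 1]
control = df[df["bortezomib"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_controls = control.iloc[idx.ravel()]

# 3. Compare delayed graft function between treated and matched controls
print("DGF, treated:", treated["dgf"].mean(),
      "matched controls:", matched_controls["dgf"].mean())
```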

Results: The bortezomib-based desensitization therapy was effective (p=0.01) in the matched sample analysis. Adjusting for the common covariates and the propensity score, recipients who received the therapy were about 36% less likely to experience delayed graft function. 

Conclusion: We conclude that our bortezomib-based desensitization/induction therapy decreases delayed graft function in high-risk kidney transplant recipients. We believe that this will impact very favorably on long-term graft and patient survival. 

57.15 Lung Transplant Listing Practices: Survival in Candidates Who Improve Clinically to Delisting

S. E. Rudasill1, Y. Sanaiha1, H. Xing1, A. Mardock1, H. Khoury1, P. Benharash1  1David Geffen School Of Medicine, University Of California At Los Angeles,Cardiac Surgery,Los Angeles, CA, USA

Introduction:
Listing criteria for lung transplantation influence demand for transplant. Occasionally, listed candidates clinically improve and are delisted, but their outcomes have not been elucidated. This study examined a national database for the characteristics and survival of primary and re-transplant lung transplant candidates who improved to delisting.

Methods:
This was a retrospective study of patients listed for lung transplantation between 1987-2015 in the United Network for Organ Sharing (UNOS) database. The last permanent waiting list status change was classified into transplanted, improved to delisting, or deteriorated to delisting, with those dying before or refusing transplant excluded from analysis. Survival time was calculated using the linked Social Security Administration date of death, and analysis was performed via the Kaplan-Meier method. Adjusted Cox hazards models predicted five-year mortality, while multivariable logistic regression identified the patient characteristics predicting improvement to delisting. Transplant centers were organized into quartiles by volume and analyzed for differences in the proportion of candidates improving or deteriorating.

Results:
Of 13,688 candidates, 12,188 (89.0%) were transplanted, 1,046 (7.6%) deteriorated to delisting, and 454 (3.3%) improved to delisting. Patients who improved were younger (48.5 vs. 50.9 years, p<0.001), less likely male (39.1 vs. 54.6%, p<0.001), and more likely to use life support measures at listing (7.3 vs. 2.9%, p<0.001) relative to those transplanted. Cox regressions showed increased five-year mortality for improved (HR=1.21 [1.07-1.38], p=0.002) and deteriorated (HR=3.36 [3.11-3.64], p<0.001) candidates relative to those transplanted, but two-year survival was highest in improved candidates (Figure 1). The short-term survival benefit for those improving was observed in primary (log rank p<0.001) but not re-transplant (log rank p=0.767) candidates. Those most likely to improve to delisting were older than 60 years (OR=1.96 [1.26-3.05], p=0.003), listed for primary pulmonary hypertension (OR=2.27 [1.13-4.56], p=0.022), and used life support measures at listing (OR=3.63 [2.15-6.13], p<0.001) relative to those transplanted. The percentage of candidates improving to delisting increased with increasing hospital volume (p<0.001).

Conclusion:
Primary lung transplant candidates improving to delisting faced lower short-term but higher five-year mortality relative to transplanted candidates. This effect did not persist for re-transplantation. Institutional volume may influence variations in listing practices. Understanding the characteristics and outcomes of this delisted population can better inform future candidate selection criteria.

57.14 Changing Independent Predictors of Length of Stay in Pediatric Kidney Transplantation

E. L. Godfrey1, E. T. Pan1, D. Yoeli3, A. Rana2  1Baylor College Of Medicine,Houston, TX, USA 2Baylor College Of Medicine,Division Of Abdominal Transplantation,Houston, TX, USA 3University Of Colorado Denver,Department Of Surgery,Aurora, CO, USA

Introduction:

Factors influencing graft and patient survival in pediatric kidney transplant recipients have been studied, but the outcome of length of stay has rarely been examined, despite its usefulness as a marker of perioperative morbidity. Examining changing predictors of length of stay provides both tools for clinical decision making and an assessment of the impact of the current kidney allocation system (KAS); the KAS was introduced in 2014 with a specific focus on reducing ethnic and regional disparities associated with the previous Share 35 system.

Methods:

The United Network for Organ Sharing provided de-identified data on 5,111 patients under the age of 18 listed for kidney transplant from 2005 to 2017, under both the Share 35 system and the Kidney Allocation System. Age, race, insurance status, a diagnosis of focal segmental glomerulosclerosis (FSGS), Kidney Donor Profile Index (KDPI), listing status (active, urgent, or critical), dialysis use and glomerular filtration rate (GFR) at transplant, donation after cardiac death, BMI, regional or national share status, whether the donor organ was on a pump, and cold ischemia time thresholds were all assessed against a length of stay greater than 7 days, 10 days, and 14 days. Univariate regression was performed to identify potential predictors, and multivariate regression was performed to determine which factors were independent predictors of prolonged stay.
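
A minimal sketch of the threshold analysis described above, fitting a separate multivariable logistic model for stays longer than 7, 10, and 14 days; the file and predictor names are hypothetical stand-ins for the UNOS fields:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("pediatric_kidney.csv")   # assumed UNOS extract, one row per recipient
predictors = ["age", "dialysis_pre_tx", "gfr_at_tx", "kdpi", "bmi", "cold_ischemia_hrs"]

for threshold in (7, 10, 14):
    y = (df["los_days"] > threshold).astype(int)   # prolonged stay at this cutoff
    model = sm.Logit(y, sm.add_constant(df[predictors])).fit(disp=False)
    print(f"LOS > {threshold} days, adjusted odds ratios:")
    print(np.exp(model.params).round(2))
```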

Results:
Predictors of prolonged stay changed between the two allocation systems. Younger age at the time of transplant was a predictor of longer stays across all groups. Under the Share 35 system, Hispanic race was a predictor of shorter stays, while being non-Hispanic and non-white was a predictor of longer stays. Being on dialysis before transplant and having a lower GFR were also associated with stays longer than 10 days. Under the KAS, increased BMI was associated with shorter stays, excluding overweight or obese status, which did not have a significant relationship with length of stay. Lower GFR remained a predictor of prolonged stay in general, while either public or private insurance coverage and dialysis before transplant were predictive of stays over 10 days. 

Conclusion:

Characteristics indicating poorer native kidney function prior to transplant have been, and remain, predictors of prolonged post-transplant hospitalization. Older age and higher BMI predict a shorter hospital course; studies have suggested that long-term graft and patient survival are superior in younger recipients, but this effect may only become apparent longer after transplant, and further investigation is needed to reconcile these apparently contradictory findings. The change in race as a predictive factor between Share 35 and the KAS corroborates recent studies reporting that transplant rates have increased for non-white recipients, and suggests that this increased access may reduce racial differences in perioperative outcomes. Further study of how the KAS accomplishes this should be used to create more equitable future allocation systems.

57.13 Effect of Transarterial Chemoembolization on Hepatic Artery Quality and Post-Transplant Complications

M. A. Selim1, M. Zimmerman1, J. Kim1, E. Hohenwalter2, W. Rilling2, J. C. Hong1  2Medical College Of Wisconsin,Department Of Radiology. Division Of Vascular And Intervention Radiology.,Milwaukee, WI, USA 1Medical College Of Wisconsin,Department Of Surgery. Division Of Transplant Surgery,Milwaukee, WI, USA

Introduction: Transarterial chemoembolization (TACE) is commonly utilized to downstage irresectable hepatic tumors prior to liver transplant (LT). Vascular complications of the hepatic artery (HA) after TACE may impact its suitability for vascular anastomosis during LT. We sought to examine HA quality in patients with pre-LT TACE.

Methods: We conducted a single center analysis from our prospective database of all adult LT performed in our center between October 2012 and May 2018. Median follow up was 26 months. Outcomes were compared between patients who received pre-LT TACE and those who did not.

Results: 162 transplants were included: Group I did not receive pre-LT TACE (n=110) and Group II did (n=52). Patients with TACE tended to be older males with viral hepatitis. Other characteristics are outlined in Table 1.
Our standard target for arterial anastomosis is the native common HA (CHA). For patients with TACE (Group II), only two cases (4%) required creation of an aortic conduit for arterialization because of an inadequate CHA (dissection or poor vessel quality), compared to 8% in Group I (p=0.4).
The incidence of post-LT vascular complications was similar: Group I 6.3% and Group II 5.7% (p=0.9). Most complications were late (>30 days). Only one case in the TACE group was due to native HA dissection that necessitated revision and creation of an aortic conduit.
Cox regression analysis showed that TACE was not associated with an increased risk of vascular complications. There was no difference in patient or graft survival between the two groups (p=0.1 and 0.2, respectively).

Conclusion: We found no association between TACE and higher rates of arterial vascular complications or other post-LT adverse outcomes. Careful post-LT follow up is important for early detection and management of any vascular complication.

 

57.12 Use of the Montreal Cognitive Assessment (MoCA) in Pre-transplant Evaluation

F. Olumba1, C. S. Hwang1, S. Levea1, B. Tanriover1, Y. Liang1, P. Vagefi1, M. MacConmara1, C. Hwang1  1UT Southwestern Medical Center,Dallas, TX, USA

Introduction:
Decreased cognitive function is associated with higher mortality in end-stage renal disease (ESRD) and kidney transplant recipients. The Montreal Cognitive Assessment (MoCA) is a quick test often used to screen for cognitive impairment. In addition, the Stanford Integrated Psychosocial Assessment for Transplantation (SIPAT) is a scoring system increasingly used to quantify psychosocial risk. We questioned whether MoCA and SIPAT scores would predict which patients were listed for kidney transplantation.

Methods:
A single center retrospective analysis was performed including all adult patients (age ≥18 years) with either CKD 5 or ESRD  who were evaluated for kidney transplantation between January 1, 2016 and January 1, 2018. Non-parametric comparison tests were executed to assess differences in MoCA and SIPAT scores between subjects successfully listed for transplant and those not listed during the study period. Regression analysis was done to determine the relationship of MoCA and SIPAT scores to listing outcome. 
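
A minimal sketch of one such analysis, dichotomizing MoCA at a cutoff (≥23, as examined in the Results) and estimating the odds ratio for listing from a 2x2 table; the file and column names are assumptions:

```python
import pandas as pd
from statsmodels.stats.contingency_tables import Table2x2

df = pd.read_csv("ktx_evaluations.csv")    # assumed evaluation cohort, one row per patient
df["moca_ge_23"] = (df["moca"] >= 23).astype(int)

# 2x2 table ordered so the odds ratio is for being listed given MoCA >= 23
counts = pd.crosstab(df["moca_ge_23"], df["listed"]).loc[[1, 0], [1, 0]].values
tab = Table2x2(counts)
print("OR:", round(tab.oddsratio, 2),
      "95% CI:", [round(x, 2) for x in tab.oddsratio_confint()],
      "p:", round(tab.oddsratio_pvalue(), 4))
```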

Results:
880 patients were included for analysis. The average age at the time of referral or evaluation was 54.1±13.8 years. The most common cause of ESRD was diabetes and hypertension (26.8%), with the majority of subjects having ESRD (58.1%). During the study period, 255 patients (29.0%) were placed on the waitlist. The median MoCA score was 24 in the non-listed group vs 25 in the listed group; however, a MoCA score ≥23 resulted in an OR of 1.88 (95% CI 1.18 to 2.99, p=0.0078) for being placed on the transplant list. Furthermore, among patients with MoCA ≥23, no statistically significant difference was found between listed and non-listed patients with respect to age, sex, race, etiology of ESRD, dialysis use or six-minute walk distance. Interestingly, SIPAT scores differed significantly between listed and non-listed patients with MoCA ≥23 (9 vs 12; p=0.003), with non-listed patients scoring higher, indicating increased psychosocial risk.

Conclusion:
Combining MoCA and SIPAT scores may have useful predictive value to determine suitable candidates for kidney transplantation.

57.10 Outcomes in Patients Transplanted for NASH – Bigger is Better

Y. Liang1, D. Desai1, P. Vagefi1, C. Hwang1, M. MacConmara1  2UT Southwestern Medical Center,Medicine,Dallas, TX, USA 1UT Southwestern Medical Center,Surgery,Dallas, TX, USA

Introduction:
Non-alcoholic steatohepatitis (NASH) will likely become the most common indication for liver transplantation during the next decade. We sought to determine the impact of recipient and donor selection practices on outcomes in NASH recipients. 

Methods:
The United Network for Organ Sharing database was queried to identify all liver transplant recipients from 2000 to 2017. Pediatric recipients (<18 years) were excluded from the analysis. Donor and recipient demographic, clinical and biochemical data were extracted, and risk factors for outcome after transplant were compared between NASH recipients and recipients transplanted for all other causes. Patient and graft survival rates were calculated for both groups. Continuous variables were compared using the unpaired Student's t-test and nominal variables using the chi-square test. A p-value of <0.01 was considered significant. 

Results:
88,602 adult liver transplants were completed, of which NASH recipients accounted for 9,689. NASH recipients were older (58.7 vs. 53.2 years, p<0.001) with higher MELD scores (23.5 vs. 22.1, p<0.001). Donor age >60 years (17.1% vs. 12.9%, p<0.001), donor macrosteatosis over 30% (3.2% vs. 2.1%, p<0.001) and donation after cardiac death (DCD) donor status (5.8% vs. 4.5%, p<0.001) were all significantly more frequent in NASH recipients. Cold ischemia time (6.5 vs. 6.9 hours, p<0.001) was shorter for NASH recipients. Interestingly, graft survival was significantly better for NASH recipients (Figure 1, p<0.001). 

Conclusion:
NASH recipients have superior outcomes following liver transplantation despite utilization of higher risk donor livers and should be considered for offers of marginal organs. 

57.09 Impact of Hyponatremia on Adult Post-Liver Transplant Outcomes

I. Clouse1, J. W. Clouse1, C. A. Kubal1, R. S. Mangus1  1Indiana University School Of Medicine,Surgery / Transplant,Indianapolis, IN, USA

Introduction:
Serum sodium level has recently been added to the calculation of the Model for End-Stage Liver Disease (MELD) score because it is recognized as an important marker of disease severity in the cirrhotic patient. This study reviews all adult patients undergoing liver transplant at a single center over a 10-year period to assess the impact of serum sodium levels on clinical outcomes.

Methods:
The records of all adult patients undergoing liver transplant at a single center from 2007 to 2018 were reviewed. Baseline and post-transplant serum sodium levels were recorded.  Outcomes included length of stay and patient survival. Neurologic outcomes included any altered mental status, need for neurology consult and any required brain evaluation or imaging. Cox regression was used to assess 10-year patient survival.

Results:
There were 1,363 adult transplants reviewed during the study period. The median patient age was 57 years; 67% were male and 89% Caucasian. Etiologies of liver failure included hepatitis C (30%), fatty liver disease (21%), alcoholic liver disease (29%) and hepatocellular carcinoma (22%). The median MELD was 21, with a median hospital stay of 10 days. At transplant, 72% of patients had a baseline serum sodium of 135 mEq/L or higher, 20% were between 130 and 134 mEq/L, and 8% (109 patients) had a serum sodium less than 130 mEq/L.

Patients older than 40 years of age were much more likely to present with hyponatremia (p=0.02), as were those with alcoholic liver disease (p<0.01). Lower serum sodium levels were associated with higher MELD scores. Patients with varying levels of hyponatremia did not differ in risk of perioperative death, 90-day death or 1-year patient survival. They had a similar hospital length of stay (12 versus 10 days, p=0.97). Hyponatremia was associated with 30-day post-transplant altered mental status (>25%, p=0.01) and with requests for neurology consultation (>20%, p<0.01). Brain studies in the first 30 days post-transplant were much more likely in hyponatremic patients (CT, p=0.07; MRI, p=0.05; and EEG, p<0.01). Cox regression of 10-year patient survival demonstrated decreasing survival, from 75% down to 70%, with increasing severity of hyponatremia.

Conclusion:
These results provide a broader understanding of the impact of hyponatremia on post-liver transplant outcomes. Eight percent of patients went to the operating room with a serum sodium level <130 mEq/L. Though patient survival was similar, patients with hyponatremia were much more likely to require neurologic testing and intervention. At our center, efforts are made to maintain stable sodium levels during the transplant procedure, with slow increases in the days post-transplant up to the time of discharge.
 

57.08 High BMI Does Not Predispose to Post-Transplant DM, Morbidity or Mortality in Renal Transplantation

S. Kaur1, L. Greco1, K. Lau1, A. Di Carlo1, S. Karhadkar1  1Temple University,Surgery / Division Of Abdominal Organ Transplantation,Philadelphia, PA, USA

Introduction:
Candidacy for renal transplantation is multifactorial, and one of the variables factored into this decision is recipient obesity. Obesity has been shown to be associated with an increased risk of allograft dysfunction; however, the association between obesity and short-term complications remains unclear. There is an increasing trend to subject obese patients to bariatric surgery before transplantation. The purpose of this study is to evaluate the association between obesity and the risk of short- and long-term complications after renal transplantation. 

Methods:
We identified consecutive patients who underwent renal transplantation at a single center between the years 2013 and 2018. The body mass index (BMI) was calculated for all patients and patients were stratified by BMI: Obese (BMI greater than 30) and non-obese (BMI less than 30). Patient charts were reviewed for infectious complications, rejection of allograft, new onset diabetes, return to dialysis after transplant, and proteinuria. Student’s paired T-test and odds ratios were calculated to assess the relationship between obesity and the aforementioned complications.

Results:
A total of 246 patients underwent renal transplant between 2013 and 2018; 63.3% (n=155) were male, 91.1% (n=224) underwent deceased donor transplant, and 85.0% (n=209) were on dialysis prior to transplant. Of these patients, 2% (n=5) were underweight (BMI <18.5), 28% (n=69) had a normal BMI (18.5-24.9), 35% (n=86) were overweight (BMI 25-29.9), and 34.6% (n=85) were obese (BMI >30). There was no difference between the obese and non-obese kidney transplant recipients with regard to incidence of return to dialysis after transplant (p=0.458, OR 0.606), new-onset diabetes after transplant (p=0.874, OR 0.915), or proteinuria (p=0.188, OR 1.424). Additionally, among patients who had complications following transplant, there was no significant difference between obese and non-obese recipients in the incidence of organ rejection (p=0.340, OR 0.703) or of complications classified as not secondary to infection or rejection (p=0.965, OR 1.017). There was a weak association of obesity with increased risk of infectious complications (p=0.051, OR 2.199). 

Conclusion:
In patients undergoing kidney transplantation, there is no significant difference in the incidence of complications in obese patients compared to non-obese patients. There is a weak association of obesity with increased risk of infectious complications that does not reach significance. High BMI is not associated with proteinuria, graft loss or rejection. Obesity should not be a contraindication to renal transplantation.

57.07 Risk Factors for Predicting Prolonged Length of Stay (PLOS) After Kidney Transplant

W. Q. Zhang1, A. Rana2, B. V. Murthy2  1Baylor College Of Medicine,Houston, TX, USA 2Baylor College Of Medicine,Division Of Abdominal Transplantation,Houston, TX, USA

Introduction:  Length of stay is a surrogate for determining the use of healthcare resources and costs to both patients and hospitals. Currently, there is no comprehensive review of risk factors for prolonged length of stay (PLOS; >14 days) after kidney transplant. Insight into factors that increase the probability of PLOS will provide a basis for future clinical applications.

Methods: Univariate and multivariate analyses (p<0.05) were performed on 98,322 adult recipients of deceased donor kidneys between August 2008 and March 2018 using the United Network for Organ Sharing (UNOS) database to identify donor, recipient, and perioperative risk factors for PLOS.

Results: Univariate analysis identified 32 factors, in addition to Estimated Post Transplant Survival (EPTS) score and the Kidney Donor Profile Index (KDPI), as possible predictors of PLOS. Including EPTS and KDPI, 18 total factors remained significant after multivariate analysis. Factors increasing the probability of PLOS include longer cold ischemia times (CITs), admission to the intensive care unit (ICU) at time of transplant, lower functional status, African American ethnicity, male donor, body mass index (BMI) under 18.5 or over 35, longer time on dialysis, and national procurement. Factors protective against PLOS include shorter time on waitlist, shorter time on dialysis, and BMI of 25 up to 30.

Conclusion: Overall, admission to the ICU (Odds Ratio (OR) = 13.61) had the largest effect on PLOS, but other interactions were also revealed. Of note, groups with CITs of 7 hours up to 18 hours (OR = 1.65), 18 hours up to 32 hours (OR = 1.97), and over 32 hours (OR = 2.42) all had significantly increased risk of PLOS compared to the reference group of CIT under 7 hours, with the effect on PLOS increasing with increasing CIT. This emphasizes the need to minimize CIT. Other factors will require further analysis to interpret. A next step for this project will be to create a predictive index for PLOS.
 

57.05 Pediatric Liver Transplantation for Malignancy: Surgical Outcomes and the Role for Segmental Grafts

F. Lopez-Verdugo1, A. Florez-Huidobro Martinez2, S. Fujita1, K. Jensen1, I. Zendejas1, E. R. Scaife1, L. Book1, R. L. Meyers1, M. I. Rodriguez-Davalos1  1Intermountain Primary Children’s Hospital,Department of Surgery, Transplant And Hepatobiliary Surgery,Salt Lake City, UT, USA 2Universidad Anáhuac,School Of Medicine,Mexico City, MX, Mexico

Introduction: Primary resection remains the mainstay of treatment for liver malignancies in the pediatric population; unfortunately, many of these children present with unresectable disease, for which liver transplantation has become the standard of care. Hepatoblastoma (HBL) is the most common liver malignancy in children, representing 80% of all liver tumors. It usually presents before the age of four and appears to affect male patients more often, with a male-to-female ratio ranging from 1.2:1 to 3.6:1. The aim of this study is to review patient and graft survival in a cohort of patients with liver malignancy who underwent liver transplantation at our center over the past 2 decades and to compare the different types of liver allografts.

Methods:  All patients diagnosed with liver malignancy who underwent liver transplantation as treatment from 1998 to 2018 were analyzed. Demographics, age at the time of transplant, prior resections, type of graft, vascular complications, survival rate and recurrence were evaluated. Fisher's exact test was performed to assess differences in survival rate at 12-month follow-up between graft types.

Results: Of the 249 transplants performed in our center over the last two decades, 16 transplants (6.4%) were performed for malignancies in 15 patients. The mean age at transplant was 4.19 years (range: 0.6-7.1 years), and 9 patients were female (60%). Thirteen transplants (81.2%) were performed for HBL, 2 for hemangioendothelioma (12.5%) and 1 for pancreatoblastoma (6.25%). Five transplants were from living donors and 11 from deceased donors (3 reduced/split and 8 whole); 1 patient received an ABO-incompatible liver. Half of our cohort received technical variant grafts, either from living or deceased donors. Of the patients with HBL, 4 (30%) had a prior resection attempt; among these, 2 patients (50%) succumbed within 2 months after liver transplantation (LTx). Overall, 3 patients died within the first 6 months after LTx. Causes of death included recurrence of disease (n=2) and primary graft non-function (n=1). All patients with diseases other than HBL are alive and doing well. At a median follow-up of 84.5 months (range: 0-241), overall patient and graft survival were 80% and 75%, respectively. There was no statistically significant difference in survival rate between patients who received whole grafts and those who received technical variant grafts (p=0.5).

Conclusion: Timing is critical in providing liver transplantation for patients suffering from liver malignancy in whom the extent of disease precludes complete resection. The use of segmental grafts and ABO-incompatible livers did not appear to diminish survival at 1-year follow-up; thus, utilization of these graft types might increase the organ donor pool and expedite treatment for patients with liver malignancy. The most important factor in our series was tumor histology, as the small cell undifferentiated variant of HBL was found in the patients who died of disease recurrence.

57.04 Small Pediatric Livers Can Be Used Safely in Adult Recipients with Good Long-Term Outcomes

A. Shubin1, C. S. Hwang1, P. Vagefi1, D. Desai1, M. MacConmara1  1University Of Texas Southwestern Medical Center,Department Of Surgery, Division Of Surgical Transplantation,Dallas, TX, USA

Introduction:

It is well recognized from clinical experience in living donor liver transplantation that the use of small allografts can lead to early complications in the recipient, including primary graft non-function; however, less is known about outcomes with whole-organ grafts, especially long-term outcomes. We evaluated patient and allograft survival in adult liver recipients who received allografts from young pediatric donors to determine whether broader use of these organs is warranted.

Methods:

The United Network for Organ Sharing (UNOS) database was queried to examine outcomes in all liver recipients from 1993 to 2017. Patients were then divided into those receiving their liver allograft from a small pediatric donor (<30 kg) and those receiving an adult graft. Further stratification was done on the basis of the fulminant status of the recipient. Kaplan-Meier survival curves were generated; continuous variables were compared using the unpaired Student's t-test and nominal variables using either the chi-square or Fisher's exact test. A p-value <0.05 was considered significant.
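
A minimal sketch of the survival comparison described above: Kaplan-Meier estimates and a log-rank test for recipients of small pediatric (<30 kg donor) versus adult donor grafts; the file and field names are illustrative, not the actual UNOS variables:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("liver_recipients.csv")   # assumed UNOS-style extract (illustrative)
ped = df[df["donor_weight_kg"] < 30]       # small pediatric donors, per the <30 kg cutoff
adult = df[df["donor_weight_kg"] >= 30]

kmf = KaplanMeierFitter()
for name, grp in [("pediatric donor", ped), ("adult donor", adult)]:
    kmf.fit(grp["years_followup"], event_observed=grp["died"], label=name)
    print(name, "5-year survival:", round(float(kmf.predict(5.0)), 3))

# Log-rank test comparing the two survival curves
res = logrank_test(ped["years_followup"], adult["years_followup"],
                   event_observed_A=ped["died"], event_observed_B=adult["died"])
print("log-rank p =", round(res.p_value, 3))
```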

Results:

Data on 143,612 recipients were evaluated. 668 adult recipients received pediatric donor liver allografts and, of those, 109 patients were transplanted for fulminant disease. The average donor age was 7.95 years for pediatric donors versus 38.9 years for adult donors, and the donor-to-recipient weight difference was 35.2 versus 4.5 kg (p<0.05), respectively. Recipients of pediatric livers were smaller (61.2 vs 82.7 kg, p<0.05). Overall long-term survival of recipients who received small pediatric allografts was not statistically different. In general, recipients with fulminant disease had worse outcomes; however, adults with fulminant disease who received a small pediatric graft had markedly worse outcomes.

Conclusion:

Small pediatric liver grafts can be safely used in selected adult recipients and provide excellent outcomes.

57.03 Analysis of Anti-Thymocyte Globulin Antibody Titer Congruent with Kidney Transplantation

S. G. Lattimore1, N. J. Skill1, M. A. Maluccio1  1Indiana University School Of Medicine,Transplant,Indianapolis, IN, USA

Introduction:  Rabbit thymoglobulin is an anti-thymocyte globulin (rATG) used in transplantation to deplete lymphocytes. However, anti-rATG antibodies are associated with both acute and chronic rejection. The purpose of this study is three-fold: first, to report the incidence of anti-rATG antibodies in a large renal transplant center; second, to evaluate outcomes, risk factors, hazard ratios, and costs associated with a positive anti-rATG antibody titer; and finally, to investigate CD40L and IL21, both linked to antibody-mediated rejection, as alternate targets for patients with anti-rATG antibodies.

Methods:  Clinical records of renal transplant recipients from January 2004 to May 2018 were reviewed for anti-rATG antibody titers. Serum CD40L and IL21 quantifications were performed using commercially available ELISAs. Cost analysis was extracted from billing records 0-7 days after the rATG titer, using total charges as a proxy for cost.

Results: Between 2004 and May 2018, the Indiana University Hospital transplant program performed, on average, 160 renal transplants per year. Anti-rATG antibody ELISAs were requested and performed for 19 patients per year (11.8%); 4.8 patients per year were positive at a 1:100 titer (25.3%). Anti-rATG antibodies were associated with a significantly shorter time to rejection (137 days) when compared to antibody-negative patients (873 days). No correlation was found between anti-rATG antibodies and time on dialysis or lymphocyte populations. A slight correlation was observed between a positive anti-rATG antibody titer and recipient age. Anti-rATG antibody rates were greater in patients receiving a second kidney (37.5%). The cost of treatment in patients with a positive anti-rATG titer was significantly lower ($39,549±9,504 vs $117,503±16,273; t-test p=0.0001). IL21 and CD40L were significantly greater in patients with a positive anti-rATG antibody titer when compared to patients who were negative.

Conclusion: Anti-rATG antibodies significantly impact the outcomes and costs of kidney rejection. Monitoring of anti-rATG antibody titers is required to evaluate outcomes and treatment options, especially in the setting of second transplants. Elucidation of the mechanisms associated with a positive anti-rATG antibody titer is required. IL21 and CD40L are potential targets.