66.05 Curettage and Cautery as an Alternative to Primary Closure for Pediatric Gastrocutaneous Fistulae

N. Denning4, I. A. El-shafy2, J. G. Hagen3, S. Stylianos5, J. M. Prince1,4, A. M. Lipskar1,4  1Cohen Children’s Medical Center, Northwell Health System,Department Of Pediatric Surgery,New Hyde Park, NY, USA 2Maimonides Medical Center,Department Of Surgery,Brooklyn, NY, USA 3Cohen Children’s Medical Center, Northwell Health System,Department Of Anesthesia,New Hyde Park, NY, USA 4Hofstra University School Of Medicine-Northwell Health System,Department Of Surgery,Manhasset, NY, USA 5Morgan Stanley Children’s Hospital Of New York-Presbyterian, Columbia University Medical Center,Division Of Pediatric Surgery,New York, NY, USA

Introduction: The development of a gastrocutaneous fistula (GCF) after gastrostomy tube removal is a frequent complication, occurring 5-45% of the time. Conservative therapy with chemical cauterization is frequently unsuccessful, and surgical GCF repair with open primary layered closure of the gastrotomy is often required. We describe an alternative, less invasive approach to GCF closure that is performed as an outpatient procedure, sparing patients an inpatient hospitalization and intra-abdominal surgery.

Methods: This is an IRB approved retrospective review of all patients who underwent GCF closure from 1/2010 to 7/2016 at a tertiary care children’s hospital. Demographics including age, weight, BMI, comorbidities, and initial indication for gastrostomy tube were recorded. Operative details such as ASA score, operative duration, and type of anesthetic and airway were noted. Based on surgeon preference, two types of operative closure were used during this timeframe: primary layered closure or curettage and cautery (C&C). The latter is a procedure in which the fistula tract is first scraped with a fine curette and then the fistula opening and tract are cauterized circumferentially. Finally, the presence of a persistent fistula and the need for formal reoperation were determined.

Results: Sixty-five unique patients requiring GCF closure were identified. Of those, 44 patients (67.7%) underwent primary closure and 21 patients (32.3%) underwent C&C. The success rate of primary closure was 97%, with one patient experiencing wound breakdown with a persistent fistula. The overall success rate of C&C was 66.7% (14/21). Of these 14 successful closures, 11 GCF (52.4% of all C&C patients) were closed by one month, an additional two were closed by four months (cumulative 61.9%), and one was closed with a second C&C procedure. The remaining 7/21 patients (33.3%) required subsequent formal layered surgical closure. C&C had significantly shorter operative times (13.5 ± 14.7 min vs 93.4 ± 61.8 min, p < 0.0001) and significantly shorter times in the post-anesthesia care unit (101.8 ± 42.4 min vs 147 ± 86 min, p < 0.0001). Patients were intubated with an endotracheal tube 88.6% of the time for primary closure and 23.8% of the time for C&C. Among patients admitted for an elective procedure, the average length of stay was 1.9 days for primary closure compared with 0 days for the C&C group. Among patients who underwent C&C, those with a persistent fistula did not differ significantly in time since initial creation of the gastrostomy, age, BMI, or ASA score.

Conclusion: Our study verifies that primary closure remains the gold standard for persistent gastrocutaneous fistula. However, C&C is a safe, outpatient procedure that effectively treats a GCF the majority of the time in children.  We suggest that in select patients it may be an appropriate initial and definitive procedure for GCF closure.

 

66.04 A High Ratio of Plasma:RBC Improves Survival in Massively Transfused Injured Children

M. E. Cunningham1, E. H. Rosenfeld1, B. Naik-Mathuria1, R. T. Russell2, A. M. Vogel1  1Baylor College Of Medicine,Department Of Pediatric Surgery,Houston, TX, USA 2Children’s Hospital Of Alabama,Department Of Pediatric Surgery,Birmingham, AL, USA

Introduction: Massive transfusion protocols (MTP) with balanced high blood product ratios (plasma and platelets to red blood cells (RBC)) have been associated with improved outcomes in adult trauma. Their impact in pediatric trauma is unclear. The purpose of this study is to evaluate the effect of blood product ratios in severely injured children.

Methods: The 2015-2016 American College of Surgeons Trauma Quality Improvement Program research dataset was used to identify trauma patients ≤ 18 years of age who received a massive transfusion (≥40 ml/kg/24-hours total blood products). Children with burns and non-survivable injuries were excluded. Plasma:RBC and platelet:RBC ratios were categorized as low (<1:2), medium (≥1:2 to <1:1), and high (≥1:1). Trauma-related clinical and outcome data were collected and analyzed using descriptive statistics, the Kruskal-Wallis test, chi-square test, Fisher’s exact test, and Kaplan-Meier analysis. Continuous variables are presented as median [IQR]; p<0.05 was considered significant.
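As a worked illustration of the ratio categorization (hypothetical numbers, not drawn from the dataset): a 20 kg child who receives 600 ml of RBC (30 ml/kg) and 400 ml of plasma (20 ml/kg), ignoring platelets for simplicity, meets the massive transfusion definition (50 ml/kg/24-hours total blood products) and falls in the medium plasma:RBC group:

$$ \frac{\text{plasma}}{\text{RBC}} = \frac{400\ \text{ml}}{600\ \text{ml}} \approx 0.67, \qquad \tfrac{1}{2} \le 0.67 < 1 \;\Rightarrow\; \text{medium ratio} $$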

Results: Of 473 patients, the median age was 9 [3,16] years, 159 (34%) were female, and 350 (74%) sustained blunt injuries. Gender, mechanism of injury, ISS, GCS, and age-adjusted vital signs were similar among the low, medium, and high plasma:RBC and platelet:RBC cohorts. Children in the low plasma:RBC group were younger than those in the medium and high groups (6 [2,13] vs 13 [6,17] & 12 [6,17]; p<0.01). Analysis of the plasma:RBC groups showed that those with low ratios had a lower incidence of acute kidney injury (AKI) (0% vs 4% & 5%; p=0.01) compared to medium and high ratios, and fewer ventilator-free days (6 [0,23] vs 17 [17,23]; p=0.03) compared to high ratios. Those with medium plasma:RBC ratios had greater total blood transfusion volumes (ml/kg/24-hours) (89 [56,156] vs 68 [50,105] & 69 [50,115]; p<0.01). There was no difference in ICU-free days or other in-hospital complications between the groups. Analysis of the platelet:RBC ratios showed no significant differences in total blood volume received, ventilator-free days, ICU-free days, AKI, or other in-hospital complications. Hemorrhage control procedures were less common in the low plasma:RBC cohort (34% vs 56% & 51%; p<0.01), while there was no difference between the platelet:RBC cohorts. Survival was improved in the high plasma:RBC cohort early on (Fig. 1), but the difference became nonsignificant over time. There was no improvement in survival between the platelet:RBC cohorts at any time point.

Conclusion: A high ratio of plasma:RBC may result in improved early survival in injured children receiving a massive transfusion. Prospective, multicenter studies are needed to determine optimal resuscitation strategies for these critically ill children.

66.01 Risk Factors for Adverse Outcomes in Children Undergoing Resection of Primary Liver Tumors

K. Culbreath1, A. Garcia1, I. Leeds1, T. Crawford1, E. Boss2, D. Rhee1  1Johns Hopkins University School Of Medicine,Surgery,Baltimore, MD, USA 2Johns Hopkins University School Of Medicine,Otolaryngology,Baltimore, MD, USA

Introduction: Children with primary malignant tumors of the liver often present with an incidental abdominal mass and anemia, with failure to thrive in more advanced disease. Complete tumor resections are major operations that offer the best chance of long-term disease-free survival. This study analyzes the effect of preoperative anemia and parenteral nutrition on surgical outcomes in these patients undergoing a major liver resection.

Methods: This is a retrospective cohort study of children undergoing a major liver resection for primary malignant hepatic tumors. Data were collected from the National Surgical Quality Improvement Program Pediatric database from 2012-2015. Demographics, comorbidities, and 30-day outcomes were compared by anemia status (defined by age-specific clinical practice guidelines) and preoperative total parenteral nutrition (TPN) use with Fisher's exact test. Outcomes included post-operative complications and hospital readmissions. Propensity score matching was performed to control for significant confounders.
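The abstract does not state how the propensity scores were estimated; a standard construction, assuming a logistic model for the probability of the exposure (illustrated here for TPN) given baseline covariates X, is

$$ e(x) = \Pr(\text{TPN} = 1 \mid X = x) = \frac{1}{1 + e^{-(\beta_0 + \beta^\top x)}}, $$

with matched pairs formed between exposed and unexposed children who have similar estimated scores (for example, nearest-neighbor matching within a caliper).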

Results: A total of 110 children were included in the study, with 80 (72.7%) undergoing lobectomy and 30 (27.3%) trisegmentectomy. The mean age was 3.5 years (range 4 days to 17.7 years). Thirty-one patients (30.4%) were born prematurely, with 18 (16.4%) born before 30 weeks gestational age. Seventy-six patients (69.1%) had preoperative anemia, and 36 (32.7%) were receiving TPN. Children on pre-operative TPN were more likely to have cardiac (p=0.01), respiratory (p<0.01), neurologic (p<0.01), and hematologic comorbidities (p=0.02). There were 22 (20.0%) post-operative complications and 6 (5.5%) hospital readmissions. After propensity score matching, there were 34 matched pairs for anemia and 36 matched pairs for TPN. There was no significant difference in post-operative complications between anemic and non-anemic patients (20.6% vs 35.3%, p=0.28). Patients receiving pre-operative TPN had increased rates of post-operative complications compared with those who were not (33.3% vs 11.1%, p=0.04). Neither anemia (p=0.61) nor pre-operative TPN use (p=0.05) had a significant association with readmissions.

Conclusion: Anemia and pre-operative TPN are common in children undergoing major resection of primary malignant hepatic tumors, with TPN use being associated with several comorbidities. Patients on pre-operative TPN are at higher risk of complications after surgery and may warrant special attention to their overall conditioning and nutritional status when planning their operation.

 

66.02 Delivery of Care for Biliary Atresia in 21st Century: An Analysis of Hospital Caseload in the US

T. Chkhikvadze1, C. J. Greig1, J. Shi2, R. A. Cowles1  1Yale University School Of Medicine,Section Of Pediatric Surgery, Department Of Surgery,New Haven, CT, USA 2Ohio State University,The Research Institute At Nationwide Children’s Hospital,Columbus, OH, USA

Introduction:  Biliary atresia (BA) is a rare disease affecting infants and is the most common indication for pediatric liver transplantation in the United States. Due to the lack of an organized method for centralization of care for this disease, we hypothesized that care for BA patients would be distributed in a diffuse manner. The Kids’ Inpatient Database (KID) was analyzed to assess the distribution of care and hospital volume for BA in the United States.

Methods:  The KID database, published triennially, was queried from 2000-2012 using the ICD-9-CM code for BA (751.61) and the ICD-9 procedural codes for hepatic duct-GI anastomosis (51.37) and other bile duct anastomosis (51.39) in infants less than 1 year of age. National and regional estimates were calculated from captured discharges. We identified all hospitals caring for biliary atresia patients and subdivided them, per year, into those providing overall care and those providing surgical care.

Results: The number of hospitals participating in KID increased over time as did the number of hospitals providing overall and surgical care for BA (Table). Operative caseload estimates from 2000 showed that 48 of 55 (87.3%) hospitals performed 1-2 cases/year and 7 of 55 (12.7%) performed 3-9 cases/year. Estimates from 2012 showed that 53 of 72 (73.6%) hospitals performed 1-2 cases/year and 19 of 72 (26.4%) performed 3-9 cases/year. 

Conclusion: Infants with BA are cared for in 5-6% of hospitals reporting into the KID database. Surgical care appears to be further limited to approximately 2% of total KID participating hospitals. Although the care for BA appears to be self-centralized into only a very small subset of hospitals in the United States, nearly 75% of infants with this disease receive their surgical care in hospitals that perform fewer than three portoenterostomy procedures per year. Given the cost and long-term morbidity of BA treatment, a national cross-sectional study of care patterns and outcomes is desperately needed to assure that optimal care is being rendered under the current system.

66.03 Impact of Growth and Nutrition on Risk of Acquired Diseases of Prematurity and Perinatal Mortality

T. J. Sinclair1,2, C. Ye5, F. Salimi Jazi2, J. Taylor2, K. Bianco4, G. Shaw3, D. K. Stevenson3, R. Clark6, K. G. Sylvester2  1Stanford University School Of Medicine,Department Of General Surgery,Stanford, CA, USA 2Stanford University School Of Medicine,Division Of Pediatric Surgery,Stanford, CA, USA 3Stanford University School Of Medicine,Department Of Pediatrics,Stanford, CA, USA 4Stanford University School Of Medicine,Department Of Obstetrics & Gynecology,Stanford, CA, USA 5Hangzhou Normal University,School Of Medicine,Hangzhou, ZHEJIANG, China 6Pediatrix-Obstetrix Center For Research, Education And Quality,Sunrise, FL, USA

Introduction:

Birth weight and gestational age are the predominant clinical factors used to risk stratify infants for acquired diseases of prematurity including necrotizing enterocolitis (NEC), chronic lung disease (CLD), and retinopathy of prematurity (ROP). We sought to further elucidate the relative impact of size (birth weight, BW), prematurity (gestational age, GA), pre- and postnatal growth, and postnatal nutrition on the risk of perinatal mortality and development of acquired diseases of prematurity. 

Methods:

We performed a retrospective longitudinal cohort study by querying a multicenter, longitudinal database that included 995 preterm infants (<32 weeks gestation). Detailed clinical and nutritional data were collected on subjects for the first 6 weeks of life. Comparator groups were generated based on the following variables: BW (top vs. bottom quartiles), GA (23-26 vs. 29-32 weeks), fetal growth restriction (small for gestational age (SGA, defined as <10th percentile BW for GA) vs. appropriate for gestational age (AGA, BW 10-90th percentile for GA)), growth velocity (top vs. bottom quartile of change in weight z-score from birth to day of life 42), and total calories (<100 vs >120 kcal/kg/day averaged from day of life 2-42). Cases of NEC, CLD (need for supplemental oxygen at day of life 42), and ROP were identified. Odds ratios (OR) with 95% confidence intervals (CI) were calculated using the defined comparator groups for the outcomes of NEC, CLD, ROP, mortality (during the study period), and a composite of the disease and mortality outcomes. Statistical significance was defined as a p-value < 0.05. Mortality was excluded as an outcome from the growth velocity and total calorie analyses given the longitudinal nature of those variable definitions.
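The abstract does not specify how the confidence intervals were constructed; for a 2x2 comparator-by-outcome table with cell counts a, b, c, d, the usual (Woolf) formulation is

$$ \text{OR} = \frac{a d}{b c}, \qquad 95\%\ \text{CI} = \exp\!\left( \ln \text{OR} \pm 1.96 \sqrt{\tfrac{1}{a} + \tfrac{1}{b} + \tfrac{1}{c} + \tfrac{1}{d}} \right). $$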

Results:

Infants in the top quartile for BW or born at 29-32 weeks gestation were at significantly reduced risk for all disease outcomes, mortality, and the composite outcome. However, when fetal growth restriction was present there was an increased risk of ROP and mortality, but not of NEC, CLD, or the composite outcome. Similarly, being in the bottom quartile for postnatal growth velocity was associated with an increased risk of developing ROP, but not of the other outcomes. Finally, receiving greater than 120 kcal/kg/day on average was associated with a decreased risk for all outcomes except NEC, which approached but did not reach statistical significance. See Table 1.

Conclusion:

These results suggest that postnatal caloric delivery is a significant modifier of premature newborns’ risk of acquired major perinatal disease and mortality. These findings may be of particular importance given that postnatal caloric delivery may be modifiable.

65.08 Outcomes in Older Kidney Transplant Recipients with Prior Solid Organ Transplants

C. E. Haugen1, Q. Huang1, M. McAdams-DeMarco1,2, D. L. Segev1,2  1Johns Hopkins University School Of Medicine,Baltimore, MD, USA 2Johns Hopkins Bloomberg School Of Public Health,Epidemiology,Baltimore, MD, USA

Introduction: Outcomes in solid organ transplantation (SOT: heart, lung, liver) have improved, and SOT recipients are living longer with functioning grafts. However, 7-21% of SOT recipients will develop end-stage renal disease secondary to calcineurin inhibitor immunosuppression, and a growing number of SOT recipients will be listed for and undergo kidney transplantation (KT). Similar KT graft survival yet worse overall survival has been reported in adult prior SOT recipients, but it is unclear whether these outcomes are similar among older (age≥65) prior SOT recipients who undergo KT. In light of the aging SOT recipient population, KT outcomes should be evaluated, given the higher prevalence of comorbidities and frailty in older adults.

Methods: 40,730 older (age≥65) KT recipients were identified using the US Scientific Registry of Transplant Recipients (1/1/1990-12/31/2015). Adjusted Cox proportional hazards models were used to estimate differences in graft and patient survival after KT between prior SOT and no prior SOT recipients. 
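For reference, the adjusted Cox model has the general form below; the exact covariate set Z used for adjustment is not listed in the abstract and is shown only schematically:

$$ h(t \mid X) = h_0(t)\, \exp\!\left( \beta_1 \cdot \text{priorSOT} + \beta_2^\top Z \right), \qquad \text{aHR} = e^{\beta_1}. $$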

Results: Since 1990, 948 prior SOT recipients (485 liver, 396 heart, 67 lung) have undergone KT after age 65. The number of older KT recipients with a prior SOT has increased annually since 1990, with a range of 0-74 performed per year. Prior SOT recipients were more likely to be male, Caucasian, have renal failure from calcineurin inhibitors, undergo a pre-emptive KT, and receive a living donor graft than recipients with no prior SOT. Five-year death-censored graft loss was 88% for recipients with a prior SOT and 88% with no prior SOT; the corresponding five-year mortality was 71% and 64% (Figure). After adjustment, death-censored graft loss (aHR: 1.25, 95%CI: 1.01-1.54, p=0.04) and mortality (aHR: 1.43, 95%CI: 1.28-1.59, p<0.001) were greater for older prior SOT recipients than for recipients with no prior SOT. Regardless of prior SOT type, mortality for older prior SOT recipients was greater compared with no prior SOT recipients (lung aHR: 4.06, 95%CI: 2.35-7.00; heart aHR: 1.35, 95%CI: 1.16-1.58; and liver aHR: 1.41, 95%CI: 1.22-1.65).

Conclusions: Older KT recipients with a prior SOT have worse graft and overall survival compared with recipients without a prior SOT. Given these worse outcomes, appropriate and careful selection of older KT candidates with a prior SOT is imperative.

65.09 Effects of Kidney Transplant on the Outcomes of Surgical Repair of Abdominal Aortic Aneurysm

H. Albershri1, W. Qu1, M. Nazzal1, J. Ortiz1  1The University Of Toledo Medical Center,Department Of Surgery,Toledo, OH, USA

Objectives: To investigate the impact of a history of kidney transplant (Tx) on the in-hospital outcomes of surgical repair (SR) of abdominal aortic aneurysm (AAA).

Methods:  All AAA patients from 2008 to 2013 were selected using International Classification of Diseases rev. 9 (ICD-9) codes from the National Inpatient Sample (NIS) database of the Healthcare Cost and Utilization Project (HCUP). History of Tx, comorbidities, SR (open repair (OR) or endovascular repair (EVAAAR)), and postoperative complications were also identified by ICD-9 codes. The Elixhauser comorbidity index (ECI) was calculated based on the method published by van Walraven et al. In-hospital mortality rate (IMR), length of stay (LOS), total hospital charge (TC), and postoperative complications were compared between Tx and non-Tx patients. Binary logistic regression and linear regression were used to adjust for confounding factors. IBM SPSS ver. 23 was used for all statistical analyses. The type I error level was set at 0.05.
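The van Walraven formulation of the ECI is a weighted sum over the Elixhauser comorbidity indicators, with some conditions carrying negative weights; the specific weights are given in the cited publication and are not reproduced here:

$$ \text{ECI} = \sum_{i} w_i\, x_i, \qquad x_i \in \{0, 1\}. $$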

Results: A total of 284,451 patients in the NIS were diagnosed with AAA over the 6-year period. Only 389 (0.14%) of them had a history of Tx. Tx patients were significantly younger (67.8±9.5 vs. 75.9±10 years) and more often male (78.1% vs. 67.4%) than non-Tx patients (both p<.001). Overall, 18.3% (n=52,168) of patients underwent SR, and the majority of procedures were EVAAAR (78.3%). There were no significant differences in the incidence or type of SR between Tx and non-Tx patients (Table 1, both p>.05). The Tx group had a significantly higher ECI than the non-Tx group (median: 6 vs. 2, p<.001). There were no significant differences in common postoperative complications, LOS, TC, or IMR between Tx and non-Tx patients (Table 1, all p>.05). Multivariable analysis also showed no significant differences in these in-hospital outcomes between the Tx and non-Tx groups after adjusting for confounding factors such as demographics, hospital characteristics, and ECI (Table 1, all p>.05).

Conclusion: Although Tx patients tend to have a higher burden of comorbidities, they did not show significant increases in postoperative complication rates, IMR, LOS, or TC compared with non-Tx patients. Our study is limited to in-hospital outcomes, and statistical power was constrained by the small sample size of the Tx group.

 

65.10 Hospital Length of Stay After Pediatric Liver Transplantation

K. Covarrubias1, X. Luo1, D. Mogul2, J. Garonzik-Wang1, D. L. Segev1  1Johns Hopkins University School Of Medicine,Surgery,Baltimore, MD, USA 2Johns Hopkins University School Of Medicine,Pediatric Gastroenterology & Hepatology,Baltimore, MD, USA

Introduction:  Pediatric liver transplantation is a life-saving treatment modality that requires extensive multidisciplinary assessment of patients and their families in the pre-transplantation period. In order to better inform medical decision-making and discharge planning, and ultimately provide more personalized patient counseling, we sought to identify recipient, donor, and surgical characteristics that influence hospital length of stay (LOS) following pediatric liver transplantation.

Methods:  We studied 3,956 first-time pediatric (<18 years old) liver transplant recipients between 2002 and 2016 using SRTR data. We excluded patients ever listed as status 1A and patients who died prior to discharge. We used multi-level negative binomial regression to estimate incidence rate ratios (IRR) for hospital LOS, accounting for center-level variation. For recipients <12 years old, the PELD (Pediatric End-Stage Liver Disease) score was used for analysis; for older recipients, the MELD (Model for End-Stage Liver Disease) score was used.
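A schematic of the model (the exact covariate specification is assumed, not stated in the abstract): for recipient i at center j,

$$ \log \mathbb{E}\!\left[ \text{LOS}_{ij} \right] = \beta_0 + \beta^\top X_{ij} + u_j, \qquad u_j \sim N(0, \sigma^2), \qquad \text{IRR}_k = e^{\beta_k}, $$

so each reported IRR is the multiplicative change in expected LOS per unit change in the corresponding covariate, with the random intercept u_j absorbing center-level variation.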

Results: The median LOS in our study population was 15 hospital days after transplantation. Our analysis determined that a MELD/PELD score >14 (MELD 15-25: IRR 1.14, 95% CI 1.08-1.21; MELD/PELD 25-29: IRR 1.39, 95% CI 1.27-1.52; MELD/PELD >30: IRR 1.28, 95% CI 1.17-1.41), exception points (IRR 1.12, 95% CI 1.06-1.18), partial grafts (IRR 1.23, 95% CI 1.16-1.31), and Hispanic ethnicity (IRR 1.07, 95% CI 1.00-1.15) were associated with a longer LOS (p<0.05). A graft from a live donor (IRR 0.88, 95% CI 0.81-0.96), recipient weight greater than 10 kg (10-35 kg: IRR 0.80, 95% CI 0.76-0.85; >35 kg: IRR 0.66, 95% CI 0.61-0.70), and non-hospitalized patient status (IRR 0.80, 95% CI 0.71-0.90) were associated with a decreased LOS (p<0.05).

Conclusion: Our findings suggest that the ability to transplant patients at lower MELD/PELD scores and increased use of grafts from living donors would decrease healthcare utilization in the immediate postoperative period. Hispanic ethnicity and public health insurance were also associated with a longer LOS; however, our model does not account for healthcare disparities faced by these groups, including socioeconomic status and language barriers.

 

65.06 Preoperative Thromboelastography Predicts Transfusion Requirements During Liver Transplantation

J. T. Graff1, V. K. Dhar1, C. Wakefield1, A. R. Cortez1, M. C. Cuffy1, M. D. Goodman1, S. A. Shah1  1University Of Cincinnati,Department Of Surgery,Cincinnati, OH, USA

Introduction: Thromboelastography (TEG) has been shown to provide an accurate assessment of patients’ global coagulopathy and hemostatic function. While use of TEG has grown within the field of liver transplantation (LT), the relative importance of TEG values obtained at various stages of the operation and their association with outcomes remain unknown. Our goal was to assess the prevalence of TEG-based coagulopathy in patients undergoing LT, and determine whether preoperative TEG is predictive of transfusion requirements during LT.

Methods: An IRB approved, retrospective review of 380 consecutive LTs between January 2013 and May 2017 was performed. TEGs obtained during the preoperative, anhepatic, neohepatic, and initial postoperative phases were evaluated. Patients with incomplete data were excluded from the analysis, resulting in a study cohort of 110 patients. TEGs were categorized as hypocoagulable, hypercoagulable, or normal using a previously described composite measure of R time, k time, alpha angle, and maximum amplitude. Perioperative outcomes including transfusion requirements, need for temporary abdominal closure, and rates of reoperation for bleeding were evaluated.

Results: Of patients undergoing LT, 11.8% were hypocoagulable, 22.7% were hypercoagulable, and 65.5% were normal at the start of the operation. 46.4% of patients finished the operation in a different category of coagulation from which they started. Of patients starting LT hypocoagulable, 15.4% finished hypocoagulable, none finished hypercoagulable, and 84.6% finished normal. Patients with hypocoagulable preoperative TEGs were found to require more units of pRBC (12 vs. 6 vs. 6, p=0.04), FFP (24 vs. 13 vs. 8, p<0.01), cryoprecipitate (4 vs. 2 vs. 1, p<0.01), platelets (4 vs. 2 vs. 1, p <0.01), and cell saver (4.6 liters vs. 2.8 vs. 1.9, p<0.01) during LT compared to those with normal or hypercoagulable preoperative TEGs. Despite these higher transfusion requirements, there were no significant differences in rate of temporary abdominal closure, unplanned reoperation, ICU length of stay, or 30-day readmission (all p > 0.05) between patients with hypocoagulable, hypercoagulable, or normal preoperative TEGs.

Conclusion: Preoperative thromboelastography may be predictive of transfusion requirements during LT. By consistently evaluating the preoperative TEG, surgeons can identify patients who may be at higher risk for intraoperative coagulopathy and require increased perioperative resource utilization.
 

65.07 Evaluating Length of Stays with Electronic Medical Record Interoperability

M. Cheung1, P. Kuo1, A. Cobb1  1Loyola University Chicago Stritch School Of Medicine,Maywood, IL, USA

Introduction:

While the technology industry continually improves its software, the healthcare industry still lags far behind in adopting such advancements. Electronic medical record (EMR) interoperability is particularly important because patients often receive care from multiple sources. We hypothesized that transplant patients who received care at hospitals with high EMR interoperability scores would have shorter adjusted lengths of stay (aLOS).

Methods:

We utilized the 2013 HCUP State Inpatient Database (SID) for New York and Washington, and identified roughly 2000 patients who had received a heart, lung, pancreas, spleen, kidney, or bone marrow transplant. We created interoperability scores ranging from 0 to 44 by summing the answers to questions designated as pertaining to Health Information Exchange on the 2013 American Hospital Association Information Technology (AHAIT) survey. We calculated the aLOS by dividing the unadjusted LOS by Medicare Severity Diagnosis Related Group (MS-DRG)-based weights from the Centers for Medicare & Medicaid Services (CMS), and calculated geometric means of the aLOS in order to diminish the impact of outliers. We then correlated the calculated interoperability scores with the mean aLOS.
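As we read the described calculation, for patient i with MS-DRG weight w_DRG(i),

$$ \text{aLOS}_i = \frac{\text{LOS}_i}{w_{\text{DRG}(i)}}, \qquad \overline{\text{aLOS}}_{\text{geo}} = \exp\!\left( \frac{1}{n} \sum_{i=1}^{n} \ln \text{aLOS}_i \right), $$

with the geometric mean used in place of the arithmetic mean to limit the influence of long-stay outliers.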

Results:
We found that the mean aLOS for transplant patients decreased as the interoperability score increased, within 95% confidence intervals. Adjusted lengths of stay for patients receiving care at hospitals with the worst interoperability score of 12 were 3.33 times longer than at hospitals with the highest interoperability score of 32.5 (p<0.001).

Conclusion:

MS-DRG weights are calculated based on expected hospital cost and severity of the patient’s disease state. Therefore, aLOS serves as an indirect proxy for efficiency related to cost as well as efficiency related to time. Our findings, although not causal in nature, suggest that hospitals could save significant time and money by increasing their ability to exchange health information between different groups and facilities.

65.04 Impact of Donor Hepatic Arterial Anatomy on Clinical Graft Outcomes in Liver Transplantation

J. R. Schroering2, C. A. Kubal1, T. J. Hathaway2, R. S. Mangus1  1Indiana University School Of Medicine,Transplant Division/Department Of Surgery,Indianapolis, IN, USA 2Indiana University School Of Medicine,Indianapolis, IN, USA

Introduction:  The arterial anatomy of the liver has significant variability. When a liver graft is procured for transplant, the donor hepatic artery anatomy must be identified and preserved to avoid injury. Reconstruction of the hepatic artery is often required in cases of accessory or replaced vessels. This study reviews a large number of liver transplants and summarizes the arterial anatomy. Clinical outcomes include hepatic artery thrombosis (HAT), early graft loss, and long-term graft survival.

 

Methods:  All liver transplants at a single center over a 10-year period were reviewed. The arterial anatomy was determined from a combination of the organ procurement record and the liver transplant operative note. Anatomic variants and reconstructions were noted. For this cohort, all accessory/replaced right hepatic arteries were reconstructed to the gastroduodenal artery (GDA) with 7-0 prolene suture on the back table prior to implantation. All accessory/replaced left hepatic arteries were left intact from their origin at the left gastric and hepatic artery when possible, though occasional reconstruction to the GDA with 7-0 prolene suture was performed. Post-operative anticoagulation was not utilized routinely. Antifibrinolytic therapy was administered at initial incision in all cases using either aprotinin or epsilon aminocaproic acid. A single Doppler ultrasound (US) was obtained post-operatively in the critical care unit to confirm arterial and venous flow. No other imaging (intraoperative or post-operative) was obtained unless there was an indication.

Results: The records of 1145 patients were extracted. The median recipient age was 57 years, body mass index 28.4, and MELD 20. Retransplant procedures comprised 4% of the cohort. Hepatic arterial anatomy types included: normal (68%), accessory/replaced left (16%), accessory/replaced right (10%), accessory/replaced right and left (4%), and other variants (2%). There were 222 cases (19%) in which back table arterial reconstruction was required. The overall incidence of HAT was 1%. The highest rate of HAT was in liver grafts with accessory right and left hepatic arteries. The hepatic arterial resistive indices measured on post-operative Doppler US did not differ by hepatic artery anatomy. One-year survival for all grafts was above 90%, but livers with an accessory right hepatic artery (only) had lower survival at 10 years when compared with grafts with normal anatomy (62% versus 75%).

Conclusion: Standard arterial anatomy was present in 68% of livers, with accessory/replaced left (16%) and right (10%) arteries the next most common variants. All anatomic variants had good 1-year graft survival, though liver grafts with an accessory/replaced right hepatic artery had the lowest survival at 10 years.

65.05 Sarcopenia a Better Predictor of Survival than Serologic Markers in Elderly Liver Transplant Patients

W. J. Bush1, A. Cabrales1, H. Underwood1, R. S. Mangus1  1Indiana University School Of Medicine,Indianapolis, IN, USA

Introduction: An increasing number of liver transplant (LT) patients are geriatric (≥ 60 years).  Recent research suggests that measures of frailty, such as nutrition status, may be important predictors of surgical outcomes. This study evaluates the impact of objective measures of nutritional status on post-transplant perioperative and long term outcomes for geriatric liver transplant patients. 

Methods:  Inclusion criteria encompassed all geriatric liver transplant patients at a single center over a 16-year period. Measures of nutrition status included preoperative core muscle mass, perinephric and subcutaneous adipose volume, as well as standard serologic markers of nutritional status (albumin, total protein, and cholesterol). Total psoas muscle area and total perinephric and subcutaneous adipose volumes were measured from preoperative computed tomography (CT) scans at the L2/L3 disc space and scaled to patient height. Outcomes included length of hospital stay and patient survival.

Results: There were 564 patients included in the analysis, of whom 446 had preoperative CT scans available. There was poor correlation between serologic markers of nutrition and CT measures of tissue volume. Serologic markers of nutrition were poor predictors of survival, but abnormal values were associated with increased length of stay, prolonged ventilator requirement, and a non-home discharge. In contrast, patients with severe sarcopenia and poor subcutaneous and visceral adipose stores had worse long-term survival, but these findings correlated poorly with perioperative outcomes. Cox regression analysis demonstrated decreased long-term survival for patients with severe sarcopenia.

Conclusion: In this cohort of geriatric LT recipients, common serologic markers of nutrition were associated with perioperative clinical outcomes, while CT measures of muscle and adipose stores were more predictive of early and intermediate term survival outcomes. These results support the need for the further development of frailty measures that assess core tissue volume and physiologic strength.

 

65.03 The Role of FDG-PET in Detecting Rejection after Liver Transplantation

A. M. Watson1, C. M. Jones1, E. G. Davis1, M. Eng1, R. M. Cannon1, P. Philips1  1University Of Louisville,Department Of Surgery,Louisville, KY, USA

Introduction:
Acute cellular rejection (ACR) continues to be a major problem in solid organ transplantation. ACR is associated with activation of T-cells, which have increased glucose uptake and utilization. This physiologic activity could be exploited for the detection of ACR. This study was designed to evaluate the effectiveness of 18F-fluoro-2-deoxyglucose positron emission tomography (FDG-PET) in detecting acute rejection in the clinical setting.

Methods:
FDG-PET studies were performed on 88 orthotopic liver transplant patients (41 men, 47 women; mean age 51 +/- 6 years) at 7 and 17 days post-operatively (1st and 2nd PET, respectively). Additional studies were performed if there was suspicion of rejection and at resolution of rejection (3rd and 4th PET, respectively). The FDG-PET images were matched to those of 107 non-transplant patients (52 +/- 20 years), who served as controls. The controls underwent 2 FDG-PET studies during the same time intervals (1st and 2nd PET). A circular region of interest (ROI) was placed over the liver for semi-quantitative evaluation of the FDG-PET images by means of standardized uptake values (SUVs).
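The abstract does not give the SUV formula used; the standard body-weight-normalized definition is

$$ \text{SUV}_{\text{bw}} = \frac{C_{\text{ROI}}(t)}{\text{injected dose} / \text{body weight}}, $$

where C_ROI(t) is the decay-corrected activity concentration in the region of interest at imaging time t.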

Results:
There was no significant difference between the SUVs of the baseline FDG-PET studies (1st & 2nd PET) post-transplant and the SUVs obtained in non-transplanted patients. The mean SUV normalized for body weight in post-orthotopic liver transplant patients measured 1.93 +/- 0.5 (p = 0.122); the mean SUV for non-transplant patients was 2.10 +/- 0.6 (p = 0.210). Eighteen of 88 patients in our study (20.5%) had histologically proven ACR during a 30 +/- 11 day follow-up. There was no significant difference between the 1st PET SUV values of non-rejecters vs. rejecters (mean 2.05, SD 0.46; median 2.19, IQR 1.75-2.34 vs. mean 1.82, SD 0.40; median 1.77, IQR 1.76-2.13; p=0.127). Within the rejection cohort, the SUVs from the 3rd PET (rejection) were higher compared to the 1st PET (baseline). The mean SUV of the 3rd PET measured 2.41 (SD 0.48; median 2.5, IQR 2.14-2.74) compared to the baseline 1st PET mean SUV of 1.82 (SD 0.41; median 1.77, IQR 1.76-2.13), and this difference was statistically significant (p<0.001).

Conclusion:
To date, the role of FDG-PET in the diagnosis of ACR has not been well established. Semi-quantitative analysis using SUVs showed a statistically significant increase between baseline and rejection FDG-PET studies. Additional prospective validation studies are needed to define the role of FDG-PET as an early marker for acute cellular rejection.
 

65.01 Thrombolysis during Liver Procurement Prevents Ischemic Cholangiopathy in DCD Liver Transplantation

A. E. Cabrales1, R. S. Mangus1, J. A. Fridell1, C. A. Kubal1  1Indiana University School Of Medicine,Department Of Transplant Surgery,Indianapolis, IN, USA

Introduction: The rate of donation after circulatory death (DCD) liver transplantation has decreased in recent years as a result of inferior graft and patient survival when compared to donation after brain death (DBD) transplantation. Ischemic cholangiopathy (IC) is the primary cause of these inferior outcomes and is associated with a high rate of graft loss, retransplantation and recipient mortality. Development of IC in liver transplant recipients appears to be associated with peribiliary arterial plexus microthrombi formation that can occur in DCD donors. Our center has demonstrated success using tissue plasminogen activator (tPA) flush during DCD organ procurement to prevent the formation of microthrombi and prevent IC. This study investigates the long term impact of tPA flush on graft outcomes and program use of DCD organs.

Methods: All records for liver transplants over a 15-year period at a single center were reviewed and data extracted. DCD organ procurement followed carefully established protocols, including a 5-minute wait time after determination of cardiac death prior to initiation of the procurement procedure. The procurement consisted of rapid cannulation of the aorta, clamping of the aorta, and decompression through the vena cava. Preservation consisted of an initial flush with histidine-tryptophan-ketoglutarate solution (HTK), followed by infusion of tPA in 1 L NS, then further flush with HTK until the effluent was clear. Total flush volume was less than 5 L.

Results: There were 57 tPA procurements (48%) and 62 non-tPA procurements (52%). Patients receiving the tPA grafts were older and had a higher MELD score. The tPA grafts had less cold and warm ischemia time. Grafts procured using tPA trended toward better survival at 7 and 90 days (p=0.09, p=0.06) and had significantly better survival at 1 year (95% versus 79%, p=0.01). Cox regression showed significantly better long-term survival for tPA grafts (88% versus 45% at 10 years; p<0.01). Improved outcomes with thrombolytic therapy in DCD liver procurement expanded the use of DCD grafts at our center to include extended criteria donors who were older, heavier, and from out of state. Use of these higher-risk DCD donors did not change clinical outcomes.

Conclusion: Our center has shown that optimization of perioperative conditions, including use of an intraoperative thrombolytic flush, significantly lowers the incidence of IC in DCD liver grafts. With improved outcomes, the percentage of DCD grafts at our center has increased, including the use of extended criteria DCD livers, without a worsening of outcomes.

65.02 The Effect of Socioeconomic Status on Patient-Reported Outcomes after Renal Transplantation

A. J. Cole1, P. K. Baliga1, D. J. Taber1  1Medical University Of South Carolina,Charleston, SC, USA

Introduction:  Research analyzing the effect socioeconomic status (SES) has on renal transplant outcomes has demonstrated conflicting results. However, recent studies demonstrate that certain patient-reported outcomes (PROs), such as depression, medication non-adherence, health literacy, social support, and self-efficacy can influence clinical outcomes in renal transplant recipients. Our objectives were to examine the effect SES has on PROs, and determine if there is an association between SES, patient-reported outcomes, and healthcare utilization.

Methods:  Post-hoc analysis of 52 patients enrolled in an ongoing prospective trial aimed at improving cardiovascular disease risk factor control in renal transplant recipients. As part of the study, at baseline, patients completed detailed surveys assessing SES and PROs.  Patients were divided into low and high SES cohorts based on income, education, marital status, insurance, and employment. All patients were given 12 self-reported surveys in the domains of medication-related issues, self-care and knowledge, psychosocial issues, and healthcare. Analyses included the associations between 12 PRO surveys, SES measures, and healthcare utilization, including the rate of hospitalizations, ED visits and clinic visits that occurred between the date of transplant and enrollment in the trial.

Results: The low SES cohort (n=16, 30.8%) experienced more severe depression (5.75 vs 3.0, p=0.022), higher rates of inadequate health literacy (3.42 vs 1.68, p=0.022) and perceived stress (2.743 vs 3.266, p=0.027), along with significantly less self-efficacy (6.971 vs 8.214, p=0.006) and social support (3.86 vs 4.408, p=0.012; see Figure 1). Low SES was associated with a 60% higher rate of hospitalization and 90% higher rate of ED visits per patient-year. Medication non-adherence was also associated with more hospitalizations and ED visits.

Conclusion: This analysis demonstrates that low SES was significantly associated with negative PROs, including depression, health literacy, social support, stress, and self-efficacy. Further, low SES and medication non-adherence were associated with higher rates of healthcare utilization.

 

64.09 Role of the Patient-Provider Relationship in Hepato-Pancreato-Biliary Diseases

E. J. Cerier1, Q. Chen1, E. Beal1, A. Paredes1, S. Sun1, G. Olsen1, J. Cloyd1, M. Dillhoff1, C. Schmidt1, T. Pawlik1  1Ohio State University,Department Of Surgery,Columbus, OH, USA

Introduction: An optimal patient-provider relationship (PPR) may improve medication and appointment adherence and healthcare resource utilization, as well as reduce healthcare costs. The objective of the current study was to define the impact of PPR on healthcare outcomes among a cohort of patients with hepato-pancreato-biliary (HPB) diseases.

Methods: Utilizing the Medical Expenditure Panel Survey Database from 2008-2014, patients with an HPB disease diagnosis were identified. PPR was determined using a weighted score based on survey items from the Consumer Assessment of Healthcare Providers and Systems (CAHPS). Specifically, patient responses to questions concerning access to healthcare providers, responsiveness of healthcare providers, patient-provider communication, and shared decision-making were obtained. Patient provider communication was stratified into three categories using a composite score that ranged from 4 to 12 (score 4-7: "poor," 8-11: "average," and 12 "optimal"). The relationship between PPR and health care outcomes was analyzed using regression analyses and generalized linear modeling.

Results: Among 594 adult patients, representing 6 million HPB patients, reported PPR was "optimal" (n=210, 35.4%), "average" (n=270, 45.5%), and "poor" (n=114, 19.2%). Uninsured (uninsured: 36.3% vs. Medicaid: 28.8% vs. Medicare: 15.4% vs. private: 14.0%; p=0.03) and poor-income (high: 14.0% vs. middle: 12.8% vs. low: 21.5% vs. poor: 24.3%; p=0.03) patients were more likely to report "poor" PPR. In contrast, other factors such as race, sex, education, and age were not associated with PPR. In addition, there was no association between PPR and overall annual healthcare expenditures ("poor" PPR: $19,405, CI $15,207-23,602 vs. "average" PPR: $20,148, CI $15,538-24,714 vs. "optimal" PPR: $19,064, CI $15,344-22,784; p=0.89) or out-of-pocket expenditures ("poor" PPR: $1,341, CI $618-2,065 vs. "average" PPR: $1,374, CI $1,079-1,668 vs. "optimal" PPR: $1,475, CI $1,150-1,800; p=0.77). Patients who reported "poor" PPR were also more likely to self-report poor mental health scores (OR 5.0, CI 1.3-16.7) and to have high emergency room utilization (≥2 visits: OR 2.4, CI 1.2-5.0) (both p<0.05). Patients with reported "poor" PPR did not, however, have worse physical health scores or more previous inpatient hospital stays (both p>0.05) (Figure).

Conclusion: Patient self-reported PPR was associated with insurance and socioeconomic status.  In addition, patients with perceived "poor" PPR were more likely to have poor mental health and be high utilizers of the emergency room.  Efforts to improve PPR should focus on these high-risk populations.

64.10 Preoperative Frailty Assessment Predicts Short-Term Outcomes After Hepatopancreatobiliary Surgery

P. Bou-Samra1, D. Van Der Windt1, P. Varley1, X. Chen1, A. Tsung1  1University Of Pittsburgh,Hepatobiliary & Pancreatic Surgery,Pittsburgh, PA, USA

Introduction: Given the aging of our population, increasing numbers of elderly patients are evaluated for surgery. Preoperative assessment of frailty, defined as a lack of physiological reserve, is a novel concept that has recently gained interest as a predictor of postoperative complications. The comprehensive Risk Analysis Index (RAI) for frailty has been shown to predict mortality in a large cohort of surgical patients. The RAI is now measured in all patients presenting to surgical clinics at our institution. Initial analysis showed that patients with hepatopancreatobiliary disease have the highest frailty scores, second only to patients presenting for cardiovascular surgery. Therefore, the aim of this study was to specifically evaluate the performance of the RAI in predicting short-term post-operative outcomes in patients undergoing hepatopancreatobiliary surgery, a significantly frail patient population.

Methods: From June-December 2016, the RAI was determined in 162 patients prior to surgery. The RAI comprises 12 variables evaluating, for example, age, kidney disease, congestive heart failure, cognitive functioning, independence in daily activities, and weight loss. Data on 30-day post-operative outcomes were prospectively collected. Complications were scored according to the Clavien-Dindo classification and summarized with the Comprehensive Complication Index (CCI). Other assessed post-operative outcomes included ICU admission, length of stay, and readmission rates. Logistic and linear regressions were performed to assess the association between RAI score and each measured outcome. A multivariate analysis was performed to control for the magnitude of the operation, coronary artery disease, cancer stage, and intraoperative blood loss.

Results: Our cohort of 162 patients (79 M, 83 F; median age 67, range 19-95) included 55 patients undergoing a minor operation, 56 an intermediate operation, and 51 major surgery. RAI scores ranged from 0 to 25, with a median of 7. With every unit increase in RAI score, length of stay increased by 5% (IRR 1.05; 95%CI 1.04-1.07, p<0.01), the odds of discharge to a specialized facility increased by 10% (OR 1.10; 95%CI 1.02-1.17, p<0.01), the odds of admission to the ICU increased by 11% (OR 1.11; 95%CI 1.02-1.20, p=0.01), the expected ICU length of stay increased by 17% (IRR 1.17; 95%CI 1.06-1.30), the odds of readmission increased by 8% (OR 1.08; 95%CI 0.99-1.17, p=0.054), and the CCI increased by 1.6 units (coefficient 1.60; 95%CI 0.61-2.58, p<0.01). In multivariate analysis, frailty remained positively associated with CCI (p=0.01).
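Because the fitted model is multiplicative, the per-unit IRR compounds across larger score differences; as a hypothetical contrast (not a result reported in the abstract), a patient with an RAI of 17 versus the cohort median of 7 would have an expected length of stay of roughly

$$ 1.05^{10} \approx 1.63 $$

times that of the median patient, i.e., about 63% longer.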

Conclusion: The RAI score is predictive of short-term post-operative outcomes after hepatopancreatobiliary surgery. Pre-operative risk assessment with RAI could aid in decision-making for treatment allocation to surgery versus less morbid locoregional treatment options in frail patients. 

 

64.07 Cost-Effectiveness of Rescuing Patients from Major Complications after Hepatectomy

J. J. Idrees1, C. Schmidt1, M. Dillhoff1, J. Cloyd1, E. Ellison1, T. M. Pawlik1  1The Ohio State University, Wexner Medical Center,Department Of Surgery,Columbus, OH, USA

Introduction:  Major complications after liver resection increase costs and are associated with higher mortality. Failure to rescue (FTR) has been inversely correlated with hospital volume. We sought to determine whether high or medium volume centers were more cost-effective than low volume centers at rescuing patients from major complications following hepatic resection.

Methods:  The Nationwide Inpatient Sample (NIS) was used to identify 96,107 liver resections that occurred between 2011-2011. Hospitals were categorized into high (HV; 150+ cases/year), medium (MV; 51-149 cases/year), and low (LV; 1-49 cases/year) volume centers. Cost-effectiveness analyses were performed using propensity score matched cohorts adjusted for patient comorbidities for HV vs. LV (8,924 pairs) and MV vs. LV (18,158 pairs) centers. The incremental cost-effectiveness ratio (ICER) was calculated to assess the cost-effectiveness of HV and MV centers relative to LV centers, at a willingness-to-pay threshold of $50,000. Sensitivity analyses were performed using the bootstrap method with 10,000 replications.
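For reference, the ICER comparing, for example, HV with LV centers is the ratio of the difference in mean cost to the difference in mean effectiveness (here, years of life):

$$ \text{ICER}_{\text{HV vs LV}} = \frac{C_{\text{HV}} - C_{\text{LV}}}{E_{\text{HV}} - E_{\text{LV}}}, $$

and, as used in the Results below, a negative ICER reflects a strategy that is both less costly and more effective (dominant).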

Results: The overall incidence of complications following hepatectomy was 14.9% (n=14,313) and was roughly comparable among centers regardless of volume (HV 14.2% vs. MV 14.3% vs. LV 15.4%; p<0.001). In contrast, while overall FTR was 11.2%, the FTR rate was substantially lower among HV centers (HV: 7.7%, MV: 11.2%, LV: 12.3%; p<0.001). Both HV and MV centers were more cost-effective at rescuing patients from a major complication relative to LV centers. Specifically, the incremental cost per year of life gained was $3,296 at HV versus $4,182 at MV centers compared with LV hospitals. HV centers were particularly cost-effective at managing certain complications. For example, compared with LV centers, HV hospitals had lower costs with a higher survival benefit in managing bile duct complications (ICER: -$1,580) and sepsis (ICER: -$2,760).

Conclusion: Morbidity following liver resection was relatively common as 1 in 7 patients experienced a complication. Not only was FTR lower at HV hospitals, but the management of most major complications was also more cost-effective at HV centers. 
 

64.08 Cost Burden of Overtreating Low Grade Pancreatic Cystic Neoplasms

J. M. Sharib1, K. Wimmer1, A. L. Fonseca3, S. Hatcher1, L. Esserman1, A. Maitra2, Y. Shen4, E. Ozanne5, K. S. Kirkwood1  1University Of California – San Francisco,Surgery,San Francisco, CA, USA 2University Of Texas MD Anderson Cancer Center,Pathology,Houston, TX, USA 3University Of Texas MD Anderson Cancer Center,Surgery,Houston, TX, USA 4University Of Texas MD Anderson Cancer Center,Biostatistics,Houston, TX, USA 5University Of Utah,Population Health Sciences,Salt Lake City, UT, USA

Introduction: Consensus guidelines recommend resection of intraductal papillary mucinous neoplasms (IPMN) with high-risk stigmata and laborious surveillance for cysts with worrisome features. In practice, resections are performed at higher rates due to fear of malignancy. As a result, many cysts harboring no or low grade dysplasia (LGD) are removed unnecessarily, with undue risk to patients. This study compares the costs and effectiveness of practice patterns at UCSF and MD Anderson with alternative management strategies for pancreatic cysts. We also estimate the potential cost savings that would be realized if improved diagnostic accuracy prevented resection of LGD.

Methods: We developed a decision analytic model to compare the costs and effectiveness of three treatment strategies for a newly diagnosed pancreatic cyst: 1) Immediate surgery, 2) Do nothing, and 3) “Surveillance” based on consensus guidelines. Model estimates were derived from published literature and retrospective data for pancreatic cyst resections at UCSF and MD Anderson from 2005-2016. Costs and effectiveness (quality adjusted life years, QALYs) were predicted and used to calculate incremental cost effectiveness ratios (ICERs). To estimate the cost burden of resecting LGD, the “Surveillance” strategy was adjusted to remove the possibility of resecting LGD (“Precision Surveillance”), and these costs were compared with the original model.

Results: The “Immediate surgery” strategy was the costliest and most effective, while the “Do nothing” strategy was the least costly and least effective (Fig 1a). The “Surveillance” strategy was preferred; however, it increased costs by $129,372 per quality-adjusted life year gained (ICER) compared to “Do nothing”, above the commonly accepted $100,000/QALY willingness-to-pay threshold. When resection of LGD was eliminated, the cost of “Precision Surveillance” decreased by $21,295, while effectiveness increased by 0.6 QALY, making it the preferred strategy (Fig 1b). The resulting incremental cost discount of “Precision Surveillance” was $35,905 per QALY compared to “Surveillance” with current diagnostic accuracy. This cost reduction brought the “Precision Surveillance” strategy below the $100,000/QALY threshold compared to the “Do nothing” strategy.

Conclusion: Surveillance under current consensus guidelines for IPMN is the preferred strategy compared to the ”Immediate surgery” and “Do nothing” strategies. Our present inability to distinguish LGD from high grade/invasive lesions adds significant costs to the treatment of IPMN. Improved diagnostics that accurately grade cystic pancreatic neoplasms and empower clinicians to reduce the resection of LGD would decrease overall costs and improve effectiveness of surveillance.

64.05 Prognostic Value of Hepatocellular Carcinoma Staging Systems: A Comparison

S. Bergstresser2, P. Li2, K. Vines2, B. Comeaux1, D. DuBay3, S. Gray1,2, D. Eckhoff1,2, J. White1,2  1University Of Alabama at Birmingham,Department Of Surgery, Division Of Transplantation,Birmingham, Alabama, USA 2University Of Alabama at Birmingham,School Of Medicine,Birmingham, Alabama, USA 3Medical University Of South Carolina,Department Of Surgery, Division Of Transplantation,Charleston, SC, USA

Introduction:  Hepatocellular carcinoma (HCC) is the third most common cause of cancer-related death worldwide. As the incidence of HCC continues to trend upward, it is imperative to have validated staging systems to guide clinicians when choosing treatment options. Seven HCC staging systems have been validated to varying degrees; however, there is currently inadequate evidence in the literature regarding which system is the best predictor of survival. The purpose of this investigation was to determine predictors of survival and to compare the 7 staging systems in their ability to predict survival in a cohort of patients diagnosed with HCC.

Methods:  This is a prospectively controlled chart review study of 782 patients diagnosed with HCC between January 2007 and April 2015 at a large, single-center hospital. Lab values, patient demographics, and tumor characteristics were used to stage patients and to calculate Model for End-Stage Liver Disease (MELD) and Child-Pugh scores. The Kaplan-Meier method and log-rank test were used to identify risk factors for overall survival. A Cox regression model was used to calculate the linear trend χ2 and likelihood ratio χ2 to determine the linear trend and homogeneity of the staging systems, respectively.
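For context (general definitions, not specific to the authors' implementation): the likelihood ratio χ2 compares the fit of the Cox model containing a staging system with the null model, and the linear trend χ2 is the corresponding 1-degree-of-freedom test obtained when the stage is entered as a single ordinal covariate; larger values indicate better homogeneity and a stronger monotonic survival gradient, respectively:

$$ \chi^2_{\text{LR}} = -2\left( \ln L_{\text{null}} - \ln L_{\text{staging}} \right). $$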

Results: Univariate analyses suggested that tumor number (P < .0001), diameter of the largest lesion (P < .0001), tumor occupying > 50% of liver mass (P < .0001), tumor major vessel involvement (P = .0025), alpha-fetoprotein (AFP) level 21-200 vs > 200 (P < .0001), and Child-Pugh score (P < .0001) were significant predictors of overall survival, while portal hypertension (P = .520) and pre-intervention bilirubin (P = .0904) were not. In all patients, the Cancer of the Liver Italian Program (CLIP) system provided the largest linear trend χ2 and likelihood ratio χ2 in the Cox model when compared with the other staging systems, indicating the best predictive power for survival.

Conclusion: Based on our statistical analysis, Child-Pugh score, tumor size, tumor number, presence of vascular invasion, and AFP level play a significant role in determining survival. In all patients and in patients receiving treatment other than transplantation (ablation, chemoembolization), CLIP appears to be the best predictor of survival. The CLIP staging system takes into account Child-Pugh score, tumor morphology, AFP level, and portal vein thrombosis, which may explain its strong ability to predict survival.