7.05 Statin Use Does Not Decrease Disease Severity or Mortality among Patients with C. difficile Infection

A. S. Kulaylat1, J. S. Kim1, C. S. Hollenbeak1, D. B. Stewart1  1Penn State University College Of Medicine, Surgery, Hershey, PA, USA

Introduction: Clostridium difficile infection (CDI) is more commonly encountered among older, comorbid patients who frequently require the use of statins for hyperlipidemia. Recent observational data have suggested that statins have pleiotropic effects which may decrease spore germination, thus decreasing the risk of developing CDI. There have been no studies, however, evaluating whether statins affect outcomes in patients who already have CDI. The aim of this study was to evaluate whether the use of statins among inpatients with CDI was associated with measurable decreases in mortality and severity of CDI.

Methods: All patients admitted to a single tertiary referral center with an admission diagnosis of CDI (2005 to 2015) were identified, limiting the study cohort to subjects with a positive C. difficile stool test within 24 hours of admission. Hospital records were examined to identify use of statins at the time of hospital admission. The primary study outcome was inpatient mortality; secondary outcomes included admission for recurrent CDI within 60 days, the need for admission to a monitored care setting, the need for vasopressors, and the need for an emergent total abdominal colectomy. Multivariable logistic regression was used to control for underlying comorbidities and disease-related factors to isolate associations between statin usage and study outcomes.
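
For illustration, the adjusted analysis described here can be sketched in Python with statsmodels. This is a minimal, hypothetical sketch: the file and column names (cdi_cohort.csv, died, statin, age, charlson, on_ppi) are stand-ins, not the study's actual variables.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical analytic dataset: one row per CDI admission.
    df = pd.read_csv("cdi_cohort.csv")

    # Multivariable logistic regression for inpatient mortality, adjusting for
    # stand-in comorbidity and disease-severity covariates.
    fit = smf.logit("died ~ statin + age + charlson + on_ppi", data=df).fit()

    # Exponentiate coefficients and confidence bounds to report adjusted odds
    # ratios in the OR (95% CI) format used in the Results.
    or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
    or_table.columns = ["OR", "2.5%", "97.5%"]
    print(or_table)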

Results: A total of 957 patients meeting inclusion criteria were identified. Of these, 318 (33.2%) were receiving statin therapy at the time of hospital admission. After controlling for underlying patient and disease-related factors, statin therapy was not associated with differences in inpatient mortality (odds ratio [OR] 0.90, 95% confidence interval [CI] 0.43 to 1.86), the need for admission to a monitored setting (OR 1.07, 95% CI 0.74 to 1.54), the need for vasopressors (OR 0.92, 95% CI 0.52 to 1.62) or the need for total colectomy (OR 0.51, 95% CI 0.17 to 1.53). Furthermore, statin use was not found to be a significant risk factor for admission for recurrent disease (OR 2.13, 95% CI 0.91 to 5.03). Proton pump inhibitor (PPI) therapy was observed in 447 (46.7%) study patients, and controlling for the use of PPI therapy did not reveal an association between statin use and study outcomes.

Conclusion: While prior reports suggest that statin therapy reduces the risk of developing CDI, the current study suggests that statin pleiotropy does not influence the severity or mortality of established disease.


67.10 A Comparison of Index and Redo Operations in Crohn's Patients Following Bowel Surgery.

B. Sherman2, A. Harzman1, A. Traugott1, S. Husain1  1Ohio State University, Division Of Colon And Rectal Surgery, Columbus, OH, USA 2Ohio Health, Doctor’s Hospital, Columbus, OH, USA

Introduction: Due to the chronic, recurrent nature of Crohn’s disease, many patients undergo repeat operations. These “redo” surgeries can be technically difficult due to the presence of adhesive disease and inflammatory/fibrotic changes. Thus, subsequent operative interventions are commonly perceived to be fraught with worse outcomes. While there is a plethora of literature on outcomes after index operations for Crohn’s disease, there is a scarcity of articles describing outcomes of redo operations and how they compare with index operations. An in-depth knowledge of these variables is critical for managing patient expectations and for optimal perioperative planning from the surgeon’s perspective.

Methods: All Crohn’s patients undergoing surgery with the two participating surgeons over a period of six years were included. A retrospective chart review was conducted, including patient demographics, comorbidities, postoperative complications, operative time, length of stay, and estimated blood loss (EBL). Index versus redo operations were compared using the t-test for continuous variables and Fisher's exact test for categorical variables.
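
As a rough illustration of these univariate tests in Python with SciPy (the length-of-stay arrays are placeholders, while the 2x2 complication table uses the group counts reported in the Results below):

    from scipy import stats

    # Continuous variable (e.g., length of stay in days): two-sample t-test.
    los_index = [5, 7, 6, 9, 8]     # placeholder values, not study data
    los_redo = [9, 12, 10, 14, 11]  # placeholder values, not study data
    t_stat, p_los = stats.ttest_ind(los_index, los_redo)

    # Categorical variable: Fisher's exact test on a 2x2 table of
    # [complication, no complication] counts for index (18/66) vs redo (17/52).
    table = [[18, 48], [17, 35]]
    odds_ratio, p_comp = stats.fisher_exact(table)
    print(p_los, p_comp)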

Results: We identified a total of 118 patients during the approved study period. Of these, 66 (55%) underwent an index operation and 52 (45%) underwent a redo operation. The overall complication rate was 29.66% (n=35), mean operative time was 220 minutes, average length of hospital stay was 8.36 days, and mean EBL was 189.62 ml. There was no statistically significant difference between index and redo operations in terms of complication rates (27.27% vs 32.69%, p=0.55), EBL (211 vs 231 ml, p=0.85), or operative time (211 vs 231 min, p=0.28). However, the difference in length of stay between index operations (mean=6.79 days) and redo surgeries (mean=10.93 days) was statistically significant (p=0.0005). A laparoscopic approach was utilized at a significantly higher rate for index operations (61/66, 92.42%) compared to redo operations (35/52, 67.30%, p=0.007). Conversion rates were much higher for redo operations (3/35, 8.57%) than index operations (3/61, 4.91%). Use of the laparoscopic approach narrowed the gap in length of stay between the index and redo groups from 4.14 days for the entire cohort to 1.8 days in patients who were treated laparoscopically.

Conclusion: Contrary to common perception, repeat operations in Crohn’s disease have outcomes similar to index operations; however, redo surgeries are associated with a much longer length of stay than initial surgeries. Utilization of the laparoscopic technique reduces the gap in length of stay between index and redo operations, although laparoscopy is associated with a higher conversion rate in redo operations.

66.10 Does Path to Ileo-Pouch Anal Anastomosis in the Treatment of Pediatric Ulcerative Colitis Matter?

N. Bismar1, A. S. Patel1,2, D. Schindel1,2  2Children’s Medical Center, Pediatric Surgery, Dallas, TX, USA 1University Of Texas Southwestern Medical Center, Pediatric Surgery, Dallas, TX, USA

Introduction:
To compare the outcomes of pediatric medically-refractory ulcerative colitis treated by a traditional (TIPAA) surgical approach (1st stage: laparoscopic proctocolectomy/ileo-pouch anal anastomosis with a protective loop ileostomy; 2nd stage: stoma closure) versus a nontraditional (NIPAA) approach (1st stage: laparoscopic colectomy/ileostomy; 2nd stage: completion proctectomy/ileo-pouch anal anastomosis without a stoma).

Methods:

After IRB approval, a review of patients who underwent ileo-pouch anal anastomosis at a children’s hospital from 2002-2017 was performed. Patient demographics, diagnosis at time of surgery, type of surgery (TIPAA vs NIPAA), time to full diet, level of continence, use of anti-diarrhea medications, and complications were recorded. Statistical analysis was performed using GraphPad® software (San Diego, CA).

Results:

Forty-one children were identified (NIPAA: n=14; TIPAA: n=27). Following re-establishment of bowel continuity, there were no significant differences in time to appetite recovery, continence, or incidence of complications between the TIPAA and NIPAA groups. The number of anti-diarrhea medications prescribed was significantly higher following TIPAA than NIPAA (p=0.01). Nine patients (22%) required dilatation of an ileoanal anastomotic stricture, three following NIPAA and six following TIPAA (p=NS). In addition to strictures, the most common complications observed were pouchitis and small bowel obstruction. Thirteen patients (31.7%) were treated for pouchitis: four following NIPAA and nine following TIPAA (p=NS). Of the 41 patients, 11 required additional surgical interventions (lysis of adhesions; stoma revisions); two (18.2%) had received a NIPAA approach and nine (81.8%) a TIPAA. Two children who had undergone TIPAA elected placement of a diverting ileostomy because of chronic pain and failure to achieve full continence.

Conclusions:

This study suggests that children with medically-refractory UC treated by either NIPAA or TIPAA have similar outcomes. Minimal differences in overall outcome following either approach are noted. However, performing an ileo-pouch anastomosis as a second-stage procedure without a stoma may be associated with reduced reliance on anti-diarrhea medications once intestinal continuity is restored.

66.07 Prenatal Intervention Improves Initial Outcomes in Postnatal Management of Congenital Chylothorax

B. Carr1, L. Sampang1, J. Church1, R. Mon1, S. K. Gadepalli1, M. Attar1, E. E. Perrone1  1University Of Michigan, Ann Arbor, MI, USA

Introduction:
Congenital chylothorax, an accumulation of lymphatic fluid in the pleural space arising before birth, is a poorly understood phenomenon that can have devastating consequences for neonates. We sought to determine the outcomes of neonates treated at our institution with both operative and nonoperative measures. We also sought to determine the role of prenatal intervention in improving outcomes.

Methods:
Under an IRB-approved protocol (HUM00093133), patients treated at our institution from 09/2006 to 04/2016 for congenital chylothorax were reviewed. Demographic information, intervention history, and perinatal outcomes were collected. All statistical analyses were performed using STATA v14 (StataCorp LLC, College Station, TX), and p<0.05 was considered significant.

Results:

A total of 26 patients were identified and 12 (46%) were female. Average gestational age at birth was 34.1±2.7 weeks, and average birth weight was 2815±614 grams. Overall mortality was 23% (6 patients).  Twenty-two patients (85%) were prenatally diagnosed, and 14 patients (64%) underwent prenatal intervention for congenital chylothorax.  In the intervention group, median gestational age was 33.5 weeks [IQR 31-35], with median birth weight 2479 grams [IQR 2170-3025].  In the no intervention group, median gestational age was 32 weeks [IQR 32.5-36] with median birth weight 2975 grams [IQR 2575-3383].

In the intervention group, median Apgar score at 1 minute was 4 [IQR 3-7], compared to 2 [IQR 1-2] in the no intervention group (p=0.006).  Median Apgar score at 5 minutes in the intervention group was 8 [IQR 6-8], compared to 3.5 [IQR 2.5-5.5] in the no intervention group (p=0.003).   All patients in the no intervention group required chest tubes, while 3 patients (21%) in the intervention group avoided chest tube placement. Mortality in the intervention group was 3 patients (21%), compared to 4 patients (50%) in the no intervention group (p=0.34).  Of those patients who survived, median length of stay was 34 days [IQR 16-69] in the intervention group and 58.5 days [IQR 27-92] in the no intervention group (p=0.55), while median ventilator days were 3 [IQR 0-40] in the intervention group and 30.5 [IQR 10-58] in the no intervention group (p=0.47). There was no significant difference in need for chest compressions, need for surgery, or frequency of infections or pneumothoraces.

Conclusion:
Prenatal intervention for congenital chylothorax is associated with improved Apgar scores during resuscitation, but does not definitively improve overall outcomes. Our data suggest that clinicians must weigh the risks of prenatal intervention against the risks of a difficult resuscitation in the first minutes of life.

66.08 Quality of Life Outcomes in Hepatoblastoma: Conventional Resection versus Liver Transplantation

M. R. Threlkeld1, N. Apelt1, N. Kremer1, S. F. Polites1, M. Troutt1, J. Geller1, K. Burns1, A. Pai1, R. Nagarajan1, A. J. Bondoc1, G. M. Tiao1  1Cincinnati Children’s Hospital Medical Center, Pediatric Surgery, Cincinnati, OH, USA

Introduction: Liver transplantation and complex surgical resection of advanced stage hepatoblastoma have equivalent five-year survival. Both modalities are associated with treatments and complications that impact quality of life. The differences in quality of life outcomes between these two treatments have not been studied and could aid in guiding future therapeutic decisions. We sought to compare the quality of life outcomes for long-term survivors who underwent transplantation compared to surgical resection for PRETEXT III or IV hepatoblastoma.

Methods: Following approval from our institutional review board, patients with PRETEXT III and IV hepatoblastoma who underwent surgical therapy from 2000-2013 and survived 2 or more years post-operatively were identified. Patients were grouped into transplant or resection by intention to treat based on the primary operation. Informed consent was obtained from patients over 18 years and parents of the surviving patients younger than 18 years. Assent was obtained from children over 8 years old. Age and treatment appropriate Pediatric Quality of Life Inventory 4.0 (PedsQL™) modules were mailed to consenting participants. Parent reports were used for patients 4 years or younger. Both parents and patients reported for patients over 4 years old. Scoring was performed according to the PedsQL module manual.

Results: We identified 70 patients who met inclusion criteria; 29 (41.4%) were deceased or lost to follow-up. Of the 41 remaining patients approached, 33 (80.5%) agreed to participate. At least one questionnaire was completed by 28 (84.8%) patients and/or parents: 12 of 14 in the resection group and 16 of 19 in the transplant group. There was no statistical difference in age at diagnosis, gender, or response rate between groups. The transplant group was older than the resection group, with a median age of 10.2 years (IQR 9.3-13.9) versus 6.4 years (IQR 5.1-10.5) (p=0.03). We found no statistical difference in the scores between transplant and resection or between patient and parent reports where applicable (Table).

Conclusion: We found no difference in long-term quality of life outcome scores between patients primarily treated with liver transplant and those treated with radical surgical resection. The therapeutic decision to perform transplant as a primary surgery should not be influenced by concern for decreased quality of life, but rather by the predictability of an R0 resection.


66.05 Curettage and Cautery as an Alternative to Primary Closure for Pediatric Gastrocutaneous Fistulae

N. Denning4, I. A. El-shafy2, J. G. Hagen3, S. Stylianos5, J. M. Prince1,4, A. M. Lipskar1,4  1Cohen Children’s Medical Center, Northwell Health System, Department Of Pediatric Surgery, New Hyde Park, NY, USA 2Maimonides Medical Center, Department Of Surgery, Brooklyn, NY, USA 3Cohen Children’s Medical Center, Northwell Health System, Department Of Anesthesia, New Hyde Park, NY, USA 4Hofstra University School Of Medicine-Northwell Health System, Department Of Surgery, Manhasset, NY, USA 5Morgan Stanley Children’s Hospital Of New York-Presbyterian, Columbia University Medical Center, Division Of Pediatric Surgery, New York, NY, USA

Introduction: The development of a gastrocutaneous fistula (GCF) after gastrostomy tube removal is a frequent complication, occurring 5-45% of the time. Conservative therapy with chemical cauterization is frequently unsuccessful, and surgical GCF repair with open primary layered closure of the gastrotomy is often required. We describe an alternative approach to GCF closure: an outpatient, less invasive procedure that allows patients to avoid inpatient hospitalization and intra-abdominal surgery.

Methods: This is an IRB-approved retrospective review of all patients who underwent GCF closure from 1/2010 to 7/2016 at a tertiary care children’s hospital. Demographics including age, weight, BMI, comorbidities, and initial indication for gastrostomy tube were recorded. Operative details such as ASA score, operative duration, and type of anesthetic and airway were noted. Based on surgeon preference, two types of operative closure were used during that timeframe: primary layered closure or curettage and cautery (C&C). The latter is a procedure in which the fistula tract is first scraped with a fine curette and then the fistula opening and tract are cauterized circumferentially. Finally, the presence of a persistent fistula and the need for formal reoperation were determined.

Results: Sixty-five unique patients requiring GCF closure were identified. Of those, 44 patients (67.7%) underwent primary closure and 21 patients (32.3%) underwent C&C. The success rate of primary closure was 97%, with one patient experiencing wound breakdown with persistent fistula. The overall success rate of C&C was 66.7% (14/21). Eleven GCF (52.4%) were closed by one month, and an additional two were closed by four months (61.9% cumulative). One GCF was successfully closed with a second C&C procedure. Seven of 21 patients (33.3%) required subsequent formal layered surgical closure. C&C had significantly shorter operative times (13.5±14.7 min vs 93.4±61.8 min, p<0.0001) and significantly shorter times in the post-anesthesia care unit (101.8±42.4 min vs 147±86 min, p<0.0001). Patients were intubated with an endotracheal tube 88.6% of the time for primary closure and 23.8% of the time for C&C. Among patients admitted for an elective procedure, the average length of stay was 1.9 days for primary closure, as compared to 0 days for the C&C group. Among patients who underwent C&C with a persistent fistula, there were no significant differences in time since initial creation of gastrostomy, age, BMI, or ASA score.

Conclusion: Our study verifies that primary closure remains the gold standard for persistent gastrocutaneous fistulae. However, C&C is a safe, outpatient procedure that effectively treats GCF in the majority of children. We suggest that in select patients it may be an appropriate initial and definitive procedure for GCF closure.


66.04 A High Ratio of Plasma:RBC Improves Survival in Massively Transfused Injured Children

M. E. Cunningham1, E. H. Rosenfeld1, B. Naik-Mathuria1, R. T. Russell2, A. M. Vogel1  1Baylor College Of Medicine, Department Of Pediatric Surgery, Houston, TX, USA 2Children’s Hospital Of Alabama, Department Of Pediatric Surgery, Birmingham, AL, USA

Introduction: Massive transfusion protocols (MTP) with balanced high blood product ratios (plasma and platelets to red blood cells (RBC)) have been associated with improved outcomes in adult trauma. Their impact in pediatric trauma is unclear. The purpose of this study is to evaluate the effect of blood product ratios in severely injured children.

Methods: The 2015-2016 American College of Surgeons Trauma Quality Improvement Program research dataset was used to identify trauma patients ≤18 years of age who received a massive transfusion (≥40 ml/kg/24-hours total blood products). Children with burns and non-survivable injuries were excluded. Plasma:RBC and platelet:RBC ratios were categorized into low (<1:2), medium (≥1:2 to <1:1), and high (≥1:1). Trauma-related clinical and outcome data were collected and analyzed using descriptive statistics, the Kruskal-Wallis test, chi-square, Fisher's exact test, and Kaplan-Meier analysis. Continuous variables are presented as median [IQR]; p<0.05 was considered significant.
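
A minimal sketch of this ratio categorization and one of the group comparisons, in Python with pandas and SciPy; the column names (plasma_units, rbc_units, vent_free_days) are hypothetical stand-ins for the TQIP fields:

    import numpy as np
    import pandas as pd
    from scipy.stats import kruskal

    df = pd.read_csv("tqip_peds_mtp.csv")  # hypothetical extract

    # Cut plasma:RBC into low (<1:2), medium (>=1:2 to <1:1), and high (>=1:1).
    ratio = df["plasma_units"] / df["rbc_units"]
    df["plasma_rbc_group"] = pd.cut(
        ratio, bins=[0, 0.5, 1.0, np.inf],
        labels=["low", "medium", "high"], right=False,
    )

    # Kruskal-Wallis test for a continuous outcome across the three groups.
    groups = [g.dropna() for _, g in df.groupby("plasma_rbc_group")["vent_free_days"]]
    stat, p = kruskal(*groups)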

Results: Of 473 patients, the median age was 9 [3,16] years, 159 (34%) were female, and 350 (74%) had blunt injuries. Gender, mechanism of injury, ISS, GCS, and age-adjusted vitals were similar amongst the low, medium, and high plasma:RBC and platelet:RBC cohorts. Children in the low plasma:RBC group were younger than those in the medium and high groups (6 [2,13] vs 13 [6,17] & 12 [6,17]; p<0.01). Analysis of the plasma:RBC groups showed those with low ratios had a lower incidence of acute kidney injury (AKI) (0% vs 4% & 5%; p=0.01) compared to medium and high ratios, and fewer ventilator-free days (6 [0,23] vs 17 [17,23]; p=0.03) compared to high ratios. Those with medium plasma:RBC ratios had greater total blood transfusion volumes (ml/kg/24-hours) (89 [56,156] vs 68 [50,105] & 69 [50,115]; p<0.01). There was no difference in ICU-free days or other in-hospital complications between the groups. Analysis of the platelet:RBC ratios showed no significant difference in total blood volume received, ventilator-free days, ICU-free days, AKI, or other in-hospital complications. Hemorrhage control procedures were less common in the low plasma:RBC cohort (34% vs 56% & 51%; p<0.01), while there was no difference between the platelet:RBC cohorts. Survival was improved in the high plasma:RBC cohort early on (Fig. 1), but this difference was no longer significant at later time points. There was no improvement in survival between the platelet:RBC cohorts at any time.

Conclusion: A high ratio of plasma:RBC may result in improved early survival in injured children receiving a massive transfusion. Prospective, multicenter studies are needed to determine optimal resuscitation strategies for these critically ill children.

66.01 Risk Factors for Adverse Outcomes in Children Undergoing Resection of Primary Liver Tumors

K. Culbreath1, A. Garcia1, I. Leeds1, T. Crawford1, E. Boss2, D. Rhee1  1Johns Hopkins University School Of Medicine, Surgery, Baltimore, MD, USA 2Johns Hopkins University School Of Medicine, Otolaryngology, Baltimore, MD, USA

Introduction: Children with primary malignant tumors of the liver often present with an incidental abdominal mass and anemia, with failure to thrive in more advanced disease. Complete tumor resections are major operations that offer the best chance of long-term disease-free survival. This study analyzes the effect of preoperative anemia and parenteral nutrition on surgical outcomes in these patients undergoing a major liver resection.

Methods: This is a retrospective cohort study of children undergoing a major liver resection for primary malignant hepatic tumors. Data were collected from the National Surgical Quality Improvement Program Pediatric database from 2012-2015. Demographics, comorbidities, and 30-day outcomes were compared by anemia (defined by age-specific clinical practice guidelines) and preoperative total parenteral nutrition (TPN) using Fisher's exact test. Outcomes included post-operative complications and hospital readmissions. Propensity score matching was performed to control for significant confounders.
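
To make the matching step concrete, here is a hedged sketch of 1:1 greedy nearest-neighbor propensity-score matching in Python; the variable names are hypothetical stand-ins for NSQIP-Pediatric fields, and the study may have used a different matching algorithm:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    df = pd.read_csv("nsqip_peds_liver.csv")  # hypothetical extract
    confounders = ["age_days", "premature", "cardiac", "respiratory"]

    # 1) Model the probability of exposure (preoperative TPN) from confounders.
    ps_model = LogisticRegression(max_iter=1000).fit(df[confounders], df["tpn"])
    df["ps"] = ps_model.predict_proba(df[confounders])[:, 1]

    # 2) Greedily pair each TPN patient with the unused control whose
    #    propensity score is closest.
    treated = df[df["tpn"] == 1]
    controls = df[df["tpn"] == 0].copy()
    pairs = []
    for idx, ps in treated["ps"].items():
        if controls.empty:
            break
        j = (controls["ps"] - ps).abs().idxmin()
        pairs.append((idx, j))
        controls = controls.drop(index=j)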

Results: A total of 110 children were included in the study, with 80 (72.7%) undergoing lobectomy and 30 (27.3%) trisegmentectomy. The mean age was 3.5 years (range 4 days to 17.7 years). Thirty-one patients (30.4%) were born prematurely, with 18 (16.4%) born before 30 weeks gestational age. Seventy-six (69.1%) patients had preoperative anemia, and 36 (32.7%) were receiving TPN. Children with pre-operative TPN were more likely to have cardiac (p=0.01), respiratory (p<0.01), neurologic (p<0.01), and hematologic comorbidities (p=0.02). There were 22 (20.0%) post-operative complications and 6 (5.5%) hospital readmissions. After propensity score matching, there were 34 matched pairs for anemia and 36 matched pairs for TPN. There was no significant difference in post-operative complications between anemic and non-anemic patients (20.6% vs 35.3%, p=0.28). Patients receiving pre-operative TPN had an increased rate of post-operative complications compared to those not receiving it (33.3% vs 11.1%, p=0.04). Neither anemia (p=0.61) nor pre-operative TPN use (p=0.05) had a significant association with readmissions.

Conclusion: Anemia and pre-operative TPN are common in children undergoing major resection of primary malignant hepatic tumors, with TPN use being associated with several comorbidities. Patients on pre-operative TPN are at higher risk of complications after surgery and may warrant special attention to their overall conditioning and nutritional status when planning their operation.


66.03 Impact of Growth and Nutrition on Risk of Acquired Diseases of Prematurity and Perinatal Mortality

T. J. Sinclair1,2, C. Ye5, F. Salimi Jazi2, J. Taylor2, K. Bianco4, G. Shaw3, D. K. Stevenson3, R. Clark6, K. G. Sylvester2  1Stanford University School Of Medicine, Department Of General Surgery, Stanford, CA, USA 2Stanford University School Of Medicine, Division Of Pediatric Surgery, Stanford, CA, USA 3Stanford University School Of Medicine, Department Of Pediatrics, Stanford, CA, USA 4Stanford University School Of Medicine, Department Of Obstetrics & Gynecology, Stanford, CA, USA 5Hangzhou Normal University, School Of Medicine, Hangzhou, Zhejiang, China 6Pediatrix-Obstetrix Center For Research, Education And Quality, Sunrise, FL, USA

Introduction:

Birth weight and gestational age are the predominant clinical factors used to risk stratify infants for acquired diseases of prematurity including necrotizing enterocolitis (NEC), chronic lung disease (CLD), and retinopathy of prematurity (ROP). We sought to further elucidate the relative impact of size (birth weight, BW), prematurity (gestational age, GA), pre- and postnatal growth, and postnatal nutrition on the risk of perinatal mortality and development of acquired diseases of prematurity. 

Methods:

We performed a retrospective longitudinal cohort study by querying a multicenter, longitudinal database that included 995 preterm infants (<32 weeks gestation). Detailed clinical and nutritional data were collected on subjects for the first 6 weeks of life. Comparator groups were generated based on the following variables: BW (top vs. bottom quartiles), GA (23-26 vs. 29-32 weeks), fetal growth restriction (small for gestational age (SGA, defined as <10th percentile BW for GA) vs. appropriate for gestational age (AGA, BW 10th-90th percentile for GA)), growth velocity (top vs. bottom quartile of change in weight z-score from birth to day of life 42), and total calories (<100 vs. >120 kcal/kg/day averaged from day of life 2-42). Cases of NEC, CLD (need for supplemental oxygen at day of life 42), and ROP were identified. Odds ratios (OR) with 95% confidence intervals (CI) were calculated using the defined comparator groups for the outcomes of NEC, CLD, ROP, mortality (during the study period), and a composite of the disease and mortality outcomes. Statistical significance was defined as a p-value < 0.05. The mortality outcome was excluded from the growth velocity and total calorie analyses given the longitudinal nature of the variable definitions.
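
The odds ratios reported below can be reproduced from a 2x2 table; a minimal sketch with a Woolf-type (log-OR) 95% confidence interval follows, using illustrative counts rather than study data:

    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        # a, b = events / non-events in one comparator group;
        # c, d = events / non-events in the other.
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, (lo, hi)

    # Illustrative: 12/100 events in one group versus 30/100 in the other.
    print(odds_ratio_ci(12, 88, 30, 70))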

Results:

Infants in the top quartile for BW or born at 29-32 weeks gestation were at a significantly reduced risk for all disease outcomes, mortality, and the composite outcome. However, when fetal growth restriction was present there was an increased risk of ROP and mortality, but not of NEC, CLD, or the composite outcome. Similarly, being in the bottom quartile for postnatal growth velocity was associated with an increased risk of developing ROP, but not of the other outcomes. Finally, receiving greater than 120 kcal/kg/day on average was associated with a decreased risk for all outcomes, except for NEC, which approached but did not reach statistical significance. See Table 1.

Conclusion:

These results suggest that postnatal caloric delivery is a significant modifier of premature newborns’ risk of acquired major perinatal disease and mortality. These findings may be of particular importance given that postnatal caloric delivery may be modifiable.

65.10 Hospital Length of Stay After Pediatric Liver Transplantation

K. Covarrubias1, X. Luo1, D. Mogul2, J. Garonzik-Wang1, D. L. Segev1  1Johns Hopkins University School Of Medicine, Surgery, Baltimore, MD, USA 2Johns Hopkins University School Of Medicine, Pediatric Gastroenterology & Hepatology, Baltimore, MD, USA

Introduction: Pediatric liver transplantation is a life-saving treatment modality for patients and their families that requires extensive multidisciplinary assessment in the pre-transplantation period. In order to better inform medical decision making and discharge planning, and ultimately provide more personalized patient counseling, we sought to identify recipient, donor, and surgical characteristics that influence hospital length of stay (LOS) following pediatric liver transplantation.

Methods: We studied 3956 first-time pediatric (<18 yrs old) transplant recipients between 2002 and 2016 using SRTR data. We excluded patients ever listed as status 1A and patients who died prior to discharge. We used multi-level negative binomial regression to estimate incidence rate ratios (IRR) for hospital LOS, accounting for center-level variation. For recipients <12 yrs old, the PELD (Pediatric End-Stage Liver Disease) score was used for analysis; for older transplant recipients, the MELD (Model for End-Stage Liver Disease) score was used.
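
A hedged sketch of such a model in Python with statsmodels; the variable names are hypothetical stand-ins for SRTR fields, and the multi-level (transplant center) structure is approximated with fixed center effects rather than the random intercepts a true multi-level model would use:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("srtr_peds_liver.csv")  # hypothetical extract

    # Negative binomial regression for LOS (a count outcome); exponentiated
    # coefficients are incidence rate ratios (IRR).
    fit = smf.glm(
        "los ~ meld_peld_cat + exception_points + partial_graft + C(center)",
        data=df, family=sm.families.NegativeBinomial(),
    ).fit()

    irr = np.exp(fit.params)          # point estimates
    irr_ci = np.exp(fit.conf_int())   # 95% confidence intervals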

Results: The median LOS in our study population was 15 hospital days after transplantation. Our analysis determined that a MELD/PELD score >14 (MELD 15-25: IRR 1.14, 95% CI 1.08-1.21; MELD/PELD 25-29: IRR 1.39, 95% CI 1.27-1.52; MELD/PELD >30: IRR 1.28, 95% CI 1.17-1.41), exception points (IRR 1.12, 95% CI 1.06-1.18), partial grafts (IRR 1.23, 95% CI 1.16-1.31), and Hispanic ethnicity (IRR 1.07, 95% CI 1.00-1.15) were associated with a longer LOS (p<0.05). A graft from a live donor (IRR 0.88, 95% CI 0.81-0.96), recipient weight greater than 10 kg (10-35 kg: IRR 0.80, 95% CI 0.76-0.85; >35 kg: IRR 0.66, 95% CI 0.61-0.70), and non-hospitalized patient status (IRR 0.80, 95% CI 0.71-0.90) were associated with a decreased LOS (p<0.05).

Conclusion: Our findings suggest that the ability to transplant patients at lower MELD/PELD scores and increased use of grafts from living donors would decrease healthcare utilization in the immediate postoperative period. Hispanic ethnicity and public health insurance were also associated with a longer LOS; however, our model does not account for healthcare disparities faced by these groups, including socioeconomic status and language barriers.


65.06 Preoperative Thromboelastography Predicts Transfusion Requirements During Liver Transplantation

J. T. Graff1, V. K. Dhar1, C. Wakefield1, A. R. Cortez1, M. C. Cuffy1, M. D. Goodman1, S. A. Shah1  1University Of Cincinnati, Department Of Surgery, Cincinnati, OH, USA

Introduction: Thromboelastography (TEG) has been shown to provide an accurate assessment of patients’ global coagulopathy and hemostatic function. While use of TEG has grown within the field of liver transplantation (LT), the relative importance of TEG values obtained at various stages of the operation and their association with outcomes remain unknown. Our goal was to assess the prevalence of TEG-based coagulopathy in patients undergoing LT, and determine whether preoperative TEG is predictive of transfusion requirements during LT.

Methods: An IRB-approved, retrospective review of 380 consecutive LTs between January 2013 and May 2017 was performed. TEGs obtained during the preoperative, anhepatic, neohepatic, and initial postoperative phases were evaluated. Patients with incomplete data were excluded from the analysis, resulting in a study cohort of 110 patients. TEGs were categorized as hypocoagulable, hypercoagulable, or normal using a previously described composite measure of R time, K time, alpha angle, and maximum amplitude. Perioperative outcomes including transfusion requirements, need for temporary abdominal closure, and rates of reoperation for bleeding were evaluated.
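
For orientation only, a toy sketch of how a single TEG might be binned from its four parameters; the cutoffs are generic kaolin-TEG reference ranges assumed purely for illustration, not the composite measure the authors cite:

    def classify_teg(r_min, k_min, alpha_deg, ma_mm):
        # Count parameters falling on the hypo- or hypercoagulable side of
        # illustrative reference ranges (R 5-10 min, K 1-3 min,
        # alpha 53-72 degrees, MA 50-70 mm).
        hypo = (r_min > 10) + (k_min > 3) + (alpha_deg < 53) + (ma_mm < 50)
        hyper = (r_min < 5) + (k_min < 1) + (alpha_deg > 72) + (ma_mm > 70)
        if hypo >= 2 and hypo > hyper:
            return "hypocoagulable"
        if hyper >= 2 and hyper > hypo:
            return "hypercoagulable"
        return "normal"

    print(classify_teg(r_min=12.0, k_min=3.5, alpha_deg=48.0, ma_mm=44.0))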

Results: Of patients undergoing LT, 11.8% were hypocoagulable, 22.7% were hypercoagulable, and 65.5% were normal at the start of the operation; 46.4% of patients finished the operation in a different category of coagulation from the one in which they started. Of patients starting LT hypocoagulable, 15.4% finished hypocoagulable, none finished hypercoagulable, and 84.6% finished normal. Patients with hypocoagulable preoperative TEGs required more units of pRBC (12 vs. 6 vs. 6, p=0.04), FFP (24 vs. 13 vs. 8, p<0.01), cryoprecipitate (4 vs. 2 vs. 1, p<0.01), platelets (4 vs. 2 vs. 1, p<0.01), and cell saver (4.6 vs. 2.8 vs. 1.9 liters, p<0.01) during LT compared to those with normal or hypercoagulable preoperative TEGs. Despite these higher transfusion requirements, there were no significant differences in rates of temporary abdominal closure, unplanned reoperation, ICU length of stay, or 30-day readmission (all p>0.05) between patients with hypocoagulable, hypercoagulable, or normal preoperative TEGs.

Conclusion: Preoperative thromboelastography may be predictive of transfusion requirements during LT. By consistently evaluating the preoperative TEG, surgeons can identify patients who may be at higher risk for intraoperative coagulopathy and require increased perioperative resource utilization.

65.04 Impact of Donor Hepatic Arterial Anatomy on Clinical Graft Outcomes in Liver Transplantation

J. R. Schroering2, C. A. Kubal1, T. J. Hathaway2, R. S. Mangus1  1Indiana University School Of Medicine, Transplant Division/Department Of Surgery, Indianapolis, IN, USA 2Indiana University School Of Medicine, Indianapolis, IN, USA

Introduction: The arterial anatomy of the liver is highly variable. When a liver graft is procured for transplant, the donor hepatic artery anatomy must be identified and preserved to avoid injury. Reconstruction of the hepatic artery is often required in cases of accessory or replaced vessels. This study reviews a large number of liver transplants and summarizes the arterial anatomy. Clinical outcomes include hepatic artery thrombosis (HAT), early graft loss, and long-term graft survival.


Methods: All liver transplants at a single center over a 10-year period were reviewed. The arterial anatomy was determined from a combination of the organ procurement record and the liver transplant operative note. Anatomic variants and reconstructions were noted. For this cohort, all accessory/replaced right hepatic arteries were reconstructed to the gastroduodenal artery (GDA) with 7-0 prolene suture on the back table prior to implantation. All accessory/replaced left hepatic arteries were left intact from their origin at the left gastric and hepatic artery when possible, though occasional reconstruction to the GDA with 7-0 prolene suture was performed. Post-operative anticoagulation was not utilized routinely. Antifibrinolytic therapy was administered at initial incision in all cases using either aprotinin or epsilon-aminocaproic acid. A single Doppler ultrasound (US) was obtained post-operatively in the critical care unit to confirm arterial and venous flow. No other imaging (intraoperative or post-operative) was obtained unless there was an indication.

Results: The records for 1145 patients were extracted. The median recipient age was 57 years, body mass index 28.4, and MELD score 20. Retransplant procedures comprised 4% of the cohort. Hepatic arterial anatomy types included: normal (68%), accessory/replaced left (16%), accessory/replaced right (10%), accessory/replaced right and left (4%), and other variants (2%). There were 222 cases (19%) in which back-table arterial reconstruction was required. The overall incidence of HAT was 1%. The highest rate of HAT was in liver grafts with accessory right and left hepatic arteries. The hepatic arterial resistive indices measured on post-operative Doppler US did not differ by hepatic artery anatomy. One-year survival for all grafts was above 90%, but livers with an accessory right hepatic artery (only) had lower survival at 10 years when compared with grafts with normal anatomy (62% versus 75%).

Conclusion: Sixty-eight percent of livers had standard arterial anatomy, with accessory/replaced left (16%) and right (10%) arteries being the next most common variants. All anatomic variants had good 1-year graft survival, though liver grafts with an accessory/replaced right hepatic artery had the lowest survival at 10 years.

65.05 Sarcopenia Is a Better Predictor of Survival than Serologic Markers in Elderly Liver Transplant Patients

W. J. Bush1, A. Cabrales1, H. Underwood1, R. S. Mangus1  1Indiana University School Of Medicine, Indianapolis, IN, USA

Introduction: An increasing number of liver transplant (LT) patients are geriatric (≥60 years). Recent research suggests that measures of frailty, such as nutrition status, may be important predictors of surgical outcomes. This study evaluates the impact of objective measures of nutritional status on post-transplant perioperative and long-term outcomes for geriatric liver transplant patients.

Methods: Inclusion criteria comprised all geriatric liver transplant patients at a single center over a 16-year period. Measures of nutrition status included preoperative core muscle mass, perinephric and subcutaneous adipose volume, and standard serologic markers of nutritional status (albumin, total protein, and cholesterol). Total psoas muscle area and total perinephric and subcutaneous adipose volumes were measured from preoperative computed tomography (CT) scans at the L2/L3 disc space and scaled to patient height. Outcomes included length of hospital stay and patient survival.

Results: There were 564 patients included in the analysis, of whom 446 had preoperative CT scans available. There was poor correlation between serologic markers of nutrition and CT measures of tissue volume. Serologic markers of nutrition were poor predictors of survival, but abnormal values were associated with increased length of stay, prolonged ventilator requirement, and non-home discharge. In contrast, patients with severe sarcopenia and poor subcutaneous and visceral adipose stores had worse long-term survival, but these findings correlated poorly with perioperative outcomes. Cox regression analysis demonstrated decreased long-term survival for patients with severe sarcopenia.

Conclusion: In this cohort of geriatric LT recipients, common serologic markers of nutrition were associated with perioperative clinical outcomes, while CT measures of muscle and adipose stores were more predictive of early and intermediate-term survival. These results support the further development of frailty measures that assess core tissue volume and physiologic strength.


65.01 Thrombolysis during Liver Procurement Prevents Ischemic Cholangiopathy in DCD Liver Transplantation

A. E. Cabrales1, R. S. Mangus1, J. A. Fridell1, C. A. Kubal1  1Indiana University School Of Medicine, Department Of Transplant Surgery, Indianapolis, IN, USA

Introduction: The rate of donation after circulatory death (DCD) liver transplantation has decreased in recent years as a result of inferior graft and patient survival compared to donation after brain death (DBD) transplantation. Ischemic cholangiopathy (IC) is the primary cause of these inferior outcomes and is associated with a high rate of graft loss, retransplantation, and recipient mortality. Development of IC in liver transplant recipients appears to be associated with peribiliary arterial plexus microthrombi that can form in DCD donors. Our center has demonstrated success using a tissue plasminogen activator (tPA) flush during DCD organ procurement to prevent the formation of microthrombi and prevent IC. This study investigates the long-term impact of the tPA flush on graft outcomes and program use of DCD organs.

Methods: All records for liver transplants over a 15-year period at a single center were reviewed and data extracted. DCD organ procurement followed carefully established protocols, including a 5-minute wait time after determination of cardiac death prior to initiation of the procurement procedure. The procurement consisted of rapid cannulation of the aorta, clamping of the aorta, and decompression through the vena cava. Preservation consisted of an initial flush with histidine-tryptophan-ketoglutarate solution (HTK), followed by infusion of tPA in 1 L of normal saline, then further flushing with HTK until the effluent was clear. Total flush volume was less than 5 L.

Results: There were 57 tPA procurements (48%) and 62 non-tPA procurements (52%). Patients receiving the tPA grafts were older and had a higher MELD score. The tPA grafts had less cold and warm ischemia time. Grafts procured using tPA trended toward better survival at 7 and 90 days (p=0.09 and p=0.06) and had significantly better survival at 1 year (95% versus 79%, p=0.01). Cox regression showed significantly better long-term survival for tPA grafts (88% versus 45% at 10 years; p<0.01). Improved outcomes with thrombolytic therapy in DCD liver procurement shifted the use of DCD grafts at our center toward extended-criteria donors who were older, heavier, and out of state. Use of these higher-risk DCD donors did not change clinical outcomes.

Conclusion: Our center has shown that optimization of perioperative conditions, including use of an intraoperative thrombolytic flush, significantly lowers the incidence of IC in DCD liver grafts. With improved outcomes, the percentage of DCD grafts at our center has increased, including the use of extended-criteria DCD livers, without a worsening of outcomes.

65.02 The Effect of Socioeconomic Status on Patient-Reported Outcomes after Renal Transplantation

A. J. Cole1, P. K. Baliga1, D. J. Taber1  1Medical University Of South Carolina, Charleston, SC, USA

Introduction: Research analyzing the effect of socioeconomic status (SES) on renal transplant outcomes has demonstrated conflicting results. However, recent studies demonstrate that certain patient-reported outcomes (PROs), such as depression, medication non-adherence, health literacy, social support, and self-efficacy, can influence clinical outcomes in renal transplant recipients. Our objectives were to examine the effect of SES on PROs and to determine whether there is an association between SES, patient-reported outcomes, and healthcare utilization.

Methods: This was a post-hoc analysis of 52 patients enrolled in an ongoing prospective trial aimed at improving cardiovascular disease risk factor control in renal transplant recipients. At baseline, patients completed detailed surveys assessing SES and PROs. Patients were divided into low and high SES cohorts based on income, education, marital status, insurance, and employment. All patients were given 12 self-reported surveys in the domains of medication-related issues, self-care and knowledge, psychosocial issues, and healthcare. Analyses included the associations between the 12 PRO surveys, SES measures, and healthcare utilization, including the rates of hospitalizations, ED visits, and clinic visits that occurred between the date of transplant and enrollment in the trial.

Results: The low SES cohort (n=16, 30.8%) experienced more severe depression (5.75 vs 3.0, p=0.022), higher rates of inadequate health literacy (3.42 vs 1.68, p=0.022) and perceived stress (2.743 vs 3.266, p=0.027), along with significantly less self-efficacy (6.971 vs 8.214, p=0.006) and social support (3.86 vs 4.408, p=0.012; see Figure 1). Low SES was associated with a 60% higher rate of hospitalization and a 90% higher rate of ED visits per patient-year. Medication non-adherence was also associated with more hospitalizations and ED visits.

Conclusion: This analysis demonstrates that low SES was significantly associated with negative PROs, including depression, health literacy, social support, stress, and self-efficacy. Further, low SES and medication non-adherence were associated with higher rates of healthcare utilization.


64.09 Role of the Patient-Provider Relationship in Hepato-Pancreato-Biliary Diseases

E. J. Cerier1, Q. Chen1, E. Beal1, A. Paredes1, S. Sun1, G. Olsen1, J. Cloyd1, M. Dillhoff1, C. Schmidt1, T. Pawlik1  1Ohio State University, Department Of Surgery, Columbus, OH, USA

Introduction: An optimal patient-provider relationship (PPR) may improve medication/appointment adherence and healthcare resource utilization, as well as reduce healthcare costs. The objective of the current study was to define the impact of PPR on healthcare outcomes among a cohort of patients with hepato-pancreato-biliary (HPB) diseases.

Methods: Utilizing the Medical Expenditure Panel Survey Database from 2008-2014, patients with an HPB disease diagnosis were identified. PPR was determined using a weighted score based on survey items from the Consumer Assessment of Healthcare Providers and Systems (CAHPS). Specifically, patient responses to questions concerning access to healthcare providers, responsiveness of healthcare providers, patient-provider communication, and shared decision-making were obtained. Patient-provider communication was stratified into three categories using a composite score that ranged from 4 to 12 (score 4-7: "poor"; 8-11: "average"; 12: "optimal"). The relationship between PPR and healthcare outcomes was analyzed using regression analyses and generalized linear modeling.
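
A one-step sketch of this stratification in Python with pandas, assuming integer composite scores:

    import pandas as pd

    # Illustrative CAHPS composite scores (possible range 4-12), not study data.
    score = pd.Series([5, 9, 12, 7, 11])

    # 4-7 -> "poor", 8-11 -> "average", 12 -> "optimal".
    ppr = pd.cut(score, bins=[3, 7, 11, 12], labels=["poor", "average", "optimal"])
    print(ppr.tolist())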

Results: Among 594 adult patients, representing 6 million HPB patients, reported PPR was "optimal" (n=210, 35.4%), "average" (n=270, 45.5%), and "poor" (n=114, 19.2%). Uninsured (uninsured: 36.3% vs. Medicaid: 28.8% vs. Medicare: 15.4% vs. private: 14.0%; p=0.03) and poor-income (high: 14.0% vs. middle: 12.8% vs. low: 21.5% vs. poor: 24.3%; p=0.03) patients were more likely to report "poor" PPR. In contrast, other factors such as race, sex, education, and age were not associated with PPR. In addition, there was no association between PPR and overall annual healthcare expenditures ("poor" PPR: $19,405, CI $15,207-23,602 vs. "average" PPR: $20,148, CI $15,538-24,714 vs. "optimal" PPR: $19,064, CI $15,344-22,784; p=0.89) or out-of-pocket expenditures ("poor" PPR: $1,341, CI $618-2,065 vs. "average" PPR: $1,374, CI $1,079-1,668 vs. "optimal" PPR: $1,475, CI $1,150-1,800; p=0.77). Patients who reported "poor" PPR were also more likely to self-report poor mental health scores (OR 5.0, CI 1.3-16.7) and to have high emergency room utilization (≥2 visits: OR 2.4, CI 1.2-5.0) (both p<0.05). Patients with reported "poor" PPR did not, however, have worse physical health scores or more previous inpatient hospital stays (both p>0.05) (Figure).

Conclusion: Patient self-reported PPR was associated with insurance and socioeconomic status.  In addition, patients with perceived "poor" PPR were more likely to have poor mental health and be high utilizers of the emergency room.  Efforts to improve PPR should focus on these high-risk populations.

64.10 Preoperative Frailty Assessment Predicts Short-Term Outcomes After Hepatopancreatobiliary Surgery

P. Bou-Samra1, D. Van Der Windt1, P. Varley1, X. Chen1, A. Tsung1  1University Of Pittsburgh, Hepatobiliary & Pancreatic Surgery, Pittsburgh, PA, USA

Introduction: Given the aging of our population, increasing numbers of elderly patients are evaluated for surgery. Preoperative assessment of frailty, defined as the lack of physiological reserve, is a novel concept that has recently gained interest for predicting postoperative complications. The comprehensive Risk Analysis Index (RAI) for frailty has been shown to predict mortality in a large cohort of surgical patients. The RAI is now measured in all patients presenting to surgical clinics at our institution. Initial analysis showed that patients with hepatopancreatobiliary disease have the highest frailty scores, second only to patients presenting for cardiovascular surgery. Therefore, the aim of this study was to evaluate the performance of the RAI in predicting short-term post-operative outcomes in patients undergoing hepatopancreatobiliary surgery, a significantly frail patient population.

Methods: From June to December 2016, the RAI was determined in 162 patients prior to surgery. The RAI includes 12 variables evaluating, for example, age, kidney disease, congestive heart failure, cognitive functioning, independence in daily activities, and weight loss. Data on 30-day post-operative outcomes were prospectively collected. Complications were scored according to the Clavien-Dindo classification and summarized in the Comprehensive Complication Index (CCI). Other assessed post-operative outcomes included ICU admission, length of stay, and rates of readmission. Logistic and linear regressions were performed to assess the correlation of the RAI score with each measured outcome. A multivariate analysis was performed to control for the magnitude of the operation, coronary artery disease, cancer stage, and intraoperative blood loss.

Results: Our cohort of 162 patients (79 M, 83 F; median age 67, range 19-95) included 55 patients undergoing a minor operation, 56 undergoing an intermediate operation, and 51 undergoing major surgery. RAI scores ranged from 0 to 25, with a median of 7. With every unit increase in RAI score, length of stay increased by 5% (IRR 1.05; 95%CI 1.04-1.07, p<0.01), the odds of discharge to a special facility increased by 10% (OR 1.10; 95%CI 1.02-1.17, p<0.01), the odds of admission to the ICU increased by 11% (OR 1.11; 95%CI 1.02-1.20, p=0.01), the expected ICU length of stay increased by 17% (IRR 1.17; 95%CI 1.06-1.30), the odds of readmission increased by 8% (OR 1.08; 95%CI 0.99-1.17, p=0.054), and the CCI increased by 1.6 units (coefficient 1.60; 95%CI 0.61-2.58, p<0.01). In multivariate analysis, frailty remained positively associated with the CCI (p=0.01).

Conclusion: The RAI score is predictive of short-term post-operative outcomes after hepatopancreatobiliary surgery. Pre-operative risk assessment with RAI could aid in decision-making for treatment allocation to surgery versus less morbid locoregional treatment options in frail patients. 


64.05 Prognostic Value of Hepatocellular Carcinoma Staging Systems: A Comparison

S. Bergstresser2, P. Li2, K. Vines2, B. Comeaux1, D. DuBay3, S. Gray1,2, D. Eckhoff1,2, J. White1,2  1University Of Alabama at Birmingham, Department Of Surgery, Division Of Transplantation, Birmingham, Alabama, USA 2University Of Alabama at Birmingham, School Of Medicine, Birmingham, Alabama, USA 3Medical University Of South Carolina, Department Of Surgery, Division Of Transplantation, Charleston, SC, USA

Introduction: Hepatocellular carcinoma (HCC) is the third most common cause of cancer-related deaths worldwide. As the incidence of HCC continues to trend upward, it is imperative to have validated staging systems to guide clinicians when choosing treatment options. Seven HCC staging systems have been validated to varying degrees; however, there is currently inadequate evidence in the literature regarding which system is the best predictor of survival. The purpose of this investigation was to determine predictors of survival and to compare the 7 staging systems in their ability to predict survival in a cohort of patients diagnosed with HCC.

Methods: This is a prospectively controlled chart review study of 782 patients diagnosed with HCC between January 2007 and April 2015 at a large, single-center hospital. Lab values, patient demographics, and tumor characteristics were used to stage patients and calculate Model for End-Stage Liver Disease (MELD) and Child-Pugh scores. The Kaplan-Meier method and log-rank test were used to identify risk factors for overall survival. A Cox regression model was used to calculate the linear trend χ2 and likelihood ratio χ2 to determine the linear trend and homogeneity of the staging systems, respectively.
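
A hedged sketch of the Cox-model comparison in Python with lifelines; the column names (survival_months, death, clip_stage) are hypothetical, and entering the stage as a single ordinal covariate is what yields a linear-trend statistic:

    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.read_csv("hcc_cohort.csv")  # hypothetical extract

    cph = CoxPHFitter()
    cph.fit(df[["survival_months", "death", "clip_stage"]],
            duration_col="survival_months", event_col="death")

    cph.print_summary()                    # per-covariate statistics (linear trend)
    lrt = cph.log_likelihood_ratio_test()  # likelihood ratio chi-square (homogeneity)
    print(lrt.test_statistic, lrt.p_value)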

Results: Univariate analyses suggested that tumor number (P < .0001), diameter of the largest lesion (P < .0001), tumor occupying > 50% of liver mass (P < .0001), tumor major vessel involvement (P = .0025), alpha-fetoprotein (AFP) level 21-200 vs. > 200 (P < .0001), and Child-Pugh score (P < .0001) were significant predictors of overall survival, while portal hypertension (P = .520) and pre-intervention bilirubin (P = .0904) were not. In all patients, the Cancer of the Liver Italian Program (CLIP) system provided the largest linear trend χ2 and likelihood ratio χ2 in the Cox model when compared to other staging systems, indicating the best predictive power for survival.

Conclusion: Based on our statistical analysis, Child-Pugh score, tumor size, tumor number, presence of vascular invasion, and AFP level play a significant role in determining survival. In all patients and in patients receiving treatment other than transplantation (ablation, chemoembolization), CLIP appears to be the best predictor of survival. The CLIP staging system takes into account Child-Pugh score, tumor morphology, AFP level, and portal vein thrombosis, which may explain its significant ability to predict survival.


64.06 Epidural-related events are associated with ASA class, but not ketamine infusion following pancreatectomy

V. Ly1, J. Sharib1, L. Chen2, K. Kirkwood1  2University Of California – San Francisco, Anesthesia, San Francisco, CA, USA 1University Of California – San Francisco, Surgical Oncology, San Francisco, CA, USA

Introduction:

Epidural analgesia following pancreatectomy has become widely adopted; however, high epidural rates are often associated with early hypotensive events that require rate reduction and fluid resuscitation. It is unclear which patients are most at risk for such events. Continuous subanesthetic ketamine infusion reduces opioid consumption after major abdominal surgery. The effects of ketamine added to epidural analgesia have not been well studied in patients undergoing pancreatectomy. This study evaluates the safety and postoperative analgesic requirements in patients who received continuous ketamine infusion as an adjunct to epidural analgesia following pancreatectomy.

Methods:

A retrospective data analysis was conducted on 234 patients undergoing pancreaticoduodenectomy (n=165) or distal pancreatectomy (n=69) at UCSF Medical Center between January 2014 and January 2017. Patient demographics, including history of prior opiate use, along with perioperative fentanyl-ropivacaine epidural and continuous intravenous ketamine rates, were collected. Oral morphine equivalents (OME) and visual analogue pain scale (VAS) scores were recorded on postoperative days 0, 1, 2, 3, and 4. To assess safety, epidural rate decreases due to hypotension within the first 24 hours post op and ketamine-related adverse events were recorded.

Results:

Epidural (n=197) and other opiate analgesia (n=234) were administered perioperatively per surgeon preference and institutional standards. Continuous ketamine infusion was given intraoperatively, postoperatively, or both in 71 patients, with a trend toward preferential use in patients with prior opiate exposure. Ketamine infusion was not associated with hypotensive events, daily maximum epidural rates, or significant epidural rate changes on postoperative days 0-4. OMEs and VAS scores were similar between groups, regardless of prior opiate use. Patients with American Society of Anesthesiologists (ASA) class 3 or 4 (n=111) were more likely to require epidural rate decreases (OR 2.37, 95%CI 1.3-4.2, p=0.003) and associated interventions in the first 24 hours post op. Three patients reported ketamine-related adverse events such as unpleasant dreams and hallucinations.

Conclusion:

Subanesthetic ketamine infusion as an adjunct to epidural analgesia for pancreatic surgery patients is safe. Patients with ASA classification 3 or 4 experience more hypotensive events requiring epidural rate decreases in the first postoperative day following pancreatectomy. Further study is required to assess whether ketamine infusion allows for use of lower epidural rates, reduces post op opioid consumption, or improves pain scores in the early postoperative period.

64.02 Isolated Pancreatic Tail Remnants After Transgastric Necrosectomy Can Be Observed

C. W. Jensen1, S. Friedland2, P. J. Worth1, G. A. Poultsides1, J. A. Norton1, W. G. Park2, B. C. Visser1, M. M. Dua1  1Stanford University, Surgery, Palo Alto, CA, USA 2Stanford University, Gastroenterology, Palo Alto, CA, USA

Introduction: Severe necrotizing pancreatitis may result in mid-body necrosis and ductal disruption. When a significant portion of the tail remains viable but cannot drain into the proximal pancreas, the “unstable anatomy” that results is often deemed an indication for distal pancreatectomy. The transgastric approach to pancreatic drainage/debridement has been shown to be effective for retrogastric walled-off collections, and a subset of these cases are performed in patients with an isolated viable tail. The purpose of this study was to characterize the outcomes among patients with an isolated pancreatic tail remnant who underwent transgastric drainage or necrosectomy (endoscopic or surgical) and to determine how often they required subsequent operative management.

Methods: Patients with necrotizing pancreatitis and retrogastric walled-off collections treated by either surgical transgastric necrosectomy or endoscopic cystgastrostomy +/- necrosectomy between 2009 and 2017 were identified by retrospective chart review. Clinical and operative details were obtained through the medical record. All available pre- and post-procedure imaging was reviewed for evidence of an isolated distal pancreatic tail remnant.

Results: A total of 75 patients were included in this study (41 surgical and 34 endoscopic). All of the patients in the surgical group underwent laparoscopic transgastric necrosectomy; the endoscopic group consisted of 27 patients who underwent pseudocyst drainage and 7 who underwent necrosectomy. Median follow-up for the entire cohort was 13 months, and there was one death. A disconnected pancreatic tail was identified in 22 (29%) patients (13 laparoscopic and 9 endoscopic). After the surgical or endoscopic creation of an internal fistula (“cystgastrostomy”), there were no external fistulas despite the viable tail. Of the 22 patients, 5 (23%) developed symptoms at a median of 23 months from the index procedure (3 recurrent episodic pancreatitis, 2 intractable pain). Two patients (both initially in the endoscopic group) ultimately required distal pancreatectomy and splenectomy at 6 and 24 months after the index procedure.

Conclusion: Patients with a walled-off retrogastric collection and an isolated viable tail are effectively managed by a transgastric approach. Despite this seemingly “unstable anatomy,” the creation of an internal fistula via surgical or endoscopic “cystgastrostomy” avoids external fistulas/drains and the short-term (near the initial pancreatitis) need for surgical distal pancreatectomy. A very small subset requires intervention for late symptoms. In our series, the patients who ultimately required distal pancreatectomy had initially undergone an endoscopic rather than a surgical approach; however, whether the two approaches differ in the outcome of the isolated pancreatic remnant is difficult to conclude given the small sample size.