89.09 Comparison of UW and HTK Preservation Solutions in Pediatric Liver Transplantation

T. J. Hathaway2, C. A. Kubal1, J. R. Schroering2, R. S. Mangus1  1Indiana University School Of Medicine,Transplant Division/Department Of Surgery,Indianapolis, IN, USA 2Indiana University School Of Medicine,Indianapolis, IN, USA

Introduction:  University of Wisconsin (UW) and Histidine-Tryptophan-Ketoglutarate (HTK) solutions are the two primary organ preservation solutions used in the United States. Multiple studies have been published comparing the two solutions in adult liver transplantation, most showing similar long-term results. This study analyzes the two solutions in pediatric liver transplantation at a single center.

Methods:  All pediatric liver transplants performed at a single center from 2001-2017 were reviewed. Transplant recipients were grouped based on the type of preservation solution used. Outcome measures included early graft function, as well as graft and patient survival. Early graft function was assessed by comparing 1-, 7-, 14-, and 30-day aspartate and alanine aminotransferases, and total bilirubin levels. Survival outcomes were compared between the two groups at 7-, 90- and 365-days post-transplant. Primary use of HTK began at our center as of May 2003. Operative technique, immunosuppressive protocols, and donor acceptance criteria remained uniform among participating surgeons throughout the study period. 

Results: There were 104 pediatric liver transplants with complete data during the study period, 75 preserved with HTK (72%) and 29 with UW (28%). The two groups had similar recipient and donor demographics. Locally procured organs were more likely to be preserved with HTK, as that was the preferred solution of the local center. Cold and warm ischemia times were similar between the groups. Post-transplant alanine aminotransferase (ALT) was higher in the UW group both at peak and on post-transplant day 3. Peak total bilirubin levels were similar. Graft survival was statistically similar in the UW and HTK groups at 7, 90, and 365 days post-transplant. 

Conclusion: Use of HTK has been recommended in pediatric liver transplantation because of its low viscosity, which may improve blood clearance from the small pediatric vessels. This study provides evidence that use of HTK in pediatric liver transplantation is safe and results in outcomes similar to those seen with UW. 

 

89.08 Propensity Matched Survival Analysis of Simultaneous Lung-Liver and Isolated Lung Transplantation

K. Freischlag1, B. Ezekian2, M. S. Mulvihill2, P. M. Schroder2, H. Leraas1, S. Knechtle2  1Duke University Medical Center,School Of Medicine,Durham, NC, USA 2Duke University Medical Center,Department Of Surgery,Durham, NC, USA

Introduction:
There is debate in the field of transplantation as to whether simultaneous lung-liver transplant (LLT) long-term outcomes warrant allocation of two organs to a single recipient. We hypothesized that LLT recipients would have improved survival compared to single-organ lung recipients with a similar degree of liver dysfunction and tested this question using a large national database. 

Methods:
The OPTN/UNOS STAR file was queried for adult recipients of LLT and isolated lung transplant from 2006-2016. Demographic characteristics were generated and examined. LLT recipients were matched 1:2 with single-organ lung recipients on age, gender, ethnicity, number of previous transplants, diagnosis, diabetes status, BMI, donor BMI, calculated MELDXI, LAS, and year of transplant. Kaplan-Meier analysis with the log-rank test compared survival between groups. 
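
For readers who want to reproduce this kind of 1:2 propensity-matched comparison, the sketch below shows one common way to implement it in Python. It is illustrative only: the column names, the caliper, and the synthetic data are assumptions for demonstration, not the study's actual STAR-file variables or code.

```python
# Illustrative sketch only: the abstract does not publish its matching code.
# Column names and the caliper value are assumptions for demonstration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for the registry cohort: 'llt' = 1 for lung-liver recipients.
n = 2000
df = pd.DataFrame({
    "llt": rng.binomial(1, 0.02, n),
    "age": rng.normal(50, 15, n),
    "bmi": rng.normal(25, 4, n),
    "meld_xi": rng.normal(10, 3, n),
    "las": rng.normal(40, 10, n),
})

covariates = ["age", "bmi", "meld_xi", "las"]

# 1) Propensity score: probability of receiving LLT given baseline covariates.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["llt"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2) Greedy 1:2 nearest-neighbor matching on the propensity score without replacement.
caliper = 0.2 * df["ps"].std()          # assumed caliper; the abstract does not specify one
treated = df[df["llt"] == 1]
controls = df[df["llt"] == 0].copy()

matched_rows = []
for idx, row in treated.iterrows():
    dist = (controls["ps"] - row["ps"]).abs()
    nearest = dist.nsmallest(2)
    if len(nearest) == 2 and (nearest <= caliper).all():
        matched_rows.append(idx)
        matched_rows.extend(nearest.index.tolist())
        controls = controls.drop(nearest.index)   # sample controls without replacement

matched = df.loc[matched_rows]
print(matched.groupby("llt")[covariates].mean())  # check covariate balance after matching
```

Greedy nearest-neighbor matching without replacement is only one of several reasonable strategies; the abstract does not state which algorithm or software was used.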

Results:
A total of 18,273 lung recipients were identified. Of those, 43 patients underwent simultaneous LLT. In an unadjusted comparison of the LLT recipients with the isolated lung transplant recipients after the introduction of the lung allocation score (LAS), the LLT recipients were younger (34.19 years vs 54.74 years, p<0.001), had a lower BMI (20.83 vs 25.02, p<0.001), had a lower donor BMI (23.26 vs 25.86, p=0.001), had a higher percentage of diabetics (44.5% vs 19.5%, p<0.001), had a lower FEV1 (27.78 vs 38.16, p<0.001), higher percentage of cystic fibrosis patients (62.8% vs 11.9%, p<0.001), and had higher MELDXI (11.95 vs 9.95, p<0.001). Forty-one LLT patients were matched to eighty-two single-organ lung recipients with no differences in baseline characteristics and similar liver dysfunction. Overall survival was not significantly different between matched lung alone and LLT (Figure 1: 1-year 92.6% vs 87.8%, 5-year 62.6% vs 66.7%, 10-year 39.6% vs 43.1%, p=0.29).

Conclusion:
Survival in combined LLT was comparable to isolated lung transplantation, even after matching for patient characteristics and level of liver dysfunction. Thus, this analysis failed to identify a cohort of patients who benefit significantly from LLT. In order to continue this practice, further studies must identify the patients with dual organ failure who are most likely to benefit from this strategy.
 

89.06 Single and Double Lung Transplantation in Idiopathic Pulmonary Fibrosis with Pulmonary Hypertension

A. Kashem1, S. Keshavamurthy1, M. N. Sakib1, J. Gomez-Abraham1, E. Leotta1, K. Minakata1, F. Cordova1, V. Dulam1, G. Ramakrishnan1, S. Brann1, Y. Toyoda1  1Temple University,Cardiothoracic Surgery,Philadelphia, PA, USA

Introduction:  Patients with idiopathic pulmonary fibrosis (IPF) often demonstrate borderline or established pulmonary hypertension (PH), with pulmonary artery mean pressure (PAP mean) below or above 25 mmHg, respectively. Many of these patients are candidates for either single or double lung transplantation. We investigated survival in IPF patients with PAP mean <25 mmHg vs. >25 mmHg who underwent single (SLT) or double (DLT) lung transplantation.

Methods:  165 IPF patients who underwent either single or double lung transplantation (LTx) at our center from 2012 to 2016 were reviewed retrospectively. 86 patients had borderline PH with PAP mean <25 mmHg and 79 patients had PH with PAP mean >25 mmHg. Demographics, recipient age and height, donor age and height, LAS, length of stay (LOS), survival days, death, type of induction, and surgical procedure were compared between SLT and DLT within the two PAP mean groups (<25 vs. >25 mmHg). Actuarial survival was assessed by Kaplan-Meier curves and compared by the log-rank test. Data were expressed as mean ± standard deviation, and a p-value less than 0.05 was considered statistically significant (Stata). 
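
For orientation, the actuarial survival comparison described above can be sketched in Python with the lifelines package, as shown below. The survival times, event indicators, and group sizes are synthetic assumptions; the study itself reports its analysis in Stata.

```python
# Illustrative sketch only; variable names and synthetic data are assumptions,
# not the center's actual dataset.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

# Synthetic survival times (days) and event indicators for SLT and DLT recipients.
slt_days = rng.exponential(1200, 54); slt_event = rng.binomial(1, 0.4, 54)
dlt_days = rng.exponential(1200, 32); dlt_event = rng.binomial(1, 0.4, 32)

kmf = KaplanMeierFitter()
kmf.fit(slt_days, event_observed=slt_event, label="SLT")
ax = kmf.plot_survival_function()
kmf.fit(dlt_days, event_observed=dlt_event, label="DLT")
kmf.plot_survival_function(ax=ax)

# Log-rank comparison of the two actuarial survival curves.
result = logrank_test(slt_days, dlt_days,
                      event_observed_A=slt_event, event_observed_B=dlt_event)
print(f"log-rank p = {result.p_value:.3f}")
```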

Results: Of the 165 IPF patients, group 1 with PAP mean <25 mmHg (n=86) had 32 DLT and 54 SLT procedures, and group 2 with PAP mean >25 mmHg (n=79) had 48 DLT and 31 SLT procedures. Group 1 had 11 females with DLT vs. 13 with SLT, compared to 21 males (70% M) with DLT and 41 males with SLT (p=0.303). Group 2 had 16 females with DLT vs. 8 with SLT, compared to 32 males (72% M) with DLT and 23 males with SLT (p=0.478). In group 1, median length of stay was 16 days for SLT vs. 19 days for DLT; in group 2, it was 17 days for SLT vs. 20 days for DLT. Within groups 1 and 2, we compared incision type (antero-axillary, clamshell, and median sternotomy; p=0.001) and induction type (Campath and Simulect; p=0.241 and p=0.824), and found no differences in BMI, race, donor age, or concomitant procedures. In group 1, recipient age did not differ (66 SLT vs. 66 DLT; p=0.881), nor did it in group 2 (67 SLT vs. 62 DLT; p=0.996). There were significant differences in LAS in group 1 (50 SLT vs. 66 DLT; p=0.001) and in group 2 (51 SLT vs. 69 DLT; p=0.002). Kaplan-Meier curves showed no survival difference between SLT and DLT in groups 1 and 2 (Fig. 1).

Conclusion: Our results showed no difference in survival between single and double lung transplantation in IPF patients, whether pulmonary artery mean pressure was below or above 25 mmHg. 

 

89.04 Incidence and Outcomes of Pleural Effusion in Liver Transplantation

J. W. Clouse1, C. A. Kubal1, A. N. Blumenthaler1, R. S. Mangus1  1Indiana University School Of Medicine,Transplant/Surgery,Indianapolis, IN, USA

Background

Pulmonary complications after liver transplantation have previously been associated with longer hospital stays, increased time on a ventilator, and higher mortality. This study reports the incidence and outcomes of a specific pulmonary complication, pleural effusion, in orthotopic liver transplant recipients at an active transplant center. 

Methods

Records from a single transplant center were analyzed retrospectively for all adult patients (>17 years of age) receiving a liver transplant over a four-year period from July 2013 through June 2017. Radiologic reports were used to diagnose pleural effusions and determine therapeutic interventions. Patients with a pleural effusion documented by radiographic imaging within 30 days pre- or post-transplant were considered cases; those without an effusion were non-cases. Outcomes included length of hospital stay, discharge disposition, hospital readmission, and discharge with home oxygen. Patient survival was assessed using Cox regression survival modeling.
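
A minimal sketch of the Cox regression survival modeling mentioned above is shown below, using the lifelines package in Python. The covariates, column names, and data are assumptions chosen to mirror the abstract's variables (effusion status, MELD, retransplantation), not the center's actual model.

```python
# Illustrative sketch only: the abstract does not specify its model code or full
# covariate set; column names here are assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 512
df = pd.DataFrame({
    "months": rng.exponential(30, n),            # follow-up time post-transplant
    "death": rng.binomial(1, 0.3, n),            # 1 = died during follow-up
    "effusion": rng.binomial(1, 0.20, n),        # peri-transplant pleural effusion
    "meld": rng.normal(22, 7, n),
    "retransplant": rng.binomial(1, 0.05, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
cph.print_summary()      # hazard ratios (exp(coef)) with 95% CIs for each covariate
```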

Results

During the study period, 512 liver transplants were performed, with 102 patients (20%) developing a peri-transplant pleural effusion. In total, 47 patients (9%) had a pre-transplant effusion, 87 (17%) had a post-transplant effusion, and 32 (6%) had both. Factors associated with the presence of any pleural effusion included increasing MELD score, retransplantation, and a diagnosis of alcoholic liver disease. Median hospital stay for patients with any effusion was 17 days, compared to 10 days for patients with no effusion. Of the patients with pleural effusion, 48% either died in the hospital or required admission to a rehabilitation facility at discharge (52% were discharged home). Readmission to the hospital within 90 days of discharge occurred in 69% of effusion patients. Cox regression analysis showed decreased survival among patients who developed pleural effusion at 12, 24, and 36 months post-transplant.

Conclusion

Overall, 20% of liver transplant recipients developed a peri-transplant pleural effusion. In this patient population, pleural effusion was associated with significantly decreased post-transplant survival. Risk factors for the development of pleural effusion included higher MELD score (>20), retransplantation, and alcoholic liver disease.

 

89.02 Tumor Size or Tumor Number? Which Is More Predictive of Survival with Hepatocellular Carcinoma?

K. R. Vines2, P. Li1, S. Bergstresser2, B. Comeaux1, D. Dubay3, S. Gray1,2, D. Eckhoff1,2, J. White1,2  1University Of Alabama at Birmingham,Department Of Surgery, Division Of Transplantation,Birmingham, Alabama, USA 2University Of Alabama at Birmingham,School Of Medicine,Birmingham, Alabama, USA 3Medical University Of South Carolina,Department Of Surgery, Division Of Transplantation,Charleston, Sc, USA

Introduction: Hepatocellular carcinoma (HCC) is the leading cause of death among patients with cirrhosis and is common in the US and worldwide, with an approximate 5-year survival rate of 10-15%. Tumor size and number are among the main factors affecting prognosis and treatment response. The aim of this study was to investigate the predictive power of tumor number and size on the overall survival of HCC patients.

Methods: We utilized our prospectively collected single-center database to identify a cohort of 436 HCC patients who received cancer treatment other than liver transplantation. We reviewed records to analyze tumor number, the diameter of the largest lesion or the sum of diameters of the largest three lesions (SDL3), other tumor characteristics, and survival status. Patients were sub-grouped into 4 categories: single lesion smaller than 5 cm, single lesion equal to or greater than 5 cm, multiple lesions with SDL3 <5 cm, and multiple lesions with SDL3 ≥5 cm. The Kaplan-Meier method was used to compare overall survival among the 4 groups. A Cox regression model was used to calculate hazard ratios (HR) controlling for other tumor characteristics, including major vessel involvement and portal hypertension.

Results: After controlling for major vessel involvement and portal hypertension, smaller lesion size (SDL3 <5 cm vs SDL3 ≥5 cm, HR=0.74, p=0.0085) and smaller lesion number (single vs multiple, HR=0.75, p=0.0160) each indicated better survival. When SDL3 was <5 cm, there was no significant difference in survival between single and multiple lesions (multiple vs single, HR=1.05, p=0.7945). In addition, there was no significant survival difference between patients with a single large lesion and patients with multiple smaller lesions (single ≥5 cm vs multiple SDL3 <5 cm, HR=1.10, p=0.6636). However, patients with multiple large lesions (SDL3 ≥5 cm) tended to have the worst survival compared to the other three groups (HR=1.46 vs multiple small lesions, p=0.0671; HR=1.54 vs single small lesion, p=0.0017; HR=1.32 vs single large lesion, p=0.1073).

Conclusion: Our results suggest that both lesion number and size are important in predicting patient survival. Furthermore, patients with multiple lesions may have worse survival than patients with a single lesion of similar or smaller size. Our results also suggest that the sum of diameters of the largest three lesions may be clinically more informative than the diameter of the largest lesion alone.

 

88.18 Infection, Local Complication, and Graft Failure Rates in Alloplastic Cranioplasty Reconstruction

J. D. Oliver2, J. B. Mancilla1, K. Vyas1, B. Sharaf1  1Mayo Clinic,Plastic And Reconstructive Surgery,Rochester, MN, USA 2Mayo Clinic,School Of Medicine,Rochester, MN, USA

Introduction: Reconstructive surgeons currently have numerous options for repairing acquired defects of the cranium. Traditionally, autologous bone was used as the gold standard in cranial vault reconstruction, dating back to the early 1600s. More recently, alloplastic cranioplasty was developed and has evolved significantly as different implant materials have been introduced, including Titanium Mesh (Ti), Polymethyl Methacrylate (PMMA), Polyether Ether Ketone (PEEK), and Norian implants. There are few data in the literature comparing the surgical outcomes of these alloplastic cranioplasty methods, and no systematic review of such outcomes among the materials compared in this study has been published. Our objective was to compare postoperative rates of infection, local complications, and graft failure following cranioplasty reconstruction using Titanium Mesh (Ti), Polymethyl Methacrylate (PMMA), Polyether Ether Ketone (PEEK), and Norian implants.

Methods: We performed the first systematic review of the available literature on four methods of alloplastic cranioplasty reconstruction (Titanium Mesh [Ti], Polymethyl Methacrylate [PMMA], Polyether Ether Ketone [PEEK], and Norian implants), using the Newcastle-Ottawa Quality Assessment Scale guidelines for article identification, screening, eligibility, and inclusion. The electronic literature search included Medline/PubMed, Scopus, and the Cochrane Database.

Results: A total of 47 studies and 2,972 adult patients were included in our review. Overall, Titanium Mesh (Ti) was associated with the lowest postoperative infection rate (3.71%), the fewest postoperative local complications (9.23%), and the lowest rate of graft failure requiring reoperation (1.80%), compared with Polymethyl Methacrylate (PMMA), Polyether Ether Ketone (PEEK), and Norian implants, all of which yielded significantly higher rates of infection, local complication, and graft failure postoperatively. For PMMA, PEEK, and Norian, respectively, the rates were 7.10%, 8.75%, and 19.56% for infection; 11.40%, 20.0%, and 26.09% for local complications; and 3.37%, 7.50%, and 15.22% for graft failure.

Conclusion: Current data suggest more favorable outcomes, as measured by infection rate, local surgical complication rate, and graft failure rate, with Titanium Mesh (Ti) cranioplasty reconstruction. This study is a preliminary analysis that begins to address the knowledge gap in infection, local surgical complication, and failure rates in alloplastic cranioplasty, although longer-term and randomized trials are warranted to validate any associations found in this study.

 

88.19 Nationwide Outcomes of Laparoscopic Versus Open Ventral Hernia Repair with Component Separation

S. Scurci1, J. Parreco1, J. Buicko1, A. Rice1, R. Rattan3, R. Chandawarkar2  3University Of Miami,Trauma And Acute Care Surgery,Miami, FL, USA 1University Of Miami – Palm Beach Regional Campus,General Surgery,Miami, FL, USA 2Ohio State University,Plastic Surgery,Columbus, OH, USA

Introduction:

The optimal repair of large ventral hernias presents a challenge to surgeons and has not yet been elucidated. Component separation is one method commonly used to bring autologous tissue together at the midline without tension. Open component separation requires a large subcutaneous dissection to create myocutaneous flaps, with division of perforator vessels that can lead to flap necrosis, wound infection, and seroma. Theoretically, laparoscopic component separation reduces these wound complications by avoiding an extensive dissection. Small observational studies have compared outcomes after component separation; however, no large nationwide studies have been published thus far. The purpose of this study was to compare outcomes, including nationwide readmission rates, after laparoscopic vs open component separation.

Methods:

The Nationwide Readmission Database for 2013-2014 was queried for all patients aged 18 years or older undergoing elective ventral hernia repair with component separation. Patients undergoing laparoscopic versus open repair were compared for the following outcomes: length of stay (LOS) > 7 days, in-hospital mortality, readmission within 30 days, and readmission within 30 days to a different hospital. Univariable logistic regression was performed for these outcomes, and variables with p<0.05 were entered into multivariable logistic regression. Results were weighted for national estimates.
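
The univariable-screen-then-multivariable workflow described above can be sketched as follows in Python with statsmodels. This is illustrative only; the variable names, the use of frequency weights as a stand-in for NRD discharge weights, and the synthetic data are assumptions, not the study's actual specification.

```python
# Illustrative sketch only; the discharge weights and variable names here are
# assumptions, not the study's actual model specification.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 6867
df = pd.DataFrame({
    "readmit_30d": rng.binomial(1, 0.12, n),
    "laparoscopic": rng.binomial(1, 0.023, n),
    "age": rng.normal(55, 13, n),
    "elixhauser": rng.poisson(2, n),
    "weight": rng.uniform(1.0, 2.5, n),          # stand-in for an NRD discharge weight
})

candidates = ["laparoscopic", "age", "elixhauser"]

# 1) Univariable screen: keep predictors with p < 0.05.
keep = []
for var in candidates:
    X = sm.add_constant(df[[var]])
    fit = sm.GLM(df["readmit_30d"], X, family=sm.families.Binomial(),
                 freq_weights=df["weight"]).fit()
    if fit.pvalues[var] < 0.05:
        keep.append(var)

# 2) Multivariable model with the retained predictors, weighted toward national estimates.
X = sm.add_constant(df[keep]) if keep else sm.add_constant(df[["laparoscopic"]])
final = sm.GLM(df["readmit_30d"], X, family=sm.families.Binomial(),
               freq_weights=df["weight"]).fit()
print(np.exp(final.params))   # odds ratios
```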

Results:

There were 6,867 patients who underwent ventral hernia repair with component separation during the study period. There were 158 (2.3%) patients undergoing laparoscopic repair. Multivariable logistic regression revealed that patients undergoing laparoscopic repair had a reduced risk for LOS > 7 days (OR 0.29, p<0.01) and readmission within 30 days (OR 0.41, p=0.01). However, there was no difference in mortality (p=0.27) and patients undergoing laparoscopic repair were at increased risk for readmission to a different hospital (OR 5.57, p=0.02) (Table 1).

Conclusion:

The laparoscopic approach to repair of ventral hernia with component separation is associated with improved outcomes including decreased readmission rates. However, readmissions after laparoscopic surgery are often to a different hospital and studies that miss these readmissions are at risk for underestimating readmission rates.  The most common cause for readmission was wound complications.  Laparoscopic component separation reduced readmissions and LOS, making it an ideal alternative to open component separation to reduce post-operative morbidity.

88.13 To Scan or not to Scan: Evaluation of Children with Head Trauma and Unconfirmed Loss of Consciousness

J. D. Kauffman1, C. N. Litz1, S. A. Thiel1, A. Carey1, P. D. Danielson1, N. M. Chandler1  1Johns Hopkins All Children’s Hospital,Division Of Pediatric Surgery,St. Petersburg, FLORIDA, USA

Introduction: Traumatic brain injury (TBI) results in nearly half a million pediatric emergency department (ED) visits annually in the U.S. Computed Tomography (CT) is often used to evaluate for TBI but increases a child’s risk of malignancy. The PECARN Pediatric Head Injury/Trauma Algorithm was developed to provide guidance as to when CT is indicated for children presenting after head trauma. One criterion in the algorithm, history of loss of consciousness (LOC), may be indeterminate if the event is unwitnessed. The purpose of this study is to determine whether children presenting with unknown history of LOC are at greater risk for TBI than those with no history of LOC.

Methods:  Following IRB approval, the institutional trauma registry was reviewed to identify children 0–17 years of age presenting within 24 hours of minor head injury with a Glasgow Coma Scale (GCS) score of 14 or 15 from January 2010 to April 2017. Those who underwent CT prior to arrival, those with penetrating injuries, and those suspected of non-accidental trauma were excluded. Age-specific predictor variables for clinically important TBI (ciTBI), defined as TBI resulting in hospital admission for two or more nights, intubation for greater than 24 hours, neurosurgical intervention, or death, were extracted. Data were analyzed using the chi-square test or Fisher's exact test, as indicated.
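
The chi-square versus Fisher's exact comparison described above can be illustrated with the short Python sketch below. The 2x2 counts are invented for demonstration and do not come from the study; falling back to Fisher's exact test when an expected cell count is below 5 is an assumed convention, not a stated part of the study's analysis.

```python
# Illustrative sketch only; the counts below are made up to show the mechanics.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Rows: indeterminate-LOC vs. no-LOC; columns: underwent CT vs. no CT.
table = np.array([[40, 23],
                  [150, 268]])

chi2, p_chi2, dof, expected = chi2_contingency(table)

# Fall back to Fisher's exact test when any expected cell count is < 5.
if (expected < 5).any():
    _, p = fisher_exact(table)
else:
    p = p_chi2
print(f"p = {p:.4f}")
```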

Results: Among 1852 patients reviewed, 741 met inclusion criteria. Median age was 7.6 years; 66% were male. The majority (56.4%) reported no LOC, 260 (35.1%) reported LOC, and in 63 (8.5%) LOC history was indeterminate. Those in the indeterminate LOC group were significantly more likely than those with no reported LOC to undergo CT, but significantly less likely to have evidence of TBI on CT (Table 1). There was no difference in rate of ciTBI or neurosurgical intervention between groups. Each of the three children in the indeterminate LOC group who developed ciTBI met criteria (altered mental status and/or severe mechanism of injury) that would have resulted in CT being recommended even if LOC history had not been considered. Overall, 89% of those in the indeterminate LOC group exhibited findings apart from LOC status that justified CT. Of the remaining 11% (those with no clinical criteria for CT apart from indeterminate LOC status), none developed TBI.

Conclusion: Children presenting to the ED within 24 hours of minor head injury for whom the history of LOC is unknown, and who otherwise meet PECARN criteria for observation, may not be at greater risk than those with no history of LOC for findings of TBI on CT, ciTBI, or need for neurosurgical intervention. Additional positive findings on the PECARN algorithm may be used to direct the need for CT or observation.

88.10 Prediction of Adverse Outcomes in TBI Using Continuous Hemodynamic Monitoring and Biomarker Levels

A. M. Crawford1, S. Yang1, C. L. Ramirez1, P. Hu1, Y. Li1, H. Li1, T. M. Scalea1, D. M. Stein1  1University Of Maryland,Shock Trauma And Anesthesiology Research (STAR)-Organized Research Center,Baltimore, MD, USA

Introduction: Early identification of patients at risk for adverse outcomes after traumatic brain injury (TBI) could preclude immediate long-distance military air evacuation and help determine whether a patient is "fit to fly" to a neurosurgical-capable facility. In this study, various biomarkers were tested for their ability to predict adverse intracranial pressure (ICP) changes in severe TBI before they occur.

Methods: Adult trauma patients admitted directly with severe TBI were prospectively enrolled. Continuously measured vital signs (VS) and biomarker levels were obtained on admission and every 6 hours for 72 hours. Systemic vital signs (SVS), such as blood pressure and heart rate, and intracerebral monitoring (ICM) values, such as ICP and cerebral perfusion pressure (CPP), were recorded. Fifteen individual biomarkers, including cytokines, and their associated principal components (BMPC1-15) [Fig 1] were used in a boosting decision tree model to predict ICP elevation and hypoperfusion in the following 6 hours. The area under the receiver operating characteristic curve (AUROC) was used to evaluate outcome prediction. Variable importance was ranked based on contribution to the outcome prediction.
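
For orientation, the boosting decision tree and AUROC evaluation described above might look like the following Python sketch. The features, outcome label, cross-validation scheme, and data are assumptions for illustration; the abstract does not specify the implementation.

```python
# Illustrative sketch only; feature names, the 6-hour outcome label, and the
# cross-validation scheme are assumptions for demonstration.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 491   # biomarker measurement epochs

X = pd.DataFrame({
    "il8": rng.lognormal(2, 1, n),
    "s100b": rng.lognormal(0, 1, n),
    "mean_icp": rng.normal(15, 6, n),
    "cpp": rng.normal(70, 12, n),
})
y = rng.binomial(1, 0.25, n)   # ICP > 20 mmHg for > 30 min in the next 6 hours

model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)

# Out-of-fold predicted probabilities so the AUROC is not evaluated on training data.
probs = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
print(f"AUROC = {roc_auc_score(y, probs):.2f}")

# Variable importance ranked by contribution to the prediction.
model.fit(X, y)
importance = pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importance)
```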

Results: 50 patients were enrolled in the study. The mean age was 40±18.9 years and 78.7% were male. Median admission motor Glasgow Coma Score was 3, median Marshall Classification score was 3, and the in-hospital mortality rate was 22.9%. A total of 491 biomarker measurements were available. The models predicted ICP > 20 mmHg for > 30 min with an AUROC of 0.75 (95% CI: 0.73-0.78) and CPP < 50 mmHg for > 15 min in the next 6 hours with an AUROC of 0.76 (95% CI: 0.74-0.78). BMPC9, driven by IL-8 and S100β, ranked as an important variable [Fig.1]. The individual biomarker IL-8 alone had the highest contribution to the prediction of ICP elevation.

Conclusion: Cumulative biomarker patterns, together with elevated ICP, carry prognostic importance, with ICP remaining the most direct predictor of neurological worsening. Biomarker patterns provide informative data in TBI and could enable early prediction of intracranial insult, thus expediting care of the critically injured and improving patient outcomes.

 

88.12 Correlation in Trauma Patients between Mild Traumatic Brain Injury and Facial Fractures

M. C. Justin1, E. Kiwanuka3, M. A. Chaudhary1, E. J. Caterson1,2  1Brigham And Women’s Hospital,Center For Surgery And Public Health, Department Of Surgery, Harvard Medical School,Boston, MA, USA 2Brigham And Women’s Hospital,Division Of Plastic Surgery, Department Of Surgery, Harvard Medical School,Boston, MA, USA 3Brown University School Of Medicine,Division Of Plastic Surgery, Department Of Surgery,Providence, RI, USA

Introduction: Mild traumatic brain injury (mTBI) remains a diagnostic challenge, and delayed diagnosis can prevent early intervention. Force can be transmitted through the facial skeleton to the intracranial space, leading to direct injury or coup/contrecoup insult. Facial fractures can therefore serve as an objective surrogate marker of potential force transmission to the neural cavity. We hypothesized that, within the National Trauma Data Bank (NTDB), we could characterize the association of facial fractures and mTBI across all injury severity scores (ISS). A secondary hypothesis was that fractures higher up the craniofacial skeleton, away from the mandible, would correlate more strongly with mTBI, owing to their proximity to the cranial vault and the impulse required to fracture these bones.

 

Methods: Data from the NTDB (2007-2014) were used for this retrospective cross-sectional study. Patients with mTBI and facial fractures were identified using International Classification of Diseases, Ninth Revision (ICD-9) codes. mTBI was identified with ICD-9 codes using the 2003 CDC definition of mTBI. Facial fractures were classified into nasal bone, mandible, malar and maxilla, orbital floor, and "other facial fractures." Frontal bone fractures were not assessed for correlation because they are grouped with parietal and other skull vault fractures in ICD-9 coding. The absence of diagnostic codes for other skull or facial fractures was then used to characterize each type of facial fracture as the only type present. Further subdivision by ISS was performed.

 

Results: Of the 5,855,226 patients diagnosed with a traumatic injury, 19.2% were found to have an mTBI. The prevalence of mTBI in patients with isolated facial fractures ranged from 18.2% to 33.3%. The correlation strengthened going up the craniofacial skeleton, with the lowest incidence in mandible fractures and the highest in the other facial fracture category (table). At mild ISS, similar trends were demonstrated, with the lowest association of mTBI with mandible fractures and the highest incidence with isolated nasal bone fractures.

 

Conclusions: Isolated facial fractures are associated with a high incidence of concurrent mTBI at all ISS levels. Without distracting from current trauma protocols and the treatment of immediately life-threatening injuries, clinicians can use this information in poly-trauma patients to alert them to the potential presence of mTBI. The higher a fracture lies on the craniofacial skeleton, the higher the expected likelihood of mTBI. Our data suggest that, in trauma patients, facial fractures can serve as clinical markers for mTBI, raising both awareness and the potential for early intervention.

88.08 Right Place at Right Time: Thoracotomies at Level I Trauma Centers Are Associated with Improved Survival

J. R. Oliver1, C. J. DiMaggio3,4, M. L. Duenes1, A. M. Velez5, S. G. Frangos3, C. D. Berry5, M. Bukur3  1New York University School Of Medicine,New York, NY, USA 3New York University School Of Medicine,Department Of Surgery, Division Of Trauma And Acute Care Surgery,New York, NY, USA 4New York University School Of Medicine,Department Of Population Health,New York, NY, USA 5New York University School Of Medicine,Department Of Surgery,New York, NY, USA

Introduction:  Previous studies have demonstrated a survival benefit for severely injured patients treated at Level 1 Trauma Centers (TCs). Emergency thoracotomy (ET) is a rare procedure performed on patients presenting in extremis. The objective of this study was to assess whether ET performed at Level 1 TCs is associated with improved survival.

Methods:  This was a retrospective study utilizing the National Trauma Databank 2014-2015. Patients were stratified according to TC ACS verification level. Patient demographics, outcomes, and center characteristics were compared.  Multivariate regression was conducted with mortality as the outcome adjusting for differences in patient characteristics.

Results: 1559 ETs were included in this study. 43.3% of ETs were performed at Level 1 TCs, while 14.1% were performed at Level 2 and 43.6% at other TCs. 1079 ETs were performed in the emergency department (ED), while 480 were performed in the operating room (OR). Over the two-year study period, Level 1 TCs performed significantly more ETs (12.6 ± 16.4) than non-Level 1 TCs (6.3 ± 11.1, p = 0.0003). Mean patient age (34.6 years) and gender (85.8% male) were similar; more Hispanics and Caucasians were treated at Level 1 TCs (p < 0.0001). Patients treated at Level 1 TCs had a higher median Injury Severity Score (ISS) (26.0 ± 20.0 vs. 25.0 ± 20.7, p = 0.007), were less likely to have signs of life on arrival (29.8% vs. 35.4%, p = 0.02), and were more likely to have severe (Abbreviated Injury Score ≥ 3) brain injuries (15.1% vs. 9.8%, p = 0.002) and abdominal injuries (36.4% vs. 27.0%, p = 0.004). Patients treated at Level 1 TCs had significantly higher survival (24.3% vs. 19.5%, Adjusted Odds Ratio (AOR) = 1.44, 95% CI = 1.04 – 1.99, p = 0.03). ETs performed in the OR had 45.0% survival vs. 11.1% in the ED (AOR = 2.30, 95% CI = 1.70 – 3.18, p < 0.0001), despite patients having a similar injury burden (ED median ISS 25.0 ± 21.5 vs. OR ISS 25.0 ± 17.5, p = 0.56). Penetrating injuries had 25.4% survival following ET vs. 13.7% for blunt injuries (AOR = 3.08, 95% CI = 2.18 – 4.39, p < 0.0001).

Conclusion: ETs performed at Level 1 TCs were associated with higher survival rates as compared with non-Level 1 TCs suggesting that survival after severe injury is to some degree procedurally related.  Outcomes were also improved when ETs were performed in the controlled environment of the OR and on patients with penetrating mechanisms of injury.

88.09 12 Year Review of Urban vs Rural Recreational Vehicle Injuries at a Level 1 Trauma Center

C. A. Butts1, R. Gonzalez1, L. Nguyen1, J. P. Gaughan1, S. Ross1, J. Porter1, J. P. Hazelton1  1Cooper University Hospital,Trauma, Surgical Critical Care, & Acute Care Surgery,Camden, NEW JERSEY, USA

Introduction: Traditionally, all-terrain vehicles (ATV) and dirt bikes (DB) have been used in rural locations for recreation and work. Recently, there has been an increase in the use of these vehicles in urban environments. The aim of this study was to compare the injury patterns of patients involved in crashes while riding recreational vehicles in urban (URV) and rural (RRV) environments.

Methods: A retrospective review (2005-2016) of patients who presented to an urban Level I trauma center as a result of any ATV or DB crash was performed.   URV was defined as any ATV or DB accident which occurred on paved inner city, suburban or major roadways. RRV was defined as those accidents which occurred on secondary roadways or off-road.  Patients who presented more than 48 hours from time of accident were excluded. A p<0.05 was considered significant.

Results: 528 patients were identified to have an ATV or DB injury [RRV n=296 (56%); URV n=232 (44%)]. Patients involved in URV accidents had a higher ISS (12.2 vs 9.7; p<0.05), lower presenting GCS (13.8 vs 14.3; p<0.05), and were more likely to need emergent procedures (ie: intubation, central line, tube thoracostomy) in the trauma bay (28.5% vs. 17.9%; p=0.005). URV patients were less likely to have been helmeted (39.6% vs 71.2%; p<0.001), and more likely to have traumatic brain injuries (35.8% vs 27.7%; p=0.058) or extremity injuries (53.5% vs 41.2%; p=0.006). Additional injury patterns for the two groups were as follows: face (18.1% vs 12.5%, p=0.09); spine (18.1% vs 19.6%, p=0.74); thoracic (34.9% vs 34.1%, p=0.85); abdomen (14.2% vs 16.9%, p=0.47); pelvis (7.8% vs 6.4%, p=0.61). There was no difference in the remaining hospital outcomes, including mortality.

Conclusion: Our data suggest that URV use was associated with decreased helmet use, higher mean ISS, lower presenting GCS, an increased need for emergent trauma bay procedures, higher rates of traumatic brain injury, and higher rates of extremity injuries.

 

88.05 3-Factor vs. 4-Factor PCC in Coagulopathy of Trauma: Four is Better Than Three

M. Zeeshan1, M. Hamidi1, T. O’Keeffe1, N. Kulvatunyou1, A. Tang1, E. Zakaria1, L. Gries1, A. Jain1, B. Joseph1  1University Of Arizona,Tucson, AZ, USA

Introduction:
Coagulopathy of trauma (COT) is common and highly lethal. Prothrombin complex concentrate (PCC) has been shown to be a useful adjunct for the correction of this coagulopathy. However, the difference in efficacy between 3-factor PCC (3-PCC) vs. 4-factor PCC (4-PCC) in correcting COT remains unclear. The aim of our study is to compare the efficacy of 3-PCC vs. 4-PCC in trauma patients.

Methods:
A 4-year (2013-2016) review of all trauma patients at our Level I trauma center who received 3-factor or 4-factor PCC was performed. Patients were divided into two groups (4-PCC and 3-PCC) and were matched in a 1:2 ratio using propensity score matching for demographics, injury severity, admission vitals, pre-injury warfarin use, and initial INR. Corrected INR was defined as INR ≤1.5. Outcome measures were time to correction of INR, pRBC and FFP units transfused, thromboembolic complications (DVT or mesenteric thrombosis), and mortality. A sub-analysis was performed on patients with induced coagulopathy (on oral anticoagulants).

Results:
516 patients who received PCC were analyzed, of which 210 (4-PCC, 70; 3-PCC, 140) were matched. The mean age was 50 ± 17 years; 55% were male, and median [IQR] ISS was 25 [14–36]. 4-factor PCC was associated with faster correction of INR (336.7 vs. 401 min; p=0.02) and fewer pRBC units (5.4 vs. 6.9 units; p=0.03) and FFP units (3.1 vs. 4.5 units; p=0.03) transfused. There was no difference in thromboembolic complications (1.7% vs. 2.5%; p=0.51) or mortality rate (23% vs. 25%; p=0.56) between the two groups. On sub-analysis of patients on warfarin (n=42), 4-factor PCC use resulted in faster correction of INR (357.7 vs. 455.3 min; p=0.01) and fewer FFP units transfused (3.8 vs. 5.1 units; p=0.03). However, there was no reduction in pRBC requirements (0.9 vs. 1.2 units; p=0.56).

Conclusion:
4-factor PCC is more effective than 3-factor PCC for reversal of coagulopathy of trauma, correcting the INR more rapidly and decreasing transfusion requirements. 4-factor PCC should be considered the preferred agent for rapid reversal of coagulopathy of trauma.
 

88.04 It Still Hurts! Persistent Pain One Year After Injury

C. Velmahos1, J. P. Herrera-Escobar2, S. S. Al Rafai2, J. M. Lee1, R. Rivero3, M. Apoj3, H. M. Kaafarani1, G. Kasotakis3, A. Salim2, D. Nehra2, A. H. Haider2  1Massachusetts General Hospital,Boston, MA, USA 2Brigham And Women’s Hospital,Boston, MA, USA 3Boston University,Boston, MA, USA

Introduction:  Chronic pain after major trauma decreases productivity, impedes functional recovery, and increases health care costs. Early identification of trauma patients at higher risk of developing chronic pain may lead to interventions that improve long-term outcomes. The aim of the study was to identify early predictors of chronic pain and long-term use of pain medications after major trauma.  

Methods:  We interviewed major trauma patients (Injury Severity Score ≥ 9) from three level I trauma centers at 6- and 12-months after injury. We evaluated the presence of daily pain using the Trauma Quality of Life questionnaire and used multivariate logistic regression models to identify patient- and injury-related independent predictors of chronic pain and use of pain medications 6-12 months after injury. The models included demographics, educational level, injury characteristics, hospital course variables, and accounted for correlation within trauma center. The three most significant predictors of chronic pain after major trauma were identified based on the highest coefficients and were used to create a probability table predicting chronic pain. 
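
The probability table built from the three strongest predictors, as described above, can be illustrated with the hedged Python sketch below. The predictor names, coefficients, and data are assumptions; the sketch only shows the mechanics of predicting risk for every combination of three binary predictors from a fitted logistic model.

```python
# Illustrative sketch only; predictor names and coefficients are assumptions meant to
# show how a probability table over three binary predictors can be built.
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 608
df = pd.DataFrame({
    "age_lt_65": rng.binomial(1, 0.7, n),
    "work_injury": rng.binomial(1, 0.1, n),
    "discharge_rehab": rng.binomial(1, 0.3, n),
})
logit = -0.8 + 0.97*df["age_lt_65"] + 1.09*df["work_injury"] + 0.75*df["discharge_rehab"]
df["chronic_pain"] = rng.binomial(1, 1/(1 + np.exp(-logit)))

X = sm.add_constant(df[["age_lt_65", "work_injury", "discharge_rehab"]])
fit = sm.Logit(df["chronic_pain"], X).fit(disp=0)

# Probability table: predicted risk of chronic pain for every combination of the
# three predictors.
grid = pd.DataFrame(list(itertools.product([0, 1], repeat=3)),
                    columns=["age_lt_65", "work_injury", "discharge_rehab"])
grid["predicted_risk"] = fit.predict(sm.add_constant(grid))
print(grid)
```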

Results: Of 608 patients interviewed, 304 (50%) reported having daily pain and 140 (23%) were taking pain medications daily. Among those who reported having pain, 40% reported taking pain medications. Age < 65 [OR: 2.63 (95% CI: 1.78-3.89)], high school or lower education [OR: 1.52 (95% CI: 1.04-2.17)], motor-vehicle crash (MVC) [OR: 1.71 (95% CI: 1.09-2.64)], work-related injury [OR: 2.96 (95% CI: 1.21-7.22)], discharge to rehabilitation [OR: 2.11 (95% CI: 1.31-3.39)], and increased hospital stay [OR: 1.04 (95% CI: 1.01-1.07)] were significant independent predictors of chronic pain after major trauma. In contrast, high school or lower education [OR: 1.91 (95% CI: 1.22-3.00)], falls [OR: 1.97 (95% CI: 1.14-3.40)], and discharge to rehabilitation [OR: 1.77 (95% CI: 1.01-3.11)] were significant predictors of chronic pain medication use post-trauma. Together, the three most significant independent predictors markedly increased the probability of chronic pain after trauma, as shown in the Table.

Conclusion: Age, educational level, MVC, work-related injuries, discharge disposition, and hospital length of stay were identified as early predictors of chronic pain after major trauma. Similarly, educational level, discharge disposition, and falls predicted chronic use of pain medications. Identifying patients at higher risk for chronic pain and usage of pain medications can be used to offer appropriate clinical services and closely monitor patients’ pain and its treatment.

 

88.02 Oral Xa Inhibitors versus Low Molecular Weight Heparin for Thromboprophylaxis after Non-Operative Spine Trauma

M. N. Khan1, M. Zeeshan1, E. Zakaria1, N. Kulvatunyou1, T. O’Keeffe1, L. Gries1, A. Tang1, A. Jain1, B. Joseph1  1University Of Arizona,Tucson, AZ, USA

Introduction:
Patients with spinal trauma have the highest risk of venous thromboembolism (VTE) despite pharmacological thromboprophylaxis. Oral Xa inhibitors (XaInh) are recommended after major orthopedic operations; however, their role in spine trauma is not well defined. The aim of our study was therefore to assess the impact of XaInh on VTE in spinal trauma patients managed non-operatively.

Methods:
A 2-year (2013-2014) review of the TQIP database was performed for all patients with isolated spine trauma (spine abbreviated injury scale [S-AIS] ≥2 and other body region AIS <4) who were managed non-operatively and received prophylactic anticoagulation with either low molecular weight heparin (LMWH) or XaInh. Patients were divided into two groups based on the thromboprophylactic agent received (LMWH vs. XaInh) and matched in a 1:2 ratio using propensity score matching for demographics, admission vitals, injury severity, timing of initiation of thromboprophylaxis, and level of spine injury. Outcomes were the incidence of deep venous thrombosis (DVT) and pulmonary embolism (PE), return to the operating room (OR), and mortality.

Results:
We analyzed a total of 22,260 patients, of which 603 (LMWH: 402, XaInh: 201) were matched. The matched groups were similar in age (p=0.34), gender (p=0.39), systolic blood pressure (p=0.46), heart rate (p=0.53), injury severity score (p=0.44), and S-AIS (p=0.43). Patients who received XaInh were less likely to develop a DVT (1% vs. 4.9%, p=0.01) than those who received LMWH. Furthermore, there were no differences in PE rates (p=0.55), return to the operating room (p=0.11), mortality rate (p=0.29), or hospital (p=0.34) and intensive care unit (p=0.24) length of stay between the two groups.

Conclusion:
Oral Xa inhibitors were more effective than low molecular weight heparin as prophylactic pharmacological agents for the prevention of deep venous thrombosis in patients with non-operatively managed spinal trauma. The two drugs had similar safety profiles. Further prospective trials should be performed to inform specific recommendations.
 

87.07 Small Sized Aorta in Left-sided Congenital Diaphragmatic Hernia Improves Following Repair of the Defect.

P. E. Lau1, C. C. Style1, S. M. Cruz1, D. A. Castellanos2, T. C. Lee1, J. A. Kailin2, D. L. Cass1, C. Fernandes3, S. G. Keswani1, O. O. Olutoye1  1Baylor College Of Medicine,Michael E Debakey Department Of Surgery,Houston, TX, USA 2Texas Children’s Hospital,Cardiology/Pediatrics,Houston, TX, USA 3Texas Children’s Hospital,Neonatology/Pediatrics,Houston, TX, USA

Introduction: Congenital diaphragmatic hernia (CDH) has been associated with smaller left heart structures and aorta, which are thought to result from extrinsic compression by the viscera in the left chest cavity or from intrinsic hypoplasia. The purpose of this study was to evaluate the effect of visceral herniation on aortic dimensions in neonates with CDH.

Methods: A retrospective review of the medical records of neonates with CDH treated at a tertiary children's hospital from January 2011 to December 2016 was performed. Prenatal ultrasounds and MRIs were used to assess defect sidedness, the severity of CDH using the observed-to-expected lung-to-head ratio (O/E LHR) and observed-to-expected total fetal lung volume (O/E-TFLV), and the degree of visceral herniation using percent of liver herniation (%LH) into the chest cavity. Aortic measurements from transthoracic echocardiograms obtained in the immediate post-natal period and following surgical CDH repair were reviewed and reported as deviations from normal values (z-scores).

Results: A total of 113 CDH neonates were identified (90 left-sided and 23 right-sided). The aortic annulus, aortic root, transverse arch, aortic isthmus, and aortic valve were significantly smaller in L-CDH compared to R-CDH neonates (Table 1). A paired t-test showed a significant increase in the size of the aortic annulus, root, ascending aorta, transverse arch, and aortic valve after surgical repair (Table 1). There were no differences in aortic dimensions in right-sided CDH after repair. Using Pearson's correlation coefficient, there was no significant correlation between aortic dimensions and severity of visceral herniation as determined by %LH, or with lung hypoplasia as determined by O/E-TFLV and O/E LHR.
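
The paired t-test and Pearson correlation analyses described above can be sketched in Python as follows. The z-score and %LH values are synthetic stand-ins for the echocardiographic and imaging measurements described in the abstract; only the mechanics of the two tests are shown.

```python
# Illustrative sketch only; the arrays below are synthetic stand-ins, not study data.
import numpy as np
from scipy.stats import ttest_rel, pearsonr

rng = np.random.default_rng(5)
n = 90   # left-sided CDH neonates

# Aortic annulus z-scores before and after CDH repair (paired within each neonate).
z_pre = rng.normal(-1.5, 0.8, n)
z_post = z_pre + rng.normal(0.6, 0.5, n)

t_stat, p_paired = ttest_rel(z_post, z_pre)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.4f}")

# Correlation between an aortic dimension and percent liver herniation (%LH).
pct_lh = rng.uniform(0, 40, n)
r, p_corr = pearsonr(z_pre, pct_lh)
print(f"Pearson r = {r:.2f}, p = {p_corr:.4f}")
```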

Conclusion:  Neonates with left-sided CDH have smaller thoracic aortas compared to those with right-sided CDH. There was no correlation between the degree of aortic hypoplasia and the severity of CDH. However, CDH repair is associated with a significant improvement in the aortic hypoplasia. These findings will aid in the prenatal counseling of these patients.

87.03 Parent Assessment of a Tablet-Administered BOQ-P Feedback System in the Outpatient Surgery Setting

P. H. Chang1,3, J. Nelson1, L. Fowler1, P. Warner1,3, S. Romo2, M. Murphy2, R. Sheridan2,4  1Shriners Hospitals For Children,Cincinnati, OH, USA 2Shriners Hospitals For Children,Boston, MA, USA 3University Of Cincinnati,Division Of Plastic/Burn Surgery,Cincinnati, OH, USA 4Massachusetts General Hospital,Department Of Surgery,Boston, MA, USA

Introduction:
Parent proxy questionnaires are an important tool in the assessment of outcomes in the pediatric burn population. Surveys performed by mail suffer from low response rates, and surveys performed in an outpatient clinic suffer from the time pressures of a busy clinical schedule. Our group has been administering the Burn Outcomes Questionnaire for 5-18 year olds together with the 17-item Pediatric Symptom Checklist (BOQ+P) on a tablet to a group of pediatric burn patients undergoing outpatient procedures. The objective of this study was to assess parental attitudes about the survey.

Methods:
The BOQ+P was completed for 39 patients, aged 5-18 years, undergoing a scheduled outpatient surgical procedure at a single pediatric burn center. The BOQ+P was administered to parents on iPads using the TonicHealth app while the patients were undergoing pre-operative preparation. Parents were then given a separate paper survey asking for their opinions about the experience of taking the survey on the iPad.

Results:
39 patients were enrolled in the study at this site, 23 male and 16 female. The average age was 11.43 years for males and 12.31 years for females (range 5-18 years). 36 parents responded to the post-BOQ+P survey. 97% of parents reported that the iPad was easy to use and were either satisfied or very satisfied with the experience. 100% of parents reported that the results of the BOQ+P were discussed during the visit. 89% of parents felt that the discussion with the burn clinician was helpful because of the results from the BOQ+P iPad survey. 86% of parents expressed a desire to see the BOQ+P on iPad used at future outpatient visits.

Conclusion:
The highly favorable responses of the parents towards administration of the BOQ+P on iPad demonstrate the perceived value of the instrument as a clinical tool to enhance communication between burn care provider and family.  Furthermore, tablet administration of the BOQ+P was deemed a user-friendly means to complete the surveys.  Use of a parent-proxy questionnaire is feasible in the outpatient surgical setting.  Clinical providers have the opportunity to discuss findings from the questionnaire in a setting where parents and burn providers can talk privately while the children are recovering post-anesthetically in another area.
 

86.19 Analysis of Multidisciplinary Pediatric Clinic Weight Reduction Program: Are parents disengaged?

B. D. Hughes1, C. B. Cummins1, O. Nunez-Lopez1, J. Prochaska2, E. Lyons3, D. Jupiter2, K. Perino3, A. Glaser4, R. S. Radhakrishnan1,4, K. D. Bowen-Jallow1  1University Of Texas Medical Branch,Division Of Surgery,Galveston, TX, USA 2University Of Texas Medical Branch,Preventive Medicine And Community Health,Galveston, TX, USA 3University Of Texas Medical Branch,Department Of Nutrition And Metabolism,Galveston, TX, USA 4University Of Texas Medical Branch,Department Of Pediatrics,Galveston, TX, USA

Introduction:
Pediatric obesity is a major public health concern. Severe obesity affects approximately 6.8% of adolescents in the U.S., and this subgroup is the most rapidly growing segment of the pediatric population diagnosed with obesity. Efforts to prevent and eliminate obesity have highlighted the importance of parental engagement, and there is evidence linking parents' knowledge, attitudes, and behavior with childhood obesity.

Methods:
Parents and obese adolescents are evaluated in our multidisciplinary clinic for an intensive weight reduction program. After the initial clinic visit, subsequent visits are planned every 4-6 weeks. At the initial visit and every 3 months thereafter, the parents and participants are independently administered 57- and 64-question surveys created by our research team, respectively. This study focuses on parental engagement based on questions selected from the survey, with an emphasis on self-perception, goal-setting, individual effort, and utilization of technology for weight reduction. 

Results:
Of the categories selected, the differences between the obese adolescents and their caretakers are reported. The main stem of each question was the same; only the words addressing the cohorts as either caretakers or adolescents were modified. The percentage difference between the categorical responses of participants is provided, along with the number of participants who answered 'yes' to the question and its associated percentage (adolescents versus parents, respectively): Self-perception: 4% (40 [44%] vs. 36 [40%]); Goal-setting: 28% (38 [73%] vs. 24 [45%]); Individual effort: 10% (26 [29%] vs. 17 [19%]); Utilization of technology: 2% (19 [21%] vs. 17 [19%]).

Conclusion:
Parental engagement is imperative for weight reduction efforts in obese youth, and programs aimed at weight reduction should incorporate methods to survey both parents and obese participants. In this study, parents' responses regarding self-perception, individual effort, and utilization of technology aligned with those of the obese youth participants. An area of discordance was goal-setting, which may signify a barrier to weight loss; goal-setting could therefore represent an area of potential impact as interventions advance through the curriculum. Comparison of data between the initial visit and planned subsequent visits is necessary to validate these findings. 
 

86.16 Intestinal Function After Early vs. Late Appendectomy in Children with Perforated Appendicitis

A. N. Munoz1, R. Hazboun1, I. Vannix1, V. Pepper1, T. Crane1, E. P. Tagge1, D. C. Moores1, J. E. Baerg1  1Loma Linda University School Of Medicine,Division Of Pediatric Surgery,Loma LInda, CA, USA

Introduction:

To prospectively document the impact of early vs. late operation on intestinal function in children undergoing planned appendectomy at initial presentation of perforated appendicitis.

Methods:

After IRB approval, complete data were prospectively collected between September 2016 and August 2017 for children undergoing planned appendectomy for perforated appendicitis. Pathologist-confirmed transmural perforations were included. Antibiotics and intra-operative irrigation were standardized. The median time to operation after the first abdominal pain was 3 days (range: 1-9 days). Operation on day 2 or before (early) was compared to operation on day 3 or after (late). Vomiting, nasogastric tubes (NGs) placed for vomiting, and time to tolerate a diet were used to evaluate intestinal function. Categorical and continuous variables were analyzed by chi-square and t-tests; p<0.05 was considered significant. Data were reported as mean (standard deviation) and median (range).

Results:

125 children with abdominal pain and suspected perforated appendicitis underwent appendectomy (99% laparoscopic); 101 had a confirmed perforation and were included. They were 67% male and 80% Hispanic, and none were Asian.

There were 45 children in the early group and 56 in the late group, with 22/56 (39%) operated on day 3 (range: 3-9 days). Follow-up evaluation was documented in 44/101 (41%), at a median of 41.5 days (range: 5-81 days).

Children with early appendectomy were significantly younger (7.8 [3.5] vs. 9.5 [3.8] years; p=0.02).

Pre-appendectomy, over 80% of each group was vomiting (p=0.84). There were no significant differences in NGs (p=0.07), WBC (p=0.62), fever (p=0.29), diarrhea (p=0.17), or imaged abscesses (p=0.97). The maximum imaged-abscess diameter was significantly greater in the late group (p=0.02); none were drained. 

At appendectomy, reported purulent fluid (p=0.41), fecaliths (p=0.48) and operation time (p=0.07) did not differ significantly.  

Post-appendectomy, 5 (5%) children developed abscesses (p=0.38) treated with drainage and antibiotics; 4 recovered. One child in the late group had a persistent obstruction and required laparotomy 12 days after appendectomy. The late group had a significantly longer hospital stay than the early group (5.6 [4.3] vs. 3.5 [2.2] days; p=0.01). All 44 children with documented follow-up evaluation recovered completely.

Conclusion:

A cohort of younger children with rapid progression of perforated appendicitis who recover after appendectomy was prospectively identified. Over 80% vomited before operation, but by the second post-operative day, only 18% were still vomiting. If appendectomy is performed on the third day of pain or later, significantly more NGs are placed and the time to tolerate a diet is significantly prolonged. Early operation for perforated appendicitis is beneficial in children. 

 

86.12 Does age affect surgical outcomes following ileo-pouch anal anastomosis in children?

N. Bismar1, A. S. Patel1,2, D. Schindel1,2  1University Of Texas Southwestern Medical Center,Pediatric Surgery,Dallas, TX, USA 2Children’s Medical Center,Pediatric Surgery,Dallas, Tx, USA

Introduction:
To determine whether younger children undergoing laparoscopic restorative proctocolectomy, mucosectomy, and ileo-pouch anal anastomosis (LRS-IPAA) have outcomes comparable to those of older counterparts in the treatment of ulcerative colitis (UC) and familial adenomatous polyposis (FAP).

Methods:
After IRB approval, a review of 65 children with FAP or UC who underwent LRS-IPAA at a children's hospital from 2002 to 2017 was performed. The study population was separated into two groups based on age: young group (YG), 5-12 years; older group (OG), 13-18 years. Patient demographics, post-procedure course, and outcomes data were collected. Statistical analysis was performed using GraphPad software (San Diego, CA).

Results:
There were 65 children identified: YG, n=22 (13 with UC; 9 with FAP), with 15 females and 7 males; OG, n=43 (28 with UC; 15 with FAP), with 20 females and 23 males. Following LRS-IPAA, continence, appetite recovery, use of antidiarrheal medications, and complications were not significantly different between groups. The incidence of pouchitis was 21.5% (n=14): YG (n=5) vs. OG (n=9) (p=NS). The incidence of anastomotic stricture was 13.8% (n=9): YG (n=2) vs. OG (n=7) (p=NS). Two children (one in each group) required re-operative adhesiolysis after presenting with a bowel obstruction (p=NS). Three children elected to have a loop ileostomy constructed secondary to chronic rectal pain and failure to achieve full continence following LRS-IPAA; all three were in the OG (p=NS).

Conclusion:
There were no significant differences in outcomes between younger children and older pediatric patients following LRS-IPAA for the treatment of FAP or UC. While the numbers are small, these data suggest that younger age should not be a deterrent when contemplating LRS-IPAA in the treatment of UC and FAP in the pediatric population.