77.12 Effect of Patient-Centric Discharge Process on Readmissions for Kidney Transplant Recipients

A. Chandrasekaran1, L. Sharma1, M. Gadkari1, P. Ward1, T. Pesavento1, M. Hauenstein1, S. Moffatt-Bruce1 1Ohio State University, Surgery, Columbus, OH, USA

Introduction: This research investigates the effect of a newly developed, patient-centric, standardized discharge process on patient anxiety at discharge and on 30-day readmissions for kidney transplant recipients. Inconsistencies in information delivery during patient discharge are generally blamed for the high number of 30-day readmissions among kidney transplant recipients. A standardized discharge process comprises activities with clearly defined content, sequence, timing, and outcome. In this research, a new standardized discharge process was developed through the participation of nurses who perform discharge planning in five structured workshops, supplemented with input from patients. We are collecting and analyzing patient data from before and after process standardization to evaluate the efficacy of the new discharge process.

Methods: This is a three-year study, and we are currently in year 2. In year 1, we collected pre-intervention data (qualitative and quantitative) from over 100 kidney transplant recipients at a major kidney transplant center in the United States. In addition, we shadowed nurses during the delivery of discharge instructions and collected data on their job satisfaction. In year 2, we conducted five structured workshops involving inpatient and outpatient nurses (with input from patients) to design and develop a standardized discharge process. We also standardized various teaching aids, IT systems, and patient books to reflect the newly developed standards. These new standards were adopted starting July 27, 2015. At that time, we also initiated continuous improvement activities (using morning huddles) in these units to discuss the implementation and further improvement of the discharge procedures. In year 3, we will collect patient data on readmissions and quality of life upon discharge, as well as data from nurses on their quality of work. Comparisons will be made with the pre-intervention data.

Results: Preliminary results using pre-intervention data from 100 patients indicate that patient anxiety one week after discharge strongly predicts readmission. Readmission rates prior to our intervention were approximately 27%. Specifically, the odds of readmission increase by 38% for a one-unit increase in a patient's post-discharge anxiety level. We also found that patient anxiety levels depend primarily on having a standardized discharge process and on the empathy shown during the hospital stay. We are in the process of collecting patient and caregiver data after implementation of the new standardized discharge process. We expect that patient anxiety levels, readmissions, and caregiver job satisfaction will improve as a result of the intervention.
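
The 38% figure reads most naturally as an odds ratio from a logistic regression of 30-day readmission on post-discharge anxiety; the abstract does not name the model, so the interpretation below is an assumption:

\[ \mathrm{OR} = e^{\hat{\beta}} \approx 1.38, \qquad \frac{\text{odds}(\text{readmission} \mid a+1)}{\text{odds}(\text{readmission} \mid a)} = 1.38, \]

so each one-unit rise in the anxiety score multiplies the odds of readmission by 1.38, and a two-unit rise would multiply them by roughly \(1.38^2 \approx 1.9\).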

Conclusion: Our study aims to produce an efficient, patient-centric discharge process that improves kidney transplant recipients' discharge experiences and increases compliance with discharge instructions, thereby reducing readmissions.

77.07 The Incidence and Risk Factors of Hepatocellular Cancer Recurrence Following Liver Transplantation

E. D. Colhoun1, C. G. Forsberg1, K. D. Chavin1, P. Baliga1, D. Taber1 1Medical University of South Carolina, Charleston, SC, USA

Introduction: Hepatocellular carcinoma (HCC) rates have doubled over the past two decades, making it the sixth most common cancer and the third most lethal cancer worldwide. Orthotopic liver transplant (OLT) is now the gold standard for treating HCC, although recurrence of HCC after OLT remains an obstacle.

Methods: We performed a single-center retrospective longitudinal cohort study to determine the baseline and follow-up variables most strongly associated with HCC recurrence. We gathered pre- and post-transplant data and conducted univariate and multivariable analyses to assess variables predicting HCC recurrence after OLT.
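
The hazard ratio reported in the Results suggests a Cox proportional hazards model for recurrence-free survival, although the abstract does not name the model. The sketch below, using the Python lifelines library, is illustrative only; the file and column names are hypothetical and the single-center dataset is not reproduced here.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical columns: months to recurrence or last follow-up, a 0/1
# recurrence indicator, and candidate pre-transplant predictors.
df = pd.read_csv("hcc_olt_cohort.csv")

cph = CoxPHFitter()
cph.fit(
    df[["months_to_event", "recurrence", "max_pretransplant_cholesterol",
        "afp_difference", "days_treatment_to_transplant", "max_tumor_size_cm"]],
    duration_col="months_to_event",
    event_col="recurrence",
)
cph.print_summary()  # hazard ratios with confidence intervals and p-values
```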

Results: Between 2003 and 2015, 141 patients underwent OLT for HCC at our institution. We identified 9 cases of documented HCC recurrence (6.4% of patients). Univariate analysis indicated that the mean serum alpha-fetoprotein (AFP) difference (maximum pre-transplant value minus the most recent value prior to transplant) was significantly lower in the HCC recurrence group (157.7 vs 0.64; p=0.018), as were the mean maximum pre-transplant cholesterol (162.9 vs 119.3; p=0.038) and the mean days between HCC treatment initiation and transplantation (146.9 vs. 37.9; p=0.025). Maximum tumor size was also larger in the HCC recurrence group (3.1 vs. 2.2 cm; p=0.14), although this difference did not reach statistical significance. Multivariable analysis revealed that only the pre-transplant mean maximum serum cholesterol level was independently associated with HCC recurrence-free survival (HR 11.0, p=0.004; see Figure 1).

Conclusion: The risk of HCC recurrence following OLT was quite low in this cohort, which may reflect low baseline risk or under-reporting. Based on the univariate and multivariable analyses, pre-transplant cholesterol was the strongest predictor of recurrence and may help clinicians risk-stratify patients for appropriate post-transplant monitoring and follow-up.

77.08 Does Donor Organ Quality Affect Outcomes of Patients with Severe Liver Decompensation?

A. Bertacco1, J. Merola1, K. Giles1, G. D’Amico1, S. Luczycki1, S. Kulkarni1, P. S. Yoo1, S. Emre1, D. Mulligan1, M. Rodriguez-Davalos1 1Yale University School of Medicine, Surgery/Transplantation, New Haven, CT, USA

Introduction: Patients with high Model for End-stage Liver Disease (MELD) scores are at high risk of death without liver transplantation (LT). The ability of MELD to predict post-transplant outcomes has been criticized, and several scoring systems incorporating donor and recipient data have been developed to improve prediction. The aim of this study was to assess how donor quality affects the outcomes of liver transplant recipients with a MELD >35 or acute liver failure (Status 1a) under the current allocation system.

Materials and Methods: This is a retrospective, IRB-approved study of adult patients with MELD >35 or Status 1a who received LT from 7/2006 to 7/2015. Pediatric and living donor recipients were excluded. Recipient demographics, etiology, previous transplant, combined liver-kidney transplant, and donor quality were analyzed. Extended criteria donor (ECD) grafts were defined by donor age >65 yrs, BMI >35, HBcAb positivity, HCV seropositivity, DCD (donation after circulatory death), split liver, CIT (cold ischemia time) >12 hrs, total bilirubin >2 mg/dl, and/or requirement of more than 2 vasopressors. Survival in recipients transplanted with high-risk (ECD) and standard criteria donors (SCD) was compared using the Kaplan-Meier method.
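
For readers unfamiliar with the survival comparison described above, a minimal Kaplan-Meier/log-rank sketch using the Python lifelines library is shown below. It is illustrative only: the file and column names are hypothetical and the study data are not reproduced here.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("high_meld_lt_cohort.csv")  # hypothetical file
ecd = df[df["donor_type"] == "ECD"]
scd = df[df["donor_type"] == "SCD"]

kmf = KaplanMeierFitter()
kmf.fit(ecd["survival_days"], event_observed=ecd["died"], label="ECD")
ax = kmf.plot_survival_function()
kmf.fit(scd["survival_days"], event_observed=scd["died"], label="SCD")
kmf.plot_survival_function(ax=ax)

# Log-rank test for a difference between the two survival curves.
result = logrank_test(ecd["survival_days"], scd["survival_days"],
                      event_observed_A=ecd["died"], event_observed_B=scd["died"])
print(result.p_value)
```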

Results: Fifty-six patients with MELD >35 received LT during the study period (35 male and 21 female). Mean age was 51 years. Eleven (19.6%) patients had acute liver failure (ALF) and 20 (35.7%) were in the ICU at the time of transplant. Six (10.7%) patients required ventilator support. Ten of the 56 (17.8%) patients received combined liver-kidney transplants (CLKT) and three patients had a previous LT.

Mean MELD for patients with chronic liver disease was 41.4. The most common indication was HCV (37.5%), followed by alcohol (16.07%). Among ALF cases, 1 (9.2%) was HBV-related, 4 (36.3%) were drug-induced, and 6 (54.5%) were of unknown etiology.

Seventeen (30.3%) patients were transplanted with ECD grafts. Patient survival at 90 days was similar in the ECD and SCD groups (93% and 97%; ns). There was no statistically significant difference between the SCD and ECD groups in 1- and 3-year patient survival (85.7% and 57.1% for ECD vs. 76.4% and 56.8% for SCD; p=0.661).

Conclusions: Donor organ quality does not appear to affect early patient survival after LT, although patients with MELD >35 may have worse long-term survival. Our analysis showed no significant difference between marginal and standard donors for high-MELD or Status 1a recipients. This finding suggests that recipients with high MELD scores or fulminant liver failure may benefit from this pool of grafts. Accepting well-selected ECD livers and proceeding to timely LT may result in better patient outcomes than waiting longer for an SCD graft; however, long-term outcomes need to be considered.

77.09 No Difference in Post-Transplant Length of Stay in Pediatric Liver Transplant Recipients

C. S. Hwang1,2, M. Mac Conmara1,2, T. Meza2, D. Desai1,2, E. J. Alfrey3 1University of Texas Southwestern Medical Center, Surgery, Dallas, TX, USA 2Children’s Medical Center, Pediatric Transplantation, Dallas, TX, USA 3Stanford University, Surgery, Palo Alto, CA, USA

Introduction: Previous studies have examined factors that increase post-operative length of stay (LOS) in the adult liver transplant population, and severity of illness has been shown to increase LOS in that population. We examined how severity of illness, as reflected by the patient's medical condition at the time of transplant, affected overall, pre-transplant, and post-transplant LOS in the pediatric liver transplant population at a single center.

Methods: We examined outcomes in all pediatric patients who underwent liver transplantation between January 2010 and July 2014. Recipient and donor demographic and outcome data were examined, including age, race, sex, weight, blood type, cold storage time (CST, time from donor aortic cross-clamp to out of ice) in minutes, anastomotic time in minutes, estimated blood loss (EBL) at transplant, intraoperative transfusion requirement, presence and number of rejections, graft survival, liver disease, total LOS, medical condition at the time of transplant (admitted to the intensive care unit (ICU), admitted to the ward, or coming from home), pre-transplant LOS, and post-transplant LOS. Continuous variables were compared using the unpaired Student's t-test and nominal variables using either the chi-square or Fisher's exact test. A p-value of <0.05 was considered significant.
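
As an illustration of the statistical comparisons described above (unpaired t-test for continuous variables; chi-square or Fisher's exact test for nominal variables), the following Python/scipy sketch uses made-up numbers rather than the study's data.

```python
import numpy as np
from scipy import stats

# Illustrative values only, not the study's measurements.
post_los_icu = np.array([25, 31, 40, 18, 29])   # post-transplant LOS, ICU group (days)
post_los_home = np.array([12, 22, 30, 15, 19])  # post-transplant LOS, home group (days)

# Unpaired Student's t-test for a continuous variable.
t_stat, p_continuous = stats.ttest_ind(post_los_icu, post_los_home)

# Chi-square test (or Fisher's exact test for small counts) for a nominal
# variable, e.g. rejection vs. no rejection by group.
table = np.array([[4, 10],    # ICU group: rejection, no rejection
                  [3, 15]])   # home group: rejection, no rejection
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)
```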

Results: Forty-six pediatric patients underwent liver transplantation. Recipient age at transplantation was 62.7±59.5 months, weight was 20.2±15.5 kg, CST was 432±98 minutes, anastomotic time was 57±17 minutes, EBL was 399±501 mL, intraoperative transfusion requirement was 360±371 mL, donor age was 89.5±115 months, and donor weight was 26.3±20.7 kg. The average LOS for all patients was 24.7±12.6 days. Mean LOS was 31.7±13.8 days, 29.0±10.9 days, and 20.2±11.2 days in the ICU, ward, and home groups, respectively. Average pre-transplant LOS in the ICU, ward, and home groups was 13.3±7.8, 14.8±11.7, and 0.7±0.5 days, respectively. Overall LOS and pre-transplant LOS were significantly longer in the ICU and ward groups than in the home group (p < 0.05 and p < 0.0001, respectively). Post-transplant LOS was 18.5±10.5, 14.2±4.1, and 19.6±11.1 days for the ICU, ward, and home groups (p=NS). There was also no significant association between the etiology of liver disease and LOS.

Conclusion: Patients who had increased severity of illness had an increased overall length of stay. The length of stay pre-transplant did not differ significantly between the ICU and ward groups. Despite the ICU and ward patients being more ill pre-transplant, this did not translate into a longer post-transplant length of stay.

77.05 HLA-DR13 and -DR15 Are Associated with Short-term Rejection in Lung Transplantation

T. W. Liang1, A. S. Gracon1, K. Rothhaar1, J. Wu1, D. S. Wilkes1,2 1Indiana University School of Medicine, Indianapolis, IN, USA 2University of Virginia, Charlottesville, VA, USA

Introduction:
Lung transplantation outcomes are among the least favorable in solid organ transplantation, with the majority of recipients eventually developing bronchiolitis obliterans syndrome (BOS) and subsequent graft failure. Human leukocyte antigen (HLA)-DR15 has been implicated in the pathogenesis of BOS, and the presence of this and other HLA-DR antigens may play a role in these poor outcomes. The objective of this study was to identify the effect of these antigens on patient outcomes.

Methods:
Lung transplant donor and recipient data were retrospectively gathered from the United Network for Organ Sharing (UNOS) database from January 2006 to June 2013. Donor and recipient characteristics, the proportion of recipients treated for rejection within the first year after transplant, the 5-year survival rate, and the 5-year rate of freedom from BOS were determined according to HLA-DR1, -DR7, -DR13, and -DR15 status in both the donor and the recipient. Each HLA-DR allele was stratified by donor and recipient pair status: donor- and recipient-positive (D+R+); donor-positive, recipient-negative (D+R−); donor-negative, recipient-positive (D−R+); and donor- and recipient-negative (D−R−) for a particular allele.
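
The pair-status stratification can be made concrete with a small helper; this is only a sketch of the labeling scheme described above, not the authors' code.

```python
def dr_pair_status(donor_positive: bool, recipient_positive: bool) -> str:
    """Classify a donor/recipient pair for a single HLA-DR allele."""
    if donor_positive and recipient_positive:
        return "D+R+"
    if donor_positive:
        return "D+R-"
    if recipient_positive:
        return "D-R+"
    return "D-R-"

# Example: a DR15-positive donor lung transplanted into a DR15-negative recipient.
print(dr_pair_status(True, False))  # -> "D+R-"
```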

Results:
A total of 8,161 lung transplant recipients were analyzed. There were significant but small differences in donor and recipient characteristics for each HLA-DR group. Recipients in the D−R+ pairing for HLA-DR13 and those in the D+R− pairing for HLA-DR15 had significantly higher rates of treatment for rejection within the first year after transplant (p=0.024 and p=0.001, respectively). There were no differences in 5-year survival or freedom from BOS for any of the four HLA-DR alleles studied.

Conclusion:
Rates of treatment for rejection within the first year were higher among recipients who were negative for HLA-DR15 but received a donor-positive lung, and among recipients who were positive for HLA-DR13 but received a donor-negative lung for that allele. However, these differences do not appear to affect long-term survival or freedom from BOS.

77.06 Outcome Analysis of Intraoperative Hemodialysis in the Highest Acuity Liver Transplant Recipients

M. Selim1, M. Zimmerman1, J. Kim1, K. Regner2, D. C. Cronin1, K. Saeian3, L. A. Connolly4, K. K. Lauer4, H. J. Woehlck4, J. C. Hong1 1Medical College of Wisconsin, Division of Transplant Surgery/Department of Surgery, Milwaukee, WI, USA 2Medical College of Wisconsin, Division of Nephrology/Department of Medicine, Milwaukee, WI, USA 3Medical College of Wisconsin, Division of Gastroenterology and Hepatology/Department of Medicine, Milwaukee, WI, USA 4Medical College of Wisconsin, Department of Anesthesiology, Milwaukee, WI, USA

Introduction: The Model for End-Stage Liver Disease (MELD) score (range 6 to >40) is an accurate predictor of pre-transplant mortality. Liver transplantation (LT) in critically ill patients with end-stage liver disease (MELD >35) and concomitant renal failure carries high intraoperative and immediate post-transplant risk. Intraoperative hemodialysis (IOHD) has been utilized to support these patients during LT. We sought to analyze outcomes after LT in patients requiring IOHD.

Methods: We performed a retrospective analysis of our prospective database of 82 adult LTs performed between October 2012 and August 2015. The median follow-up was 12 months.

Results: Among the 82 LTs, 37 (45.1%) patients had a MELD score >35. Thirty-three patients had pre-LT renal dysfunction: 25 (76%) underwent IOHD (Grp I) and 8 (24%) did not (Grp II). Hepatitis C and alcohol were the most common etiologies of liver failure. While mean MELD scores were comparable between Grp I (43 ± 5) and Grp II (41 ± 5), patients in Grp I were more acutely ill than those in Grp II: they had a higher need for ICU management and urgent HD prior to LT (96% in Grp I versus 50% in Grp II, p=0.002) and an increased requirement for immediate post-transplant HD (92% in Grp I versus 50% in Grp II, p=0.007). In Grp I, 48% received a combined liver-kidney transplant (CLK) compared with 87.5% in Grp II (p=0.04). Among patients alive at 3 months post-transplant, HD-free recovery of renal function was 100% in Grp I and 75% in Grp II (p=0.018). Organ preservation ischemic times and intraoperative blood transfusion requirements were comparable between groups. There was a trend toward less intraoperative post-reperfusion hemodynamic instability in Grp I (16%) versus Grp II (37.5%) (p=0.19). Post-LT patient and graft survival rates were comparable for Grp I and Grp II at 30 days (88% vs. 100%) and at 6 months (84% vs. 100%) (p=0.14).

Conclusions: IOHD is a safe therapeutic adjunct during LT in patients with the highest acuity. Early identification of patients who require IOHD may improve intraoperative and post-LT survival and facilitate recovery of post-LT renal function.

77.03 Outcomes Following Laparoscopic and Open Appendectomy in Kidney Transplant Recipients

N. Dagher1, I. Olorundare1, S. DiBrito1, C. Landzabal1, D. Segev1,2 1Johns Hopkins University School of Medicine, Department of Surgery, Baltimore, MD, USA 2Johns Hopkins Bloomberg School of Public Health, Department of Epidemiology, Baltimore, MD, USA

Introduction: Published case series and opinion articles have suggested that kidney transplant recipients (KTR) have higher complication rates and longer length of stay (LOS) following appendectomy than non-transplant patients (NTP). The literature also suggests that, compared with laparoscopic appendectomy (LA), open appendectomy (OA) is safer and associated with fewer complications in KTR. These assumptions have likely shifted KTR care from the community to kidney transplant (KT) centers and toward a conservative open approach. To our knowledge, this is the only reported study to date investigating outcomes of LA vs OA in the KTR population using a large nationwide dataset.

Methods: The Nationwide Inpatient Sample database was used to identify 1,336 KTR and 2.6 million NTP who underwent appendectomy between 2000 and 2011. Surgical approach (LA vs OA) and location of surgery at either a transplant center (defined as a hospital performing at least one KT during the study period) or a non-transplant center were compared. Postoperative complications for both groups were categorized using ICD-9 codes, and risk factors for complications were tested using logistic regression. Negative binomial regression models were used to compare LOS and hospital charges.
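
A minimal sketch of the modeling strategy described above (logistic regression for complications, negative binomial regression for LOS and charges) is shown below using Python's statsmodels. The variable names are hypothetical; the NIS extract itself is not reproduced here.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical column names for an NIS-style analytic file.
df = pd.read_csv("appendectomy_cohort.csv")

# Logistic regression: any postoperative complication vs. transplant status,
# surgical approach, and basic demographics.
complication_model = smf.logit(
    "complication ~ ktr + laparoscopic + age + female", data=df
).fit()
print(complication_model.summary())

# Negative binomial regression for length of stay (an overdispersed count outcome).
los_model = smf.glm(
    "los ~ ktr + laparoscopic + age + female",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()
print(los_model.summary())
```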

Results: A total of 1,336 KTR and 2.6 million NTP who underwent appendectomy were identified. LA was performed in 44% of KTR and 55% of NTP. KT centers performed 49.6% (663) and non-KT centers 50.4% (673) of appendectomies on KTR. Although the overall rate of complications in KTR was significantly higher than in NTP undergoing OA (22.2% vs 14.8%), rates were similar for LA (10.6% vs 8.7%). Compared with NTP, KTR were not at greater risk of complications overall (OR 0.95, 95% CI: 0.65-1.37); however, they were more likely to have infectious (OR 1.82, 95% CI: 1.00-3.31) or wound (OR 3.25, 95% CI: 1.41-7.48) complications. Median LOS was longer in KTR regardless of surgical approach (KTR: 3 days for LA vs 4 days for OA; NTP: 2 days for LA vs 3 days for OA). Median hospital charges were also higher for KTR for both LA and OA ($8600 and $9900 for KTR vs $7100 and $6300 for NTP). Transplant centers and non-transplant centers had similar complication rates (14.8% vs 19.2%), LOS (median 3 vs 3 days), and hospital charges ($6700 vs $6800) for KTR.

Conclusion: Although KTR are more likely to undergo OA, they have better outcomes following LA, and these outcomes are comparable to those of NTP. This suggests that, when indicated and technically feasible, LA should be favored. KTR have a longer LOS and higher hospital charges regardless of surgical approach or type of center. Outcomes do not differ significantly between transplant and non-transplant centers, suggesting it is safe to perform appendectomy on KTR at either type of center.

77.04 Individualized Immunosuppression in the Elderly Preserves Excellent Outcomes with Improved Value

C. Eymard1, W. Ally1, K. L. Brayman1, A. Agarwal1 1University of Virginia, Charlottesville, VA, USA

Introduction: Modern induction immunosuppression protocols for renal transplantation balance the risks of over-immunosuppression, namely drug toxicity and infection, against graft rejection. There is limited literature on optimizing induction therapy with polyclonal antibodies in the elderly population (≥65 years). Rising health care costs and fixed payments are pressuring transplant programs to balance quality and fiscal management, and beyond the surgery itself, medications such as anti-thymocyte globulin (ATG) are the largest expense in transplantation. The aim of this study was to evaluate graft outcomes and associated cost savings of reduced immunosuppression in elderly renal transplant patients as part of a risk-stratified immunosuppression protocol.

Methods: This is a retrospective cohort study of elderly patients undergoing solitary renal transplantation from 2011 to 2014. Risk stratification (RS) was based on immunological risk (sensitization and age), with ATG dosing based on adjusted rather than total body weight. All patients were maintained on tacrolimus, mycophenolate mofetil (MMF), and prednisone. The MMF target dosage was reduced from 2 g/day to 1.5 g/day. Patient survival (PS), graft survival (GS), acute rejection (AR), total ATG dosage (TD), and ATG cost are reported. Univariate analysis was performed, and p<0.05 was considered significant.

Results: Twenty-one elderly patients underwent transplantation: 4 in the historical (HT) and 17 in the RS group. Mean follow-up was 34±3 months in the HT and 23±8 months in the RS group. One-year patient and graft survival were comparable (HT: 100% vs. RS: 100%; p=ns). Demographics were similar between groups, including mean age (67±2 vs. 69±1 years; p=ns), proportion of living donor transplants (0% vs. 6%; p=ns), and proportion with delayed graft function (25% vs. 31%; p=ns). The HT group received twice the total ATG dose per patient compared with the RS group (450±92 mg (5.3 mg/kg) vs. 209±57 mg (2.6 mg/kg), p<0.05). MMF dosage was reduced by 30% in the RS group (1875±216 mg/day vs. 1312±258 mg/day; p=0.01). Renal function, as determined by eGFR, remained excellent at 3 and 6 months (52±13 ml/min/1.73m2 vs. 65±23; 46±16 vs. 62±21; p=ns). Acute cellular and humoral rejection rates were similar between groups at 6 months (25% vs. 6%, p=ns; and 0% vs. 12%, p=ns). Overall 6-month infection rates were similar between cohorts (75% vs. 58%), with urinary tract infection the most common. The 53% reduction in ATG use in the RS cohort translated into a savings of $5825 per patient.

Conclusion:
These data demonstrate that an individualized approach to immunosuppression with reduced-dose ATG appears to maintain excellent clinical outcomes, with low rejection rates and improved value of care through quantifiable cost savings. The data also suggest that future modifications incorporating immune monitoring could allow further optimization of immunosuppression.

76.22 Falls from Height and Motor Vehicle Accidents during Snowvember 2014 in Western New York

D. Gleason1, W. A. Guo1 1State University of New York at Buffalo, Surgery, Buffalo, NY, USA

Introduction: The November 13–21, 2014 North American winter storm was an epic lake-effect snow event. It produced copious snowfall and crippled parts of Western New York, earning the nickname ‘Snowvember’. We describe the relationship between this blizzard and emergency hospital admissions for falls and motor vehicle-related injuries in Western New York.

Methods: We obtained data from our trauma registry on adult trauma patients with ICD-9 codes for external causes of injury (E880-888, E810-819, and E820-825) over a one-month period (Nov 13-Dec 12) in 2013 and 2014. The data were analyzed after removal of identifiers.

Results: Figure 1 shows snowfall in Western NY in 2014 and 2013. The number of admissions in 2014 was roughly double that of 2013 for both falls from height (110 vs 61) and MVAs (78 vs 41). There was no difference in sex distribution or age. The ISS in 2014 was lower for falls (6.5 vs 9.0, p<0.01) but higher for MVAs (59 vs 50, p<0.05) compared with 2013 (Table). Hospital LOS was shorter in 2014 than in 2013 for patients sustaining falls (p<0.05). The most common cause of MVA in 2014 was loss of control without collision on the highway (E-code 816), and the proportion of this cause was significantly higher in 2014 than in 2013 (29.5% vs 10%, χ2 p=0.015).

Conclusion: Our study demonstrates that this epic snowstorm in Western New York significantly increased hospital admissions for falls from height and MVAs. Although the state government imposed a driving ban during the storm, hospital admissions for, and the ISS of, motor vehicle-related injuries were significantly higher, with a greater proportion due to loss of vehicle control. To prevent similar unnecessary injuries during future snowstorms, public service campaigns and media reports advising the public of the risks of injury are advisable.

77.01 Risk Factors for Perioperative Death After Kidney Transplant in the Elderly Population

A. E. Ertel1, K. Wima1, R. S. Hoehn1, D. E. Abbott1, S. A. Shah1 1University of Cincinnati Medical College, General Surgery, Cincinnati, OH, USA

Introduction: Due to the increase in kidney transplants performed in the elderly population (>65 years old), we aimed to characterize the effects of recipient age on perioperative outcomes in the United States.

Methods: The Scientific Registry of Transplant Recipients (SRTR) was queried for all kidney transplantations from 2009-2012 and linked to the University HealthSystems Consortium (UHC) database (n=32,010). Two groups were created: recipients <65 years (n=26,479) and elderly recipients >65 years (n=5,531). Primary endpoints were in-hospital mortality and resource utilization metrics in the perioperative period.

Results: Compared with recipients <65 years, elderly recipients were more likely to be white and male and to have multiple comorbidities, including diabetes, ischemic heart disease, and cancer. They were more likely to receive extended criteria allografts (34.2% vs. 12.4%, p<0.001) and less likely to receive living donor kidneys (28.9% vs. 41.1%, p<0.001). Elderly recipients also had higher in-hospital mortality and readmission rates and were less likely to be discharged home (Table). On multivariate analysis, in-hospital mortality after kidney transplant was independently associated with recipient age >65 (OR 2.4, 95% CI 1.7-3.4) and with moderate to extreme severity of illness (OR 4.1, 95% CI 1.6-10.2).

Conclusion: Elderly recipients represent 17.3% of kidney transplants in this national cohort. In-hospital mortality and resource utilization are significantly higher for elderly patients (>65) undergoing kidney transplantation as compared to their younger counterparts despite controlling for donor and recipient variables and adjusting for patient selection.

77.02 Kidney Transplant Recipients’ Attitudes toward Using mHealth for Medication Management

R. Browning1, K. Chavin1, P. Baliga1, D. Taber1 1Medical University of South Carolina, Division of Transplant Surgery, Charleston, SC, USA

Introduction:

Mobile health technology (mHealth) may be a useful tool to assist in managing medication therapy in complex surgical patients, such as transplant recipients. There are, however, limited data assessing the attitudes of kidney transplant recipients toward this technology. The objective of this study was to survey kidney transplant recipients on smartphone ownership, use of mHealth apps, and willingness to utilize this technology to facilitate medication management.

Methods:

The survey included 13 items assessing demographics and general health, 26 items regarding use of technology and willingness to use mHealth, and 15 items regarding medication adherence and side effects. The survey was administered to patients in a kidney transplant clinic. Following consent, the patient was provided with an iPad to complete the survey. Standard descriptive and comparative statistics were utilized for data analysis.

Results:

Between May and July 2015, a total of 139 kidney transplant recipients participated in the survey. The results indicate that 96% (129/135) of respondents owned a mobile phone, 61% (82/135) owned a smartphone, 30% (40/135) had prior knowledge of mHealth, and 7% (10/135) were already using an mHealth app. The majority of respondents (78%, 105/135) reported a positive attitude toward the use of mHealth for medication management. Smartphone ownership has increased over the past three years (61%, 82/135 vs. 35%, 35/99; p=0.0002), and smartphone owners were more likely to strongly agree with the use of mHealth (52%, 43/82 vs. 43%, 23/53; p=0.006). Patients under 55 were more likely to own smartphones (75%, 51/68 vs. 46%, 31/67; p=0.0008) and more likely to strongly agree with the use of mHealth (62%, 43/68 vs. 36%, 24/67; p=0.0152). African Americans tended to be more likely than Caucasians to strongly agree with the use of mHealth, although this did not reach significance (53%, 45/85 vs. 37%, 17/46; p=0.0997). Self-reported non-adherence was higher in Medicaid patients (37% vs. 21%, p=0.049), but non-adherence did not appreciably influence a patient’s willingness to utilize mHealth. Mean years from transplant was higher in those who reported severe side effects (4.3±5.4 vs. 2.0±3.3, p=0.013), but severe side effects were not significantly associated with willingness to utilize mHealth.

Conclusion:

In kidney transplantation, smartphone ownership continues to dramatically increase and respondents have a positive attitude toward the use of mHealth for improving medication management. Patients that were non-adherent or reported severe side effects were equally willing to adopt this technology, suggesting that it may be a promising tool to help improve medication-related outcomes in vulnerable populations.

76.18 Features of Trauma Diaphragmatic Injuries at a Level I Trauma Center: Has Anything Changed?

B. C. Patterson1, A. H. Palmer1, A. Ekeh1 1Wright State University, Department of Surgery, Dayton, OH, USA

Introduction: Injuries to the diaphragm are rare events that typically require operative intervention, can occasionally be difficult to identify with current imaging techniques, occur more often with blunt mechanisms, and are more common on the left side. We reviewed a single center’s experience with these injuries to determine whether the patterns of presentation have evolved from reported historical trends.

Methods: All patients who sustained diaphragmatic injuries and presented to an American College of Surgeons-verified Level I trauma center over an 8-year period (January 2004 – December 2011) were identified from the trauma registry. Demographic data, mechanism of injury, associated injuries, laterality, method of diagnosis, length of stay, mortality, and other data points were abstracted from the patient records. Comparisons, where necessary, were performed with the Student t-test and analysis of variance (ANOVA) for continuous variables and Fisher's exact test for categorical variables.

Results: During the study period, there were 23,578 trauma admissions, with 126 patients identified with diaphragmatic injuries (0.53% of all trauma admissions). The mean age was 37.0 years and 82% were male. Motor vehicle crashes (MVCs, 40.5%), gunshot wounds (36.5%), and stabbings (11.1%) were the most common mechanisms of injury. Blunt mechanisms accounted for 50.8% overall, with a mean Injury Severity Score (ISS) of 32.9; penetrating mechanisms accounted for 49.2% (mean ISS 21.5). Left-sided diaphragmatic injuries (65.9%) were more common than right-sided injuries (25.4%), and 7.9% were bilateral. Exploratory laparotomy was the most frequent method of diagnosis (45.2%); chest X-ray (18.3%), CT (15.1%), thoracotomy (6.3%), and laparoscopy (4.8%) were the other diagnostic methods. Delayed diagnosis (>12 hours) occurred in 14.5% of patients, with 10.3% of these found at autopsy. The overall mortality rate was 30.2%. Mortality was higher in blunt trauma (40.6% vs. 19.4% in penetrating trauma; p=0.012). Patients who were stabbed had the lowest mortality rate (0%) while pedestrian trauma had the highest (80%). The laterality of the injury had no effect on mortality.

Conclusion: Diaphragmatic injuries are uncommon and can occasionally present a diagnostic challenge with delayed diagnosis, as seen in 14.5% of patients in this series. Left-sided injuries predominate, and mortality is higher with blunt trauma mechanisms, related to associated injuries. Exploratory laparotomy remains the most frequent method of diagnosis. Our findings are consistent with prior series in the literature, with no major shifts in trends noted, although the proportion of penetrating mechanisms in our series is higher than historically reported.

No optimal imaging diagnostic technique has emerged for diaphragmatic injuries and a high index of suspicion remains necessary to avoid delayed diagnoses and missed injuries.

76.19 Intra-operative Shock Index Predicts Mortality in Patients Undergoing Damage Control Laparotomy

B. Zangbar1, B. Joseph1, K. Ibraheem1, N. Kulvatunyou1, A. Tang1, T. O’Keeffe1, L. Gries1, R. Latifi1, R. S. Friese1, P. Rhee1 1University of Arizona, Trauma Surgery, Tucson, AZ, USA

Introduction: The shock index (SI = heart rate/systolic blood pressure) is a well-established measure for predicting adverse outcomes in trauma patients. Intra-operative vital signs and laboratory data are likely to affect the surgeon's decisions during the operation; however, the impact of the intra-operative shock index (IOSI) on trauma patient outcomes has not been assessed. The aim of this study was to assess the impact of shock index on mortality in patients undergoing damage control laparotomy (DCL).
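
For illustration (hypothetical vital signs, not study data), the grouping used below follows directly from the definition:

\[ \mathrm{SI} = \frac{\text{heart rate}}{\text{systolic blood pressure}} \]

A patient with an intra-operative heart rate of 115 bpm and a systolic pressure of 95 mmHg has SI = 115/95 ≈ 1.21 and would fall in the IOSI ≥ 1 group, whereas a heart rate of 90 bpm with a systolic pressure of 120 mmHg gives SI = 0.75 (IOSI < 1).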

Methods: We performed an 8-year (2006 – 2013) retrospective analysis of all trauma patients undergoing an exploratory laparotomy. Patients ≥18 years old who underwent DCL were included. Intra-operative vital signs, blood gas values, coagulation profile, and the volume of blood products and crystalloids used during the operation were collected by chart review. Patients were divided into two groups (IOSI ≥ 1 and IOSI < 1), and outcomes were compared between them. Our outcome measure was in-hospital mortality. Regression analysis was performed.

Results: A total of 150 patients were included in our analysis. Mean age was 38 ± 18 years, 70% were male, median ISS was 29 [17 – 41], and the overall mortality rate was 47%. Overall, 35% of patients had an IOSI ≥ 1. Patients with an IOSI ≥ 1 had a higher mortality rate (57% vs. 15%, p < 0.001).

Conclusion: Shock continues to be a major cause of mortality, and it is often reversible. The intra-operative shock index is a strong predictor of mortality in patients undergoing DCL. Optimizing resuscitation in patients identified by an elevated IOSI may help improve outcomes.

76.20 Implications of ICU-Based or Operating Room-Based Tracheostomy

N. Pettit2, M. Leiber1, E. Buggie1, M. S. O’Mara1,2 1Grant Medical Center, Trauma and Acute Care Surgery, Columbus, OH, USA 2Ohio University Heritage College of Medicine, Columbus, OH, USA

Introduction:
Few studies exist comparing percutaneous bedside tracheostomy (PBT) with open surgical tracheostomy (OST). In this study we examined patients who underwent one of these procedures and hypothesized that patients undergoing PBT would have more post-operative complications but lower costs.

Methods:
A total of 579 patients who underwent either PBT or OST from 2008 to 2014 were retrospectively evaluated. Outcomes were post-operative complications, mortality, length of stay (LOS), and charges. Patients were divided into groups by tracheostomy technique (open or percutaneous) and location (ICU or OR): open-OR, percutaneous-ICU, and percutaneous-OR. Variables examined and controlled for were admitting service (trauma vs. medicine), age, body mass index (BMI), and the Charlson Comorbidity Index (CCI).

Results:
Using any post-operative complication as the primary outcome, there was a significant difference among the three tracheostomy groups (p = 0.001). No significant difference existed between open-OR and perc-OR (8.7% vs 7.4% complication rate, respectively, p = 0.70); however, significant differences were found between perc-ICU and perc-OR (19.1% vs 7.4%, odds ratio 3.6, p = 0.028) and between perc-ICU and open-OR (19.1% vs 8.7%, odds ratio 2.5, p = 0.005). A logistic regression model determined that tracheostomy group (p = 0.011) and service (medicine vs. trauma, p = 0.017) were significant predictors of any post-operative complication. Medicine cases were 2.3 times more likely to have post-operative complications than trauma cases (95% CI for odds ratio 1.2 – 4.6). Although the CCI was significantly different between medicine and trauma patients (mean 3.2 vs. 0.9, p < 0.0001), CCI was not a predictor of post-operative complications. Mortality was not significantly different between the procedures, and only age was a significant predictor of mortality (p = 0.0001). Service and age were significant predictors of LOS (p = 0.006 and p = 0.002, respectively), with mean LOS of 27.1 and 22.8 days for trauma and medicine, respectively. Lastly, there were no significant differences in charges between the procedures, and only the time to procedure was a predictor of the total charge to the patient (p < 0.0001).

Conclusion:
Patients who had a tracheostomy in the OR, whether percutaneous or open, had fewer complications than those whose procedures were done in the ICU, with no identifiable difference in total charges. Patients with tracheostomies placed percutaneously in the ICU were 3.6 times more likely to have post-operative complications than patients who had the same procedure in the operating room. Furthermore, patients admitted to the medicine service had more complications despite a shorter length of stay. Understanding these results should lead surgeons to perform tracheostomies on higher-risk patients in the operating room, thereby decreasing complications without increasing overall charges.

76.14 Variability in Trauma Center Patient Demographics: One Level Fits All?

C. Kapsalis1, A. Lai1, D. Kim1, D. Ciesla1 1University of South Florida College of Medicine, Tampa, FL, USA

Introduction:
Trauma center standards define the resources and processes that best meet the needs of injured patients. The regional trauma system aims to match the distribution and level of designated trauma centers to meet the needs of its population. Population demographics vary substantially across regions and predict variation in trauma center demographics. The purpose of this study was to characterize the variation in Florida trauma center patient populations.

Methods:
A statewide discharge dataset was queried for all injury-related discharges from Florida acute care hospitals in 2014 using ICD-9 codes. Hospitals were categorized as non-trauma centers (NTC), Level 2 (DTC2), pediatric Level 2 (DTC2p), and Level 1 (DTC1) designated trauma centers. Elderly patients with isolated hip fractures from falls were excluded. ICISS values were calculated for all ages based on survival risk ratios from the adult (age 16-65) age group for the 5 years preceding the reported year. An ICISS <0.85 defined high mortality risk, corresponding to a predicted survival of <85%.
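
The abstract does not restate the ICISS formula; as conventionally defined in the ICISS literature, the score is the product of the survival risk ratios (SRRs) for each of a patient's injury diagnoses:

\[ \mathrm{ICISS} = \prod_{i=1}^{n} \mathrm{SRR}_i \]

where \(\mathrm{SRR}_i\) is the proportion of reference-population patients with diagnosis \(i\) who survived. As a hypothetical example, two injuries with SRRs of 0.95 and 0.88 give ICISS = 0.95 × 0.88 ≈ 0.84, below the 0.85 threshold and therefore classified as high mortality risk here.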

Results:
There were 133,710 injured patients discharged from 211 Florida acute care hospitals in 2014; 54,478 (41%) were discharged from one of 24 designated trauma centers, including 10,226 (80%) of 12,771 high-risk patients. Individual trauma center volumes are shown in the figure. Although there was variability within each group, DTC1s as a group had a higher proportion of high-risk patients (22%) than DTC2p (16%) and DTC2 (17%) centers, a lower proportion of elderly patients (26%) than DTC2p (42%) and DTC2 (43%) centers, and a lower proportion of injuries resulting from falls (35%) than DTC2p (48%) and DTC2 (47%) centers. Among high-risk patients, DTC1s treated a lower proportion of patients with traumatic brain injuries (19%) than DTC2p (23%) and DTC2 (28%) centers.

Conclusion:
Although all trauma centers may meet level-specific standards and serve distinct populations, there is significant variability in the populations served among Level 1 and Level 2 trauma centers and substantial overlap between some Level 1 and Level 2 centers. This information is useful in trauma system planning when designating trauma centers within specific regions.

76.15 The ‘Ins and Outs’ of Acute Kidney Injury following Renal Trauma

R. Won1, D. Plurad1, B. Putnam1, A. Neville1, S. Bricker1, F. Bongard1, J. Smith1, D. Y. Kim1 1Harbor-UCLA Medical Center, Trauma/Acute Care Surgery/Surgical Critical Care, Torrance, CA, USA

Introduction: The incidence and outcomes of acute kidney injury (AKI) among trauma patients who sustain direct renal trauma are not well-defined. Previous studies have demonstrated that the choice of operation undertaken in the management of renal injuries may not impact subsequent renal function. The objective of this study was to identify risk factors for the development of AKI among trauma patients sustaining renal trauma.

Methods: We performed a 10-year retrospective cohort analysis of our level 1 trauma center database to identify patients admitted to the hospital for >24 hours with a diagnosis of renal trauma. Grade of renal injury was determined using the American Association for the Surgery of Trauma (AAST) kidney injury scale, whereas AKI was defined according to the RIFLE classification (both glomerular filtration rate and urine output criteria). The primary outcome measure was the development of AKI. Secondary outcomes included the incidence of kidney-related complications and mortality. Multivariate logistic regression analysis was performed to identify independent predictors of AKI.

Results: Of 246 patients, 191 (78%) were managed non-operatively. Forty-three patients (17%) developed AKI (Risk, n=25; Injury, n=9; Failure, n=9), of whom 7 (16%) required renal replacement therapy. Patients with AKI were older (35 vs. 28 years, p=0.03), more likely to present following a penetrating mechanism (p=0.02), and had a higher Injury Severity Score (26 vs. 20, p=0.02). There was an increased incidence of high-grade (IV/V) injuries among patients with AKI, and these patients underwent nephrectomy more commonly (both p<0.03). Kidney-related complications (26 vs. 8%, p=0.0003) and mortality (14 vs. 2%, p=0.001) were increased among patients with AKI. On multivariate logistic regression analysis, high-grade injuries (OR=3.3, 95% CI=1.3-8.4; p=0.015) and nephrectomy (OR=3.0, 95% CI=1.1-8.7; p=0.04) were independently associated with the development of AKI.

Conclusion: AAST high-grade kidney injuries are associated with an increased risk for AKI as defined by the RIFLE criteria. Patients undergoing nephrectomy are also at an increased risk for this morbid complication. Future studies examining the safety and efficacy of renal salvage versus nephrectomy on the development and severity of AKI are required in patients with renal injuries identified at the time of operation.

76.17 Does Initial Intensive Care Unit Admission Predict Hospital Readmission in Pediatric Trauma?

R. M. Dorman1,2, H. Naseem1, K. D. Bass1,2, D. H. Rothstein1,2 1Women and Children’s Hospital of Buffalo, Department of Surgery, Buffalo, NY, USA 2State University of New York at Buffalo, Department of Surgery, Buffalo, NY, USA

Introduction: Hospital readmission after discharge for trauma care in adults is associated with significant morbidity, mortality, and resource utilization. Less is known about hospital readmission in pediatric patients after an index admission for trauma care. In this study, we examine pediatric intensive care unit (PICU) admission as a risk factor for hospital readmission after trauma care.

Methods: This is a retrospective cohort study of patients aged 1-19 years discharged with a trauma diagnosis from hospitals in the Pediatric Health Information System database between March 2010 and February 2015. Patients with inadequate clinical information, those transferred to the PICU after initial ward admission, and those who died after admission were excluded. Demographic variables included age, gender, payer status, and race/ethnicity. Clinical variables included length of stay, mechanical ventilation, APR-DRG severity of illness (SOI), and disposition at discharge. The main outcome variable was hospital readmission within 30 days of discharge. Odds ratios (OR) were calculated in both univariate and multivariate analyses with corresponding 95% confidence intervals (C.I.).

Results: During the study period, 87,401 patients were admitted with a trauma diagnosis. Of these, 14,770 (16.9%) were admitted directly to the PICU. The overall population was 65.3% male and 62.4% white, and had an average age of 9.26 (SD 4.83) years. The most common payers were private (43.9%) and Medicaid (43.4%). Nearly half of the patients had a low SOI (49.3%). Most were discharged without home health services (96.1%). Hospital readmissions within 30 days occurred in 3.4% of patients. On univariate analysis, patients directly admitted to the PICU had more than twice the risk of 30-day hospital readmission compared to those never admitted to the PICU, 6.4% vs 2.8% (OR 2.36, C.I. 2.18-2.56). On multivariate analysis, controlling for demographic and clinical variables (excluding SOI), the OR for hospital readmission in patients initially admitted to the PICU was 1.45 (C.I. 1.32-1.59) compared to those never admitted to the PICU. When including SOI, the OR dropped to 1.11 (C.I. 1.002-1.228).
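
As a consistency check (simple arithmetic on the reported proportions, not an additional analysis from the study), the unadjusted odds ratio can be recovered from the readmission rates of 6.4% and 2.8%:

\[ \mathrm{OR} = \frac{0.064/0.936}{0.028/0.972} \approx \frac{0.0684}{0.0288} \approx 2.37, \]

in line with the reported univariate OR of 2.36; the small discrepancy reflects rounding of the percentages.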

Conclusion: Direct admission to the PICU during an index hospitalization for trauma care is an independent risk factor for hospital readmission within 30 days of discharge. The majority of this effect is likely related to the patient’s severity of illness. Further risk stratification may help appropriately focus resources on high risk patients in order to improve clinical outcomes and reduce unnecessary hospital readmissions.

76.10 Analysis of a Novel Valet Parking Attendant Car Seat Safety Injury Prevention Program

I. Abd El-shafy1, L. W. Hansen1, E. Flores1, D. Riccardi1, F. Bullaro-Colasuonno1, J. Nicastro1, J. Prince1 1Cohen Children’s Hospital, Pediatric Surgery, Queens, NY, USA

Introduction: Motor vehicle collisions are the leading cause of death and morbidity among children under age 12. With up to 75% of car seats improperly installed nationally, we sought to characterize a novel valet parking attendant car seat safety injury prevention program and to assess its potential impact when established in a hospital setting.

Methods: A voluntary, anonymous, 22-question survey was distributed to drivers utilizing the valet parking service at a single suburban children’s hospital on weekdays during August 2014. Demographic information, previous car seat installation education or experience, and interest in using a free car seat inspection program were ascertained. Concurrently, a valet parking attendant who was also a trained car seat technician offered a free inspection of any car seats. Any errors in installation were immediately corrected, and the driver was educated about proper technique. The data were analyzed with descriptive statistics using nQuery Advisor 4.0, including a power analysis to determine the sample size.

Results: Survey results were collected from 65 participants. Among these, 16 car seats were inspected and only 3 (18.8%) were installed correctly. A minority of participants reported that they had received car seat installation education (29 out of 63, 46%) or had their car seat installed by a trained technician (13 out of 65, 20%). Most reported that they would use a car seat inspection program (63 out of 65, 96.9%) and that it should be provided by hospitals (61 out of 65, 93.8%).

Conclusion: Proper car seat installation in this suburban New York setting was lower than the reported national average of 25%. Despite 97% stated interest in a free car seat inspection program, at most only 25% completed a car seat inspection when offered. Future studies are needed to evaluate more effective means of encouraging participation and adherence.

76.12 Venous Thromboembolism After Splenectomy for Trauma: Is There an Increased Risk?

S. Sheikh1, Z. G. Hashmi2, S. Zafar3, A. H. Tyroch4 1York Hospital, Department of Surgery, York, PA, USA 2Sinai Hospital of Baltimore, Department of Surgery, Baltimore, MD, USA 3Howard University College of Medicine, Department of Surgery, Washington, DC, USA 4Texas Tech University Health Sciences Center, Department of Surgery, El Paso, TX, USA

Introduction: Splenectomy is associated with an increased risk of venous thromboembolism (VTE) when performed for hematologic disorders. However, the risk of VTE after splenectomy for trauma remains largely unknown. The objective of this study was to determine the risk of VTE following splenectomy for trauma.

Methods: Adults with blunt or penetrating splenic injuries treated at Level I/II trauma centers included in the National Trauma Data Bank 2009-2011 were analyzed. The primary outcome was the development of VTE. ICD-9-CM codes were used to identify patients who underwent splenectomy, as well as other major surgical procedures. Multiple logistic regression analyses, with VTE as the outcome, were performed, adjusting for patient-level factors (demographics, injury severity characteristics, major surgical procedures) and hospital-level factors (rate of hospital duplex ultrasound). Additional sensitivity analyses were performed, stratifying by hospital length of stay and hospital duplex ultrasound rates.

Results: A total of 19,124 patients were included, of whom 4,221 (22.10%) underwent splenectomy. Only 198 (1.00%) patients underwent an isolated splenectomy. The overall VTE rate was 4.90% (932). On univariate and multivariate analyses, the odds ratios (95% CI) for the development of VTE associated with splenectomy were 2.00 (1.74-2.30) and 1.48 (1.23-1.77), respectively. However, splenectomy was no longer associated with an increased risk of VTE when additionally adjusted for the presence of concomitant major surgical procedures [0.90 (0.76-1.08)]. Additional sensitivity analyses revealed qualitatively similar findings.

Conclusion: Splenectomy, in the presence of concomitant major surgical procedures, does not confer an additional risk of a VTE event.

76.13 Further Definition of Thrombus in Pulmonary Arteries of Trauma Patients

T. S. Hester1, J. C. Allmon1, J. H. Habib1, J. W. Dennis1 1University of Florida, Surgery, Jacksonville, FL, USA

Introduction:

Venous thromboembolic (VTE) events are a major contributor to the morbidity and mortality of the severely injured trauma patient. Despite aggressive prophylaxis, a number of trauma patients are still diagnosed with pulmonary embolism (PE). The purpose of this study was to better characterize the incidence, etiology, and distribution of VTE in multi-system trauma patients in order to promote more specifically directed treatment and improved prophylactic measures.

Methods:
The trauma registry at a Level I trauma center was used to retrospectively collect data on all trauma patients with a diagnosis of PE over a 14-year period. Age, sex, ISS, injuries, operations, DVT prophylaxis, IVC filter placement, specific CTA findings of PE, septic episodes, and blood products administered were recorded.

Results:
A total of 77 patients had a diagnosis of ‘PE’, but 12 had incomplete data or pre-existing diseases that affected their VTE risk. The remaining 65 patients were placed into four categories. Group 1: 29 patients (45%) represented cases in which the protocol for prophylactic IVC filter placement was not followed. Group 2: 11 patients (17%) were identified as having direct primary pulmonary thrombosis (PPT) from the patient’s injury; all were diagnosed with a ‘PE’ within 48 hours of admission, had DVT, and the thrombosis correlated with the location and severity of their chest trauma. All thrombi were in small segmental pulmonary arteries. Group 3: 19 patients (29%) had no high-risk injuries and were not considered for filter placement, yet failed routine VTE prophylaxis; most had prolonged immobilization, multiple surgeries, or other risk factors. Group 4: 6 patients (9%) represented PPT with no high-risk injuries, no direct lung trauma, and no DVT; all had thrombosis of small segmental vessels, and sepsis was the only common factor.

Conclusion:
VTE remains a significant contributor to the morbidity and mortality of the trauma patient. One fourth of radiographically diagnosed ‘PE’ (Groups 2 and 4) can be classified as PPT rather than emboli. These VTE events are likely in situ pulmonary arterial thromboses secondary to the patient’s injury itself or a result of hypercoagulability. Prospective studies of the natural history and treatment of PPT are needed.