17.09 Overall Survival Following Salvage Liver Transplant for Hepatocellular Carcinoma

J. P. Silva1, N. G. Berger1, T. C. Gamblin1  1Medical College Of Wisconsin,Surgical Oncology,Milwaukee, WI, USA

Introduction:  The shortage of available organs has led to the strategy of salvage liver transplantation (SLT), in which primary hepatic resection is performed first and transplantation follows in the setting of hepatocellular carcinoma (HCC) recurrence or deterioration of liver function. The survival outcomes of SLT compared with primary liver transplantation (PLT) have not been described in a nationally representative population. The present study sought to evaluate differences in overall survival (OS) between SLT and PLT in HCC patients, using prior upper abdominal surgery (UAS) as a proxy for prior hepatic resection or ablation.

Methods:  HCC patients undergoing liver transplantation were identified using the Organ Procurement and Transplantation Network (OPTN) database (1987-2015). Patients were separated into two cohorts, PLT and UAS, by the presence of prior UAS. To focus on patients whose prior surgery was related to HCC, patients with prior UAS but no corresponding record of prior HCC treatment were excluded. OS was analyzed by log-rank test and graphed using the Kaplan-Meier method. Recipient and donor demographic and clinical characteristics were also studied using univariate and multivariate Cox analyses.
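
As an illustration only (not the authors' code), a minimal Python sketch of this type of survival analysis using the lifelines package; the file name and column names (os_months, death, prior_uas, recipient_age, n_tumors) are assumptions rather than actual OPTN fields.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("optn_hcc_cohort.csv")              # placeholder file name
plt_grp = df[df.prior_uas == 0]                      # primary transplant cohort
uas_grp = df[df.prior_uas == 1]                      # prior upper abdominal surgery cohort

# Kaplan-Meier estimates and log-rank comparison of overall survival
for name, grp in [("PLT", plt_grp), ("UAS", uas_grp)]:
    km = KaplanMeierFitter().fit(grp.os_months, event_observed=grp.death, label=name)
    print(name, "median OS (months):", km.median_survival_time_)
print("log-rank p:", logrank_test(plt_grp.os_months, uas_grp.os_months,
                                  event_observed_A=plt_grp.death,
                                  event_observed_B=uas_grp.death).p_value)

# Univariate, then multivariable, Cox proportional hazards models
CoxPHFitter().fit(df[["os_months", "death", "prior_uas"]],
                  duration_col="os_months", event_col="death").print_summary()
CoxPHFitter().fit(df[["os_months", "death", "prior_uas", "recipient_age", "n_tumors"]],
                  duration_col="os_months", event_col="death").print_summary()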

Results: A total of 15,070 patients were identified, with a median age of 58 years; 78.5% were male. The UAS group comprised 6,220 patients (41.3%) and the PLT group 8,850 (58.7%). Compared to the UAS cohort, PLT patients were more likely to be older (p<0.001), female (p<0.001), and diabetic (p<0.001). OS was improved in the PLT group compared to the UAS group, with median OS of 131.4 months and 122.4 months, respectively (p<0.001). UAS was associated with an increased hazard ratio (HR) on univariate analysis (HR 1.14, 95% CI 1.07-1.22; p<0.001). However, on multivariate analysis, UAS was not associated with a significantly increased HR (HR 1.15, 95% CI 0.92-1.42; p=0.21). Independent factors associated with an increased HR included increasing recipient age (HR 1.02, 95% CI 1.01-1.04; p=0.005), African American race (HR 1.43, 95% CI 1.03-2.00; p=0.034), and number of tumors (HR 1.09, 95% CI 1.04-1.15; p<0.001).

Conclusion: Primary liver transplantation for hepatocellular carcinoma is associated with improved overall survival compared with transplantation in patients with prior UAS. On multivariate analysis, however, there was no significantly increased hazard ratio, likely reflecting differences in patient demographics and disease characteristics. With thoughtful patient selection, SLT is a feasible alternative to PLT and a deserving focus for further investigation.

17.08 Association between donor hemoglobin A1c and recipient liver transplant outcomes: a national analysis

B. A. Yerokun1, M. S. Mulvihill1, R. P. Davis1, M. G. Hartwig1, A. S. Barbas1  1Duke University Medical Center,Department Of Surgery,Durham, NC, USA

Introduction:
While the association between donor diabetes mellitus and liver transplant recipient outcomes is well described, limited data exist on the effect of donor glycated hemoglobin (Hgb A1c) on recipient outcomes after liver transplantation. The objective of this analysis was to evaluate liver transplant recipient outcomes associated with donor Hgb A1c levels in non-diabetic donors in the United States. We tested the hypothesis that the use of allografts from non-diabetic donors with an elevated Hgb A1c would be associated with decreased allograft and overall survival.

Methods:
The Scientific Registry of Transplant Recipients was used to identify adult patients who underwent liver transplantation (2010-2015). Liver transplant recipients were stratified into two groups based on the Hgb A1c level of the donor: Hgb A1c <6.5% (euglycemic) vs ≥6.5% (hyperglycemic). Recipients of donors with a missing Hgb A1c were excluded. Propensity score matching (10:1) was used to adjust for donor age, donor gender, donor ethnicity, donor serum creatinine, extended criteria allografts, recipient age, recipient gender, recipient ethnicity, and recipient MELD score. Kaplan-Meier analysis was used to assess overall survival.
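
As an illustration only (not the registry's or the authors' code), a Python sketch of propensity-score estimation and 10:1 nearest-neighbor matching; the file name and column names are assumptions, and matching here is done with replacement for simplicity.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("srtr_liver_cohort.csv")            # placeholder file name
covs = ["donor_age", "donor_female", "donor_creatinine",
        "ecd", "recip_age", "recip_female", "meld"]   # assumed covariate names
df["treated"] = (df.donor_a1c >= 6.5).astype(int)     # hyperglycemic-donor allograft

# Estimate the propensity score (logit scale) with logistic regression
ps_model = LogisticRegression(max_iter=1000).fit(df[covs], df.treated)
df["ps_logit"] = ps_model.decision_function(df[covs])

# Match each treated recipient to its 10 nearest controls on the logit score
treated = df[df.treated == 1]
control = df[df.treated == 0]
nn = NearestNeighbors(n_neighbors=10).fit(control[["ps_logit"]])
_, idx = nn.kneighbors(treated[["ps_logit"]])
matched = pd.concat([treated, control.iloc[np.unique(idx.ravel())]])
# `matched` then feeds the Kaplan-Meier comparisons of overall and graft survival.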

Results:
A total of 10,491 liver transplant recipients were included: 10,156 (96.8%) received euglycemic allografts and 335 (3.2%) received hyperglycemic allografts. Hyperglycemic donors were older (49 vs 36 years), had a higher BMI (29.8 vs 26.2), had a higher serum creatinine (1.3 vs 1.0 mg/dL), and were more likely to be extended criteria donors (35.8 vs 14.4%). Recipients of hyperglycemic allografts were more often female (74 vs 66%) and had a lower MELD score at the time of transplantation (20 vs 23). Recipients of hyperglycemic and euglycemic allografts were similar in age, BMI, diabetes, and time on the waitlist. After adjustment, overall survival was not statistically different between the two groups (p=0.065), but allograft survival was significantly higher in recipients of euglycemic allografts (p=0.035).

Conclusion:
In this nationally representative study of liver transplant recipients, patients who received hyperglycemic allografts had decreased allograft survival compared to those patients who received euglycemic allografts. This analysis demonstrates potential utility in the measurement of Hgb A1c for assessment of liver allografts.
 

17.07 Infections During the First Year Post-transplant in Adult Intestine Transplant Recipients

J. W. Clouse1, C. A. Kubal1, B. Ekser1, J. A. Fridell1, R. S. Mangus1  1Indiana University School Of Medicine,Department Of Surgery, Transplant Division,Indianapolis, IN, USA

Introduction:
The risk of infection after intestine transplantation is greatest in the first year, primarily because these recipients require higher doses of immunosuppression than recipients of other transplanted organs. This study reports the infection rate, location of infection, and pathogens causing bacterial, fungal, or viral infections in intestine transplant recipients at an active transplant center. Additional infection risk was assessed based on simultaneous inclusion of the liver or colon as part of the intestine transplant.

Methods:
Records from a single transplant center were reviewed for adult (≥18 years old) patients receiving an intestine transplant. Positive cultures and pathology reports were used to diagnose bacterial, fungal, and viral infections and also to determine location and infectious agent.

Results:
During the study period, 184 intestine transplants were performed in 168 adult patients. One-year bacterial, fungal, and viral infection rates were 91%, 47%, and 29%, respectively. The most commonly infected sites were the urinary tract and bloodstream. Coagulase-negative Staphylococcus was the most commonly isolated pathogen, found in 48% of patients. Antibiotic-resistant strains were frequently observed, with VRE infecting 45% of patients and MRSA infecting 15%; the majority of VRE cases occurred after 2008. Klebsiella and E. coli had the highest incidence of infection among gram-negative bacteria, at 42% and 35%, respectively. Candida species were the most common fungal pathogens, seen in 42% of all patients and in 89% of patients who developed a fungal infection. Candida glabrata was the most prominent Candida species, present in over 75% of Candida-infected patients. Cytomegalovirus infections were present in 15% of transplant recipients, with 41% of those patients (6% overall) developing tissue-invasive disease. Bacterial and fungal bloodstream infections (BSI) developed in 72% of patients, with a median time to first infection of 30 days. Age, gender, race, liver inclusion, and colon inclusion did not have a significant impact on the development of a BSI or the median time to first BSI.

Conclusion:
Overall, 91% of intestine transplant patients had a bacterial infection in the first year post-transplant. Opportunistic fungal and viral infections were also very common. Inclusion of the colon with the small intestine significantly (p≤0.01) increased the risk of fungal bloodstream infections and of infections caused by C. difficile, Bacteroides species, Epstein-Barr virus, and upper respiratory viruses (p<0.01 for each). In contrast, simultaneous transplantation of the liver significantly (p=0.01) reduced the risk of developing a bacterial urinary tract infection.
 

17.06 Patterns of Liver Discard in Texas – Why Are Older Donors Not Used?

S. Gotewal1, C. Hwang1, J. Reese1, J. Parekh1, M. MacConmara1  1University Of Texas Southwestern Medical Center,Transplant,Dallas, TEXAS, USA

Introduction:  Utilization patterns of marginal donor livers vary across donation service areas (DSA) and regions in the US. In addition, aggressive utilization is not uniform across all types of marginal donors. The aim of this study was to examine the procurement, utilization, and discard of marginal livers by the organ procurement organization (OPO) in the North Texas DSA and to compare these with national trends in order to identify opportunities to increase organ utilization.

Methods:  This retrospective study identified all donors in the North Texas DSA between January 1, 2013 and September 30, 2015 for whom the intent at the time of procurement was to use the liver for transplantation. Donor age, type, and BMI were collected, together with serologies, biochemistry, and macrovesicular fat content. Use or discard of the liver was confirmed through OPO records, and in the event of discard further data regarding the underlying reason were obtained. Marginal organ utilization rates were then compared with national data available through the SRTR database.

Results: 703 donor liver procurements took place during the study period; of these, 628 livers were transplanted and 75 (10.7%) were discarded. Older age (>65 years), donation after cardiac death (DCD) status, and macrovesicular fat content >30% were all associated with significantly higher rates of discard. Interestingly, older donors had a higher discard rate than the national trend (23% vs. 17%). Within this subgroup, macrovesicular fat content (6% vs. 16.5% nationally), average BMI (27.7 vs. 28.4), and DCD status (0/36 vs. 28/175 nationally) all suggested that these organs should have been of better quality. OPO records cited severe calcification of the visceral arteries as the reason for decline and discard in 7 of the cases. Importantly, when a second transplant surgeon was present at the procurement, none of the organs were discarded. Of the 44 livers from donors above age 65, a second surgeon was present for 8, with no discards; there were 10 discards among older donors when a second surgeon was not present (p=0.045).

Conclusion: Marginal livers from donors of advanced age are less frequently used in North Texas. These data suggest that many were discarded for subjective reasons. Importantly, the presence of a second surgeon at procurement may increase utilization.

 

17.05 Minimalist Approach to Sedation in TAVR May Decrease Incidence of Post-Procedural Dysphagia

L. Mukdad1, W. Toppen1,3, K. Kim1, S. Barajas1, R. Shemin1, A. Mendelsohn2, P. Benharash1  1David Geffen School Of Medicine, University Of California At Los Angeles,Cardiac Surgery,Los Angeles, CA, USA 2David Geffen School Of Medicine, University Of California At Los Angeles,Head And Neck Surgery,Los Angeles, CA, USA 3David Geffen School Of Medicine, University Of California At Los Angeles,Internal Medicine,Los Angeles, CA, USA

Introduction:  Transcatheter aortic valve replacement (TAVR) has become the preferred therapy for severe aortic stenosis in patients with high surgical risk. While general anesthesia has been the standard during these procedures, recent evidence suggests that a minimalistic approach utilizing conscious sedation may have similar if not improved clinical outcomes. The incidence and impact of post-TAVR dysphagia has yet to be fully elucidated. The purpose of this study was to compare the incidence of postoperative dysphagia and aspiration pneumonia in patients undergoing TAVR procedures with either conscious sedation (CS) or general anesthesia (GA).

Methods:  This was a retrospective single-center study of all adult patients undergoing TAVR between September 2013 and May 2016. The diagnosis of postoperative dysphagia was confirmed by fiberoptic endoscopic evaluation of swallowing. Propensity score matching was used to control for intergroup differences and account for potential selection bias; the Society of Thoracic Surgeons predicted risk of mortality score, a previously validated composite risk stratification tool, was used as the propensity score. Categorical variables were analyzed by Fisher's exact test and continuous variables by the independent-samples t-test for unequal sample sizes. An alpha of <0.05 was considered statistically significant. All data were analyzed using STATA 13.0 statistical software (StataCorp, College Station, TX).
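
For reference, a minimal scipy sketch of the two classes of tests named above, using invented counts and placeholder distributions rather than study data.

import numpy as np
from scipy import stats

# Fisher's exact test for a categorical outcome (hypothetical 2x2 counts)
#                 event  no event
table = np.array([[0,     58],      # conscious sedation (invented)
                  [10,   119]])     # general anesthesia (invented)
odds_ratio, p_cat = stats.fisher_exact(table)

# Welch's t-test (unequal variances / sample sizes) for a continuous outcome,
# e.g., procedure-room time in minutes (placeholder random draws)
cs_times = np.random.normal(95, 44, size=58)
ga_times = np.random.normal(148, 109, size=129)
t_stat, p_cont = stats.ttest_ind(cs_times, ga_times, equal_var=False)
print(p_cat, p_cont)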

Results: A total of 200 patients were included in this study (CS=58 and GA=142). After propensity score matching and exclusion of unmatched patients, 187 patients remained (CS=58 and GA=129). There were no differences in baseline comorbidities between the matched groups (Table 1). Postoperative dysphagia was significantly more frequent in the GA group (0% CS vs 8% GA, p=0.03), with a trend toward more aspiration pneumonia in the GA group (0% CS vs 3% GA, p=0.31). Additionally, procedure room time (95 ± 44 min CS vs. 148 ± 109 min GA, p<0.001), total ICU time (27 ± 32 h CS vs. 87 ± 98 h GA, p<0.001), and total length of hospital stay (3 days CS vs. 7 days GA, p<0.001) were significantly lower in the conscious sedation group.

Conclusion: We found that the use of conscious sedation in TAVR was associated with a significant decrease in the incidence of postoperative dysphagia and a trend toward fewer instances of aspiration pneumonia. Additionally, CS was associated with shorter procedure times and shorter ICU and hospital lengths of stay at our institution. Given the increased resource utilization associated with dysphagia, our results suggest significant benefits from the use of CS for TAVR procedures.

17.04 Use of Enhanced Recovery Protocol in Kidney Transplant to Improve Postoperative Length of Stay

C. Schaidle-Blackburn1, A. Kothari1, P. Kuo1, D. DiSabato1, J. Almario-Alvarez1, R. Batra1, R. Garcia-Roca1, A. Desai1, A. Lu1  1Loyola University Chicago Stritch School Of Medicine,Intra-abdominal Transplant,Maywood, IL, USA

Introduction: Implementation of enhanced recovery (ER) protocols has been demonstrated to improve perioperative outcomes in multiple surgical specialties. Limited data exist on using ER protocols in patients undergoing kidney transplantation. The objectives of this study were to develop a multidisciplinary ER protocol for patients undergoing single organ kidney transplants and measure the impact of a pilot implementation on postoperative length of stay (LOS).

Methods: Retrospective review of pre- and post-intervention postoperative outcomes at a single institution between January 2015 and July 2016. All patients over the age of 18 receiving a single-organ kidney transplant were included; dual-organ recipients and pediatric patients (≤18 years) were excluded. The project was conducted in two stages. Stage 1 was a multidisciplinary gap analysis using a modified Delphi approach to define a consensus ER protocol. Stage 2 was a 9-month pilot implementation. A prospective observational analysis was performed on the cohort, with postoperative LOS identified as the primary outcome. Means are reported ± standard deviation; tests of significance were conducted with Student's t-tests (alpha=0.05).

Results: A total of 66 patients were included, 18 prior to the intervention and 48 following the intervention. Mean age of the study population was 49.8±12.9 years (49.5±16.6 years pre-ER and 50.0±11.4 years post-ER, P=.899). The gap analysis led to the development of an ER protocol through multidisciplinary engagement based on 3 major domains: (1) standardization of postoperative pathways, (2) early mobilization, and (3) patient engagement. Standardization of postoperative pathways was achieved through updating perioperative electronic order sets and operating room protocols; early mobilization was achieved through elimination of central lines, early Foley catheter removal, and physical/occupational therapy consults; and patient engagement was achieved through creation of patient education materials and patient-driven diet advancement. Following implementation, postoperative length of stay decreased from 7.5±4.0 days to 4.9±2.0 days (P<.001).

Conclusion: Implementation of a locally-developed, consensus-based ER protocol can reduce LOS in patients undergoing kidney transplant. This study also demonstrates the feasibility of implementing ER in this complex patient population. Based on pilot data and institutional transplant volume, this ER protocol could generate cost savings that exceed $950,000 annually.

 

17.03 Medial Row Perforators: Higher Rates of Fat Necrosis in Bilateral DIEP Breast Reconstruction

B. Tran1, P. Kamali1, M. Lee1, B. E. Becherer1, W. Wu1, D. Curiel1, A. Tobias1, S. J. Lin1, B. T. Lee1  1Beth Israel Deaconess Medical Center,Surgery/Plastic And Reconstructive Surgery,Boston, MA, USA

Introduction:
The purpose of this study is to evaluate perfusion related complications in bilateral deep inferior epigastric perforator (DIEP) flap reconstruction of the breast based on perforator selection over a decade.

Methods:
A retrospective review of a prospectively maintained DIEP database was performed for all patients undergoing bilateral DIEP reconstruction at a single institution between 2004 and 2014. The flaps were divided into three cohorts based on perforator location: a lateral row only group, a medial row only group, and a medial plus lateral row group. Postoperative flap-related complications were compared and analyzed.

Results:
Between 2004 and 2014, 818 bilateral DIEP flap reconstructions were performed, of which 728 flaps met the study criteria. Within the study group, 263 (36.1%) flaps had perforators based only on the lateral row, 225 (30.9%) only on the medial row, and 240 (33.0%) on both the medial and lateral rows. The groups were well matched in terms of perforator number and flap weight. Fat necrosis was significantly more frequent in flaps based solely on the medial row than in flaps based only on the lateral row (24.5% versus 8.2%, p<0.001). There was no statistically significant difference in fat necrosis between flaps based only on the lateral row and flaps based on both the medial and lateral rows (8.2% versus 11.6%, p=0.203). Generally, within the same row, increasing the number of perforators decreased the incidence of fat necrosis.

Conclusion:
Perforator selection is critical to minimizing perfusion-related flap complications. In bilateral DIEP flaps, lateral row perforators result in significantly less fat necrosis than medial row perforators. Our data suggest that adding a lateral row perforator to a dominant medial row perforator decreases the risk of fat necrosis.
 

17.02 Improving Outcomes in use of Extended Criteria Donors of Liver Allografts: A National Analysis

R. Davis1, M. S. Mulvihill1, B. A. Yerokun1, M. G. Hartwig1, A. S. Barbas1  1Duke University Medical Center,Department Of Surgery,Durham, NC, USA

Introduction:  Orthotopic liver transplantation (OLT) remains the gold standard for patients with end-stage liver disease. However, the scarcity of ideal donor allografts has led to the use of extended criteria donors (ECD). While ECD criteria are not precisely defined, donor age >60 years is thought to negatively impact survival following OLT. Despite this, use of allografts from donors >60 years old has steadily increased, with reports of improved outcomes. We hypothesized that the risk to recipients of allografts from donors >60 years old has decreased since the inception of ECD liver donation.

Methods:  The OPTN/UNOS STAR (Standard Transplant Analysis and Research) file was queried for all first-time isolated adult liver transplants. From 247,723 records in the registry, 121,399 recipients were suitable for analysis, representing transplants from September 1987 to June 2015. Cohorts were then defined by donor age greater than 60 years and by 5-year increments of transplant date. Kaplan-Meier analysis with the log-rank test compared survival among recipients of allografts from donors >60 years across the five-year intervals.
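
As an illustration only (not the authors' code), a lifelines sketch of the era-stratified Kaplan-Meier comparison; the file name and the columns donor_age, tx_year, surv_years, and death are assumptions rather than actual STAR fields.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.read_csv("star_liver.csv")               # placeholder file name
old_donor = df[df.donor_age > 60].copy()
old_donor["era"] = (old_donor.tx_year // 5) * 5  # 5-year transplant eras

km = KaplanMeierFitter()
for era, grp in old_donor.groupby("era"):
    km.fit(grp.surv_years, event_observed=grp.death, label=str(era))
    print(era, "unadjusted 5-year survival:", km.predict(5))

# Log-rank test across all eras simultaneously
res = multivariate_logrank_test(old_donor.surv_years, old_donor.era, old_donor.death)
print(res.p_value)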

Results: Utilization of allografts from donors >60 years old for OLT has steadily increased. Before 1990, no donors over age 60 were accepted for transplant. From 1990 to 1995, 210 recipients received allografts from donors over age 60; the number peaked at 3,945 recipients between 2005 and 2010, the last complete era available for analysis. Across all eras, recipients of allografts from donors >60 years old tended to be older (53 vs 56 years) and to have spent longer on the waitlist (82 vs 100 days). Recipient INR, albumin, medical condition at transplant, and creatinine were similar between groups. Donors >60 years old had similar serum creatinine and percent steatosis. Unadjusted 5-year survival varied significantly by era of transplant, increasing from 54.7% (1990-1995) to 69% (2010-2015).

Conclusion: This study demonstrates improved long-term survival of recipients of allografts from donors >60 years old in the current era compared with recipients in previous eras. These data support the continued use of ECD liver allografts from donors >60 years old. Further refinement of donor management, surgical techniques, and the potential use of ex vivo liver perfusion should continue the trend toward improved recipient outcomes.

 

17.01 Duct-to-Duct Bile Duct Reconstruction in Liver Transplant for Primary Sclerosing Cholangitis (PSC)

S. Chanthongthip1, C. A. Kubal1, B. Ekser1, J. A. Fridell1, R. S. Mangus1  1Indiana University School Of Medicine,Transplant,Indianapolis, IN, USA

Introduction:
Reconstruction of the bile duct after liver transplantation in patients with primary sclerosing cholangitis (PSC) is controversial. Historically, these patients have all undergone Roux-en-Y choledochojejunostomy (RY). More recently, several centers have published results suggesting equivalent outcomes using duct-to-duct reconstruction (DD), and a meta-analysis of 10 single-center studies demonstrated a similar post-transplant incidence of bile duct complications and similar 1-year graft survival. This study presents a comparable single-center analysis of RY and DD reconstruction of the bile duct, with 10-year Cox regression analysis of graft survival.

Methods:
The records of all liver transplants at a single center between 2001 and 2015 were reviewed, and patients with primary sclerosing cholangitis were identified. The primary operative reports for these patients were reviewed and reconstruction of the bile duct was coded as RY or DD. The decision to perform DD was based on the appearance of the remaining bile duct after hepatectomy: a healthy-appearing native duct was reconstructed with DD and a diseased duct with RY. All bile duct imaging and interventions were recorded to assess for strictures and leaks. Survival data were extracted from the transplant database, which is compared on a regular basis with national death databases to ensure accuracy. Cox regression analysis was performed with a direct-entry method.

Results:
There were 1,722 liver transplants during the study period, with 164 patients (10%) having a diagnosis of PSC. Bile duct reconstruction included 93 RY and 71 DD. The DD patients had a higher MELD score and older age, while RY patients were more likely to be undergoing retransplantation and to have a longer warm ischemia time. Among the PSC patients, 8% had a bile duct leak (13% RY and 3% DD, p=0.03). Strictures were seen in 29% of patients (5% RY and 39% DD, p<0.001), though this reflects the fact that nearly all patients undergoing ERCP received a diagnosis of stricture and had a stent placed. Only one patient in either group required operative intervention for stricture, and another required operative intervention for leak. Length of hospital stay was equivalent in both groups (10 days, p=0.51). Cox regression analysis of 10-year graft survival showed no difference between RY and DD (p=0.47).

Conclusion:
DD reconstruction of the bile duct after liver transplantation for PSC results in similar post-operative outcomes, and equivalent long-term survival, compared to RY. These results were achieved using surgeon judgement of bile duct quality to determine the method of reconstruction.
 

16.22 Intraoperative Parathyroid Identification Not Associated with Increased Permanent Hypoparathyroidism

J. Zagzag1, R. Rokosh1, K. S. Heller1, J. Ogilvie1, K. Patel1, A. Kundel1  1New York University School Of Medicine,New York, NY, USA

Introduction:  One major risk of total thyroidectomy is permanent hypoparathyroidism, and this risk may be increased if a central neck dissection is also performed.  This study was undertaken to evaluate whether identification of parathyroid glands intraoperatively during total thyroidectomy (TT) and total thyroidectomy with central neck dissection (TTCND) is related to inadvertent parathyroid gland excision in the final pathologic specimen.  We also assessed the effect of intraoperative and pathologic parathyroid identification on rates of permanent hypoparathyroidism.

Methods:  A retrospective review of all TT and TTCND performed by our endocrine surgery group between 2011 and 2015 was conducted. Patients were stratified into two groups: those with 0-2 and those with 3-4 parathyroid glands identified intraoperatively. The presence of any parathyroid tissue in the final pathologic specimen was examined, and intraoperative and pathologic parathyroid identification was correlated with permanent hypoparathyroidism. The chi-squared test was used to assess statistical significance.

Results: A total of 496 cases included 351 TT and 145 TTCND. At least 3 parathyroid glands were identified intraoperatively in 63% of cases, and 37% of final specimens contained unexpected parathyroid glands. Intraoperative identification of 3-4 parathyroid glands was inversely related to the number of parathyroid glands identified on pathology in TTCND but not TT (RR 0.34, 95% CI 0.17-0.69, p=0.003). Intraoperative parathyroid gland identification had no relationship to rates of permanent hypoparathyroidism in either group (TT 2.2% vs 3.8%, p=0.721; TTCND 4.1% vs 0.0%, p=0.213). Parathyroid tissue on final pathology also had no relation to rates of permanent hypoparathyroidism (3.3% vs 2.5%, p=0.138).
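
For reference, a short Python sketch of how a relative risk with a 95% confidence interval and a chi-squared test can be computed from a 2x2 table; the counts below are invented for illustration and are not the study data.

import numpy as np
from scipy.stats import chi2_contingency

#                        parathyroid in specimen   none
table = np.array([[20, 70],    # 3-4 glands identified intraoperatively (invented)
                  [40, 50]])   # 0-2 glands identified (invented)
a, b = table[0]
c, d = table[1]

rr = (a / (a + b)) / (c / (c + d))                       # relative risk
se_log_rr = np.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))   # SE of log(RR)
ci = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log_rr)

chi2, p, dof, _ = chi2_contingency(table)
print(f"RR {rr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), p = {p:.3f}")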

Conclusion: Intraoperative identification of parathyroid glands is associated with a lower incidence of unexpected parathyroid gland excision when performing total thyroidectomy with central neck dissection. Total thyroidectomy with or without central neck dissection, when performed by experienced endocrine surgeons who routinely identify parathyroid glands, was not associated with increased rates of permanent hypoparathyroidism when fewer than three parathyroid glands were identified intraoperatively or when parathyroid tissue was found on final pathology. In this series, the number of parathyroid glands identified intraoperatively did not itself determine permanent hypoparathyroidism.

 

16.21 Use of I2b2 Cohort Discovery Tool to Identify Potentially Unrecognized Primary Hyperparathyroidism

J. Park1, K. Doffek1, T. W. Yen1, K. E. Coan1, T. S. Wang1  1Medical College Of Wisconsin,Surgical Oncology,Milwaukee, WI, USA

Introduction:  The majority of patients with primary hyperparathyroidism (pHPT) present with asymptomatic disease, as the nonspecific presenting signs and symptoms overlap with those of normal aging and other diseases. As a result, patients with hypercalcemia may not be appropriately referred for further evaluation and treatment of potential pHPT. The purpose of this study was to determine the prevalence and trends of potentially undiagnosed pHPT at a tertiary care institution.

Methods:  This is a retrospective review of de-identified data for all patients in a single health system collected within the Informatics for Integrating Biology and the Bedside (i2b2) Cohort Discovery Tool between 1/1/15 and 9/30/15. The study cohort was defined as any patient with at least one serum calcium level >10.2 mg/dL (normal, 8.6-10.2) and a PTH level >30 pg/mL (normal, 16-72) during the study period; the labs were not necessarily drawn concurrently. Patients were divided into 4 groups based on the presence or absence of an ICD-9 diagnosis of HPT (pHPT, secondary/tertiary HPT, HPT not otherwise specified, and no diagnosis). The presence of symptoms of pHPT (nephrolithiasis, gastroesophageal reflux disease [GERD], and/or bone-related disease [osteopenia, osteoporosis, or compression fractures]), the extent of hypercalcemia and hyperparathyroidism, and referral to Endocrinology or Surgery within the study period were determined and compared between patients with PTH levels of 30-70 pg/mL and those with levels >70 pg/mL.
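
For illustration only, a Python/pandas sketch of this style of cohort definition, assuming hypothetical de-identified lab and diagnosis extracts; the file names, column names, and ICD-9 code list are illustrative and do not reflect the actual i2b2 schema or query.

import pandas as pd

labs = pd.read_csv("i2b2_labs.csv")        # patient_id, test, value (placeholder extract)
dx = pd.read_csv("i2b2_icd9.csv")          # patient_id, icd9 (placeholder extract)

# Patients with any calcium >10.2 mg/dL and any PTH >30 pg/mL in the study window
high_ca = labs[(labs.test == "calcium") & (labs.value > 10.2)].patient_id
high_pth = labs[(labs.test == "pth") & (labs.value > 30)]
cohort = high_pth[high_pth.patient_id.isin(high_ca)].copy()

# Peak PTH per patient defines the 30-70 vs >70 pg/mL comparison groups
peak = cohort.groupby("patient_id").value.max()
group = pd.cut(peak, bins=[30, 70, float("inf")], labels=["30-70", ">70"])

hpt_codes = {"252.01", "252.02", "252.08", "588.81"}   # illustrative HPT ICD-9 set
has_hpt_dx = set(dx[dx.icd9.isin(hpt_codes)].patient_id)
print(group.value_counts(), len(set(peak.index) & has_hpt_dx))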

Results: Of the 941 patients, 446 (47%) had PTH levels of 30-70 pg/mL and 495 (53%) had PTH levels >70 pg/mL. Patients with a PTH >70 were more likely to have a diagnosis of HPT (primary or unspecified) than patients with PTH levels of 30-70 (Table). There was no difference in reported symptoms between the two groups (p=0.521). However, patients with a PTH level >70 were more likely to be referred for additional evaluation (262, 53%) than patients with PTH levels of 30-70 (200, 45%; p=0.005). Patients with PTH >70 were also more likely to be referred to Surgery (31% vs. 22%).

Conclusion: Based on the findings of this study, patients with elevated serum calcium levels and PTH levels of 30-70 pg/mL appear to be referred less frequently for evaluation and treatment of potential pHPT than patients with PTH levels >70 pg/mL. Despite the limitations of this de-identified database, this suggests that pHPT may be underdiagnosed and undertreated within the health care system. Further examination of these data and broader dissemination of the diagnosis and symptoms of pHPT to primary care and other providers should be considered.

 

16.20 Impact of Symptom Association Probability on Outcomes of Laparoscopic Nissen Fundoplication

A. D. Jalilvand1, S. E. Martin Del Campo1, J. W. Hazey1, K. A. Perry1  1Ohio State University,Columbus, OH, USA

Background: Laparoscopic Nissen fundoplication (LNF) is the gold standard for surgical reflux control in patients with objective evidence of gastroesophageal reflux disease (GERD). Symptom association probability (SAP) score is used with pH testing to correlate symptoms to reflux events, with scores above 95% implying a high correlation to reflux exposure. It is unclear whether these scores impact the outcomes of laparoscopic anti-reflux surgery. We hypothesize that a negative SAP score for typical GERD symptoms in the setting of a positive pH test is not associated with persistent symptoms after LNF.

Methods: We reviewed all patients undergoing LNF for objectively confirmed GERD between May 2011 and June 2016. Patients without pH testing due to complicated GERD or who did not have SAP scores were excluded from this analysis. SAP scores >95% were considered positive and those <95% were considered negative. Reflux symptoms and quality of life were assessed using the Gastroesophageal Reflux Symptom Scale (GERSS) and Gastroesophageal Reflux Disease Health-Related Quality of Life (GERD-HRQL) questionnaires. Baseline and post-operative data were collected in the clinic setting. Data are presented as incidence (%), mean ± SD, or median (IQ range) as appropriate, and a p-value of <0.05 was considered statistically significant.

Results: LNF was performed in 142 patients during the study period; the average age was 48.5 ± 13.5 years, average BMI was 31.3 ± 5.6, and 78% (n=111) were female. Median preoperative DeMeester score was 38.5 (28.0-54.4), baseline GERSS was 37.5 (26.0-51.0), and GERD-HRQL was 32 (22.0-37.5). Patient characteristics and baseline symptoms did not differ between those with positive and negative SAP scores. Positive SAP scores were reported in 63% of patients for heartburn, 64.4% for regurgitation, and 40.9% for chest pain. Compared to baseline, GERSS improved from 36 (26-50) to 8 (2-16, p<0.001) in those with a positive SAP for heartburn and from 39 (28-54) to 8 (0-13, p<0.001) in those with a negative SAP. GERD-HRQL scores improved from 31.5 (22-37) to 4 (2-8, p<0.001) and from 34 (22-39) to 4 (1-11, p<0.001), respectively. Postoperative GERSS (p=0.923) and GERD-HRQL (p=0.600) scores did not differ between groups. Complete resolution of heartburn was achieved in 86.8% of patients with a positive SAP compared to 66.7% of patients with a negative SAP for heartburn (p=0.065). There were no significant differences in postoperative GERSS, GERD-HRQL, or symptom resolution following LNF between patients with positive and negative SAP for regurgitation or chest pain.

Conclusion: LNF achieves excellent symptom control and improves disease-specific quality of life in patients with symptomatic GERD confirmed by pH testing. Negative SAP scores for typical GERD symptoms are not associated with higher GERD symptom scores or reduced disease-specific quality of life following LNF and should not be used to select patients for laparoscopic anti-reflux surgery in this setting. 

16.19 Analytic Morphomics And Geriatric Assessment Predict Pancreatic Fistula After Pancreaticoduodenectomy

A. J. Benjamin1, A. Schneider1, M. M. Buschmann2, B. A. Derstine4, S. C. Wang3,4, W. Dale2, K. Roggin1  1University Of Chicago,Surgery,Chicago, IL, USA 2University Of Chicago,Geriatrics & Palliative Medicine,Chicago, IL, USA 3University Of Michigan,Surgery,Ann Arbor, MI, USA 4University Of Michigan,Morphomic Analysis Group,Ann Arbor, MI, USA

Introduction:

Following pancreaticoduodenectomy (PD), pancreatic fistula (PF) remains a significant cause of morbidity, and current models used to predict PF rely on measures that are available only at the time of operation. Body imaging analysis, such as analytic morphomics (AM), and pre-operative geriatric assessment (GA) have been shown to forecast significant adverse outcomes following PD. We hypothesized that preoperative AM and GA can accurately predict PF.

Methods:

An IRB-approved review identified patients (n=63) undergoing PD by experienced pancreatic surgeons who had a non-contrast computed tomography (CT) scan, a pre-operative geriatric assessment, and prospectively tracked 90-day postoperative outcomes collected between 10/2007 and 3/2016. PFs were graded according to the International Study Group for PF (ISGPF) criteria. Pre-operative GA included the Short Physical Performance Battery, self-reported exhaustion on the Center for Epidemiologic Studies Depression Scale (CES-D exhaustion; one of the five criteria of Fried's frailty), and the Vulnerable Elders Survey (VES-13). CT scans were processed to measure morphomic variables, including psoas muscle area and Hounsfield units (HU), subcutaneous fat measures, visceral fat measures, and total body dimensions. Associations with the development of a PF were assessed using univariate analysis and multivariate elastic net regression models.
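
As a rough illustration of the modeling step (not the authors' code), a Python sketch fitting an elastic net-penalized logistic regression and comparing AUCs against a clinical base model; the file, column, and feature names are assumptions, and in-sample AUCs are used here purely for brevity.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("pd_cohort.csv")                       # placeholder file name
y = df.pancreatic_fistula
base_X = df[["age", "bmi", "asa_class", "charlson"]]    # clinical "base model"
full_X = df.drop(columns="pancreatic_fistula")          # + morphomics + GA features

def fit_auc(X, y):
    # Elastic net penalty via the saga solver; l1_ratio and C are illustrative
    model = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=5000))
    model.fit(X, y)
    return roc_auc_score(y, model.predict_proba(X)[:, 1])

print("base model AUC:", fit_auc(base_X, y))
print("morphomics + GA AUC:", fit_auc(full_X, y))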

Results:

The median patient age was 67 (range 37-85) years and the median BMI was 27.0 (18.9-50.5). In total, 15/63 patients (23%) had a documented PF: 8 patients had an ISGPF grade A PF (53%), 6 had a grade B PF (40%), and one had a grade C PF (7%). On univariate analysis, PF was associated with CES-D exhaustion (p=0.005), VES-13 (p=0.038), subcutaneous fat HU (p=0.009), visceral fat area (p=0.035), visceral fat HU (p=0.001), average psoas HU (p=0.040), and psoas low-density muscle area (p=0.049). A predictive model based on demographics, analytic morphomics, and GA had a high AUC for predicting PF (AUC=0.915) compared with a clinical "base model" including age, BMI, ASA class, and Charlson comorbidity index (AUC=0.685).

Conclusion:

Preoperatively measured AM, in combination with GA, can accurately predict the likelihood of PF following PD.  Validation of this model on a larger cohort would provide surgeons with a practical tool to more accurately risk-stratify patients prior to PD.

16.17 Defining Intrinsic Operative Risk Separate from Patient Factors for Preoperative Evaluations

J. B. Liu1,4, Y. Liu1, M. E. Cohen1, C. Y. Ko1,3, K. Y. Bilimoria1,2, B. J. Sweitzer2  1American College Of Surgeons,Chicago, IL, USA 2Northwestern University,Chicago, IL, USA 3University Of California – Los Angeles,Los Angeles, CA, USA 4University Of Chicago,Chicago, IL, USA

Introduction:

Surgical patient care is enhanced by multidisciplinary co-management. An accurate understanding of perioperative risk is a necessary component of care management, but that risk depends on both procedure-intrinsic and patient-specific factors, both of which can be challenging to assess and to communicate effectively to non-surgeons. To improve interdisciplinary communication, we sought to describe intrinsic and patient risks for common operations.

Methods:

3,631,160 patients encompassing 2,010 Current Procedural Terminology (CPT) codes from 2010-2015 were identified in the ACS NSQIP database. Hierarchical regression modeling was used to categorize each procedure into one of three risk categories (low, medium, and high) based on its risk of death or serious morbidity (DSM). Procedures comprising 80% of the cases within each category were identified. Risk categories were also created within each surgical specialty. The distribution of risk for each procedure was then examined to illustrate the effect of including patient characteristics.
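
The hierarchical regression model itself is not reproduced here; as a rough stand-in for that categorization step, the sketch below shrinks procedure-level DSM rates toward the overall rate in proportion to case volume and bins them into terciles. The file name, column names, and shrinkage weight are assumptions for illustration only.

import pandas as pd

df = pd.read_csv("nsqip_cases.csv")              # placeholder: cpt, dsm (0/1) per case
overall = df.dsm.mean()                          # overall DSM rate
k = 100                                          # shrinkage weight ("prior" cases)

by_cpt = df.groupby("cpt").dsm.agg(["sum", "count"])
by_cpt["shrunk_rate"] = (by_cpt["sum"] + k * overall) / (by_cpt["count"] + k)
by_cpt["risk_tier"] = pd.qcut(by_cpt.shrunk_rate, q=3,
                              labels=["low", "medium", "high"])
print(by_cpt.sort_values("shrunk_rate").head())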

Results:

The overall rate of DSM was 7.4%. There were 37 commonly performed low-, 106 medium-, and 78 high-risk procedures across all specialties. Shoulder arthroscopy had the lowest intrinsic risk and Ivor-Lewis esophagectomy the highest. As expected, incorporating patient characteristics revealed variability in risk regardless of intrinsic operative risk: the predicted risk of DSM for shoulder arthroscopy ranged from 0.3% to 3%, while that for Ivor-Lewis esophagectomy ranged from 15% to 81%.

Conclusion:

Understanding an operation's intrinsic risk can assist providers when evaluating patients and inform decisions about care. This study compiled a list of the most commonly performed procedures overall and within each specialty and stratified them into low-, medium-, and high-risk categories. Patient factors undoubtedly also play a role in the risk evaluation.

16.16 Neurocognitive Performance Profile Post-Parathyroidectomy: A Pilot Study of Computerized Assessment

C. Bell1, M. Warrick1, N. Baregamian1  1Vanderbilt University Medical Center,Department Of Surgery,Nashville, TN, USA

Introduction:

Neurocognitive factors are integral to the diagnosis of primary hyperparathyroidism (PHPT), and parathyroidectomy has been shown to improve health-related quality of life, functional and physical capacity, and visual-spatial working memory, and to reduce depression and anxiety. Our pilot study performs computerized analysis of neurocognitive executive functions such as memory, attention, speed of processing, and problem-solving in a reliable, measurable way. It examines the Neurocognitive Performance Profile (NCPP) pre- and post-parathyroidectomy using an online battery of brief, repeatable, modular, well-known neuropsychological assessments (Neurocognitive Performance Testing, NCPT) developed by Lumosity (Lumos Labs, Inc).

Methods:

Thirty-three patients with biochemically confirmed PHPT and an indication for parathyroidectomy were enrolled and asked to complete online computerized NCPTs at 3 time points (preoperative, early post-operative, and 6-month post-operative) to calculate their overall NCPP and individual test scores, quantifying levels of performance on defined categories of cognition. These scores were normalized to age-matched controls from the Lumosity database. All patients underwent parathyroidectomy; 24 patients completed the pre- and early post-operative NCPTs only, and 10 patients completed all 3 visits. Nine patients were excluded from analysis for incomplete testing.

Results:

A significant difference was observed in overall NCPP scores across the three visits by one-way ANOVA (n=10, p=0.043) and on post-hoc Tukey's multiple comparison analysis of pre- vs 6-month post-operative performance (n=10, p=0.012). There was significant post-operative improvement in visual-spatial memory (Object Recognition, n=10, p=0.015). No significant difference in overall NCPP score was observed in the paired comparison of the 24 subjects completing only pre- and early post-operative testing; however, significant improvement in speed of processing and memory (Digit Symbol Coding, n=24, p=0.017) was observed in the early post-operative period. Biochemical cure was achieved post-parathyroidectomy in all patients (n=24, both serum calcium and parathyroid hormone levels, p<0.0001). The proportion of patients reporting neurocognitive symptoms fell from 95.8% preoperatively to 58.3% at the early postoperative time point, a significant relief of symptoms (n=24, p=0.001) that persisted at 6 months (n=10, p=0.004).
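
For reference, a minimal Python sketch of a one-way ANOVA with post-hoc Tukey comparison of scores across three visits; the scores below are invented and are not study data.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

pre      = np.array([92, 88, 95, 90, 85, 97, 89, 93, 91, 87])   # hypothetical NCPP scores
early_po = np.array([94, 90, 96, 93, 88, 99, 90, 95, 92, 90])
late_po  = np.array([99, 95, 101, 97, 93, 104, 96, 100, 97, 94])

f_stat, p = f_oneway(pre, early_po, late_po)                     # one-way ANOVA
print("ANOVA p:", p)

scores = np.concatenate([pre, early_po, late_po])
visit = (["pre"] * 10) + (["early"] * 10) + (["6 mo"] * 10)
print(pairwise_tukeyhsd(scores, visit))                          # post-hoc Tukey HSD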

Conclusion:

This pilot study has begun to characterize, in a measurable way, the types of neurocognitive deficits and the post-operative improvements in overall neurocognitive performance in the PHPT patient population. NCPT is a novel method that can provide a long-term metric for objective assessment of neurocognitive changes after parathyroidectomy and biochemical normalization. NCPT could be a valuable diagnostic and prognostic testing tool for all patients with PHPT, and a large multi-center prospective randomized trial may further elucidate the importance of NCPT.
 

16.15 Withdrawal of Life Sustaining Treatments in Trauma Patients Without Severe Head Injury

J. Leonard1, S. Polites2, A. Glasgow2, N. Martin1, E. Habermann2  1University Of Pennsylvania,Philadelphia, PA, USA 2Mayo Clinic,Rochester, MN, USA

Introduction:   Many trauma patients and their families make the difficult decision to withdraw life-sustaining support following injury. While this population has been studied in the setting of severe traumatic brain injury (TBI), little is known about patients who undergo withdrawal of support (WOS) in the absence of severe TBI. The objective of this study was to describe this population and to help providers identify patients who would benefit from early involvement of palliative care resources.

Methods:   Patients who underwent WOS were identified from the 2013-2014 Trauma Quality Improvement Program. WOS patients were compared to those who died without WOS and to those who survived to discharge. Patients who died within the first 24 hours or who had a head AIS of 3 or greater were excluded.

Results:  WOS after 24 hours occurred in 2,301 patients. The median age was 71 years, 35.7% were women, and 95.4% had a blunt injury mechanism. Compared with patients who died in-hospital with full supportive measures, WOS patients had a higher ISS (21.6 vs. 12.5, p=0.001), were more likely to have in-hospital complications (71.4% vs. 41.6%, p<0.0001), and had a longer ICU length of stay (8.9 days vs. 7.5 days, p<0.0001).

Conclusion:  WOS occurs in many trauma patients without severe TBI, demonstrating the importance of having palliative care options and resources available for these patients. Direction of resources can be optimized using the characteristics, identified in this study, of patients who chose WOS.
 

16.14 Improved Outcomes after Operative Intervention for Secondary Lymphedema

E. I. Chang1, J. Balaicuis1, J. Buhler1, W. Morgan1, A. Nadler1, J. M. Farma1  1Fox Chase Cancer Center,Plastic And Reconstructive Surgery,Philadelphia, PA, USA

Introduction: The incidence of lymphedema has been increasing, and there is growing enthusiasm for its surgical management in the United States. Currently, the two most common surgical procedures are vascularized lymph node transplantation (VLNT) and lymphovenous bypass (LVBP). We present our early experience with patients undergoing surgical treatment of lymphedema at a tertiary referral center.

Methods: A retrospective review of a single surgeon's experience with all patients undergoing surgical management of lymphedema was performed. Patient demographics, including age, cancer type, body mass index (BMI), and history of radiation treatment, were recorded, as were postoperative outcomes.

Results: A total of 30 procedures were performed in 28 patients for the surgical management of lymphedema. All patients had grade II-III lymphedema and the majority had received radiation (75.0%). Treatment for breast cancer was the most common etiology of lymphedema in this series (n=15, 53.6%), followed by gynecologic malignancies (n=5, 17.9%) and sarcoma (n=4, 14.3%). Seven patients underwent VLNT using the supraclavicular lymph node basin, 16 patients underwent LVBP, and seven patients underwent simultaneous vascularized lymph node transfer in conjunction with autologous tissue breast reconstruction. All patients (100%) reported subjective improvement of the lymphedema after surgery, with decreased episodes of cellulitis. The average measurable reduction was 76.4% (range 19.5%-320.0%) over an average follow-up of 7.8 months. Two patients undergoing VLNT experienced major complications requiring operative intervention (9.1%).

Conclusion: VLNT and LVBP are safe and effective strategies for the surgical management of lymphedema, with excellent early results regardless of the type of malignancy. These techniques provide novel treatment options that could benefit all patients with lymphedema. Further studies with longer follow-up are necessary to compare the two surgical techniques.

 

16.12 Can Surgeons Still Be Scientists? Productivity Remains High Despite Competitive Funding

A. K. Narahari1, E. J. Charles1, J. H. Mehaffey1, R. B. Hawkins1, A. K. Sharma1, V. E. Laubach1, C. G. Tribble1, I. L. Kron1  1University Of Virginia,Division Of Thoracic And Cardiovascular Surgery, Department Of Surgery,Charlottesville, VA, USA

Introduction:  Obtaining federal funding for scientific research is as competitive as ever. Lung transplant research has been dominated by surgeons ever since the first lung transplant, performed in 1963 by Dr. James Hardy, but is that still the case? We hypothesized that even in this difficult era of funding, surgeon-scientists have remained among the most productive and impactful researchers in lung transplantation.

Methods:  Grants awarded by the National Institutes of Health (NIH) for the study of lung transplantation between 1985 and 2015 were identified by querying the NIH Research Portfolio Online Reporting Tool Expenditures and Results (RePORTER), an online database that combines NIH project databases, funding records, abstracts, full-text articles, and information from the U.S. Patent and Trademark Office. Five research areas were targeted: lung preservation, ischemia-reperfusion, ex vivo lung perfusion, anti-rejection medication, and airway healing. Grants not related to lung transplantation were excluded following a secondary search for "lung transplant" in the description page of NIH RePORTER. Papers identified from each grant were assigned the impact factor (Journal Citation Reports 2014) of the journal in which they were published. A grant impact metric was calculated for each grant by dividing the sum of the impact factors for all associated manuscripts by the total funding for that grant [Σ(impact factor of each paper in grant) / funding of grant]. Univariate analysis of grant impact metrics was completed.
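
For illustration, a direct translation of the grant impact metric into a short Python sketch; the grant IDs, funding amounts, and impact factors below are invented.

papers = [  # (grant_id, journal_impact_factor) -- hypothetical records
    ("R01-AAA", 5.2), ("R01-AAA", 3.1), ("R01-AAA", 12.8),
    ("R01-BBB", 2.4), ("R01-BBB", 4.0),
]
funding = {"R01-AAA": 2_500_000, "R01-BBB": 900_000}   # total $ per grant (invented)

impact_sum = {}
for grant_id, jif in papers:
    impact_sum[grant_id] = impact_sum.get(grant_id, 0.0) + jif

# grant impact metric = sum(impact factors) / funding, scaled per $100,000
metric = {g: impact_sum[g] / funding[g] * 100_000 for g in funding}
print(metric)   # e.g. {'R01-AAA': 0.844, 'R01-BBB': 0.711}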

Results: One hundred and eight lung transplantation grants were identified, totaling approximately $300 million and resulting in 2,300 papers published in 421 different journals. Surgery departments received $102.5 million over a total of 27 grants, while Internal Medicine departments received $118.8 million over 42 grants. There was no significant difference in the median grant impact metric between Surgery and Internal Medicine departments (4.2 vs. 5.4 per $100,000, p=0.86; Table 1), or between Surgery and Physiology, basic science, or Medicine subspecialty departments (all p>0.05; Table 1). Surgery departments had a significantly higher median grant impact metric compared with private companies (4.2 vs. 0 per $100,000, p<0.0001; Table 1).

Conclusion: Surgeon-scientists in the field of lung transplantation have received fewer grants and less total funding compared to other researchers but have maintained an equally high level of productivity and impact.  The dual-threat academic surgeon-scientist is an important asset to the research community and should continue to be supported by the NIH. 

 

16.11 The Readability of Psychosocial Wellness Patient Resources: Improving Surgical Outcomes

M. A. Kugar1, A. C. Cohen1, W. Wooden1, S. S. Tholpady1, M. Chu1  1Indiana University School Of Medicine,Indianapolis, IN, USA

Introduction:  Patient education, increasingly delivered through online resources, is an essential component of patient satisfaction and clinical outcomes. The average American adult reads at a seventh-grade level, yet because of the complexity of medical information the National Institutes of Health (NIH) and the American Medical Association (AMA) recommend that patient information be written at a sixth-grade reading level. Because mental illness and limited literacy commonly co-occur, appropriate readability of mental health and psychosocial wellness resources is of great importance. In this study, we investigated the readability of mental health resources currently available through the Veterans Health Administration (VA) web site and the web sites of the 2016-2017 Best Hospitals for Psychiatry according to U.S. News and World Report.

Methods:  An internet search was performed to identify patient information on mental health from the VA (the VA Health Library Encyclopedia and the VA Mental Health web site) and the top psychiatric hospitals. Seven mental health topics were included in the analysis: generalized anxiety disorder (GAD), bipolar disorder, major depressive disorder, post-traumatic stress disorder, schizophrenia, substance abuse, and suicide. Readability analyses were performed using the Flesch Reading Ease score, Gunning Fog index, Flesch-Kincaid Grade Level, Coleman-Liau Index, SMOG Index, Automated Readability Index, and Linsear Write Formula, which were combined into a Readability Consensus score. A 2-sample t-test was used to compare mean readability scores, and statistical significance was set at p < 0.05.
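
For illustration, a sketch of this type of readability scoring using the Python textstat package; the "consensus" below is a simple mean of grade-level estimates and may not match the study's exact consensus formula, and the sample text is invented.

import statistics
import textstat

def readability_consensus(text: str) -> float:
    # Average several grade-level formulas into one approximate grade level
    grade_scores = [
        textstat.gunning_fog(text),
        textstat.flesch_kincaid_grade(text),
        textstat.coleman_liau_index(text),
        textstat.smog_index(text),
        textstat.automated_readability_index(text),
        textstat.linsear_write_formula(text),
    ]
    return statistics.mean(grade_scores)

sample = "Generalized anxiety disorder involves persistent, excessive worry."
print(textstat.flesch_reading_ease(sample))     # ease score (higher = easier to read)
print(readability_consensus(sample))            # approximate grade level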

Results: Twelve of the Best Hospitals for Psychiatry 2016-2017 were identified; nine had educational material. Six of the nine cited the same resource, The StayWell Company, LLC, for at least one of the mental health topics analyzed. The VA Mental Health web site (http://www.mentalhealth.va.gov) had a significantly higher Readability Consensus than six of the top psychiatric hospitals (all p<0.05: p=0.0067, p=0.019, p=0.041, p=0.0093, p=0.0054, and p=0.0093). The overall average Readability Consensus for mental health information was a 9.55 grade level.

Conclusion: Online resources for mental health disorders are written at a more complex level than recommended by the NIH and AMA. Efforts to improve the readability of mental health and psychosocial wellness resources could benefit patient understanding and outcomes, and are of particular importance in a population with a high prevalence of limited literacy. Because surgical outcomes are correlated with patient mental health and psychosocial wellness, improving the readability of psychosocial wellness resources may in turn improve surgical outcomes.

 

16.10 Interdisciplinary Morbidity and Mortality Conference, A Method for System-Wide Improvement

C. J. Tignanelli1, G. G. Embree2, A. Barzin3  1University Of North Carolina,Department Of Surgery,Chapel Hill, NC, USA 2University Of North Carolina,Division Of Preventive Medicine,Chapel Hill, NC, USA 3University Of North Carolina,Department Of Family Medicine,Chapel Hill, NC, USA

Introduction:
Improvements in patient safety are critical to improving clinical outcomes. Morbidity and Mortality (M&M) conferences provide a unique opportunity to identify system issues for improvement; however, these conferences are historically department-specific and do not engage providers across multiple disciplines. Here we present a resident-led initiative aimed at improving the clinical learning environment by broadly engaging residents across disciplines in M&M and using the conference to facilitate resident involvement in quality improvement (QI) projects.

Methods:
In an effort to promote patient safety and identify new avenues for continuous QI, the UNC housestaff council implemented a monthly, resident-led hospital-wide M&M conference. Systems-based problems were identified during M&M conferences. Improvement task forces were established to develop and implement system-wide improvements in an effort to engage residents in QI and improve patient care.

The UNC housestaff executive council reviewed submitted cases on a monthly basis and selected for presentation cases that involved two or more service lines and resulted from systems issues. The cases were then presented by an interdisciplinary group of senior-level residents involved in the patient's care. Resident and attending panels facilitated discussions to target systems-based problems and identify potential systems solutions. Post-conference task forces then developed and implemented initiatives to improve the care delivered and bridge the quality gaps identified.

Results:
Fifteen cases were presented between May 2014 and January 2016, involving 16 service lines as well as nursing and pharmacy. Communication (60%), coordination or utilization of care (40%), poor process or workflow (33%), and inadequate training (27%) were the main systems issues identified, with an average of 1.5 systems issues per case. Post-conference task forces identified systems-based improvement projects in 78% (11 of 14) of cases, with 82% (9 of 11) of projects successfully implemented or currently in process. Examples of projects include standardization of central line insertion placement and training, development of hospital-wide mock codes, use of closed-loop communication and TeamSTEPPS during codes and major resuscitations, the ED-to-OR Red Trauma project (which aims to improve communication and streamline workflow among the ED, OR, and Trauma teams for critical traumas), and optimization of our psychiatric ED fast-track system.

Conclusion:
This is an example of a resident-led initiative that has resulted in continuous QI across our institution. Interdisciplinary M&M is an ideal setting to identify potential system issues. The development of post-conference task forces engages residents in QI and promotes initiatives to improve patient care.