65.08 Outcomes in Older Kidney Transplant Recipients with Prior Solid Organ Transplants

C. E. Haugen1, Q. Huang1, M. McAdams-DeMarco1,2, D. L. Segev1,2  1Johns Hopkins University School Of Medicine, Baltimore, MD, USA 2Johns Hopkins Bloomberg School Of Public Health, Epidemiology, Baltimore, MD, USA

Introduction: Outcomes in solid organ transplantation (SOT: heart, lung, liver) have improved, and SOT recipients are living longer with functioning grafts. However, between 7% and 21% of SOT recipients develop end-stage renal disease secondary to calcineurin inhibitor immunosuppression, and a growing number of SOT recipients will be listed for and undergo kidney transplantation (KT). Similar KT graft survival but worse overall survival has been reported in adult prior SOT recipients, but it is unclear whether these outcomes hold among older (age≥65) prior SOT recipients who undergo KT. In light of the aging SOT recipient population, KT outcomes should be evaluated, given the higher prevalence of comorbidities and frailty in older adults.

Methods: 40,730 older (age≥65) KT recipients were identified using the US Scientific Registry of Transplant Recipients (1/1/1990-12/31/2015). Adjusted Cox proportional hazards models were used to estimate differences in graft and patient survival after KT between prior SOT and no prior SOT recipients. 
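As a rough illustration of the adjusted Cox modeling described above (a sketch on simulated data, not the authors' SRTR analysis; variable names are hypothetical):

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
prior_sot = rng.binomial(1, 0.2, n)               # 1 = prior heart/lung/liver transplant
age = rng.uniform(65, 80, n)
# illustrative exponential survival times with a higher hazard for prior-SOT recipients
hazard = 0.05 * np.exp(0.35 * prior_sot + 0.02 * (age - 65))
death_time = rng.exponential(1 / hazard)
censor_time = rng.exponential(8, n)
df = pd.DataFrame({
    "time": np.minimum(death_time, censor_time),
    "event": (death_time <= censor_time).astype(int),
    "prior_sot": prior_sot,
    "age": age,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()  # exp(coef) for prior_sot is the adjusted hazard ratio (aHR)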

Results: Since 1990, 948 prior SOT recipients (485 liver, 396 heart, 67 lung) have undergone KT after age 65. The number of older KT recipients with prior SOTs has increased annually since 1990, with a range of 0-74 performed per year. Prior SOT recipients were more likely to be male and Caucasian, to have renal failure from calcineurin inhibitors, to undergo a pre-emptive KT, and to receive a living donor graft than recipients with no prior SOT. Five-year death-censored graft loss was 88% for recipients with prior SOT and 88% with no prior SOT; the corresponding five-year mortality was 71% and 64% (Figure). After adjustment, death-censored graft loss (aHR: 1.25, 95%CI: 1.01-1.54, p=0.04) and mortality (aHR: 1.43, 95%CI: 1.28-1.59, p<0.001) were greater for older prior SOT recipients than for recipients with no prior SOT. Regardless of prior SOT type, mortality for older prior SOT recipients was greater than for recipients with no prior SOT (lung aHR: 4.06, 95%CI: 2.35-7.00; heart aHR: 1.35, 95%CI: 1.16-1.58; and liver aHR: 1.41, 95%CI: 1.22-1.65).

Conclusions: Older KT recipients with a prior SOT have worse graft and overall survival than those without a prior SOT. Given these worse outcomes, appropriate and careful selection of older KT candidates with prior SOTs is imperative.

65.09 Effects of Kidney Transplant on the Outcomes of Surgical Repair of Abdominal Aortic Aneurysm

H. Albershri1, W. Qu1, M. Nazzal1, J. Ortiz1  1The University Of Toledo Medical Center, Department Of Surgery, Toledo, OH, USA

Objectives: To investigate the impacts of history of kidney transplant (Tx) on the in-hospital outcomes of surgical repair (SR) of abdominal aortic aneurysm (AAA).

Methods:  All AAA patients from 2008 to 2013 were selected using International Classification of Diseases rev. 9 (ICD-9) codes from the National Inpatient Sample (NIS) database from the Healthcare Cost and Utilization Project (HCUP). History of Tx, comorbidities, SR (open (OR) or endovascular repair (EVAAAR)), and postoperative complications were also identified by ICD-9 codes. The Elixhauser comorbidity index (ECI) was calculated based on the method published by van Walraven, et al. In-hospital mortality rate (IMR), length of stay (LOS), total hospital charge (TC), and postoperative complications were compared between Tx and non-Tx patients. Binary logistic regression and linear regression were used to adjust for confounding factors. IBM SPSS ver. 23 was used for all statistical analyses. The Type I error level was set at 0.05.
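A minimal sketch of the kind of adjusted binary logistic regression described above (simulated data with exaggerated prevalences and hypothetical variable names; the study itself used SPSS):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
d = pd.DataFrame({
    "died_in_hospital": rng.binomial(1, 0.05, n),
    "prior_kidney_tx":  rng.binomial(1, 0.15, n),   # prevalence inflated for the toy example
    "age":              rng.normal(75, 10, n),
    "eci_vanwalraven":  rng.integers(-5, 20, n),
})

fit = smf.logit("died_in_hospital ~ prior_kidney_tx + age + eci_vanwalraven", data=d).fit()
print(np.exp(fit.params))      # adjusted odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals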

Results: 284,451 patients in the NIS were diagnosed with AAA over the 6 years. Only 389 (0.14%) of them had a history of Tx. Tx patients were significantly younger (67.8±9.5 vs. 75.9±10 years old) and included a higher proportion of males (78.1% vs. 67.4%) than non-Tx patients (both p<.001). Of the 18.3% (n=52,168) of patients who underwent SR, the majority of procedures were EVAAAR (78.3%). There were no significant differences in the incidence or type of SR between Tx and non-Tx patients (Table 1, both p>.05). The Tx group had a significantly higher ECI than the non-Tx group (median: 6 vs. 2, p<.001). There were no significant differences in common postoperative complications, LOS, TC, or IMR between Tx and non-Tx patients (Table 1, all p>.05). Multivariable analysis also showed no significant differences in these in-hospital outcomes between the Tx and non-Tx groups after adjusting for confounding factors such as demographics, hospital characteristics, and ECI (Table 1, all p>.05).

Conclusion: Although Tx patients tend to have a higher comorbidity burden, they did not show significant increases in postoperative complication rates, IMR, LOS, or TC compared with non-Tx patients. Our study is limited to in-hospital outcomes, and statistical power was limited by the small sample size of the Tx group.

 

65.10 Hospital Length of Stay After Pediatric Liver Transplantation

K. Covarrubias1, X. Luo1, D. Mogul2, J. Garonzik-Wang1, D. L. Segev1  1Johns Hopkins University School Of Medicine, Surgery, Baltimore, MD, USA 2Johns Hopkins University School Of Medicine, Pediatric Gastroenterology & Hepatology, Baltimore, MD, USA

Introduction:  Pediatric liver transplantation is a life-saving treatment modality that requires extensive multidisciplinary assessment of patients and their families in the pre-transplantation period. In order to better inform medical decision making and discharge planning, and ultimately provide more personalized patient counseling, we sought to identify recipient, donor, and surgical characteristics that influence hospital length of stay (LOS) following pediatric liver transplantation.

Methods:  We studied 3,956 first-time pediatric (<18 years old) liver transplant recipients between 2002 and 2016 using SRTR data. We excluded patients ever listed as status 1A and patients who died prior to discharge. We used multi-level negative binomial regression to estimate incidence rate ratios (IRR) for hospital LOS, accounting for center-level variation. For recipients <12 years old, the PELD (Pediatric End-Stage Liver Disease) score was used for analysis; for older recipients, the MELD (Model for End-Stage Liver Disease) score was used.
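A rough sketch of a negative binomial LOS model reporting IRRs (simulated data and hypothetical covariate names; the study's multi-level structure would additionally require a random intercept per center, which is omitted here):

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 800
d = pd.DataFrame({
    "los_days":          rng.negative_binomial(5, 0.25, n),
    "meld_peld_over_14": rng.binomial(1, 0.4, n),
    "living_donor":      rng.binomial(1, 0.1, n),
    "weight_over_10kg":  rng.binomial(1, 0.7, n),
})

nb = smf.glm("los_days ~ meld_peld_over_14 + living_donor + weight_over_10kg",
             data=d, family=sm.families.NegativeBinomial()).fit()
print(np.exp(nb.params))  # incidence rate ratios: values > 1 mean a longer expected LOS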

Results: The median LOS in our study population was 15 hospital days after transplantation. Our analysis determined that a MELD/PELD score >14 (MELD 15-25: IRR 1.14, 95% CI 1.08-1.21; MELD/PELD 25-29: IRR 1.39, 95% CI 1.27-1.52; MELD/PELD >30: IRR 1.28, 95% CI 1.17-1.41), exception points (IRR 1.12, 95% CI 1.06-1.18), partial grafts (IRR 1.23, 95% CI 1.16-1.31), and Hispanic ethnicity (IRR 1.07, 95% CI 1.00-1.15) were associated with a longer LOS (p<0.05). A graft from a live donor (IRR 0.88, 95% CI 0.81-0.96), recipient weight greater than 10 kg (10-35 kg: IRR 0.80, 95% CI 0.76-0.85; >35 kg: IRR 0.66, 95% CI 0.61-0.70), and non-hospitalized patient status (IRR 0.80, 95% CI 0.71-0.90) were associated with a decreased LOS (p<0.05).

Conclusion: Our findings suggest that the ability to transplant patients at lower MELD/PELD scores and increased use of grafts from living donors would decrease healthcare utilization in the immediate postoperative period. Hispanic ethnicity and public health insurance were also associated with a longer LOS; however, our model does not account for healthcare disparities faced by these groups, including socioeconomic status and language barriers.

 

65.06 Preoperative Thromboelastography Predicts Transfusion Requirements During Liver Transplantation

J. T. Graff1, V. K. Dhar1, C. Wakefield1, A. R. Cortez1, M. C. Cuffy1, M. D. Goodman1, S. A. Shah1  1University Of Cincinnati, Department Of Surgery, Cincinnati, OH, USA

Introduction: Thromboelastography (TEG) has been shown to provide an accurate assessment of patients’ global coagulopathy and hemostatic function. While use of TEG has grown within the field of liver transplantation (LT), the relative importance of TEG values obtained at various stages of the operation and their association with outcomes remain unknown. Our goal was to assess the prevalence of TEG-based coagulopathy in patients undergoing LT, and determine whether preoperative TEG is predictive of transfusion requirements during LT.

Methods: An IRB-approved, retrospective review of 380 consecutive LTs between January 2013 and May 2017 was performed. TEGs obtained during the preoperative, anhepatic, neohepatic, and initial postoperative phases were evaluated. Patients with incomplete data were excluded from the analysis, resulting in a study cohort of 110 patients. TEGs were categorized as hypocoagulable, hypercoagulable, or normal using a previously described composite measure of R time, k time, alpha angle, and maximum amplitude. Perioperative outcomes including transfusion requirements, need for temporary abdominal closure, and rates of reoperation for bleeding were evaluated.

Results: Of patients undergoing LT, 11.8% were hypocoagulable, 22.7% were hypercoagulable, and 65.5% were normal at the start of the operation. 46.4% of patients finished the operation in a different category of coagulation from which they started. Of patients starting LT hypocoagulable, 15.4% finished hypocoagulable, none finished hypercoagulable, and 84.6% finished normal. Patients with hypocoagulable preoperative TEGs were found to require more units of pRBC (12 vs. 6 vs. 6, p=0.04), FFP (24 vs. 13 vs. 8, p<0.01), cryoprecipitate (4 vs. 2 vs. 1, p<0.01), platelets (4 vs. 2 vs. 1, p <0.01), and cell saver (4.6 liters vs. 2.8 vs. 1.9, p<0.01) during LT compared to those with normal or hypercoagulable preoperative TEGs. Despite these higher transfusion requirements, there were no significant differences in rate of temporary abdominal closure, unplanned reoperation, ICU length of stay, or 30-day readmission (all p > 0.05) between patients with hypocoagulable, hypercoagulable, or normal preoperative TEGs.

Conclusion: Preoperative thromboelastography may be predictive of transfusion requirements during LT. By consistently evaluating the preoperative TEG, surgeons can identify patients who may be at higher risk for intraoperative coagulopathy and require increased perioperative resource utilization.
 

65.07 Evaluating Length of Stays with Electronic Medical Record Interoperability

M. Cheung1, P. Kuo1, A. Cobb1  1Loyola University Chicago Stritch School Of Medicine, Maywood, IL, USA

Introduction:

While the technology industry continues to improve its software, the healthcare industry still lags far behind in adopting such advancements. Electronic medical record (EMR) interoperability is particularly important given the fluidity of care a patient may receive from multiple sources. We hypothesized that transplant patients who received care at hospitals with high EMR interoperability scores would have shorter adjusted lengths of stay (aLOS).

Methods:

We utilized the 2013 HCUP State Inpatient Database (SID) for New York and Washington and identified roughly 2000 patients who had received a heart, lung, pancreas, spleen, kidney, or bone marrow transplant. We created interoperability scores ranging from 0 to 44 by summing the answers to questions designated as pertaining to Health Information Exchange on the 2013 American Hospital Association Information Technology (AHAIT) survey. We calculated the aLOS by dividing the unadjusted LOS by Medicare Severity Diagnosis Related Group (MS-DRG)-based weights from the Centers for Medicare & Medicaid Services (CMS), and calculated geometric means of the aLOS in order to diminish the impact of outliers. We then correlated the calculated interoperability scores with the mean aLOS.
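A minimal sketch of the adjustment and averaging described above (the LOS values and MS-DRG weights are made-up placeholders):

import numpy as np

los_days     = np.array([4.0, 12.0, 7.0, 30.0, 9.0])   # unadjusted lengths of stay
msdrg_weight = np.array([1.1,  2.5, 1.8,  3.2, 2.0])   # CMS MS-DRG relative weights

alos = los_days / msdrg_weight                          # adjusted LOS per discharge
geometric_mean_alos = np.exp(np.mean(np.log(alos)))     # dampens the influence of outliers
print(round(geometric_mean_alos, 2))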

Results:
We found that the mean aLOS for transplantation patients decreased as the interoperability score increased, within 95% confidence intervals. Adjusted lengths of stay for patients receiving care at hospitals with the lowest interoperability score of 12 were 3.33 times longer than at hospitals with the highest interoperability score of 32.5 (p<0.001).

Conclusion:

MS-DRG weights are calculated based on expected hospital cost and severity of the patient’s disease state. Therefore, aLOS serves as an indirect proxy for efficiency related to cost as well as efficiency related to time. Our findings, although not causal in nature, suggest that hospitals could save significant time and money by increasing their ability to exchange health information between different groups and facilities.

65.04 Impact of Donor Hepatic Arterial Anatomy on Clinical Graft Outcomes in Liver Transplantation

J. R. Schroering2, C. A. Kubal1, T. J. Hathaway2, R. S. Mangus1  1Indiana University School Of Medicine, Transplant Division/Department Of Surgery, Indianapolis, IN, USA 2Indiana University School Of Medicine, Indianapolis, IN, USA

Introduction:  The arterial anatomy of the liver has significant variability. When a liver graft is procured for transplant, the donor hepatic artery anatomy must be identified and preserved to avoid injury. Reconstruction of the hepatic artery is often required in cases of accessory or replaced vessels. This study reviews a large number of liver transplants and summarizes the arterial anatomy. Clinical outcomes include hepatic artery thrombosis (HAT), early graft loss and long term graft survival.

 

Methods:  All liver transplants at a single center over a 10-year period were reviewed. The arterial anatomy was determined from a combination of the organ procurement record and the liver transplant operative note. Anatomic variants and reconstructions were noted. For this cohort, all accessory/replaced right hepatic arteries were reconstructed to the gastroduodenal artery (GDA) with 7-0 Prolene suture on the back table prior to implantation. All accessory/replaced left hepatic arteries were left intact from their origin at the left gastric and hepatic artery when possible, though occasional reconstruction to the GDA with 7-0 Prolene suture was performed. Post-operative anticoagulation was not utilized routinely. Antifibrinolytic therapy was administered at initial incision in all cases using either aprotinin or epsilon aminocaproic acid. A single Doppler ultrasound (US) was obtained post-operatively in the critical care unit to confirm arterial and venous flow. No other imaging (intraoperative or post-operative) was obtained unless there was an indication.

Results: The records for 1145 patients were extracted. The median recipient age was 57 years, body mass index 28.4, and MELD 20. Retransplant procedures comprised 4% of the cohort. Hepatic arterial anatomy types included: normal (68%), accessory/replaced left (16%), accessory/replaced right (10%), accessory/replaced right and left (4%), and other variants (2%). There were 222 cases (19%) in which back table arterial reconstruction was required. The overall incidence of HAT was 1%. The highest rate of HAT was in liver grafts with accessory right and left hepatic arteries. The hepatic arterial resistive indices measured on post-operative Doppler US did not differ by hepatic artery anatomy. One-year survival for all grafts was above 90%, but livers with an accessory right hepatic artery (only) had lower survival at 10 years when compared with grafts with normal anatomy (62% versus 75%).

Conclusion: Overall, 68% of livers had standard anatomy, with accessory/replaced left (16%) and right (10%) arteries being the next most common variants. All anatomic variants had good 1-year graft survival, though liver grafts with an accessory/replaced right hepatic artery had the lowest survival at 10 years.

65.05 Sarcopenia Is a Better Predictor of Survival than Serologic Markers in Elderly Liver Transplant Patients

W. J. Bush1, A. Cabrales1, H. Underwood1, R. S. Mangus1  1Indiana University School Of Medicine, Indianapolis, IN, USA

Introduction: An increasing number of liver transplant (LT) patients are geriatric (≥ 60 years).  Recent research suggests that measures of frailty, such as nutrition status, may be important predictors of surgical outcomes. This study evaluates the impact of objective measures of nutritional status on post-transplant perioperative and long term outcomes for geriatric liver transplant patients. 

Methods:  Inclusion criteria comprised all geriatric liver transplant patients at a single center over a 16-year period. Measures of nutrition status included preoperative core muscle mass, perinephric and subcutaneous adipose volume, as well as standard serologic markers of nutritional status (albumin, total protein, and cholesterol). Total psoas muscle area and total perinephric and subcutaneous adipose volumes were measured from preoperative computed tomography (CT) scans at the L2/L3 disc space and scaled to patient height. Outcomes included length of hospital stay and patient survival.

Results: There were 564 patients included in the analysis, of whom 446 had preoperative CT scans available. There was poor correlation between serologic markers of nutrition and CT measures of tissue volume. Serologic markers of nutrition were poor predictors of survival, but abnormal values were associated with increased length of stay, prolonged ventilator requirement, and non-home discharge. In contrast, patients with severe sarcopenia and poor subcutaneous and visceral adipose stores had worse long-term survival, but these findings correlated poorly with perioperative outcomes. Cox regression analysis demonstrated decreased long-term survival for patients with severe sarcopenia.

Conclusion: In this cohort of geriatric LT recipients, common serologic markers of nutrition were associated with perioperative clinical outcomes, while CT measures of muscle and adipose stores were more predictive of early and intermediate term survival outcomes. These results support the need for the further development of frailty measures that assess core tissue volume and physiologic strength.

 

65.03 The Role of FDG-PET in Detecting Rejection after Liver Transplantation

A. M. Watson1, C. M. Jones1, E. G. Davis1, M. Eng1, R. M. Cannon1, P. Philips1  1University Of Louisville, Department Of Surgery, Louisville, KY, USA

Introduction:
Acute cellular rejection (ACR) continues to be a major problem in solid organ transplantation. ACR is associated with activation of T cells, which have increased glucose uptake and utilization. This physiologic activity could be exploited for the detection of ACR. This study was designed to evaluate the effectiveness of 18F-fluoro-2-deoxyglucose positron emission tomography (FDG-PET) in detecting acute rejection in the clinical setting.

Methods:
FDG-PET studies were performed on 88 orthotopic liver transplant patients (41 men, 47 women; mean age 51 +/- 6 years) at 7 and 17 days post-operatively (1st and 2nd PET, respectively). Additional studies were performed if there was suspicion of rejection and at resolution of rejection (3rd and 4th PET, respectively). The FDG-PET images were matched to 107 non-transplant patients (52 +/- 20 years) who served as controls. The controls underwent 2 FDG-PET studies during the same time intervals (1st and 2nd PET). A circular region of interest (ROI) was placed over the liver for semi-quantitative evaluation of FDG-PET images by means of standard uptake values (SUVs).
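For reference, a body-weight-normalized SUV is commonly computed as the ROI activity concentration divided by injected dose per body weight; a minimal sketch (the inputs are illustrative and not taken from this study):

def suv_body_weight(roi_activity_kbq_per_ml: float, injected_dose_mbq: float, weight_kg: float) -> float:
    """SUV = ROI activity concentration / (injected dose / body weight), assuming 1 g of tissue ~ 1 mL."""
    injected_dose_kbq = injected_dose_mbq * 1000.0
    return roi_activity_kbq_per_ml / (injected_dose_kbq / (weight_kg * 1000.0))

print(round(suv_body_weight(5.2, 370.0, 75.0), 2))  # ~1.05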

Results:
There was no significant difference between the SUVs of the baseline FDG-PET studies (1st and 2nd PET) post-transplant and the SUVs obtained in non-transplanted patients. The mean SUVs normalized for body weight in post-orthotopic liver transplant patients measured 1.93 +/- 0.5 (p = 0.122); the mean SUVs for non-transplant patients were 2.10 +/- 0.6 (p = 0.210). Eighteen of 88 patients in our study (20.5%) had histologically proven ACR during a 30 +/- 11 day follow-up. There was no significant difference in 1st PET SUVs between non-rejecters and rejecters (mean 2.05, SD 0.46, median 2.19, IQR 1.75-2.34 vs. mean 1.82, SD 0.40, median 1.77, IQR 1.76-2.13; p=0.127). Within the rejection cohort, the SUVs from the 3rd PET (rejection) were higher than those from the 1st PET (baseline). The mean SUV of the 3rd PET measured 2.41 (SD 0.48; median 2.5, IQR 2.14-2.74) compared to the baseline 1st PET mean SUV of 1.82 (SD 0.41; median 1.77, IQR 1.76-2.13), and this difference was statistically significant (p<0.001).

Conclusion:
To date, the role of FDG-PET in the diagnosis of ACR has not been evaluated. Semi-quantitative analysis using SUVs showed a statistically significant increase between baseline and rejection FDG-PET studies. Additional prospective validation studies are essential to define the role of FDG-PET as an early marker for acute cellular rejection.
 

65.01 Thrombolysis during Liver Procurement Prevents Ischemic Cholangiopathy in DCD Liver Transplantation

A. E. Cabrales1, R. S. Mangus1, J. A. Fridell1, C. A. Kubal1  1Indiana University School Of Medicine, Department Of Transplant Surgery, Indianapolis, IN, USA

Introduction: The rate of donation after circulatory death (DCD) liver transplantation has decreased in recent years as a result of inferior graft and patient survival compared with donation after brain death (DBD) transplantation. Ischemic cholangiopathy (IC) is the primary cause of these inferior outcomes and is associated with a high rate of graft loss, retransplantation, and recipient mortality. Development of IC in liver transplant recipients appears to be associated with peribiliary arterial plexus microthrombi that can form in DCD donors. Our center has demonstrated success using a tissue plasminogen activator (tPA) flush during DCD organ procurement to prevent the formation of microthrombi and thereby prevent IC. This study investigates the long-term impact of the tPA flush on graft outcomes and program use of DCD organs.

Methods: All records for liver transplants over a 15-year period at a single center were reviewed and data extracted. DCD organ procurement followed carefully established protocols, including a 5-minute wait time after determination of cardiac death prior to initiation of the procurement procedure. The procurement consisted of rapid cannulation of the aorta, clamping of the aorta, and decompression through the vena cava. Preservation consisted of an initial flush with histidine-tryptophan-ketoglutarate solution (HTK), followed by infusion of tPA in 1 L of normal saline, then further flushing with HTK until the effluent was clear. Total flush volume was less than 5 L.

Results: There were 57 tPA procurements (48%) and 62 non-tPA procurements (52%). Patients receiving the tPA grafts were older and had a higher MELD score. The tPA grafts had shorter cold and warm ischemia times. The grafts procured using tPA had better survival at 7 and 90 days (p=0.09 and p=0.06) and at 1 year (95% versus 79%, p=0.01). Cox regression showed significantly better long-term survival for tPA grafts (88% versus 45% at 10 years; p<0.01). Improved outcomes with thrombolytic therapy in DCD liver procurement shifted the use of DCD grafts at our center toward extended criteria donors who were older, heavier, and out of state. Use of these higher-risk DCD donors did not change clinical outcomes.

Conclusion: Our center has shown that optimization of perioperative conditions, including use of an intraoperative thrombolytic flush, significantly lowers the incidence of IC in DCD liver grafts. With improved outcomes, the percentage of DCD grafts at our center has increased, including the use of extended criteria DCD livers, without a worsening of outcomes.

65.02 The Effect of Socioeconomic Status on Patient-Reported Outcomes after Renal Transplantation

A. J. Cole1, P. K. Baliga1, D. J. Taber1  1Medical University Of South Carolina, Charleston, SC, USA

Introduction:  Research analyzing the effect of socioeconomic status (SES) on renal transplant outcomes has demonstrated conflicting results. However, recent studies demonstrate that certain patient-reported outcomes (PROs), such as depression, medication non-adherence, health literacy, social support, and self-efficacy, can influence clinical outcomes in renal transplant recipients. Our objectives were to examine the effect of SES on PROs and to determine whether there is an association between SES, PROs, and healthcare utilization.

Methods:  We performed a post-hoc analysis of 52 patients enrolled in an ongoing prospective trial aimed at improving cardiovascular disease risk factor control in renal transplant recipients. At baseline, patients completed detailed surveys assessing SES and PROs. Patients were divided into low and high SES cohorts based on income, education, marital status, insurance, and employment. All patients were given 12 self-reported surveys in the domains of medication-related issues, self-care and knowledge, psychosocial issues, and healthcare. Analyses examined the associations between the 12 PRO surveys, SES measures, and healthcare utilization, including the rates of hospitalizations, ED visits, and clinic visits between the date of transplant and enrollment in the trial.

Results: The low SES cohort (n=16, 30.8%) experienced more severe depression (5.75 vs 3.0, p=0.022), higher rates of inadequate health literacy (3.42 vs 1.68, p=0.022) and perceived stress (2.743 vs 3.266, p=0.027), along with significantly less self-efficacy (6.971 vs 8.214, p=0.006) and social support (3.86 vs 4.408, p=0.012; see Figure 1). Low SES was associated with a 60% higher rate of hospitalization and 90% higher rate of ED visits per patient-year. Medication non-adherence was also associated with more hospitalizations and ED visits.

Conclusion: This analysis demonstrates that low SES was significantly associated with negative PROs, including depression, health literacy, social support, stress, and self-efficacy. Further, low SES and medication non-adherence were associated with higher rates of healthcare utilization.

 

64.09 Role of the Patient-Provider Relationship in Hepato-Pancreato-Biliary Diseases

E. J. Cerier1, Q. Chen1, E. Beal1, A. Paredes1, S. Sun1, G. Olsen1, J. Cloyd1, M. Dillhoff1, C. Schmidt1, T. Pawlik1  1Ohio State University, Department Of Surgery, Columbus, OH, USA

Introduction: An optimal patient-provider relationship (PPR) may improve medication and appointment adherence and healthcare resource utilization, as well as reduce healthcare costs. The objective of the current study was to define the impact of PPR on healthcare outcomes among a cohort of patients with hepato-pancreato-biliary (HPB) diseases.

Methods: Utilizing the Medical Expenditure Panel Survey Database from 2008-2014, patients with an HPB disease diagnosis were identified. PPR was determined using a weighted score based on survey items from the Consumer Assessment of Healthcare Providers and Systems (CAHPS). Specifically, patient responses to questions concerning access to healthcare providers, responsiveness of healthcare providers, patient-provider communication, and shared decision-making were obtained. Patient-provider communication was stratified into three categories using a composite score that ranged from 4 to 12 (score 4-7: "poor," 8-11: "average," and 12: "optimal"). The relationship between PPR and healthcare outcomes was analyzed using regression analyses and generalized linear modeling.
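A small sketch of the stratification rule described above (the thresholds are those stated in the text; the function name is ours):

def ppr_category(composite_score: int) -> str:
    """Map a CAHPS patient-provider communication composite (range 4-12) to a PPR category."""
    if not 4 <= composite_score <= 12:
        raise ValueError("composite score must be between 4 and 12")
    if composite_score <= 7:
        return "poor"
    if composite_score <= 11:
        return "average"
    return "optimal"

print([ppr_category(s) for s in (5, 9, 12)])  # ['poor', 'average', 'optimal']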

Results: Among 594 adult patients, representing 6 million HPB patients, reported PPR was "optimal" (n=210, 35.4%), "average" (n=270, 45.5%), and "poor" (n=114, 19.2%). Uninsured (uninsured: 36.3% vs. Medicaid: 28.8% vs. Medicare: 15.4% vs. private: 14.0%; p=0.03) and poor-income (high: 14.0% vs. middle: 12.8% vs. low: 21.5% vs. poor: 24.3%; p=0.03) patients were more likely to report "poor" PPR. In contrast, other factors such as race, sex, education, and age were not associated with PPR. In addition, there was no association between PPR and overall annual healthcare expenditures ("poor" PPR: $19,405, CI $15,207-23,602 vs. "average" PPR: $20,148, CI $15,538-24,714 vs. "optimal" PPR: $19,064, CI $15,344-22,784; p=0.89) or out-of-pocket expenditures ("poor" PPR: $1,341, CI $618-2,065 vs. "average" PPR: $1,374, CI $1,079-1,668 vs. "optimal" PPR: $1,475, CI $1,150-1,800; p=0.77). Patients who reported "poor" PPR were also more likely to self-report poor mental health scores (OR 5.0, CI 1.3-16.7) and to have high emergency room utilization (≥2 visits: OR 2.4, CI 1.2-5.0) (both p<0.05). Patients with reported "poor" PPR did not, however, have worse physical health scores or more previous inpatient hospital stays (both p>0.05) (Figure).

Conclusion: Patient self-reported PPR was associated with insurance and socioeconomic status.  In addition, patients with perceived "poor" PPR were more likely to have poor mental health and be high utilizers of the emergency room.  Efforts to improve PPR should focus on these high-risk populations.

64.10 Preoperative Frailty Assessment Predicts Short-Term Outcomes After Hepatopancreatobiliary Surgery

P. Bou-Samra1, D. Van Der Windt1, P. Varley1, X. Chen1, A. Tsung1  1University Of Pittsburgh, Hepatobiliary & Pancreatic Surgery, Pittsburgh, PA, USA

Introduction: Given the aging of our population, increasing numbers of elderly patients are evaluated for surgery. Preoperative assessment of frailty, defined as the lack of physiological reserve, is a novel concept that has recently gained interest as a predictor of postoperative complications. The comprehensive Risk Analysis Index (RAI) for frailty has been shown to predict mortality in a large cohort of surgical patients. The RAI is now measured in all patients presenting to surgical clinics at our institution. Initial analysis showed that patients with hepatopancreatobiliary disease have the highest frailty scores, second only to patients presenting for cardiovascular surgery. Therefore, the aim of this study was to evaluate the performance of the RAI in predicting short-term post-operative outcomes in patients undergoing hepatopancreatobiliary surgery, a significantly frail patient population.

Methods: From June-December 2016, the RAI was determined in 162 patients prior to surgery. The RAI includes 12 variables evaluating factors such as age, kidney disease, congestive heart failure, cognitive functioning, independence in daily activities, and weight loss. Data on 30-day post-operative outcomes were prospectively collected. Complications were scored according to the Clavien-Dindo classification and summarized in the Comprehensive Complication Index (CCI). Other assessed post-operative outcomes included ICU admission, length of stay, and rates of readmission. Logistic and linear regressions were performed to assess the association between RAI score and each measured outcome. A multivariate analysis was performed to control for the magnitude of the operation, coronary artery disease, cancer stage, and intraoperative blood loss.

Results: Our cohort of 162 patients (79 M, 83 F; median age 67, range 19-95) included 55 patients undergoing a minor operation, 56 undergoing an intermediate operation, and 51 undergoing major surgery. RAI scores ranged from 0 to 25, with a median of 7. With every unit increase in RAI score, length of stay increased by 5% (IRR 1.05; 95%CI 1.04-1.07, p<0.01), the odds of discharge to a special facility increased by 10% (OR 1.10; 95%CI 1.02-1.17, p<0.01), the odds of admission to the ICU increased by 11% (OR 1.11; 95%CI 1.02-1.20, p=0.01), the expected ICU length of stay increased by 17% (IRR 1.17; CI 1.06-1.30), the odds of readmission increased by 8% (OR 1.08; CI 0.99-1.17, p=0.054), and the CCI increased by 1.6 units (coefficient 1.60; CI 0.61-2.58, p<0.01). In multivariate analysis, frailty remained positively associated with CCI (p=0.01).
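To make the per-point estimates above concrete, the ratios compound multiplicatively across RAI points; a quick back-of-the-envelope calculation:

# Under the reported IRR of 1.05 per RAI point, a patient whose RAI is 10 points higher
# would be expected to have roughly a 1.63-fold longer length of stay (illustrative arithmetic only).
irr_per_point = 1.05
points_higher = 10
print(round(irr_per_point ** points_higher, 2))  # ~1.63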

Conclusion: The RAI score is predictive of short-term post-operative outcomes after hepatopancreatobiliary surgery. Pre-operative risk assessment with RAI could aid in decision-making for treatment allocation to surgery versus less morbid locoregional treatment options in frail patients. 

 

64.07 Cost-Effectiveness of Rescuing Patients from Major Complications after Hepatectomy

J. J. Idrees1, C. Schmidt1, M. Dillhoff1, J. Cloyd1, E. Ellison1, T. M. Pawlik1  1The Ohio State University, Wexner Medical Center, Department Of Surgery, Columbus, OH, USA

Introduction:  Major complications after liver resection increase costs and are associated with higher mortality. Failure to rescue (FTR) has been inversely correlated with hospital volume. We sought to determine whether high- or medium-volume centers were more cost-effective than low-volume centers at rescuing patients from major complications following hepatic resection.

Methods:  The Nationwide Inpatient Sample (NIS) was used to identify 96,107 liver resections that occurred between 2011-2011. Hospitals were categorized into high- (HV; 150+ cases/year), medium- (MV; 51-149 cases/year), and low-volume (LV; 1-49 cases/year) centers. Cost-effectiveness analyses were performed using propensity score matched cohorts adjusted for patient comorbidities for HV vs. LV (8,924 pairs) and MV vs. LV (18,158 pairs) centers. The incremental cost-effectiveness ratio (ICER) was calculated to assess the cost-effectiveness of HV and MV centers relative to LV centers. The ICER was calculated at a willingness-to-pay threshold of $50,000. Sensitivity analyses were performed using the bootstrap method with 10,000 replications.
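A hedged sketch of an ICER with a bootstrap percentile interval (simulated cost and survival arrays rather than NIS data; the matched-pair structure of the study is ignored here):

import numpy as np

rng = np.random.default_rng(3)
n = 500
cost_hv,  cost_lv  = rng.normal(32000, 8000, n), rng.normal(30000, 8000, n)
years_hv, years_lv = rng.normal(9.1, 2.0, n),    rng.normal(8.6, 2.0, n)

def icer(c1, c0, e1, e0):
    # incremental cost per life-year gained
    return (c1.mean() - c0.mean()) / (e1.mean() - e0.mean())

point_estimate = icer(cost_hv, cost_lv, years_hv, years_lv)
boot = []
for _ in range(10000):
    idx = rng.integers(0, n, n)            # resample patients with replacement
    boot.append(icer(cost_hv[idx], cost_lv[idx], years_hv[idx], years_lv[idx]))
print(round(point_estimate), np.percentile(boot, [2.5, 97.5]).round())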

Results: The overall incidence of complications following hepatectomy was 14.9% (n=14,313), which was roughly comparable among centers regardless of volume (HV 14.2% vs. MV 14.3% vs. LV 15.4%; p<0.001). In contrast, while overall FTR was 11.2%, the FTR rate was substantially lower among HV centers (HV: 7.7%, MV: 11.2%, LV: 12.3%; p<0.001). Both HV and MV centers were more cost-effective than LV centers at rescuing patients from a major complication. Specifically, the incremental cost per year of life gained was $3,296 at HV versus $4,182 at MV centers compared with LV hospitals. HV centers were particularly cost-effective at managing certain complications. For example, compared to LV centers, HV hospitals had lower costs with a higher survival benefit in managing bile duct complications (ICER: -$1,580) and sepsis (ICER: -$2,760).

Conclusion: Morbidity following liver resection was relatively common as 1 in 7 patients experienced a complication. Not only was FTR lower at HV hospitals, but the management of most major complications was also more cost-effective at HV centers. 
 

64.08 Cost burden of overtreating low grade pancreatic cystic neoplasms

J. M. Sharib1, K. Wimmer1, A. L. Fonseca3, S. Hatcher1, L. Esserman1, A. Maitra2, Y. Shen4, E. Ozanne5, K. S. Kirkwood1  1University Of California – San Francisco, Surgery, San Francisco, CA, USA 2University Of Texas MD Anderson Cancer Center, Pathology, Houston, TX, USA 3University Of Texas MD Anderson Cancer Center, Surgery, Houston, TX, USA 4University Of Texas MD Anderson Cancer Center, Biostatistics, Houston, TX, USA 5University Of Utah, Population Health Sciences, Salt Lake City, UT, USA

Introduction: Consensus guidelines recommend resection of intraductal papillary mucinous neoplasms (IPMN) with high-risk stigmata and laborious surveillance for cysts with worrisome features. In practice, resections are performed at higher rates due to fear of malignancy. As a result, many cysts harboring no or low-grade dysplasia (LGD) are removed unnecessarily, with undue risk to patients. This study compares the costs and effectiveness of practice patterns at UCSF and MD Anderson to alternative management strategies for pancreatic cysts. We also estimate the potential cost savings that would be realized if improved diagnostic accuracy prevented resection of LGD.

Methods: We developed a decision analytic model to compare the costs and effectiveness of three treatment strategies for a newly diagnosed pancreatic cyst: 1) Immediate surgery, 2) Do nothing, and 3) “Surveillance” based on consensus guidelines. Model estimates were derived from published literature and retrospective data for pancreatic cyst resections at UCSF and MD Anderson from 2005-2016. Costs and effectiveness (quality-adjusted life years, QALYs) were predicted and used to develop incremental cost-effectiveness ratios (ICERs). To estimate the cost burden of resecting LGD, the “Surveillance” strategy was adjusted to remove the possibility of resecting LGD (“Precision Surveillance”), and these costs were compared with the original model.
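A toy comparison of two strategies against a willingness-to-pay threshold, in the spirit of the model described above (all costs and QALYs below are placeholders, not the model's estimates):

strategies = {
    "Do nothing":   {"cost": 20000.0, "qalys": 10.0},
    "Surveillance": {"cost": 65000.0, "qalys": 10.3},
}

base, alt = strategies["Do nothing"], strategies["Surveillance"]
icer = (alt["cost"] - base["cost"]) / (alt["qalys"] - base["qalys"])  # $ per QALY gained
wtp = 100000.0  # willingness-to-pay threshold ($/QALY)
print(f"ICER = ${icer:,.0f}/QALY; cost-effective at WTP ${wtp:,.0f}: {icer <= wtp}")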

Results: The “Immediate surgery” strategy was the costliest and most effective, while the “Do nothing” strategy was the least costly and least effective (Fig 1a). The “Surveillance” strategy was preferred; however, it increased costs by $129,372 per quality-adjusted life year gained (ICER) compared to “Do nothing”, above the commonly accepted $100,000/QALY willingness-to-pay threshold. When resection of LGD was eliminated, the cost of “Precision Surveillance” decreased by $21,295, while the effectiveness increased by 0.6 QALY, making it the preferred strategy (Fig 1b). The resulting incremental cost discount of “Precision Surveillance” was $35,905 per QALY compared to “Surveillance” with current diagnostic accuracy. This cost reduction brought the “Precision Surveillance” strategy below the $100,000/QALY threshold compared to the “Do nothing” strategy.

Conclusion: Surveillance under current consensus guidelines for IPMN is the preferred strategy compared to the “Immediate surgery” and “Do nothing” strategies. Our present inability to distinguish LGD from high-grade/invasive lesions adds significant cost to the treatment of IPMN. Improved diagnostics that accurately grade cystic pancreatic neoplasms and empower clinicians to reduce the resection of LGD would decrease overall costs and improve the effectiveness of surveillance.

64.05 Prognostic Value of Hepatocellular Carcinoma Staging Systems: A Comparison

S. Bergstresser2, P. Li2, K. Vines2, B. Comeaux1, D. DuBay3, S. Gray1,2, D. Eckhoff1,2, J. White1,2  1University Of Alabama at Birmingham, Department Of Surgery, Division Of Transplantation, Birmingham, AL, USA 2University Of Alabama at Birmingham, School Of Medicine, Birmingham, AL, USA 3Medical University Of South Carolina, Department Of Surgery, Division Of Transplantation, Charleston, SC, USA

Introduction:  Hepatocellular carcinoma (HCC) is the third most common cause of cancer-related death worldwide. As the incidence of HCC continues to trend upward, it is imperative to have validated staging systems to guide clinicians when choosing treatment options. Seven HCC staging systems have been validated to varying degrees; however, there is currently inadequate evidence in the literature regarding which system is the best predictor of survival. The purpose of this investigation was to determine predictors of survival and to compare the 7 staging systems in their ability to predict survival in a cohort of patients diagnosed with HCC.

Methods:  This is a prospectively controlled chart review study of 782 patients diagnosed with HCC between January 2007 and April 2015 at a large, single-center hospital. Lab values, patient demographics, and tumor characteristics were used to stage patients and to calculate Model for End-Stage Liver Disease (MELD) and Child-Pugh scores. The Kaplan-Meier method and log-rank test were used to identify risk factors for overall survival. A Cox regression model was used to calculate the linear trend χ2 and likelihood ratio χ2 statistics to assess the linear trend and homogeneity of the staging systems, respectively.
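A brief sketch of the Kaplan-Meier / log-rank comparison described above, using lifelines on simulated data (the grouping variable and survival times are illustrative only):

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(4)
n = 300
high_afp = rng.binomial(1, 0.5, n)                 # e.g., AFP > 200 vs. <= 200
months = rng.exponential(30 - 12 * high_afp, n)    # shorter survival in the high-AFP group
observed = rng.binomial(1, 0.8, n)                 # 1 = death observed, 0 = censored

km = KaplanMeierFitter().fit(months[high_afp == 1], observed[high_afp == 1], label="AFP > 200")
print(km.median_survival_time_)

result = logrank_test(months[high_afp == 0], months[high_afp == 1],
                      event_observed_A=observed[high_afp == 0],
                      event_observed_B=observed[high_afp == 1])
print(result.p_value)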

Results: Univariate analyses suggested that tumor number (P < .0001), diameter of the largest lesion (P < .0001), tumor occupying > 50% of the liver (P < .0001), major vessel involvement (P = .0025), alpha-fetoprotein (AFP) level 21-200 vs. > 200 (P < .0001), and Child-Pugh score (P < .0001) were significant predictors of overall survival, while portal hypertension (P = .520) and pre-intervention bilirubin (P = .0904) were not. In all patients, the Cancer of the Liver Italian Program (CLIP) system provided the largest linear trend χ2 and likelihood ratio χ2 in the Cox model when compared to the other staging systems, indicating the best predictive power for survival.

Conclusion: Based on our statistical analysis, Child-Pugh score, tumor size, tumor number, presence of vascular invasion, and AFP level play a significant role in determining survival. In all patients and in patients receiving treatment other than transplantation (ablation, chemoembolization), CLIP appears to be the best predictor of survival. The CLIP staging system takes into account Child-Pugh score, tumor morphology, AFP level, and portal vein thrombosis, which may explain its strong ability to predict survival.

 

64.06 Epidural-related events are associated with ASA class, but not ketamine infusion following pancreatectomy

V. Ly1, J. Sharib1, L. Chen2, K. Kirkwood1  1University Of California – San Francisco, Surgical Oncology, San Francisco, CA, USA 2University Of California – San Francisco, Anesthesia, San Francisco, CA, USA

Introduction:

Epidural analgesia following pancreatectomy has become widely adopted; however, high epidural rates are often associated with early hypotensive events that require rate reduction and fluid resuscitation. It is unclear which patients are most at risk for such events. Continuous subanesthetic ketamine infusion reduces opioid consumption after major abdominal surgery. The effects of ketamine added to epidural analgesia have not been well studied in patients undergoing pancreatectomy. This study evaluates the safety and postoperative analgesic requirements in patients who received continuous ketamine infusion as an adjunct to epidural analgesia following pancreatectomy.

Methods:

A retrospective data analysis was conducted on 234 patients undergoing pancreaticoduodenectomy (n=165) or distal pancreatectomy (n=69) at UCSF Medical Center between January 2014 and January 2017. Patient demographics, including history of prior opiate use, along with perioperative fentanyl-ropivacaine epidural and continuous intravenous ketamine rates, were collected. Oral morphine equivalents (OME) and visual analogue pain scale (VAS) scores were recorded on postoperative days 0-4. To assess safety, epidural rate decreases due to hypotension within the first 24 postoperative hours and ketamine-related adverse events were recorded.

Results:

Epidural (n=197) and other opiate analgesia (n=234) were administered perioperatively per surgeon preference and institutional standards. Continuous ketamine infusion was given intraoperatively, postoperatively, or both in 71 patients, with a trend toward preferential use in patients with prior opiate exposure. Ketamine infusion was not associated with hypotensive events, daily maximum epidural rates, or significant epidural rate changes on postoperative days 0-4. OMEs and VAS scores were similar between groups, regardless of prior opiate use. Patients with American Society of Anesthesiologists (ASA) class 3 or 4 (n=111) were more likely to require epidural rate decreases (OR 2.37, 95%CI 1.3-4.2, p=0.003) and associated interventions in the first 24 hours postoperatively. Three patients reported ketamine-related adverse events such as unpleasant dreams and hallucinations.

Conclusion:

Subanesthetic ketamine infusion as an adjunct to epidural analgesia for pancreatic surgery patients is safe. Patients with ASA classification 3 or 4 experience more hypotensive events that require epidural rate decreases during the first postoperative day following pancreatectomy. Further study is required to assess whether ketamine infusion allows for lower epidural rates, reduces postoperative opioid consumption, or improves pain scores in the early postoperative period.

64.04 Comparing Frailty Scales to Guide Creation of a Multidimensional Assessment for Surgical Patients

J. McDonnell1, P. R. Varley1, D. E. Hall1,2, J. W. Marsh1, D. A. Geller1, A. Tsung1  1University Of Pittsburgh, General Surgery, Pittsburgh, PA, USA 2VA Pittsburgh Healthcare System, General Surgery, Pittsburgh, PA, USA

Introduction:  Frailty defines a phenotype of functional decline that places patients at risk for death and disability, and the American College of Surgeons and the American Geriatrics Society have joint guidelines that recommend implementation of a frailty assessment for aging patients. Though various instruments for measuring patient frailty have been described in the literature, it is unclear which is most appropriate for routine screening of surgical patients. The goal of this project was to compare assessments from three separate frailty instruments in a cohort of surgical patients in order to inform the development of a robust, clinically feasible frailty assessment for surgical patients.

Methods:  Demographic and medical history for all new patients evaluated at the Liver Cancer Center of UPMC was collected by patient-completed questionnaire and verified by a research associate (RA). Patients were then assessed for functional measures of frailty including extended timed up-and-go (eTUG), walking speed, grip strength, and Mini-Cog. Information from this assessment was then used to calculate scores for the Fried Frailty Phenotype (FF), Edmonton Frail Scale (EFS), and Risk Analysis Index (RAI). Frailty was defined as FF ≥ 3, EFS ≥ 8, or RAI ≥ 21.

Results: As part of a pilot project, 127 patients were evaluated; 64 (52.0%) were male. The cohort had a mean age of 62.9±15.0 years and a mean BMI of 29.4±6.4. Median scores were 10 [IQR 7-17] for the RAI, 3 [IQR 2-5] for the EFS, and 1 [IQR 0-2] for the FF. Overall, 36 (28.4%) patients were frail by at least one of the three measures: 12 (9.5%) were rated frail by the EFS, 21 (16.5%) by the FF, and 23 (18.1%) by the RAI. Twenty patients (15.8%) were classified frail by only one measure, 12 (9.5%) by two measures, and only 4 (2.2%) by all 3 scales. Inter-rater agreement between the three scales was fair (κ=0.33, p<0.001). Figure 1 demonstrates the concordance of measures among the three instruments and shows that choosing only one of the EFS, RAI, or FF would have failed to recognize 16 (44.4%), 10 (27.8%), and 12 (33.3%) of the potentially frail patients, respectively.

Conclusion: The results of this pilot project suggest that it is feasible to implement a routine frailty screening process in a busy surgical clinic. Utilizing only a single frailty instrument to evaluate patients may lead to an underestimate of frailty in surgical populations. Future work should focus on creation of a frailty screening process developed specifically for surgical patients and linked to surgical outcomes.

 

64.02 Isolated Pancreatic Tail Remnants After Transgastric Necrosectomy Can Be Observed

C. W. Jensen1, S. Friedland2, P. J. Worth1, G. A. Poultsides1, J. A. Norton1, W. G. Park2, B. C. Visser1, M. M. Dua1  1Stanford University, Surgery, Palo Alto, CA, USA 2Stanford University, Gastroenterology, Palo Alto, CA, USA

Introduction:  Severe necrotizing pancreatitis may result in mid-body necrosis and ductal disruption. When a significant portion of the tail remains viable but cannot drain into the proximal pancreas, the “unstable anatomy” that results is often deemed an indication for distal pancreatectomy. The transgastric approach to pancreatic drainage/debridement has been shown to be effective for retrogastric walled-off collections. A subset of these cases are performed in patients with an isolated viable tail. The purpose of this study was to characterize the outcomes among patients with an isolated pancreatic tail remnant who underwent transgastric drainage or necrosectomy (endoscopic or surgical) and to determine how often they required subsequent operative management.

Methods:  Patients with necrotizing pancreatitis and retrogastric walled-off collections that were treated by either surgical transgastric necrosectomy or endoscopic cystgastrostomy +/- necrosectomy between 2009-2017 were identified by retrospective chart review. Clinical and operative details were obtained through the medical record. All available pre- and post-procedure imaging was reviewed for evidence of isolated distal pancreatic tail remnants. 

Results: A total of 75 patients were included in this study (41 surgical and 34 endoscopic). All of the patients in the surgical group underwent laparoscopic transgastric necrosectomy; the endoscopic group consisted of 27 patients who underwent pseudocyst drainage and 7 who underwent necrosectomy. Median follow-up for the entire cohort was 13 months, and there was one death. A disconnected pancreatic tail was identified in 22 (29%) patients (13 laparoscopic and 9 endoscopic). After surgical or endoscopic creation of an internal fistula (“cystgastrostomy”), there were no external fistulas despite the viable tail. Of the 22 patients, 5 (23%) developed symptoms at a median of 23 months from the index procedure (3 with recurrent episodic pancreatitis and 2 with intractable pain). Two patients (both initially in the endoscopic group) ultimately required distal pancreatectomy and splenectomy at 6 and 24 months after the index procedure.

Conclusion: Patients with a walled-off retrogastric collection and an isolated viable tail are effectively managed by a transgastric approach. Despite this seemingly “unstable anatomy,” the creation of an internal fistula via surgical or endoscopic “cystgastrostomy” avoids external fistulas/drains and the short-term (near the initial pancreatitis) need for surgical distal pancreatectomy. A very small subset requires intervention for late symptoms. In our series, the patients who ultimately required distal pancreatectomy had initially undergone an endoscopic rather than a surgical approach; however, whether there is a difference between the two approaches in the outcome of the isolated pancreatic remnant is difficult to conclude due to the small sample size.

 

64.03 National Trends and Predictors of Adequate Nodal Sampling for Resectable Gallbladder Adenocarcinoma

A. J. Lee1, Y. Chiang1, C. Conrad1, Y. Chun-Segraves1, J. Lee1, T. Aloia1, J. Vauthey1, C. Tzeng1  1University Of Texas MD Anderson Cancer Center, Surgical Oncology, Houston, TX, USA

Introduction: For gallbladder cancer (GBC), the new American Joint Committee on Cancer 8th edition (AJCC8) staging system classifies lymph node (LN) stage by the number of metastatic LN, rather than their anatomic location as in AJCC6 and AJCC7.  Additionally, AJCC8 now recommends resection of ≥6 LNs for adequate nodal staging.  In the context of this new staging system and recommendation for GBC surgery, we evaluated current national trends in LN staging and sought to identify factors associated with any and/or adequate LN staging according to this new guideline.

Methods: Utilizing the National Cancer Data Base (NCDB), we identified all gallbladder adenocarcinoma patients treated with surgical resection with complete tumor staging information between 2004-2014. We excluded patients with T1a or lower pathologic T-stage, as nodal staging is not indicated in these patients. Nodal staging and nodal positivity rates were compared over the study period. Univariate and multivariate logistic regression models were used to identify factors associated with any and/or adequate nodal staging.

Results: We identified 11,525 patients with T-stage ≥T1b, for whom lymphadenectomy is recommended. Only 49.6% (n=5,719) of patients had any LN removed for staging. On multivariate analysis, treatment at academic centers (OR=2.33, p<0.001), more recent year of diagnosis (OR=2.29, p<0.001), clinical node-positive status (OR=3.46, p<0.001), pathologic T2 stage (OR=1.25, p<0.001), and radical surgical resection (OR=4.85, p<0.001) were associated with a higher likelihood of any nodal staging. Age ≥80 (OR=0.57, p<0.001) and a higher comorbidity index (OR=0.70, p<0.001) were associated with a lower likelihood of any nodal staging. However, of the 5,719 patients who underwent any nodal staging, only 21.8% (n=1,244) met the AJCC8 recommendation for adequate LN staging. On multivariate analysis, female sex (OR=1.18, p=0.02), treatment at academic centers (OR=1.52, p<0.001), radical surgical resection (OR=2.53, p<0.001), and pathologic T4 stage (OR=2.14, p<0.001) were associated with having ≥6 LNs resected concomitantly with the oncologic operation. Patients over 80 years old (OR=0.60, p<0.001) and those in the South region (OR=0.79, p=0.002) were less likely to have adequate LN sampling according to the new recommendation.

Conclusion: The national overall GBC LN staging rate of 49.6% does not live up to the new AJCC8 recommendations. Furthermore, the finding that only 21.8% of patients met the 6-LN threshold highlights the gap between the new AJCC8 recommendations and current practice. We have identified demographic and clinicopathologic factors associated with any and/or adequate LN staging, which can be incorporated into future targeted quality improvement initiatives.

63.09 Outcomes in VATS Lobectomies: Challenging Preconceived Notions

D. J. Gross1, P. L. Rosen1, V. Roudnitsky4, M. Muthusamy3, G. Sugiyama2, P. J. Chung3  1SUNY Downstate, Department Of Surgery, Brooklyn, NY, USA 2Hofstra Northwell School Of Medicine, Department Of Surgery, Hempstead, NY, USA 3Coney Island Hospital, Department Of Surgery, Brooklyn, NY, USA 4Kings County Hospital Center, Department Of Surgery, Division Of Acute Care Surgery And Trauma, Brooklyn, NY, USA

Introduction:   The number of thoracic resections performed for lung cancer is expected to rise due to increased screening in high-risk populations. However, the majority of thoracic surgical procedures in the US are performed by general surgeons (GS). Currently, video-assisted thoracoscopic surgery (VATS) has become the preferred approach to lung resection when feasible. Our goal was to examine short-term outcomes of VATS lobectomy for malignancy performed by either GS or cardiothoracic (CT) surgeons using the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) database.

Methods:  Using ACS NSQIP 2010-2015, we identified patients with an ICD-9 diagnosis of lung cancer (162) who underwent VATS lobectomy (CPT 32663). We included only adults (≥18 years) and elective cases, and excluded cases with preoperative sepsis, contaminated/dirty wound class, or missing data. Risk variables of interest included demographic, comorbidity, and perioperative variables. Outcomes of interest included 30-day postoperative mortality, 30-day postoperative morbidity, and length of stay (LOS). Univariate analysis comparing cases performed by GS vs. CT surgeons was performed. We then performed a propensity score analysis using a 3:1 ratio of CT:GS cases, with categorical outcome variables assessed using conditional logistic regression.
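A rough sketch of 3:1 propensity-score matching (CT:GS) followed by a crude outcome comparison, using nearest-neighbor matching in scikit-learn on simulated data (column names are hypothetical; the conditional logistic regression used in the study is not reproduced here):

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(5)
n = 2000
df = pd.DataFrame({
    "gs_surgeon": rng.binomial(1, 0.15, n),   # 1 = general surgeon, 0 = cardiothoracic surgeon
    "age":        rng.normal(68, 9, n),
    "dyspnea":    rng.binomial(1, 0.2, n),
    "sepsis":     rng.binomial(1, 0.03, n),   # toy 30-day postoperative sepsis outcome
})

# propensity score: probability of the GS group given covariates
ps_model = LogisticRegression(max_iter=1000).fit(df[["age", "dyspnea"]], df["gs_surgeon"])
df["pscore"] = ps_model.predict_proba(df[["age", "dyspnea"]])[:, 1]

gs, ct = df[df.gs_surgeon == 1], df[df.gs_surgeon == 0]
nn = NearestNeighbors(n_neighbors=3).fit(ct[["pscore"]])
_, idx = nn.kneighbors(gs[["pscore"]])        # 3 nearest CT matches per GS case (with replacement)
matched_ct = ct.iloc[idx.ravel()]

print("GS sepsis rate:", round(gs.sepsis.mean(), 3),
      "| matched CT sepsis rate:", round(matched_ct.sepsis.mean(), 3))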

Results: A total of 4,308 cases met criteria; 649 (15.1%) were performed by GS and 3,659 (84.9%) by CT surgeons. Mean age was 68.6 years in the GS group vs. 67.8 years in the CT group (p=0.034). There was a greater proportion of African American patients in the GS group compared to the CT group (8.0% vs. 3.4%, p<0.0001), but a higher rate of dyspnea with moderate exertion in the CT group compared to the GS group (19.8% vs. 12.9%, p<0.0001). Operative time was shorter in the GS group than in the CT group (179 vs. 196 minutes, p<0.0001). After propensity score matching, the two groups were well balanced on all risk variables. LOS was longer in the GS group than in the matched CT group (mean 6.2 vs. 5.3 days, p=0.0001). Conditional logistic regression showed that GS-treated patients had no greater risk of 30-day mortality (p=0.806) but had a greater risk of postoperative sepsis (OR 2.20, 95% CI [1.01, 4.79], p=0.047).

Conclusion: In this large observational study using a prospectively collected clinical database, we found that while general surgeons had longer LOS, compared to cardiothoracic trained surgeons there were no differences in short-term mortality and morbidity with the exception of increased risk of postoperative sepsis. Further prospective studies are warranted to investigate oncologic and long-term outcomes.