65.01 Burden of Chronic Post-Operative Pain in Patients Undergoing Open Repair of Complex Ventral Hernias

C. G. DeLong1, J. A. Doble1, J. Miller1, E. M. Pauli1, D. I. Soybel1  1Pennsylvania State University College Of Medicine, Department Of Surgery, Hershey, PA, USA

Introduction: Recent studies suggest that chronic pain is present in 30% of patients at one year following ventral hernia repair (VHR).  In these studies, chronic pain has been defined using patient-reported numeric pain scores at predetermined postoperative time points.  This approach does not capture the hidden burden of care for chronic post-operative pain, which includes the distress and cost of ED visits, unscheduled clinic visits, additional imaging, and pain-prompted phone calls.  The aim of this study was to characterize the entire course of treating postoperative pain, including all complaints of pain, for up to 1 post-operative year.

Methods: We performed a retrospective medical record review of patients who underwent open repairs of complex ventral hernias, requiring at least 2 days of hospital admission, by two surgeons at an academic referral center from January 2013 through August 2015.  Exclusions included laparoscopic and hybrid repairs, metastatic cancer, and inflammatory bowel disease. 

Results: One hundred and seventy-seven (177) patients were included with the following profile: average age 58.1 ± 11.5 years, 56.5% female, 79.1% BMI ≥30, 67.8% ASA class ≥3, 15.3% smokers within 1 year of surgery, 21.5% diabetic, and 22.0% reporting preoperative narcotic use.  Mesh was used for repair in 96% of cases, and 53% of cases were for recurrent hernias.  Follow-up was 91.0% at 90 days and 82.5% at 1 year.  At least one outpatient complaint of pain was recorded for 51.4% of patients in the year following surgery; of these, 45.1% reported additional complaints of pain and were labeled with “ongoing pain” (Figure 1).  The average pain score at 1 year for patients with ongoing pain was not markedly different from that of patients not reporting ongoing pain (1.94 ± 2.86 versus 1.02 ± 2.25, p=0.052), suggesting adequacy of efforts to control pain.  Of patients with ongoing pain, 78.0% had no diagnosis to explain the pain; of these, 75.0% initiated additional phone calls related to pain, 81.3% had additional clinic visits, 37.5% received additional, unscheduled prescriptions for pain medication, and 15.6% were referred to pain management specialists.

Conclusion: Significant resource utilization is required to assess and manage pain in the year following open VHR, even for patients who are without pain by 1 year.  Patients who report pain without receiving a diagnosis frequently make additional complaints of pain and require costly resource utilization at later time points.  Care management pathways in VHR patients should be designed with an emphasis on early identification, prompt treatment and ongoing monitoring of the occurrence, consequences, and costs of postoperative pain. 

64.10 Documentation Initiative Leads to Improved Quality Metrics in Cardiac Surgical Patients

J. R. Gillen1, L. Jin2, J. M. Isbell3  1University Of Michigan, Surgery, Ann Arbor, MI, USA 2University Of Virginia, Surgery, Charlottesville, VA, USA 3Memorial Sloan-Kettering Cancer Center, Thoracic Surgery, New York, NY, USA

Introduction:
Hospitals are increasingly being judged by risk-adjusted quality metrics, which are directly influenced by the completeness and accuracy of medical documentation. This study evaluated the impact of a multi-faceted documentation improvement initiative. We hypothesized that this initiative would lead to improvement in risk-adjusted quality metrics.

Methods:
The prospective cohort consisted of all cardiac surgical patients over a 2-year period, divided into pre-intervention (2013) and post-intervention (2014) groups. The intervention took place in the cardiac surgery intensive care unit and consisted of 1) templated problem-based admission notes and daily progress notes, 2) distribution of pocket cards with diagnoses important to risk stratification, and 3) education of residents and faculty with documentation tips. Operative demographics and several indicators of patient acuity were compared between 2013 and 2014. 

Results:
The pre- and post-intervention cohorts consisted of 768 and 791 patients, respectively. The distribution of procedure types performed was similar between groups (p=0.51). For the intervention group, there was a significant improvement in case-mix index (CMI) (6.86 vs 6.56, p<0.001) and several University HealthSystem Consortium (UHC) quality indicators, including Severity of Illness (p<0.001) and Risk of Mortality scores (p=0.0014). However, there was no change in UHC expected mortality (3.69% vs 3.54%, p=0.49) or Society of Thoracic Surgeons (STS) Predicted Risk of Mortality (3.28% vs 3.11%, p=0.21). Additionally, documentation of complications increased in UHC data (11.76% vs 8.07%, p=0.018), but no similar increase was observed in STS data (11.81% vs 10.27%, p=0.25), suggesting higher capture of complications rather than an actual increase in complication rates. The increase in CMI generated by this initiative correlated with an increase in hospital revenue of $3,849 per case, totaling over $3 million for 1 year.

Conclusion:
This documentation initiative was effective at improving CMI and UHC risk of mortality, which appears to have been primarily driven by improved documentation of complications. This study demonstrates that attention to accurate documentation can be an effective method to improve publicly reported quality indices as well as increase hospital revenues. Future interventions targeting clinic personnel documenting present-on-admission diagnoses may further improve these quality metrics.
 

64.09 Radiation Therapy Did Not Improve Survival Over Surgery Alone for Stage-3A(N2) Lung Cancer in SEER

D. T. Nguyen4, C. C. Moodie1, J. R. Garrett1, J. P. Fontaine1,2,3, R. J. Keenan1,2,3, L. A. Robinson1,2,3, E. M. Toloza1,2,3  1Moffitt Cancer Center, Thoracic Oncology, Tampa, FL, USA 2University Of South Florida Morsani College Of Medicine, Surgery, Tampa, FL, USA 3University Of South Florida Morsani College Of Medicine, Oncologic Sciences, Tampa, FL, USA 4University Of South Florida, Morsani College Of Medicine, Tampa, FL, USA

Introduction:   Stage-IIIA non-small cell lung cancer (NSCLC) includes T1N2M0 and T2N2M0 in the current Tumor-Node-Metastasis (TNM) classification, indicating mediastinal lymph node involvement.  We evaluated postoperative survival of T1N2/T2N2 patients (pts) who underwent lobectomy without or with radiation therapy (RT) between 1988 and 2013.

Methods:   Using the Surveillance, Epidemiology, and End Results (SEER) database, we identified pts who underwent surgery (SURG) without or with RT before (RT+SURG) or after (SURG+RT) surgery for T1N2 and T2N2 NSCLC during 1988-2013.  We included pts with Adenocarcinoma (AD) and Squamous Cell (SQ) histology and excluded those with multiple primary NSCLC tumors.  Log-rank test was used to compare Kaplan-Meier survival of pts who had SURG vs. SURG+RT as well as of AD vs. SQ pts and of T1N2 vs. T2N2 pts during 1988-2003 and 2004-2013.
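The survival comparisons described above rest on Kaplan-Meier estimation. As an illustrative sketch only (the follow-up times and event flags below are hypothetical, not SEER records, and the actual analysis would have used standard statistical software), the product-limit calculation behind each curve can be written as:

```python
# Minimal Kaplan-Meier product-limit estimator (illustrative sketch only).
def kaplan_meier(times, events):
    """times: follow-up in months; events: 1 = death, 0 = censored.
    Returns (time, survival probability) at each observed event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # Deaths and total subjects leaving the risk set at this time point.
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        removed = sum(1 for tt, e in data if tt == t)
        if deaths > 0:
            surv *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed
        i += removed
    return curve

# Hypothetical follow-up times (months) and event flags for 8 patients:
print(kaplan_meier([5, 10, 10, 15, 20, 25, 30, 30], [1, 1, 0, 1, 0, 1, 1, 0]))
```

The log-rank tests reported in the Results compare whether two such curves differ more than chance would allow.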

Results:  Of 2,271 pts, 142 (6.25%) had RT+SURG, 777 (34.2%) had SURG+RT, and 1,352 (59.5%) had SURG.  During 1988-2013, there were 1,681 AD pts (74.0%) and 590 SQ pts (25.9%), while 696 pts were T1N2 (30.6%) and 1,575 pts were T2N2 (69.4%).  There was no significant difference in 5-yr survival between RT and no-RT pts or between RT+SURG and SURG+RT during 1988-2013 (p=0.171), 1988-2003 (p=0.408), or 2004-2013 (p=0.822).  For 1988-2013, AD pts had better 5-yr survival (p=0.016) and median survival time (MST; 36.0±1.4 mon vs. 27.0±1.6 mon; p<0.01) than SQ pts.  For 1988-2003, 5-yr survival for AD pts and SQ pts did not differ (p=0.181).  However, for 2004-2013, AD pts had better 5-yr survival (36.0% vs. 31.6%; p<0.01) and MST (41.0±2.1 mon vs. 27.0±2.2 mon; p<0.01) than SQ pts.  As expected, T1N2 pts had better 5-yr survival than T2N2 pts during 1988-2013 (39.1% vs. 29.8%; p<0.01), during 1988-2003 (34.7% vs. 25.7%; p=0.002), and during 2004-2013 (41.5% vs. 31.9%; p<0.01).  Similarly, T1N2 pts had better MST than T2N2 pts for 1988-2013 (44.0±3.2 mon vs. 30.0±1.2 mon; p<0.001), for 1988-2003 (41.0±3.6 mon vs. 26.0±1.1 mon; p<0.01), and for 2004-2013 (47.0±4.2 mon vs. 33.0±1.8 mon; p<0.01).  Within the AD T1N2, AD T2N2, SQ T1N2, and SQ T2N2 subgroups, we found no difference in 5-yr survival between RT+SURG and SURG+RT (p>0.05).

Conclusion:  RT+SURG and SURG+RT pts did not have better survival than SURG-alone pts among T1N2 and T2N2 pts.  Among these pts, AD pts had better survival than SQ pts during 1988-2013, but this advantage was due to improved survival during 2004-2013.  As expected, T1N2 pts had better survival than T2N2 pts across all time periods.

64.08 Preoperative Intraaortic Balloon Support: Is it Necessary for Significant Left Main Disease?

A. Fiedler1, C. Walsh1, G. Tolis1, G. Vlahakes1, T. MacGillivray1, T. Sundt1, S. Melnitchouk1  1Massachusetts General Hospital, Cardiac Surgery, Boston, MA, USA

Introduction: Some practitioners routinely place an intra-aortic balloon pump (IABP) preoperatively in high-risk patients undergoing coronary artery bypass grafting (CABG), such as those with significant left main (LM) disease identified in the setting of acute coronary syndrome.  The value of this practice in the current era, with its increasing burden of comorbid conditions including peripheral vascular disease, has not been examined.

Methods: We identified 495 patients presenting with significant LM disease (≥ 50%) and acute coronary syndrome undergoing CABG between January 2002 and April 2014 using our institutional prospective cardiac surgery database. Of these, 198 patients had an IABP placed preoperatively (IABP group) while the other 297 did not (No-IABP group). Operative mortality (30-day or in-hospital) and major complications were compared unadjusted and after adjustment using a propensity score based on 25 pre-specified variables.
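The matching step described here can be illustrated as greedy 1:1 nearest-neighbor matching on the propensity score within a caliper. Everything in this sketch is hypothetical (patient IDs, scores, and the 0.05 caliper); the study itself estimated scores from 25 pre-specified baseline variables:

```python
# Greedy 1:1 nearest-neighbor propensity-score matching within a caliper.
# All IDs, scores, and the caliper value are hypothetical illustrations.
def match_pairs(treated, control, caliper=0.05):
    """treated/control: dicts of patient id -> propensity score.
    Returns a list of (treated_id, control_id) matched pairs."""
    available = dict(control)
    pairs = []
    # Match the hardest-to-match (highest-score) treated patients first.
    for t_id, t_score in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]  # each control is used at most once
    return pairs

treated = {"T1": 0.80, "T2": 0.55, "T3": 0.30}
control = {"C1": 0.78, "C2": 0.52, "C3": 0.10, "C4": 0.33}
print(match_pairs(treated, control))
# → [('T1', 'C1'), ('T2', 'C2'), ('T3', 'C4')]
```

Greedy matching is only one of several algorithms (optimal matching and inverse-probability weighting, also mentioned in the Results, are alternatives); the caliper bounds how dissimilar a matched pair may be.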

Results:  Patients in the IABP group had significantly worse baseline clinical profiles, with higher rates of ST-elevation myocardial infarction (12.0% vs. 3.2%) and critical status (15.3% vs. 3.5%) and lower left ventricular ejection fraction (50.2±15.3% vs. 57.9±13.1%), but had superior renal function (all P values <0.01), compared with the No-IABP group.  As might be expected, unadjusted operative mortality was significantly higher in the IABP group (8.8% vs. 3.2%; P=0.006), as were major complications (cardiac arrest, need for mechanical support, neurologic injury, requirement for dialysis, limb ischemia, multi-organ failure, pneumonia, and pulmonary thromboembolism; 26.9% (58/216) vs. 14.83% (47/317); P=0.001), compared with the No-IABP group. With propensity score matching, however, 109 pairs of patients well balanced with regard to all baseline variables (P values, 0.27-0.99) were identified. In this matched cohort, the IABP group still had higher operative mortality (odds ratio [OR], 3.52; 95% CI, 0.95-13.14; P=0.060) and a higher rate of major complications (OR, 1.84; 95% CI, 1.00-3.39; P=0.050) than the No-IABP group. The increased risk of mortality and morbidity was validated by propensity-adjustment models and inverse-probability-of-treatment weighting. The higher rate of major complications with IABP use was driven mainly by increased risks of renal failure, limb ischemia, and multi-organ failure. Although unmeasured covariates unaccounted for in the matching scheme may have had an impact, the nature of the increased morbidity is consistent with known complications of IABP.

Conclusion:  Preoperative use of IABP in patients with significant LM disease in the setting of acute coronary syndrome did not reduce 30-day mortality following CABG, but was associated with greater morbidity. Circumspection in the prophylactic use of IABP in this clinical setting may be advisable.

 

64.07 Grade of Differentiation Affects Survival after Lobectomy for Stage-I Neuroendocrine Cancer

D. T. Nguyen4, J. P. Fontaine1,2,3, L. A. Robinson1,2,3, R. J. Keenan1,2,3, E. M. Toloza1,2,3  1Moffitt Cancer Center, Thoracic Oncology, Tampa, FL, USA 2University Of South Florida Morsani College Of Medicine, Surgery, Tampa, FL, USA 3University Of South Florida Morsani College Of Medicine, Oncologic Sciences, Tampa, FL, USA 4University Of South Florida, Morsani College Of Medicine, Tampa, FL, USA

Introduction:   Non-small cell lung cancer (NSCLC) is staged using Tumor-Node-Metastasis (TNM) status.  Tumor histology is graded as well differentiated (Grade1), moderately differentiated (Grade2), or poorly differentiated (Grade3).  We investigated the effects of histologic grade on survival of patients (pts) with stage-I neuroendocrine tumors after surgical resection.

Methods:   Using the Surveillance, Epidemiology, and End Results (SEER) database, we identified pts who underwent lobectomy for stage-I (T1N0M0 or T2N0M0) neuroendocrine carcinoma during 1988-2013, but excluded those who had multiple primary NSCLC tumors.  We grouped pts by histologic grade and performed Kaplan-Meier survival analyses, with log-rank test to compare 5-yr cancer-specific survival between grades, between T status, and between 1988-2003 versus 2004-2013.

Results:  Of 515 study pts, 348 were T1N0, of whom 200 were Grade1, 52 Grade2, and 96 Grade3, and 167 were T2N0, of whom 53 were Grade1, 31 Grade2, and 83 Grade3.  During 1988-2013, T1N0 Grade3 pts had worse 5-yr survival than either T1N0 Grade2 or Grade1 pts (52.6% vs. 82.8% or 97.6%; p<0.001) and worse mean survival time (MST) than either T1N0 Grade2 or Grade1 pts (42.8 mon vs. 56.2 mon or 59.4 mon; p<0.05).  In contrast, both T2N0 Grade3 and Grade2 pts had worse 5-yr survival (54.0% or 68.4% vs. 93.6%; p=0.001) and worse MST (41.2 mon or 47.9 mon vs. 56.8 mon; p<0.05) than T2N0 Grade1 pts.  For Grade2 pts during 1988-2013, T2N0 pts had worse 5-yr survival than T1N0 pts (68.4% vs. 82.8%; p=0.07 by log-rank Mantel-Cox, but p=0.038 by Breslow-Wilcoxon and p=0.047 by Tarone-Ware) and worse MST (47.9 mon vs. 56.2 mon; p<0.05).  In contrast, for Grade3 pts, T1N0 and T2N0 pts had similar 5-yr survival (52.6% vs. 54.0%; p>0.53) and MST (42.8 mon vs. 41.2 mon; p>0.05) during 1988-2013.  Neither Grade2 nor Grade3 pts, whether T1N0 or T2N0, had 5-yr survival or MST that changed significantly between 1988-2003 and 2004-2013 (p>0.05).

Conclusion:  Using SEER data, we found that histologic grade significantly affected survival of stage-I neuroendocrine carcinoma pts after pulmonary lobectomy, with Grade3 pts having significantly worse survival than Grade2 or Grade1 pts for both T1N0 and T2N0, but with Grade2 pts having worse survival than Grade1 pts only among T2N0 pts.  These results suggest that histologic grade should be considered when determining adjuvant therapy and prognosis for surgically resected stage-I neuroendocrine carcinoma pts.

64.06 Outcomes and Costs of Surgical versus Transcatheter Aortic Valve Replacement

Y. Juo1, A. Mantha1, P. Benharash1  1University Of California – Los Angeles, Cardiac Surgery, Los Angeles, CA, USA

Introduction:
Surgical aortic valve replacement (SAVR) is considered the standard of care for adults with severe symptomatic aortic stenosis. More recently, transcatheter aortic valve replacement (TAVR) has been utilized in patients at high surgical risk, with encouraging results. With rapidly evolving technology, application of TAVR is expected to reach moderate- and low-risk cohorts. We aimed to examine the national outcomes and costs of SAVR versus TAVR in the first year following US Food and Drug Administration (FDA) market approval of TAVR in the United States.

Methods:
Patients who underwent elective SAVR or TAVR in the year 2013 were identified from the Nationwide Inpatient Sample, a longitudinal inpatient health care database with weighted estimates of more than 35 million annual hospitalizations. Baseline demographics, primary diagnoses, income/payer type, and hospital characteristics were tabulated. The primary outcome was in-hospital mortality; secondary outcomes included hospital length of stay (LOS) and overall hospitalization cost.

Results:
During the study period, a total of 68,845 discharges after aortic valve replacement were identified from the Nationwide Inpatient Sample, 12,125 (17.6%) of which were TAVR. Compared to SAVR patients, TAVR patients were more likely to be elderly (>85 y/o: 42.4% vs 6.1%, p<0.05), less likely to be uninsured or Medicaid recipients (1.2% vs 6.7%, p<0.05), and less likely to be of low income status (21.2% vs 22.7%, p<0.05). Primary disease severity information was not available in the database. Unadjusted in-hospital mortality was significantly higher among TAVR than SAVR patients (4.62% vs 3.01%, p<0.05). SAVR was associated with a significantly longer mean LOS (9.7 vs 8.6 days, p<0.05) but lower mean overall hospitalization cost ($184,715 vs $218,246, p<0.05).

Conclusion:
Our results demonstrate rapid adoption of TAVR technology in the US in the year following FDA market approval. As expected based on selection criteria, SAVR was associated with lower in-hospital mortality and lower overall hospitalization cost than TAVR. The post-market mortality in TAVR patients was significantly lower than previously reported in the PARTNER trial. Adoption of TAVR technology and its impact on SAVR warrant further investigation in order to develop optimal decision algorithms for SAVR versus TAVR.
 

64.05 Venous Thromboembolism Following Lung Transplantation: A Study of the National Inpatient Sample

J. K. Aboagye1, J. W. Hayanga4, B. D. Lau1, D. Shaffer1, P. Kraus2, D. Hobson1, M. Streiff3, J. D’Cuhna4, E. R. Haut1,5  1Johns Hopkins University School Of Medicine, Surgery, Baltimore, MD, USA 2Johns Hopkins School Of Medicine, Pharmacy, Baltimore, MD, USA 3Johns Hopkins University School Of Medicine, Hematology, Baltimore, MD, USA 4University Of Pittsburgh Medical Center, Cardiothoracic Surgery, Pittsburgh, PA, USA 5Johns Hopkins Bloomberg School Of Public Health, Health Policy, Baltimore, MD, USA

Introduction:

Previous studies of the incidence of and risk factors for venous thromboembolism (VTE) following lung transplantation (LT) were largely limited to single centers, limiting their generalizability. The purpose of this study was to estimate the incidence of VTE and identify risk factors associated with VTE after LT using a nationally representative sample of patients. We also aimed to determine the impact of VTE following LT on in-hospital mortality, length of hospitalization, and cost.

Methods:
We retrospectively examined the National Inpatient Sample database to identify patients who underwent LT from 2000 to 2011. We calculated the incidence of VTE and identified predictors of VTE following LT. In multivariate analyses, we estimated the association between VTE and in-hospital mortality, length of hospitalization, and total hospital cost.

Results:
A total of 16,318 adult recipients underwent LT during the study period. Of these, 1,029 (6.3%) developed VTE post-operatively, comprising 854 (5.4%) who developed deep vein thrombosis only and 175 (1.1%) who developed pulmonary embolism. Predictors of VTE in this cohort included age greater than 60 years (OR 1.37, 95% CI 1.02-1.85), female gender (OR 0.61, 95% CI 0.45-0.84), receiving mechanical ventilation support (OR 2.35, 95% CI 1.77-3.13), and receiving extracorporeal membrane oxygenation support (OR 1.75, 95% CI 1.05-2.91). The adjusted odds of in-hospital mortality in LT recipients who had VTE were nearly twice those of recipients who did not (OR 1.89, 95% CI 1.17-3.06).  On average, the length of hospitalization and total cost of hospitalization were 46% (95% CI 33-58) and 29% (95% CI 21-36) higher, respectively, for LT patients who developed VTE compared with those who did not, after adjusting for confounders.

Conclusion:
VTE is a frequent complication after lung transplantation and is associated with increased odds of mortality, longer hospital stays, and higher costs.  Prophylaxis practices in these patients need to be reexamined to address this complication.
 

64.04 Sequelae of Donor-Derived Ureaplasma Transmission in Lung Recipients

R. Fernandez1, M. M. DeCamp1, D. D. Odell1, A. Bharat1  1Feinberg School Of Medicine – Northwestern University, Division Of Thoracic Surgery, Chicago, IL, USA

Introduction:  Hyperammonemia is a fatal syndrome of unclear etiology following lung transplantation. It has been postulated to result from an inborn error of urea-cycle metabolism. We recently demonstrated that a Mollicute, Ureaplasma, causes this syndrome. Here, we further investigated the source of Ureaplasma infection and the incidence of hyperammonemia.

Methods:  Consecutive lung transplant recipients were prospectively evaluated between July 2014 and May 2016. All recipients had pre-transplant urine and bronchoalveolar lavage fluid (BALF) tested for all Mollicutes (Ureaplasma and Mycoplasma species) by PCR and culture, and antimicrobial susceptibilities determined. Additionally, BALF from all donors was tested. Patients found positive for Mollicutes pre-transplant were successfully treated using antimicrobials based on the antimicrobial susceptibility. Ammonia was analyzed in all patients post-transplant. Ureaplasma isolates were grown in specialized culture media and titrated amounts were injected into immunocompetent C57BL6 mice to determine development of hyperammonemia, thereby testing Koch’s third postulate. 

Results: Human Studies: Five of the 29 (17%) recipients tested positive for Mollicutes (Ureaplasma=4, Mycoplasma=1) in urine pre-transplant and were successfully treated. Native-lung BALF from all patients was negative for Mollicutes. BALF from four (14%) donors was positive for Ureaplasma, but not Mycoplasma. These donors were younger (23.3 vs 38.3 years; p<0.001), tended to be male and sexually active, and all had aspiration. All recipients of Ureaplasma-positive, but not of Ureaplasma-negative, donor lungs developed hyperammonemia and demonstrated increased morbidity and mortality. One isolate revealed macrolide resistance associated with a novel ribosomal mutation. All other isolates were pan-susceptible to macrolides, fluoroquinolones, and tetracycline. All recipients demonstrated a decrease in ammonia levels within 24 hours of antimicrobial therapy and normalization within 7 days. Murine Studies: Human isolates of Ureaplasma led to dose-dependent hyperammonemia in wild-type mice (ammonia >3 times baseline, p<0.001). Treatment with antibiotics to which the isolate was susceptible prevented hyperammonemia.

Conclusion: Ureaplasma infection in lung recipients is transmitted via donor lungs and results in significant morbidity and mortality. The 14% incidence of donor Ureaplasma infection is higher than previously reported, supporting a role for routine donor screening. Early recognition and treatment of Ureaplasma infection can improve lung transplant outcomes.

 

64.02 Learning Curve of Minimally Invasive Ivor-Lewis Esophagectomy

F. Van Workum1, G. H. Berkelman3, A. E. Slaman4, M. Stenstra1, M. I. Van Berge Henegouwen4, S. S. Gisbertz4, F. J. Van Den Wildenberg5, F. Polat5, M. Nilsson2, T. Irino2, G. A. Nieuwenhuijzen3, M. D. Luyer3, C. Rosman1  1Radboudumc, Surgery, Nijmegen, GELDERLAND, Netherlands 2Karolinska Institutet, Surgery, Stockholm, Sweden 3Catharina Hospital, Surgery, Eindhoven, BRABANT, Netherlands 4AMC, Surgery, Amsterdam, NOORD HOLLAND, Netherlands 5Canisius-Wilhelmina Ziekenhuis, Surgery, Nijmegen, GELDERLAND, Netherlands

Introduction: Totally minimally invasive Ivor-Lewis esophagectomy (TMIE-IL) has a learning curve, but its length and the extent of learning curve-associated morbidity are unknown for surgeons already experienced in TMIE-McKeown.

Methods: This study was performed in 4 high-volume European esophageal cancer centers from December 2010 until April 2016. Surgeons experienced in TMIE-McKeown changed operative technique to TMIE-IL. All consecutive patients with esophageal carcinoma undergoing TMIE-IL with curative intent were included. Baseline, surgical, and outcome parameters were analyzed in quintiles and plotted in order to explore the learning curve. Textbook outcome (the percentage of patients in whom the process from surgery until discharge took <21 days and was uneventful in terms of complications, interventions, mortality, and oncological aspects) was also analyzed. CUSUM analysis was performed to determine after how many cases proficiency was reached. An area-under-the-curve analysis was performed to calculate learning-associated anastomotic leakage and costs.
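The CUSUM idea behind the proficiency analysis can be sketched as a running sum of each case's outcome minus an acceptable failure rate: the curve climbs while performance is worse than target and flattens once proficiency is reached. The outcome sequence and 8% target below are hypothetical illustrations, not study data:

```python
# Simple CUSUM of observed failures against an acceptable failure rate.
# Outcomes and the target rate here are hypothetical, for illustration.
def cusum(outcomes, acceptable_rate):
    """outcomes: 1 = failure (e.g. anastomotic leak), 0 = success.
    Returns the running cumulative sum of (outcome - acceptable_rate)."""
    total, curve = 0.0, []
    for y in outcomes:
        total += y - acceptable_rate
        curve.append(round(total, 3))
    return curve

# Hypothetical: frequent leaks early in the series, rare leaks later.
print(cusum([1, 0, 1, 0, 0, 1, 0, 0, 0, 0], acceptable_rate=0.08))
```

In practice, proficiency is often declared where the plotted CUSUM curve stops rising (or crosses a pre-set control limit in a risk-adjusted variant).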

Results: Four hundred and sixty-eight (468) patients were included. In one hospital, ASA classification was significantly higher in quintiles 2 and 3 (p=0.01), and in one hospital, more distal esophageal tumors were operated on in quintiles 4 and 5 (p=0.01). In the pooled curve analysis, anastomotic leakage decreased from 26% at introduction of TMIE-IL to 8% at the plateau phase, which occurred after 121 cases. Textbook outcome increased from 39% to 60%, with the plateau phase occurring after 128 cases. Learning curve-associated anastomotic leakage occurred in 42 patients, and this excess morbidity was associated with more than €2 million in healthcare costs.

Conclusion: TMIE-IL has a significant learning curve. Learning curve associated morbidity and costs are substantial, even for surgeons experienced in TMIE-McKeown. The length of the learning curve was more than 100 operations.

 

64.01 Drivers of Variation in 90-day Episode Payments for Coronary Artery Bypass Grafts (CABG)

V. Guduguntla1,2,5, J. Syrjamaki1,5, C. Ellimoottil1,3,4,5, D. Miller1,3,4,5, P. Theurer1,6, D. Likosky1,7, J. Dupree1,3,4,5  1University Of Michigan, Ann Arbor, MI, USA 2University Of Michigan, Medical School, Ann Arbor, MI, USA 3Institute For Healthcare Policy And Innovation, Ann Arbor, MICHIGAN, USA 4Dow Division Of Health Services Research, Department Of Urology, Ann Arbor, MICHIGAN, USA 5Michigan Value Collaborative, Ann Arbor, MICHIGAN, USA 6The Michigan Society Of Thoracic And Cardiovascular Surgeons Quality Collaborative, Ann Arbor, MICHIGAN, USA 7Section Of Health Services Research And Quality, Department Of Cardiac Surgery, Ann Arbor, MICHIGAN, USA

Introduction:  Coronary artery bypass grafting (CABG) is a common and expensive surgery. Starting in July 2017, CABG will become part of a mandatory Medicare bundled payment program, an alternative payment model in which hospitals are paid a fixed amount for the entire care process, including care for 90 days post-discharge. The specific drivers of CABG payment variation are largely unknown, and an improved understanding will be important for policy makers, hospital leaders, and clinicians.

Methods:  We identified patients undergoing CABG at 33 non-federal hospitals in Michigan from 2012 through June 2015 using data from the Michigan Value Collaborative, which includes adjusted claims from Medicare and Michigan’s largest private payer. We calculated 90-day price-standardized, risk-adjusted, total episode payments for each of these patients, and divided hospitals into quartiles based on mean total episode payments. We then disaggregated payments into four components: readmissions, professional, post-acute care, and index hospitalization. Lastly, we compared payment components across hospital quartiles and determined drivers of variation for each component.
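The quartile construction described above can be sketched as ranking hospitals by mean episode payment and splitting the ranked list into four equal groups. The hospital names and payment figures below are hypothetical, not Michigan Value Collaborative data:

```python
# Split hospitals into payment quartiles (hypothetical figures only).
def quartiles(hospital_means):
    """hospital_means: dict of hospital -> mean episode payment ($).
    Returns 4 lists of hospital names, lowest-cost quartile first."""
    ranked = sorted(hospital_means, key=hospital_means.get)
    n = len(ranked)
    # Integer arithmetic keeps the four groups as equal as possible.
    return [ranked[i * n // 4:(i + 1) * n // 4] for i in range(4)]

means = {"A": 41000, "B": 44000, "C": 46000, "D": 47500,
         "E": 49000, "F": 51000, "G": 53000, "H": 56000}
q = quartiles(means)
print(q[0], q[3])  # lowest- and highest-cost quartiles
# → ['A', 'B'] ['G', 'H']
```

Within each quartile, the study then disaggregated mean payments into readmission, professional, post-acute care, and index hospitalization components before comparing across quartiles.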

Results: We identified a total of 5,910 patients across 33 Michigan hospitals.  The mean age was 68 years, and 74% of patients were male.  The mean 90-day episode payment for CABG was $48,571 (SD: $20,739; range: $11,723-$356,850). The highest cost quartile had a mean total episode payment of $54,399 compared to $45,487 for the lowest cost quartile, a difference of $8,912 or 16.4%. The highest cost quartile hospitals, when compared to the lowest cost quartile hospitals, had greater readmissions (1.35x), professional (1.34x), post-acute care (1.30x), and index hospitalization payments (1.15x) (all p<0.05). The main drivers of this variation were patients with multiple readmissions, increased use of evaluation and management (E&M) services, higher utilization of inpatient rehabilitation, and diagnosis related group distribution, respectively.

Conclusions: Significant variation exists in 90-day CABG episode payments for Medicare and private payer patients in Michigan. These results have implications for policymakers implementing CABG bundled payment programs, hospital administrators leading quality improvement efforts, and clinicians caring for these patients. Specifically, hospitals and clinicians entering bundled payment programs for CABG should work to understand local sources of variation, including multiple readmissions as well as inpatient E&M and post-discharge rehabilitation services. 

55.20 An Analysis of Discussions Following National Presentation of a Surgical Case Series

A. Siy1, E. Winslow1  1University Of Wisconsin, Department Of Surgery, Madison, WI, USA

Introduction: One unique method of clinical research in surgical disciplines is the surgical case series.  A case series typically details the outcomes of consecutive patients operated on at a single center. Although these series lend understanding, they also have limitations. Because no reporting guidelines specific to case series exist, the elements described in their presentation are quite varied. We aimed to determine the primary areas of academic inquiry after presentation of a case series at national surgical meetings.

Methods: Abstracts of manuscripts published in Journal of the American College of Surgeons and Annals of Surgery from 2010 to 2015 were reviewed. A case series was defined as the study of a consecutive series of patients at a single institution for the purpose of describing their clinical outcomes. Those case series with accompanying discussions were analyzed. All interrogative sentences in the discussion were selected for thematic analysis and were classified by a redundant iterative process into descriptive categories.

Results: 186 case series were identified, 55 of which included a transcript of the post-presentation discussion. A total of 476 unique interrogatives were identified and classified into 4 categories and 13 subcategories. The most frequent single inquiry (20.8%) pertained to the applicability of the findings to patient care (e.g., how the data changed the author’s practice). A full 18% sought clarification of study variables (e.g., personnel involved, technical details, definitions of study terms). Nearly 7% highlighted selection bias (i.e., how patients not selected for the procedure fared in comparison). Interestingly, areas that received minimal to no attention included cost, long-term outcomes, statistical methods, details of data collection, and patient satisfaction.

Conclusion: This analysis of inquiries after presentation of surgical case series to national academic audiences highlights some areas of common concern. Of most importance is the direct applicability of the data presented to the care of patients. Specifically stating how the findings of the presented study have affected the authors’ clinical care for patients would improve face validity. In addition, defining all study variables including personnel and technical approaches is of primary interest to the audience. Finally, a description of the potential effects of selection bias on the outcomes appears to also be of major import. Addressing these areas of interest at the time of presentation of a case series is likely to improve its quality and to maximize the utility of this form of clinical research.

 

55.19 A Comparison of Post-Operative MI Rates Based on the Universal 2012 and NSQIP Definitions

K. S. Shrestha1, A. A. Gullick2, T. S. Wahl2, R. H. Hollis2, J. Richman2, J. K. Kirklin2, M. S. Morris2  1University Of Alabama at Birmingham,School Of Medicine,Birmingham, Alabama, USA 2University Of Alabama at Birmingham,Department Of Surgery,Birmingham, Alabama, USA

Introduction:  The American College of Surgeons National Surgical Quality Improvement Program (NSQIP) provides benchmarking quality standards designed to improve quality of care and surgical outcomes. We explore NSQIP’s current definition of myocardial infarction (MI), based on the 2007 Universal Definition, and compare it to an updated definition, the 2012 Universal Definition of MI.

Methods:  All NSQIP-assessed post-operative cardiac events from 2013-2015 at a single institution were examined as part of a quality improvement project. The current NSQIP definition (2007 Universal definition) classifies an MI by any one of the following: a troponin elevation 3 times the upper limit of normal (ULN), ischemic EKG changes, or any charted physician diagnosis. The 2012 Universal definition requires a troponin elevation of 5 times the ULN and at least one of the following: ischemic symptoms, ischemic EKG changes, wall motion abnormalities on imaging studies, or an intraluminal thrombus detected on angiography. The study group included all patients who met the NSQIP definition for postoperative MI. The 2012 Universal definition was then applied to the group, with patient- and procedure-specific characteristics compared by MI definition (NSQIP vs. 2012) using Chi-square tests.
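The two sets of criteria can be sketched as simple classification rules. This is an illustrative sketch only; the argument names and the reading of the troponin thresholds as "at least N times the ULN" are assumptions, not NSQIP variable definitions.

```python
# Illustrative sketch of the two MI definitions described above.
# Argument names and the ">= N x ULN" threshold reading are assumptions.

def meets_nsqip_2007(troponin_x_uln, ischemic_ekg, physician_dx):
    """Current NSQIP (2007 Universal) rule: troponin >= 3x ULN,
    OR ischemic EKG changes, OR any charted physician diagnosis."""
    return troponin_x_uln >= 3 or ischemic_ekg or physician_dx

def meets_universal_2012(troponin_x_uln, ischemic_symptoms, ischemic_ekg,
                         wall_motion_abnormality, intraluminal_thrombus):
    """2012 Universal rule: troponin >= 5x ULN AND at least one
    supporting criterion."""
    supporting = (ischemic_symptoms or ischemic_ekg
                  or wall_motion_abnormality or intraluminal_thrombus)
    return troponin_x_uln >= 5 and supporting
```

For example, a patient with a troponin of 4 times the ULN and ischemic symptoms meets the NSQIP rule but not the 2012 rule, which illustrates how the stricter definition can exclude a large fraction of NSQIP-flagged events.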

Results: Eighty-one patients were identified. Only 27 (33.3%) of the patients meeting the NSQIP definition also met the 2012 Universal definition of MI. Overall, the average patient was a 68.1 (SD 12.2) year old white (69.1%) male (54.3%) with a BMI of 28.2 (SD 7.8). There were no significant differences between definition groups (NSQIP vs. 2012) regarding patient demographics or perioperative complications. Only 22.2% of the NSQIP-defined group had a troponin level 5 times the ULN, meeting the 2012 Universal definition’s troponin criterion (p<0.0001). Patients classified using the 2012 Universal definition had significantly more ischemic EKG changes compared to the NSQIP definition (ST-elevation: 25.9% vs 3.7%, respectively, p=0.01; Q wave: 22.2% vs. 0%, respectively, p=0.001). NSQIP-defined MI occurrences were more likely to be NSTEMI type II events compared to the 2012 Universal group (85.2% vs. 51.9%, p=0.003). Patients with MI meeting the 2012 definition were more likely to have ischemic symptoms (70.4% vs 37%, p=0.005) and abnormal imaging changes (33.3% vs 13%, p=0.03) compared to the current NSQIP definition.

Conclusion: Only one-third of patients with an MI defined by the current NSQIP definition met the 2012 Universal MI definition. Compared to the current NSQIP MI definition, the 2012 Universal definition captures patients with more ischemic symptoms, ischemic EKG changes, abnormal changes on imaging studies, and higher troponin levels during an MI, with fewer NSTEMI Type II events. The current NSQIP definition may over-estimate true coronary events. Further consideration to update and reclassify NSQIP’s MI definition may be warranted.

The project described was supported by Awards Numbered T32DK062710 and P30DK079626 from the National Institute of Diabetes and Digestive and Kidney Diseases. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Diabetes and Digestive and Kidney Diseases or the National Institutes of Health.

55.18 Preoperative Invasive Care Setting and Postoperative Infection in Pancreaticoduodenectomy

A. T. Nguyen2, Z. M. Dong2, J. W. Marsh1, A. Tsung1  1University Of Pittsburgh,Department Of Surgery,Pittsburgh, PA, USA 2University Of Pittsburgh,School Of Medicine,Pittsburgh, PA, USA

Introduction:  Pancreaticobiliary and duodenal tumors often present with obstructive pathology and require invasive procedures, such as ERCP or endoscopic biopsy, for diagnostic and therapeutic purposes. Currently, there is no evidence favoring inpatient versus outpatient intervention. Though admission for these procedures may be more convenient for providers, it may also predispose patients to microbial colonization and consequent infection. The purpose of this study is to evaluate the relationship between inpatient versus outpatient preoperative management and postoperative infection.

Methods:  This retrospective cohort study includes 301 patients who underwent pancreaticoduodenectomy from 2012 to 2015. Demographic, preoperative care, and tumor characteristic data were collected. All patients underwent either endoscopic biopsy, ERCP, or PTC prior to surgery. Patients were categorized as inpatients or outpatients based on the setting of preoperative intervention within 180 days prior to surgery. Chi-square, Mann-Whitney U, and univariable and multivariable logistic regression analyses were carried out with Stata 14. Variables with univariable p-values less than 0.2 were included as adjustment variables in the multivariable model.
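The screening rule for adjustment variables can be sketched as a simple filter. The variable names and p-values below are hypothetical; only the p < 0.2 screening rule, and the fact that tumor size, stage, and type of preoperative intervention passed it, come from the abstract.

```python
# Hypothetical univariable p-values for candidate covariates;
# only the p < 0.2 screening rule is taken from the Methods above.
univariable_p = {
    "tumor_size": 0.08,
    "stage": 0.15,
    "preop_intervention_type": 0.19,
    "diabetes": 0.45,
    "hypertension": 0.62,
}

# Variables passing the screen are carried into the multivariable model.
adjustment_vars = [v for v, p in univariable_p.items() if p < 0.2]
```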

Results: Of the 301 patients, 34.9% were outpatients and 65.1% were inpatients. The groups did not differ in prevalence of diabetes, hypertension, coronary artery disease, age, sex, or type of cancer. The primary outcome was postoperative infection, subdivided by specific infection type. The rate of all postoperative infections was 45.8% and was not significantly different between groups (p=0.45). Of the infection subtypes, only SSI differed significantly, occurring in 20.9% of outpatients versus 32.7% of inpatients (p=0.032). In univariable logistic regression for SSI, inpatient status had an OR of 1.83 (95% CI 1.05-3.19, p=0.034). The multivariable model adjusted for tumor size, stage, and type of preoperative intervention. In multivariable logistic regression for SSI, inpatient status had an adjusted OR of 1.74 (95% CI 0.95-3.18, p=0.071). No adjustment variables were significantly related to SSI.

Conclusion: Inpatient invasive care prior to pancreaticoduodenectomy was associated with an increased rate of postoperative surgical site infection, although the association was attenuated after multivariable adjustment. These findings suggest that patients with pancreaticobiliary and duodenal cancers should receive outpatient workup whenever possible to reduce postoperative morbidity.

 

55.17 Correlation of Hebal-Malas Index with Haller Index in Pediatric Patients with Pectus Excavatum

F. Hebal1, J. Green2, B. Malas1, M. Reynolds1,2  1Ann & Robert H Lurie Children’s Hospital Of Chicago,Pediatric Surgery,Chicago, IL, USA 2Northwestern University,Feinberg School Of Medicine,Chicago, IL, USA

Introduction: The computed tomography (CT)-derived Haller Index (HI) is the gold-standard metric of pectus excavatum (PE) deformity severity. White light scanning (WLS), a novel 3D imaging modality, offers a potential alternative that is quick, inexpensive, and safe. Because it uses no ionizing radiation, WLS may be safely used for repeated scans in longitudinal monitoring of PE deformity progression. A previous pilot investigation demonstrated the feasibility of using a handheld WLS device to measure PE deformity and showed promising early correlation of a new WLS-derived PE severity index, the Hebal-Malas Index (HMI), with CT-derived HI. Further investigation is necessary to establish WLS-derived HMI as a potential preoperative study modality in PE. This study assesses correlation of HMI with HI measured on preoperative CT scans of pediatric patients with PE.

Methods: We conducted a retrospective review of preoperative CT scans in pediatric patients with PE from 2006-2015. Reported HI was collected from the CT impression documented in the electronic medical record (EMR), and two raters independently measured HI and HMI using the same CT scan. Pearson correlation was used to assess agreement of rater-measured HMI with EMR-reported HI and rater-measured HI. Intraclass correlation (ICC) assessed interrater reliability of HMI. Measurement and calculation of HMI and HI are shown in Figure 1.
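The Pearson coefficient used for these comparisons can be computed directly from paired measurements. A minimal stdlib-only sketch (illustrative, not the authors' analysis code):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

Values near 1 indicate strong positive linear correlation; in the conventional reading used in the Results, r around 0.7 is described as "strong" and r around 0.4 as "moderate".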

Results: Of 140 identified charts, 35 with incomplete data were excluded (18 with HI undocumented in the record, 8 with HMI unmeasurable on the scan, 4 with no CT image found, 5 with incomplete HMI data), leaving 105 for review. For Rater 1 measured HMI, Pearson analysis showed strong correlation with EMR-reported HI (r=0.74; p<0.0001; n=105), Rater 1 measured HI (r=0.77; p<0.0001; n=105), and Rater 2 measured HI (r=0.71; p<0.0001; n=41). For Rater 2 measured HMI, Pearson analysis showed moderate correlation with reported HI (r=0.41; p=0.012; n=37), Rater 1 measured HI (r=0.43; p=0.008; n=36), and Rater 2 measured HI (r=0.42; p=0.009; n=37). To date, complete HMI data for both raters have been collected for 36 scans. For these 36 scans, ICC showed strong interrater reliability of HMI between the two raters (ICC=0.71; CI 0.501-0.841; p<0.0001).

Conclusion: Correlation of HMI with HI and reliability between raters demonstrate strong potential for the use of HMI as a proxy for the Haller Index. A prospective investigation comparing WLS-derived HMI with CT-derived HI is currently enrolling, with the goal of establishing WLS as a preoperative study and progress-monitoring modality for patients with PE.

 

55.16 Continuous Monitoring of Vital Signs on the General Ward

M. Weenk1, S. Bredie2, L. Engelen3, T. Van De Belt3, H. Van Goor1  1Radboudumc,Surgery,Nijmegen, GELDERLAND, Netherlands 2Radboudumc,Internal Medicine,Nijmegen, GELDERLAND, Netherlands 3Radboudumc,Radboud REshape Innovation Center,Nijmegen, GELDERLAND, Netherlands

Introduction: Measurement of vital signs in hospitalized patients is necessary to assess the clinical situation of the patient. Early warning scores (EWS), such as the Modified Early Warning Score (MEWS), are generally calculated three to four times a day and may not capture early deterioration. A delay in diagnosing deterioration is associated with increased mortality and costs. Clinical deterioration might be detected earlier by wearable devices that continuously monitor vital signs, allowing clinicians to intervene sooner. Further, these devices may reduce patient discomfort and the workload of nurses. In this pilot study, the reliability of continuous monitoring using the ViSi Mobile (VM; Sotera; HR, RR, saturation, BP, skin temperature) and HealthPatch (HP; Vital Connect; HR, RR, skin temperature) was tested, and experiences of patients and nurses were collected.

Methods: Twenty patients, 10 on the surgical and 10 on the internal medicine ward, were monitored with both devices simultaneously for 2-3 days, and the data were compared with MEWS measurements taken as the reference method. Artifacts in the continuous data were registered and analyzed. Patient and nurse experiences were obtained by semi-structured interviews.

Results: Eighty-six MEWS measurements were compared with VM and HP measurements. Almost all VM vital signs (mean difference HR -0.09 bpm; RR 1.00 breaths/min; saturation 0.19%; temperature 0.00 °C; systolic BP 1.33 mmHg) and all HP vital signs (HR -2.10 bpm; RR -0.58 breaths/min; temperature 0.00 °C) were within the range of accepted discrepancies, although wide limits of agreement were found. The largest discrepancy in mean difference was found for VM diastolic blood pressure (-8.33 mmHg), probably due to inaccuracy of measurement by nurses. The predominant VM artifact (70%) was connection failure. Over 50% of all HP artifacts had an unknown cause, were self-limiting, and lasted less than one hour. The majority of patients, family members, and nurses were positive about VM and HP, citing increased feelings of safety, better sleep, and more comfort for patients and nurses. The devices did not restrict patients’ daily activities. Disadvantages were the cables (which interfered with showering) and the short battery life of the VM device.
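The agreement statistics reported here (mean difference with limits of agreement) follow the usual Bland-Altman construction. A minimal sketch, assuming 95% limits defined as the bias plus or minus 1.96 sample standard deviations of the paired differences; the data below are made up:

```python
import statistics

def agreement(device, reference):
    """Mean difference (bias) and 95% limits of agreement
    (bias +/- 1.96 * sample SD of the paired differences)."""
    diffs = [d - r for d, r in zip(device, reference)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A small bias with wide limits of agreement, as seen for several signals here, means the device is right on average but individual readings can stray well away from the reference.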

Conclusion: Both VM and HP have potential for continuously measuring vital signs in hospitalized patients. The devices were well received and comfortable for most patients. A further study focuses on the different effects of VM or HP compared to routine MEWS on patient comfort and safety and nurse workload, and on early detection of deterioration.

 

55.15 The Ottawa Criteria for Appropriate Transfusions in Hepatectomy (OCATH)

S. Bennett1,10, A. Tinmouth2,10, D. I. McIsaac3,10, S. English2,10, P. C. Hébert4, P. J. Karanicolas5, L. McIntyre2,10, A. F. Turgeon7, J. Barkun8, T. M. Pawlik9, D. Fergusson10, G. Martel1,10  1University Of Ottawa,Department Of Surgery,Ottawa, Ontario, Canada 2University Of Ottawa,Department Of Medicine,Ottawa, Ontario, Canada 3University Of Ottawa,Department Of Anesthesiology,Ottawa, Ontario, Canada 4Centre Hospitalier De L’Université De Montréal,Department Of Medicine,Montréal, QUEBEC, Canada 5University of Toronto,Department Of Surgery,Toronto, Ontario, Canada 6Université Laval,Department Of Anesthesiology,Quebec City, QUEBEC, Canada 7Université Laval,Department Of Anesthesiology,Quebec City, QUEBEC, Canada 8McGill University,Department Of Surgery,Montreal, QUEBEC, Canada 9Ohio State University,Department Of Surgery,Columbus, OH, USA 10Ottawa Hospital Research Institute,Ottawa, ONTARIO, Canada

Introduction:  Hepatectomy is associated with a high prevalence of blood transfusions. A transfusion can be a life-saving intervention in the appropriate patient, but is associated with important adverse effects. Given the prevalence of transfusions, their potential for great benefit and harm, and the difficulty in conducting clinical trials, this topic is well-suited for a study of appropriateness. Using the RAND/UCLA Appropriateness Method, the objective of this study was to determine the indications for which the expected health benefits of a transfusion exceed expected negative consequences in patients undergoing hepatectomy.

 

Methods: An international, multidisciplinary panel of eight experts in hepatobiliary surgery, surgical oncology, anesthesiology, transfusion medicine, and critical care were identified. The panelists were sent a recently conducted systematic review and asked to rate a series of 468 intraoperative and postoperative scenarios for the appropriateness of a blood transfusion using a validated, 1-9 ordinal scale. The scenarios were rated in two stages: individually, followed by an in-person moderated panel session. Median scores and level of agreement were calculated to classify each scenario as appropriate, inappropriate, or uncertain.

 

Results: 48% of scenarios were rated appropriate, 28% inappropriate, and 24% uncertain. Level of agreement increased significantly after the in-person session. Based on the scenario ratings, there were five key recommendations.

Intraoperative:

1) It is never inappropriate to transfuse for significant bleeding or ST segment changes.

2) It is never inappropriate to transfuse for a hemoglobin value of 75 g/L or less.

3) Without major indications (excessive bleeding or ST changes), it is inappropriate to transfuse at a hemoglobin of 95 g/L, and transfusion at 85 g/L requires strong justification.

Postoperative:

1) In a stable, asymptomatic patient, an appropriate transfusion trigger is 70 g/L (without coronary artery disease) or 80 g/L (with coronary artery disease).

2) It is appropriate to transfuse for a hemoglobin of 75 g/L or less in the recovery unit immediately post-operatively, or later with a significant hemoglobin drop (>15 g/L).

Factors that increased the likelihood of a transfusion being inappropriate included no history of coronary artery disease, normal hemodynamics, and good postoperative functional status. Patient age did not affect the rating significantly.

Conclusion: Based on the best available evidence and expert opinion, criteria for the appropriate use of perioperative blood transfusions in hepatectomy were developed. These criteria provide clinical guidance for those involved in perioperative blood management. In addition, the areas of uncertainty and disagreement can inform the direction of future clinical trials.

55.14 Risk Factors Associated with Nationwide Readmission After Parathyroidectomy

J. L. Buicko1, J. P. Parreco1, M. A. Lopez1, R. A. Kozol1  1University Of Miami,Palm Beach General Surgery Residency,Atlantis, FL, USA

Introduction:

Readmission rates after surgery receive significant attention as a measure of quality of patient care.  According to a recent study in the New England Journal of Medicine, almost one in seven patients hospitalized for a major surgical procedure is readmitted within 30 days of discharge.  The morbidity and mortality of parathyroidectomy are low, and readmission data are poorly characterized in the literature.  Our objective was to identify national readmission rates after parathyroidectomy and to characterize the reasons and risk factors for readmission.

Methods:

The Nationwide Readmission Database (NRD) was queried for all patients undergoing parathyroidectomy in 2013 who survived the initial admission. Multivariate logistic regression was then implemented using patient comorbidities and demographics as well as hospital characteristics to determine the odds ratios (OR) for nonelective readmission within 30 days.

Results:
During the study period, 4,082 patients underwent parathyroidectomy and 357 (8.7%) had nonelective readmissions within 30 days. The most common primary diagnoses on initial admission were benign neoplasm of parathyroid gland (1,232, 30.2%) and primary hyperparathyroidism (899, 22.0%). There were 772 patients (18.9%) with a diagnosis of secondary hyperparathyroidism and these patients had an OR for readmission of 2.38 (p<0.01, 95% CI 1.77 to 3.22).  The most common primary diagnoses on readmission were hypocalcemia (57, 8.0%) and hungry bone syndrome (31, 4.3%). The comorbidities associated with the highest ORs for readmission were weight loss (OR 3.08, p<0.01, 95% CI 1.88 to 5.03), renal failure (OR 2.37, p<0.01, 95% CI 1.79 to 3.13), and congestive heart failure (OR 2.13, p<0.01, 95% CI 1.47 to 3.08).
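An unadjusted odds ratio with a Woolf (log-based) 95% CI can be computed directly from a 2x2 table. The counts below are hypothetical, and the study's reported ORs come from multivariate logistic regression, which adjusts for covariates that this simple estimate does not:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
       a = exposed, readmitted      b = exposed, not readmitted
       c = unexposed, readmitted    d = unexposed, not readmitted
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    log_or = math.log(or_)
    return or_, (math.exp(log_or - z * se), math.exp(log_or + z * se))
```

An OR whose CI excludes 1 (e.g. the 2.38 reported for secondary hyperparathyroidism, 95% CI 1.77 to 3.22) indicates a statistically significant association with readmission.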

Conclusion:
Overall, 8.7% of patients who underwent parathyroidectomy had a nonelective readmission.  Hypocalcemia and hungry bone syndrome were the most common reasons for readmission. As thirty-day readmission rates are frequently used as a quality metric for patient care, identifying risk factors for readmission is of paramount importance, and efforts should be made to reduce readmission rates for these higher-risk patient groups.
 

55.13 Predictors of Recurrent Emergency Department Visits in Patients with Benign Biliary Disease

B. F. Goldberg1, K. M. Mueck1, H. M. Starkey-Smith1, C. C. Wan1, J. P. Hasapes1, T. C. Ko1, L. S. Kao1  1University Of Texas Health Science Center At Houston,General Surgery,Houston, TX, USA

Introduction: Benign biliary disease accounts for a disproportionate amount of recurrent emergency department (ED) visits and readmissions. It is unknown what factors present at ED consultation predict subsequent readmission with more severe disease such as acute cholecystitis, choledocholithiasis, gallstone pancreatitis, or ascending cholangitis. The aim of this study was to determine if there are patient or radiologic factors which predict recurrent ED visits, readmission with complicated biliary disease, and worse outcomes.

Methods: This was a retrospective cohort study of all patients presenting to a single safety-net hospital ED (June 2014-2016) who received an abdominal ultrasound (US) for benign biliary disease. Demographic, admission, and outcome data were recorded. Univariate and logistic regression analyses were performed to identify factors associated with readmission with complicated biliary disease.

Results: Of 288 patients, 189 (66%) were admitted for surgery, and 99 (34%) were discharged. Of those discharged, 71 (72%) were not evaluated by a surgeon at the index ED visit. There was no difference in age, gender, race/ethnicity, language, or ASA score between the groups. Discharged patients were more likely to have diabetes (10% vs 19%, p=0.03), heart disease (3% vs 10%, p=0.01), cancer (1% vs 6%, p=0.02), or chronic liver disease (3% vs 9%, p=0.02). Fifteen discharged patients (15%) underwent elective outpatient cholecystectomy, and 15 (15%) were readmitted with complicated biliary disease. There was no difference in age, gender, race/ethnicity, language preference, ASA score, or comorbidities between the readmitted and non-readmitted groups. Readmitted patients had more prior ED visits (p=0.02) and hospitalizations (p<0.01). They were more likely to have an impacted stone (40% vs 0%, p=0.02) or a stone in the gallbladder neck (p<0.01). Rates of postoperative complications, reoperation, and conversion to open surgery were similar between patients undergoing elective versus urgent surgery, while the postoperative readmission rate was higher in the latter group (31% vs 7%, p=0.02).

Conclusion: Patients with comorbidities, with sonographic findings of complicated biliary disease, and without surgical consultation were more likely to be readmitted after discharge from the emergency department. Further study is necessary to determine what factors contributed to discharge and to assess whether admission on index presentation would have resulted in improved outcomes.

55.12 The Effects of Promotion and Tenure on Surgeon Productivity

A. Lam1, M. Heslin1, C. D. Tzeng2, H. Chen1  1University Of Alabama At Birmingham,Surgery,Birmingham, ALABAMA, USA 2University Of Kentucky,Surgery,Lexington, KENTUCKY, USA

Introduction:
In the dynamic environment of academia, tenure has recently come under scrutiny.  Nationally, more academic faculty members are now appointed to non-tenure-track positions than to tenure-track positions. However, studies investigating the impact of promotion and tenure on surgeon productivity are lacking. The aim of this study is to understand the relationship of promotion and tenure to surgeon productivity.

Methods:
We reviewed data for the 114 faculty members in the Department of Surgery at our institution. Two metrics were used to assess surgeon productivity from 2010-2016: relative value units (RVUs) billed per year and publications per year. Publication counts were obtained by PubMed search, and affiliations were used to verify authorship. We analyzed two groups, tenure-track (TT) and non-tenure-track (NT) surgeons, and compared productivity within these groups by faculty rank: Assistant (ASST), Associate (ASSOC), and Full (FULL) Professor. The Kruskal-Wallis test was used to assess significance, and Mann-Whitney U tests were used to ascertain relationships between groups.

Results:
As TT faculty were promoted, their research output increased, with publication rates highest among TT FULL.  TT faculty publication rates increased from ASST to ASSOC (1 vs 2, p=0.006) and from ASSOC to FULL (2 vs 4, p<0.001). There were no differences in the low publication rates between NT ranks.  Clinical production (RVUs) was highest among TT ASSOC and NT FULL. TT faculty increased productivity between ASST and ASSOC (7,023 vs 8,384, p=0.001) and decreased between ASSOC and FULL (8,384 vs 6,877, p<0.001). Among NT faculty, RVUs were stagnant between the ASST and ASSOC levels (4,877 vs 6,313, p=0.31) and increased between the ASSOC and FULL levels (6,313 vs 8,975, p<0.001). Comparing TT to NT, TT faculty published more than their NT counterparts (p<0.001 for all groups). TT ASST and TT ASSOC produced more RVUs than their NT counterparts (p=0.006 and p<0.001, respectively), while NT FULL outproduced TT FULL (p=0.003).

Conclusion:

Tenure and non-tenure pathways appear to appropriately incentivize surgical faculty over the course of their advancement. TT FULL have the highest research production and NT FULL the highest clinical production. Interestingly, TT faculty have greater clinical production at the ASST and ASSOC levels than their NT counterparts, and only NT FULL have greater clinical production than their TT counterparts.

 

55.11 The Relationship Between Operation Type and Unplanned Intubation within 24 Hours of Surgery

T. M. Bauer1, A. P. Johnson1, S. W. Cowan1, R. R. Kelz2  1Thomas Jefferson University,Thoracic Surgery,Philadelphia, PA, USA 2University Of Pennsylvania,Endocrine And Oncologic Surgery,Philadelphia, PA, USA

Introduction: Persistent anesthetic or medication effects represent a common cause of respiratory failure in the immediate postoperative period. New medications are available to more rapidly reverse deep paralysis. This study aimed to characterize the patients at highest risk for unplanned intubation (UI) in the immediate post-operative period due to persistent anesthetic effect, as a target population for trials of new reversal medications or for increased pulmonary monitoring.

Methods: We queried the 2014 ACS NSQIP Participant Use File (PUF), across all procedures, for patients who experienced post-operative day (POD) 0 UI without concurrent postoperative complications such as cardiac arrest, sepsis, septic shock, myocardial infarction, cerebrovascular accident, or coma. Univariable and multivariable logistic regression analyses were used to identify patient and operative characteristics associated with UI in the immediate post-operative period.

Results: Among 706,791 patients, 702 (0.1%) experienced isolated POD 0 UI. Multivariable logistic regression analysis identified 14 patient factors and 6 operation types significantly associated with an elevated likelihood of POD 0 UI (p<0.05). The eight patient factors most strongly associated with POD 0 UI were ASA class ≥ 3, general anesthesia, preoperative transfusion (<72 hrs), age > 60 years, dyspnea on exertion and at rest, severe COPD, hypertension requiring medications, and operative time > 3 hours (p<0.001). After controlling for patient factors, the 6 procedures significantly associated with a higher likelihood of POD 0 UI (p<0.05) were esophagectomy (OR: 3.6), EVAR (OR: 2.66), open aortoiliac revascularization (OR: 2.43), nephrectomy (OR: 1.97), hip fracture repair (OR: 1.81), and colectomy (OR: 1.43). Three procedures were associated with a significantly lower likelihood of POD 0 UI (p<0.05): TURP (OR: 0.12), total knee arthroplasty (OR: 0.47), and spine surgery (OR: 0.57).

Conclusion: We identified 14 independent patient factors and 6 procedures strongly associated with a higher likelihood of isolated POD 0 unplanned intubation. Combining these risk factors with clinical indications of ineffective reversal of anesthesia can help target the use of new anesthesia-reversal drugs or increased pulmonary monitoring in the immediate post-operative period.