70.07 Does the Day of the Week Predict a Cesarean Section?: A Statewide Analysis

G. A. Del Carmen1, S. Stapleton1, M. Qadan1, D. Chang1, Y. Bababekov1, B. Zhang1, Y. Hung1, Z. Fong1, M. G. Del Carmen1  1Massachusetts General Hospital, Department Of Surgery, Boston, MA, USA

Introduction: While guidelines for the clinical indications of Cesarean section (CS) exist, non-clinical factors may affect CS practices. We hypothesize that CS rates vary by day of the week.

Methods: An analysis of the California Office of Statewide Health Planning and Development database from 2006-2010 was performed. All female patients admitted to a hospital for attempted vaginal delivery were included; patients who died within 24 hours of admission were excluded. Weekend days were defined as Saturday and Sunday; weekdays were defined as Monday through Friday. The primary outcome was the rate of CS relative to vaginal delivery. Multivariate regression was performed, adjusting for patient demographic, clinical, and system factors.
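
To make the modeling step concrete, below is a minimal sketch (in Python, with statsmodels) of the kind of adjusted regression described above. The file name and covariates are hypothetical placeholders, not actual OSHPD fields, and the covariate set is illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-delivery extract: one row per attempted vaginal delivery
df = pd.read_csv("oshpd_deliveries.csv")

# Logistic model of CS vs. vaginal delivery; weekend is the exposure of interest
model = smf.logit(
    "cesarean ~ weekend + age + C(race) + C(payer) + teaching_hospital",
    data=df,
).fit()

print(np.exp(model.params))      # odds ratios (abstract reports weekend OR ~0.73)
print(np.exp(model.conf_int()))  # 95% confidence intervals on the OR scale
```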

Results: 1,855,675 women were analyzed. The overall CS rate was 9.02%. On unadjusted analysis, CS rates were significantly lower on weekends than on weekdays (6.65% vs. 9.58%, p<0.001). On adjusted analysis, women were 27% less likely to have a CS on weekends than on weekdays (OR 0.73, 95% CI 0.71-0.75, p<0.001). Hispanic ethnicity and delivery in a teaching hospital were also associated with a decreased likelihood of CS (OR 0.91, 95% CI 0.86-0.96, p=0.01 and OR 0.80, 95% CI 0.69-0.93, p<0.001, respectively).

Conclusion: CS rates are significantly decreased on weekends relative to weekdays, even when controlling for patient, hospital, and system factors. Further exploration of this novel finding is warranted and may lead to improved quality of care for patients.
 

70.06 In pursuit of person-centered care: Do patients value competence over compassion?

K. Heinze3, P. A. Suwanabol2,6, K. Gibson1, B. Lansing1, C. A. Vitous6, P. Abrahamse5, L. Mody1,4  1University of Michigan, Internal Medicine, Ann Arbor, MI, USA 2University Of Michigan, Surgery, Ann Arbor, MI, USA 3St. Joseph’s Mercy Hospital, Ann Arbor, MI, USA 4Veterans Affairs Ann Arbor Healthcare System, Ann Arbor, MI, USA 5University of Michigan, Biostatistics, Ann Arbor, MI, USA 6University of Michigan, Center For Healthcare Outcomes And Policy, Ann Arbor, MI, USA

Introduction: Attention has turned to the importance, and frequent absence, of compassion in high-quality healthcare. Patients with cancer and those receiving end-of-life care routinely rank intrinsic physician characteristics higher than technical skills. However, patient preferences in other scenarios, such as surgical interventions, chronic disease management, and pediatric conditions, are not well studied. To address this, we sought to identify whether clinical or demographic characteristics influence patient preferences for a compassionate or a competent surgeon or physician.

Methods: We sent 800 surveys to patients identified through the University of Michigan (U-M) Geriatrics Center and the U-M Research volunteer registries in July 2017. Surveys comprised 7 clinical vignettes, each followed by a 5-point Likert scale assessing the relative importance of surgeon or physician compassion versus competence, and an open-ended question allowing respondents to elaborate on their choice. Multivariable logistic regression was performed on quantitative data and thematic analysis on qualitative responses.

Results: Of the mailed surveys, 36 were returned due to address changes, and we received 651 completed surveys (85% response rate). Older age (p<0.001), male sex (p=0.016), and higher income (p=0.039) were associated with a preference for competence over compassion in surgical vignettes. Competence was more often preferred in surgical and pediatric scenarios, and less often in chronic care and end-of-life care scenarios, where female sex (p=0.008) and an increasing number of physician visits per year (p=0.01) were associated with a preference for compassion.

Thematic analysis demonstrated that patient preferences were influenced by: 1) explicit beliefs regarding the relative value of competence versus compassion; 2) the perceived role of the surgeon or physician in various clinical scenarios; 3) the impact of emotional and mental health on medical experiences; and 4) the type and frequency of healthcare exposure. Furthermore, although patients desired a competent approach from surgeons overall, a complex interplay of preferences exists, suggesting that compassion becomes a priority once competence is established and vice versa (Table 1).

Conclusion: Overall, patients ranked competence higher than compassion, particularly in surgical scenarios where technical skill was critical to the patient’s perception of a good outcome. However, qualitative analyses suggest that compassion becomes a priority once competence is established. These novel findings can inform surgeons and surgical training programs on how best to elicit, navigate, and prioritize patient communication and informational needs in diverse clinical settings.
 

70.05 Comparison of the Accuracy of SURPAS vs ACS NSQIP Surgical Risk Calculators

S. Khaneki1,2, M. R. Bronsert1, W. G. Henderson1, M. Yazdanfar1, A. Lambert-Kerzner1, K. E. Hammermeister1, R. A. Meguid1  1University Of Colorado Denver, Surgery, Aurora, CO, USA 2Hurley Medical Center, Internal Medicine, Flint, MI, USA

Introduction:

The Surgical Risk Preoperative Assessment System (SURPAS) is a parsimonious surgical risk assessment tool integrated into our electronic health record (EHR) for the preoperative prediction of postoperative adverse events. SURPAS applies to >3000 operations in 9 surgical specialties, requires entry of 7 readily available predictor variables, and predicts outcomes of mortality, overall morbidity, unplanned readmission, and 8 clusters of common complications. It was developed from the American College of Surgeons’ National Surgical Quality Improvement Program (ACS NSQIP) dataset. The objective of this study was to compare the accuracy of predictions of postoperative mortality and morbidity using SURPAS vs. the ACS NSQIP risk calculator.

 

Methods:

We calculated predicted preoperative risk of postoperative mortality and morbidity using both SURPAS and the ACS NSQIP risk calculator for 1,006 patients randomly selected from the ACS NSQIP database across 9 different surgical subspecialties.  We calculated the relative and absolute mean and median of the risk differences and plotted histograms and Bland-Altman graphs to analyze these differences.  We also compared the goodness of fit statistics for expected and observed adverse postoperative outcomes between SURPAS and the ACS NSQIP risk calculator using the c-index, Hosmer-Lemeshow analysis, and Brier scores.
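
As a concrete reference for the three fit statistics named above, here is a minimal Python sketch, assuming arrays y (observed 0/1 outcomes) and p (predicted risks from either calculator). The placeholder data are random, and the Hosmer-Lemeshow helper is a standard decile-based implementation, not the study's code.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score, brier_score_loss

def hosmer_lemeshow(y, p, groups=10):
    """Chi-square over groups (deciles) of predicted risk."""
    edges = np.percentile(p, np.linspace(0, 100, groups + 1))
    idx = np.clip(np.digitize(p, edges[1:-1]), 0, groups - 1)
    chi2 = 0.0
    for g in range(groups):
        n = (idx == g).sum()
        obs, exp = y[idx == g].sum(), p[idx == g].sum()
        chi2 += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return chi2, stats.chi2.sf(chi2, groups - 2)

y = np.random.binomial(1, 0.1, 1006)    # placeholder observed outcomes
p = np.random.uniform(0.01, 0.3, 1006)  # placeholder predicted risks
print("c-index:", roc_auc_score(y, p))
print("Brier score:", brier_score_loss(y, p))
print("Hosmer-Lemeshow chi2, p:", hosmer_lemeshow(y, p))
```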

 

Results:

The SURPAS risk estimates for mortality were slightly higher (mean 0.64%) than the ACS NSQIP estimates (0.59%) and considerably higher for overall morbidity (10.65% vs. 7.73%). The ACS NSQIP risk estimates for morbidity tended to underestimate risk compared to observed adverse postoperative outcomes, particularly for the highest risk patients. Goodness of fit statistics were similar for SURPAS and the ACS NSQIP risk calculator, except for the c-index for mortality (SURPAS c=0.853 vs. ACS NSQIP c=0.937), although this finding should be interpreted with caution because there were only 6 deaths. Hosmer-Lemeshow graphs and fit statistics for ACS NSQIP and SURPAS risk estimates vs. observed adverse postoperative outcomes are shown for mortality and overall morbidity (Figure).

 

Conclusions:

The SURPAS risk predictions for mortality and overall morbidity are as good as those of the ACS NSQIP risk calculator. SURPAS has the advantages that it requires only one-third as many predictor variables as the ACS NSQIP tool, provides patient risk estimates compared to national averages for patients undergoing the same operation, is integrated into the EHR, and automatically provides a preoperative note in the patient’s medical record and a graphical handout of risks for patients to take home.

70.04 Does a Surgery Specific Rapid Response Team Decrease the Time to Intervention in Surgical Patients?

M. Chang1, P. Sinha2, P. Ayoung-Chee2  1St. George’s University School Of Medicine, St. George’s, Grenada 2New York University School Of Medicine, Department Of Surgery, New York, NY, USA

Introduction:
Rapid response teams (RRT) have been shown to decrease cardiac arrests and unexpected deaths while increasing the number of admissions to the intensive care unit (ICU). They are alerted when patients display clinical signs of deterioration, bringing a team composed of critical care nurses and a medical intensivist. At our institution, surgical patients with post-operative complications were not benefiting from RRT activations. Therefore, we implemented a surgical rapid response team (SRRT) with the goal of improving surgical patient outcomes. The SRRT alerted a surgical intensivist and the in-house surgical resident team in addition to the usual RRT members. The goal of this study was to evaluate the impact of the SRRT implementation.

Methods:
We completed a retrospective study of 87 total RRTs involving surgical patients in the six months prior to the implementation of the SRRT (period 1) and in the eight months after (period 2). For each RRT, we measured the time elapsed from the initiation of the RRT to the time of intervention. An intervention was defined as prescribing medication, infusing intravenous fluids, intubating the patient, or beginning cardiopulmonary resuscitation. Additional outcomes included the time elapsed from the initiation of the RRT to the admission of the patient to the ICU and the time elapsed from the initiation of the RRT to the patient’s return to the OR.

Results:
There were 26 total RRTs in period 1 and 61 SRRTs in period 2. This represented a 75.9% increase in the monthly rate of RRTs in surgical patients after the implementation of the SRRT. In our analysis, 8 RRTs from period 1 and 18 SRRTs from period 2 were excluded due to missing time to intervention. The average time to intervention decreased significantly from 12.2 mins in period 1 to 7.4 mins in period 2, a decrease of 4.8 mins (CI 1.37 to 8.21, P=0.0068). For patients who required ICU admission, the average time to ICU admission decreased from 40.9 mins in period 1 to 27.0 mins in period 2, a difference of 13.9 mins (CI 10.22 to 37.97, P=0.2458). For patients who required a return to the OR, the average time decreased from 66.5 mins in period 1 to 26.0 mins in period 2, a decrease of 40.5 mins (CI -61.70 to 142.70, P=0.2303).

Conclusion:
Creating a surgery-specific RRT decreased the time to intervention by nearly 40% for surgical patients. The time to ICU admission and time to return to the OR also decreased, although these decreases were not statistically significant. Additionally, there was a disproportionate increase in RRTs called on surgical patients after the implementation of the SRRT. We were unable to evaluate why RRT activations increased, but a surgery-specific RRT may represent a resource that staff feel comfortable using when their patients show clinical signs of deterioration. This study shows promising results for improved outcomes with a surgery-specific RRT.

70.03 Modifiable Risk Factors Associated with Poor Wellness and Suicidal Ideation in Surgical Residents

R. J. Ellis1,2, D. Hewitt3, Y. Hu1, A. D. Yang1, J. T. Moskowitz4, E. O. Cheung4, D. B. Hoyt2, J. Buyske5, T. J. Nasca6, J. R. Potts6, K. Y. Bilimoria1,2  1Northwestern University, Department Of Surgery, Surgical Outcomes And Quality Improvement Center, Chicago, IL, USA 2American College of Surgeons, Chicago, IL, USA 3Thomas Jefferson University, Department Of Surgery, Philadelphia, PA, USA 4Northwestern University, Department Of Medical Social Sciences, Chicago, IL, USA 5American Board of Surgery, Philadelphia, PA, USA 6Accreditation Council for Graduate Medical Education, Chicago, IL, USA

Introduction:  Poor physician wellness often manifests as burnout and may lead to thoughts of attrition and suicidal ideation, with suicide a leading cause of physician mortality. Surgical residents may be particularly at risk for these issues. Objectives of this study were (1) to examine the frequency of burnout, thoughts of attrition, and suicidal ideation in general surgery residents and (2) to characterize individual and environmental factors associated with poor wellness outcomes.

Methods: We conducted a cross-sectional national study of clinically active general surgery residents, administered in conjunction with the 2018 American Board of Surgery In-Training Examination. Outcomes of interest were burnout, thoughts of attrition, and suicidal ideation. Exposures included individual and environmental factors associated with resident wellness: resident grit, stress, duty hour violations, discrimination, abuse, and sexual harassment. Associations between exposures and outcomes were assessed using multivariable logistic regression models.

Results: Among 7,413 residents (99.3% response rate) from 262 general surgery programs, 12.9% of residents reported at least weekly symptoms on both burnout subscales (emotional exhaustion and depersonalization). Burnout was more likely in residents with low grit scores (OR 2.27 [95%CI 1.95-2.63]), frequent duty hour violations (OR 1.46 [95%CI 1.22-1.74]), and in those reporting discrimination (OR 1.23 [95%CI 1.02-1.49]), verbal/physical abuse (OR 1.78 [95%CI 1.47-2.15]), or sexual harassment (OR 1.28 [95%CI 1.00-1.63]). Thoughts of attrition were reported by 12.6% of residents and were more likely in female residents (OR 1.32 [95%CI 1.09-1.60]), those with lower grit scores (OR 1.26 [95%CI 1.06-1.50]), frequent duty hour violations (OR 1.68 [95%CI 1.38-2.04]), or in those reporting severe stress (OR 2.47 [95%CI 2.04-2.99]), frequent burnout symptoms (OR 2.35 [95%CI 1.92-2.87]), discrimination (OR 1.27 [95%CI 1.06-1.51]), or verbal/physical abuse (OR 2.16 [95%CI 1.81-2.57]). Suicidal ideation was reported by 4.5% of residents and was more likely in those with lower grit scores (OR 1.43 [95%CI 1.10-1.84]), or in those who reported severe stress (OR 2.61 [95%CI 1.99-3.42]), frequent burnout symptoms (OR 1.94 [95%CI 1.43-2.63]), verbal/physical abuse (OR 1.80 [95%CI 1.39-2.33]), or sexual harassment (OR 1.58 [95%CI 1.13-2.21]).

Conclusion: Burnout symptoms, thoughts of attrition, and suicidal ideation were reported at lower rates in this comprehensive national survey than in previous studies, but remain an important problem among general surgery residents. Resident grit and environmental factors such as duty hour violations, discrimination, abuse, and harassment are associated with burnout. Burnout and negative environmental factors are further associated with thoughts of attrition and suicidal ideation. Targeted interventions aimed at minimizing inappropriate behaviors and improving the learning environment may improve trainee wellness.

70.02 CGCAHPS Scores are Influenced by Social Determinants of Health

M. Emerson1, S. Markowiak1,2, S. Pannell1,2, M. Nazzal1,2, F. C. Brunicardi1,2  1University Of Toledo Medical Center, College Of Medicine, Toledo, OH, USA 2University Of Toledo Medical Center, Department Of Surgery, Toledo, OH, USA

Introduction: The Clinician and Group Consumer Assessment of Healthcare Providers and Systems (CGCAHPS), a standardized tool used in office settings to measure patient perceptions of care, was implemented for Physician Quality Reporting Systems (PQRS) beginning in 2012. By 2015, these publicly reported survey scores were tied to Medicare reimbursement for medical groups with over 100 eligible professionals. Understanding of the impact of disparities in social determinants of health (SDOH) on such measures is still evolving. The purpose of our study was to determine whether CGCAHPS scores are influenced by SDOH.

Methods: Data were drawn from the publicly reported Physician Compare datasets provided by the Centers for Medicare and Medicaid Services (CMS). We created a database linking data from medical practices with over 100 eligible medical professionals to the corresponding census measures at the county level. Multivariate analysis and Pearson’s correlation coefficient (Pearson r) were used to test 133 SDOH against CGCAHPS scores.
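
A minimal sketch of this correlation screen, assuming a merged county-level table with one CGCAHPS score column and 133 SDOH columns; the file and column names are hypothetical.

```python
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("practice_county_merged.csv")   # hypothetical linked database
sdoh_cols = [c for c in df.columns if c != "cgcahps"]

rows = []
for col in sdoh_cols:
    sub = df[["cgcahps", col]].dropna()
    r, p = pearsonr(sub["cgcahps"], sub[col])    # (r, p) per SDOH measure
    rows.append({"measure": col, "r": r, "p": p})

results = pd.DataFrame(rows)
print((results["p"] < 0.05).sum(), "of", len(results), "measures significant")
```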

Results: All medical practices with over 100 medical professionals in 1,468 counties were analyzed. Of the 133 SDOH analyzed, 56 had a statistically significant negative correlation with CGCAHPS scores, 67 had a statistically significant positive correlation, and 10 measures were not statistically significant. Statistically higher CGCAHPS scores were associated with a higher proportion of males (r=0.025, p<0.001), elderly patients (r=0.026, p<0.001), Caucasian patients (r=0.029, p<0.001), and persons employed in professional careers (r=0.024, p<0.001). Statistically lower CGCAHPS scores were associated with a higher proportion of African American patients (r=-0.050, p<0.001), a higher proportion of recent immigrants (r=-0.024, p<0.001), and communities with higher unemployment rates (r=-0.031, p<0.001).

Conclusion: Of the SDOH analyzed, 92.5% had a statistically significant correlation with CGCAHPS scores: 42.1% showed negative correlations and 50.4% showed positive correlations. CGCAHPS scores are directly tied to medical practices’ CMS reimbursement by federal law. Because of disparities in SDOH, the CGCAHPS survey may shift CMS reimbursement from medical practices in poor and diverse communities to medical practices in wealthier ones.

 

70.01 Blue Spectrum Filtering Cataract Lenses Are Associated With Reduced Survival

J. Griepentrog1, X. Zhang1, O. Marroquin3, J. Chang3, N. Loewen2, M. Rosengart1  1University Of Pittsburgh, Surgery, Pittsburgh, PA, USA 2University Of Pittsburgh, Ophthalmology, Pittsburgh, PA, USA 3University Of Pittsburgh, Medicine, Pittsburgh, PA, USA

Introduction: During the process of aging, the lens undergoes progressive changes that perturb the transmission of light, particularly the short-wavelength (400-500nm) blue spectrum. It is this shorter wavelength that maximally entrains our circadian rhythms, which orchestrate adaptive alterations in physiology, metabolism, and immunity. Several recent studies highlight that cataract surgery is associated with a reduced risk of all-cause mortality. Intraocular lenses (IOL) differ in transmission properties: conventional (Natural-IOL) and blue-light filtering (Blue-IOL). We hypothesized that in patients undergoing bilateral cataract surgery, the restoration of exposure to blue light with the implantation of Natural-IOL compared to continued blockage with Blue-IOL is associated with a reduced risk of death.

Methods: We conducted a retrospective cohort analysis of all subjects undergoing bilateral cataract surgery within a single healthcare system. We abstracted data for each subject regarding age, sex, race, zip code and state of residence, health insurance status, smoking status, alcohol use, and body mass index. Systemic comorbidities were classified using the Charlson Comorbidity Index. The primary outcome was all-cause mortality. We fit a multivariable Cox proportional hazards model, stratified by and clustered on surgeon, to compare the adjusted risk of death in subjects undergoing bilateral implantation of Blue-IOL with those receiving Natural-IOL. A p<0.05 was considered significant. Sensitivity analyses for mortality were performed 1) using a more restrictive definition of ‘concomitant’ bilateral cataract surgery (<90-day interval), 2) excluding any surgeon implanting predominantly (>90%) Blue-IOL, and 3) restricting the analysis to Pennsylvania (PA) residents.
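
A sketch of such a survival model using the Python lifelines package, assuming a per-patient extract with hypothetical column names; the abstract's full covariate set is abbreviated here.

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("bilateral_cataract_cohort.csv")   # hypothetical extract
df["surgeon_cluster"] = df["surgeon_id"]            # duplicate column for clustering

cph = CoxPHFitter()
cph.fit(
    df[["time_to_death", "died", "natural_iol", "age", "charlson",
        "surgeon_id", "surgeon_cluster"]],
    duration_col="time_to_death",
    event_col="died",
    strata=["surgeon_id"],          # stratified by surgeon
    cluster_col="surgeon_cluster",  # robust (sandwich) SEs clustered on surgeon
)
cph.print_summary()  # exp(coef) for natural_iol corresponds to the reported aHR
```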

Results: A total of 1482 subjects underwent bilateral cataract surgery during the period of analysis, of which 512 (34.6%) received a Blue-IOL. Natural-IOL were associated with a reduced risk of all-cause mortality: aHR, 0.60 [95% CI, 0.38 to 0.94]; p=0.03. There was a significant difference by age category (p=0.02 for interaction with age >65): for the subgroup >65, Natural-IOL were associated with reduced mortality: aHR 0.52 [95% CI, 0.35 to 0.78]; p=0.001. Restricting bilateral surgery to a 90-day interval (n=1163), eliminating the surgeon implanting predominantly Blue-IOL (n=1133), and restricting the analysis to PA residents (n=1463), each showed that Natural-IOL are associated with prolonged survival.

Conclusion: Among patients undergoing cataract surgery, restoring transmission of the entire visible spectrum, compared with blocking the shorter-wavelength blue spectrum, is associated with a reduced risk of death. These data suggest that progressive blockage of blue light by cataracts may perturb circadian biology, and that cataract surgery restoring the shorter wavelength of visible blue light may restore these homeostatic mechanisms.

69.10 The Cost of End of Life Care in Colorectal Cancer Patients

M. Delisle1, R. Helewa1, J. Park1, D. Hochman1, A. McKay1  1University of Manitoba, Surgery, Winnipeg, MB, Canada

Introduction:
End of life healthcare for oncology patients has been criticized as inappropriate and overly aggressive, resulting in low-value care and inefficient use of limited resources. Strategies exist to improve patient comfort in this critical moment of life and reduce unnecessary expenditures. The objective of this study was to identify factors associated with increased end of life costs in colorectal cancer patients to guide future quality improvement.

Methods:
This is a retrospective cohort study including patients dying of colorectal cancer in a single Canadian province between 2004 and 2012 (ICD-10-CM C18-C21). Data were obtained from a single-payer, provincial administrative claims database and a comprehensive provincial cancer registry. Inpatient hospital costs were calculated using the Canadian Institute for Health Information’s (CIHI) Resource Intensity Weights multiplied by CIHI’s average Cost per Weighted Case in 2014 Canadian dollars. Outpatient costs were the total billed to the provincial government in the last 30 days of life, adjusted to 2014 Canadian dollars using Statistics Canada’s Consumer Price Index. Patients with no costs over the last six months of life were excluded to account for loss to follow-up (n=21).

The primary outcome was end of life costs, defined as total inpatient and outpatient costs accrued in the 30 days before death. Risk-adjusted 30-day end of life costs were estimated using negative binomial regression with a log link function, robust standard errors, and an offset variable to account for patients who did not survive 30 days from diagnosis. Covariates included age, sex, cancer stage, socioeconomic status, cancer location (rectal, rectosigmoid, colon), Charlson Comorbidity Index, year of diagnosis, and death in hospital. Multivariable logistic regression was used to assess baseline predictors associated with in-hospital death.
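
A sketch of both models using Python's statsmodels, under assumed (hypothetical) column names; the offset is the log of observed days, capped at 30.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("eol_costs.csv")   # hypothetical cohort extract
df["log_days"] = np.log(df["days_observed"].clip(upper=30))   # offset term

# Negative binomial GLM (log link is the default) with robust standard errors
nb = smf.glm(
    "cost_30d ~ age + sex + C(stage) + ses + C(site) + charlson"
    " + dx_year + died_in_hospital",
    data=df,
    family=sm.families.NegativeBinomial(),
    offset=df["log_days"],
).fit(cov_type="HC1")
print(nb.summary())

# Baseline predictors of in-hospital death via multivariable logistic regression
logit = smf.logit(
    "died_in_hospital ~ age + sex + C(stage) + ses + charlson + dx_year",
    data=df,
).fit()
print(np.exp(logit.params))   # odds ratios (abstract: Charlson OR ~1.30)
```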

Results:
A total of 1,622 patients died of colorectal cancer between 2004 and 2012 (Table 1). The largest variations in cost existed between patients who died in hospital and those who did not. The median length of stay for patients dying in hospital was 26 days (IQR 13-41). Significant predictors of in-hospital death included comorbidities (OR 1.30, 95% CI 1.16-1.45, p<0.01) and more recent diagnosis (OR 1.10, 95% CI 1.02-1.17, p=0.01).

Conclusion:
In-hospital deaths are associated with significantly increased end of life costs, and the odds of dying in hospital appear to be increasing in this population. This study could not assess whether in-hospital deaths were also associated with increased patient benefits. Future studies should aim to identify cost-effective strategies to optimize end of life care.

69.09 Characterizing the Highest Cost Patients Before and After Enhanced Recovery After Surgery Programs

A. N. Khanijow1, M. S. Morris1, J. A. Cannon1, G. D. Kennedy1, J. S. Richman1, D. I. Chu1  1University Of Alabama at Birmingham, Department Of Surgery, Division Of Gastrointestinal Surgery, Birmingham, Alabama, USA

Introduction: The overall cost-effectiveness of enhanced recovery after surgery (ERAS) programs has been demonstrated across many institutions, but it is unclear whether certain patients account for disproportionate shares of ERAS costs. The purpose of this study was to characterize the cost drivers and clinical features of the highest cost patients undergoing elective colorectal surgery before and after ERAS implementation.

 

Methods: ERAS was implemented at a single tertiary-care institution in January 2015. Variable cost data (costs that vary with care decisions) were collected from the institution’s financial department for the inpatient stay of patients undergoing elective colorectal surgery from 2012-2014 (pre-ERAS) and 2015-2017 (ERAS). Costs were adjusted for inflation to 2017 US dollars using the Producer Price Index and compared using Wilcoxon tests between high cost patients (upper 10th percentile of total variable costs) and non-high cost patients (lower 90th percentile), both before and after ERAS. Postoperative complications were identified using National Surgical Quality Improvement Program definitions. Severity of illness (SOI) (minor, moderate, major, and extreme) was used as an indicator of burden of illness.
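
A sketch of the inflation adjustment and high-cost comparison in Python/pandas; the PPI factors and column names are placeholders, and the Wilcoxon rank-sum test is scipy's mannwhitneyu.

```python
import pandas as pd
from scipy.stats import mannwhitneyu   # Wilcoxon rank-sum test

df = pd.read_csv("colectomy_variable_costs.csv")   # hypothetical extract
# Hypothetical PPI inflation factors to 2017 US dollars
ppi_to_2017 = {2012: 1.08, 2013: 1.06, 2014: 1.05, 2015: 1.04, 2016: 1.02, 2017: 1.00}
df["cost_2017"] = df["variable_cost"] * df["year"].map(ppi_to_2017)

for era in ("pre_eras", "eras"):
    sub = df[df["era"] == era]
    cutoff = sub["cost_2017"].quantile(0.90)         # upper 10th percentile split
    high = sub.loc[sub["cost_2017"] > cutoff, "cost_2017"]
    rest = sub.loc[sub["cost_2017"] <= cutoff, "cost_2017"]
    stat, p = mannwhitneyu(high, rest, alternative="two-sided")
    print(era, f"high mean=${high.mean():,.0f}", f"rest mean=${rest.mean():,.0f}", p)
```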

Results: Of 842 included patients (389 pre-ERAS and 453 ERAS), there was no significant difference in the proportion of high cost patients between the two groups (10.8% vs. 9.5%, p=0.60). Within the pre-ERAS group, high and non-high cost patients had an average total variable cost per patient of $21,107 and $7,432, respectively ($13,675 difference, p<0.01). Within the ERAS group, high and non-high cost patients had an average total variable cost per patient of $22,737 and $6,810 ($15,926 difference, p<0.01). Over 80% of patients in the extreme SOI group were in the high cost cohort for both pre-ERAS and ERAS patients. Compared to non-high cost patients, high cost pre-ERAS patients had a longer average length of stay (LOS) (13.1 vs. 5.2 days, p<0.01) with a greater proportion of that time in the ICU (19% vs. 1%, p<0.01). High cost ERAS patients also had a longer average LOS (15.9 vs. 4 days, p<0.01) and proportion of ICU time (14% vs. 1%, p<0.01). High cost patients experienced significantly more post-op complications (p<0.01), including myocardial infarction and pneumonia in the pre-ERAS group and pneumonia, acute renal failure, ventilator dependency, and blood transfusions in the ERAS group. High cost pre-ERAS patients had higher mean anesthesia costs than high cost ERAS patients ($1,173 vs. $841, p<0.01) but lower mean pharmacy costs ($1,453 vs. $3,200, p=0.02); there were no significant differences in complications between these two groups.

 

Conclusion: SOI and post-op complications were key drivers of high costs before and after ERAS implementation. High cost patients continued to experience significantly longer LOS and ICU stays. The need for quality improvement in surgical care remains even in the era of ERAS.

 

69.07 Cumulative Narcotic Dose Associated With Ultimate Risk of Long Term Opioid Use in Colorectal Surgery Patients

P. Cavallaro1, A. Fields2, R. Bleday2, H. Kaafarani1, Y. Yao1, K. F. Ahmed1, T. Sequist1, M. Rubin1, L. Bordeianou1  1Massachusetts General Hospital, General Surgery, Boston, MA, USA 2Brigham And Women’s Hospital, Boston, MA, USA

Introduction: In 2016 alone, nearly 42,000 people died from opioid overdose, and an estimated 40% of those deaths involved a prescription opioid. However, the relationship between postoperative inpatient opioid use and the subsequent risk of long-term opioid abuse remains unknown, with studies focusing primarily on opioid prescriptions at the time of discharge. We therefore aimed to evaluate the relationship between inpatient opioid use and prolonged opioid use (POU) in patients undergoing colorectal surgery.

Methods: We merged pharmacy records with prospectively maintained ACS-NSQIP data on surgical outcomes of patients undergoing colectomy from June 2015 to October 2017 across 5 institutions (2 academic, 3 community) participating in a regional Colorectal Surgery Collaborative. Narcotic administration was converted into morphine milligram equivalents (MMEs). Patients using patient-controlled analgesia were excluded. POU was the primary outcome and was defined as any new opioid prescription between 90 and 180 days post-operatively. We compared patient demographics, surgical indications, comorbidities, and postoperative complications, as well as daily MME administration and total inpatient MMEs.
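
A sketch of the MME conversion and POU definition in Python/pandas. The conversion factors shown are published CDC oral-MME factors; the record layouts and drug list are hypothetical simplifications.

```python
import pandas as pd

# Published CDC oral-MME conversion factors (per mg); drug list abbreviated
MME_FACTOR = {
    "morphine": 1.0, "oxycodone": 1.5, "hydromorphone": 4.0,
    "hydrocodone": 1.0, "tramadol": 0.1,
}

rx = pd.read_csv("inpatient_pharmacy.csv")   # hypothetical merged records
rx["mme"] = rx["dose_mg"] * rx["drug"].map(MME_FACTOR)

# Cumulative (total inpatient) and average daily MMEs per patient
per_day = rx.groupby(["patient_id", "hospital_day"])["mme"].sum()
per_patient = per_day.groupby("patient_id").agg(total_mme="sum", daily_mme="mean")

# Primary outcome: any new opioid prescription 90-180 days post-op
outpt = pd.read_csv("outpatient_rx.csv")     # hypothetical records
pou_ids = outpt.loc[outpt["days_postop"].between(90, 180), "patient_id"]
per_patient["pou"] = per_patient.index.isin(pou_ids)
```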

Results: 940 colectomy patients were included in the study (52% female, 43.3% opioid naive, mean age 62.2 years). 99 patients (10.4%) had POU. On univariate analysis, POU patients had higher ASA class (ASA > 3 in 61% vs. 44%, p=0.002) and were less often opioid naive (23% vs. 46%, p<0.001). These patients had longer lengths of stay, more readmissions, and more post-operative complications (p<0.05). POU patients also had higher rates of stomas (p<0.05). POU patients received higher cumulative MMEs over their more complex hospitalizations, even though their daily dosages were similar to those of non-POU patients (50±44 vs. 73±704, p=0.7). In multivariable analysis, only cumulative use of narcotics, not overall complications or length of stay, was predictive of POU (top quartile OR 2.0, 95% CI 1.2-3.2; p=0.005). Previous opioid use within the last year was also an independent predictor of POU (OR 2.6, 95% CI 1.6-4.3; p<0.001).

Conclusion: Prolonged narcotic use appears to be associated with previous narcotic exposure and the cumulative dose of narcotics administered during the post-operative inpatient hospitalization, not with the complexity of the surgical procedure or with surgical complications. This underscores the importance of minimizing opioid use throughout the entire peri-operative course, especially in patients with prior opioid use, post-operative complications, and protracted hospital courses. It also suggests the need to develop longer-lasting postoperative narcotic-sparing strategies beyond current ERAS efforts, which are mostly focused on the first 24-48 hours after surgery.

 

69.08 Economic Analysis of ERAS Programs: Lack of Adherence to Standards for Cost Effectiveness Reporting

M. A. Eid1, N. Dragnev1, C. Lamb1, S. Wong1  1Dartmouth Hitchcock Medical Center, General Surgery, Lebanon, NH, USA

Introduction:

Enhanced Recovery After Surgery (ERAS) is an evidence-based, multimodal pre- and post-operative care pathway that results in significant improvements in patient outcomes after major surgery. Along with decreased complication rates and recovery times, the economic benefit of implementing ERAS has been widely heralded. However, it is unclear how rigorous the associated economic analyses are. We used the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) guidelines to assess the quality of these studies.

Methods:
Using PubMed and OVID, we performed a systematic literature search to identify economic analyses evaluating the cost effectiveness of ERAS in colorectal, hepatobiliary, and gynecologic surgery in English-language journals. The MeSH terms included colorectal surgery, cost analysis, and ERAS. We retrieved 45 articles, of which 17 were directly relevant to the topic. Each paper was evaluated against the CHEERS checklist, which comprises 7 categories with 27 specified criteria, mainly focused on a study’s methodology (n=16) and how results are reported (n=5).

Results:
Of the 17 publications, comprising 14 colorectal, 2 hepatobiliary, and 1 gynecologic studies, all but one described ERAS as cost-effective; one study made no definitive statement regarding cost effectiveness. However, none of the studies fully adhered to the CHEERS guidelines. Only 47% of the studies fulfilled at least 14 (50%) of the checklist items. All of the papers included “an explicit statement regarding the broader context of the study” and most titles identified the studies as economic evaluations. Papers generally performed poorly with regard to checklist items for methods and results. For example, none of the papers reported the choice of discount rates used for costs and outcomes. Overall, across the 16 analytic methods items, average concordance was only 40%. Other key components of economic evaluations, such as measurement and valuation of outcomes and assumptions underlying the decision-analytic model, were not well reported.

Conclusion:
Based on our evaluation of economic analyses of ERAS protocols, the quality of these studies is generally quite poor. Less than half of the studies adhered to 50% of the CHEERS reporting guidelines, though nearly all of them posited cost savings with ERAS. Although most studies claimed to be cost-effectiveness evaluations, the vast majority lacked methodologic quality and appear to be merely cost reports. Cost-effectiveness and economic analyses play a pivotal role in evidence-based medicine, but the current literature may be limited in terms of actually evaluating the costs and outcomes of interventions.

69.06 Association of Enhanced Recovery Pathways with Postoperative Renal Complications: Fact or Fiction?

Q. L. Hu1,2, J. Y. Liu1,3, C. Y. Ko1,2, M. E. Cohen1, K. Y. Bilimoria4, D. B. Hoyt1, R. P. Merkow1,4  1American College Of Surgeons, Chicago, IL, USA 2University Of California – Los Angeles, Department Of Surgery, Los Angeles, CA, USA 3Emory University School Of Medicine, Department Of Surgery, Atlanta, GA, USA 4Feinberg School Of Medicine – Northwestern University, Department Of Surgery, Chicago, IL, USA

Introduction:
Enhanced Recovery Pathways (ERPs) have been shown to dramatically improve perioperative outcomes in colorectal surgery. However, one important factor limiting their widespread adoption is concern regarding postoperative renal complications. Our objective was to evaluate the association of overall use of an ERP, and of adherence to its potentially renal-compromising components (e.g., epidural use [hypotension], multimodal pain management [NSAID use], fluid restriction [hypovolemia]), with postoperative renal complications.

Methods:
American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) Enhanced Recovery data between 2014 and 2017 were used to identify patients who were managed under an ERP (ERP group). A 1:1 propensity-score match was used to identify control patients during the same time period who were managed without an ERP (non-ERP group). Hierarchical multivariable logistic regression models were used to evaluate the overall association of an ERP (vs. non-ERP) as well as adherence to individual ERP components with postoperative renal complications (either renal insufficiency or dialysis requirement). 
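
A minimal 1:1 nearest-neighbor propensity match in Python, as a sketch only; the study's matching specification is not detailed in the abstract, and the covariate names here are hypothetical. Note this simple version matches with replacement.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("nsqip_er_cohort.csv")   # hypothetical extract; erp = 1/0
covars = ["age", "asa_class", "bmi", "diabetes", "hypertension"]

# Propensity score: probability of ERP management given covariates
ps_model = LogisticRegression(max_iter=1000).fit(df[covars], df["erp"])
df["pscore"] = ps_model.predict_proba(df[covars])[:, 1]

treated = df[df["erp"] == 1]
control = df[df["erp"] == 0]

# Nearest-neighbor match on the propensity score (with replacement, for brevity)
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])
```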

Results:
We identified 36,452 patients who received at least one ERP component, including 16.1% who received epidural analgesia, 87.6% who received multi-modal pain management, and 53.0% who received fluid restrictive care. Compared to non-ERP, ERP management was not associated with postoperative renal complications (1.0% vs. 1.0%; OR 0.96, 95% CI 0.83-1.11). Independent predictors of renal complications included male sex, African American race, higher ASA class, severe obesity, and preoperative co-morbidities, including hypertension, heart failure, diabetes, ascites, and disseminated cancer. Among patients managed under ERPs, adherence with individual potentially renal-compromising components was not associated with renal complications: epidural use (1.0% vs. 1.0%; OR 0.77, 95% CI 0.54-1.11), multi-modal pain management (0.9% vs. 1.3%; OR 0.78, 95% CI 0.59-1.05), and fluid restriction (0.9% vs. 1.0%; OR 1.05, 95% CI 0.79-1.39). Finally, adherence with all three components versus none was also not associated with renal complications (1.2% vs. 1.0%; OR 0.92, 95% CI 0.52-1.65). 

Conclusion:
Management under ERPs and adherence to individual potentially renal-compromising components were not associated with postoperative renal complications. Postoperative renal complications are serious adverse events; however, clinicians should focus on modifiable precipitating factors other than the use of an ERP.

69.05 Survival Outcome of RNF43 Mutant-type Differs between Right-sided and Left-sided Colorectal Cancer

Y. Shimada1, Y. Tajima1, M. Nagahashi1, H. Ichikawa1, K. Yuza1, Y. Hirose1, T. Katada1, M. Nakano1, J. Sakata1, H. Kameyama1, Y. Takii2, S. Okuda3, K. Takabe4, T. Wakai1  1Niigata University, Digestive And General Surgery, Niigata, NIIGATA, Japan 2Niigata Cancer Center Hospital, Surgery, Niigata, NIIGATA, Japan 3Niigata University, Bioinformatics, Niigata, NIIGATA, Japan 4Roswell Park Cancer Institute, Breast Surgery, Buffalo, NY, USA

Introduction: Right-sided colorectal cancer (CRC) demonstrates worse survival outcomes than left-sided CRC, and the clinicopathological characteristics of right-sided CRC differ from those of left-sided CRC. Recently, the importance of RNF43 mutation has been reported along with BRAF mutation in the serrated neoplasia pathway. We hypothesized that the clinical significance of RNF43 mutation differs between right-sided and left-sided CRCs, and that RNF43 mutation is associated with the tumor biology of right-sided CRC. To test this hypothesis, we investigated the clinicopathological characteristics and survival outcomes of patients with RNF43 mutation in right-sided and left-sided CRCs.

Methods: One hundred nine microsatellite-stable Stage IV CRC patients were analyzed: 33 with right-sided and 76 with left-sided CRC. We investigated genetic alterations using a 415-gene panel, which includes RNF43 and other genes associated with tumor biology. We compared clinicopathological characteristics between RNF43 wild-type and RNF43 mutant-type using Fisher’s exact test. Moreover, we classified RNF43 mutant-type according to primary tumor sidedness, i.e., right-sided or left-sided RNF43 mutant-type, and compared clinicopathological characteristics between the two groups. Overall survival rates of RNF43 wild-type, right-sided RNF43 mutant-type, and left-sided RNF43 mutant-type were analyzed using the log-rank test.

Results: Panel sequencing revealed that 8 of 109 patients (7%) had RNF43 mutation. RNF43 mutation was significantly associated with age 65 or older (P = 0.020), presence of BRAF mutation (P = 0.005), and absence of KRAS and PTEN mutations (P = 0.049 and P = 0.026, respectively). RNF43 mutation was observed in 3 of 33 right-sided CRCs (9%) and 5 of 76 left-sided CRCs (7%). Interestingly, RNF43 mutations in right-sided CRC were nonsense (R145X) or frameshift (P192fs, S262fs) mutations, while those in left-sided CRC were missense mutations (T58S, W200C, R221W, R519Q, R519Q). All three right-sided RNF43 mutant-type patients were 65 or older, female, and BRAF V600E mutant-type. Right-sided RNF43 mutant-type showed significantly worse OS than RNF43 wild-type and left-sided RNF43 mutant-type (P = 0.007 and P = 0.046, respectively).

Conclusion: The clinicopathological characteristics and survival outcomes of patients with RNF43 mutation might differ between right-sided and left-sided CRC. In right-sided CRC, RNF43 mutation is a small but distinct molecular subtype associated with aggressive tumor biology along with BRAF V600E mutation. Future preclinical and clinical studies may need to focus on RNF43 mutation to improve survival outcomes in right-sided CRC.

 

69.04 What Drives Surgeon Workload in Colorectal Surgery?

K. E. Law1,2, B. R. Lowndes1,2,3, S. R. Kelley4, R. C. Blocker1,2, D. W. Larson4, M. Hallbeck1,2,4, H. Nelson4  1Mayo Clinic, Health Sciences Research, Rochester, MN, USA 2Mayo Clinic, Kern Center For The Science Of Health Care Delivery, Rochester, MN, USA 3Nebraska Medical Center, Neurological Sciences, Omaha, NE, USA 4Mayo Clinic, Surgery, Rochester, MN, USA

Introduction: Surgical techniques and technology are continually advancing, making it crucial to understand potential contributors to surgeon workload. Our goal was to measure surgeon workload in abdominopelvic colon and rectal procedures and to identify its possible contributors.

Methods: Between February and April 2018, surgeons were asked to complete a modified NASA-Task Load Index (NASA-TLX) after each surgical case, which included questions on distractions, fatigue, procedural difficulty, and expectation in addition to the validated NASA-TLX questions. All but the expectation question were rated on a 20-point scale (0=low, 20=high). Expectation was rated on a 3-point scale (i.e., more difficult than expected, as expected, less difficult than expected). Patient and procedural data were analyzed for procedures with completed surveys. Surgical approach was categorized as open, laparoscopic, or robotic.
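
A sketch of how the expectation and correlation analyses reported below could be computed from such survey data; the file layout and column names are hypothetical.

```python
import pandas as pd
from scipy import stats

tlx = pd.read_csv("modified_nasa_tlx.csv")   # hypothetical: one row per rated case

# One-way ANOVA of rated difficulty across the three expectation levels
groups = [g["difficulty"].to_numpy() for _, g in tlx.groupby("expectation")]
f, p = stats.f_oneway(*groups)
print(f"ANOVA: F={f:.2f}, p={p:.4f}")

# Pearson correlations of subscales and fatigue with procedural difficulty
for col in ["mental_demand", "physical_demand", "effort", "fatigue"]:
    r, pr = stats.pearsonr(tlx[col], tlx["difficulty"])
    print(col, f"r={r:.2f}, p={pr:.4g}")
```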

Results: Seven surgeons (3 female) rated 122 procedures over the research period using the modified NASA-TLX survey. Across the subscales, mean surgeon-reported workload was highest for effort (M=10.83, SD=5.66) followed by mental demand (M=10.18, SD=5.17), and physical demand (M=9.19, SD=5.60). Procedures were rated moderately difficult (M=10.74, SD=5.58). There was no significant difference in procedural difficulty or fatigue by surgical approach.
Fifty-four percent (n=66) of cases were rated as meeting expected difficulty, with 35% (n=43) considered more difficult than expected. Mean surgeon-reported procedural difficulty aligned with expectation with a mean procedural difficulty level of 9.29 (SD=5.11) for as expected, 14.39 (SD=4.49) for more difficult than expected, and 5.92 (SD=4.15) for less difficult than expected (F(2,118)=21.89, p<0.001). Surgeons also reported significantly more fatigue for procedures considered more difficult than expected (F(2,118)=8.13, p<0.001) compared to procedures less difficult than expected.
Self-reported mental demand (r=0.88, p<0.001), physical demand (r=0.87, p<0.001), effort (r=0.90, p<0.001), and surgeon fatigue (r=0.71, p<0.001) were strongly correlated with procedural difficulty. Furthermore, fatigue was strongly correlated with overall workload and the NASA-TLX subscales (r>0.7, p<0.001). Surgeons most frequently reported patient anatomy and body habitus, unexpected adhesions, and unfamiliar team members as contributors to the ease or difficulty of cases.

Conclusion: Self-reported mental demand, physical demand, and effort were strongly correlated with procedural difficulty and surgeon fatigue. Surgeons attributed case ease or difficulty levels to patient and intraoperative factors; however, procedural difficulty did not differ across surgical approach. Understanding contributors to surgical workload, especially unexpectedly difficult cases, can help define ways to decrease workload.

 

69.03 Population-based Analysis of Adherence to Extended VTE Prophylaxis after Colorectal Resection

A. Mukkamala1, J. R. Montgomery1, A. De Roo1, S. E. Regenbogen1  1University Of Michigan, Surgery, Center For Healthcare Outcomes And Policy, Ann Arbor, MI, USA

Introduction:  Since 2012, the American College of Chest Physicians (ACCP) has recommended 4 weeks of pharmacologic prophylaxis against venous thromboembolism (VTE) after abdominopelvic cancer surgery. Additionally, there is growing expert consensus favoring extended prophylaxis after surgery for inflammatory bowel disease (IBD). National studies have revealed very low uptake of prophylaxis before adoption of the ACCP guideline, but it remains unclear to what extent it has been adopted in standard practice in recent years. We sought to understand responsiveness to guidelines versus expert opinion by evaluating adherence to extended VTE prophylaxis after colectomy in a statewide registry. 

Methods:  We identified all patients in the Michigan Surgical Quality Collaborative (MSQC) registry who underwent elective colon or rectal resection between October 2015 (when MSQC first began recording post-discharge VTE prophylaxis) and February 2018. MSQC is an audited and validated, statewide population-based surgical registry including all major acute care hospitals in the state. We used descriptive statistics and chi-square tests to compare annual statewide utilization trends for extended VTE prophylaxis with low molecular weight heparin by operative year and by diagnosis among all patients without documented exclusions.

Results: Of 5722 eligible patients, 373 (6.5%) received extended VTE prophylaxis after discharge. Use of extended prophylaxis was similar between patients with cancer (282/1945, 14.5%) and IBD (31/242, 12.8%), but was significantly increased when compared with patients with other indications (60/3051, 1.97%, p<0.001). Overall use during the study period significantly increased among cancer patients from 8.2% in 2015 to 9.0% in 2016 to 18.6% in 2017-18 (p=0.001). Use among IBD patients also significantly increased from 0% to 6.6% to 17.1% (p=0.03). Use among patients with other diagnoses was rare and did not vary over the study period (1.5 to 2.4%, p=0.50). Annual trends are shown in Figure 1.

Conclusion: Use of extended VTE prophylaxis after discharge is increasing but remains uncommon, in spite of guidelines recommending its use after colorectal cancer surgery and expert consensus supporting its use in IBD. Improving dissemination of guidelines and recommendations may require quality implementation initiatives, accompanied by payment incentives, to increase adherence.

 

69.02 Statewide Utilization of Multimodal Analgesia and Length of Stay After Colectomy

A. C. De Roo1,2, J. V. Vu1,2, S. E. Regenbogen1,2,3  1University Of Michigan, Center For Healthcare Outcomes And Policy, Ann Arbor, MI, USA 2University Of Michigan, Department Of General Surgery, Ann Arbor, MI, USA 3University Of Michigan, Division Of Colorectal Surgery, Ann Arbor, MI, USA

Introduction:
Multimodal analgesia is a critical component of both enhanced recovery protocols (ERPs) and efforts to reduce opioid misuse after surgery. Postoperative multimodal pain therapy, using more than one class of pain medication (opioids, acetaminophen, non-steroidal anti-inflammatories [NSAIDs], gabapentinoids, and regional or epidural anesthesia), has been associated with lower pain scores, decreased opioid use, and avoidance of opioid inhibition of gut motility. Whether multimodal analgesia is widely used in practice remains unknown, and its effect on hospital length of stay has not been evaluated outside of controlled trials.

Methods:
Within the population-based, statewide Michigan Surgical Quality Collaborative (MSQC), we evaluated all adult patients undergoing elective colorectal resection between 2012 and 2015. Colectomy has been a targeted procedure for ERP implementation, and MSQC collects ERP-specific data elements for colectomy, including details of perioperative analgesia. The primary outcome was mean postoperative hospital length of stay (LOS). To reduce bias from rare, extremely prolonged hospitalizations, we winsorized LOS at 30 days, which excluded 27 patients. T-tests were used to evaluate associations between LOS and opioid-only versus multimodal therapy, defined as use of two or more classes of pain medication.
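
A minimal sketch of the LOS comparison in Python; the column names are hypothetical, and the 30-day cap mirrors the winsorization step described above.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("msqc_colectomy.csv")                # hypothetical extract
df = df[df["los_days"] <= 30]                         # 30-day cap (27 excluded)
df["multimodal"] = df["n_analgesic_classes"] >= 2     # two or more classes

multi = df.loc[df["multimodal"], "los_days"]
opioid_only = df.loc[~df["multimodal"], "los_days"]
t, p = stats.ttest_ind(multi, opioid_only)
print(f"{multi.mean():.1f} vs {opioid_only.mean():.1f} days, p={p:.4f}")
```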

Results:
Among the 7249 patients who underwent elective colectomy, 6746 received opioids (93.1%), and 2391 patients (33.0%) received no other analgesia besides opioids. Acetaminophen was used by 2701 (37.2%) patients, NSAIDs in 2551 (35.2%), and epidural, spinal, or regional anesthesia in 1400 (19.3%) patients. Average LOS for patients receiving multimodal analgesia (5.4 days, 95% CI 5.3-5.5) was significantly shorter than for patients receiving opioids alone (6.0 days, 95% CI 5.8-6.2; p<0.001).

Conclusion:
One third of patients undergoing colectomy in the state of Michigan received solely opioid analgesia. Ongoing improvement efforts will aim for near-universal use of opioid-sparing pain regimens in order to reduce opioid-related adverse effects and opioid exposure. Use of opioid-sparing multimodal analgesia, compared with opioids alone, is associated with a small reduction in hospital LOS, perhaps from improved pain control and lower rates of ileus, and could therefore accrue cost savings at a population level. Multimodal analgesia is also an essential component of efforts to combat opioid use disorders related to surgical encounters, and Michigan hospitals have room for improvement.
 

69.01 Achieving the High-Value Colectomy: Preventing Complications or Improving Efficiency

J. V. Vu1, J. Li3, D. S. Likosky2, E. C. Norton4,5, D. A. Campbell1, S. E. Regenbogen1  1University Of Michigan, Surgery, Ann Arbor, MI, USA 2University Of Michigan, Cardiac Surgery, Ann Arbor, MI, USA 3University Of Michigan, School Of Public Health, Ann Arbor, MI, USA 4University Of Michigan, Economics, Ann Arbor, MI, USA 5University Of Michigan, Health Management And Policy, Ann Arbor, MI, USA

Introduction: As payers increasingly tie reimbursement to value, there is increased focus on both outcomes and expenditures for surgical care. One way of measuring hospital value is by comparing episode payments to adverse outcomes. While postoperative complications increase spending and decrease value, it is unknown whether hospitals that achieve the highest value in major surgery also deliver efficient care beyond the prevention of complications. We aimed to identify the contributions of clinical quality and efficiency of perioperative care to high-value strategies for success in episode-based reimbursement for colectomy.

Methods: This was a retrospective observational cohort study of elective colectomy patients from 2012 to 2016 at 56 hospitals in the Michigan Surgical Quality Collaborative and Michigan Value Collaborative. Each hospital was assigned a value score (proportion of cases without adverse outcome divided by mean episode payment). Adverse outcomes included postoperative complications, reoperation, or death within 30 days of surgery. Risk-adjusted payments for the total 30-day episode and its components of care were compared across hospital value tertiles using ANOVA.
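
A sketch of the value-score construction in Python/pandas, assuming a merged case-level table with hypothetical column names.

```python
import pandas as pd

cases = pd.read_csv("matched_colectomy_episodes.csv")   # hypothetical merged registry

# Value score per hospital: share of cases free of adverse outcome
# divided by mean episode payment
by_hosp = cases.groupby("hospital_id").agg(
    pct_no_adverse=("adverse_outcome", lambda s: 1 - s.mean()),
    mean_payment=("episode_payment", "mean"),
)
by_hosp["value_score"] = by_hosp["pct_no_adverse"] / by_hosp["mean_payment"]
by_hosp["tertile"] = pd.qcut(by_hosp["value_score"], 3, labels=["low", "mid", "high"])
print(by_hosp.sort_values("value_score"))
```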

Results: We matched 2,947 patients enrolled in both registries, 646 (22%) of whom experienced adverse outcomes. The mean adjusted complication rate was 31% (±10.7%) at low-value hospitals and 14% (±4.6%) at high-value hospitals (p<0.001). Mean episode payments for all cases were $3,807 (17%) higher in low-value than high-value hospitals ($22,271 vs. $18,464, p<0.001). Among cases without adverse outcomes, payments were still $2,257 (11%) higher in low-value hospitals ($19,424 vs. $17,167, p=0.04).

Conclusion: In elective colectomy, high-value hospitals achieve lower episode payments than low-value hospitals for cases both with and without complications, indicating mechanisms for increasing value beyond reducing complications alone. High-value hospitals had two-fold lower complication rates, but also achieved 11% savings in uncomplicated cases. Worthwhile targets to optimize value in elective colectomy may include enhanced recovery protocols or other interventions that increase efficiency in all phases of care.

 

68.10 Is Amiodarone Prophylaxis Necessary After Non-Anatomic Lung Resection?

B. R. Beaulieu-Jones2, E. D. Porter1, K. A. Fay1, E. Diakow2, R. M. Hasson1, T. M. Millington1, D. J. Finley1, J. D. Phillips1  1Dartmouth-Hitchcock, Thoracic Surgery, Lebanon, NH, USA 2Geisel School of Medicine at Dartmouth, Hanover, NH, USA

Introduction: Post-operative atrial fibrillation (POAF) after non-cardiac thoracic surgery is a common complication that is associated with increased morbidity and hospital stay. Most reports review POAF incidence only in anatomic resections; however, the use of non-anatomic (wedge) resections is increasing. Since 2015, our institution has implemented an amiodarone protocol for patients ≥65 years of age undergoing anatomic resection, resulting in a POAF incidence of 9%. We sought to investigate the incidence of POAF in our non-anatomic resection cohort in comparison to our anatomic resection cohort to assess the need for amiodarone prophylaxis following non-anatomic resection.

Methods: This was a retrospective cohort study at a single tertiary referral center. All anatomic and wedge lung resections from January 2015 through April 2018 were selected. Anatomic resection patients ≥65 years of age, or others at the discretion of the attending surgeon, were eligible to receive our amiodarone protocol: an immediate post-operative IV bolus of 300 mg given over 1 hour, followed by 400 mg PO TID for 3 days. We excluded patients with chronic atrial fibrillation or a contraindication to amiodarone (hypotension or an electrical conduction abnormality). The primary outcome was the incidence of POAF within 30 days. We compared anatomic and wedge resection patients using two-sample, two-tailed Student’s t-tests and Pearson’s chi-squared tests for continuous and categorical data, respectively.

Results: A total of 355 patients underwent lung resection, with 85% (300) undergoing an anatomic resection and 15% (55) a wedge resection. On comparative analysis, patients undergoing wedge resection were significantly younger (58.1±17.2 vs. 65.2±9.7 years, p<0.001) and had a shorter duration of surgery (141.4±55.8 vs. 271.2±81.4 mins, p<0.001) than those undergoing anatomic resection. There were no differences with regard to sex, comorbidities, preoperative pulmonary function tests, or length of stay (Table 1). Among wedge resection patients, only 3 received the amiodarone protocol. No wedge resection patients developed POAF. Among anatomic resection patients, over 89% of patients ≥65 had received amiodarone, and POAF occurred in 9% (28). POAF significantly increased the post-operative length of stay (6.9±4.1 vs. 4.2±4.5 days, p=0.003).

Conclusion: POAF continues to be a challenging problem after non-cardiac thoracic surgery. Amiodarone prophylaxis can reduce the incidence of POAF to 9% among anatomic resections. However, our data indicate that POAF following non-anatomic (wedge) resection is rare and that chemoprophylaxis appears unnecessary in this population.

 

68.09 Accuracy of Multidetector Computed Tomography in Preoperative Aortic Valve Annulus Sizing

S. Banerjee1, A. Das1, H. Zimmerman1, R. Jennings1, R. Boova1  1Temple University Hospital, Department Of Cardiothoracic Surgery, Philadelphia, PA, USA

Introduction:

Surgical aortic valve replacement (SAVR) may be associated with unanticipated intraoperative aortic pathology that is not identified by routine pre-operative evaluation. Such findings may alter the conduct of SAVR. Pre-operative multidetector computed tomography (MDCT) was adopted to mitigate unexpected intraoperative aortic findings.

MDCT is integral to preoperative sizing for transcatheter aortic valve replacement (TAVR). As TAVR emerged as an alternative to SAVR, our institutional TAVR MDCT protocol was implemented in pre-operative SAVR assessment to avoid duplicate MDCT should findings reveal pathology more amenable to TAVR than SAVR.

The purpose of this study was to determine whether our institutional TAVR MDCT protocol accurately predicts aortic valve prosthesis size. The secondary objective was to determine whether there is a trend towards over- or under-sizing when MDCT is not consistent with implant size.

Methods:

Between July 2012 and July 2017, 102 patients who underwent surgical aortic valve replacement had preoperative aortic valve sizing by MDCT. The aortic annulus diameter calculated using MDCT was compared to intraoperative valve sizing during SAVR. An implanted valve size within 1 mm of the MDCT-calculated size was regarded as an accurate prediction; if the implanted valve was outside the 1 mm range, it was classified as either smaller or larger. This threshold was chosen because valves used in SAVR are manufactured in 2 mm increments. To evaluate whether MDCT accuracy was affected by aortic valve annulus size, we stratified the valve diameters based on MDCT measurements into categories: 17.8-19.9, 20-21.9, 22-23.9, 24-25.9, and >26 mm. Statistical analysis was performed using SPSS software, and a paired t-test was used to evaluate whether the results were statistically significant.
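
A sketch of the agreement analysis in Python, with placeholder arrays standing in for the 102 paired MDCT and implant measurements.

```python
import numpy as np
from scipy import stats

# Placeholder arrays standing in for the paired measurements (mm)
mdct = np.array([23.1, 21.4, 25.0, 19.8])
implant = np.array([23.0, 19.0, 27.0, 21.0])

diff = implant - mdct
accurate = np.abs(diff) <= 1.0        # within 1 mm counted as accurate
smaller = (diff < -1.0).sum()         # implant smaller than MDCT estimate
larger = (diff > 1.0).sum()           # implant larger than MDCT estimate
print(f"accurate: {accurate.mean():.1%}, smaller: {smaller}, larger: {larger}")

t, p = stats.ttest_rel(mdct, implant) # paired t-test on the differences
print(f"paired t-test: t={t:.2f}, p={p:.4f}")
```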

Results:

Forty-one (40.2%) of the 102 patients studied had MDCT aortic valve measurements within 1 mm of implant size. Implanted valves were smaller than the MDCT calculation in 40 patients (39.2%) and larger in 21 patients (20.6%). MDCT measurements remained inconsistent with intraoperative sizing regardless of aortic annulus diameter. The variance between MDCT annulus measurements and intraoperative sizing was statistically significant (p<0.0005) by paired t-test.

Conclusion:

Preoperative aortic annulus measurements by MDCT differed substantially from intraoperative sizing. Furthermore, there was no trend towards over- or under-sizing. These results may impact planning for patients undergoing SAVR if MDCT is utilized preoperatively. The implication of this information for preoperative TAVR planning is indeterminate and may warrant further investigation.
 

68.08 The Modifying Effect of Insurance on Racial Disparities in CABG Utilization for AMI Patients

A. G. Sassine1, J. Nosewicz1, F. Aboona1, T. Siblini1, J. M. Clements1  1Central Michigan University College Of Medicine, Mount Pleasant, MI, USA

Introduction:
Racial disparities in the utilization of coronary artery bypass graft surgery (CABG) have been documented in certain parts of the country. The influence of insurance status on these racial disparities has been inconsistently reported.  Apart from regional studies documenting these disparities, to our knowledge, no studies have examined disparities at the national level. Our objective was to assess racial disparities in CABG utilization using national discharge data from the 2012 National Inpatient Sample (NIS), Healthcare Cost and Utilization Project (HCUP), Agency for Healthcare Research and Quality. We hypothesize that minority populations are less likely than whites to receive CABG and that these racial disparities will persist when controlling for insurance.

Methods:
We identified 456,895 discharges with a diagnosis code for acute myocardial infarction (ICD-9-CM 410.x1 or 410.x3); CABG was identified by a procedure code (ICD-9-CM 36.10-36.16, 36.19) in any of the 15 procedure code fields. We ran logistic regression models to determine the influence of age, number of chronic conditions, gender, rural/urban patient location, race, and insurance status on undergoing CABG surgery.
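
A sketch of a logit model with a race-by-insurance interaction in Python/statsmodels; the NIS variable names here are hypothetical simplifications.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

nis = pd.read_csv("nis_2012_ami.csv")   # hypothetical AMI discharge extract

# Logistic model of CABG receipt; '*' expands to main effects plus interaction
model = smf.logit(
    "cabg ~ age + n_chronic + female + rural "
    "+ C(race, Treatment('White')) * C(payer, Treatment('Private/HMO'))",
    data=nis,
).fit()

# Odds ratios; the race-by-payer interaction terms carry the moderation effect
print(np.exp(model.params))
```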

Results:
Blacks (OR=0.794, 95% CI 0.742-0.849) were less likely than Whites to receive CABG when controlling for all demographic variables, including insurance status. This disparity persisted when including an interaction term between race and insurance, with Blacks on Medicaid being less likely to receive CABG than Whites with Private/HMO insurance (OR=0.777, 95% CI 0.690-0.874). Hispanics (OR=1.13, 95% CI 1.06-1.22) and Asian/Pacific Islanders (OR=1.55, 95% CI 1.40-1.70) were more likely to receive CABG compared to Whites. However, compared to Whites with Private/HMO insurance, Hispanics (OR=0.852, 95% CI 0.781-0.930) and Asians (OR=0.764, 95% CI 0.669-0.874) on Medicare were less likely to receive CABG, indicating that insurance status completely moderates the effect of race on CABG for these race/ethnic groups. Native Americans were as likely as Whites to receive CABG across all logit models.

Conclusion:
Disparities in CABG utilization for Black AMI patients were not explained by the interaction between race and insurance; however, insurance status appears to moderate the effects of race for Hispanics and Asian/Pacific Islanders. These findings suggest that policies be implemented to improve access to invasive revascularization procedures for African Americans. Future studies should evaluate why the positive effects of race for Hispanics and Asian/Pacific Islanders are negated by insurance status, specifically public insurance programs such as Medicare.