33.10 Medical Optimization Prior to Surgery Improves Outcomes but is Underutilized

I. L. Leeds1, J. K. Canner1, F. Gani1, P. M. Meyers1, E. R. Haut1, J. E. Efron1, F. M. Johnston1  1Johns Hopkins University School Of Medicine,Department Of Surgery,Baltimore, MD, USA

Introduction:  Preoperative comorbidities can have substantial effects on operative risk and outcomes. The modifiability of these risks remains poorly understood. The purpose of this study was to evaluate the impact of non-surgeon preoperative comorbidity optimization on short-term postoperative outcomes.

Methods: Patients with employer-sponsored commercial insurance undergoing a colectomy (ICD-9 codes: 17.3x, 45.7x, 45.8x, 48.5) were identified in the Truven Health MarketScan database (2010-2014). Patients were included if they could be matched to a preoperative surgical clinic visit within 90 days of an operative intervention by the same surgeon. The time interval between the surgical visit and the colectomy was defined as the “potential preoperative optimization period.” In this time interval, patients were defined as “optimized” if they were seen by an appropriate non-surgeon for at least one of their preexisting comorbidities (e.g., a primary care or endocrinology visit for a diabetic patient). Propensity score matching with 1:1 nearest-neighbor matching with replacement was performed prior to regression analysis to account for between-group covariate extremes. Bivariate and multivariable regression analyses were then performed.
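For readers unfamiliar with the matching step, a minimal sketch of 1:1 nearest-neighbor propensity matching with replacement follows. This is illustrative only: the propensity scores below are made-up numbers (in the study they would come from a logistic regression of the "optimized" indicator on baseline covariates), and none of the names reflect the authors' actual code.

```python
# Illustrative sketch of 1:1 nearest-neighbor propensity matching with
# replacement. Scores here are invented; in practice each is the predicted
# probability of treatment given baseline covariates.
treated_ps = {"T1": 0.72, "T2": 0.31, "T3": 0.55}   # "optimized" patients
control_ps = {"C1": 0.70, "C2": 0.33, "C3": 0.90}   # non-optimized patients

def match_with_replacement(treated, controls):
    """For each treated patient, pick the control with the closest propensity
    score; 'with replacement' means the same control may be matched twice."""
    return {t: min(controls, key=lambda c: abs(controls[c] - p))
            for t, p in treated.items()}

pairs = match_with_replacement(treated_ps, control_ps)
print(pairs)  # {'T1': 'C1', 'T2': 'C2', 'T3': 'C1'}
```

Because controls can be reused (here C1 matches both T1 and T3), the number of matched treated patients can exceed the number of unique controls, which is why the abstract reports 2,545 optimized patients matched to only 1,388 controls.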

Results: We identified 16,279 eligible colectomy episodes, of which 3,940 (24.2%) were in patients with at least one clinically significant comorbidity. Of patients with comorbidities, 64.8% were medically optimized prior to surgery. 2,545 medically optimized patients were matched to 1,388 non-optimized controls. Operative indications included neoplasm (50.5%) and diverticulitis (32.6%). The optimized subgroup was significantly older, more likely to be male, more comorbid at baseline by Charlson score, and more likely to reside in the northeastern United States.

 

Medically optimized patients had a lower risk of complications (29.9% vs. 33.7%, p=0.014), driven largely by fewer postoperative gastrointestinal, renal, hepatic, wound, and septic complications. Multivariable logistic regression controlling for patient demographics, operative indication, and Charlson Comorbidity Index demonstrated that patients optimized prior to surgery had 15% lower odds of a complication (OR 0.85, 95% CI 0.73-0.99, p=0.036) compared with non-optimized patients. The median increase in preoperative costs for optimized patients was $1,519 (p<0.001), while the median increase in total cost with a complication was $18,941 (p<0.001).

Conclusion: Many surgical patients do not receive focused preoperative care for their medical comorbidities. Patients who receive comorbidity-associated nonsurgical care prior to an operation have better short-term surgical outcomes. The individual costs of medical optimization are much less than the cost of a surgical complication. These findings support further prospective study of whether patients undergoing high-risk surgery can benefit from more intensive preoperative optimization.

33.09 Laparoscopic Cholecystectomy Is Safe Both Day and Night

E. S. Tseng1, J. Imran1, J. Byrd1, I. Nassour1, S. S. Luk1, M. Choti1, M. Cripps1  1University Of Texas Southwestern Medical Center,Dallas, TX, USA

Introduction: The acute care surgical model has increased the ability to perform non-elective laparoscopic cholecystectomies (LC) during both day and night hours. Although nighttime operating has the potential to reduce hospital length of stay (LOS) and improve operating room utilization, performing LC at night has been reported to increase rates of complications and conversion to open surgery. We hypothesized that it is safe to perform LC at night in appropriately selected patients.

Methods: We performed a retrospective review of over 5200 non-elective LC in adults performed at a large urban tertiary referral hospital between April 2007 and February 2015. We dichotomized cases into day (case started between 7am-6:59pm) or night (case started between 7pm-6:59am). Univariate analysis was performed using Mann-Whitney U, chi-squared, and Fisher's exact tests.

Results: A total of 5206 patients underwent LC, with 4628 during the day and 576 at night. There was no difference in age; body mass index (BMI); ASA class; race; insurance type; pregnancy rate; history of hypertension, diabetes, or renal failure; or white blood cell count. However, patients who underwent LC during the day were more likely to have presented with obstructive biliary complications of cholelithiasis, as evidenced by higher median total bilirubin (0.6 [0.4, 1.3] vs. 0.5 [0.3, 1.0] mg/dL, p = 0.002) and lipase (33 [24, 56] vs. 30 [22, 42] U/L, p < 0.001). Operatively, there was no difference in case length, estimated blood loss, rate of conversion to open, biliary complications, LOS after operation, unanticipated return to the hospital within 60 days, or 60-day mortality. There were significant differences in median LOS before surgery (1 [1, 2] vs. 1 [0, 2] days, p < 0.001) and median total LOS (3 [2, 4] vs. 2 [1, 3] days, p < 0.001), with day patients spending more time in the hospital than night patients. Logistic regression examining the effects of ASA class, total bilirubin, lipase, BMI, and day vs. night status on the likelihood of biliary complications showed that none of these factors was statistically significant.

Conclusion: In this center with an acute care surgery service, it is safe to perform LC during the day or at night. The comparable complication rates and shorter LOS justify performing LC at any hour.

 

33.08 ED Visits After Joint Arthroplasty: Appropriate Outpatient Care Decreases Utilization

M. A. Chaudhary1, L. M. Pak1, D. Sturgeon1, T. P. Koehlmoos2, A. H. Haider1, A. J. Schoenfeld1  1Brigham And Women’s Hospital,Center For Surgery And Public Health,Boston, MA, USA 2Uniformed Services University Of The Health Sciences,Bethesda, MD, USA

Introduction:
Emergency department (ED) visits after elective surgical procedures are not only a significant quality of care indicator but also a potential target for interventions to reduce healthcare costs.  With the volume of hip and knee arthroplasties soaring over 1 million annually, investigation of patterns of ED utilization in patients undergoing these procedures becomes critical. The objective of this study was to evaluate the patterns and predictors of 30- and 90-day ED utilization in a national sample of total hip arthroplasty (THR) and total knee arthroplasty (TKR) patients.

Methods:
The military health insurance database, TRICARE (2006-2014), was queried for patients aged 18-64 years who underwent THR or TKR. Patient demographics, clinical characteristics, and environment-of-care information were abstracted. Sponsor rank was used as a proxy for socioeconomic status. The outcome of interest was ED utilization. Multivariable logistic regression models were used to identify predictors of 30- and 90-day ED utilization.

Results:
Among the 44,557 patients included in the analysis, 14,187 (31.8%) underwent THR and 30,370 (68.2%) underwent TKR. Forty-nine percent and 70% of patients received orthopedic outpatient care within 30 and 90 days of discharge, respectively. The proportions of patients who presented to the ED within 30 and 90 days were 24% and 35%, respectively. The most common primary ICD-9 diagnoses associated with post-discharge ED visits were “care involving other physical therapy” (V57.1) (17.6%), “pain in joint” (719.46) (6.3%), “aftercare of joint replacement” (V54.81) (5.1%), and “encounter for therapeutic drug monitoring” (V58.83) (4.8%). In the risk-adjusted analysis, lower socioeconomic status, longer LOS, comorbid conditions, and complications were associated with higher odds of ED utilization, while orthopedic outpatient care was associated with lower odds of ED utilization (Table).

Conclusion:
More than one-third of patients presented to the ED within 90 days of THR or TKR. Lower socioeconomic status, longer LOS, and the presence of comorbid conditions and complications were associated with increased ED visits, whereas orthopedic outpatient visits were associated with decreased ED visits. We conclude that appropriate outpatient care may reduce ED utilization after THR and TKR.
 

33.07 Characterizing Surgeon Prescribing Practices and Opioid Use after Outpatient General Surgery

J. R. Imbus1, J. L. Philip1, J. S. Danobeitia1, D. F. Schneider1, D. Melnick1  1University Of Wisconsin,Surgery,Madison, WI, USA

Introduction: Surgeons typically prescribe opioids for patients undergoing outpatient general surgery operations, yet opioid prescribing practices are not standardized. Excess opioid supply in the community leads to abuse and diversion. Identifying patient and operative characteristics associated with postoperative opioid use could reduce overprescribing, and optimize prescribed quantity to patient need. Our aim was to characterize prescribing practices and opioid use after common outpatient general surgery operations, and to investigate predictors of opioid amount used.

Methods: We developed a postoperative pain questionnaire for adult patients undergoing outpatient inguinal hernia repair (IHR), laparoscopic cholecystectomy (LC), breast lumpectomy +/- sentinel lymph node biopsy, and umbilical hernia repair (UHR) at our institution. This facilitated a retrospective review of patients undergoing operations from January to May 2017, excluding those with postoperative complications. We collected opioid prescription data, operative details, and patient characteristics. All opioids were standardized to morphine milligram equivalents (MME) and reported as a corresponding number of 5mg hydrocodone pills for interpretability. Multivariable linear regression was used to investigate factors associated with opioid use.
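The standardization step described above is simple arithmetic; the sketch below shows one plausible implementation. The conversion factors are the CDC's published oral morphine milligram equivalent (MME) factors, assumed here rather than taken from the paper, and the function names are illustrative.

```python
# Hypothetical MME standardization sketch. Conversion factors are the CDC's
# published oral MME factors (assumed); a 5 mg hydrocodone pill = 5 MME.
MME_PER_MG = {"hydrocodone": 1.0, "oxycodone": 1.5, "tramadol": 0.1,
              "codeine": 0.15, "morphine": 1.0, "hydromorphone": 4.0}

def to_mme(drug, mg_per_pill, n_pills):
    """Total morphine milligram equivalents for a prescription."""
    return MME_PER_MG[drug] * mg_per_pill * n_pills

def as_hydrocodone_5mg_pills(mme):
    """Express an MME total as equivalent 5 mg hydrocodone pills."""
    return mme / (MME_PER_MG["hydrocodone"] * 5)

rx = to_mme("oxycodone", 5, 20)           # 20 pills of 5 mg oxycodone
print(rx, as_hydrocodone_5mg_pills(rx))   # 150.0 30.0
```

Reporting every prescription in "5 mg hydrocodone pill" units, as the authors do, lets quantities of six different opioids be compared on one scale.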

Results: The 374 eligible cases included 114 (30.6%) unilateral and 59 (15.8%) bilateral IHRs, 90 (24%) LCs, 17 (4.6%) lumpectomies, 33 (8.9%) lumpectomies with sentinel node biopsy, and 60 (16.1%) UHRs. Forty-eight providers prescribed six different opioids. There was variation in prescribed quantity for all procedures, ranging from zero to 80 pills. Median numbers of pills prescribed vs taken were 20 vs 5.5 for unilateral IHR, 20 vs 4 for bilateral IHR, 20 vs 10 for LC, 10 vs 1 for lumpectomy, 20 vs 2 for lumpectomy with sentinel node biopsy, and 20 vs 5 for UHR. Most patients (86%) were over-prescribed. Nearly all (95%) patients took 30 or fewer pills. Twenty-four percent of patients took zero pills.

Univariate analysis showed operation type (p<0.001), age (p<0.001), body mass index (p<0.01), chronic pain history (p<0.01), and preoperative opioid use (p<0.01) to be associated with MME amount taken. On multivariable analysis, there was a significant relationship between opioid use and age (p<0.001), with 16-34% less MME taken for every ten-year increase in age. Patients who underwent LC took more than twice as much opioid as patients undergoing UHR (p<0.05). Opioid amount taken was independently associated with opioid amount prescribed (p<0.001), with patients taking 24% more MME for every additional ten pills prescribed.

Conclusion: Marked variation exists in opioid type and amount prescribed, and most patients receive more opioids than they consume. Higher prescription amounts contribute to more opioid use, and certain patient subsets may be more (LC) or less (elderly) likely to use opioids postoperatively.

33.05 Underuse of Post-Discharge Venous Thromboembolism Prophylaxis After Abdominal Surgery for Cancer

J. W. McCullough1, J. Schumacher1, D. Yang1, S. Fernandes-Taylor1, E. Lawson1  1University Of Wisconsin,Madison, WI, USA

Introduction:
The efficacy and safety of post-discharge venous thromboembolism (VTE) prophylaxis for patients undergoing major abdominal surgery for cancer have been demonstrated in numerous studies, and its use has been recommended by multiple national organizations over the past decade. Our objective was to identify factors associated with post-discharge VTE prophylaxis after major abdominal surgery for cancer and to quantify the associated costs to patients and insurers.

Methods:
Adult patients undergoing a major abdominal surgical procedure (colectomy, proctectomy, pancreatectomy, hepatectomy, gastrectomy, or esophagectomy) for cancer in 2012-2015 were identified in the Marketscan® databases, which include comprehensive claims for a nationwide cohort of patients. Patients on anticoagulation preoperatively or with a VTE diagnosis prior to discharge were excluded. Use of post-discharge VTE prophylaxis and associated costs for the 28 days following surgery were assessed. Multivariable logistic regression, including demographics, comorbidities and surgical factors, assessed predictors of receipt of post-discharge VTE prophylaxis.

Results:
Of 23,509 patients undergoing major abdominal surgery for cancer, 5.6% received post-discharge VTE prophylaxis. The median cost to payers was $378 (interquartile range $212-$579), while median patient out-of-pocket costs were $10 (interquartile range $5-$32). Receipt of post-discharge VTE prophylaxis by procedure and associated costs are displayed in the table. Compared to colectomy, patients undergoing proctectomy and pancreatectomy had significantly higher risk-adjusted odds of receiving post-discharge VTE prophylaxis (OR 1.7, p=0.01 and OR 2.1, p<0.01, respectively). Patients undergoing open procedures (OR 1.4, p<0.01) had higher odds of receiving prophylaxis, as did patients with obesity (OR 1.3, p<0.01), congestive heart failure (OR 1.5, p<0.01), or metastatic disease (OR 1.5, p<0.01). In contrast, patients with anemia were significantly less likely to receive prophylaxis (OR 0.85, p=0.02). There were no significant differences in rates of post-discharge VTE prophylaxis between insurance plan types. However, significant variation was observed by region, with patients in the south and west regions less likely to receive post-discharge VTE prophylaxis.

Conclusion:
The vast majority of patients undergoing major abdominal surgery for cancer do not receive post-discharge VTE prophylaxis. This is despite a decade of strong recommendations for post-discharge VTE prophylaxis from national organizations. These findings suggest that substantial efforts are needed in order to change clinical practice and increase prescribing of post-discharge VTE prophylaxis for patients undergoing major abdominal surgery for cancer.
 

33.06 Surgeon Annual and Cumulative Volume Variably Predict Outcomes of Complex Hepatobiliary Procedures

M. M. Symer1, L. Gade3, A. Sedrakyan2, H. Yeo1,2  1Weill Cornell Medical College,Surgery,New York, NY, USA 2Weill Cornell Medical College,Healthcare Policy,New York, NY, USA 3NewYork-Presbyterian / Queens,Surgery,New York, NY, USA

Introduction: There is a strong volume-outcome relationship in pancreatectomy, but whether the same relationship exists for other complex hepatopancreatobiliary (HPB) procedures is not known. The role of surgeon experience is clearly important, but whether it should be defined by cumulative volume or a more contemporaneous measure like annual volume is unclear. We compared the outcomes of surgeons across the spectrum of experience to better define the volume-outcome relationship in complex HPB surgery. 

Methods: We identified all patients undergoing major elective HPB operations in New York State from 2000 to 2014 using the Statewide Planning and Research Cooperative Database. Major resections such as liver lobectomy, proximal pancreatectomy, as well as bile duct resection and complex repair were included, while wedge resections, distal pancreatectomy, and percutaneous or endoscopic procedures were excluded. In-hospital mortality and perioperative outcomes were compared across four categories of surgeons based on high or low annual and high or low cumulative operative volume. Median volume was used as the cut-point for high vs. low categories.
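The four-way surgeon grouping described above can be sketched in a few lines. The data and variable names below are invented for illustration; the only element taken from the study is the design choice of splitting at the median of each volume measure.

```python
import statistics

# Hypothetical sketch of the four-way volume grouping: surgeons are split at
# the median cumulative volume and the median annual volume (made-up data).
surgeons = {  # surgeon_id: (cumulative_volume, annual_volume)
    "A": (150, 30), "B": (40, 5), "C": (95, 25), "D": (60, 18), "E": (200, 40),
}
cum_med = statistics.median(v[0] for v in surgeons.values())
ann_med = statistics.median(v[1] for v in surgeons.values())

def group(cum, ann):
    """Label a surgeon high/low cumulative x high/low annual volume.
    Surgeons exactly at the median fall in the 'low' group here; the
    abstract does not say how ties were handled."""
    return ("HC" if cum > cum_med else "LC") + ("HA" if ann > ann_med else "LA")

labels = {s: group(c, a) for s, (c, a) in surgeons.items()}
print(cum_med, ann_med, labels)
```

Crossing the two binary splits yields the four categories compared in the abstract (LCLA, LCHA, HCLA, HCHA).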

Results: 13,236 operations performed by 893 surgeons were included in the study. Median cumulative volume was 89 operations, and median annual volume was 21 operations. Similar numbers of procedures were performed by low cumulative/low annual (LCLA) volume surgeons and high cumulative/high annual (HCHA) volume surgeons (6106 vs. 6176 operations). HCHA surgeons treated slightly older patients than LCLA surgeons (63.0y vs. 61.1y, p<0.01). HCHA surgeons also treated fewer Medicaid (5.6% vs. 10.0%, p<0.01) or Black patients (5.2% vs. 10.2%, p<0.01). HCHA surgeons performed many more minimally invasive procedures (15.2% of HCHA operations vs. 5.7% of LCLA operations, p<0.01). Mortality was lowest for HCHA and highest for LCLA surgeons (1.6% vs. 3.7%, p<0.01). Adjusted odds of in-hospital mortality were lower only for patients undergoing surgery by HCHA volume surgeons (OR 0.47, 95% CI 0.32-0.67), but not HCLA volume surgeons (OR 0.58, 95% CI 0.28-1.20) or LCHA volume surgeons (OR 0.82, 95% CI 0.44-1.53). 30-day major events (e.g. stroke, shock), reoperation, and readmission were not affected by cumulative or annual experience.

Conclusion: In this large New York State-based study of complex elective HPB operations, only surgeons with both high cumulative and high annual volume had improved in-hospital mortality. In isolation, neither high cumulative volume nor high annual volume was associated with improved outcomes. Racial and socioeconomic disparities in access to high-volume care persist. Interventions to regionalize complex surgical care should account for these distinctions.

 

33.03 Sarcopenia Predicts Mortality Following Above Knee Amputation for Critical Limb Ischemia

D. Strosberg1, T. Yoo1, K. Lecurgo1, M. J. Haurani1  1Ohio State University,Division Of Vascular Surgery / Department Of Surgery,Columbus, OH, USA

Introduction:
Sarcopenia, the loss of skeletal muscle mass, has been shown to be an independent predictor of performance status and mortality in the cancer and trauma literature. Others have applied frailty scores and similar measures to predict outcomes after surgical procedures, but these require information that is not always readily available in electronic health records. Total psoas muscle area (TPA) normalized for body surface area (TPA/m^2) can be quickly assessed with most modern image-viewing software available to surgeons. There are also no accepted criteria for what constitutes sarcopenia in patients with critical limb ischemia. The objectives of this study were to evaluate the feasibility of calculating TPA/m^2 and to determine whether lower TPA/m^2 predicted mortality in patients undergoing above-knee amputation (AKA) for critical limb ischemia.

Methods:
We evaluated patients who underwent AKA between July 2013 and July 2016 at a single institution. Patients with abdominal/pelvic computed tomography (CT) scans within 3 months of their amputation were included. Total psoas muscle area (TPA) was manually measured at L3 and then normalized for body surface area (TPA/m^2), calculated using height and weight from the anesthesiology records at the time of surgery. We defined sarcopenia as a TPA/m^2 in the lowest quartile of our cohort. Univariate analysis was used to assess differences in mortality between patients undergoing AKA for critical limb ischemia.
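The normalization and quartile cutoff described above amount to two small calculations, sketched below. The abstract does not state which body surface area formula was used; the Du Bois formula is assumed here, and the cohort values are invented for illustration.

```python
import statistics

def bsa_du_bois(height_cm, weight_kg):
    """Body surface area in m^2, Du Bois formula (an assumed choice; the
    abstract does not name the BSA formula actually used)."""
    return 0.007184 * (height_cm ** 0.725) * (weight_kg ** 0.425)

def tpa_index(tpa_mm2, height_cm, weight_kg):
    """Psoas area normalized to BSA: TPA/m^2, in mm^2/m^2."""
    return tpa_mm2 / bsa_du_bois(height_cm, weight_kg)

# Sarcopenia defined as the lowest quartile of the cohort's TPA/m^2
cohort = [620.0, 810.5, 1156.3, 1480.2, 900.0, 1320.8, 760.4, 2050.9]
q1 = statistics.quantiles(cohort, n=4)[0]      # first-quartile cut-point
sarcopenic = [v for v in cohort if v <= q1]
print(q1, sarcopenic)
```

Because the cutoff is cohort-relative, the sarcopenia threshold (here the first quartile of TPA/m^2) differs between study populations, which is part of the authors' point about the lack of accepted criteria.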

Results:
97 patients underwent AKA, of whom 48 had a CT scan that met inclusion criteria. Total mortality was 44% (21 patients), with a median survival of 90 days (range 1-648 days). 35 patients (70%) were cleared for prosthetic use; however, only 5 patients (10%) were noted to be using a prosthesis on follow-up, and 13 patients (26%) were ambulatory with or without a prosthetic at their last clinic visit. 4 patients (8%) required revision of their residual limb. Mean TPA/m^2 was 1156.3 mm2/m2 (range 372.7-2572.5 mm2/m2). When comparing the demographics of the amputees in the lowest quartile of TPA/m^2, there were no differences noted in age (63 vs 59 y.o., p=0.1) or discharge status (21% vs 33% discharged home, p=0.5). The mortality rate of patients in the lowest TPA/m^2 quartile (372.7-781.1 mm2/m2) was significantly higher at 62% (8 patients), compared to 35% (13 patients) (p=0.04).

Conclusion:
CT imaging was available in this subset of patients undergoing AKA, making TPA/m^2 measurement possible. Patients with low TPA/m^2 had a significantly higher mortality rate following AKA for critical limb ischemia, despite no differences in age or discharge status. Psoas muscle mass may serve as a predictive indicator of mortality risk, and patients should be counseled accordingly prior to AKA.

33.04 Resection for Anal Melanoma: Is There an Optimal Approach?

A. T. Hawkins1, T. Geiger1, R. Muldoon1, B. Hopkins1, M. Ford1  1Vanderbilt University Medical Center,Colon & Rectal Surgery,Nashville, TN, USA

Introduction:
Anal melanoma is a lethal disease, but its rarity makes understanding its behavior and the effects of intervention challenging. Local resection (LR) and abdominoperineal resection (APR) are the proposed treatments for non-metastatic disease, and each has gone in and out of favor over the years. We hypothesized that there would be no difference in overall survival between the two types of resection.

Methods:
The National Cancer Database (NCDB 2004-2014) was queried for adults with a diagnosis of anal melanoma who underwent curative resection. Patients with metastatic disease were excluded.  Patients were divided into two groups – those who underwent local resection (LR) and those who underwent abdominal perineal resection (APR).  Bivariate and multivariable analyses were used to examine the association between resection type and R0 resection rate, short term survival and overall survival.  

Results:
570 patients with anal melanoma who underwent resection were identified over the study period. The median age was 68 years and 59% of patients were female. 383 (67%) underwent LR. The rate of LR did not change significantly by year. Factors associated with the use of LR included older age, government insurance, and treatment at a high-volume center. LR was associated with a lower R0 resection rate (LR 73% vs. APR 91%; p<0.001). Overall five-year survival for the entire cohort was 20%. There was no significant difference in five-year overall survival (LR 17% vs. APR 21%; p=0.31) (see Figure). Even when adjusting for confounding variables including age, gender, comorbidity, and R0 resection in a multivariable Cox proportional hazards model, there was no significant survival difference between resection methods (HR 0.84; 95% CI 0.66-1.06; p=0.15). In addition, there was no improvement in overall survival for patients who underwent R0 resection (HR 1.18; 95% CI 0.90-1.56; p=0.22).

Conclusion:
Anal melanoma has an abysmal prognosis, with only 1 in 5 patients alive at five years. Older age, government insurance, and treatment at a high-volume center were associated with local resection. Although local resection was associated with a lower R0 resection rate, there was no discernible difference in overall survival in either unadjusted or adjusted analyses. Given the known morbidity of APR, local resection should be considered in cases of anal melanoma.

Figure- Kaplan-Meier Curve for Overall Survival Comparing Method of Resection

 

33.01 Use and Outcomes of Abdominal Evacuation in Ruptured Endovascular Aortic Aneurysm Repairs

M. Pherson2, J. Richman2, A. Beck1, E. Spangler1  1University Of Alabama At Birmingham,Department Of Surgery, Division Of Vascular Surgery And Endovascular Therapy,Birmingham, AL, USA 2University Of Alabama At Birmingham,Department Of Surgery,Birmingham, AL, USA

Introduction: Endovascular aneurysm repair (EVAR) for ruptured abdominal aortic aneurysm (AAA) has evolved over the past 10 years to become a feasible treatment modality with potentially decreased morbidity and mortality for patients with appropriate anatomy.  On an institutional basis, similar rates of abdominal compartment syndrome but overall decreased mortality have been reported following the creation of EVAR protocols for treatment of ruptured AAAs.  We sought to examine EVAR use over time and the outcomes of abdominal evacuation in EVAR for rupture in clinical practice, as assessed by a national vascular quality database.

Methods: Registry data on open AAA and EVAR repairs from 2003-2016 in the Vascular Quality Initiative (VQI) were obtained (a total of 40,450 procedures).  Our cohort was then restricted to the 3,424 cases where rupture was the indication for repair.  This cohort was analyzed for change in use of modality of repair over time, variation in repair use by region (clustered into North, South, East and West), and survival outcomes by modality of repair.  Comparisons of demographics were performed via ANOVA and chi squared analyses as indicated, and time to event analyses included Kaplan Meier curves and log rank testing.

Results: In total from 2003-2016, 3424 rupture repairs were performed within the VQI: 1605 open repairs and 1819 EVAR repairs.  Of the EVAR repairs, 1597 were performed without abdominal evacuation, while 222 required abdominal evacuation.  Trends in modality of repair over time showed a distinct rise in utilization of any form of EVAR repair from none in 2003 to above 60% of repairs by 2016.  No significant variation in use of EVAR by geographic region (north, south, east or west) was seen.  As seen in Figure 1, EVAR repairs not requiring abdominal evacuation had the greatest survival, while EVAR repairs requiring abdominal evacuation had a lower survival than open AAA repair. 
Factors that differed significantly in the group requiring abdominal evacuation included age (p=.01) and intraoperative packed red blood cell transfusion (p=.02).  However, the percentage of patients considered unfit for open repair did not differ significantly between patients receiving or not receiving abdominal exploration (28.5% vs 26.2%, p=.48).

Conclusion: EVAR use has increased over time; however, the proportion of repairs requiring abdominal evacuation has remained relatively stable.  Patients requiring abdominal evacuation after EVAR fared worse than those undergoing EVAR without abdominal evacuation or open AAA repair, likely because the need for evacuation is a surrogate for abdominal compartment syndrome.

 

33.02 Safety and Efficacy of Brain Injury Guidelines at a Level III Trauma Center

G. E. Martin1, C. Carroll2, Z. Plummer2, D. Millar1, T. Pritts1, A. T. Makley1, B. Joseph3, L. B. Ngwenya2, M. D. Goodman1  1University Of Cincinnati,Surgery,Cincinnati, OH, USA 2University Of Cincinnati,Neurosurgery,Cincinnati, OH, USA 3University Of Arizona,Surgery,Tucson, AZ, USA

Introduction: Patients with mild-to-moderate traumatic brain injury (TBI) are increasingly managed primarily by trauma/acute care surgeons. The Brain Injury Guidelines (BIG) were developed at an ACS-accredited level I trauma center to triage mild-to-moderate TBI patients and facilitate identification of patients warranting neurosurgical consultation. The BIG have not been validated at a level III trauma center. We hypothesized that the BIG criteria can be safely adapted to an ACS-accredited level III trauma center to guide transfers to a higher echelon of care.

Methods: We reviewed the trauma registry at a level III trauma center to identify TBI patients who presented with a head Abbreviated Injury Scale score >0. Demographic data, injury details, and clinical outcomes were abstracted, with primary outcome measures of worsening on repeat head CT, neurosurgical intervention, transfer to a level I trauma center, and in-hospital mortality.  Patients were classified using the BIG criteria. After validating the BIG in our cohort, we reclassified patients using updated BIG criteria that incorporated mechanism of injury and anticoagulant or antiplatelet use into BIG-2 or BIG-3 and replaced the “neurologic exam” component with stratification by admission Glasgow Coma Scale (GCS) score.

Results: From July 2013 to June 2016, 332 TBI patients were identified: 114 BIG-1, 26 BIG-2, and 192 BIG-3. Patients requiring neurosurgical intervention (n=30) or who died (n=29) were BIG-3 with one exception. Patients with GCS <12 had worse outcomes than those with GCS ≥12, regardless of BIG classification. Anticoagulant or antiplatelet use was not associated with worsened outcomes in patients not meeting other BIG-3 criteria. The updated BIG resulted in more patients in BIG-1 (n=109) and BIG-2 (n=100) without negatively impacting outcomes.

Conclusion: The BIG can be applied in the level III trauma center setting. Updated BIG criteria can aid triage of mild-to-moderate TBI patients to a level I trauma center and may reduce secondary overtriage.

 

32.11 Risk Factors of Residual Disease After Partial Mastectomy: A Single Institution Experience

L. M. DeStefano1, L. Coffua2, E. Wilson3, J. Tchou4, L. N. Shulman5, M. Feldman6, A. Brooks7, D. Sataloff7, C. S. Fisher8  1Mercy Catholic Medical Center,Department Of Surgery,Darby, PA, USA 2Philadelphia College Of Osteopathic Medicine,Philadelphia, PA, USA 3Perelman School Of Medicine,Philadelphia, PA, USA 4Hospital Of The University Of Pennsylvania,Department Of Surgery, Division Of Endocrine And Oncologic Surgery,Philadelphia, PA, USA 5Hospital Of The University Of Pennsylvania,Department Of Medicine, Division Of Hematology And Oncology,Philadelphia, PA, USA 6Hospital Of The University Of Pennsylvania,Department Of Pathology And Laboratory Medicine, Division Of Surgical Pathology,Philadelphia, PA, USA 7Pennsylvania Hospital,Department Of Surgery, Division Of Endocrine And Oncologic Surgery,Philadelphia, PA, USA 8Indiana University School Of Medicine,Department of Surgery, Division Of Endocrine And Oncologic Surgery,Indianapolis, IN, USA

Introduction:
For women with invasive breast cancer (IBC), the incidence of a close or positive margin after partial mastectomy (PM) ranges widely in the literature from 20-70%. The additional surgery required for margins leads to a delay in adjuvant treatment and an increased emotional, financial and cosmetic burden for the patient.  Criteria for re-excision are traditionally based on the proximity of the margin.  We hypothesize that based on a more comprehensive review of the initial pathology, there are additional factors that can better predict the likelihood of finding residual disease. 

Methods:
After IRB approval, we retrospectively identified patients diagnosed with Stage I-III IBC who underwent PM and re-excision at our institution from July 2010 to June 2015. We excluded patients who had undergone neoadjuvant chemotherapy, had multicentric disease or concurrent contralateral cancer, whose current cancer was a recurrence, or whose initial surgery was an excisional biopsy. Bivariate analyses were conducted using two-sample t-tests for continuous variables and Fisher's exact tests for categorical variables. A multivariate logistic regression was then performed on variables significant in bivariate analyses. Statistical significance was accepted at p<0.05.

Results:
We identified 425 patients who underwent PM and re-excision.  Of these patients, 241 (56.7%) were excluded. The remaining 184 patients were included in our analysis and divided into two groups: those with residual disease on re-excision (87, 47.3%) and those without residual disease (97, 52.7%). Patients with residual disease were more likely to have higher T and N stages (p=0.02 and p=0.03, respectively), to have undergone PM with shave margins (p=0.002), and to have only DCIS at their margins (p=0.02).  Of the patients who had residual disease, pure DCIS was found in 14 (16%), invasive disease in 4 (4.6%), and both DCIS and invasive disease in the remaining 69 (79%). The number of positive margins at initial surgery varied significantly between the two groups, with fewer positive margins in patients with no residual disease (p<0.001).  In a multivariate logistic regression, use of shave margins (p=0.004), number of positive margins (p=0.011), and type of disease present at the margin (p=0.026) remained predictive of residual disease at re-excision.

Conclusion:
Our study adds to a growing body of literature on the evaluation of margins after PM. Our data can assist in making decisions regarding the absolute need for additional surgery. While our data cannot identify patients without residual disease with sufficient accuracy, they can help highlight which patients are likely to have residual disease. Future research should focus on the clinical significance of residual burden of disease.
 

32.09 HCAHPS Scores are Influenced by Social Determinants of Health

S. F. Markowiak1, S. M. Pannell1, M. J. Adair1, C. Das1, W. Qu1, F. C. Brunicardi1, M. M. Nazzal1  1University Of Toledo,Department Of Surgery, College Of Medicine And Life Sciences,Toledo, OHIO, USA

Introduction:

The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey was created 15 years ago to compare hospitals by measuring and publicly reporting patients’ perspectives of their care. The survey instrument has been found to be valid and reliable for this purpose. Since the passage of the Affordable Care Act, however, the HCAHPS survey has tied hospital reimbursement to patient satisfaction, a purpose for which the survey was not designed. The purpose of this study was to determine whether HCAHPS scores are influenced by social determinants of health (SDOH).

Methods:

Data were gathered from Centers for Medicare and Medicaid Services (CMS) and US Census Bureau archives.  We created a database pairing individual hospital HCAHPS data to corresponding census measures at the county level.  FY2013 was excluded because its HCAHPS performance period did not match with a single census period.  Multivariate analysis and Pearson’s Correlation Coefficient (Pearson r) were used to test 54 SDOH against HCAHPS score.
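The Pearson r statistic used throughout the analysis reduces to a short formula. The sketch below is a generic implementation for reference; it uses no study data, and any inputs fed to it in examples are invented:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)  # in [-1, 1]; the sign gives the direction of association
```

Perfectly linear inputs return ±1; for instance, `pearson_r([1, 2, 3], [6, 4, 2])` is -1.0.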

Results:

1,150 hospitals in 136 counties were analyzed. Of the 54 SDOH analyzed, 27 had a statistically significant negative correlation with HCAHPS score and 19 had a statistically significant positive correlation. Hospitals in communities with higher proportions of government-insured patients had statistically lower HCAHPS scores (Pearson r -0.158, p < 0.001) compared to communities with higher rates of private coverage. Hospitals in communities with larger percentages of unemployment had statistically lower HCAHPS scores compared to those with a stronger workforce (Pearson r -0.153, p < 0.001). Hospitals in communities with predominantly Caucasian ethnicity had statistically higher HCAHPS scores than those in ethnically diverse ones (Pearson r 0.153, p < 0.001). For economic SDOH, hospitals located in communities where rent costs exceed 30% of income had lower scores (Pearson r -0.207, p < 0.001). Hospitals located in communities with higher rates of poverty had statistically lower scores (Pearson r -0.045, p = 0.003). Hospitals in communities with higher rates of homeownership had higher HCAHPS scores (Pearson r 0.091, p < 0.001).

Conclusion:

Of all SDOH analyzed, 85% had a statistically significant correlation with HCAHPS scores. Of these significant correlations, 59% were negative, indicating lower HCAHPS scores for hospitals located in communities with higher rates of poverty, less educational achievement, and more ethnic diversity; 41% were positive, indicating that hospitals in wealthier communities with high rates of homeownership, private insurance, and Caucasian ethnicity had better HCAHPS scores. HCAHPS scores are directly tied to a hospital’s CMS reimbursement by federal law. Because of disparities in SDOH, the HCAHPS survey will shift CMS reimbursement from hospitals in poor and diverse communities to hospitals in wealthier ones.
 

32.10 Routine pre-thyroidectomy laryngoscopy is not necessary in the era of intraoperative neuromonitoring

S. Goare1, E. Forrest1, J. Serpell1,2, S. Grodski1,2, J. C. Lee1,2  1The Alfred Hospital,Monash University Endocrine Surgery Unit,Melbourne, VICTORIA, Australia 2Monash University,Department Of Surgery,Melbourne, VICTORIA, Australia

Introduction:  Routine pre-operative vocal cord (VC) assessment with laryngoscopy in patients undergoing thyroid surgery allows clear documentation of baseline VC function, aids surgical planning in patients with a preoperative palsy, and facilitates the interpretation of intraoperative neuromonitoring (IONM) findings. This has been the practice in our institution for the last 20 years. In this study, we aimed to determine the rate of pre-operative vocal cord palsy (VCP) in our patient cohort; to evaluate the associated risk factors for preoperative VCP; and, therefore, to build a case for a selective approach to pre-operative laryngoscopic VC assessment.

Methods:  This retrospective review recruited patients from the Monash University Endocrine Surgery Unit database from 2000 to 2016. Patients who had a VC assessment by fiberoptic laryngoscopy prior to undergoing thyroid surgery were included. Case files were reviewed for potential indicators of VCP, including hoarseness and other symptoms, previous neck surgery, largest nodule dimension, and history of head and neck irradiation.

Results: Of the 5,279 patients who had pre-operative laryngoscopy, 36 (0.68%) were found to have a VCP. Of these, 16 had a nodule > 3.5 cm, 15 had a hoarse voice, 12 had previous neck surgery, and 5 had a malignant cytology. More than one risk factor was present in 11 of these patients. Furthermore, the first 3 of these features would account for all 36 patients with pre-operative VCP. Pre-operative knowledge of malignancy was associated with palsy in 5 patients. However, all of these 5 patients also presented with either a hoarse voice or a nodule > 3 cm. Therefore, malignancy by itself was not an indicator of potential palsy. Approximately two-thirds of the 5,279 included patients had none of these 3 features and also did not have a VCP. Therefore, using these 3 pre-operative factors (hoarseness, previous surgery, nodule > 3.5 cm) as selection criteria, up to two-thirds of our patients could have forgone pre-operative laryngoscopy without any palsy being missed. As this is a retrospective study, these data need to be interpreted with that in mind.

Conclusion: Using this large dataset, we have established that a VCP is extremely unlikely in the absence of previous neck surgery, hoarseness, or a large nodule. Therefore, in the era of intraoperative neuromonitoring, where the recurrent laryngeal nerve can be directly assessed, we support a selective approach to pre-operative laryngoscopy using the aforementioned criteria. 

 

32.08 Very Early vs. Early Readmissions in General and Vascular Surgery Patients

L. N. Clark1, M. C. Helm1, S. Singh1, J. C. Gould1  1Medical College Of Wisconsin,Milwaukee, WI, USA

Introduction:  Readmission rates are an important surgical quality metric.  Readmissions up to 30 days after discharge following a procedure are the most commonly examined metric.  We hypothesize that ‘very early’ readmissions (0-3 days after discharge) have a significantly different root cause than ‘early’ readmissions (4-30 days after discharge).

Methods:  The American College of Surgeons National Surgical Quality Improvement Program (NSQIP) datasets from 2014-2015 were used to identify patients undergoing a general or vascular surgery procedure. Patients were excluded if they died during the index admission, were discharged greater than 30 days from the operation, or did not have readmission data entered. Patient demographics, medical comorbidities present at the time of surgery, and data regarding postoperative morbidity were analyzed. Complications were graded according to the Clavien-Dindo classification.  Binary logistic regression was used to compare age, functional status, comorbidities, discharge destination and complications to determine their relationship to any 30-day readmission as well as readmission within 3 days compared to 4-30 days following discharge.

Results: A total of 850,043 patients met inclusion criteria: 55.5% female, average age 55 years (range 18-89). Of these patients, 55,212 (6.5%) were readmitted within 30 days and 13,570 (1.6%) were readmitted within three days of discharge. These very early readmissions comprised 24.6% of all readmissions (Figure 1). When evaluating all readmissions from 0-30 days regardless of timing, age ≥65 (odds ratio [OR] 1.5; 95% confidence interval [CI] 1.5-1.6; p<0.0001), ≥3 comorbidities (OR 2.7; 95% CI 2.7-2.8; p<0.0001), preoperative functional dependence (OR 3.1; 95% CI 2.8-3.3; p<0.0001), discharge to a facility other than home (OR 2.8; 95% CI 2.7-2.9; p<0.0001), any grade three or four complication prior to discharge (OR 2.4; 95% CI 2.4-2.5; p<0.0001), and any grade three or four complication after discharge (OR 84.7; 95% CI 81.3-88.1; p<0.0001) were all identified as risk factors. The only factor found to be significantly associated with very early readmission compared to early readmission was any grade three or four complication prior to discharge (OR 1.3; 95% CI 1.2-1.4; p<0.0001).
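For context on odds ratios like those above, an unadjusted OR with a Wald 95% confidence interval can be computed directly from a 2×2 table. The study's estimates came from logistic regression, so this sketch, with hypothetical counts, is only illustrative:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table:
    a = exposed & readmitted,   b = exposed & not readmitted,
    c = unexposed & readmitted, d = unexposed & not readmitted.
    """
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)
```

With invented counts, `odds_ratio_ci(10, 90, 10, 190)` returns an OR of about 2.11 together with its interval bounds.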

Conclusion: Readmissions within 3 days of discharge constitute a large portion of all 30-day readmissions. Grade 3 and 4 complications prior to initial discharge are significantly associated with an increased risk of readmission, especially within the first 3 days. Further research is needed to determine whether effective, targeted strategies can be developed to prevent very early readmission.
 

32.06 Quantifying the Degree of Postoperative Opioid Over-Prescription

D. S. Swords1,2, S. Vijayakumar1, S. Brimhall1, B. Ostlund1, P. Narayanan1, J. Prochazka1, D. E. Skarda1,2  1Intermountain Healthcare,Surgical Services,Salt Lake City, UT, USA 2University Of Utah,Surgery,Salt Lake City, UT, USA

Introduction:  Previous studies have found that surgeons commonly over-prescribe opioid pain medications to surgical patients, representing an opportunity to decrease the flow of unnecessary opioids into the community. However, few resources are available for surgeons regarding the quantity of opioid medications that patients actually require after various surgeries. The goals of this study were to assess prescribing practices of surgeons and patient opioid utilization after a wide variety of surgical procedures.

Methods:  Between January 15 and August 21, 2017, patients who underwent surgical procedures at 1 of 27 Intermountain Healthcare (IHC) facilities were sent an email-based survey 2-3 weeks postoperatively. Surveys were re-sent to initial non-responders 2 additional times at 3 day intervals. The survey included questions about preoperative and postoperative opioid utilization. The IHC Enterprise Data Warehouse was queried for information about postoperative opioid prescriptions.

Results: During the study period, 6,673/21,434 (31.1%) patients responded to the survey. Sixty-nine percent of patients were opioid naïve, and 31.0% had taken opioids in the month prior to surgery. The cohort comprised 38.2% orthopedic surgery patients, 17.6% general surgery, 7.8% ENT, 7.2% gynecology, 7.3% urology, 4.9% neurosurgery, 4.9% plastics/maxillofacial, and 12.1% other specialties. Opioid-naïve patients were prescribed a median of 30 pills (interquartile range [IQR] 24, 50) but used a median of only 4 (IQR 0, 15). Patients who had taken opioids in the month prior to surgery were prescribed a median of 30 pills (IQR 24, 60) and also took a median of only 4 (IQR 0, 20). When examined on a procedure-specific basis, there was also substantial over-prescription of opioids for most examined procedures. Results for 5 representative procedures are shown in the Table.
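The median and IQR summaries above are straightforward to reproduce with Python's standard library; the sketch below is generic, and any pill counts passed to it are made-up examples rather than study data:

```python
from statistics import quantiles

def median_iqr(values):
    """Median and interquartile range (25th and 75th percentiles)."""
    q1, q2, q3 = quantiles(values, n=4, method="inclusive")
    return q2, (q1, q3)
```

For instance, `median_iqr([1, 2, 3, 4, 5])` returns `(3.0, (2.0, 4.0))`.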

Conclusion: The majority of patients undergoing surgery are substantially over-prescribed opioids postoperatively, representing a significant source of unnecessary opioids entering the community. Surgeons have an opportunity to improve the appropriateness of postoperative opioid prescribing by reducing the quantities prescribed. Our next step will be to provide our surgeons with recommendations regarding the number of doses that will satisfy the pain-control needs of the majority of patients.
 

32.07 Cost Effectiveness of Immediate Biopsy vs. Surveillance of Intermediate Suspicion Thyroid Nodules

E. J. Kuo1, J. X. Wu1, K. A. Zanocco1  1David Geffen School Of Medicine,Section Of Endocrine Surgery,Los Angeles, CA, USA

Introduction:
In an effort to reduce the overdiagnosis and overtreatment of low-risk thyroid cancer, recent American Thyroid Association guidelines increased the size-based biopsy thresholds for some sonographic categories of thyroid nodules. However, fine-needle aspiration (FNA) biopsy continues to be recommended for intermediate-suspicion nodules greater than 1 cm in diameter. We hypothesize that the quality-adjusted life expectancy of patients with sonographically intermediate-suspicion thyroid nodules would be improved, and costs would be decreased, by raising the size threshold for biopsy from 1.0 cm to 1.5 cm.

Methods:
A Markov transition-state model was constructed to compare the cost-effectiveness of immediate FNA versus ultrasound surveillance of an incidentally detected 1.5 cm thyroid nodule with intermediate-suspicion sonographic features (hypoechoic, smooth-margined solid nodule without microcalcifications, extrathyroidal extension, or taller than wide shape). Treatment outcome probabilities and their corresponding utilities were estimated based on literature review. Nonlinear growth modeling techniques were used to predict changes in the observed nodule size over time. Effectiveness was measured in quality-adjusted life years (QALYs). Costs were estimated using Medicare reimbursement data. A 3% annual discount rate was applied to all future costs and QALYs. The threshold for cost-effectiveness was defined as an incremental cost-effectiveness ratio of less than $100,000/QALY. Univariate and multivariate sensitivity analyses were used to examine the uncertainty of cost, probability, and utility estimates in the model.
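The 3% discounting and the $100,000/QALY threshold described above reduce to two short formulas. The sketch below is a minimal illustration with invented numbers, not the study's Markov model:

```python
def discounted_total(yearly_stream, rate=0.03):
    """Present value of a stream of yearly costs or QALYs (year 0 undiscounted)."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(yearly_stream))

def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio of strategy A versus B, in $/QALY."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)

# A strategy is conventionally cost-effective if its ICER falls below
# the willingness-to-pay threshold, here $100,000/QALY.
```

For instance, `icer(2000, 1.01, 1000, 1.00)` is $100,000/QALY, exactly at the threshold. A strategy that is both cheaper and more effective, as surveillance was in this study, is "dominant" and needs no ICER.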

Results:
The expected cost of routine ultrasound surveillance was $3,024 with an effectiveness of 23.8 QALYs. Ultrasound surveillance was $1,053 less costly and 0.01 QALY more effective than immediate FNA, making ultrasound surveillance the dominant strategy. Ultrasound surveillance decreased the lifetime rate of surgery from 26.5% to 23.7%. Immediate FNA became cost-effective during one-way sensitivity analysis when the pretest probability of malignancy increased from 15% to 71% or the cost of ultrasound examination increased from $130 to $570. Two-way sensitivity analysis demonstrated that routine FNA was cost effective if the quality adjustment factor for observation following a benign biopsy result exceeded the quality adjustment factor for observation without a biopsy. The model was not sensitive to the cost or complication rates of surgical therapy.

Conclusion:
Ultrasound surveillance is more cost-effective than immediate FNA for small thyroid nodules with intermediate-suspicion sonographic imaging characteristics, unless the probability of malignancy exceeds 71%. This model is highly sensitive to the utility differences between patients undergoing sonographic surveillance and patients with benign biopsy results. Therefore, additional primary investigation of health-related quality of life in these groups is necessary.

32.04 Impact of Frailty on Failure to Rescue After Low Risk and High Risk Inpatient Surgery

R. Shah1, K. Attwood6, S. Arya2, D. E. Hall3, J. M. Johanning5, N. N. Massarweh4  1Henry Ford Health System,General Surgery,Detroid, MI, USA 2Emory University School Of Medicine,Division Of Vascular And Endovascular Therapy/ Department Of Surgery,Atlanta, GA, USA 3University Of Pittsburg,Center For Health Equity Research And Promotion, Veterans Affairs Pittsburgh Healthcare System,Pittsburgh, PA, USA 4Baylor College Of Medicine,VA HSR&D Center For Innovations In Quality, Effectiveness And Safety, Michael E DeBakey VA Medical Center,Houston, TX, USA 5University Of Nebraska College Of Medicine,Department Of Surgery,Omaha, NE, USA 6Roswell Park Cancer Institute,Surgical Oncology,Buffalo, NY, USA

Introduction:  Failure to rescue (FTR), or death after a potentially preventable complication, is a nationally endorsed, publicly reported quality measure. However, little is known about the impact of frailty on FTR, in particular after lower risk surgical procedures.

Methods:  Retrospective cohort study of 984,550 patients from the National Surgical Quality Improvement Program (2005-2012) who underwent inpatient general, vascular, thoracic, cardiac, and orthopedic operations. Frailty was assessed using the clinically applicable Risk Analysis Index (RAI), and patients were stratified into five groups based on RAI score (≤10, 11-20, 21-30, 31-40, and >40). Procedures were categorized as low (≤1%) or high (>1%) mortality risk. The association between RAI, the number of post-operative complications (0, 1, 2, 3+), and FTR was evaluated using hierarchical modeling.
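The five-way stratification above is a simple binning of the RAI score; a sketch (using ASCII labels for the strata) might look like:

```python
def rai_stratum(score):
    """Map a Risk Analysis Index (RAI) score to the study's five frailty strata."""
    if score <= 10:
        return "<=10"
    if score <= 20:
        return "11-20"
    if score <= 30:
        return "21-30"
    if score <= 40:
        return "31-40"
    return ">40"
```

Boundary scores fall into the lower stratum, e.g. a score of 40 maps to "31-40" and 41 to ">40".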

Results: Among the most frail (RAI >30) patients in the cohort, ~20% were aged 55 years or younger. Regardless of procedural risk, increasing RAI score was associated with both an increased occurrence of post-operative complications and a greater number of complications. For those who underwent low risk surgery, major complication rates were 3.2%, 8.6%, 13.5%, 23.8%, and 36.4% for RAI scores of ≤10, 11-20, 21-30, 31-40, and >40, respectively; for patients undergoing high risk surgery, the corresponding rates of major complications were 13.5%, 23.7%, 31.1%, 42.5%, and 54.4%. Stratifying by the number of complications, significant increases in FTR rates were observed across RAI categories after both low and high risk procedures (Figure 1; trend test, p<0.001 for all). Increasing RAI was associated with an increased risk of FTR that was most pronounced after low risk procedures. For instance, the odds ratios (ORs) for FTR after 1 major complication for patients undergoing a low risk procedure were 4.8 (3.7, 6.2), 8.1 (5.9, 11.2), 19.3 (12.6, 29.6), and 48.8 (22.7, 104.9) for RAI scores of 11-20, 21-30, 31-40, and >40, respectively; for patients undergoing a high risk procedure, the corresponding ORs were 2.6 (2.4, 2.8), 5.2 (4.8, 5.6), 9.3 (8.5, 10.3), and 19.5 (16.8, 22.6).

Conclusion: Frailty has a dose-response relationship with complications and FTR that is similarly apparent after low and high risk inpatient surgical procedures. Tools facilitating rapid frailty assessment during preoperative evaluation may help provide patients with more accurate estimates of surgical risk, and could improve patient engagement in peri-operative interventions that enhance physiologic reserve and potentially mitigate aspects of procedural risk.

 

32.05 Bariatric Surgery Reduces the Incidence of Estrogen Receptor Positive Breast Cancer

T. Hassinger1, J. H. Mehaffey1, R. B. Hawkins1, B. D. Schirmer1, P. T. Hallowell1, A. T. Schroen1, S. L. Showalter1  1University Of Virginia,Department Of Surgery,Charlottesville, VA, USA

Introduction:  Bariatric surgery is an effective treatment for morbid obesity with long-lasting weight loss. Additionally, elevated body mass index (BMI) is known to be an important risk factor for the development of breast cancer, one of the most common cancer diagnoses among women in the United States. Therefore, we hypothesized that patients undergoing bariatric surgery would have a decreased incidence of estrogen receptor (ER) positive breast cancer when compared to a propensity-matched non-surgical cohort.

Methods:  The bariatric population for this study included all female patients who underwent bariatric surgery at a single institution between 1985 and 2015. Patients from all routine outpatient visits were identified from the clinical data repository (CDR) and matched 1:1 with bariatric patients using BMI, relevant comorbidities, demographics, and insurance status. The primary outcome of interest was ER positive breast cancer. Chart review was performed on all patients with a breast cancer diagnosis. Univariate analyses were performed to compare the two groups.
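The 1:1 matching step can be illustrated with a greedy nearest-neighbor sketch on a single balancing score (e.g., BMI or a propensity score). The actual matching used multiple covariates, so this simplified version is an assumption for illustration, not the authors' method:

```python
def nearest_neighbor_match(treated, controls):
    """Greedy 1:1 nearest-neighbor matching without replacement.

    `treated` and `controls` map patient IDs to a single balancing score;
    returns a dict pairing each matched treated ID with a control ID.
    """
    available = dict(controls)
    pairs = {}
    for tid, score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break  # no controls left to match
        cid = min(available, key=lambda c: abs(available[c] - score))
        pairs[tid] = cid
        del available[cid]  # without replacement: each control is used once
    return pairs
```

With hypothetical BMI values, `nearest_neighbor_match({"t1": 30.0, "t2": 45.0}, {"c1": 44.0, "c2": 31.0, "c3": 50.0})` pairs t1 with c2 and t2 with c1, leaving c3 unmatched.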

Results: A total of 4,860 patients were included in this study, with 2,430 in both the bariatric surgery and non-surgery groups. Median follow-up time from date of surgery or date of initial morbid obesity diagnosis (non-surgery group) was 5.6 years. There was no difference in median age (42.0 [35.0-51.0] vs. 42.0 [31.0-53.0]; p=0.29) or medical comorbidities aside from gastroesophageal reflux disease (713 [29.3%] vs. 149 [6.1%]; p<0.0001). Seventeen (0.7%) patients in the bariatric surgery group were diagnosed with any breast cancer after surgery compared to 32 (1.3%) patients in the non-surgery group (p=0.03). The non-surgery group had more ER positive tumors (4 [36.4%] vs. 22 [71.0%]; p=0.04) as well as larger median tumor size (p=0.02). 

Conclusion: Morbidly obese female patients who underwent bariatric surgery were found to have fewer subsequent diagnoses of any breast cancer and ER positive breast cancer when compared to a propensity-matched cohort. These results suggest the possibility of an oncologic benefit to weight-loss surgery.

32.01 To Activate or Not to Activate? The Controversy Surrounding Tamoxifen Treatment and Thrombosis

T. N. Augustine1, K. Pather1, T. Dix-Peek2, R. Duarte2  1University Of The Witwatersrand,School Of Anatomical Sciences, Faculty Of Health Sciences,Johannesburg, SOUTH AFRICA, South Africa 2University Of The Witwatersrand,Department Of Internal Medicine, School Of Clinical Medicine, Faculty Of Health Sciences,Johannesburg, SOUTH AFRICA, South Africa

Introduction

Cancer is associated with hypercoagulability, with therapies including hormone therapy linked to an increased risk of thrombotic complications. Notwithstanding the contribution of other hematological processes and components, platelets are implicated in contributing to a hypercoagulable state, and breast cancer patients undergoing Tamoxifen treatment are at greater risk for thrombotic complications. Nevertheless, laboratory studies show that Tamoxifen prevents platelet activation. Therein lies the controversy: we postulate that experimental design and methodological approaches to this question have resulted in disparate conclusions. We thus assessed the effects of Tamoxifen on breast cancer cell-induced platelet activation, and further assessed whether alterations in estrogen receptor (ER) profiles were associated with platelet induction capacity.

Materials and Methods

MCF7 and T47D cells were treated with 2 μM Tamoxifen for 24 hours at 37°C and 5% CO2 prior to exposure to whole blood. Peripheral whole blood from healthy female volunteers (n=5, aged 19-30 years, with specific exclusion criteria) was collected in 3.2% sodium citrate Vacuette coagulation tubes (Human Research Ethics Committee, University of the Witwatersrand, Approval #M160826). The first 2 ml of blood drawn was discarded to exclude the effect of mechanically activated platelets. Cells were exposed to blood for 2.5 min, followed by sample preparation for platelet activation (CD41+CD62p+) using flow cytometry, with a specialised interval gating strategy to determine the index of platelet activation (IPA). We further assessed cellular ER isoform expression using digital droplet PCR (ddPCR) to determine whether platelet induction capacity is associated with ER isoform expression.

Results

Breast cancer cells induced a significantly (p<0.05) higher IPA than the untreated negative controls, with T47D cells inducing greater levels than MCF7 cells. Tamoxifen treatment enhanced the ability of MCF7 cells to induce overall platelet activation, while further investigation revealed that T47D cells induced a substantial increase in a small population of CD62p+++ platelets. We are currently analyzing the ddPCR data to determine whether alterations in ER isoform expression are related to the ability of the cells to modulate platelet activation under Tamoxifen treatment.

Conclusions

Hormone-dependent breast cancer cells are able to induce platelet activation, the severity of which is potentially linked to the aggressiveness of the tumour and thus its phenotype. The addition of hormone-therapy treatment differentially affected the induction of platelet activation, rivalling that induced by the procoagulant thrombin. These results highlight the procoagulant nature of breast cancer cells, the effects of hormone therapy, and the need to assess graded levels of platelet activation when investigating the role of platelet-tumour cell interactions in thrombosis and tumour progression.

32.03 Operating Room Teams: Does Familiarity Make a Difference?

S. Fitzgibbons1,2, S. Kaplan3, X. Lei3, S. Safford5, S. Parker4  1MedStar Georgetown University Hospital,Surgery,Washington, DC, USA 2Georgetown University Medical Center,Washington, DC, USA 3George Mason University,Psychology,Fairfax, VA, USA 4Virginia Tech Carilion School Of Medicine And Research Institute,Human Factors,Roanoke, VA, USA 5Carilion Clinic,Pediatric Surgery,Lynchburg, VA, USA

Introduction: The composition of any given operating room team may vary from procedure to procedure. Studies in healthcare have documented that greater familiarity between certain team member pairs, or dyads (e.g., surgeon and scrub), corresponds to improved effectiveness, with outcomes ranging from shorter cross-clamp times during cardiopulmonary bypass to shorter operative times during mammoplasty. We sought to further our understanding of this effect beyond simple dyads by developing an OR team familiarity score reflective of the larger and more complex group, and by determining the impact of larger-group familiarity on surgical processes and clinical outcomes.

Methods: Data from a diverse, primarily urban healthcare system including 6 acute-care hospitals were extracted from a system-wide electronic medical record. All knee arthroplasty cases performed between 2013 and 2016 were included in the data set. Information regarding individual OR team participants and their roles in the surgery was collected, in addition to patient demographics (ASA class, age, gender, race, ethnicity), case information (surgical procedure, date and time of the operation), and outcome variables (length of procedure, length of hospital stay). Team familiarity was calculated using a previously published formula from Huckman, Staats, and Upton (2009). A multilevel regression (i.e., random coefficient modeling) framework was applied to examine the impact of a team’s familiarity score on case length and post-op length of stay. In addition, specific familiarity scores for each possible dyad on the team were calculated and analyzed. Dyads were defined as pairs of core team members: surgeon, scrub, circulator, anesthesiologist.
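Team familiarity measures of this kind are commonly operationalized as the average number of prior shared cases across all member pairs. The sketch below follows that spirit; it is our illustrative reading, not necessarily the exact formula published by Huckman, Staats, and Upton, and all IDs in the usage example are hypothetical:

```python
from itertools import combinations

def team_familiarity(team, case_history):
    """Average count of prior shared cases over all pairs of team members.

    `team` is a set of member IDs for the current case; `case_history` is a
    list of earlier teams (sets of member IDs) preceding the current case.
    """
    pairs = list(combinations(sorted(team), 2))
    if not pairs:
        return 0.0
    shared = sum(
        sum(1 for past in case_history if a in past and b in past)
        for a, b in pairs
    )
    return shared / len(pairs)
```

A dyad-level familiarity score is simply the inner count for a single pair, e.g. a surgeon-scrub dyad.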

Results: A total of 4,546 knee arthroplasty cases were included in the data set, with an average case length of 92.68 minutes and an average length of hospital stay of 3.22 days. When controlling for patient age, gender, hospital, and ASA class, a team’s familiarity score during a case was significantly associated with a shorter case length, with 10 previous team member interactions predicting a decreased case length of approximately 1.1 minutes (p=0.012). Similarly, an increased team familiarity score predicted a decreased length of stay, with 10 previous team member interactions predicting a decrease in hospital length of stay of 0.1 days. With respect to the impact of specific dyad familiarity, all dyads involving the circulator predicted a shorter length of hospital stay, while all three dyads among the surgeon, scrub, and circulator predicted a shorter case length.

Conclusion: Overall team member familiarity in the operating room is associated with a small but significant decrease in case length and hospital length of stay for patients undergoing total knee arthroplasty.