70.10 Out-of-Pocket Payment for Surgery in Uganda: Impoverishing and Catastrophic Expenditures

G. A. Anderson1,4, L. Ilcisin4, P. Kayima2, R. Mayanja3, N. Portal Benetiz2, L. Abesiga3, J. Ngonzi3, M. Shrime4  1Massachusetts General Hospital,Surgery,Boston, MA, USA 2Mbarara University Of Science And Technology,Surgery,Mbarara, WESTERN, Uganda 3Mbarara University Of Science And Technology,Obstetrics And Gynecology,Mbarara, WESTERN, Uganda 4Harvard Medical School,Global Health And Social Medicine,Boston, MASSACHUSETTS, USA

Introduction:  All care delivered at government hospitals in Uganda is provided to patients free of charge. Unfortunately, frequent stock-outs and broken equipment require patients to pay out of pocket for medications, supplies and diagnostics. This is on top of the direct non-medical costs, which can far exceed direct medical costs. Little is known about the amount of money patients have to pay to undergo an operation at government hospitals in Uganda.

Methods:  Every patient discharged from Mbarara Regional Referral Hospital (MRRH) after undergoing an operation during a 3-week period in April was approached. Participants were interviewed, using a validated tool, about their typical monthly expenditures to gauge poverty levels. Next they were asked about the medical costs incurred during the hospitalization. An impoverishing expense was incurred if a patient spent enough money to push their household into poverty. A catastrophic expense was incurred if the patient spent more than 10% of their average annual expenditures.
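The two expenditure definitions above can be sketched as a small classifier. This is a hypothetical helper for illustration only; the variable names, the monthly-to-annual conversion, and the handling of already-poor households are our assumptions, not the study instrument.

```python
def classify_expenditure(monthly_expenditure, medical_spending,
                         poverty_line_monthly, catastrophic_share=0.10):
    """Flag impoverishing and catastrophic health spending for one household.

    monthly_expenditure:  usual household expenditure per month (poverty proxy)
    medical_spending:     total out-of-pocket spending on the surgical episode
    poverty_line_monthly: poverty line expressed per month
    catastrophic_share:   catastrophic threshold as a share of annual expenditure
    """
    annual_expenditure = 12 * monthly_expenditure
    # Impoverishing: a household not already below the line is pushed below it
    already_poor = monthly_expenditure < poverty_line_monthly
    post_payment_monthly = (annual_expenditure - medical_spending) / 12
    impoverishing = (not already_poor) and post_payment_monthly < poverty_line_monthly
    # Catastrophic: payment exceeds 10% of average annual expenditures
    catastrophic = medical_spending > catastrophic_share * annual_expenditure
    return {"impoverishing": impoverishing, "catastrophic": catastrophic}
```

For example, a household spending 100 units per month that pays 200 units out of pocket against a 90-unit monthly poverty line would be flagged on both counts.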

Results: 41% of our patients met the World Bank’s definition of extreme poverty, compared with 33% of all Ugandans by Ministry of Finance estimates. After receiving surgical care, one quarter were pushed into poverty by Uganda’s definition, and 2 out of every 3 patients became poor by the World Bank’s definition. These devastating financial impacts can also be seen in other ways: over half of the households in our study had to borrow money to pay for care, 21% had to sell possessions, and 17% lost a job as a result of the hospitalization. Only 5% of our patients received some form of charity.

Conclusion: Despite “free care,” receiving an operation at a government hospital in Uganda can result in a severe economic burden to patients and their families. The Ugandan government needs to consider alternative forms of financial protection for its citizens. If surgical care is scaled up in Uganda, the result should not be a scale-up in financial catastrophe. The Ministry of Health and the Ministry of Finance can use our results, and others like them, to help inform decisions regarding healthcare policy and resource allocation.

 

70.09 The WHO Surgical Safety Checklist in Cambodia: Understanding Barriers to Quality Improvement.

N. Y. Garland1, H. Eap2, S. Kheng2, J. Forrester1, T. Uribe-Leitz1, M. M. Esquivel1, G. Lucas2, O. Palritha2, T. G. Weiser1  1Stanford University,Department Of Surgery, Section Of Acute Care Surgery,Stanford, CA, USA 2World Mate Emergency Hospital,Battambang, BATTAMBANG, Cambodia

Introduction: The WHO Surgical Safety Checklist (SSC) has been proven to reduce postoperative morbidity and mortality; however, it remains difficult to implement, particularly in low-resource settings. We aimed to better characterize both the barriers to checklist implementation and subsequent improvement in patient safety measures by assessing compliance with specific checklist items and evaluating root causes of compliance failures. We hypothesized that a better understanding of barriers to quality improvement would lead to a more effective implementation strategy.

Methods: The SSC was introduced at a 109-bed orthopedic trauma hospital in Battambang, Cambodia. After two half-day training sessions in checklist use for the operating theatre (OT) staff, intraoperative data were collected by trained nurses, via a paper form for the first 2 months and thereafter via a mobile REDCap data collection tool. Our tool focused on identifying performance of specific checklist items, including both communication and perioperative processes. Process-level data were compiled and presented to hospital administration and the surgical team.

Results: We collected information from direct observations of 308 surgical cases. Following initiation of the checklist, all communication elements of the checklist (discussing case length, confirming the correct patient, and estimating blood loss) were performed 100% (308/308) of the time, with the exception of team introductions, which the surgical team found unnecessary as they had a small staff and were familiar with each other. Several elements that required material resources were also performed with great consistency; for example, appropriate imaging was present in the OT during 100% (278/278) of cases. Other processes that were initially done poorly or not done at all were quickly brought to 100% compliance once resource barriers were overcome, such as the presence of a sterile indicator in instrument trays, which increased from 0% to 100% by the end of the observational period. However, complex processes that required clinical decision-making, such as antibiotic administration within 60 minutes of skin incision for clean cases, were performed inconsistently.

Conclusion: The primary barriers to checklist compliance in this low-resource setting were not communication factors or material resources, but rather inconsistently functioning processes. Complex processes that involve clinical decision-making were more difficult to perform consistently, but appear likely to improve over time with ongoing data feedback to the team. This study highlights the importance of understanding barriers to checklist compliance as part of a checklist implementation strategy.

 

70.08 Venous Thromboembolism Prophylaxis in Patients Undergoing Intracranial Pressure Monitoring is Safe

C. Luther1, A. Strumwasser1, D. Grabo1, D. Clark1, K. Inaba1, K. Matsushima1, E. Benjamin1, L. Lam1, D. Demetriades1  1University Of Southern California,Surgery – Trauma/Critical Care,Los Angeles, CA, USA

Introduction:  The use of venous thromboembolism (VTE) prophylaxis in patients with severe traumatic brain injury (TBI) and intracranial pressure monitoring (ICPM) is controversial. This study’s purpose was to determine the safety and efficacy of VTE prophylaxis in TBI patients undergoing ICPM. 

Methods:  A seven-year (2008-2015) retrospective analysis of patients undergoing ICPM at our academic Level I trauma center was conducted. Inclusion criteria were ICPM patients surviving ≥7 days who were eligible for VTE prophylaxis. Pediatric patients (<18 years) and patients with known VTE were excluded. Variables abstracted from the registry included patient demographics (age, sex), comorbidities, injury severity score (ISS), injury profiles, Glasgow Coma Score (GCS), systolic blood pressure (SBP), ICP data, Marshall CT index, pharmacy data, and prior anticoagulant use.  Outcomes included ICP data pre/post initiation of prophylaxis, VTE incidence, hemorrhage expansion on CT, and need for neurosurgical intervention.  Data were analyzed by unpaired Student’s t-test for continuous variables and chi-square analysis for categorical variables, with significance set at p≤0.05.
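As a generic illustration of the chi-square comparison described above (not the study's analysis code), the Pearson statistic for a 2x2 group-by-outcome table can be computed directly from the four cell counts:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    comparing a binary outcome between two groups."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected counts under independence: (row total * column total) / n
    expected = [
        (row1 * col1 / n, row1 * col2 / n),
        (row2 * col1 / n, row2 * col2 / n),
    ]
    observed = [(a, b), (c, d)]
    # Sum of (observed - expected)^2 / expected over all four cells
    return sum(
        (o - e) ** 2 / e
        for orow, erow in zip(observed, expected)
        for o, e in zip(orow, erow)
    )
```

The statistic is then referred to a chi-square distribution with one degree of freedom to obtain the p-value.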

Results: A total of 213 patients met inclusion criteria. Of these, 104 (49%) received VTE prophylaxis (ICPM-PPx) and 109 (51%) did not (ICPM-no PPx). Groups were matched for age (p=0.1), sex (p=0.7), admission SBP (p=0.8), GCS (p=0.7) and total injury burden (mean ISS ICPM-PPx=25±1.3 vs. ICPM-no PPx=23±1.1, p=0.2). In low head bleed severity (Marshall CT Index≤3), VTE rates (ICPM-PPx=8.3% vs. ICPM-noPPx=6.3%, p=0.7) and craniotomy rates (ICPM-PPx=21% vs. ICPM-noPPx=14%, p=0.3) were similar.  Among high-risk ICH (Marshall CT Index≥4), VTE rates (ICPM-noPPx=0% vs. ICPM-PPx=3.1%, p=0.2) and craniotomy rates (ICPM-no PPx=50.0% vs. ICPM-PPx=31.2%, p=0.1) were similar. Among patients on prophylaxis, 40 (39%) began prophylaxis with an ICPM in place (pre-d/c) while 64 (62%) began prophylaxis with the ICPM removed (post-d/c).  Among patients with ICPM in place, mean ICP did not change appreciably with prophylaxis (mean ICP pre-d/c=12±0.6 vs. post-d/c=11±0.8 mmHg, p=0.1) and there was no difference in the need for surgical intervention (pre-d/c=7.3% vs. post-d/c=3.1%, p=0.3).  There was no difference in prophylaxis interruptions (p=0.2), duration of prophylaxis (p=0.7), dosing (p=0.9) or type of prophylaxis (p=0.6).  The proportion of increased ICH identified by CT was similar pre-d/c vs. post-d/c (10.0% vs. 10.0%, p=0.9).  Overall incidence of VTE was not significantly different (pre-d/c=14.6% vs. post-d/c=9.2%, p=0.6).

Conclusion: Anticoagulant prophylaxis can be initiated safely with or without an ICP monitor in place.  Intracranial pressures do not change significantly and there is no increased need for surgical intervention.  However, the data suggest there is no decreased incidence of VTE in ICPM patients on prophylaxis.

 

70.07 Improving Utilization of Continuous Renal Replacement Therapy in ICUs of a Large Academic Center

J. Tseng1, P. Hain1, S. Barathan1, H. Rodriguez1, T. Griner1, M. S. Rambod1, N. Parrish1, H. Sax1, R. F. Alban1  1Cedars-Sinai Medical Center,Los Angeles, CA, USA

Introduction:
Continuous Renal Replacement Therapy (CRRT) is a dialysis modality that is essential in the management of critically ill patients with renal failure. It confers the advantage of removing solutes and fluid at a slow, constant rate, with lower rates of hypotension and other adverse events. CRRT use has increased over time, particularly in surgical patients. We sought to assess current utilization patterns of CRRT at our institution and to standardize usage and efficiency in a collaborative approach.

Methods:
Data were collected for fiscal years 2013 to 2016 at a large urban academic medical center. A task force involving intensive care unit (ICU), nursing, and nephrology leadership was formed in October 2015 to apply standardized guidelines for initiation and termination of CRRT and to improve daily communication and documentation among the nephrology service, ICU teams, and ICU nursing staff. In addition to other measures, electronic order sets were revised to reassess the need for CRRT and its associated labs on a daily basis. Utilization data and related costs were calculated using our internal data warehouse and finance department. Fiscal year (FY) data before and after the intervention were compared.

Results:
From FY2013 to FY2016, the total volume of patients on CRRT increased by 87% (233 to 435), and the total number of CRRT days increased by 91% (1,704 to 3,257), with the large majority of patients being surgical (62%). Prior to intervention, the median number of CRRT days per patient peaked at 8 days for FY2014 and FY2015; this decreased to 7 after our intervention for FY2016. The total direct cost of CRRT increased yearly from $2.84 million in FY2013 to $4.37 million in FY2016, while the average cost of CRRT per patient decreased from $12,167 in FY2013 to $11,548 in FY2015, and further to $10,030 in FY2016. This resulted in savings per case of $1,518, for a total annualized saving of $660,220 in FY2016. In addition, case mix index increased yearly from 8.82 in FY2013 to 9.35 in FY2016.

Conclusion:
By establishing a task force to critically review the usage of CRRT and implementing best practice guidelines and collaboration policies, we significantly reduced the cost of CRRT per case across all ICUs at our institution. This resulted in significant cost savings and improved documentation.
 

70.06 Burn Injury Outcomes in Patients with Preexisting Diabetic Disease

L. T. Knowlin1, B. A. Cairns1, A. G. Charles1  1University Of North Carolina At Chapel Hill,Surgery,Chapel Hill, NC, USA

Introduction: An estimated 486,000 people sustained burn injuries last year in the United States. Despite advancements in burn care over the last three decades, the burden of burn injury morbidity and mortality remains high, driven largely by burn wound infections and sepsis. The challenge remains to prognosticate various burn injury outcomes, as current burn prediction models do not account for specific comorbidities such as diabetes. We therefore sought to examine the impact of pre-existing diabetes on burn injury outcomes.

Methods: A retrospective analysis of patients admitted to a regional burn center from 2002-2012 was performed. Independent variables analyzed included basic demographics, burn mechanism, presence of inhalation injury, total body surface area (TBSA), and pre-existing comorbidities. Bivariate analysis was performed, and Poisson regression modeling was utilized to estimate the incidence of sepsis, graft complications, and in-hospital mortality.

Results: 7640 patients were included in this study. The overall survival rate was 96%, and 8% (n=605) had preexisting diabetes. Diabetic patients had a higher rate of sepsis (5% vs 2%), graft complications (2% vs 0.5%), and crude mortality (8% vs 4%) compared to those without diabetic disease (p < 0.001). In the adjusted Poisson regression model, the incidence risk of sepsis in patients with preexisting diabetic disease was 54% higher than in those without (IRR = 1.54, 95% CI = 1.04-2.29). The risk of graft complication was two times higher (IRR = 2.17, 95% CI = 1.03-4.58) for patients with pre-existing diabetic disease compared to those without, after controlling for patient demographics and injury characteristics. However, there was no significant impact of preexisting diabetic disease on in-hospital mortality.

Conclusion: Preexisting diabetes significantly increases the risk of developing sepsis and graft complication but has no significant effect on mortality following burn injury. Our findings emphasize the need to include comorbidities, alongside the factors used to prognosticate burn mortality, in burn care outcome models.

 

70.05 Association between Hospital Staffing Strategies and Failure to Rescue Rates

S. T. Ward1, D. A. Campbell1, C. Friese2, J. B. Dimick1, A. A. Ghaferi1  1University Of Michigan,Department Of Surgery,Ann Arbor, MI, USA 2University Of Michigan,School Of Nursing,Ann Arbor, MI, USA

Introduction: Failure to rescue (FTR) is a widely accepted quality measure in surgery. While numerous studies have established FTR as the principal driver of postoperative mortality rates, specific determinants of FTR remain unknown. In this study we investigate hospital staffing strategies associated with FTR.

Methods: Using prospectively collected data from the Michigan Surgical Quality Collaborative (MSQC), we identified 44,567 patients who underwent major general or vascular surgery procedures between 2008 and 2012. Hospitals were divided into tertiles based on risk-adjusted failure to rescue rates. We then administered a hospital resource survey to surgeon champions at MSQC participating hospitals, with a response rate of 62% (32/52). Survey items included ICU staffing model (closed or open), use of board-certified intensivists, presence of surgical hospitalists and residents, overnight coverage by advanced practice providers (APPs), and a dedicated rapid response team (RRT).

Results: FTR rates across the tertiles were 8.9%, 16.5% and 19.9%, respectively (p<0.001). Low FTR hospitals tended to have a closed ICU staffing model (56% vs 20%, p<0.001) and a higher proportion of board-certified intensivists (88% vs 60%, p<0.001) when compared to high FTR hospitals. There was also significantly more staffing of low FTR hospitals by hospitalists (85% vs 20%, p<0.001) and residents (62% vs 40%, p<0.01). Low FTR hospitals also had more overnight coverage by APPs (75% vs 45%, p<0.001) as well as a dedicated RRT (90% vs 60%, p<0.001).

Conclusion: Low FTR hospitals had significantly more staffing resources than high FTR hospitals. While hiring additional staff may be beneficial, there remain significant financial limitations for many hospitals to implement robust staffing models. As such, our ongoing work seeks to improve rescue rates with better understanding and implementation of effective hospital staffing strategies within these constraints.  

70.04 Acute Alcohol Intoxication Strongly Correlates With Polysubstance Abuse In Trauma Patients

A. Jordan1, P. Salen2, T. R. Wojda1, M. S. Cohen2, A. Hasani3, J. Luster3, H. Stankewicz2, S. P. Stawicki1  1St. Luke’s University Health Network,Department Of Surgery,Bethlehem, PENNSYLVANIA, USA 2St. Luke’s University Health Network,Department Of Emergency Medicine,Bethlehem, PENNSYLVANIA, USA 3Temple University,Department Of Surgery,Philadelphia, PA, USA

Introduction: Polysubstance abuse, defined as any combination of multiple drugs or at least one drug and alcohol, is a major public health problem. In addition to the negative impact on health and well-being of substance users, alcohol and/or drug abuse can be associated with significant trauma burden. The aim of this study was to determine if serum alcohol (EtOH) levels on initial trauma evaluation correlate with the simultaneous presence of other substances of abuse. We hypothesized that polysubstance use would be significantly more common among patients who presented to our trauma center with blood alcohol content (BAC) >0.10%.

Methods: A retrospective audit of the trauma registry (August 1998 to June 2015) was performed. Abstracted data included patient demographics, BAC determinations, all available formal determinations of urine/serum drug screening, injury mechanism and severity information, Glasgow coma scale (GCS) assessments, and 30-day mortality. Stratification of BAC was based on the 0.10% cut-off. Statistical comparisons were performed using Fisher’s exact testing and chi-square testing, with significance set at α=0.05.
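Fisher's exact test, referenced above, can be sketched for a 2x2 table using the hypergeometric distribution. This is a generic two-sided implementation (summing the probabilities of all tables no more probable than the observed one), shown only for illustration; it is not the registry's analysis software:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for a 2x2 table [[a, b], [c, d]]."""
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def prob(x):
        # Hypergeometric probability that the top-left cell equals x,
        # with all row and column margins held fixed
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo = max(0, col1 - row2)   # smallest feasible top-left cell
    hi = min(col1, row1)       # largest feasible top-left cell
    # Sum over all tables at most as probable as the observed table
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)
```

For small stratified comparisons like those in this study, the exact test avoids the large-sample approximation underlying the chi-square statistic.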

Results: We analyzed 488 patients (76.3% male, mean age 38.7 years). Median GCS was 15 (IQR 14-15). Median ISS was 9 (IQR 5-17). Overall 30-day mortality was 2.7%, with no difference between the elevated (>0.10%) and normal (<0.10%) EtOH groups. For the overall study sample, the median BAC was 0.10% (IQR 0-0.13). There were 284 (58.2%) patients with BAC <0.10% and 204 (41.8%) patients with BAC >0.10%. The two groups were similar in terms of mechanism of injury (both, >95% blunt).

A total of 245 patients underwent formal “tox-screen” evaluations. Of those, 31 (12.7%) were positive for marijuana, 18 (7.3%) for cocaine, 28 (11.4%) for opioids, and 32 (13.1%) for benzodiazepines. Patients with BAC >0.10% on initial evaluation were significantly more likely to also have polysubstance use (e.g., EtOH + additional substance) than patients with BAC <0.10% (64.0% [16/25] versus 24.1% [53/220], p<0.002, Table). Among polysubstance users, BAC >0.10% was significantly associated with opioid and cocaine use (Table).

Conclusion: This study confirms that a significant proportion of trauma victims with an admission BAC >0.10% present with evidence of polysubstance use. Patients with BAC >0.10% were more likely to test positive for drugs of abuse (e.g., cocaine and opioids) than patients with BAC <0.10%. Our findings support the need for routine substance abuse screening in the presence of EtOH intoxication, with focus on primary identification, appropriate clinical management, and early polysubstance abuse intervention.
 

70.03 Impact of Expanded Medicaid Coverage on Hospital Length of Stay Following Injury

J. Holzmacher1, K. Townsend1, C. Seavey1, S. Gannon1, L. Collins1, R. L. Amdur1, B. Sarani1  1George Washington University School Of Medicine And Health Sciences,Surgery,Washington, DC, USA

Introduction:
Despite implementation of the Affordable Care Act (ACA), states differ regarding specific eligibility requirements, coverage, and benefits. Washington DC (DC) has the most expansive Medicaid eligibility, including coverage for undocumented immigrants and individuals at a higher federal poverty income threshold, followed by Maryland (MD), which meets ACA expansion standards, and Virginia (VA), which has not expanded Medicaid. We hypothesize that patients in DC have a shorter hospital length of stay (LOS) following injury than either MD or VA.

Methods:
A retrospective study of an adult, urban trauma center, which receives patients from DC, VA, and MD, was performed from 2013-2016. Private insurance was excluded. A multivariate linear model predicting LOS by insurance and state and a model examining LOS by insurance type within states were created after adjusting for demographics, injury severity, penetrating injury, and head and pelvis abbreviated injury scores.

Results:
2728 patients were enrolled. Average patient age and injury severity score were 53 ± 23 years and 7 ± 6, respectively. 90% of patients sustained a blunt mechanism of injury. Overall, 36% of patients had Medicaid and 42% had Medicare insurance; 20% of the overall cohort was uninsured. 47% of patients in DC had Medicaid, compared with 18% in MD and 8% in VA (p<0.0001). 39% of patients in DC had Medicare, compared with 47% in MD and 43% in VA (p<0.0001).

Adjusted LOS was 1.9 days shorter for Medicaid patients in DC versus VA (p=0.003), and 0.9 days shorter in DC versus MD (p=0.02) (figure 1). Uninsured patients had 0.7 and 2.4 days shorter LOS than Medicaid in DC (p<0.0001) and VA (p=0.006), respectively, but no difference in LOS was found between states. Medicaid patients had 0.5 days shorter LOS than Medicare patients in DC (p=0.017), but 1.9 days longer LOS than Medicare patients in VA (p=0.042). There was no difference in LOS between Medicare and Medicaid patients in MD.

Conclusion:
Expanded Medicaid coverage, which includes undocumented immigrants and individuals at a higher federal poverty income threshold, is associated with shorter LOS following injury.
 

70.02 Graft Loss: Review of a Single Burn Center’s Experience and Proposal of a Graft Loss Grading Scale

L. S. Nosanov1,2, M. M. McLawhorn2, L. Hassan2, T. E. Travis1,2, S. Tejiram1,2, L. S. Johnson1,2, L. T. Moffatt2, J. W. Shupp1,2  1MedStar Washington Hospital Center,Burn Center,Washington, DC, USA 2MedStar Health Research Institute,Firefighters’ Burn And Surgical Research Laboratory,Washington, DC, USA

Introduction:  Etiologies contributing to burn graft loss are well studied, yet there exists no consensus definition of burn “graft loss”, nor a scale with which to grade severity. This study examines a single burn center’s experience with graft loss. Our institution introduced a graft loss grading scale in 2014 for quality improvement. We hypothesize that higher grades are associated with longer hospital stays and increased morbidity.

Methods:  Following IRB approval, a retrospective review was performed for all burned patients with graft loss listed on departmental morbidity and mortality reports from 7/2014 to 7/2016. Duplicate entries, wounds not secondary to burns, and chronic non-healing wounds were excluded. Data abstracted from the medical record included demographics, medical history, and details of injury, surgical procedures, graft loss, and clinical outcomes including hospital and ICU lengths of stay. Graft loss grades were assigned per the institutional grading scale (Table 1). Photos of affected areas were graded by two blinded surgeons, and a linear weighted κ was calculated to assess inter-rater agreement.
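A linear weighted κ penalizes disagreements in proportion to their distance on the ordinal grading scale, which suits a Grade 1–4 scale like this one. The following is a minimal generic sketch for two raters, shown for illustration only; it is not the study's statistical package:

```python
from collections import Counter

def linear_weighted_kappa(rater_a, rater_b, categories):
    """Linear weighted Cohen's kappa for two raters over ordinal categories."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}

    def w(i, j):
        # Linear disagreement weight: distance on the scale, normalized to [0, 1]
        return abs(i - j) / (k - 1)

    # Observed weighted disagreement
    pairs = Counter(zip(rater_a, rater_b))
    observed = sum(w(idx[a], idx[b]) * c for (a, b), c in pairs.items()) / n
    # Expected weighted disagreement from the raters' marginal distributions
    pa, pb = Counter(rater_a), Counter(rater_b)
    expected = sum(
        w(idx[a], idx[b]) * (pa[a] / n) * (pb[b] / n)
        for a in categories for b in categories
    )
    return 1 - observed / expected
```

Perfect agreement yields κ = 1, and agreement no better than chance yields κ near 0; the study's κ = 0.44 falls in the conventional "moderate" range.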

Results: In the two-year study period, 50 patients with graft loss were identified. After exclusions, 43 patients were included for analysis. Mean age was 50.1 years, and the majority were male (58.1%) and African American (41.9%). Smoking (30.2%) and diabetes (27.9%) were prevalent. The most common mechanisms were flame (55.8%), scald (18.6%) and thermal (11.6%). Total body surface area (TBSA) involvement ranged from 0.5% to 51.0% (mean 11.8±12.3%). Grade 1 graft loss was documented in the chart of one patient (2.3%), Grade 2 in 15 (34.9%), Grade 3 in 12 (27.9%) and Grade 4 in 15 (34.9%). Seven patients had wound infections at diagnosis of graft loss. Reoperation was performed in 20 (46.5%). Hospital LOS ranged from 9 to 81 days (mean 27.4±16.0 days), with ICU LOS ranging from 0 to 45 days (mean 7.7±10.9 days). Hospital LOS was longer than predicted (by TBSA%) in 38 patients (88.4%). Seven patients experienced significant morbidities, including two amputations. On image review, moderate agreement was reached between the blinded surgeons (κ = 0.44, 95% CI 0.11–0.65, p = 0.004).

Conclusion: Graft loss is a major source of morbidity in burn patients. In this cohort, reoperation was common and hospital LOS was extended. Use of a graft loss grading scale enables improved dialogue among providers and lays the foundation for improved understanding of risk factors. Results of this study will be used to guide revision of the institutional graft loss grading scale.

52.19 Predictors Of Emergent Operation For Diverticulitis

B. L. Corey1,2, L. Goss1,2, A. Gullick1,2, M. Morris1,2, D. Chu1,2, J. Grams1,2  1University Of Alabama At Birmingham,Surgery,Birmingham, ALABAMA, USA 2Birmingham Veteran’s Affairs Medical Center,Surgery,Birmingham, ALABAMA, USA

Introduction:  Emergent operations for diverticular disease are associated with worse clinical outcomes. Understanding predictors of emergency surgery may help guide recommendations for elective operations.  We hypothesized that patient-specific factors, such as co-morbid conditions, would predict emergent operation in the surgical treatment of diverticulitis. 

Methods:  The 2012-2013 National Surgical Quality Improvement Program (NSQIP) database was queried for patients undergoing surgical management of diverticulitis. Patients were stratified by emergent versus non-emergent operations.  Univariate analyses were used to determine predictors of emergent versus non-emergent operations. Multivariate adjustments were made using logistic regression with stepwise selection of all possible covariates. Significance was set at p≤0.05.
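The multivariate odds ratios in studies like this come from logistic regression; for intuition, the unadjusted odds ratio for a single binary predictor can be read directly off a 2x2 table, here with a Woolf-method 95% confidence interval. This is a generic sketch for illustration, not the NSQIP analysis itself:

```python
from math import exp, log, sqrt

def odds_ratio_2x2(exposed_event, exposed_no_event,
                   unexposed_event, unexposed_no_event):
    """Unadjusted odds ratio with a Woolf 95% CI from a 2x2 table.

    Rows are exposure (e.g., steroid use yes/no); columns are the outcome
    (e.g., emergent vs non-emergent operation).
    """
    # Cross-product ratio: (a * d) / (b * c)
    or_ = (exposed_event * unexposed_no_event) / (exposed_no_event * unexposed_event)
    # Woolf standard error of log(OR): sqrt of summed reciprocal cell counts
    se = sqrt(1 / exposed_event + 1 / exposed_no_event
              + 1 / unexposed_event + 1 / unexposed_no_event)
    lo, hi = exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se)
    return or_, lo, hi
```

A CI excluding 1 corresponds to a statistically significant association, which is what the adjusted ORs reported in the Results convey after covariate adjustment.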

Results: Of 8,070 patients who underwent surgery for diverticulitis, the median age was 59 years, and patients were more commonly female (54.2%), white (93%), and non-smoking (78.5%). Of these, 84.2% of cases were non-emergent and 15.8% were emergent. When compared to non-emergent patients, emergent patients were older (64 vs 59 years, p<0.001) and had a higher incidence of diabetes (13.1% vs 10.2%, p=0.003), COPD (9.9% vs 3.8%, p<0.001), hypertension (54.3% vs 46.9%, p<0.001), steroid use (14.1% vs 4.5%, p<0.001), and severe or life-threatening ASA class (71.4% vs 37.4%, p<0.001). On multivariate analysis, independent predictors of emergency operation included male sex (odds ratio [OR] 1.3), steroid use (OR 1.75), and higher ASA class (ASA class 3, OR 2.7; ASA class 4-5, OR 17.7) (p<0.05).

Conclusion: Male sex, steroid use, and higher ASA class were associated with increased risk of an emergency operation.  These factors should be considered when recommending surgery for patients with diverticulitis.

 

52.18 Implementation and Outcomes of an ERAS Protocol for Abdominal Wall Reconstruction

E. Stearns1, M. A. Plymale1, D. L. Davenport2, C. Totten1, S. Carmichael1, C. Tancula1, J. S. Roth1  1University Of Kentucky,General Surgery/Surgery/Medicine,Lexington, KY, USA 2University Of Kentucky,Surgery/Medicine,Lexington, KY, USA

Introduction: Enhanced Recovery after Surgery (ERAS) protocols are evidence-based quality improvement pathways reported to be associated with improved patient outcomes.  Building on a previously reported protocol for abdominal wall reconstruction (AWR) that addresses optimal pain control and acceleration of intestinal recovery, a 17-element ERAS protocol for AWR was developed. The purpose of this study was to compare short-term outcomes for patients cared for after protocol implementation with those of a cohort of historical controls. Process evaluation was conducted to pinpoint the level of adherence to protocol details in order to identify opportunities for improvement.

Methods: After obtaining IRB approval, surgical databases were searched for AWR cases for two-years prior and eleven months after protocol implementation. The two groups were compared on characteristics including age, body mass index, comorbidities, operative details, and clinical outcomes using chi square, Fisher’s exact test or Mann Whitney U test, as appropriate. Process evaluation consisted of determining the level of adherence to protocol details at the patient, health care provider and system levels.

Results: 173 patients underwent AWR by one surgeon during the time period described (46 patients with the ERAS protocol in place and 127 controls).  Preoperative characteristics of age, gender, ASA class, comorbidities, and smoking status were similar between the two groups.  Body mass index was slightly lower among ERAS patients (p = .042). Just over three-fourths of the cases in each group were CDC Wound Class 1; ERAS patients were more likely than controls to have had synthetic mesh implanted as opposed to other mesh types. In terms of clinical outcomes, ERAS patients had earlier return of bowel function (median 3 days vs. 4 days, p = .002) and a decreased incidence of superficial surgical site infection (SSI) (7% vs. 25%, p = .004) compared with controls. Hospital length of stay was similar between the two groups. Protocol adherence by ERAS component ranged from a low of 54% (acceleration of intestinal recovery) to 100% (postoperative glucose control). Protocol adherence by case varied from 55% (1 patient) to 94% (4 patients).

Conclusion: A comprehensive ERAS protocol for AWR demonstrates evidence for hastened return of bowel function and decreased incidence of SSI. Process evaluation identified specific areas of less than optimal adherence to protocol details, providing substantiation for increased education at all levels. A system-wide culture focused on enhanced recovery is needed to improve protocol adherence and subsequent patient outcomes.

52.17 Preoperative Opioid Abuse: Implications for Outcomes Following Low Risk Elective Surgery

A. N. Cobb1,2, A. Kothari1,2, S. Brownlee2, P. Kuo1,2  1Loyola University Medical Center,Department Of Surgery,Maywood, IL, USA 2Loyola University Medical Center,One:MAP Division Of Clinical Informatics And Analytics,Maywood, IL, USA

Introduction: Increasing numbers of patients are using opioids for pain management. While there is growing recognition of the potentially negative implications of postoperative opioid use, little is known about the effect of opioid abuse in the preoperative setting. The objective of this study was to determine the prevalence of opioid abuse in patients undergoing low risk elective procedures and to assess its impact on postoperative morbidity, mortality, and resource utilization. 

Methods: Patients with a preoperative diagnosis of opioid use disorder or dependence who underwent one of five low risk elective procedures (laparoscopic cholecystectomy, mastectomy, total knee replacement, gastric bypass, prostatectomy) were extracted using the Healthcare Cost and Utilization Project (HCUP) State Inpatient Database (SID) for California, New York, and Florida for the years 2009-2011. Descriptive statistics of the study population were calculated using arithmetic means with standard deviations for continuous variables and proportions for categorical variables. Risk-adjusted odds of mortality, morbidity, length of stay, and discharge disposition were calculated using mixed-effects regression models with fixed effects for age, race, sex, socioeconomic status, insurance type, and comorbid disease.

Results: This study included 541,637 adult patients who underwent one of five elective surgical procedures. Of these, 403 patients (0.07%) carried a preoperative diagnosis of opioid dependence. The largest proportion of patients with opioid dependence were Caucasian, female, privately insured, and of low socioeconomic status. The most common procedure in patients with opioid dependence was total knee replacement (322 patients). Patients with opioid use disorder were younger than those without (57 vs 64 years, p<0.001). Opioid use was not associated with inpatient mortality (OR=5.01, 95% CI 0.69–36.2); however, it was associated with increased aggregate morbidity (OR=2.0, 95% CI 1.19–3.39), the odds of a non-routine discharge disposition (OR=0.38, 95% CI 0.30–0.48), and prolonged length of stay (OR=2.67, 95% CI 2.08–3.41).

Conclusion: Preoperative opioid use has a negative impact on postoperative outcomes and leads to increased resource utilization following low-risk elective procedures. This presents an opportunity to create a preoperative screening tool to assess patients for opioid abuse or dependence. If patients are found to exhibit opioid abuse, surgeons should make a concerted effort to have these patients treated prior to elective surgical intervention, or decline to perform elective surgery on patients who misuse opioids and defer treatment.
 

52.16 Minimally Invasive Surgery For HCC: Comparison Of Laparoscopic And Robotic Approach

P. Magistri1,2, G. Tarantino1, G. Assirati1, T. Olivieri1, V. Serra1, N. De Ruvo1, R. Ballarin1, F. Di Benedetto1  1University Of Modena And Reggio Emilia,Hepato-Pancreato-Biliary Surgery And Liver Transplantation Unit,Modena, MO, Italy 2Sapienza – University Of Rome,Medical And Surgical Sciences And Translational Medicine,Rome, RM, Italy

Introduction:  Hepatocellular carcinoma (HCC) has a growing incidence worldwide and represents a leading cause of death in patients with cirrhosis. Minimally invasive approaches are now spreading across every field of surgery, including liver surgery.

Methods:  We retrospectively reviewed the demographics, clinical and pathological characteristics, and short-term outcomes of patients who underwent minimally invasive resection for HCC at our institution between June 2012 and May 2016.

Results: No significant differences in demographics and comorbidities were found between patients who underwent laparoscopic (n=24) and robotic (n=22) liver resections, except for the rate of cirrhotic patients (91.7% vs. 68.2%, p=0.046). Peri-operative data analysis showed that operative time (mean, 211 min vs. 318 min, p<0.001) was the only parameter in favor of laparoscopy. The robotic approach allowed resection of larger tumors (mean, 22.96 mm laparoscopic vs. 31.85 mm robotic, p=0.02) with a statistically significantly lower rate of conversion (16.7% vs. 0%, p=0.046). Moreover, robotic-assisted resections were associated with fewer Clavien I-II post-operative complications (22 vs. 14 cases, p=0.02). Regarding resection margins, the two groups had similar rates of disease-free margins, without any statistically significant difference.

Conclusion: A modern hepatobiliary center should offer both open and minimally invasive approaches to liver disease in order to provide the best care for each patient, according to individual comorbidities, risk factors, and personal quality-of-life expectations. Our results show that the robotic approach is a reliable tool for accurate oncologic surgery, comparable to the laparoscopic approach. Robotic surgery also allows the surgeon to perform larger resections and to safely approach liver segments that are hardly resectable laparoscopically, namely segments I, VII, and VIII.
 

52.15 Is there a Gender Disparity for Diabetes Remission after Bariatric Surgery?

K. Shpanskaya1, H. Khoury1, H. Rivas1, J. Morton1  1Stanford University,Palo Alto, CA, USA

Introduction:  Bariatric surgery is profoundly effective in improving and resolving type 2 diabetes mellitus (T2DM) in the severely obese. Our study aims to investigate the role of gender differences in T2DM remission and its related serum markers 12 months after bariatric surgery.

Methods: We performed a retrospective study of 817 severely obese T2DM patients who underwent laparoscopic Roux-en-Y gastric bypass or sleeve gastrectomy. T2DM remission was defined as complete or partial, with a glycated hemoglobin (HbA1c) of less than 6.0% or 6.5%, respectively, and no diabetes medication use for 12 months after surgery. HbA1c, fasting glucose, and insulin were measured preoperatively and 12 months postoperatively. Data were analyzed using Student's t-test and multivariate regression analysis, controlling for age, change in BMI, type of surgery, and number of preoperative comorbidities.

Results: 213 males and 604 females with a mean age of 50.42 (SD 9.76) and 48.55 (SD 10.57) years, respectively, and an average preoperative BMI of 46.26 (SD 8.35) and 46.83 (SD 7.85) were included in this study. Preoperatively, HbA1c levels in males (N=184; 7.81, SD 1.80) and females (N=530; 7.01, SD 1.40) differed significantly (P<0.001). Initial fasting glucose was significantly higher (P=0.010) in males (N=136; 151.44, SD 59.74) than in females (N=349; 137.08, SD 53.05). Females (N=226; -15.35, SD 9.25) showed significantly greater BMI reduction at 12 months post-surgery than males (N=76; -12.7, SD 10.43) (P=0.037). Males and females showed significant one-year improvement in HbA1c (P<0.001; males from 8.1% to 5.4%, females from 7.1% to 5.5%), fasting glucose (P<0.001; males from 149.97 to 101.76 mmol/L, females from 136.17 to 96.32 mmol/L), and insulin (P<0.001; males from 29.5 to 5.38 IU/L, females from 34.07 to 7.63 IU/L). One year after surgery, males (N=41; -2.67, SD 1.87) had a greater decrease in HbA1c levels (P=0.001) than females (N=153; -1.60, SD 1.48) after controlling for age, 12-month weight loss, and preoperative comorbidities. Changes in fasting glucose (P=0.511) and insulin (P=0.728) at 12 months after surgery did not differ significantly. Postoperatively, no significant difference in HbA1c, fasting glucose, or insulin levels was seen. Neither complete nor partial T2DM remission one year after surgery differed significantly between genders (P=0.372 and P=0.514, respectively).

Conclusion: Our study is one of the first to explore gender differences in T2DM remission and diabetes serum markers at one year after bariatric surgery. Males showed higher preoperative HbA1c levels and greater benefit of surgery for HbA1c levels at 12 months. Diabetes remission and reductions in fasting glucose and insulin levels at one year after surgery did not differ significantly between males and females. Despite a worse preoperative glycemic profile, male bariatric surgery patients demonstrated improvement in diabetes markers similar to that of female patients.

 

52.14 Quality of Life Assessment Before and After Ventral and Umbilical Hernia Repairs, a Prospective Study

S. L. Whitney1, M. Simhon1, C. Divino1  1Mount Sinai School Of Medicine,Surgery,New York, NY, USA

Introduction: Ventral and umbilical hernias are a common pathology in our population given increasing obesity rates and increasing rates of abdominal surgeries. We aimed to assess quality of life (QOL) improvement following repair of these hernias using a validated instrument, the Carolinas Comfort Scale (CCS), and to compare QOL outcomes between patients.

 

Methods: The CCS was tailored to assess symptoms and quality of life related to hernia repair. Patients undergoing repair were consented and completed three CCS surveys: pre-operatively, within 1 month after surgery, and greater than 1 month after surgery. Questions on the CCS survey assessed the severity of hernia-related symptoms on a scale of 1-5, asking whether various positions and activities resulted in worsening pain, movement limitations, or sensation of mesh post-procedure. Secondary outcomes measured were re-admission, complications, and blood loss.

 

Results: 43 patients consented to participate and completed the 3 required surveys. The average age was 49.2 years, with an average BMI of 29.7. 24 patients had a history of previous abdominal surgery, with 13 of these hernias classified as incisional ventral hernias. 42 of 43 patients were symptomatic, with pain and movement limitations present. All 43 cases were performed electively. 24 surgeries were performed laparoscopically, 13 open, and 6 robotically. Mesh was used in all laparoscopic and robotic cases and in 3 of the 13 open cases. The average duration of hernia was 38.5 months. On the CCS survey, all patients who underwent repair showed significant symptomatic improvement (p ≤ 0.05) between the pre-operative and second post-operative visits in all 9 categories regarding pain and movement limitations. All patients who received mesh reported mild sensation of mesh that diminished slightly by the second post-operative visit, but this change was not significant in any of the categories (p > 0.05). Among all patients who underwent repair, the mean satisfaction rating on a scale of 1-5 was 3.93 at the second post-operative visit. The mean number of days to return to work, as reported at the second post-operative visit, was 25.7. Mean surgery time was similar in laparoscopic and open cases (p = 0.11). There were no recurrences during the follow-up period.

 

Conclusion: Patients reported significant improvement in pain and functional status following ventral and umbilical hernia repair, with reduced pain and diminished movement limitations. Sensation of mesh did not significantly diminish over time post-operatively.

 

52.13 Outcomes of Neoadjuvant Therapy in Stage III Rectal Cancer

A. M. Dinaux1,2, R. Amri1,2, L. Bordeianou1,2, H. Kunitake1,2, D. L. Berger1,2  1Massachusetts General Hospital,Surgery,Boston, MA, USA 2Harvard School Of Medicine,Surgery,Brookline, MA, USA

Introduction:
Neoadjuvant therapy remains the gold standard for patients with advanced-stage rectal cancer. Patients with T3 or greater disease and/or suspected lymphadenopathy presently receive neoadjuvant chemoradiation. These patients have three possible responses to treatment: a complete pathologic response; a partial response with pathologically negative nodes at surgery; or persistent disease with positive nodes. This abstract addresses outcomes in these groups.

Methods:
All patients with clinical AJCC stage III disease who received neoadjuvant treatment were selected retrospectively from an IRB-approved, prospectively maintained database containing all surgically treated rectal cancer patients who underwent surgery between 2004 and 2014 at a single center.

Results:
A total of 207 patients were clinically stage III based on preoperative imaging and received neoadjuvant treatment. Seventy-six of these still had nodal disease on pathology after treatment, compared to 131 nodal responders, of whom 33 had a pathologic complete response. Compared to nodal responders with residual tumor, patients with positive nodes on pathology had higher rates of high-grade tumors (17.1% vs. 4.1%; P=0.016), extramural vascular invasion (30.3% vs. 7.1%; P<0.001), perineural invasion (31.6% vs. 11.2%; P=0.001), large vessel invasion (27.6% vs. 7.1%; P<0.001), and small vessel invasion (27.6% vs. 10.2%; P=0.003). The distant metastatic recurrence rate was also higher in the pN+ group (26.3% vs. 9.9%), with a shorter median disease-free survival (18.5 months, IQR [8.7-32.3] vs. 32.3 months, IQR [16.8-59.3]). Disease-specific survival analysis showed a significant difference between clinically stage III patients with a pathologic complete response, those with residual tumor but no remaining nodal disease on pathology, and those with persistent nodal disease; these groups are listed in order from best to worst survival on a Kaplan-Meier curve.

Conclusion:
Persistent nodal disease after neoadjuvant therapy is a very poor prognostic sign. Patients with residual nodal disease had significantly worse short- and long-term oncological outcomes compared to those with residual tumor but no positive lymph nodes. Considering that pathologically node-positive patients showed higher rates of specific pathologic features, these features may be indicators of a poor response to neoadjuvant therapy. These prognostic factors may be detectable on preoperative assessment, biopsy, and MRI. These patients may benefit from receiving a full course of chemotherapy prior to surgical resection, as they clearly have systemic disease.

52.12 Pre-Admission Frailty Predicts Post-Discharge Adverse Events in Acute Care Surgery Patients

Y. Li1, J. L. Pederson1, T. A. Churchill1, A. S. Wagg2,5, J. S. Holroyd-Leduc3,5, K. Alagiakrishnan2,5, R. S. Padwal2,6, R. G. Khadaroo1,4  1University Of Alberta,Department Of Surgery,Edmonton, AB, Canada 2University Of Alberta,Department Of Medicine,Edmonton, AB, Canada 3University Of Calgary,Department Of Medicine And Community Health Sciences,Calgary, AB, Canada 4University Of Alberta,Department Of Critical Care Medicine,Edmonton, AB, Canada 5Alberta Seniors Health Strategic Clinical Network,Calgary, AB, Canada 6Alberta Diabetes Institute,Edmonton, AB, Canada

Introduction:
Frailty is a subjective measure of decreased physiological reserve across multiple organ systems. Hospital readmissions are costly and may reflect quality of care, yet the prognostic importance of frailty after discharge following emergency surgery is not well established. We evaluated the association between frailty and the risk of readmission or post-discharge death in older surgical patients.

Methods:
We prospectively followed patients aged ≥65 years admitted to Acute Care Surgery at two tertiary care centres in Alberta, Canada, who preoperatively required assistance with <3 activities of daily living. Severity of frailty prior to admission was defined as well (score ≤2), managing-vulnerable (3-4), or mildly-moderately frail (≥5) on the CSHA Clinical Frailty Scale (CFS). Primary endpoints were composites of 30-day and 6-month all-cause readmission or death. We assessed endpoints using multivariable logistic regression adjusted for confounders (Table 1).

Results:
Of the 308 patients included, the mean age was 76±7.6 years, 55% were female, and the median CFS score was 3 (range 1-6); 168 patients were managing-vulnerable and 68 were mildly-moderately frail. The most common surgeries were cholecystectomies/appendectomies (28% closed, 8% open), small intestine (28%) or colon surgery (14%), and hernia repairs (14%). At 30 days, 42 (13.6%) patients were readmitted or died; at 6 months, 104 (33.8%). Frail patients were more likely to be readmitted or to have died within 30 days: 16% of managing-vulnerable (adjusted odds ratio [aOR] 4.60, 95% CI 1.29-16.45, p=0.019) and 18% of mildly-moderately frail patients (aOR 4.51, 95% CI 1.13-17.94, p=0.033), compared to 4% of well patients. At 6 months, an independent dose-response relationship was observed with increasing frailty severity: 33% of managing-vulnerable (aOR 2.15, 95% CI 1.01-4.55, p=0.046) and 54% of mildly-moderately frail patients (aOR 3.27, 95% CI 1.31-8.12, p=0.011) were readmitted or died, compared to 15% of well patients.

Conclusion:
Patients undergoing emergency abdominal surgery who were more frail were also more likely to be readmitted to hospital at 30 days and 6 months. To our knowledge, this is the first study to assess the impact of frailty on adverse events after discharge in this population. These findings can assist in developing targeted interventions to prevent readmissions in this vulnerable population.
 

52.11 A Comparative Study Of Two Parathyroid Hormone Assays In Primary Hyperparathyroidism Patients

S. Joglekar1, J. C. Lee1,2, J. Serpell1,2, H. Schneider3  1The Alfred Hospital,Department Of General Surgery,Melbourne, VICTORIA, Australia 2Monash University,Endocrine Surgery Unit,Melbourne, VICTORIA, Australia 3The Alfred Hospital,Department Of Pathology (Clinical Biochemistry),Melbourne, VICTORIA, Australia

Introduction:
Inappropriately high serum parathyroid hormone (PTH) is a diagnostic criterion for primary hyperparathyroidism (pHPT). Recently, hospital administrative records showed an increase in the diagnosis of pHPT during an approximately 2-year period when the Abbott assay was used at our institution instead of the usual Roche assay, due to product unavailability. We therefore aimed to compare the clinical performance of these two second-generation assays in patients undergoing parathyroidectomy for pHPT.

Methods:
All study patients underwent parathyroidectomy for pHPT at The Alfred Hospital. Those treated during the 20-month period (May 2012 to Feb 2014 inclusive) when the Abbott assay was in use were designated "Group A"; those treated during the subsequent 20-month period (Mar 2014 to Dec 2015 inclusive) when the Roche assay was again in use were designated "Group R". Comparisons were made of their biochemistry (serum calcium, PTH, and vitamin D levels) and clinical outcomes (diagnostic accuracy and recurrence prognostication) using Student's t-test and Fisher's exact test. Deviation of PTH from the normal range is expressed as multiples of the upper limit of normal (xULN), as the two assays have different normal ranges. A biochemical diagnosis was classified as false positive (FP) when associated with a negative neck exploration. Post-operative PTH reduction was calculated from pre-operative and recovery room PTH levels. In this study, curative treatment was defined as normo-calcaemia lasting over 3 months.

Results:
There were 79 patients in Group A and 64 in Group R. Mean age and gender distribution were similar between the groups (63.3 ± 15.6 vs 62 ± 12.9 years; 75% vs 70% female). The mean pre-operative PTH in Group A (2.25 ± 0.28 xULN) was significantly higher than in Group R (1.84 ± 0.25 xULN; p < 0.05), despite similar levels of hypercalcaemia (2.78 ± 0.17 mM vs 2.77 ± 0.18 mM; p = 0.72). FP rates were similar (p = 0.65), with only 2 patients in each group having a negative 4-gland exploration. Operative PTH reduction of >50% was seen in the majority of both groups (Group A 92% vs Group R 93%), as was normo-calcaemia at 3 months (Group A 90% vs Group R 93%).

Conclusion:
This study confirmed that although the Abbott assay measured higher PTH levels in patients with pHPT than the Roche assay, this does not appear to affect the ability of either assay to make an accurate diagnosis. Furthermore, the comparable kinetics of post-operative PTH conferred similar medium-term normo-calcaemia rates.
 

52.10 A Prospective Study on Quality of Life after Laparoscopic and Open Inguinal Hernia Repairs

J. Horwitz1, F. Burbano1, R. Lingnurkar2, C. M. Divino1  1Icahn School Of Medicine At Mount Sinai,Department Of Surgery,New York, NEW YORK, USA 2Central Michigan University College Of Medicine,Mount Pleasant, MICHIGAN, USA

Introduction: Patient-reported quality-of-life (QOL) data is becoming an important component of modern surgical quality improvement initiatives. Using the Carolinas Comfort Scale (CCS), a validated QOL survey specific to patients undergoing hernia repairs with mesh, the aim of our study was to prospectively compare QOL outcomes for patients undergoing both laparoscopic and open inguinal hernia repairs.

Methods: Patients undergoing inguinal hernia repairs by a four-surgeon group at The Mount Sinai Hospital from 2015-2016 were identified prospectively. The CCS survey was administered at the pre-operative visit, the post-operative visit (<1 month from surgery), and a follow-up visit (>1 month from surgery). Patients were stratified into operation-specific groups: unilateral laparoscopic, bilateral laparoscopic, open with mesh plug-and-patch, and open with mesh patch only. The primary outcomes were the CCS survey's 1-5 point scales for mesh sensation, pain, and movement limitation in the pre-operative, post-operative, and follow-up settings. Secondary outcomes analyzed were blood loss, operative time, admission, re-admission, and recurrence.

Results: To date, 92 patients have completed the CCS surveys at all three visits. Mean follow-up time was 4.4 months. Within this group, 40 underwent laparoscopic repairs (31 bilateral and 9 unilateral) and 52 underwent open repairs (35 plug-and-patch, 17 patch only). Each operative group experienced a significant decrease in pain between the pre-operative and follow-up settings. There were no significant QOL differences between the laparoscopic and open groups, nor between the unilateral and bilateral laparoscopic groups. The open plug-and-patch group had significantly higher pain and movement limitation scores at follow-up compared to the open patch-only group (p = 0.016 and p = 0.031, respectively); of note, no differences were observed at the baseline pre-operative visit. The unilateral laparoscopic group's operative time was significantly longer than that of the unilateral open group (74 vs 59 minutes, p < 0.001). There were no recurrences during the follow-up period.

Conclusion: Using prospective, patient-reported QOL data from the CCS survey, we have demonstrated that all patients experienced lower pain scores after inguinal hernia repair, regardless of operation type. There were no QOL differences between laparoscopic and open repairs; however, the open plug-and-patch repair group did experience increased pain and movement limitation at follow-up compared to the open patch-only repair group.

 

52.09 Risk Factor and Outcome Analysis of Patients with Bethesda Category III (AUS/FLUS) Thyroid Nodules

W. Ouyang1, O. Picado Roque1, S. Liu1, R. Teo1, A. Franco1, M. Gunder1, P. P. Parikh1, J. C. Farrá1, J. I. Lew1  1University Of Miami,Division Of Endocrine Surgery,Miami, FL, USA

Introduction:  With the Bethesda System for Reporting Thyroid Cytopathology (BSRTC), thyroid nodules designated as Bethesda Category III, atypia of undetermined significance or follicular lesion of undetermined significance (AUS/FLUS), by fine needle aspiration (FNA) have an estimated risk of malignancy (ROM) ranging from 5% to 15%. Previous reports from other institutions suggest that the ROM for AUS/FLUS is highly variable. This surgical series determines the ROM and the clinical factors that may predict underlying malignancy in patients with thyroid nodules categorized as AUS/FLUS at a single institution.

Methods:  A retrospective review of prospectively collected data on 665 patients with index thyroid nodules who underwent FNA and thyroidectomy from April 2010 to June 2016 was performed. Patients with thyroid nodules classified as AUS/FLUS by FNA were divided into malignant or benign groups based on final pathology, noting whether malignancy was found in the index thyroid nodule or as an incidental lesion. Incidental cancers were defined as malignancy discovered outside the index nodule, within the ipsilateral or contralateral thyroid lobe. Patients underwent initial thyroid lobectomy for definitive diagnosis unless there was a history of radiation exposure, familial thyroid cancer, obstructive symptoms, bilateral nodules, and/or patient preference, in which case total thyroidectomy was performed. Groups were compared in terms of demographics, clinicopathologic factors, and surgeon-performed ultrasound (SUS) features for malignancy.

Results: Among the 171 patients with AUS/FLUS nodules who underwent thyroidectomy, final pathology confirmed malignancy in 60% (103/171), compared to benign disease in 40% (68/171). Malignancy in the index thyroid nodule alone was found in 37% (64/171) of patients, whereas incidental cancers alone were found in 9% (16/171) on final pathology. Twenty-three patients (14%, 23/171) had both index nodule and incidental malignancy. The overall ROM for index thyroid nodules with AUS/FLUS was 51% (87/171). Papillary thyroid cancer (PTC) was the most common cancer, found in 86% (89/103) of patients with malignancy. The most common subtype among patients with PTC was the follicular variant, in 71% (63/89), followed by the classic variant, in 12% (11/89). Analysis of nodule features by SUS revealed that a solid texture was more likely to be present in malignant than in benign nodules (88.1% vs 73.5%, p<0.05).

Conclusion: In this surgical series, the malignancy rate of 51% in thyroid nodules with AUS/FLUS cytology is higher than the estimated ROM but within the range of other surgical reports in the literature. Furthermore, on SUS evaluation, solid features may help identify underlying malignancy in AUS/FLUS thyroid nodules. For appropriate treatment recommendations, surgeons should assess their own ROM for AUS/FLUS nodules, which may vary with everyday clinical practice and local institutional experience.