17.10 Gender Discrepancies in Bariatric Procedures Despite Increased Qualification and Referrals

E. M. Masterson1, F. Halperin2, A. Tavakkoli2  1Wake Forest University School Of Medicine, Winston-Salem, NC, USA 2Brigham And Women’s Hospital, Boston, MA, USA

Introduction:
Despite increasing obesity rates in both males and females, females continue to participate in commercial weight loss programs, receive medical weight loss counseling, and undergo bariatric surgery at disproportionately higher rates. The reasons for this are multifactorial and include both patient and provider biases. It is of interest to both physicians and surgeons to better understand these biases in order to serve this patient population more effectively and to increase bariatric surgery uptake from the current 1-2%.

Methods:
The study retrospectively reviewed electronic health records and customized paper surveys for 300 new patients seen at our Center for Weight Management and Metabolic Surgery (CWMMS). All new patients seen between July 2016 and February 2017 were included. Data collected included patient demographics (age, BMI, gender, comorbidities), treatment (diet and exercise counseling, pharmacotherapy, bariatric referrals), and outcomes (weight loss, referrals, bariatric procedures). All patient data were entered into and analyzed using REDCap, an online, HIPAA-compliant database.

Results:
79.3% (n=238) of patients seen were female. Based on BMI and comorbidities, 57.7% (n=173) of all patients qualified for bariatric surgery at their initial visit. Interestingly, a much higher percentage of male than female patients qualified for bariatric surgery (77.4% vs. 52.5%, respectively; p<0.001). Of the 173 patients meeting surgical criteria, 26.0% (n=45) were referred for bariatric surgery consultation at an initial or follow-up visit, with no difference between male and female referral rates (31.3% vs. 24.0%, respectively; p=0.33). Within the study time frame (July 2016-June 2017), a total of 14 patients underwent a bariatric procedure, representing 8.1% of qualified patients and 31.1% of referred patients. 78.6% of patients receiving a bariatric procedure were female.

Conclusion:

At an urban academic medical center, males referred for weight management consults were more likely to qualify for bariatric surgery at the initial visit. Although men were equally likely to be referred for bariatric surgery, they were less likely than females to undergo weight loss operations. These results highlight that (1) males are referred to medical weight loss programs by primary care or specialty physicians at higher BMIs, and (2) males and females were equally likely to agree to referral to a bariatric clinic, but men were less likely to proceed with surgery. Studies with longer follow-up and larger sample populations are necessary to extend these findings to other weight management centers, but these initial findings highlight gender discrepancies in medical weight management and in bariatric surgery referrals and procedures.

 

17.08 Tipping the Scales: Results of Bariatric Surgery by Socioeconomic Status in Black Patients

S. Timberline7, M. S. Pichardo6,7, G. Ortega5, M. F. Nunez8, E. S. Bauer5, E. Smith7, J. Tordecilla7, T. M. Fullum10, D. D. Tran10  5Howard University College Of Medicine, Clive O. Callender Howard-Harvard Health Sciences Outcomes Research Center, Washington, DC, USA 6Yale University, New Haven, CT, USA 7Howard University College Of Medicine, Washington, DC, USA 8Howard University College Of Medicine, Department Of Medicine, Washington, DC, USA 10Howard University College Of Medicine, Department Of Surgery, Center For Wellness And Weight Loss Surgery, Washington, DC, USA

Introduction:
Research suggests that individuals of racial/ethnic minority groups and of low socioeconomic status (SES) experience worse outcomes after weight loss surgery compared to their White and higher SES counterparts, respectively. Our objective is to examine the association between socioeconomic characteristics and post-operative outcomes by 12 months in Black patients from a single academic center.

Methods:
A retrospective study of Black patients who underwent bariatric surgery from 2008 to 2013 was performed. Median Household Income (MHI), obtained from census-tract level neighborhood SES data, was a proxy for patients’ SES and categorized into tertiles: $42,595-$76,674, $76,969-$100,652, and $100,704-$205,980. Insurance status at time of surgery was defined as public or private insurance. Outcomes of interest included mean weight loss, body mass index (BMI) points loss, percent excess weight loss (%EWL), and resolution of comorbidities (hypertension, diabetes, hypercholesterolemia). Adjusted multivariable regression analysis was performed to assess the association between SES characteristics and the outcomes of interest.
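As an illustrative sketch only (not the authors' analysis code, and with hypothetical column names), the tertile categorization and an adjusted model for one binary outcome might look like this in Python with pandas and statsmodels:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical field names; the study's actual dataset is not public.
df = pd.read_csv("bariatric_cohort.csv")

# Census-tract median household income split into tertiles, as described above.
df["mhi_tertile"] = pd.qcut(df["median_household_income"], q=3,
                            labels=["low", "middle", "high"])

# Adjusted logistic model for one binary outcome (e.g., hypertension resolution),
# with insurance status and MHI tertile as the SES exposures of interest.
model = smf.logit(
    "htn_resolved ~ C(mhi_tertile) + C(insurance) + age + baseline_bmi",
    data=df,
).fit()
print(model.summary())  # coefficients exponentiate to adjusted odds ratios
```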

Results:
Of 422 Black patients, most were female (82%) and had private insurance (73.9%). The mean preoperative BMI was 48.9 kg/m2. At baseline, about half of the patients had hypertension (51.1%), one third had diabetes (34.4%), and 28.2% had hypercholesterolemia. Postoperatively, there were no statistically significant differences by insurance status in %EWL (β= 0.17, 95%CI= -1.95 – 2.28), mean weight loss (β= 5.37, 95%CI= -3.88 – 14.62), BMI point difference (β= 5.39, 95%CI= -13.47 – 24.24), resolution of hypertension (OR= 1.57, 95%CI= 0.88 – 2.80), diabetes (OR= 1.29, 95%CI= 0.63 – 2.62), or hypercholesterolemia (OR= 0.81, 95%CI= 0.36 – 1.81). Median household income categories did not statistically differ in %EWL, mean weight loss, BMI point difference, or resolution of co-morbidities (Table 1).

Conclusion:
Among Black patients who underwent bariatric surgery, median household income level and type of insurance were not associated with differences in weight loss or co-morbidity resolution outcomes by 12 months post-operatively.

17.09 Readmission Following Laparoscopic Bariatric Surgery Using the MBSAQIP Database

K. Feng1, J. S. Richman1, B. L. Corey1, R. D. Stahl1, J. M. Grams1  1University Of Alabama at Birmingham, Division Of Gastrointestinal Surgery/Department Of Surgery, Birmingham, Alabama, USA

Introduction: Laparoscopic Roux-en-Y gastric bypass (RYGB) and sleeve gastrectomy (SG) are the two most common bariatric operations. As both have low reported morbidity and mortality rates, readmission rates are increasingly utilized as a measure of quality. Identifying patients at risk will allow for targeted interventions to decrease readmissions. The purpose of this study was to evaluate national readmission rates and the associated risk factors related to RYGB and SG.

Methods:  Data from patients undergoing SG or RYGB were identified from the 2015 Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) database. Chi-square test and logistic regression were used to examine patient characteristics and 30-day readmission rates. Patients were also stratified by bariatric procedure. 
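As a rough illustration of the unadjusted comparison, a chi-square test can be approximately reproduced from the percentages reported in the Results below; the counts here are back-calculated from those published percentages, not drawn from the MBSAQIP file:

```python
from scipy.stats import chi2_contingency

# Counts back-calculated from the reported figures: 144,459 patients,
# 30.44% RYGB (5.10% readmitted) and 69.56% SG (2.73% readmitted).
rygb_total, sg_total = 43_973, 100_486
rygb_readmit = round(rygb_total * 0.0510)  # ~2,243
sg_readmit = round(sg_total * 0.0273)      # ~2,743

table = [[rygb_readmit, rygb_total - rygb_readmit],
         [sg_readmit, sg_total - sg_readmit]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.2g}")  # p << 0.001, consistent with the abstract
```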

Results: A total of 144,459 patients were included: RYGB (30.44%) and SG (69.56%). The overall 30-day readmission rate was 3.45% (n=4,991). SG patients had a lower readmission rate compared to RYGB (2.73% vs. 5.10%; p<0.001). The most common causes of readmission were nausea, vomiting, or dehydration (RYGB 26.83%, SG 32.32%) and abdominal pain (RYGB 14.55%, SG 11.71%). Unadjusted analyses showed that readmitted patients had higher body mass index (BMI), longer operation times, and more often had length of stay (LOS) >4 days (all p<0.001). When stratified by operation, readmitted SG patients were more likely to have hypertension, hyperlipidemia, obstructive sleep apnea, and diabetes, while readmitted RYGB patients had longer operation times and more post-operative complications (all p<0.001). Adjusted analyses (Table 1) showed that factors associated with readmission for both procedures included being African-American (SG OR=1.46, RYGB OR=1.24), LOS >4 days (SG OR=3.63, RYGB OR=2.09), and postoperative inpatient complications (SG OR=23.03, RYGB OR=9.21), all p<0.001.

Conclusion: Readmission after bariatric surgery was associated with race, BMI, diabetes, LOS, and inpatient postoperative complications. Further studies should focus on understanding these risk factors to reduce readmission rates. 
 

17.06 Pre-Operative Weight-loss on a Liver Shrink Diet Predicts Early Weight-loss after Bariatric Surgery

A. D. Jalilvand1, J. Sojka1, K. Shah1, B. J. Needleman1, S. F. Noria1  1Ohio State University, General And Gastrointestinal Surgery, Columbus, OH, USA

Introduction:  The surgical weight loss program at our institution requires patients to comply with a liver-shrink diet (LSD) 1-3 weeks prior to bariatric surgery (BS) in order to facilitate liver retraction during surgery. However, the effect of LSD-induced weight-loss on weight-loss after BS is unclear. The primary objective of this study was to examine the correlation between LSD-induced weight-loss and post-operative weight-loss outcomes. Secondary objectives included identifying other factors that correlated with improved weight-loss after surgery.

Methods:  All patients who underwent primary laparoscopic sleeve gastrectomy (LSG) or Roux-en-Y gastric bypass (LRNYGB) between July 2014 and June 2016 at a single academic institution were retrospectively reviewed. Baseline demographic and operative data were obtained from the electronic medical record. The LSD consisted of a partial-liquid, low carbohydrate, high protein diet that utilized 4 protein shakes and 1 low carbohydrate meal/day. Percent excess body weight-loss (EBWL) was calculated for each patient on the LSD (EBWL-LSD), as well as at 2, 8, and 24 weeks after BS. Student's t-test, Mann-Whitney U, chi-squared, and Fisher's exact tests were used to assess significance. Multivariate linear regressions were conducted to determine independent predictors of weight-loss. A p-value of <0.05 was considered significant.
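The abstract does not write out the EBWL calculation; the standard definition of percent excess body weight-loss, assuming excess weight is measured against ideal body weight, is: EBWL = (initial weight – current weight) / (initial weight – ideal body weight) ×100 (%).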

Results: During the study period, 588 patients underwent primary BS, of which 57.14% had LSG and 42.86% underwent LRNYGB. Of these, 78.91% (464) were female, and the mean preoperative BMI was 48.8 ± 8.95 kg/m2. The mean time spent on the LSD was 18.21 ± 7.32 days, and median EBWL-LSD was 4.7% (1.73-7.61). Greater EBWL-LSD was observed in patients who were on the LSD for > 2 weeks (5.35% vs 3.09%, p<0.0005), and in men (median of 6.2% vs 4.23%, p=0.0001). Significant independent predictors of EBWL 2 weeks post-operatively included EBWL-LSD (p<0.0005) and male sex (p<0.0005), when adjusting for surgery type, baseline EBW, and age. Patients who achieved at least the median EBWL at 2 weeks (15.4%) had greater EBWL-LSD than those who did not (5.7% vs 4%, p<0.0005). The only significant predictor of EBWL at 2 months was 2-week EBWL (p <0.0005), when adjusting for EBWL-LSD, surgery type, and gender. At 24 weeks, significant independent predictors for EBWL included EBWL at 2 and 8 weeks (p=0.001, p<0.0005), and LRNYGB (p=0.002).

Conclusion: Greater EBWL-LSD is associated with male gender and longer duration on the LSD. EBWL-LSD was a significant independent predictor of EBWL at 2 weeks, while EBWL at 2 and 8 weeks were independent predictors for weight loss at 24 weeks. Patients who reached at least 5.7% EBWL-LSD were in the 50th percentile of EBWL at 2 weeks. This suggests that EBWL-LSD can predict optimal early weight loss outcomes after BS and be used to guide expectations both in preparation for, and after bariatric surgery. 

 

17.07 Type of Fundoplication Is Not Associated with Persistent Dysphagia Following Antireflux Surgery

K. Vande Walle1, L. M. Funk1, Y. Xu1, J. Greenberg1, A. Shada1, A. Lidor1  1University Of Wisconsin, Department Of Surgery, Madison, WI, USA

Introduction:  Laparoscopic fundoplication is the gold standard operation for control of gastroesophageal reflux disease. It has been suggested that persistent postoperative dysphagia occurs more often after Nissen fundoplication than after partial fundoplication (Toupet, Dor). We aimed to determine risk factors for persistent postoperative dysphagia, specifically examining the type of fundoplication, to inform operative planning.

Methods:  Patients experiencing gastroesophageal reflux symptoms who underwent laparoscopic Nissen, Toupet, or Dor fundoplication between January 2009 and July 2016 were identified from our single academic institution's foregut surgery database. A dysphagia score was obtained by administering a standardized quality of life survey in clinic or by telephone. Persistent dysphagia was defined as a difficulty swallowing score ≥ 1 (noticeable) on a scale from 0 (no symptoms) to 5 (incapacitating) at least one year postoperatively. Adjusted odds ratios (OR) of persistent dysphagia among those who underwent Nissen compared to partial fundoplication, with 95% confidence intervals (CI), were calculated in a multivariate logistic regression model adjusted for sex, age, body mass index (BMI), and redo operation.
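A minimal sketch of such an adjusted model, assuming hypothetical variable names rather than the authors' database fields, could be written in Python with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical column names, for illustration only.
df = pd.read_csv("foregut_followup.csv")
df["persistent_dysphagia"] = (df["dysphagia_score"] >= 1).astype(int)

# Logistic model: Nissen vs. partial fundoplication, adjusted for
# sex, age, BMI, and redo operation, as described above.
model = smf.logit(
    "persistent_dysphagia ~ C(fundo_type, Treatment('partial'))"
    " + C(sex) + age + bmi + C(redo_operation)",
    data=df,
).fit()

print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```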

Results: Of 441 patients in the database who met the inclusion criteria, 255 had at least one year of follow-up (response rate = 57.8%). The median follow-up interval was 3 years. 45.1% of patients underwent Nissen fundoplication and 54.9% underwent partial fundoplication. Persistent postoperative dysphagia was present in 25.9% (n=66) of patients. On adjusted analysis, there was no statistically significant association between the type of fundoplication (Nissen vs. partial) and the likelihood of dysphagia (Table 1).

Conclusion: The likelihood of persistent dysphagia was not associated with the type of fundoplication (Nissen vs. partial). While many surgeons believe partial fundoplication decreases the risk of persistent postoperative dysphagia compared to Nissen fundoplication, our study demonstrated equivalent rates of persistent postoperative dysphagia. This suggests that in patients who are equivalent candidates for either a Nissen or partial fundoplication, Nissen fundoplication is a sound choice for an antireflux operation.
 

17.05 Stratification by Age Improves Accuracy of ACS Risk Calculator for Paraesophageal Hernia Repair

A. D. Jalilvand1, M. Al-Mansour1, K. A. Perry1  1Ohio State University, General And Gastrointestinal Surgery, Columbus, OH, USA

Introduction: The ACS-NSQIP Surgical Risk Calculator (ANS-RC) predicts 30-day complication rates for specific surgical procedures. The goal of this study was to assess the accuracy of the ANS-RC for predicting 30-day complication rates in a cohort of patients undergoing laparoscopic paraesophageal hernia repair (LPEHR) in a large academic medical center.

Methods: One hundred seventy-seven patients underwent primary LPEHR between 2011 and 2016 and were included in the study. Using the definitions in the ANS-RC, risk factors and 30-day post-operative complications were obtained for all patients from the electronic medical record. Predicted complication rates were calculated for each patient. Data are presented as incidence (%), mean ± SD, or median (IQ range). Comparisons between predicted and observed complication rates were made using one-sample proportion or Wilcoxon paired signed-rank tests. A p-value of <0.05 was considered statistically significant.
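For example, the one-sample proportion test for a single outcome could be sketched as follows; the count of 24 events is back-calculated from the 13.6% rate reported below and is approximate:

```python
from scipy.stats import binomtest, wilcoxon

# Observed any-complication rate (~24/177, i.e., 13.6%) tested against
# the ANS-RC mean predicted rate of 7.7%.
print(binomtest(k=24, n=177, p=0.077).pvalue)

# For a continuous outcome such as LOS, a paired Wilcoxon signed-rank test
# would compare per-patient observed values with the calculator's predictions:
# stat, p = wilcoxon(observed_los, predicted_los)
```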

Results: During the study period, LPEHR was performed for 177 patients with a mean age of 66.2 ± 14.0 years and BMI of 30.2 ± 6.1 kg/m2. Seventy-three percent (n=156) were female and most patients had an ASA score of 2 (n=47, 26.6%) or 3 (n=122, 68.9%). Compared to the ANS-RC general population, this cohort had higher risks for serious complications (7.0% vs 5.7%), cardiac complications (0.4% vs 0.2%), reoperation (2.3% vs 2.1%), and readmission (6.5% vs 5.2%). Overall, the observed complication rates for any complication (13.6% vs 7.7%, p<0.01), serious complication (11.4% vs 7%, p=0.02), death (1.7% vs 0.3%, p<0.01), reoperation (6.8% vs 2.3%, p<0.01), and readmission (11.3% vs 6.5%, p<0.01) were higher than those predicted by the ANS-RC. The median hospital length of stay (LOS) was significantly shorter than predicted (2 vs 2.5 days, p<0.01). When stratified for patients with ASA scores of 2 or 3, the calculator more accurately predicted the observed complication rates, although reoperation (p=0.02) for ASA 2, and reoperation (p=0.04), SNF placement (p=0.03), and readmission rates (p=0.04) for ASA 3 were higher than predicted by the ANS-RC. The calculator most accurately predicted complication rates when patients were stratified by age group (<65, 65-79, 80+). Predicted values were lower than observed rates for reoperation in patients <65 (p=0.01) and 65-79 (p<0.01), readmission for patients <65 (p<0.01), and any or serious complications for patients >80 years of age (p=0.01). The ANS-RC significantly overestimated LOS for patients <65 (p<0.01) and 65-79 years (p<0.01).

Conclusion: While the ANS-RC provides a useful tool for guiding preoperative discussions, this cohort, comprised primarily of elderly patients with significant medical comorbidities, had significantly higher than predicted complication rates compared to the general NSQIP population. However, stratifying patients by age and ASA score improves the accuracy of the ANS-RC for LPEHR.

17.02 New Onset Alcohol Use Disorder Following Bariatric Surgery

C. Holliday1, M. Sessine1, N. Ibrahim1, M. Alameddine1, J. Brennan1, A. A. Ghaferi1  1University Of Michigan, Ann Arbor, MI, USA

Introduction:

Bariatric surgery is the most effective treatment for morbid obesity; however, there may be significant unanticipated psychosocial effects after surgery. Prior work identified a three-fold increase in the incidence of alcohol use disorder (AUD) after surgery in patients who underwent Roux-en-Y gastric bypass (RYGB). The landscape of bariatric surgery has changed, with sleeve gastrectomy (SG) now comprising over 50% of primary bariatric operations. However, the degree to which patients who undergo SG develop AUD remains unknown. Therefore, we sought to characterize the incidence of AUD in patients who have undergone SG compared to RYGB and potential predisposing patient factors.

Methods:

This study used prospectively collected, patient-reported data from a state-wide quality collaborative. Presence of AUD was determined using the validated Alcohol Use Disorders Identification Test for Consumption (AUDIT-C), with a score ≥4 in men and ≥3 in women suggestive of AUD. We used bivariate chi-square tests for categorical variables and independent samples t-tests for continuous variables. We used multivariable logistic regression to identify patient characteristics that may predispose patients to development of AUD at 1 and 2 years after surgery.
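A minimal sketch of the sex-specific screening rule described above:

```python
def audit_c_positive(score: int, sex: str) -> bool:
    """Flag probable AUD from an AUDIT-C score using the cutoffs above:
    >=4 for men and >=3 for women."""
    threshold = 4 if sex.lower() == "male" else 3
    return score >= threshold

assert audit_c_positive(4, "male") and not audit_c_positive(3, "male")
assert audit_c_positive(3, "female")
```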

Results:

The prevalence of AUD in all patients who underwent bariatric surgery in our population was 9.6% preoperatively (n=5724), 8.5% at 1 year postoperatively (n=5724), and 14.0% at 2 years postoperatively (n=1381). The preoperative, 1-year, and 2-year prevalence of AUD for SG was 10.1%, 9.0%, and 14.4%, respectively. The preoperative, 1-year, and 2-year postoperative prevalence of AUD for RYGB was 7.6%, 6.3%, and 11.9%, respectively. The rates of new onset AUD in the first year following SG and RYGB were 0.75% and 0.54%, respectively. However, in year two, there was a significant increase in the incidence of new onset AUD: 8.5% for SG and 7.2% for RYGB (Figure). Predisposing patient factors for AUD development included higher educational level (p<0.01) and higher household income (p<0.01).

Conclusions:

This is the first large, multi-institutional study of AUD in sleeve gastrectomy patients. The prevalence of alcohol use disorder in patients undergoing SG and RYGB was similar pre- and post-operatively. While there was only a slight increase in the incidence of new onset AUD in the first postoperative year, there was a marked increase in new onset AUD in the second year after both SG and RYGB. Understanding the timing and incidence of alcohol use disorder in patients undergoing sleeve gastrectomy—the most commonly performed bariatric operation in the United States—is critical to providing appropriate counseling and treatment. 

17.03 Discrepancies Between Physician and Midlevel Provider Attitudes on Bariatric Surgery

S. M. Wrenn1, V. Shah1, P. W. Callas1, W. Abu-Jaish1  1University Of Vermont College Of Medicine / Fletcher Allen Health Care, Burlington, VT, USA

Introduction: Bariatric surgery (BS) remains a mainstay of treatment for severe obesity and/or diabetes mellitus. Referral for BS is predominantly dictated by primary care practitioners, comprising physicians and midlevel providers. Provider perceptions of and knowledge about these procedures influence treatment decisions, surgical volume, and ultimately patient outcomes.

 

Methods: We constructed a novel electronic survey and distributed it to all physicians and midlevel providers (n=1169) at a single academic medical center and its affiliated external sites. Respondents were queried for demographic information and baseline perceptions regarding BS, and were given the option to view short informational surgical videos on four procedures (sleeve gastrectomy, Roux-en-Y gastric bypass, laparoscopic gastric band, and duodenal switch). Their perceptions were reassessed after viewing these videos. Responses were given on a Likert scale (1=very positive/very likely, 5=very negative/very unlikely) or as multiple-choice responses. Statistical analysis was performed with two-sample t-tests and Fisher's exact test. Multivariate analysis adjusted for gender, specialty (primary care vs. specialist), practice, and education level (MD/DO vs. midlevel).

 

Results: Total respondents (n) included 114 physicians and 26 midlevel providers (12% response rate). Midlevel providers preferred weight loss medication (mean 3.3 vs 2.6, p=.005) for the treatment of diabetes and were less likely to recommend a randomized trial of weight loss surgery (mean 2.1 vs. 1.6, p=0.003). Midlevel providers also had a less favorable opinion overall of BS than physicians (mean Likert scale response 2.4 vs. 1.9, p=0.003), including its ability to treat diabetes mellitus (mean 2.4 vs 2.0, p=0.02). Midlevel providers believed there was an increased likelihood of death from all 4 surgeries. Providers who watched an educational video on sleeve gastrectomy trended toward being more likely to recommend the procedure than those who had not (p=.07). After adjustment, there was no difference between genders or between specialists and generalists. After reviewing the educational material, 60% of all providers stated they had a more favorable opinion of BS, and midlevel providers were just as likely as physicians to recommend BS (1.7 vs. 1.9, p=0.26).

 

Conclusions: Midlevel providers overall had significantly more negative perceptions of BS than physicians and perceived it to be of higher risk. This was at least partially alleviated by viewing educational videos. Continued educational interventions geared toward primary care practitioners, particularly midlevel providers, may improve perceptions and increase referrals.

17.04 National Evaluation of Adherence to Quality Measures in Esophageal Cancer

A. Adhia1, J. Feinglass1, K. Engelhardt1, M. DeCamp1, D. Odell1  1Northwestern University, Chicago, IL, USA

Introduction: Esophageal cancer is the leading cause of death among GI malignancies, and the incidence of the disease is rising faster than that of any other solid organ tumor. Patients frequently present with locally advanced disease (stage III), contributing to challenges in treatment decision making. Our objective was to assess adherence to four novel quality measures in patients with stage III esophageal cancer.

Methods:  18,555 patients diagnosed with stage III esophageal cancer between 2004 and 2014 were identified from the National Cancer Database (NCDB). Four quality measures were defined from NCCN guidelines: administration of induction therapy, >15 lymph nodes sampled at resection, surgery within 120 days of neoadjuvant treatment, and R0 resection. The association of patient demographic and treatment variables (age, sex, location of lesion, histology, income, education, race and ethnicity, and year of diagnosis) with measure adherence was assessed using logistic regression. Risk of all-cause mortality was assessed comparing adherent and non-adherent cases using Cox modeling. Kaplan-Meier survival estimates were performed for groups adhering to none, one, two, three, or all four of the quality measures.
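As an illustration of this survival analysis workflow (not the authors' code; the NCDB field names here are hypothetical), the Cox model and Kaplan-Meier estimates could be sketched with the lifelines package:

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

df = pd.read_csv("ncdb_stage3_esophagus.csv")  # hypothetical extract

# Cox model of all-cause mortality for one measure (e.g., induction therapy).
cph = CoxPHFitter()
cph.fit(df[["survival_months", "death", "induction_adherent", "age", "male"]],
        duration_col="survival_months", event_col="death")
cph.print_summary()  # hazard ratio for induction_adherent

# Kaplan-Meier curves by number of measures met (0 through 4).
kmf = KaplanMeierFitter()
for k, grp in df.groupby("n_measures_met"):
    kmf.fit(grp["survival_months"], grp["death"], label=f"{k} measures met")
    kmf.plot_survival_function()
```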

Results: Adherence was high for three of the quality measures: neoadjuvant treatment (92.7%), timing of surgery (82.5%), and completeness of resection (91.5%). However, nodal evaluation was adequate in only a minority of patients (20.0%). Advanced age, Medicaid insurance status, lower level of education, and Black or Hispanic ethnicity were all associated with significantly increased odds of non-adherence for all measures. Adherence improved in the more recent time period, with cases after 2008 having improved adherence in administration of induction therapy (OR = 2.58 in the 2012-2014 period) and adequate nodal staging (OR = 2.49 in 2012-2014). Achieving adherence was associated with a statistically significant decrease in all-cause mortality for administration of induction therapy (HR = 0.70 [0.62, 0.78]), nodal staging (HR = 0.67 [0.63, 0.70]), and R0 resection (HR = 0.48 [0.43, 0.53]), but not for timing of surgery (HR = 0.93 [0.85, 1.02]). Survival improved as the number of quality measures an individual patient adhered to increased (Figure).

Conclusion: Adherence to quality measures in the care of patients with stage III esophageal cancer is associated with improved survival.  Understanding variability in measure adherence may identify potential targets for cancer quality improvement initiatives.

17.01 Effect of Preoperative Liquid Diet on Liver Volume and MRI-Estimated Proton Density Fat Fraction

T. Suzuki1, R. B. Luo1, J. C. Hooker2, Y. Covarrubias2, T. Wolfson2, A. Schlein2, S. Liu1, J. B. Schwimmer3, L. M. Funk5, J. A. Greenberg5, G. M. Campos6, B. J. Sandler1, S. Horgan1, S. B. Reeder4, C. B. Sirlin2, G. R. Jacobsen1  1University Of California – San Diego, Division Of Minimally Invasive Surgery, Department Of Surgery, San Diego, CA, USA 2University Of California – San Diego, Liver Imaging Group, Department Of Radiology, San Diego, CA, USA 3University Of California – San Diego, Division Of Gastroenterology, Hepatology, And Nutrition, Department Of Pediatrics, San Diego, CA, USA 4University Of Wisconsin, Departments Of Radiology, Medical Physics, Biomedical Engineering, Medicine And Emergency Medicine, Madison, WI, USA 5University Of Wisconsin, Department Of Surgery, Madison, WI, USA 6Virginia Commonwealth University, Division Of Bariatric And GI Surgery, Richmond, VA, USA

Introduction: Liver volume (LV) and fat content are important considerations during bariatric procedures as increased liver volume not only increases the difficulty of intra-operative visualization but also elevates the risk of bleeding complications. The aim of this study was to evaluate the impact of a preoperative liquid diet (PLD) on LV and magnetic resonance imaging (MRI) estimated proton density fat fraction (PDFF) as a measure of liver fat content, in morbidly obese patients undergoing bariatric surgery (BS). 

Methods: This prospective multi-institutional study was approved by an institutional review board (IRB) and was Health Insurance Portability and Accountability Act (HIPAA) compliant. After providing informed consent, patients meeting National Institutes of Health (NIH) criteria for BS underwent MRI at baseline and post PLD. LV and PDFF were estimated from 3D chemical shift encoded MRI (CSE-MRI) anatomical images and PDFF maps, using the OsiriX (Pixmeo SARL, Bernex, Switzerland) imaging software. Primary outcomes were patient weight, body mass index (BMI), LV, and PDFF. Secondary outcomes were relationships between the changes in BMI, LV, and PDFF. Data were analyzed with paired t-test and Wilcoxon-Mann-Whitney tests. Pearson correlation was used to assess the relationships between measures. The relative reduction rate of BMI was defined as: (baseline BMI – post BMI) / baseline BMI ×100 (%). The relative reduction rate of LV was defined as: (baseline LV – post LV) / baseline LV ×100 (%). The absolute reduction rate of PDFF was defined as: baseline PDFF – post PDFF (%).
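The three reduction definitions above are simple enough to state as code; this sketch also shows why a reduction computed from the group means reported below differs slightly from the reported mean of per-patient reductions:

```python
def relative_reduction(baseline: float, post: float) -> float:
    """Relative reduction in percent: (baseline - post) / baseline x 100."""
    return (baseline - post) / baseline * 100.0

def absolute_reduction(baseline: float, post: float) -> float:
    """Absolute reduction in percentage points: baseline - post."""
    return baseline - post

# Checks against the group means reported in the Results; note that the mean
# of per-patient relative reductions (e.g., 4.1% for BMI) need not equal the
# relative reduction of the group means computed here.
print(relative_reduction(43.6, 41.9))      # BMI: ~3.9%
print(relative_reduction(2277.2, 1985.0))  # LV: ~12.8%
print(absolute_reduction(13.6, 10.4))      # PDFF: 3.2 percentage points
```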

Results: One hundred twenty-four patients scheduled for BS were recruited to be part of the study between October 2010 and June 2015. 102 patients (87 females, 85.3%; mean age 48.0 ± 12.8 years) underwent MRI at baseline and post PLD. The mean liquid diet duration was 17.1 ± 8.8 days. Post PLD, mean weight decreased from 119.6 ± 19.1 kg to 114.8 ± 18.7 kg (p<0.0001). BMI decreased from 43.6 ± 6.4 kg/m2 to 41.9 ± 6.3 kg/m2 (p<0.0001) with a mean relative reduction of 4.1 ± 2.2 %. LV decreased from 2277.2 ± 578.0 cm3 to 1985.0 ± 510.6 cm3 (p<0.0001) with a mean relative reduction of 12.3 ± 10.1 %. PDFF decreased from 13.6 ± 9.4 % to 10.4 ± 7.8 % (p<0.0001) with a mean absolute reduction of 3.2 ± 4.3 %. Pearson correlation analyses revealed statistically significant relationships between the relative reductions in LV and BMI (r=0.5253, p≤0.0001), between the absolute reduction in PDFF and the relative reduction in BMI (r=0.2451, p=0.0140), and between the absolute reduction in PDFF and the relative reduction in LV (r=0.3861, p=0.0001).

Conclusion: PLD significantly reduced LV and PDFF. This highlights the importance of the PLD in the improvement of LV and PDFF in morbidly obese patients and underscores why a PLD is routinely performed at our institutions.

16.19 Outcomes and Hospital Resource Utilization in Older Adult Patients After Motor Vehicle Crashes

P. P. Patel1, L. Gryder1, C. McNicoll1, C. Katona1, P. McGrew1, P. Chestovich1, J. Fildes1, D. Kuhls1  1University Of Nevada, Trauma & Critical Care, Las Vegas, Nevada, USA

Introduction: As average life expectancy increases, more older adults continue to drive or travel as passengers in motor vehicles. Crashes involving elderly patients have worse outcomes than those involving younger patients. The purpose of this study is to describe the injury burden, hospital resource utilization (HRU), hospital charges, and disposition incurred by older adult patients after a motor vehicle crash (MVC).

Methods: The Statewide Vehicular Trauma Database was queried for all individuals age ≥65 injured in a MVC from 2005-2014. Patients were stratified by age: 65-74, 75-84, and ≥85. Relevant data include demographics, crash factors, and injury severity score (ISS). Primary outcome was hospital mortality, with secondary outcomes of hospital and ICU length of stay (LOS), hospital charges, and disposition. Analysis was by Chi-squared and Kruskal-Wallis tests, with p<0.05 considered significant.

Results: A total of 2,029 individuals met inclusion criteria. The average age was 75; the majority were Caucasian, restrained, and seated in the driver position. Gender distribution was equal. Injury and HRU analyses showed a significantly higher average ISS and an increased mean number of hospital and ICU days in the 75-84 age group. On nonparametric analysis, the ≥85 group showed significantly increased ISS, hospital and ICU LOS, and hospital charges. Patients ≥85 were also more likely to die or to require disposition to a rehab facility or a nursing home after discharge.

Conclusion: This study demonstrates that although the older adult population is considered a high-risk group, there are significant differences in injury burden, outcomes and HRU within this cohort. Older adults had greater injury severity requiring a higher resource utilization while achieving less desirable outcomes. As the number of older adult trauma patients grows, special attention should be placed on those over age 85 to enhance their recovery after a MVC.

 

16.20 The Influence of Pancreatic Division Technique on Pancreatic Leak Rates Following Traumatic Injury

P. Hu1, R. Uhlich1, J. Kerby1, P. Bosarge1  1University Of Alabama at Birmingham, Acute Care Surgery, Birmingham, Alabama, USA

Introduction:
Pancreatic injury is a rare, potentially devastating consequence of abdominal trauma. While low grade injuries may be successfully managed conservatively, injuries to the pancreatic duct or head typically require operative intervention. Complications following pancreatic resection are historically associated with high rates of morbidity and mortality. We sought to evaluate the influence of intra-operative techniques on postoperative complications.

Methods:
A retrospective case control study was performed at an American College of Surgeons verified level 1 trauma center from 2011-2017. All adult patients admitted to the trauma surgery service were eligible for inclusion; pregnant patients and those under 18 years of age were excluded. Patients with pancreatic injury were identified from the trauma registry using ICD-10 codes. Pancreatic injuries were graded according to the AAST guidelines (Grades 1-5), with major injury identified as ≥ grade 3 (pancreatic ductal injury). Patients were stratified into cohorts according to the method used for pancreatic division and resection: stapled, cut and oversewn, stapled and oversewn, or cautery. Pancreatic leak was defined as a drain amylase level three times greater than normal serum amylase (103 U/L), according to institutional standard. Analysis was performed using χ2 tests for categorical variables and Student's t-test or one-way ANOVA for continuous variables. The primary outcome of interest was the rate of pancreatic leak following resection.
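The institutional leak definition above reduces to a fixed drain-amylase threshold; a trivial sketch:

```python
NORMAL_SERUM_AMYLASE_U_PER_L = 103.0  # institutional upper limit cited above

def is_pancreatic_leak(drain_amylase_u_per_l: float) -> bool:
    """Leak per the definition above: drain amylase > 3x normal serum amylase."""
    return drain_amylase_u_per_l > 3 * NORMAL_SERUM_AMYLASE_U_PER_L

assert is_pancreatic_leak(400.0)
assert not is_pancreatic_leak(200.0)
```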

Results:

52 patients were identified with pancreatic injury. The majority of patients (90.4%) underwent operative management. Pancreatic resection was required in 40.4% (21/52), with a majority undergoing stapled resection (52.3%). The remaining resections were performed by cut and oversewn (14.3%), stapled and oversewn (23.8%), and cautery (9.5%) techniques. Pancreatic leak was identified postoperatively in 76.2% (16/21) of patients, with no significant difference between the different types of resection (p=0.27).

 

Conclusion:

Pancreatic injury requiring resection results in significant rates of postoperative leak, regardless of intraoperative technique. Drain placement should be strongly considered at the time of the initial operation.

16.17 Clinical Outcomes In Patients Requiring Dialysis After Trauma: A National Trauma Database Analysis

A. E. Siletz1, J. Grotts2, C. E. Lewis1, A. Tillou1, H. Cryer1, A. Cheaito1  1University Of California – Los Angeles, Department Of Surgery, David Geffen School Of Medicine At UCLA, Los Angeles, CA, USA 2University Of California – Los Angeles, UCLA Department Of Medicine Statistics Core, David Geffen School Of Medicine At UCLA, Los Angeles, CA, USA

Introduction: Acute kidney injury (AKI) requiring renal replacement therapy (RRT) represents the most severe form of post-traumatic AKI and has been associated with poor outcomes. Reported incidence and clinical impact vary by study due to variations in study populations and definitions. The objective of this study was to determine the correlation between initiation of dialysis and clinical outcomes in trauma patients using a national dataset.

Methods:  All patients in the National Trauma Database from 2013-2014 with a diagnosis of AKI based on ICD9 code during admission for trauma were reviewed. Patients were excluded if they had no signs of life on arrival, were under age 18, or had pre-existing end-stage renal disease. A propensity score based on ISS, penetrating injury, age, and gender was used to match patients with AKI requiring dialysis with those with AKI who did not need dialysis.  A multivariate logistic regression model using dialysis, ISS, injury type, age, and gender as covariates was also constructed. 
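A minimal sketch of this propensity-matching step, assuming hypothetical NTDB-derived column names, might look like this with scikit-learn:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("ntdb_aki_cohort.csv")  # hypothetical extract
X = df[["iss", "penetrating", "age", "male"]]

# Propensity score: probability of receiving dialysis given the covariates above.
df["ps"] = LogisticRegression(max_iter=1000).fit(X, df["dialysis"]) \
                                            .predict_proba(X)[:, 1]

treated = df[df["dialysis"] == 1]
control = df[df["dialysis"] == 0]

# 1:1 nearest-neighbor matching on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_control = control.iloc[idx.ravel()]
```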

Results: Among adult patients surviving to admission for trauma without pre-existing end-stage renal disease, the incidence of AKI was 1.07% and the incidence of AKI requiring dialysis was 0.02%. A total of 17,668 trauma patients with AKI, of whom 282 received dialysis, were compared. Older age, male gender, black/African American race, and Medicare and Medicaid coverage were significantly associated with dialysis (p<0.02). Penetrating injury was more likely than blunt injury to be associated with dialysis (OR 3, 95% CI 2.3-4, p<0.001), and dialysis patients had higher median ISS scores (26.5, IQR 18-35.2 vs. 17, IQR 9-29, p<0.001). When patients were matched using a propensity score based on ISS, penetrating injury, and age, dialysis patients were found to have higher complication rates, including unplanned intubation (OR 3, 95% CI 1.6-5.6, p<0.001), unplanned return to the operating room (OR 7.3, 95% CI 3.8-14, p<0.001), acute lung injury/acute respiratory distress syndrome (OR 4.7, 95% CI 3-7.3, p<0.001), pulmonary embolism (OR 3.1, 95% CI 1.3-7.2, p=0.013), severe sepsis (OR 11.3, 95% CI 6.4-19.9, p<0.001), myocardial infarction (OR 4, 95% CI 1.5-10.7, p=0.009), and death (OR 3.8, 95% CI 2.7-5.2, p<0.001). Median hospital stay (27 vs. 8 days, p<0.001), ICU stay (19 vs. 5 days, p<0.001), and number of ventilator days (16 vs. 5 days, p<0.001) were significantly higher for dialysis patients. In a multivariate logistic regression model, initiation of dialysis was significantly associated with development of acute respiratory distress syndrome (OR 4.8, 95% CI 3.1-7.6, p<0.001), severe sepsis (OR 12.2, 95% CI 7.0-22.2, p<0.001), and mortality (OR 4.0, 95% CI 2.9-5.6, p<0.001).

Conclusion:

AKI requiring dialysis after trauma is rare. Risk factors include high ISS and penetrating injury.  The need for dialysis after AKI during admission for trauma is associated with increased complications, length of hospital stay, and mortality. 

 

16.18 Current Nutritional Practices and Associated Outcomes in Critically-Ill Trauma Patients

B. E. Haac1, R. Van Besien1, R. Jenkins1, A. Geyer2, J. Diaz1, D. Stein1  1University Of Maryland, R Adams Cowley Shock Trauma Center, Baltimore, MD, USA 2Air Force Institute Of Technology (AFIT/ENC), Wright-Patterson AFB, Ohio, USA

Introduction: Nutrition is an important component of support for critically-ill trauma patients, who often present in a state of catabolic stress, but recent research on this topic specific to trauma patients is limited. We sought to examine nutritional practices in a critically-ill trauma population and to identify baseline factors and outcomes associated with the timing, content, and route of nutrition.

Methods:  We conducted a retrospective review of adult critically-ill trauma patients admitted to the intensive care unit (ICU) for >72 hours. A multivariable regression model was built for each nutritional variable and outcome variable. Outcomes evaluated included number of operative trips, hospital and ICU length of stay (LOS), ventilator days, mortality, discharge destination, and hospital-acquired infections.

Results: 683 patients (mean ISS 24.4) were included. 461 patients received tube feeds within the first 7 days of admission. Two-thirds (n=297, 64%) of these were initiated on early enteral tube feeding within 48 hours. Injury pattern was associated with timing of nutrition initiation, time to goal tube feed rate, and percent of goal calories and protein received. Specifically, severe head injury (brain AIS=5) was independently associated with reaching a goal tube feed rate (aOR 3.1, p<0.01) and receiving a greater percent of goal calories (aOR 2.1, p=0.01) in the first 48 hours of admission, whereas patients without head injury (brain AIS=0) were less likely to initiate nutrition within 24 hours of admission (aOR 0.3, p<0.01). Higher admission GCS was also associated with decreased odds of reaching goal tube feeds within 24 hours (aOR 0.6, p<0.01). Later initiation of enteral nutrition after 48 hours was associated with increased ICU LOS (aOR 1.4, p<0.01) and more ventilator days (aOR 1.6, p<0.01) in all patients, and with increased pneumonia rates in patients who required gastrointestinal (GI) surgery for their injury (aOR 15.7, p=0.02). Increased percent of goal nutrition received in the first 48 hours was associated with more ventilator days (aOR 2.8, p<0.01) and longer ICU LOS (aOR 1.7, p<0.01). Increased percent of goal nutrition received in the first 7 days was associated with development of urinary tract infection (UTI) (aOR 5.4, p<0.01). Gastric tube feeds were associated with lower bacteremia incidence (aOR=0.4, p=0.01) and longer ICU LOS (aOR 1.2, p<0.01). There was no association of nutrition variables with mortality.

Conclusion: Injury pattern, especially presence of brain injury, may be predictive of ability to initiate early enteral nutrition, time to goal feeds and percent of goal nutrition received. Benefits of early initiation may include decreased LOS and ventilator days and reduced pneumonia rates in patients requiring GI surgery. Trophic feeds may be less likely to result in UTI, and gastric feeds may have a protective role in prevention of bacteremia.

 

16.14 Thyroid Trauma: Incidence, Mortality, and Concomitant Injury

D. Spencer1, A. Grigorian1, S. Schubl1, K. Awad1, D. Elfenbein1, T. Dogar1, J. Nahmias1  1University Of California – Irvine, Division Of Trauma, Burns & Surgical Critical Care, Orange, CA, USA

Introduction:  Traumatic injury to the thyroid is rare, and no large studies have previously been published. Although the thyroid is not considered an immediately vital structure, it lies in close proximity to several critical structures, such as the carotid arteries, trachea, esophagus, and cervical spine. We sought to describe the national incidence of traumatic thyroid injury as well as the mortality rate, rate of operative intervention, and frequencies of concomitant injury to surrounding structures. We hypothesized that isolated thyroid injury would have a lower mortality than thyroid injury with concomitant carotid artery, trachea, esophagus, or cervical spine injury.

Methods:  National Trauma Data Bank data from 2007-2015 were used to identify patients with thyroid injury. Demographics, associated injuries, operative repair, complications, and outcomes were analyzed. Odds ratios were calculated using a logistic regression model.

Results: 1,395 patients with thyroid injury were identified from over 6.7 million trauma patients. Yearly incidence was 0.02%. The majority of patients were female (79.6%), had a penetrating mechanism of injury (79.7%), and had isolated thyroid injury (59.7%). The most common concomitant injuries were to the trachea (25.9%), carotid artery (9.5%), and cervical spine (7.9%). The operative interventions most frequently performed were direct thyroid repair (13.9%), thyroid blood vessel repair (3.4%), and thyroidectomy (3.2%). No patients experienced postsurgical hypothyroidism. All-cause mortality was 15.6%. After controlling for age ≥65, ISS >25, and gender, non-isolated thyroid injury was shown to be an independent risk factor for mortality when compared to isolated thyroid injury (Odds Ratio 1.67, 95% Confidence Interval 1.17 – 2.34; p<0.05).

Conclusion: Thyroid injury in trauma patients is extremely rare. Interestingly, thyroid trauma is seen more often in females than in males. Isolated thyroid trauma presents less of a clinical challenge, with a lower risk of mortality than thyroid injury accompanied by concomitant injuries, even after controlling for significant covariates. When operative intervention is required, direct thyroid repair is more than four times as common as thyroidectomy. Regardless of injury type and operation, postsurgical hypothyroidism was not seen.

16.15 Using Injury Severity Score to Determine Venous Thromboembolism Risk in Trauma Patients

T. E. Hereford1, S. Ray1, R. D. Robertson1, M. K. Kimbrough1  1University Of Arkansas For Medical Sciences, Little Rock, AR, USA

Introduction:
Venous thromboembolisms (VTEs) continue to be a leading cause of death among trauma patients, and predicting which patients will develop a VTE can be difficult. This study investigated whether the Injury Severity Score (ISS) could be used in conjunction with the Abbreviated Injury Scale (AIS) to assess a trauma patient's risk of subsequent VTE development.

Methods:
Participants were identified by querying a trauma center registry; 2,213 patients were included for evaluation. Patients were categorized based on their ISS and the anatomical region with the greatest injury (determined by the AIS). Odds ratios for developing VTEs were calculated for each ISS category.
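The abstract does not report the underlying counts; for illustration, an odds ratio with a 95% confidence interval can be computed from any 2x2 table of VTE status by ISS category as follows:

```python
import math

def odds_ratio_with_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """OR and 95% CI from a 2x2 table:
    a = VTE, exposed category; b = no VTE, exposed;
    c = VTE, reference category; d = no VTE, reference.
    Counts would come from the registry; none are given in the abstract."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)
```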

Results:
The results showed that in most categories VTE risk increased as ISS increased. Patients with trauma to their head/neck, chest, or extremities with ISS values of 21 or greater were at significantly increased risk for VTE development. Patients in these categories with an ISS less than 21 seemed to have little or only moderately increased odds of developing a VTE, although these values were not statistically significant. Patients with abdominal trauma were at increased risk even with ISS values of 11-21. 

Conclusion:
Trauma to the head/neck region, chest, or extremities (including pelvis) with an Injury Severity Score of 21 or higher was associated with significantly increased odds of VTE development. Patients with abdominal trauma of any severity appeared to have increased odds of developing a VTE. Physicians should be aware of patients who fall into these categories and consider whether the risk of developing a VTE outweighs the risk of prophylactic treatment.
 

16.16 Diminished Physiologic Reserve Predicts Mortality in the Underweight Following Hemorrhagic Shock

J. O. Hwabejire1, B. Adesibikan1, T. A. Oyetunji2, O. Omole1, C. E. Nembhard1, M. Williams1, E. E. Cornwell III1, W. R. Greene3  1Howard University College Of Medicine, Surgery, Washington, DC, USA 2Children’s Mercy Hospital – University Of Missouri Kansas City, Surgery, Kansas City, MO, USA 3Emory University School Of Medicine, Atlanta, GA, USA

Introduction:  We have previously demonstrated that extremes of body mass index (BMI) are associated with poor outcomes following blunt traumatic hemorrhagic shock. In this study, we examined the risk factors for mortality in underweight patients following blunt trauma.

Methods:  The Glue Grant database was retrospectively analyzed. Patients with BMI <18.5 kg/m2 who met criteria for hemorrhagic shock after blunt trauma were included. Survivors were compared to non-survivors using univariate analysis. Multivariable analysis was used to determine predictors of mortality.

Results: There were 102 patients who met criteria for inclusion in the study. Their mean age was 46 years (SD=20); 62% were male, 89% White, and 5% Black. Mortality in this cohort was 52.9%, compared to 16.0% in all comers and 14.3% in patients with a normal BMI. Amongst the underweight, there were no differences between survivors and non-survivors in age, multiple organ dysfunction score, emergency room (ER) shock index, or pre-injury comorbidities. Compared to survivors, non-survivors were hypotensive in the ER (systolic BP: 110 ±27 vs. 87±38 mmHg, p=0.001), had higher ER lactate (7.1 ±4.1 vs. 4.1 ±2.5 mg/dL, p<0.001), were more coagulopathic (ER INR: 1.92 ±1.91 vs. 1.24±0.30, p=0.026), had a higher APACHE II score (35±6 vs. 28±7, p<0.001) and injury severity score, ISS (35±13 vs. 27±11, p=0.002), received more crystalloids (12696±6550 vs. 9796±4964 mL, p=0.014), and received more blood (6070±4912 vs. 2240±3658 mL, p<0.001) within 12 hours of presentation. When only patients with ISS >25 were compared, non-survivors were still more likely to be hypotensive (ER SBP: 112 ±28 vs. 87±36 mmHg, p=0.004) and acidotic (ER lactate: 7.4 ±4.4 vs. 4.4 ±3.0 mg/dL, p=0.006), received more blood (6174±4926 vs. 3024±4612 mL, p=0.011), and had a higher APACHE II score (35±6 vs. 29±5, p<0.001). In the multivariate analysis, after adjusting for ISS, the only independent predictor of mortality was the APACHE II score (OR: 1.35, CI 1.08-1.69, p=0.009).

Conclusion: The Acute Physiology and Chronic Health Evaluation (APACHE) II score independently predicts mortality in the underweight after blunt traumatic hemorrhagic shock. Underweight patients appear to lack the physiologic reserve to tolerate severe trauma.

 

16.11 When We Take the Time to Look: Completion Angiography After Major Vascular Injury Repair

S. A. Moore1, J. P. Hazelton3, Z. Maher2, B. L. Frank4, J. W. Cannon1, D. N. Holena1, N. D. Martin1, A. Goldenberg-Sandau3, M. J. Seamon1  1University Of Pennsylvania, Division Of Traumatology, Surgical Critical Care And Emergency Surgery, Philadelphia, PA, USA 2Temple University, Division Of Trauma & Surgical Critical Care, Philadelphia, PA, USA 3Cooper University Hospital, Division Of Trauma Surgery, Camden, NJ, USA 4Geisinger Health System, General Surgery, Scranton, PA, USA

Introduction: Despite the advances in operative vascular trauma care achieved over the past several decades, these challenging injuries still result in significant morbidity and mortality. Completion angiography (CA) immediately following repair of major vascular injury (MVI) has been advocated to limit adverse outcomes, but adequate data supporting or refuting this practice are lacking. We hypothesized that CA after operative MVI repair identifies unsatisfactory repairs requiring intraoperative revision.

Methods: A multi-center, retrospective cohort study of consecutive patients with operative MVI was conducted at 3 urban, Level-I centers (2005-2013).  Patients (≥15 years) with MVI of the neck, torso, or extremities (proximal to elbows/knees) requiring operative management were included.  Demographics, clinical variables and revision risk factors were analyzed with respect to our primary study endpoint, intraoperative revision following CA.  Secondary endpoints included outcomes after MVI repair.

Results: Of the 435 patients identified in the study, the majority were young (mean = 31 years) male (89%) patients with penetrating (84%) trauma. Patients who underwent CA after repair (n=128) were compared to patients who did not (n=303). Although patients sustaining blunt injuries and those with associated fractures were more likely to undergo CA (p<0.01), no differences with respect to age, gender, Injury Severity Score (ISS), initial systolic blood pressure, transfusion requirement, or operating surgeon subspecialty were detected between study groups (all p>0.05). Completion angiography study group patients were more likely to undergo immediate intraoperative revision than those who did not undergo CA (CA, 21/128 [16.4%] vs. no CA, 4/303 [1.3%]; p<0.01, Figure 1). Importantly, there were no differences in fasciotomy, delayed revision, arterial patency at discharge, or limb salvage rates between study comparison groups.

Conclusion: CA after operative repair led to intraoperative revision in 16% of MVI patients.  These data suggest that all patients undergoing operative MVI repair should undergo CA, as this additional diagnostic adjunct may prevent later adverse outcomes caused by unsatisfactory repairs.

 

16.13 Could Retained Bullet Fragments Be a Significant Source of Blood Lead Levels in Trauma Patients?

S. A. Eidelson1, C. A. Karcutskie1, A. B. Padiadpu1, M. B. Mulder1, S. K. Madiraju1, G. D. Garcia1, G. D. Pust1, N. Namias1, C. I. Schulman1, K. G. Proctor1  1University Of Miami, Miami, FL, USA

Introduction:
On Feb 17, 2017, the CDC reported that retained bullet fragments (RBF) may be a source of elevated blood lead levels (BLL) in those with no other known exposure.  This conclusion was based on voluntary reports of BLL>10 µg/dl to the CDC’s National Institute for Occupational Safety and Health. Roughly 75,000 non-fatal firearm injuries occur annually in the United States and routine screening for BLL is rarely performed. Thus, the incidence and magnitude of BLLs from RBF are unknown, but the CDC reports that any measurable BLL is unsafe.  We test the hypothesis that BLLs are elevated in trauma patients with RBF.

Methods:
BLL were measured in 23 consecutive adult patients with imaging-proven RBF admitted to an American College of Surgeons verified level 1 trauma center from 2/15/17-7/16/17. BLL is considered elevated at >5 μg/dL. Data are expressed as mean±standard deviation if parametric and median if nonparametric. Differences were assessed at p<0.05.

Results:
The study population was 95.7% male, 33±15 yrs, 25±4 kg/m2, and 70% African American. Of the twenty-three patients with RBF, 35.0% (n=8) were found to have elevated blood lead levels and 74.0% (n=17) were found to have measurable lead levels.

Conclusion:

These preliminary data provide basic proof of concept that measurable BLL occur in over half of trauma patients with RBF, regardless of days exposed.  Potential deleterious effects include impaired renal function with BLL <5 μg/dL, an increased risk for hypertension and essential tremor with BLL between 5-10 μg/dL, and neurocognitive deficits and adverse reproductive outcomes (including spontaneous abortion and reduced birthweight) with BLL ≥10 μg/dL.  Thus, patients with RBF may benefit from precautionary counseling on lead poisoning and the importance of baseline and periodic monitoring. Moving forward, there may also be a potential benefit of surgical retrieval.   

 

16.10 Provider Beliefs and Practice for the Use of Long-Acting Pain Medication in the Adult Burn Patient

K. Sloan1, J. Cartwright2, J. Liao1, Y. M. Liu1, K. S. Romanowski1  1University Of Iowa Hospitals And Clinics, Department Of Surgery, Iowa City, IA, USA 2University Of Michigan, Ann Arbor, MI, USA

Introduction: Achieving adequate pain control is a vital yet challenging component of burn management. Pharmacological interventions with opioids and adjuvants have long been the cornerstone of burn pain management. Since background pain is innate to burn injuries, long-acting pain medications (LAPM) are particularly advantageous for attaining stable analgesia. However, the literature on their utilization and efficacy is lacking. The purpose of this survey was to assess burn providers' beliefs and practices surrounding LAPM in burn analgesic management.

Methods: Following approval by the Institutional Review Board and the American Burn Association (ABA) Survey Advisory Panel, a 31-item survey concerning LAPM was distributed electronically through Google Forms and REDCap to all physician, physician assistant, and nurse practitioner members of the ABA. Descriptive statistics and analysis of variance were conducted as indicated.

Results: Of 194 respondents (36.7% response rate), 101 (52%) identified as prescribers of pain medications, with 93% utilizing LAPM. A majority of prescribers (73.4%) reported being likely or extremely likely to prescribe LAPM to burn patients. The most common trigger for initiation was “patient’s complaints of pain” (82%). Practitioners were evenly divided on whether burn size would influence their use of LAPM (46% no, 43% yes). Almost half of the respondents (47.25%) would utilize LAPM in burns as small as <10% TBSA. Patient age was cited as a consideration in the use of LAPM by 44% of practitioners, with 13.5% of practitioners not using LAPM in patients aged 70 or older. In adults, methadone was the most common first line therapy (44%), closely followed by extended-release morphine (31%). There was no consistent starting dose or regimen identified among practitioners. Only 21 (22.3%) practitioners cited that their institution had a protocol for the administration of LAPM. Clinical response was the principal reason for altering the initial medication choice (18%), with excessive sedation being the chief variable prompting reduction or withholding of doses (90%). Analysis of provider perceptions of the effectiveness of LAPM revealed that over 97% agreed/strongly agreed that LAPM diminish background pain. While only 52% agreed/strongly agreed that LAPM reduce the overall adjuvant pain medication requirement, over 90% agreed/strongly agreed that the usage of LAPM was associated with a reduction in the amount and/or dose of short-acting opioids.

Conclusions: While LAPM use was common among survey respondents and their attitudes toward it were largely positive, there was variation in individual practice and a lack of institutional protocols for use. More research into the most effective administration, dosing, and weaning protocols is needed in light of the worsening opioid addiction crisis.