81.01 Incidence and Risk Factors Associated with Ulcer Recurrence among Patients with Diabetic Foot Ulcers

C. J. Abularrage1, J. K. Canner2, N. Mathioudakis3, C. Lippincott4, R. L. Sherman1, C. W. Hicks1  1The Johns Hopkins University School Of Medicine,Division Of Vascular Surgery And Endovascular Therapy,Baltimore, MD, USA 2The Johns Hopkins University School Of Medicine,Department Of Surgery,Baltimore, MD, USA 3The Johns Hopkins University School Of Medicine,Division Of Endocrinology And Metabolism,Baltimore, MD, USA 4The Johns Hopkins University School Of Medicine,Division Of Infectious Diseases,Baltimore, MD, USA

Introduction:

Recent studies demonstrate favorable diabetic foot ulcer (DFU) healing outcomes with the implementation of a multidisciplinary team. However, the long-term outcomes of this approach to DFU care are unknown. We aimed to describe the incidence of and risk factors associated with ulcer recurrence after initial complete healing among a cohort of DFU patients treated in a multidisciplinary setting.

Methods:
All patients presenting to our multidisciplinary diabetic limb preservation service from 6/2012-04/2018 were enrolled in a prospective database. Wounds were classified according to the SVS WIfI classification at initial presentation. The incidence of ulcer recurrence after complete wound healing was assessed per limb using the Kaplan-Meier method, and a stepwise multivariable Cox proportional hazards model was created to identify independent predictors of ulcer recurrence.
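
As a rough illustration of this analysis, the sketch below uses the Python lifelines library under assumed file and column names (days_to_recurrence, recurred, and covariates); lifelines' CoxPHFitter does not itself perform stepwise selection, so that step is omitted.

```python
# Minimal sketch of the per-limb recurrence analysis (column names hypothetical).
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("dfu_limbs.csv")  # one row per healed limb (hypothetical file)

# Kaplan-Meier estimate of freedom from ulcer recurrence after complete healing
kmf = KaplanMeierFitter()
kmf.fit(df["days_to_recurrence"], event_observed=df["recurred"])
print(kmf.survival_function_.head())

# Cox proportional hazards model over candidate predictors
cph = CoxPHFitter()
cph.fit(
    df[["days_to_recurrence", "recurred", "age", "abnormal_proprioception", "hba1c"]],
    duration_col="days_to_recurrence",
    event_col="recurred",
)
cph.print_summary()  # hazard ratios with 95% CIs
```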

Results:
A total of 244 patients with 304 affected limbs were included. Mean age was 59.2±3.8 years, 62.7% of patients were male, and 61.9% were black. Nearly all patients (95.1%) had loss of protective sensation, with abnormal proprioception in 23.9%. Ulcer recurrence occurred in 38.5% of limbs at a mean time of 310±30 days. Only 12.8% of recurrent ulcers occurred at the same site as the initial wound. Ulcer recurrence rates at one and three years post-healing were 30.6±3.0% and 64.4±5.2%, respectively (Figure), and did not significantly differ by the WIfI stage of the initial wound (P=0.34). Recurrent ulcers were smaller (4.4±1.1 cm2 vs. 8.2±1.2 cm2; P=0.04) and had a lower WIfI stage (stage 4: 7.7% vs. 22.4%; P<0.001) than initial ulcers. Time from ulcer onset to assessment was lower for recurrent ulcers (0.9±0.3 vs. 2.4±0.2 months; P<0.001), and wound healing time was significantly reduced (95.0±9.8 vs. 131.8±7.0 days; P=0.004). Independent predictors of ulcer recurrence included abnormal proprioception [HR 1.57 (95% CI 1.02-4.43); P=0.04] and younger age [HR 1.02 per year (95% CI 1.01-1.04)]. Patient race, BMI, socioeconomic status, comorbidities, blood sugar control (hemoglobin A1c), and wound location were not independently associated with ulcer recurrence.

Conclusion:
In this prospective cohort of diabetic foot ulcer patients, ulcer recurrence occurred in nearly two-thirds of limbs within three years. Importantly, both time to diagnosis and time to healing were significantly lower for recurrent ulcers, and downstaging was common. These data suggest that engaging DFU patients in a multidisciplinary care model with frequent follow-up and focused patient education may decrease DFU morbidity.

80.10 Prospective, Randomized Study of Short-Term Weight Loss Outcomes Using Gamification-Based Strategy

P. Kaur1, S. V. Mehta5,7, T. Wojda3, P. Bower4, M. Fenty6, M. Kender8, K. Boardman7, M. Miletics7, J. C. Stoltzfus1, S. P. Stawicki1,2  1St. Luke’s University Health Network,Department Of Research & Innovation,Bethlehem, PA, USA 2St. Luke’s University Health Network,Department Of Surgery,Bethlehem, PA, USA 3St. Luke’s University Health Network,Department Of Family Medicine,Warren, PA, USA 4St. Luke’s University Health Network,Development,Bethlehem, PA, USA 5St. Luke’s University Health Network,Department Of Gastroenterology,Bethlehem, PA, USA 6St. Luke’s University Health Network,Information Technology – Innovation Program,Allentown, PA, USA 7St. Luke’s University Health Network,Weight Management Center,Allentown, PA, USA 8St. Luke’s University Health Network,St. Luke’s Internal Medicine- Miners,Coaldale, PA, USA

Introduction: In response to the obesity epidemic, various strategies have been proposed. While surgical approaches remain the most effective long-term management option, the effectiveness and sustainability of short-term, non-surgical weight loss remain controversial. Gamification (e.g., point systems and constructive competition) of weight loss activities may help achieve more sustainable results. We hypothesized that the use of a smartphone-based gamification platform (SBGP) would facilitate sustained non-surgical weight loss at 3 months. In addition, we sought to examine whether the intensity of SBGP participation correlates with outcomes, and whether it has parallel effects on hemoglobin A1c (HA1c) levels.

Methods: An IRB-approved, prospective, randomized study (01/2017-02/2018) included 100 bariatric surgery candidates, randomized to either SBGP (n=50) or No SBGP (NSBGP, n=50). Following enrollment, SBGP patients installed a mobile app (Picture It! Ayogo, Vancouver, Canada) and received usage instructions. Patients were followed for 3 months (weight checks, patient engagement questionnaires, health-care encounters). Mobile app usage was also tracked (number of interactions, real-time feedback). Primary (weight loss) and secondary (HA1c) outcomes at 3 months were then contrasted between the SBGP and NSBGP groups using non-parametric statistical testing. In addition, the intensity of app use was contrasted with weight loss within the SBGP group. Participation was measured on a low-intermediate-high scale (a composite of in-app encouragements, likes, answers, and “daily quest” inputs).

Results: After losing 4 patients to follow-up, 49 SBGP and 47 NSBGP patients completed the study. There were no significant demographic differences between the two groups (mean age 38.4±10.4, median weight 273 lbs, 81% female, 28% diabetic, 44% hypertensive). We noted no significant difference in average weight loss at 3 months between the SBGP (3.94 lbs) and NSBGP (1.45 lbs) groups. However, within the SBGP group, actively engaged patients lost more weight (8.33 lbs) than less engaged patients (2.51 lbs). Of note, absolute measured weight loss was greater among women (Figure 1A). We did not note statistically significant differences in HA1c among the groups (Figure 1B).

Conclusion: This study suggests that when gamification is used as an adjunct to non-surgical approaches to weight loss, active patient engagement and female gender may be the strongest determinants of success. Our findings will be important in guiding strategies to optimize weight loss through customization and personalization of SBGP approaches to maximize patient engagement and clinical results.

80.09 The Prognostic Value of NLR in Patients Who Underwent Neoadjuvant Treatment Before Gastrectomy

Y. Zager1, A. Dan1, Y. Nevo1, L. Barda1, M. Guttman1, Y. Goldes1, A. Nevler1  1Sheba Medical Center,Surgery B,Ramat Gan, Israel

Introduction:
Gastric cancer is the fifth most common cancer worldwide. This aggressive gastrointestinal cancer has a grim 5-year survival rate of only 30% and is considered the third leading cause of cancer deaths worldwide. Studies in recent years have identified hematological markers such as the neutrophil-to-lymphocyte ratio (NLR) as potent prognostic immune biomarkers in various malignant conditions, including gastric adenocarcinoma (GC). However, chemotherapy has been shown to affect systemic immune responses and local immune signatures and thus may affect NLR. We therefore aimed to assess the prognostic value of post-neoadjuvant NLR as a biomarker in gastric cancer patients with resectable disease.

Methods:
We conducted a retrospective analysis of a prospectively maintained GC database at our institution. We collected oncologic, perioperative, and survival data on gastric adenocarcinoma patients who underwent curative-intent gastrectomy and D2 lymphadenectomy between 2010 and 2015. The neutrophil-to-lymphocyte ratio was calculated from preoperative laboratory tests. High and low NLR groups were stratified using NLR≥4 as the threshold. Kaplan-Meier analysis and Cox multivariate regression models were used for survival analysis to assess the prognostic value of clinical, histologic, and hematological variables.
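
The NLR stratification and unadjusted survival comparison could look like the sketch below (lifelines again; file and column names are hypothetical stand-ins, not the authors' code).

```python
# Sketch: compute NLR, stratify at the NLR >= 4 threshold, compare DFS curves.
import pandas as pd
from lifelines.statistics import logrank_test

df = pd.read_csv("gastrectomy_cohort.csv")  # hypothetical file
df["nlr"] = df["neutrophils"] / df["lymphocytes"]  # from preoperative labs
low, high = df[df["nlr"] < 4], df[df["nlr"] >= 4]

res = logrank_test(
    low["dfs_months"], high["dfs_months"],
    event_observed_A=low["recurrence"], event_observed_B=high["recurrence"],
)
print(f"log-rank p = {res.p_value:.3f}")
```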

Results:

We reviewed the data of 174 patients, of whom 121 (70%) had the complete necessary data. Median follow-up duration was 20 months (range 1-88). A total of 54 patients received neoadjuvant chemotherapy (NACT). Postoperatively, high NLR was associated with greater morbidity (ranked with the Clavien-Dindo classification, p=0.011). The rate of major complications (Clavien-Dindo≥3) was significantly higher in the high NLR group (31.25% vs. 5.77%, p=0.015).

Among patients who received NACT, those in the low NLR group had significantly improved disease-free survival (mean DFS, 48.9±5.4 months vs 27.7±10.0 months, p=0.04). Low NLR was not significantly associated with overall survival (OS). Multivariate analysis demonstrated NLR (p=0.018, HR=0.337, CI=0.12-0.947) and AJCC staging (p=0.01) to be independent prognostic factors associated with DFS.

Conclusion:
Our results suggest that NLR may have prognostic value among gastric cancer patients planned for curative-intent surgery who underwent NACT. These effects are evident mainly in terms of disease-free survival and perioperative complications. Further studies assessing the value of NLR in predicting chemotherapy response are underway.

80.08 Superiority of Esophageal Reconstruction by Pedicled Jejunal Flap with Microvascular Augmentation

G. Takiguchi1, T. Nakamura1, H. Hasegawa1, M. Yamamoto1, Y. Matsuda1, S. Kanaji1, K. Yamashita1, T. Oshikiri1, T. Matsuda1, S. Suzuki1, Y. Kakeji1  1Kobe University Graduate School of Medicine,Gastrointestinal Surgery,Kobe, HYOGO, Japan

Introduction: The optimal method of safe and secure esophageal reconstruction in patients whose stomach is unavailable remains an unsettled issue. Recently, the number of cases using a pedicled jejunal flap (PJF) as an alternative conduit when the stomach is unavailable has been increasing. The objective of this study was to elucidate the advantages of reconstruction by PJF.

Methods: Forty-nine patients whose stomach was unavailable as a conduit following esophagectomy were enrolled in this study: 10 patients underwent ileo-colon (IC) reconstruction after esophagectomy from January 2005 to January 2011; thereafter, 39 patients underwent esophageal reconstruction by PJF with microvascular augmentation from February 2011 to January 2018. Surgical outcomes, complications, perioperative serum albumin levels, and postoperative body mass index (BMI) changes were retrospectively reviewed and compared between the IC and PJF groups.

Results: The anastomotic leakage rate was significantly lower in the PJF group than in the IC group (10.3% vs. 50.0%, P=0.011). There was no severe diarrhea in the PJF group, whereas it was observed in 30.0% of the IC group. The mean serum albumin level was higher throughout the postoperative period in the PJF group than in the IC group. In particular, the PJF group showed significantly better recovery of serum albumin at two weeks after operation (2.70 g/dl vs 2.20 g/dl, P=0.003). The mean rate of postoperative BMI decrease was lower in the PJF group than in the IC group. In the IC group, one patient died of postoperative pneumonia and cerebral infarction; there was no mortality in the PJF group.

Conclusion: Reconstruction by PJF with microvascular augmentation following esophagectomy was superior to reconstruction by IC with respect to anastomotic leakage and severe diarrhea. PJF also offered earlier recovery of postoperative serum albumin and better maintenance of body weight than IC. PJF may therefore be a better choice than IC for reconstruction after esophagectomy in patients whose stomach is unavailable.

80.07 Bariatric Surgery in Vulnerable Populations: Early Look at Affordable Care Act’s Medicaid Expansion

K. M. Gould1,2,4, A. Zeymo1,2, K. S. Chan1,2,4, T. DeLeire2,4, N. Shara1,4, T. R. Shope3,4, W. B. Al-Refaie1,2,3,4  1MedStar Health Research Institute,Washington, DC, USA 2MedStar-Georgetown Surgical Outcomes Research Center,Washington, DC, USA 3Integrated Surgical Services of MedStar Washington Region,Washington, DC, USA 4Georgetown University,Washington, DC, USA

Introduction: Obesity disproportionately affects vulnerable populations. Bariatric surgery is a long-term effective treatment for obesity and obesity-related complications; however, utilization rates of bariatric surgery are lower for racial minorities, low-income persons, and publicly-insured patients. The Affordable Care Act’s (ACA) Medicaid expansion increased access to health insurance for millions of low-income adults, but its impact on documented disparities in utilization of bariatric surgery by vulnerable populations has not been evaluated. We sought to determine the impact of the ACA’s Medicaid expansion on disparities in the utilization rates of bariatric surgery by insurance, income, and race.

Methods: 47,974 non-elderly adult patients (aged 18-64) who underwent bariatric surgery from 2012-2015 were identified in two Medicaid expansion states (Kentucky and Maryland) vs. two non-expansion control states (Florida and North Carolina) using the Healthcare Cost and Utilization Project’s State Inpatient Databases. Poisson interrupted time series analyses were conducted to determine the adjusted incidence rates of bariatric surgery overall and by insurance (Medicaid vs. privately-insured vs. uninsured), income (high- vs. low-income), and race (African Americans vs. whites). The differences in the counts of bariatric surgery by insurance, income, and race were calculated to measure the gap in utilization rates of bariatric surgery.
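
A minimal sketch of a Poisson interrupted time series in Python with statsmodels, assuming hypothetical quarterly count data; the actual study models are more elaborate (covariate adjustment, state contrasts).

```python
# Interrupted time series: baseline trend, post-expansion level shift, slope change.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

ts = pd.read_csv("quarterly_counts.csv")  # columns: quarter, post (0/1), count -- hypothetical
first_post = ts.loc[ts["post"] == 1, "quarter"].min()
ts["quarters_post"] = (ts["quarter"] - first_post).clip(lower=0)  # 0 before expansion

its = smf.glm(
    "count ~ quarter + post + quarters_post",
    data=ts,
    family=sm.families.Poisson(),
).fit()
print(its.summary())  # exp(coef of quarters_post) ~ per-quarter rate change post-expansion
```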

Results: After the ACA’s Medicaid expansion, the adjusted incidence rates of Medicaid-insured and low-income bariatric surgical patients increased by 16.6% and 4.2% per quarter, respectively, in expansion states. No significant marginal changes were observed in the adjusted incidence rates of privately-insured and high-income bariatric surgical patients post-ACA in these expansion states. These changed rates narrowed the measured gap in counts of bariatric surgery by insurance status and income in expansion states. In contrast, the overall trend in the utilization rate of bariatric surgery for African Americans vs. whites remained constant pre- and post-expansion, resulting in an unchanged gap in counts of bariatric surgery by race in expansion states (Table).

Conclusion: The Medicaid expansion under the ACA reduced the gap in bariatric surgery rates by income and insurance status, but racial disparities persisted. Future research should track these trends and focus on identifying other factors that can reduce disparities in bariatric surgery for minority patients.

80.06 Mesh Reinforcement of Paraesophageal Hernia Repair: Trends and Outcomes from a National Database

K. A. Schlosser1, S. R. Maloney1, T. Prasad1, V. A. Augenstein1, B. T. Heniford1, P. D. Colavita1  1Carolinas Medical Center,Division Of Gastrointestinal And Minimally Invasive Surgery,Charlotte, NC, USA

Introduction:
Mesh placement in paraesophageal hernia repair (PEHR) is controversial. Following encouraging early results, Oelschlager et al demonstrated in 2012 that mesh yielded no reduction in recurrence at five years. This study examines trends in mesh use before and after that publication, as well as outcomes of PEHR.

Methods:
The American College of Surgeons National Surgical Quality Improvement Program was queried for patients who underwent PEHR with or without mesh (2010-2016). Bariatric procedures were excluded. Demographics, operative approach, and outcomes were compared over time.

Results:

20,798 patients underwent PEHR from 2010-2016; 90.8% were performed laparoscopically (LPEHR). Mean age was 62.1±14.0 yr, mean BMI was 30.2±6.2 kg/m2, 70.9% were female, 9.0% had diabetes, and 9.1% were active smokers. Most cases were elective (88.9%) and without mesh (61.2%). LPEHR patients had higher BMI (30.3±6.2 vs 29.6±6.7, p<0.0001) and lower rates of reoperation, readmission, mortality, overall complications, and major complications (2.7 vs 4.8%, 6.3 vs 9.9%, 0.6 vs 3.0%, 7.3 vs 21.5%, and 3.9 vs 11.4%, respectively; all p<0.0001). Mesh placement was more common in LPEHR (39.8 vs 29.3%, p<0.0001).

In primary LPEHR with mesh, patients were older (63.1±13.5 vs. 61.0±14.3 yr, p<0.0001) and more obese (BMI 31±5.9 vs 30.4±6.4, p=0.0003). Mesh placement was not associated with adverse outcomes. Trends in LPEHR with mesh were examined over time. From 2010 to 2016, mesh placement decreased from 46.2% to 37.0% of LPEHRs (Figure 1). Mean operative time for LPEHR with mesh also decreased (176.0±71.0 to 152.9±73.3 min), while mean operative times for LPEHR without mesh were consistently lower (148.6±71.4 to 134.7±70.4 min). There were no significant changes in comorbidities or adverse outcomes over time.

On multivariate analysis controlling for potential confounding factors, COPD was most strongly associated with multiple adverse outcomes, including reoperation (OR 1.4, CI 1.02-2.0), readmission (OR 1.17, CI 1.03-1.33), mortality (OR 1.57, CI 1.04-2.36), any complication (OR 1.81, CI 1.48-2.2), and major complications (OR 1.78, CI 1.36-2.31). Other factors associated with adverse outcomes included older age, higher BMI, male sex, non-elective repair, contaminated operation, diabetes, steroid use, and smoking.

Conclusion:
The placement of mesh during LPEHR is not associated with adverse outcomes despite an older patient population. Use of mesh with LPEHR is decreasing, with no apparent adverse impact on available short-term patient outcomes. Further research should investigate patient factors not captured by this national database, such as symptoms, hernia recurrence, and hernia type and size. Additionally, mesh type and fixation in these cases need to be examined separately, and short- and long-term outcomes further defined.

80.05 Association Between Intraoperative Leak Testing and 30-Day Outcomes After Bariatric Surgery

M. C. Cusack2, M. Venkatesh3, A. Pontes3, G. Shea3,4, D. Svoboda3, N. Liu3, J. Greenberg3, A. Lidor3, L. Funk3,4  4William S. Middleton VA,Madison, WI, USA 2Indiana University School Of Medicine,Indianapolis, IN, USA 3University Of Wisconsin-Madison,Madison, WI, USA

Introduction: Bariatric surgery has become much safer over the past two decades; however, postoperative complications remain a concern. Intraoperative leak testing is commonly performed to minimize the risk of postoperative complications, but its impact on outcomes is unclear. The aim of this study was to determine if intraoperative leak testing during sleeve gastrectomy or Roux-en-Y gastric bypass decreases the risk of 30-day postoperative leaks, bleeding, readmissions, and reoperations.

Methods: This was a retrospective cohort study utilizing 2015 and 2016 data from the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) database, which includes preoperative, operative, and postoperative data from more than 700 accredited bariatric surgery centers nationally. Postoperative leak was defined as a drain present for >30 days, organ space surgical site infection, or leak-related 30-day readmission, reoperation, or intervention. Postoperative bleed was defined as transfusion within 72 hours or bleed-related 30-day readmission, reoperation, or intervention. Patient characteristics and postoperative outcomes were analyzed via Chi-squared tests for categorical variables.
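
For a single comparison, the chi-squared test reduces to a 2x2 contingency table, as in this sketch (counts are illustrative, not study data).

```python
# Chi-squared test: 30-day leak vs. no leak, by whether a leak test was performed.
from scipy.stats import chi2_contingency

#           leak   no leak
table = [[  450,   90_000],   # intraoperative leak test performed (illustrative)
         [  120,   40_000]]   # no leak test (illustrative)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```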

Results: 237,081 patients were included in the study cohort. 29.2% underwent gastric bypass, while 70.8% underwent sleeve gastrectomy. 79.2% were female, and the mean age was 44.7 years (SD 11.9). For sleeve gastrectomy patients, intraoperative leak testing was associated with slightly higher rates of 30-day postoperative leak but lower rates of bleeding, reoperation, and readmission within 30 days. For gastric bypass patients, intraoperative leak testing was associated with higher rates of 30-day postoperative leaks and bleeds but lower rates of reoperation and readmission (Table 1). Complications, readmissions, and reoperations were 2-3 times more common in bypasses vs. sleeves regardless of whether a leak test was performed. All results were statistically significant (p<0.05).

Conclusion: In this retrospective study of a national sample of bariatric surgery patients, intraoperative leak testing was associated with paradoxically higher rates of 30-day postoperative leaks for both sleeve gastrectomy and bypass patients but lower rates of reoperations and readmissions. However, given the small differences associated with leak testing, its utility is unclear. Gastric bypass was associated with higher complication rates compared to sleeve gastrectomy during the 30-day postoperative period.

80.04 Interim Results from a Prospective Human Study of the Immuno-metabolic Effects of Sleeve Gastrectomy

T. Lo1, G. Williams1, K. Heshmati1, A. Tavakkoli1, D. C. Croteau-Chonka1, E. G. Sheu1  1Brigham And Women’s Hospital,Metabolic Surgery,Boston, MA, USA

Introduction:
Laparoscopic sleeve gastrectomy (LSG) has proved to be an effective weight loss procedure with a positive impact on obesity-related comorbidities. We hypothesized that these effects of LSG would be reflected in immuno-metabolic changes in a longitudinal human cohort study.

Methods:

Prospective data were collected from enrolled human subjects at a single institution. Weight, comorbidities, pulmonary function tests, and trends in blood biomarkers (HbA1c, inflammatory and hormonal biomarkers) were observed from preoperative baseline to 1 year at 3-month follow-up intervals. Subcutaneous and omental adipose tissue biopsies were collected perioperatively, in addition to leukocytes every 3 months, for RNA sequencing. This abstract presents our interim analysis of immuno-metabolic and hormonal profiling.

Results:
16 subjects were enrolled (M:F, 3:13; mean age, 45 years; mean body mass index (BMI) 43.18±5.78 kg/m2). 13 subjects have completed their 3-month follow-up visit, with 1 subject dropout. There was significant mean total body weight loss at 3 months (17.2±1.2%) and at 6 months (24.99±3.70%). Improvements in obesity-related comorbidities were observed, either as disease remission or as reduction in medication: 75% of patients with hypertension, 50% with type 2 diabetes, and 50% with dyslipidemia ceased their medication requirements by 3 months after LSG. Significant improvements in hormonal biomarkers such as insulin (P<0.001), HbA1c (P<0.05), ghrelin (P<0.001), and leptin (P<0.001) were seen by 3 months after LSG. Surprisingly, reductions in ghrelin levels did not predict weight loss. Immunologic markers such as total white cell count, neutrophils, and C-reactive protein (CRP) decreased significantly as early as 3 months compared to baseline. Two patterns of CRP response were seen: one subset of subjects had elevated CRP at baseline that resolved to normal by 3 to 6 months post-op; a second subset had normal CRP levels at baseline that remained stable post-op. Subjects with a low baseline CRP achieved more weight loss (P<0.001). White cell composition was also altered after LSG, with a significant decrease in neutrophils and increase in lymphocytes. Changes in neutrophil and lymphocyte fractions were reduced in subjects with metabolic diseases (P<0.01), whilst other immunological markers and weight outcomes did not differ between the two groups.

Conclusion:
This interim analysis suggests that LSG induces significant immuno-metabolic changes in obese individuals as early as 3 months post-operatively. The improvement in CRP, as well as the alteration in white cell composition, tracks closely with weight loss, suggesting that the immune response plays a role in the effects of LSG. Future analyses including a larger sample size and RNA sequencing data will provide additional insights into predicting weight outcomes and metabolic response after LSG.

80.03 Bariatric Surgery Independently Associated with Reduction in Colorectal Lesions

M. Kwak1, J. H. Mehaffey1, R. B. Hawkins1, B. Schirmer1, C. L. Slingluff1, P. T. Hallowell1, C. M. Friel1  1University Of Virginia,Department Of Surgery,Charlottesville, VA, USA

Introduction:
While bariatric surgery has demonstrated excellent long-term weight loss results, little is known about secondary effects such as cancer risk. Previous studies have shown obesity is a risk factor for colorectal cancer and possibly precancerous colorectal polyp formation, but it is unclear whether bariatric surgery could potentially mitigate this risk. We hypothesized that bariatric surgery would decrease the risk of developing colorectal lesions (defined as new development of colorectal cancer and precancerous colorectal polyps).

Methods:
All patients (n=3,676) who received bariatric surgery (gastric bypass, sleeve gastrectomy, or gastric banding) at a single institution (1985-2015) were included in the study. Additionally, obese patients (n=46,873) from an institutional data repository were included as controls. Cases and controls were propensity score matched 1:1 by demographics, comorbidities, BMI, and socioeconomic factors. The matched cohorts were compared by univariate analysis and conditional logistic regression.
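
A simplified sketch of 1:1 propensity score matching (greedy nearest neighbor on the score, matching with replacement for brevity; file and column names hypothetical). The study's actual matching and conditional logistic regression may differ in detail.

```python
# Estimate propensity scores, then match each surgical patient to the nearest control.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("obese_cohort.csv")  # hypothetical file
covars = ["age", "sex", "bmi", "diabetes", "median_income"]

ps_model = LogisticRegression(max_iter=1000).fit(df[covars], df["bariatric_surgery"])
df["ps"] = ps_model.predict_proba(df[covars])[:, 1]

treated = df[df["bariatric_surgery"] == 1]
controls = df[df["bariatric_surgery"] == 0]

nn = NearestNeighbors(n_neighbors=1).fit(controls[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat([treated, controls.iloc[idx.ravel()]])  # 1:1 matched cohort
```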

Results:
A total of 4,462 patients (2,231 per group) with a median follow-up of 7.8 years were well matched, with no significant baseline differences in BMI (49 vs 48 kg/m2, p=0.26), female gender (51% vs 50%, p=0.16), or age (43 vs 43 years, p=0.63), as well as other comorbidities (all p>0.05). The surgical cohort had significantly more weight loss (55.5% vs -1.4% reduction in excess BMI, p<0.0001) and developed significantly fewer colorectal lesions (2.4% vs 4.8%, p<0.0001). There were no significant differences in polyp characteristics or staging for patients who developed cancer (all p>0.05). After risk adjustment, bariatric surgery was independently associated with a reduction in new colorectal lesions (OR 0.62, 0.42-0.91, p=0.016, Table).

Conclusion:
Bariatric surgery was associated with a lower risk-adjusted incidence of new colorectal lesions in this large population. These encouraging results suggest that the benefits of bariatric surgery may extend beyond weight loss and comorbidity mitigation.

80.02 Role of Gastroesophageal Reflux Symptoms on Patient Satisfaction in Sleeve Gastrectomy

I. A. Van Wieren1, J. Thumma1, O. Varban1, J. Dimick1  1University Of Michigan,Department Of Surgery, The Center For Healthcare Outcomes & Policy,Ann Arbor, MI, USA

Introduction: Sleeve gastrectomy has emerged as the most common bariatric procedure. However, emerging data suggest that this procedure can result in lifestyle-limiting gastroesophageal reflux. It is unclear whether these symptoms are severe enough to offset the benefits of the procedure in terms of weight loss and other positive outcomes. Using a validated disease-specific instrument, we evaluated the extent to which reflux symptoms after sleeve gastrectomy affected patients’ satisfaction with the surgery.

Methods: We studied 6,633 patients who underwent laparoscopic sleeve gastrectomy (2013 to 2017) in the Michigan Bariatric Surgery Collaborative. We used the GERD-HRQL score, which comprises 10 questions each scored from 0 (no symptoms) to 5 (severe symptoms). To assess the impact of sleeve gastrectomy, we calculated the change in this score before versus after the procedure. We divided this delta GERD score into quintiles: the bottom quintile represents worsening of GERD symptoms from baseline to 1 year, and the top quintile represents improvement in symptoms. We then examined the relationship between delta GERD score and patient satisfaction at 1 year. We used univariate and multivariate generalized linear mixed models to assess the variation in satisfaction explained by the delta GERD score, percent excess body weight loss (%EBWL) at 1 year, and other patient outcomes (serious complications, readmission, and reoperations). We controlled for patient factors (age, gender, race, and comorbidities) and year of surgery.
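
The quintile construction is a one-liner in pandas; the sketch below assumes hypothetical column names and that a positive delta means symptom improvement (GERD-HRQL is lower when symptoms are better).

```python
# Compute the change in GERD-HRQL and split it into quintiles of change.
import pandas as pd

df = pd.read_csv("sleeve_cohort.csv")  # hypothetical file
df["delta_gerd"] = df["gerd_hrql_baseline"] - df["gerd_hrql_1yr"]  # + = improvement
df["delta_quintile"] = pd.qcut(df["delta_gerd"], q=5, labels=[1, 2, 3, 4, 5])

print(df.groupby("delta_quintile")["satisfied_1yr"].mean())  # satisfaction by quintile
```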

Results: The average change in GERD score was 1.63 (range: -48 to 48). However, the change varied across quintiles, with a -9.0 point (range: -48 to -3) worsening in the bottom quintile versus a 13.9 point (range: 7 to 48) improvement in the top quintile. Overall, 77.7% of patients were satisfied, but the proportion satisfied was highly dependent on whether reflux symptoms improved or worsened: in the bottom quintile only 48.9% were satisfied, compared to 78.1% in the top quintile. In a multivariate model, change in GERD score explained 10.5% of the variation in 1-year satisfaction. In fact, change in GERD score predicted the most variation in 1-year patient satisfaction, especially among patients whose symptoms worsened the most. For patients in the worst quintile, reflux symptoms explained 30.6% of the variation, compared to 2.2% among those with little change or improvement in reflux (quintiles 2-5). In univariate analyses, %EBWL explained only 2% of the variation in satisfaction, and <1% was explained by 30-day patient outcomes (serious complications, readmissions, or reoperations).

Conclusion: In this statewide study of sleeve gastrectomy in Michigan, we demonstrated that reflux symptoms are the most important determinant of 1-year satisfaction after sleeve gastrectomy, particularly among patients whose symptoms worsened the most.

79.10 The Impact of Medicaid Expansion on Utilization of Vascular Procedures and Rates of Amputation

K. G. Bennett1, M. E. Smith1, N. F. Matusko1, J. F. Waljee1, N. H. Osborne1, P. K. Henke1  1University Of Michigan,Department Of Surgery,Ann Arbor, MI, USA

Introduction:
In 2001, the state of New York expanded Medicaid coverage, providing access to care for thousands of previously uninsured patients. Although these policy changes can enhance the opportunity for obtaining care, little is known regarding care utilization, especially amongst patients with vascular disease and critical limb ischemia for whom access to procedures may prevent limb loss. We sought to measure the impact of Medicaid expansion on the rates of total vascular procedures, open procedures, endovascular procedures, and amputations.

Methods:
We examined discharge records from the 1998-2006 State Inpatient Databases of New York (intervention) and Arizona (control). Discharge records of interest were identified using ICD-9 vascular procedure codes. To measure the impact of Medicaid expansion on the rates of total vascular, open vascular, and endovascular procedures, as well as amputations, we used a difference-in-difference analysis to compare the number of procedures performed per admission within each state. We used logistic regression, truncated Poisson, and zero-inflated Poisson regression to model each outcome while adjusting for relevant patient covariates.
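
The core difference-in-difference term is the interaction between the expansion indicator and the post-period indicator, as in this hedged statsmodels sketch (variable names hypothetical; the published models also include truncated and zero-inflated variants not shown here).

```python
# Difference-in-difference Poisson model: expansion:post is the DiD estimator.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("discharges.csv")  # hypothetical file
did = smf.glm(
    "n_procedures ~ expansion * post + age + female + race",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(did.summary())  # exp(coef of expansion:post) is the IRR of interest
```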

Results:
In this cohort of 112,624 patients undergoing vascular procedures, the difference-in-difference estimator demonstrated that expansion of Medicaid coverage was associated with lower odds of mortality (OR 0.77, p=0.043), but this association was no longer significant after controlling for patient-level covariates (OR 0.92, p=0.5). The difference-in-difference estimators also demonstrated that Medicaid expansion was associated with lower incidence rate ratios of total vascular procedures (IRR 0.65, p<0.001) and open vascular procedures (IRR 0.92, p=0.002), but a higher incidence rate ratio of endovascular procedures (IRR 1.13, p<0.001). There was no change in the incidence rate ratio of amputations (IRR 1.02, p=0.53). In patients with critical limb ischemia (N=12,668), the difference-in-difference estimators were also significant, demonstrating that expansion was associated with lower incidence rate ratios of total procedures (IRR 0.59, p<0.001) and endovascular procedures (IRR 0.59, p<0.001) but a higher incidence rate ratio of amputations (IRR 1.43, p=0.001) and higher odds of mortality (OR 2.21, p=0.032).

Conclusion:
After Medicaid expansion, the rates of total vascular procedures decreased, with no impact on amputation rates in New York. Moreover, the utilization of interventions that could prevent amputations in patients with critical limb ischemia did not increase. Thus, while Medicaid expansion may improve access to care, significant barriers and disparities continue to prevent appropriate utilization of limb-saving procedures.

79.09 Elderly Patients With Cervical Spine Fractures Following Ground Level Falls are at Risk for BCVI

E. Warnack1, C. DiMaggio1, S. Frangos1, M. Klein1, C. Berry1, M. Bukur1  1New York University School Of Medicine,New York, NY, USA

Introduction:
Osteopenia is common in the elderly, increasing their risk of sustaining cervical fractures after ground level falls (GLF). Neck CTA is used to screen for blunt cerebrovascular injuries (BCVI) after high cervical (C) spine fractures. We sought to examine the incidence of BCVI and subsequent stroke in elderly GLF patients as compared to patients with higher-energy injury mechanisms.

Methods:

The Trauma Quality Improvement Program database (2011-2016) was used to identify blunt trauma patients with isolated (other body region AIS <3) high C spine (C1-C4) fractures. Patients were stratified into three groups: non-elderly patients (<65) with all mechanisms of injury, elderly patients (≥65) with GLF, and elderly patients with all other mechanisms of injury. Demographics and outcomes were compared. Multivariable logistic regression was used to determine predictors of BCVI, stroke, and mortality. Secondary outcomes included rates of spinal cord injury (SCI) and acute kidney injury (AKI), given the risk associated with contrast exposure.

Results:

17,558 patients with high C spine injuries were identified; 50.2% involved patients ≥65. BCVI incidence was highest in the <65 group (0.8%) and lowest in elderly patients with GLF (0.3%, p = .001). When controlling for other factors, elderly patients with GLF were less likely to sustain BCVI (AOR 0.46, p = .03) but had a comparable rate of stroke attributable to BCVI (15.4% vs. 9.5%, p = .685) compared to elderly patients with other mechanisms of injury. There was no significant difference in mortality (AOR 1.08, p = .34). SCI was less common (AOR 0.78, p = .002) in elderly patients with GLF. AKI was more common in elderly patients (0.9% vs. 0.5%, p = .002).

Conclusion:
In elderly patients with isolated C spine fracture after GLF, BCVI occurs less frequently but is associated with a comparable rate of stroke as other mechanisms. A low-energy mechanism should not preclude BCVI screening in the presence of high C spine fractures.

79.08 Using Myoglobin as Serum Marker in Administering Renal Protective Therapy in Electrical Burn Patients

J. H. Henderson1, P. Attaluri1, E. He1, J. Kesey1, M. Tan1, J. Griswold1  1Texas Tech University School of Medicine,Department Of Surgery,Lubbock, TEXAS, USA

Introduction: High-voltage electrical contact injuries are the second leading cause of occupational death in the U.S. The electrical surge passes through muscle cells, causing sudden and intense myocyte contraction and releasing intracellular contents such as myoglobin and creatine kinase (CK). The released pigments obstruct the renal tubules, leading to acute renal failure. Currently, the trauma literature supports use of elevated serum CK to indicate muscle and renal damage. While CK can be a reliable screening method for muscle injury, we believe myoglobin is a more sensitive and specific indicator of the risk and severity of renal damage. Our study aims to determine whether elevated CK or elevated myoglobin is more sensitive in predicting the risk of renal injury for electrical burn patients, and to define serum myoglobin parameters for implementing renal protective therapies.

Methods: A retrospective, single-institution review was conducted on all patients over the age of 18 years who suffered a high-voltage electrical injury (>1,000 volts) and were admitted to the Burn Center from 2006 to 2017. Patients who had preexisting end-stage renal disease, were on dialysis, or died within 48 hours of admission were excluded. Chi-square testing was used to compare mean serum myoglobin and serum CK levels, collected daily, against acute kidney injury (AKI) as defined by the RIFLE criteria, which stratifies AKI into three categories: Risk, Injury, and Failure. Urine output and fluid resuscitation therapies were recorded daily to track the progression of AKI. A Pearson product-moment correlation coefficient was computed to assess the relationship between AKI and serum myoglobin and serum CK. An independent-samples means test was performed on patients who developed AKI to determine a serum myoglobin threshold for initiation of treatment.
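
The correlation step maps directly onto scipy, as in this sketch (arrays illustrative, not study data).

```python
# Pearson correlation of serum myoglobin with RIFLE-defined AKI, plus group means.
import numpy as np
from scipy.stats import pearsonr, ttest_ind

myoglobin = np.array([310.0, 2200.0, 880.0, 4100.0, 150.0, 1900.0])  # illustrative
aki = np.array([0, 1, 0, 1, 0, 1])                                    # illustrative

r, p = pearsonr(myoglobin, aki)
print(f"r = {r:.2f}, p = {p:.3f}")

t, p = ttest_ind(myoglobin[aki == 1], myoglobin[aki == 0])
print(f"AKI vs. no-AKI means: t = {t:.2f}, p = {p:.3f}")
```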

Results: A total of 207 patients from 2006-2017 were analyzed; 27.1% developed AKI as defined by the RIFLE criteria. Mean serum myoglobin was 2,336.9 in patients with AKI vs. 1,140.14 without AKI (P=0.0001). Mean serum CK was 10,926 in patients with AKI vs. 8,174 without AKI (P=0.132). There was a positive correlation between serum myoglobin levels and developing AKI (r = 0.212, n = 120, p = 0.02), whereas there was no statistically significant correlation between serum CK levels and AKI. Patients with a myoglobin level of 1,449.52 or above were at high risk of developing AKI (P=0.053) and require renal protective measures.

Conclusion: Serum myoglobin is a more sensitive marker than serum CK for predicting AKI in high-voltage electrical burns. A serum myoglobin threshold of >1500 was associated with increased risk of AKI, indicating the need to start renal protective therapies. Although CK may be useful for indicating risk of renal damage in trauma and rhabdomyolysis patients, in electrical contact injuries myoglobin should be used to determine the risk of renal damage and to direct renal protective therapy.

79.07 Validation of a Tool: Bystander’s Self-efficacy to Provide Aid to Victims of Traumatic Injury

S. Speedy1, L. Tatebe1, B. Wondimu1, D. Kang1, F. Cosey-Gay2, M. Swaroop1  1Feinberg School Of Medicine – Northwestern University,Chicago, IL, USA 2University Of Chicago,Chicago, IL, USA

Introduction: Violent traumatic injury is a leading cause of death among people aged 1-44 years. Violence disproportionately affects socioeconomically disadvantaged neighborhoods. Increasing self-efficacy, an individual’s belief in his or her ability to achieve a goal, amongst community members in these neighborhoods reduces the rate of violence. Furthermore, bystanders are more likely to intervene and provide assistance to victims if they feel they possess the skills to provide aid. Our aim was to develop and validate a survey tool to assess lay persons’ self-efficacy to intervene and provide first aid to victims of traumatic injury.

Methods: A survey tool for measuring first aid self-efficacy among lay persons was constructed for an evidence-based trauma first responder’s course (TFRC), TRUE (Trauma Responders Unified to Empower) Communities. It was developed using focus groups with community members, input from field experts, and Bandura’s guide to constructing self-efficacy scales. The tool contained seven questions measuring self-efficacy and one personal safety question. Community members living on the south side of Chicago who participated in a 3-hour TFRC completed the tool immediately following the course (n=459) and at 6-month follow-up (n=46). Reliability testing using Spearman correlation was undertaken to examine internal consistency. Validation of the tool was conducted using the Wilcoxon signed rank test and a repeated measures mixed effects model.
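
The two main tests are available in scipy; the sketch below uses illustrative paired pre/post scores, not study data.

```python
# Spearman correlation (reliability) and Wilcoxon signed-rank (pre vs. post change).
import numpy as np
from scipy.stats import spearmanr, wilcoxon

pre = np.array([3, 2, 4, 3, 1, 2, 3, 4])   # pre-course self-efficacy (illustrative)
post = np.array([4, 4, 5, 4, 3, 4, 4, 5])  # immediate post-course (illustrative)

rho, p = spearmanr(pre, post)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

stat, p = wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.3f}")
```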

Results: Spearman correlations between pre-course and immediate post-course surveys demonstrated a moderate magnitude of change for all seven self-efficacy survey questions (r = 0.35 to 0.41, p < 0.001). The signed rank test confirmed that all self-efficacy questions measuring willingness to intervene and empowerment increased immediately following the course (p < 0.001). A repeated measures mixed effects model demonstrated a significant increase in all self-efficacy questions over the three time points (pre-, post-, and 6 months post-course) when adjusted for age, gender, race, and course (p < 0.001). The one personal safety question, measuring fear of self-injury while aiding victims, was the only survey question not achieving statistically significant change immediately post-course or at 6-month follow-up.

Conclusions: The TFRC survey tool is a reliable and valid instrument for measuring bystanders’ self-efficacy to provide first aid to trauma victims. Perception of personal safety may not necessarily be affected by educational interventions. The tool will be useful to researchers and educators interested in teaching bystanders how to provide first aid to victims of traumatic injury and in developing interventions to improve empowerment.

79.06 Validation of the American Association for the Surgery of Trauma Grade for Mesenteric Ischemia

M. C. Hernandez1, H. Saleem1, E. J. Finnesgard1, N. Prabhakar1, J. M. Aho1, A. K. Knight1, D. Stephens1, K. B. Wise1, M. D. Sawyer1, H. J. Schiller1, M. D. Zielinski1  1Mayo Clinic,Surgery,Rochester, MN, USA

Introduction:
Acute mesenteric ischemia (AMI) is a lethal and variable disease without uniform severity reporting. The American Association for the Surgery of Trauma (AAST) developed an Emergency General Surgery (EGS) grading system for AMI, in which grade I represents low disease severity and grade V severe disease, in order to standardize risk assessment. We aimed to validate this system by stratifying patients using the AAST EGS grade, hypothesizing that disease severity would correspond with clinical outcomes.

Methods:
A retrospective, single-institution review of adults with AMI was performed (2013-2017). Preoperative, procedural, and postoperative data were abstracted. Univariate comparisons of imaging and operative grades and covariates were performed, and a multivariate analysis evaluated factors independently associated with 30-day mortality (odds ratios ± 95% confidence intervals).
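
A hedged sketch of the mortality model: logistic regression with the AAST EGS grade as a categorical predictor referenced to grade I, with exponentiated coefficients giving odds ratios (file and column names hypothetical).

```python
# Multivariable logistic regression for 30-day mortality; exp(coef) = OR.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ami_cohort.csv")  # hypothetical file
fit = smf.logit(
    "died_30d ~ age + C(sex) + C(aast_grade, Treatment(reference=1)) + qsofa",
    data=df,
).fit()

ors = np.exp(fit.params).rename("OR")
cis = np.exp(fit.conf_int())  # 95% CI bounds on the OR scale
print(pd.concat([ors, cis], axis=1))
```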

Results:
There were 230 patients; 137 (60%) were female. AMI etiologies included hypovolemia (137, 60%), thrombosis/atherosclerosis (68, 30%), and embolism (25, 10%). The imaging AAST EGS grades were I (108, 47%), II (38, 17%), III (53, 23%), IV (24, 10%), and V (7, 3%). Compared to patients who received an operation, patients managed non-operatively (91, 40%) demonstrated a lesser imaging grade (1 [1-2] vs 2 [1-3]) and the etiology was more commonly (75% vs 50%; both p<0.05). Increased imaging grade was associated with diminished systolic blood pressure and increased serum lactate concentrations, but not with other physiologic or demographic covariates (Table 1). The type of operation (laparotomy, laparoscopy, conversion to open), need for multiple operations, open abdomen therapy, bowel resection, intensive care management, and 30-day mortality were associated with increasing imaging grade (Table 1). After adjustment for age, sex, AAST EGS grade, operation type, qSOFA score, and etiology, the following factors were independently associated with 30-day mortality: age 1.02 (95% CI 1.0-1.05); imaging grade I (reference), grade II 2.6 (1.01-6.9), grade III 3.1 (1.3-7.4), grade IV 6.4 (1.9-12.2), and grade V 16.6 (2.4-21.3); and increasing qSOFA 2.9 (1.9-4.5). Operative AAST EGS grade was similar to preoperative imaging AAST EGS grade (Spearman correlation 0.88, p=0.0001).

Conclusion:
The AAST EGS grade, used as a surrogate for AMI disease severity, demonstrated incrementally greater odds of 30-day mortality with increasing grade. Decreasing blood pressure and increasing lactate correlated with increasing AAST EGS grade. Operative approach was also associated with AAST EGS grade, with few patients receiving vascular interventions at higher grades. The AAST EGS grade for AMI is valid and may be used as a benchmarking tool based on these disease severity definitions.

79.05 Application of Artificial Intelligence Developed Point of Care Imaging Clinical Decision Support

R. A. Callcut1,2, M. Girard2, S. Hammond2, T. Vu2,3, R. Shah2,3, V. Pedoia2,3, S. Majumdar2,3  1University Of California – San Francisco,Surgery,San Francisco, CA, USA 2University Of California – San Francisco,Center For Digital Health Innovation,San Francisco, CA, USA 3University Of California – San Francisco,Radiology,San Francisco, CA, USA

Introduction: Chest X-rays (CXRs) are the most common imaging modality used worldwide. The time from image acquisition until review creates an inherent delay in the identification of potentially life-threatening findings. Bedside-deployed, or point-of-care (POC), tools built on neural networks have the potential to speed clinician recognition of critical findings. This study applies neural networks to CXRs to automate the detection of pneumoperitoneum (PP).

Methods: We utilized a multi-step deep learning pipeline to create a clinical decision support system for the detection of pneumoperitoneum under the right diaphragm. 5,528 training and 1,368 validation images were used to train a U-Net to segment the right and left lungs. By combining the lung segmentation with simple, rule-based algorithms, we generated a region of interest in the original image where important features for positive-case detection were likely to be found (Figure 1a). Two readers blindly read images in a second clinical dataset (1,821 CXRs total, 771 positive for PP) to classify PP presence or absence. Images were then divided randomly into 75% training, 15% validation, and 10% testing sets. With the cropped, full-resolution images of the region of interest (Figure 1b), a DenseNet neural network classifier was trained to identify PP.
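
A self-contained sketch of the two-stage design: a segmentation mask drives a rule-based crop below the right lung base, and a DenseNet scores the crop. The hand-made mask and the crop rule here are illustrative stand-ins for the trained U-Net and the study's actual heuristics.

```python
# Stage 1 (segmentation) -> rule-based ROI -> Stage 2 (DenseNet classification).
import numpy as np
import torch
from torchvision.models import densenet121

def roi_below_right_lung(mask: np.ndarray, margin: int = 64):
    """Bounding box just below the lung base (simple illustrative rule)."""
    rows, cols = np.nonzero(mask)
    bottom = rows.max()  # lowest lung pixel approximates the diaphragm level
    return bottom, min(bottom + margin, mask.shape[0]), cols.min(), cols.max()

# A trained U-Net would produce this mask; a hand-made one stands in here.
mask = np.zeros((512, 512), dtype=bool)
mask[100:300, 60:240] = True

cxr = torch.rand(1, 1, 512, 512)  # dummy chest X-ray
top, bot, left, right = roi_below_right_lung(mask)
roi = cxr[:, :, top:bot, left:right].repeat(1, 3, 1, 1)  # DenseNet expects 3 channels

clf = densenet121(num_classes=2)  # binary head: pneumoperitoneum vs. not
logits = clf(roi)
print(logits.shape)  # torch.Size([1, 2])
```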

Results: The AUROC was 0.99 for training, 0.95 for validation, and 0.96 for testing (Figure 1c). Specificity was 94% in the validation group, and the results remained consistent in the testing set (92%). Overall, accuracy for detection of PP exceeded 90% in the validation group and was confirmed to be excellent in the testing set (92%).

Conclusion: This work demonstrates the potential power of integrating artificial intelligence into POC clinical decision support tools. These algorithms are highly accurate and specific and could yield earlier clinician recognition of potentially life-threatening findings.

79.04 The Influence of Healthcare Resource Availability on Amputation Rates in Texas

J. Cao1, S. Sharath1, N. Zamani1, N. R. Barshes1  1Baylor College Of Medicine,Division Of Vascular Surgery And Endovascular Therapy,Houston, TX, USA

Introduction: Amputation rates in Texas are high, and racial disparities in leg amputation persist. Targeted interventions aimed at reducing health disparities may benefit patients in high-need, low-resource areas and reduce gaps in care.

Methods:  We collated 2005-2009 data on 254 Texas counties from three sources: Texas Inpatient Public Use Data File, Health Resources and Services Administration, and the County Health Rankings and Roadmaps. The primary outcome measure was the number of non-traumatic, lower-extremity amputations. Counties with greater than 11 leg amputations per 100,000 patients per year were designated as “hotspot” counties. Population-adjusted linear and logistic regressions identified factors that could explain increasing amputations among Texas counties.

Results: We identified 33 Texas counties as “hotspot” counties. Hotspot counties had fewer healthcare resources and lower healthcare utilization. Dual Medicare/Medicaid enrollment and ER visits for foot complications were each associated with more amputations. In the presence of more ER visits, greater dual enrollment decreased total associated amputations (coefficient = -1.21×10^-6, P<0.001). In counties with more than 70% rural communities, additional primary care providers decreased total associated amputations (coefficient = -0.004, P=0.022). Populations in hotspot counties included more people with diabetes (OR = 1.49, P<0.001) and more people categorized as black (OR = 1.09, P=0.007).

Conclusion: Healthcare availability plays a critical role in decreasing PAD-related amputations. Insurance enrollment and improved access to primary care providers may help reduce PAD-associated leg amputations. Strategic resource allocation may promote reductions in PAD-associated amputations.

79.03 Assessing Fatigue Recovery in Trauma Surgeons Utilizing Actigraphy Monitors

Z. C. Bernhard1,2, T. W. Wolff1,3, B. L. Lisjak1, I. Catanescu1, E. Baughman1,4, M. L. Moorman1,4, M. C. Spalding1,4  1OhioHealth Grant Medical Center,Division Of Trauma And Acute Care Surgery,Columbus, OHIO, USA 2West Virginia School of Osteopathic Medicine,Lewisburg, WEST VIRGINIA, USA 3OhioHealth Doctors Hospital,Department Of Surgery,Columbus, OHIO, USA 4Ohio University Heritage College of Osteopathic Medicine,Athens, OHIO, USA

Introduction: Mental fatigue is a psychobiological state caused by prolonged periods of demanding cognitive activity. For over 20 years, the relationship between mental fatigue and physical performance has been extensively researched by the US military, the transportation industry, and other high-risk occupations. This is a growing area of interest within the medical community, yet there remain relatively few investigations specifically pertaining to surgeons. This study sought to quantify and evaluate fatigue and recovery time following 24-hour call among trauma surgeons, as a starting point in optimizing staffing and scheduling. We expected that more sleep both during and after call, prior to the next normal circadian sleep cycle, would lead to faster recovery times.

Methods: This was a prospective analysis of trauma surgeons employed at an urban, Level 1 trauma center. Readiband actigraphy monitors (Fatigue Science, Vancouver, BC), incorporating a validated Sleep, Activity, Fatigue, and Task Effectiveness model, were used to track sleep/wake cycles over a 30-day period. Recovery time was measured as the time required during the post-call period for the surgeon to return to his/her pre-call 24-hour mean alertness level. Three groupings were identified based on recovery time: rapid (0-6 hours), intermediate (6-18 hours), and extended (>18 hours). Tri-linear regression analysis was performed to assess the correlation between recovery time and on-call, post-call, and combined sleep quantities.
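
The per-shift correlation reduces to a simple regression of recovery time on sleep obtained; the sketch below uses illustrative numbers, not study data.

```python
# Regress post-call recovery time on sleep achieved across call shifts.
import numpy as np
from scipy.stats import linregress

sleep_hours = np.array([2.0, 3.5, 1.0, 4.0, 5.5, 2.5, 6.0, 3.0])       # illustrative
recovery_hr = np.array([5.0, 11.0, 1.0, 15.0, 22.0, 8.0, 26.0, 10.0])  # illustrative

fit = linregress(sleep_hours, recovery_hr)
print(f"slope = {fit.slope:.2f}, R^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.4f}")
```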

Results: Twenty-seven 24-hour call shifts among 8 trauma surgeons (6 males, 2 females) were identified and analyzed. Mean age was 41.0 ± 5.66 years. Mean work hours per week were 54.7 ± 13.5, mean caffeinated drinks per day were 3.19 ± 1.90, and mean hours of exercise per week were 4.0 ± 2.5. Six call shifts met rapid criteria, 11 intermediate, and 10 extended, with mean recovery times of 0.49 ± 0.68, 8.86 ± 2.32, and 24.93 ± 7.36 hours, respectively. Table 1 shows the mean alertness levels and sleep quantities for each group. Statistically significant, moderate positive correlations were found between recovery time and the amount of sleep achieved on-call (p=0.0001; R2=0.49), post-call (p=0.0013; R2=0.49), and combined (p<0.0001; R2=0.48).

Conclusion: This early analysis indicates that increased sleep quantities achieved on-call, post-call, and combined are partially indicative of quicker recovery times in surgeons following 24-hour call shifts, serving as a viable starting point for optimizing trauma surgeon staffing and scheduling. Further studies to validate these findings and evaluate the impact of additional sleep components, such as number of awakenings, should be undertaken.

79.01 The Impact of Prehospital Whole Blood on Arrival Physiology, Shock, and Transfusion Requirements

N. Merutka1, J. Williams1, C. E. Wade1, B. A. Cotton1  1McGovern Medical School at UT Health,Acute Care Surgery,Houston, TEXAS, USA

Introduction: Several US trauma centers have begun incorporating uncrossmatched, group O whole blood into civilian trauma resuscitation. Our hospital has recently added this product to our aeromedical transport services. We hypothesized that patients receiving whole blood in the field would arrive at the emergency department with improved vital signs, improved lactate and base deficit, and would receive fewer transfusions after arrival compared to patients receiving prehospital component transfusions.

Methods: In November 2017, we added low-titer group O whole blood (WB) to each of our helicopters, alongside existing RBCs and plasma. We collected information on all trauma patients receiving prehospital uncrossmatched, emergency release blood products between 11/01/17 and 07/31/18. Patients were divided into those who received any prehospital WB and those who received only RBCs and/or plasma (COMP). Initial field vital signs, arrival vital signs, arrival laboratory values, and ED and post-ED blood products were captured. Statistical analysis was performed using STATA 12.1. Continuous data are presented as medians (25th-75th IQR) with comparisons performed using the Wilcoxon rank-sum test. Categorical data are reported as proportions and tested for significance using Fisher’s exact test. Following univariate analyses, a multivariate model was created to evaluate post-arrival blood products, controlling for injury severity score, field vital signs, and age.
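
A sketch of the two analysis steps in Python rather than STATA (file and column names hypothetical): a rank-sum comparison of ED transfusion volume, then a linear model adjusting for injury severity, field vitals, and age.

```python
# Unadjusted rank-sum test, then an adjusted linear model for ED transfusions.
import pandas as pd
from scipy.stats import ranksums
import statsmodels.formula.api as smf

df = pd.read_csv("prehospital_transfusion.csv")  # hypothetical file
df["whole_blood"] = (df["group"] == "WB").astype(int)

wb, comp = df[df["whole_blood"] == 1], df[df["whole_blood"] == 0]
print(ranksums(wb["ed_units"], comp["ed_units"]))

adj = smf.ols("ed_units ~ whole_blood + iss + field_sbp + field_hr + age", data=df).fit()
print(adj.params["whole_blood"], adj.conf_int().loc["whole_blood"].values)
```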

Results: 174 patients met criteria, with 98 receiving prehospital WB and 63 receiving COMP therapy. 116 WB units were transfused in the prehospital setting. Of those receiving prehospital WB, 84 (82%) received 1 U and 14 (12%) received 2 U. There was no difference in age, sex, race, or injury severity scores between the two groups. While field pulse was similar (WB: median 117 vs. COMP: 114; p=0.649), WB patients had lower field systolic pressures (median 101 vs. 125; p=0.026) and were more likely to have a positive field FAST exam (37% vs. 20%; p=0.053). On arrival, however, WB patients had lower pulse and higher systolic pressures than COMP patients (TABLE). There was no difference in arrival base excess or lactate values (TABLE). However, WB patients received fewer ED and post-ED blood transfusions than the COMP group. A multivariate linear regression model demonstrated that field WB was associated with a reduction in ED blood transfusions (corr. coef. -10.8, 95% C.I. -19.0 to -2.5; p=0.018).

Conclusion: Prehospital WB transfusion is associated with improved arrival physiology and similar degrees of shock compared to COMP-treated patients. More importantly, WB patients received fewer transfusions after arrival than their COMP counterparts.

78.10 Oral Nutrition for Patients Undergoing Tracheostomy: The Use of an Aggressive Swallowing Program

J. Wisener1,2, J. Ward2, C. Boardingham2, P. P. Yonclas1,2, D. Livingston1,2, S. Bonne1,2, N. E. Glass1,2  1Rutgers New Jersey Medical School,Trauma Surgery,Newark, NJ, USA 2University Hospital,Trauma Surgery,Newark, NJ, USA

Introduction:
The insertion of a tracheostomy is thought to compromise protective swallowing mechanisms, leading to aspiration and dysphagia. Consequently, clinicians are reluctant to allow oral nutrition for patients with tracheostomies and instead continue nasoenteric tube feeds. To maximize the number of patients receiving oral nutrition and to minimize aspiration, we began an aggressive swallowing program run by dedicated speech and language pathologists (SLP) using fiberoptic endoscopic evaluation of swallowing (FEES). We hypothesized that, despite the presence of a tracheostomy, most patients could be safely fed orally and that this approach is optimal for this patient population.

Methods:
Retrospective chart review of all trauma patients who underwent a tracheostomy between 7/1/2016 and 6/30/2018. Data collected included demographics, injury severity, time to tracheostomy, and ICU and hospital lengths of stay. The time to SLP evaluation and FEES, as well as the outcomes of those assessments, were also captured.

Results:
115 patients underwent a tracheostomy during this period, with 90 (78%) evaluated by SLP. 72 (80%) underwent FEES, and 53 (76%) of those passed and were allowed oral nutrition. Of the 18 patients seen by SLP but not evaluated by FEES, 11 (61%) had swallowing evaluated by another method, and 5 of those were allowed to eat. 40 patients (55%) passed their first FEES. Among those who failed, 21 (66%) underwent a second FEES approximately a week later, and 10 (48%) passed. The total success rate for patients undergoing SLP ± FEES was 70% (58/83). Days between tracheostomy and first FEES did not differ significantly between groups (11 vs 15, p=0.486). The median time to passing FEES was 13 days [IQR 7, 20.5]. Patients who passed FEES were younger (42 vs 55 years, p=0.005) and had more severe injuries (ISS 20 vs 14, p=0.03) compared to those who did not pass. Both groups had similar ICU and hospital lengths of stay (32 vs 31 days, p=0.95, and 43 vs 36 days, p=0.14). 12 patients underwent PEG placement prior to SLP evaluation, 7 of whom passed their FEES and were fed orally. There were few incidences of documented aspiration among all orally fed patients (3/55).

Conclusion:
Over two-thirds of trauma patients who have undergone a tracheostomy can safely take oral nutrition. Aggressive use of SLP and FEES allows oral nutrition and less use of nasoenteric tubes and gastrostomies, which likely improves patient satisfaction. Failure to pass a FEES within the first 2 attempts provides an objective indication for a gastrostomy tube. As patients who failed FEES were older, age may be a factor in the decision for earlier gastrostomy tube placement. In conclusion, oral nutrition is not only possible but preferable in trauma patients undergoing tracheostomy, and all eligible patients should be evaluated by FEES.