54.09 Has the ACA Altered Socioeconomic Disparities in Patient Outcomes After Traumatic Brain Injury?

E. Moffet1, T. Zens1, M. Beems1, K. McQuistion1, G. Leverson1, S. Agarwal1  1University Of Wisconsin School Of Medicine And Public Health,Department Of Surgery, Division Of Trauma,Madison, WI, USA

Introduction:  The Patient Protection and Affordable Care Act (ACA) is the most influential piece of U.S. healthcare legislation in half a century, extending insurance coverage to 20 million Americans since its enactment in 2010. Several ACA provisions are intended to reduce socioeconomic status (SES) disparities in health, such as expanded access to care and quality-improvement initiatives. Given that SES affects patient outcomes after traumatic brain injury (TBI), we aimed to gauge the ACA’s effect on SES disparities in patient outcomes after TBI.

Methods:  The National Trauma Data Bank was used to identify TBI patients in two time periods: before (2008-2009) and after (2011-2012) ACA implementation. Our primary outcomes were hospital length of stay (LOS) and in-hospital mortality. Using univariate and multivariate regression models, and controlling for gender, age, geographic region, and injury mechanism, type, and severity, we evaluated the impact of race and method of payment on these endpoints. Data analysis was conducted with Stata; p-values <0.05 were considered significant.
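
A minimal sketch of the adjusted, period-specific models described above, written in Python rather than Stata; the DataFrame and its column names (los, died, payer, race, and so on) are hypothetical stand-ins for NTDB variables, not the study's code.

```python
import numpy as np
import statsmodels.formula.api as smf

COVARIATES = ("C(payer) + C(race) + C(sex) + age + C(region) + "
              "C(mechanism) + C(injury_type) + iss")

def adjusted_models(tbi_period):
    """tbi_period: patients from one era (2008-2009 or 2011-2012), one row per
    patient, with hypothetical columns los, died (0/1), payer, race, sex, age,
    region, mechanism, injury_type, and iss."""
    los_fit = smf.ols("los ~ " + COVARIATES, data=tbi_period).fit()
    mort_fit = smf.logit("died ~ " + COVARIATES, data=tbi_period).fit(disp=False)
    # Payer coefficients: adjusted LOS difference in days; exponentiated logit
    # coefficients are odds ratios relative to the private-insurance reference.
    return (los_fit.params.filter(like="payer"),
            np.exp(mort_fit.params.filter(like="payer")))
```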

Results: The study examined 54,958 TBI patient outcomes from 2008-2009 and 84,033 from 2011-2012. The Medicaid population showed a 21% decrease in the adjusted LOS coefficient, from 2.76 days (95% CI = 2.49 – 3.02; p < 0.001) to 2.17 days (95% CI = 1.98 – 2.37; p < 0.001), compared with the privately insured. Despite this reduction in LOS, in-hospital mortality remained unchanged for Medicaid patients, with an odds ratio of 1.19 in 2008-2009 (95% CI = 1.02 – 1.39; p = 0.03) and in 2011-2012 (95% CI = 1.04 – 1.36; p = 0.009), compared with the privately insured.

Conclusion: Within the Medicaid population, TBI-related in-hospital mortality did not change after implementation of the ACA, while hospital LOS decreased. These findings are consistent with a decrease in clinical risk. Further before-and-after studies over longer time periods are warranted to assess the causal impact of the ACA on SES disparities in healthcare.

54.08 Firearm Attitudes and Accessibility in a Trauma Population

C. N. Fick2, B. P. Smith1  1University Of Pennsylvania,Surgery,Philadelphia, PA, USA 2Georgetown University Medical Center,Washington, DC, USA

Introduction: Firearm violence is a serious public health issue in the US. In 2013 the Institute of Medicine, in collaboration with the National Research Council, published their committee findings regarding a research agenda for the public health aspects of firearm-related violence. A specific recommendation was to characterize the scope of, and motivations for, gun acquisition, ownership, and use among various subpopulations. This study sought to provide descriptive data about firearm attitudes and accessibility among trauma patients, a group commonly regarded as "at risk" for firearm-related injuries.

Methods: Over a five-month period, a convenience sample of 150 trauma patients and their family/friends, ages 18 to 44, self-administered a novel survey. No incentive was provided. Data measured included demographics, perception of personal safety and community violence, gun ownership and acquisition methods, and opinions regarding firearm legislation. Gun ownership was defined as currently owning or possessing a firearm. Previous gun ownership was defined as previous possession of any firearm to which the respondent no longer had access. Fisher’s Exact Test and Linear-by-Linear Associations were used to measure relationships, with significance tested at p<0.05.
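
Both tests named above are available in standard Python libraries; the sketch below uses illustrative counts (not study data) to show a Fisher's exact test on a 2x2 table and a linear-by-linear (ordinal) association test.

```python
import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.contingency_tables import Table

# Illustrative 2x2 counts only (not study data):
# rows = current gun owner (yes/no), columns = white race (yes/no).
ownership_by_race = np.array([[12, 6],
                              [24, 108]])
odds_ratio, p = fisher_exact(ownership_by_race)

# Linear-by-linear association for an ordered exposure, e.g. income bracket
# (low/middle/high) vs. current ownership (no/yes); counts again illustrative.
income_by_ownership = np.array([[60, 4],
                                [45, 8],
                                [27, 6]])
trend = Table(income_by_ownership).test_ordinal_association()
print(round(odds_ratio, 2), round(p, 3), round(float(trend.pvalue), 3))
```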

Results: Fifteen subjects declined to participate before the 150-subject sample was reached, yielding an accrual rate of 91%. Subjects had a median age of 28 (IQR 22-34), 72% were male, and violent injuries (assault, gunshot wound, stab wound) accounted for 52% of patients. Of the sample, 12% were current gun owners and 15% were previous gun owners. Current owners were more often white (67% vs. 18%; p<0.01), more often lived outside the metropolitan catchment area (44% vs. 16%; p=0.01), and had a higher median household income (p<0.01). Current owners generally acquired their guns from a licensed dealer (61%). Previous owners were more frequently unemployed (61% vs. 24%; p=0.01), had more frequently been in a recent fight (57% vs. 30%; p=0.02), and more commonly presented as a victim of a gunshot wound (GSW) compared with current gun owners (48% vs. 11%; p=0.02) and never gun owners (48% vs. 22%; p=0.02). Previous gun owners acquired a gun from an owner of stolen guns or a gun trafficker at a significantly higher rate than current owners (48% vs. 0%; p=0.01). Perceived community violence was not associated with current gun ownership (p=0.08) or previous gun ownership (p=0.19).

Conclusion: This study demonstrates that trauma patients and their families were willing to discuss firearms. The prevalence of firearms in this sample is similar to that in other national surveys. Previous gun owners more often presented with gunshot injuries compared with both current and never gun owners. Future research should focus on the characteristics surrounding the high gunshot wound victimization rate of previous owners. Additionally, investigation into the motivations for gun forfeiture by previous gun owners might inform firearm violence reduction strategies.

 

54.07 We Are What We Eat: Demographics And Payer Status Of Patients Presenting At Level I Trauma Centers

J. D. Forrester1, T. G. Weiser1, P. M. Maggio1, T. Browder1, L. Tennakoon1, D. A. Spain1, K. Staudenmayer1  1Stanford University,Department Of Surgery,Palo Alto, CA, USA

Introduction:
Trauma centers may face financial challenges as healthcare reimbursement changes. While American College of Surgeons level 1 trauma centers (ACSL1TCs) meet the same personnel and structural requirements, they often serve different populations. We hypothesized that ACSL1TCs differ based on the patients they serve and that some may be more sensitive to funding changes than others.

Methods:
The National Trauma Data Bank 2014 was used to derive information on ACSL1TCs. Exploratory cluster analysis was performed using Ward’s linkage, an agnostic, statistically robust hierarchical clustering method, to determine the expected number of clusters based on hospital, patient, and injury characteristics. Subsequent k-means clustering was applied for analysis. Comparisons between clusters were performed using Kruskal-Wallis or chi-square tests where appropriate.
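
A minimal sketch of the two-stage clustering described above, assuming a hospital-level feature matrix; the function and variable names are hypothetical, and the cluster count of three is taken from the results below.

```python
from scipy.cluster.hierarchy import linkage
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_trauma_centers(features, n_clusters=3, random_state=0):
    """features: matrix with one row per ACSL1TC and one column per hospital,
    patient, or injury characteristic (hypothetical layout)."""
    X = StandardScaler().fit_transform(features)
    # Ward's linkage: inspect the resulting tree to choose the cluster count
    ward_tree = linkage(X, method="ward")
    # k-means with the chosen count for the final cluster assignments
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=random_state).fit_predict(X)
    return ward_tree, labels
```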

Results:
In 2014, there were 113 ACSL1TCs that admitted 267,808 patients (median = 2220 patients, range: 928-6643 patients). Three clusters emerged. Cluster 1 centers (n=53, 47%) were more likely to admit older, Caucasian patients who suffered falls (P<0.05) and had higher proportions of private (31%) and Medicare payers (29%) (P=0.001) (figure). Cluster 2 centers (n=18, 16%) were more likely to admit younger, minority males who suffered penetrating trauma (P<0.05) and had higher proportions of Medicaid (24%) or self-pay patients (19%) (P=0.001). Cluster 3 centers (n=42, 37%) were similar to cluster 1 with respect to racial demographics and payer status but resembled cluster 2 centers with respect to injury patterns (P<0.05).

Conclusion:
Our analysis identified three clusters of ACSL1TCs serving three different patient populations. Given the variability in payer mix, some ACSL1TCs may be at higher risk for financial instability due to the changing reimbursement environment. Vulnerable centers, particularly those treating minorities with high rates of Medicaid and self-pay patients, may require additional financial support to ensure they can continue to serve their missions.
 

54.06 Costs and Financial Burden of Initial Hospitalizations for Firearm Injuries in the US, 2006-2013

S. A. Spitzer1, K. L. Staudenmayer1, L. Tennakoon1, D. A. Spain1, T. G. Weiser1  1Stanford University,Palo Alto, CA, USA

Introduction:

The United States has the highest rate of firearm homicide of any developed country. In 2013 alone, firearms caused an estimated 33,000 deaths and 84,000 injuries. The frequency of these events likely generates a substantial financial burden on the healthcare system and patients. We sought to determine the costs associated with the initial hospitalization for firearm-related injuries.

Methods:

We used the Healthcare Cost and Utilization Project (HCUP) Nationwide Inpatient Sample (NIS), the largest inpatient care database in the United States, to identify patients hospitalized for firearm-related injuries between 2006 and 2013 using ICD-9 codes. We used survey weights to generate national estimates. Charges were converted to costs using the HCUP-NIS cost-to-charge ratio files, and costs were inflation-adjusted to 2013 dollars. To determine where the burden of payment fell, we further investigated the distribution of costs across payer groups. Unadjusted and adjusted analyses were performed. 
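
A hedged sketch of the charge-to-cost and weighting arithmetic described above; the column names (discwt, totchg, hosp_id) and the inflation mapping are assumptions standing in for the HCUP-NIS files, not the authors' code.

```python
import pandas as pd

def weighted_costs_by_payer(nis, ccr, cpi_to_2013):
    """nis: one row per firearm-injury discharge with hypothetical columns
    discwt (discharge weight), hosp_id, year, totchg (total charges), payer;
    ccr: cost-to-charge ratios keyed by hosp_id and year;
    cpi_to_2013: dict mapping year to an inflation factor into 2013 dollars."""
    df = nis.merge(ccr, on=["hosp_id", "year"], how="left")
    df["cost"] = df["totchg"] * df["cost_to_charge_ratio"]
    df["cost_2013"] = df["cost"] * df["year"].map(cpi_to_2013)
    # Apply discharge weights to produce national estimates by payer group
    df["weighted_cost"] = df["cost_2013"] * df["discwt"]
    return df.groupby("payer")["weighted_cost"].sum()
```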

Results:

A total of 184,985 patients were admitted for firearm-related injuries between 2006 and 2013. The cost of the initial inpatient hospitalization totaled $5.78 billion. The largest proportion of costs was for patients with governmental insurance coverage, totaling $2.28 billion (39.4%), divided between Medicare ($0.34 billion) and Medicaid ($1.94 billion). Self-pay patients, in aggregate, were responsible for $1.41 billion (24.4%) in costs; of this group, 80.2% lived below the 50th income percentile. Privately insured patients were responsible for $1.15 billion (19.8%) in costs. While total charges per year for all patients increased over the study period ($1.98 billion to $2.83 billion, p=0.0013), total yearly costs remained the same ($671 million to $738 million, p=0.2683).

Conclusion:

Between 2006 and 2013, the costs of hospitalizations for firearm-related injuries averaged $700 million per year. One third of the financial burden was placed on Medicaid, and another quarter of the cost was generated by patients with no form of insurance, indicating that firearm-related injuries place a particular burden on governmental insurance and the poor. These figures underestimate true healthcare costs, since they do not include costs of readmissions, rehabilitation, long-term care, or disability. They also exclude the costs of those who were treated and released or who died prior to admission.
 

54.05 Identifying Preventable Trauma Death: Does Autopsy Serve a Purpose?

D. Scantling1, R. Kucejko1, A. Teichman1, B. McCracken1, R. Burns1, J. Eakins2  1Hahnemann University Hospital/Drexel University College Of Medicine,Surgery,Philadelphia, PA, USA 2Atlanticare,Trauma,Atlantic City, NJ, USA

Introduction:

Missing a life-threatening injury is a persistent concern in any trauma program. Autopsy is routinely used in medicine to determine an otherwise occult cause of death. It has been adopted as a required component of the trauma peer review process by both the American College of Surgeons and the Pennsylvania Trauma Foundation.

Methods:

A retrospective chart review of all trauma deaths between January 2012 and December 2015 was performed using our institutional trauma registry. Per the protocol of our Level 1 center, all trauma deaths are referred to the medical examiner (ME) and all autopsy results are evaluated, with ISS, TRISS, nature of death, injuries added by autopsy, and referral to peer review noted. Results were compared with chi-square tests.

Results:

173 patient deaths were referred to the ME, with 123 responses received. Average LOS was 2.61 days. Autopsy was declined by the ME in 26 patients, 25 received an external examination only, and 72 received a full autopsy. Autopsy identified one case referred to peer review (p=0.603); however, the previous preventability determination was not affected. No preventable cause of death was uncovered. Autopsy did identify injuries in 7 cases that initially did not have findings consistent with expected death (p=.022). Mean ISS was 31.64 and mean TRISS was 0.35 among all patients. The injuries most commonly added by autopsy were ICH, lung injuries, rib injuries, extremity fractures, liver injuries, and cardiac injuries (n=36, 32, 32, 19, 13, respectively).

Conclusion:

Inclusion of autopsy in the peer review process does not identify causes of preventable death in an otherwise highly functioning trauma program and may be a poor use of institutional resources. It does add data regarding cause of death when external injuries are not consistent with a fatal wound, and may be of use to grieving families in select situations.

 

54.04 The Effect of Surgeon “Experience” on Postoperative Mortality following Colorectal Surgery

F. Gani1, M. Cerullo1, J. K. Canner1, A. E. Harzman2, S. G. Husain2, W. C. Cirocco2, M. W. Arnold2, A. Traugott2, T. M. Pawlik1,2  1Johns Hopkins University School Of Medicine,Department Of Surgery,Baltimore, MD, USA 2Ohio State University,Department Of Surgery,Columbus, OH, USA

Introduction:  Although the relationship between laparoscopic surgery and improved clinical outcomes has been well established across a variety of procedures, the effect of surgical “experience” with laparoscopic surgery remains less defined. The current study sought to assess the comparative benefit of laparoscopic colorectal surgery relative to surgeon “experience.”

Methods:  Commercially insured patients aged 18-64 years undergoing a colorectal resection were identified using the MarketScan Database from 2010-2014. Individual surgeons were identified using surgeon-specific identifiers. For each surgeon, annual surgical volume and the degree of “experience,” defined as the annual number of laparoscopic operations, were calculated. Surgeons were categorized based on their annual laparoscopic surgical volume (1-4, 5-14, and ≥15 laparoscopic operations / year). Multivariable logistic regression analysis was used to calculate and compare postoperative mortality and morbidity relative to surgeon “experience.”
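
The experience categorization described above reduces to a grouped count and a binning step; a minimal sketch follows, assuming a hypothetical case-level DataFrame rather than the authors' MarketScan extract.

```python
import pandas as pd

def annual_laparoscopic_experience(cases):
    """cases: one row per colorectal resection with hypothetical columns
    surgeon_id, year, and laparoscopic (0/1)."""
    annual = (cases.groupby(["surgeon_id", "year"])["laparoscopic"]
                    .sum()
                    .rename("lap_volume")
                    .reset_index())
    # 1-4, 5-14, and >=15 laparoscopic operations per year
    annual["experience"] = pd.cut(annual["lap_volume"],
                                  bins=[0, 4, 14, float("inf")],
                                  labels=["1-4", "5-14", ">=15"])
    return annual
```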

Results: A total of 34,066 patients were identified who met inclusion criteria. The median age of all patients was 53 years (IQR: 45-59) and 51.9% (n=17,689) patients were female. The average Charlson comorbidity index (CCI) score was 1.4 (SD=2.0) and 36.4% of patients presented with a CCI score ≥2. Laparoscopic operations were performed in 36.8% (n=12,522) of patients. Postoperative morbidity and mortality were 17.3% (n=5,875) and 0.9% (n=288), respectively. On multivariable analysis, laparoscopic surgery was associated with 70% decreased odds of developing a postoperative complication (OR=0.30, 95%CI: 0.28-0.32, p<0.001) and 84% lower odds of mortality (OR=0.16, 95%CI: 0.10-0.25, p<0.001). The comparative benefit of laparoscopic surgery was, however, greater among surgeons who had a greater experience with laparoscopic surgery. Compared with surgeons with less laparoscopic surgery experience (<5 laparoscopic operations / year), surgeons who had greater experience with laparoscopic surgery (≥15 laparoscopic operations / year) demonstrated a 33% lower odds for postoperative morbidity (OR=0.67, 95%CI: 0.62-0.71) and a 55% lower odds for postoperative mortality (OR=0.45, 95%CI: 0.33-0.62) when a laparoscopic approach was utilized (Figure).

Conclusion: Although laparoscopic surgery was associated with improved postoperative clinical outcomes, the magnitude of this benefit varied with surgeon experience, with the greatest benefit observed among surgeons performing the most laparoscopic operations.

 

54.03 Utilization of Emergency Department Care by Cancer Patients in the United States

A. A. Shah1,2, S. Zafar1, R. Gray2, B. Pockaj2, E. Cornwell1, L. Wilson1, N. Wasif2  1Howard University College Of Medicine,Surgery,Washington, DC, USA 2Mayo Clinic In Arizona,Surgery,Phoenix, AZ, USA

Introduction:  Utilization of emergency department (ED) services by cancer patients has not been well studied. Our objective was to identify common reasons for ED visits among patients with cancer and to identify predictors of subsequent admission.

Methods:  The Nationwide Emergency Department Sample (2009-2012) was queried for patients with a diagnosis of malignant cancer (ICD-9-CM diagnosis codes 140-208.9, 238.4, 289.8) as a secondary diagnosis. Of these, the five most common cancers in the United States, as identified by the American Cancer Society, were examined. Primary diagnosis codes were reviewed for common reasons for presentation to the ED. Descriptive analysis was then performed to describe patient demographics, payor status, discharge disposition, hospital characteristics, and outcomes. Multivariable logistic regression analyses for inpatient admission were used to identify risk factors from among the following domains: age, gender, insurance status, income, and year of admission, for all cancer patients and for each of the most common cancers (Table).
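
One plausible way to fit the admission model described above is sketched below; the column names are hypothetical, and using frequency weights in a GLM is a simplification of full NEDS survey-design variance estimation.

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def admission_model(ed_visits):
    """ed_visits: hypothetical NEDS extract, one row per ED visit, with columns
    admitted (0/1), age, female (0/1), insurance, income_quartile, weekend (0/1),
    year, and discwt (visit-level survey weight); assumes complete cases so the
    weight vector aligns with the model rows."""
    fit = smf.glm(
        "admitted ~ age + female + C(insurance) + C(income_quartile)"
        " + weekend + C(year)",
        data=ed_visits,
        family=sm.families.Binomial(),
        freq_weights=ed_visits["discwt"],
    ).fit()
    return np.exp(fit.params), np.exp(fit.conf_int())  # odds ratios and 95% CIs
```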

Results: A total of 2,279,822 records were analyzed, representing 2% of ED visits and weighted to represent 10,178,361 visits nationally. Mean age was 63.9 (±17.9) years, with a slight female predominance. Medicare was the primary payor for 55.6% and Medicaid for 12.5%, whereas 24.1% had private insurance. Of the 5 most common cancers, patients with lung cancer comprised 11.8% of ED visits, followed by prostate (6.5%), breast (5.7%), colorectal (4.6%), and bladder (1.9%) cancers. Around 65.0% of patients were admitted to the hospital and 31.1% were discharged from the ED. Geriatric patients and those in the highest income quartile were at higher risk of hospital admission. However, female patients, the uninsured, and those visiting on weekends were less likely to be admitted to the hospital (table). The five most common reasons for ED visits were pneumonia (3.5%), abdominal pain (3.5%), urinary tract infection (2.1%), acute exacerbation of bronchitis (1.7%), and acute kidney injury (1.6%). Mortality was 0.4% in the ED and 4.0% among inpatients. Among the five most common cancers, patients with lung cancer (OR[95%CI]: 2.07[2.04-2.10]) had the highest odds of admission, followed by patients with bladder cancer (OR[95%CI]: 1.71[1.67-1.75]) and colorectal cancer (OR[95%CI]: 1.50[1.47-1.52]).

Conclusion: Cancer patients represent an important patient demographic treated in the ED every year. The results of this study help identify the spectrum of clinical conditions cancer patients present with. Recognizing patients at risk for admission can help expedite ED triage to reduce wait times, ensure timely healthcare delivery and identify potentially avoidable admissions.
 

54.01 Nomogram to Predict Risk for Regional Lymph Node Metastasis for Appendiceal Neuroendocrine Tumors

C. Mosquera1, N. Bellamy1, T. L. Fitzgerald1  1East Carolina University Brody School Of Medicine,Division Of Surgical Oncology,Greenville, NC, USA

Introduction:  Currently, size is the primary factor utilized to determine risk of regional nodal metastasis for Appendiceal Neuroendocrine Tumors (A-NET). Here we validate a nomogram combining depth of invasion and size to predict risk of nodal disease. 

Methods:  Patients with resected A-NET from 2004-2013 were identified in the NCDB. 

Results: A total of 3,269 patients were included. The majority were female (56.9%), white (88.1%), had no nodal metastasis (74.9%), and received colectomy (61.5%). On univariate analysis, risk of nodal metastasis was associated with greater depth of invasion (LP 13.3%, MP 22.5%, TS 60.0%, p<0.0001), tumor size (<1 cm 3.6%, 1-2 cm 19.8%, 2-4 cm 45.6%, >4 cm 44.1%, p<0.0001), and extent of surgical resection (appendectomy 12.8%, colectomy 30.0%, p<0.0001). On multivariate analysis, depth of invasion (LP vs MP OR 1.03, p=0.8924; LP vs TS OR 4.02, p<0.0001), size (<1 cm vs 1-2 cm OR 5.81, p<0.0001; <1 cm vs 2-4 cm OR 16.78, p<0.0001; <1 cm vs >4 cm OR 13.02, p<0.0001), and extent of surgical resection (colectomy vs appendectomy OR 2.09, p<0.0001) remained significant. On univariate survival analysis of 5-year DSS, depth of invasion (LP 88.5%, MP 84.8%, TS 58.2%, p<0.0001), size (<1 cm 84.5%, 1-2 cm 86.3%, 2-4 cm 81.5%, >4 cm 75.4%, p=0.0004), and extent of surgical resection (appendectomy 85.3%, colectomy 80.7%, p=0.0006) were predictive of survival. On multivariate survival analysis, increased depth of invasion (LP vs MP HR 1.73, p=0.1709; LP vs TS HR 5.63, p=0.0023) and size (<1 cm vs 1-2 cm HR 0.12, p<0.0001; <1 cm vs 2-4 cm HR 0.34, p=0.0129; <1 cm vs >4 cm HR 0.21, p=0.0034) were associated with survival; however, extent of surgical resection was not (colectomy vs appendectomy HR 1.86, p=0.1188). A nomogram was created to assess the risk of nodal metastasis as determined by size and depth of invasion (see figure). The model accurately predicts risk of lymph node metastasis for A-NET, with an area under the Receiver Operating Characteristic (ROC) curve of 0.77200. To eliminate bias from low lymph node retrieval with appendectomy alone, a model including only colectomy patients was constructed; all results were similar, with an area under the ROC curve of 0.75301.
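
A nomogram of this kind is essentially a rescaled logistic model; the sketch below, with hypothetical column names, shows how the underlying size-plus-depth model and its ROC area could be computed, not the authors' actual nomogram.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def nodal_risk_model(anet):
    """anet: one row per resected A-NET with hypothetical columns
    node_positive (0/1), size_cm, and depth ('LP', 'MP', 'TS'); a published
    nomogram's point scales are rescaled coefficients from a model like this."""
    X = pd.get_dummies(anet[["size_cm", "depth"]], drop_first=True)
    y = anet["node_positive"]
    model = LogisticRegression(max_iter=1000).fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])  # ~0.77 reported above
    return model, auc
```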

Conclusion: This study validates the utility of a nomogram combining depth of invasion and size to predict risk of nodal metastasis of A-NET. Given that depth predicts both risk of lymph node metastasis and mortality, consideration should be given to including these data in AJCC T classification.

53.20 Biliary Duct Injury During Laparoscopic Cholecystectomy: a NSQIP Data Analysis

S. Cassaro1,2, A. Meshesha2, S. Kesavaramanujam2, N. Atherton1,2  1University Of California – Irvine,Surgery,Orange, CA, USA 2Kaweah Delta Health Care District,Acute Care And Trauma Surgery,Visalia, CA, USA

Introduction:

Biliary duct injury (BDI) is a dreaded complication of cholecystectomy. The incidence of BDI during laparoscopic cholecystectomy (LC) is not exactly known. Major BDI is defined as an injury requiring biliary repair or reconstruction and is reported to occur in 0.1 to 0.55% of the cases. Since approximately 750,000 patients undergo LC each year in the US, it can be inferred that at least 750 patients sustain a major BDI every year.

Methods:

We reviewed the most recent five years of NSQIP data to assess the incidence and 30-day outcomes of major BDI after LC. The 2010-2014 NSQIP database of 158,278 cases of LC was searched for diagnostic and procedural codes associated with BDI.
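
A query of this kind reduces to filtering concurrent and reoperation code fields against a list of biliary repair codes; the sketch below is illustrative only, and the code set and column names are placeholders rather than the authors' actual query.

```python
import pandas as pd

# Placeholder set of CPT codes for biliary repair/reconstruction; the abstract
# does not list the exact codes used.
BDI_REPAIR_CODES = {"47760", "47765", "47780", "47800"}

def flag_bdi_cases(lc_cases):
    """lc_cases: NSQIP LC extract with hypothetical concurrent-procedure columns
    (other_cpt1, other_cpt2, ...) and reoperation columns (reop_cpt1, ...)."""
    code_cols = [c for c in lc_cases.columns
                 if c.startswith("other_cpt") or c.startswith("reop_cpt")]
    hits = pd.concat(
        [lc_cases[c].astype(str).isin(BDI_REPAIR_CODES) for c in code_cols],
        axis=1,
    ).any(axis=1)
    return lc_cases[hits]
```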

Results:

The query returned a total of 33 cases of LC that listed one of the selected procedural codes either as an additional procedure at the time of the initial operation or as a reoperation.

A BDI was repaired during the initial procedure in 19 cases. An intraoperative cholangiogram (IOC) was performed during the LC in ten of these patients. Six of the patients were men and 13 were women. The injury was repaired with a bilio-enteric anastomosis in eleven patients, using a Roux-en-Y loop in nine. The remaining nine injuries were repaired primarily in eight patients and with an end-to-end reconstruction in one. The average postoperative length of stay after repair was 6.5 days (range 1 to 16 days), and there were no readmissions. One of the patients who underwent biliary diversion died within thirty days of the procedure.

Fourteen patients underwent BDI repair within 30 days of the index procedure, which included an IOC in only two cases. Eleven of the patients were women and three were men. Seven of these patients were discharged on the day of the initial procedure, while the other seven remained hospitalized after the index LC for an average of 7 days (range 1 to 16 days). A bilio-enteric diversion was used to repair the injury in six patients, and a Roux-en-Y reconstruction was the technique selected in all but one of these cases. A direct ductal repair was performed in the other eight patients. There were no postoperative deaths in this group.

Conclusion:

There is substantial evidence that the incidence of major BDI after LC is between 0.1 and 0.5%, and the vast majority of those injuries should be identifiable within thirty days of the index procedure.

NSQIP is designed to capture all significant events occurring in the thirty days following each tracked procedure, but a query of NSQIP data for codes associated with major BDI after LC yields results that are grossly inconsistent with expected rates, reflecting an apparent BDI incidence of roughly 0.02%.

In its current format, NSQIP data are inadequate to benchmark the risk of major BDI after LC and may grossly underestimate the incidence of this complication. The implementation of procedure-specific registries for commonly performed surgical interventions such as LC may provide better data quality.

 

53.19 Wound Dehiscence after Laparotomy: Who Needs Retention Sutures?

A. Pal1, E. Mahmood3, J. Nicastro2, M. Sfakianos2, T. Dinitto1, S. M. Cohn1  2North Shore University And Long Island Jewish Medical Center,Department Of Surgery,Manhasset, NY, USA 3Northwestern University,Feinberg School Of Medicine,Chicago, IL, USA 1Staten Island University Hospital, Northwell Health,Surgery,Staten Island, NY, USA

Introduction: There is a need for predictive models that can help surgeons identify patients at greatest risk for wound dehiscence in order to guide their management to avoid evisceration. We sought to use a large database in order to examine risk factors for developing this complication after midline laparotomy.

Methods: The American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) is a prospectively collected surgical outcomes database compiled by manual chart abstraction. Exploratory laparotomy cases were queried using the primary CPT code from 2005-2013. The independent factors associated with wound dehiscence were examined by multivariate analysis using SAS JMP Pro 11 (Cary, NC, US). The cohort was split into a training dataset of patients from 2005-2009 and a prospective validation dataset from 2010-2013. A backwards logistic regression analysis was performed to identify predictors of wound dehiscence in the training set. The model was then tested in the validation set to estimate the receiver operating characteristic (ROC) curve and goodness of fit.
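
A hedged sketch of the train/validate workflow described above, with hypothetical predictor and column names; backwards variable selection is omitted for brevity, and the Hosmer-Lemeshow statistic is computed over risk deciles.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

PREDICTORS = ["deep_ssi", "pneumonia", "weight_loss", "preop_sepsis",
              "superficial_ssi", "prior_op_30d", "smoker"]  # hypothetical names

def fit_and_validate(laparotomies):
    """laparotomies: one row per case with a binary 'dehiscence' outcome,
    a 'year' column, and the binary predictors listed above."""
    train = laparotomies[laparotomies["year"] <= 2009]
    valid = laparotomies[laparotomies["year"] >= 2010]
    fit = smf.logit("dehiscence ~ " + " + ".join(PREDICTORS),
                    data=train).fit(disp=False)
    pred = fit.predict(valid)
    c_stat = roc_auc_score(valid["dehiscence"], pred)
    # Hosmer-Lemeshow: observed vs expected events across deciles of risk
    deciles = pd.qcut(pred, 10, duplicates="drop")
    obs = valid["dehiscence"].groupby(deciles).sum()
    exp = pred.groupby(deciles).sum()
    n = pred.groupby(deciles).size()
    hl_stat = (((obs - exp) ** 2) / (exp * (1 - exp / n))).sum()
    hl_p = chi2.sf(hl_stat, df=len(obs) - 2)
    return c_stat, hl_p
```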

Results: A total of 16,793 patients were included in our analysis. 248 (1.47%) of these patients had a wound dehiscence. Significant predictors of wound dehiscence were: deep wound infection (AOR=5.98, 95% CI 3.06 to 10.9, P<0.0001), postoperative pneumonia (AOR=3.25, 95% CI 1.99 to 5.11, P<0.0001), preoperative weight loss (AOR=3.11, 95% CI 1.29 to 10.2, P=0.0083), preoperative sepsis (AOR=3.03, 95% CI 1.91 to 4.70, P<0.0001), superficial wound infection (AOR=2.97, 95% CI 1.63 to 5.05, P=0.0007), previous operation in the last 30 days (AOR=1.82, 95% CI 1.19 to 2.73, P=0.0061), and smoking (AOR=1.49, 95% CI 1.01 to 2.18, P=0.044). The c-statistic for our model was reasonable: 0.73 in the training set and 0.70 in the validation set. The Hosmer-Lemeshow goodness-of-fit statistic was 0.89.

Conclusion: We identified a number of independent risk factors for the development of wound dehiscence, which may inform clinicians and improve the selection of patients for measures, such as retention sutures, that could reduce the likelihood of evisceration after laparotomy.

53.18 Novel Method For Confirming Appropriate Nerve Integrity Monitor (NIM) Endotracheal Tube Positioning

I. J. Behr1, S. Mansoor1, M. McLeod1  1Michigan State University,Surgery,Lansing, MI, USA

Introduction:

Surgical injury to the recurrent laryngeal nerve (RLN) is a feared complication of head and neck surgery due to the potential for significant permanent functional disability. Originally recommended by Lahey, intra-operative identification and protection of the nerve remains the gold standard for minimizing RLN injury. Over time, less invasive monitoring systems and methods to protect the RLN during surgical procedures have been developed. One such method is the endotracheal Nerve Integrity Monitoring (NIM) system. This study demonstrates a novel method to more accurately confirm placement of the NIM endotracheal tube.

Methods:

176 patients were enrolled in this prospective clinical trial. Each patient underwent surgery involving dissection around the recurrent laryngeal nerve, thus requiring monitoring. These surgeries included partial thyroidectomy, near-total thyroidectomy, and total thyroidectomy. All patients were placed under general anesthesia and intubated with a NIM endotracheal tube.

All patients had both the tap test and train-of-four stimulation performed prior to beginning surgery. The results were determined from recordings through the NIM monitoring system.

The tap test was performed by percussing the midline trachea and recording the response through the NIM device. The train-of-four test was performed by the anesthesia team with two electrical pads placed over the facial musculature. Train-of-four stimulation was generated with an electrical stimulator, and the response was recorded through the NIM device.

The most accurate method to confirm placement is direct stimulation of the vagus nerve, producing contraction of the vocal cords. Three of the 176 patients consented to direct stimulation of the vagus nerve as a control study. This was done by opening the carotid sheath, freeing a 1 cm section of the nerve from the surrounding tissue, and directly stimulating the nerve. The results of this test were also recorded through the NIM device.

Results:
Of the 176 patients, 131 (74.4%) were found to have adequate positioning using the tap test. With train-of-four stimulation, 170/176 (96.6%) were found to have accurate positioning. Using the McNemar test, train-of-four peripheral nerve stimulation showed significantly more positive findings than the tap test (p < 0.001).
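
The paired comparison above can be run with a standard McNemar test; the 2x2 cell counts below are illustrative placeholders chosen only to match the reported margins (131/176 tap-positive, 170/176 train-of-four-positive), not the study's actual cross-tabulation.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired 2x2 table for the same 176 patients:
# rows = tap test (adequate / not adequate),
# columns = train-of-four (adequate / not adequate).
# Counts are placeholders consistent with the reported margins, not study data.
paired = np.array([[129, 2],
                   [41, 4]])
result = mcnemar(paired, exact=True)  # exact binomial test on discordant pairs
print(result.statistic, result.pvalue)
```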

Conclusion:

This prospective clinical study of 176 patients demonstrated a novel method to determine accurate positioning of the NIM tube using train-of-four electrical stimulation. By causing contraction of the musculature and vocal cords overlying the NIM tube, more accurate placement was confirmed than with the commonly used method of simply tapping the larynx (p < 0.001). This minimally invasive and improved method of confirming accurate positioning of the NIM tube could therefore minimize the risk of RLN injury.

53.17 Body Mass Index is Associated with Surgical Site Infection (SSI) in Patients with Crohn’s Disease

M. M. Romine1,2, A. Gullick1,2, M. Morris1,2, L. Goss1,2, D. Chu1,2  1University Of Alabama at Birmingham,Gastrointestinal Surgery,Birmingham, Alabama, USA 2VA Birmingham HealthSystem,General Surgery,Birmingham, AL, USA

Introduction:
Controversy exists regarding the association of body mass index (BMI) with SSI in patients with inflammatory bowel disease (IBD). Previous conclusions have been limited by single-institution studies and by the inclusion of both Crohn’s disease (CD) and ulcerative colitis patients. In this study, we used a national dataset to investigate the association of BMI with SSI in patients with CD. We hypothesized that higher BMI is associated with higher risk for SSI in CD patients.

Methods:
Using the 2012-2014 ACS-NSQIP Procedure Targeted Database, we identified all patients with CD who underwent colectomy. Patients were stratified by weight status into underweight, normal weight, overweight, and BMI class I (30-34.9), II (35-39.9), and III (>40). Patient demographics, preoperative comorbidities, and surgical characteristics were compared. Primary outcomes were wound complications (SSI, organ space SSI, anastomotic leak); secondary outcomes included other NSQIP-reported complications. Chi-square and Wilcoxon rank-sum tests were used to determine differences among categorical and continuous variables, respectively. Stepwise backwards logistic regression analyses were performed to identify risk factors for SSI.
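
The weight-status strata used above follow standard BMI cut points; a minimal sketch of the stratification follows, assuming hypothetical height and weight columns rather than the NSQIP field names.

```python
import numpy as np
import pandas as pd

def bmi_class(patients):
    """patients: NSQIP-style extract with hypothetical height_in (inches) and
    weight_lb (pounds) columns; returns the weight-status strata used above."""
    bmi = 703 * patients["weight_lb"] / patients["height_in"] ** 2
    return pd.cut(bmi,
                  bins=[0, 18.5, 25, 30, 35, 40, np.inf],
                  labels=["underweight", "normal weight", "overweight",
                          "class I", "class II", "class III"],
                  right=False)
```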

Results:
Of 3,734 patients with CD, 12.29% were underweight, 43.92% normal weight, 24.24% overweight, 12.35% BMI class I, 4.79% class II, and 2.41% class III. Overall, 24.4% of patients were smokers, 4.05% were diabetic, and 62.94% were on steroids or other immunosuppressants. Patients in higher BMI classes were more likely to have diabetes: 3.47% in class I, 6.7% in class II, and 8.89% in class III (p<0.001). A larger percentage of class III obese patients (27.45%) were classified as ASA 4-5 (p<0.001). Higher BMI was associated with a greater rate of SSI: 6.75% in underweight, 6.4% in normal weight, 10.06% in overweight, 8.89% in class I, 12.85% in class II, and 16.67% in class III (p<0.001). Organ space SSI rates were highest in underweight patients (12.2%), compared with 7.13% in normal weight, 6.85% in overweight, and 7.38%, 5.59%, and 2.22% in BMI classes I, II, and III, respectively (p<0.001). There was no significant difference in anastomotic leak rate (range 2.8-7.6%, p>0.05). Higher BMI was also associated with respiratory complications (class III 8.9% vs normal 2.2%, p=0.1) and ileus (class III 20.2% vs 13.4%, p=0.01). On multivariate analysis, BMI remained an independent predictor of SSI: BMI class III had the highest odds of SSI (OR 2.8, CI 1.5-5.2), in addition to class II (OR 2.1, CI 1.3-4.5), class I (OR 1.4, CI 0.9-2), and overweight status (OR 1.5, CI 1.1-2.1), when compared with normal-weight individuals.

Conclusion:
Patients with CD and high BMI are at increased risk for SSI but not for organ space SSI or anastomotic leak. Underweight CD patients are at increased risk for organ space SSI. Targeting BMI may be one actionable opportunity to reduce post-operative SSI rates.
 

53.16 Indocyanine Green (ICG) Fluorescence-Guided Parathyroidectomy for Primary Hyperparathyroidism

J. C. DeLong1, E. P. Ward1, T. M. Lwin1, K. T. Brumund1, K. J. Kelly1, S. Horgan1, M. Bouvet1  1University Of California – San Diego,Surgery,San Diego, CA, USA

Introduction:  Surgical resection is the only definitive treatment for primary hyperparathyroidism. Effective treatment requires successful intraoperative localization of the aberrant gland. Classic preoperative imaging includes ultrasound, nuclear scintigraphy, and, in some cases, axial imaging; however, these modalities have limited utility in the operating room. Indocyanine green (ICG) is a nontoxic organic dye with a high safety profile that can be detected with near-infrared fluorescence imaging systems when administered intravenously. ICG is currently used in other surgical procedures because fluorescence intensity correlates with relative blood supply. In the present report, we evaluated the utility of ICG for intraoperative localization of parathyroid glands.

Methods:  ICG fluorescence angiography was performed during 30 open parathyroidectomies for primary hyperparathyroidism over a 12-month period. 7.5 mg of ICG was administered intravenously to guide surgical navigation and confirmation, using a commercially available fluorescence imaging system. Video files were evaluated and graded by three independent surgeons for strength of enhancement using an adapted numeric scoring system (Fig. 1).

Results: 70% of patients were female. Patient age ranged from 40 to 87 years (average 64). 26 (87%) patients had a single adenoma, 1 (3%) patient had a double adenoma, and 3 (10%) had hyperplasia. Of the 30 patients, 22 (73.3%) parathyroid glands were rated as showing strong enhancement, 5 (16.6%) demonstrated mild to moderate enhancement, and 3 (10%) exhibited little or no enhancement. Of the 27 patients who had a preoperative sestamibi scan, a parathyroid adenoma was identified in 14, while 13 failed to localize. Of the 13 patients who failed to localize, all 13 (100%) had an adenoma that fluoresced on ICG imaging: 10 patients (76.9%) had strong fluorescence and 3 patients (23.1%) had moderate fluorescence. There were no adverse events.

Conclusion: ICG fluorescence angiography can be used effectively for intraoperative localization and confirmation of parathyroid glands in patients with primary hyperparathyroidism. ICG proved reliable even in cases where the glands were not identified on preoperative sestamibi scanning. The technique can be used to quickly distinguish parathyroid glands from lymph nodes, thymus, and other benign fatty tissue that may grossly resemble a parathyroid, owing to differences in blood supply and gland hypervascularity. The technique does not replace intraoperative parathyroid hormone (PTH) testing, because ICG angiography cannot currently distinguish an adenoma from a normal gland. ICG angiography has the potential to assist surgeons in identifying parathyroid glands rapidly with minimal risk.

53.15 BMI Class Increases Risk of Complication In Open Ventral Hernia Repair: A NSQIP study

L. Owei2, K. Dumon3, R. Kelz3, D. T. Dempsey3, N. Williams4  2University Of Pennsylvania,Department Of Surgery,Philadelphia, PA, USA 3University Of Pennsylvania,Department Of Surgery, Perelman School Of Medicine,Philadelphia, PA, USA 4University Of Pennsylvania,Division Of Surgical Education, Department Of Surgery, Perelman School Of Medicine,Philadelphia, PA, USA 1Hospital Of The University Of Pennsylvania,Philadelphia, PA, USA

Introduction:

Recent studies have been inconclusive about whether the degree of obesity is an independent risk factor for adverse outcomes following ventral hernia repair (VHR). This study aims to elucidate the influence of BMI class on complications in open VHR.   

Methods:

A retrospective analysis was conducted using data from the American College of Surgeons National Surgical Quality Improvement Program (NSQIP) database from 2005 to 2015. Univariate analyses, namely the Chi-square test for categorical variables and ANOVA for continuous variables, were used to examine the association between BMI class and patient characteristics, comorbidities, re-operation, and risk of perioperative complication. Logistic regression was also used to assess the risk of complication by BMI class with adjustment for potential confounders. All analyses included the entire cohort.

Results:

Of the 19,145 patients who underwent VHR between 2005 and 2015, 53.6% were obese. When stratified by BMI class, we found significant differences in age, gender, race, and comorbidities (p < 0.001 for all). In the cohort, 65 patients (0.3%) lost > 10% of their body weight in the 6 months prior to surgery. The average operating time was 80.1 minutes. Higher BMI class was significantly associated with increased mean operating time (p < 0.001). Unplanned re-operation occurred in only 0.98%, and eight patients died within 30 days of surgery; neither of these outcomes was significantly associated with BMI class. In contrast, all other complications (surgical, medical, and respiratory) were significantly associated with BMI class (p < 0.0001). This association remained after adjusting for age, sex, race, and comorbidities. Patients with a BMI ≥ 30 kg/m2 were significantly more likely to have a complication than patients with a BMI ≤ 25 kg/m2 (Table 1), and the risk of complications increased further with increasing BMI class.

Conclusion:

Being in a higher BMI class is a risk factor for surgical, medical, and respiratory complications after VHR. Moreover, patients with a BMI > 40 kg/m2 have 2.38 times the risk of complications, with the odds ratio increasing with increasing BMI class. As only 0.3% of patients were able to lose > 10% of their body weight preoperatively, our findings suggest that bariatric surgery prior to VHR might be considered for patients with a BMI > 40 kg/m2 to reduce their risk of complications.

53.14 Ultrasound Guided FNA of Thyroid With and Without Sedation

R. M. Kholmatov1, F. Murad1, D. J. Monlezun1, E. Kandil1  1Tulane University School Of Medicine,Surgery,New Orleans, LA, USA

Introduction:
Ultrasound-guided fine needle aspiration (FNA) biopsy is a crucial method for the preoperative diagnosis of thyroid disease. It is usually well tolerated under local anesthesia; however, many physicians offer their patients sedation. Herein, we aimed to examine the association between performing the procedure under sedation and specimen adequacy.

Methods:
We performed a retrospective review of the electronic medical records of patients who underwent ultrasound-guided FNA of thyroid nodules by a single surgeon over an eight-year period. Clinicodemographic characteristics such as age, gender, race, BMI, nodule size, vascularity, anticoagulation status, and cytopathology results were collected. Patients were divided into two groups, sedated and non-sedated.

Results:
A total of 1,568 thyroid biopsies were performed in 802 patients. Mean age was 52.5±14.5 years, and 80.2% of patients were women. Sixty patients requested sedation and underwent biopsies of 96 (6.1%) nodules. There was no statistical difference between the sedated and non-sedated groups with regard to age, gender, nodule size, nodule vascularity, or anticoagulation status (p>0.05). The non-diagnostic sample rate was 5.5% (81 nodules) in the non-sedated group and 7.3% (7 nodules) in the sedated group (p=0.46). The post-FNA hematoma rate was 0.5% (8) in the non-sedated group and 1.04% (1) in the sedated group (p=0.53).

Conclusion:
Performing FNA of thyroid nodules under sedation is safe but does not improve the non-diagnostic sample rate or the post-FNA hematoma rate. Further studies are warranted to determine whether sedation is appropriate in particular hospital settings.
 

53.13 The management of adhesive small bowel obstruction: a decision analysis of competing strategies.

R. Behman1, P. Karanicolas1, A. Nathens1, J. Jung1, N. Look Hong1  1University of Toronto,Department Of Surgery,Toronto, Ontario, Canada

Introduction:
Adhesive small bowel obstruction (aSBO) is one of the most common reasons for general surgery admission.  Current guidelines advocate for a trial of conservative management (TCM) in patients without signs of bowel ischemia. However, emerging evidence suggests conservative management may be associated with increased risk of recurrence.  Furthermore, when TCM fails, patients undergoing delayed operative management experience increased mortality and morbidity.  The purpose of this decision analysis is to compare two competing strategies for the management of aSBO: early operative management (EOM) and the current standard of care, TCM.

Methods:
We performed a decision analysis with microsimulation and Markov modeling to compare short- and long-term outcomes following treatment with either TCM or EOM at the index admission for aSBO.  We defined EOM as operative management within 24 hours.  The TCM strategy could succeed or fail and result in delayed operation (>24 hours).  Patients’ disease course was modeled over a 10-year time horizon using probabilities derived from 18 previously published studies.  Outcomes modeled included the total number of recurrences, complications, and bowel resections as well as the overall probability of an aSBO-related mortality associated with each treatment strategy.  Sensitivity analyses were performed to test the robustness of the model.
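
A minimal microsimulation in the spirit of the model described above; the annual probabilities below are placeholders rather than the values pooled from the 18 source studies, and the health-state structure is greatly simplified.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder annual probabilities (illustrative only, not the model's inputs)
P_RECUR = {"EOM": 0.05, "TCM": 0.09}   # annual probability of aSBO recurrence
P_DEATH_PER_EPISODE = 0.006            # aSBO-related mortality per admission

def simulate(strategy, n_patients=100_000, years=10):
    """Simulate recurrence and aSBO-related death over a 10-year horizon."""
    recurrences = np.zeros(n_patients, dtype=int)
    alive = np.ones(n_patients, dtype=bool)
    for _ in range(years):
        recur = alive & (rng.random(n_patients) < P_RECUR[strategy])
        recurrences += recur
        died = recur & (rng.random(n_patients) < P_DEATH_PER_EPISODE)
        alive &= ~died
    return {"any_recurrence": (recurrences > 0).mean(),
            "two_or_more": (recurrences >= 2).mean(),
            "mortality": 1 - alive.mean()}

print(simulate("EOM"), simulate("TCM"))
```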

Results:
Over a 10-year time horizon, patients treated with EOM are less likely to experience a recurrence of aSBO than those treated with TCM (36% vs. 52%) and are 36% less likely to experience two or more recurrences.  Patients treated with EOM are more likely to undergo bowel resection (32% vs 16%) and are more likely to experience complications (34% vs. 24%).  A sensitivity analysis was performed to account for potential confounding by indication associated with the use of retrospective data.  When controlling for patients in the EOM arm who were likely assigned to this treatment due to signs of bowel ischemia, the two treatment strategies had similar complication rates (29% with EOM and 26% with TCM).  Peri-admission mortality over the 10-year time horizon was also similar between the two groups (0.06 vs 0.056).  

Conclusion:
Over a 10-year time horizon, EOM is associated with lower recurrence. Complication rates are similar between the two treatment strategies when controlling for patients who likely had signs of bowel ischemia at presentation. EOM may be a suitable treatment strategy for patients with aSBO without signs of bowel ischemia. Future studies should focus on cost-effectiveness in order to further assess the impact of different treatment strategies on the healthcare system and to effect changes in clinical practice.
 

53.12 Assessment of Shared Decision-Making in Surgical Evaluation for Gallstones at a Safety-Net Hospital

K. M. Mueck1, M. I. Leal1, C. C. Wan1, B. F. Goldberg1, T. E. Saunders1, S. G. Millas1, M. K. Liang1, T. C. Ko1, L. S. Kao1  1University Of Texas Health Science Center At Houston,General Surgery,Houston, TX, USA

Introduction: Shared decision-making (SDM) promotes informed, patient-centered healthcare. However, little is known about vulnerable patients’ perceptions regarding SDM during surgical evaluation and the accuracy of existing tools for measuring SDM. The purpose of this study was to evaluate whether a commonly used tool, the Shared Decision Making Questionnaire (SDMQ9), accurately reflects perceptions of SDM among medically underserved patients seeking surgical evaluation for gallstones at a safety-net hospital.

Methods: An exploratory qualitative study was conducted in a sample of adult patients with gallstones at a safety-net surgery clinic between May and July 2016. Semi-structured interviews were conducted after the initial surgical consultation. Patients were administered the SDMQ9, which has been validated in English and Spanish, and the Autonomy Preference Scale (APS). Interviews were analyzed using thematic content analysis, and investigator triangulation was used to establish credibility. Interviews and questionnaires were determined to reflect SDM if responses were equivalent to “strongly agree” or “completely agree”. Univariate analyses were performed to identify factors associated with SDM and to compare the results of the surveys with those of the interviews.

Results: The majority of patients (N=20) were female (85%), Hispanic (80%), Spanish speaking (70%), and middle-aged (46.8 ± 15 years). Most had a diagnosis of symptomatic cholelithiasis (55%), though 4 patients (20%) had non-biliary diagnoses.  Non-operative management was chosen for 8 (40%) patients following surgical consultation. The proportion of patients who perceived SDM was significantly higher based on the SDMQ9 versus interviews (85% vs 35%, p<0.01). Age, sex, race/ethnicity, language, diagnosis, desire for autonomy based on the APS, and decision to pursue surgery were not associated with SDM. Analysis of component questions similarly demonstrated significantly higher perceived SDM based on the SDMQ9 in patient involvement in decision-making (90% vs 35%, p<0.01), discussion of treatment options (85% vs 50%, p=0.02), physician explanation of all information (90% vs 45%, p=0.04), and joint weighing of treatment options (75% vs 20%, p<0.01).  Interview themes suggest that contributory factors to this discordance include patient unfamiliarity with the concept of SDM, deference to the surgeons’ authority, lack of discussion about treatment options, and confusion between aligned versus shared decisions.

Conclusion: Understanding patient perspectives regarding SDM is crucial to providing informed, patient-centered care. Discordance between two methods for assessing vulnerable patients’ perceptions of SDM during surgical evaluation suggests that modifications to current measurement strategies may need to occur when assessing SDM in vulnerable patients. Furthermore, such metrics should be assessed for correlation with patient-centered outcomes.

 

53.11 Hospital Volume and Outcomes in Elderly Emergency General Surgery Patients

A. B. Pratt1, S. Gale1, V. Y. Dombrovskiy1, M. E. Lissauer1  1Robert Wood Johnson – UMDNJ,Acute Care Surgery,New Brunswick, NJ, USA

Introduction:  Though controversial, volume may be related to quality in elective surgery. Elderly emergency general surgery patients (EGSP) have not been studied in this manner. Our goal was to compare EGSP outcomes in high-volume (HVH) and low-volume (LVH) hospitals.

Methods:  The Nationwide Inpatient Sample 2012-2013 was queried for EGSP hospitalizations, and hospitals were ranked by annual EGSP volume. EGSP were identified by ICD code. The top 10% of hospitals were classified as HVH. Complications, mortality, length of stay (LOS), and cost were compared by chi-square test, multivariable analysis, and Wilcoxon rank-sum test.
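
The volume classification described above reduces to counting hospitalizations per hospital and flagging the top decile; a minimal sketch follows, with hypothetical column names rather than the NIS field names.

```python
import pandas as pd

def classify_hospital_volume(egs):
    """egs: one row per EGSP hospitalization with a hypothetical hospital_id
    column; the top decile of hospitals by annual EGSP volume is labeled HVH."""
    volume = egs.groupby("hospital_id").size().rename("egs_volume")
    cutoff = volume.quantile(0.90)
    groups = (volume >= cutoff).map({True: "HVH", False: "LVH"}).rename("volume_group")
    return egs.merge(groups, left_on="hospital_id", right_index=True)
```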

Results: HVH represented 10% of hospitals and 29.7% of EGSP hospitalizations. In both HVH and LVH, approximately 30% of EGSP underwent surgery. EGSP in HVH were older (77.7±7.9 years vs 77.5±7.9; P<.0001), more likely to be Caucasian (78.3% vs 72.6%; P<.0001), and more likely to be male (HVH=57.6%, LVH=57.0%; P<.0001). In chi-square analysis, HVH demonstrated more complications (45.3% vs 44.2%; P<.0001), including cardiac (2.3% vs 2.2%; P=.003), respiratory (3.7% vs 3.34%; P<.0001), and renal (16.2% vs 16.0%; P<.0001), but fewer infections (17.8% vs 18.4%; P<.0001). Mortality was greater in HVH (2.8% vs 2.6%; P=.0002). However, in multivariable logistic regression analysis adjusting for patient age, gender, race, comorbidities, complications, EGS area, and need for surgery, the difference in mortality disappeared. Mortality in both groups was still greater if the patient had surgery. LOS was greater in HVH (5.7 days vs 5.3 days in LVH; P<.0001), but total hospital cost was lower ($11,432 vs $11,509; P<.0001).

Conclusion: There is no mortality difference between HVH and LVH in the treatment of EGSP. LOS is greater in HVH, and patients may have more complications. Cost is statistically lower in HVH despite the longer LOS.

 

53.10 Assessing the Risk of Hypercalcemic Crisis in Patients with Primary Hyperparathyroidism

A. J. Lowell1, N. M. Bushman1, X. Wang1, Y. Ma1, R. S. Sippel1, S. C. Pitt1, D. F. Schneider1, R. W. Randle1  1University Of Wisconsin,Department Of Surgery,Madison, WI, USA

Introduction:  Hypercalcemic crisis (HC) is a rare, potentially life-threatening complication of hypercalcemia.  Primary hyperparathyroidism (PHPT) is the most common cause of hypercalcemia and can manifest as hypercalcemic crisis. This study aims to identify patients with PHPT at greatest risk for developing HC.

Methods:  This retrospective cohort study included patients with a pre-operative calcium of at least 12 mg/dL undergoing initial parathyroidectomy for PHPT from 11/2000–03/2016. This cohort was separated into two groups: 1) those with HC, defined as patients hospitalized and treated for hypercalcemia, and 2) those without HC. We compared the two groups using Mann-Whitney U tests and chi-squared tests where appropriate. Multivariable logistic regression identified predictors of HC. Additionally, we performed a classification and regression tree (CART) analysis to produce a decision tool that can classify patients by risk of HC.
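
A CART of this kind can be fit with a standard decision-tree learner; the sketch below uses hypothetical predictor names and arbitrary tree-size settings, so it illustrates the approach rather than reproducing the authors' model.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

def fit_crisis_tree(phpt):
    """phpt: one row per patient with a binary 'crisis' outcome and hypothetical
    predictors preop_calcium, preop_pth, cci, vitamin_d, kidney_stones (0/1)."""
    features = ["preop_calcium", "preop_pth", "cci", "vitamin_d", "kidney_stones"]
    tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=10, random_state=0)
    tree.fit(phpt[features], phpt["crisis"])
    # Readable splits, e.g. calcium >= 13.25 mg/dL then CCI >= 4 as reported below
    return export_text(tree, feature_names=features)
```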

Results: Of the patients meeting inclusion criteria, 29 (15.8%) had HC and 154 (84.2%) did not. The two cohorts were similar in age, gender, alcohol use, smoking status, BMI, and Charlson comorbidity index (CCI). Patients with HC were more likely to have a history of kidney stones than patients without HC (31.0% vs. 14.3%; P=0.039). Compared to those without HC, patients with HC also had higher pre-operative calcium (median 13.8 vs. 12.4 mg/dL; P<0.001), higher pre-operative parathyroid hormone (PTH) (median 318 vs. 160 pg/mL; P=0.001), and lower pre-operative total vitamin D (median 16 vs. 26 ng/mL; P<0.001). Cure rates with parathyroidectomy were similar in both groups, but nearly double the proportion of patients with HC required resection of more than one gland compared to patients without HC (24.1 vs. 12.3%, P=0.12). In multivariable analysis, higher pre-operative calcium (Odds Ratio [OR] 1.7, 95% Confidence Interval [CI] 1.1-2.5, P=0.01), elevated PTH (OR 1.0, 95% CI 1.0-1.0, P=0.01), and history of kidney stones (OR 3.0, 95% CI 1.1-8.2, P=0.04) were independently associated with HC. The CART decision tree (Figure 1) revealed that over 90% of patients with a calcium ≥ 13.25 mg/dL and a CCI ≥ 4 developed HC. Additionally, 60% of patients with calcium ≥ 13.25, CCI < 4, and PTH ≥ 394 also had crisis. The CART model carried an overall predictive accuracy of 90%, and a positive predictive value of 76%. 

Conclusion: These data indicate that patients with a calcium ≥ 13.25 mg/dL, PTH ≥ 394 pg/mL, and a CCI ≥ 4 are at increased risk for developing HC. The decision tool reported here can help identify patients at greatest risk for developing HC and allow surgeons to expedite parathyroidectomy accordingly.

 

53.09 Comparing Surgical Outcomes Between Vertical Sleeve Gastrectomy and Roux-en-Y Gastric Bypass Today

S. Pagkratis3, P. R. Armijo3, Y. Wang5, J. Mitchell4, V. Kothari3  3University Of Nebraska College Of Medicine,General Surgery,Omaha, NE, USA 4University Of Nebraska College Of Medicine,College Of Medicine,Omaha, NE, USA 5University Of Nebraska,College Of Public Health,Omaha, NE, USA

Introduction: Obesity is evolving into an epidemic in the United States. Bariatric surgery procedures are performed more frequently, and patients’ access to bariatric centers is easier. Since the first weight loss operations were performed several decades ago, the utilization of each procedure has changed dramatically. Currently, sleeve gastrectomy (SG) and Roux-en-Y gastric bypass (RYGB) are the two most commonly performed procedures, whereas gastric bands and other operations are now rarely used. Additionally, the implementation of new technology has shifted the approach to these procedures from open (O) to laparoscopic (L) and, more recently, to robotic-assisted (RA). The aim of this study is to compare surgical outcomes between these two most popular procedures across all three approaches.

Methods: The UHC clinical database resource manager (CDB/RM) was queried for adult patients who underwent either open, laparoscopic or robotic RYGB or SG. Data for fourteen different surgical outcomes were collected and statistical analyses were conducted using IBM SPSS v23.0.0, with α=0.05.

Results: Between Jan/2013 and Sep/2015, a total of 27,901 patients underwent RYGB (O: 2,393, L: 23,902, RA: 1,606) and 41,318 patients underwent SG (O: 1,255, L: 37,766, RA: 2,297). For both procedures, the majority of patients had a laparoscopic approach (RYGB: 85.7%, SG: 91.4%). For each approach, patients were then stratified by severity-on-admission status (minor vs major severity), yielding six groups for each procedure. Among minor-severity patients, L-RYGB patients more frequently suffered from aspiration pneumonia (L-RYGB: 0.15%, L-SG: 0.05%, p<0.001), GI hemorrhage (L-RYGB: 0.43%, L-SG: 0.05%, p<0.001), post-op infection (L-RYGB: 0.13%, L-SG: 0.05%, p=0.001), post-op shock (L-RYGB: 0.14%, L-SG: 0.08%, p=0.027), and death (L-RYGB: 0.04%, L-SG: 0.01%, p=0.008) than equal-severity L-SG patients. Additionally, major-severity patients who underwent O-RYGB and O-SG had high mortality (O-RYGB: 10.59%, O-SG: 7.79%) and post-operative infection (O-RYGB: 9.26%, O-SG: 9.30%) rates, approaching 10%. Notably, among patients treated with the RA approach, no statistically significant difference in outcomes was identified between SG and RYGB.

Conclusion: SG and RYGB are currently the most commonly performed bariatric procedures, with the laparoscopic approach predominating. Analysis of a large national database of more than 69,000 patients reveals that patients who undergo L-RYGB have worse outcomes than L-SG patients. Additionally, our analyses suggest that this difference is eliminated when the RA approach is used. Moreover, the open approach is associated with relatively higher mortality and post-operative infection risk. This information should be taken into consideration when we educate our patients and assist them in choosing the procedure that is best for them.