38.02 Gender Differences in Residency: Duty Hour Utilization, Burnout and Psychological Wellbeing

A. R. Dahlke1, J. K. Johnson1,3, C. C. Greenberg4, R. Love1, L. Kreutzer1, D. B. Hewitt1,5, C. M. Quinn1, K. Engelhardt1,6, K. Y. Bilimoria1,2  1Northwestern University - Feinberg School Of Medicine, Surgical Outcomes And Quality Improvement Center (SOQIC), Department Of Surgery, Chicago, IL, USA; 2American College Of Surgeons, Chicago, IL, USA; 3Northwestern University - Feinberg School Of Medicine, Center For Healthcare Studies In The Institute For Public Health And Medicine, Chicago, IL, USA; 4Wisconsin Surgical Outcomes Research (WiSOR) Program, Department Of Surgery, Madison, WI, USA; 5Thomas Jefferson University Hospital, Department Of Surgery, Philadelphia, PA, USA; 6Medical University Of South Carolina, Department Of Surgery, Charleston, SC, USA

Introduction: As the number of women in surgical residency programs continues to increase, there is growing recognition that women and men may enter, experience, and even leave residency programs differently. Recent studies have shown that up to 65% of surgical residents experience some degree of burnout and challenges to their wellbeing. Our objectives are to (1) assess differences in how male and female general surgery residents utilize duty hour regulations and experience burnout and psychological wellbeing, and (2) examine reasons why women and men may have differing experiences with duty hours, aspects of burnout, and issues with psychological wellbeing.

Methods: A total of 7,395 surgical residents completed a survey (99% response rate) regarding how often and why they exceeded the 2011 standard duty hour limits, as well as aspects of burnout and psychological wellbeing. Hierarchical logistic regression models were developed to examine the association between gender and each resident outcome. In addition, 98 semi-structured interviews were conducted with 42 faculty and 56 residents. Transcripts were analyzed thematically using a constant comparative approach.

Results: Female residents more frequently reported staying in the hospital >28 hours or violating the 80-hour workweek maximum ≥3 times in a month, as well as more frequently feeling fatigued and burned out from their work (P<0.001). Female residents also less frequently reported treating patients as “impersonal objects” or “not caring” what happens to patients (P<0.001). Women more often reported losing sleep due to worry, being unable to make decisions, feeling constantly under strain, being unable to overcome difficulties, feeling unhappy or depressed, feeling a loss of self-confidence, and thinking of themselves as worthless (P<0.01). In adjusted analyses, all associations remained significant. Themes identified in the qualitative analysis as possible contributors to gender differences in residency include: a lack of mentorship/leadership roles by women surgeons, dual role responsibilities (surgeon and family), co-workers' failure to recognize gender differences (gender blindness), and gender-based differences in pressures and challenges as well as in approaches to patient care.

Conclusion: Our study found that women report working extended shifts more often than men and more often report factors contributing to burnout and poor psychological wellbeing. This mixed-methods study adds to the existing literature on resident wellbeing and calls for a closer look at how gender schemas drive differences in the way male and female surgeons work, behave, and ultimately cope during residency. Focusing future research on the differences in how women and men navigate residency, and on their social, emotional, and mentoring needs, may help us develop policy recommendations as well as specific programmatic or cultural interventions.

 

37.09 Treatment Outcomes of Manual Lymphatic Drainage in Pediatric Lymphedema Patients

K. Ali1, C. Dougherty2, R. Maricevich1,2, I. Iacobas1,2  1Baylor College Of Medicine, Houston, TX, USA; 2Texas Children’s Hospital, Houston, TX, USA

Introduction:
Generalized pediatric lymphedema is primarily due to congenital malformations or lymphatic dysplasia. Manual lymphatic drainage (MLD) via the Vodder technique is a popular therapeutic modality that incorporates superficial and deep massage to soften tissue, increase lymphatic flow, and improve functional limb performance. However, there is sparse literature on MLD outcomes in pediatric patients with lymphedema. Our purpose is to quantitatively measure the effect of MLD on pediatric lymphedema based on differences in girth, weight, and functional performance before and after therapy.

Methods:
A retrospective chart review was performed on pediatric patients with primary lymphedema who underwent a 1-month course of MLD between 2015 and 2017. Data collected included weight, limb girth, extremity strength, pain scores, and functional limb performance before and after MLD. Patients with other causes of edema (e.g., heart failure, renal failure) were excluded. Descriptive statistics were used to quantify the reductive effect of MLD.

Results:
Ten children with primary lymphedema who completed the MLD course were identified (median age 8 years, range 0.4-18 years). Immediately following MLD, weight decreased by 2-19%, and limb girth decreased by 4-27% in lower extremities, 0-10% in upper extremities, and 5-22% in truncal regions. Reduction was more pronounced in the distal extremities than in the proximal extremities after therapy. Validated functional questionnaires showed at least 50-60% improvement in limb performance in half of the patients. Clinically, pain scores improved by 80-100%, soft tissue softened with improved skin quality, and range of motion of affected limbs improved, as noted during physical therapy sessions. Two patients had minimal improvement in limb girth and range of motion after MLD and subsequently underwent sclerotherapy and lymphovenous bypass surgery.

Conclusion:
Manual lymphatic drainage is not a cure, but with consistent use it does reduce the extent of extremity lymphedema. Noticeable improvements include decreased limb girth, particularly of the distal extremities, softening of skin and tissue, and favorable patient-reported functional outcomes. These findings suggest that MLD can be a reliable therapeutic modality in pediatric lymphedema until more permanent solutions are available. Large, prospective studies are needed to validate these results.
 

37.08 Ligation and Repair Are Associated with Similar Outcomes in Superior Mesenteric Vein Injury

T. Tan1, J. Sabat1, P. Hsu1, N. Samra2, Q. Chu2  1University Of Arizona, Tucson, AZ, USA; 2Louisiana State University Health Sciences Center, New Orleans, LA, USA

Introduction:  Traumatic superior mesenteric vein (SMV) injury is rare, and the ideal treatment is controversial. We compared the outcomes of ligation versus repair of SMV injury using the National Trauma Data Bank (NTDB).

Methods:  All adult patients who suffered traumatic SMV injury were identified from the NTDB (2002-2014) by International Classification of Diseases, Ninth Revision (ICD-9) diagnosis codes. Patients were stratified by treatment modality into observation, ligation, and surgical repair using ICD procedure codes. Outcomes, including hospital mortality, bowel resection, and hospital and intensive care unit (ICU) length of stay (LOS), were compared between ligation and surgical repair by two-sample t-test or χ2 test, as appropriate.

Results: Among 952 patients with SMV injury, 332 (34.9%) were observed, 192 (20.2%) had ligation, and 428 (45.0%) underwent surgical repair. More than 92% of injuries were from penetrating trauma, and overall hospital mortality was 32%. Age, gender, injury severity score (ISS), and Glasgow Coma Scale (GCS) were similar between the ligation and surgical repair groups. Although mortality (29.4% vs. 36.5%, p=.20) and bowel resection rates (4% vs. 3%, p=.12) were similar, patients who underwent repair had significantly longer hospital LOS (19.4±24.8 vs. 15.2±24.4 days, p<.001) and ICU LOS (13±17.1 vs. 9.3±11.8 days, p=.02) than those who underwent ligation. In multivariable analysis, although mortality and bowel resection rates remained similar, SMV repair was associated with significantly longer hospital LOS (RR 1.5, 95% CI 1.2-1.9, p<.01) and ICU LOS (RR 1.4, 95% CI 1.1-1.8, p<.01) compared to ligation.

Conclusion: In patients with traumatic SMV injury, surgical repair does not appear to confer a significant advantage over ligation and is associated with significantly longer hospital and intensive care unit length of stay.  Ligation is an acceptable option in traumatic SMV injury, especially in critically ill patients. 
 

37.05 Four Extremity Venous Duplex Ultrasonography For Suspected Deep Venous Thrombosis: An Anachronism

T. Yoo1, R. Aggarwal1, S. Brathwaite1, B. Satiani1, M. J. Haurani1  1Ohio State University, Vascular/Surgery, Columbus, OH, USA

Introduction: Duplex ultrasonography (DUS) is the gold standard for diagnosis of deep vein thrombosis (DVT) due to its high specificity and sensitivity, safety, and portability. However, unnecessary testing represents an inefficiency that pervades healthcare. We hypothesize that the majority of four-extremity DUS studies, often ordered for fever of unknown origin (FUO), are unnecessary. Furthermore, by analyzing patient and clinical factors among patients with an acute DVT on four-extremity DUS, we aim to identify a subset of high-risk patients who may benefit from four-extremity testing.

Methods:  We retrospectively reviewed all venous DUS studies performed in our Intersocietal Accreditation Commission (IAC)-accredited vascular laboratory from January 1, 2009, to December 31, 2016. Patients with DUS of all four limbs were included. DVT risk factors and the indication for DUS were recorded. The primary outcome was the finding of acute DVT.

Results: 188 patients met inclusion criteria, of whom 31 (16.5%) had acute DVT (11 upper extremity, 16 lower extremity, 4 upper and lower extremity). Factors associated with a positive DUS were recent surgery (OR=2.50, p<0.02), hospitalization ≥7 days (OR=3.85, p<0.01), immobility ≥7 days (OR=3.67, p<0.01), presence of a central venous catheter (CVC) (OR=3.40, p<0.01), and active malignancy (OR=2.52, p<0.02). FUO was the main indication for requesting four-extremity DUS (53.7%). Patients who underwent four-extremity DUS for FUO had a significantly lower likelihood of DVT (OR=0.41, p<0.02), and DVT was rarely the proximate cause of fever (<1% of all cases), as follow-up culture results and clinical course most often revealed other sources. Patients with known acute pulmonary embolism (PE) had the highest likelihood of DVT (OR=22.6, p<0.01). Prophylaxis was protective against acute DVT (OR=0.33, p<0.01). Only patients with an upper extremity CVC had an upper extremity DVT, which was usually line-associated (93%).
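The odds ratios reported above follow from standard 2×2 contingency-table arithmetic (exposure by acute DVT). As a minimal sketch, the calculation with a Wald 95% confidence interval on the log scale looks like this; the counts below are hypothetical for illustration, not the study's actual data:

```python
import math

# Odds ratio with a Wald 95% CI from a 2x2 table (HYPOTHETICAL counts,
# not the study's data): a/b = DVT-positive/negative among exposed,
# c/d = DVT-positive/negative among unexposed.
def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# e.g., 20/40 exposed vs. 11/44 unexposed with acute DVT
or_, lo, hi = odds_ratio_ci(20, 40, 11, 44)
print(f"OR={or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR=2.00 (95% CI 0.85-4.69)
```

A CI that excludes 1.0, as for the factors above, indicates a statistically significant association at the chosen alpha.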

Conclusion: Four-limb DUS for FUO is inefficient given that DVT was rarely the proximate cause of fever. Upper extremity acute DVT was found only in patients with an ipsilateral CVC. When upper extremity DVT is suspected, the extremity associated with the CVC should be imaged, while the contralateral extremity does not need a full-limb DUS. Our results indicate that a restrictive strategy can reduce DUS testing inefficiency and healthcare cost without compromising patient safety.

 

37.06 Racial and Gender Disparity in Aortoiliac Disease Revascularization Procedures

W. Alshwaily1, B. Nejim1, H. Dakour Aridi1, M. Rizwan1, S. Locham1, M. Malas1  1Johns Hopkins University School Of Medicine, Vascular Surgery/General Surgery, Baltimore, MD, USA

Introduction:
The impact of race and gender on surgical outcomes has been studied in infrainguinal revascularization procedures. However, it has yet to be investigated among patients with aortoiliac disease. The aim of this study is to explore how race and gender affect outcomes in suprainguinal bypass (SIB) surgery.

Methods:
Patients who underwent SIB were identified from the procedure-targeted National Surgical Quality Improvement Program (NSQIP) dataset (2011-2015). Patients were stratified into four groups: non-black males (NBM), black males (BM), non-black females (NBF), and black females (BF). Primary outcomes were 30-day major adverse cardiac events (MACE; a composite of myocardial infarction, stroke, or death), postoperative bleeding requiring transfusion or intervention, major amputation, and prolonged length of stay (pLOS, >10 days). Predictors of these outcomes were determined by multivariable logistic regression analysis adjusting for comorbidities, degree of limb ischemia, presentation, and elective vs. urgent repair.

Results:
Overall, 3,700 patients were identified, of whom 54.6% were NBM, 5.7% BM, 35.2% NBF, and 4.6% BF. The event rates were 6.9%, 16.9%, and 2.0% for MACE, bleeding, and major amputation, respectively. BM were significantly younger [mean age, BM: 62.8±10.8, NBM: 65.5±10.5, NBF: 66.8±12.2, BF: 64.1±12.6 years]. BM were more likely to be smokers and less likely to be on a statin or to receive elective SIB (NBM: 67.5%, BM: 50.9%, NBF: 67.0%, BF: 58.7%) (all p≤.01). BF were more likely to be diabetic (NBM: 26.6%, BM: 24.3%, NBF: 24.1%, BF: 37.8%; p=.001), hypertensive, and functionally dependent (all p<.05). There were no significant differences in CHF or bleeding disorders across groups. MACE rates were similar among all groups. BM had a threefold higher risk of limb loss [adjusted odds ratio (OR) (95% CI): 3.23 (1.38-7.59), p=.007] compared to other groups. Female gender was significantly associated with bleeding in both race groups, but the association was stronger in BF [OR (95% CI): 2.57 (1.65-4.01), p<.001] than in NBF [OR (95% CI): 1.61 (1.27-2.04), p<.001]. Compared to the other groups, BF had significantly higher odds of pLOS [OR (95% CI): 1.76 (1.12-2.76), p=.014] (Figure).

Conclusion:

This is the largest study to date to demonstrate racial and gender disparity in SIB outcomes. BM had a more than threefold increased risk of limb loss, and the risk of severe bleeding was more than doubled in BF. The more advanced disease seen in younger patients, the comorbidity burden, and emergent presentation might have contributed to this observed disparity, yet do not fully explain it. Race and gender consideration is warranted in risk assessment and refinement of patient selection for aortoiliac disease revascularization.

37.07 Risk Factors for Revision Following Lower Extremity Amputations

M. T. Cain1, M. Wohlauer1, P. Rossi1, B. Lewis1, K. Brown1, G. Seabrook1, C. J. Lee1  1Medical College Of Wisconsin, Division Of Vascular Surgery, Milwaukee, WI, USA

Introduction:  Lower extremity amputation is a significant cause of morbidity for patients with peripheral vascular disease. Revision of an amputation to a more proximal level carries heightened physiological, functional, and psychological stress for patients. Little data exist describing outcomes of non-traumatic amputations and the risk factors associated with subsequent need for revision. The objective of this study was to identify the determinants of revision following lower extremity amputation.

Methods:  Patient data were reviewed retrospectively from a prospectively collected database. Patients with underlying peripheral arterial disease who underwent non-traumatic lower extremity amputation between 2013 and 2016 were included in the study. A total of 260 patients met study criteria. Patients who required revision were grouped and compared to those who did not. Preoperative, intraoperative, and postoperative variables were collected and analyzed. Univariate and multivariate analyses were performed.

Results: Amputation revision was required in 70 patients (26.9%). Patients who underwent amputation revision to a higher level were significantly younger, taller, and heavier, and had a higher degree of independence in mobility prior to revision (p<0.001, p=0.035, p=0.004, and p=0.014, respectively). Preoperative and postoperative aspirin use, statin use, and P2Y12 inhibitor use appeared to be protective against the need for revision (p=0.018, p=0.026, and p=0.007, respectively). Prior open or endovascular arterial intervention on the index limb was also protective against amputation revision (p=0.005). An index-amputation indication of acute limb ischemia or severe infection was more common in those who underwent subsequent revision (p<0.001). Urgent or emergent amputations and active tobacco use were also associated with amputation revision (p<0.001). Patients who experienced one or more postoperative complications after index amputation (arrhythmia, congestive heart failure, myocardial infarction, pneumonia, or respiratory failure) were also more commonly revised (p<0.001).

Conclusion:  Preoperative patient comorbidities and limb acuity are significant determinants of amputation revision. Procedural urgency and postoperative complications correlate significantly with the need for amputation revision in patients with peripheral vascular disease. Factors protective against the need for revision included prior arterial intervention on the index limb and optimal medical therapy.

 

37.03 Predictors of Serious Morbidity and Mortality Following Endovascular Repair of Aortoiliac Lesions

A. Thomas1, I. Leitman1  1Mount Sinai School Of Medicine, New York, NY, USA

Introduction:  Endovascular approaches to aortoiliac disease are becoming increasingly frequent due to the prospect of reduced recovery times, bleeding and discomfort. This study was designed to identify important risk factors for major 30-day outcomes and mortality following repair. 

Methods:  Data were derived from the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) participant user file. Patients who underwent aortic, bilateral common iliac, common iliac, external iliac, internal iliac, common and internal iliac, or common and external iliac repair (CPT codes 37220, 37221, 37222, and 37223) in one of 40 participating procedure-targeted hospitals during 2015 were identified. Preoperative risk factors were analyzed using univariate and multivariate analysis for the endpoints of major 30-day postoperative outcomes and mortality.

Results:

A total of 818 patients underwent endovascular aortoiliac repair: 474 (57.9%) males and 344 (42.1%) females. Ages ranged from 33 to 89 years, with a mean of 64.3 years. The overall 30-day mortality rate was 1.7%. Smoking, advanced age, low serum albumin and hematocrit, insulin-dependent diabetes, and dialysis correlated with increased mortality. Thirty-three patients (4%) required major re-intervention of the treated arterial segment; risk factors included preoperative living dependency and transfer from a location other than home. Fifty-nine patients (6.9%) were discharged to a location other than home; advanced age, low albumin and hematocrit, high BUN, creatinine, WBC, platelets, PTT, and INR, ABI ≤0.89, living dependency, open wound, 10% weight loss in the 6 months prior to surgery, dialysis, and diabetes were associated with discharge to a place other than home. Eight patients (1%) required amputation in the early postoperative period; risk factors included high BUN and WBC, low hematocrit, ABI ≤0.39, living dependency, and transfer from a location other than home. Forty-four patients (5.4%) required transfusion for perioperative hemorrhage, which was associated with an underlying bleeding disorder, low albumin and hematocrit, high alkaline phosphatase, ABI ≤0.89, living dependency, open wound, weight loss, and transfer from a location other than home.

After logistic regression, 30-day mortality was correlated with low serum albumin, with survivors having an average albumin of 3.72, and non-survivors 2.36; major re-intervention of the treated arterial segment was correlated with preoperative living dependency.

Conclusion: We identified preoperative living dependency and low serum albumin as the risk factors most strongly associated with adverse 30-day postoperative outcomes following endovascular aortoiliac procedures. Mortality rates remain low following percutaneous revascularization.
 

37.04 Results of Non-operative Management of Acute Limb Ischemia in Infants

S. Wang1, A. R. Gutwein1, N. A. Drucker1, R. L. Motaganahalli1, M. C. Dalsing1, B. W. Gray1, M. P. Murphy1, G. W. Lemmon1  1Indiana University School Of Medicine, Indianapolis, IN, USA

Objective:

Acute limb ischemia (ALI) in infants poses a challenge to the clinician secondary to poor operative outcomes, limb loss risk, and life-long morbidity.  This retrospective study reviews a 10-year institutional experience with non-operative management of ALI in infants.

 

Methods:

Infants (age ≤12 months) diagnosed with ALI by duplex ultrasound and treated with initial non-operative management at a tertiary care dedicated children’s hospital were identified via vascular laboratory lower extremity arterial duplex records. Patient demographics, injury characteristics, treatment given, and outcomes were abstracted via chart review and presented using descriptive statistics. Continuous variables are presented as mean ± standard deviation.

 

Results:    

During the study period, 25 infants (28% female) were diagnosed with ALI. The average age of this cohort was 3.5 ± 3.2 months. The majority of cases were secondary to iatrogenic injury (88%) from arterial cannulation (Table). Injury sites were concentrated in the lower extremities (84%) as compared to the upper. Absence of Doppler signals was noted in 64% of infants, while limb cyanosis was observed in 60% at the time of presentation.

 

Infants were initially treated with anticoagulation (80%) when possible. Two patients failed non-operative management and required thrombolysis secondary to progression of thrombus burden while anticoagulated. There were no major (above-ankle) amputations at 30 days. Three deaths occurred within 30 days; all were unrelated to limb ischemia. Among the 30-day survivors, overall duration of follow-up was 52.1 ± 37.7 months. One infant required above-knee amputation six weeks after diagnosis, resulting in an overall limb salvage rate of 96%. Long-term morbidity included two patients with a chronic wound of the affected limb and one patient with limb length discrepancy. No subjects reported claudication at the latest follow-up appointment. Additionally, all patients were independently ambulatory except for one female who was using a walker with leg braces.

 

Conclusions:

In contrast to the adult population, ALI in infants can often be managed non-operatively with anticoagulation. Long-term follow-up continues to demonstrate excellent functional results and minimal disability.

37.01 Mortality Is Lower for Surgical Repair than Ligation in Patients with Portal Vein Injury

J. Sabat1, T. Tan2, B. Hoang2, P. Hsu2, Q. Chu3  1Banner University Medical Center/University of Arizona, Tucson, AZ, USA; 2University Of Arizona, Tucson, AZ, USA; 3Louisiana State University Health Sciences Center, New Orleans, LA, USA

Introduction:
Portal vein injury is uncommon, and the optimal treatment is controversial. We compared the outcomes of ligation versus repair of portal vein injury using the National Trauma Data Bank (NTDB).

Methods:
All adult patients who suffered portal vein injury were identified from the NTDB (2002-2014) by International Classification of Diseases, Ninth Revision (ICD-9) diagnosis codes. Patients were stratified by treatment modality into observation, ligation, and surgical repair using ICD procedure codes. Outcomes including hospital mortality, bowel resection, and length of stay (LOS) were compared between ligation and surgical repair by two-sample t-test or χ2 test, as appropriate. Multivariable analyses were performed with logistic regression.

Results:
Among 752 patients with portal vein injury, 345 (45.9%) were observed, 103 (13.7%) had ligation, and 304 (40.4%) underwent surgical repair. Over 95% of injuries were from penetrating trauma, and overall mortality was 49%. Age, gender, injury severity score (ISS), Glasgow Coma Scale (GCS), and presenting blood pressure and heart rate were similar between the ligation and surgical repair groups. Hospital mortality (59.2% vs. 47.7%, p=.08), bowel resection (1.9% vs. 1%, p=.55), and LOS (12.5 vs. 15 days, p=.08) were also comparable between ligation and repair in univariable analysis. In multivariable analysis, hospital mortality was significantly lower for surgical repair compared to ligation (OR 0.63, 95% CI 0.40-0.99, p=.04).

Conclusion:
Portal vein injury is most often caused by penetrating trauma and is associated with significant mortality and morbidity. Surgical repair is associated with significantly lower mortality than ligation of the portal vein and should be performed if feasible.

37.02 Impact of Glucose Control and Regimen on Limb Salvage in Patients Undergoing Vascular Intervention

J. L. Moore1, Z. Novak1, M. Patterson1, M. Passman1, E. Spangler1, A. W. Beck1, B. J. Pearce1  1University Of Alabama at Birmingham, Division Of Vascular Surgery And Endovascular Therapy, Birmingham, AL, USA

Introduction:

Studies have demonstrated a correlation between levels of glycosylated hemoglobin (HbA1c) in diabetic patients and the incidence of both peripheral artery disease (PAD) and lower extremity amputation (AMP).  However, the impact of glucose control on outcomes in patients undergoing open or endovascular PAD treatment has not been examined.  The purpose of this study is to assess the effect of HbA1c and medication regimen on amputation-free survival (AFS) in patients undergoing treatment for limb salvage.

Methods:

Limb salvage patients with a baseline HbA1c within one month of treatment were identified from a prospectively maintained vascular registry queried from 2010 to 2017.  The hospital EMR was cross-referenced to identify patients with HbA1c measured within 3 months of the index procedure.  Patient records were examined, and instances of AMP, type of treatment (ENDO vs. OPEN), demographics, co-morbidities, and diabetic glycemic control modalities were analyzed.  Diagnosis of diabetes was determined by a combination of HbA1c, physician diagnosis, and usage of diabetic medications.

Results:

Our query found 306 eligible limbs for analysis.  Worse AFS was associated with diabetes (82.6%, p=0.002), non-white race (56.5%, p=0.006), insulin-only diabetic control (52.2%, p<0.001), post-operative creatinine >1.3 mg/dL (38.0%, p<0.001), and dialysis (26.1%, p<0.001).  HbA1c was not significantly associated with AFS.  Survival analysis (Kaplan-Meier) revealed that a diagnosis of diabetes was significantly associated with worse AFS in the entire cohort (log-rank p=0.011) [Graph 1] as well as in the critical limb ischemia (Rutherford >3) subgroup (log-rank p=0.049) (not pictured).  Logistic regression demonstrated an association of age (p=0.040, AOR=1.027), post-operative creatinine level (p=0.003, AOR=1.247), non-white race (p=0.048, AOR=0.567), and insulin-only diabetic control (p=0.002, AOR=2.535) with worse AFS across all limbs surveyed.

Conclusion:

Diabetic patients on an insulin-only regimen have significantly worse AFS than non-diabetic patients or those on an insulin-sensitizing regimen.  This may represent a surrogate for disease severity, but medication type may also be a modifiable risk factor to improve limb salvage.

36.09 Opioid Prescribing vs. Consumption in Patients Undergoing Hiatal Hernia Repair

A. A. Mazurek1, A. A. Brescia1, R. Howard1, A. Schwartz1, K. Sloss1, A. Chang1, P. Carrott1, J. Lin1, W. Lynch1, M. Orringer1, R. Reddy1, P. Lagisetty2, J. Waljee1, M. Englesbe1, C. Brummett1, K. Lagisetty1  1University Of Michigan, Department Of Surgery, Ann Arbor, MI, USA; 2Ann Arbor VA, Division Of General Internal Medicine And Center For Clinical Management And Research, Ann Arbor, MI, USA

Introduction:  Recent studies have demonstrated a high prevalence of excessive opioid prescribing after surgery, and the incidence of persistent opioid use is among the highest after thoracic surgery. Procedure-specific prescribing guidelines have been shown to reduce excessive prescribing in certain health systems; however, this has not been studied within thoracic surgery. Little data are available comparing how much opioid medication patients take versus how much they are prescribed following surgery. To establish evidence-based guidelines to reduce excessive prescribing, this study compared postoperative opioid prescription dosages to actual usage following open and laparoscopic hiatal hernia repair (HHR).

Methods:  A retrospective chart review was performed on 119 patients who underwent open (transthoracic and transabdominal) or laparoscopic HHR between January and December 2016 and received an opioid prescription after surgery. The cohort consisted of opioid-naïve patients, defined as individuals not using opioids at the time of surgery. Patients underwent a telephone survey regarding postoperative opioid use. The amount of opioid prescribed was quantified in oral morphine equivalents (OME) to adjust for varying potencies between medications. Descriptive statistics (median and interquartile range, IQR) were used to summarize variables. Mann-Whitney U tests were used to compare OME prescribed vs. actual patient use within the cohort.

Results: 91 opioid-naïve patients (37 open HHR; 54 laparoscopic HHR) were surveyed, with a response rate of 69% (n=63; 27 open, 36 laparoscopic). Mean age was 59 ± 14 years, and the cohort was 65% female. Median follow-up time was 305 days (IQR 209-463). The overall median prescription size was 300 mg OME (IQR 225-375), and median patient use was 150 mg OME (IQR 25-300) (p<0.0001). Following open HHR, median prescription size was 350 mg OME (IQR 250-420) and median patient use was 225 mg OME (IQR 105-300) (p=0.001). Following laparoscopic HHR, median prescription size was 270 mg OME (IQR 200-350) and median patient use was 106 mg OME (IQR 6-295) (p<0.0001). Comparing open vs. laparoscopic HHR, significantly more OME were prescribed for open procedures (p=0.01), with a difference in median patient use that did not reach statistical significance (p=0.08).

Conclusion: Patients use far fewer opioids than they are prescribed following open and laparoscopic HHR. While there is excess prescribing in both cohorts, laparoscopic procedures tended to have a greater difference in amount prescribed versus actual usage. These findings may be used to develop guidelines that better standardize postoperative prescribing to reduce overprescribing. 

36.10 Trends in Postoperative Opioid Prescription Size: 2010 – 2014

J. Hur1, J. S. Lee1, H. M. Hu1, M. P. Klueh1, R. A. Howard1, J. V. Vu1, C. M. Harbaugh1, C. M. Brummett2, M. J. Englesbe1, J. F. Waljee1  1University Of Michigan, Department Of Surgery, Ann Arbor, MI, USA; 2University Of Michigan, Department Of Anesthesiology, Ann Arbor, MI, USA

Introduction:
Despite growing concerns about the dangers of prescription opioids, deaths from opioid overdoses have increased in recent years, reaching over 33,000 fatalities in 2015. Surgeons play a key role in this epidemic, providing 10% of opioid prescriptions in the United States. In this context, it is unclear how opioid prescribing by surgeons has changed during this time period. In this study, we examined trends in postoperative opioid prescription size over time. We hypothesized that postoperative opioid prescription size would increase during this time period.

Methods:
Using a nationwide dataset of employer-based insurance claims, we identified opioid-naive patients who underwent laparoscopic cholecystectomy, breast procedures (lumpectomy and mastectomy), or wide local excision from 2010 to 2014. Opioid prescriptions were obtained from pharmacy claims and converted to oral morphine equivalents (OMEs) for comparison. Our primary outcome measure was the size of the first opioid prescription between the day of surgery and 14 days after discharge. We calculated the mean prescription size with 95% confidence intervals for each year and procedure type. Mean prescription sizes were compared using t-tests.

Results:
In this cohort, 134,085 opioid-naïve patients underwent surgery during the study period. Of these patients, 108,893 (81.2%) underwent laparoscopic cholecystectomy (mean age 46 ± 15 years; 71.1% female); 19,199 (14.3%) underwent breast procedures (mean age 58 ± 12 years, 99.8% female); and 5,993 (4.5%) underwent wide local excision (mean age 55 ± 14 years, 45.1% female). Figure 1 shows the mean opioid prescription size by year and procedure type. For laparoscopic cholecystectomy, opioid prescriptions markedly increased in size from 230 OMEs in 2010 (equivalent to 46 tablets of 5 mg hydrocodone) to 475 OMEs in 2014 (equivalent to 95 tablets of 5 mg hydrocodone). This increase was statistically significant (p<0.001). Prescription size for breast procedures also increased significantly from 228 OMEs to 394 OMEs (p<0.001). For wide local excision, prescription size increased from 200 OMEs to 277 OMEs, but this difference was not statistically significant (p=0.10).

Conclusion:
For opioid-naïve patients undergoing common elective surgical procedures, opioid prescription size continued to increase from 2010 – 2014, reaching the equivalent of almost 100 tablets of 5 mg hydrocodone in 2014. Given recent studies showing most surgical patients require only 10 – 15 tablets of 5 mg hydrocodone, surgeons should focus on tailoring opioid prescriptions to better match actual patient requirements. 
 

36.08 Development & Usage of a Computerized Simulation Model to Improve Operating Room Efficiency

L. H. Stevens1,2, N. Walke2, J. Hobbs2, T. Bell1, K. Boustany2, B. Zarzaur1  1IU School Of Medicine,General Surgery,Indianapolis, IN, USA 2IU Health,Perioperative Services,Indianapolis, IN, USA

Introduction:
Efficient use of operating rooms (ORs) is crucial to a hospital’s mission and survival. Traditionally, the allocation of OR block time to surgeons has been heavily influenced by historical usage patterns (which may no longer be relevant), local politics, and organizational culture rather than data-driven analysis of the most efficient OR allocation. We created a computerized simulation model of the ORs to drive more rational and efficient utilization. This model provides the ability to test proposed changes in block allocation, demonstrate the impact of those changes to the surgeons, and thus gain surgeons’ buy-in to the proposed changes before implementation.

Methods:
A discrete-event, adaptive, complex-system computerized simulation model was created based on big-data analysis of 3 years of historical OR data and an industrial engineering work-flow analysis of a 600-bed level-1 trauma hospital with 30 operating rooms. Data elements included: admission type, case urgency, number of cases by surgical specialty, equipment utilized, case duration, personnel required, and patient flow within the perioperative department (from patient check-in to discharge from the recovery room). The simulator provides the ability to model changes in OR block allocation by the full day or half day, create specialty-specific blocks, open OR blocks as “first-come, first-served,” set aside OR blocks for urgent or emergent cases, and/or close OR blocks, and then measure the impact of these changes on OR utilization and throughput. The simulator can test up to 8 different block allocation scenarios at a time and runs each scenario 10 times to assess the total and mean OR utilization over a month.
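A minimal sketch of the discrete-event idea, assuming a simple first-come, first-served policy: each hypothetical case is assigned to the earliest-available OR and utilization is measured over one block. All parameters here (case durations, block length) are placeholders rather than the hospital's data, and the real model is far richer (urgency, specialty mix, personnel, patient flow).

```python
import heapq
import random

def simulate_day(n_rooms, case_durations, block_hours=10.0):
    """Assign each case (duration in hours) to the earliest-free OR,
    first-come first-served; a case that would run past block close is
    deferred. Returns mean OR utilization for the day."""
    free_at = [0.0] * n_rooms           # time at which each room next becomes free
    heapq.heapify(free_at)
    total_busy = 0.0
    for dur in case_durations:
        start = heapq.heappop(free_at)  # earliest-available room
        if start + dur <= block_hours:
            total_busy += dur
            heapq.heappush(free_at, start + dur)
        else:                           # does not fit: defer case, room unchanged
            heapq.heappush(free_at, start)
    return total_busy / (n_rooms * block_hours)

random.seed(0)
cases = [random.uniform(1.0, 4.0) for _ in range(60)]
print(f"mean OR utilization: {simulate_day(30, cases):.1%}")
```

Running many randomized replications of competing allocation scenarios and averaging utilization, as the abstract describes, amounts to calling a function like this repeatedly under different allocation rules.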

Results:
Using actual OR case volumes, case urgencies, and specialty mix, the simulator was used to contrast the OR utilization achieved by the historical specialty-based OR block allocation (scenario 1) with total elimination of all specialty block allocation, making every OR open for elective scheduling on a “first-come, first-served” basis (scenario 2). Having all ORs open for “first-come, first-served” scheduling resulted in significantly higher total and mean OR utilization (total OR utilization: scenario 1 = 2,051.9 hours vs. scenario 2 = 2,236.4 hours, p=0.02; mean OR utilization: scenario 1 = 68.4% vs. scenario 2 = 74.5%, p=0.02).

Conclusion:
The use of a computerized OR simulator provides surgical leaders with a virtual laboratory to test experimental OR allocation scenarios that can increase OR utilization but would be far too radical to implement without surgeons’ buy-in. Surgeon buy-in and implementation of new approaches to OR allocation are enhanced by this data-driven approach.
 

36.07 ED Utilization as a Comprehensive Healthcare Metric after Lumbar Spine Surgery

L. M. Pak1, M. A. Chaudhary1, N. K. Kwon1, T. P. Koehlmoos2, A. H. Haider1, A. J. Schoenfeld1  1Brigham And Women’s Hospital,Center For Surgery And Public Health,Boston, MA, USA 2Uniformed Services University Of The Health Sciences,Department Of Preventive Medicine,Bethesda, MD, USA

Introduction: Post-discharge emergency department (ED) visits represent a significant clinical event for patients and are an important quality metric in healthcare. The national volume of lumbar spine surgeries has risen dramatically and represents an increasingly large proportion of healthcare costs. Quantifying the use of the ED post-discharge and identifying factors that increase ED utilization are critical in evaluating current hospital practices and addressing deficiencies in patient care.

Methods: This study utilized claims from patients insured through TRICARE, the insurance plan of the Department of Defense. TRICARE data were queried for the years 2006-2014 for patients aged 18-64 years who had undergone one of three common lumbar spine surgery procedures (discectomy, spine decompression, spine fusion). Patient demographics, treatment characteristics, and follow-up information were abstracted from the claims data. Sponsor rank was used as a proxy for socio-economic status. ED utilization at 30 and 90 days post-discharge were the primary outcomes. Multivariable logistic regression was used to identify independent factors associated with 30- and 90-day ED utilization following a lumbar spine procedure.

Results: In the period under study, 48,485 patients met inclusion criteria. Fifteen percent of patients (n=7,183) presented to the ED within 30 days post-discharge. The 30-day readmission rate was 5% (n=2,344). By 90 days post-discharge, 30% of patients (n=14,388) had presented to an ED. The 90-day readmission rate was 8% (n=3,842). The overall 30-day and 90-day complication rates were 6% (n=2,802) and 8% (n=4,034), respectively. Following multivariable testing, female sex, increased Charlson comorbidity index, lower socio-economic status, fusion-based spine procedures, length of stay, and complications were associated with ED utilization within 30 and 90 days (Table). Dependent beneficiary status was associated with 90-day ED utilization only (OR 1.050, 95% CI 1.020-1.081).

Conclusion: Within 30 and 90 days after lumbar spine surgery, 15% and 30% of patients, respectively, sought care in the ED. However, only one-third of these patients had a complication recorded during the same period, and even fewer were subsequently readmitted. These findings suggest a high rate of unnecessary ED utilization. We have identified several characteristics associated with the risk of ED utilization, which may present viable targets for intervention in the peri-operative period. 

36.04 Surgical Procedures in Health Professional Shortage Areas: Impact of a Surgeon Incentive Payment Plan

A. Diaz1, E. Schneider1, J. Cloyd1, T. M. Pawlik1  1Ohio State University,Columbus, OH, USA

Introduction:  The American College of Surgeons has predicted a physician shortage in the US, with a particular deficiency in general surgeons. Any shortage in the surgical workforce is likely to impact underserved areas. The Affordable Care Act (ACA) established a Centers for Medicare & Medicaid Services (CMS) 10% reimbursement bonus for general surgeons in Health Professional Shortage Areas (HPSAs). We sought to assess the impact of the ACA Surgery Incentive Payment (SIP) on surgical procedures performed in HPSAs.

Methods:  Hospital utilization data from the California Office of Statewide Health Planning and Development between January 1, 2006 and December 31, 2015 were used to categorize hospitals according to HPSA location. A difference-in-differences analysis was used to measure the effect of the SIP on year-to-year differences for in- and outpatient surgical procedures by hospital type, pre- (2006-2010) versus post- (2011-2015) SIP implementation.

Results: Among 409 hospitals, two performed surgery in a designated HPSA. Both HPSA hospitals were located in a rural area, were non-teaching, and had <500 beds. The number of total surgical procedures was similar at both non-HPSA (Pre: n=2,106,048 vs. Post: n=2,121,550) and HPSA (Pre: n=8,734 vs. Post: n=8,776) hospitals. Over the time period examined, inpatient (IP) procedures decreased (non-HPSA, Pre: 933,388 vs. Post: 890,322; HPSA, Pre: 5,166 vs. Post: 4,301), while outpatient (OP) procedures increased (non-HPSA, Pre: 1,172,660 vs. Post: 1,231,228; HPSA, Pre: 3,568 vs. Post: 4,475) (all p<0.05). Post-SIP implementation, surgical procedures performed at HPSA hospitals markedly increased compared with non-HPSA hospitals (IP non-HPSA: -625 vs. HPSA: 363; OP non-HPSA: -111 vs. HPSA: 482) (both p<0.05). Of note, while the number of ORs increased over time among non-HPSA hospitals (Pre: n=3,042 vs. Post: n=3,206, p<0.05), OR numbers remained stable at HPSA hospitals (Pre: n=16 vs. Post: n=17). To estimate population-level effects of the SIP, a difference-in-differences model was used to adjust for cluster-related changes, as well as preexisting differences between non-HPSA and HPSA hospitals. Using this approach, the impact of the SIP on surgical procedure volume among HPSA relative to non-HPSA hospitals was noted to be considerable (Figure 1). 
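The core difference-in-differences contrast can be reproduced from the aggregate inpatient counts reported above. The authors' actual model additionally adjusts for clustering and preexisting differences between hospital groups, so this is only the unadjusted two-period arithmetic, shown for intuition.

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences: (change in treated) - (change in control)."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Inpatient procedures: HPSA (treated) vs. non-HPSA (control), pre vs. post SIP
effect = did_estimate(treated_pre=5_166, treated_post=4_301,
                      control_pre=933_388, control_post=890_322)
print(effect)  # -> 42201
```

The positive estimate reflects the pattern in the Results: inpatient volume fell far less at HPSA hospitals than the concurrent non-HPSA decline would have predicted.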

Conclusion:  CMS SIP implementation was associated with a significant increase in the number of surgical procedures performed at HPSA hospitals relative to non-HPSA hospitals, essentially reversing the trend from negative to positive. Further analyses are warranted to determine whether bonus payment policies actually help to fill a need in underserved areas or whether incentives simply shift procedures from non-HPSA to HPSA hospitals.

36.05 An Analysis of Preoperative Weight Loss and Risk in Bariatric Surgery

L. Owei1, S. Torres Landa1, C. Tewksbury1, V. Zoghbi1, J. H. Fieber1, O. E. Pickett-Blakely1, D. T. Dempsey1, N. N. Williams1, K. R. Dumon1  1Hospital Of The University Of Pennsylvania,Gastrointestinal Surgery,Philadelphia, PA, USA

Introduction:

Preoperative weight loss theoretically reduces the risk of surgical complications following bariatric surgery. Current guidelines have focused on preoperative weight loss as an important element of patient care and, for some payers, a requirement for prior authorization. However, the association between preoperative weight loss and surgical complications remains unclear. The purpose of this study is to test the hypothesis that preoperative weight loss lowers operative risk in bariatric surgery.

Methods:

We conducted a retrospective analysis using the inaugural (2015) American College of Surgeons Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) dataset. Only patients who had primary laparoscopic gastric bypass, open gastric bypass, or laparoscopic sleeve gastrectomy were included. Patients were stratified into 4 groups by percent preoperative total body weight (TBW) loss. Univariate analyses were performed. Logistic regression was also used to determine the association between preoperative weight loss and surgical outcomes (mortality, reoperation, readmission, and intervention) with adjustment for potential confounders.
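The stratification step can be sketched as a simple bucketing function. The cut points match the four groups reported in the Results (<1%, 1-2.99%, 3-5.99%, ≥6% TBW loss), while the function itself (names, inputs) is an illustrative assumption.

```python
def tbw_loss_stratum(preop_weight_kg: float, baseline_weight_kg: float) -> str:
    """Assign a patient to a percent total-body-weight (TBW) loss stratum."""
    pct_loss = 100.0 * (baseline_weight_kg - preop_weight_kg) / baseline_weight_kg
    if pct_loss < 1.0:
        return "<1%"
    elif pct_loss < 3.0:
        return "1-2.99%"
    elif pct_loss < 6.0:
        return "3-5.99%"
    return ">=6%"

# e.g. 130 kg at baseline, 124 kg at surgery -> ~4.6% TBW loss
print(tbw_loss_stratum(124, 130))  # -> 3-5.99%
```

Grouping by relative rather than absolute weight loss keeps the strata comparable across patients with very different starting weights.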

Results:

A total of 120,283 patients were included in the analysis, with a mean age of 44.6 (±12.0) years; 78.7% were female. Procedures were laparoscopic sleeve gastrectomy (69.0%), laparoscopic gastric bypass (30.3%), and open gastric bypass (1.2%). Of the total, 25% of patients had <1% preoperative TBW loss, 22% had 1-2.99%, 29% had 3-5.99%, and 24% had ≥6%. When stratified by percent TBW loss, significant differences were found in age, sex, race, co-morbidities, smoking, and ASA classification (p<0.05). Using the <1% preoperative TBW loss group as a reference, logistic regression revealed that a TBW loss of ≥3% was associated with a significant decrease in operative (30-day) mortality (p=0.012). Preoperative weight loss in excess of 6% TBW was not associated with a further decrease in operative mortality. There was no significant association between percent TBW loss and reoperation, readmission, or intervention within 30 days of operation (Table 1). 

Conclusion:

A preoperative loss of 3% or more of TBW is associated with a significant reduction in operative mortality following bariatric surgery. These results suggest that a modest preoperative weight loss may substantially reduce operative mortality risk in this population. Further studies are needed to elucidate the association between preoperative weight loss and other outcome measures (reoperation, readmission, intervention). 

 

**The ACS MBSAQIP and the centers participating are the source of the data, and are not responsible for the validity or the conclusions.
 

36.06 Practices and Barriers Regarding Transitions of Care for Postoperative Opioid Prescribing

M. P. Klueh1, J. S. Lee1, K. R. Sloss1, L. A. Dossett1, M. J. Englesbe1, C. M. Brummett2, J. F. Waljee1  1University Of Michigan,Department Of Surgery,Ann Arbor, MI, USA 2University Of Michigan,Department Of Anesthesiology,Ann Arbor, MI, USA

Introduction:
Persistent opioid use is common following surgery, even among previously opioid-naïve patients. To date, it remains unclear how clinicians coordinate opioid prescribing for patients who require ongoing opioid analgesics after routine postoperative care is complete. To better understand these transitions of care, we conducted a qualitative study of surgeons and primary care physicians to describe practices and barriers for opioid prescribing in surgical patients who develop new persistent opioid use.

Methods:
We conducted face-to-face interviews with 11 physicians at a single academic healthcare system using a semi-structured interview guide. Participants comprised surgeons (n=4 resident surgeons; n=4 attending surgeons) and primary care physicians (n=3 attending physicians). We developed open-ended questions to describe the clinical course of patients after surgery, practices and attitudes for postoperative opioid prescribing, and the transition to chronic pain management. Interviews (15-30 minutes) were audiotaped, transcribed verbatim, and independently coded for a priori and emergent themes using the constant comparative method. Open and axial coding were applied using narrative analysis.

Results:
Table 1 summarizes key themes in transitions of care for postoperative opioid prescribing. Participants reported a wide range of underlying causes for the need to transition patients to chronic pain management, including provider confidence, signs of addiction, and time from operation. Practices for transitioning care ranged from passive transitions with no closed loop communication, to active transitions with continued follow-up to ensure the patient had transitioned to another physician for pain management. Barriers to transitioning care included a lack of standardized practices, lack of time, and limited access to pain specialists.

Conclusion:
Surgeons and primary care physicians describe varying practices and barriers for transitions of care in patients who develop new persistent opioid use after surgery. These findings may help identify interventions to improve coordination of care for these vulnerable patients.
 

36.02 WHO Surgical Safety Checklist Modification: Do Changes Emphasize Communication and Teamwork?

I. Solsky1, J. Lagoo1, W. Berry1,2, J. Baugh3, L. A. Edmondson1, S. Singer2, A. B. Haynes1,4,5  1Ariadne Labs,Boston, MA, USA 2Harvard School Of Public Health,Department Of Health Policy And Management,Boston, MA, USA 3University Of California – Los Angeles,Department Of Emergency Medicine,Los Angeles, CA, USA 4Harvard School Of Medicine,Surgery,Brookline, MA, USA 5Massachusetts General Hospital,Department Of Surgery,Boston, MA, USA

Introduction:  Adopted by thousands of hospitals globally, the World Health Organization’s (WHO) Surgical Safety Checklist is meant to be modified to best serve local practice, but little is known about the types of changes that are made. The goal of this study is to provide a descriptive analysis of the extent and content of checklist modification.

Methods:  Non-subspecialty surgical checklists in English were obtained through online search along with targeted requests sent to hospitals. A detailed coding scheme was created to capture modifications to checklist content and formatting. Overall checklist information was collected such as the total number of lines of text and the team members explicitly mentioned. Information was also collected on modifications made to individual items and which were most frequently deleted. New items added were also captured. Descriptive statistics were performed.

Results: 161 checklists from 17 US states (n=116) and 11 countries (n=45) were analyzed. Every checklist was modified. Compared to the WHO checklist, those in our sample contained more lines of text (median: 63 (IQR: 50-73; Range: 14-216) vs. 56) and more items (36 (IQR: 30-43; Range: 14-80) vs. 28). Checklists added a median of 13 new items (IQR: 8-21, Range: 0-57). Items most frequently added referenced implants/special equipment (added by 83.23% of checklists), DVT prophylaxis/anticoagulation (74.53%), patient positioning (62.73%), and an opportunity to voice questions/concerns (55.28%). Despite increasing in size, checklists removed a median of 5 WHO items (IQR: 2-8; Range: 0-19). The most frequently removed items were the pulse oximeter check (removed in 75.16% of checklists), the articulation of patient-specific concerns from the nurse (47.83%) or anesthetist (38.51%), and the surgeon-led discussion of anticipated blood loss (45.96%) or case duration (42.24%), the latter 4 items comprising part of the WHO checklist’s 7-item “Anticipated Critical Events” section, which is intended for the exchange of critical information. The surgeon was not explicitly mentioned as participating in any part of the checklist in 14.29% of checklists; the anesthesiologist/CRNA in 14.91%, the circulator in 9.94%, and the scrub in 77.64%.

Conclusion: As encouraged by the WHO, checklists are highly modified. However, many are enlarged with additional lines and items that may not prompt discussion or encourage teamwork.  Of particular concern is the frequent removal of items from the WHO’s “Anticipated Critical Events” section, which is central to the checklist’s efforts to prevent complications by giving all team members an opportunity to voice concerns together. Leadership involved in checklist creation should ensure that checklists can be easily implemented, are inclusive of all team members, and promote a culture of safety. Further research is needed to assess the clinical impact of checklist modifications.

 

36.03 Crash Telemetry-Based Injury Severity Prediction Outperforms First Responders in Field Triage

K. He1, P. Zhang1,2, S. C. Wang1,2  1University Of Michigan,Department Of Surgery,Ann Arbor, MI, USA 2International Center Of Automotive Medicine,Ann Arbor, MI, USA

Introduction:

Early identification of severely injured patients in motor vehicle collisions (MVCs) is crucial. Mortality in this population is reduced by one quarter when these patients are directed to a level I trauma center rather than a non-trauma center. The Centers for Disease Control and Prevention (CDC) Guidelines for Field Triage of Injured Patients recommend that occupants at 20% or greater risk of an Injury Severity Score (ISS) of 15+ be urgently transported to a trauma center. With the increasing availability of vehicle telemetry technology, there is great potential for advanced automatic collision notification (AACN) systems to improve trauma outcomes by detecting patients at risk for severe injury and facilitating early transport to trauma centers. We compared first responder field triage to a real-world field test of our updated injury severity prediction (ISPv2) algorithm using crash outcomes data from General Motors vehicles equipped with OnStar.

Methods:

We performed a literature search to determine the sensitivity of first responder identification of ISS 15+ MVC occupants. We used National Automotive Sampling System Crashworthiness Data System (NASS-CDS) data from 1999-2013 to construct a logistic regression model predicting the probability that one or more occupants in a non-rollover MVC would have ISS 15+ injuries. Variables included principal direction of force, change in velocity, multiple vs. single impacts, presence of older occupants (≥55 years old), presence of female occupants, belt use, and vehicle type. We validated our model using 2008-2011 crash data from Michigan vehicles with AACN capabilities identified from OnStar records. We confirmed telemetry crash data sent from the vehicles using police crash reports. We obtained medical records and imaging data for patients transported from the scene for evaluation and treatment. ISS was assumed to be ≤15 for MVC occupants not transported for medical assessment. We used our ISPv2 algorithm and transmitted telemetry data to predict the probability that an occupant had ISS 15+ injuries and compared our prediction to the observed injuries for each occupant and each vehicle.
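Conceptually, the ISP maps the telemetry variables listed above to a probability via a logistic model and applies the CDC's 20% risk threshold. The sketch below shows that mapping with made-up placeholder coefficients; the actual ISPv2 coefficients are not given in this abstract.

```python
import math

COEFS = {  # hypothetical log-odds weights, for illustration only
    "intercept": -6.0,
    "delta_v_kph": 0.08,       # change in velocity
    "multiple_impacts": 0.9,
    "unbelted": 1.2,
    "occupant_55_plus": 0.7,
    "female_occupant": 0.4,
}

def isp_probability(crash: dict) -> float:
    """P(any occupant has ISS 15+ injuries) under a logistic model."""
    z = COEFS["intercept"] + sum(COEFS[k] * v for k, v in crash.items())
    return 1.0 / (1.0 + math.exp(-z))

def recommend_trauma_center(crash: dict, threshold: float = 0.20) -> bool:
    """Apply the CDC's 20% risk threshold to the predicted probability."""
    return isp_probability(crash) >= threshold

severe = {"delta_v_kph": 55, "multiple_impacts": 1, "unbelted": 1,
          "occupant_55_plus": 0, "female_occupant": 1}
print(recommend_trauma_center(severe))  # -> True
```

Because the prediction uses only data transmitted automatically at the moment of the crash, a triage recommendation like this can be generated before emergency responders are even dispatched.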

Results:
Recent studies have found field triage to be 50-66% sensitive in identifying ISS 15+ occupants. Our study population included 924 occupants in 836 crash events. The median age was 41 years old, 57% were female, 21% were right-sided passengers, and 1.2% experienced an ISS 15+ injury. Our ISPv2 model was 72.7% sensitive (ISPv2 ≥0.2 when ISS 15+) and 93% specific (ISPv2<0.2 when ISS ≤15) for identifying seriously injured MVC patients. 

Conclusion:
Our second-generation ISP algorithm was highly specific and more sensitive than current field triage in identifying MVC patients at risk for ISS 15+ injuries. This real-world field study shows that telemetry data transmitted before dispatch of emergency medical services can outperform field triage in selecting patients who require urgent transfer to trauma centers. 
 

35.09 The Significance of Laparoscopic Bursectomy Via an Outside Bursa Omentalis Approach in Gastric Cancer

L. Zou1, B. Zheng1, L. Zou1  1Guangdong Provincial Hospital Of Chinese Medicine,Department Of Gastrointestinal Surgery,Guangzhou, GUANGDONG, China

Introduction:

This study aimed to compare the safety, feasibility, and short-term effects of laparoscopic bursectomy and D2 radical gastrectomy (LBDRG) with those of laparoscopic D2 radical gastrectomy (LDRG) in advanced gastric cancer (AGC).

Methods:

We retrospectively analyzed data on 68 consecutive patients undergoing LBDRG via an outside bursa omentalis approach (OBOA) from August 2012 to December 2014. The surgical outcomes of patients who underwent LBDRG were matched and compared with those of patients who underwent classic LDRG in our department at the same time.

Results:

The clinicopathological characteristics were similar between the two groups after matching. Although the mean operative time was longer in the LBDRG group than in the LDRG group (323.4±20.70 min vs. 288.5±21.76 min; p<0.05), the number of lymph nodes dissected was significantly greater in the LBDRG group (30.49±5.41 vs. 23.2±4.87; p<0.05). Additionally, there was no significant difference in the rate of local recurrence or metastasis over a median two-year follow-up between the LBDRG group (5.9% [4/68]) and the LDRG group (8.8% [6/68]). 

Conclusion:
These results suggest that this technique is technically safe and feasible for AGC patients, and that its short-term oncological effects are equal to those of LDRG.