44.03 Outcomes of Packed Red Blood Cell and Platelet Transfusion in Aortic Dissection Patients after Surgery

S. Naeem1, G. Baird2, N. Sodha1, F. Sellke1, A. Ehsan1  1The Warren Alpert Medical School,Cardio-Thoracic Surgery,Providence, RHODE ISLAND, USA 2Rhode Island Hospital,Lifespan Bio-Statistics,Providence, RHODE ISLAND, USA

Introduction:

Packed red blood cell (PRBC) and platelet transfusion are associated with morbidity and mortality among adults undergoing cardiac surgery. Our objective was to investigate the clinical effect of transfusion among acute type A aortic dissection (AAD) patients undergoing surgical repair in a large referral hospital.

Methods:

The medical records of 93 AAD patients were retrospectively reviewed and stratified into cohorts by the median number of PRBC and platelet units received: PRBC ≤2 units (N=62) vs PRBC >2 units (N=31), and platelets ≤1 unit (N=66) vs platelets >1 unit (N=27). The same dataset was also categorized into four groups: Group 0, no transfusion (N=8); Group 1, platelets only (N=10); Group 2, PRBC and platelets (N=66); Group 3, PRBC only (N=9). Multivariate logistic regression was used to derive p-values for post-transfusion complications after adjusting for age, gender, and history of hypertension and diabetes. Kaplan-Meier survival analyses were used to compare hospital length of stay (LOS) and survival at 1 month and 1 year.
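
The stratification-plus-adjustment workflow can be sketched in a few lines. A minimal illustration, assuming a patient-level table with hypothetical column names (not the study's actual dataset):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level table; column names are illustrative.
df = pd.read_csv("aad_patients.csv")

# Dichotomize at the reported median cutoffs: 2 PRBC units, 1 platelet unit
df["prbc_high"] = (df["prbc_units"] > 2).astype(int)
df["plt_high"] = (df["platelet_units"] > 1).astype(int)

# Logistic regression for one post-transfusion complication, adjusted for
# age, gender, hypertension, and diabetes as described in the abstract
model = smf.logit(
    "infection ~ prbc_high + age + gender + hypertension + diabetes",
    data=df,
).fit()
print(model.summary())       # coefficients and p-values
print(np.exp(model.params))  # exponentiate coefficients to get odds ratios
```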

Results:

Baseline demographics were similar across groups. Patients receiving >2 units of PRBC had a median LOS of 15 vs 8 days (p<0.001). Transfusion of >2 units of PRBC was an independent risk factor for postoperative infection (OR=5.9, 95% CI: 1.6-21.7, p=0.006). One-month survival was 90% in patients receiving ≤2 units of PRBC vs 90% in those receiving >2 units (p=0.811); at 1 year, survival was 89% vs 80%, respectively (p=0.644). Patients receiving >1 unit of platelets had a median LOS of 15 vs 10 days (p<0.05). Transfusion of >1 unit of platelets was an independent risk factor for postoperative atrial fibrillation and acute renal failure (OR=2.9, 95% CI: 1.1-8.0, p=0.026; OR=3.7, 95% CI: 1.3-10.6, p=0.014, respectively). One-month survival was 89% in patients receiving ≤1 unit of platelets vs 92% in those receiving >1 unit (p=0.510); at 1 year, survival was 88% vs 81%, respectively (p=0.947). On pairwise life-table analysis of the four groups, median LOS differed significantly between Group 0 and Group 1 (6 vs 8 days, p=0.019) and between Group 0 and Group 2 (6 vs 13 days, p=0.005). One-month survival was 88% for Group 0, 100% for Group 1, 91% for Group 2, and 78% for Group 3 (p=0.425); at 1 year, survival was 88%, 100%, 84%, and 78%, respectively (p=0.507). These survival differences were not statistically significant, likely because of the small number of patients.

Conclusion:

Transfusion of PRBC and platelets above a particular threshold increases the incidence of postoperative complications and prolongs hospital LOS among patients undergoing repair of AAD.
 

43.18 Race Predicts Completion of Neoadjuvant Chemotherapy for Breast Cancer

A. T. Knisely2, J. H. Mehaffey1, A. D. Michaels1, D. R. Brenin1, A. T. Schroen1, S. Showalter1  1University Of Virginia,Department Of Surgery,Charlottesville, VA, USA 2University Of Virginia,School Of Medicine,Charlottesville, VA, USA

Introduction: It is not known which factors affect whether patients complete their prescribed neoadjuvant chemotherapy regimen for breast cancer. Racial and socioeconomic disparities have been demonstrated in breast cancer therapy. We hypothesized that race would be a predictor of completion of the prescribed neoadjuvant chemotherapy regimen in patients with breast cancer.

Methods: Patients diagnosed with breast cancer between 2009 and 2016 and treated with neoadjuvant chemotherapy (n=63) at a single institution were reviewed. Patient demographics, socioeconomic status, tumor characteristics, and treatment details were abstracted by chart review. Univariate analysis was used to compare variables by completion of neoadjuvant chemotherapy. Unadjusted logistic regression was performed to evaluate the effect of these factors on patients' odds of completing their prescribed chemotherapy regimen.
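
For a single binary predictor, the unadjusted logistic-regression odds ratio equals the sample odds ratio of the 2x2 table, so the comparison can be reproduced directly from counts. A sketch with illustrative counts (not the study's data):

```python
from scipy.stats import fisher_exact

# Rows: white vs non-white; columns: completed vs did not complete.
# Counts are hypothetical, for illustration only.
table = [[38, 10],
         [10, 5]]
odds_ratio, p_value = fisher_exact(table)
print(f"Unadjusted OR = {odds_ratio:.2f}, p = {p_value:.3f}")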

Results: In our study population, 76.2% (n=48) of patients completed their prescribed neoadjuvant chemotherapy regimen. On univariate analysis, patients who completed their regimen were significantly more likely to be white (79.2% vs 46.7%, p=0.02). Rates of completion of neoadjuvant chemotherapy did not differ significantly by private vs government insurance, nodal status, pathologic complete response, age, or tumor size (all p>0.05). Unadjusted logistic regression demonstrated that white patients had 7-fold higher odds of completing neoadjuvant chemotherapy (OR 7.0, p=0.01). Tobacco use was significantly associated with failure to complete neoadjuvant chemotherapy (OR 14.4, p=0.02).

Conclusion: In our population of breast cancer patients treated with neoadjuvant chemotherapy, white patients were more likely to complete the prescribed course of neoadjuvant therapy. Future work should evaluate the reasons non-white patients are less likely to complete chemotherapy and further assess racial disparities in breast cancer therapy.
 

43.11 Hyperthyroidism Symptoms in Children and Adults Seeking Definitive Surgical Treatment

A. A. Asban1, S. Chung1, J. Hur1, B. Lindeman1, C. Balentine1, H. Chen1  1University Of Alabama at Birmingham,Birmingham, Alabama, USA

Introduction:
Graves’ disease is the most common cause of hyperthyroidism in children (84%) and adults (80%) and can present with symptoms that impair development or have substantial long-term implications for quality of life in children. While surgery can offer definitive treatment, anti-thyroid medications and radioactive iodine (RAI) remain the most common therapeutic approaches despite potential side effects and known failure rates. We aimed to determine whether adults and children have different presenting symptoms that may impact therapeutic decisions.

Methods:
We retrospectively reviewed the electronic medical records of patients with hyperthyroidism referred for thyroidectomy to one surgeon between January 2016 and April 2017. We divided the cohort into two groups: children (age ≤ 18 years) and adults (age > 18 years). We compared symptoms between the groups using the chi-square test for dichotomous variables and the Kruskal-Wallis test for continuous variables.
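
The two comparisons map directly onto standard SciPy tests. A minimal sketch; the hoarseness counts below are back-calculated from the reported percentages (6/11 children is about 55%, 4/27 adults about 15%), and the ages are invented for illustration:

```python
import numpy as np
from scipy.stats import chi2_contingency, kruskal

# Dichotomous symptom (hoarseness): rows = children vs adults,
# columns = symptom present vs absent
hoarseness = np.array([[6, 5],     # children: 6/11 ~ 55%
                       [4, 23]])   # adults:   4/27 ~ 15%
chi2, p, dof, expected = chi2_contingency(hoarseness)
print(f"chi-square p = {p:.3f}")

# Continuous variable, e.g. age at presentation (illustrative values)
stat, p = kruskal([10, 12, 14, 15, 16], [35, 42, 47, 51, 60])
print(f"Kruskal-Wallis p = {p:.3f}")
```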

Results:

Thirty-eight patients (27 adults and 11 children) were evaluated for hyperthyroidism. Of these, 37 (97%) underwent total thyroidectomy and 1 (3%) underwent lobectomy. Mean age was 44.3 years in the adult group and 13.8 years in children. Twenty-nine patients (76%) were female, with no difference in sex distribution between groups. Children with hyperthyroidism were more likely than adults to present with hoarseness (55% vs 15%, p=0.01) and difficulty concentrating (45% vs 7%, p=0.01) (Table 1).

There were no statistically significant differences between adults and children in the rates of any other reported symptom. A majority of patients in both groups reported palpitations, fatigue, and difficulty swallowing.

Conclusion:
Children with hyperthyroidism were more likely than adults to present with hoarseness and difficulty concentrating. Concentration and communication are critical skills in developing children, and early intervention with definitive therapy may improve these symptoms.

37.10 Overweight Patients with Chronic Mesenteric Ischemia Require More Diagnostic Studies

C. W. Elliott1, J. Cullen2, J. Mehaffey2, W. Robinson2, K. J. Cherry2, M. C. Tracci2, G. R. Upchurch2  1University Of Virginia,School Of Medicine,Charlottesville, VIRGINIA, USA 2University Of Virginia,Department Of Surgery,Charlottesville, VIRGINIA, USA

Introduction:
With the increasing prevalence of obesity and atherosclerotic vascular disease, a growing number of overweight patients may present with chronic mesenteric ischemia (CMI). The impact of obesity on the clinical presentation and treatment of CMI is understudied and of potential clinical importance. The purpose of this study was to evaluate whether obesity impacts the diagnosis and treatment of CMI.

Methods:
Records of patients who underwent a procedure for CMI at a single center from 2007 to 2017 were obtained and reviewed. Patients were stratified as overweight (Body Mass Index [BMI] ≥ 25) or non-overweight (BMI < 25). The primary outcome of interest was the number of preoperative diagnostic studies performed to make the diagnosis. Secondary endpoints included primary patency of the vascular reconstruction and 30-day mortality. Preoperative diagnostic studies included abdominal plain films, ultrasounds, cross-sectional imaging, endoscopy/colonoscopy, and angiography.
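
The abstract does not name the test behind the mean ± SD comparison of study counts; a two-sample t-test is a reasonable assumption. A sketch with an illustrative record-level table (hypothetical column names):

```python
import pandas as pd
from scipy.stats import ttest_ind

# Hypothetical table; column names are illustrative.
df = pd.read_csv("cmi_patients.csv")
df["overweight"] = df["bmi"] >= 25  # BMI >= 25 classified as overweight

# Compare the number of preoperative diagnostic studies between groups
ow = df.loc[df["overweight"], "n_preop_studies"]
non = df.loc[~df["overweight"], "n_preop_studies"]
t, p = ttest_ind(ow, non)
print(f"{ow.mean():.1f} vs {non.mean():.1f} studies, p = {p:.2f}")
```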

Results:
A total of 90 patients were identified and stratified into overweight (51.1%) and non-overweight groups. The number of preoperative diagnostic studies, a surrogate for delayed diagnosis, was significantly higher in the overweight group (3.6 ± 1.9 vs. 2.7 ± 1.8, p = 0.04). This difference was driven by significantly more non-contrast CT scans (58.7% vs 25.0%, p=0.001) in the overweight group. However, overweight patients underwent fewer upper endoscopies (17.4% vs 36.4%, p=0.04). Endovascular intervention was the most common procedure to treat CMI, with 74% of overweight and 73% of non-overweight patients undergoing mesenteric vessel stenting (p = 0.9). There was no difference in the number of multi-vessel procedures between groups (17.4% vs. 27.3%, p=0.7). Thirty-day mortality (7.8% vs. 3.3%, p = 0.2) and time to loss of primary patency (19.25 ± 19.38 vs. 19.38 ± 18.36 months, p = 0.8) did not differ between the groups.

Conclusion:
Overweight patients undergo more diagnostic tests prior to intervention for CMI, which may be a proxy for a delay in diagnosis. Nevertheless, primary patency did not differ significantly between the two groups. As obesity continues to increase in prevalence, healthcare providers should include CMI in the differential diagnosis for overweight patients.
 

37.09 Treatment Outcomes of Manual Lymphatic Drainage in Pediatric Lymphedema Patients

K. Ali1, C. Dougherty2, R. Maricevich1,2, I. Iacobas1,2  1Baylor College Of Medicine,Houston, TX, USA 2Texas Children’s Hospital,Houston, TX, USA

Introduction:
Generalized pediatric lymphedema is primarily due to congenital malformations or lymphatic dysplasia. Manual lymphatic drainage (MLD) via the Vodder technique is a popular therapeutic modality that incorporates superficial and deep massage to soften tissue, increase lymphatic flow, and improve functional limb performance. However, the literature on MLD outcomes in pediatric lymphedema is sparse. Our purpose was to quantitatively measure the effect of MLD on pediatric lymphedema based on differences in girth, weight, and functional performance before and after therapy.

Methods:
A retrospective chart review was performed on pediatric patients with primary lymphedema who underwent a 1-month course of MLD between 2015 and 2017. Data collected included weight, limb girth, extremity strength, pain scores, and functional limb performance before and after MLD. Patients with other causes of edema (e.g., heart failure, renal failure) were excluded. Descriptive statistics were used to quantify the reductive effect of MLD.

Results:
Ten children with primary lymphedema who completed the MLD course were identified (median age 8 years, range 0.4-18 years). Immediately following MLD, weight decreased by 2-19%, and limb girth decreased by 4-27% in the lower extremities, 0-10% in the upper extremities, and 5-22% in truncal regions. Reduction was more pronounced in the distal than the proximal extremities. Validated functional questionnaires showed at least 50-60% improvement in limb performance in half of the patients. Clinically, pain scores improved by 80-100%, soft tissue softened with improved skin quality, and range of motion of the affected limbs improved as noted during physical therapy sessions. Two patients had minimal improvement in girth and range of motion after MLD and subsequently underwent sclerotherapy and lymphovenous bypass surgery.

Conclusion:
Manual lymphatic drainage is not a cure, but with consistent use it does reduce extremity lymphedema. Noticeable improvements include decreased limb girth, particularly of the distal extremities, softening of skin and tissue, and favorable patient-reported functional outcomes. These findings suggest that MLD can be a reliable therapeutic modality in pediatric lymphedema until more permanent solutions are available. Large, prospective studies are needed to validate these results.
 

37.07 Risk Factors for Revision Following Lower Extremity Amputations

M. T. Cain1, M. Wohlauer1, P. Rossi1, B. Lewis1, K. Brown1, G. Seabrook1, C. J. Lee1  1Medical College Of Wisconsin,Division Of Vascular Surgery,Milwaukee, WI, USA

Introduction:  Lower extremity amputation is a significant cause of morbidity for patients with peripheral vascular disease. Revision of an amputation to a more proximal level carries heightened physiological, functional, and psychological stress for patients. Little data exist describing outcomes of non-traumatic amputations and the risk factors associated with subsequent revision. The objective of this study was to identify the determinants of revision following lower extremity amputation.

Methods:  Patient data were reviewed retrospectively from a prospectively collected database. Patients with underlying peripheral arterial disease who underwent non-traumatic lower extremity amputation between 2013 and 2016 were included; a total of 260 patients met study criteria. Patients who required revision were compared to those who did not. Preoperative, intraoperative, and postoperative variables were collected and analyzed. Univariate and multivariate analyses were performed.

Results: Amputation revision was required in 70 patients (26.9%). Patients who underwent revision to a higher level were significantly younger, taller, and heavier, and had a higher degree of independence in mobility prior to revision (p < 0.001, p = 0.035, p = 0.004, and p = 0.014, respectively). Preoperative and postoperative aspirin, statin, and P2Y12 inhibitor use appeared protective against revision (p = 0.018, 0.026, and 0.007, respectively). Prior open or endovascular arterial intervention on the index limb was also protective against revision (p = 0.005). An index-amputation indication of acute limb ischemia or severe infection was more common among those who underwent subsequent revision (p < 0.001). Urgent or emergent amputation and active tobacco use were also associated with revision (p < 0.001). Patients who experienced one or more postoperative complications after the index amputation (arrhythmia, congestive heart failure, myocardial infarction, pneumonia, or respiratory failure) were also more commonly revised (p < 0.001).

Conclusion:  Preoperative patient comorbidities and limb acuity are significant determinants of amputation revision. Procedural urgency and postoperative complications correlate significantly with the need for revision in patients with peripheral vascular disease. Factors protective against revision included prior arterial intervention and optimal medical therapy.

 

37.03 Predictors of Serious Morbidity and Mortality Following Endovascular Repair of Aortoiliac Lesions

A. Thomas1, I. Leitman1  1Mount Sinai School Of Medicine,New York, NY, USA

Introduction:  Endovascular approaches to aortoiliac disease are increasingly frequent given the prospect of reduced recovery time, bleeding, and discomfort. This study was designed to identify risk factors for major 30-day outcomes and mortality following repair.

Methods:  Data were derived from the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) participant user file. Patients who underwent aortic, bilateral common iliac, common iliac, external iliac, internal iliac, common and internal iliac, or common and external iliac repair (CPT codes 37220, 37221, 37222, and 37223) in one of 40 participating procedure-targeted hospitals during 2015 were identified. Preoperative risk factors were analyzed using univariate and multivariate analysis for the endpoints of major 30-day postoperative outcomes and mortality.
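
Cohort selection from the participant user file reduces to a CPT filter. A sketch, assuming a flat extract with illustrative column names (the actual PUF schema differs):

```python
import pandas as pd

puf = pd.read_csv("nsqip_targeted_2015.csv")  # hypothetical extract

# Endovascular aortoiliac revascularization codes listed in the abstract
AORTOILIAC_CPT = {"37220", "37221", "37222", "37223"}
cohort = puf[puf["cpt_code"].astype(str).isin(AORTOILIAC_CPT)]

print(len(cohort), "patients")
print(f"30-day mortality: {cohort['death_30d'].mean():.1%}")
```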

Results:

A total of 818 patients underwent endovascular aortoiliac repair: 474 (57.9%) males and 344 (42.1%) females. Ages ranged from 33 to 89 years, with a mean of 64.3 years. The overall 30-day mortality rate was 1.7%. Smoking, advanced age, low serum albumin and hematocrit, insulin-dependent diabetes, and dialysis correlated with increased mortality. Thirty-three patients (4%) required major re-intervention of the treated arterial segment; risk factors included preoperative living dependency and transfer from a location other than home. Fifty-nine patients (6.9%) were discharged to a location other than home; advanced age, low albumin and hematocrit, high BUN, creatinine, WBC, platelets, PTT, and INR, ABI ≤0.89, dependency, open wound, 10% weight loss in the 6 months prior to surgery, dialysis, and diabetes were associated with non-home discharge. Eight patients (1%) required amputation in the early postoperative period; risk factors included high BUN and WBC, low hematocrit, ABI ≤0.39, living dependency, and transfer from a location other than home. Forty-four patients (5.4%) required transfusion for perioperative hemorrhage, which was associated with an underlying bleeding disorder, low albumin and hematocrit, high alkaline phosphatase, ABI ≤0.89, living dependency, open wound, weight loss, and transfer from a location other than home.

After logistic regression, 30-day mortality correlated with low serum albumin (mean albumin 3.72 g/dL in survivors vs 2.36 g/dL in non-survivors), and major re-intervention of the treated arterial segment correlated with preoperative living dependency.

Conclusion: We identified preoperative living dependency and low serum albumin as the risk factors most strongly associated with adverse 30-day postoperative outcomes following endovascular aortoiliac procedures. Mortality remains low following percutaneous revascularization.
 

37.04 Results of Non-operative Management of Acute Limb Ischemia in Infants

S. Wang1, A. R. Gutwein1, N. A. Drucker1, R. L. Motaganahalli1, M. C. Dalsing1, B. W. Gray1, M. P. Murphy1, G. W. Lemmon1  1Indiana University School Of Medicine,Indianapolis, IN, USA

Objective:

Acute limb ischemia (ALI) in infants poses a challenge to the clinician owing to poor operative outcomes, risk of limb loss, and life-long morbidity. This retrospective study reviews a 10-year institutional experience with non-operative management of ALI in infants.

 

Methods:

Infants (age ≤ 12 months) diagnosed with ALI by duplex ultrasound and treated with initial non-operative management at a tertiary care dedicated children’s hospital were identified via vascular laboratory lower extremity arterial duplex records. Patient demographics, injury characteristics, treatment given, and outcomes were abstracted via chart review and presented using descriptive statistics. Continuous variables are presented as mean ± standard deviation.

 

Results:    

During the study period, a total of 25 infants (28% female) were diagnosed with ALI. The average age of the cohort was 3.5 ± 3.2 months. The majority of cases were secondary to iatrogenic injury (88%) from arterial cannulation (Table). Injuries were concentrated in the lower extremities (84%) rather than the upper extremities. Absent Doppler signals were noted in 64% of infants, and limb cyanosis was observed in 60% at presentation.

 

Infants were initially treated with anticoagulation (80%) when possible. Two patients failed non-operative management and required thrombolysis owing to progression of thrombus burden while anticoagulated. There were no major (above-ankle) amputations at 30 days. Three deaths occurred within 30 days; all were unrelated to limb ischemia. Among 30-day survivors, overall duration of follow-up was 52.1 ± 37.7 months. One infant required above-knee amputation six weeks after diagnosis, for an overall limb salvage rate of 96%. Long-term morbidity included two patients with a chronic wound of the affected limb and one patient with a limb-length discrepancy. No subjects reported claudication at the latest follow-up appointment. All patients were independently ambulatory except for one girl who used a walker with leg braces.

 

Conclusions:

In contrast to the adult population, ALI in infants can be managed non-operatively with anticoagulation. Long-term follow-up continues to demonstrate excellent functional results and minimal disability.

37.02 Impact of Glucose Control and Regimen on Limb Salvage in Patients Undergoing Vascular Intervention

J. L. Moore1, Z. Novak1, M. Patterson1, M. Passman1, E. Spangler1, A. W. Beck1, B. J. Pearce1  1University Of Alabama at Birmingham,Division Of Vascular Surgery And Endovascular Therapy,Birmingham, Alabama, USA

Introduction:

Studies have demonstrated a correlation between glycosylated hemoglobin (HbA1c) levels in diabetic patients and the incidence of both peripheral artery disease (PAD) and lower extremity amputation (AMP). However, the impact of glucose control on outcomes in patients undergoing open or endovascular PAD treatment has not been examined. The purpose of this study was to assess the effect of HbA1c and medication regimen on amputation-free survival (AFS) in patients undergoing treatment for limb salvage.

Methods:

Limb salvage patients with a baseline HbA1c within one month of treatment were identified from a prospectively maintained vascular registry queried from 2010 to 2017. The hospital EMR was cross-referenced to identify patients with HbA1c measured within 3 months of the index procedure. Patient records were examined, and instances of AMP, type of treatment (ENDO vs OPEN), demographics, comorbidities, and diabetic glycemic control modalities were analyzed. Diagnosis of diabetes was determined by a combination of HbA1c, physician diagnosis, and use of diabetic medications.
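
The survival comparison can be sketched with the lifelines package; the column names below are illustrative, not the registry's actual schema:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# 'time' = months of amputation-free survival; 'event' = 1 if AMP or death
df = pd.read_csv("limb_salvage.csv")
dm, no_dm = df[df["diabetes"] == 1], df[df["diabetes"] == 0]

kmf = KaplanMeierFitter()
kmf.fit(dm["time"], dm["event"], label="diabetic")
ax = kmf.plot_survival_function()
kmf.fit(no_dm["time"], no_dm["event"], label="non-diabetic")
kmf.plot_survival_function(ax=ax)

res = logrank_test(dm["time"], no_dm["time"], dm["event"], no_dm["event"])
print(f"log-rank p = {res.p_value:.3f}")
```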

Results:

Our query identified 306 eligible limbs for analysis. On univariate analysis, AFS was associated with diabetes (82.6%, p=0.002), non-white race (56.5%, p=0.006), insulin-only diabetic control (52.2%, p<0.001), postoperative creatinine >1.3 mg/dL (38.0%, p<0.001), and dialysis (26.1%, p<0.001). HbA1c was not significantly associated with AFS. Kaplan-Meier survival analysis revealed that a diagnosis of diabetes was significantly associated with worse AFS in the entire cohort (log-rank p=0.011) [Graph 1] as well as in the critical limb ischemia (Rutherford >3) subgroup (log-rank p=0.049; not pictured). Logistic regression demonstrated associations of age (p=0.040, AOR=1.027), postoperative creatinine level (p=0.003, AOR=1.247), non-white race (p=0.048, AOR=0.567), and insulin-only diabetic control (p=0.002, AOR=2.535) with worse AFS across all limbs surveyed.

Conclusion:

Diabetic patients on an insulin-only regimen have significantly worse AFS than non-diabetic patients or those on an insulin-sensitizing regimen. This may be a surrogate for disease severity, but medication type may represent a modifiable risk factor to improve limb salvage.

36.09 Opioid Prescribing vs. Consumption in Patients Undergoing Hiatal Hernia Repair

A. A. Mazurek1, A. A. Brescia1, R. Howard1, A. Schwartz1, K. Sloss1, A. Chang1, P. Carrott1, J. Lin1, W. Lynch1, M. Orringer1, R. Reddy1, P. Lagisetty2, J. Waljee1, M. Englesbe1, C. Brummett1, K. Lagisetty1  1University Of Michigan,Department Of Surgery,Ann Arbor, MI, USA 2Ann Arbor VA,Division Of General Internal Medicine And Center For Clinical Management And Research,Ann Arbor, MI, USA

Introduction:  Recent studies have demonstrated a high prevalence of excessive opioid prescribing after surgery, and the incidence of persistent opioid use is among the highest after thoracic surgery. Procedure-specific prescribing guidelines have been shown to reduce excessive prescribing in certain health systems; however, this has not been studied within thoracic surgery. Little data are available on how many opioids patients actually take compared with the amount prescribed after surgery. To establish evidence-based guidelines to reduce excessive prescribing, this study compared postoperative opioid prescription size with actual usage following open and laparoscopic hiatal hernia repair (HHR).

Methods:  A retrospective chart review was performed on 119 patients who underwent open (transthoracic or transabdominal) or laparoscopic HHR between January and December 2016 and received an opioid prescription after surgery. The cohort consisted of opioid-naïve patients, defined as individuals not using opioids at the time of surgery. Patients completed a telephone survey regarding postoperative opioid use. The amount of opioid prescribed was quantified in oral morphine equivalents (OME) to adjust for varying potencies between medications. Descriptive statistics (median and interquartile range, IQR) were used to summarize variables. Mann-Whitney U tests were used to compare OME prescribed vs. actual patient use.
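
OME conversion multiplies each prescription by an equianalgesic factor. The factors below are commonly used oral morphine-equivalent multipliers, shown for illustration; verify against current references before reuse. The prescription comparison is then an ordinary Mann-Whitney U test:

```python
from scipy.stats import mannwhitneyu

# Commonly cited oral morphine-equivalent factors (illustrative)
MME_FACTOR = {"morphine": 1.0, "oxycodone": 1.5, "hydrocodone": 1.0,
              "hydromorphone": 4.0, "tramadol": 0.1, "codeine": 0.15}

def total_ome(drug, dose_mg, quantity):
    """Total oral morphine equivalents (mg) for one prescription."""
    return MME_FACTOR[drug] * dose_mg * quantity

print(total_ome("oxycodone", 5, 40))  # 40 tablets of 5 mg -> 300.0 mg OME

# Compare prescribed vs consumed OME (illustrative values, not study data)
stat, p = mannwhitneyu([300, 375, 225, 350, 270],
                       [150, 100, 25, 225, 106],
                       alternative="two-sided")
print(f"Mann-Whitney U p = {p:.3f}")
```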

Results: Ninety-one opioid-naïve patients (37 open HHR; 54 laparoscopic HHR) were surveyed, with a response rate of 69% (n=63; 27 open, 36 laparoscopic). Mean age was 59 ± 14 years, and the cohort was 65% female. Median follow-up time was 305 days (IQR 209-463). The overall median prescription size was 300 mg OME (IQR 225-375) and median patient use was 150 mg OME (IQR 25-300) (p<0.0001). Following open HHR, median prescription size was 350 mg OME (IQR 250-420) and median patient use was 225 mg OME (IQR 105-300) (p=0.001). Following laparoscopic HHR, median prescription size was 270 mg OME (IQR 200-350) and median patient use was 106 mg OME (IQR 6-295) (p<0.0001). Comparing open vs. laparoscopic HHR, significantly more OME were prescribed for open procedures (p=0.01), with a difference in median patient use that did not reach statistical significance (p=0.08).

Conclusion: Patients use far fewer opioids than they are prescribed following open and laparoscopic HHR. While there was excess prescribing in both cohorts, laparoscopic procedures tended to show a greater difference between the amount prescribed and actual usage. These findings may be used to develop guidelines that better standardize postoperative prescribing and reduce overprescribing.

36.08 Development & Usage of a Computerized Simulation Model to Improve Operating Room Efficiency

L. H. Stevens1,2, N. Walke2, J. Hobbs2, T. Bell1, K. Boustany2, B. Zarzaur1  1IU School Of Medicine,General Surgery,Indianapolis, IN, USA 2IU Health,Perioperative Services,Indianapolis, IN, USA

Introduction:
Efficient use of the operating rooms (ORs) is crucial to a hospital’s mission and survival. Traditionally, the allocation of OR block time to surgeons has been heavily influenced by historical usage patterns (which may no longer be relevant), local politics, and organizational culture rather than data-driven analysis of the most efficient allocation. We created a computerized simulation model of the ORs to drive more rational and efficient utilization. The model makes it possible to test proposed changes in block allocation, demonstrate the impact of those changes to surgeons, and thus gain surgeons’ buy-in before implementation.

Methods:
A discrete-event, adaptive, complex-system computerized simulation model was created based on big-data analysis of 3 years of historical OR data and an industrial-engineering workflow analysis of a 600-bed level-1 trauma hospital with 30 operating rooms. Data elements included admission type, case urgency, number of cases by surgical specialty, equipment utilized, case duration, personnel required, and patient flow within the perioperative department (from patient check-in to discharge from the recovery room). The simulator can model changes in OR block allocation by full or half day, create specialty-specific blocks, open OR blocks as "first-come, first-served," set aside OR blocks for urgent or emergent cases, and/or close OR blocks, and then measure the impact of these changes on OR utilization and throughput. It can test up to 8 block allocation scenarios at a time and runs each scenario 10 times to assess total and mean OR utilization over a month.
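
A toy version of such a discrete-event model can be written with SimPy. The sketch below simulates one "first-come, first-served" scenario with invented arrival and case-duration distributions; it illustrates the mechanics only, not the production model described above:

```python
import random
import simpy

N_ROOMS, SIM_HOURS = 30, 30 * 10  # 30 ORs, one month of 10-hour days

def case(env, rooms, stats):
    with rooms.request() as req:                    # wait for any open OR
        yield req
        duration = random.lognormvariate(1.0, 0.5)  # case length, hours
        yield env.timeout(duration)
        stats["busy_hours"] += duration

def arrivals(env, rooms, stats):
    while True:
        yield env.timeout(random.expovariate(4.0))  # ~4 case requests/hour
        env.process(case(env, rooms, stats))

random.seed(42)
env = simpy.Environment()
rooms = simpy.Resource(env, capacity=N_ROOMS)
stats = {"busy_hours": 0.0}
env.process(arrivals(env, rooms, stats))
env.run(until=SIM_HOURS)
print(f"Mean OR utilization: {stats['busy_hours'] / (N_ROOMS * SIM_HOURS):.1%}")
```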

Results:
Using actual OR case volumes, case urgencies, and specialty mix, the simulator was used to contrast the utilization achieved by the historical specialty-based block allocation (scenario 1) with total elimination of all specialty block allocation, making every OR open for elective scheduling on a first-come, first-served basis (scenario 2). Opening all ORs to first-come, first-served scheduling resulted in significantly higher total and mean OR utilization (total utilization: scenario 1 = 2,051.9 hours vs. scenario 2 = 2,236.4 hours, p=0.02; mean utilization: scenario 1 = 68.4% vs. scenario 2 = 74.5%, p=0.02).

Conclusion:
A computerized OR simulator provides surgical leaders with a virtual laboratory to test experimental allocation scenarios that could increase OR utilization but would be far too radical to implement without surgeon buy-in. Surgeon buy-in and implementation of new approaches to OR allocation are enhanced by this data-driven approach.
 

36.05 An Analysis of Preoperative Weight Loss and Risk in Bariatric Surgery

L. Owei1, S. Torres Landa1, C. Tewksbury1, V. Zoghbi1, J. H. Fieber1, O. E. Pickett-Blakely1, D. T. Dempsey1, N. N. Williams1, K. R. Dumon1  1Hospital Of The University Of Pennsylvania,Gastrointestinal Surgery,Philadelphia, PA, USA

Introduction:

Preoperative weight loss theoretically reduces the risk of surgical complications following bariatric surgery. Current guidelines have focused on preoperative weight loss as an important element of patient care and, for some payers, a requirement for prior authorization. However, the association between preoperative weight loss and surgical complications remains unclear. The purpose of this study is to test the hypothesis that preoperative weight loss lowers operative risk in bariatric surgery.

Methods:

We conducted a retrospective analysis of the inaugural (2015) American College of Surgeons Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) dataset. Only patients who had primary laparoscopic gastric bypass, open gastric bypass, or laparoscopic sleeve gastrectomy were included. Patients were stratified into 4 groups by percent preoperative total body weight (TBW) loss. Univariate analyses were performed. Logistic regression was also used to determine the association between preoperative weight loss and surgical outcomes (mortality, reoperation, readmission, and intervention) with adjustment for potential confounders.
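
The four strata can be reproduced with a single binned cut on percent TBW loss; the table and column names are illustrative, not the actual MBSAQIP schema:

```python
import pandas as pd

df = pd.read_csv("mbsaqip_2015.csv")  # hypothetical extract
df["pct_tbw_loss"] = (
    100 * (df["highest_weight"] - df["preop_weight"]) / df["highest_weight"]
)

# Strata from the abstract: <1%, 1-2.99%, 3-5.99%, >=6%
df["tbw_group"] = pd.cut(
    df["pct_tbw_loss"],
    bins=[-float("inf"), 1, 3, 6, float("inf")],
    labels=["<1%", "1-2.99%", "3-5.99%", ">=6%"],
    right=False,  # left-closed bins: [1, 3) holds 1-2.99%
)
print(df["tbw_group"].value_counts(normalize=True))
```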

Results:

A total of 120,283 patients were included in the analysis, with a mean age of 44.6 (±12.0) years; 78.7% were female. Procedures were laparoscopic sleeve gastrectomy (69.0%), laparoscopic gastric bypass (30.3%), and open gastric bypass (1.2%). Of all patients, 25% had <1% preoperative TBW loss, 22% had 1-2.99%, 29% had 3-5.99%, and 24% had ≥6%. When stratified by percent TBW loss, significant differences were found in age, sex, race, comorbidities, smoking, and ASA classification (p<0.05). Using the <1% preoperative TBW loss group as the reference, logistic regression revealed that TBW loss of ≥3% was associated with a significant decrease in operative (30-day) mortality (p = 0.012). Preoperative weight loss in excess of 6% TBW was not associated with a further decrease in operative mortality. There was no significant association between percent TBW loss and reoperation, readmission, or intervention within 30 days of operation (Table 1).

Conclusion:

A preoperative reduction of at least 3% of TBW is associated with a significant reduction in operative mortality following bariatric surgery. These results suggest that a modest preoperative weight loss may substantially reduce operative mortality risk in this population. Further studies are needed to elucidate the association between preoperative weight loss and other outcome measures (reoperation, readmission, intervention).

 

**The ACS MBSAQIP and the participating centers are the source of the data and are not responsible for the validity of the analysis or the conclusions.
 

34.10 Non-invasive Fibrosis Marker Impacts the Mortality after Hepatectomy for Hepatoma among US Veterans

F. B. Maegawa1,2, L. Shehorn3, J. B. Kettelle1,2, T. S. Riall2  1Southern Arizona VA Health Care System,Department Of Surgery,Tucson, AZ, USA 2University Of Arizona,Department Of Surgery,Tucson, AZ, USA 3Southern Arizona VA Health Care System,Department Of Nursing,Tucson, AZ, USA

Introduction:
The clinical role of non-invasive fibrosis markers (NIFM) in predicting mortality among patients undergoing hepatectomy for hepatocellular carcinoma (HCC) is not well established. We investigated the long-term impact of NIFM on mortality after hepatectomy for HCC.

Methods:
This analysis utilized the Department of Veterans Affairs Corporate Data Warehouse database between 2000 and 2012. The severity of hepatic fibrosis was determined by the AST-to-platelet ratio index (APRI) and the Fibrosis-4 (FIB-4) score. Kaplan-Meier survival and Cox proportional hazards regression methods were used for analysis.
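
Both markers are computed from routine labs using their standard published formulas; a small sketch:

```python
import math

def apri(ast, ast_uln, platelets):
    """AST-to-platelet ratio index; AST in U/L, platelets in 10^9/L."""
    return (ast / ast_uln) * 100 / platelets

def fib4(age, ast, alt, platelets):
    """Fibrosis-4 score; age in years, AST/ALT in U/L, platelets in 10^9/L."""
    return (age * ast) / (platelets * math.sqrt(alt))

# Illustrative values: AST 80 U/L (ULN 40), ALT 60 U/L, platelets 120, age 66
print(apri(80, 40, 120) > 1)         # True -> significant fibrosis by APRI
print(fib4(66, 80, 60, 120) > 3.25)  # True -> significant fibrosis by FIB-4
```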

Results:
Of 475 veterans who underwent hepatectomy for HCC, mean age, MELD score, and BMI were 65.6 (SD ± 9.4) years, 9 (SD ± 3.1), and 28 (SD ± 4.9) kg/m2, respectively. Most of the patients were white (64.5%), followed by black (27.6%). The most common operation was partial lobectomy (56.5%), followed by right hepatectomy (28.7%). Significant fibrosis was present in 26.3% by APRI (index >1) and in 29.2% by FIB-4 (score >3.25). Long-term survival among veterans with APRI >1 was significantly worse than among those with a normal index, with a median survival of 2.76 vs 4.38 years (log-rank p=0.0018). In contrast, FIB-4 was not associated with worse survival: median survival with FIB-4 >3.25 vs a normal score was 3.28 vs 4.22 years (log-rank p=0.144). Unadjusted Cox proportional hazards regression showed that APRI >1 was associated with increased mortality (HR: 1.45; 95% CI: 1.14-1.84). After adjusting for age, race, BMI, and MELD score, APRI remained associated with increased mortality (HR: 1.36; 95% CI: 1.02-1.82). FIB-4 was not associated with increased mortality in either unadjusted or adjusted analyses (HR: 1.19; 95% CI: 0.94-1.50 and HR: 1.29; 95% CI: 0.96-1.72, respectively).

Conclusion:
APRI can be used as a preoperative tool to predict long-term mortality after hepatectomy, refining the selection criteria for liver resection for HCC. These results suggest that patients with APRI >1 may benefit from other curative therapies, such as transplantation.
 

34.09 Variations in Demographics and Outcomes for Extracorporeal Membrane Oxygenation in the US: 2008-2014

K. L. Bailey1, Y. Seo1, E. Aguayo1, V. Dobaria1, Y. Sanaiha1, R. J. Shemin1, P. Benharash1  1David Geffen School Of Medicine, University Of California At Los Angeles,Division Of Cardiac Surgery,Los Angeles, CA, USA

Introduction:

Extracorporeal membrane oxygenation (ECMO) is increasingly used as a life-sustaining measure in patients with acute or end-stage cardiac and/or respiratory failure. We aimed to analyze the national trends in cost and clinical outcomes for venoarterial and venovenous ECMO. We further assessed whether variations in the utilization of ECMO exist based on geography and hospital size. 

Methods:

All adult ECMO patients in the 2008-2014 National Inpatient Sample (NIS) were analyzed. The NIS is an all-payer inpatient database that estimates more than 35 million annual U.S. hospitalizations. Patient demographics, hospital characteristics, and outcomes including mortality, cost, and length of stay were evaluated using non-parametric tests for trend.

Results:

An estimated 18,685 adult ECMO patients nationally were categorized by indication: 8,062 (43.2%) respiratory failure, 7,817 (41.8%) postcardiotomy, 1,198 (6.4%) lung transplant, 903 (4.8%) cardiogenic shock, and 706 (3.8%) heart transplant patients. Annual ECMO admissions increased significantly from 1,137 in 2008 to 5,240 in 2014 (P<0.001). The respiratory failure group showed the greatest increase, from 416 cases in 2008 to 2,400 in 2014 (P=0.003). Average cost and length of stay for overall admissions increased significantly, from $125,000 ± $12,457 to $178,677 ± $8,948 (P=0.013) and from 21.8 to 24.0 days (P=0.04), respectively. Elixhauser comorbidity scores increased from 3.17 to 4.14 over the study period. Mortality decreased from 61.4% to 46.0% among total admissions (P<0.001) and for all indications except cardiogenic shock and heart transplantation. The heart transplant group had the highest rate of neurologic complications (14.9%). ECMO admissions increased persistently at hospitals in the South, West, and Midwest (P<0.001, P<0.001, and P=0.002, respectively), with the South showing the largest fractional growth. While ECMO was utilized more frequently at medium and large hospitals (P<0.001), a smaller fraction of cases was performed at large centers in more recent years.

Conclusion:

The past decade has seen exponential growth of ECMO at medium and large hospitals in multiple regions of the US, paralleling a significant improvement in outcomes across cardiac and respiratory indications, despite a higher-risk profile of patients placed on ECMO in more recent years. Developments in ECMO technology and in the care of critically ill patients are likely responsible for greater survival and longer lengths of stay. The rapid growth of this technology and its costs of care warrant further standardization in order to achieve optimal outcomes in the present era of value-based healthcare delivery.

34.08 Prolonged Post-Discharge Opioid Use After Liver Transplantation

D. C. Cron1, H. Hu1, J. S. Lee1, C. M. Brummett2, J. F. Waljee1, M. J. Englesbe1, C. J. Sonnenday1  2University Of Michigan Medical School,Anesthesiology,Ann Arbor, MI, USA 1University Of Michigan Medical School,Surgery,Ann Arbor, MI, USA

Introduction:
Prolonged opioid use following surgical procedures is common. End-stage liver disease is associated with painful comorbidities, and liver transplant recipients may be at risk of prolonged postoperative opioid use. We studied the incidence and predictors of prolonged opioid use following hospital discharge after liver transplantation.

Methods:
Using a national dataset of employer-based insurance claims, we identified N=1,821 adults who underwent liver transplantation between 12/2009 and 8/2015. Prolonged opioid use was defined as filling an opioid prescription within two weeks of post-transplant hospital discharge and also filling ≥1 opioid prescription between 90 and 180 days post-discharge. We stratified our analysis by preoperative opioid use status: opioid-naïve, chronic opioid use (≥120 days' supply in the year before transplant, or ≥3 opioid prescriptions in the 3 months before surgery), and intermittent use (all other non-chronic use). We also investigated demographics, comorbidities, liver disease etiology, and hospital length of stay (LOS) as potential predictors of prolonged use. We used multivariate logistic regression to compute covariate-adjusted incidence of prolonged opioid use.
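
The claims-based definition reduces to two date-window checks per patient. A sketch with hypothetical table schemas (column names are assumptions, not the dataset's actual fields):

```python
import pandas as pd

fills = pd.read_csv("opioid_fills.csv", parse_dates=["fill_date"])
tx = pd.read_csv("transplants.csv", parse_dates=["discharge_date"])

df = fills.merge(tx, on="patient_id")
df["days_post"] = (df["fill_date"] - df["discharge_date"]).dt.days

early = set(df.loc[df["days_post"].between(0, 14), "patient_id"])
late = set(df.loc[df["days_post"].between(90, 180), "patient_id"])

# Prolonged use: a fill within 2 weeks AND >=1 fill at 90-180 days
prolonged = early & late
print(f"Prolonged opioid use: {len(prolonged) / len(tx):.1%}")
```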

Results:
In the year before liver transplantation, 55% of patients were opioid-naïve, 34% had intermittent use, and 11% had chronic use. Overall, 47% of transplant recipients filled an opioid prescription within 2 weeks of hospital discharge, and 19% of all patients had prolonged use. The adjusted rate of prolonged opioid use was 8-fold higher among preoperative chronic opioid users than among opioid-naïve patients (61% vs. 8%, P<0.001, Figure). Among preoperatively opioid-naïve patients, predictors of prolonged post-transplant opioid use included hospital LOS <21 days (odds ratio [OR]=1.93, P=0.013) and any psychiatric comorbidity (OR=1.8, P=0.030). Age, gender, insurance type, medical comorbidities, and liver disease etiology were not predictive of prolonged opioid use.

Conclusion:
Opioid use commonly persists beyond 90 days after hospital discharge following liver transplantation, with particularly high rates among preoperative chronic opioid users. Close outpatient follow-up and coordination of care are necessary post-transplant to optimize pain control and decrease rates of prolonged opioid use.
 

34.07 Comparison of Premature Death from Firearms versus Motor Vehicles in Pediatric Patients

J. D. Oestreicher1,2, W. Krief1,2, N. Christopherson3,6, C. J. Crilly5, L. Rosen4, F. Bullaro1,2  1Steven And Alexandra Cohen Children’s Medical Center,Pediatric Emergency Medicine,New Hyde Park, NY, USA 2Hofstra Northwell School Of Medicine,Pediatrics,Hempstead, NY, USA 3Northwell Health Trauma Institute,Manhasset, NY, USA 4Feinstein Institute For Medical Research,Biostatistics,Manhasset, NY, USA 5Hofstra Northwell School Of Medicine,Hempstead, NY, USA 6Steven And Alexandra Cohen Children’s Medical Center,New Hyde Park, NY, USA

Introduction:
Gun violence is the second leading cause of pediatric trauma death, after only motor vehicles. Though federally funded scientific data have driven life-saving policy from lead poisoning to SIDS, little data exist on pediatric gun violence. While Congress spends $240 million annually on traffic safety research, it explicitly bans research on gun violence, despite the fact that, with the inclusion of adults, guns and cars kill the same number of Americans annually. We therefore sought to describe demographic and clinical characteristics of pediatric firearm and motor vehicle injuries and compare their impact on years of potential life lost (YPLL). We hypothesized that these two mechanisms have a similar impact on premature death, highlighting this staggering disparity in research.

Methods:
We analyzed data from the National Trauma Data Bank (NTDB) for patients ≤21 years of age presenting to a participating emergency department (ED) with a pediatric firearm (PF) or pediatric motor vehicle (PMV) event from 2009 through 2014. We examined demographic and clinical characteristics of PF and PMV cases using descriptive statistics. The Cochran-Armitage test was used to trend PF cases over time. YPLL was calculated for PF and PMV cases using 75 years of age as the reference. Because the large sample size yielded p<0.0001 for all comparisons, clinical rather than statistical significance was assessed.
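
YPLL is simple arithmetic against the 75-year reference; a sketch with invented ages and a hypothetical visit denominator:

```python
REFERENCE_AGE = 75  # years, per the abstract

def ypll(ages_at_death, reference=REFERENCE_AGE):
    """Years of potential life lost, summed over deaths."""
    return sum(max(reference - age, 0) for age in ages_at_death)

print(ypll([17, 18, 19]))  # 171 years for three illustrative deaths

# Normalized per 10 ED visits, as reported in the abstract
total_visits = 1000  # hypothetical denominator
print(10 * ypll([17, 18, 19]) / total_visits)  # YPLL per 10 visits
```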

Results:
A total of 1,047,018 pediatric ED visits were identified: 5.7% PF cases and 27.8% PMV cases. There was a significant decline in PF cases from 2009 (6.2%) to 2014 (5.3%). PF cases had a mean age of 17.9 years and were 89.0% male, 60.0% African American, and 16.9% Hispanic; PMV cases had a mean age of 15.5 years and were 60.6% male, 60.3% Caucasian, and 16.5% Hispanic. PF cases were more likely to die in the ED or hospital (12.5% vs 3.2%), less likely to be transferred to another hospital (2.5% vs 3.9%), and had similar admission rates (77.5% vs 78.3%) and median lengths of stay (2.0 days). Assault accounted for 79.3% of PF cases; self-inflicted injuries, 4.8%; and accidental injuries, 11.7%. Self-inflicted PF cases had a higher median Injury Severity Score (13) than assault (9) or accidental (4) cases and were more likely to die (40.2% vs 11.4% vs 6.7%). Accidental PF cases tended to be younger (15.7 years) than assault (18.2 years) and self-inflicted (17.8 years) cases. Among all pediatric ED visits, YPLL was 4.1 years per 10 visits for PF cases and 5.4 years per 10 visits for PMV cases.

Conclusion:
Motor vehicles and firearms each remain a major cause of premature death. Among injured children brought to an ED, four die from a firearm for every five who die from a motor vehicle, producing similarly profound YPLL. An evidence-based approach has saved millions of lives from motor vehicle crashes; the same federal funding and research should be directed at the epidemic of pediatric firearm injury.
 

34.04 Hemodialysis Predicts Poor Outcomes after Infrapopliteal Endovascular Revascularization

C. W. Hicks1, J. K. Canner2, K. Kirkland2, M. B. Malas1, J. H. Black1, C. J. Abularrage1  1Johns Hopkins University School Of Medicine,Division Of Vascular Surgery,Baltimore, MD, USA 2Johns Hopkins University School Of Medicine,Center For Surgical Trials And Outcomes Research,Baltimore, MD, USA

Introduction:

Hemodialysis (HD) has been shown to be an independent predictor of poor outcomes after femoropopliteal revascularization procedures in patients with critical limb ischemia (CLI). However, HD patients tend to have isolated infrapopliteal disease. We aimed to compare outcomes for HD versus non-HD patients following infrapopliteal open lower extremity bypass (LEB) and endovascular peripheral vascular interventions (PVI).

Methods:

Data from the Society for Vascular Surgery Vascular Quality Initiative database (2008-2014) were analyzed. All patients undergoing infrapopliteal LEB or PVI for rest pain or tissue loss were included. One-year primary patency (PP), secondary patency (SP), and major amputation outcomes were analyzed for HD vs. non-HD stratified by treatment approach using both univariable and multivariable analyses.

Results:

A total of 1,688 patients were analyzed: 348 undergoing LEB (44 HD vs. 304 non-HD) and 1,340 undergoing PVI (223 HD vs. 1,117 non-HD). Patients on HD more frequently underwent revascularization for tissue loss (89% vs. 77%, P<0.001) and had ≥2 comorbidities (91% vs. 76%, P<0.001). Among patients undergoing LEB, one-year PP (66% vs. 69%) and SP (71% vs. 78%) were similar for HD vs. non-HD (P≥0.25), but major amputation occurred more frequently in the HD group (27% vs. 14%; P=0.03). Among patients undergoing PVI, one-year PP (70% vs. 78%) and SP (82% vs. 90%) were lower and major amputation was more frequent (27% vs. 10%) for HD patients (all, P<0.001). After correcting for baseline differences between groups, outcomes were similar for HD vs. non-HD patients undergoing LEB (P≥0.21) but remained worse for HD patients undergoing PVI (all, P≤0.007) (Table).

Conclusion:

Hemodialysis is an independent predictor of poor patency and of major amputation following infrapopliteal endovascular revascularization for the treatment of critical limb ischemia. The use of endovascular interventions in these higher-risk patients is not associated with improved limb salvage outcomes and may be an inappropriate use of healthcare resources.

34.05 Cognitive Impairment and Graft Loss in Kidney Transplant Recipients

J. M. Ruck1, A. G. Thomas1, A. A. Shaffer1,2, C. E. Haugen1, H. Ying1, F. Warsame1, N. Chu2, M. C. Carlson3,4, A. L. Gross2,4, S. P. Norman5, D. L. Segev1,2, M. McAdams-DeMarco1,2  1Johns Hopkins University School Of Medicine,Department Of Surgery,Baltimore, MD, USA 2Johns Hopkins School Of Public Health,Department Of Epidemiology,Baltimore, MD, USA 3Johns Hopkins School Of Public Health,Department Of Mental Health,Baltimore, MD, USA 4Johns Hopkins University Center On Aging And Health,Baltimore, MD, USA 5University Of Michigan,Department Of Internal Medicine, Division Of Nephrology,Ann Arbor, MI, USA

Introduction:  Cognitive impairment is common in patients with end-stage renal disease and impairs adherence to complex treatment regimens. Given the complexity of immunosuppression regimens following kidney transplantation, we hypothesized that cognitive impairment might be associated with an increased risk of all-cause graft loss among kidney transplant (KT) recipients. 

Methods:  Using the Modified Mini-Mental State (3MS) examination, we measured global cognitive function in a prospective cohort of 864 KT candidates (8/2009-7/2016). We estimated the association between pre-KT cognitive impairment and graft loss using hybrid registry-augmented Cox regression to adjust for confounders precisely estimated in the Scientific Registry of Transplant Recipients (N=101,718). We compared the risk of graft loss between KT recipients with vs. without any cognitive impairment (3MS<80) and between those with vs. without severe cognitive impairment (3MS<60), stratified by type of transplant (living donor KT [LDKT] or deceased donor KT [DDKT]). We extrapolated estimates of the prevalence of any and of severe cognitive impairment in the national kidney transplant recipient population using predictive mean matching and multiple imputation by chained equations.
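
The core of the model is an ordinary Cox fit on impairment status plus confounders; a minimal sketch with illustrative column names (the registry-augmentation step is omitted):

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("kt_recipients.csv")               # hypothetical table
df["impaired"] = (df["three_ms"] < 80).astype(int)  # any impairment: 3MS < 80

cph = CoxPHFitter()
cph.fit(
    df[["time_to_graft_loss", "graft_loss", "impaired", "age", "diabetes"]],
    duration_col="time_to_graft_loss",
    event_col="graft_loss",
)
cph.print_summary()  # hazard ratios with 95% CIs
```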

Results: The prevalence of any cognitive impairment in this 864-patient multicenter cohort was 6.7% among LDKT recipients and 12.4% among DDKT recipients, extrapolating nationally to 8.1% of LDKT recipients and 13.8% of DDKT recipients. LDKT recipients with any cognitive impairment had higher graft loss risk than recipients without any cognitive impairment (5-year graft loss: 45.5% vs. 10.6%, p<0.01; aHR: 3.28, 95% CI: 1.26-8.51, p=0.02); those with severe impairment had a risk of similar magnitude that was not statistically significant (aHR: 2.79, 95% CI: 0.74-10.61, p=0.1). DDKT recipients with any cognitive impairment had no increase in graft loss vs. those without any cognitive impairment, but those with severe cognitive impairment had higher graft loss risk (5-year graft loss: 53.0% vs. 24.0%, p=0.04; aHR: 2.97, 95% CI: 1.38-6.29, p<0.01).

Conclusion: Cognitive impairment is common among both LDKT and DDKT recipients in the United States. Given these associations between cognitive impairment and graft loss, pre-KT screening for impairment is warranted to identify and more carefully follow higher-risk KT recipients. 

 

34.06 Lymph Node Ratio Does Not Predict Survival after Surgery for Stage-2 (N1) Lung Cancer in SEER

D. T. Nguyen2, J. P. Fontaine1,2, L. Robinson1,2, R. Keenan1,2, E. Toloza1,2  1Moffitt Cancer Center,Department Of Thoracic Oncology,Tampa, FL, USA 2University Of South Florida Health Morsani College Of Medicine,Tampa, FL, USA

Introduction:   Stage-2 non-small-cell lung cancers (NSCLC) include T1N1M0 and T2N1M0 tumors in the current Tumor-Node-Metastasis (TNM) classification and are usually treated surgically with lymph node (LN) dissection and adjuvant chemotherapy. Multiple studies report a high lymph node ratio (LNR), the number of positive LNs divided by the total LNs resected, as a negative prognostic factor in NSCLC patients with N1 disease who undergo surgical resection with postoperative radiation therapy (PORT). We sought to determine whether a higher LNR predicts worse survival after lobectomy or pneumonectomy in NSCLC patients (pts) with N1 disease who never received PORT.

Methods:   Using Surveillance, Epidemiology, and End Results (SEER) data, we identified pts who underwent lobectomy or pneumonectomy with LN excision (LNE) for T1N1 or T2N1 NSCLC from 1988 to 2013. We excluded pts who had radiation therapy, multiple primary NSCLC tumors, or zero or an unknown number of LNs resected. We included pts with adenocarcinoma (AD), squamous cell (SQ), neuroendocrine (NE), or adenosquamous (AS) histology. The log-rank test was used to compare Kaplan-Meier survival of pts with LNR <0.125 vs. 0.125-0.5 vs. >0.5, stratified by surgery type and histology.
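
The LNR computation and the three-way log-rank comparison can be sketched as follows, with illustrative column names rather than actual SEER field names:

```python
import pandas as pd
from lifelines.statistics import multivariate_logrank_test

df = pd.read_csv("seer_n1_nsclc.csv")  # hypothetical extract
df["lnr"] = df["positive_nodes"] / df["nodes_examined"]
df["lnr_group"] = pd.cut(df["lnr"], bins=[0, 0.125, 0.5, 1.0],
                         labels=["<0.125", "0.125-0.5", ">0.5"],
                         include_lowest=True)

# Three-way log-rank test across the LNR strata
res = multivariate_logrank_test(df["survival_months"],
                                df["lnr_group"],
                                df["death"])
print(f"log-rank p = {res.p_value:.3f}")
```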

Results:  Of 3,452 pts, 2,666 (77.2%) had lobectomy and 786 (22.8%) had pneumonectomy. There were 1,935 AD pts (56.1%), 1,308 SQ pts (37.9%), 67 NE pts (1.9%), and 141 AS pts (4.1%). In the entire cohort, 1,082 pts (31.3%) had LNR <0.125, 1,758 (50.9%) had LNR 0.125-0.5, and 612 (17.7%) had LNR >0.5. There were no significant differences in 5-yr survival among the 3 LNR groups for the entire population (p=0.551). After lobectomy, 854 pts (32.0%) had LNR <0.125, 1,357 (50.9%) had LNR 0.125-0.5, and 455 (17.1%) had LNR >0.5. After pneumonectomy, 228 pts (29.0%) had LNR <0.125, 401 (51.0%) had LNR 0.125-0.5, and 157 (19.9%) had LNR >0.5. There was no significant difference in 5-yr survival among the 3 LNR groups in either lobectomy pts (p=0.576) or pneumonectomy pts (p=0.212). When stratified by histology, there was no significant difference in 5-yr survival among the 3 LNR groups in AD pts (p=0.284), SQ pts (p=0.908), NE pts (p=0.065), or AS pts (p=0.662). There were no differences in 5-yr survival between lobectomy and pneumonectomy pts at LNR <0.125 (p=0.945), LNR 0.125-0.5 (p=0.066), or LNR >0.5 (p=0.39).

Conclusion:  Patients with lower LNR did not have better survival than those with higher LNR after either lobectomy or pneumonectomy, nor within any histology subgroup. These findings question the prognostic value of the LNR in NSCLC patients with N1 disease after lobectomy or pneumonectomy without PORT and suggest that the LNR requires further evaluation as a prognostic factor.

34.03 Trends in Opioid Prescribing From Open and Minimally Invasive Thoracic Surgery Patients 2008-2014

K. A. Robinson1, J. D. Phillips2, D. Agniel3, I. Kohane3, N. Palmer3, G. A. Brat1,3  1Beth Israel Deaconess Medical Center,Surgery,Boston, MA, USA 2Dartmouth-Hitchcock,Thoracic Surgery,Lebanon, NH, USA 3Harvard Medical School,Biomedical Informatics,Boston, MA, USA

Introduction:
The US is facing an opioid epidemic, with increasing numbers of abuse, misuse, and overdose events. As a major group of prescribers, surgeons must understand the impact that post-surgical opioids have on the long-term outcomes of their patients. Previous work has demonstrated that approximately 6% of opioid-naïve patients have new persistent opioid use postoperatively (Brummett et al., 2017). In thoracic surgery, postoperative pain has been a significant determinant of morbidity. It is generally accepted that video-assisted or minimally invasive approaches allow patients to recover faster and with less postoperative pain. However, recent literature has been unable to show a significant difference in chronic pain after minimally invasive versus open thoracotomy (Brennan, 2017). In this study, we aimed to identify whether postoperative opioid prescribing differs between patients undergoing minimally invasive and open thoracic surgery.

Methods:
Using a de-identified administrative and pharmacy database of over 1.4 million opioid-naïve surgical patients for the years 2008-2014, we retrospectively identified patients undergoing minimally invasive versus open thoracic surgery based on their ICD coding and compared opioid prescribing and postoperative misuse codes between the cohorts.
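
The two trend curves reduce to a groupby over year and surgical approach; a sketch assuming a per-patient prescription summary table with hypothetical column names:

```python
import pandas as pd

rx = pd.read_csv("thoracic_opioid_rx.csv")  # hypothetical summary table
rx["daily_mme"] = rx["total_mme"] / rx["days_supplied"]

# Average daily MME and prescription duration by year and approach
trend = (rx.groupby(["year", "approach"])   # approach: 'MIS' or 'open'
           .agg(mean_daily_mme=("daily_mme", "mean"),
                mean_days=("days_supplied", "mean")))
print(trend)
```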

Results:
1,907 minimally invasive (MIS) and 2,081 open thoracic surgery cases were identified from CPT cohorts. During the study years, the average daily morphine milligram equivalents prescribed decreased for both open and MIS thoracic cases (Figure 1a). However, over the same period, the duration of opioids prescribed after MIS thoracic surgery did not significantly change; indeed, prescription duration trended upward for both open and MIS thoracic surgery (Figure 1b).

Conclusion:
Previous work has demonstrated that an increasing duration of opioid prescription after surgery is a stronger predictor of opioid misuse than the dosage prescribed. By prolonging the length of exposure to opioid medications, prescribers may not be reducing the risk of misuse in their patients. Furthermore, we observed that open and MIS patients were prescribed approximately the same daily dose, suggesting that postoperative prescribing behavior for pain is not defined by the surgery performed.