57.09 Impact of Hyponatremia on Adult Post-Liver Transplant Outcomes

I. Clouse1, J. W. Clouse1, C. A. Kubal1, R. S. Mangus1  1Indiana University School Of Medicine,Surgery / Transplant,Indianapolis, IN, USA

Introduction:
Serum sodium level has recently been added to the calculation of the model for end-stage liver disease (MELD) score because it is recognized as an important marker of disease severity in cirrhotic patients. This study reviews all adult patients undergoing liver transplant at a single center over a 10-year period to assess the impact of serum sodium levels on clinical outcomes.

Methods:
The records of all adult patients undergoing liver transplant at a single center from 2007 to 2018 were reviewed. Baseline and post-transplant serum sodium levels were recorded. Outcomes included length of stay and patient survival. Neurologic outcomes included any altered mental status, need for neurology consult, and any required brain evaluation or imaging. Cox regression was used to assess 10-year patient survival.
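
For readers interested in the analysis mechanics, a minimal sketch of the Cox model described above, written with the Python lifelines library, might look like the following; the dataframe and its column names (months_followed, died, sodium_meq_l, meld) are invented stand-ins for the study variables, not the actual dataset.

```python
# Hypothetical sketch of a Cox proportional-hazards model for 10-year
# survival, as described in the Methods; toy data, not the study's.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_followed": [120, 84, 36, 120, 18, 60, 96, 110],
    "died":            [0,   1,  1,  0,   1,  0,  1,  0],   # 1 = death observed
    "sodium_meq_l":    [138, 128, 135, 140, 126, 131, 137, 129],
    "meld":            [18,  28,  24,  15,  31,  21,  19,  27],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_followed", event_col="died")
cph.print_summary()  # hazard ratios (exp(coef)) for sodium and MELD
```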

Results:
There were 1,363 adult transplants reviewed during the study period. The median patient age was 57 years; 67% of patients were male and 89% were Caucasian. Etiologies of liver failure included hepatitis C (30%), fatty liver disease (21%), alcoholic liver disease (29%), and hepatocellular carcinoma (22%). The median MELD score was 21, and the median hospital stay was 10 days. Baseline serum sodium was 135 mEq/L or higher in 72% of patients, 130-134 mEq/L in 20%, and less than 130 mEq/L in 8% (109 patients) at transplant.

Patients older than 40 years of age were much more likely to present with hyponatremia (p=0.02), as were those with alcoholic liver disease (p<0.01). Lower serum sodium levels were associated with higher MELD scores. Patients with varying levels of hyponatremia did not differ in risk of perioperative death, 90-day death, or 1-year patient survival, and they had similar hospital lengths of stay (12 versus 10 days, p=0.97). Hyponatremia was associated with 30-day post-transplant altered mental status (>25%, p=0.01) and with requests for neurology consultation (>20%, p<0.01). Brain studies in the first 30 days post-transplant were much more likely in hyponatremic patients (CT, p=0.07; MRI, p=0.05; and EEG, p<0.01). Cox regression demonstrated 10-year patient survival decreasing from 75% to 70% with increasing severity of hyponatremia.

Conclusion:
These results provide a broader understanding of the impact of hyponatremia on post-liver transplant outcomes. Eight percent of patients went to the operating room with a serum sodium level <130 mEq/L. Though patient survival is similar, patients with hyponatremia are much more likely to require neurologic intervention and testing. At our center, efforts are made to maintain stable sodium levels during the transplant procedure, with slow increases in the days post-transplant up to the time of discharge.
 

57.08 High BMI does not Predispose to Post-transplant DM, Morbidity or Mortality in Renal Transplantation.

S. Kaur1, L. Greco1, K. Lau1, A. Di Carlo1, S. Karhadkar1  1Temple University,Surgery / Division Of Abdominal Organ Transplantation,Philadelphia, PA, USA

Introduction:
Candidacy for renal transplantation is multifactorial, and one of the variables factored into this decision is recipient obesity. Obesity has been shown to be associated with an increased risk of allograft dysfunction; however, the association between obesity and short-term complications remains unclear. There is an increasing trend to subject obese patients to bariatric surgery before transplantation. The purpose of this study is to evaluate the association between obesity and the risk of short- and long-term complications after renal transplantation.

Methods:
We identified consecutive patients who underwent renal transplantation at a single center between 2013 and 2018. Body mass index (BMI) was calculated for all patients, who were then stratified as obese (BMI greater than 30) or non-obese (BMI less than 30). Patient charts were reviewed for infectious complications, rejection of the allograft, new-onset diabetes, return to dialysis after transplant, and proteinuria. Student’s paired T-test and odds ratios were calculated to assess the relationship between obesity and the aforementioned complications.
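
As a rough illustration of how such odds ratios can be computed, a 2x2 contingency-table sketch in Python follows; the counts are invented for illustration and are not study data.

```python
# Illustrative 2x2 odds-ratio analysis (invented counts, not study data).
import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.contingency_tables import Table2x2

#                  complication   no complication
table = np.array([[12,            73],    # obese (BMI > 30)
                  [14,           147]])   # non-obese (BMI < 30)

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")

# statsmodels adds a 95% confidence interval for the same table
print("95% CI:", Table2x2(table).oddsratio_confint())
```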

Results:
A total of 246 patients underwent renal transplant between 2013 and 2018; 63.3% (n=155) were male, 91.1% (n=224) underwent deceased donor transplant, and 85.0% (n=209) were on dialysis prior to transplant. Of these patients, 2% (n=5) were underweight (BMI <18.5), 28% (n=69) had a normal BMI (18.5-24.9), 35% (n=86) were overweight (BMI 25-29.9), and 34.6% (n=85) were obese (BMI >30). There was no difference between the obese and non-obese kidney transplant recipients with regard to incidence of return to dialysis after transplant (p=0.458, OR 0.606), new onset of diabetes after transplant (p=0.874, OR 0.915), or proteinuria (p=0.188, OR 1.424). Additionally, in patients who had complications following transplant, there was no significant difference between obese and non-obese recipients in the incidence of organ rejection (p=0.340, OR 0.703) or of complications classified as not secondary to infection or rejection (p=0.965, OR 1.017). There was a weak association of obesity with increased risk of infectious complications (p=0.051, OR 2.199).

Conclusion:
In patients undergoing kidney transplantation, there is no significant difference in the incidence of complications between obese and non-obese patients. There is a weak association of obesity with increased risk of infectious complications that does not reach significance. High BMI is not associated with proteinuria, graft loss, or rejection. Obesity should not be a contraindication for renal transplantation.

57.07 Risk Factors for Predicting Prolonged Length of Stay (PLOS) After Kidney Transplant

W. Q. Zhang1, A. Rana2, B. V. Murthy2  1Baylor College Of Medicine,Houston, TX, USA 2Baylor College Of Medicine,Division Of Abdominal Transplantation,Houston, TX, USA

Introduction:  Length of stay is a surrogate for determining use of healthcare resources and costs to both patients and hospitals. Currently, there is no comprehensive review of risk factors for prolonged length of stay (PLOS; >14 days) after kidney transplant. Insight into factors that increase the probability of PLOS will provide a basis for future clinical applications.

Methods: Univariate and multivariate analyses (p<0.05) were performed on 98,322 adult recipients of deceased donor kidneys between August 2008 and March 2018 using the United Network for Organ Sharing (UNOS) database to identify donor, recipient, and perioperative risk factors for PLOS.
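
A minimal sketch of the multivariable step, assuming simulated data and illustrative variable names rather than actual UNOS fields, might look like this in Python:

```python
# Hypothetical sketch of a multivariable logistic model for PLOS
# (>14 days); variable names are illustrative, not actual UNOS fields.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "cold_ischemia_hrs": rng.uniform(2, 40, n),
    "icu_at_transplant": rng.integers(0, 2, n),
})
# simulate the binary PLOS outcome from a known logistic relationship
logit_p = -2.5 + 0.05 * df["cold_ischemia_hrs"] + 2.0 * df["icu_at_transplant"]
df["plos"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("plos ~ cold_ischemia_hrs + icu_at_transplant", data=df).fit()
print(np.exp(fit.params))   # exponentiated coefficients = odds ratios
```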

Results: Univariate analysis identified 32 factors, in addition to Estimated Post Transplant Survival (EPTS) score and the Kidney Donor Profile Index (KDPI), as possible predictors of PLOS. Including EPTS and KDPI, 18 total factors remained significant after multivariate analysis. Factors increasing the probability of PLOS include longer cold ischemia times (CITs), admission to the intensive care unit (ICU) at time of transplant, lower functional status, African American ethnicity, male donor, body mass index (BMI) under 18.5 or over 35, longer time on dialysis, and national procurement. Factors protective against PLOS include shorter time on waitlist, shorter time on dialysis, and BMI of 25 up to 30.

Conclusion: Overall, admission to the ICU (Odds Ratio (OR) = 13.61) had the largest effect on PLOS, but other interactions were also revealed. Of note, groups with CITs of 7 hours up to 18 hours (OR = 1.65), 18 hours up to 32 hours (OR = 1.97), and over 32 hours (OR = 2.42) all had significantly increased risk of PLOS compared to the reference group of CIT under 7 hours, with the effect on PLOS increasing with increasing CIT. This emphasizes the need to minimize CIT. Other factors will require further analysis to interpret. A next step for this project will be to create a predictive index for PLOS.
 

57.05 Pediatric Liver Transplantation for Malignancy: Surgical Outcomes and the Role for Segmental Grafts

F. Lopez-Verdugo1, A. Florez-Huidobro Martinez2, S. Fujita1, K. Jensen1, I. Zendejas1, E. R. Scaife1, L. Book1, R. L. Meyers1, M. I. Rodriguez-Davalos1  1Intermountain Primary Children’s Hospital,Department of Surgery, Transplant And Hepatobiliary Surgery,Salt Lake City, UT, USA 2Universidad AnĂ¡huac,School Of Medicine,Mexico City, MX, Mexico

Introduction: Primary resection remains the mainstay of treatment for liver malignancies in the pediatric population; unfortunately, many of these children present with unresectable disease, for which liver transplantation has become the standard of care. Hepatoblastoma (HBL) is the most common liver malignancy in children, representing 80% of all liver tumors. It usually presents before the age of four and appears to affect more male patients, with a male-to-female ratio varying from 1.2:1 to 3.6:1. The aim of this study is to review patient and graft survival in a cohort of patients with liver malignancy who underwent liver transplantation at our center over the past 2 decades and to compare the different types of liver allografts.

Methods:  All patients diagnosed with liver malignancy who underwent liver transplantation as treatment from 1998 to 2018 were analyzed. Demographics, age at the time of transplant, prior resections, type of graft, vascular complications, survival rate, and recurrence were evaluated. Fisher’s exact test was performed to assess differences in survival rate at 12-month follow-up between graft types.

Results: Of the 249 transplants performed at our center over the last two decades, 16 (6.4%) were performed for malignancies in 15 patients. The mean age at transplant was 4.19 years (range: 0.6-7.1 years), and 9 patients (60%) were female. Thirteen transplants (81.2%) were performed for HBL, 2 for hemangioendothelioma (12.5%), and 1 for pancreatoblastoma (6.25%). Five transplants were from living donors and 11 from deceased donors (3 reduced/split and 8 whole); 1 patient received an ABO-incompatible (ABOi) liver. Half of our cohort received technical variant grafts from either living or deceased donors. Of the patients with HBL, 4 (30%) had a prior resection attempt; among these, 2 patients (50%) succumbed within 2 months after liver transplantation (LTx). Overall, 3 patients died within the first 6 months after LTx. Causes of death included recurrence of disease (n=2) and primary graft non-function (n=1). All patients with diseases other than HBL are alive and doing well. At a median follow-up of 84.5 months (range: 0-241), overall patient and graft survival were 80% and 75%, respectively. There was no statistically significant difference in survival rate between patients who received whole grafts and those who received technical variant grafts (p=0.5).

Conclusion: Timing is critical in providing liver transplantation for patients suffering from liver malignancy in whom the extent of disease precludes complete resection. The use of segmental grafts and ABOi livers did not appear to diminish survival at 1-year follow-up; thus, utilization of these graft types might increase the organ donor pool and expedite treatment for patients with liver malignancy. The most important factor in our series was tumor histology, which was consistent with the small cell undifferentiated variant of HBL in the patients who died of recurrent disease.

57.04 Small Pediatric Livers Can Be Used Safely in Adult Recipients with Good Long-Term Outcomes

A. Shubin1, C. S. Hwang1, P. Vagefi1, D. Desai1, M. MacConmara1  1University Of Texas Southwestern Medical Center,Department Of Surgery, Division Of Surgical Transplantation,Dallas, TX, USA

Introduction:

It is well recognized from clinical experience in living donor liver transplantation that the use of small allografts can lead to early complications in the recipient, including primary graft non-function; however, less is known about outcomes of whole-organ use, especially in the long term. We evaluated patient and allograft survival in adult liver recipients who received allografts from young pediatric donors to determine whether broader use of these organs is warranted.

Methods:

The United Network for Organ Sharing (UNOS) database was queried to examine outcomes in all liver recipients from 1993 to 2017. Patients were divided into those receiving their liver allograft from a small pediatric donor (<30 kg) and those receiving an adult graft. Further stratification was done on the basis of the fulminant status of the recipient. Kaplan-Meier survival curves were generated; continuous variables were compared using the unpaired Student's t-test and nominal variables using either the chi-square or Fisher's exact test. A p-value <0.05 was considered significant.
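
A hypothetical sketch of the Kaplan-Meier comparison, using the Python lifelines library and a toy dataframe in place of the UNOS extract, might look like the following; the log-rank test shown is a common companion to Kaplan-Meier curves, though the abstract reports chi-square and Fisher's exact tests for nominal variables.

```python
# Toy Kaplan-Meier comparison of pediatric-donor vs adult-donor grafts;
# the dataframe and column names are hypothetical, not UNOS fields.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "years_followed":  [10, 2, 7, 1, 12, 5, 9, 3],
    "died":            [0,  1, 0, 1, 0,  1, 0, 1],
    "pediatric_donor": [1,  1, 1, 1, 0,  0, 0, 0],   # donor <30 kg
})
ped, adult = df[df["pediatric_donor"] == 1], df[df["pediatric_donor"] == 0]

kmf = KaplanMeierFitter()
kmf.fit(ped["years_followed"], ped["died"], label="pediatric donor (<30 kg)")
ax = kmf.plot_survival_function()
kmf.fit(adult["years_followed"], adult["died"], label="adult donor")
kmf.plot_survival_function(ax=ax)

# log-rank test for a difference between the two survival curves
print(logrank_test(ped["years_followed"], adult["years_followed"],
                   ped["died"], adult["died"]).p_value)
```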

Results:

Data on 143,612 recipients were evaluated. 668 adult recipients received pediatric donor liver allografts, and of those, 109 patients were transplanted for fulminant disease. The average donor age was 7.95 years for pediatric donors versus 38.9 years for adult donors, and the mean donor-to-recipient weight difference was 35.2 versus 4.5 kg (p<0.05), respectively. Recipients of pediatric livers were smaller (61.2 vs 82.7 kg, p<0.05). Overall long-term survival of recipients who received small pediatric allografts was not statistically different. In general, recipients with fulminant disease had worse outcomes; however, those adults with fulminant disease who received a small pediatric graft had markedly worse outcomes.

Conclusion:

Small pediatric liver grafts can be safely used in selected adult recipients and provide excellent outcomes.

57.03 Analysis of Anti-Thymocyte Globulin Antibody Titer Congruent with Kidney Transplantation

S. G. Lattimore1, N. J. Skill1, M. A. Maluccio1  1Indiana University School Of Medicine,Transplant,Indianapolis, IN, USA

Introduction:  Rabbit thymoglobulin is an anti-thymocyte globulin (rATG) used in transplantation to deplete lymphocytes. However, anti-rATG antibodies are associated with both acute and chronic rejection. The purpose of this study is three-fold: first, to report the incidence of anti-rATG antibodies in a large renal transplant center; second, to evaluate outcomes, risk factors, hazard ratios, and costs associated with a positive anti-rATG antibody titer; and finally, to investigate CD40L and IL21, both linked to antibody-mediated rejection, as alternate targets for patients with anti-rATG antibodies.

Methods:  The clinical records of renal transplant recipients between January 2004 and May 2018 were reviewed for anti-rATG antibody titers. Serum CD40L and IL21 quantification was performed using commercially available ELISAs. Cost analysis was extracted from billing records 0-7 days post-rATG titer, using total charges as a proxy for cost.
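
A toy version of the cost comparison reported in the Results (a two-sample t-test on total charges) could be run as follows; the charge values are invented, not billing data.

```python
# Invented charge data illustrating a two-sample t-test on costs between
# anti-rATG titer-positive and titer-negative patients.
import numpy as np
from scipy import stats

charges_titer_pos = np.array([31000, 42000, 38000, 45500, 41200])
charges_titer_neg = np.array([98000, 125000, 131000, 112000, 121500])

t_stat, p_value = stats.ttest_ind(charges_titer_pos, charges_titer_neg)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# stats.ttest_ind(..., equal_var=False) gives Welch's test if variances differ
```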

Results: Between January 2004 and May 2018, the Indiana University Hospital transplant program performed, on average, 160 renal transplants per year. Anti-rATG antibody ELISAs were requested and performed for 19 patients per year (11.8%), of whom 4.8 patients per year were positive at a 1:100 titer (25.3%). Anti-rATG antibodies were associated with a significantly shorter time to rejection (137 days) when compared to antibody-negative patients (873 days). No correlation was found between anti-rATG antibodies and time on dialysis or lymphocyte populations. A slight correlation was observed between positive anti-rATG antibody titer and recipient age. Anti-rATG rates were greater in patients receiving a second kidney (37.5%). Cost of treatment in patients with a positive anti-rATG titer was significantly lower ($39,549±9,504 vs $117,503±16,273, t-test p=0.0001). IL21 and CD40L were significantly greater in patients with a positive anti-rATG antibody titer when compared to patients who were negative.

Conclusion: Anti-rATG antibodies significantly impact the outcomes and costs of kidney rejection. Monitoring of anti-rATG antibody titers is required to evaluate outcomes and treatment options, especially in the setting of second transplants. Elucidation of the mechanisms associated with a positive anti-rATG antibody titer is required. IL21 and CD40L are potential targets.

57.02 Simultaneous Pancreas-Kidney (SPK) Transplantation Outcomes in Type 2 (T2D) vs Type 1 (T1D) Diabetes

P. H. Pham1,2,4, E. Martinez1,2,3,4, B. Welch2,4, G. Leverson2,4, N. Marka2,4, H. W. Sollinger1,2,4, D. Kaufman1,2,4, R. R. Redfield1,2,4, J. S. Odorico1,2,4  4University Of Wisconsin,Surgery,Madison, WI, USA 1University Of Wisconsin,School Of Medicine And Public Health,Madison, WI, USA 2University Of Wisconsin,Transplantation,Madison, WI, USA 3Baylor University,Dallas, TX, USA

Introduction:

Pancreas transplantation (PTx) is an established, effective treatment for T1D.1,2,3,4 The few studies available in T2D found similar transplant outcomes, including patient, kidney graft, and pancreas graft survival, between T1D and T2D SPK recipients,5 and suggested that SPK transplantation might be associated with improved patient and kidney survival compared to kidney transplantation alone.6,7,8 However, limited data are available regarding the effect of recipient factors such as age, BMI, or pre-transplant insulin requirements on such outcomes, specifically for T2D recipients.

In this study, we assessed the effects of recipient pre-transplant BMI and insulin requirement on the outcomes of SPK transplantation in T2D patients, and compared these to the impact of those parameters on T1D SPK recipients. The results of this study will not only contribute to the understanding of PTx in T2D recipients, but will also better inform patients and physicians in the decision-making process regarding treatment options.

Methods:
A total of 323 patients who underwent SPK at the University of Wisconsin Hospital between 2006 and 2017 were assessed for recipient pre-transplant BMI, insulin requirements (Pre-IR), post-transplant diabetes (PTDM) (defined as a post-transplant return to an oral hypoglycemic agent (OHA) and/or return to any insulin for >3 consecutive months), and graft failure (GF) (reported as resumption of insulin, pancreatectomy, or death). Minimum follow-up was 1 year, except for patients who underwent pancreatectomy, experienced graft failure within 90 days, or died within 1 year of transplant. Recipient factors were analyzed to categorize patients as T1D or T2D. Additional variables controlled for included donor age, race, gender, BMI, type (DBD vs. DCD), KDPI, and CIT, as well as recipient age, gender, race, donor-recipient CMV/EBV status, and induction therapy. Data collection was completed using the UW Transplant Database and the electronic health record (EHR).

Results:
SPK transplants for T2D increased from 1 per year (3.1% of all SPK) in 2006 to 12 per year (29.3%) in 2016. The 323 patients were categorized, based on several clinical parameters, as 284 T1D and 39 T2D patients. During the follow-up period, 52 patients (16.1%; 49 T1D and 3 T2D) resumed insulin for >3 months, and 23 patients (7.1%, all T1D) initiated OHA use post-transplant. Overall, 59 patients (18.2%) experienced GF (pancreatectomy: 18 T1D, 1 T2D; resumption of insulin: 37 T1D, 3 T2D). In T2D patients, BMI and Pre-IR were not significantly associated with GF (BMI: p=0.71; Pre-IR: p=0.30) or PTDM (BMI: p=0.58; Pre-IR: p=0.54). In T1D patients, neither BMI nor Pre-IR was significantly associated with GF (BMI: p=0.12; Pre-IR: p=0.16) or PTDM (BMI: p=0.14; Pre-IR: p=0.16).

Conclusion:
In this study, we could not identify a significant association between these pre-transplant parameters and graft failure in general, or PTDM specifically, in T2D SPK recipients. These observations could support a less restrictive approach to SPK transplantation in T2D recipients.

57.01 Frailty and Causes of Morbidity and Mortality in Kidney Transplant Recipients

L. Greco1, S. Kaur1, A. Di Carlo1, K. Lau1, S. Karhadkar1  1Temple University,Surgery / Abdominal Organ Transplantation,Philadelphia, PA, USA

Introduction:
Frailty has been shown to be predictive of patient outcomes following renal transplantation. The purpose of this study is to analyze the effects of frailty on specific types of complications in patients following kidney transplantation.

Methods:
We identified all patients who underwent renal transplantation at an urban university hospital between 2013 and the first half of 2018. A frailty score based on the Modified Frailty Index 5 (MFI5) was calculated for all patients, and charts were reviewed for infectious complications, rejection of the allograft, new diagnosis of diabetes, return to dialysis after transplant, and proteinuria. Student’s paired T-test and odds ratios were calculated to assess the relationship between a frailty index score greater than 3 and the aforementioned complications.

Results:
A total of 246 patients underwent renal transplant at a single-center university hospital between 2013 and 2018; 63.3% (n=155) were male, 91.1% (n=224) underwent deceased donor transplant, and 85.0% (n=209) were on dialysis prior to transplant. Based on an MFI score greater than 3, 45.1% (n=111) of patients were considered frail. There was no significant difference between the frail and non-frail kidney transplant recipients with regard to incidence of return to dialysis after transplant (p=0.4, OR 0.593), new onset of diabetes after transplant (p=0.85, OR 0.85), or proteinuria (p=0.325, OR 1.28). Additionally, in patients who had complications following transplantation, there was no significant difference between frail and non-frail recipients in the incidence of organ rejection (p=0.506, OR 0.796) or of complications classified as not secondary to infection or rejection (p=0.807, OR 1.095). There was an association of frailty with increased risk of infectious complications (p=0.04, OR 2.261).

Conclusion:
Frail patients who underwent kidney transplantation were no more likely than their non-frail counterparts to develop diabetes after transplant, return to dialysis, develop proteinuria, or experience rejection of their organ. There was an association between frailty and increased risk of infectious complications. These results have important implications for antibiotic prophylaxis in this cohort of immunosuppressed frail patients.
 

56.20 Hit, Stand, or Fall: A Review of Casino-Related Injuries from a Level-I Trauma Center

Z. A. Spigel1, M. R. Noorbakhsh1, F. H. Philp1  1Allegheny Health Network,Department Of Surgery,Pittsburgh, PA, USA

Introduction:

Within our large metropolitan area is a single, large, waterfront-style casino, which opened on August 9, 2009 and rests exclusively in the catchment area of our Level I trauma center. As the only Level I trauma center serving this casino, we have a unique opportunity to explore the epidemiologic history of casino-related traumas in a major metropolitan center over nearly a decade.

Methods:

Our Trauma Registry is a prospectively maintained database containing all patients who present with a traumatic injury, regardless of trauma activation (or “Level”) status. We retrospectively queried our database for all patients presenting between August 2009 and April 2018 with place of injury listed as “casino.”
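
A sketch of such a query, assuming the registry were exported to a dataframe with hypothetical column names, might look like:

```python
# Hypothetical registry query; the dataframe stands in for a registry
# export and its column names are illustrative only.
import pandas as pd

registry = pd.DataFrame({
    "place_of_injury": ["casino", "home", "street", "Casino"],
    "injury_date": pd.to_datetime(["2015-06-01", "2016-01-10",
                                   "2012-03-08", "2017-11-21"]),
})

casino = registry[
    registry["place_of_injury"].str.casefold().eq("casino")
    & registry["injury_date"].between("2009-08-01", "2018-04-30")
]
print(len(casino), "casino-related trauma patients")
```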

Results:

Fifty-five patients were identified with casino as the place of injury. Females represented 55% of patients presenting from the casino, and patients were predominantly white (89%). Mean age was 68 ± 17 years. Alcohol was common only in patients below the mean age, with blood alcohol concentrations (BAC) ranging from 0.16 to 0.32 in BAC-positive patients. Saturday was the most common day of traumatic injury, with 49% of patients presenting on the weekend. Injuries occurred more often at night (1800-0600) than during the day (63% vs 37%). Two percent of patients were Level I trauma activations, 44% Level II, and 55% were triaged initially by the emergency department. Almost all patients (95%) presented after falls, with the remainder presenting after assault. All patients required hospital admission, including 13 (24%) to the trauma ICU. Twenty-three patients (42%) required an operation, predominantly by orthopedic surgery (80%) or neurosurgery (12%). Three patients (5%) did not survive their injuries and died in-hospital.

Conclusions:

While elderly falls in casinos represent the majority of casino-related traumas in our catchment area, the mechanisms do not appear to be attributable to variables present solely at casinos and absent from home environments. Trauma in the younger population at the casino appears related more to alcohol consumption than to the casino venue itself. As a public health initiative, identifying fall risks in the casino environment and providing safe transit for an elderly population, while limiting younger patrons from over-consumption of alcohol, could significantly decrease the number of traumas experienced in our casino environment.

56.19 Assessing Specialty-Based Management of Operative Traumatic Extremity Vascular Injury

N. A. Ludwig1, N. Bhutiani1, B. G. Harbrecht1, A. J. Dwivedi1, M. C. Bozeman1  1University of Louisville,Surgery,Louisville, KY, USA

Introduction: Despite several studies across a variety of procedures demonstrating no significant improvement in outcomes associated with high-volume surgeon specialty, some groups in the United States have increasingly advocated for specialty-specific surgical intervention.  This study sought to evaluate the impact of operative surgeon specialty on perioperative outcomes following repair of traumatic vascular injuries.

Methods: A level 1 trauma center registry was queried for patients with extremity vascular injuries between January 2010 and March 2018.  Patients were classified with respect to the specialty of the service (trauma surgery, vascular surgery) that performed operative repair of their vascular injury.  Demographic, injury, and perioperative outcome variables were compared.

Results: Of 217 patients undergoing surgical repair of an extremity vascular injury, 142 had repairs performed by trauma surgery and 75 by vascular surgery. Patient cohorts did not differ with respect to age, gender, abbreviated injury severity, or injury severity score (Table). Regarding location of vascular injury, patients undergoing repair by trauma surgery were more likely to have injuries to the axillary/subclavian, iliac, and femoral arteries and less likely to have injuries to the brachial or popliteal artery than patients undergoing repair by vascular surgery. Patients undergoing repair by vascular surgery were more likely to be admitted to the intensive care unit (ICU) postoperatively (96.0% vascular surgery vs. 83.1% trauma surgery, p=0.005). ICU length of stay, mechanical ventilation requirement, and duration of mechanical ventilation did not differ between groups, nor did requirement for amputation during the index hospitalization. Groups did not differ with respect to mortality during the index hospitalization, though patients undergoing repair by trauma surgery were more likely to be discharged home with or without home health (76.1% trauma surgery vs. 57.3% vascular surgery, p=0.005).

Conclusion: Among patients with traumatic extremity vascular injuries, operating surgeon specialty does not significantly impact perioperative outcomes. As evidenced by the greater likelihood of vascular surgery involvement in brachial and popliteal artery injuries, surgeon comfort and experience with vascular exposures do and should factor into the decision to involve a vascular surgeon in patient care. Still, the data suggest that the training and exposure received by trauma surgeons are sufficient to provide high-quality care for patients with traumatic extremity vascular injuries.

 

56.18 Robot-assisted Versus Laparoscopic Colonic Resection for Colon Cancer: Are We Getting Better?

K. Memeh1, V. Pandit1, P. N. Omesiete1, V. N. Nfonsam1  1University Of Arizona,Surgery,Tucson, AZ, USA

Introduction:
Robotic-assisted surgery has continued to gain popularity amongst surgeons over the past decade and has begun to emerge as a probable procedure of choice in difficult minimally invasive cases such as colon resection. The aim of our study is to compare the outcomes of robotic colon resection (RCR) vs laparoscopic colon resection (LCR) for colon cancer.

Methods:
We performed a 2-year review (2015-2016) of the ACS-NSQIP colectomy database and included all patients with colon cancer who underwent RCR or LCR with primary anastomosis. Patients were stratified into two groups: RCR and LCR. Outcomes were conversion rates to open, 30-day complications, and 30-day mortality. Regression analysis was performed.

Results:
A total of 14,824 patients were analyzed, of whom 5,726 underwent colonic resection with primary anastomosis and were included. Mean age was 65±12 years, 52% were male, and median ASA class was 3 [2-4]. Overall, 16% of patients underwent RCR, and the mortality rate was 1.6%. There was no difference in age (p=0.25), gender (p=0.35), ASA class (p=0.77), comorbidities, or pre-operative labs between the two groups. Median operative time was longer for RCR (218 vs. 160 minutes, p=0.01), while the conversion-to-open rate was lower (4.7% vs. 7.7%, p=0.03) compared to LCR. There was no difference in 30-day complications or 30-day mortality (Table-1). On regression analysis, patients who underwent LCR had higher odds of conversion to open (OR: 2.6 [1.2-11.8], p=0.03) compared to RCR.

Conclusion:
Though patients who underwent robotic colonic resection for colon cancer had longer operative times, they were more likely to stay minimally invasive compared to LCR. There was no difference in 30-day outcomes between the robotic and laparoscopic approaches to colon resection.
 

56.16 Systematic Under-Coding of Diagnostic Procedures in the National Inpatient Sample (NIS): A Threat to Validity

O. P. Owodunni1, B. D. Lau2,6,10, K. L. Florecki1, K. L. Webster1,9, D. L. Shaffer3,7, D. B. Hobson3,7, P. S. Kraus8, C. G. Holzmueller1,9, J. K. Canner4, M. B. Streiff5,9, E. R. Haut1,9,10,11  1The Johns Hopkins University School Of Medicine,Acute Care/Surgery,Baltimore, MD, USA 2Johns Hopkins University School Of Medicine,Radiology,Baltimore, MD, USA 3Johns Hopkins University School Of Medicine,Surgery,Baltimore, MD, USA 4Johns Hopkins University School Of Medicine,Surgery Center For Outcomes Research,Baltimore, MD, USA 5Johns Hopkins University School Of Medicine,Hematology/Medicine,Baltimore, MD, USA 6Johns Hopkins University School Of Medicine,Health Sciences Informatics,Baltimore, MD, USA 7Johns Hopkins University School Of Medicine,Nursing,Baltimore, MD, USA 8Johns Hopkins University School Of Medicine,Pharmacy,Baltimore, MD, USA 9The Armstrong Institute for Patient Safety and Quality,Baltimore, MD, USA 10The Johns Hopkins Bloomberg School of Public Health,Health Policy And Management,Baltimore, MD, USA 11Johns Hopkins University School Of Medicine,Anesthesiology And Critical Care Medicine,Baltimore, MD, USA

Introduction:  Acute myocardial infarction (AMI) and venous thromboembolism (VTE) are common cardiovascular disorders and a significant cause of morbidity and mortality in surgical patients. These disorders are commonly monitored and publicly reported quality-of-care measures. Much health outcomes research relies on readily available, inexpensive data originally collected for administrative and billing purposes. However, the inherent limitations of administrative databases are often overlooked. We hypothesized that the National Inpatient Sample (NIS) would have a high prevalence of underreporting of diagnostic procedures for AMI and VTE.

Methods:  We retrospectively analyzed inpatient encounters with a primary diagnosis of AMI or VTE (comprising deep vein thrombosis [DVT] or pulmonary embolism [PE]) in the 2012-2014 Healthcare Cost & Utilization Project NIS database. We identified standard diagnostic procedures for each diagnosis: venous duplex ultrasound (US) for DVT; electrocardiogram (EKG) for AMI; and chest computed tomography (CT) scan, pulmonary angiography, echocardiography, and nuclear medicine ventilation/perfusion (V/Q) scan for PE. We applied survey-weighting statistical approaches and calculated the proportion of patients with a documented corresponding diagnostic test for their diagnosis.
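
To illustrate the survey-weighted proportion calculation, a toy example follows: each NIS record carries a discharge-level weight (DISCWT in HCUP files), and the estimated proportion is the weighted share of encounters with the test documented. The weights and flags below are invented.

```python
# Toy survey-weighted proportion; invented weights and indicator flags.
import pandas as pd

enc = pd.DataFrame({
    "discwt":   [5.0, 5.0, 4.8, 5.2, 5.1],   # discharge-level weights
    "had_test": [0,   1,   0,   0,   0],     # corresponding test documented?
})

weighted_prop = (enc["discwt"] * enc["had_test"]).sum() / enc["discwt"].sum()
print(f"weighted proportion with documented test: {weighted_prop:.2%}")
```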

Results: All three cardiovascular outcomes of interest had a similarly low proportion of patients with documentation of a corresponding diagnostic procedure. Only 0.26% (n = 4,800) of patients with reported AMI had an EKG. Just 2.20% (n = 9,655) of patients with reported DVT events had a peripheral vascular ultrasound. For patients with a PE diagnosis, 4.10% (n = 20,825) had a CT scan, 1.60% (n = 7,965) had pulmonary angiography, 5.20% (n = 26,390) had echocardiography, and 0.48% (n = 2,460) had a V/Q scan (see Table 1).

Conclusion: An extremely small proportion of diagnostic procedures is documented for inpatient encounters with a documented diagnosis of AMI, DVT, or PE. This finding calls into question the validity of using the NIS and other administrative databases to examine health care use and outcomes for at least some categories of diagnostic tests. Unfortunately, the NIS does not provide enough granular data to control for differences in diagnostic procedure use, which may lead to surveillance bias in outcomes reporting. Clinical researchers and policy makers must understand and acknowledge the limitations inherent in these data when using them for pay-for-performance initiatives and hospital benchmarking.

 

56.15 Patients, Tell Us What You Think: Qualitative Evaluation of an Enhanced Recovery Pathway

L. J. Kreutzer1, M. W. Meyers3, M. McGee1,3, S. Ahmad3, K. Gonzalez3, S. Oberoi3, K. Engelhardt1, K. Y. Bilimoria1,2,3, J. K. Johnson1,2  1Northwestern University,Surgical Outcomes And Quality Improvement Center,Chicago, IL, USA 2Feinberg School Of Medicine – Northwestern University,Chicago, IL, USA 3Northwestern Memorial Hospital,Chicago, IL, USA

Introduction: Enhanced Recovery Pathways (ERPs) improve post-surgical recovery and patient outcomes by reducing complications, decreasing length of stay, and improving patient satisfaction; however, hospitals underestimate the complexity of implementing a multifaceted intervention that requires high levels of patient participation pre- and post-operatively. Our goal was to evaluate patient perspectives during early ERP implementation and to address challenges patients face when preparing for, and recovering from, surgery.

Methods: As part of an in-depth, formative evaluation of an ERP for patients recovering from elective colorectal resections at a large urban tertiary care teaching hospital, we conducted semi-structured interviews with patients (n=9) from September 2016 to August 2017. At least two patients for each colorectal surgeon (n=4) participated in the interviews. Patients were asked if they knew they were participating in an ERP and about their pre-operative experience, level of preparedness, and expectations for surgery and post-operative recovery.  Detailed notes were taken during each interview in lieu of audio recording to maintain patient confidentiality. We conducted thematic analysis using the constant comparative method to identify common themes.

Results: All patients approached for an interview agreed to participate. Patients interviewed were not able to identify specific benefits of the ERP related to clinical outcomes but focused their comments on the patient-facing components of the ERP. While all patients shared positive feedback regarding their care and post-operative pain control, their comments about the ERP were inconsistent. Themes identified included expectations, preparation for surgery, the ERP patient education booklet, and the inpatient experience (specifically diet, pain, and education). Patient views about the ERP patient education booklet provided prior to surgery ranged from useful (one patient said she used the booklet to identify which activities to undertake each postoperative day to enhance her recovery) to inadequate or forgettable (some patients were unable to remember receiving the booklet or felt it did not fully answer their questions). Another common theme during the interviews involved patient confusion about the early feeding component of the protocol, which allowed patients to eat on postoperative day 0.

Conclusion: Conducting patient interviews during the post-operative inpatient stay enabled us to explore patient understanding of an ERP. Patient activation is an important component of a successful ERP and careful attention is needed to engage patients in preoperative expectation setting and postoperative recovery. Multiple modes of education and augmented patient education materials may be more effective than a one-size-fits-all approach to facilitate engagement.
 

56.14 Management Of Perforated Peptic Ulcer Disease At A Safety Net Hospital: Is Nonoperative Management Safe?

G. J. Roberts1, M. Cripps1, H. Phelan1, R. Mao1, T. Hranjec2, S. Hennessy1  1University Of Texas Southwestern Medical Center,General And Acute Care Surgery,Dallas, TX, USA 2Memorial Physician Group,Trauma,Hollywood, FL, USA

Introduction:
Peptic ulcer disease (PUD) affects over 4 million people worldwide annually, and up to 14% of them will develop a perforation. Selective nonoperative management of patients with perforated duodenal ulcers has been found to be safe and successful. However, nonoperative management of perforated gastric ulcers is more questionable, especially when medical treatment and follow-up are uncertain. We sought to evaluate our experience in the management of perforated peptic ulcers at a tertiary care public safety net hospital and to compare operative and nonoperative management.

Methods:
A retrospective review of all patients with a perforated peptic ulcer at a single tertiary care public safety net hospital from April 2009 to August 2017 was performed. Data were obtained by chart review and included patient demographics, comorbidities, lifestyle factors, and perioperative and in-hospital admission factors, along with morbidity, mortality, and follow-up. Patient outcomes and follow-up after nonoperative versus operative management were compared by univariate analysis using Wilcoxon rank sum, chi-square, and Fisher’s exact tests where appropriate.

Results:
A total of 74 patients with perforated PUD were identified: 12 (16%) duodenal ulcers, 59 (80%) gastric ulcers, and 3 (4%) unknown. Overall 30-day mortality was 5.4%, with a median length of stay of 7 days (range 2-70). Only 61.5% (43 patients) followed up in clinic, with 11% of patients undergoing a subsequent EGD. Overall, 14 patients (19%) were managed nonoperatively, of which 10 (71%) were gastric ulcer perforations.

Conclusion:
Clinic and endoscopic follow-up after perforated peptic ulcer disease at a safety net hospital is suboptimal. In hemodynamically stable patients with perforated gastric ulcers without diffuse peritonitis, nonoperative management appears to be safe in the short term. However, given the lack of medical, biochemical, and endoscopic follow-up in this patient population, one should strongly consider definitive operative management.
 

56.13 Optimum Operative Time Predicting Peak Outcomes in Laparoscopic Surgery: Is Faster Better?

G. Ortega1, N. Bhulani1, M. A. Chaudhary1, R. Manzano1, S. O’Sullivan1, M. Jarman1, C. K. Zogg1, T. M. Fullum2, A. Haider1  1Center for Surgery and Public Health, Brigham and Women’s Hospital,Surgery,Boston, MA, USA 2Howard University College Of Medicine,Surgery,Washington, DC, USA

Introduction:

Operative time is dependent on multifaceted interactions between physician- and patient-level factors. Prolonged operating room time is known to be associated with adverse patient outcomes; however, the optimal time for individual laparoscopic procedures leading to a lower likelihood of complications remains unknown. Utilizing a national surgical database, we assessed operating room time for major laparoscopic procedures to gain insight into the optimum length of time leading to the best outcomes.

Methods:

Using the ACS-NSQIP database from 2005-2015, we identified 653,119 patients undergoing laparoscopic procedures (appendectomy (AP), cholecystectomy (CC), Roux-en-Y gastric bypass (RYGB), adjustable gastric banding (AGB), Nissen fundoplication (NFP), inguinal hernia repair (IHR), sigmoidectomy (SG), and splenectomy (SP)). The optimum operative time and its relation to adverse outcomes were analyzed for each procedure using nonparametric LOWESS regression to identify time nodes. The time intervals were then analyzed in an adjusted multivariable logistic regression for the primary outcomes of surgical site infection, intra-operative complications, and major and minor postoperative complications.
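
A sketch of the LOWESS step on simulated data might look like the following; the real analysis used NSQIP records, and the variable names here are illustrative.

```python
# Simulated LOWESS smoothing of complication rate against operative time;
# inflection points in the smoothed curve suggest candidate time nodes.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
op_time = rng.uniform(20, 300, 2000)                      # minutes
p_complication = 0.15 + 0.002 * np.maximum(op_time - 120, 0)
complication = rng.random(2000) < p_complication          # 0/1 outcome

smoothed = lowess(complication.astype(float), op_time, frac=0.3)
# smoothed[:, 0] = sorted operative times, smoothed[:, 1] = smoothed rate
print(smoothed[::200])
```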

Results:

Of the 653,119 surgical patients who underwent laparoscopic procedures, there were 178,825 AP, 168,413 CC, 5,637 RYGB, 97,589 AGB, 73,867 NFP, 58,051 IHR, 26,101 SG, and 44,266 SP procedures. The overall complication rate was 31.9% (n=208,406). The median time for each surgery is tabulated in Table 1. Overall, on univariate analysis, increasing operative time was associated with a higher likelihood of complications for all procedures (p<0.001). Table 1 shows the results of subgroup analysis by procedure type, identifying the optimal operative time associated with the lowest risk-adjusted odds of surgical complications for each laparoscopic procedure. Less than 100 minutes was the optimal time for Roux-en-Y gastric bypass, inguinal hernia repair, and sigmoidectomy. Less than 120 and less than 75 minutes were the optimal times for cholecystectomy and adjustable gastric banding, respectively. The optimal times for appendectomy, Nissen fundoplication, and splenectomy were 20-100 minutes, 120-220 minutes, and 60-150 minutes, respectively.

Conclusion:

We show the optimal operative times associated with a lower likelihood of surgical complications for several laparoscopic procedures. Evaluation of optimum operative times is a valuable tool for training future surgeons and predicting outcomes in surgical patients, with the goal of reducing morbidity and complications.
 

56.12 A Celiotomy Closure Technique with Minimized Risk for Incisional Hernia Formation

M. J. Minarich1, L. Y. Smucker1,2, R. E. Schwarz1,2  1Goshen Center for Cancer Care,Surgical Oncology,Goshen, INDIANA, USA 2Indiana University School of Medicine,South Bend, INDIANA, USA

Introduction: Long-term recovery after abdominal operations can be impaired by incisional healing complications. Incisional hernias are reported in 5% to more than 20% of patients undergoing open celiotomy and are often encountered after laparoscopic operations as well. Incisional hernia formation risk has been linked to closure technique.

Methods: A continuous, single-layer, tension-free musculofascial mass closure with absorbable monofilament looped #1 PDS suture with greater than 2 cm bite size has been used for all open celiotomies, while 12 mm umbilical port sites are closed with 2 interrupted #0 PDS figure-of-eight sutures. Incisional hernia frequency and associated clinicopathologic factors were analyzed from prospective data in consecutive patients undergoing primary incision closure after visceral resections without simultaneous hernia repair or mesh placement.

Results: Out of 159 patients, 146 met study criteria and had at least 30 days of follow-up. There were 77 men and 69 women, with a median age of 64 years (range: 17-88) and a cancer diagnosis in 87%. There were 62 pancreatic (42%), 47 hepatobiliary (32%), 13 gastroesophageal (9%), 4 colorectal (3%), and 20 other procedures (14%), including 21 multivisceral resections (14%). Open operations (n=131, 90%) outweighed laparoscopic resections (n=15, 10%); a total of 69 patients had an umbilical port site, primarily from simultaneous diagnostic laparoscopy. The main incision was at the subcostal margin (n=117, 80%), midline (n=17, 12%), or elsewhere (n=12, 8%). At a median follow-up of 20.4 months (range 1-58; 22.5 for survivors), 6 patients (4.1%) had developed an incisional hernia; 2 of these developed at a subcostal incision, 2 at an umbilical site, 1 at both of these, and 1 at a transverse incision. Median time to hernia was 355 days (209-592). Hernia rates were 2.6% for subcostal margin, 0% for midline, and 8.3% for other incisions (including 10% of umbilical port sites) (p=0.006). Factors associated with incisional hernias included increased weight (p=0.007), abdominal depth and girth (p=0.02), spleen size (p=0.02), visceral fat (p=0.03), and platelet count (p=0.04), but not type of resection, prior operations, underlying diagnosis, weight loss, or adjuvant treatment with chemotherapy or radiation.

Conclusion: The closure technique as utilized leads to a low, acceptable incisional hernia rate of <3% in subcostal or midline incisions and can thus be recommended as a routine approach. Based on the 10% hernia rate in umbilical port site closures, we now prefer nonumbilical 5 mm port access for diagnostic laparoscopy procedures.

 

56.11 Practice Variation in Inguinal Hernia Repair: A Longitudinal Population-Based Study

C. S. Latenstein1, F. Atsma2, M. Noordenbos2, S. Groenewoud2, P. R. De Reuver1  1Radboudumc,Surgery,Nijmegen, NIJMEGEN, Netherlands 2Radboudumc,Scientific Institute For Quality Of Healthcare,Nijmegen, NIJMEGEN, Netherlands

Introduction:
Approximately 800,000 inguinal hernia repairs are performed in the US every year. Based on Medicare data, the Dartmouth Atlas of Healthcare shows that the chance a patient will have surgical treatment for an inguinal hernia varies across hospitals by a factor of 2 to 4. Nationwide longitudinal data on hernia repair rates among all patients presenting at the surgical outpatient clinic are lacking. We aimed to determine the longitudinal practice variation for inguinal hernia repair in the Netherlands.

Methods:
A population-based analysis was performed for inguinal hernia patients in all Dutch general hospitals from 2013 to 2015. Operation rates were determined for each hospital by dividing the absolute number of operated patients by the total number of patients who consulted a surgeon for an inguinal hernia. Differences in operation rates between academic, teaching, and non-teaching hospitals were evaluated. Operation rates were adjusted for differences in age, sex, and socioeconomic status (SES) to summarize a factor score for practice variation. This score was calculated by dividing the adjusted operation rate at the 95th percentile by the adjusted operation rate at the 5th percentile, to assess the trend in practice variation over time.
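
The factor-score arithmetic can be transcribed directly; the adjusted rates below are made up for illustration.

```python
# Factor score = adjusted operation rate at the 95th percentile divided
# by the rate at the 5th percentile; made-up rates for 75 hospitals.
import numpy as np

rng = np.random.default_rng(1)
adjusted_rates = rng.uniform(0.55, 0.90, 75)   # one adjusted rate per hospital

factor_score = (np.percentile(adjusted_rates, 95)
                / np.percentile(adjusted_rates, 5))
print(f"factor score: {factor_score:.2f}")     # ~1.4 reported in the Results
```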

Results:
A total of 88,538 patients (90.2% male, mean age 59.9 years) with an inguinal hernia were included from all hospitals during the study period. The average nationwide operation rate for inguinal hernia repair was 74.7%. Operation rates ranged between centers from 22.1% to 96.5%. The average operation rate was 57.0% in eight academic hospitals, 73.0% in 44 teaching hospitals, and 78.7% in 31 non-teaching hospitals. The summarized annual factor score for the 75 general (teaching and non-teaching) hospitals was 1.4 in all three years.

Conclusion:
Unadjusted operation rates for inguinal hernia repair vary among hospitals, but the factor score illustrates relatively low practice variation and comparable adjusted operation rates in the Netherlands. Compliance with evidence-based guidelines and uniformity in decision making could potentially contribute to the low practice variation in inguinal hernia repair compared to other countries.
 

56.10 Readmission Following Outpatient Laparoscopic Cholecystectomy: Opportunities For Improvement

A. C. Beck1, P. Goffredo1, X. Gao1, P. W. McGonagill1, R. J. Weigel1, I. Hassan1  1University Of Iowa,Department Of Surgery,Iowa City, IA, USA

Introduction: Readmissions are a burden to the healthcare system and are used as a quality metric. Over 400,000 outpatient laparoscopic cholecystectomies (LC) are performed annually in the United States, and decreasing readmissions could represent a significant opportunity for quality improvement. The aim of this study is to determine causes of readmission and identify modifiable risk factors.

Methods: Patients (pts) undergoing elective outpatient LC only were identified from the 2013-2015 American College of Surgeons National Surgical Quality Improvement Program (NSQIP) database by Current Procedural Terminology (CPT) codes 47562 (LC) and 47563 (LC + intraoperative cholangiogram/IOC). ICD-9 and ICD-10 codes were used to identify reasons for LC and readmissions. Student's t, chi-square, and Mann-Whitney tests were used to assess continuous and categorical variables.
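
As a toy example of the Mann-Whitney test named above, comparing a skewed continuous variable between readmitted and non-readmitted patients (invented values):

```python
# Invented operative times illustrating a Mann-Whitney U comparison
# between readmitted and non-readmitted patients.
from scipy.stats import mannwhitneyu

op_time_readmitted     = [55, 90, 120, 75, 140, 95]
op_time_not_readmitted = [40, 60, 55, 70, 50, 65, 58]

u_stat, p_value = mannwhitneyu(op_time_readmitted, op_time_not_readmitted,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
```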

Results: A total of 69,376 pts underwent LC or LC+IOC, of whom 2,027 (2.9%) were readmitted within 30 days. The majority (71.5%) of readmissions were related to surgery and occurred after a median of 5 days (interquartile range 3-8). The cohort had a mean age of 48±16 years; 75% were female; mean BMI was 31±8; 27% were ASA ≥3; 22% underwent IOC; and mean operative time was 58±34 minutes. Indications for LC were cholecystitis (72%), gallstones without (wo) obstruction (22%), pancreatitis (1%), and gallstones with (w) obstruction (0.3%). Readmission rates varied by indication: pancreatitis (4.9%), gallstones w obstruction (3.9%), cholecystitis (3.0%), and gallstones wo obstruction (2.6%) (p=0.003). The most frequent causes of readmission were infection, retained stones, and other GI complications (Table). In adjusted analyses, readmitted pts were older, more often male, and had higher ASA class, longer operative times, and higher rates of post-operative complications (all p<0.001); however, these factors varied by the reason for readmission. Pts readmitted for infection or cardiopulmonary complications were older with higher ASA (p<0.01), while pts readmitted for pain, retained stones, and other GI complications were younger with lower ASA (p<0.01). Pts who underwent LC+IOC had a lower readmission rate due to retained stones compared to LC alone (0.17% vs 0.31%, p=0.006). Abnormal serum bilirubin and indication for LC did not correlate with readmission for retained stones (p=0.21 and p=0.33).

Conclusions: Readmissions following outpatient LC are infrequent and depend on the preoperative indication. They occur for diverse reasons, usually within the first week. Associated factors are patient- and disease-related and are not all preventable or modifiable. In selected patients, increased IOC use may decrease readmissions from retained stones.

56.09 Using Ultrasound Findings to Predict a Complicated Cholecystectomy

K. Venagtesan1, J. Santos1, T. S. Brahmbhatt1, S. Sanchez1, C. Narsule1, B. Sarkar1, C. LeBedis1, A. Gupta1, G. Kasotakis1  1Boston University,School Of Medicine,Boston, MA, USA

Introduction: Laparoscopic cholecystectomy (LC) is one of the most commonly performed general surgical procedures. Despite its frequency, a number of cases are complicated by intraoperative difficulty. We have previously identified preoperative clinical features that help predict a complicated LC (CLC). With this project, we aim to identify radiographic features that may also help predict these CLCs.

Methods: A total of 750 consecutive cholecystectomies that took place over a 40-month period at a tertiary academic urban institution were reviewed. Outcomes of interest included Operative Time (OT) and a CLC (conversion to open, partial cholecystectomy, need for surgical drainage). Patient demographics, clinical and radiographic features, surgeon characteristics and operative outcomes were analyzed. Regression models were fitted for our outcomes of interest. Statistical significance was declared at p<0.05. 

Results: Ninety-eight percent of our sample underwent a preoperative ultrasound (US), and 20.7% computed tomography. The average age was 44.6 years (IQR 42, 57), the mean Charlson comorbidity index was 1.71±2.1, and males comprised 27.5% of our sample. OT averaged 72 min (IQR 63, 82), and the incidence of CLC was 15.1%. Hospital LOS was 3±3.6 days. Gallbladder wall thickness on US was associated with CLC [OR 1.26 (95%CI 1.10-1.45), p=0.001], as was the presence of pericholecystic fluid [OR 2.00 (95%CI 1.23-3.26), p=0.005]. US gallbladder wall thickness [OR 1.47 (95%CI 0.00-2.94), p=0.05] and pericholecystic fluid [OR 6.40 (1.90-10.89), p=0.005] were also associated with longer OT. The presence of sludge, gallbladder distention, and number of stones were not associated with either outcome. The multivariable sonographic prediction of CLC is depicted in Figure 1.

Conclusion: Knowledge of the above factors may alert surgeons to the possibility of a challenging cholecystectomy preoperatively.

 

56.08 Variable Management Preferences in the Treatment of Lower Extremity Prosthetic Graft Infections

N. Zamani1, S. E. Sharath1, P. Kougias1  1Baylor College of Medicine / Michael E. DeBakey VA Medical Center,Division Of Vascular Surgery And Endovascular Therapy, Michael E. DeBakey Department Of Surgery,Houston, TX, USA

Introduction: Lower extremity prosthetic graft infections continue to be serious postoperative complications. Though complete graft excision with extra-anatomic bypass has traditionally been required, graft preservation techniques have been proposed for select patients in order to avoid the physiologic demand of such a reconstruction. We aimed to assess the attitudes of practicing surgeons regarding their preferred management strategy and their perceptions about which operative technique results in the most favorable long-term outcomes.

Methods:  A voluntary, anonymous, cross-sectional survey was administered to actively practicing members of the Society for Clinical Vascular Surgery (SCVS) in the United States. Surgeons were asked to: (1) rank the factors that influence their management strategy, (2) choose between graft excision or preservation in a standard clinical scenario, and (3) identify the most effective strategy for promoting long-term limb preservation.

Results: Ninety (19%) licensed surgeons participated in the survey. The three factors most influential in determining the management of lower extremity prosthetic graft infections were the presence of sepsis, involvement of an anastomosis, and the presence of Pseudomonas. Conversely, the three least influential factors were operating room availability, projected length of stay, and prosthetic graft material (Figure 1). In a stable, non-septic patient, 67% (n = 60) of respondents would most frequently excise the graft. A form of preservation, however, was the preferred management strategy in 31% (28) of instances, with 20% (18) of all surgeons using antibiotic beads as their preferred method of graft preservation. In assessing the operative strategy associated with the greatest long-term limb preservation rates, 52% (47) of surgeons believed that excision provides the best limb outcomes, while 29% (26) identified preservation as the preferred overall strategy. The remainder (19%) felt that there was probably no difference in the outcomes of these two approaches. Interestingly, of those who prefer to excise the graft, 15% (9/60) actually believed that preservation with antibiotic beads may be the more beneficial therapeutic option.

Conclusion: Substantial discrepancy exists among providers regarding their personal management of lower extremity graft infections and their perception of which operative strategy is ultimately associated with higher rates of limb salvage. Given the range of personal, clinical, and institutional factors that influence a surgeon’s preferred operative approach, a well-designed study is required to definitively address this issue and inform clinical practice.