64.06 Epidural-related events are associated with ASA class, but not ketamine infusion following pancreatectomy

V. Ly1, J. Sharib1, L. Chen2, K. Kirkwood1  1University Of California – San Francisco, Surgical Oncology, San Francisco, CA, USA 2University Of California – San Francisco, Anesthesia, San Francisco, CA, USA

Introduction:

Epidural analgesia following pancreatectomy has become widely adopted; however, high epidural infusion rates are often associated with early hypotensive events that require rate reduction and fluid resuscitation. It is unclear which patients are most at risk for such events. Continuous subanesthetic ketamine infusion reduces opioid consumption after major abdominal surgery, but its effects when added to epidural analgesia have not been well studied in patients undergoing pancreatectomy. This study evaluates the safety of, and postoperative analgesic requirements associated with, continuous ketamine infusion as an adjunct to epidural analgesia following pancreatectomy.

Methods:

A retrospective analysis was conducted of 234 patients undergoing pancreaticoduodenectomy (n=165) or distal pancreatectomy (n=69) at UCSF Medical Center between January 2014 and January 2017. Patient demographics, including history of prior opiate use, were collected along with perioperative fentanyl-ropivacaine epidural and continuous intravenous ketamine rates. Oral morphine equivalents (OME) and visual analogue scale (VAS) pain scores were recorded on postoperative days 0-4. To assess safety, epidural rate decreases due to hypotension within the first 24 postoperative hours and ketamine-related adverse events were recorded.

Results:

Epidural (n=197) and other opiate analgesia (n=234) were administered perioperatively per surgeon preference and institutional standards. Continuous ketamine infusion was given intraoperatively, postoperatively, or both in 71 patients, with a trend toward preferential use in patients with prior opiate exposure. Ketamine infusion was not associated with hypotensive events, daily maximum epidural rates, or significant epidural rate changes on postoperative days 0-4. OMEs and VAS scores were similar between groups, regardless of prior opiate use. Patients with American Society of Anesthesiologists (ASA) class 3 or 4 (n=111) were more likely to require epidural rate decreases (OR 2.37, 95% CI 1.3-4.2, p=0.003) and associated interventions in the first 24 hours postoperatively. Three patients reported ketamine-related adverse events such as unpleasant dreams and hallucinations.
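As an aside for readers reproducing this kind of estimate, a minimal sketch of an odds ratio with a 95% confidence interval from a 2×2 table follows, using statsmodels. The cell counts are hypothetical (only the group size n=111 comes from the abstract), so the output will not match the reported OR exactly.

```python
import numpy as np
from statsmodels.stats.contingency_tables import Table2x2

# Hypothetical 2x2 counts: rows are ASA 3-4 (n=111, per the abstract)
# and ASA 1-2; columns are epidural rate decrease vs none. Only the
# row total n=111 comes from the abstract, so the OR is illustrative.
table = np.array([[45, 66],
                  [28, 95]])
t22 = Table2x2(table)
lo, hi = t22.oddsratio_confint()
print(f"OR = {t22.oddsratio:.2f} (95% CI {lo:.2f}-{hi:.2f}), "
      f"p = {t22.oddsratio_pvalue():.4f}")
```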

Conclusion:

Subanesthetic ketamine infusion as an adjunct to epidural analgesia is safe in pancreatic surgery patients. Patients with ASA class 3 or 4 experienced more hypotensive events requiring epidural rate decreases during the first postoperative day following pancreatectomy. Further study is required to assess whether ketamine infusion allows use of lower epidural rates, reduces postoperative opioid consumption, or improves pain scores in the early postoperative period.

64.04 Comparing Frailty Scales to Guide Creation of a Multidimensional Assessment for Surgical Patients

J. McDonnell1, P. R. Varley1, D. E. Hall1,2, J. W. Marsh1, D. A. Geller1, A. Tsung1  1University Of Pittsburgh, General Surgery, Pittsburgh, PA, USA 2VA Pittsburgh Healthcare System, General Surgery, Pittsburgh, PA, USA

Introduction:  Frailty defines a phenotype of functional decline that places patients at risk for death and disability, and the American College of Surgeons and American Geriatrics Society have issued joint guidelines recommending implementation of a frailty assessment for aging patients. Though various instruments for measuring patient frailty have been described in the literature, it is unclear which is most appropriate for routine screening of surgical patients. The goal of this project was to compare assessments from three separate frailty instruments in a cohort of surgical patients in order to inform the development of a robust, clinically feasible frailty assessment for surgical patients.

Methods:  Demographic data and medical history for all new patients evaluated at the Liver Cancer Center of UPMC were collected by patient-completed questionnaire and verified by a research associate (RA). Patients were then assessed for functional measures of frailty, including extended timed up-and-go (eTUG), walking speed, grip strength, and Mini-Cog. Information from this assessment was then used to calculate scores for the Fried Frailty Phenotype (FF), Edmonton Frail Scale (EFS), and Risk Analysis Index (RAI). Frailty was defined as FF ≥ 3, EFS ≥ 8, or RAI ≥ 21.

Results: As part of a pilot project, 127 patients were evaluated; 64 (52.0%) were male. The cohort had a mean age of 62.9±15.0 years and a mean BMI of 29.4±6.4. Median scores were 10 [IQR 7-17] for the RAI, 3 [IQR 2-5] for the EFS, and 1 [IQR 0-2] for the FF. Overall, 36 patients (28.4%) were frail by at least one of the three measures. 12 (9.5%) of patients were rated frail by the EFS, 21 (16.5%) by the FF, and 23 (18.1%) by the RAI. 20 patients (15.8%) were classified frail by only one measure, 12 (9.5%) by two measures, and only 4 (2.2%) by all three scales. Inter-rater agreement between the three scales was fair (κ = 0.33, p<0.001). Figure 1 demonstrates the concordance among the three instruments and shows that choosing only one of the EFS, RAI, or FF would have failed to recognize 16 (44.4%), 10 (27.8%), and 12 (33.3%) of the potentially frail patients, respectively.
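A minimal sketch of how the threshold-based frailty calls and an agreement statistic might be computed follows; the abstract does not specify which kappa variant was used, so Fleiss' kappa (treating the three instruments as raters) is shown, with hypothetical scores.

```python
import pandas as pd
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical instrument scores for three patients; the thresholds are
# the ones reported in the Methods (FF >= 3, EFS >= 8, RAI >= 21).
scores = pd.DataFrame({"FF": [1, 4, 0], "EFS": [2, 9, 3], "RAI": [10, 25, 22]})
frail = pd.DataFrame({
    "FF": (scores["FF"] >= 3).astype(int),
    "EFS": (scores["EFS"] >= 8).astype(int),
    "RAI": (scores["RAI"] >= 21).astype(int),
})

# Treat the three instruments as three "raters" each making a binary
# frailty call on every patient, then compute Fleiss' kappa.
table, _ = aggregate_raters(frail.to_numpy())
print(f"kappa = {fleiss_kappa(table):.2f}")
```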

Conclusion: The results of this pilot project suggest that it is feasible to implement a routine frailty screening process in a busy surgical clinic. Utilizing only a single frailty instrument to evaluate patients may lead to an underestimate of frailty in surgical populations. Future work should focus on creation of a frailty screening process developed specifically for surgical patients and linked to surgical outcomes.

 

64.02 Isolated Pancreatic Tail Remnants After Transgastric Necrosectomy Can Be Observed

C. W. Jensen1, S. Friedland2, P. J. Worth1, G. A. Poultsides1, J. A. Norton1, W. G. Park2, B. C. Visser1, M. M. Dua1  1Stanford University, Surgery, Palo Alto, CA, USA 2Stanford University, Gastroenterology, Palo Alto, CA, USA

Introduction:  Severe necrotizing pancreatitis may result in mid-body necrosis and ductal disruption. When a significant portion of the tail remains viable but cannot drain into the proximal pancreas, the “unstable anatomy” that results is often deemed an indication for distal pancreatectomy. The transgastric approach to pancreatic drainage/debridement has been shown to be effective for retrogastric walled-off collections. A subset of these cases is performed in patients with an isolated viable tail. The purpose of this study was to characterize outcomes among patients with an isolated pancreatic tail remnant who underwent transgastric drainage or necrosectomy (endoscopic or surgical) and to determine how often they required subsequent operative management.

Methods:  Patients with necrotizing pancreatitis and retrogastric walled-off collections treated by either surgical transgastric necrosectomy or endoscopic cystgastrostomy ± necrosectomy between 2009 and 2017 were identified by retrospective chart review. Clinical and operative details were obtained through the medical record. All available pre- and post-procedure imaging was reviewed for evidence of isolated distal pancreatic tail remnants.

Results: A total of 75 patients were included in this study (41 surgical and 34 endoscopic). All of the patients in the surgical group underwent laparoscopic transgastric necrosectomy; the endoscopic group consisted of 27 patients who underwent pseudocyst drainage and 7 who underwent necrosectomy. Median follow-up for the entire cohort was 13 months, and there was one death. A disconnected pancreatic tail was identified in 22 (29%) patients (13 laparoscopic and 9 endoscopic). After the surgical or endoscopic creation of an internal fistula (“cystgastrostomy”), there were no external fistulas despite the viable tail. Of the 22 patients, 5 (23%) developed symptoms at a median of 23 months from the index procedure (recurrent episodic pancreatitis in 3, intractable pain in 2). Two patients (both initially in the endoscopic group) ultimately required distal pancreatectomy and splenectomy at 6 and 24 months after the index procedure.

Conclusion: Patients with a walled-off retrogastric collection and an isolated viable tail are effectively managed by a transgastric approach. Despite this seemingly “unstable anatomy,” the creation of an internal fistula via surgical or endoscopic “cystgastrostomy” avoids external fistulas/drains and the short-term need (soon after the initial pancreatitis) for surgical distal pancreatectomy. A very small subset requires intervention for late symptoms. In our series, the patients who ultimately required distal pancreatectomy had initially undergone an endoscopic rather than a surgical approach; however, whether the two approaches differ in the outcome of the isolated pancreatic remnant is difficult to conclude due to the small sample size.

 

64.03 National Trends and Predictors of Adequate Nodal Sampling for Resectable Gallbladder Adenocarcinoma

A. J. Lee1, Y. Chiang1, C. Conrad1, Y. Chun-Segraves1, J. Lee1, T. Aloia1, J. Vauthey1, C. Tzeng1  1University Of Texas MD Anderson Cancer Center, Surgical Oncology, Houston, TX, USA

Introduction: For gallbladder cancer (GBC), the new American Joint Committee on Cancer 8th edition (AJCC8) staging system classifies lymph node (LN) stage by the number of metastatic LNs, rather than their anatomic location as in AJCC6 and AJCC7.  Additionally, AJCC8 now recommends resection of ≥6 LNs for adequate nodal staging.  In the context of this new staging system and recommendation for GBC surgery, we evaluated current national trends in LN staging and sought to identify factors associated with any and/or adequate LN staging according to the new guideline.

Methods: Utilizing the National Cancer Data Base (NCDB), we identified all gallbladder adenocarcinoma patients treated with surgical resection who had complete tumor staging information between 2004 and 2014.  We excluded patients with pathologic T-stage T1a or lower, as nodal staging is not indicated in these patients.  Nodal staging and nodal positivity rates were compared over the study period.  Univariate and multivariate logistic regression models were fit to identify factors associated with any and/or adequate nodal staging.

Results: We identified 11,525 patients with T-stage ≥T1b, for whom lymphadenectomy is recommended.  Only 49.6% (n=5,719) of patients had any LN removed for staging.  On multivariate analysis, treatment at academic centers (OR=2.33, p<0.001), more recent year of diagnosis (OR=2.29, p<0.001), clinical node-positive status (OR=3.46, p<0.001), pathologic T2 stage (OR=1.25, p<0.001), and radical surgical resection (OR=4.85, p<0.001) were associated with higher likelihood of any nodal staging.  Age ≥80 (OR=0.57, p<0.001) and higher comorbidity index (OR=0.70, p<0.001) were associated with lower likelihood of any nodal staging.  However, of the 5,719 patients who underwent any nodal staging, only 21.8% (n=1,244) met the AJCC8 recommendation for adequate LN staging.  On multivariate analysis, female sex (OR=1.18, p=0.02), treatment at academic centers (OR=1.52, p<0.001), radical surgical resection (OR=2.53, p<0.001), and pathologic T4 stage (OR=2.14, p<0.001) were associated with having ≥6 LNs resected concomitantly with the oncologic operation.  Patients over 80 years old (OR=0.60, p<0.001) and those in the South region (OR=0.79, p=0.002) were less likely to have adequate LN sampling according to the new recommendation.
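The multivariable model described above might be fit as follows; this is an illustrative sketch with a simulated data frame standing in for NCDB fields (column names are hypothetical), not the authors' code, so the fitted ORs will not match those reported.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the NCDB analytic file
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "adequate_ln": rng.integers(0, 2, n),      # >= 6 LNs resected
    "female": rng.integers(0, 2, n),
    "academic_center": rng.integers(0, 2, n),
    "radical_resection": rng.integers(0, 2, n),
    "age_ge_80": rng.integers(0, 2, n),
})

fit = smf.logit(
    "adequate_ln ~ female + academic_center + radical_resection + age_ge_80",
    data=df,
).fit(disp=0)

# Report odds ratios with 95% confidence intervals
ors = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
ors.columns = ["OR", "2.5%", "97.5%"]
print(ors.round(2))
```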

Conclusion: The national overall GBC LN staging rate of 49.6% falls short of the new AJCC8 recommendations.  Furthermore, the finding that only 21.8% of staged patients met the 6-LN threshold highlights the gap between the new AJCC8 recommendations and current practice.  We have identified demographic and clinicopathologic factors associated with any and/or adequate LN staging, which can be incorporated into future targeted quality improvement initiatives.

63.09 Outcomes in VATS Lobectomies: Challenging Preconceived Notions

D. J. Gross1, P. L. Rosen1, V. Roudnitsky4, M. Muthusamy3, G. Sugiyama2, P. J. Chung3  1SUNY Downstate, Department Of Surgery, Brooklyn, NY, USA 2Hofstra Northwell School Of Medicine, Department Of Surgery, Hempstead, NY, USA 3Coney Island Hospital, Department Of Surgery, Brooklyn, NY, USA 4Kings County Hospital Center, Department Of Surgery, Division Of Acute Care Surgery And Trauma, Brooklyn, NY, USA

Introduction:   The number of thoracic resections performed for lung cancer is expected to rise due to increased screening in high-risk populations. However, the majority of thoracic surgical procedures in the US are performed by general surgeons (GS). Video-assisted thoracoscopic surgery (VATS) has become the preferred approach to lung resection when feasible. Our goal was to examine short-term outcomes of VATS lobectomy for malignancy performed by GS versus cardiothoracic (CT) surgeons using the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) database.

Methods:  Using ACS NSQIP 2010-2015, we identified patients with an ICD-9 diagnosis of lung cancer (162) who underwent VATS lobectomy (CPT 32663). We included only adults (≥18 years) and elective cases, and excluded cases with preoperative sepsis, contaminated/dirty wound class, or missing data. Risk variables of interest included demographic, comorbidity, and perioperative variables. Outcomes of interest included 30-day postoperative mortality, 30-day postoperative morbidity, and length of stay (LOS). Univariate analysis comparing cases performed by GS vs CT was performed. We then performed propensity score analysis using a 3:1 ratio of CT:GS cases, with categorical outcome variables assessed using conditional logistic regression.
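A simplified sketch of the 3:1 propensity-score matching step follows, assuming scikit-learn: greedy nearest-neighbor matching on the logit of the propensity score, without a caliper, and omitting the conditional logistic regression that would follow. All data in the demo are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_3_to_1(X, treated):
    """Greedy 3:1 nearest-neighbor match on the logit of the propensity
    score; `treated` marks the smaller group (here, GS cases). No
    caliper is applied and matching is without replacement."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    logit = np.log(ps / (1 - ps)).reshape(-1, 1)
    gs_idx = np.flatnonzero(treated == 1)
    ct_idx = np.flatnonzero(treated == 0)
    nn = NearestNeighbors(n_neighbors=len(ct_idx)).fit(logit[ct_idx])
    used, matched_sets = set(), []
    for i in gs_idx:
        _, order = nn.kneighbors(logit[[i]])
        picks = [ct_idx[j] for j in order[0] if ct_idx[j] not in used][:3]
        used.update(picks)
        matched_sets.append((i, picks))
    return matched_sets

# Toy demonstration with simulated covariates
rng = np.random.default_rng(0)
X_demo = rng.normal(size=(200, 4))
treated_demo = (rng.uniform(size=200) < 0.15).astype(int)
print(match_3_to_1(X_demo, treated_demo)[:2])
```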

Results: A total of 4,308 cases met criteria: 649 (15.1%) by GS and 3,659 (84.9%) by CT. Mean age in the GS group was 68.6 vs 67.8 years in the CT group (p=0.034). There was a greater proportion of African American patients in the GS group compared to the CT group (8.0% vs 3.4%, p<0.0001), but a higher rate of dyspnea with moderate exertion in the CT group compared to the GS group (19.8% vs 12.9%, p<0.0001). Operative time was shorter in the GS group vs the CT group (179 vs 196 minutes, p<0.0001).  After propensity score matching, the two groups were well balanced on all risk variables. LOS was longer in the GS group vs the matched CT group (mean 6.2 vs 5.3 days, p=0.0001). Conditional logistic regression showed that GS-treated patients had no greater risk of 30-day mortality (p=0.806), but had greater risk of postoperative sepsis (OR 2.20, 95% CI [1.01, 4.79], p=0.047).

Conclusion: In this large observational study using a prospectively collected clinical database, we found that, while patients treated by general surgeons had longer LOS, there were no differences in short-term mortality and morbidity compared to cardiothoracic-trained surgeons, with the exception of an increased risk of postoperative sepsis. Further prospective studies are warranted to investigate oncologic and long-term outcomes.

63.10 Acid Suppression to Prevent Gastrointestinal Bleeding in Patients with Ventricular Assist Devices

A. W. Hickman1, N. W. Lonardo1, M. C. Mone1, A. P. Presson1, C. Zhang1, R. G. Barton1, S. H. McKellar1, C. H. Selzman1  1University Of Utah, Salt Lake City, UT, USA

Introduction:  The high incidence of gastrointestinal bleeding (GIB) in patients with ventricular assist devices (VADs) is well known, but there is limited evidence to support the use of proton pump inhibitors (PPIs) or histamine receptor antagonists (H2RAs) for preventing GIB in patients whose cardiac disease requires treatment with VAD implantation.

Methods: The institutional Surgical and Cardiovascular ICU and VAD databases within an academic cardiac mechanical support and transplant center were queried for patients who underwent VAD implantation between 2010 and 2014. The devices included HeartWare, HeartMate II, Jarvik 2000, and SynCardia TAH devices and could be used for left-, right-, or both-sided failure. An observational cohort study was conducted on the final population to identify which prophylactic acid-suppressing regimen (PPI, H2RA, or no acid-suppressing therapy) was associated with the fewest GIB events within 30 days after VAD implantation. Secondary outcomes included the timing, etiology, and location of all GIB events. Univariate and multivariate regression analyses were performed using clinically important covariates. A combined variable for pre-existing GIB risk was created based on history of GIB and previous use of acid-suppressive medication. Based on the number of GIB events, the acid-suppressing treatment and three other covariates were used in the final model.

Results: There were a total of 138 patients included for analysis, of whom 19 (13.8%) had a GIB event within the 30-day period. Both H2RA and PPI use were associated with a reduction in GIB events compared to no acid-suppressive therapy. In the logistic regression analysis controlling for ICU admission APACHE II score, preoperative hematocrit, and pre-existing GIB risk, the PPI cohort had a statistically significant reduction in GIB [OR 0.18 (0.04-0.79), p=0.026] (see table).

Conclusion: This review of patients with newly implanted VADs revealed that the use of acid-suppressing therapy during the postoperative ICU period resulted in fewer GIB events. When controlling for severity of illness and known bleeding risks, patients treated with a PPI had a significantly lower risk of GIB. Cardiothoracic surgeons and ICU clinicians should consider this treatment option in order to reduce complications in this high-risk subset of patients.

 

63.07 Outcomes and Readmissions after Orthotopic Heart Transplant vs Left Ventricular Assist Devices

E. Aguayo1, L. Mukdad1, A. Mantha1, A. Iyengar1, R. Hernandez1, P. Benharash1  1David Geffen School Of Medicine, University Of California At Los Angeles, Cardiac Surgery, Los Angeles, CA, USA

Introduction:  Left ventricular assist devices (LVADs) have significantly expanded the range of options for stabilizing and treating end-stage heart failure.  As LVAD technology continues to improve, the morbidity and mortality of LVAD patients are expected to approach those of orthotopic heart transplantation (OHT). While others have examined outcomes of individual heart replacement modalities in trials, large-scale comparisons between OHT and LVADs have not been performed thus far. The present study was performed to compare perioperative outcomes and 30-day readmissions between LVAD implantation and OHT using a national cohort.

Methods:  Patients who underwent either OHT or LVAD implantation from 2010 to 2014 were selected from the National Readmission Database (NRD). The NRD is an all-payer inpatient database maintained by the Healthcare Cost and Utilization Project that estimates more than 35 million annual U.S. hospitalizations. Mortality, readmission, and GDP-adjusted cost were evaluated using hierarchical linear models adjusting for socioeconomic factors, demographics, and comorbidities.
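As an illustration, a deliberately simplified single-level Poisson regression shows how an incidence rate ratio (IRR) like those reported below can be obtained; the authors used hierarchical models, and this toy data frame is hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated stand-in data; 1 = LVAD, 0 = OHT. The fitted IRR is
# illustrative only.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "los_days": rng.poisson(35, n),
    "lvad": rng.integers(0, 2, n),
    "age": rng.integers(30, 75, n),
    "elixhauser": rng.integers(0, 12, n),
})
fit = smf.glm("los_days ~ lvad + age + elixhauser", data=df,
              family=sm.families.Poisson()).fit()
print(f"IRR for LVAD vs OHT = {np.exp(fit.params['lvad']):.2f}")
```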

Results: Of the 13,660 patients identified during the study period, 5,806 (43%) received OHT while 7,854 (57%) received LVADs. LVAD patients were on average older (56 vs. 52, P<0.001) and had less severe comorbidities based on the Elixhauser Index (5.7 vs. 6.6, P<0.001). LVAD implantation was associated with a shorter adjusted length of stay (37.1 vs 36.0 days, IRR 0.95, P<0.001), higher adjusted in-hospital mortality (12.3% vs. 7.0%, OR 2.01, P<0.001), higher adjusted costs ($220,052 vs. $184,625, P<0.001), and longer readmission length of stay (10.0 vs. 6.9 days, IRR 1.28, P<0.001). All-cause readmission at 30 days (27.5% LVAD vs 24.4% OHT, OR 1.02, P=0.81) and cost of readmission ($28,653 LVAD vs. $22,105 OHT, P=0.73) were not significantly different between modalities.

Conclusion: In this nationwide analysis of patients who underwent cardiac replacement therapy from 2010 to 2014, patients receiving LVADs had similar rates and costs of 30-day readmission compared to those undergoing OHT. These results further support recent studies indicating improved outcomes and survival with LVAD implantation. However, the initial cost of implantation and in-hospital mortality remain significantly greater among LVAD recipients after adjusting for demographics, comorbidities, and hospital variation. Given the projected increase in LVAD utilization and the limited transplant donor pool, further emphasis on LVAD cost containment and comparative effectiveness is essential to the viability of this therapy in the era of value-based healthcare delivery.
 

63.08 In-hospital Outcomes and Resource Use for Robotic Mitral Valve Repair: Beyond the Learning Curve

Y. Seo1, Y. Sanaiha1, K. Bailey1, E. Aguayo1, A. Mantha3, V. Dobaria2, A. Chao1, T. Fan1, N. Satou1, P. Benharash1  1David Geffen School Of Medicine, University Of California At Los Angeles, Los Angeles, CA, USA 2University Of California – Los Angeles, Los Angeles, CA, USA 3University Of California – Irvine, Orange, CA, USA

Introduction:
The additional cost burden associated with robotic operations has been cited as a major barrier to the wide dissemination of this technology. Robotic valve operations have demonstrated similar safety and efficacy but higher initial costs when compared to open surgery. Previous studies have demonstrated that the initial learning curve of robotic procedures is followed by a plateau phase in which operative time and complication rates stabilize. The objective of the present study was to evaluate our institutional experience with robotic mitral valve repair (rMVR) beyond the learning curve and to compare clinical and financial outcomes with the open approach.

Methods:
The prospectively maintained institutional Society of Thoracic Surgeons database was used to identify all adult patients undergoing robotic and open isolated mitral valve repair from January 2008 to December 2016. Subjects with concomitant or previous cardiac surgery were excluded. Multivariate regressions were performed to produce risk-adjusted operative times, complications, length of hospitalization, and costs. Financial data were obtained from the hospital database and adjusted for inflation. Categorical variables were analyzed by Fisher's exact test and continuous variables by the independent-samples t-test for unequal sample sizes. An alpha of <0.05 was considered statistically significant.
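The two named tests might be run as follows with scipy; the counts and values below are hypothetical stand-ins, not the study's data.

```python
from scipy import stats

# Hypothetical 2x2 counts (e.g., any complication by approach) and toy
# length-of-stay values; neither reproduces the study's data.
odds_ratio, p_fisher = stats.fisher_exact([[78, 97], [119, 140]])

# Welch's t-test handles the unequal group sizes and variances
open_los = [9.5, 10.2, 8.8, 11.0, 12.3]
robotic_los = [6.1, 6.8, 7.0, 5.9]
t_stat, p_welch = stats.ttest_ind(open_los, robotic_los, equal_var=False)
print(p_fisher, p_welch)
```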

Results:
During the study period, 175 robotic and 259 open MVR cases were performed. Compared to open MVR patients, rMVR patients were less likely to be hypertensive (51 vs 41%, p=0.002) or to have chronic lung disease (13 vs 5%, p=0.005), and had higher hematocrit values (36 vs 39%, p<0.001) and ejection fractions (58 vs 60%, p=0.023). With increasing robotic experience, operative times decreased significantly as shown in Figure 1, but rates of complications and hospital and ICU lengths of stay did not change. Compared to open surgery, rMVR was associated with 43% lower cost (p=0.001), fewer cases needing postoperative blood products (27 vs 15%, OR=0.61, p=0.004), and a lower complication rate (46 vs 30%, OR=0.44, p=0.001). rMVR was also associated with significantly shorter ICU time (84 vs 144 hours, p<0.001) and length of stay (6.5 vs 9.9 days, p<0.001).

Conclusion:
In this longitudinal single-institution experience, increasing rMVR volume beyond the initial learning curve was associated with decreasing operative times. Our findings demonstrate comparable short-term outcomes between robotic and open MVR. Interestingly, the robotic approach was more cost-effective, likely due to shorter hospital and ICU lengths of stay. With increasing experience, robotic MVR can surpass the open technique in cost-effectiveness while providing equivalent short-term outcomes.
 

63.04 Incidence, Costs and Length of Stay for Heparin Induced Thrombocytopenia in Cardiac Surgery Patients

E. Aguayo1, K. L. Bailey1, Y. Seo1, A. Mantha2, V. Dobaria1, Y. Sanaiha1, P. Benharash1  1University Of California At Los Angeles, Department Of Surgery, Division Of Cardiac Surgery, Los Angeles, CA, USA 2University Of California – Irvine, School Of Medicine, Orange, CA, USA

Introduction:
Heparin is routinely used in many cardiovascular procedures to prevent thrombosis. Heparin-induced thrombocytopenia (HIT), an antibody-mediated process, occurs in a small subset of patients exposed to heparin. While hemorrhage is thought to be rare in HIT, the incidence of stroke, pulmonary embolism, and deep vein thrombosis increases dramatically. While some have suggested a recent increase in the incidence of HIT, data on the impact of HIT on costs and length of stay (LOS) after cardiac surgery are generally lacking. The present study aimed to assess national trends in the incidence of and resource utilization associated with HIT in cardiac surgical patients.

Methods:
A retrospective cohort study was performed: adult cardiac surgery patients (≥18 years) with a diagnosis of HIT were identified using the 2009-2014 National Inpatient Sample (NIS) database and International Classification of Diseases, Ninth Revision (ICD-9) codes. In-hospital mortality and GDP-adjusted cost were evaluated using hierarchical linear models adjusting for socioeconomic and demographic factors and for comorbidity measured by the Elixhauser Index.

Results:
Of the 3,985,878 adult cardiac surgery patients, 16,610 (0.42%) had HIT as a primary diagnosis, with no trend over the study period. Compared to those without the diagnosis, HIT patients were on average older (67.1 vs 65.1 years, p<0.001), more often insured by Medicare (62% vs 52%, p<0.001), and had a higher Elixhauser comorbidity index (4.48 vs. 3.75, p<0.001). HIT was associated with significantly longer index LOS (19.1 vs 10.6 days, p<0.001) and higher hospitalization costs ($91,977 vs $52,090, p<0.001). After adjustment for baseline differences, HIT was independently associated with increased risk of death (OR 2.72, 95% CI: 2.41-3.06), stroke (OR 2.12, 95% CI: 1.72-2.62), deep venous thrombosis (OR 8.63, 95% CI: 7.60-9.80), and pulmonary embolism (OR 5.43, 95% CI: 4.55-6.48).

Conclusions:

Based on this national analysis of adult cardiac surgical patients, HIT disproportionately affected those with government-sponsored health insurance. The presence of HIT was associated with significantly longer LOS, higher costs, and greater comorbidity burden. The incidence of serious complications such as stroke, DVT, and PE more than doubled in HIT patients. These findings have significant implications in the era of value-based healthcare delivery. In addition to reducing unnecessary exposure to heparin, proper diagnosis and treatment are essential for favorable outcomes in these patients.

 

63.05 Immune Cell Alterations after Cardiac Surgery Associated with Increased Risk of Complications and Mortality

D. J. Picone1, N. R. Sodha1, T. C. Geraci1, J. T. Machan1, F. W. Sellke1, W. G. Cioffi1, S. F. Monaghan1  1Brown University School Of Medicine, Surgery, Providence, RI, USA

Introduction: Systemic inflammatory response syndrome (SIRS) frequently occurs following cardiac surgery, a controlled traumatic event. Typically, emphasis is placed on the white blood cell count; however, immune cell responses following trauma have been associated with poor outcomes.  We hypothesize that lymphocyte loss and lack of recovery after cardiac surgery will predict poor outcomes.

Methods: This is a retrospective review of all adult post-cardiac surgery patients at a single institution from Oct 2008 to Oct 2015. Patients were included if they had more than two complete blood counts (CBCs) drawn in the first 7 days postoperatively. Demographic data, complications, hospital and ICU lengths of stay, operative data, and mortality were obtained from the Society of Thoracic Surgeons (STS) database. Laboratory data were obtained from the medical record. Leukocyte, neutrophil, and lymphocyte counts were retained. Patients were grouped by response pattern for each component (leukocyte, neutrophil, lymphocyte): elevation or depression followed by normalization versus failure to normalize. Kaplan-Meier curves and odds ratios were used to analyze associations with 30-day mortality, development of pneumonia, renal failure, postoperative sepsis, and all complications.
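A sketch of how the grouping rule might be coded for the leukocyte component follows; the 11.0 upper limit of normal and the definition of "normalization" (last value back in range) are assumptions, since the abstract does not specify them. The lymphocyte analogue would flag depression below a lower limit instead.

```python
def classify_trajectory(counts, upper=11.0):
    """Group one patient's first-week leukocyte counts (10^3/uL) into
    the three response patterns. The 11.0 upper limit of normal and the
    rule that 'normalized' means the last value is back in range are
    assumptions not specified in the abstract."""
    elevated = [c > upper for c in counts]
    if not any(elevated):
        return "never elevated"
    return "elevated, persistent" if elevated[-1] else "elevated, normalized"

print(classify_trajectory([8.2, 14.5, 12.1, 9.7]))   # elevated, normalized
print(classify_trajectory([9.0, 15.2, 16.8, 14.9]))  # elevated, persistent
```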

Results: 2,401 patients were included in the leukocyte group and 1,795 patients in both the neutrophil and lymphocyte groups. Patients who developed a leukocytosis that remained elevated through 7 days had an increased risk of mortality (8.7%) compared to both those whose counts normalized (2.9%, p<0.0001) and those who did not develop a leukocytosis (1.8%, p<0.0001). There was no difference in mortality for the neutrophil or lymphocyte groups. Patients who did not develop postoperative lymphopenia had decreased risk compared both to those with persistent lymphopenia and to those whose lymphopenia normalized, respectively: pneumonia (OR 0.42 (CI 0.25-0.69), 0.49 (CI 0.24-0.98)); renal failure (OR 0.21 (CI 0.12-0.39), 0.36 (CI 0.15-0.8)); sepsis (OR 0.21 (CI 0.06-0.64), 0.11 (CI 0.03-0.36)); all complications (OR 0.42 (CI 0.30-0.61), 0.37 (CI 0.23-0.58)). Leukocytosis that failed to normalize was associated with increased risk of pneumonia (OR 2.5 (CI 1.2-3.4)) and all complications (OR 3.46 (CI 2.5-4.8)). Neutrophilia was not associated with complication risk.

Conclusion: Failure to normalize leukocytosis after cardiac surgery is associated with a higher risk of mortality. Development of lymphopenia in the postoperative period is associated with increased risk of postoperative complications. Use of these routinely ordered labs may help identify patients who are at risk for complications and who should not be “fast tracked” for discharge. Future work will compare the predictive value of these laboratory tests against standard predictors from the STS database.

 

63.06 A Nationwide Study of Treatment Modalities for Thoracic Aortic Injury

Y. Seo1, E. Aguayo1, K. Bailey1, Y. Sanaiha1, V. Dobaria2, P. Benharash1  1David Geffen School Of Medicine, University Of California At Los Angeles, Los Angeles, CA, USA 2University Of California – Los Angeles, Los Angeles, CA, USA

Introduction:
Thoracic aortic injuries (TAI) have traditionally been associated with high morbidity and mortality. Since its FDA approval in 2005, thoracic endovascular aortic repair (TEVAR) has emerged as a suitable alternative to open repair. However, the use of TEVAR and its impact on other treatment modalities at a national level remain ill-defined. This study analyzed national trends in hospital characteristics, patient characteristics, and resource utilization in the treatment of TAI.

Methods:

Patients admitted with TAI between 2005 and 2014 were identified in the National Inpatient Sample (NIS) and classified as undergoing TEVAR, open repair, or non-operative management. The primary outcome was in-hospital mortality; secondary outcomes included complications, length of stay, and GDP-adjusted costs. Multivariate logistic regression accounting for comorbidities, concomitant injuries, and other interventions was used to determine predictors of mortality and of receiving a particular treatment.

Results:
Of the 11,257 patients admitted for TAI during the study period, 33% received TEVAR, 2% open surgery, and 12% non-operative management. Trends in the use of the various modalities are shown in Figure 1, with TEVAR showing the largest growth (p<0.001). Compared to open surgery, TEVAR patients had higher rates of concomitant brain injury (17 vs 26%, p=0.01), pulmonary injury (21 vs 33%, p<0.001), and splenic injury (2 vs 4%, p=0.031). Patients were less likely to undergo TEVAR if they were female (OR=0.73, P=0.026), older than 85 (OR=0.29, P=0.019), had congestive heart failure (OR=0.27, P=0.014), or had coronary artery disease (OR=0.34, P=0.035). In-hospital mortality was greater for open surgery (OR=3.06, p=0.003) and non-operative management (OR=4.33, p<0.001) than for TEVAR. Open repair had higher rates of cardiac complications (10 vs 4%, p<0.001). Mortality rates for TEVAR and non-operative management did not change over the study period, but mortality for open surgery increased (p=0.04). Interestingly, the cost of admissions with TEVAR increased from $35K to $95K (p=0.004), while the cost of open surgery steadily declined (p=0.031).

Conclusion:

Our findings indicate rapid adoption of TEVAR over open surgery for management of TAI. TEVAR is associated with lower mortality and complication rates but has increased costs not otherwise explained by other patient factors. This warrants further study of the drivers of cost growth and of socioeconomic barriers to receiving optimal care.

63.03 The Additive Effect of Comorbidity and Complications on Readmission after Pulmonary Lobectomy

R. A. Jean1,2, A. S. Chiu1, J. D. Blasberg3, D. J. Boffa3, F. C. Detterbeck3, A. W. Kim4  1Yale University School Of Medicine, Department Of Surgery, New Haven, CT, USA 2Yale University School Of Medicine, National Clinician Scholars Program, New Haven, CT, USA 3Yale University School Of Medicine, Section Of Thoracic Surgery, Department Of Surgery, New Haven, CT, USA 4Keck School Of Medicine Of USC, Division Of Thoracic Surgery, Department Of Surgery, Los Angeles, CA, USA

Introduction: Hospital readmission after cardiothoracic surgery has a significant effect on healthcare delivery, particularly in the era of value-based reimbursement. Studies have shown that readmission after major surgery is significantly associated with preoperative comorbidity burden and the development of postoperative complications. We sought to investigate the additive impact of comorbidity and postoperative complications on the risk of readmission after pulmonary lobectomy, and to determine which of these factors drives this phenomenon.

Methods:  The Healthcare Cost and Utilization Project’s Nationwide Readmission Database (NRD) between 2010 and 2014 was used for this study. The NRD was queried for discharges for pulmonary lobectomy with a primary diagnosis of lung cancer. Patients surviving to discharge were followed for 90-day readmission. Readmission rates were first calculated for low-risk patients with no comorbidities and no postoperative complications. Next, rates were compared iteratively by the presence of Elixhauser comorbidities and postoperative complications. Adjusted linear regression, accounting for patient age, sex, insurance status, and income, was used to calculate the mean change in readmission rate by the number of comorbidities and postoperative complications.
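The adjusted linear regression can be read as a linear probability model, in which each coefficient times 100 is a percentage-point change in the readmission rate. A sketch with a simulated data frame (hypothetical column names) follows.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data frame; column names are hypothetical.
rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "readmit_90d": rng.integers(0, 2, n),
    "n_comorbid": rng.integers(0, 6, n),
    "n_complications": rng.integers(0, 4, n),
    "age": rng.integers(40, 90, n),
    "female": rng.integers(0, 2, n),
})
fit = smf.ols("readmit_90d ~ n_comorbid + n_complications + age + female",
              data=df).fit()
# Each coefficient x 100 is a percentage-point change in readmission rate
print((fit.params * 100).round(2))
```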

Results: A total of 106,262 pulmonary lobectomies were identified over the study period, of which 20,112 (18.9%) were readmitted within 90 days of discharge. The mean age of the cohort was 67.7 ± 0.11 years, with a mean of 2.5 Elixhauser comorbidities and a mean of 0.8 postoperative complications per patient. Of the 5,812 (5.5%) patients with no comorbidities or postoperative complications, 680 (11.7%) were readmitted. At the other extreme, of the 6,121 (5.8%) patients with 3+ comorbidities and 3+ complications, 1,877 (30.7%) were readmitted. After adjusting for age, sex, and insurance status, each additional comorbidity and any postoperative complication were associated with a 2.3% (95% CI 2.0% – 2.6%) and a 2.7% (95% CI 2.3% – 3.2%) increased probability of readmission, respectively.

Conclusion: Among patients with the lowest risk profile, the 90-day readmission rate was 11.7%. Adjusting for other factors, each additional comorbidity increased this rate by approximately 2.3%, while each postoperative complication increased it by 2.7%. These results demonstrate that even among optimized patients without postoperative complications there remains a notable risk of rehospitalization, indicating that careful patient selection and the avoidance of complications may not completely eliminate readmission risk after pulmonary lobectomy.

 

63.02 Risk Factors for and Outcomes of Conversion to Thoracotomy during Robotic-Assisted Lobectomy

S. Hernandez2, F. Velez-Cubian2, R. Gerard2, C. Moodie1, J. Garrett1, J. Fontaine1,2, E. Toloza1,2  1Moffitt Cancer Center, Thoracic Oncology, Tampa, FL, USA 2University Of South Florida Health Morsani College Of Medicine, Tampa, FL, USA

Introduction:   We aimed to identify risk factors for and outcomes of conversion to thoracotomy during robotic-assisted video-assisted thoracoscopic (R-VATS) pulmonary lobectomy.

Methods:   We retrospectively analyzed all patients (pts) who underwent R-VATS lobectomy for primary lung cancer by one surgeon between September 2010 and August 2016.  Patients were grouped into “conversion” versus “non-conversion” to open lobectomy.  Patients’ demographics, co-morbidities, pulmonary function tests (PFTs), perioperative outcomes, hospital length of stay (LOS), tumor histology, and pathologic stage were compared between groups.  Chi-square, analysis of variance, Student’s t-test, or Kruskal-Wallis test was used as appropriate, with p≤0.05 considered significant.

Results:  Of 380 R-VATS lobectomy pts, 20 (5.3%) required conversion to open lobectomy.  “Conversion” pts were similar in age, BMI, smoking history, co-morbidities, and PFTs to “non-conversion” pts.  More “conversion” pts received neoadjuvant therapy (25.0% vs. 3.6%; p<0.001).  Median estimated blood loss (EBL) was higher in “conversion” pts (500 mL [interquartile range (IQR)=675] vs 150 mL [IQR=150]; p<0.001), and median operative time was longer (298 min [IQR=157] vs 171 min [IQR=71]; p<0.001).  Tumor laterality and having an extended resection or re-do surgery did not significantly differ between groups.  Bleeding from a pulmonary vessel occurred in 50% of “conversion” pts versus 0.3% of “non-conversion” pts (p<0.001).  Tumor size, histology, grade of differentiation, and lymphovascular invasion were not significant factors for conversion.  Patients with pN2 disease had higher risk for conversion (45.0% vs 16.4%; p<0.001).  Pulmonary complications were similar between groups, including prolonged air leak (15.0% vs 21.9%; p=0.46), pneumonia (5.0% vs 6.4%; p=0.80), and respiratory failure (0% vs 1.9%; p=0.53), as was in-hospital mortality (5.0% vs 1.1%; p=0.14).  However, “conversion” pts were at higher risk for cardiopulmonary arrest (5% vs 0.6%; p=0.029), cerebrovascular accident (5% vs 0%; p<0.001), and multi-organ failure (10% vs 0.6%; p<0.001).  Median chest tube duration was longer for “conversion” pts (5.0 days [IQR=3.8] vs 4.0 days [IQR=4.0]; p=0.022), as was median hospital LOS (6.0 days [IQR=5.5] vs 4.0 days [IQR=4.0]; p=0.026).

Conclusions:  Pulmonary lobectomy via the R-VATS approach is associated with a low rate of conversion to thoracotomy.  However, pts who received neoadjuvant therapy or who have clinical N2 disease should be counseled about the higher risk of conversion to thoracotomy.  Further, preoperative cardiovascular risk assessment and postoperative monitoring for cardiovascular events are important.

62.10 Failure to Rescue is Associated with Delayed Exploratory Laparotomy After Traumatic Injury

A. M. Stey1, T. Bongiovanni1, R. Callcut1  1University Of California San Francisco, Department Of Surgery, San Francisco, CA, USA

Introduction: Failure to rescue is an outcome measure more dependent on care-related factors than other outcome measures, and may therefore be a better means of targeting areas for improvement in quality of care. The aim of this study was to determine whether delayed exploratory laparotomy following traumatic injury was associated with higher postoperative failure-to-rescue rates.

Methods:  The National Trauma Data Bank (NTDB) National Sample Program 2008-2012 weighted file was used to identify patients older than 12 years of age who underwent exploratory laparotomy following injury. Delay was defined as greater than one day from presentation. A multi-level logistic model estimated the association between delay and failure to rescue, controlling for hypotension in the emergency room, Glasgow Coma Scale (GCS), Injury Severity Score, and age at the patient level, and for hospital strata and response weight at the hospital level.
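A deliberately simplified sketch of the weighted regression follows: a single-level, survey-weighted logistic model rather than the authors' full multi-level specification with hospital strata. Data and weights are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated stand-ins for the NTDB variables and response weights.
rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "ftr": rng.integers(0, 2, n),          # failure to rescue
    "delayed": rng.integers(0, 2, n),      # laparotomy > 1 day after arrival
    "ed_hypotension": rng.integers(0, 2, n),
    "gcs": rng.integers(3, 16, n),
    "iss": rng.integers(1, 76, n),
    "age": rng.integers(13, 90, n),
    "weight": rng.uniform(0.5, 3.0, n),
})
fit = smf.glm("ftr ~ delayed + ed_hypotension + gcs + iss + age",
              data=df, family=sm.families.Binomial(),
              freq_weights=df["weight"]).fit()
print(f"adjusted OR for delay = {np.exp(fit.params['delayed']):.2f}")
```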

Results: A total of 2,245 patients underwent exploratory laparotomy following traumatic injury. Of those, 5.5% experienced a delay greater than one day, despite the fact that 9.9% were hypotensive in the emergency room, 9.8% had a GCS of three, and 16.5% had an Injury Severity Score greater than 25. In total, 31.2% of patients had a complication as defined in the NTDB. The overall mortality rate was 9.8%. Failure to rescue occurred in 4.1% of patients. The average odds of failure to rescue were 2.9 times higher with delayed exploratory laparotomy than without delay (95% confidence interval 1.3-6.2, p=0.0008) after adjusting for hypotension in the emergency room, GCS, Injury Severity Score, age, and hospital strata.

Conclusion: Operative delay greater than one day in exploratory laparotomy after traumatic injury was associated with significantly higher failure-to-rescue rates during hospitalization. These findings could imply either that injury sets into motion pathophysiologic insults that cannot easily be reversed upon delayed exploration, or that patients whose care is delayed initially may also be more likely to fail to be rescued from subsequent complications.

63.01 Nationwide Comparison of Cost & Outcomes of Transcatheter vs Surgical Aortic Valve Replacement

M. Lopez1, J. Parreco1, M. Eby1, J. Buicko1, R. Kozol1  1University Of Miami, Palm Beach, General Surgery Residency, Miami, FL, USA

Introduction:

The Healthcare Cost and Utilization Project (HCUP) released the first version of the Nationwide Readmission Database (NRD) in 2015. The NRD is unique in that it allows for tracking a patient across hospitals and supports analysis of national readmission rates. This database was used to compare various clinical outcomes and costs associated with transcatheter aortic valve implantation (TAVI) versus surgical aortic valve replacement (SAVR).

Methods:
The NRD was queried for all patients over the age of 18 undergoing SAVR or TAVI in 2013. Propensity matching was performed 1:1 using age, gender, elective status, and 12 comorbidities. Multivariate logistic regression was then applied to the matched and unmatched datasets for all outcomes of interest. Kaplan-Meier curves were constructed for the mortality endpoint.
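A sketch of the Kaplan-Meier step for the mortality endpoint, assuming the lifelines package; follow-up times and event indicators are simulated, and the preceding 1:1 matching is omitted.

```python
import numpy as np
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

# Simulated follow-up times (days) and death indicators for the two
# matched groups; the 1:1 matching step itself is omitted here.
rng = np.random.default_rng(4)
groups = {
    "TAVI": (rng.exponential(300, 200), rng.integers(0, 2, 200)),
    "SAVR": (rng.exponential(320, 200), rng.integers(0, 2, 200)),
}

ax = plt.subplot(111)
for label, (t, e) in groups.items():
    KaplanMeierFitter().fit(t, event_observed=e, label=label) \
        .plot_survival_function(ax=ax)
ax.set_xlabel("days from procedure")
ax.set_ylabel("survival probability")
plt.show()
```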

Results:
There were 39,270 patients who underwent aortic valve procedures (33,191 SAVR and 6,079 TAVI). Baseline analysis revealed that TAVI patients were older and had more comorbidities. Propensity matching resulted in 8,774 patients (4,387 per group). The mean total length of stay (LOS) including readmissions was less in TAVI patients (unmatched 12.5 vs 13.4 days, p < 0.01; matched 12.0 vs 14.2 days, p < 0.01), and the mean total cost of readmissions was similar in both groups (unmatched $23,114 vs $22,629, p = 0.57; matched $22,805 vs $22,220, p = 0.62).

Conclusion:
Initial admission costs are greater for patients undergoing TAVI than for patients undergoing SAVR. However, after factoring in readmissions, the mean LOS for patients undergoing TAVI is less than for those undergoing SAVR, and the mean cost of readmissions is similar in both groups. Additionally, any excess mortality risk associated with TAVI was eliminated after propensity matching.
 

62.07 Rapid Release of Blood Products Protocol Optimizes Blood Product Utilization Compared to Standard MTP

S. Jammula1, C. Morrison1  1Penn Medicine Lancaster General Health, Trauma Services, Lancaster, PA, USA

Introduction:  While massive transfusion protocols (MTPs) are an effective means of expeditiously delivering blood products to patients with exsanguinating hemorrhage, activation often occurs in cases with small blood volume deficits, leading to blood product wastage and over-transfusion. We sought to determine whether implementation of a rapid release of blood products protocol (RR), which uses fewer resources, would change the manner in which MTPs were activated in our level II trauma center and whether decreases in blood product wastage would be observed. We hypothesized that RR would result in the reservation of MTPs for sicker patients and that blood product wastage would decrease.

Methods: All MTP activations one year pre-RR (May 2015-2016) and one year post-RR (May 2016-2017) were analyzed. Compared to MTP (6 units packed red blood cells [RBCs], 6 units fresh frozen plasma [FFP]), RR releases only 4 RBCs and 1 FFP per activation. MTP resource utilization and blood product wastage were compared pre- to post-RR in both trauma and non-trauma populations. p ≤ 0.05 was considered significant.

Results: A total of 61 MTPs were activated pre- (n=24) and post-RR (n=37) (trauma: 36; non-trauma: 25), with 34 RRs activated in the post-RR period. Compared to the pre-RR group, significantly higher transfusion rates were found for FFP (pre: 4.46±7.91 units, post: 9.20±10.5 units; p=0.050) and platelets (pre: 0.79±1.35 units, post: 1.68±1.86 units; p=0.036). Higher, albeit non-significant, transfusion rates were also observed for RBCs and cryoprecipitate in the total population. No difference in wasted blood products was found pre- to post-RR within the trauma population; however, significantly increased waste of FFP was observed post-RR in the non-trauma cohort.

Conclusion: Institution of the RR protocol resulted in more appropriate activation of MTPs but did not decrease waste within a mature trauma center.
 

62.08 Rapid and Efficient: Comparison of Native, Kaolin and Rapid Thrombelastography Assays in Trauma

J. Coleman1, E. E. Moore2, A. Banerjee1, C. C. Silliman1, M. P. Chapman1, H. B. Moore1, A. Ghasabyan2, J. Chandler2, J. Samuels1, A. Sauaia1  1University Of Colorado, Surgery, Denver, CO, USA 2Denver Health, Surgery, Denver, CO, USA

Introduction:  Several thrombelastography (TEG) functional assays have been developed to reduce the time to available data to guide transfusion in trauma and hemorrhage.  These assays involve the addition of various agonists: none (native), kaolin (contact activation/intrinsic pathway), or kaolin plus tissue factor (rapid; extrinsic pathway).  How this acceleration of TEG affects its efficiency and accuracy as a predictor of adverse outcomes is unknown.  We hypothesized that citrated native TEG, without added activators, best predicts massive transfusion (MT).

Methods:  The Trauma Activation Protocol (TAP) study is an ongoing prospective study of trauma activation patients.  Whole blood samples were collected immediately upon presentation from April 2015 to March 2017 for concomitant citrated native (CN-TEG), citrated kaolin (CK-TEG), and citrated rapid (CR-TEG) TEGs.  We assessed predictive performance for MT (defined as >10 RBC units or death within 6 hours postinjury) via the area under the receiver operating characteristic curve (AUROC, derived with a logistic regression model) for ACT/R, maximum amplitude (MA), and angle, and via positive/negative predictive values (PV+, PV-) for fibrinolysis (LY30, stratified as >3%, >5%, or >7%).  We excluded 43 (12%) patients who died within 30 minutes postinjury. Data are expressed as median (IQR) or n (%).
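A sketch of the two evaluation metrics named here, assuming scikit-learn; the arrays are simulated stand-ins for the TEG measurements, so the outputs are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Simulated stand-ins: `mt` is the massive-transfusion flag, X holds
# the ACT/R, MA, and angle values, and `ly30` the lysis measurements.
rng = np.random.default_rng(5)
n = 339
mt = rng.integers(0, 2, n)
X = rng.normal(size=(n, 3))

# AUROC from a logistic regression on the TEG parameters
model = LogisticRegression(max_iter=1000).fit(X, mt)
print(f"AUROC = {roc_auc_score(mt, model.predict_proba(X)[:, 1]):.2f}")

# PV+/PV- for a stratified LY30 cutoff (>7% shown)
ly30 = rng.uniform(0, 15, n)
pred = ly30 > 7
tp = np.sum(pred & (mt == 1)); fp = np.sum(pred & (mt == 0))
tn = np.sum(~pred & (mt == 0)); fn = np.sum(~pred & (mt == 1))
print(f"PV+ = {tp / (tp + fp):.2f}, PV- = {tn / (tn + fn):.2f}")
```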

Results: Overall, 339 TAP patients had data for all TEGs (age: 32 years, IQR 26-46; 51% blunt injuries; NISS: 21, IQR 9-34). RBC transfusion was required in 140 (41%) patients; 12% (n=41) qualified as MT (>10 RBC units or death within 6 hours). The figure depicts AUROCs for ACT (CR-TEG) or R (CN-TEG, CK-TEG), MA, and angle.  Compared to CR-TEG, CK-TEG performed better for ACT/R (p=0.06) and angle (p=0.09), while CN-TEG was better for MA (p=0.16). LY30 >7% produced the best balance between PV+ (CR-TEG: 47%, CK-TEG: 58%, CN-TEG: 67%) and PV- (CR-TEG: 91%, CK-TEG: 92%, CN-TEG: 92%).  The 95% confidence intervals of the AUROCs and of the PVs of the different assays overlapped considerably, suggesting that CR-TEG’s results were not inferior to the other functional assays.  When all TEG measurements (ACT/R, MA, angle, and LY30 >7%) for each assay (CR-, CN-, CK-TEG) were in the model, there was again considerable overlap between the predictive performances of the different assays, and all AUROCs were >0.79.

Conclusion: There was significant overlap in the performance of the different TEG assays, suggesting that CR-TEG is a rapid and efficient method to guide hemostatic resuscitation in trauma patients.

 

62.09 Readmission for Falls Among Elderly Trauma Patients and the Impact of Anticoagulation Therapy

A. S. Chiu1, R. A. Jean1, M. Flemming1, B. Resio1, K. Y. Pei1  1Yale University School Of Medicine, Surgery, New Haven, CT, USA

Introduction:
Traumatic falls are the leading source of injury and trauma-related hospital admission for adults over 65 in the United States. A strong predictor of future falls is a history of previous falls, making patients hospitalized for a fall a high-risk population, yet it is unknown exactly how frequently this group is hospitalized for a repeat fall. Additionally, there remains debate over whether to resume anticoagulation in elderly patients who fall, owing to fears of bleeding complications with repeat falls. We evaluated the rates of readmission after a fall and the frequency of bleeding complications.

Methods:

The National Readmission Database is a nationally representative, all-payer database that tracks patient readmissions. All patients over 65 admitted for a traumatic fall in the first 6 months of 2013 and 2014 were included for analysis. Those who died during their index hospitalization were excluded.

The primary outcome was the 6-month readmission rate for a subsequent traumatic fall. Secondary outcomes included the frequency of death and of bleeding complications (intracranial hemorrhage, solid organ bleed, and hemothorax) during readmission. Further analysis was conducted stratifying by anticoagulation use.

Results:

In the first 6 months of 2013 and 2014, there were 342,731 admissions for a fall. The cohort had a mean age of 80.2 years, and 9.3% were on anticoagulation. The 6-month readmission rate for a repeat fall was 4.7%. Of those readmitted for a fall, 3.9% died during the subsequent admission and 12.6% had a bleeding complication; the mortality rate among those with a bleeding complication was 8.5%. The most common bleeding complication on readmission was intracranial bleed (90.8%), followed by hemothorax (5.8%) and solid organ bleed (3.5%).

The rate of readmission for falls among patients on anticoagulation (4.4%) was not significantly different from that of patients not on anticoagulation (4.7%, p=0.0933). The percentage of readmitted patients with bleeding complications was also not statistically different (12.2% with anticoagulation vs. 12.6% without, p=0.7629). However, the mortality rate was higher among those on anticoagulation (6.0% vs. 3.7% without anticoagulation, p=0.0211). Specifically, among patients readmitted with a bleeding complication, those on anticoagulation had a significantly higher mortality rate (24.8% vs. 7.0% without anticoagulation, p<0.0001).

Conclusion:

Among patients hospitalized for a fall, nearly 5% will be re-hospitalized for a subsequent fall within 6 months. Patients on anticoagulation do not have increased rates of bleeding complications when hospitalized for repeat falls; however, when they do bleed, they have far higher mortality rates than those not on anticoagulation. Given the high rate of repeat falls and the potential of anticoagulation to fatally exacerbate injuries, caution should be exercised when restarting anticoagulation in elderly patients hospitalized for a fall.

62.05 The Sooner the Better: Use of a Real Time Automated Bedside Dashboard to Improve Sepsis Care

A. Jung1, M. D. Goodman1, C. Droege1, V. Nomellini1, J. A. Johannigman1, J. B. Holcomb2, T. A. Pritts1  1University Of Cincinnati, Surgery, Cincinnati, OH, USA 2University Of Texas Health Science Center At Houston, Houston, TX, USA

Introduction: Despite advances in modern critical care, sepsis and septic shock remain major contributors to morbidity and mortality.  Recent data indicate that decreasing the interval between diagnosis and antibiotic administration is associated with improved patient outcomes.  The effect of a visual clinical decision support system on this interval is unknown.  We hypothesized that implementation of a commercially available bedside clinical surveillance visualization system would be associated with earlier antibiotic administration and decreased length of stay in surgical intensive care unit (SICU) patients.

Methods: An automated clinical surveillance visualization system was implemented in our SICU beginning in July 2016.  This system was integrated with our electronic medical record and continuously displayed vital signs and laboratory data at the bedside on a dedicated clinical dashboard (42” screen).  A bedside visual sepsis screen bundle (SSB) was added to the dashboard in June 2017.  Among other variables, the clinical dashboard displayed each patient’s calculated sepsis score, based on heart rate, body temperature, respiratory rate, and white blood cell count.  The SSB indicator turned red if the sepsis score exceeded four points.  We retrospectively analyzed prospectively collected data from patients with bedside visualization systems before and after implementation of the SSB, and determined mean sepsis score, maximum sepsis score, time to antibiotic administration, and SICU length of stay.
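A sketch of the dashboard's threshold logic follows; the four input variables and the red-above-four rule come from the abstract, but the point assignments below (1 point for a mildly abnormal value, 2 for a severely abnormal one, with invented cutoffs) are hypothetical, since the actual scoring is not specified.

```python
def dashboard_sepsis_score(hr, temp_c, rr, wbc):
    """Hypothetical graded score over the four variables the abstract
    names: 1 point for a mildly abnormal value, 2 for a severely
    abnormal one. The real dashboard's point assignments are not
    specified in the abstract, so all cutoffs here are invented."""
    def grade(value, mild, severe):
        if value < severe[0] or value > severe[1]:
            return 2
        if value < mild[0] or value > mild[1]:
            return 1
        return 0
    return (grade(hr, (50, 90), (40, 130))
            + grade(temp_c, (36.0, 38.0), (35.0, 39.5))
            + grade(rr, (10, 20), (8, 30))
            + grade(wbc, (4.0, 12.0), (2.0, 20.0)))

score = dashboard_sepsis_score(hr=135, temp_c=39.8, rr=32, wbc=21.0)
print("SSB: RED" if score > 4 else "SSB: normal", score)  # red above four
```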

Results: During the study period, data were collected on 232 patients admitted to beds with bedside clinical surveillance visualization systems.  Of these, 37 patients demonstrated elevated sepsis scores and were given antibiotics for clinical evidence of sepsis or septic shock (26 prior to SSB implementation, 11 after).  The mean sepsis score was similar in the pre- and post-SSB groups (1.8 vs 1.6, p=NS), as was the maximum sepsis score (6.0 vs 5.4, p=NS).  Time to antibiotic administration was significantly less in the post-SSB patients (pre: 48.9±14 hours vs post: 11.6±6 hours, p<0.05, see figure).  ICU length of stay was also shorter following introduction of the SSB (pre: 16.8 days, post: 6.2 days, p<0.05).

Conclusion: Implementation of a bedside clinical surveillance visualization decision support system was associated with a decreased interval between the diagnosis of sepsis or septic shock and the administration of antibiotics. Integration of decision support systems in the ICU setting may help providers adhere to Surviving Sepsis guidelines for the identification and treatment of surgical patients with infections and improve quality of care.

62.06 ADAMTS13 activity correlates with coagulopathy after trauma: a potential novel marker and mechanism

M. R. Dyer1, W. Plautz1, M. A. Rollins-Raval2, J. S. Raval2, B. S. Zuckerbraun1, M. D. Neal1  1University Of Pittsburgh, Surgery, Pittsburgh, PA, USA 2University Of North Carolina At Chapel Hill, Pathology And Laboratory Medicine, Chapel Hill, NC, USA

Introduction: ADAMTS13 is a metalloprotease that binds and cleaves von Willebrand factor (VWF), a critical regulator of thrombus formation.  ADAMTS13 deficiency has been implicated as the cause of idiopathic thrombotic thrombocytopenic purpura (TTP) and is also closely associated with other diseases of microvascular injury. Trauma is a leading cause of mortality worldwide and is characterized by a unique trauma-induced coagulopathy (TIC), whose pathophysiology is incompletely understood. We sought to evaluate a possible link between ADAMTS13 activity and coagulopathy in trauma patients.

Methods:  Plasma samples from an ongoing randomized trial in trauma patients were obtained for analysis; plasma from healthy individuals served as controls. A fluorescence resonance energy transfer (FRET) assay was used to determine the activity level of circulating ADAMTS13. ADAMTS13 antigen and antibody levels were determined by enzyme-linked immunosorbent assays (ELISA). FRET activity levels, ADAMTS13 antigen levels, and antibody levels were compared between trauma patients and healthy controls using Mann-Whitney U tests. FRET activity levels were correlated with laboratory markers of coagulopathy (INR, thromboelastography (TEG) maximum amplitude (MA), TEG activated clotting time, and blood product requirement) using Spearman correlation coefficients.
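The two statistical comparisons named here might be run as follows with scipy; all arrays are hypothetical stand-ins for the assay data.

```python
import numpy as np
from scipy import stats

# Hypothetical stand-ins for the assay values; percentages are
# ADAMTS13 activity.
rng = np.random.default_rng(6)
trauma_activity = rng.normal(66, 20, 50)
control_activity = rng.normal(96, 10, 20)

# Group comparison: Mann-Whitney U test
u_stat, p_mw = stats.mannwhitneyu(trauma_activity, control_activity)

# Correlation with a coagulopathy marker (e.g., admission INR)
inr = rng.normal(1.3, 0.3, 50)
rho, p_rho = stats.spearmanr(trauma_activity, inr)
print(p_mw, rho, p_rho)
```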

Results: 70% of trauma patients had abnormal ADAMTS13 activity on FRET analysis (normal >80%). Compared to healthy controls, plasma ADAMTS13 activity was significantly lower in trauma patients (96% vs. 66%, p=0.0013). Correspondingly, 81% of trauma patients displayed depressed levels of circulating ADAMTS13 compared to healthy controls, and overall levels of circulating ADAMTS13 were significantly lower in trauma patients (0.96 vs. 0.47 ug/ml, p=0.0019). There was no difference in circulating antibodies against ADAMTS13 between trauma patients and healthy controls. Interestingly, ADAMTS13 activity correlated significantly with Injury Severity Score (ISS) in trauma patients (r=-0.34, p<0.05). Next, ADAMTS13 activity levels were correlated with markers of coagulopathy. Strikingly, ADAMTS13 activity was closely correlated with admission INR (r=-0.6250, p<0.001), admission TEG activated clotting time (r=-0.36, p<0.05), and admission TEG MA (r=0.36, p<0.05). Finally, ADAMTS13 activity correlated significantly with overall blood product transfusion requirements in trauma patients (r=-0.46, p<0.05).

Conclusion: Trauma results in several derangements of the normal clotting process. We present evidence that the circulating levels and enzymatic activity of ADAMTS13, a key regulator of normal hemostasis, are decreased following severe trauma. This decreased activity correlated significantly with markers of coagulopathy and may represent a novel insight into potential mechanisms of TIC.