92.03 Exploring the surgical needs of the incarcerated population

C. Hutchinson1, M. K. Bryant1, S. Scarlet1, R. Maine1, E. B. Dreesen1  1University of North Carolina at Chapel Hill, Surgery, Chapel Hill, NC, USA

Introduction:  There are over two million incarcerated people in America. Incarceration is strongly tied to poor health outcomes and contributes to health disparities. Medical and mental illness are more common among incarcerated people than in the general population. The need for primary care, psychiatric care, and infectious disease care in this population has been well described. However, little is known about the surgical needs of incarcerated people. In this study, we characterized the surgical care provided to incarcerated people at a large academic medical center in North Carolina.

Methods:  We conducted a retrospective case series. All incarcerated patients who received surgical care between April 4, 2014 and March 31, 2018 were identified in billing records based on payer. Basic demographic information (age, sex, correctional facility), primary diagnosis, surgical division, operation(s) performed, length of stay (LOS), charges, and amounts paid were obtained.

Results: A total of 1,725 incarcerated patients were cared for during the study period. Mean patient age was 46.5 years (SD 13.6, range 16-88). The majority of patients were men (n=1265, 73%). Location of incarceration was available for 81% of patients. Seventy-one correctional facilities across North Carolina (40%) were represented. A total of 8,568 charges and 1,553 procedures were identified. The average number of charges per patient was 5.0 (SD 9.8, range 1-211). Every division in the Department of Surgery, including Pediatric Surgery, cared for an incarcerated person during the study period. The division most likely to care for incarcerated patients was Vascular Surgery (n=2268 charges, 30%), followed by Gastrointestinal Surgery (n=1636 charges, 22%) and Surgical Oncology (n=1293 charges, 17%). Emergency room visits represented 11.6% of charges (n=992), and 620 patients (35.9%) had at least one ER visit. The division that billed the most operative/procedural charges was Vascular Surgery (n=458, 29%), followed by GI Surgery (n=319, 21%) and Surgical Oncology (n=222, 14%). A wide range of operative procedures was performed across the Department, with the top five procedures billed representing only 11.7% of the total number of procedures. Of these, the most common procedure was wound debridement (n=64, 4%), followed by vascular access procedures (n=33, 2%), lysis of adhesions (n=29, 1.9%), and robotic surgery (n=29, 1.9%). Total charges filed were $4,035,981.73, of which the Department of Corrections (DOC) paid $2,594,533.51 (64%).

Conclusion: Incarcerated patients presented to our hospital from correctional facilities across the state and sought care for a wide variety of operations and procedures. Elective procedures related to chronic conditions were more common than emergent procedures, and the procedures performed reflected the aging prison population. The DOC offered comparable reimbursement, paying >60% of billed charges.

 

90.18 Mortality Related to Mass-Casualty Incidents at a Malawian Tertiary Hospital

J. Kincaid1,3, G. Mulima3, N. Rodriguez-Ormaza2, A. Charles2, R. Maine2  1Thomas Jefferson University, Surgery, Philadelphia, PA, USA 2University of North Carolina at Chapel Hill, Surgery, Chapel Hill, NC, USA 3Kamuzu Central Hospital, Surgery, Lilongwe, Malawi

Introduction:  Mass-casualty incidents (MCIs) suddenly strain a healthcare system with an influx of trauma patients. Little is known about how MCIs in low-resource settings affect mortality. We aimed to determine whether the resource strain from MCIs at a tertiary hospital in Malawi increased mortality for MCI patients and for patients who arrived on the same day as an MCI, compared to patients who presented on days without an MCI.

Methods:  This is a retrospective analysis of a prospective trauma registry, from January 1, 2012 through December 31, 2016, at a tertiary hospital in Malawi. MCIs were defined as ≥4 trauma patients presenting simultaneously to the casualty department. We conducted bivariate analyses comparing patient, mechanism of injury, and outcome characteristics by whether or not the event was an MCI. Next, we determined whether non-MCI patients presented on the same day as an MCI or on a non-MCI day and compared the same variables. Categorical variables were compared with the Pearson chi-squared test or Fisher's exact test; continuous variables were compared using the Student t-test, Wilcoxon rank-sum test, or Kruskal-Wallis test, as appropriate. Multivariable analysis using modified Poisson regression was used to estimate risk ratios (RR) and 95% confidence intervals (CI). We adjusted for sex, age, primary body area injured, transfer status, nighttime presentation, vehicle-related trauma, and admission year. P-values <0.05 were considered statistically significant.
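
For readers unfamiliar with the "modified Poisson" approach named above (Poisson regression on a binary outcome with a robust sandwich variance, so exponentiated coefficients can be read as risk ratios), a minimal Python sketch follows. The file and column names (died, mci, sex, age, body_area, transfer, night, vehicle, year) are hypothetical placeholders; this is illustrative only, not the authors' analysis code.

```python
# Minimal sketch of a "modified Poisson" model: Poisson regression on a binary
# outcome with robust (sandwich) standard errors, so exp(coef) is a risk ratio.
# File and column names below are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("trauma_registry.csv")  # hypothetical file

model = smf.glm(
    "died ~ mci + sex + age + C(body_area) + transfer + night + vehicle + C(year)",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC0")  # robust variance -> valid CIs for a binary outcome

summary = pd.DataFrame({
    "RR": np.exp(model.params),              # risk ratios
    "CI_lower": np.exp(model.conf_int()[0]),  # 95% CI, lower bound
    "CI_upper": np.exp(model.conf_int()[1]),  # 95% CI, upper bound
})
print(summary)
```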

Results: The registry included 75,350 trauma patients; 3% (2,227) were part of an MCI and 11,365 (15%) presented on the same day as an MCI. Overall, a higher proportion of patients who presented as part of an MCI died (90 [4%] vs. 2,124 [2.9%], p<0.001). This difference was driven by a higher proportion of MCI patients who were dead on arrival (2.9% vs. 1.1%, p<0.001), as in-hospital mortality for MCI and non-MCI trauma did not differ statistically (4.1% vs. 3.7%, p=0.671). However, trauma patients who were not part of an MCI but presented to the ED the same day as an MCI had higher in-hospital mortality than patients who presented on days without an MCI (7.0% vs. 5.4% vs. 5.6%, p=0.015). When compared to non-MCI trauma patients presenting on a non-MCI day, being part of an MCI increased the risk of in-hospital mortality by 19% (RR=1.19, 95% CI: 0.98-1.44, p=0.0821).

Conclusion: MCIs presented frequently to this Malawian tertiary hospital and stressed its limited capacity. The higher in-hospital mortality of trauma patients who were not involved in an MCI but presented on the same day as one suggests that strain on limited resources results in poorer patient outcomes when the hospital absorbs an MCI. Improved capacity for treating trauma patients at both the central hospital and district hospitals, coupled with improved triage protocols, could decrease inappropriate transfers of trauma patients, which contribute to overwhelming the central hospital.
 

90.14 Mortality Following Trauma Exploratory Laparotomy in Sub-Saharan Africa

L. N. Purcell1, A. N. Yohann1, R. N. Maine1, T. N. Reid1, C. Mabedi2, A. Charles1  1University of North Carolina at Chapel Hill, General Surgery, Chapel Hill, NC, USA 2Kamuzu Central Hospital, General Surgery, Lilongwe, Malawi

Introduction: Trauma is a leading cause of morbidity and mortality, particularly in those 15 to 45 years old.  Over 90% of trauma mortality occurs in low- and middle-income countries (LMICs), especially in sub-Saharan Africa. Head injury is the main driver of trauma mortality, specifically in the pre-hospital setting. For patients presenting with torso injury, mortality is potentially preventable if bleeding, particularly from solid organ injury, is controlled expeditiously. We therefore sought to determine the risk of mortality in trauma patients requiring laparotomy in Malawi.

Methods:  This is a retrospective analysis of prospectively collected data on patients admitted with torso trauma at Kamuzu Central Hospital from 2008 to 2017. Data variables included basic demographics, injury severity and characteristics, surgical intervention, and mortality. Bivariate analysis was performed for covariates by exploratory laparotomy status. Poisson regression was performed to estimate the risk of mortality after trauma laparotomy, controlling for pertinent covariates (injury severity, nighttime and weekend presentation, injury mechanism, and time from injury to presentation).

Results: Over the study period, there were 120,573 trauma patients. Of the 20,522 (17%) patients admitted, 6,474 (31.6%) had torso trauma. Of these, 341 (5.3%) underwent exploratory laparotomy. Patients undergoing exploratory laparotomy were predominantly male (73.3%), and blunt injury mechanisms predominated (92.8%). Crude mortality was 9.5% for patients undergoing exploratory laparotomy versus 6.6% for those managed non-operatively; overall mortality for torso trauma was 6.8%. On Poisson regression, the incidence risk ratio for mortality following exploratory laparotomy, after controlling for covariates, was 3.74 (CI 2.06-6.78, p<0.001).

Conclusion: After adjusting for injury severity, there is a greater than three-fold increased risk of mortality following trauma exploratory laparotomy. This may be attributable to limited availability of allogeneic blood transfusion, inadequate perioperative resuscitation, in-hospital delays to operative intervention including limited access to the operating room, and delays in providers' decisions to operate. Trauma protocols are imperative in low-resource settings to optimize timely and appropriate operative management of torso trauma.

 

90.02 The Epidemiology of Mass−Casualty Incident Patients Presenting to a Malawian Tertiary Hospital

J. Kincaid1,3, G. Mulima3, N. Rodriguez-Ormaza2, A. Charles2, R. Maine2  1Thomas Jefferson University, Surgery, Philadelphia, PA, USA 2University of North Carolina at Chapel Hill, Surgery, Chapel Hill, NC, USA 3Kamuzu Central Hospital, Surgery, Lilongwe, Malawi

Introduction:  There is a dearth of information regarding mass-casualty incidents (MCIs) in low-resource settings like Malawi. Most literature describes single catastrophic events that expose the fragility of a trauma system and its limited ability to handle a sudden increase in patients. However, in low-resource environments, events that stress the hospital care delivery system are more common than large disasters. We aimed to describe the frequency and characteristics of MCIs at a tertiary hospital in Malawi.

Methods:  We retrospectively analyzed trauma registry data at a tertiary hospital in Malawi from January 1, 2012 through December 31, 2016. We defined an MCI as ≥4 trauma patients presenting simultaneously. We present descriptive statistics and a bivariate analysis comparing patient, trauma mechanism, and outcome characteristics for MCI and non-MCI trauma patients. Categorical variables were compared with the chi-squared or Fisher's exact test, and continuous variables were compared using the t-test or Wilcoxon rank-sum test. Statistical significance was defined as p<0.05.
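
A minimal sketch of the bivariate comparisons described above, using scipy; the file and column names (mci, sex, age, time_to_presentation) are hypothetical placeholders rather than the registry's actual fields.

```python
# Sketch of the bivariate comparisons described above: chi-squared test for a
# categorical variable, t-test and Wilcoxon rank-sum (Mann-Whitney U) for
# continuous variables. File and column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("trauma_registry.csv")  # hypothetical file
mci, non_mci = df[df["mci"] == 1], df[df["mci"] == 0]

# Categorical: sex vs. MCI status
table = pd.crosstab(df["mci"], df["sex"])
chi2, p_sex, _, _ = stats.chi2_contingency(table)

# Continuous, roughly normal: age (Student t-test)
t, p_age = stats.ttest_ind(mci["age"].dropna(), non_mci["age"].dropna())

# Continuous, skewed: time to presentation (Wilcoxon rank-sum / Mann-Whitney U)
u, p_time = stats.mannwhitneyu(mci["time_to_presentation"].dropna(),
                               non_mci["time_to_presentation"].dropna())

print(f"sex p={p_sex:.4f}, age p={p_age:.4f}, time p={p_time:.4f}")
```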

Results: From 2012 to 2016, 75,278 trauma patients arrived at the casualty department; 2,227 (3%) arrived as part of an MCI. A total of 341 MCIs occurred over the five years, an average of 1.1 per week. Most MCIs involved 4 to 6 people. A higher proportion of MCI patients were women (35% vs. 27% for non-MCI patients). MCI victims were older than non-MCI patients (29±15 vs. 23±14 years). The most common mode of transportation was private vehicle for both MCI (52%) and non-MCI (35%) patients. The median time to hospital presentation was shorter for MCI patients (1 hr vs. 4 hrs, p<0.0001). More MCI patients presented between 6pm and 6am (41% vs. 25%, p<0.0001), when hospital staffing is lowest. Vehicle-related trauma was the most common mechanism for MCIs (77%, compared to 25% for non-MCI trauma, p<0.0001). MCI patients were also admitted to the hospital more frequently (20% vs. 16%). A higher proportion of MCI victims were brought in dead (3% vs. 1%, p<0.0001). While overall mortality was higher among MCI victims (4% vs. 2%, p<0.0001), in-hospital mortality was 5.6% for both MCI and non-MCI patients.

Conclusion: In Malawi, MCIs occur frequently, and most MCI patients arrive between 6pm and 6am, when staffing is most limited. Hospital and public health efforts should address staff capacity for MCIs and work to decrease road traffic crashes. While overall mortality is higher for MCIs, MCI patients who arrive at the hospital alive have a chance of survival to discharge equal to that of admitted non-MCI patients. Establishing pre-hospital care and an organized trauma system to improve triage could improve post-MCI survival.

85.06 A National Survey of Sexual Harassment Among Surgeons

A. Nayyar1, S. Scarlet1, P. D. Strassle1, D. W. Ollila1, L. M. Erdahl3, K. P. McGuire2, K. K. Gallagher1  1University of North Carolina at Chapel Hill, Department of Surgery, Chapel Hill, NC, USA 2Virginia Commonwealth University, Department of Surgery, Richmond, VA, USA 3University of Iowa, Department of Surgery, Iowa City, IA, USA

Introduction:

Emerging data suggest that sexual harassment in the workplace is a common occurrence for women across all professions and can be harmful both personally and professionally. While recent studies have reported the incidence of sexual harassment in medicine, no study has examined the nature or scope of sexual harassment experienced by surgeons.

 

Methods:

An anonymous electronic survey was distributed via a web-based platform to members of the American College of Surgeons (ACS) and the Association of Women Surgeons (AWS), and through targeted social media platforms, from April to July 2018. Questions pertained to workplace experiences and the frequency of sexual harassment in the past 12 months. Sexual harassment was defined according to the U.S. Equal Employment Opportunity Commission. Fisher's exact and Mantel-Haenszel tests were used to assess differences in respondent characteristics.
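
The two tests named above can be illustrated with the short Python sketch below (scipy for Fisher's exact, statsmodels for a Mantel-Haenszel test across strata). All counts and strata are made-up placeholders, not the survey data.

```python
# Sketch of the two tests named above: Fisher's exact test on a 2x2 table and a
# Mantel-Haenszel test across strata. All counts below are made-up placeholders.
import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.contingency_tables import StratifiedTable

# 2x2 table: harassment (yes/no) x gender (women/men) -- placeholder counts
table = np.array([[432, 312],
                  [61, 188]])
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact: OR={odds_ratio:.2f}, p={p_value:.4f}")

# Mantel-Haenszel: the same 2x2 comparison stratified by practice setting,
# supplied as a list of per-stratum 2x2 tables (placeholder counts).
strata = [np.array([[200, 150], [30, 90]]),   # e.g., academic
          np.array([[120, 100], [20, 60]]),   # e.g., community
          np.array([[112, 62], [11, 38]])]    # e.g., private/other
mh = StratifiedTable(strata)
print(mh.test_null_odds())                     # Mantel-Haenszel chi-squared test
print(f"common OR = {mh.oddsratio_pooled:.2f}")
```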

Results:

1,005 individuals completed the survey. 74% (n=744) identified as women, a response rate of 18% among US women surgeons. 25% (n=249) of respondents identified as men, representing 1% of male surgeons in the US; these responses were analyzed separately. Respondents reported employment by an academic institution (51%), community medical center (13%), private practice (15%), or other (19%). For both genders, the most common specialties were general surgery (34%), trauma surgery (10%), and surgical oncology (8%). Overall, 58% (n=432) of women surgeons experienced sexual harassment within the 12-month period preceding the survey, compared to 25% (n=61) of men (p<0.0001). Among women, the most common forms of harassment reported were "verbal or physical conduct (e.g. body language)" (53%), "unwanted sexual advances or physical contact" (23%), and "comments about sexual orientation" (10%). Women trainees were more than twice as likely as attending surgeons to experience harassment (OR 2.52, 95% CI 1.78, 3.56, p<0.0001). The majority (84%) of incidents of harassment reported by women in the survey were not reported to any institutional authority. The most common reasons for not reporting were "fear of a negative impact on my career" (43%), "fear of retribution" (32%), and "fear of being dismissed and/or inaction towards perpetrator" (31%) (Figure).

Conclusion:

Our study indicates an alarming prevalence of unreported sexual harassment experienced by women surgeons in the US. Combined with the documented sexual harassment of women physicians in other medical specialties, this finding demonstrates an urgent need to improve the safety of the healthcare workplace, not just for women surgeons, but for all.
 

62.19 Fellowship or Family? A comparison of residency leave policies with the Family and Medical Leave Act

S. T. Lumpkin1, M. K. Klein1, S. Scarlet1, M. Williford1, K. Cools1, M. C. Duke1  1University of North Carolina at Chapel Hill, Surgery, Chapel Hill, NC, USA

Introduction: In 1993, the Family and Medical Leave Act (FMLA) mandated 12 weeks of unpaid, job-protected leave. Residency training is inherently demanding and inflexible. While 40% of residents anticipate having a child during training, taking leave to care for personal and family needs may delay residency graduation, board certification, and fellowship initiation. We hypothesized that a 12-week (FMLA) leave would delay board certification and fellowship training under current specialty board training requirement policies.

Methods: We categorized the primary specialties recognized by the Accreditation Council for Graduate Medical Education (n=24) into surgical (n=10) and non-surgical (n=14) specialties. We excluded secondary specialties and specialties with fewer than 100 active residents nationwide. From May 2018 to August 2018, we examined the specialty leave policies to determine the impact of leave on the duration of residency training, board eligibility, and fellowship training. We compared our findings to a similar study of policies published in 2006.

Results: Across all specialties, the mean maximum leave allowed per year was 4.9 weeks (range 4-8). Among surgical specialties, the mean maximum leave per year was 5.3 weeks (range 4-8), compared to 4.6 weeks (range 4-6) among non-surgical specialties (p=0.38). Only five (21%) specialties have specific policy language regarding parental leave, and four (16%) regarding medical leave. Since 2006, seven specialty boards have substantially changed leave policies. In 2006, a 6-week leave would cause a delay of one year in board eligibility in 6 specialties; whereas in 2018, a 6-week leave would not result in delayed board eligibility for any specialty. A minority of specialties offer strategies to mitigate the impact of a 6-week leave, including taking leave during elective or non-clinical rotations (n=2), averaging leave across multiple years (n=8), extension of chief year (n=2), merit-based advancement (n=3), and exclusive program director discretion (n=2). In 2018, a 12-week (FMLA) leave during residency would extend training by a mean of 4.1 weeks (range 0-8) and delay board eligibility by a mean of 2.25 months (range 0-12). A 12-week leave in 17 specialties (71%) would delay fellowship training by at least one year.

Conclusion: Residents in surgical and non-surgical specialties are allowed similar amounts of leave, although this is less than half of the FMLA requirement. Overall, there has been minimal change in the maximum duration of leave since 2006, but the impact of such leave on board eligibility has been mitigated. Unfortunately, a 12-week, FMLA-eligible leave would still cause significant delays in training, board eligibility, and entry into fellowship. The long-term effect of extending the duration of training may affect the decision to pursue fellowship, decrease protected time to study for boards, and ultimately increase physician burnout.

61.10 Treatment Modalities for Esophageal Adenocarcinoma in the US: Trends and Survival Outcomes

M. Di Corpo1, F. Schlottmann1, P. D. Strassle2, C. Gaber2, M. G. Patti1,2  1University of North Carolina at Chapel Hill, Department of Surgery, Chapel Hill, NC, USA 2University of North Carolina at Chapel Hill, Department of Medicine, Chapel Hill, NC, USA

Introduction:  The rise in incidence of esophageal adenocarcinoma in the United States over the last decade has been well documented; however, data on trends in use of different therapies and their impact on long-term survival are lacking. We aimed to: a) assess the national trends in the use of different treatment modalities; and b) compare survival outcomes among the different treatment strategies. 

Methods:  A retrospective, population-based analysis was performed using the National Cancer Institute Surveillance, Epidemiology, and End Results (SEER) Program registry for the period 2004-2014. Adult patients (>18 years old) diagnosed with esophageal adenocarcinoma were eligible for inclusion. Treatments of interest included chemoradiation, esophagectomy, and chemoradiation plus esophagectomy. The yearly incidence of each treatment strategy was calculated using Poisson regression. A weighted Cox regression model was used to assess the overall effect of each treatment on mortality. Inverse-probability of treatment weights were used to account for potential confounding by year of diagnosis, sex, age, race/ethnicity, tumor grade, and derived AJCC TNM value. 
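
A minimal sketch of an inverse-probability-of-treatment-weighted survival analysis of the kind described above, assuming a simplified binary treatment contrast and hypothetical file and column names; it is illustrative only, not the authors' SEER analysis.

```python
# Sketch of an inverse-probability-of-treatment-weighted (IPTW) Cox model:
# 1) model the probability of treatment from baseline covariates,
# 2) weight each patient by 1 / P(received the treatment they actually got),
# 3) fit a weighted Cox model with a robust variance.
# File, column names, and the binary treatment contrast are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

df = pd.read_csv("seer_esophageal.csv")  # hypothetical file

# Propensity score for (say) esophagectomy vs. chemoradiation alone
ps_model = smf.logit(
    "esophagectomy ~ C(year) + sex + age + C(race) + C(grade) + C(stage)",
    data=df,
).fit(disp=0)
ps = ps_model.predict(df)
df["iptw"] = df["esophagectomy"] / ps + (1 - df["esophagectomy"]) / (1 - ps)

# Weighted Cox model for overall mortality
cph = CoxPHFitter()
cph.fit(df[["survival_months", "death", "esophagectomy", "iptw"]],
        duration_col="survival_months", event_col="death",
        weights_col="iptw", robust=True)
cph.print_summary()
```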

Results: A total of 10,755 patients were included. The median follow-up time was 15 months (interquartile range 7-33). During the study period, the percentage of patients treated with esophagectomy alone significantly decreased from 14.6% to 4.8% (p<0.0001), the percentage treated with chemoradiation alone increased from 25.45% to 28.5% (p=0.08), and the percentage treated with chemoradiation plus esophagectomy significantly increased from 13.7% to 19.8% (p<0.0001). The 60-month survival rate was 13.0% for chemoradiation only, 33.0% for esophagectomy only, and 36.3% for chemoradiation plus esophagectomy (figure). After accounting for patient and cancer characteristics, both esophagectomy (hazard ratio [HR] 0.62, 95% CI 0.55, 0.70, p<0.0001) and chemoradiation plus esophagectomy (HR 0.45, 95% CI 0.41, 0.48, p<0.0001) were associated with significantly lower mortality compared to chemoradiation only.

Conclusion: The use of esophagectomy alone has decreased, and both the use of chemoradiation plus esophagectomy and chemoradiation alone have increased for patients with esophageal adenocarcinoma. Considering the better survival outcomes achieved with surgical resection, the use of chemoradiation alone should be discouraged in surgically fit patients.    

 

58.20 Characteristics and Complications of G-Tube Placement Among Surgical and Non-Surgical Services

P. M. Alvarez1, J. Herb2, A. Vijay1, C. Cunningham1, K. Anderson1, S. Francois1, K. Herbert1, N. Bartl1, E. Hoke2, J. Jadi1, N. Rodriguez-Ormaza2,3, R. Maine2, E. Dreesen2, A. Charles2, T. Reid2  1University of North Carolina at Chapel Hill, School of Medicine, Chapel Hill, NC, USA 2University of North Carolina at Chapel Hill, Department of General Surgery, Chapel Hill, NC, USA 3University of North Carolina at Chapel Hill, Department of Epidemiology, Chapel Hill, NC, USA

Introduction:  While surgical and non-surgical services routinely place gastrostomy tubes, few investigations have examined the procedure’s outcomes based on performing service. This study describes baseline characteristics, complications, and mortality among patients who had gastrostomy tubes placed by either a surgical or non-surgical service.

Methods:  This is a retrospective study of all adult patients who underwent gastrostomy tube placement at UNC from March 2014 to July 2017. Baseline characteristics included age, sex, BMI, substance abuse, comorbidities, previous abdominal surgery, and prior gastrostomy tube. We compared outcomes of placement by surgical versus non-surgical services, including severe and minor complications and both overall and gastrostomy tube-related mortality.

Results: Of the 1,339 adults who underwent gastrostomy tube placement, 45% (n=626) had tubes placed by surgical services and 55% (n=713) by non-surgical services. Baseline characteristics were similar, although patients of non-surgical services had higher rates of congestive heart failure (p=0.004) and COPD (p=0.05). Non-surgical services placed all gastrostomy tubes percutaneously, while surgical services placed 52.6% percutaneously, 37.3% laparoscopically, and 10.1% open. Mortality related to gastrostomy tube placement was similar (surgical 0.6% vs. non-surgical 0.5%, p=1.0); however, overall mortality was higher among patients of non-surgical services (23.7% vs. 16.5%, p=0.004). There was no difference in major or minor complication rates (27.3% surgical vs. 27.2% non-surgical, p=0.88).

Conclusion: Surgical and non-surgical service placement of gastrostomy tubes had equivalent gastrostomy tube related mortality and complication rates, although patients with gastrostomy tubes placed by non-surgical services experienced higher overall hospital mortality. The high in-hospital mortality and complication rates underscore the need for thoughtful patient selection for this procedure.

55.16 Do negative pressure incisional wound VACs decrease surgical site infections in pediatric surgery?

M. R. Phillips2, S. L. English1, E. Teeple1, A. E. Martin1, K. Reichard1, C. D. Vinocur1, L. Berman1  1Nemours/Alfred I DuPont Hospital for Children, Sidney Kimmel School of Medicine at Thomas Jefferson University, Wilmington, DE, USA 2University of North Carolina at Chapel Hill, Surgery, Chapel Hill, NC, USA

Introduction: Surgical site infection (SSI) rates are an important quality metric in surgery. Adult studies have demonstrated a decrease in SSI rates with the use of negative pressure incisional wound VAC (NPIWV) dressings. No studies have examined the effect of NPIWV dressings on SSI rates in pediatric patients.

Methods:   We performed a retrospective review of patients who underwent surgery using NPIWV at our institution between February 2016 and February 2018. NPIWV dressings were applied by approximating the skin edges with buried, absorbable sutures, protecting the skin with an adhesive barrier, covering the wound with a black sponge, and applying negative pressure for up to 4 days. We identified a group of patients with the same CPT codes from our National Surgical Quality Improvement Program-Pediatric (NSQIP-P) data between January 2014 and January 2016 in order to compare SSI rates in the NPIWV population to historical controls. The cohorts were compared using either a Mann-Whitney test or chi-square analysis (p<0.05).

Results: Thirty-five patients underwent surgery using NPIWV for wound closure. Sixty-four patients with similar CPT codes who did not have NPIWV dressings were identified from our institutional NSQIP-P data (Table 1). The groups were comparable. There was 1 SSI in 35 cases (2.9%) in the NPIWV group and 7 SSIs in the 64 historical control patients (10.9%). The difference in SSI rates did not reach significance (p=0.25, OR 4.18, CI 0.49-35.43). There were no complications associated with the use of NPIWV dressings.

Conclusion: NPIWV dressings can be used safely for the management of contaminated wounds in pediatric patients undergoing surgery, including neonatal patients. Our study shows a trend toward decreased SSI rates with NPIWV. However, our sample size was likely too small to reach statistical significance. These findings from a small retrospective study need to be confirmed in a larger, prospective trial.

54.17 Extent of Lymphadenectomy is Not Associated with Improved Survival in Esophageal Cancer

S. Mahoney1, P. Strassle2, M. Meyers1  1University of North Carolina at Chapel Hill, Department of Surgery, Chapel Hill, NC, USA 2University of North Carolina at Chapel Hill, School of Public Health, Department of Epidemiology, Chapel Hill, NC, USA

Introduction: The impact of lymph node (LN) dissection in esophageal cancer outcomes remains unclear.  We sought to examine trends in LN yield over time in patients undergoing curative resection for esophageal cancer and the relationship with survival.

Methods: All National Cancer Database patients >18 years old undergoing esophagectomy for adenocarcinoma or squamous cell carcinoma from 2004-2015 were included, except those with metastatic disease or palliative treatment. Bivariate analyses comparing demographics and cancer characteristics, stratified by LN yield, were conducted using chi-square and Wilcoxon-Mann-Whitney tests. Trends in LN yield over time were assessed using Poisson regression. Five-year survival differences were compared using Kaplan-Meier curves and multivariable Cox proportional hazards regression.
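
The Poisson trend test described above (modeling LN yield, a count, as a function of diagnosis year) can be sketched as follows; the file and column names are hypothetical placeholders, not the actual NCDB extract.

```python
# Sketch of a Poisson trend analysis: lymph node yield (a count) modeled as a
# function of diagnosis year; exp(coef for year) is the multiplicative change
# in expected LN yield per year. File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("ncdb_esophagectomy.csv")  # hypothetical file

trend = smf.glm("ln_examined ~ year", data=df,
                family=sm.families.Poisson()).fit(cov_type="HC0")
print(f"yearly change in LN yield: x{np.exp(trend.params['year']):.3f} "
      f"(p={trend.pvalues['year']:.4g})")
```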

Results:  20,588 patients were included. 71% received neoadjuvant therapy. Most (82%) had adenocarcinoma. Stage II (44%) and stage III (40%) predominated. Average LN yield increased over time from 11.7 to 16.4 (p<0.0001), as did the proportion of patients with 10-19 LN examined (32% to 45%) and >20 LN examined (14% to 28%). The average number of positive LN remained the same (1.4 vs. 1.2; p=0.21). Although crude survival was associated with increased LN yield (Table), adjusted survival showed no association with LN yield. Similarly, there was no association with survival when LN yield was treated as a linear variable (HR for any 5-LN increase 0.99, 95% CI 0.98, 1.01).

Conclusion:  Improvements in LN yield with esophagectomy have been seen at the population level over time. However, higher LN yield is not associated with improved adjusted survival in patients with resected esophageal cancer.

 

52.16 Nationwide trends in laparoscopic synchronous resection of colon cancer with liver metastasis

S. T. Lumpkin1, P. D. Strassle1,2, L. N. Purcell1, N. Lopez3, K. B. Stitzenberg1  1University of North Carolina at Chapel Hill, Surgery, Chapel Hill, NC, USA 2University of North Carolina at Chapel Hill, Epidemiology, Chapel Hill, NC, USA 3University of California San Diego, Surgery, San Diego, CA, USA

Introduction: Minimally invasive synchronous resection of primary colon cancer and metastatic liver lesions has been proven safe and effective. We hypothesized that the proportion of synchronous resections performed laparoscopically has increased.

Methods: Using the National Inpatient Sample (NIS) from 2009-2015, we identified all adult patients with colon cancer undergoing colon resection alone and those undergoing a synchronous colon and liver resection based on ICD-9 codes.  We compared the pace of laparoscopic uptake in the synchronous resection cohort to our control group, colon resections. All surgeries were classified as either laparoscopic or open. The yearly incidence of laparoscopic procedures was calculated using Poisson regression. Chi-square and Wilcoxon tests were used to compare patient and hospital characteristics.  

Results: Overall, 86,520 patients with colon cancer were identified; 55,766 underwent colon resection alone and 754 underwent synchronous resection. Wedge resections made up 50% of liver procedures. Laparoscopic procedures constituted 27,158 (49%) of the colon resections and 161 (21%) of the synchronous resections (p<0.0001). Laparoscopic procedures increased significantly between 2009 and 2015 for both colon resection (42% to 54%) and synchronous resection (11% to 32%) (p<0.0001 and p=0.006, respectively; Figure), although there was no significant difference in the pace of uptake of laparoscopy between groups (p=0.09). Robotics made up 3% of all operations, and robot use was similar between colon resections and synchronous resections (p=0.13). Among synchronous resections, patients undergoing laparoscopic and open procedures were similar with regard to age (p=0.26), sex (p=0.69), race/ethnicity (p=0.28), insurance status (p=0.52), median household income (p=0.30), Charlson Comorbidity Index (p=0.19), and hospital size (p=0.95). However, significant differences were seen across colectomy procedure (p=0.004), liver procedure (p=0.0001), and hospital region (p=0.04). Specifically, among synchronous resections a laparoscopic approach was more likely in patients undergoing a left hemicolectomy (29% vs. 17%), liver ablation (32% vs. 18%), and surgery in the West (30% vs. 19%). A laparoscopic approach was significantly less common among patients undergoing right hemicolectomy (19% vs. 25%, p=0.049). No difference was seen by teaching hospital status (21% vs. 24%, p=0.36).

Conclusions: Laparoscopic synchronous resection of colon and liver disease for colon cancer is becoming increasingly popular nationwide. The type of colon resection and liver procedure performed may guide a surgeon’s operative approach. There are also regional differences in practice patterns.

51.01 The Feasibility of Extracorporeal Membrane Oxygenation (ECMO) in Burn and Inhalation Injury Patients

T. D. Reid1, Y. Mikhaylov-Schrank1, C. Gaber1, P. D. Strassle1, R. Maine1, S. M. Higginson1, A. G. Charles1, C. Beckman1, B. A. Cairns1, L. Raff1  1University of North Carolina at Chapel Hill, Chapel Hill, NC, USA

Introduction:
Patients with burns and inhalation injury are at risk for acute respiratory distress syndrome (ARDS) given pulmonary damage, systemic cytokine release, and large-volume fluid resuscitation. As many as 86% of mechanically ventilated burn patients suffer from ARDS. Extracorporeal membrane oxygenation (ECMO) is a useful adjunct in patients with severe ARDS after failure of maximal ventilatory therapy. However, few studies have examined the utility of ECMO following burn and inhalation injury. We hypothesized that the use of ECMO in burn and inhalation injury patients is both safe and effective.

Methods:
This is a retrospective review of prospectively collected ECMO program data at the University of North Carolina. The study included all adult and pediatric patients with burns and/or inhalation injury and ARDS who underwent veno-venous (VV) or veno-arterial (VA) ECMO cannulation between November 2008 and October 2017. Baseline characteristics were collected. Primary outcomes were mortality on ECMO and 30-day mortality. Secondary outcomes included critical care and ECMO-related complications. Frequencies and percentages are presented for categorical data, and medians and interquartile ranges for continuous data.

Results:
Of the 21 patients in this study, 16 (76%) were male. Six (29%) patients had burns only, 3 (14%) had inhalation injury only, and 12 (57%) had both burns and inhalation injury. Median burn size was 28% of total body surface area. Patients had a median age of 48 years (IQR 26-55; range 2 to 72 years). Median time on ECMO was 116 hours, and 90% of cannulations were VV. Substance abuse was common in this population (33%). Eight (38%) patients required hemodialysis, which was performed via the ECMO circuit, and 12 (57%) patients were placed on a furosemide (Lasix) infusion. Tracheostomy was performed in 18 (86%) patients. One (5%) patient died while on ECMO, from cardiac causes. Thirty-day mortality was 19% (n=4) and 90-day mortality was 24% (n=5); the additional deaths were sepsis-related. Eight (38%) patients had an ECMO-related complication: 3 (14%) had minor bleeding, 3 (14%) had bleeding requiring transfusion of more than 2 units, 1 (5%) had a deep venous thrombosis at the cannula site, 1 (5%) had a malpositioned cannula, and 1 (5%) had an arrhythmia. Only two of the patients who died had a complication related to ECMO; both had bleeding requiring transfusion, but both died of sepsis unrelated to the bleeding.

Conclusion:
In this study, 30-day survival was 81% and 90-day survival was 76%. While 38% of patients had complications, the majority were minor and did not lead to morbidity or mortality. These figures are comparable to the current literature on ECMO unrelated to burns, which demonstrates survival of approximately 60-75%. ECMO in burn and inhalation injury patients appears to be safe and effective. Larger trials are needed to examine the use of ECMO in this population.
 

47.04 Survival Outcomes of Early-Stage Hormone Receptor Positive Breast Cancer in the Elderly

A. Nayyar1, K. K. Gallagher1, P. D. Strassle1, C. G. Moses1, K. P. McGuire2  1University of North Carolina at Chapel Hill, Department of Surgery, Chapel Hill, NC, USA 2Virginia Commonwealth University, Department of Surgery, Richmond, VA, USA

Introduction:
Women ≥70 years old form a significant proportion of patients affected by breast cancer (BC). Treatment decisions for this population are complicated by the presence of comorbidities, reduced tolerability of therapy, and limited enrollment in clinical trials. A growing body of evidence suggests equivalent outcomes in elderly patients with hormone receptor-positive, early-stage BC whether they receive primary endocrine therapy only or surgery with subsequent endocrine therapy. Whether these results are reproduced in the larger BC population outside of a clinical trial remains unclear.

Methods:
Women ≥70 years old who were diagnosed with early-stage invasive BC between January 2008 and December 2013, had tumor size T1 or T2 with minimal nodal involvement (N0 or N1), were estrogen and/or progesterone receptor positive, and started endocrine therapy within a year of diagnosis were identified using the Surveillance, Epidemiology, and End Results (SEER)-Medicare linked datasets. Endocrine therapy use was identified from outpatient prescription fills for anastrozole, exemestane, fulvestrant, letrozole, raloxifene, tamoxifen, and toremifene. Surgical intervention included either breast-conserving surgery or mastectomy. Trends in the use of primary endocrine therapy only were assessed using Poisson regression. Multivariable Cox proportional hazards regression was used to estimate the association between undergoing surgery within a year of diagnosis and 5-year all-cause mortality, after adjusting for patient demographics, comorbidities, and clinical cancer characteristics. Similar methods were used to assess 5-year cancer-specific mortality, with non-cancer mortality treated as a competing risk.
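
As an illustration of treating non-cancer death as a competing risk, the sketch below estimates the cumulative incidence of cancer-specific death with the Aalen-Johansen estimator. This shows only an unadjusted cumulative-incidence piece, not the adjusted regression the authors describe, and the file, column names, and event codes are hypothetical.

```python
# Sketch of cumulative incidence under a competing risk: cancer death is the
# event of interest (code 1), non-cancer death is a competing event (code 2),
# and alive/censored is 0. Unadjusted Aalen-Johansen estimate only; file,
# column names, and event codes are hypothetical placeholders.
import pandas as pd
from lifelines import AalenJohansenFitter

df = pd.read_csv("seer_medicare_bc.csv")  # hypothetical file
# event_code: 0 = censored, 1 = cancer death, 2 = death from other causes

ajf = AalenJohansenFitter()
ajf.fit(durations=df["followup_months"],
        event_observed=df["event_code"],
        event_of_interest=1)                  # cancer-specific death
print(ajf.cumulative_density_.tail())         # cumulative incidence of cancer death
```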

Results:
Overall, 8,968 women were included in the analysis; 8,146 (91%) received surgery with endocrine therapy and 832 (9%) received primary endocrine therapy alone. The proportion of women not receiving surgery remained consistent between 2008 and 2013, p=0.24. The 5-year mortality was 7% (n=660), and 21% of all deaths were due to cancer causes (n=140). After adjustment, 5-year mortality was lower among women undergoing surgery (HR 0.55, 95% CI 0.44, 0.67, p<0.0001) (Figure). Similar results were found when looking at 5-year cancer-specific mortality (HR 0.35, 95% CI 0.22, 0.56, p<0.0001).

Conclusion:
Elderly BC patients with early-stage, hormone receptor positive disease receiving primary surgical intervention plus endocrine therapy had improved survival compared to those receiving primary endocrine therapy alone. This study reflects the importance of surgical intervention for elderly BC patients and warrants further investigation to evaluate whether surgery may be omitted safely in subsets of elderly patients.

42.03 Perspectives and Priorities of Surgery Residency Applicants in Choosing a Training Program

P. Marcinkowski1, P. Strassle1, T. Sadiq1, M. Meyers1  1University of North Carolina at Chapel Hill, General Surgery, Chapel Hill, NC, USA

Introduction:
Applicants pursuing surgery residency have a number of variables to prioritize in selecting a training program. We sought to evaluate the importance of various criteria to applicants applying to surgery residency.

Methods:
An anonymous electronic survey was distributed to applicants who interviewed at a single surgery program over a six-year period (Match years 2013-2018). Respondents were asked to categorize the importance of various criteria in considering a training program on a 5-point scale (very important/above average/average/below average/unimportant). Fisher’s exact tests were used to assess whether the percentage of respondents considering each variable ‘more important’ varied across application year (categorized as 2013-2014, 2015-2016, and 2017-2018), sex, medical school region, or medical school type (public vs. private).  A p-value <0.05 was considered statistically significant. All analyses were performed using SAS 9.4 (SAS Inc., Cary, NC).

Results:
176 responses were received (35% response rate); 47% of respondents were female. 47% were from the Southeast region, followed by 20% from the Midwest, 19% Northeast, 7% Southwest, and 6% West. 40% attended private medical schools. All applicants applying in 2015-2018 ranked operative experience as very important/above average importance, versus 94% of applicants applying in 2013-2014 (p=0.04). Applicants applying in 2017-2018 ranked non-operative clinical experience as very important/above average importance 90.7% of the time, compared to 77.6% and 73.9% for 2013-2014 and 2015-2016 applicants, respectively (p=0.04). Applicants from the Northeast ranked research opportunities as very important/above average importance 96.9% of the time, compared to the other regions (West: 63.6%, Midwest: 73.5%, Southeast: 75.3%, Southwest: 83.3%) (p=0.02). Otherwise, there was no statistically significant variation between applicant demographics and the criteria applicants considered important in choosing a residency program. Overall, applicants rated resident attitude/relationships (91% very important), faculty attitude (80% very important), resident/faculty relationships (75% very important), and operative experience (89% very important) as the most important characteristics.

Conclusion:
Surgery residency applicants appear to place greatest importance on interpersonal interactions and operative experience over other training program/hospital characteristics. There was some variability depending on the year applied and the region that the applicant applied from, but in general applicants had similar preferences. This information may be helpful to applicants and programs alike as they navigate the application and match process.
 

39.06 Predicted End-Stage Renal Disease Risk in Living Kidney Donors at an Academic Medical Center

J. Jadi1, A. Nayyar2, C. Gaber3, P. D. Strassle3, D. Gerber2, A. Toledo2, P. Serrano2  1University of North Carolina at Chapel Hill, School of Medicine, Chapel Hill, NC, USA 2University of North Carolina at Chapel Hill, Division of Abdominal Transplant, Chapel Hill, NC, USA 3University of North Carolina at Chapel Hill, School of Public Health, Chapel Hill, NC, USA

Introduction:  Allogeneic kidney transplantation is the definitive treatment for patients with dialysis-dependent end-stage renal disease (ESRD). While past research has shown that the overall risk to healthy kidney donors is negligible, a growing body of evidence demonstrates that certain donor characteristics may be associated with an increased risk of developing ESRD after donation. The goal of this study was to assess how donor characteristics, including gender, race, smoking history, and BMI, affect the long-term risk of ESRD in kidney donors at an academic medical center.

Methods: Predicted pre-donation 15-year and lifetime risks of ESRD were calculated for kidney donors between 2006 and 2016 at the University of North Carolina Medical Center, using the recently released ESRD Risk Tool for Kidney Donor Candidates (http://www.transplantmodels.com/esrdrisk/). Student's t-tests were used to compare 15-year and lifetime predicted risk across sex, race, smoking status, and obesity status.
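
A minimal sketch of the subgroup comparisons described above (Student's t-tests on predicted lifetime risk); the file, column names, and group codings are hypothetical placeholders.

```python
# Sketch of Student's t-tests comparing predicted lifetime ESRD risk between
# donor subgroups. File, column names, and codings are hypothetical placeholders.
import pandas as pd
from scipy import stats

donors = pd.read_csv("kidney_donors.csv")  # hypothetical file

for group_col, labels in [("sex", ("M", "F")),
                          ("smoker", (1, 0)),
                          ("obese", (1, 0))]:
    a = donors.loc[donors[group_col] == labels[0], "lifetime_risk"].dropna()
    b = donors.loc[donors[group_col] == labels[1], "lifetime_risk"].dropna()
    t, p = stats.ttest_ind(a, b)
    print(f"{group_col}: {a.mean():.2f}% vs {b.mean():.2f}% (p={p:.4f})")
```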

Results: Overall, the 15-year and lifetime predicted risks of ESRD were 0.19% and 1.04%, respectively. Compared to females, males had a higher mean 15-year (0.29% vs. 0.14%, p<0.0001) and lifetime (1.60% vs. 0.72%, p<0.0001) predicted risk of ESRD. Black donors, compared to whites, also had a higher 15-year (0.45% vs. 0.14%, p<0.0001) and lifetime (2.38% vs. 0.76%, p<0.0001) predicted risk of ESRD. Compared to non-smokers, smokers had a higher 15-year (0.28% vs. 0.16%, p<0.0001) and lifetime (1.50% vs. 0.86%, p<0.001) predicted risk of ESRD. No significant increase in ESRD risk was seen among obese patients compared to non-obese patients (15-year risk: 0.24% vs. 0.18%, p=0.08, and lifetime risk: 1.21% vs. 0.99%, p=0.26).

Conclusion: The overall 15-year and lifetime predicted risks of ESRD in kidney donors were minimal (<1.5%). While overall predicted risks were low, certain characteristics were associated with higher risk: male gender, current smoking, and black race had a significant impact on the 15-year and lifetime predicted risk of ESRD in our population. This study highlights the applicability of the novel ESRD risk assessment tool as an aid to the preoperative evaluation of kidney donors. These characteristics should be considered as part of an individualized screening process to ensure that the benefit to the recipient is balanced against the risk to the donor.

 

36.07 The Efficacy of Trauma Transfers in a Resource Poor Setting

L. N. Purcell1, T. N. Reid1, C. Mabedi2, A. N. Charles1, R. N. Maine1  1University of North Carolina at Chapel Hill, General Surgery, Chapel Hill, NC, USA 2Kamuzu Central Hospital, Lilongwe, Malawi

Introduction: Trauma is a leading cause of morbidity and mortality worldwide, with the burden borne by low- and middle-income countries (LMICs). Important trauma principles are early triage, expedited care, and transfer of patients to appropriate higher levels of care. Inappropriate transfers (IT), or overtriage, tax overburdened hospitals in LMICs. Little data exist on the efficacy of trauma transfers in LMICs. We sought to determine the rate and characteristics of inappropriate trauma transfers in Malawi.

Methods: A retrospective analysis of prospectively collected data was performed at Kamuzu Central Hospital (KCH) in Lilongwe, Malawi. IT were defined as patients discharged alive from the emergency department or patients admitted for less than one day without undergoing surgery. Variables included demographics and injury severity and characteristics. Bivariate analysis, the Kruskal-Wallis test, t-tests, and logistic regression were used as appropriate.

Results: From February 2008 to July 2017, 120,573 trauma patients presented. Transferred patients constituted 17.0% (n=20,460); of these, 57.3% (n=11,725) were IT. Inappropriately transferred patients were younger (mean 21.9±17.2 yrs, CI: 21.6-22.2) than appropriately transferred patients (mean 26.3±19.8 yrs, CI: 25.8-26.7), p<0.001. IT occurred more often among women than men (60.5% vs. 56.0%, p<0.001). Primary extremity injuries were more often IT (n=6,975, 61.7%) compared to primary torso (n=1,764, 48.5%) or head injuries (n=2,862, 54.0%), p<0.001. IT (median 1 hr, IQR 1-1 hr) arrived at KCH faster than appropriate transfers (median 1 hr, IQR 1-2 hr), p=0.002. Fewer IT occurred at night (n=2,554, 46.6%) than during the day (n=9,141, 61.4%), p<0.001, and on weekends (n=2,653, 55.8%, p=0.02) versus weekdays (n=2,563, 55.8%). The injury mechanisms with the highest rates of IT were lacerations (n=320, 69.3%), animal bites (n=295, 70.7%), and falls (n=4,199, 64.1%). IT rates were lowest for motor vehicle collisions (n=3,098, 50.0%) and burns (n=429, 31.3%). In the logistic regression model, lacerations (OR 2.26, CI 1.63-3.13), animal bites (OR 1.97, CI 1.48-2.63), assault (OR 1.76, CI 1.54-2.00), falls (OR 1.25, CI 1.12-1.40), and female sex (OR 1.21, CI 1.10-1.32) had increased odds of IT, p<0.001. Night admission (OR 0.54, CI 0.49-0.59) and burn injuries (OR 0.44, CI 0.37-0.54) were protective against IT, p<0.001. Primary head (OR 1.34, CI 1.17-1.52) and extremity injuries (OR 1.86, CI 1.65-2.10) had increased odds of IT compared to torso injuries, p<0.001.

Conclusion: The majority of patients transferred to KCH for injury care are inappropriately transferred. The lack of clear transfer triage criteria and protocols contributes to this overtriage. Implementation of transfer criteria, trauma protocols, and inter-hospital clinician communication could mitigate the strain of IT in resource-limited settings.

31.07 Postoperative Length of Stay Following Colorectal Surgery Impacts Readmissions

X. L. Baldwin1, P. D. Strassle1, S. Lumpkin1, K. Stitzenberg1  1University of North Carolina at Chapel Hill, Chapel Hill, NC, USA

Introduction:
Enhanced recovery pathways have led to shorter lengths of stay (LOS) after colorectal surgery. There continues to be controversy about the relationship between LOS and readmission following colorectal surgery. The purpose of this study was to evaluate the association between LOS and readmissions in a nationally representative sample. We hypothesized that shorter LOS would increase readmission rate. 

Methods:
Hospitalizations of adult patients aged 18-85 years who underwent colon and/or rectal resection between January 2010 and August 2015 in the National Readmission Database were eligible for inclusion. Patients who were treated in December, were not residents of the state in which they underwent surgery, died, or had a LOS <1 day were excluded. Multivariable logistic regression was used to assess the effect of LOS on 30-day readmission, adjusting for patient demographics, comorbidities, hospital characteristics, and inpatient complications.
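
A minimal sketch of the multivariable logistic model described above, with LOS categorized as in the Results; the file, column names, and covariate list are hypothetical simplifications, not the actual NRD analysis.

```python
# Sketch of a multivariable logistic model for 30-day readmission by LOS
# category, adjusting for other covariates. File, column names, and covariates
# are hypothetical placeholders; LOS categories follow the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nrd_colorectal.csv")  # hypothetical file
df["los_cat"] = pd.cut(df["los"], bins=[0, 4, 8, 12, np.inf],
                       labels=["1-4", "5-8", "9-12", ">=13"])

model = smf.logit(
    "readmit_30d ~ C(los_cat, Treatment(reference='1-4')) + age + sex"
    " + charlson + complication + C(hospital_type)",
    data=df,
).fit(disp=0)
print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95% CIs
```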

Results:
We assessed 376,376 hospitalizations. Median LOS was 5 days (IQR 4-8), and 14% of patients (n=51,087) were readmitted within 30 days. As LOS increased, the incidence of readmission also increased (from 7% among patients with a 1-day LOS to 29% among patients with a LOS ≥20 days, p<0.0001; Figure 1). After adjustment for patient demographics, comorbidities, inpatient complications, and hospital characteristics, a 5-8 day LOS was associated with a 50% increase in the odds of 30-day readmission (OR 1.53, 95% CI 1.05, 1.14, p<0.0001), a 9-12 day LOS with a 100% increase (OR 2.06, 95% CI 1.99, 2.13, p<0.0001), and a LOS ≥13 days with an almost 150% increase (OR 2.45, 95% CI 2.36, 2.55, p<0.0001), compared to a 1-4 day LOS.

Conclusion:
Contrary to our hypothesis, we found that increased length of stay was associated with increased readmission rates following colorectal surgery. Shorter LOS was associated with decreased odds of 30-day readmission, regardless of a patient's pre-existing comorbidities and inpatient complications. Future studies should examine factors associated with prolonged hospitalization and identify possible interventions to decrease readmission in these patients.
 

28.03 Targeted Spatiotemporal Delivery of Peptide Amphiphile to the Vasculature

H. A. Kassam1, C. Gillis1, N. Tsihlis1, S. Stupp2, M. Kibbe1  1University of North Carolina at Chapel Hill, Surgery, Chapel Hill, NC, USA 2Northwestern University, Chicago, IL, USA

Introduction:   The aim of this study was to develop and evaluate a novel, systemically delivered, targeted therapy to prevent restenosis following all cardiovascular interventions. Increased fractalkine (CX3CL1) levels have been seen in atherosclerotic arteries; however, few studies have demonstrated its presence in injured vessels. Recently, our lab discovered that fractalkine levels are increased after arterial injury. Thus, for this study we identified a unique peptide sequence that binds fractalkine, with the goal of synthesizing a self-assembled peptide amphiphile (PA) nanofiber targeted to fractalkine after arterial injury. In addition, our goal was to determine the optimal dose of the fractalkine-targeted nanofiber and the duration of its binding to the site of arterial injury.

Methods:   The fractalkine-targeted PA (C16-VVAASFPELDLENFEYDDSAEA) nanofiber containing a fluorescent TAMRA tag was synthesized using solid-phase peptide synthesis and purified by reversed-phase high-pressure liquid chromatography (HPLC).  Nanofiber formation was assessed via transmission electron microscopy (TEM).  Controls included a PA without the binding sequence (backbone PA, C16-VVAAK), a lower charged fractalkine PA (C16-VVAASFPELDLQNFQYNNSAEA), and injury alone.  Male Sprague Dawley rats (250-300g) underwent the carotid artery balloon injury model followed by intravenous injection of the PA (0.5-5.0 mg) 24 hours after injury.  Arteries were harvested after 5 hours of circulation time (n=3/group).  To assess binding duration, balloon-injured rats received intravenous injections of fractalkine PA (5 mg) 24 hours after injury and arteries were harvested after 5, 24, 48, and 72 hours of circulation time (n=3/group).  Harvested carotid arteries were frozen, cross-sectioned at 5 microns thick, and imaged for fluorescence.

Results:  HPLC confirmed PAs were >95% pure, and TEM showed PAs formed nanofibers.  Animals whose carotid arteries were injured and underwent injection of the fractalkine-targeted PA showed fluorescence at the site of injury in the area of the arterial media.  In contrast, no binding was observed with the backbone PA or lower charged fractalkine PA.  No significant fluorescent signal was detected in the uninjured arteries.  We observed binding of fractalkine-targeted PA at doses as low as 0.5 mg.  Furthermore, PA binding was seen at the injured site for up to 48 hours after injection. 

Conclusion:  We have demonstrated specific binding of our fractalkine-targeted PA nanofiber to the site of arterial injury compared to control PA nanofibers, with binding duration observed up to 48 hours after injection and at doses as low as 0.5 mg.  This research serves as the foundation upon which a targeted, drug-eluting therapy to prevent restenosis, and possibly prevent atherosclerosis, will be evaluated.

 

27.06 Severe Injury Precipitates Prolonged Dysregulation of Circulating von Willebrand Factor and ADAMTS13

W. E. Plautz1, M. R. Dyer2, S. Haldeman2, M. Rollins-Raval3, J. S. Raval3, J. L. Sperry2, B. S. Zuckerbraun2, M. D. Neal2  1University of Pittsburgh, School of Medicine, Pittsburgh, PA, USA 2University of Pittsburgh, Department of Surgery, Pittsburgh, PA, USA 3University of North Carolina at Chapel Hill, Department of Pathology, Chapel Hill, NC, USA

Introduction: Increases in plasma von Willebrand Factor (vWF) levels, accompanied by decreases in its respective metalloprotease ADAMTS13, have been demonstrated in diseases of microvascular injury. We hypothesized that following severe trauma, a burst of ultra-large vWF is released into the bloodstream by damaged endothelium, resulting in a dysregulation of the circulating vWF multimeric composition. We further hypothesized that impaired ADAMTS13 activity would be insufficient to cleave the burst of ultra-large vWF, facilitating organ injury.

Methods: 37 severe trauma patients from a randomized controlled trial (RCT) of pre-hospital plasma were analyzed for plasma vWF antigen levels at 0 and 24 hours after admission. The circulating vWF multimeric composition at both time points was determined by vertical agarose gel electrophoresis, followed by quantitative structural analysis of vWF multimeric length. ADAMTS13 antigen, activity, and antibody levels were obtained by ELISA and FRETS-73 analyses, and the circulating vWF multimeric composition at both 0 and 24 hours was evaluated for a dependence on ADAMTS13 activity. Finally, multivariate analyses were performed with data abstracted from the RCT database and electronic medical records to identify additional associations.

Results: vWF levels were increased in severe trauma patients compared to healthy controls at presentation (189% (110-263) vs. 95% (74-120)) and remained elevated through 24 hours (213% (146-257) vs. 132% (57-160)). Ultra-large vWF forms were elevated at both 0 and 24 hours compared to pooled normal plasma (10.0% (8.9-14.3) and 11.3% (9.1-21.2), respectively, vs. 0.6%), while the proportion of small multimers concomitantly decreased. The largest vWF forms within trauma patient plasma circulated at 33±4 dimers in length vs. 18±1 dimers in pooled normal plasma and were sustained through 24 hours. Trauma patient ADAMTS13 activity was decreased at 0 hours (66% (47-86) vs. 100% (98-100)) and at 24 hours (72.5% (56-87.3) vs. 103% (103-103)) compared to healthy patients, with antigen levels showing congruent trends. Furthermore, within the trauma population, the circulating vWF composition demonstrated significant plasticity in its small multimeric forms at 24 hours that was dependent upon ADAMTS13 activity (decreased ADAMTS13 activity: 20.4% (15.0-22.7) vs. normal activity: 25.8% (22.7-35.2)). ADAMTS13 activity independently predicted the development of coagulopathy, correlating with presentation INR (ρ=-0.63), activated clotting time on thromboelastography (TEG) (ρ=-0.36), and TEG maximum amplitude (ρ=0.36). ADAMTS13 activity also correlated closely with injury severity score (ISS) (ρ=-0.34) and blood product transfusion (ρ=-0.45).

Conclusion: Severe traumatic injury dysregulates ADAMTS13 and its target, vWF, contributing to a distorted vWF multimeric profile, persistently altered hemostasis, and the development of coagulopathy.

23.05 Tissue Factor-Targeted Peptide Amphiphile as an Injectable Therapy for Hemorrhage

M. Klein1, H. Kassam1, M. Karver2, M. Struble2, L. Palmer2, N. Tsihlis1, S. Stupp2, B. Gavitt4, T. Pritts3, M. Kibbe1  1University of North Carolina at Chapel Hill, General Surgery, Chapel Hill, NC, USA 2Northwestern University, Chicago, IL, USA 3University of Cincinnati, Cincinnati, OH, USA 4United States Air Force School of Aerospace Medicine, Cincinnati, OH, USA

Introduction: Non-compressible torso hemorrhage is a leading cause of preventable death in civilian and battlefield trauma. We sought to develop a tissue factor (TF)-targeted nanofiber that can be given intravenously to slow hemorrhage until definitive bleeding control can be obtained.  TF was chosen because it is only exposed to the intravascular space upon vessel disruption. Peptide amphiphile (PA) monomers that self-assemble into nanofibers were chosen as the delivery vehicle.  Here, we systematically analyzed the binding interface of TF to factor VII to generate a TF-binding sequence.  We hypothesize that TF-targeted nanofibers will localize to the site of bleeding in a liver hemorrhage model.

Methods:  PA monomers were synthesized by solid-phase peptide synthesis, purified by high pressure liquid chromatography (HPLC), and characterized by HPLC-mass spectrometry (HPLC-MS).  A TF-binding sequence SFEEARE (SFE) was added to the PA backbone.  SFE-PAs (75% by weight) were co-assembled with PA backbone (20%) and a fluorescently labeled TAMRA PA (5%) to create TF-targeted nanofibers.  Non-targeted PA nanofibers served as controls.  Nanofiber formation was assessed via conventional transmission electron microscopy (TEM) and structure by circular dichroism (CD) spectroscopy.  Male Sprague Dawley rats (250-290g) underwent a liver punch hemorrhage model in which the liver was exposed via a midline laparotomy, followed by tail vein injection of the nanofiber (2.5mg).  A 12mm biopsy punch was used to injure the left lateral lobe of the liver.  Shed blood was collected with pre-weighed gauze for 30 minutes and expressed as percent of total blood volume.  Data are presented as mean±SEM and analyzed by ANOVA.

Results: PAs were >95% pure, as shown by HPLC-MS. Co-assemblies of PAs formed nanofibers, as shown by TEM, and displayed the β-sheet signal characteristic of PA nanofibers on CD spectroscopy. Fluorescence microscopy demonstrated binding of the injected SFE-PA nanofiber to the site of liver injury, with both uninjured liver and control nanofibers showing minimal fluorescence. Somewhat surprisingly, given that there was no therapeutic molecule on the PA nanofiber, total blood loss in rats that received the SFE-PA nanofiber was 36% lower than in sham rats (14.5±0.95% vs. 22.8±1.1%, n=6, P<0.05). Blood loss in rats injected with the non-targeted PA nanofiber was similar to that of sham rats (21.59±2.8%, n=6).

Conclusion: We have successfully synthesized, purified, and characterized a novel PA nanofiber that targets sites of active bleeding in a rat model of hemorrhage.  Not only did our injectable TF-targeted PA nanofiber localize to the site of injury, it appears to decrease blood loss in the setting of abdominal hemorrhage. Incorporation of a therapeutic agent will make this promising technology even more effective in treatment of non-compressible torso hemorrhage.