45.15 Tunneled Central Venous Catheters in Pediatric Intestinal Failure Patients: A Single Center Review

R. F. Martin1, K. T. Anderson1,2, M. A. Bartz-Kurycki1,2, G. M. Garwood1, S. N. Wythe1, D. N. Supak1, R. Gutierrez1, M. T. Austin1,2, A. L. Kawaguchi1,2, A. L. Speer1,2, E. Imseis1,2, K. P. Lally1,2, K. Tsao1,2  1McGovern Medical School, University Of Texas Health Sciences Center At Houston,Houston, TX, USA 2Children’s Memorial Hermann Hospital,Houston, TX, USA

Introduction:  Children with intestinal failure (IF) often require a tunneled central venous catheter (CVC) for parenteral nutrition. The purpose of this study was to characterize complications after CVC placement and contributors to line loss in IF pediatric patients.

Methods:  A retrospective chart review was conducted of pediatric (<18 years) IF patients who had a silicone tunneled CVC newly inserted or exchanged over a wire from 2012-2016. Patient demographics, catheter insertion service (surgery vs. interventional radiology), procedure type (new vs. exchange), vessel, and CVC-related complications were evaluated. Complications included dislodgement, infection, break or mechanical malfunction, and occlusion. An ethanol lock protocol for silicone CVCs in IF patients was instituted in January 2012. Descriptive statistics, t-tests, ANOVA, chi-square tests, and linear regression were used for analysis. 

Results: Twenty-nine IF patients with tunneled CVCs were identified, with 191 lines and 17,598 line days. Patients had a median age of 19.7 months (IQR 8.7-40.8) at the time of line insertion and a median of 5 catheters (IQR 2-9). Necrotizing enterocolitis was the most common etiology of IF (59%). There were 13.4 complication events per 1,000 line days. Line breaks were the most common complication (4.7 events/1,000 line days), followed by occlusion (3.4 events/1,000 line days), infection (3.0 events/1,000 line days), and dislodgement (2.2 events/1,000 line days). Median catheter life was 54 days (IQR 24-140). Line lifetime did not vary by insertion service (p=0.33), vessel (p=0.82), or procedure type (new vs. exchanged, p=0.08). Younger age was associated with shorter line life (p=0.04). Reasons for line removal included dislodgement (21%), infection (23%), occlusion (21%), line breaks/malfunction (31%), and other reasons (5%). On multinomial regression adjusting for age and procedure type (new vs. exchanged), dislodgement was associated with newly placed lines (RR 6.9, 95% CI 2.2-21.7); dislodgement was the cause of removal for 45% of new lines but only 11.5% of exchanged lines. Accounting for procedure type and cause of removal, age was not independently associated with catheter life (p=0.16).
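As a quick illustration of the rate arithmetic above (the event count below is back-calculated from the reported rate, so it is an approximation rather than the study's raw data):

```python
LINE_DAYS = 17_598  # total catheter (line) days reported for the cohort

def rate_per_1000(events: int, line_days: int = LINE_DAYS) -> float:
    """Complication events per 1,000 line days."""
    return events / line_days * 1000

# Roughly 83 line breaks over 17,598 line days reproduces the
# reported 4.7 events/1,000 line days:
print(round(rate_per_1000(83), 1))  # 4.7
```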

Conclusion:
Pediatric IF patients depend on tunneled central venous catheters, which have frequent complications. In this cohort, catheter dislodgement was an unexpectedly common contributor to complications and loss of access, particularly in newly placed lines. Opportunities for simple interventions, such as closer attention to securing sutures and dressing application, should be investigated to mitigate these preventable complications.

45.12 Hospital-Acquired Aspiration: Risk Factors for Mortality

A. L. Lubitz1, T. A. Santora1, A. Pathak1, J. A. Shinefeld2, A. P. Johnson3, A. J. Goldberg1,2, H. A. Pitt2  1Temple University,Department Of Surgery, Lewis Katz School Of Medicine,Philadelphia, PA, USA 2Temple University,Temple University Health System,Philadelphia, PA, USA 3Thomas Jefferson University,Department Of Surgery, Sidney Kimmel Medical College,Philadelphia, PA, USA

Introduction: Hospital-acquired aspiration is an uncommon, but potentially lethal, condition. A recent retrospective analysis from our institution suggests that patients with fatal aspiration are a diverse group. Some common features include advanced age, male gender, and neurologic impairment. However, the clinically important characteristics of patients who aspirate and are at greatest risk for dying remain elusive. Therefore, the aim of this study was to determine the risk factors for hospital-acquired aspiration-related mortality.

Methods: Over a three-year period from 2014 to 2016, patients who experienced a significant aspiration event, verified with coded Vizient data, were included in the data set. Patients who presented with aspiration on admission were excluded. The 100% mortality review process at our institution was utilized to ascertain whether the aspiration event was a major factor in the patient’s demise. Hospital records were abstracted to determine which patient, clinical, and hospital-related factors led to a significant aspiration. The aspiration patients who died were compared to the aspirators who lived. Variables identified as significant (p<0.07) on univariate analysis were entered into a multivariable regression model to determine the independent risk factors for aspiration-related mortality.

Results: Of the 276 aspiration patients, 92 (33%) died over the three-year study period. For all patients, 74% were 55 years or older; 53% had received anesthesia; 35% had diabetes; 34% were prior non-current smokers; 32% were Caucasian; 20% were considered high risk after a speech and swallow evaluation; 19% had COPD; 18% had impaired gastrointestinal motility; 16% had received anticholinergic medications within 24 hours prior to their event; 14% had a pulmonary diagnosis on admission; and 12% had low magnesium levels (Table). Only 31% of the patients were in an ICU at the time of the aspiration, but 41% had been in the hospital for more than a week when the aspiration event occurred. Each of these twelve risk factors was entered into the multivariable analysis (Table). Independent risk factors for fatal aspiration were prior non-current smoking (OR=2.18), impaired gastrointestinal motility (OR=2.17), and hospitalization greater than one week (OR=2.43) (Table).
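For readers unfamiliar with the odds-ratio arithmetic underlying these estimates, a minimal unadjusted sketch follows; the 2x2 counts are hypothetical, and the study's ORs (2.18, 2.17, 2.43) are adjusted estimates from multivariable regression, which this simple calculation does not replicate.

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Unadjusted OR from a 2x2 table:
    a = exposed deaths,   b = exposed survivors,
    c = unexposed deaths, d = unexposed survivors."""
    return (a * d) / (b * c)

# e.g., if 40 of 80 exposed patients died vs. 52 of 196 unexposed:
print(round(odds_ratio(40, 40, 52, 144), 2))  # 2.77
```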

Conclusion: Fatal aspiration is an underrecognized hospital-acquired condition. Patients at greatest risk of dying after an aspiration event are prior non-current smokers and those with impaired gastrointestinal motility and/or a prolonged hospitalization. Hospital personnel should use this information to identify high-risk patients and implement strategies to prevent fatal hospital-acquired aspiration.

 

45.13 Palliative Care Consultation is Underutilized in Critically Ill Surgical Patients

M. C. Turner1, B. C. Evans3, C. Sommer1, L. Pickett1, A. Galanos2  1Duke University Medical Center,Surgery,Durham, NC, USA 2Duke University Medical Center,Medicine,Durham, NC, USA 3Duke University Medical Center,School Of Medicine,Durham, NC, USA

Introduction: The American College of Surgeons (ACS) recommends that surgeons collaborate with palliative care specialists for surgical patients with poor prognosis to discuss symptom management, goals of care, and end-of-life decision-making. However, contemporary practice patterns of palliative care consultation for surgical patients are poorly defined. We aimed to describe the use of palliative care consultation for patients admitted to surgical services who died in the hospital during their index admission. We hypothesized that palliative care consultation is insufficiently used in these patients at our institution.

Methods:
The Duke Enterprise Data Unified Content Explorer (DEDUCE) was queried for patients admitted to general surgery services who died in the hospital between 2014 and 2016. The primary endpoint, palliative care consultation, was obtained from chart review. Secondary measures included admitting service, operative vs. nonoperative status, length of stay, total time spent in consultation, days from consultation to death, and agreement on and execution of a care plan. This project was granted exemption from the IRB as a quality improvement initiative. 

Results: Of the 105 patients identified, 6 died on the day of admission, and 39 (37%) received a palliative care consultation. Patients who received consultation were more often older, white, insured, and admitted to the vascular service. Among patients who received consultation, the consulting team spent a median of 60 minutes (interquartile range (IQR): 30-110) with the patient/family. Patients who received consultation had a median length of stay of 18 days (IQR: 9-27), compared with 4 days (IQR: 1-14) for those who did not. The median time between palliative consult and death was 3 days (IQR: 1-8). Goals of care was the indication for consultation in 62.5% of patients. The plan proposed by the consultants was congruent with the primary team's in 66.7% of cases. 

Conclusion: Palliative care consultation was underutilized in surgical patients who died in the hospital at our institution. Identification of barriers to consultation and promotion of the benefits of palliative care among surgical teams is warranted. Further investigation into family satisfaction and cost of care will be important in future prospective studies.

 

45.10 Identification of Social Determinants of Health Leads to Referrals to Community Resources

M. J. Adair2, R. Schimmoeller1, L. Hammerling1, R. Oostra1, S. F. Markowiak2, C. Das2, F. C. Brunicardi1,2  1ProMedica Health System,Department Of Medicine,Toledo, OH, USA 2University Of Toledo,Department Of Surgery,Toledo, OH, USA

Introduction: Social determinants of health (SDOH) are the circumstances in which people live and the systems put in place to deal with illness. Collectively, they contribute to the social patterning of health, disease, and illness. Increased interest in population health and a growing body of scientific evidence have driven an increasing appreciation for the role social factors play in determining health outcomes. The objective of this project was to evaluate SDOH survey implementation, the efficacy of the screening questions, and whether adequate community resources were available to meet referral needs.

Methods:  Data from surveys collected from 1/2017-8/2017 focused on ten SDOH risk domains: food insecurity, housing, education, transportation, financial strain, social support, behavioral health, childcare, domestic violence, and employment. Questionnaires were distributed to 558 patients in the ProMedica Health System. Quantitative, categorical, and nominal data were collected describing the occurrence of SDOH risk domains across various zip codes in Toledo. Scoring of motivational questions assessed patients' willingness to 1) participate, 2) change behaviors that negatively affect health, and 3) accept referrals to community resources. Home care coordinators and social workers acted as navigators to manage referrals and coordinate interventions with community resources.

Results:  Of 558 patients, 384 agreed to be screened and completed screenings consisting of clinically validated questions; 174 patients declined. Of patients screened, 54% had positive SDOH risks; their needs were identified, and they elected to receive referrals to appropriate community services. The top three risk domains were financial strain (67%), training/employment (56%), and food insecurity (55%); 39% of those screened had needs in four or more domains. A total of 363 (91%) positively screened domain needs were either in the process of being resolved (47%) or confirmed resolved (44%). To date, 355 patient community resource referrals have been made as a result of SDOH screening (43% to a community food resource, 20% to a financial strain resource, 10% to a utilities resource, and 9% to a training or employment resource).

Conclusion: This study assessed the occurrence of SDOH risk domains, as well as the prevalence of need, the appropriate infrastructure, and the availability of community resources to address those needs. The majority of patients surveyed agreed to be screened and participate. Over half of those screened had unmet SDOH risks, and referrals were processed for 91% of positive domains. As a result, the screening tool has been embedded in the clinical workflow and the electronic health record, where it will be used to screen all ProMedica Health System patients and to develop an evidence-based, continuous evaluation system to address SDOH risk domains and appropriate referrals in the greater Toledo community.

 

45.11 Qualitative Analysis of Clinical Decompensation in the Surgical Patient: Perceptions of Nurses and Physicians

C. R. Horwood1, M. Rayo1, M. Fitzgerald1, S. D. Moffatt-Bruce1  1Ohio State University,Columbus, OH, USA

Introduction: There are multiple early warning signals that can predict clinical decompensation. While these variables are assessed by all healthcare providers, there is little knowledge as to how different health care providers perceive and thereby appreciate clinical decompensation. It is also unclear how treatment is changed based on these perceptions. The aim of this study is to qualitatively assess how nurses, surgical residents, and attending surgeons perceive early warning signs that predict clinical decompensation, escalation of care, and clinical acuity in surgical patients.

Methods: Ethnographic interviews focused on patient decompensation and stability were performed on a surgical floor over three weeks in July and August of 2017. Thirteen nurses, 5 surgical residents, and 2 attending surgeons with direct involvement in patient care were interviewed, recorded, and transcribed. Constant comparative analysis was used to analyze and draw conclusions from the interview data.

Results: Healthcare providers were similar in the data they used to determine stability and decompensation: primarily vital signs, intake/output, physical exam, and lab values. Overall, providers judged decompensation and stability from trends, rather than thresholds, in the data. Continuity of care, as well as an established intent to discharge, resulted in less frequent monitoring, decreased a patient's perceived acuity, and made practitioners less likely to interpret clinical changes as warning signs of decompensation. Differences were seen primarily in how physicians versus nurses perceived stability: physicians were more focused on stability during the workup of a diagnosis, while nursing staff had a lower threshold to escalate care during this time. Furthermore, the amount of communication needed to relay decompensation differed between levels of providers. Interestingly, among both physicians and nurses there was no correlation between years of experience and frequency of monitoring once a patient was deemed unstable. There was, however, a difference in the methods nurses and doctors used to monitor patients, and residents undertook more interventions than nurses prior to escalation of care.

Conclusion:

Our study revealed that all healthcare providers agree on similar data values for decompensation and on the importance of trends. However, differences were found in determining a patient's acuity and in the actual escalation of care. The perception of stability resulted in decreased monitoring overall, which likely increased efficiency but also increased the risk of delayed recognition of more subtle trends of decompensation.

45.07 The Effect of Author Gender on Sex Bias in Surgical Research

N. Xiao1, N. Mansukhani1, D. Fregolente2, M. Kibbe1,3  1Northwestern University,Department Of Surgery,Chicago, IL, USA 2Northwestern University,Department Of Chemical And Biological Engineering,Chicago, IL, USA 3University Of North Carolina At Chapel Hill,Department Of Surgery,Chapel Hill, NC, USA

Introduction:
Previous studies have demonstrated that sex bias exists in surgical research: females are underrepresented both as research subjects and as investigators. We aimed to investigate the effect of author gender on sex bias in surgical research and to explore whether investigators benefit from performing sex-inclusive research. We hypothesized that author gender impacts sex bias.

Methods:
Data were abstracted from 1,921 original, peer-reviewed articles published from 01/01/11-12/31/12 in 5 general-interest surgery journals. Excluded were articles that pertained to a sex-specific disease, did not report the number of subjects, or contained gender ambiguous author names. Abstracted data included gender of the first and last author, number of female and male subjects included in each study, surgical specialty, and number of citations received per article. Quantification of sex bias was performed by examining the inclusion of male and female subjects and sex-matching of included subjects. Further analysis of the presence of sex-based reporting of data, sex-based statistical analysis of data, and sex-based discussion of the data was included. 

Results:
Of the 1,802 articles included in this study, 2,791 (77.4%) of the first and last authors were male: 70.3% of first authors and 84.6% of last authors. The prevalence of male authors was consistent across all five journals and between clinical and basic science research (p=NS). Investigations in breast, endocrine, and surgical education were conducted by more female investigators than other specialties (p<0.05). Female authors recruited a higher median number of female subjects than their male counterparts (p=0.01) but sex-matched the included subjects less frequently. There were no differences between male and female authors in sex-based reporting, sex-based statistical analysis, or sex-based discussion of the data, nor in the number of citations received. However, studies that performed sex-based reporting yielded 2.8 more citations (95% CI 1.2-4.4, p<0.01), studies that performed sex-based statistical analysis yielded 3.5 more citations (95% CI 1.8-5.1, p<0.01), and studies containing a sex-based discussion of the data yielded 2.6 more citations (95% CI 0.7-4.5, p<0.01) than studies that did not report, analyze, or discuss data by sex. Articles with a higher percentage of sex matching of subjects also received more citations, with an increase of 1 citation per 4.8% (95% CI 2.0-7.7%, p<0.01) increase in sex matching.

Conclusion:

Sex bias in surgical research is prevalent among both male and female authors. However, female authors included proportionally more female subjects in their studies than male authors did. Lastly, studies that addressed sex bias received significantly more citations. 

 

45.08 Black Patients have Higher Rates of Readmission following General Surgical Operations

S. E. Roberts1, C. J. Wirtalla1, P. Dowzicky1, R. Hoffman1, C. Aarons1, R. R. Kelz1  1Perelman School Of Medicine At University Of Pennsylvania,Department Of Surgery,Philadelphia, PA, USA

Background:

Patient and provider factors contribute to known racial disparities in length of stay (LOS) and readmission rates (RR). However, little is known about the role of LOS in the RR of black patients. To gain a better understanding of the association between race and readmission, we examined the odds of readmission for black patients as a function of discharge efficiency.

 

Methods:

Discharge claims from California and New York were used to identify white and black patients undergoing a general surgical operation (2010-2011). Discharge efficiency (DE) was defined at the patient level based on the distribution of LOS at the treating hospital for all patients undergoing the same operation and approach (laparoscopic vs. open). Early discharge (ED) was assigned for LOS below the 25th percentile, routine discharge (RD) for LOS in the 25th-75th percentile, and late discharge (LD) for LOS above the 75th percentile. Multivariable mixed-effects logistic regression was used to examine the association between patient race and RR, controlling for potential confounders including DE and hospital-level clustering. The analysis was stratified by operation category and DE. Bonferroni correction was applied for multiplicity. 
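The percentile-based DE assignment can be sketched as follows; the function name and the example LOS values are illustrative assumptions, not drawn from the study's claims data.

```python
from statistics import quantiles

def classify_discharge(los_days, stratum_los):
    """Classify one patient's LOS against the quartiles of all patients
    in the same hospital/operation/approach stratum."""
    q25, _, q75 = quantiles(stratum_los, n=4)  # 25th, 50th, 75th percentiles
    if los_days < q25:
        return "ED"   # early discharge: LOS below 25th percentile
    if los_days <= q75:
        return "RD"   # routine discharge: 25th-75th percentile
    return "LD"       # late discharge: LOS above 75th percentile

stratum = [2, 3, 3, 4, 5, 6, 8, 12]  # example LOS (days) for one stratum
print(classify_discharge(2, stratum), classify_discharge(5, stratum),
      classify_discharge(10, stratum))  # ED RD LD
```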

 

Results:

Among 350,019 patients included in the study, 44,156 (12.6%) were black. While the majority of patients were RD, a greater proportion of black patients were LD compared with white patients, and a greater proportion of white patients were ED (black: 8.1% ED, 48.6% RD, 43.3% LD vs. white: 9.3% ED, 51.5% RD, 39.2% LD; p<0.001). The unadjusted RR following general surgery operations was higher for white patients than for black patients (W: 4.9% vs. B: 4.1%; p<0.001). The adjusted odds of readmission were greater for black patients when controlling for potential confounders including DE and hospital-level clustering (OR: 1.11; 95% CI: 1.04-1.18). When the adjusted odds of readmission were examined by DE group, the disparity was most pronounced among RD patients (OR: 1.13; 95% CI: 1.03-1.23). By operation category, the odds were greatest for black patients who underwent complex operations (OR: 1.22; 95% CI: 1.04-1.43).

 

Conclusions:

Black patients are significantly more likely than white patients to be readmitted to the hospital following general surgical operations, regardless of the hospital in which they receive care. The risk is most pronounced among patients who undergo complex operations, and the disparity is greatest for RD black patients. This suggests that interventions targeted at RD black patients may help ameliorate this racial disparity in surgical care.

45.09 Hospital Mortality Surveillance in a Tertiary Hospital in Rwanda: "CHUK"

F. Byiringiro1, J. Byiringiro1  1University Of Rwanda,Department Of Surgery,Kigali, Rwanda

Introduction: Hospital mortality surveillance is a tool for reporting hospital deaths. Trends, causes, and predictive risk factors are well understood only where reliable tools accurately report hospital mortality data and stimulate regular discussion of those data.

Methods: In a retrospective descriptive study, we collected 404 medical records of patients who died at CHUK during an 8-month period (January 1 to August 31, 2014). Descriptive statistics and baseline characteristics were used for analysis, with Student's t-tests, chi-square tests, non-parametric tests, and time-to-event analysis (Kaplan-Meier and regression) where appropriate. EpiData and STATA 12 were used for data management.

Results: We found only 404 medical records of patients who died while admitted to the hospital; 39.6% were female and 60.15% male. Admission diagnoses included trauma (19.36%), cerebrovascular accidents (11.39%), cardiovascular diseases (6.94%), intestinal obstruction (4.73%), shock (3.82%), multiorgan failure (3.47%), and coma of unknown cause (2.48%). Documented causes of death were road traffic accidents (2.25%), stroke (1.5%), hematemesis (1.2%), severe TBI (1.2%), various other causes (16.3%), unclear causes (18.7%), and unspecified diseases (50.87%). On admission, 61.9% of patients were unstable and critically ill, 26.46% stable but critically ill, 10.5% stable but not dischargeable, and 1.17% stable and dischargeable; overall, 88.3% were critically ill. Death occurred within the first 6 h, 12 h, 24 h, and 5 days in 19%, 6%, 2.8%, and 45.7% of cases, respectively. The association between critical status and imminent death was OR 3.16 (p=0.001) within 6 h, OR 1.78 (p=0.283) within 12 h, OR 1.66 (p=0.460) within 24 h, and OR 0.68 (p=0.147) within 5 days. The working diagnosis and cause of death were incoherent in 77.98% of records, and early warning score data were missing in 36.4%. The hospital structure and environment explain the correlations and associations detected among factors related directly or indirectly to hospital deaths. Imminent death within the first 6 h was significantly related to the critical presentation of the deceased patients (p<0.001). In short, the shorter the hospital stay, the greater the contribution of patient-related factors to death; the longer the stay, the greater the contribution of hospital-structure and working-environment factors.

Conclusion: Hospitals have different ways and tools of reporting mortality data, and the most reliable tools are hospital mortality registries. In hospitals where such tools do not exist, reporting mortality remains a challenge, given the constant search for accurate data on deaths and the sensitivity such data carry. This survey clearly states the challenges behind the inaccuracy of hospital mortality data, and the study proposes long-term solutions to understand and decrease the current hospital mortality rate.

 

45.05 Kyphoplasty: Height Restoration & Injected Cement Volume Not Associated with Increased Quality of Life

D. M. Self1, J. Amburgy2, J. Mooney2, M. R. Chambers2  1University Of Alabama at Birmingham,School Of Medicine,Birmingham, AL, USA 2University Of Alabama at Birmingham,Department Of Neurosurgery,Birmingham, AL, USA

 

Introduction:

Kyphoplasty is a minimally invasive surgery developed to restore height and reduce pain associated with vertebral compression fractures (VCFs). However, there are minimal data on the association of vertebral height restoration or injected cement volume with patient outcomes. The objective of this analysis was to determine whether height restoration or injected cement volume following kyphoplasty is associated with improvements in pain, disability, and quality of life.

Methods:

Fifty-nine Medicare-eligible patients with 1 to 3 painful VCFs between T5 and L5 due to osteoporosis or cancer underwent kyphoplasty. Anterior, middle, and posterior heights of the vertebral bodies were measured pre- and post-operatively using computerized lateral thoracic and/or lumbar x-rays. The total volume of bone cement injected (left-side + right-side volumes) was recorded for each patient. Additionally, patient outcomes on the EQ5D, VAS, and ODI were compared pre- and post-operatively. Pearson correlations and linear regression models were derived for the association of total cement volume with patient outcomes. 

Results:

For VAS, ODI, and EQ5D improvements, neither Pearson correlations (r=0.042, 0.167, and 0.091, respectively) nor multiple linear regression models (R²=0.002, 0.029, and 0.023, respectively) revealed a correlation or association with total cement volume. Likewise, neither Pearson correlations (r coefficients ranging from 0.001-0.152) nor linear regression models (R² values ranging from 0.0002-0.1133) revealed any correlation or association between anterior, middle, or posterior vertebral body height improvements and VAS, ODI, or EQ5D improvements. 

Conclusion:

This is the largest known study to assess associations of vertebral body height improvements and cement volumes with patient outcomes. Most patients improved regardless of the vertebral height improvements or cement volumes injected.

 

45.06 Do Surgeon Demographics and Surgical Specialty Drive Patient Experience Scores?

K. E. Engelhardt1,2, R. S. Matulewicz1, J. O. DeLancey1, C. Quinn1, L. Kreutzer1, K. Y. Bilimoria1  1Northwestern University,Chicago, IL, USA 2Medical University Of South Carolina,Charleston, SC, USA

Introduction:  Optimizing the patient experience has become a focus for hospitals, physicians, and other healthcare providers in recent years. The Centers for Medicare and Medicaid Services publicly reports patient-reported "likelihood to recommend" (LTR) for group practices and may report LTR for individual surgeons in the future. However, surgeon-level factors (e.g., age, sex, training, and specialty) may influence LTR scores. The objective of this study was to assess the relationship between surgeon factors and surgeon-specific LTR scores.

Methods:  Patient experience survey data were analyzed from a common, third party, nationally-available survey for all surgeons at a single adult academic medical center for fiscal years 2013-2016. All surgical subspecialties were included. This survey includes questions about the patient’s experience with the surgeon in the clinic setting. Hierarchical logistic regression modeling was used to identify factors associated with a top box response (i.e., best score) on the surgeon-specific LTR question.

Results: A total of 18,100 surveys were returned for 118 surgical faculty members, representing an overall response rate of 19.2%; the mean individual question response rate among returned surveys was 94.4% (range 80.0-97.2%). Surgeons in our cohort were predominantly male (78.0%) and fellowship-trained (72.9%). Surgeon-specific top-box LTR percentages ranged from 54.5% to 97.5%. In adjusted analyses (Table), certain specialties had a significantly lower likelihood of a top-box LTR score compared with general surgery (ophthalmology OR 0.60, 95% CI 0.42-0.85; ENT OR 0.64, 95% CI 0.41-1.00). Surgeon age, gender, and medical training characteristics (e.g., fellowship training, graduation from a top-25 medical school) were not significantly associated with a top-box LTR response. In our data, no surgeon was rated significantly better or worse overall than the mean of all other surgeons studied (residual intraclass correlation: 0.059).

Conclusion: Surgeon demographics and medical training history were not significantly associated with LTR scores for individual surgeons. However, certain surgical specialties were associated with lower LTR responses. Adjustment for surgical specialty may be necessary to reduce bias and accurately portray patient experience when comparing LTR scores across departments of surgery.  
 

45.03 Electroceutical Technology Against Bacterial Drug Resistance

S. Roy1, A. Das1, S. Mathew1, P. D. Ghatak1, S. Khanna1, S. Prakash2, V. Subramaniam2, D. J. Wozniak3, C. K. Sen1  1The Ohio State University Wexner Medical Center,Comprehensive Wound Center, Center For Regenerative Medicine And Cell Based Therapies, Department Of Surgery, Davis Heart And Lung Research Institute,Columbus, OH, USA 2The Ohio State University,Department Of Mechanical And Aerospace Engineering,Columbus, OH, USA 3The Ohio State University,Department Of Microbial Infection And Immunity, Department Of Microbiology, Center For Microbial Interface Biology,Columbus, OH, USA

Introduction: Infection is a significant threat to the effective management of acute and chronic wounds. Adding to the complexity, multidrug-resistant organisms and/or virulent pathogens can establish biofilm infections, resulting in persistent infection. Interestingly, bacteria produce positively charged polysaccharides as a component of the overall extracellular polymeric substance, the hallmark of the biofilm matrix; electrostatic forces are thus a natural contributor to the attachment and encasement of bacteria during biofilm development and maturation. The current work employs a therapeutic flow of electric charge (EDT) to disrupt these electrostatic forces with the goal of biofilm disassembly. Because EDT is not subject to the metabolic pathways of drug resistance, it has the potential to circumvent drug resistance and disrupt biofilms. The primary objective of this study was to determine the ability of optimized electroceutical EDT dressings to eliminate mixed-species biofilm infection in a preclinical porcine burn infection model.

Methods: Our first-generation technology (EDTlo) is commercialized (Procellera®) and FDA-cleared as a wound care dressing. This dressing, composed of a silver and zinc redox couple printed on a polyester textile, is wireless and needs no external power supply, making it well suited for use as a disposable wound care dressing. We developed a prototype next-generation EDT (EDThi), with inherent scalability of the electric field for dosing purposes, and performed in vitro and preclinical in vivo studies. EDThi is a dressing printed in an interdigitated pattern (medical-grade silver) and powered by a 6V DC battery; the design includes a safety fuse to prevent accidental damage and a switch for manual ON/OFF control. We previously developed and reported the first long-term persistent wound biofilm infection in a porcine (preclinical) model. The safety and efficacy of EDT were tested in this porcine mixed-species biofilm infection model.

Results: CFU, SEM, and CLSM studies demonstrated a significant attenuation of bacterial growth (p<0.05, n=4) as well as of biofilm integrity and persistence (p<0.05, n=4) following EDT treatment. The safety of EDT for host wound healing was assessed using histopathology and wound re-epithelialization. No host cell necrosis or growth inhibition was noted in EDT-treated groups; rather, a marked increase in wound re-epithelialization (p<0.05, n=4) was observed in the EDT-treated group, indicating the intervention is safe for the host. EDT treatment also significantly protected against biofilm-induced disruption of skin barrier function via a miR-9-E-cadherin pathway.

Conclusion: This preclinical work provides evidence that a therapeutic flow of electric charge, in the form of EDT dressings, may be an effective alternative to pharmacological intervention for combating biofilm infection in wounds. 

 

45.04 Measurement of Factors Associated with Delays in Pain Medication Administration

J. Hwang1, S. E. De Palm2, M. Massimiani2, E. LaMura2, K. Sigafus2, S. M. Nazarian1  2Hospital Of The University Of Pennsylvania,Medical-Surgical Specialty Nursing,Philadelphia, PA, USA 1University Of Pennsylvania,Department Of Surgery, Division Of Transplant Surgery,Philadelphia, PA, USA

Introduction:
A patient’s impression of the quality of their care is strongly influenced by their pain management.  Quality of pain management is a point of emphasis in health quality evaluations:  3 of the 25 Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) questions relate to pain.  A frequent patient concern and topic of Press Ganey comments is long wait times for pain medication.  We sought to understand the process of pro re nata (PRN) pain medication administration, and identify potential opportunities for safe process improvement. 

Methods:
A process map for pain medication administration was created.  Time points along the map available from the electronic medical record were obtained for pain and non-pain PRN medications.  The smartphone app Emerald Timestamp (Emerald Sequoia, LLC) was used while shadowing registered nurses (RNs) to confirm electronically timestamped data, obtain time points for non-timestamped data, and record activity between time points.  The total time from RN notification to medication administration was calculated.  Using two-tailed unequal variances t-tests, times were compared between pain and non-pain PRN medications.  
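The two-tailed unequal-variances (Welch) t-test described above can be sketched in pure Python; the wait times below are illustrative placeholders, not the study data.

```python
import math

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples with unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # unbiased sample variance
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical wait times in minutes (not the observed data)
pain = [44, 50, 39, 61, 28, 47]
non_pain = [13, 11, 16, 12]
t, df = welch_t(pain, non_pain)
```

The p-value would then come from the t-distribution with `df` degrees of freedom; in practice, `scipy.stats.ttest_ind(pain, non_pain, equal_var=False)` performs the complete test.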

Results:
A total of 21 PRN medication events were observed, two of which were determined to be outliers and removed from further analysis.  Patients did not always request pain medications directly from their RN.  Visitors, certified nurse assistants (CNAs) and other care providers served as messengers.  CNAs took 2.3 minutes on average (N = 11) to relay a medication request to an RN.  RNs also preemptively asked if the patient would like pain medication.  Patients waited an average of 37.6 minutes for all PRN medications (N = 19), with a significant difference between PRN pain (44.1 minutes, N = 15) and non-pain (13.0 minutes, N = 4) medications, P = 0.04.  The five longest wait times for pain medication were due to dosing restrictions on frequency, including the longest wait of 308 minutes.  When the instances of delay due to frequency restriction were eliminated, there was no significant difference between pain (N = 11) and non-pain (N = 4) PRN medication administration (25.3 vs. 13.0 minutes respectively, P = 0.53).

Conclusion:
The process of PRN pain medication administration includes many steps.  The main cause of pain medication delay was restriction on dosing frequency.  Medication administration did not appear to take longer for pain medications compared to non-pain medications, although numbers may have been too small to detect a significant difference.  Based on these preliminary findings, we have expanded data collection and will soon begin analysis of multiple years of hard-timestamped data on the factors affecting the administration of PRN medications (bolded boxes in the figure). 
 

45.02 Hospital Teaching Status Impacts Surgical Discharge Efficiency

P. M. Dowzicky1, C. Wirtalla1, J. H. Fieber1, I. Berger2, S. Raper1, R. R. Kelz1  1University Of Pennsylvania,Surgery,Philadelphia, PA, USA 2University Of Pennsylvania,School Of Medicine,Philadelphia, PA, USA

Introduction:

There is a paucity of data regarding the efficiency of care provided by teaching hospitals.  Yet, instruction on transitions in care and an understanding of systems-based practice are key components of modern graduate medical education. We aimed to determine the relationship between hospital teaching status and the discharge efficiency from a surgical service.

Methods:

The Healthcare Cost and Utilization Project National Inpatient Sample was used to identify patients undergoing a general surgical procedure in 2012.  Patients were stratified by treating hospital teaching status (TH vs. NTH).  Procedure-specific early discharge (PSED) was defined for each operation type as a discharge occurring within the lowest 25th percentile of overall length of stay, and served as the discharge efficiency metric.  To adjust for confounders and hospital-level clustering, multivariable mixed-effects logistic regression was used to examine the association between teaching status and PSED.  Subgroup analysis was performed by operation type.  Models were constructed with and without adjustment for inpatient complications. Bonferroni correction was used to account for multiplicity.
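As a sketch of the PSED metric, the following flags each discharge whose length of stay falls within the lowest quartile for its operation type. The procedure names and stays are hypothetical, and the nearest-rank percentile convention is an assumption; the study's exact percentile method is not specified.

```python
import math
from collections import defaultdict

def psed_flags(records):
    """records: list of (procedure, length_of_stay) tuples.
    Returns a parallel list of booleans, True when the stay falls
    within the lowest 25th percentile of LOS for that procedure."""
    by_proc = defaultdict(list)
    for proc, los in records:
        by_proc[proc].append(los)
    cutoff = {}
    for proc, stays in by_proc.items():
        s = sorted(stays)
        k = max(0, math.ceil(0.25 * len(s)) - 1)  # nearest-rank 25th percentile
        cutoff[proc] = s[k]
    return [los <= cutoff[proc] for proc, los in records]

# Hypothetical: eight colectomy stays of 2..9 days; cutoff is 3 days
records = [("colectomy", d) for d in [2, 3, 4, 5, 6, 7, 8, 9]]
flags = psed_flags(records)
```

These per-patient flags would then serve as the binary outcome in the mixed-effects logistic regression.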

Results:

Of 272,090 patients, 140,878 (51.8%) received care at a TH. TH status was significantly associated with lower PSED (TH: 10.7% vs. NTH: 11.4%; p<0.001) and longer length of stay (TH: 5.5 days vs. NTH: 4.5 days; p<0.001).  In the adjusted model of the overall cohort, patients treated at a TH were 8% less likely to receive a PSED compared to those treated at a NTH (OR 0.92, 95% CI 0.88-0.97; p<0.002).  Differences in the rates and odds of PSED were noted across the subgroups. [Table 1] 

Conclusion:

Teaching hospital status is associated with a reduced likelihood of PSED.  The effect of TH status on PSED varied by procedure subgroup. Examining the recovery pathways and discharge practices at NTHs may allow identification of more efficient methods of care that can be applied across the broader healthcare system.  Teaching efficient practices of care within THs is likely to magnify the effects of these efforts as trainees transition to independence. 

44.18 Late GI Bleeding is More Prevalent With Transcatheter Compared to Surgical Aortic Valve Replacement

A. Iyengar1, E. Aguayo1, Y. Seo1, K. L. Bailey1, Y. Sanaiha3, O. Kwon2, W. Toppen4, P. Benharash2  1University Of California – Los Angeles,David Geffen School Of Medicine,Los Angeles, CA, USA 2University Of California – Los Angeles,Cardiac Surgery,Los Angeles, CA, USA 3University Of California – Los Angeles,General Surgery,Los Angeles, CA, USA 4University Of California – Los Angeles,Internal Medicine,Los Angeles, CA, USA

Introduction:
Late bleeding complications are a known source of morbidity in patients undergoing aortic valve replacement. In particular, post-hoc analysis of the PARTNER trial has highlighted the high prevalence and morbidity of late bleeding complications after both surgical (SAVR) and transcatheter (TAVR) aortic valve replacement in high-risk populations. The purpose of this study was to compare the incidence and financial impact of late gastrointestinal (GI) bleeding after TAVR and SAVR.

Methods:

Retrospective analysis of the National Readmissions Database was performed for January 2012 through December 2014 using International Classification of Diseases, Ninth Revision procedure codes for TAVR (35.05 and 35.06) and SAVR (35.21 and 35.22), and the diagnosis code for GI bleeding (578.9). Costs were standardized to the 2014 US gross domestic product using US Department of Commerce consumer price indices and adjusted for diagnosis-related-group-based severity. The Kruskal-Wallis and chi-squared tests were used for comparisons between cohorts. Multivariable logistic regression models were used to identify significant predictors of GI bleeding.

Results:

Overall, 32,796 patients were identified who underwent TAVR, while 231,324 underwent SAVR. Compared to SAVR patients, TAVR patients were older (82 vs. 69 years, p<0.001) and more likely to be female (46% vs. 36%, p<0.001). TAVR patients also had a higher incidence of congestive heart failure (76% vs. 39%, p<0.001) and chronic kidney disease (37% vs. 18%), and a higher median Elixhauser Comorbidity Index (6 vs. 5, p<0.001). Hospital length of stay was shorter after TAVR than SAVR (5 vs. 8 days, p<0.001), but in-hospital mortality rates were similar (p=0.668).

Among the TAVR cohort, 868 patients (2.6%) were rehospitalized for GI bleeding, compared to 2,630 (1.1%) in the SAVR group (p<0.001). Median time to readmission was similar between cohorts (46 vs. 47 days, p=0.948). The average cost of readmission for GI bleeding was $17,136 for TAVR compared to $18,737 for SAVR (p=0.392). Among the subset of patients over age 80, GI bleeding readmissions remained more prevalent after TAVR than SAVR (2.6% vs. 1.5%, p<0.001). After multivariable adjustment, TAVR remained significantly associated with GI bleeding compared to SAVR (AOR 1.73, 95% CI 1.50-1.99, p<0.001). 
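As a back-of-envelope check on the readmission counts reported above, the unadjusted odds ratio and Wald 95% CI can be computed directly from the 2x2 table; the adjusted AOR of 1.73 is lower because it accounts for the older, sicker TAVR cohort.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a/b = exposed events/non-events,
    c/d = unexposed events/non-events.
    Returns (OR, lower, upper) using the Wald log-OR interval."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Counts from the abstract: TAVR 868 of 32,796; SAVR 2,630 of 231,324
or_, lower, upper = odds_ratio_ci(868, 32796 - 868, 2630, 231324 - 2630)
```

This yields a crude OR of roughly 2.4, illustrating how much of the apparent excess risk is explained by covariate adjustment.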

Conclusion:

In this national cohort study, TAVR was associated with more frequent readmissions for late GI bleeding compared to SAVR. After controlling for preoperative comorbidities, TAVR remained a significant predictor of late GI bleeding. Since the optimal anticoagulation regimen for TAVR is not known, strategies to reduce GI bleeding in both groups may serve as suitable targets for improvement in overall quality of care. 

44.19 Does Robotic Lobectomy Improve Outcomes in Patients with Poor Pulmonary Function?

P. J. Kneuertz1, D. M. D’Souza1, S. D. Moffatt-Bruce1, R. E. Merritt1  1The Ohio State University Wexner Medical Center,Department Of Surgery,Columbus, OH, USA

Introduction:
Patients with poor pulmonary function have a high risk for pulmonary complications following lobectomy. Robotic lobectomy is currently the least invasive approach. We hypothesized that robotic lobectomy may be of particular benefit in high-risk patients. 

Methods:
We reviewed our institutional Society of Thoracic Surgeons (STS) data on 762 lobectomy patients from 2012 to 2017. High-risk patients were identified by pulmonary function testing [FEV1 (forced expiratory volume in 1 second) <60% or DLCO (diffusing capacity of the lung for carbon monoxide) <60% predicted].  Preoperative characteristics and perioperative outcomes were compared between robotic and open lobectomy. Risk of pulmonary complications was assessed by binary logistic regression analysis.

Results:

A total of 190 high-risk patients underwent lobectomy by robotic (n=83) or open (n=107) approach.  The robotic group included more patients older than 75 years (robotic 22% vs. open 11%, p=0.05), more patients with COPD (81% vs. 54%, p=0.001), and more active smokers (46% vs. 31%, p=0.04).  Robotic lobectomy patients had a lower rate of prolonged air leak >5 days (5% vs. 15%, p=0.02), less atelectasis requiring bronchoscopy (7% vs. 22%, p=0.02), and shorter length of stay (4 vs. 7 days, p=0.001). The observed difference in overall pulmonary complication rate between robotic and open lobectomy was greatest in high-risk patients and smaller in intermediate- or low-risk patients (Figure).  No significant difference was seen in the rate of major complications (19% vs. 24%, p=0.4) or 30-day mortality (1.2% vs. 1.9%, p=0.6) following robotic and open lobectomy, respectively. On multivariable analysis adjusting for age, FEV1, DLCO, COPD, Zubrod score, and prior chest surgery, the robotic approach remained independently associated with decreased pulmonary complications (OR 0.58, 95% CI 0.38-0.88, p=0.005).

Conclusion:
Robotic lobectomy may decrease the risk of pulmonary complication as compared with thoracotomy in high risk patients. Patients with limited pulmonary function derive the most benefit from a robotic approach.
 

44.15 Sternal Device for Open Heart Surgical Procedure

M. A. Kashem1, G. Ramakrishnan1, V. Dulam1, J. Gomez-Abraham1, S. Guy1, Y. Toyoda1  1Temple University,Cardiovascular Surgery,Philadelphia, PA, USA

Introduction:

In the USA, around 700,000 open-heart operations requiring midline sternotomy are performed. We investigated a post-marketing, FDA-approved sternal closure device in patients who underwent open-heart surgery at our center. We hypothesized that the SternaLock Blu System would increase stability, provide greater strength, and reduce sternal separation compared with the wire closure currently used to close the chest.

 

Methods: 15 patients were randomized between the Biomet Microfixation Sternal Closure System, intended for stabilization and fixation of fractures of the anterior chest wall (including sternal fixation following sternotomy and sternal reconstructive procedures to promote fusion), and conventional sternal wire closure. All patients were given questionnaires recording numerical pain scores preoperatively, postoperatively, at discharge, and at follow-up visits at 1, 3, 6, and 12 months.

Results: 8 patients were randomized to receive the plate (SternaLock Blu System) and 7 to receive regular sternal wires. The average patient age was 59 ± 9 years, and 33% were female. Randomization was blinded; the randomization envelope was opened in the operating room after the procedure. All patients survived without procedural complications. One patient died within 6 months of surgery of causes unrelated to the sternal device or wires. Numerical pain scores showed milder postoperative pain in the sternal device group relative to the sternal wire group. CT scans at 6 months showed improved sternal stability and reduced sternal separation in the sternal device group (Fig 1). 

Conclusion: In selected patients, the sternal device closure system could be a better choice for sternal fixation. 
 

44.16 A Narrative Review of 3D Printing in Cardiac Surgery

J. Wang1,4, J. Coles-Black1,4, G. Matalanis2, J. Chuen1,3,4  1The University Of Melbourne,Department Of Surgery,Melbourne, VICTORIA, Australia 2Austin Health,Department Of Cardiac Surgery,Melbourne, VICTORIA, Australia 3Austin Health,Department Of Vascular Surgery,Melbourne, VICTORIA, Australia 4Austin Health,3D Medical Printing Laboratory,Melbourne, VICTORIA, Australia

Introduction:  3D printing is a rapidly developing technology which has started to flourish in fields where the ability to visualise complex anatomy in novel ways may aid interventions. As such, the most prolific medical disciplines utilising 3D printing to date have been the surgical specialties. This paper reviews the published literature on 3D printing in the field of Cardiac Surgery.

Methods:  We performed a literature search using Ovid MEDLINE, Ovid EMBASE, and PubMed with the search terms “Printing, Three-Dimensional” AND “Cardiac Surgical Procedures”; “Three-Dimensional Printing” AND “Heart Surgery”; and “Three Dimensional Printing” AND “Cardiac Surgery”. This returned 38 articles, which were independently read in full to identify relevant studies.

Results: Our literature search demonstrated a paucity of literature on 3D printing in Cardiac Surgery, with only 27 relevant publications identified. The articles generally reported that 3D-printed models provide anatomical clarity beyond what can be achieved with imaging alone, which correlates with our own experience with this technology.  The vast majority of articles (89%) pertained to the utility of 3D printing in pre-surgical planning, and only four (15%) discussed applications in trainee education.

Conclusion: There is enormous potential for growth in the field of 3D printing in Cardiac Surgery. We attest to the ease of adoption of this new technology, which has the potential to drastically change the way we practice medicine. 

 

44.17 Expedited Discharge Does Not Increase the Rate or Cost of Readmission After Pulmonary Lobectomy

R. A. Jean1,2, A. S. Chiu1, D. J. Boffa3, A. W. Kim4, F. C. Detterbeck3, J. D. Blasberg3  1Yale University School Of Medicine,Department Of Surgery,New Haven, CT, USA 2Yale University School Of Medicine,National Clinician Scholars Program,New Haven, CT, USA 3Yale University School Of Medicine,Section Of Thoracic Surgery, Department Of Surgery,New Haven, CT, USA 4University Of Southern California,Division Of Thoracic Surgery, Department Of Surgery,Los Angeles, CA, USA

Introduction:  Readmission after pulmonary lobectomy has become an increasingly important measure of hospital quality and a potentially avoidable source of healthcare costs. Expedited discharge within 3 days of pulmonary lobectomy for lung cancer has been used to reduce costs and hospital-associated complications. However, there is concern that expedited discharge puts patients at risk for more frequent, and more expensive, postoperative readmissions. We sought to explore whether patients were at higher risk for costly readmission following expedited discharge.

Methods: The Healthcare Cost and Utilization Project’s Nationwide Readmission Database (NRD) was queried for cases of pulmonary lobectomy for lung cancer between 2010 and 2014. Patients aged 65 years and older, surviving to discharge, were categorized as “expedited” if they were discharged on hospital day 2 or 3, or “routine” if they were discharged between hospital days 4 and 7; all other patients were excluded from analysis. Patients were evaluated for 90-days following discharge to identify the primary endpoint of readmission. Risk-adjusted readmission rates and median hospital charge of the first readmission were compared between groups.

Results: A total of 46,287 patients underwent lobectomy for lung cancer during the study period. There were 10,447 (22.6%) expedited discharges and 35,840 (77.4%) routine discharges. Median charges for the index hospitalization were $49,037 (IQR $37,667-$69,038) for the expedited group and $63,009 (IQR $47,161-$87,282) for the routine discharge group (p<0.0001). Patients in the routine discharge group had a 17.3% (N=6,200; 95% CI 16.5%-18.1%) risk-adjusted probability of readmission, compared with a 14.2% (N=1,486; 95% CI 13.0%-18.1%) risk-adjusted rate in the expedited discharge group (p<0.0001). Despite this, there was no significant difference in median charges following readmission for either group (expedited $28,908 vs. routine $28,968, p=0.78).

Conclusion: Lobectomy patients with routine hospital length of stay had a 3.1% higher risk-adjusted readmission rate and almost $14,000 higher median index charges than patients with expedited discharge. Despite this, median charges for readmission did not differ between groups. These data demonstrate that prolonged hospital length of stay does not reduce the risk of 90-day readmission following lobectomy, supporting protocols that expedite patient discharge and in turn reduce overall healthcare utilization.

44.12 Hemodialysis as a Predictor of Outcomes After Isolated Coronary Artery Bypass Grafting

R. S. Elsayed1, B. Abt1, W. J. Mack2, A. Liu1, J. K. Siegel1, M. L. Barr1, R. G. Cohen1, C. J. Baker1, V. A. Starnes1, M. E. Bowdish1  1University Of Southern California,Cardiothoracic/Surgery/Keck School Of Medicine,Los Angeles, CA, USA 2University Of Southern California,Preventative Medicine,Los Angeles, CA, USA

Introduction: The need for hemodialysis is a known risk factor for mortality after isolated coronary artery bypass grafting (CABG). This study evaluated outcomes after isolated CABG in hemodialysis-dependent (HDD) and non-HDD patients.

Methods: We performed a retrospective cohort study of 778 patients undergoing isolated CABG between 2006 and 2016. Patients were grouped by the presence or absence of preoperative hemodialysis (696 non-HDD, 82 HDD). Mean follow-up was 20.8±33.3 months. Multivariable logistic regression models were developed to predict 30-day mortality and major adverse cardiovascular events (MACE: stroke, myocardial infarction (MI), death, and need for coronary reintervention). Kaplan-Meier analysis was used to assess survival, and multivariable Cox proportional hazards modeling was used to identify factors associated with overall mortality. Propensity scores and 1:1 greedy matching were used to create matched groups of 65 non-HDD and 65 HDD patients, which were compared for the primary outcomes.
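The 1:1 greedy propensity-score matching step can be sketched as follows. The scores below are hypothetical (in practice they come from a logistic model of hemodialysis status on the baseline covariates), and the 0.05 caliper is an assumed value, not one stated in the abstract.

```python
def greedy_match(treated, control, caliper=0.05):
    """treated, control: dicts mapping subject id -> propensity score.
    Greedily pairs each treated subject with the nearest unmatched
    control within the caliper; each control is used at most once."""
    pool = dict(control)
    pairs = []
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: kv[1]):
        if not pool:
            break
        c_id = min(pool, key=lambda cid: abs(pool[cid] - t_ps))
        if abs(pool[c_id] - t_ps) <= caliper:
            pairs.append((t_id, c_id))
            del pool[c_id]  # matched without replacement
    return pairs

# Hypothetical propensity scores; t3 has no control within the caliper
treated = {"t1": 0.30, "t2": 0.55, "t3": 0.90}
control = {"c1": 0.28, "c2": 0.57, "c3": 0.40}
pairs = greedy_match(treated, control)
```

Greedy matching is order-dependent, which is why the matched cohorts here (65 of 82 HDD patients) can be smaller than the treated group.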

Results:  Overall survival was 97.9, 96.5, 96.5, and 95.6% at 1, 3, 5, and 7 years.  Thirty-day mortality was 2.2% (n=17). On multivariable analysis, 30-day mortality was increased in those with diabetes (OR 7.3, 95% CI 1.0-52.2), COPD (OR 4.5, 95% CI 1.1-18.5), preoperative inotrope use (OR 4.8, 95% CI 1.1-21.2), and increasing cross-clamp times (OR 1.04, 95% CI 1.0-1.1). MACE at 30 days occurred in 4.1% (n=32). On multivariable analysis, MACE at 30 days was more common in those with diabetes (OR 4.1, 95% CI 1.3-12.5), COPD (OR 3.7, 95% CI 1.3-11.0), MI within 30 days (OR 2.8, 95% CI 1.0-7.6), preoperative inotrope use (OR 6.1, 95% CI 1.8-19.8), and increasing cross-clamp times (OR 1.04, 95% CI 1.0-1.1). Thirty-day mortality was 1.9% and 4.9% in the non-HDD and HDD groups, respectively (odds ratio (OR) 2.7, 95% confidence interval (CI) 0.86-8.47, p=0.09). Median time on hemodialysis in the HDD group was 78 weeks (IQR 48-156). Kaplan-Meier estimates showed a significant difference in survival between non-HDD and HDD patients (log-rank p=0.0008, Figure). After multivariable adjustment for age, sex, diabetes, COPD, and history of previous cardiac surgery, mortality was higher in the HDD group than the non-HDD group (hazard ratio (HR) 3.1, 95% CI 1.05-9.1, p=0.04). After propensity matching, no survival difference was found between groups (unadjusted HR 4.0, 95% CI 0.47-35.1, p=0.20).

Conclusions: Overall survival after isolated CABG remains excellent, with decreased survival in those with diabetes, COPD, preoperative inotrope use, and longer aortic cross-clamp times.  The need for preoperative hemodialysis remains a significant risk factor for long-term mortality.

44.13 Primary Neoplasm of the Chest Wall: Outcomes after Surgical Resection

P. Sugarbaker1, K. S. Mehta1, I. Christie1, W. E. Gooding2, O. Awais1, M. J. Schuchert1, J. D. Luketich1, A. Pennathur1  1University Of Pittsburgh Medical Center,Deparment Of Cardiothoracic Surgery,Pittsburgh, PA, USA 2University Of Pittsburgh Cancer Institute Biostatistics Facility,Department Of Biostatistics,Pittsburgh, PA, USA

Introduction:
Chest wall tumors are rare thoracic malignancies that present a unique set of challenges to the surgeon. Primary chest wall tumors are uncommon (1-2% of all primary neoplasms) and frequently malignant. Treatment includes resection with wide negative margins, which can leave significant defects requiring reconstruction. The goals of reconstruction are to restore proper respiratory function, prevent anatomic deformity, and protect and support intrathoracic structures. Reconstruction adds to the complexity of the operation and often involves a multidisciplinary team including thoracic surgery, plastic surgery, neurosurgery, and physical medicine and rehabilitation. The primary objective of our study was to review our experience with resection of primary chest wall tumors and the oncologic results.

Methods:
We reviewed our experience with resection of primary chest wall tumors over an 11-year period, including tumor characteristics, surgical technique, perioperative outcomes, and oncologic results. Patients were followed in the thoracic surgery clinic. Kaplan-Meier curves were constructed to evaluate recurrence-free and overall survival.
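The Kaplan-Meier estimator behind these curves multiplies, at each event time, the fraction of at-risk patients who survive that time. A minimal sketch with hypothetical follow-up data (times in months, not the study cohort):

```python
def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = event (death/recurrence),
    0 = censored. Returns [(time, survival)] at each event time.
    Ties are ordered events-before-censorings, per convention."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t, e in sorted(zip(times, events), key=lambda te: (te[0], -te[1])):
        if e:
            surv *= (at_risk - 1) / at_risk  # survive this event time
            curve.append((t, surv))
        at_risk -= 1  # event or censoring leaves the risk set
    return curve

# Hypothetical cohort: events at 6, 12, and 20 months; censoring at 12 and 30
curve = kaplan_meier([6, 12, 12, 20, 30], [1, 0, 1, 1, 0])
```

In this toy example, estimated survival drops to 80% at 6 months, 60% at 12, and 30% at 20; libraries such as lifelines add confidence bands and log-rank testing.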

Results:
Twenty-eight patients (15 women, 13 men; median age 52.5 years) underwent chest wall resection and reconstruction for primary chest wall neoplasm (median tumor size 5.05 cm, range 2.0-15.0 cm). The most common tumor resected was sarcoma (n=14), and the predominant histology was chondrosarcoma (n=6). Chest wall reconstruction was performed in all 28 patients. Synthetic polytetrafluoroethylene (PTFE) mesh was the most frequently used method of reconstruction (n=14); methyl methacrylate was used in 6 patients, while primary closure was achieved in 8 patients. The median length of stay was 4 days. Perioperative morbidity occurred in 6 patients, and 30-day operative mortality was 0%. During follow-up (median 67 months), there were 8 deaths. The estimated 3-year overall survival was 72% (95% confidence interval (CI) 56% to 92%) and the estimated 5-year overall survival was 68% (95% CI 51% to 94%) (Figure). Six recurrences were noted during follow-up. The 3-year and 5-year recurrence-free survival was 72% (95% CI 55% to 94%).

Conclusion:
Primary chest wall neoplasms are rare tumors that pose unique clinical challenges, requiring complex resection with wide margins and reconstruction. These tumors can be safely resected with appropriate reconstruction techniques, and overall survival following successful resection and reconstruction is encouraging. Further follow-up is ongoing to fully evaluate the oncologic results.