Resuscitation, Volume 160, March 2021, Pages 7-13

Clinical paper
Software annotation of defibrillator files: Ready for prime time?

https://doi.org/10.1016/j.resuscitation.2020.12.019

Abstract

Background

High-quality chest compressions are associated with improved outcomes after cardiac arrest. Defibrillators record important information about chest compressions during cardiopulmonary resuscitation (CPR) and can be used in quality-improvement programs. Defibrillator review software can automatically annotate files and measure chest compression metrics. However, evidence is limited regarding the accuracy of such measurements.

Objective

To compare chest compression fraction (CCF) and rate measurements made with software annotation vs. manual annotation vs. limited manual annotation of defibrillator files recorded during out-of-hospital cardiac arrest (OHCA) CPR.

Methods

This was a retrospective, observational study of 100 patients who had CPR for OHCA. We assessed chest compression bioimpedance waveforms from the time of initial CPR until defibrillator removal. A reviewer revised software annotations in two ways: completely manual annotation and limited manual annotation, which marked only the beginning and end of CPR and periods of return of spontaneous circulation (ROSC), but not individual chest compressions. CCF and rate measurements were compared using intraclass correlation coefficient (ICC) analysis.

Results

Case mean rate showed no significant difference between the methods (108.1–108.6 compressions per minute) and ICC was excellent (>0.90). The case mean (±SD) CCF for software, manual, and limited manual annotation was 0.64 ± 0.19, 0.86 ± 0.07, and 0.81 ± 0.10, respectively. The ICC for manual vs. limited manual annotation of CCF was 0.69 while for individual minute epochs it was 0.83.

Conclusion

Software annotation performed very well for chest compression rate. For CCF, the difference between manual and software annotation measurements was clinically important, while manual and limited manual annotation measurements were similar, with a good-to-excellent ICC.

Introduction

Cardiac arrest is a leading cause of death in the United States and around the world, with almost 400,000 events occurring annually outside of hospitals.1, 2 The survival rates for out-of-hospital cardiac arrest (OHCA) have not changed for decades and remain a public health concern.3 Studies have shown that high-quality cardiopulmonary resuscitation (CPR) is the most effective intervention that can be provided prior to arrival at a hospital.4, 5 Many observational studies have measured critical characteristics of CPR such as chest compression rate, depth, and fraction (the proportion of total resuscitation time during which compressions are performed), and have shown that each of these characteristics affects outcomes.6 The results of these studies have been incorporated into emergency medical services (EMS) quality improvement programs, which has led to improved EMS CPR training and improved survival for OHCA patients.7

There are many reasons to measure the quality of CPR performed on out-of-hospital cardiac arrest (OHCA) patients. CPR skills deteriorate over time, performance during training may overestimate actual performance on patients, and measurement may reveal compression rates above or below recommended values.8, 9 Studies have also shown that chest compressions are often not deep enough and contain frequent interruptions,10 including pauses after defibrillation attempts.11 For these reasons, it is important to be able to measure CPR quality accurately and efficiently for research and quality improvement programs.8, 12 As an added benefit, providers have been shown to improve their actual CPR performance when they know that it is being measured.13 In this study, we measured CPR quality with two metrics: chest compression fraction (CCF) and chest compression rate. CCF is the “proportion of time that compressions are performed during a cardiac arrest.”14 Chest compression rate was defined as the rate at which chest compressions were performed during a series of uninterrupted chest compressions. Studies have shown that a CCF of 60–80% and a compression rate of 100–120 compressions per minute are independently associated with higher survival rates in OHCA patients.1, 15
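To make these two metrics concrete, the sketch below computes CCF and mean compression rate from a list of compression timestamps. The function name, and the 2-second gap threshold used to separate uninterrupted compression series from pauses, are illustrative assumptions, not part of the study's software.

```python
def cpr_metrics(times, start, end, max_gap=2.0):
    """CCF and mean rate from sorted compression timestamps (seconds).

    A gap longer than max_gap between consecutive compressions is
    treated as a pause, splitting the record into uninterrupted series.
    """
    series = [[times[0]]]
    for t in times[1:]:
        if t - series[-1][-1] <= max_gap:
            series[-1].append(t)   # same uninterrupted series
        else:
            series.append([t])     # pause detected: start a new series
    # time spent compressing = summed duration of each series
    comp_time = sum(s[-1] - s[0] for s in series if len(s) > 1)
    total_intervals = sum(len(s) - 1 for s in series if len(s) > 1)
    ccf = comp_time / (end - start)
    rate = 60.0 * total_intervals / comp_time  # compressions per minute
    return ccf, rate


# 60 s of compressions at 120/min, a 20 s pause, then ~40 s at 100/min
times = [0.5 * i for i in range(121)] + [80 + 0.6 * i for i in range(67)]
ccf, rate = cpr_metrics(times, start=0.0, end=120.0)
```

Note that the rate is computed only over uninterrupted series, matching the definition above: pauses lower CCF but do not lower the reported compression rate.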

Electronic monitor-defibrillators record important information about the quality of chest compressions during CPR. They use thoracic bioimpedance and/or accelerometers to record and/or display chest compression and ventilation waveforms, interruptions in chest compressions, and cardiac rhythm. Defibrillators also record details of each shock. Software made for reviewing defibrillator files can automatically annotate and measure chest compression metrics. However, evidence is limited regarding the accuracy of such measurements. Physio-Control, the company that created the CODE-STAT 10 Data Review Software that we use to view and annotate the defibrillator recordings, recommends the following:

“The software will pick up 90−95% of the compressions and ventilations and annotate them automatically. Check the software’s work, and add and delete compression and ventilation annotations as needed.”16

Unfortunately, determining CCF and compression rate accurately requires manual review, which is labor-intensive. The guidelines published by Physio-Control and our prior research have shown that measurements using thoracic bioimpedance are subject to various artifacts that require time-consuming manual review and revision to improve accuracy. Manual annotation is needed to identify the start or end of CPR, periods of ROSC, and low-amplitude or apparently artifactual waveforms that may not have met the threshold to be counted as compressions. This requires 5−15 min of a reviewer’s time per patient recording. Many EMS agencies may not have the budget necessary to fund the staff needed to manually annotate defibrillator files.
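One way to picture what limited annotation supplies: the reviewer marks only the CPR start/end times and any ROSC intervals, while compression detection remains fully automatic. Assuming the common convention that ROSC periods are excluded from the resuscitation time in the denominator, a CCF calculation might look like the sketch below; the function and its interval handling are hypothetical, not the CODE-STAT implementation.

```python
def ccf_limited(comp_time, cpr_start, cpr_end, rosc_intervals):
    """CCF when only CPR start/end and ROSC periods are hand-marked.

    comp_time      : seconds of software-detected compression activity
    rosc_intervals : list of (start, end) ROSC periods, in seconds
    ROSC time inside the CPR window is excluded from the denominator.
    """
    rosc_time = sum(
        max(0.0, min(e, cpr_end) - max(s, cpr_start))
        for s, e in rosc_intervals
    )
    return comp_time / ((cpr_end - cpr_start) - rosc_time)


# 300 s of compressions over a 10-minute record with one 60 s ROSC period
ccf = ccf_limited(300.0, 0.0, 600.0, [(200.0, 260.0)])
```

Clipping each ROSC interval to the CPR window keeps the denominator well defined even if a marked ROSC period overlaps the start or end of the record.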

The objectives of this study were to compare the accuracy of CCF and compression rate measurements made with software annotation vs. manual annotation of defibrillator files recorded during OHCA CPR and to determine if we could improve the efficiency of manual annotation.

Section snippets

Methods

This was a retrospective, observational study from the Dallas-Fort Worth (DFW) site of the Resuscitation Outcomes Consortium (ROC). The ROC was a network of regional research centers in the United States and Canada that conducted research focused on OHCA and severe traumatic injury. The ROC implemented a cardiac arrest registry in 20053; the DFW ROC site participated in the registry from 2005 to 2016. From 2016, DFW ROC maintained its own cardiac arrest registry from which the data used in this study were obtained.

Statistical methods

Up to 30 min of cardiopulmonary resuscitation data for each case were reviewed in this study. After adding any necessary manual annotations (marking chest compression waveforms not marked by the software) and correcting artifacts (waveforms that the software erroneously marked as chest compressions), the software calculated chest compression fraction (CPR%), mean compression rate, and number of seconds of pre-shock pause for the entire resuscitation as well as for every minute epoch starting with
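The agreement statistic used to compare annotation methods, the two-way random-effects intraclass correlation for single measurements (commonly denoted ICC(2,1)), can be computed from its standard ANOVA decomposition. The sketch below is a generic textbook implementation for illustration, not the study's actual analysis code.

```python
def icc2_1(y):
    """Two-way random, single-measure, absolute-agreement ICC.

    y: list of [subject][rater] measurements, n subjects x k raters.
    """
    n, k = len(y), len(y[0])
    grand = sum(sum(row) for row in y) / (n * k)
    row_means = [sum(row) / k for row in y]
    col_means = [sum(y[i][j] for i in range(n)) / n for j in range(k)]
    # ANOVA mean squares: rows (subjects), columns (raters), residual
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum(
        (y[i][j] - row_means[i] - col_means[j] + grand) ** 2
        for i in range(n) for j in range(k)
    )
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)


# two raters' CCF measurements on five hypothetical cases
ratings = [[0.80, 0.81], [0.60, 0.62], [0.90, 0.88], [0.70, 0.71], [0.50, 0.52]]
icc = icc2_1(ratings)
```

With close inter-rater agreement relative to between-case spread, as in this toy data, the ICC approaches 1, mirroring the 0.96–0.99 inter-reviewer values reported in the Results.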

Results

We analyzed the chest compression data of 100 patients with OHCA. The mean patient age was 63 years with 59% being male (Table 1). The mean (±SD) duration of CPR was 30:24 ± 10:35 min. The two-way random intraclass correlation (ICC) of the two independent reviewers’ manual annotations was 0.99 for chest compression rate and 0.96 for chest compression fraction (CCF). The case mean CCF for software, manual, and limited manual annotation was 0.64 ± 0.19, 0.86 ± 0.07, and 0.81 ± 0.10, respectively.

Discussion

We found clinically important differences in chest compression fraction measurements made by the automated software vs. manually annotated measurements. In addition, we found that limited manual annotation is a more efficient, less time-consuming way to accurately measure chest compression fraction, a key quality element of CPR. CPR has been shown to be the most effective intervention for patients with OHCA,4 and improving specific CPR quality metrics such as chest compression fraction and rate is associated with improved survival.

Limitations

A limitation of this study is that data were used only from the DFW ROC site. Data from other ROC sites may differ. This study analyzed annotations using Physio-Control software; software from other companies may provide different results. Another potential limitation is that the manual and limited manual annotations were each performed by two trained and experienced reviewers. Further testing with annotations performed by other reviewers may provide different results.

Conclusions

In conclusion, software annotation performed very well for measurement of chest compression rate. With respect to CCF measurement, the difference between manual and software annotation measurements was significant and clinically important, while manual and limited manual annotation measurements were similar.

Conflict of interest

Dr. Idris receives grant support from the US National Institutes of Health (NIH), the American Heart Association, and the US Department of Defense. He serves as an unpaid volunteer on the American Heart Association National Emergency Cardiovascular Care Committee and the HeartSine, Inc. Clinical Advisory Board.

Funding

This study was supported in part by US National Institutes of Health grant HL 077887 (AHI) and American Heart Association National Center grant #100205. Sponsors had no involvement in the conception, execution, or writing of this study.

CRediT authorship contribution statement

Vishal Gupta: Conceptualization, Methodology, Writing - original draft, Writing - review & editing. Robert H. Schmicker: Formal analysis, Writing - review & editing. Pamela Owens: Data curation, Writing - review & editing. Ava E. Pierce: Writing - original draft, Writing - review & editing. Ahamed H. Idris: Conceptualization, Methodology, Writing - original draft, Supervision, Writing - review & editing.

Acknowledgements

The investigators express their deepest appreciation for the unwavering dedication of the EMS personnel who participated in the Dallas-Fort Worth (DFW) ROC site and executed the protocols that made this study possible.


Cited by (7)

  • Ventilation rates measured by capnography during out-of-hospital cardiac arrest resuscitations and their association with return of spontaneous circulation

    2023, Resuscitation
    Citation excerpt:

    Second, the software cannot determine when the capnography device was initially applied to the patient after it is turned on nor if the capnography device fails to function properly. Third, intra-arrest ventilation rate is dependent on an accurate assessment that chest compressions are occurring, which is also error prone.12 Finally, the waveform can be affected by air movement from chest compressions or other patient movement, which can be falsely interpreted as a ventilation.13,14

  • Chest compression fraction calculation: A new, automated, robust method to identify periods of chest compressions from defibrillator data – Tested in Zoll X Series

    2022, Resuscitation
    Citation excerpt:

    Shallow chest compressions not detected by the manufacturer were added manually. In conjunction with the 1st task, this task corresponds to the classic full annotation, implemented in the workflow of common manufacturers software as CODE-STAT Reviewer (v11.0, Stryker, Kalamazoo, Michigan, United States) and thereby resembles the “manual annotation” by Gupta et al.9 Two independent researchers annotated each recording separately for the first two tasks.

  • Methodology and framework for the analysis of cardiopulmonary resuscitation quality in large and heterogeneous cardiac arrest datasets

    2021, Resuscitation
    Citation excerpt:

    The automated procedure used to set the start/end of the analysis interval was based on the event time-stamps of the device and its signal recordings. If start/end times as defined by Kramer-Johansen et al.32 were not automatically identified, and instead the power-on/power-off (or last signal recording) time-stamps of the device were used, the error in CCF rose to 23.0 (10.9–42.6)%, a problem that has recently been reported.40 The misannotation of periods with ROSC accounted for the majority of large errors in the automated procedure.
