1.
J Electrocardiol ; 50(6): 841-846, 2017.
Article in English | MEDLINE | ID: mdl-28918214

ABSTRACT

BACKGROUND: The feasibility of using photoplethysmography (PPG) for estimating heart rate variability (HRV) has been the subject of many recent studies with contradictory results. Accurate measurement of cardiac cycles is more challenging in PPG than in ECG because of the signal's inherent characteristics. METHODS: We developed a PPG-only algorithm that computes a robust set of medians of the interbeat intervals between adjacent peaks, upslopes, and troughs. Abnormal intervals are detected and excluded by applying our criteria. RESULTS: We tested the algorithm on a large database of recordings from high-risk ICU patients containing arrhythmias and significant amounts of artifact. The average difference between PPG-based and ECG-based parameters is <1% for pNN50, <1 bpm for meanHR, <1 ms for SDNN, <3 ms for meanNN, and <4 ms for SDSD and RMSSD. CONCLUSIONS: Our performance testing shows that the pulse rate variability (PRV) parameters are comparable to the HRV parameters from simultaneous ECG recordings.
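The time-domain parameters compared above (pNN50, meanHR, SDNN, meanNN, SDSD, RMSSD) have standard definitions computable directly from a sequence of interbeat (NN) intervals. A minimal sketch using the conventional formulas, not the paper's robust-median pipeline:

```python
import math

def hrv_time_domain(nn_ms):
    """Standard time-domain HRV/PRV parameters from NN intervals in ms.
    These are the textbook definitions, not the paper's algorithm."""
    n = len(nn_ms)
    mean_nn = sum(nn_ms) / n
    mean_hr = 60000.0 / mean_nn                          # beats per minute
    sdnn = math.sqrt(sum((x - mean_nn) ** 2 for x in nn_ms) / (n - 1))
    diffs = [b - a for a, b in zip(nn_ms, nn_ms[1:])]    # successive differences
    mean_d = sum(diffs) / len(diffs)
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    sdsd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1))
    pnn50 = 100.0 * sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    return {"meanNN": mean_nn, "meanHR": mean_hr, "SDNN": sdnn,
            "RMSSD": rmssd, "SDSD": sdsd, "pNN50": pnn50}
```

The same formulas apply whether the intervals come from ECG R-R detection or from PPG peak, upslope, or trough picking, which is what makes the PRV-vs-HRV comparison possible.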


Subject(s)
Algorithms , Arrhythmias, Cardiac/physiopathology , Heart Rate Determination/methods , Heart Rate/physiology , Photoplethysmography/methods , Artifacts , Electrocardiography/methods , Humans , Intensive Care Units , Sensitivity and Specificity , Signal Processing, Computer-Assisted
2.
J Electrocardiol ; 47(6): 798-803, 2014.
Article in English | MEDLINE | ID: mdl-25172189

ABSTRACT

Defibrillation is often required to terminate ventricular fibrillation or fast ventricular tachycardia and restore a perfusing rhythm in sudden cardiac arrest patients. Automated external defibrillators rely on automatic ECG analysis algorithms to detect the presence of shockable rhythms before advising the rescuer to deliver a shock. For a reliable rhythm analysis, chest compressions must be interrupted to prevent corruption of the ECG waveform by the artifact induced by the mechanical activity of compressions. However, these hands-off intervals adversely affect the success of treatment. To minimize the hands-off intervals and increase the chance of successful resuscitation, we developed a method that calls for interrupting compressions only when the underlying ECG rhythm cannot be accurately determined during chest compressions. With this method, only a small percentage of cases requires compression interruption, so a significant reduction in hands-off time is achieved. Our algorithm comprises a novel filtering technique for the ECG and thoracic impedance waveforms and an innovative method for combining analyses of filtered and unfiltered data. Requiring compression interruption in only 14% of cases, our algorithm achieved a sensitivity of 92% and a specificity of 99%.


Subject(s)
Artifacts , Death, Sudden, Cardiac/prevention & control , Electric Countershock/methods , Heart Massage/methods , Therapy, Computer-Assisted/methods , Ventricular Fibrillation/prevention & control , Algorithms , Clinical Alarms , Combined Modality Therapy/methods , Defibrillators , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Humans , Reproducibility of Results , Retrospective Studies , Sensitivity and Specificity , Treatment Outcome , Ventricular Fibrillation/diagnosis
3.
Am Heart J ; 167(2): 150-159.e1, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24439975

ABSTRACT

BACKGROUND AND PURPOSE: Automated measurements of electrocardiographic (ECG) intervals are widely used by clinicians for individual patient diagnosis and by investigators in population studies. We examined whether clinically significant systematic differences exist in ECG intervals measured by current generation digital electrocardiographs from different manufacturers and whether differences, if present, are dependent on the degree of abnormality of the selected ECGs. METHODS: Measurements of RR interval, PR interval, QRS duration, and QT interval were made blindly by 4 major manufacturers of digital electrocardiographs used in the United States from 600 XML files of ECG tracings stored in the US FDA ECG warehouse and released for the purpose of this study by the Cardiac Safety Research Consortium. Included were 3 groups based on expected QT interval and degree of repolarization abnormality, comprising 200 ECGs each from (1) placebo or baseline study period in normal subjects during thorough QT studies, (2) peak moxifloxacin effect in otherwise normal subjects during thorough QT studies, and (3) patients with genotyped variants of congenital long QT syndrome (LQTS). RESULTS: Differences of means between manufacturers were generally small in the normal and moxifloxacin subjects, but in the LQTS patients, differences of means ranged from 2.0 to 14.0 ms for QRS duration and from 0.8 to 18.1 ms for the QT interval. Mean absolute differences between algorithms were similar for QRS duration and QT intervals in the normal and in the moxifloxacin subjects (mean ≤6 ms) but were significantly larger in patients with LQTS. CONCLUSIONS: Small but statistically significant group differences in mean interval and duration measurements and means of individual absolute differences exist among automated algorithms of widely used, current generation digital electrocardiographs. 
Measurement differences, including QRS duration and the QT interval, are greatest for the most abnormal ECGs.


Subject(s)
Algorithms , Electrocardiography/instrumentation , Heart Conduction System/physiology , Heart Rate/physiology , Signal Processing, Computer-Assisted , Adult , Equipment Design , Female , Humans , Male , Middle Aged , Reproducibility of Results
4.
J Electrocardiol ; 46(6): 528-34, 2013.
Article in English | MEDLINE | ID: mdl-23948522

ABSTRACT

BACKGROUND: ECG detection of ST-segment elevation myocardial infarction (STEMI) in the presence of left bundle-branch block (LBBB) is challenging because of ST deviation from the altered conduction. The purpose of this study was to introduce a new algorithm for STEMI detection in LBBB and compare its performance to three existing algorithms. METHODS: Data for the study group (143 with acute MI and 239 controls) were drawn from multiple sources. ECGs were selected by computer interpretation of LBBB. The acute MI reference standard was the hospital discharge diagnosis. Automated measurements came from the Philips DXL algorithm. Three existing algorithms were compared: (1) the Sgarbossa criteria, (2) the Selvester 10% RS criteria, and (3) the Smith 25% S-wave criteria. The new algorithm uses an ST threshold based on QRS area. All algorithms share the concordant ST-elevation and anterior ST-depression criteria from the Sgarbossa score; the difference lies in the threshold for discordant ST elevation. The Sgarbossa, Selvester, Smith, and Philips discordant ST-elevation criteria are (1) ST elevation ≥ 500 µV, (2) ST elevation ≥ 10% of |S|-|R| plus STEMI limits, (3) ST elevation ≥ 25% of the S-wave amplitude, and (4) ST elevation ≥ 100 µV + 1050 µV/Ash * QRS area. The Smith S-wave and Philips QRS-area criteria were tested using both a single-lead and a 2-lead requirement. Algorithm performance was measured by sensitivity, specificity, and positive likelihood ratio (LR+). RESULTS: Algorithm performance can be organized into bands of similar sensitivity and specificity, ranging from the Sgarbossa score ≥ 3 with the lowest sensitivity and highest specificity (13.3% and 97.9%) to the Selvester 10% rule with the highest sensitivity and lowest specificity (30.1% and 93.2%). The Smith S-wave and Philips QRS-area algorithms were in the middle band, with sensitivity and specificity of (20.3%, 94.9%) and (23.8%, 95.8%), respectively.
CONCLUSION: As the difference between the Sgarbossa score ≥ 3 and the other algorithms for STEMI in LBBB shows, a discordant ST-elevation criterion improves sensitivity for detection but also reduces specificity. For applications of automated STEMI detection that require higher sensitivity, the Selvester algorithm is the better choice. For applications that require a low false-positive rate, such as relying on the algorithm for prehospital activation of the cardiac catheterization laboratory for urgent PCI, it may be better to use the 2-lead Philips QRS-area or Smith 25% S-wave algorithm.
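The per-lead discordant ST-elevation cutoffs of the Sgarbossa and Smith rules quoted above are simple enough to state in code. A sketch of those two thresholds only; the full rules also include the concordant-elevation and anterior-depression criteria, lead counting, and measurement details not shown here:

```python
def discordant_st_positive(st_uv, s_amp_uv):
    """Per-lead discordant ST-elevation checks for STEMI detection in LBBB.
    st_uv: ST elevation in uV; s_amp_uv: depth of the preceding S wave in uV
    (both taken as positive magnitudes).
    Sgarbossa uses a fixed 500-uV cutoff; Smith scales the cutoff with S depth."""
    sgarbossa = st_uv >= 500.0
    smith = s_amp_uv > 0 and st_uv >= 0.25 * s_amp_uv
    return {"sgarbossa": sgarbossa, "smith": smith}
```

The contrast in the abstract follows directly: a deep S wave raises the Smith threshold above Sgarbossa's fixed 500 µV, while a shallow S wave lowers it below.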


Subject(s)
Algorithms , Bundle-Branch Block/complications , Bundle-Branch Block/diagnosis , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Myocardial Infarction/complications , Myocardial Infarction/diagnosis , Diagnosis, Differential , Female , Humans , Male , Reproducibility of Results , Sensitivity and Specificity
5.
J Electrocardiol ; 46(6): 473-9, 2013.
Article in English | MEDLINE | ID: mdl-23871657

ABSTRACT

Although the importance of quality cardiopulmonary resuscitation (CPR) and its link to survival is still emphasized, there has been recent debate about the balance between CPR and defibrillation, particularly for long response times. Defibrillation shocks for ventricular fibrillation (VF) in recently perfused hearts have a high success rate for return of spontaneous circulation (ROSC), but hearts with depleted adenosine triphosphate (ATP) stores have low recovery rates. Since quality CPR has been shown both to slow the degradation process and to restore cardiac viability, a measurement of patient condition used to optimize the timing of defibrillation shocks may improve outcomes compared with time-based protocols. Researchers have proposed numerous predictive features of VF and shockable ventricular tachycardia (VT), computed from the electrocardiogram (ECG) signal, to distinguish the rhythms that convert to spontaneous circulation from those that do not. We evaluated the shock-success prediction performance of thirteen of these features on a single evaluation database comprising recordings from 116 out-of-hospital cardiac arrest patients, collected for a separate study using defibrillators in ambulances and medical centers in 4 European regions and the US between March 2002 and September 2004. A total of 469 shocks preceded by VF or shockable VT episodes were identified in the recordings. Based on the experts' annotation of the post-shock rhythm, the shocks were categorized as resulting in either a pulsatile (ROSC) or non-pulsatile (no-ROSC) rhythm. The features were calculated on a 4-second ECG segment immediately preceding shock delivery. The features examined were: Mean Amplitude, Average Peak-Peak Amplitude, Amplitude Range, Amplitude Spectrum Analysis (AMSA), Peak Frequency, Centroid Frequency, Spectral Flatness Measure (SFM), Energy, Max Power, Centroid Power, Power Spectrum Analysis (PSA), Mean Slope, and Median Slope.
Statistical hypothesis tests (two-tailed t-test and Wilcoxon, 5% significance level) were applied to determine whether the means and medians of these features differed significantly between the ROSC and no-ROSC groups. The ROC curve was computed for each feature, and the area under the curve (AUC) was calculated. Specificity (Sp) with sensitivity (Se) held at 90%, and Se with Sp held at 90%, were also computed. All features showed statistically different mean and median values between the ROSC and no-ROSC groups, with all p-values less than 0.0001. The AUC was >76% for all features. For Sp = 90%, the Se range was 33-45%; for Se = 90%, the Sp range was 49-63%. The features showed good shock-success prediction performance. We believe that a defibrillator employing a clinical decision tool based on these features has the potential to improve overall survival from cardiac arrest.
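Among the listed features, AMSA weights each spectral amplitude by its frequency over a VF-relevant band. A naive-DFT sketch of that idea; real implementations use an FFT with windowing, and the band edges here are illustrative, not the paper's:

```python
import math

def amsa(segment, fs, f_lo=2.0, f_hi=48.0):
    """Amplitude Spectrum Analysis sketch: sum of spectral amplitude times
    frequency over [f_lo, f_hi] Hz, computed with a naive DFT.
    segment: ECG samples (e.g. a 4-second pre-shock window); fs: sample rate."""
    n = len(segment)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n                     # frequency of DFT bin k
        if f_lo <= f <= f_hi:
            re = sum(segment[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(segment[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            amp = 2.0 * math.hypot(re, im) / n   # single-sided amplitude
            total += amp * f
    return total
```

Coarse, higher-amplitude VF (associated with better-perfused myocardium) yields a larger AMSA than fine, low-amplitude VF, which is the basis of its use as a shock-success predictor.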


Subject(s)
Defibrillators, Implantable/statistics & numerical data , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Heart Arrest/mortality , Heart Arrest/prevention & control , Outcome Assessment, Health Care/methods , Survival Analysis , Algorithms , Diagnosis, Computer-Assisted/statistics & numerical data , Electrocardiography/statistics & numerical data , Heart Arrest/diagnosis , Humans , Outcome Assessment, Health Care/statistics & numerical data , Prevalence , Prognosis , Reproducibility of Results , Risk Factors , Sensitivity and Specificity , Treatment Outcome
6.
J Electrocardiol ; 45(6): 561-5, 2012.
Article in English | MEDLINE | ID: mdl-22995382

ABSTRACT

BACKGROUND: Interpretation of a patient's 12-lead ECG frequently involves comparison with a previously recorded ECG. Automated serial ECG comparison can be helpful not only for noting significant ECG changes but also for improving the single-ECG interpretation: corrections from the previous ECG are carried forward by the serial comparison algorithm when measurements do not change significantly. METHODS: A sample of patients from three hospitals was collected, with two or more 12-lead ECGs from each patient, yielding 233 serial comparisons from 143 patients. 41% of patients had two ECGs and 59% had more than two. The ECGs came from a difficult population as measured by ECG abnormalities: 197/233 abnormal, 11/233 borderline, 14/233 otherwise-normal, and 11/233 normal. ECGs were processed with the Philips DXL algorithm and then, in time order for each patient, with the Philips serial comparison algorithm. To measure the accuracy of interpretation and serial change, an expert cardiologist corrected the ECGs in stages: the first ECG was corrected and used as the reference for the second, the second was then corrected and used as the reference for the third, and so on. At each stage, the serial comparison algorithm compared an unedited ECG to an earlier edited ECG. Interpretation accuracy was measured by comparing the algorithm to the cardiologist on a statement-by-statement basis. The effect of serial comparison was measured by the sum of interpretive statement mismatches between the algorithm and the cardiologist. Statement mismatches were measured in two ways: (1) exact match and (2) match within the same diagnostic category. RESULTS: The cardiologist used 910 statements over 233 ECGs, an average of 3.9 statements per ECG with a mode of 4. When automated serial comparison was used, the total number of exact statement mismatches decreased by 29% and the total number of same-category statement mismatches decreased by 47%.
CONCLUSION: Automated serial comparison improves interpretation accuracy in addition to its main role of noting differences between ECGs.


Subject(s)
Algorithms , Arrhythmias, Cardiac/diagnosis , Artificial Intelligence , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Pattern Recognition, Automated/methods , Humans , Reproducibility of Results , Sensitivity and Specificity
7.
J Electrocardiol ; 42(6): 522-6, 2009.
Article in English | MEDLINE | ID: mdl-19608194

ABSTRACT

Electrocardiographic (ECG) monitoring plays an important role in the management of patients with atrial fibrillation (AF). An automated real-time AF detection algorithm is an integral part of ECG monitoring during AF therapy, and monitoring is required before and after antiarrhythmic drug therapy and surgical procedures to confirm the success of AF therapy. This article reports our experience in developing a real-time AF monitoring algorithm and techniques to eliminate false-positive AF alarms. We start with an algorithm based on R-R intervals. This algorithm uses a Markov modeling approach to calculate an R-R Markov score, which reflects the relative likelihood of observing a sequence of R-R intervals during AF episodes versus outside them. The AF algorithm is enhanced by adding atrial activity analysis: P-R interval variability and a P-wave morphology similarity measure are used in addition to the R-R Markov score in classification. A hysteresis counter is applied to eliminate short AF segments, reducing false AF alarms for better suitability in a monitoring environment. A large ambulatory Holter database (n = 633) was used for algorithm development, and the publicly available MIT-BIH AF database (n = 23) was used for validation; this validation database allowed us to compare our algorithm's performance with previously published algorithms. Although R-R irregularity is the main characteristic and strongest discriminator of AF rhythm, by adding atrial activity analysis and techniques to eliminate very short AF episodes, we achieved 92% sensitivity and 97% positive predictive value in detecting AF episodes, and 93% sensitivity and 98% positive predictive value in quantifying AF segment duration.
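An R-R Markov score of the kind described above is a log-likelihood ratio over symbolized R-R interval transitions. A sketch with hypothetical bin edges and transition matrices; the published model's states and probabilities are not given here, so these numbers are placeholders:

```python
import math

def rr_markov_score(rr_ms, af_trans, nsr_trans, bin_edges=(600, 800, 1000)):
    """Log-likelihood ratio of an R-R interval sequence under an AF vs a
    non-AF first-order Markov model. A positive score favors AF.
    af_trans / nsr_trans: transition-probability matrices over interval bins
    (illustrative placeholders; real matrices are trained on annotated data)."""
    def sym(rr):
        # Map an interval in ms to a discrete bin index
        for i, edge in enumerate(bin_edges):
            if rr < edge:
                return i
        return len(bin_edges)
    syms = [sym(r) for r in rr_ms]
    return sum(math.log(af_trans[a][b] / nsr_trans[a][b])
               for a, b in zip(syms, syms[1:]))
```

With a near-uniform AF matrix and a strongly diagonal sinus-rhythm matrix, an erratic interval sequence scores positive and a regular one scores negative, which is the discrimination the score provides.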


Subject(s)
Algorithms , Atrial Fibrillation/diagnosis , Diagnosis, Computer-Assisted/methods , Electrocardiography, Ambulatory/methods , Software , Computer Systems , Humans , Reproducibility of Results , Sensitivity and Specificity , Software Design
8.
Ann Noninvasive Electrocardiol ; 14 Suppl 1: S3-8, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19143739

ABSTRACT

BACKGROUND: Commonly used techniques for QT measurement that identify the T-wave end using amplitude thresholds or the tangent method are sensitive to baseline drift and to variations of terminal T-wave shape, and commonly underestimate or overestimate the "true" QT interval. METHODS: To find the end of the T wave, the new Philips QT interval measurement algorithms use the distance from an ancillary line drawn from the peak of the T wave to a point beyond the expected inflection point at the end of the T wave. We have adapted and optimized modifications of this basic approach for three different ECG application areas: resting diagnostic, ambulatory Holter, and in-hospital patient monitoring. The Philips DXL resting diagnostic algorithm uses an alpha-trimming technique and a measure of central tendency to determine the median QT value of the eight most reliable leads. In ambulatory Holter ECG analysis, generally only two or three channels are available, so QT is measured on a root-mean-square vector-magnitude signal. Finally, real-time in-hospital QT measurement is among the most challenging application areas; the Philips real-time QT interval measurement algorithm employs features from both the Philips DXL 12-lead and ambulatory Holter QT algorithms, with further enhancements. RESULTS: The diagnostic 12-lead algorithm has been tested against the gold-standard measurement database established by the CSE group, with results surpassing the industrial ECG measurement accuracy standards. Holter and monitoring algorithm performance on the PhysioNet QT database was similar to the manual measurements of two cardiologists. CONCLUSION: The three variations of the QT measurement algorithm we developed are suitable for diagnostic 12-lead, Holter, and patient monitoring applications.
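The ancillary-line idea can be sketched as follows: the T-wave end is taken as the sample with the greatest perpendicular distance from a chord drawn from the T peak to a point beyond the expected T end. This shows the geometric core only; the production algorithms add lead selection, alpha-trimming, and noise handling:

```python
import math

def t_wave_end(sig, t_peak, t_after):
    """Estimate the T-end sample index as the point between t_peak and
    t_after with maximum perpendicular distance from the chord joining
    (t_peak, sig[t_peak]) and (t_after, sig[t_after]).
    Sketch of the distance-to-line method only."""
    x1, y1 = float(t_peak), sig[t_peak]
    x2, y2 = float(t_after), sig[t_after]
    denom = math.hypot(x2 - x1, y2 - y1)
    best, best_d = t_peak, -1.0
    for x in range(t_peak + 1, t_after):
        # Point-to-line distance for (x, sig[x]) against the chord
        d = abs((y2 - y1) * x - (x2 - x1) * sig[x] + x2 * y1 - y2 * x1) / denom
        if d > best_d:
            best, best_d = x, d
    return best
```

Because the maximum-distance point tracks the bend where the descending T limb meets the baseline, the method is less sensitive to baseline drift and terminal T-wave shape than fixed amplitude thresholds.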


Subject(s)
Electrocardiography/methods , Algorithms , Electrocardiography/standards , Electrocardiography, Ambulatory , Heart Rate , Humans , Monitoring, Physiologic , Rest , Signal Processing, Computer-Assisted
9.
J Electrocardiol ; 41(6): 466-73, 2008.
Article in English | MEDLINE | ID: mdl-18954606

ABSTRACT

Reduced-lead electrocardiographic systems are currently a widely accepted medical technology used in a number of applications. They provide increased patient comfort and superior performance in arrhythmia and ST monitoring, and they have unique and compelling advantages over traditional multichannel monitoring lead systems. However, the design and development of reduced-lead systems create numerous technical challenges. This article summarizes the major technical challenges commonly encountered in lead reconstruction for reduced-lead systems. We discuss the effects of basis-lead and target-lead selection, the differences between interpolated and extrapolated leads, the database dependency of the reconstruction coefficients, and approaches to quantitative performance evaluation, and we provide a comparison of different lead systems. In conclusion, existing reduced-lead systems differ significantly in their technical, practical, and clinical trade-offs. Understanding the technical limitations, strengths, and trade-offs of these systems will hopefully guide future research.


Subject(s)
Electrocardiography/instrumentation , Electrocardiography/trends , Electrodes/trends , Forecasting , Internationality , Reproducibility of Results , Sensitivity and Specificity
10.
J Electrocardiol ; 41(6): 546-52, 2008.
Article in English | MEDLINE | ID: mdl-18817921

ABSTRACT

A 12-lead electrocardiogram (ECG) reconstructed from a reduced subset of leads is desirable in continued arrhythmia and ST monitoring for less tangled wires and increased patient comfort. However, the impact of a reconstructed 12-lead ECG on clinical ECG diagnosis has not been studied thoroughly. This study compares recorded and reconstructed 12-lead diagnostic ECG interpretation with 2 commonly used configurations: reconstruct precordial leads V(2), V(3), V(5), and V(6) from V(1) and V(4), or reconstruct V(1), V(3), V(4), and V(6) from V(2) and V(5). Limb leads are recorded in both configurations. A total of 1785 ECGs were randomly selected from a large database of 50,000 ECGs consecutively collected from 2 teaching hospitals. ECGs with extreme artifact and paced rhythm were excluded. Manual ECG annotations by 2 cardiologists were categorized and used in testing. The Philips resting 12-lead ECG algorithm was used to generate computer measurements and interpretations for comparison. Results were compared for both arrhythmia and morphology categories, with high-prevalence interpretations including atrial fibrillation, anterior myocardial infarct, right bundle-branch block, left bundle-branch block, left atrial enlargement, and left ventricular hypertrophy. Sensitivity and specificity were calculated for each reconstruction configuration in these arrhythmia and morphology categories. Compared with recorded 12-leads, the V(2),V(5) lead configuration shows weakness in interpretations where V(1) is important, such as atrial arrhythmia, atrial enlargement, and bundle-branch blocks. The V(1),V(4) lead configuration shows decreased sensitivity in detection of anterior myocardial infarct, left bundle-branch block (LBBB), and left ventricular hypertrophy (LVH). In conclusion, reconstructed precordial leads are not equivalent to recorded leads for clinical ECG diagnoses, especially in ECGs presenting rhythm and morphology abnormalities. In addition, significant accuracy reduction in ECG interpretation is not strongly correlated with waveform differences between reconstructed and recorded 12-lead ECGs.
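Reconstruction coefficients for a derived lead are typically fit by least squares on a training database, which is also the source of the database dependency noted in the companion article above. A self-contained sketch solving the normal equations by Gaussian elimination; the lead data below are illustrative, not clinical values:

```python
def fit_reconstruction_coeffs(basis, target):
    """Least-squares coefficients c such that target[t] ~= sum_j c[j]*basis[j][t].
    basis: list of recorded-lead sample lists (e.g. V1 and V4);
    target: samples of the lead to be derived (e.g. V3).
    Solves the normal equations with Gaussian elimination (partial pivoting)."""
    m, n = len(basis), len(target)
    A = [[sum(basis[i][t] * basis[j][t] for t in range(n)) for j in range(m)]
         for i in range(m)]
    b = [sum(basis[i][t] * target[t] for t in range(n)) for i in range(m)]
    for col in range(m):                          # forward elimination
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * m                            # back substitution
    for r in range(m - 1, -1, -1):
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c]
                                for c in range(r + 1, m))) / A[r][r]
    return coeffs
```

Coefficients fit on one population transfer imperfectly to another, which is why reconstructed leads can diverge from recorded leads precisely in the abnormal ECGs where the diagnosis matters most.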


Subject(s)
Arrhythmias, Cardiac/diagnosis , Diagnostic Errors/prevention & control , Electrocardiography/instrumentation , Electrocardiography/methods , Humans , Reproducibility of Results , Sensitivity and Specificity
11.
J Electrocardiol ; 41(1): 8-14, 2008.
Article in English | MEDLINE | ID: mdl-18191652

ABSTRACT

The details of digital recording and computer processing of the 12-lead electrocardiogram (ECG) remain a source of confusion for many health care professionals. A better understanding of the design and performance tradeoffs inherent in electrocardiograph design may lead to better-quality ECG recording and better ECG interpretation. This paper serves as a tutorial, from an engineering point of view, for those who are new to the field of ECG and for clinicians who want a better understanding of the engineering tradeoffs involved. A problem arises when the benefit of various electrocardiograph features is widely understood while the cost, or the tradeoffs, are not. An electrocardiograph is divided into 2 main components: the patient module for ECG signal acquisition, and the main unit for ECG processing, which holds the main processor, fast printer, and display. The low-level ECG signal from the body is amplified and converted to a digital signal for further computer processing. The electrocardiogram is processed for display by user-selectable filters that reduce various artifacts. A high-pass filter attenuates very-low-frequency baseline sway or wander, a low-pass filter attenuates high-frequency muscle artifact, and a notch filter attenuates interference from alternating-current power. Although the target artifact is reduced in each case, the ECG signal is also distorted slightly by the applied filter: the low-pass filter attenuates high-frequency components of the ECG such as sharp R waves, and a high-pass filter can cause ST-segment distortion, for instance. Good skin preparation and electrode placement reduce artifacts and eliminate the need for routine use of these filters.
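The filter trade-offs described above can be illustrated with first-order IIR filters. These are sketches only; diagnostic electrocardiographs use higher-order, often linear-phase designs to limit the ST-segment distortion noted in the tutorial:

```python
import math

def lowpass(sig, fs, fc):
    """First-order IIR low-pass: attenuates high-frequency muscle artifact,
    but also blunts sharp R waves. fc is the -3 dB cutoff in Hz."""
    rc = 1.0 / (2.0 * math.pi * fc)
    alpha = (1.0 / fs) / (rc + 1.0 / fs)
    out, y = [], sig[0]
    for x in sig:
        y += alpha * (x - y)
        out.append(y)
    return out

def highpass(sig, fs, fc):
    """First-order IIR high-pass: removes baseline wander, but can distort
    low-frequency content such as the ST segment."""
    rc = 1.0 / (2.0 * math.pi * fc)
    a = rc / (rc + 1.0 / fs)
    out, y, prev = [], 0.0, sig[0]
    for x in sig:
        y = a * (y + x - prev)
        prev = x
        out.append(y)
    return out
```

Passing a constant (DC) signal through each filter shows the intended behavior: the low-pass leaves it untouched while the high-pass removes it entirely, just as baseline offset is removed from an ECG.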


Subject(s)
Diagnosis, Computer-Assisted/instrumentation , Diagnosis, Computer-Assisted/methods , Electrocardiography/instrumentation , Electrocardiography/methods , Electronics, Medical , Signal Processing, Computer-Assisted/instrumentation , Equipment Design
12.
J Electrocardiol ; 40(6 Suppl): S103-10, 2007.
Article in English | MEDLINE | ID: mdl-17993306

ABSTRACT

QT surveillance of neonatal patients, and especially premature infants, may be important because of the potential for concomitant exposure to QT-prolonging medications and because of the possibility of hereditary QT prolongation (long-QT syndrome), which is implicated in the pathogenesis of approximately 10% of sudden infant death syndrome cases. In-hospital automated continuous QT interval monitoring for neonatal and pediatric patients may be beneficial but is difficult because of high heart rates; inverted, biphasic, or low-amplitude T waves; noisy signal; and the limited number of electrocardiogram (ECG) leads available. Based on our previous work on an automated adult QT interval monitoring algorithm, we further enhanced and expanded the algorithm for the neonatal and pediatric patient population. This article presents results from an evaluation of the new algorithm in neonatal patients. Neonatal monitoring ECGs (n = 66; admission age range, birth to 2 weeks) were collected from the neonatal intensive care units of 2 major teaching hospitals in the United States. Each digital recording was at least 10 minutes long with a sampling rate of 500 samples per second. Special handling of high heart rates was implemented, and threshold values were adjusted specifically for neonatal ECG. The ECGs studied were divided into a development/training data set (TRN), with 24 recordings from hospital 1, and a testing data set (TST), with 42 recordings from both hospital 1 (n = 16) and hospital 2 (n = 26). Each ECG recording was manually annotated for QT interval in a 15-second period by 2 cardiologists. Mean and standard deviation of the difference (algorithm minus cardiologist), regression slope, and correlation coefficient were used to describe algorithm accuracy. Despite the technical problems due to noisy recordings, a high fraction (approximately 80%) of the ECGs studied was measurable by the algorithm.
Mean and standard deviation of the error were both low (TRN = -3 +/- 8 milliseconds; TST = 1 +/- 20 milliseconds); regression slope (TRN = 0.94; TST = 0.83) and correlation coefficients (TRN = 0.96; TST = 0.85) (P < .0001) were fairly high. Performance on the TST was similar to that on the TRN with the exception of 2 cases. These results confirm that automated continuous QT interval monitoring in the neonatal intensive care setting is feasible and accurate and may lead to earlier recognition of the "vulnerable" infant.


Subject(s)
Algorithms , Critical Care/methods , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Long QT Syndrome/diagnosis , Humans , Infant, Newborn , Reproducibility of Results , Sensitivity and Specificity
13.
J Electrocardiol ; 39(4 Suppl): S123-7, 2006 Oct.
Article in English | MEDLINE | ID: mdl-16920145

ABSTRACT

QT interval measurement in the patient monitoring environment is receiving much interest because of the potential for proarrhythmic effects from both cardiac and noncardiac drugs. The American Heart Association and American Association of Critical-Care Nurses practice standards for ECG monitoring in hospital settings now recommend frequent monitoring of the QT interval when patients are started on a potentially proarrhythmic drug. We developed an algorithm to continuously measure the QT interval in real time in the patient monitoring setting; this study reports our experience in developing and testing it. Compared with resting ECG analysis, real-time ECG monitoring presents a number of challenges: significantly greater amounts of muscle and motion artifact, increased baseline wander, a varied number and placement of ECG leads, and the need for trending and for alarm generation when QT interval prolongation is detected. We have used several techniques to address these challenges. In contiguous 15-second time windows, we average the signal of tightly clustered normal beats detected by a real-time arrhythmia-monitoring algorithm to minimize the impact of artifact. Baseline wander is reduced by zero-phase high-pass filtering and subtraction of isoelectric points, as determined by median signal values in a localized region. We compute a root-mean-squared ECG waveform from all available leads and use a novel technique to measure the QT interval. We have tested this algorithm against standard and proprietary ECG databases. Our real-time QT interval measurement algorithm proved to be stable, accurate, and able to track changing QT values.
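The root-mean-squared combination of available leads mentioned above can be sketched directly:

```python
import math

def rms_waveform(leads):
    """Combine the available ECG leads into a single root-mean-square
    waveform, sample by sample. Because each sample is squared, the result
    emphasizes T-wave energy regardless of per-lead polarity, which helps
    QT measurement when lead count and placement vary."""
    n = len(leads)
    return [math.sqrt(sum(lead[t] ** 2 for lead in leads) / n)
            for t in range(len(leads[0]))]
```

A design note: measuring QT on one combined waveform rather than per lead sidesteps the problem of choosing which of a varying set of monitoring leads has the clearest T wave.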


Subject(s)
Algorithms , Arrhythmias, Cardiac/diagnosis , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Monitoring, Physiologic/methods , Computer Systems , Humans , Long QT Syndrome/diagnosis , Reproducibility of Results , Retrospective Studies , Sensitivity and Specificity
14.
Am J Cardiol ; 98(1): 88-92, 2006 Jul 01.
Article in English | MEDLINE | ID: mdl-16784927

ABSTRACT

QT-interval measurements have clinical importance for the electrocardiographic recognition of congenital and acquired heart disease and as markers of arrhythmogenic risk during drug therapy, but software algorithms for the automated measurement of electrocardiographic durations differ among manufacturers and evolve within manufacturers. To compare automated QT-interval measurements, simultaneous paired electrocardiograms were obtained in 218 subjects using digital recorders from the 2 major manufacturers of electrocardiographs used in the United States and analyzed by 2 currently used versions of each manufacturer's software. The 4 automated QT and QTc durations were examined by repeated-measures analysis of variance with post hoc testing. Significantly larger automated QT-interval measurements were found with the most recent software of each manufacturer (12- to 24-ms mean differences from earlier algorithms). Systematic differences in QT measurements between manufacturers were significant for the earlier algorithms (11-ms mean difference) but not for the most recent software (1.3-ms mean difference). Similar relations were found for the rate-corrected QTc, with large mean differences between earlier and later algorithms (15 to 26 ms). Although there was a <2-ms mean difference between the most recent automated QTc measurements of the 2 manufacturers, the SD of the difference was 12 ms. In conclusion, reference values for automated electrocardiographic intervals and serial QT measurements vary among electrocardiographs and analysis software. Technically based differences in automated QT and QTc measurements must be considered when these intervals are used as markers of heart disease, prognosis, or arrhythmogenic risk.
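The abstract above does not state which rate-correction formula each manufacturer's software applies; the two textbook corrections most commonly used are Bazett and Fridericia, shown here for reference:

```python
import math

def qtc_bazett(qt_ms, rr_ms):
    """Bazett correction: QTc = QT / sqrt(RR), with RR in seconds."""
    return qt_ms / math.sqrt(rr_ms / 1000.0)

def qtc_fridericia(qt_ms, rr_ms):
    """Fridericia correction: QTc = QT / cbrt(RR), with RR in seconds."""
    return qt_ms / (rr_ms / 1000.0) ** (1.0 / 3.0)
```

Both formulas leave QT unchanged at RR = 1000 ms (60 bpm) and diverge at faster or slower rates, so differences in the underlying automated QT measurement propagate directly, and nonlinearly, into QTc.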


Subject(s)
Algorithms , Electrocardiography/instrumentation , Heart Rate/physiology , Evaluation Studies as Topic , Humans , Reaction Time , Regression Analysis
15.
J Electrocardiol ; 35 Suppl: 95-103, 2002.
Article in English | MEDLINE | ID: mdl-12539105

ABSTRACT

A new pacemaker pulse detection and paced electrocardiogram (ECG) rhythm classification algorithm with high sensitivity and positive predictive value has been implemented as part of the Philips Medical Systems (Andover, MA) ECG analysis program. The detection algorithm was developed on 1,108 paced ECGs with 16,029 individual pulse locations. It operates on 12-lead, 500-sample-per-second, 150-Hz low-pass-filtered ECG signals. Even after low-pass filtering, the algorithm distinguishes between pacemaker pulses and the narrow QRS complexes of newborns. An individual pulse detection sensitivity of 99.7% and a positive predictive value of 99.5% were obtained by the multi-lead detector. A 10-second, 12-lead ECG database (n = 13,155) of paced (n = 2,190), non-paced adult (n = 8,070), non-paced pediatric (n = 1,209), and "noisy" ECGs with spike noise and muscle artifact (n = 1,686) was assembled and annotated by two readers. The overall performance in identifying an ECG as paced (with any pacing present) versus non-paced is 97.2% sensitivity and 99.9% specificity. The paced ECGs were classified by the mode in which the beats were paced: atrial, ventricular, A-V dual, or dual/inhibited chamber (ie, combinations of atrial, ventricular, and dual) pacing, and an algorithm was developed for paced rhythm classification. The performance results show that accurate and robust pacemaker pulse detection and classification can be done in software on diagnostic-bandwidth ECG signals.
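A key discriminator between a pacer pulse and a narrow newborn QRS is width: even after 150-Hz low-pass filtering, a pacing spike returns to baseline within a few samples at 500 sps, whereas even the narrowest QRS does not. A slope-and-width sketch with illustrative thresholds, not the published detector:

```python
def pacer_spikes(sig, fs, slope_thresh, max_width_ms=4.0):
    """Flag sample indices where a steep deflection returns near baseline
    within max_width_ms -- far narrower than any QRS complex.
    slope_thresh is in signal units per sample; values are illustrative."""
    max_w = max(1, int(round(fs * max_width_ms / 1000.0)))
    spikes = []
    i = 1
    while i < len(sig) - max_w:
        rise = abs(sig[i] - sig[i - 1])
        if rise >= slope_thresh:
            base = sig[i - 1]
            # Spike if the signal is back near baseline within max_w samples
            if any(abs(sig[i + k] - base) < 0.25 * rise for k in range(1, max_w + 1)):
                spikes.append(i)
                i += max_w
                continue
        i += 1
    return spikes
```

On a synthetic trace containing a one-sample spike and a slow triangular "QRS", only the spike is flagged: the QRS fails the slope test and, even at its peak, would fail the narrow-width test.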


Subject(s)
Algorithms , Electrocardiography , Pacemaker, Artificial , Adult , Databases, Factual , Heart Rate , Humans , Infant, Newborn , Predictive Value of Tests , Sensitivity and Specificity , Software