Results 1 - 20 of 91
1.
ACS Nano ; 17(21): 21506-21517, 2023 Nov 14.
Article in English | MEDLINE | ID: mdl-37877266

ABSTRACT

Mechanistic probing of surface potential changes arising from dynamic charge transport is the key to understanding and engineering increasingly complex nanoscale materials and devices. Spatiotemporal averaging in conventional heterodyne detection-based Kelvin probe force microscopy (KPFM) inherently limits its time resolution, causing an irretrievable loss of transient response and higher-order harmonics. Addressing this, we report a wavelet transform (WT)-based methodology capable of quantifying the sub-ms charge dynamics and probing the elusive transient response. The feedback-free, open-loop wavelet transform KPFM (OL-WT-KPFM) technique harnesses the WT's ability to simultaneously extract spatial and temporal information from the photodetector signal to provide a dynamic mapping of surface potential, capacitance gradient, and dielectric constant at a temporal resolution 3 orders of magnitude higher than the lock-in time constant. We further demonstrate the method's applicability to explore the surface-photovoltage-induced sub-ms hole-diffusion transient in bismuth oxyiodide semiconductor. The OL-WT-KPFM concept is readily applicable to commercial systems and can provide the underlying basis for the real-time analysis of transient electronic and electrochemical properties.
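
To make the demodulation idea concrete, the sketch below shows how a complex Morlet continuous wavelet transform can recover a time-resolved amplitude at a modulation frequency without lock-in averaging. It is a generic illustration of WT-based demodulation, not the authors' OL-WT-KPFM pipeline; the sampling rate, modulation frequency and toy transient are invented for the example.

```python
import numpy as np
import pywt

# Illustrative parameters (assumptions, not the paper's values).
fs = 1e6                        # sampling rate (Hz)
t = np.arange(0, 5e-3, 1 / fs)  # 5 ms photodetector record
f_mod = 70e3                    # modulation frequency (Hz)
# Toy signal: a sub-ms amplitude transient riding on the modulation.
x = (1 + 0.3 * np.exp(-t / 5e-4)) * np.sin(2 * np.pi * f_mod * t)

# Choose the CWT scale whose centre frequency lands on f_mod.
wavelet = 'cmor1.5-1.0'                  # complex Morlet
fc = pywt.central_frequency(wavelet)
scale = fc * fs / f_mod
coeffs, _ = pywt.cwt(x, [scale], wavelet, sampling_period=1 / fs)
envelope = np.abs(coeffs[0])    # time-resolved amplitude at f_mod
```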

2.
Comput Methods Programs Biomed ; 241: 107780, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37651816

ABSTRACT

BACKGROUND AND OBJECTIVE: Quantitative measures extracted from the ventricular fibrillation (VF) waveform reflect the metabolic state of the myocardium and are associated with survival outcome. The quality of chest compressions delivered during cardiopulmonary resuscitation is also linked with survival. The aim of this research was to explore the viability and effectiveness of a thoracic impedance (TI)-based chest compression (CC) guidance system to control CC depth within individual subjects and influence VF waveform properties. METHODS: This porcine investigation comprised two protocols. CC were delivered in 2 min episodes at a constant rate of 110 CC min-1. Subject-specific CC depth was controlled using a TI-thresholding system in which CC were performed according to the amplitude (ZRMS, 0.125 to 1.250 Ω) of a band-passed TI signal (ZCC). Protocol A was a retrospective analysis of a 12-porcine study to characterise the response of two VF waveform metrics, amplitude spectrum area (AMSA) and mean slope (MS), to varying CC quality. Protocol B was a prospective 12-porcine study to determine whether changes in VF waveform metrics due to CC quality were associated with defibrillation outcome. RESULTS: Protocol A: A directly proportional relationship was observed between ZRMS and the CC depth applied within each subject (r = 0.90; p < 0.001). A positive relationship was observed between ZRMS and both AMSA (p < 0.001) and MS (p < 0.001), where greater TI thresholds were associated with greater waveform metrics. Protocol B: MS was associated with return of spontaneous circulation (ROSC) following defibrillation (odds ratio = 2.657; p = 0.043). CONCLUSION: TI-thresholding was an effective way to control CC depth within subjects. Compressions applied according to higher TI thresholds evoked an increase in AMSA and MS. The response in MS due to deeper CC resulted in a greater incidence of ROSC compared with shallow chest compressions.
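
For readers unfamiliar with the two waveform metrics, the sketch below computes AMSA and mean slope from a VF segment in their commonly published forms; the 2-48 Hz band and the windowing are typical literature choices, not necessarily those used in this study.

```python
import numpy as np

def amsa(vf_segment, fs, f_lo=2.0, f_hi=48.0):
    """Amplitude spectrum area: sum of spectral amplitude x frequency over
    a VF band. The 2-48 Hz band is a common literature choice."""
    n = len(vf_segment)
    spectrum = np.abs(np.fft.rfft(vf_segment * np.hanning(n))) * 2 / n
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.sum(spectrum[band] * freqs[band])

def mean_slope(vf_segment, fs):
    """Mean absolute slope of the waveform (signal units per second)."""
    return np.mean(np.abs(np.diff(vf_segment))) * fs
```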


Subject(s)
Amsacrine , Ventricular Fibrillation , Swine , Animals , Ventricular Fibrillation/therapy , Electric Impedance , Prospective Studies , Retrospective Studies
3.
Sensors (Basel) ; 23(13)2023 Jun 25.
Article in English | MEDLINE | ID: mdl-37447749

ABSTRACT

Impedance cardiography (ICG) is a low-cost, non-invasive technique that enables the clinical assessment of haemodynamic parameters such as cardiac output and stroke volume (SV). Conventional ICG recordings are taken from the patient's thorax. However, access to ICG vital signs from the upper-arm brachial artery (as an associated surrogate) can enable user-convenient wearable armband sensor devices, an attractive option for gathering ICG trend-based indicators of general health that offers particular advantages in ambulatory long-term monitoring settings. This study considered upper-arm ICG and control Thorax-ICG recordings from 15 healthy subject cases. A prefiltering stage applied a third-order Savitzky-Golay finite impulse response (FIR) filter to the raw ICG signals. Then, a multi-stage wavelet-based denoising strategy on a beat-by-beat (BbyB) basis, supported by a recursive signal-averaging optimal thresholding adaptation algorithm for Arm-ICG signals, was investigated for robust signal quality enhancement. The performance of the BbyB ICG denoising was evaluated for each case using a 700 ms frame centred on the heartbeat ICG pulse. This frame was extracted from a 600-beat ensemble signal-averaged ICG and was used as the noiseless signal reference vector (gold-standard frame). Furthermore, in each subject case, enhanced Arm-ICG and Thorax-ICG beats above a correlation threshold of 0.95 with the noiseless vector enabled analysis of the beat inclusion rate (BIR%), yielding an average of 80.9% for Arm-ICG and 100% for Thorax-ICG, and of the BbyB accuracy and precision of the ICG waveform feature metrics A, B, C and VET, yielding error rates (ER%) of 0.83%, 11.1%, 3.99% and 5.2% for Arm-ICG and 0.41%, 3.82%, 1.66% and 1.25% for Thorax-ICG, respectively. Hence, the functional relationship between ICG metrics within and between the arm and thorax recording modes could be characterised, and linear regression (Arm-ICG vs. Thorax-ICG) trends could be analysed. Overall, this study found that recursive averaging with a 36-beat buffer size was the best Arm-ICG BbyB denoising process, with an average Arm-ICG time-metric error rate below 3.3%. It was also found that arm SV versus thorax SV had a linear regression coefficient of determination (R2) of 0.84.
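
The prefiltering and denoising stages map onto standard tools. The sketch below chains a third-order Savitzky-Golay filter with soft wavelet thresholding as one plausible beat-by-beat pass; the window length, wavelet family, decomposition level and universal threshold are our assumptions, not the study's recursive signal-averaging adaptation.

```python
import numpy as np
import pywt
from scipy.signal import savgol_filter

def denoise_icg_beat(beat, wavelet='db6', level=4):
    """One illustrative beat-by-beat pass: third-order Savitzky-Golay
    prefilter followed by soft wavelet thresholding with the universal
    threshold. Window length, wavelet and level are assumptions."""
    pre = savgol_filter(beat, window_length=21, polyorder=3)
    coeffs = pywt.wavedec(pre, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(pre)))       # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(beat)]
```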


Subject(s)
Cardiography, Impedance , Hemodynamics , Humans , Cardiac Output/physiology , Stroke Volume/physiology , Cardiography, Impedance/methods , Hemodynamics/physiology , Monitoring, Ambulatory
4.
J Electrocardiol ; 76: 17-21, 2023.
Article in English | MEDLINE | ID: mdl-36395631

ABSTRACT

BACKGROUND: Mobile Cardiac Outpatient Telemetry (MCOT) can be used to screen high-risk patients for atrial fibrillation (AF). These devices rely primarily on algorithmic detection of AF events, which are then stored and transmitted to a clinician for review. It is critical that the positive predictive value (PPV) of MCOT-detected AF is high, and this often leads to reduced sensitivity as device manufacturers try to limit false positives. OBJECTIVE: The purpose of this study was to design a two-stage classifier using artificial intelligence (AI) to improve the PPV of MCOT-detected atrial fibrillation episodes whilst maintaining high detection sensitivity. METHODS: A low-complexity, RR-interval-based AF classifier was paired with a deep convolutional neural network (DCNN) to create a two-stage classifier. The DCNN was limited in size to allow it to be embedded on MCOT devices. The DCNN was trained on 491,727 ECGs from a proprietary database and contained 128,612 parameters, requiring only 158 kB of storage. The performance of the two-stage classifier was then assessed using publicly available datasets. RESULTS: The sensitivity of AF detected by the low-complexity classifier was high across all datasets (>93%), but the PPV was poor (<76%). Subsequent analysis by the DCNN increased episode PPV substantially across all datasets (>11%), with only a minor loss in sensitivity (<5%). This increase in PPV was due to a decrease in the number of false positive detections. Further analysis showed that DCNN processing was only required on around half of the analysis windows, offering a significant computational saving over using the DCNN as a single-stage classifier. CONCLUSION: DCNNs can be combined with existing MCOT classifiers to increase the PPV of detected AF episodes. This reduces the review burden for physicians and can be achieved with only a modest decrease in sensitivity.
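
The two-stage idea can be summarised in a few lines: a cheap RR-interval screen clears most windows, and only flagged windows reach the DCNN. The sketch below is a schematic of that gating logic with placeholder models and an invented threshold, not the paper's classifiers.

```python
import numpy as np

def rr_irregularity_score(rr_intervals):
    """Normalised mean successive RR difference: a crude AF surrogate."""
    rr = np.asarray(rr_intervals, dtype=float)
    return np.mean(np.abs(np.diff(rr))) / np.mean(rr)

def two_stage_af(ecg_window, rr_intervals, dcnn_predict, rr_threshold=0.08):
    # Stage 1: low-complexity screen; windows below threshold are cleared
    # without invoking the DCNN (the source of the compute saving).
    if rr_irregularity_score(rr_intervals) < rr_threshold:
        return False
    # Stage 2: the DCNN confirms or rejects the candidate, raising PPV.
    return dcnn_predict(ecg_window) >= 0.5
```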


Subject(s)
Atrial Fibrillation , Deep Learning , Humans , Atrial Fibrillation/diagnosis , Electrocardiography , Artificial Intelligence , Neural Networks, Computer
5.
Biosens Bioelectron ; 223: 115016, 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36586151

ABSTRACT

Cardiovascular disease (CVD) is among the leading causes of death globally, which calls for rapid detection and treatment. Biosensing devices are used for the diagnosis of cardiovascular disease at the point-of-care (POC), with lateral flow assays (LFAs) being particularly useful. However, due to their low sensitivity, most LFAs have difficulty detecting low analyte concentrations. Breakthroughs in artificial intelligence (AI) and image processing have reduced this detection constraint and improved disease diagnosis. This paper presents a novel patch-selection approach that generates images from the test line and control line of LFA images, analyses the image features, and uses them to reliably predict and classify LFA images with classification algorithms, specifically convolutional neural networks (CNNs). The generated images were supplied as input data to the CNN model, a strong model for extracting crucial information from images, to classify the target images and provide risk stratification levels to medical professionals. With this approach, the classification model achieved about 98% accuracy and, per the literature review, this approach has not been investigated previously. These promising results show the proposed method may be useful for identifying a wide variety of diseases and conditions, including cardiovascular problems.


Subject(s)
Biosensing Techniques , Cardiovascular Diseases , Humans , Artificial Intelligence , Cardiovascular Diseases/diagnosis , Point-of-Care Systems , Biomarkers
6.
Sensors (Basel) ; 22(19)2022 Sep 24.
Article in English | MEDLINE | ID: mdl-36236340

ABSTRACT

Sudden cardiac death (SCD) risk can be reduced by early detection of short-lived and transient cardiac arrhythmias using long-term electrocardiographic (ECG) monitoring. Early detection of ventricular arrhythmias can reduce the risk of SCD by allowing appropriate interventions. Long-term continuous ECG monitoring using a non-invasive armband-based wearable device is an appealing solution for detecting early heart rhythm abnormalities. However, there is a paucity of understanding regarding the number of bipolar ECG electrode pairs and their best axial orientation around the left mid-upper arm circumference (MUAC) for such devices. This study addresses the question of the best axial orientation of ECG bipolar electrode pairs around the left MUAC in non-invasive armband-based wearable devices for the early detection of heart rhythm abnormalities. A total of 18 subjects with similar BMI values in the WASTCArD arm-ECG database were selected to assess arm-ECG bipolar lead quality using proposed metrics of relative (normalised) signal strength, arm-ECG detection performance of the main ECG waveform event (QRS) and heart-rate variability (HRV) in six derived bipolar arm-ECG lead sensor pairs around the armband circumference with regularly spaced axis angles (at 30° steps). The analysis revealed that the angular range from -30° to +30° of arm-lead sensor-pair axis orientation around the arm, including the 0° axis (which is co-planar with the chest plane), provided the best orientation on the arm for reasonably good QRS detection, presenting the highest median sensitivity (Se) of 93.3%, a median precision (PPV) of 99.6%, an HRV RMS correlation (r) of 0.97 and a coefficient of determination (R2) of 0.95 with HRV gold-standard values measured in the standard Lead-I ECG.
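
As an illustration of how such derived bipolar leads can be formed, the sketch below differences diametrically opposite electrodes from a ring of 12 armband electrodes spaced 30° apart, yielding six bipolar axes; the electrode indexing is an assumption for the example, not the WASTCArD montage.

```python
import numpy as np

def derived_bipolar_leads(electrodes):
    """electrodes: array of shape (12, n_samples), unipolar signals from a
    ring of armband electrodes spaced 30 degrees apart. Differencing
    diametrically opposite electrodes yields six bipolar axes at
    0, 30, ..., 150 degrees."""
    n_axes = electrodes.shape[0] // 2
    return {f"{k * 30}deg": electrodes[k] - electrodes[k + n_axes]
            for k in range(n_axes)}
```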


Subject(s)
Arm , Wearable Electronic Devices , Arrhythmias, Cardiac/diagnosis , Electrocardiography , Electrodes , Humans
7.
Artif Intell Med ; 132: 102381, 2022 10.
Article in English | MEDLINE | ID: mdl-36207087

ABSTRACT

BACKGROUND: The application of artificial intelligence to interpret the electrocardiogram (ECG) has predominantly involved knowledge-engineered rule-based algorithms, which are widely used today in clinical practice. However, over recent decades there has been a steady increase in the number of research studies using machine learning (ML) to read or interrogate ECG data. OBJECTIVE: The aim of this study was to review the use of ML with ECG data using a time series approach. METHODS: Papers addressing ML and the ECG were identified by systematically searching databases that archive papers from January 1995 to October 2019. Time series analysis was used to study the changing popularity of the different types of ML algorithms used with ECG data over the past two decades. Finally, a meta-analysis of how various ML techniques performed for various diagnostic classifications was undertaken. RESULTS: A total of 757 papers were identified. The use of ML with ECG data started to increase sharply (p < 0.001) from 2012. Healthcare applications, especially heart abnormality classification, were the most common application of ML using ECG data (p < 0.001), while emerging applications include using ML and the ECG for biometrics and driver drowsiness detection. The support vector machine was the technique of choice for a decade. However, since 2018, deep learning has been trending upwards and is likely to be the leading technique in the coming few years. Despite the accuracy paradox, accuracy was the most frequently used metric in the studies reviewed, followed by sensitivity, specificity, F1 score and then AUC. CONCLUSION: Applying ML to ECG data has shown promise. Data scientists and physicians should collaborate to ensure that clinical knowledge is applied appropriately and informs the design of ML algorithms. Data scientists also need to consider knowledge-guided feature engineering and the explicability of the ML algorithm, and to be transparent about algorithm performance in order to appropriately calibrate human-AI trust. Future work is required to enhance ML performance in ECG classification.


Subject(s)
Artificial Intelligence , Benchmarking , Algorithms , Electrocardiography , Humans , Machine Learning , Time Factors
8.
J Electrocardiol ; 74: 154-157, 2022.
Article in English | MEDLINE | ID: mdl-36283253

ABSTRACT

Deep Convolutional Neural Networks (DCNNs) have been shown to provide improved performance over traditional heuristic algorithms for the detection of arrhythmias from ambulatory ECG recordings. However, these DCNNs have primarily been trained and tested on device-specific databases with standardized electrode positions and uniform sampling frequencies. This work explores the possibility of training a DCNN for Atrial Fibrillation (AF) detection on a database of single-lead ECG rhythm strips extracted from resting 12-lead ECGs, and then testing its performance on recordings from ambulatory ECG devices with different recording leads and sampling frequencies. We developed an extensive proprietary resting 12-lead ECG dataset of 549,211 patients, randomly split into a training set of 494,289 patients and a testing set of the remaining 54,922 patients. We trained a 34-layer convolutional DCNN to detect AF and other arrhythmias on this dataset. The DCNN was then validated on two Physionet databases commonly used to benchmark automated ECG algorithms: (1) the MIT-BIH Arrhythmia Database and (2) the MIT-BIH Atrial Fibrillation Database (AFDB). Validation was performed following the EC57 guidelines, with performance assessed by gross episode and duration sensitivity and positive predictive value (PPV). Finally, validation was also performed on a selection of rhythm strips from an ambulatory ECG patch annotated by a committee of board-certified cardiologists. On the MIT-BIH database, the DCNN achieved a sensitivity of 100% and a PPV of 84% in detecting episodes of AF, and 100% sensitivity and 94% PPV in quantifying AF episode duration. On AFDB, the DCNN achieved a sensitivity of 94% and a PPV of 98% in detecting episodes of AF, and 98% sensitivity and 100% PPV in quantifying AF episode duration. On the patch database, the DCNN demonstrated performance closely comparable to that of a cardiologist. The results indicate that DCNN models can learn features that generalize between resting 12-lead and ambulatory ECG recordings, allowing DCNNs to be device agnostic for detecting arrhythmias from single-lead ECG recordings and enabling a range of clinical applications.
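
Episode-level scoring of this kind can be sketched as overlap matching between reference and detected episodes. The snippet below is a simplified illustration in the spirit of EC57 gross episode sensitivity and PPV; the standard's full scoring rules are more detailed.

```python
def episode_metrics(ref_episodes, det_episodes):
    """Simplified gross episode sensitivity/PPV. Episodes are (start, end)
    tuples in seconds; an episode counts as matched if it overlaps any
    episode in the other set."""
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]
    matched_ref = sum(any(overlaps(r, d) for d in det_episodes) for r in ref_episodes)
    matched_det = sum(any(overlaps(d, r) for r in ref_episodes) for d in det_episodes)
    se = matched_ref / len(ref_episodes) if ref_episodes else float('nan')
    ppv = matched_det / len(det_episodes) if det_episodes else float('nan')
    return se, ppv
```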


Subject(s)
Atrial Fibrillation , Humans , Atrial Fibrillation/diagnosis , Electrocardiography , Rest
9.
Sci Rep ; 12(1): 14575, 2022 08 26.
Article in English | MEDLINE | ID: mdl-36028561

ABSTRACT

Public access automated external defibrillators (AEDs) represent emergency medical devices that may be used by untrained lay-persons in a life-critical event. As such their usability must be confirmed through simulation testing. In 2020 the novel coronavirus caused a global pandemic. In order to reduce the spread of the virus, many restrictions such as social distancing and travel bans were enforced. Usability testing of AEDs is typically conducted in-person, but due to these restrictions, other usability solutions must be investigated. Two studies were conducted, each with 18 participants: (1) an in-person usability study of an AED conducted in an office space, and (2) a synchronous remote usability study of the same AED conducted using video conferencing software. Key metrics associated with AED use, such as time to turn on, time to place pads and time to deliver a shock, were assessed in both studies. There was no difference in time taken to turn the AED on in the in-person study compared to the remote study, but the time to place electrode pads and to deliver a shock were significantly lower in the in-person study than in the remote study. Overall, the results of this study indicate that remote user testing of public access defibrillators may be appropriate in formative usability studies for determining understanding of the user interface.


Subject(s)
COVID-19 , Cardiopulmonary Resuscitation , Defibrillators/classification , Out-of-Hospital Cardiac Arrest/therapy , Physical Distancing , Cardiopulmonary Resuscitation/methods , Cardiopulmonary Resuscitation/standards , Defibrillators/standards , Defibrillators/statistics & numerical data , Humans , Pandemics , Time Factors , User-Centered Design , User-Computer Interface
10.
J Electrocardiol ; 73: 157-161, 2022.
Article in English | MEDLINE | ID: mdl-35853754

ABSTRACT

In this commentary paper, we discuss the use of the electrocardiogram to help clinicians make diagnostic and patient referral decisions in acute care settings. The paper discusses the factors that are likely to contribute to the variability and noise in the clinical decision-making process for catheterization lab activation. These factors include variable competence in reading ECGs, intra-/inter-rater reliability, the lack of standard ECG training, varying ECG machine and filter settings, cognitive biases (such as automation bias, the tendency to agree with the computer-aided or AI diagnosis), the order in which information is received, and tiredness or decision fatigue, as well as ECG artefacts such as signal noise or lead misplacement. We also discuss potential research questions and tools that could be used to mitigate this 'noise' and improve the quality of ECG-based decision making.


Subject(s)
Diagnosis, Computer-Assisted , Electrocardiography , Clinical Decision-Making , Decision Making , Humans , Reproducibility of Results
11.
Sci Rep ; 12(1): 6545, 2022 04 21.
Article in English | MEDLINE | ID: mdl-35449196

ABSTRACT

Microvascular haemodynamic alterations are associated with coronary artery disease (CAD). The conjunctival microcirculation can easily be assessed non-invasively; however, it has not previously been explored in clinical algorithms aimed at identifying patients with CAD. This case-control study involved 66 post-myocardial infarction patients and 66 gender-matched healthy controls. Haemodynamic properties of the conjunctival microcirculation were assessed with a validated iPhone and slit lamp-based imaging tool, extracted with semi-automated software and compared between groups. Biomarkers implicated in the development of CAD were assessed in combination with the conjunctival microcirculatory parameters. The conjunctival blood vessel parameters and biomarkers were used to derive an algorithm to aid in screening patients for CAD. Conjunctival blood velocity measured in combination with the blood biomarkers N-terminal pro-brain natriuretic peptide and adiponectin had an area under the receiver operator characteristic curve (AUROC) of 0.967, a sensitivity of 93.0% and a specificity of 91.5% for CAD. This study demonstrated that the novel algorithm, combining conjunctival blood vessel haemodynamic properties with blood-based biomarkers, could be used as a potential screening tool for CAD and should be validated for potential utility in asymptomatic individuals.
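
A screening algorithm of this form is typically a logistic model over the measured parameters, scored by AUROC. The sketch below illustrates that pattern with placeholder data; it is not the study's fitted model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Placeholder data: 66 cases and 66 controls with three predictors
# (conjunctival blood velocity, NT-proBNP, adiponectin).
rng = np.random.default_rng(0)
X = rng.random((132, 3))
y = np.r_[np.ones(66, dtype=int), np.zeros(66, dtype=int)]

model = LogisticRegression().fit(X, y)
auroc = roc_auc_score(y, model.predict_proba(X)[:, 1])  # cf. the reported 0.967
```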


Subject(s)
Algorithms , Conjunctiva , Biomarkers , Blood Flow Velocity , Case-Control Studies , Conjunctiva/blood supply , Humans , Microcirculation
12.
Front Physiol ; 13: 760000, 2022.
Article in English | MEDLINE | ID: mdl-35399264

ABSTRACT

Introduction: Representation learning allows artificial intelligence (AI) models to learn useful features from large, unlabelled datasets. This can reduce the need for labelled data across a range of downstream tasks. It was hypothesised that wave segmentation would be a useful form of electrocardiogram (ECG) representation learning. In addition to reducing labelled data requirements, segmentation masks may provide a mechanism for explainable AI. This study details the development and evaluation of a Wave Segmentation Pretraining (WaSP) application. Materials and Methods: Pretraining: A non-AI-based ECG signal and image simulator was developed to generate ECGs and wave segmentation masks. U-Net models were trained to segment waves from synthetic ECGs. Dataset: The raw sample files from the PTB-XL dataset were downloaded, and each ECG was also plotted as an image. Fine-tuning and evaluation: A hold-out approach was used with a 60:20:20 training/validation/test set split. The encoder portions of the U-Net models were fine-tuned to classify PTB-XL ECGs for two tasks: sinus rhythm (SR) vs atrial fibrillation (AF), and myocardial infarction (MI) vs normal ECGs. The fine-tuning was repeated without pretraining and the results were compared. Explainable AI: an example pipeline combining AI-derived segmentation masks and a rule-based AF detector was developed and evaluated. Results: WaSP consistently improved model performance on downstream tasks for both ECG signals and images. The difference between non-pretrained models and models pretrained for wave segmentation was particularly marked for ECG image analysis. A selection of segmentation masks is shown. An AF detection algorithm comprising both AI and rule-based components performed less well than end-to-end AI models, but its outputs are proposed to be highly explainable. An example output is shown. Conclusion: WaSP using synthetic data and labels allows AI models to learn useful features for downstream ECG analysis with real-world data. Segmentation masks provide an intermediate output that may facilitate confidence calibration in the context of end-to-end AI. It is possible to combine AI-derived segmentation masks and rule-based diagnostic classifiers for explainable ECG analysis.
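
The fine-tuning step, reusing a segmentation-pretrained encoder for classification, can be sketched as follows in PyTorch. The `encoder` attribute, channel width and pooling head are assumptions for illustration, not the WaSP codebase.

```python
import torch.nn as nn

class WaSPClassifier(nn.Module):
    """Wrap a segmentation-pretrained encoder with a classification head
    (e.g. SR vs AF). An `encoder` returning a (N, enc_channels, H, W)
    feature map is an assumption for this sketch."""
    def __init__(self, encoder, enc_channels=512, n_classes=2):
        super().__init__()
        self.encoder = encoder  # weights from wave-segmentation pretraining
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(enc_channels, n_classes),
        )

    def forward(self, x):
        return self.head(self.encoder(x))
```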

13.
Resusc Plus ; 9: 100203, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35146463

ABSTRACT

AIM: Automated external defibrillators (AEDs) use various shock protocols with different characteristics when deployed in pediatric mode. The aim of this study was to assess and compare the safety and efficacy of different AED pediatric protocols using novel experimental approaches. METHODS: Two defibrillation protocols (A and B) were assessed across two studies. Protocol A: escalating (50-75-90 J) defibrillation waveform with higher voltage, shorter duration and equal phase durations. Protocol B: non-escalating (50-50-50 J) defibrillation waveform with lower voltage, longer duration and unequal phase durations. Experiment 1: Isolated shock damage was assessed following shocks to 12 anesthetized pigs. Animals were randomized into two groups, receiving three shocks from Protocol A (50-75-90 J) or B (50-50-50 J). Cardiac function, cardiac troponin I (cTnI), creatine phosphokinase (CPK) and histopathology were analyzed. Experiment 2: Defibrillation safety and efficacy were assessed through shock success, return of spontaneous circulation (ROSC), ST-segment deviation and contractility following 16 randomized shocks from Protocol A or B delivered to 10 anesthetized pigs in ventricular fibrillation (VF). RESULTS: Experiment 1: No clinically meaningful difference in cTnI, CPK, ST-segment deviation, ejection fraction or histopathological damage was observed following defibrillation with either protocol, and no difference was observed between protocols at any timepoint. Experiment 2: All defibrillation types demonstrated shock success and ROSC ≥ 97.5%. Post-ROSC contractility was similar between protocols. CONCLUSIONS: There is no evidence that administration of clinically relevant shock sequences, without experimental confounders, results in significant myocardial damage in this model of pediatric resuscitation. Typical variations in AED pediatric mode settings do not affect defibrillation safety or efficacy.

14.
Sensors (Basel) ; 23(1)2022 Dec 29.
Article in English | MEDLINE | ID: mdl-36616958

ABSTRACT

Inertial sensors are widely used in human motion monitoring. Orientation and position are the two most widely used measurements for motion monitoring. Tracking with the use of multiple inertial sensors is based on kinematic modelling which achieves a good level of accuracy when biomechanical constraints are applied. More recently, there is growing interest in tracking motion with a single inertial sensor to simplify the measurement system. The dead reckoning method is commonly used for estimating position from inertial sensors. However, significant errors are generated after applying the dead reckoning method because of the presence of sensor offsets and drift. These errors limit the feasibility of monitoring upper limb motion via a single inertial sensing system. In this paper, error correction methods are evaluated to investigate the feasibility of using a single sensor to track the movement of one upper limb segment. These include zero velocity update, wavelet analysis and high-pass filtering. The experiments were carried out using the nine-hole peg test. The results show that zero velocity update is the most effective method to correct the drift from the dead reckoning-based position tracking. If this method is used, then the use of a single inertial sensor to track the movement of a single limb segment is feasible.
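
The zero velocity update the study found most effective can be illustrated in one dimension: integrate acceleration to velocity, zero the velocity whenever the sensor is judged stationary, then integrate to position. The stillness test and threshold below are simple stand-ins, not the paper's implementation.

```python
import numpy as np

def zupt_position(acc, fs, still_threshold=0.05):
    """1-D dead reckoning with zero velocity update (ZUPT).
    acc: linear acceleration (m/s^2), gravity already removed.
    The stillness test (|acc| below a threshold) is a simple stand-in
    for more robust stationary detection."""
    dt = 1.0 / fs
    vel = np.zeros(len(acc))
    for i in range(1, len(acc)):
        vel[i] = vel[i - 1] + acc[i] * dt
        if abs(acc[i]) < still_threshold:  # judged stationary: cancel drift
            vel[i] = 0.0
    return np.cumsum(vel) * dt             # position (m)
```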


Subject(s)
Movement , Upper Extremity , Humans , Motion , Biomechanical Phenomena
15.
IEEE J Transl Eng Health Med ; 9: 1900415, 2021.
Article in English | MEDLINE | ID: mdl-34873497

ABSTRACT

Objective: To design and implement an easy-to-use point-of-care (PoC) lateral flow immunoassay (LFA) reader and data analysis system that provides more in-depth quantitative analysis of LFA images than conventional approaches, thereby supporting efficient decision making for potential early risk assessment of cardiovascular disease (CVD). Methods and procedures: A novel end-to-end system was developed, including a portable device with a CMOS camera integrated with optimized illumination and optics to capture the LFA images produced using high-sensitivity C-reactive protein (hsCRP) (concentration level < 5 mg/L). The images were transmitted via WiFi to a back-end server for image analysis and classification. Unlike common image classification approaches based on averaging image intensity over a region of interest (ROI), a novel approach was developed that treats the signal along the sample's flow direction as a time series, removing the need for ROI detection. Long Short-Term Memory (LSTM) networks were deployed for multilevel classification, and features based on Dynamic Time Warping (DTW) and histogram bin counts (HBC) were explored. Results: For the classification of hsCRP, the LSTM outperformed traditional machine learning classifiers with or without DTW, and the HBC features performed best (mean accuracy of 94%) compared with the other features. Application of the proposed method to human plasma also suggests that HBC features from the LFA time series performed better than the ROI mean and raw LFA data. Conclusion: As a proof of concept, the results demonstrate the capability of the proposed framework for quantitative analysis of LFA images and suggest its potential for early risk assessment of CVD. Clinical impact: The hsCRP levels < 5 mg/L were aligned with clinically actionable categories for early risk assessment of CVD. The outcomes demonstrate the real-world applicability of the proposed system for quantitative analysis of LFA images, which is potentially useful for LFA applications beyond those presented in this study.
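
As an example of the feature family that performed best, the sketch below computes normalised histogram-bin-count (HBC) features from an LFA intensity profile treated as a series along the flow direction; the bin count is an assumption, not the study's setting.

```python
import numpy as np

def hbc_features(lfa_profile, n_bins=32):
    """Histogram-bin-count (HBC) features from an LFA intensity profile
    treated as a series along the flow direction. Normalising the counts
    lets strips of different lengths be compared; n_bins is an assumption."""
    counts, _ = np.histogram(lfa_profile, bins=n_bins)
    return counts / counts.sum()
```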


Subject(s)
C-Reactive Protein , Cardiovascular Diseases , Cardiovascular Diseases/diagnosis , Humans , Immunoassay , Machine Learning , Neural Networks, Computer
16.
J Biomed Inform ; 122: 103905, 2021 10.
Article in English | MEDLINE | ID: mdl-34481056

ABSTRACT

Compartment-based infectious disease models that treat the transmission rate (or contact rate) as constant over the course of an epidemic can be limited in how effectively they capture infectious disease dynamics. This study proposed a novel approach based on a dynamic time-varying transmission rate with a control rate governing the speed of disease spread, which may be associated with information related to infectious disease interventions. Integrating multiple sources of data with disease modelling has the potential to improve modelling performance. Taking the global mobility trend of vehicle driving available via Apple Maps as an example, this study explored different ways of processing the mobility trend data and investigated their relationship with the control rate. The proposed method was evaluated on COVID-19 data from six European countries. The results suggest that the proposed model with a dynamic transmission rate improved model fitting and forecasting performance during the early stage of the pandemic. A positive correlation was found between the average daily change in mobility trend and the control rate. The results encourage further development of infectious disease models that incorporate multiple data resources.
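
The core modelling idea, a transmission rate that decays under a control rate, can be sketched with a standard SIR system. The exponential decay form and all parameter values below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta0, c, gamma, N):
    """SIR dynamics with a transmission rate decaying under control rate c."""
    S, I, R = y
    beta_t = beta0 * np.exp(-c * t)          # dynamic transmission rate
    new_infections = beta_t * S * I / N
    return [-new_infections, new_infections - gamma * I, gamma * I]

N = 1e6
y0 = [N - 10, 10, 0]                          # initial S, I, R
t = np.linspace(0, 180, 181)                  # days
solution = odeint(sir, y0, t, args=(0.4, 0.02, 1 / 7, N))
```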


Subject(s)
COVID-19 , Malus , Forecasting , Humans , Pandemics , SARS-CoV-2
17.
Comput Methods Programs Biomed ; 211: 106398, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34563896

ABSTRACT

BACKGROUND AND OBJECTIVE: Cloud computing has the ability to offload processing tasks to remote computing resources. Presently, the majority of biomedical digital signal processing involves a ground-up approach of writing code in a variety of languages. This may reduce the time a researcher or health professional has available to process data, while increasing the barrier to entry for those with little or no software development experience. In this study, we aim to provide a service capable of handling and processing biomedical data via a code-free interface. Furthermore, our solution should support multiple file formats and processing languages while saving user inputs for repeated use. METHODS: A web interface was developed via the Python-based Django framework with the potential to shorten the time taken to create an algorithm, encourage code reuse, and democratise digital signal processing tasks for non-technical users through a code-free user interface. A user can upload data, create an algorithm and download the result. Using discrete functions and multi-lingual scripts (e.g. MATLAB or Python), the user can manipulate data rapidly in a repeatable manner. Multiple data file formats are supported by a decision-based file handler and a user authentication-based storage allocation method. RESULTS: The proposed system has been demonstrated to be effective in handling multiple input data types in various programming languages, including Python and MATLAB. This, in turn, has the potential to reduce currently experienced bottlenecks in cross-platform development of bio-signal processing algorithms. The source code for this system has been made available to encourage reuse. A cloud service for digital signal processing can reduce apparent complexity and abstract away the need to understand the intricacies of signal processing. CONCLUSION: We have introduced a web-based system capable of reducing the barrier to entry for inexperienced programmers. Furthermore, our system is reproducible and scalable for use in a variety of clinical and research fields.
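
At its core, such a service dispatches an uploaded file to a user-selected processing script. The sketch below shows only that dispatch step, with an invented helper; it is not the published system's API.

```python
import subprocess

def run_step(script_cmd, data_path):
    """Run one user-defined processing step as a subprocess and return its
    stdout. `script_cmd` might be, e.g., ["python", "bandpass_filter.py"];
    a MATLAB step would instead be launched via MATLAB's batch interface."""
    result = subprocess.run(script_cmd + [data_path],
                            capture_output=True, text=True, check=True)
    return result.stdout
```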


Subject(s)
Cloud Computing , Software , Algorithms , Programming Languages , Signal Processing, Computer-Assisted
18.
J Electrocardiol ; 69S: 7-11, 2021.
Article in English | MEDLINE | ID: mdl-34548191

ABSTRACT

Automated interpretation of the 12-lead ECG has remained an underpinning interest in decades of research that has seen a diversity of computing applications in cardiology. The application of computers in cardiology began in the 1960s, with early research focusing on the conversion of analogue ECG signals (voltages) to digital samples. Alongside this, software techniques that automated the extraction of wave measurements and provided basic diagnostic statements began to emerge. In the years since then, there have been many significant milestones, including the widespread commercialisation of 12-lead ECG interpretation software, its associated clinical utility and the development of related regulatory frameworks to promote standardised development. In the past few years, the research community has seen a significant rejuvenation in the development of ECG interpretation programs, evident in the large number of studies tackling a variety of automated ECG interpretation problems. This is largely due to two factors: the technical advances, both software and hardware, that have facilitated the broad adoption of modern artificial intelligence (AI) techniques, and the increasing availability of large datasets that support modern AI approaches. In this article we provide a high-level overview of the operation and development of early 12-lead ECG interpretation programs and contrast this with the approaches now seen in emerging AI work. Our overview mainly focuses on differences in how input data are handled prior to generation of the diagnostic statement.


Subject(s)
Cardiology , Deep Learning , Algorithms , Artificial Intelligence , Electrocardiography , Humans
19.
J Electrocardiol ; 69S: 1-6, 2021.
Article in English | MEDLINE | ID: mdl-34340817

ABSTRACT

This paper provides a brief description of how computer programs are used to automatically interpret electrocardiograms (ECGs), together with a discussion of new opportunities. The algorithms typically used in hospitals today are knowledge engineered: a computer programmer manually writes computer code and logical statements which are then used to deduce a possible diagnosis, representing the criteria and knowledge used by clinicians when reading ECGs. This is in contrast to supervised machine learning (ML) approaches, which use large, labelled ECG datasets to induce their own 'rules' for automatically classifying ECGs. Although there are many ML techniques, deep neural networks are being increasingly explored as ECG classification algorithms when trained on large ECG datasets. Whilst this paper presents some of the pros and cons of each of these approaches, perhaps there are opportunities to develop hybridised algorithms that combine both knowledge-driven and data-driven techniques. In this paper, it is pointed out that open ECG data can dramatically influence what international ECG ML researchers focus on and that, ideally, open datasets should align with real-world clinical challenges. In addition, some of the pitfalls and opportunities for ML with ECGs are outlined. A potential opportunity for the ECG community is to provide guidelines to researchers to help guide ECG ML practices. For example, whilst general ML guidelines exist, there is perhaps a need to recommend approaches for 'stress testing' and evaluating ML algorithms for ECG analysis, e.g. testing the algorithm with noisy ECGs and ECGs acquired using common lead and electrode misplacements. This paper provides a primer on ECG ML and discusses some of the key challenges and opportunities.


Subject(s)
Algorithms , Electrocardiography , Exercise Test , Humans , Machine Learning , Neural Networks, Computer
20.
Comput Biol Med ; 136: 104666, 2021 09.
Article in English | MEDLINE | ID: mdl-34315032

ABSTRACT

Electrocardiographic imaging is an imaging modality introduced recently to help visualise the electrical activity of the heart and consequently guide ablation therapy for ventricular arrhythmias. One of the main challenges of this modality is that the electrocardiographic signals recorded at the torso surface are contaminated with noise from different sources, and low-amplitude leads are more affected due to their low peak-to-peak amplitude. In this paper, we studied six datasets from two torso tank experiments (Bordeaux and Utah) to investigate the impact of removing or interpolating these low-amplitude leads on the inverse reconstruction of cardiac electrical activity. Body surface potential maps were calculated using the full set of recorded leads; removing 1, 6, 11, 16 or 21 low-amplitude leads; or interpolating 1, 6, 11, 16 or 21 low-amplitude leads using one of three interpolation methods: Laplacian interpolation, hybrid interpolation or inverse-forward interpolation. The epicardial potential maps and activation time maps were computed from these body surface potential maps and compared with those recorded directly from the heart surface in the torso tank experiments. There was no significant change in the potential maps and activation time maps after removal of up to 11 low-amplitude leads. Laplacian interpolation and hybrid interpolation improved the inverse reconstruction in some datasets and worsened it in the rest. Inverse-forward interpolation of low-amplitude leads improved the reconstruction in two of the six datasets and left it at least unchanged in the others. It was also noticed that, after inverse-forward interpolation, the selected lambda value was closer to the optimum lambda value, i.e. the value giving the inverse solution best correlated with the recorded one.
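
The lambda referred to here is the Tikhonov regularisation parameter. A minimal zero-order Tikhonov inverse solution, one common choice for ECGI, is sketched below; the regularisation order is our assumption, not necessarily the study's.

```python
import numpy as np

def tikhonov_inverse(A, b, lam):
    """Zero-order Tikhonov solution of the ECGI inverse problem:
    x = argmin ||A x - b||^2 + lam^2 ||x||^2,
    with A the forward (heart-to-torso) transfer matrix and b the body
    surface potentials. Sweeping lam and selecting the best value is the
    selection step the abstract refers to."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
```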
