ABSTRACT
There are two widely used methods to measure the cardiac cycle and obtain heart rate measurements: the electrocardiogram (ECG) and the photoplethysmogram (PPG). The sensors used in these methods have gained great popularity in wearable devices, which have extended cardiac monitoring beyond the hospital environment. However, continuous monitoring of ECG signals via mobile devices is challenging, as it requires users to keep their fingers pressed on the device during data collection, making it unfeasible in the long term. The PPG, on the other hand, does not have this limitation. However, the medical knowledge needed to diagnose cardiac anomalies from the PPG signal is limited by the need for familiarity with it, since the ECG is studied and used in the literature as the gold standard. To minimize this problem, this work proposes a method, PPG2ECG, that uses the correlation between the domains of the PPG and ECG signals to infer the waveform of the ECG signal from the PPG signal. PPG2ECG maps between the two domains by applying a set of convolution filters, learning to transform a PPG input signal into an ECG output signal using a U-Net inception neural network architecture. We assessed our proposed method using two evaluation strategies based on personalized and generalized models and achieved mean error values of 0.015 and 0.026, respectively. Our method overcomes the limitations of previous approaches by providing an accurate and feasible way to monitor ECG signals continuously through PPG signals. The short distances between the inferred ECG and the original ECG demonstrate the feasibility and potential of our method to assist in the early identification of heart diseases.
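As an illustration of the kind of domain-mapping network described above, the sketch below shows a minimal 1-D U-Net-style encoder-decoder with inception-like convolution blocks that maps a PPG window to an ECG window. All layer sizes, kernel widths, and the 256-sample window length are illustrative assumptions, not the published PPG2ECG configuration.

```python
# Minimal sketch of a 1-D U-Net with inception-style blocks (assumed sizes).
import torch
import torch.nn as nn

class InceptionBlock1d(nn.Module):
    """Parallel 1-D convolutions with different kernel sizes, concatenated."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 3
        self.b1 = nn.Conv1d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b2 = nn.Conv1d(in_ch, branch_ch, kernel_size=7, padding=3)
        self.b3 = nn.Conv1d(in_ch, out_ch - 2 * branch_ch, kernel_size=15, padding=7)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(torch.cat([self.b1(x), self.b2(x), self.b3(x)], dim=1))

class PPG2ECGUNet(nn.Module):
    """Encoder-decoder with one skip connection, in the spirit of a U-Net."""
    def __init__(self):
        super().__init__()
        self.enc1 = InceptionBlock1d(1, 24)
        self.down = nn.MaxPool1d(2)
        self.enc2 = InceptionBlock1d(24, 48)
        self.up = nn.Upsample(scale_factor=2, mode="linear", align_corners=False)
        self.dec1 = InceptionBlock1d(48 + 24, 24)   # skip connection from enc1
        self.out = nn.Conv1d(24, 1, kernel_size=1)  # 1-channel ECG waveform

    def forward(self, ppg):
        e1 = self.enc1(ppg)
        e2 = self.enc2(self.down(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.out(d1)

ppg = torch.randn(8, 1, 256)      # batch of hypothetical 256-sample PPG windows
ecg_hat = PPG2ECGUNet()(ppg)      # predicted ECG, same length as the input
print(ecg_hat.shape)              # torch.Size([8, 1, 256])
```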
Subject(s)
Electrocardiography, Heart Rate, Neural Networks, Computer, Photoplethysmography, Signal Processing, Computer-Assisted, Humans, Electrocardiography/methods, Photoplethysmography/methods, Heart Rate/physiology, Algorithms, Wearable Electronic Devices
ABSTRACT
Efforts have been made to diagnose and predict the course of different neurodegenerative diseases through various imaging techniques. Tauopathies in particular, in which the tau polypeptide is a key participant in molecular pathogenesis, have shown significantly increased morbidity and mortality in the human population over the years. However, the standard approach to exploring the phenomenon of neurodegeneration in tauopathies has not been directed at understanding the molecular mechanism that causes the aberrant polymeric and fibrillar behavior of the tau protein, which forms neurofibrillary tangles that replace neuronal populations in the hippocampal and cortical regions. The main objective of this work is to implement a novel quantification protocol for different biomarkers based on the pathological post-translational modifications undergone by tau in the brains of patients with tauopathies. The quantification protocol consists of an adaptation of the U-Net neural network architecture. We used the resulting segmentation masks to quantify the combined fluorescent signals of the different molecular changes tau undergoes in neurofibrillary tangles. The quantification treats each neurofibrillary tangle as an individual study structure, separated from the rest of the quadrant present in the images. This allows us to detect unconventional interaction signals between the different biomarkers. Our algorithm provides information that will be fundamental to understanding the pathogenesis of dementias with another computational analysis approach in subsequent studies.
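The sketch below illustrates one plausible way such segmentation masks could feed the fluorescence quantification step: each connected component of a binary U-Net mask is treated as one neurofibrillary tangle, and the mean intensity of each biomarker channel is measured inside it. The channel names (pT231, AT8) and all array shapes are hypothetical stand-ins, not the authors' protocol.

```python
# Per-tangle fluorescence quantification from a binary segmentation mask (sketch).
import numpy as np
from skimage.measure import label, regionprops

def quantify_tangles(mask, channels):
    """mask: HxW binary array from the segmentation network.
    channels: dict of biomarker name -> HxW fluorescence image (same size)."""
    labeled = label(mask > 0)                      # one label per tangle
    results = []
    for region in regionprops(labeled):
        rows, cols = region.coords[:, 0], region.coords[:, 1]
        entry = {"tangle_id": region.label, "area_px": int(region.area)}
        for name, img in channels.items():
            entry[f"mean_{name}"] = float(np.mean(img[rows, cols]))
        results.append(entry)
    return results

# Toy example with random data in place of real immunofluorescence images.
rng = np.random.default_rng(0)
mask = rng.random((128, 128)) > 0.95
channels = {"pT231": rng.random((128, 128)), "AT8": rng.random((128, 128))}
for tangle in quantify_tangles(mask, channels)[:3]:
    print(tangle)
```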
ABSTRACT
Segmenting vessels in brain images is a critical step for many medical interventions and for diagnosing illnesses. Recent advances in artificial intelligence provide better models, achieving a human-like level of expertise in many tasks. In this paper, we present a new approach to segmenting Time-of-Flight Magnetic Resonance Angiography (TOF-MRA) images that relies on fewer training samples than state-of-the-art methods. We propose a conditional generative adversarial network with an adapted generator based on a concatenated U-Net with a residual U-Net architecture (UUr-cGAN) to carry out blood vessel segmentation in TOF-MRA images, relying on data augmentation to diminish the drawback of having few volumes available for training the model, while preventing overfitting through regularization techniques. The proposed model achieves, on average, 89.52% precision and an 87.23% Dice score in the cross-validated brain blood vessel segmentation experiments, which is similar to other state-of-the-art methods while using considerably fewer training samples. UUr-cGAN extracts important features from small datasets while preventing overfitting compared to other CNN-based methods and still achieves relatively good performance in image segmentation tasks such as extracting brain blood vessels from TOF-MRA.
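The following is a minimal sketch of a conditional-GAN training step of the kind described above, in which the discriminator sees the TOF-MRA input concatenated with either the reference or the generated mask. The toy 2-D networks, the loss weighting, and the tensor shapes are assumptions and do not reproduce the published UUr-cGAN.

```python
# One cGAN training step for segmentation (assumed losses and weighting).
import torch
import torch.nn.functional as F

def cgan_step(generator, discriminator, g_opt, d_opt, mra, mask, adv_weight=0.01):
    bce = F.binary_cross_entropy_with_logits

    # Discriminator update: real (MRA, reference mask) vs. fake (MRA, generated mask).
    with torch.no_grad():
        fake = torch.sigmoid(generator(mra))
    d_real = discriminator(torch.cat([mra, mask], dim=1))
    d_fake = discriminator(torch.cat([mra, fake], dim=1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: fool the discriminator + match the reference mask voxel-wise.
    logits = generator(mra)
    d_pred = discriminator(torch.cat([mra, torch.sigmoid(logits)], dim=1))
    g_loss = bce(logits, mask) + adv_weight * bce(d_pred, torch.ones_like(d_pred))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Toy usage with tiny 2-D stand-ins for the full generator/discriminator.
g = torch.nn.Conv2d(1, 1, 3, padding=1)
d = torch.nn.Conv2d(2, 1, 3, padding=1)
g_opt, d_opt = torch.optim.Adam(g.parameters()), torch.optim.Adam(d.parameters())
mra, mask = torch.randn(2, 1, 64, 64), (torch.rand(2, 1, 64, 64) > 0.5).float()
print(cgan_step(g, d, g_opt, d_opt, mra, mask))
```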
ABSTRACT
Chagas disease, considered a neglected tropical pathology, is caused by the parasite Trypanosoma cruzi and is responsible for thousands of deaths per year. Since many infected people can remain asymptomatic, a fast diagnosis is necessary for proper intervention. Microscopic observation of the parasite in blood samples is the gold-standard method to diagnose Chagas disease in its initial phase; however, this is a time-consuming procedure that requires expert intervention, and there is currently no efficient method to perform this task automatically. Therefore, we propose an efficient residual convolutional neural network, named Res2Unet, to perform semantic segmentation of Trypanosoma cruzi parasites, with an active contour loss and improved residual connections whose design is based on Heun's method for solving ordinary differential equations. The model was trained on a dataset of 626 blood sample images and tested on a dataset of 207 images. Validation experiments report that our model achieved a Dice coefficient score of 0.84, a precision value of 0.85, and a recall value of 0.82, outperforming current state-of-the-art methods. Since Chagas disease is a severe and silent illness, our computational model may help health care providers give a prompt diagnosis of this worldwide affliction.
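One way to read "residual connections based on Heun's method" is as a predictor-corrector residual block, sketched below: the block evaluates its convolutional function once at the input and once at the Euler-predicted state, then averages the two slopes. This is an interpretation with assumed channel counts, not the authors' Res2Unet code.

```python
# Heun-style (predictor-corrector) residual block, as a hedged interpretation.
import torch
import torch.nn as nn

class HeunResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # f is shared between the predictor and corrector evaluations,
        # mirroring the single slope function of Heun's ODE scheme.
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        k1 = self.f(x)            # predictor: slope at the current state
        k2 = self.f(x + k1)       # corrector: slope at the predicted state
        return self.act(x + 0.5 * (k1 + k2))   # Heun update: x + (k1 + k2) / 2

x = torch.randn(1, 16, 64, 64)    # toy feature map with assumed 16 channels
print(HeunResidualBlock(16)(x).shape)
```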
Subject(s)
Chagas Disease, Parasites, Animals, Chagas Disease/diagnosis, Disease Progression, Humans, Image Processing, Computer-Assisted/methods, Neural Networks, Computer
ABSTRACT
The diagnosis of breast cancer at an early stage is essential for successful treatment. Detection can be performed in several ways, the most common being through mammograms. The projections acquired in this type of examination are directly affected by the composition of the breast, whose density can be similar to that of suspicious masses, making the identification of malignant lesions a challenge. In this article, we propose a computer-aided detection (CAD) system to aid in the diagnosis of masses in digitized mammograms using a model based on the U-Net, allowing specialists to monitor the lesion over time. Unlike most studies, we propose the use of an entire database of digitized mammograms, including normal, benign, and malignant cases. Our research is divided into four stages: (1) pre-processing, with the removal of irrelevant information, contrast enhancement of 7,989 images from the Digital Database for Screening Mammography (DDSM), and extraction of regions of interest; (2) data augmentation, with horizontal mirroring, zooming, and resizing of images; (3) training, testing six U-Net-based models with different characteristics; (4) testing, evaluating four metrics: accuracy, sensitivity, specificity, and Dice index. The tested models obtained different results regarding the assessed parameters. The best model achieved a sensitivity of 92.32%, a specificity of 80.47%, an accuracy of 85.95%, a Dice index of 79.39%, and an AUC of 86.40%. Even when using a full database without case-selection bias, the results obtained demonstrate that a complete database can provide useful knowledge to the CAD expert.
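For reference, the sketch below computes the four pixel-level metrics named above (sensitivity, specificity, accuracy, Dice) from a binary predicted mask and a binary ground-truth mask; the random toy masks stand in for real DDSM segmentations and are not the study's data.

```python
# Pixel-level evaluation metrics from binary masks (illustrative sketch).
import numpy as np

def mass_detection_metrics(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)       # mass pixels correctly detected
    tn = np.sum(~pred & ~truth)     # background pixels correctly rejected
    fp = np.sum(pred & ~truth)      # background flagged as mass
    fn = np.sum(~pred & truth)      # mass pixels missed
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }

rng = np.random.default_rng(1)
pred = rng.random((256, 256)) > 0.5     # toy predicted mask
truth = rng.random((256, 256)) > 0.5    # toy ground-truth mask
print(mass_detection_metrics(pred, truth))
```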