Results 1 - 20 of 53,258
1.
Nat Commun ; 15(1): 4693, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38824154

ABSTRACT

Training large neural networks on big datasets requires significant computational resources and time. Transfer learning reduces training time by pre-training a base model on one dataset and transferring the knowledge to a new model for another dataset. However, current choices of transfer learning algorithms are limited because the transferred models always have to adhere to the dimensions of the base model and cannot easily modify the neural architecture to solve other datasets. On the other hand, biological neural networks (BNNs) are adept at rearranging themselves to tackle completely different problems using transfer learning. Taking advantage of BNNs, we design a dynamic neural network that is transferable to any other network architecture and can accommodate many datasets. Our approach uses raytracing to connect neurons in a three-dimensional space, allowing the network to grow into any shape or size. On the Alcala dataset, our transfer learning algorithm trains the fastest across changing environments and input sizes. In addition, we show that our algorithm also outperforms the state of the art on an EEG dataset. In the future, this network may be considered for implementation on real biological neural networks to decrease power consumption.
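The geometric growth idea in the abstract above (neurons embedded in 3D space, wired by spatial proximity rather than fixed layer dimensions) can be illustrated with a toy sketch. The distance-threshold rule below is our own simplification, not the authors' raytracing procedure, and all names are hypothetical:

```python
import numpy as np

def connect_neurons_3d(positions, radius):
    """Connect every pair of neurons whose Euclidean distance is below
    `radius` -- a crude stand-in for ray-traced connectivity. Growing the
    network then amounts to appending rows to `positions` and re-wiring."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # exclude self-connections on the diagonal
    return (dist < radius) & ~np.eye(len(positions), dtype=bool)

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, size=(50, 3))   # 50 neurons in a unit cube
adj = connect_neurons_3d(pos, radius=0.3)
```

Because connectivity is derived from geometry, the same rule applies unchanged after neurons are added or moved, which is the property that makes such a network architecture-agnostic.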


Subject(s)
Algorithms , Neural Networks, Computer , Humans , Neurons/physiology , Electroencephalography , Machine Learning , Models, Neurological
2.
J Vis ; 24(6): 1, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38829629

ABSTRACT

Computational models of the primary visual cortex (V1) have suggested that V1 neurons behave like Gabor filters followed by simple nonlinearities. However, recent work employing convolutional neural network (CNN) models has suggested that V1 relies on far more nonlinear computations than previously thought. Specifically, unit responses in an intermediate layer of VGG-19 were found to best predict macaque V1 responses to thousands of natural and synthetic images. Here, we evaluated the hypothesis that the poor performance of lower layer units in VGG-19 might be attributable to their small receptive field size rather than to their lack of complexity per se. We compared VGG-19 with AlexNet, which has much larger receptive fields in its lower layers. Whereas the best-performing layer of VGG-19 occurred after seven nonlinear steps, the first convolutional layer of AlexNet best predicted V1 responses. Although the predictive accuracy of VGG-19 was somewhat better than that of standard AlexNet, we found that a modified version of AlexNet could match the performance of VGG-19 after only a few nonlinear computations. Control analyses revealed that decreasing the size of the input images caused the best-performing layer of VGG-19 to shift to a lower layer, consistent with the hypothesis that the relationship between image size and receptive field size can strongly affect model performance. We conducted additional analyses using a Gabor pyramid model to test for nonlinear contributions of normalization and contrast saturation. Overall, our findings suggest that the feedforward responses of V1 neurons can be well explained by assuming only a few nonlinear processing stages.
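Layer-to-neuron predictivity comparisons of this kind are typically run by regressing recorded responses on unit activations and comparing fit quality across layers; a minimal ridge-regression sketch on simulated data (the paper's exact fitting pipeline may differ):

```python
import numpy as np

def fit_ridge(features, responses, alpha=1.0):
    """Closed-form ridge regression mapping unit activations
    (n_images x n_units) to a recorded response vector (n_images,)."""
    n_units = features.shape[1]
    return np.linalg.solve(features.T @ features + alpha * np.eye(n_units),
                           features.T @ responses)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                 # activations of 10 model units
true_w = rng.normal(size=10)
y = X @ true_w + 0.01 * rng.normal(size=200)   # simulated neural responses
w_hat = fit_ridge(X, y, alpha=1e-3)
```

In practice the regression is cross-validated per neuron, and the layer whose features give the highest held-out predictivity is reported as "best-performing".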


Subject(s)
Neural Networks, Computer , Neurons , Animals , Neurons/physiology , Primary Visual Cortex/physiology , Photic Stimulation/methods , Models, Neurological , Macaca , Visual Cortex/physiology , Nonlinear Dynamics
3.
PLoS One ; 19(6): e0301691, 2024.
Article in English | MEDLINE | ID: mdl-38829846

ABSTRACT

Atrial Fibrillation (AF), a type of heart arrhythmia, becomes more common with aging and is associated with an increased risk of stroke and mortality. In light of the urgent need for effective automated AF monitoring, existing methods often fall short in balancing accuracy and computational efficiency. To address this issue, we introduce a framework based on Multi-Scale Dilated Convolution (AF-MSDC), aimed at achieving precise predictions with low cost and high efficiency. By integrating Multi-Scale Dilated Convolution (MSDC) modules, our model is capable of extracting features from electrocardiogram (ECG) datasets across various scales, thus achieving an optimal balance between precision and computational savings. We have developed three MSDC modules to construct the AF-MSDC framework and assessed its performance on renowned datasets, including the MIT-BIH Atrial Fibrillation Database and Physionet Challenge 2017. Empirical results unequivocally demonstrate that our technique surpasses existing state-of-the-art (SOTA) methods in the AF detection domain. Specifically, our model, with only a quarter of the parameters of a Residual Network (ResNet), achieved an impressive sensitivity of 99.45%, specificity of 99.64% (on the MIT-BIH AFDB dataset), and an [Formula: see text] score of 85.63% (on the Physionet Challenge 2017 AFDB dataset). This high efficiency makes our model particularly suitable for integration into wearable ECG devices powered by edge computing frameworks. Moreover, this innovative approach offers new possibilities for the early diagnosis of AF in clinical applications, potentially improving patient quality of life and reducing healthcare costs.
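A dilated convolution spaces its kernel taps `dilation` samples apart, widening the receptive field without adding parameters; stacking several dilation rates yields the multi-scale features an MSDC-style module relies on. A toy NumPy sketch (kernel and rates are made up, not the paper's configuration):

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """'Valid' 1-D dilated convolution: taps of `w` are spaced `dilation`
    samples apart, so a k-tap kernel spans (k-1)*dilation samples."""
    k = len(w)
    span = (k - 1) * dilation
    out = np.zeros(len(x) - span)
    for t in range(len(out)):
        out[t] = sum(w[j] * x[t + j * dilation] for j in range(k))
    return out

def multi_scale_features(x, w, dilations=(1, 2, 4)):
    """Stack the same kernel at several dilation rates, truncated to a
    common length, mimicking a multi-scale dilated convolution block."""
    outs = [dilated_conv1d(x, w, d) for d in dilations]
    n = min(len(o) for o in outs)
    return np.stack([o[:n] for o in outs])

sig = np.sin(np.linspace(0, 8 * np.pi, 64))    # toy ECG stand-in
feats = multi_scale_features(sig, w=np.array([0.25, 0.5, 0.25]))
```

With dilation 1 the operation reduces to an ordinary convolution, which is a useful sanity check.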


Subject(s)
Atrial Fibrillation , Electrocardiography , Neural Networks, Computer , Atrial Fibrillation/diagnosis , Atrial Fibrillation/physiopathology , Humans , Electrocardiography/methods , Algorithms , Databases, Factual
4.
PLoS One ; 19(6): e0304789, 2024.
Article in English | MEDLINE | ID: mdl-38829858

ABSTRACT

Malaria is a deadly disease that is transmitted through mosquito bites. Microscopists use a microscope to examine thin blood smears at high magnification (1000x) to identify parasites in red blood cells (RBCs). Estimating parasitemia is essential in determining the severity of the Plasmodium falciparum infection and guiding treatment. However, this process is time-consuming, labor-intensive, and subject to variation, which can directly affect patient outcomes. In this retrospective study, we compared three methods for measuring parasitemia from a collection of anonymized thin blood smears of patients with Plasmodium falciparum obtained from the Clinical Department of Parasitology-Mycology, National Reference Center (NRC) for Malaria in Paris, France. We first analyzed the impact of the number of field images on parasitemia count using our framework, MALARIS, which features a top-classifier convolutional neural network (CNN). Additionally, we studied the variation between different microscopists using two manual techniques to demonstrate the need for a reliable and reproducible automated system. Finally, we included thin blood smear images from an additional 102 patients to compare the performance and correlation of our system with manual microscopy and flow cytometry. Our results showed strong correlations between the three methods, with a coefficient of determination between 0.87 and 0.92.
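Parasitemia itself is a pooled ratio over the examined field images; a minimal sketch (all counts below are invented):

```python
def parasitemia_percent(infected_rbc, total_rbc):
    """Parasitemia as commonly reported from thin smears: infected red
    blood cells over total red blood cells counted, in percent."""
    if total_rbc == 0:
        raise ValueError("no red blood cells counted")
    return 100.0 * infected_rbc / total_rbc

# pooling counts over several field images, as a microscopist would
fields = [(3, 500), (5, 480), (2, 510)]   # (infected, total) per field
infected = sum(i for i, _ in fields)
total = sum(t for _, t in fields)
level = parasitemia_percent(infected, total)
```

The study's question of how many field images are needed amounts to asking when this pooled estimate stabilizes as fields are added.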


Subject(s)
Malaria, Falciparum , Microscopy , Parasitemia , Plasmodium falciparum , Humans , Plasmodium falciparum/isolation & purification , Parasitemia/diagnosis , Parasitemia/blood , Parasitemia/parasitology , Malaria, Falciparum/diagnosis , Malaria, Falciparum/blood , Malaria, Falciparum/parasitology , Retrospective Studies , Microscopy/methods , Erythrocytes/parasitology , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Flow Cytometry/methods
5.
Sci Rep ; 14(1): 12699, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38830932

ABSTRACT

Medical image segmentation has made a significant contribution towards delivering affordable healthcare by facilitating the automatic identification of anatomical structures and other regions of interest. Although convolutional neural networks have become prominent in the field of medical image segmentation, they suffer from certain limitations. In this study, we present a reliable framework for producing performant outcomes for the segmentation of pathological structures in 2D medical images. Our framework consists of a novel deep learning architecture, called deep multi-level attention dilated residual neural network (MADR-Net), designed to improve the performance of medical image segmentation. MADR-Net uses a U-Net encoder/decoder backbone in combination with multi-level residual blocks and atrous spatial pyramid pooling (ASPP). To improve the segmentation results, channel-spatial attention blocks were added to the skip connections to capture both global and local features, and the bottleneck layer was replaced with an ASPP block. Furthermore, we introduce a hybrid loss function that has an excellent convergence property and enhances the performance of the medical image segmentation task. We extensively validated the proposed MADR-Net on four typical yet challenging medical image segmentation tasks: (1) left ventricle, left atrium, and myocardial wall segmentation from echocardiogram images in the CAMUS dataset; (2) skin cancer segmentation from dermoscopy images in the ISIC 2017 dataset; (3) electron microscopy in the FIB-SEM dataset; and (4) fluid-attenuated inversion recovery abnormality from MR images in the LGG segmentation dataset. The proposed algorithm yielded significant results when compared to state-of-the-art architectures such as U-Net, Residual U-Net, and Attention U-Net. The proposed MADR-Net consistently outperformed the classical U-Net, with relative improvements in Dice coefficient of 5.43%, 3.43%, and 3.92% for electron microscopy, dermoscopy, and MRI, respectively. The experimental results demonstrate superior performance on single- and multi-class datasets and show that the proposed MADR-Net can be utilized as a baseline for cross-dataset segmentation tasks.
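Hybrid segmentation losses of the kind described usually combine a region-based term (Dice) with a pixel-based term (cross-entropy); the equal weighting below is our assumption, not MADR-Net's published formulation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss on probability maps in [0, 1]; 0 at perfect overlap."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def bce_loss(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy, clipped for numerical stability."""
    p = np.clip(pred, eps, 1 - eps)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p)).mean()

def hybrid_loss(pred, target, w=0.5):
    """Weighted sum of the region-based and pixel-based terms; the
    weight `w` is a placeholder, not the paper's value."""
    return w * dice_loss(pred, target) + (1 - w) * bce_loss(pred, target)

target = np.array([[0.0, 1.0], [1.0, 0.0]])
perfect = hybrid_loss(target, target)        # near zero
poor = hybrid_loss(1.0 - target, target)     # large
```

The Dice term counteracts class imbalance (small foreground regions) while the cross-entropy term provides smooth per-pixel gradients, which is the usual motivation for mixing them.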


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Algorithms , Magnetic Resonance Imaging/methods
6.
Sci Rep ; 14(1): 12700, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38830957

ABSTRACT

Fungicide mixtures are an effective strategy for delaying the development of fungicide resistance. In this research, a fixed-ratio ray design method was used to generate fifty binary mixtures of five fungicides with diverse modes of action. The interaction of these mixtures was then analyzed using concentration addition (CA) and independent action (IA) models. QSAR modeling was conducted to assess their fungicidal activity through multiple linear regression (MLR), support vector machine (SVM), and artificial neural network (ANN) approaches. Most mixtures exhibited additive interaction, with the CA model proving more accurate than the IA model in predicting fungicidal activity. The MLR model showed a good linear correlation between the theoretical descriptors selected by the genetic algorithm and fungicidal activity. However, both ML-based models demonstrated better predictive performance than the MLR model. The ANN model showed slightly better predictability than the SVM model, with R2 and cross-validated R2 (R2cv) of 0.91 and 0.81, respectively; for external validation, the R2test value was 0.845. In contrast, the SVM model had values of 0.91, 0.78, and 0.77 for the same metrics. In conclusion, the proposed ML-based model can be a valuable tool for developing potent fungicidal mixtures to delay the emergence of fungicide resistance.
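Under concentration addition (Loewe additivity), a mixture's EC50 follows from the components' EC50s and their fixed mixture fractions as 1/EC50_mix = sum(p_i / EC50_i); a sketch with invented potencies (not values from the study):

```python
def ca_ec50(fractions, ec50s):
    """Concentration-addition (Loewe additivity) prediction of a
    mixture's EC50 from component EC50s and fixed mixture fractions."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

# hypothetical 50:50 binary mixture of two fungicides (EC50s in mg/L)
mix_ec50 = ca_ec50([0.5, 0.5], [2.0, 8.0])
```

An observed mixture EC50 well below this prediction indicates synergism, and one well above it indicates antagonism; values near it correspond to the additive interaction most mixtures in the study showed.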


Subject(s)
Fungicides, Industrial , Machine Learning , Quantitative Structure-Activity Relationship , Fungicides, Industrial/pharmacology , Fungicides, Industrial/chemistry , Support Vector Machine , Neural Networks, Computer , Linear Models
7.
Breast Cancer Res ; 26(1): 90, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38831336

ABSTRACT

BACKGROUND: Nottingham histological grade (NHG) is a well-established prognostic factor in breast cancer histopathology but has a high inter-assessor variability, with many tumours being classified as intermediate grade, NHG2. Here, we evaluate if DeepGrade, a previously developed model for risk stratification of resected tumour specimens, could be applied to risk-stratify tumour biopsy specimens. METHODS: A total of 11,955,755 tiles from 1169 whole slide images of preoperative biopsies from 896 patients diagnosed with breast cancer in Stockholm, Sweden, were included. DeepGrade, a deep convolutional neural network model, was applied for the prediction of low- and high-risk tumours. It was evaluated against clinically assigned grades NHG1 and NHG3 on the biopsy specimen but also against the grades assigned to the corresponding resection specimen, using the area under the receiver operating characteristic curve (AUC). The prognostic value of the DeepGrade model in the biopsy setting was evaluated using time-to-event analysis. RESULTS: Based on preoperative biopsy images, the DeepGrade model predicted resected tumour cases of clinical grades NHG1 and NHG3 with an AUC of 0.908 (95% CI: 0.88; 0.93). Furthermore, out of the 432 resected clinically-assigned NHG2 tumours, 281 (65%) were classified as DeepGrade-low and 151 (35%) as DeepGrade-high. Using a multivariable Cox proportional hazards model, the hazard ratio between DeepGrade low- and high-risk groups was estimated as 2.01 (95% CI: 1.06; 3.79). CONCLUSIONS: DeepGrade provided prediction of tumour grades NHG1 and NHG3 on the resection specimen using only the biopsy specimen. The results demonstrate that the DeepGrade model can provide decision support to identify high-risk tumours based on preoperative biopsies, thus improving early treatment decisions.


Subject(s)
Breast Neoplasms , Deep Learning , Neoplasm Grading , Humans , Female , Breast Neoplasms/pathology , Breast Neoplasms/surgery , Middle Aged , Biopsy , Risk Assessment/methods , Prognosis , Aged , Adult , Sweden/epidemiology , Preoperative Period , Neural Networks, Computer , Breast/pathology , Breast/surgery
8.
Sci Rep ; 14(1): 12615, 2024 06 01.
Article in English | MEDLINE | ID: mdl-38824217

ABSTRACT

Standard clinical practice to assess fetal well-being during labour utilises monitoring of the fetal heart rate (FHR) using cardiotocography. However, visual evaluation of FHR signals can result in subjective interpretations leading to inter- and intra-observer disagreement. Therefore, recent studies have proposed deep-learning-based methods to interpret FHR signals and detect fetal compromise. These methods have typically focused on evaluating fixed-length FHR segments at the conclusion of labour, leaving little time for clinicians to intervene. In this study, we propose a novel FHR evaluation method using an input-length-invariant deep learning model (FHR-LINet) to progressively evaluate FHR as labour progresses and achieve rapid detection of fetal compromise. Using our FHR-LINet model, we obtained approximately 25% reduction in the time taken to detect fetal compromise compared to the state-of-the-art multimodal convolutional neural network, while achieving 27.5%, 45.0%, 56.5%, and 65.0% mean true positive rates at 5%, 10%, 15%, and 20% false positive rates, respectively. A diagnostic system based on our approach could potentially enable earlier intervention for fetal compromise and improve clinical outcomes.
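Input-length invariance of the kind FHR-LINet needs can be obtained generically by global pooling over convolutional feature maps, so the output size depends only on the number of filters, not the trace length; a simplified illustration (not the published architecture):

```python
import numpy as np

def length_invariant_score(fhr, kernels):
    """Convolve an FHR trace with a bank of kernels, then global-average-
    pool each feature map. The feature vector has one entry per kernel,
    so traces of any length yield the same output size."""
    return np.array([np.convolve(fhr, k, mode="valid").mean()
                     for k in kernels])

rng = np.random.default_rng(2)
kernels = [rng.normal(size=9) for _ in range(4)]
short = length_invariant_score(rng.normal(size=120), kernels)   # early labour
long = length_invariant_score(rng.normal(size=600), kernels)    # later labour
```

This is what allows the same classifier head to score a progressively growing recording instead of waiting for a fixed-length segment at the end of labour.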


Subject(s)
Cardiotocography , Deep Learning , Heart Rate, Fetal , Heart Rate, Fetal/physiology , Humans , Pregnancy , Female , Cardiotocography/methods , Neural Networks, Computer , Fetal Monitoring/methods , Signal Processing, Computer-Assisted , Fetus
9.
Sci Rep ; 14(1): 12598, 2024 06 01.
Article in English | MEDLINE | ID: mdl-38824219

ABSTRACT

To tackle the difficulty of extracting features from one-dimensional spectral signals with traditional spectral analysis, a metabolomics analysis method is proposed to locate two-dimensional correlated spectral feature bands and combine them with deep learning classification for wine origin traceability. Metabolomics analysis was performed on 180 wine samples from 6 different wine regions using UPLC-Q-TOF-MS. Indole, sulfacetamide, and caffeine were selected as the main differential components. By analyzing the molecular structure of these components and referring to the main functional groups in the infrared spectrum, characteristic band regions with wavelengths in the ranges of 1000-1400 nm and 1500-1800 nm were selected. Two-dimensional correlation spectra (2D-COS) were then computed for these bands, generating synchronous and asynchronous correlation spectra, and convolutional neural network (CNN) classification models were established to trace wine origin. The experimental results demonstrate that combining the two segments of two-dimensional characteristic spectra determined by metabolomics screening with convolutional neural networks yields optimal classification results. This validates the effectiveness of using metabolomics screening to determine spectral feature regions for tracing wine origin. The approach effectively removes irrelevant variables while retaining crucial chemical information, enhancing spectral resolution, and strengthens the classification model's understanding of the samples, significantly increasing accuracy.
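Noda's generalized 2D correlation analysis computes the synchronous map from the mean-centered dynamic spectra and the asynchronous map via the Hilbert-Noda transformation matrix; a compact NumPy sketch with random stand-in spectra (the study's actual perturbation series is not reproduced here):

```python
import numpy as np

def two_d_cos(spectra):
    """Generalized 2D correlation spectra (Noda). `spectra` is
    (m perturbations x n wavelengths); returns the synchronous and
    asynchronous correlation maps, each (n x n)."""
    m, _ = spectra.shape
    dyn = spectra - spectra.mean(axis=0)        # dynamic spectra
    sync = dyn.T @ dyn / (m - 1)
    # Hilbert-Noda matrix: N[j, k] = 1 / (pi * (k - j)), zero on diagonal
    j, k = np.indices((m, m))
    diff = k - j
    noda = np.zeros((m, m))
    mask = diff != 0
    noda[mask] = 1.0 / (np.pi * diff[mask])
    async_map = dyn.T @ noda @ dyn / (m - 1)
    return sync, async_map

rng = np.random.default_rng(3)
spec = rng.normal(size=(6, 20))                 # 6 samples x 20 wavelengths
sync, async_map = two_d_cos(spec)
```

The synchronous map is symmetric and the asynchronous map antisymmetric by construction, which is a standard correctness check for 2D-COS code.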


Subject(s)
Deep Learning , Metabolomics , Wine , Wine/analysis , Metabolomics/methods , Neural Networks, Computer , Chromatography, High Pressure Liquid/methods , Mass Spectrometry/methods
10.
Biomed Eng Online ; 23(1): 50, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38824547

ABSTRACT

BACKGROUND: Over 60% of epilepsy patients globally are children, whose early diagnosis and treatment are critical for their development and can substantially reduce the disease's burden on both families and society. Numerous algorithms for automated epilepsy detection from EEGs have been proposed. Yet, the occurrence of epileptic seizures during an EEG exam cannot always be guaranteed in clinical practice. Models that exclusively use seizure EEGs for detection risk artificially enhanced performance metrics. Therefore, there is a pressing need for a universally applicable model that can perform automatic epilepsy detection in a variety of complex real-world scenarios. METHOD: To address this problem, we have devised a novel technique employing a temporal convolutional neural network with self-attention (TCN-SA). Our model comprises two primary components: a TCN for extracting time-variant features from EEG signals, followed by a self-attention (SA) layer that assigns importance to these features. By focusing on key features, our model achieves heightened classification accuracy for epilepsy detection. RESULTS: The efficacy of our model was validated on a pediatric epilepsy dataset we collected and on the Bonn dataset, attaining an accuracy of 95.50% on our dataset and accuracies of 97.37% (A vs. E) and 93.50% (B vs. E) on the Bonn dataset. When compared with other deep learning architectures (temporal convolutional neural network, self-attention network, and standardized convolutional neural network) using the same datasets, our TCN-SA model demonstrated superior performance in the automated detection of epilepsy. CONCLUSION: The proven effectiveness of the TCN-SA approach substantiates its potential as a valuable tool for the automated detection of epilepsy, offering significant benefits in diverse and complex real-world clinical settings.
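The SA layer's re-weighting of TCN features can be sketched as standard scaled dot-product self-attention (the projection weights here are random placeholders; the model's actual parameterization may differ):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a sequence of feature
    vectors: each time step is re-expressed as a relevance-weighted
    mixture of all time steps."""
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return attn @ v, attn

rng = np.random.default_rng(4)
feats = rng.normal(size=(8, 16))    # 8 time steps of TCN features, dim 16
wq, wk, wv = (rng.normal(size=(16, 16)) for _ in range(3))
out, attn = self_attention(feats, wq, wk, wv)
```

Each row of the attention matrix is a probability distribution over time steps, which is how the layer "assigns importance" to features before classification.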


Subject(s)
Electroencephalography , Epilepsy , Neural Networks, Computer , Epilepsy/diagnosis , Humans , Signal Processing, Computer-Assisted , Automation , Child , Deep Learning , Diagnosis, Computer-Assisted/methods , Time Factors
11.
Genome Biol ; 25(1): 142, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38825692

ABSTRACT

BACKGROUND: Like its parent base 5-methylcytosine (5mC), 5-hydroxymethylcytosine (5hmC) is a direct epigenetic modification of cytosines in the context of CpG dinucleotides. 5hmC is the most abundant oxidized form of 5mC, generated through the action of TET dioxygenases at gene bodies of actively-transcribed genes and at active or lineage-specific enhancers. Although such enrichments are reported for 5hmC, to date, predictive models of gene expression state or putative regulatory regions for genes using 5hmC have not been developed. RESULTS: Here, by using only 5hmC enrichment in genic regions and their vicinity, we develop neural network models that predict gene expression state across 49 cell types. We show that our deep neural network models distinguish high vs low expression state utilizing only 5hmC levels and these predictive models generalize to unseen cell types. Further, in order to leverage 5hmC signal in distal enhancers for expression prediction, we employ an Activity-by-Contact model and also develop a graph convolutional neural network model with both utilizing Hi-C data and 5hmC enrichment to prioritize enhancer-promoter links. These approaches identify known and novel putative enhancers for key genes in multiple immune cell subsets. CONCLUSIONS: Our work highlights the importance of 5hmC in gene regulation through proximal and distal mechanisms and provides a framework to link it to genome function. With the recent advances in 6-letter DNA sequencing by short and long-read techniques, profiling of 5mC and 5hmC may be done routinely in the near future, hence, providing a broad range of applications for the methods developed here.
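The Activity-by-Contact idea scores each candidate enhancer for a gene as activity times Hi-C contact with the promoter, normalized over all candidates; a minimal sketch with invented 5hmC-derived activities (the published ABC model includes additional normalizations not shown here):

```python
def abc_scores(activities, contacts):
    """Activity-by-Contact: an enhancer's score for a gene is its
    activity times its contact frequency with the gene's promoter,
    normalized over all candidate enhancers for that gene."""
    raw = [a * c for a, c in zip(activities, contacts)]
    total = sum(raw)
    return [r / total for r in raw]

# hypothetical 5hmC-derived activities and Hi-C contact frequencies
# for three candidate enhancers of one gene
scores = abc_scores([3.0, 1.0, 2.0], [0.5, 0.2, 0.1])
```

Enhancer-promoter links are then prioritized by thresholding these scores, which is the role the abstract assigns to the ABC model alongside the graph convolutional network.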


Subject(s)
5-Methylcytosine , Enhancer Elements, Genetic , 5-Methylcytosine/analogs & derivatives , 5-Methylcytosine/metabolism , Humans , Neural Networks, Computer , Gene Expression Regulation , Epigenesis, Genetic , DNA Methylation
12.
PeerJ ; 12: e17437, 2024.
Article in English | MEDLINE | ID: mdl-38832031

ABSTRACT

Reference evapotranspiration (ET0) is a significant parameter for efficient irrigation scheduling and groundwater conservation. Different machine learning models have been designed for ET0 estimation for specific combinations of available meteorological parameters. However, no single model has been suggested so far that can handle diverse combinations of available meteorological parameters for the estimation of ET0. This article suggests a novel architecture of an improved hybrid quasi-fuzzy artificial neural network (ANN) model (EvatCrop) for this purpose. EvatCrop yielded superior results when compared with the other three popular models, decision trees, artificial neural networks, and adaptive neuro-fuzzy inference systems, irrespective of study locations and the combinations of input parameters. For real-field case studies, it was applied in the groundwater-stressed area of the Terai agro-climatic region of North Bengal, India, and trained and tested with the daily meteorological data available from the National Centers for Environmental Prediction from 2000 to 2014. The precision of the model was compared with the standard Penman-Monteith model (FAO56PM). Empirical results depicted that the model performances varied remarkably under different data-limited situations. When the complete set of input parameters was available, EvatCrop resulted in the best values of coefficient of determination (R2 = 0.988), degree of agreement (d = 0.997), root mean square error (RMSE = 0.183), and root mean square relative error (RMSRE = 0.034).
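The FAO56PM benchmark is fully specified in FAO Irrigation and Drainage Paper 56; below is a direct transcription of the daily ET0 equation (the input values are illustrative, not data from the study):

```python
import math

def fao56_pm_et0(t_mean, rn, g, u2, es, ea, elevation=0.0):
    """Daily reference evapotranspiration (mm/day) via the FAO-56
    Penman-Monteith equation. t_mean in degC, rn and g in MJ m-2 day-1,
    u2 (wind at 2 m) in m/s, es and ea (vapour pressures) in kPa."""
    # atmospheric pressure (kPa) and psychrometric constant (kPa/degC)
    p = 101.3 * ((293 - 0.0065 * elevation) / 293) ** 5.26
    gamma = 0.000665 * p
    # slope of the saturation vapour pressure curve (kPa/degC)
    delta = (4098 * 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))
             / (t_mean + 237.3) ** 2)
    num = (0.408 * delta * (rn - g)
           + gamma * 900 / (t_mean + 273) * u2 * (es - ea))
    return num / (delta + gamma * (1 + 0.34 * u2))

# a plausible warm-season day at sea level (made-up inputs)
et0 = fao56_pm_et0(t_mean=25.0, rn=14.0, g=0.0, u2=2.0, es=3.17, ea=2.0)
```

Machine learning ET0 models such as EvatCrop are trained to reproduce this quantity from whatever subset of the meteorological inputs happens to be available.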


Subject(s)
Fuzzy Logic , Neural Networks, Computer , India , Groundwater , Plant Transpiration
13.
Front Public Health ; 12: 1397260, 2024.
Article in English | MEDLINE | ID: mdl-38832222

ABSTRACT

Objective: This study focuses on enhancing the precision of epidemic time series prediction by integrating Gated Recurrent Units (GRU) into a Graph Neural Network (GNN), forming the GRGNN. The accuracy of the GRU-enhanced GNN is validated by comparison against seven commonly used prediction methods. Method: The GRGNN performs multivariate time series prediction with a GNN improved by the integration of GRU. Additionally, the Graph Fourier Transform (GFT) and Discrete Fourier Transform (DFT) are introduced: GFT captures inter-sequence correlations in the spectral domain, while DFT transforms the data from the time domain to the frequency domain, revealing temporal correlations between nodes. Following GFT and DFT, outbreak data are processed through one-dimensional convolution and gated linear units in the frequency domain, graph convolution in the spectral domain, and GRU in the time domain. The inverse GFT and DFT transformations are then applied, and final predictions are obtained after passing through a fully connected layer. Evaluation is conducted on three datasets: the COVID-19 datasets of 38 African countries and 42 European countries from Worldometer, and the chickenpox dataset of 20 Hungarian regions from Kaggle. Metrics include average root mean square error (ARMSE) and average mean absolute error (AMAE). Result: For the African COVID-19 dataset and the Hungarian chickenpox dataset, GRGNN consistently outperforms the other methods in ARMSE and AMAE across various prediction step lengths. Optimal results are achieved even at extended prediction steps, highlighting the model's robustness. Conclusion: GRGNN proves effective in predicting epidemic time series data with high accuracy, demonstrating its potential for epidemic surveillance and early-warning applications. However, further studies are warranted to refine its application and judgment methods, emphasizing the ongoing need for exploration and research in this domain.
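The DFT step and its inverse are the standard discrete Fourier transform pair; a sketch of the round trip on a toy case-count series (the GRGNN's learned frequency-domain processing between the two transforms is omitted):

```python
import numpy as np

def to_frequency_domain(series):
    """DFT step: transform a real-valued case-count series into complex
    spectral coefficients (one-sided, via rfft)."""
    return np.fft.rfft(series)

def to_time_domain(spectrum, n):
    """Inverse DFT step applied after the frequency-domain processing."""
    return np.fft.irfft(spectrum, n=n)

cases = np.array([120.0, 135.0, 150.0, 170.0, 160.0, 155.0, 148.0, 140.0])
spec = to_frequency_domain(cases)
recovered = to_time_domain(spec, n=len(cases))
```

The DC coefficient equals the sum of the series, and the inverse transform reconstructs the input exactly, which together confirm the transform conventions being used.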


Subject(s)
Neural Networks, Computer , Humans , COVID-19/epidemiology , Communicable Diseases/epidemiology , Fourier Analysis , Disease Outbreaks
14.
J Robot Surg ; 18(1): 237, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38833204

ABSTRACT

A major obstacle in applying machine learning to medical fields is the disparity between the data distribution of the training images and the data encountered in clinics. This phenomenon can be explained by inconsistent acquisition techniques and large variations across the patient spectrum. The result is poor translation of the trained models to the clinic, which limits their implementation in medical practice. Patient-specific trained networks could provide a potential solution. Although patient-specific approaches are usually infeasible because of the expenses associated with on-the-fly labeling, the use of generative adversarial networks (GANs) enables this approach. This study proposes a patient-specific approach based on generative adversarial networks. In the presented training pipeline, the user trains a patient-specific segmentation network with extremely limited data, which is supplemented with artificial samples generated by generative adversarial models. This approach is demonstrated on endoscopic video data captured during fetoscopic laser coagulation, a procedure used for treating twin-to-twin transfusion syndrome by ablating the placental blood vessels. Compared to a standard deep learning segmentation approach, the pipeline was able to achieve an intersection over union score of 0.60 using only 20 annotated images, compared to 100 images using a standard approach. Furthermore, training with 20 annotated images without the pipeline achieves an intersection over union score of 0.30, which therefore corresponds to a 100% increase in performance when incorporating the pipeline. In short, a pipeline using GANs was used to generate artificial data that supplements the real data, enabling patient-specific training of a segmentation network. We show that artificial images generated using GANs significantly improve performance in vessel segmentation and that training patient-specific models can be a viable solution for bringing automated vessel segmentation to the clinic.
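The intersection over union score used to compare the pipelines is a simple set overlap on binary masks; a minimal implementation:

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection over union between two binary segmentation masks."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0              # both masks empty: treat as perfect match
    return np.logical_and(pred, true).sum() / union

a = np.array([[1, 1, 0], [0, 1, 0]])   # toy predicted vessel mask
b = np.array([[1, 0, 0], [0, 1, 1]])   # toy ground-truth mask
score = iou(a, b)                      # intersection 2, union 4
```

On this scale, the reported jump from 0.30 to 0.60 IoU means the GAN-supplemented model's masks overlap the ground truth twice as well per unit of combined area.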


Subject(s)
Placenta , Humans , Pregnancy , Placenta/blood supply , Placenta/diagnostic imaging , Female , Deep Learning , Image Processing, Computer-Assisted/methods , Fetofetal Transfusion/surgery , Fetofetal Transfusion/diagnostic imaging , Machine Learning , Robotic Surgical Procedures/methods , Neural Networks, Computer
15.
Med Eng Phys ; 127: 104162, 2024 May.
Article in English | MEDLINE | ID: mdl-38692762

ABSTRACT

OBJECTIVE: Early detection of cardiovascular diseases is based on accurate quantification of the left ventricle (LV) function parameters. In this paper, we propose a fully automatic framework for LV volume and mass quantification from 2D-cine MR images already segmented using U-Net. METHODS: The general framework consists of three main steps: data preparation, including automatic LV localization using a convolutional neural network (CNN) and application of morphological operations to exclude papillary muscles from the LV cavity; automatic extraction of the LV contours using a U-Net architecture; and finally, by integrating temporal information, which is manifested by the spatial motion of myocytes, as a third dimension, calculation of LV volume, LV ejection fraction (LVEF), and left ventricular mass (LVM). Based on these parameters, we detected and quantified cardiac contraction abnormalities using Python software. RESULTS: The CNN was trained with 35 patients and tested on 15 patients from the ACDC database with an accuracy of 99.15%. The U-Net architecture was trained using the ACDC database and evaluated using a local dataset, with a Dice similarity coefficient (DSC) of 99.78% and a Hausdorff distance (HD) of 4.468 mm (p < 0.001). Quantification results showed a strong correlation with physiological measures, with a Pearson correlation coefficient (PCC) of 0.991 for LV volume, 0.962 for LVEF, 0.98 for stroke volume (SV), and 0.923 for LVM after papillary muscle exclusion. Clinically, our method allows regional and accurate identification of pathological myocardial segments and can serve as a diagnostic aid for cardiac contraction abnormalities. CONCLUSION: Experimental results prove the usefulness of the proposed method for LV volume and function quantification and verify its potential clinical applicability.
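Stroke volume and ejection fraction follow directly from the end-diastolic and end-systolic LV volumes derived from the segmented contours; a sketch with hypothetical volumes (not values from the study):

```python
def lv_function(edv_ml, esv_ml):
    """Stroke volume (mL) and ejection fraction (%) from end-diastolic
    (EDV) and end-systolic (ESV) left-ventricular volumes."""
    sv = edv_ml - esv_ml
    lvef = 100.0 * sv / edv_ml
    return sv, lvef

# hypothetical volumes for one subject
sv, lvef = lv_function(edv_ml=120.0, esv_ml=50.0)
```

The framework's third step amounts to evaluating these formulas on the per-frame volumes, with LV mass obtained analogously from the myocardial contour volumes times tissue density.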


Subject(s)
Automation , Heart Ventricles , Image Processing, Computer-Assisted , Magnetic Resonance Imaging, Cine , Papillary Muscles , Humans , Heart Ventricles/diagnostic imaging , Magnetic Resonance Imaging, Cine/methods , Papillary Muscles/diagnostic imaging , Papillary Muscles/physiology , Image Processing, Computer-Assisted/methods , Organ Size , Male , Middle Aged , Neural Networks, Computer , Female , Stroke Volume
16.
PLoS One ; 19(5): e0298373, 2024.
Article in English | MEDLINE | ID: mdl-38691542

ABSTRACT

Pulse repetition interval modulation (PRIM) is integral to radar identification in modern electronic support measure (ESM) and electronic intelligence (ELINT) systems. Various distortions, including missing pulses, spurious pulses, unintended jitters, and noise from radar antenna scans, often hinder the accurate recognition of PRIM. This research introduces a novel three-stage approach for PRIM recognition, emphasizing the innovative use of PRI sound. A transfer-learning-aided deep convolutional neural network (DCNN) is initially used for feature extraction. This is followed by an extreme learning machine (ELM) for real-time PRIM classification. Finally, a gray wolf optimizer (GWO) refines the network's robustness. To evaluate the proposed method, we develop a real experimental dataset consisting of the sounds of six common PRI patterns. We utilized eight pre-trained DCNN architectures for evaluation, with VGG16 and ResNet50V2 notably achieving recognition accuracies of 97.53% and 96.92%. Integrating ELM and GWO further optimized the accuracy rates to 98.80% and 97.58%, respectively. This research advances radar identification by offering an enhanced method for PRIM recognition, emphasizing the potential of PRI sound to address real-world distortions in ESM and ELINT systems.
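An extreme learning machine keeps a randomly initialized hidden layer fixed and solves only the output weights by least squares, which is what makes it fast enough for real-time classification; a toy sketch on synthetic features (standing in for the DCNN features, which are not reproduced here):

```python
import numpy as np

def elm_train(x, y, n_hidden=64, seed=0):
    """Extreme learning machine: fixed random hidden layer, output
    weights solved in closed form by least squares."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(x.shape[1], n_hidden))   # frozen input weights
    h = np.tanh(x @ w)                            # hidden activations
    beta, *_ = np.linalg.lstsq(h, y, rcond=None)  # solved, not iterated
    return w, beta

def elm_predict(x, w, beta):
    return np.tanh(x @ w) @ beta

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 8))            # stand-in feature vectors
y = (X[:, 0] > 0).astype(float)          # toy two-class labels
w, beta = elm_train(X, y)
acc = ((elm_predict(X, w, beta) > 0.5) == (y > 0.5)).mean()
```

Because training is a single linear solve rather than gradient descent, only the hidden-layer size and random seed remain to be tuned, which is where an optimizer such as GWO can be applied.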


Subject(s)
Deep Learning , Neural Networks, Computer , Sound , Radar , Algorithms , Pattern Recognition, Automated/methods
17.
PLoS One ; 19(5): e0300216, 2024.
Article in English | MEDLINE | ID: mdl-38691574

ABSTRACT

This study integrates advanced machine learning techniques, namely Artificial Neural Networks, Long Short-Term Memory, and Gated Recurrent Unit models, to forecast monkeypox outbreaks in Canada, Spain, the USA, and Portugal. The research focuses on the effectiveness of these models in predicting the spread and severity of cases using data from June 3 to December 31, 2022, and evaluates them against test data from January 1 to February 7, 2023. The study highlights the potential of neural networks in epidemiology, especially concerning recent monkeypox outbreaks. It provides a comparative analysis of the models, emphasizing their capabilities in public health strategies. The research identifies optimal model configurations and underscores the efficiency of the Levenberg-Marquardt algorithm in training. The findings suggest that ANN models, particularly those with optimized Root Mean Squared Error, Mean Absolute Percentage Error, and the Coefficient of Determination values, are effective in infectious disease forecasting and can significantly enhance public health responses.
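The evaluation criteria named in this abstract (Root Mean Squared Error, Mean Absolute Percentage Error, and the coefficient of determination) have standard definitions that a minimal sketch can make concrete. This is generic code, not the study's own evaluation script, and it assumes strictly positive observed case counts so that MAPE is well defined.

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """Return (RMSE, MAPE in percent, R^2) for a forecast against observations."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    mape = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mape, r2
```

For instance, forecasting (110, 190, 310) against observed counts (100, 200, 300) gives an RMSE of 10 and an R^2 of 0.985.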


Subject(s)
Forecasting , Machine Learning , Mpox (monkeypox) , Neural Networks, Computer , Humans , Forecasting/methods , Mpox (monkeypox)/epidemiology , Portugal/epidemiology , Spain/epidemiology , Disease Outbreaks , Canada/epidemiology , United States/epidemiology , Algorithms
18.
Scand J Urol ; 59: 90-97, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38698545

ABSTRACT

OBJECTIVE: To evaluate whether artificial intelligence (AI) based automatic image analysis utilising convolutional neural networks (CNNs) can be used to evaluate computed tomography urography (CTU) for the presence of urinary bladder cancer (UBC) in patients with macroscopic hematuria. METHODS: Our study included patients who had undergone evaluation for macroscopic hematuria. A CNN-based AI model was trained and validated on the CTUs included in the study on a dedicated research platform (Recomia.org). Sensitivity and specificity were calculated to assess the performance of the AI model. Cystoscopy findings were used as the reference method. RESULTS: The training cohort comprised a total of 530 patients. Following the optimisation process, we developed the final version of our AI model. Subsequently, we utilised the model in the validation cohort, which included an additional 400 patients (including 239 patients with UBC). The AI model had a sensitivity of 0.83 (95% confidence interval [CI], 0.76-0.89), a specificity of 0.76 (95% CI 0.67-0.84), and a negative predictive value (NPV) of 0.97 (95% CI 0.95-0.98). The majority of tumours in the false negative group (n = 24) were solitary (67%) and smaller than 1 cm (50%), with the majority of patients having cTaG1-2 (71%). CONCLUSIONS: We developed and tested an AI model for automatic image analysis of CTUs to detect UBC in patients with macroscopic hematuria. This model showed promising results, with a high detection rate and a high NPV. Further developments could lead to a decreased need for invasive investigations and better prioritisation of patients with serious tumours.
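The three statistics reported here (sensitivity, specificity, NPV) follow directly from a 2x2 confusion table of model calls against the cystoscopy reference. The sketch below uses illustrative counts, not the study's actual confusion matrix, which the abstract does not report.

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and negative predictive value
    from true/false positive/negative counts against a reference test."""
    sensitivity = tp / (tp + fn)  # fraction of true cases the model flags
    specificity = tn / (tn + fp)  # fraction of non-cases the model clears
    npv = tn / (tn + fn)          # probability a negative call is truly negative
    return sensitivity, specificity, npv
```

A high NPV is the clinically useful property here: it quantifies how safely a negative AI read could defer invasive work-up such as cystoscopy.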


Subject(s)
Artificial Intelligence , Hematuria , Tomography, X-Ray Computed , Urinary Bladder Neoplasms , Urography , Humans , Hematuria/etiology , Hematuria/diagnostic imaging , Urinary Bladder Neoplasms/diagnostic imaging , Urinary Bladder Neoplasms/complications , Male , Aged , Female , Tomography, X-Ray Computed/methods , Urography/methods , Middle Aged , Neural Networks, Computer , Sensitivity and Specificity , Aged, 80 and over , Retrospective Studies , Adult
19.
PLoS One ; 19(5): e0301812, 2024.
Article in English | MEDLINE | ID: mdl-38696418

ABSTRACT

Kidney stones form when mineral salts crystallize in the urinary tract. While most stones exit the body in the urine stream, some can block the ureteropelvic junction or ureters, leading to severe lower back pain, blood in the urine, vomiting, and painful urination. Imaging technologies, such as X-rays or ureterorenoscopy (URS), are typically used to detect kidney stones. Subsequently, these stones are fragmented into smaller pieces using shock wave lithotripsy (SWL) or laser URS. Both treatments yield subtly different patient outcomes. To predict successful stone removal and complication outcomes, Artificial Neural Network models were trained on 15,126 SWL and 2,116 URS patient records. These records include patient metrics such as Body Mass Index and age, as well as treatment outcomes obtained using various medical instruments and healthcare professionals. Due to the low number of outcome failures in the data (e.g., treatment complications), Nearest Neighbor and Synthetic Minority Oversampling Technique (SMOTE) models were implemented to improve prediction accuracies. To reduce noise in the predictions, ensemble modeling was employed. The average prediction accuracies based on confusion matrices for SWL stone removal and treatment complications were 84.8% and 95.0%, respectively, while those for URS were 89.0% and 92.2%, respectively. The average prediction accuracies for SWL based on Area-Under-the-Curve were 74.7% and 62.9%, respectively, while those for URS were 77.2% and 78.9%, respectively. Taken together, the approach yielded moderately to highly accurate predictions, regardless of treatment or outcome. These models were incorporated into a Stone Decision Engine web application (http://peteranoble.com/webapps.html) that suggests the best interventions to healthcare providers based on individual patient metrics.
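SMOTE addresses the class imbalance mentioned here by synthesizing new minority-class samples on the line segments between a minority sample and one of its k nearest minority neighbors, rather than simply duplicating records. The sketch below is a minimal, generic version of that idea, not the study's pipeline (which would typically use a library implementation such as imbalanced-learn); all names are illustrative.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, seed=0):
    """Generate n_new synthetic minority samples by interpolating between
    each chosen sample and a random one of its k nearest minority neighbors."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]        # k nearest, excluding the sample itself
        j = rng.choice(nbrs)
        lam = rng.random()                       # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)
```

Because every synthetic point lies between two real minority samples, the oversampled set stays inside the minority class's original feature range.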


Subject(s)
Kidney Calculi , Lithotripsy , Ureteroscopy , Humans , Kidney Calculi/surgery , Kidney Calculi/therapy , Ureteroscopy/adverse effects , Ureteroscopy/methods , Lithotripsy/methods , Lithotripsy/adverse effects , Neural Networks, Computer , Female , Treatment Outcome , Male , Middle Aged , Adult
20.
PLoS One ; 19(5): e0302124, 2024.
Article in English | MEDLINE | ID: mdl-38696446

ABSTRACT

Image data augmentation plays a crucial role in data augmentation (DA) by increasing the quantity and diversity of labeled training data. However, existing methods have limitations. Notably, techniques like image manipulation, erasing, and mixing can distort images, compromising data quality. Accurate representation of objects without confusion is a challenge in methods like auto augment and feature augmentation. Preserving fine details and spatial relationships also proves difficult in certain techniques, as seen in deep generative models. To address these limitations, we propose OFIDA, an object-focused image data augmentation algorithm. OFIDA implements one-to-many enhancements that not only preserve essential target regions but also elevate the authenticity of simulating real-world settings and data distributions. Specifically, OFIDA utilizes a graph-based structure and object detection to streamline augmentation: by leveraging graph properties like connectivity and hierarchy, it captures object essence and context for improved comprehension in real-world scenarios. We then introduce DynamicFocusNet, a novel object detection algorithm built on the graph framework. DynamicFocusNet merges dynamic graph convolutions and attention mechanisms to flexibly adjust receptive fields. Finally, the detected target images are extracted to facilitate one-to-many data augmentation. Experimental results validate the superiority of our OFIDA method over state-of-the-art methods across six benchmark datasets.
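The core object-focused idea, augmenting while preserving the essential target region, can be made concrete with a random crop constrained never to cut into a detected bounding box. This is a generic illustration of that constraint, not OFIDA itself; the function name and box convention (x0, y0, x1, y1) are assumptions.

```python
import numpy as np

def object_preserving_crop(img, box, rng):
    """Random crop of img (H x W array) that always keeps the object
    bounding box (x0, y0, x1, y1) fully inside the cropped region."""
    h, w = img.shape[:2]
    x0, y0, x1, y1 = box
    cx0 = rng.integers(0, x0 + 1)    # left edge anywhere up to the box
    cy0 = rng.integers(0, y0 + 1)
    cx1 = rng.integers(x1, w + 1)    # right edge anywhere past the box
    cy1 = rng.integers(y1, h + 1)
    crop = img[cy0:cy1, cx0:cx1]
    new_box = (x0 - cx0, y0 - cy0, x1 - cx0, y1 - cy0)  # box in crop coordinates
    return crop, new_box
```

Sampling many such crops from one image yields the "one-to-many" augmentation the abstract describes, with the target region guaranteed intact in every output.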


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Humans