Results 1 - 20 of 727
1.
Sensors (Basel) ; 24(17)2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39275594

ABSTRACT

Monolithic zirconia (MZ) crowns are widely utilized in dental restorations, particularly for substantial tooth structure loss. Visual inspection, tactile, and radiographic examinations can be time-consuming and error-prone, which may delay diagnosis. Consequently, an objective, automatic, and reliable process is required for identifying dental crown defects. This study aimed to explore the potential of transforming acoustic emission (AE) signals into continuous wavelet transform (CWT) images, combined with a Convolutional Neural Network (CNN), to assist in crack detection. A new CNN image segmentation model, based on multi-class semantic segmentation using Inception-ResNet-v2, was developed. Real-time detection of AE signals under loads, which induce cracking, provided significant insights into crack formation in MZ crowns. Pencil lead breaking (PLB) was used to simulate crack propagation. The CWT and CNN models were used to automate the crack classification process. The Inception-ResNet-v2 architecture with transfer learning categorized the cracks in MZ crowns into five groups: labial, palatal, incisal, left, and right. After 2000 epochs with a learning rate of 0.0001, the model achieved an accuracy of 99.4667%, demonstrating that deep learning significantly improved the localization of cracks in MZ crowns. This development can potentially aid dentists in clinical decision-making by facilitating the early detection and prevention of crack failures.
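The signal-to-image step described above can be sketched in a few lines of NumPy; the Morlet wavelet, sampling rate, scale range, and synthetic burst below are illustrative assumptions, not parameters taken from the study:

```python
import numpy as np

def morlet(n, scale, w0=6.0):
    """Complex Morlet wavelet sampled at n points, dilated by `scale`."""
    t = (np.arange(n) - n // 2) / scale
    return np.exp(1j * w0 * t - t**2 / 2) / np.sqrt(scale)

def cwt_scalogram(signal, scales):
    """|CWT| image (one row per scale) by convolving with scaled wavelets."""
    img = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        w = morlet(min(10 * int(s), len(signal)), s)
        img[i] = np.abs(np.convolve(signal, w, mode="same"))
    return img

# Synthetic decaying 1 kHz burst at 20 kHz sampling, loosely mimicking an AE hit
fs = 20000
t = np.arange(400) / fs
sig = np.sin(2 * np.pi * 1000 * t) * np.exp(-300 * t)
img = cwt_scalogram(sig, scales=np.arange(2, 32))
```

The resulting 2-D magnitude array is what would be rendered as an image and fed to a CNN classifier.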


Subject(s)
Crowns , Deep Learning , Zirconium , Zirconium/chemistry , Humans , Neural Networks, Computer , Acoustics , Wavelet Analysis
2.
Bioengineering (Basel) ; 11(9)2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39329609

ABSTRACT

Dermatological conditions are prevalent in humans and are primarily caused by environmental and climatic fluctuations, among other factors. Timely identification is the most effective remedy to avert minor ailments from escalating into severe conditions. Diagnosing skin illnesses is consistently challenging for health practitioners. Presently, they rely on conventional methods, such as examining the condition of the skin. State-of-the-art technologies can enhance the accuracy of skin disease diagnosis by utilizing data-driven approaches. This paper presents a Computer Assisted Diagnosis (CAD) framework developed to detect skin illnesses at an early stage. We propose a computationally efficient and lightweight deep learning model based on a CNN architecture, and then conduct thorough experiments to compare the performance of shallow and deep learning models. The CNN model under consideration consists of seven convolutional layers and obtained an accuracy of 87.64% when applied to three distinct disease categories. The studies were conducted on the International Skin Imaging Collaboration (ISIC) dataset, which exclusively consists of dermoscopic images. This study advances the field of skin disease diagnostics by utilizing state-of-the-art technology, attaining high levels of accuracy, and striving for efficiency improvements. The unique features and future considerations of this technology create opportunities for further advancements in the automated diagnosis of skin diseases and tailored treatment.

3.
Radiol Artif Intell ; 6(6): e230529, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39230423

ABSTRACT

Mammography screening supported by deep learning-based artificial intelligence (AI) solutions can potentially reduce workload without compromising breast cancer detection accuracy, but the site of deployment in the workflow might be crucial. This retrospective study compared three simulated AI-integrated screening scenarios with standard double reading with arbitration in a sample of 249 402 mammograms from a representative screening population. A commercial AI system replaced the first reader (scenario 1: integrated AI-first), the second reader (scenario 2: integrated AI-second), or both readers for triaging of low- and high-risk cases (scenario 3: integrated AI-triage). AI threshold values were chosen based partly on previous validation and partly on setting the screen-read volume reduction at approximately 50% across scenarios. Detection accuracy measures were calculated. Compared with standard double reading, integrated AI-first showed no evidence of a difference in accuracy metrics except for a higher arbitration rate (+0.99%, P < .001). Integrated AI-second had lower sensitivity (-1.58%, P < .001), negative predictive value (NPV) (-0.01%, P < .001), and recall rate (-0.06%, P = .04) but a higher positive predictive value (PPV) (+0.03%, P < .001) and arbitration rate (+1.22%, P < .001). Integrated AI-triage achieved higher sensitivity (+1.33%, P < .001), PPV (+0.36%, P = .03), and NPV (+0.01%, P < .001) but a lower arbitration rate (-0.88%, P < .001). Replacing one or both readers with AI seems feasible; however, the site of application in the workflow can have clinically relevant effects on accuracy and workload. Keywords: Mammography, Breast, Neoplasms-Primary, Screening, Epidemiology, Diagnosis, Convolutional Neural Network (CNN). Supplemental material is available for this article. Published under a CC BY 4.0 license.
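The triage scenario amounts to routing each exam by its AI score against two thresholds. A minimal sketch; the threshold values and route labels are purely illustrative, not the study's operating points:

```python
def triage(score, low_thr, high_thr):
    """Route one screening exam by its AI risk score (thresholds illustrative)."""
    if score < low_thr:
        return "no human read"           # low risk: AI clears the exam
    if score >= high_thr:
        return "recall / arbitration"    # high risk: flagged directly
    return "double reading"              # middle band: standard two readers

exams = [0.02, 0.15, 0.55, 0.93]
routes = [triage(s, low_thr=0.1, high_thr=0.9) for s in exams]
```

In such a design, the two thresholds jointly set the screen-read volume reduction, which is what the study tuned to approximately 50%.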


Subject(s)
Breast Neoplasms , Feasibility Studies , Mammography , Humans , Mammography/methods , Female , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/diagnosis , Retrospective Studies , Middle Aged , Artificial Intelligence , Aged , Early Detection of Cancer/methods , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Mass Screening/methods , Sensitivity and Specificity , Reproducibility of Results
4.
Med Sci (Basel) ; 12(3)2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39311162

ABSTRACT

Osteoporosis, a skeletal disorder, is expected to affect 60% of women aged over 50 years. Dual-energy X-ray absorptiometry (DXA) scans, the current gold standard, are typically used post-fracture, highlighting the need for early detection tools. Panoramic radiographs (PRs), common in annual dental evaluations, have been explored for osteoporosis detection using deep learning, but methodological flaws have cast doubt on otherwise optimistic results. This study aims to develop a robust artificial intelligence (AI) application for accurate osteoporosis identification in PRs, contributing to early and reliable diagnostics. A total of 250 PRs from three groups (A: osteoporosis group, B: non-osteoporosis group matching A in age and gender, C: non-osteoporosis group differing from A in age and gender) were cropped to the mental foramen region. A pretrained convolutional neural network (CNN) classifier was used for training, testing, and validation with a random split of the dataset into subsets (A vs. B, A vs. C). Detection accuracy and area under the curve (AUC) were calculated. The method achieved an F1 score of 0.74 and an AUC of 0.8401 (A vs. B). For young patients (A vs. C), it performed with 98% accuracy and an AUC of 0.9812. This study presents a proof-of-concept algorithm, demonstrating the potential of deep learning to identify osteoporosis in dental radiographs. It also highlights the importance of methodological rigor, as not all optimistic results are credible.
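The F1 and AUC figures reported above can be computed directly from model outputs; a dependency-free sketch of both definitions (the counts and scores are made up for illustration):

```python
def roc_auc(pos_scores, neg_scores):
    """Rank-based AUC: probability a random positive outscores a random negative."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def f1(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical classifier scores for osteoporosis (pos) vs control (neg) cases
auc = roc_auc([0.9, 0.8, 0.25], [0.3, 0.2, 0.1])
```

The rank formulation makes explicit why AUC is threshold-free while F1 depends on a chosen decision cutoff.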


Subject(s)
Artificial Intelligence , Deep Learning , Osteoporosis , Radiography, Panoramic , Humans , Osteoporosis/diagnostic imaging , Female , Middle Aged , Aged , Male , Neural Networks, Computer , Absorptiometry, Photon , Mandible/diagnostic imaging
5.
Sensors (Basel) ; 24(18)2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39338665

ABSTRACT

This research examines the application of non-invasive acoustic analysis for detecting obstructions in the vascular access (fistulas) used by kidney dialysis patients. Obstructions in these fistulas can interrupt essential dialysis treatment. In this study, we utilized a condenser microphone to capture blood flow sounds before and after angioplasty surgery, analyzing 3819 sound samples from 119 dialysis patients. These sound signals were transformed into spectrogram images to classify obstructed and unobstructed vascular accesses, that is, fistula conditions before and after the angioplasty procedure. A novel lightweight two-dimensional convolutional neural network (CNN) was developed and benchmarked against pretrained CNN models such as ResNet50 and VGG16. The proposed model achieved a prediction accuracy of 100%, surpassing the ResNet50 and VGG16 models, which recorded 99% and 95% accuracy, respectively. Additionally, the study highlighted the significantly smaller memory size of the proposed model (2.37 MB) compared to ResNet50 (91.3 MB) and VGG16 (57.9 MB), suggesting its suitability for edge computing environments. This study underscores the efficacy of diverse deep-learning approaches in detecting obstructed dialysis fistulas, presenting a scalable solution that combines high accuracy with reduced computational demands.
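Turning a sound recording into a spectrogram image, as done here before CNN classification, is a short-time Fourier transform; a minimal NumPy sketch on a synthetic tone (the frame length, hop, and test signal are illustrative assumptions):

```python
import numpy as np

def spectrogram(x, n_fft=256, hop=128):
    """Magnitude STFT: Hann-windowed frames, one spectrum column per frame."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T   # (freq bins, time frames)

# One second of a synthetic 440 Hz tone plus noise, standing in for a flow sound
fs = 8000
rng = np.random.default_rng(0)
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(fs)
S = spectrogram(x)
```

The 2-D magnitude array (often log-scaled and color-mapped) is what a 2-D CNN consumes as an image.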


Subject(s)
Neural Networks, Computer , Renal Dialysis , Humans , Constriction, Pathologic , Male , Vascular Access Devices , Sound , Female , Deep Learning , Middle Aged , Signal Processing, Computer-Assisted
6.
Curr Med Imaging ; 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39297463

ABSTRACT

BACKGROUND: Brain tumours represent a diagnostic challenge, especially in imaging, where the differentiation of normal and pathologic tissues must be precise. Up-to-date machine learning techniques could greatly improve the accuracy of brain tumor identification from MRI data. OBJECTIVE: This research paper aims to assess the efficiency of a federated learning method that joins two classifiers, convolutional neural networks (CNNs) and random forests (RFs), with dual U-Net segmentation. This procedure benefits the image identification task on preprocessed MRI scans that have already been categorized. METHODS: In addition to using a variety of datasets, federated learning was utilized to train the CNN-RF model while taking data privacy into account. The MRI images were processed with Median, Gaussian, and Wiener filters to suppress noise and make feature extraction easy and efficient. The segmentation stage used a dual U-Net layout, and the performance assessment was based on precision, recall, F1-score, and accuracy. RESULTS: The model achieved excellent classification performance on local datasets, with scores ranging from 91.28% to 95.52% for macro, micro, and weighted averages. Through federated averaging, the collective model reached 97% accuracy, compared with up to 99% for individual client models. The federated averaging method converts individual model insights into a consistent global model while keeping all personal data private. CONCLUSION: The combined structure of the federated learning framework, CNN-RF hybrid model, and dual U-Net segmentation is a robust and privacy-preserving approach for identifying brain tumors in MRI images. The results of the present study show that the technique is promising for improving the quality of brain tumor categorization and provides a pathway for practical utilization in clinical settings.
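The federated averaging step mentioned above combines client models weighted by their local data sizes; a minimal NumPy sketch with two hypothetical clients (the parameter shapes and sizes are invented for illustration):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg: size-weighted average of per-client parameter arrays."""
    total = sum(client_sizes)
    return [
        sum(w[k] * n for w, n in zip(client_weights, client_sizes)) / total
        for k in range(len(client_weights[0]))
    ]

# Two hypothetical clients, each holding one weight matrix and one bias vector
c1 = [np.array([[1.0, 2.0]]), np.array([0.0])]
c2 = [np.array([[3.0, 4.0]]), np.array([1.0])]
global_params = fed_avg([c1, c2], client_sizes=[100, 300])
```

Only these parameter arrays leave each site, which is what keeps the raw MRI data private.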

7.
Physiol Behav ; 287: 114696, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39293590

ABSTRACT

Behavior is fundamental to neuroscience research, providing insights into the mechanisms underlying thoughts, actions and responses. Various model organisms, including mice, flies, and fish, are employed to understand these mechanisms. Zebrafish, in particular, serve as a valuable model for studying anxiety-like behavior, typically measured through the novel tank diving (NTD) assay. Traditional methods for analyzing NTD assays are either manually intensive or costly when using specialized software. To address these limitations, it is useful to develop methods for the automated analysis of zebrafish NTD assays using deep-learning models. In this study, we classified zebrafish based on their anxiety levels using DeepLabCut. Subsequently, based on a training dataset of image frames, we compared deep-learning models to identify the model best suited to classify zebrafish as anxious or non-anxious and found that specific architectures, such as InceptionV3, are able to effectively perform this classification task. Our findings suggest that these deep learning models hold promise for automated behavioral analysis in zebrafish, offering an efficient and cost-effective alternative to traditional methods.

8.
Chin Clin Oncol ; 13(Suppl 1): AB093, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39295411

ABSTRACT

BACKGROUND: Central nervous system (CNS) tumours, especially glioma, are a complex disease, and many challenges are encountered in their treatment. Artificial intelligence (AI) has made a colossal impact in many walks of life at a low cost. However, this avenue still needs to be explored in healthcare settings, demanding investment of resources towards growth in this area. We aim to develop machine learning (ML) algorithms to facilitate the accurate diagnosis and precise mapping of brain tumours. METHODS: We queried data from 2019 to 2022 and extracted brain magnetic resonance imaging (MRI) scans of glioma patients. Images that had both T1-contrast and T2-fluid-attenuated inversion recovery (T2-FLAIR) volume sequences available were included. MRI images were annotated by a team supervised by a neuroradiologist. The extracted MRIs were fed to a preprocessing pipeline that performed brain extraction using SynthStrip, and then to deep learning-based semantic segmentation pipelines using a UNet-based architecture with a convolutional neural network (CNN) backbone. Subsequently, the algorithm was tested to assess its efficacy in the pixel-wise diagnosis of tumours. RESULTS: In total, 69 samples of low-grade glioma (LGG) were used, of which 62 were used for fine-tuning a pre-trained model trained on the brain tumor segmentation (BraTS) 2020 dataset and 7 were used for testing. The Dice coefficient was used as the evaluation metric; the average Dice coefficient on the 7 test samples was 0.94. CONCLUSIONS: With the advent of technology, AI continues to modify our lifestyles. It is critical to adopt this technology in healthcare with the aim of improving the provision of patient care. We present our preliminary data on the use of ML algorithms in the diagnosis and segmentation of glioma. The promising result with comparable accuracy highlights the importance of early adoption of this nascent technology.
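The Dice coefficient used for evaluation compares the overlap of predicted and ground-truth masks; a minimal NumPy sketch on toy 4×4 masks (the masks are invented for illustration):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4)); a[1:3, 1:3] = 1   # toy predicted tumour mask
b = np.zeros((4, 4)); b[1:3, 1:4] = 1   # toy ground-truth mask
score = dice(a, b)
```

A Dice of 0.94 on real volumes means predicted and annotated tumour voxels overlap almost completely.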


Subject(s)
Deep Learning , Glioma , Magnetic Resonance Imaging , Humans , Glioma/classification , Glioma/pathology , Magnetic Resonance Imaging/methods , Male , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/classification , Brain Neoplasms/pathology , Female
9.
Sci Rep ; 14(1): 21643, 2024 09 16.
Article in English | MEDLINE | ID: mdl-39284813

ABSTRACT

The main bottleneck in training a robust tumor segmentation algorithm for non-small cell lung cancer (NSCLC) on H&E is generating sufficient ground truth annotations. Various approaches for generating tumor labels to train a tumor segmentation model were explored. A large dataset of low-cost, low-accuracy panCK-based annotations was used to pre-train the model and to determine the minimum required size of the expensive but highly accurate pathologist-annotated dataset. PanCK pre-training was compared to foundation models, and various architectures were explored for the model backbone. Proper study design and sample procurement for training a generalizable model that captured variations in NSCLC H&E were also studied. H&E imaging was performed on 112 samples (three centers, two scanner types, different staining and imaging protocols). An Attention U-Net architecture was trained using the large panCK-based annotation dataset (68 samples, total area 10,326 mm2) followed by fine-tuning using a small pathologist-annotated dataset (80 samples, total area 246 mm2). This approach resulted in a mean intersection over union (mIoU) of 82% [77, 87]. PanCK pre-training provided better performance than foundation models and allowed for a 70% reduction in pathologist annotations with no drop in performance. The study design ensured model generalizability over variations in H&E, with performance consistent across centers, scanners, and subtypes.
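The mean intersection-over-union (mIoU) metric reported above averages per-class IoU over the classes present; a minimal NumPy sketch (the toy label maps are illustrative):

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    """Mean IoU across classes present in either mask (absent classes skipped)."""
    ious = []
    for c in range(n_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union:
            ious.append(np.logical_and(p, t).sum() / union)
    return sum(ious) / len(ious)

# Toy 2-class (background / tumour) label maps
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1]])
gt   = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1]])
miou = mean_iou(pred, gt, n_classes=2)
```

Unlike raw pixel accuracy, mIoU is not inflated by large background regions, which is why it is the standard segmentation metric here.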


Subject(s)
Carcinoma, Non-Small-Cell Lung , Deep Learning , Lung Neoplasms , Pathologists , Humans , Lung Neoplasms/pathology , Carcinoma, Non-Small-Cell Lung/pathology , Image Processing, Computer-Assisted/methods , Algorithms
10.
Biol Methods Protoc ; 9(1): bpae063, 2024.
Article in English | MEDLINE | ID: mdl-39258158

ABSTRACT

Deep learning applications in taxonomic classification for animals and plants from images have become popular, while those for microorganisms are still lagging behind. Our study investigated the potential of deep learning for the taxonomic classification of hundreds of filamentous fungi from colony images, which is typically a task that requires specialized knowledge. We isolated soil fungi, annotated their taxonomy using standard molecular barcode techniques, and took images of the fungal colonies grown in petri dishes (n = 606). We applied a convolutional neural network with multiple training approaches and model architectures to deal with some common issues in ecological datasets: small amounts of data, class imbalance, and hierarchically structured grouping. Model performance was overall low, mainly due to the relatively small dataset, class imbalance, and the high morphological plasticity exhibited by fungal colonies. However, our approach indicates that morphological features like color, patchiness, and colony extension rate could be used for the recognition of fungal colonies at higher taxonomic ranks (i.e. phylum, class, and order). Model explanation implies that image recognition characters appear at different positions within the colony (e.g. outer or inner hyphae) depending on the taxonomic resolution. Our study suggests the potential of deep learning applications for a better understanding of the taxonomy and ecology of filamentous fungi amenable to axenic culturing. Meanwhile, it also highlights some technical challenges of deep learning image analysis in ecology, showing that the domain of applicability of these methods needs to be carefully considered.
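One standard remedy for the class imbalance mentioned above is inverse-frequency class weighting of the training loss; a minimal sketch (the label counts below are hypothetical, not the study's actual class distribution):

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights: rare classes get larger loss weights."""
    counts = Counter(labels)
    total = len(labels)
    return {c: total / (len(counts) * n) for c, n in counts.items()}

# Hypothetical colony labels at the phylum rank (counts sum to n = 606)
labels = ["Ascomycota"] * 400 + ["Basidiomycota"] * 150 + ["Mucoromycota"] * 56
weights = class_weights(labels)
```

Passed to a weighted cross-entropy loss, these values keep the majority phylum from dominating training.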

11.
J Insur Med ; 51(2): 64-76, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39266002

ABSTRACT

Recent artificial intelligence (AI) advancements in cardiovascular medicine offer potential enhancements in diagnosis, prediction, treatment, and outcomes. This article aims to provide a basic understanding of AI-enabled ECG technology. Specific conditions and findings are discussed, followed by a review of associated terminology and methodology. In the appendix, the definitions of AUC and accuracy are explained. The application of deep learning models enables detecting diseases from normal electrocardiograms at an accuracy not previously achieved by technology or human experts. Results with AI-enabled ECG are encouraging, as they considerably exceed current screening models for specific conditions (i.e., atrial fibrillation, left ventricular dysfunction, aortic stenosis, and hypertrophic cardiomyopathy). This could potentially revitalize the use of the ECG in the insurance domain. While we embrace the findings from this rapidly evolving technology, cautious optimism is still necessary at this point.


Subject(s)
Artificial Intelligence , Electrocardiography , Humans , Electrocardiography/methods , Deep Learning , Atrial Fibrillation/diagnosis
12.
Heliyon ; 10(16): e36411, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39253213

ABSTRACT

This study introduces a groundbreaking method to enhance the accuracy and reliability of emotion recognition systems by combining electrocardiogram (ECG) with electroencephalogram (EEG) data, using an eye-tracking gated strategy. Initially, we propose a technique to filter out irrelevant portions of emotional data by employing pupil diameter metrics from eye-tracking data. Subsequently, we introduce an innovative approach for estimating effective connectivity to capture the dynamic interaction between the brain and the heart during emotional states of happiness and sadness. Granger causality (GC) is estimated and utilized to optimize input for a highly effective pre-trained convolutional neural network (CNN), specifically ResNet-18. To assess this methodology, we employed EEG and ECG data from the publicly available MAHNOB-HCI database, using a 5-fold cross-validation approach. Our method achieved an impressive average accuracy and area under the curve (AUC) of 91.00% and 0.97, respectively, for GC-EEG-ECG images processed with ResNet-18. Comparative analysis with state-of-the-art studies clearly shows that augmenting ECG with EEG and refining data with an eye-tracking strategy significantly enhances emotion recognition performance across various emotions.
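Granger causality between two signals reduces to comparing residual variances of a restricted autoregressive model (the target's own past) and a full one (adding the other signal's past); a minimal NumPy sketch on synthetic coupled series (the lag order, coupling strength, and data are illustrative, not from the study):

```python
import numpy as np

def granger_gain(x, y, lag=2):
    """log(resid var of y-only model / resid var of full model);
    a clearly positive value suggests x Granger-causes y."""
    n = len(y)
    Y = y[lag:]
    past_y = np.column_stack([y[lag - k:n - k] for k in range(1, lag + 1)])
    past_x = np.column_stack([x[lag - k:n - k] for k in range(1, lag + 1)])

    def resid_var(preds):
        X = np.column_stack([np.ones(len(Y)), preds])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return float(np.mean((Y - X @ beta) ** 2))

    return np.log(resid_var(past_y) /
                  resid_var(np.column_stack([past_y, past_x])))

# Synthetic pair: y is driven by the previous sample of x, not vice versa
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
for tt in range(1, 500):
    y[tt] = 0.8 * x[tt - 1] + 0.1 * rng.standard_normal()
gc_xy = granger_gain(x, y)
gc_yx = granger_gain(y, x)
```

Arranging such pairwise values over many channel pairs yields the effective-connectivity images that the study feeds to ResNet-18.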

13.
Sci Rep ; 14(1): 20754, 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39237695

ABSTRACT

To ensure the reliability of machining quality, it is crucial to predict tool wear accurately. In this paper, a novel deep learning-based model is proposed, which synthesizes the advantages of power spectral density (PSD), convolutional neural networks (CNN), and the vision transformer model (ViT), namely PSD-CVT. PSD maps provide a comprehensive view of the spectral characteristics of the signals, making different signals easier to analyze and compare. CNN focuses on local feature extraction, capturing local information such as the texture, edge, and shape of the image, while the attention mechanism in ViT can effectively capture the global structure and long-range dependencies present in the image. Two fully connected layers with a ReLU function are used to obtain the predicted tool wear values. The experimental results on the PHM 2010 dataset demonstrate that the proposed model has higher prediction accuracy than the CNN model or ViT model alone, as well as outperforms several existing methods in accurately predicting tool wear. The proposed prediction method can also be applied to predict tool wear in other machining fields.
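A PSD map of the kind described above can be estimated with an averaged-periodogram (Welch-style) procedure; a minimal NumPy sketch (the window, overlap, and synthetic test signal are illustrative assumptions):

```python
import numpy as np

def welch_psd(x, fs, n_seg=256):
    """Averaged-periodogram PSD estimate (Hann window, 50% overlap)."""
    win = np.hanning(n_seg)
    step = n_seg // 2
    segs = [x[i:i + n_seg] * win for i in range(0, len(x) - n_seg + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(s))**2 for s in segs], axis=0)
    psd /= fs * (win**2).sum()                 # density normalization
    return np.fft.rfftfreq(n_seg, 1 / fs), psd

# Synthetic "machining" signal: 100 Hz component buried in noise
fs = 1024
rng = np.random.default_rng(0)
t = np.arange(4 * fs) / fs
x = np.sin(2 * np.pi * 100 * t) + 0.2 * rng.standard_normal(len(t))
freqs, psd = welch_psd(x, fs)
```

Averaging over overlapping segments trades frequency resolution for variance reduction, which is what makes the spectral peaks in the PSD map stand out.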

14.
Radiol Artif Intell ; 6(5): e230342, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39166973

ABSTRACT

Purpose To develop an artificial intelligence model that uses supervised contrastive learning (SCL) to minimize bias in chest radiograph diagnosis. Materials and Methods In this retrospective study, the proposed method was evaluated on two datasets: the Medical Imaging and Data Resource Center (MIDRC) dataset with 77 887 chest radiographs in 27 796 patients collected as of April 20, 2023, for COVID-19 diagnosis and the National Institutes of Health ChestX-ray14 dataset with 112 120 chest radiographs in 30 805 patients collected between 1992 and 2015. In the ChestX-ray14 dataset, thoracic abnormalities included atelectasis, cardiomegaly, effusion, infiltration, mass, nodule, pneumonia, pneumothorax, consolidation, edema, emphysema, fibrosis, pleural thickening, and hernia. The proposed method used SCL with carefully selected positive and negative samples to generate fair image embeddings, which were fine-tuned for subsequent tasks to reduce bias in chest radiograph diagnosis. The method was evaluated using the marginal area under the receiver operating characteristic curve difference (∆mAUC). Results The proposed model showed a significant decrease in bias across all subgroups compared with the baseline models, as evidenced by a paired t test (P < .001). The ∆mAUCs obtained by the proposed method were 0.01 (95% CI: 0.01, 0.01), 0.21 (95% CI: 0.21, 0.21), and 0.10 (95% CI: 0.10, 0.10) for sex, race, and age subgroups, respectively, on the MIDRC dataset and 0.01 (95% CI: 0.01, 0.01) and 0.05 (95% CI: 0.05, 0.05) for sex and age subgroups, respectively, on the ChestX-ray14 dataset. Conclusion Employing SCL can mitigate bias in chest radiograph diagnosis, addressing concerns of fairness and reliability in deep learning-based diagnostic methods. Keywords: Thorax, Diagnosis, Supervised Learning, Convolutional Neural Network (CNN), Computer-aided Diagnosis (CAD). Supplemental material is available for this article. © RSNA, 2024. See also the commentary by Johnson in this issue.


Subject(s)
COVID-19 , Radiography, Thoracic , Humans , Radiography, Thoracic/methods , Radiography, Thoracic/standards , Retrospective Studies , Female , Male , Middle Aged , Aged , COVID-19/diagnostic imaging , COVID-19/diagnosis , Adult , Artificial Intelligence , SARS-CoV-2 , Radiographic Image Interpretation, Computer-Assisted/methods , Supervised Machine Learning , Adolescent , Young Adult
15.
Sci Rep ; 14(1): 18537, 2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39122797

ABSTRACT

Sandification can degrade the strength and quality of dolomite and, as an unfavorable geological boundary, compromise the stability of a tunnel's surrounding rock. Classifying the degree of sandification of sandy dolomite is one of the non-trivial challenges faced by geotechnical engineering projects such as tunneling in complex geographical environments. Traditional methods that quantitatively measure physical parameters or analyze visual features are either time-consuming or inaccurate in practical use. To address these issues, we, for the first time, introduce convolutional neural network (CNN)-based image classification methods to the dolomite sandification degree classification task. In this study, we established a large-scale dataset comprising 5729 images, classified into four distinct sandification degrees of sandy dolomite, collected from the vicinity of a tunnel located in the Yuxi section of the CYWD Project in China. We conducted comprehensive classification experiments using this dataset. The results demonstrate that CNN-based models achieved an accuracy of up to 91.4%, underscoring the value of this dataset and its potential for applications in complex geographical analyses.

16.
Sensors (Basel) ; 24(15)2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39123812

ABSTRACT

Maintaining security in communication networks has long been a major concern. This issue has become increasingly crucial due to the emergence of new communication architectures like the Internet of Things (IoT) and the growing complexity of infiltration techniques. Previous intrusion detection systems (IDSs), which often use a centralized design to identify threats, are now ineffective for IoT-based networks. To resolve these issues, this study presents a novel cooperative approach to IoT intrusion detection that may help address certain current security issues. The suggested approach uses Black Hole Optimization (BHO) to choose the most important attributes, those that best describe the communication between objects. Additionally, a novel method for describing the network's matrix-based communication properties is put forward. These two feature sets form the inputs of the proposed intrusion detection model. The suggested technique splits the network into a number of subnets using a software-defined network (SDN). Each subnet is monitored by a controller node, which uses a parallel combination of convolutional neural networks (PCNN) to determine the presence of security threats in the traffic passing through its subnet. The proposed method also uses majority voting for the cooperation of controller nodes in order to detect attacks more accurately. The findings demonstrate that, in comparison to prior approaches, the suggested cooperative strategy can detect attacks in the NSL-KDD and UNSW-NB15 datasets with an accuracy of 99.89 and 97.72 percent, respectively, a minimum improvement of 0.6 percent.
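The controller-node cooperation via majority voting can be sketched in plain Python; note the fail-safe tie-breaking toward "attack" is our assumption, not something stated in the abstract:

```python
from collections import Counter

def majority_vote(verdicts):
    """Fuse per-controller verdicts; ties resolve to 'attack' (fail-safe assumption)."""
    counts = Counter(verdicts)
    if counts["attack"] >= counts["normal"]:
        return "attack"
    return "normal"

# Five hypothetical subnet controllers reporting on the same traffic flow
decision = majority_vote(["attack", "normal", "attack", "attack", "normal"])
```

Fusing independent detectors this way suppresses isolated false alarms from any single subnet controller.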

17.
Heliyon ; 10(15): e35183, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39170306

ABSTRACT

The battery's performance heavily influences the safety, dependability, and operational efficiency of electric vehicles (EVs). This paper introduces an innovative hybrid deep learning architecture that dramatically enhances the estimation of the state of charge (SoC) of lithium-ion (Li-ion) batteries, crucial for efficient EV operation. Our model uniquely integrates a convolutional neural network (CNN) with bidirectional long short-term memory (Bi-LSTM), optimized through evolutionary intelligence, enabling an advanced level of precision in SoC estimation. A novel aspect of this work is the application of the Group Learning Algorithm (GLA) to tune the hyperparameters of the CNN-Bi-LSTM network meticulously. This approach not only refines the model's accuracy but also significantly enhances its efficiency by optimizing each parameter to best capture and integrate both spatial and temporal information from the battery data. This is in stark contrast to conventional models that typically focus on either spatial or temporal data, but not both effectively. The model's robustness is further demonstrated through its training across six diverse datasets that represent a range of EV discharge profiles, including the Highway Fuel Economy Test (HWFET), the US06 test, the Beijing Dynamic Stress Test (BJDST), the dynamic stress test (DST), the federal urban driving schedule (FUDS), and the urban dynamometer driving schedule (UDDS). These tests are crucial for ensuring that the model can perform under various real-world conditions. Experimentally, our hybrid model not only surpasses the performance of existing LSTM and CNN frameworks in tracking SoC estimation but also achieves an impressively quick convergence to true SoC values, maintaining an average root mean square error (RMSE) of less than 1%.
Furthermore, the experimental outcomes suggest that this new deep learning methodology outstrips conventional approaches in both convergence speed and estimation accuracy, thus promising to significantly enhance battery life and overall EV efficiency.

18.
Comput Biol Med ; 180: 108945, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39094328

ABSTRACT

Driver monitoring systems (DMS) are crucial in autonomous driving systems (ADS) when users are concerned about driver/vehicle safety. In DMS, the significant influencing factor of driver/vehicle safety is the classification of driver distractions or activities. The driver's distractions or activities convey meaningful information to the ADS, enhancing driver/vehicle safety in real-time vehicle driving. The classification of driver distraction or activity is challenging due to the unpredictable nature of human driving. This paper proposes a convolutional block attention module embedded in a Visual Geometry Group (CBAM VGG16) deep learning architecture to improve the classification performance of driver distractions. The proposed CBAM VGG16 architecture is a hybrid network of CBAM layers with conventional VGG16 network layers. Adding a CBAM layer into a traditional VGG16 architecture enhances the model's feature extraction capacity and improves the driver distraction classification results. To validate the performance of our proposed CBAM VGG16 architecture, we tested our model on the American University in Cairo (AUC) distracted driver dataset version 2 (AUCD2) for camera 1 and 2 images. Our experiment results show that the proposed CBAM VGG16 architecture achieved 98.65% classification accuracy for camera 1 and 97.85% for camera 2 AUCD2 datasets. We also compared the driver distraction classification performance of CBAM VGG16 with the DenseNet121, Xception, MobileNetV2, InceptionV3, and VGG16 architectures based on accuracy, loss, precision, F1 score, recall, and confusion matrix. The driver distraction classification results indicate that the proposed CBAM VGG16 has 3.7% classification improvement for AUCD2 camera 1 images and 5% for camera 2 images compared to the conventional VGG16 deep learning classification model.
We also tested the proposed architecture with different hyperparameter values and estimated the optimal values for the best driver distraction classification. The contribution of data augmentation techniques to data diversity, and their effect on overfitting, was likewise validated. Grad-CAM visualizations of the proposed CBAM VGG16 architecture show that the VGG16 architecture without CBAM layers is less attentive to the essential parts of the driver distraction images. Furthermore, we evaluated the classification performance of the proposed CBAM VGG16 architecture with respect to the number of model parameters, model size, various input image resolutions, cross-validation, Bayesian search optimization, and different numbers of CBAM layers. The results indicate that the CBAM layers in our proposed architecture enhance the classification performance of the conventional VGG16 architecture and outperform state-of-the-art deep learning architectures.
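A minimal sketch of a CBAM block of the kind described above, which applies channel attention followed by spatial attention to a feature map before passing it on to the next VGG16 stage. This is an illustrative PyTorch implementation under standard CBAM assumptions (reduction ratio 16, 7x7 spatial convolution); the paper's exact layer placement and hyperparameters are not reproduced here.

```python
# Illustrative CBAM block (not the authors' exact implementation):
# channel attention via a shared MLP over avg/max-pooled descriptors,
# then spatial attention via a 7x7 conv over channel-wise avg/max maps.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # Shared MLP for channel attention
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Single conv producing the spatial attention map
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: sigmoid(MLP(avg-pool) + MLP(max-pool))
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention over channel-wise avg and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

x = torch.randn(2, 64, 32, 32)      # toy feature map from a VGG16 stage
out = CBAM(64)(x)
print(tuple(out.shape))             # (2, 64, 32, 32)
```

Because the block preserves the feature map's shape, it can be interleaved between existing convolutional stages of a backbone such as VGG16 without altering downstream layers.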


Subject(s)
Automobile Driving , Deep Learning , Humans , Distracted Driving , Attention , Neural Networks, Computer
19.
Quant Imaging Med Surg ; 14(8): 6048-6059, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39144003

ABSTRACT

Background: Noninvasively detecting epidermal growth factor receptor (EGFR) mutation status in lung adenocarcinoma patients before targeted therapy remains a challenge. This study aimed to develop a 3-dimensional (3D) convolutional neural network (CNN)-based deep learning model to predict EGFR mutation status using computed tomography (CT) images. Methods: We retrospectively collected 660 patients from 2 large medical centers. The patients were divided into training (n=528) and external test (n=132) sets according to hospital source. The CNN model was trained in a supervised end-to-end manner, and its performance was evaluated on the external test set. To benchmark the CNN model, we constructed 1 clinical and 3 radiomics models, as well as a comprehensive model combining the highest-performing radiomics model and the CNN model. Receiver operating characteristic (ROC) curves were used as the primary measure of performance for each model, and the DeLong test was used to compare performance differences between models. Results: Compared with the clinical model [training set, area under the curve (AUC) =69.6%, 95% confidence interval (CI): 0.661-0.732; test set, AUC =68.4%, 95% CI: 0.609-0.752] and the highest-performing radiomics model (training set, AUC =84.3%, 95% CI: 0.812-0.873; test set, AUC =72.4%, 95% CI: 0.653-0.794), the CNN model (training set, AUC =94.3%, 95% CI: 0.920-0.961; test set, AUC =94.7%, 95% CI: 0.894-0.978) had significantly better performance for predicting EGFR mutation status. In addition, compared with the comprehensive model (training set, AUC =95.7%, 95% CI: 0.942-0.971; test set, AUC =87.4%, 95% CI: 0.820-0.924), the CNN model showed better stability. Conclusions: The CNN model has excellent performance in noninvasively predicting EGFR mutation status in patients with lung adenocarcinoma and is expected to become an auxiliary tool for clinicians.
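A toy sketch of the kind of paired AUC comparison the abstract describes. The study uses the DeLong test; the sketch below substitutes a simpler paired bootstrap on synthetic scores to show the mechanics of comparing two models' AUCs on the same patients. All data and model names here are invented for illustration.

```python
# Toy paired comparison of two models' AUCs (synthetic data, not study data).
# The paper uses the DeLong test; here a paired bootstrap stands in for it.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                                   # toy mutation labels
p_cnn = np.clip(y * 0.6 + rng.normal(0.2, 0.2, 200), 0, 1)    # stronger "CNN" scores
p_rad = np.clip(y * 0.3 + rng.normal(0.35, 0.25, 200), 0, 1)  # weaker "radiomics" scores

diffs = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))   # resample the same patients for both models
    if len(np.unique(y[idx])) < 2:
        continue                            # skip degenerate resamples
    diffs.append(roc_auc_score(y[idx], p_cnn[idx]) -
                 roc_auc_score(y[idx], p_rad[idx]))

lo, hi = np.percentile(diffs, [2.5, 97.5])  # 95% CI of the AUC difference
print(f"AUC difference 95% CI: [{lo:.3f}, {hi:.3f}]")
```

If the interval excludes zero, the two models' discriminative performance differs on this cohort; the DeLong test reaches the same kind of conclusion analytically rather than by resampling.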

20.
Plant Dis ; 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39160128

ABSTRACT

Visual detection of stromata (brown-black, elevated fungal fruiting bodies) is a primary method for quantifying tar spot early in the season, as these structures are definitive signs of the disease and essential for effective disease monitoring and management. Here, we present Stromata Contour Detection Algorithm version 2 (SCDA v2), which addresses the limitations of the previously developed SCDA version 1 (SCDA v1) without requiring an empirical search for the optimal Decision Making Input Parameters (DMIPs), while achieving higher and more consistent accuracy in tar spot stromata detection. SCDA v2 operates in two components: (i) SCDA v1 produces tar-spot-like region proposals for a given input corn leaf Red-Green-Blue (RGB) image, and (ii) a pre-trained Convolutional Neural Network (CNN) classifier identifies true tar spot stromata among the region proposals. To demonstrate the enhanced performance of SCDA v2, we utilized datasets of RGB images of corn leaves from field (low, middle, and upper canopies) and glasshouse conditions under variable environments, exhibiting different tar spot severities at various corn developmental stages. Various accuracy analyses (F1-score, linear regression, and Lin's concordance correlation) showed that SCDA v2 had greater agreement with the reference data (human visual annotation) than SCDA v1. SCDA v2 achieved a mean Dice value (overall accuracy) of 73.7%, compared with 30.8% for SCDA v1. The enhanced F1-score primarily resulted from eliminating overestimation cases using the CNN classifier. Our findings indicate the promising potential of SCDA v2 for glasshouse and field-scale applications, including tar spot phenotyping and surveillance projects.
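A small sketch of the agreement metrics named above: Dice (equivalent to F1 for binary detections) and Lin's concordance correlation coefficient. The arrays are illustrative placeholders, not data from the study.

```python
# Toy illustration of the evaluation metrics mentioned in the abstract.
import numpy as np

def dice(pred, ref):
    """Dice coefficient = F1 for binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, ref).sum()
    return 2 * inter / (pred.sum() + ref.sum())

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

pred = np.array([1, 1, 0, 1, 0, 0, 1, 1], bool)   # algorithm detections (toy)
ref  = np.array([1, 1, 0, 0, 0, 1, 1, 1], bool)   # human annotation (toy)
print(round(dice(pred, ref), 3))                  # 0.8

counts_algo = np.array([5, 12, 30, 44, 60.0])     # stromata counts per leaf (toy)
counts_ref  = np.array([6, 10, 33, 40, 63.0])
print(round(lins_ccc(counts_algo, counts_ref), 3))
```

Dice/F1 scores per-detection overlap with the annotation, while Lin's concordance measures how closely per-leaf counts track the reference along the identity line, which is why both appear in the agreement analysis.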
