Results 1 - 20 of 526
1.
Sci Rep ; 14(1): 22693, 2024 09 30.
Article in English | MEDLINE | ID: mdl-39349728

ABSTRACT

Wall shear stress (WSS) is one of the most important parameters in cardiovascular fluid mechanics, as it conveys information such as the risk level posed by a vascular occlusion. Since WSS cannot be measured directly, and available indirect methods suffer from low resolution, uncertainty, and high cost, this study proposes a novel method that combines computational fluid dynamics (CFD), fluid-structure interaction (FSI), a conditional generative adversarial network (cGAN), and a convolutional neural network (CNN) to predict coronary artery occlusion risk accurately and rapidly from noninvasive images alone. First, a cGAN model called WSSGAN was developed to predict WSS contours on the vessel wall, trained and tested on WSS contours calculated with coupled CFD-FSI simulations. Then, an 11-layer CNN was used to classify the WSS contours into three occlusion grades: low risk, medium risk, and high risk. To verify the proposed method on a real case, the patient's magnetic resonance imaging (MRI) scans were converted into a 3D geometry for use in the WSSGAN model, and the WSS contours predicted by WSSGAN were fed into the CNN model to classify the occlusion grade.
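For context on why WSS is normally obtained from simulation rather than measurement: an analytical value exists only for idealized geometries. For steady Poiseuille flow in a straight cylindrical vessel, the wall shear stress is τ_w = 4μQ/(πR³). A minimal sketch of that textbook relation (the numerical values below are illustrative, not from the paper):

```python
import math

def poiseuille_wss(mu: float, q: float, r: float) -> float:
    """Wall shear stress tau_w = 4*mu*Q/(pi*R^3), in Pa, for steady
    laminar flow of a fluid with viscosity mu (Pa.s) at volumetric
    flow rate q (m^3/s) through a straight vessel of radius r (m)."""
    return 4.0 * mu * q / (math.pi * r ** 3)

# Illustrative values: blood viscosity ~3.5e-3 Pa.s, coronary flow
# ~1e-6 m^3/s (60 mL/min), lumen radius ~1.5e-3 m.
tau = poiseuille_wss(3.5e-3, 1.0e-6, 1.5e-3)
print(round(tau, 3))  # 1.32 (Pa)
```

Real coronary geometries are curved, branched, and compliant, which is exactly why the paper resorts to coupled CFD-FSI simulations to generate training data.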


Subjects
Coronary Occlusion; Neural Networks, Computer; Humans; Coronary Occlusion/diagnostic imaging; Hydrodynamics; Magnetic Resonance Imaging/methods; Stress, Mechanical; Models, Cardiovascular; Male; Coronary Vessels/diagnostic imaging
2.
J Funct Biomater ; 15(8)2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39194677

ABSTRACT

Understanding bone surface curvatures is crucial for the advancement of bone material design, as these curvatures play a significant role in the mechanical behavior and functionality of bone structures. Previous studies have demonstrated that bone surface curvature distributions can be used to characterize bone geometry and have been proposed as key parameters for biomimetic microstructure design and optimization. However, understanding of how bone surface curvature distributions correlate with bone microstructure and mechanical properties remains limited. This study hypothesized that bone surface curvature distributions could be used to predict the microstructure as well as the mechanical properties of trabecular bone. To test the hypothesis, a convolutional neural network (CNN) model was trained and validated to predict the histomorphometric parameters (e.g., BV/TV, BS, Tb.Th, DA, Conn.D, and SMI), geometric parameters (e.g., plate area PA, plate thickness PT, rod length RL, rod diameter RD, plate-to-plate nearest neighbor distance NNDPP, rod-to-rod nearest neighbor distance NNDRR, plate number PN, and rod number RN), as well as the apparent stiffness tensor of trabecular bone from various bone surface curvature distributions, including the maximum principal curvature distribution, minimum principal curvature distribution, Gaussian curvature distribution, and mean curvature distribution. The results showed that the surface curvature distribution-based deep learning model achieved high fidelity in predicting the major histomorphometric parameters and geometric parameters as well as the stiffness tensor of trabecular bone, thus supporting the hypothesis of this study. The findings underscore the importance of incorporating bone surface curvature analysis in the design of synthetic bone materials and implants.
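The four curvature distributions named above are related pointwise by standard differential geometry: given the principal curvatures κ₁ and κ₂ at a surface point, the Gaussian curvature is K = κ₁κ₂ and the mean curvature is H = (κ₁ + κ₂)/2. A minimal sketch of turning per-vertex principal curvatures into the four histogram-style distributions a model like this would consume (bin count and range are illustrative choices, not the paper's):

```python
import numpy as np

def curvature_distributions(k1, k2, bins=16, rng=(-2.0, 2.0)):
    """From per-vertex principal curvatures k1 >= k2, build normalized
    histograms of the four measures: max principal (k1), min principal
    (k2), Gaussian K = k1*k2, and mean H = (k1 + k2)/2."""
    k1, k2 = np.asarray(k1, float), np.asarray(k2, float)
    gaussian = k1 * k2
    mean = 0.5 * (k1 + k2)
    hists = {}
    for name, vals in [("max", k1), ("min", k2),
                       ("gauss", gaussian), ("mean", mean)]:
        h, _ = np.histogram(vals, bins=bins, range=rng, density=True)
        hists[name] = h
    return hists

# Toy principal curvatures for three surface points:
k1 = np.array([1.0, 0.5, -0.2])
k2 = np.array([0.2, -0.5, -1.0])
d = curvature_distributions(k1, k2)
```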

3.
Cell Rep ; 43(8): 114583, 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39110597

ABSTRACT

Vast shotgun metagenomics data remain an underutilized resource for novel enzymes. Artificial intelligence (AI) has increasingly been applied to protein mining, but its conventional performance evaluation is interpolative in nature, and trained models often struggle to extrapolate effectively when challenged with unknown data. In this study, we present a convolutional neural network (CNN)-based framework, DeepMineLys (deep mining of phage lysins from the human microbiome), to identify phage lysins from three human microbiome datasets. When validated on an independent dataset, our method achieved an F1-score of 84.00%, surpassing existing methods by 20.84%. We expressed 16 lysin candidates from the top 100 sequences in E. coli, confirming 11 as active. The best one displayed 6.2-fold the activity of lysozyme derived from hen egg white, establishing it as the most potent lysin reported from the human microbiome. Our study also underscores several important issues in applying AI to biology questions, and the framework should be applicable to mining other proteins.
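The headline F1-score of 84.00% is the harmonic mean of precision and recall, F1 = 2PR/(P + R). A quick sketch of the computation (the confusion counts below are illustrative, chosen only so the result lands at 0.84; they are not the paper's numbers):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = 2*P*R / (P + R) from confusion counts: true positives,
    false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts only (not from the paper):
print(round(f1_score(84, 16, 16), 2))  # 0.84, i.e. an F1 of 84%
```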


Subjects
Bacteriophages; Microbiota; Humans; Bacteriophages/genetics; Bacteriophages/metabolism; Data Mining; Viral Proteins/metabolism; Neural Networks, Computer; Animals; Muramidase/metabolism; Escherichia coli/genetics; Escherichia coli/metabolism
4.
Heliyon ; 10(15): e35358, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39170369

ABSTRACT

As an artificial intelligence technique, a convolutional neural network model has been used to extract average surface roughness from the geometric characteristics of a membrane image featuring micro- and nanostructures. Conventional surface roughness measurements, e.g., atomic force microscopy and optical profilometry, analyze a porous membrane surface over a region of interest restricted to a few micrometers by the depth resolution. In contrast, a scanning electron microscope image, combined with the feature extraction process, yields surface roughness over multiple areas at various depth resolutions. Through image preprocessing, the geometric pattern is elucidated by amplifying the disparity in pixel intensity between the bright and dark regions of the image. The geometric patterns of the binary image and magnitude spectrum confirmed the classification of image surface roughness in a categorical scatter plot. A group of images cropped from an original image is used to predict the logarithmic average surface roughness values; the model achieved 4.80 % MAPE on the test dataset. Extracting geometric patterns through a feature-map-based CNN, combined with a statistical approach, offers an indirect surface measurement: predictions are aggregated over a bundle of output data, which reduces the random error of the structural characteristics. This feature extraction approach of a CNN with statistical analysis is a valuable method for revealing hidden physical characteristics of surface geometries from irregular pixel patterns in an array of images.
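The reported error metric, MAPE, is the mean absolute percentage error between predicted and true roughness values. A minimal sketch (the sample values are illustrative, not the membrane data):

```python
import numpy as np

def mape(y_true, y_pred) -> float:
    """Mean absolute percentage error:
    mean(|y - y_hat| / |y|) * 100, in percent."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

# Illustrative log-roughness values (not the paper's data):
print(round(mape([2.0, 4.0], [1.9, 4.2]), 1))  # 5.0
```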

5.
Open Respir Med J ; 18: e18743064296470, 2024.
Article in English | MEDLINE | ID: mdl-39130650

ABSTRACT

Background: Electronic health records (EHRs) are live, digital patient records that provide a thorough overview of a person's complete health data. EHRs support better healthcare decisions and evidence-based patient treatment and track patients' clinical development. The EHR offers a new range of opportunities for analyzing and contrasting exam findings and other data, creating a proper information-management mechanism to boost effectiveness, quick resolutions, and identification. Aim: The aim of this study was to implement an interoperable EHR system to improve the quality of care through a decision support system for identifying lung cancer in its early stages. Objective: The main objective of the proposed system was to develop an Android application for maintaining an EHR system and a decision support system using deep learning for the early detection of diseases; the second objective was to study the early stages of lung disease to predict/detect it using the decision support system. Methods: An Android application was developed to extract and accumulate the EHR data of each patient. The accumulated data were used to create a decision support system for the early prediction of lung cancer. To train, test, and validate the prediction of lung cancer, samples from a ready dataset and data from patients were collected. The valid patient data covered an age range of 40 to 70 years and included both male and female patients. In the experiments, a total of 316 images were considered, and testing was done by splitting the dataset into 80:20 partitions. For evaluation, a manual classification was done for three lung cancer types: large cell carcinoma, adenocarcinoma, and squamous cell carcinoma.
Results: The first model was tested for interoperability constraints of the EHR with data collection and updates. For the disease detection system, lung cancer was predicted for the large cell carcinoma, adenocarcinoma, and squamous cell carcinoma types using the 80:20 training and testing ratio. Among the 336 images considered, large cell carcinoma was predicted less often than adenocarcinoma and squamous cell carcinoma. The analysis also showed that large cell carcinoma occurred mostly in males due to smoking and was found as breast cancer in females. Conclusion: As challenges increase daily in healthcare industries, a secure, interoperable EHR could help patients and doctors access patient data efficiently and effectively using an Android application. Therefore, a decision support system using a deep learning model was attempted and successfully used for disease detection. Early disease detection for lung cancer was evaluated, and the model achieved an accuracy of 93%. In future work, EHR data integration can be performed to detect various diseases early.

6.
Comput Methods Programs Biomed ; 255: 108323, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39029417

ABSTRACT

BACKGROUND AND OBJECTIVE: Patient-ventilator asynchrony (PVA) is associated with poor clinical outcomes and remains under-monitored. Automated PVA detection would enable the complete monitoring that standard observational methods do not allow. While model-based and machine learning PVA approaches exist, they have variable performance and can miss specific PVA events. This study compares a model- and rule-based algorithm with a machine learning PVA method by retrospectively validating both methods using an independent patient cohort. METHODS: Hysteresis loop analysis (HLA), which is a rule-based method (RBM), and a tri-input convolutional neural network (TCNN) machine learning model are used to classify 7 types of PVA: 1) flow asynchrony; 2) reverse triggering; 3) premature cycling; 4) double triggering; 5) delayed cycling; 6) ineffective efforts; and 7) auto triggering. Class activation mapping (CAM) heatmaps visualise the sections of respiratory waveforms the TCNN model uses for decision making, improving result interpretability. Both PVA classification methods were used to classify incidence in an independent retrospective clinical cohort of 11 mechanically ventilated patients for validation and performance comparison. RESULTS: Self-validation with the training dataset shows better overall HLA performance (accuracy, sensitivity, specificity: 97.5 %, 96.6 %, 98.1 %) than the TCNN model (89.5 %, 98.3 %, 83.9 %). The TCNN model demonstrates higher sensitivity in detecting PVA, but HLA was better at identifying non-PVA breathing cycles due to its rule-based nature. While the overall asynchrony index (AI) identified by both classification methods is very similar, the intra-patient distribution of each PVA type varies between HLA and TCNN. CONCLUSION: The collective findings underscore the efficacy of both HLA and TCNN in PVA detection, indicating the potential for real-time continuous monitoring of PVA.
While ML methods such as the TCNN demonstrate good PVA identification performance, it is essential to ensure optimal model architecture and diversity in training data before widespread uptake as standard care. Moving forward, further validation and adoption of RBM methods such as HLA offers an effective approach to PVA detection while providing clear insight into the underlying patterns of PVA, better aligning with clinical needs for transparency, explicability, adaptability, and reliability of these emerging tools for clinical care.
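The three criteria used to compare HLA and TCNN (accuracy, sensitivity, specificity) all follow from the binary confusion counts. A minimal sketch (the counts below are illustrative, picked only to land near HLA's reported figures; they are not the study's actual breath counts):

```python
def binary_metrics(tp: int, tn: int, fp: int, fn: int):
    """Accuracy, sensitivity (true positive rate), and specificity
    (true negative rate) from binary confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Illustrative counts (not from the study):
acc, sens, spec = binary_metrics(tp=966, tn=981, fp=19, fn=34)
```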


Subjects
Algorithms; Machine Learning; Neural Networks, Computer; Respiration, Artificial; Humans; Retrospective Studies; Male; Female; Middle Aged; Aged; Ventilators, Mechanical; Patient-Ventilator Asynchrony
7.
PeerJ Comput Sci ; 10: e2152, 2024.
Article in English | MEDLINE | ID: mdl-38983193

ABSTRACT

With the rapid, extensive development of the Internet, users not only enjoy great convenience but also face numerous serious security problems, and the increasing frequency of data breaches has made the network security situation increasingly urgent. In cybersecurity, intrusion detection plays a pivotal role in monitoring network attacks; however, the efficacy of existing solutions in detecting such intrusions remains suboptimal. To address this challenge, we propose a sparse autoencoder-Bayesian optimization-convolutional neural network (SA-BO-CNN) system based on a convolutional neural network (CNN). First, to tackle data imbalance, we employ the SMOTE resampling function during system construction. Second, we enhance the system's feature extraction capabilities by incorporating the sparse autoencoder (SA). Finally, we leverage Bayesian optimization (BO) in conjunction with the CNN to enhance system accuracy, adopting a multi-round iteration approach to further refine detection accuracy. Experimental findings demonstrate an impressive system accuracy of 98.36%, and comparative analyses underscore the superior detection rate of the SA-BO-CNN system.
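SMOTE counteracts class imbalance by synthesizing new minority-class samples as interpolations between a minority sample and one of its k nearest minority neighbours. A minimal numpy sketch of that idea (a hand-rolled illustration, not the imbalanced-learn API the authors may have used):

```python
import numpy as np

def smote_oversample(x_min: np.ndarray, n_new: int, k: int = 5,
                     seed: int = 0) -> np.ndarray:
    """Synthesize n_new minority samples: pick a random minority
    sample, pick one of its k nearest minority neighbours, and
    interpolate a random fraction of the way between them."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(x_min))
        d = np.linalg.norm(x_min - x_min[i], axis=1)  # distances
        neighbours = np.argsort(d)[1:k + 1]           # skip itself
        j = rng.choice(neighbours)
        gap = rng.random()                            # in [0, 1)
        out.append(x_min[i] + gap * (x_min[j] - x_min[i]))
    return np.array(out)

minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
synth = smote_oversample(minority, n_new=4)
```

Each synthetic point is a convex combination of two existing minority samples, so it stays inside the minority class's local region of feature space.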

8.
Front Comput Neurosci ; 18: 1418280, 2024.
Article in English | MEDLINE | ID: mdl-38988988

ABSTRACT

Neuroscience is a swiftly progressing discipline that aims to unravel the intricate workings of the human brain and mind. Brain tumors, ranging from non-cancerous to malignant forms, pose a significant diagnostic challenge due to the existence of more than 100 distinct types. Effective treatment hinges on precise, early detection and segmentation of these tumors. To address this, we introduce a cutting-edge deep-learning approach employing a binary convolutional neural network (BCNN). This method segments the 10 most prevalent brain tumor types, a significant improvement over current models restricted to segmenting only four types. Our methodology begins with acquiring MRI images, followed by a detailed preprocessing stage in which images undergo binary conversion using an adaptive thresholding method and morphological operations. This prepares the data for the next step, segmentation, which identifies the tumor type, classifies it by grade (Grade I to Grade IV), and differentiates it from healthy brain tissue. We also curated a unique dataset comprising 6,600 brain MRI images specifically for this study. The overall performance achieved by our proposed model is 99.36%, and its effectiveness is underscored by remarkable performance metrics: 99.40% accuracy, 99.32% precision, 99.45% recall, and a 99.28% F-measure in segmentation tasks.

9.
Heliyon ; 10(12): e33377, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39027444

ABSTRACT

Detecting crop diseases before they spread poses a significant challenge for farmers. While both deep learning (DL) and computer vision are valuable for image classification, DL necessitates larger datasets and more extensive training periods. To overcome the limitations of working with constrained datasets, this paper proposes an ensemble model that combines convolutional neural network (CNN)-based models as feature extractors with a random forest (RF) as the output classifier. Our method is built on popular CNN-based architectures such as VGG16, InceptionV3, Xception, and ResNet50. Traditionally, these architectures are used as one-way models, but in our approach they are connected in parallel to form a two-way configuration, enabling the extraction of more diverse features and reducing the risk of underfitting, particularly with limited datasets. To demonstrate the effectiveness of our ensemble approach, we train models on a grape leaf dataset divided into two subsets: original and modified. In the original set, background removal is applied to the images, while the modified set adds preprocessing techniques such as intensity averaging and bilateral filtering for noise reduction and image smoothing. Our findings reveal that ensemble models trained on modified images outperform those trained on the original dataset, with improvements of up to 5.6 % in accuracy, precision, and sensitivity, validating the effectiveness of our approach in enhancing disease-pattern recognition within limited datasets.
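The overall pattern described (two parallel feature extractors whose outputs are concatenated and fed to a random forest) can be sketched as follows. Random projections stand in for the CNN branches here purely for illustration; in the paper those would be, e.g., VGG16 and ResNet50 penultimate-layer outputs, and the toy data and dimensions are our assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
w_a = rng.normal(size=(8, 4))   # stand-in for CNN "branch A"
w_b = rng.normal(size=(8, 4))   # stand-in for CNN "branch B"

x = rng.normal(size=(60, 8))                  # 60 toy "images"
y = (x[:, 0] > 0).astype(int)                 # separable toy labels

# Two-way configuration: run both branches, concatenate features.
feats = np.hstack([x @ w_a, x @ w_b])         # shape (60, 8)

# Random forest as the output classifier on the fused features.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(feats, y)
acc = clf.score(feats, y)                     # training accuracy
```

The design choice is that the forest, not a softmax head, makes the final decision, which tends to be robust when the fused feature set is small relative to its dimensionality.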

10.
Water Res ; 261: 122027, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39018904

ABSTRACT

Depletion of dissolved oxygen (DO) is a significant driver of catastrophic biological events in freshwater lakes. Although predicting DO concentrations in lakes from high-frequency real-time data is an effective way to prevent hypoxic events, few related experimental studies have been made. In this study, a short-term prediction model was developed for DO concentrations in three problematic areas of China's Chaohu Lake. To predict the DO concentrations at these representative sites, which coincide with areas of abnormal biological die-offs, water quality indicators at the three sampling sites and hydrometeorological features were adopted as input variables. The monitoring data were collected every 4 h between 2020 and 2023 and split into training and test sets at a ratio of 8:2. A new AC-BiLSTM model coupling a convolutional neural network (CNN) and a bidirectional long short-term memory (BiLSTM) network with an attention mechanism (AM) was proposed to capture the discontinuous dynamics of DO concentrations in long time series. Compared with the BiLSTM and CNN-BiLSTM models, the AC-BiLSTM showed better performance on the MSE, MAE, and R2 evaluation criteria and a stronger ability to capture global dependency relationships. Although the prediction accuracy for hypoxic events was slightly worse, the general time-series characteristics of abrupt DO depletion were captured. Water temperature regularly affects DO concentrations through its periodic variations. The high correlation and universal importance of total nitrogen (TN) and total phosphorus (TP) to DO reveal that point-source pollution is a critical cause of DO depletion in the freshwater lake. The importance of turbidity (NTU) at the Zhong Miao Station indicates that the lake's self-purification capacity is affected by flow-rate changes brought by the tributaries.
Calculating linear correlations of variables in conjunction with a permutation variable importance analysis enhanced the interpretability of the proposed model's results. This study demonstrates that the AC-BiLSTM model can perform short-term prediction of lake DO concentrations and reveal the timing and magnitude of abrupt DO depletion.
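The three evaluation criteria used to rank AC-BiLSTM against the baselines are standard regression metrics. A minimal sketch of all three (the sample DO values are illustrative, not the Chaohu Lake data):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, MAE, and coefficient of determination R^2 = 1 - SSres/SStot."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    mae = float(np.mean(np.abs(err)))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    return mse, mae, r2

# Illustrative DO values in mg/L (not the study's data):
mse, mae, r2 = regression_metrics([8.0, 6.0, 4.0], [7.5, 6.5, 4.0])
```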


Subjects
Lakes; Neural Networks, Computer; Oxygen; Lakes/chemistry; Oxygen/analysis; China; Environmental Monitoring/methods; Water Quality
11.
J Mol Model ; 30(8): 264, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38995407

ABSTRACT

CONTEXT: Accurately predicting the plasma protein binding rate (PPBR) and oral bioavailability (OBA) helps reveal the absorption and distribution of drugs in the human body and aids subsequent drug design. Although machine learning models have achieved good prediction accuracy, they often fall short when dealing with data with irregular topological structures. METHODS: In view of this, this study proposes a pharmacokinetic parameter prediction framework based on graph convolutional networks (GCNs) to predict the PPBR and OBA of small-molecule drugs. In the framework, a GCN is first used to extract spatial feature information from the topological structure of drug molecules, in order to better learn node features and the association information between nodes. Then, based on the principle of drug similarity, the study computes similarities between small-molecule drugs, selects different thresholds to construct datasets, and establishes a prediction model centered on the GCN algorithm. The experimental results show that, compared with traditional machine learning models, the GCN-based model performs best on the PPBR and OBA datasets with an inter-molecular similarity threshold of 0.25, with MAEs of 0.155 and 0.167, respectively. In addition, to further improve accuracy, the GCN is combined with other algorithms; compared to a single GCN, the distribution of predicted values obtained by the combined model agrees closely with the true values. In summary, this work provides a new method for improving the rate of early drug screening.
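The core GCN operation the framework relies on propagates node features over the molecular graph. In the common Kipf-Welling formulation this is H' = σ(D^(-1/2)(A + I)D^(-1/2) H W). A numpy sketch of one such layer (a generic illustration of the technique, not the paper's exact architecture; the toy graph and weights are ours):

```python
import numpy as np

def gcn_layer(a: np.ndarray, h: np.ndarray, w: np.ndarray) -> np.ndarray:
    """One graph-convolution step: add self-loops, symmetrically
    normalize the adjacency, aggregate neighbour features, apply a
    linear map and ReLU."""
    a_hat = a + np.eye(len(a))                 # A + I (self-loops)
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ h @ w, 0.0)     # ReLU activation

# Toy molecular graph: 3 atoms in a chain, 2 input features,
# 4 hidden units (all illustrative).
a = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
h = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w = np.ones((2, 4))
h_next = gcn_layer(a, h, w)                    # shape (3, 4)
```

Stacking a few such layers lets each atom's representation absorb information from progressively larger neighbourhoods, which is what makes the representation sensitive to molecular topology.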


Subjects
Machine Learning; Humans; Algorithms; Pharmaceutical Preparations/chemistry; Pharmaceutical Preparations/metabolism; Neural Networks, Computer; Biological Availability; Protein Binding; Small Molecule Libraries/pharmacokinetics; Small Molecule Libraries/chemistry; Pharmacokinetics; Blood Proteins/metabolism
12.
Sensors (Basel) ; 24(14)2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39066156

ABSTRACT

Semi-supervised graph convolutional networks (SSGCNs) have been proven to be effective in hyperspectral image classification (HSIC). However, limited training data and spectral uncertainty restrict the classification performance, and the computational demands of a graph convolution network (GCN) present challenges for real-time applications. To overcome these issues, a dual-branch fusion of a GCN and convolutional neural network (DFGCN) is proposed for HSIC tasks. The GCN branch uses an adaptive multi-scale superpixel segmentation method to build fusion adjacency matrices at various scales, which improves the graph convolution efficiency and node representations. Additionally, a spectral feature enhancement module (SFEM) enhances the transmission of crucial channel information between the two graph convolutions. Meanwhile, the CNN branch uses a convolutional network with an attention mechanism to focus on detailed features of local areas. By combining the multi-scale superpixel features from the GCN branch and the local pixel features from the CNN branch, this method leverages complementary features to fully learn rich spatial-spectral information. Our experimental results demonstrate that the proposed method outperforms existing advanced approaches in terms of classification efficiency and accuracy across three benchmark data sets.

13.
Front Neuroergon ; 5: 1287794, 2024.
Article in English | MEDLINE | ID: mdl-38962279

ABSTRACT

Recent developments in deep learning techniques have attracted attention to the decoding and classification of electroencephalogram (EEG) signals. Despite several efforts to utilize different features in EEG signals, a significant research challenge is using time-dependent features in combination with local and global features. Several attempts have been made to remodel deep learning convolutional neural networks (CNNs) to capture time-dependency information, usually via handcrafted features, such as power ratios, or by splitting data into smaller windows tied to specific properties, such as a peak at 300 ms. These approaches partially solve the problem but simultaneously hinder CNNs' capability to learn from unknown information that might be present in the data. Other approaches, like recurrent neural networks, are well suited to learning time-dependent information from EEG signals in the presence of unrelated sequential data. To solve this, we propose an encoding kernel (EnK), a novel time-encoding approach that introduces time-decomposition information during the vertical convolution operation in CNNs. The encoded information lets CNNs learn time-dependent features in addition to local and global features. We performed extensive experiments on several EEG data sets: physical human-robot collaborations, P300 visual-evoked potentials, motor imagery, movement-related cortical potentials, and the Dataset for Emotion Analysis Using Physiological Signals. The EnK outperforms the state of the art with up to a 6.5% reduction in mean squared error (MSE) and a 9.5% improvement in F1-scores, averaged over all data sets, compared to base models. These results support our approach and show high potential to improve performance on physiological and non-physiological data. Moreover, the EnK can be applied to virtually any deep learning architecture with minimal effort.

14.
Phys Eng Sci Med ; 47(3): 1037-1050, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38862778

ABSTRACT

Alzheimer's disease (AD) is a progressive and incurable neurological disorder with a rising mortality rate, worsened by error-prone, time-intensive, and expensive clinical diagnosis methods. Automatic AD detection methods using hand-crafted electroencephalogram (EEG) signal features lack accuracy and reliability. A lightweight convolutional neural network for AD detection (LCADNet) is investigated to extract disease-specific features while reducing detection time. The LCADNet uses two convolutional layers to extract complex EEG features, two fully connected layers to select disease-specific features, and a softmax layer to predict the AD detection probability. A max-pooling layer interlaced between the convolutional layers decreases the time-domain redundancy in the EEG signal. The efficiency of the LCADNet is compared with four pre-trained models using transfer learning on a publicly available AD detection dataset. The LCADNet shows the lowest computational complexity, in terms of both the number of floating-point operations and inference time, and the highest classification performance across six measures. The generalization of the LCADNet is assessed by cross-testing it on two other publicly available AD detection datasets. It outperforms existing EEG-based AD detection methods with an accuracy of 98.50%. The LCADNet may be a valuable aid for neurologists; its Python implementation can be found at github.com/SandeepSangle12/LCADNet.git.


Subjects
Alzheimer Disease; Electroencephalography; Neural Networks, Computer; Alzheimer Disease/diagnosis; Alzheimer Disease/diagnostic imaging; Humans; Signal Processing, Computer-Assisted; Algorithms
15.
Sensors (Basel) ; 24(11)2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38894371

ABSTRACT

The rich spatial and angular information in light field images enables accurate depth estimation, a crucial aspect of environmental perception. However, the abundance of light field information also leads to high computational costs and memory pressure. Selectively pruning some light field information can significantly improve computational efficiency, but at the expense of reduced depth estimation accuracy in the pruned model, especially in low-texture regions and occluded areas where angular diversity is reduced. In this study, we propose a lightweight disparity estimation model that balances speed and accuracy and enhances depth estimation accuracy in textureless regions. We combined cost-matching methods based on absolute difference and correlation to construct cost volumes, improving both accuracy and robustness. Additionally, we developed a multi-scale disparity cost fusion architecture, employing 3D convolutions and a UNet-like structure to handle matching costs at different depth scales. This method effectively integrates information across scales, utilizing the UNet structure for efficient fusion and completion of cost volumes, thus yielding more precise depth maps. Extensive testing shows that our method achieves computational efficiency on par with the most efficient existing methods, yet with double the accuracy; it also achieves accuracy comparable to the current highest-accuracy methods with an order-of-magnitude improvement in computational performance.

16.
Math Biosci Eng ; 21(4): 5521-5535, 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38872546

ABSTRACT

Early diagnosis of abnormal electrocardiogram (ECG) signals can provide useful information for the prevention and detection of arrhythmia diseases. Due to the similarities between the Normal beat (N) and Supraventricular Premature Beat (S) categories and the imbalance of ECG categories, arrhythmia classification cannot achieve satisfactory results under the inter-patient assessment paradigm. In this paper, a multi-path parallel deep convolutional neural network is proposed for arrhythmia classification. Furthermore, a global average RR interval is introduced to address the similarity between the N and S categories, and a weighted loss function is developed to solve the imbalance problem using weights dynamically adjusted according to the proportion of each class in the input batch. The MIT-BIH arrhythmia dataset was used to validate the classification performance of the proposed method. Experimental results under both the intra-patient and inter-patient evaluation paradigms showed that the proposed method achieves better classification results than other methods: accuracy, average sensitivity, average precision, and average specificity under the intra-patient paradigm were 98.73%, 94.89%, 89.38%, and 98.24%, respectively, and under the inter-patient paradigm were 91.22%, 89.91%, 68.23%, and 95.23%, respectively.
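The dynamically weighted loss described above assigns each class a weight based on its share of the current batch. One simple reading of that idea is inverse-proportion weighting; the exact formula below (including the normalization) is our illustrative choice, not the paper's published one:

```python
import numpy as np

def batch_class_weights(labels: np.ndarray, n_classes: int) -> np.ndarray:
    """Per-class loss weights inversely proportional to each class's
    proportion of the current batch; absent classes get weight 0.
    Weights are rescaled so present classes average to 1."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    props = counts / counts.sum()
    weights = np.where(props > 0, 1.0 / np.maximum(props, 1e-12), 0.0)
    return weights / weights.sum() * np.count_nonzero(counts)

# Toy batch dominated by Normal beats (class 0):
batch = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2])
w = batch_class_weights(batch, n_classes=3)   # rarer classes weigh more
```

Recomputing the weights per batch, rather than once from the whole dataset, keeps the loss balanced even when batch composition drifts.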


Subjects
Algorithms; Arrhythmias, Cardiac; Electrocardiography; Neural Networks, Computer; Signal Processing, Computer-Assisted; Humans; Arrhythmias, Cardiac/classification; Arrhythmias, Cardiac/diagnosis; Arrhythmias, Cardiac/physiopathology; Electrocardiography/methods; Sensitivity and Specificity; Deep Learning; Reproducibility of Results; Databases, Factual
17.
Technol Health Care ; 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38943414

ABSTRACT

BACKGROUND: Brain variations are responsible for developmental impairments, including autism spectrum disorder (ASD). EEG signals efficiently detect neurological conditions by revealing crucial information about brain function abnormalities. OBJECTIVE: This study aims to utilize EEG data collected from both autistic and typically developing children to investigate the potential of a Graph Convolutional Neural Network (GCNN) in predicting ASD based on neurological abnormalities revealed through EEG signals. METHODS: In this study, EEG data were gathered from eight autistic children and eight typically developing children diagnosed using the Childhood Autism Rating Scale at the Central Institute of Psychiatry, Ranchi. EEG recording was done using a HydroCel GSN with 257 channels, and 71 channels with 10-10 international equivalents were utilized. Electrodes were divided into 12 brain regions. A GCNN was introduced for ASD prediction, preceded by autoregressive and spectral feature extraction. RESULTS: The anterior-frontal brain region, crucial for cognitive functions like emotion, memory, and social interaction, proved most predictive of ASD, achieving 87.07% accuracy. This underscores the suitability of the GCNN method for EEG-based ASD detection. CONCLUSION: The detailed dataset collected enhances understanding of the neurological basis of ASD, benefiting healthcare practitioners involved in ASD diagnosis.

18.
Comput Biol Med ; 178: 108727, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38897146

ABSTRACT

Electroencephalograph (EEG) brain-computer interfaces (BCI) have the potential to provide new paradigms for controlling computers and devices. The accuracy of brain-pattern classification in EEG BCI is directly affected by the quality of the features extracted from EEG signals. Currently, feature extraction relies heavily on prior knowledge to engineer features (for example, from specific frequency bands); better extraction of EEG features is therefore an important research direction. In this work, we propose an end-to-end deep neural network that automatically finds and combines features for motor imagery (MI) based EEG BCI with four or more imagery classes (multi-task). First, spectral-domain features of EEG signals are learned by compact convolutional neural network (CCNN) layers. Then, gated recurrent unit (GRU) layers automatically learn temporal patterns. Lastly, an attention mechanism dynamically combines the extracted spectral-temporal features across EEG channels, reducing redundancy. We test our method using BCI Competition IV-2a and a dataset we collected. The average classification accuracy on 4-class BCI Competition IV-2a was 85.1% ± 6.19%, comparable to recent work in the field and showing low variability among participants; the average classification accuracy on our 6-class data was 64.4% ± 8.35%. Our dynamic fusion of spectral-temporal features is end-to-end, has relatively few network parameters, and the experimental results show its effectiveness and potential.
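The attention step that combines features across EEG channels can be sketched as a softmax-weighted sum. This is a generic illustration under assumed shapes (one feature vector per channel, a single learned scoring vector), not the paper's exact attention mechanism.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def channel_attention(features, w):
    """features: (channels, d) spectral-temporal feature vectors;
    w: (d,) learned scoring vector. Returns the fused (d,) vector
    and the per-channel attention weights."""
    scores = features @ w        # one relevance score per channel
    alpha = softmax(scores)      # weights are non-negative and sum to 1
    return alpha @ features, alpha
```

Channels whose features score higher under `w` contribute more to the fused representation, which is how redundant channels get down-weighted.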


Subjects
Brain-Computer Interfaces , Electroencephalography , Neural Networks, Computer , Signal Processing, Computer-Assisted , Humans , Electroencephalography/methods , Imagination/physiology , Brain/physiology
19.
Sensors (Basel) ; 24(12)2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38931616

ABSTRACT

The latest survey results show an increase in road accidents involving pedestrians and cyclists. The causes are varied, and fault often lies with both parties. Equipping vehicles, especially autonomous vehicles, with frequency-modulated continuous-wave (FMCW) radar, dedicated algorithms for analyzing signals in the time-frequency domain, and deep-neural-network algorithms for recognizing objects in radar imagery can positively affect safety. This paper presents a method for recognizing and distinguishing a group of objects based on their radar signatures and a special convolutional neural network structure. The proposed approach is based on a database of radar signatures generated from pedestrian, cyclist, and car models in the MATLAB environment. The simulation results and successful tests provide a basis for applying the system in many sectors and areas of the economy. Innovative aspects of the work include the method of discriminating between multiple objects on a single radar signature, the dedicated architecture of the convolutional neural network, and the method of generating a custom input database.
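The time-frequency analysis mentioned above typically means computing a spectrogram of the radar beat signal before feeding it to the CNN. Below is a minimal NumPy sketch of a short-time Fourier transform magnitude map; the window length, hop size, and function name are illustrative assumptions, not details from the paper.

```python
import numpy as np

def stft_magnitude(signal, win_len=64, hop=32):
    """Magnitude spectrogram (frequency x time) of a 1-D radar beat signal."""
    window = np.hanning(win_len)
    frames = [signal[i:i + win_len] * window
              for i in range(0, len(signal) - win_len + 1, hop)]
    # rfft along each windowed frame, then transpose to (freq_bins, time_frames)
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T
```

Each column of the resulting map is one time slice; micro-Doppler patterns from limbs or wheels show up as characteristic frequency modulations over the columns, which is what the CNN learns to classify.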

20.
Med Biol Eng Comput ; 62(10): 3057-3071, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38760598

ABSTRACT

Skin cancer is among the leading causes of cancer-related deaths worldwide. Effective therapy depends on its early diagnosis through the precise classification of skin lesions, yet dermatologists may find it difficult and time-consuming to classify lesions accurately. Using transfer learning to boost the precision of skin cancer classification models is a promising strategy. In this work, we propose a hybrid model combining a CNN with transfer learning and a random forest classifier for skin cancer detection. To evaluate its efficacy, the proposed model was tested on two datasets of benign and malignant skin moles. It classifies images with an accuracy of up to 90.11%. The empirical results and analysis confirm the feasibility and effectiveness of the proposed model for skin cancer classification.
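The hybrid design described above, CNN-derived features fed to a random forest, can be sketched as follows. This is a toy illustration: the flatten-and-summarize `extract_features` stub stands in for a real pretrained CNN's penultimate-layer activations, and the synthetic mole images are assumptions for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(images):
    """Stand-in for the CNN feature extractor: summarizes each image with a
    few statistics; a real pipeline would use transfer-learned activations."""
    flat = images.reshape(len(images), -1)
    return np.stack([flat.mean(axis=1), flat.std(axis=1), flat.max(axis=1)],
                    axis=1)

# Synthetic stand-ins for benign / malignant mole images (8x8 grayscale).
rng = np.random.default_rng(0)
benign = rng.normal(0.2, 0.05, size=(40, 8, 8))
malignant = rng.normal(0.8, 0.05, size=(40, 8, 8))
X = extract_features(np.concatenate([benign, malignant]))
y = np.array([0] * 40 + [1] * 40)

# The random forest replaces the CNN's usual softmax classification head.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

The design choice here is that the forest is trained on fixed features, so only the small classifier head needs fitting on the (typically limited) labeled medical data.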


Subjects
Neural Networks, Computer , Skin Neoplasms , Humans , Skin Neoplasms/diagnosis , Machine Learning , Algorithms , Image Interpretation, Computer-Assisted/methods , Nevus/diagnosis