1.
Sci Rep ; 14(1): 23107, 2024 Oct 4.
Article in English | MEDLINE | ID: mdl-39367046

ABSTRACT

Identification of retinal diseases in automated screening methods, such as those used in clinical settings or computer-aided diagnosis, usually depends on the localization and segmentation of the Optic Disc (OD) and fovea. However, this task is difficult: these anatomical features have irregular spatial, texture, and shape characteristics, and the available datasets suffer from limited sample sizes and domain shifts caused by differing data distributions. This study proposes a novel Multiresolution Cascaded Attention U-Net (MCAU-Net) model that addresses these problems by optimally balancing receptive field size and computational efficiency. The MCAU-Net utilizes two skip connections to accurately localize and segment the OD and fovea in fundus images. We incorporated a Multiresolution Wavelet Pooling Module (MWPM) into the CNN at each stage of the U-Net input to compensate for spatial information loss. Additionally, we integrated a cascaded connection of spatial and channel attention as a skip connection in MCAU-Net to focus precisely on the target object and improve model convergence for segmenting and localizing the OD and fovea centers. The proposed model has a low parameter count of 0.8 million, improving computational efficiency and reducing the risk of overfitting. For OD segmentation, the MCAU-Net achieves high IoU values of 0.9771, 0.945, and 0.946 on the DRISHTI-GS, DRIONS-DB, and IDRiD datasets, respectively, outperforming previous results on all three. On the IDRiD dataset, the MCAU-Net locates the OD center with a Euclidean Distance (ED) of 16.90 pixels and the fovea center with an ED of 33.45 pixels, demonstrating its effectiveness in overcoming common limitations of state-of-the-art methods.
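The localization errors above are reported as the Euclidean Distance (ED) in pixels between a predicted and a ground-truth landmark center. A minimal sketch of that metric (function name and coordinates are illustrative, not from the paper):

```python
import math

def euclidean_distance(pred, true):
    """Euclidean Distance (ED) in pixels between a predicted and a
    ground-truth landmark center, each given as an (x, y) pair."""
    return math.hypot(pred[0] - true[0], pred[1] - true[1])

# Hypothetical example: a predicted OD center 3 px right and 4 px below the truth.
print(euclidean_distance((103, 204), (100, 200)))  # → 5.0
```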


Subject(s)
Fovea Centralis , Fundus Oculi , Optic Disk , Humans , Optic Disk/diagnostic imaging , Fovea Centralis/diagnostic imaging , Neural Networks, Computer , Algorithms , Image Processing, Computer-Assisted/methods
2.
Stud Health Technol Inform ; 316: 575-579, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176807

ABSTRACT

Developing novel predictive models with complex biomedical information is challenging due to various idiosyncrasies related to heterogeneity, standardization or sparseness of the data. We previously introduced a person-centric ontology to organize information about individual patients, and a representation learning framework to extract person-centric knowledge graphs (PKGs) and to train Graph Neural Networks (GNNs). In this paper, we propose a systematic approach to examine the results of GNN models trained with both structured and unstructured information from the MIMIC-III dataset. Through ablation studies on different clinical, demographic, and social data, we show the robustness of this approach in identifying predictive features in PKGs for the task of readmission prediction.
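The ablation studies described can be sketched as a leave-one-group-out loop over feature groups, where the drop in score after removing a group indicates how predictive it is. The groups and the scoring function below are hypothetical stand-ins, not the authors' GNN pipeline:

```python
def ablation_study(feature_groups, score_fn):
    """Score the full feature set, then re-score with each group removed;
    the drop relative to baseline measures that group's contribution."""
    all_features = [f for group in feature_groups.values() for f in group]
    baseline = score_fn(all_features)
    drops = {}
    for name, group in feature_groups.items():
        kept = [f for f in all_features if f not in group]
        drops[name] = baseline - score_fn(kept)
    return baseline, drops

# Hypothetical clinical/demographic/social groups and a toy score
# that simply counts retained features.
groups = {"clinical": ["hr", "bp"], "demographic": ["age"], "social": ["insurance"]}
baseline, drops = ablation_study(groups, lambda feats: len(feats) / 4)
print(drops)  # larger drop → more predictive group under the toy score
```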


Subject(s)
Neural Networks, Computer , Humans , Patient Readmission
3.
Sensors (Basel) ; 23(21)2023 Oct 24.
Article in English | MEDLINE | ID: mdl-37960385

ABSTRACT

The occurrence of tomato diseases has substantially reduced agricultural output and caused severe financial losses. The timely detection of diseases is crucial to effectively manage and mitigate the impact of outbreaks. Early disease detection can improve yield, reduce chemical use, and boost a nation's economy. A complete system for plant disease detection using EfficientNetV2B2 and deep learning (DL) is presented in this paper. This research aims to develop a precise and effective automated system for identifying the several diseases that affect tomato plants by analyzing photographs of tomato leaves. A dataset of high-resolution photographs of healthy and diseased tomato leaves was created to achieve this goal. The EfficientNetV2B2 model, which excels at image classification, is the foundation of the deep learning system. Transfer learning (TL) trains the model on the tomato leaf disease dataset, starting from EfficientNetV2B2's pre-trained weights and adding a 256-node dense layer. An appropriate loss function and optimization algorithm are used to train and tune the model. The model is then deployed to smartphone and web apps, with which users can accurately diagnose tomato leaf diseases. Such an automated system facilitates the rapid identification of diseases, assisting in informed disease-management decisions and promoting sustainable tomato cultivation practices. The 5-fold cross-validation method achieved 99.02% average weighted training accuracy, 99.22% average weighted validation accuracy, and 98.96% average weighted test accuracy. The hold-out split method achieved 99.93% training accuracy and 100% validation accuracy. Using the DL approach, tomato leaf disease identification achieves nearly 100% accuracy on a test dataset.
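The 5-fold figures above are averages across folds weighted by fold size. A minimal sketch of that aggregation (the fold sizes and per-fold accuracies below are made up for illustration):

```python
def weighted_average_accuracy(fold_sizes, fold_accuracies):
    """Average per-fold accuracies weighted by fold size, as commonly
    reported for k-fold cross-validation."""
    total = sum(fold_sizes)
    return sum(n * acc for n, acc in zip(fold_sizes, fold_accuracies)) / total

# Hypothetical 5 folds of a tomato-leaf test set with unequal sizes.
sizes = [300, 200, 200, 200, 100]
accs = [0.99, 0.98, 0.99, 0.99, 0.98]
print(round(weighted_average_accuracy(sizes, accs), 4))  # → 0.987
```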


Subject(s)
Artificial Intelligence , Solanum lycopersicum , Smartphone , Algorithms , Plant Leaves
4.
Biomedicines ; 11(6)2023 May 28.
Article in English | MEDLINE | ID: mdl-37371661

ABSTRACT

Diabetic retinopathy (DR) is the foremost cause of blindness in people with diabetes worldwide, and early diagnosis is essential for effective treatment. Unfortunately, the present DR screening method requires the skill of ophthalmologists and is time-consuming. In this study, we present an automated system for DR severity classification employing the fine-tuned Compact Convolutional Transformer (CCT) model to overcome these issues. We assembled five datasets to generate a more extensive dataset containing 53,185 raw images. Various image pre-processing techniques and 12 types of augmentation procedures were applied to improve image quality and create a massive dataset. A new DR-CCTNet model is proposed. It is a modification of the original CCT model to address training time concerns and work with a large amount of data. Our proposed model delivers excellent accuracy even with low-pixel images and still has strong performance with fewer images, indicating that the model is robust. We compare our model's performance with transfer learning models such as VGG19, VGG16, MobileNetV2, and ResNet50. The test accuracy of the VGG19, ResNet50, VGG16, and MobileNetV2 were, respectively, 72.88%, 76.67%, 73.22%, and 71.98%. Our proposed DR-CCTNet model to classify DR outperformed all of these with a 90.17% test accuracy. This approach provides a novel and efficient method for the detection of DR, which may lower the burden on ophthalmologists and expedite treatment for patients.

5.
BMC Bioinformatics ; 24(1): 227, 2023 Jun 02.
Article in English | MEDLINE | ID: mdl-37268890

ABSTRACT

BACKGROUND: Entity normalization is an important information extraction task which has recently gained attention, particularly in the clinical/biomedical and life science domains. State-of-the-art methods perform rather well on popular benchmarks. Yet, we argue that the task is far from resolved. RESULTS: We selected two gold standard corpora and two state-of-the-art methods to highlight some evaluation biases. We present non-exhaustive initial findings on the existence of evaluation problems in the entity normalization task. CONCLUSIONS: Our analysis suggests better evaluation practices to support methodological research in this field.
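One evaluation bias of the kind the authors discuss is that frequent, easy mentions can inflate mention-level accuracy. A hypothetical sketch contrasting mention-level accuracy with accuracy over distinct (prediction, gold) pairs; the concept IDs are invented:

```python
def accuracy(pairs):
    """Fraction of (predicted_id, gold_id) pairs that match exactly."""
    pairs = list(pairs)
    return sum(p == g for p, g in pairs) / len(pairs)

# Hypothetical predictions: one frequent, easy mention is normalized
# correctly eight times, while two hard mentions are both wrong.
pairs = [("C01", "C01")] * 8 + [("C02", "C03"), ("C04", "C05")]
mention_level = accuracy(pairs)       # counts every occurrence
distinct_level = accuracy(set(pairs)) # one vote per distinct pair
print(mention_level, distinct_level)  # the gap exposes the bias
```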


Subject(s)
Biological Science Disciplines , Information Storage and Retrieval , Research Design , Bias , Natural Language Processing
6.
Sensors (Basel) ; 23(5)2023 Feb 23.
Article in English | MEDLINE | ID: mdl-36904678

ABSTRACT

Sleep posture has a crucial impact on the incidence and severity of obstructive sleep apnea (OSA). Therefore, the surveillance and recognition of sleep postures could facilitate the assessment of OSA. The existing contact-based systems might interfere with sleeping, while camera-based systems introduce privacy concerns. Radar-based systems might overcome these challenges, especially when individuals are covered with blankets. The aim of this research is to develop a nonobstructive multiple ultra-wideband radar sleep posture recognition system based on machine learning models. We evaluated three single-radar configurations (top, side, and head), three dual-radar configurations (top + side, top + head, and side + head), and one tri-radar configuration (top + side + head), in addition to machine learning models, including CNN-based networks (ResNet50, DenseNet121, and EfficientNetV2) and vision transformer-based networks (traditional vision transformer and Swin Transformer V2). Thirty participants (n = 30) were invited to perform four recumbent postures (supine, left side-lying, right side-lying, and prone). Data from eighteen participants were randomly chosen for model training, another six participants' data (n = 6) for model validation, and the remaining six participants' data (n = 6) for model testing. The Swin Transformer with side and head radar configuration achieved the highest prediction accuracy (0.808). Future research may consider the application of the synthetic aperture radar technique.
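The participant-wise split above (18 train / 6 validation / 6 test) keeps all recordings from one person in a single partition, avoiding identity leakage between sets. A minimal sketch (the seed and ID range are arbitrary):

```python
import random

def split_participants(ids, n_train=18, n_val=6, seed=0):
    """Shuffle participant IDs and partition them so that no participant
    appears in more than one of train/validation/test."""
    ids = list(ids)
    random.Random(seed).shuffle(ids)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_participants(range(30))
print(len(train), len(val), len(test))  # → 18 6 6
```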


Subject(s)
Radar , Sleep Apnea, Obstructive , Humans , Posture , Machine Learning , Sleep
7.
Front Med (Lausanne) ; 9: 924979, 2022.
Article in English | MEDLINE | ID: mdl-36052321

ABSTRACT

Interpretation of medical images with a computer-aided diagnosis (CAD) system is arduous because of the complex structure of cancerous lesions in different imaging modalities, the high degree of resemblance between classes, dissimilar characteristics within classes, the scarcity of medical data, and the presence of artifacts and noise. In this study, these challenges are addressed by developing a shallow convolutional neural network (CNN) model whose configuration is optimized through an ablation study altering the layer structure and hyper-parameters, combined with a suitable augmentation technique. Eight medical datasets with different modalities are investigated, on which the proposed model, named MNet-10, yields strong performance across all datasets at low computational complexity. The impact of photometric and geometric augmentation techniques on the different datasets is also evaluated. We selected the mammogram dataset for the ablation study, as it is one of the most challenging imaging modalities. Before building the model, the dataset is augmented using the two approaches. A base CNN model is constructed first and applied to both the augmented and non-augmented mammogram datasets, where the highest accuracy is obtained with the photometric dataset. Therefore, the architecture and hyper-parameters of the model are determined by performing an ablation study on the base model using the photometrically augmented mammogram dataset. Afterward, the robustness of the network and the impact of the different augmentation techniques are assessed by training the model on the remaining seven datasets.
We obtain a test accuracy of 97.34% on the mammogram, 98.43% on the skin cancer, 99.54% on the brain tumor magnetic resonance imaging (MRI), 97.29% on the COVID chest X-ray, 96.31% on the tympanic membrane, 99.82% on the chest computed tomography (CT) scan, and 98.75% on the breast cancer ultrasound datasets by photometric augmentation and 96.76% on the breast cancer microscopic biopsy dataset by geometric augmentation. Moreover, some elastic deformation augmentation methods are explored with the proposed model using all the datasets to evaluate their effectiveness. Finally, VGG16, InceptionV3, and ResNet50 were trained on the best-performing augmented datasets, and their performance consistency was compared with that of the MNet-10 model. The findings may aid future researchers in medical data analysis involving ablation studies and augmentation techniques.
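The photometric/geometric distinction above can be illustrated on a toy grayscale image: photometric transforms change pixel intensities while leaving layout alone, and geometric transforms change spatial layout while leaving intensities alone. The two functions below are illustrative stand-ins, not the paper's augmentation pipeline:

```python
def brighten(image, delta):
    """Photometric augmentation: shift every intensity, clipped to [0, 255]."""
    return [[min(255, max(0, px + delta)) for px in row] for row in image]

def hflip(image):
    """Geometric augmentation: mirror each row left-to-right."""
    return [list(reversed(row)) for row in image]

img = [[10, 20], [30, 40]]
print(brighten(img, 50))  # → [[60, 70], [80, 90]]
print(hflip(img))         # → [[20, 10], [40, 30]]
```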

8.
Front Hum Neurosci ; 16: 930291, 2022.
Article in English | MEDLINE | ID: mdl-35880106

ABSTRACT

We consider the problem of extracting features from passive, multi-channel electroencephalogram (EEG) devices for downstream inference tasks related to high-level mental states such as stress and cognitive load. Our proposed feature extraction method uses recently developed spectral-based multi-graph tools and applies them to the time series of graphs implied by the statistical dependence structure (e.g., correlation) amongst the multiple sensors. We study the features in the context of two datasets each consisting of at least 30 participants and recorded using multi-channel EEG systems. We compare the classification performance of a classifier trained on the proposed features to a classifier trained on the traditional band power-based features in three settings and find that the two feature sets offer complementary predictive information. We conclude by showing that the importance of particular channels and pairs of channels for classification when using the proposed features is neuroscientifically valid.
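The graph-per-window construction described can be sketched as computing a channel-by-channel correlation matrix over each time window and keeping the upper-triangle entries as edge weights, one per sensor pair. A toy pure-Python version (window length and channel count are illustrative, and the spectral multi-graph tools from the paper are omitted):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def window_graph_features(channels):
    """Upper-triangle correlations: one edge weight per sensor pair."""
    k = len(channels)
    return [pearson(channels[i], channels[j])
            for i in range(k) for j in range(i + 1, k)]

# Toy 3-channel window: ch1 is ch0 negated, ch2 is ch0 scaled.
ch0 = [1.0, 2.0, 3.0, 4.0]
feats = window_graph_features([ch0, [-v for v in ch0], [2 * v for v in ch0]])
print(feats)  # three pairwise edge weights, ≈ [-1, 1, -1]
```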

9.
Biology (Basel) ; 10(12)2021 Dec 17.
Article in English | MEDLINE | ID: mdl-34943262

ABSTRACT

BACKGROUND: Identification and treatment of breast cancer at an early stage can reduce mortality. Currently, mammography is the most widely used effective imaging technique in breast cancer detection. However, an erroneous mammogram-based interpretation may lead to a false diagnosis, as distinguishing cancerous masses from adjacent tissue is often complex and error-prone. METHODS: Six pre-trained and fine-tuned deep CNN architectures, VGG16, VGG19, MobileNetV2, ResNet50, DenseNet201, and InceptionV3, are evaluated to determine which model yields the best performance. We propose a BreastNet18 model using VGG16 as its foundation, since VGG16 performs with the highest accuracy. An ablation study is performed on BreastNet18 to evaluate its robustness and achieve the highest possible accuracy. Various image processing techniques with suitable parameter values are employed to remove artefacts and increase image quality. A dataset of 1442 preprocessed mammograms was augmented using seven augmentation techniques, resulting in a dataset of 11,536 images. To investigate possible overfitting issues, k-fold cross-validation is carried out. The model was then tested on noisy mammograms to evaluate its robustness. Results were compared with previous studies. RESULTS: The proposed BreastNet18 model performed best, with a training accuracy of 96.72%, a validation accuracy of 97.91%, and a test accuracy of 98.02%. In contrast, VGG19 yielded a test accuracy of 96.24%, MobileNetV2 77.84%, ResNet50 79.98%, DenseNet201 86.92%, and InceptionV3 76.87%. CONCLUSIONS: Our proposed approach based on image processing, transfer learning, fine-tuning, and an ablation study demonstrated high breast cancer classification accuracy while dealing with a limited number of complex medical images.
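The dataset arithmetic above is consistent: each of the 1442 preprocessed mammograms plus its seven augmented variants yields eight images per original. A one-line sanity check:

```python
def augmented_count(n_images, n_techniques):
    """Originals plus one new image per augmentation technique."""
    return n_images * (1 + n_techniques)

print(augmented_count(1442, 7))  # → 11536
```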
