Results 1 - 10 of 10
1.
Sci Rep ; 14(1): 16022, 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38992069

ABSTRACT

Crop diseases can significantly affect various aspects of crop cultivation, including crop yield, quality, production costs, and crop loss. Modern technologies such as image analysis via machine learning techniques enable early and precise detection of crop diseases, empowering farmers to effectively manage and avoid their occurrence. The proposed methodology uses a modified MobileNetV3Large model deployed on an edge device for real-time monitoring of grape leaf disease while reducing computational memory demands and ensuring satisfactory classification performance. To enhance the applicability of MobileNetV3Large, custom layers consisting of two dense layers, each followed by a dropout layer, were added; this helped mitigate overfitting and kept the model efficient. Comparisons with other models showed that the proposed model outperformed them, with average training and test accuracies of 99.66% and 99.42% and precision, recall, and F1 score of approximately 99.42%. The model was deployed on an edge device (Nvidia Jetson Nano) using a custom-developed GUI app and produced predictions from both saved and real-time data with high confidence values. Grad-CAM visualization was used to identify and highlight, with high accuracy, the image areas that influence the convolutional neural network (CNN) classification decision-making process. This research contributes to the development of plant disease classification technologies for edge devices, which have the potential to enhance autonomous farming and to help farmers, agronomists, and researchers monitor and mitigate plant diseases efficiently and effectively, with a positive impact on global food security.
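As an illustration of the architecture described above, the following is a minimal Keras sketch of a MobileNetV3Large backbone with a custom head of two dense layers, each followed by dropout; the layer widths, dropout rates, and number of grape-leaf disease classes are assumptions, since the abstract does not specify them.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_grape_leaf_classifier(num_classes=4, input_shape=(224, 224, 3)):
    # Pretrained backbone without the original classification head.
    base = tf.keras.applications.MobileNetV3Large(
        include_top=False, weights="imagenet", input_shape=input_shape, pooling="avg"
    )
    base.trainable = False  # freezing the backbone is an assumption, not stated in the abstract

    x = layers.Dense(256, activation="relu")(base.output)  # first custom dense layer
    x = layers.Dropout(0.3)(x)                             # dropout to curb overfitting
    x = layers.Dense(128, activation="relu")(x)            # second custom dense layer
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(base.input, outputs)

model = build_grape_leaf_classifier()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```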


Subjects
Agriculture, Neural Networks (Computer), Plant Diseases, Plant Leaves, Vitis, Agriculture/methods, Crops (Agricultural)/growth & development, Image Processing (Computer-Assisted)/methods, Machine Learning
2.
Sci Rep ; 13(1): 20063, 2023 11 16.
Article in English | MEDLINE | ID: mdl-37973820

ABSTRACT

The COVID-19 disease caused by the coronavirus is constantly changing due to the emergence of different variants, and thousands of people are dying every day worldwide. Early detection of this new form of pulmonary disease can reduce the mortality rate. In this paper, an automated method based on machine learning (ML) and deep learning (DL) has been developed to detect COVID-19 using computed tomography (CT) scan images extracted from three publicly available datasets (a total of 11,407 images; 7397 COVID-19 images and 4010 normal images). An unsupervised clustering approach, a modified region-based clustering technique, has been proposed for segmenting the COVID-19 CT scan images. Furthermore, the contourlet transform and a convolutional neural network (CNN) have been employed to extract features individually from the segmented CT scan images and to fuse them into one feature vector. A binary differential evolution (BDE) approach has been employed as a feature optimization technique to obtain a comprehensible feature subset from the fused feature vector. Finally, an ML/DL-based ensemble classifier using a bagging technique has been employed to detect COVID-19 from the CT images. Fivefold and generalization cross-validation techniques have been used for validation. Classification experiments have also been conducted with several pre-trained models (AlexNet, ResNet50, GoogleNet, VGG16, VGG19), and the ensemble classifier with fused features has provided state-of-the-art performance with an accuracy of 99.98%.
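A minimal sketch of the fused-feature bagging idea is given below; it is not the authors' exact pipeline, and the feature arrays and estimator choices are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def fuse_features(contourlet_feats, cnn_feats):
    # Concatenate handcrafted (contourlet) and learned (CNN) features per image.
    return np.concatenate([contourlet_feats, cnn_feats], axis=1)

# X_contourlet: (n_images, d1), X_cnn: (n_images, d2), y: 0 = normal, 1 = COVID-19.
# X = fuse_features(X_contourlet, X_cnn)
# A BDE step would then select a subset of the fused columns before classification.
clf = BaggingClassifier(estimator=DecisionTreeClassifier(), n_estimators=50, random_state=0)
# scores = cross_val_score(clf, X, y, cv=5)  # fivefold validation, as in the abstract
```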


Subjects
COVID-19, Humans, COVID-19/diagnostic imaging, Cluster Analysis, Generalization (Psychological), Neural Networks (Computer), Tomography, X-Ray Computed
3.
Comput Med Imaging Graph ; 110: 102313, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38011781

ABSTRACT

Brain tumors have become a severe medical complication in recent years due to their high fatality rate. Radiologists segment tumors manually, which is time-consuming, error-prone, and expensive. In recent years, automated segmentation based on deep learning has demonstrated promising results in solving computer vision problems such as image classification and segmentation. Brain tumor segmentation has recently become a prevalent task in medical imaging, determining tumor location, size, and shape using automated methods. Many researchers have worked on various machine and deep learning approaches to determine an optimal solution using convolutional methodologies. In this review paper, we discuss the most effective segmentation techniques based on datasets that are widely used and publicly available. We also present a survey of federated learning methodologies that can enhance global segmentation performance while ensuring privacy. A comprehensive literature review of more than 100 papers is provided to summarize the most recent techniques in segmentation and multi-modality information. Finally, we concentrate on unsolved problems in brain tumor segmentation and on a client-based federated model training strategy. Based on this review, future researchers will understand the most promising paths for solving these issues.
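For readers unfamiliar with client-based federated training, the sketch below shows a generic FedAvg-style weight aggregation step; it illustrates the general pattern only and is not a method proposed in the review.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of per-client model weights by local dataset size."""
    total = sum(client_sizes)
    avg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            avg[i] += (n / total) * w
    return avg

# Each round, clients train a segmentation model locally on private MRI data and send
# only their weights; the server aggregates them, so raw patient images never leave the site.
```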


Subjects
Brain Neoplasms, Deep Learning, Humans, Neural Networks (Computer), Magnetic Resonance Imaging/methods, Image Processing (Computer-Assisted)/methods, Brain Neoplasms/diagnostic imaging
4.
Expert Syst Appl ; 229: 120528, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37274610

ABSTRACT

Numerous epidemic lung diseases such as COVID-19, tuberculosis (TB), and pneumonia have spread across the world, killing millions of people. Medical specialists have experienced challenges in correctly identifying these diseases due to their subtle differences in chest X-ray (CXR) images. To assist medical experts, this study proposed a computer-aided lung illness identification method based on CXR images. For the first time, 17 different forms of lung disorders were considered, and the study was divided into six trials containing two, two, three, four, fourteen, and seventeen forms of lung disorders, respectively. The proposed framework, named CNN-ELM, combined the robust feature extraction capabilities of a lightweight parallel convolutional neural network (CNN) with the classification abilities of the extreme learning machine algorithm. A promising accuracy of 90.92% and an area under the curve (AUC) of 96.93% were achieved when the 17 classes were classified side by side. The framework also accurately identified COVID-19 and TB with 99.37% and 99.98% accuracy, respectively, in 0.996 microseconds for a single image. Additionally, the results demonstrated that the framework could outperform the existing state-of-the-art (SOTA) models. A secondary conclusion drawn from this study was that the proposed framework retained its effectiveness over a range of real-world conditions, including balanced or unbalanced and large or small datasets, large multiclass or simple binary classification, and high- or low-resolution images. A prototype Android app was also developed to establish the potential of the framework for real-life deployment.
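The extreme learning machine used for classification in the CNN-ELM framework can be summarized by the following minimal NumPy sketch operating on CNN feature vectors; the hidden-layer size and activation are assumptions.

```python
import numpy as np

class SimpleELM:
    """Random hidden projection plus a closed-form (pseudo-inverse) output layer."""
    def __init__(self, n_hidden=1000, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y_onehot):
        d = X.shape[1]
        self.W = self.rng.standard_normal((d, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)          # random hidden-layer activations
        self.beta = np.linalg.pinv(H) @ y_onehot  # output weights solved in closed form
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

# X_train: CNN features extracted from CXR images; y_onehot: one-hot disease labels.
```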

5.
Sensors (Basel) ; 23(9)2023 May 03.
Article in English | MEDLINE | ID: mdl-37177662

ABSTRACT

Rapid identification of COVID-19 can assist in making decisions for effective treatment and epidemic prevention. The PCR-based test is expert-dependent, time-consuming, and of limited sensitivity. By inspecting chest X-ray (CXR) images, COVID-19, pneumonia, and other lung infections can be detected in real time. The current state-of-the-art literature suggests that deep learning (DL) is highly advantageous for automatic disease classification using CXR images. The goal of this study is to employ DL models to identify COVID-19 and other lung disorders more efficiently. For this study, a dataset of 18,564 CXR images covering seven disease categories was created from multiple publicly available sources. Four DL architectures, including the proposed CNN model and the pretrained VGG-16, VGG-19, and Inception-v3 models, were applied to identify healthy cases and six lung diseases (fibrosis, lung opacity, viral pneumonia, bacterial pneumonia, COVID-19, and tuberculosis). Accuracy, precision, recall, F1 score, area under the curve (AUC), and testing time were used to evaluate the performance of these four models. The results demonstrated that the proposed CNN model outperformed all other DL models on the seven-class classification, with an accuracy of 93.15% and average precision, recall, F1-score, and AUC values of 0.9343, 0.9443, 0.9386, and 0.9939, respectively. The CNN model performed equally well when other multiclass classifications including normal and COVID-19 as the common classes were considered, yielding accuracies of 98%, 97.49%, 97.81%, 96%, and 96.75% for two, three, four, five, and six classes, respectively. The proposed model can also identify COVID-19 with shorter training and testing times than other transfer learning models.
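A short sketch of the multi-metric evaluation described above (accuracy, precision, recall, F1 score, AUC) for a seven-class classifier is shown below; the macro averaging and one-vs-rest AUC choices are assumptions.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate(y_true, y_pred, y_prob):
    """Compute the evaluation metrics for a multiclass CXR classifier.

    y_true: integer class labels, y_pred: predicted labels,
    y_prob: per-class probabilities of shape (n_samples, n_classes).
    """
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall": recall_score(y_true, y_pred, average="macro"),
        "f1": f1_score(y_true, y_pred, average="macro"),
        # One-vs-rest AUC averaged over the seven classes.
        "auc": roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro"),
    }
```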


Subjects
COVID-19, Pneumonia (Viral), Humans, COVID-19/diagnosis, Pneumonia (Viral)/diagnostic imaging, Area Under Curve, Decision Making, Machine Learning
6.
Biocybern Biomed Eng ; 2023 Jun 26.
Article in English | MEDLINE | ID: mdl-38620111

ABSTRACT

Around the world, several lung diseases such as pneumonia, cardiomegaly, and tuberculosis (TB) contribute to severe illness, hospitalization, and even death, particularly for elderly and medically vulnerable patients. In the last few decades, several new types of lung-related diseases have taken the lives of millions of people, and COVID-19 alone has taken almost 6.27 million lives. To fight against lung diseases, timely and correct diagnosis with appropriate treatment is crucial in the current COVID-19 pandemic. In this study, an intelligent recognition system for seven lung diseases has been proposed based on machine learning (ML) techniques to aid medical experts. Chest X-ray (CXR) images of lung diseases were collected from several publicly available databases. A lightweight convolutional neural network (CNN) has been used to extract characteristic features from the raw pixel values of the CXR images, and the best feature subset has been identified using the Pearson correlation coefficient (PCC). Finally, an extreme learning machine (ELM) has been used to perform the classification task, enabling faster learning and reduced computational complexity. The proposed CNN-PCC-ELM model achieved an accuracy of 96.22% with an area under the curve (AUC) of 99.48% for the eight-class classification. The proposed model outperformed the existing state-of-the-art (SOTA) models for COVID-19, pneumonia, and tuberculosis detection in both binary and multiclass classification. For the eight-class classification, the model achieved precision, recall, F1-score, and ROC values of 100%, 99%, 100%, and 99.99%, respectively, for COVID-19 detection, demonstrating its robustness. The proposed model therefore surpasses existing pioneering models in accurately differentiating COVID-19 from other lung diseases, which can assist physicians in treating patients effectively.
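The PCC-based feature selection step can be illustrated with the following sketch, which ranks CNN features by their absolute Pearson correlation with a numeric label encoding; the top_k cutoff and the use of a single numeric label are simplifying assumptions and may differ from the paper's exact criterion.

```python
import numpy as np

def select_by_pcc(X, y, top_k=500):
    """Rank features by |Pearson correlation| with the label and keep the top_k.

    X: (n_samples, n_features) CNN feature matrix; y: numeric class labels.
    """
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12
    pcc = np.abs(Xc.T @ yc) / denom          # one correlation score per feature
    idx = np.argsort(pcc)[::-1][:top_k]      # indices of the most correlated features
    return X[:, idx], idx

# X_selected, kept_idx = select_by_pcc(cnn_features, labels)
# X_selected is then passed to the ELM classifier.
```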

7.
Sensors (Basel) ; 22(19)2022 Sep 25.
Article in English | MEDLINE | ID: mdl-36236367

ABSTRACT

Diabetes is a chronic disease and a primary worldwide health concern, affecting the health of entire populations. Over the years, many academics have attempted to develop a reliable diabetes prediction model using machine learning (ML) algorithms. However, these research investigations have had minimal impact on clinical practice, as current studies focus mainly on improving the performance of complicated ML models while ignoring their explainability in clinical situations. As a result, physicians find these models difficult to understand and rarely trust them for clinical use. In this study, a carefully constructed, efficient, and interpretable diabetes detection method using explainable AI has been proposed. The Pima Indian diabetes dataset was used, containing a total of 768 instances (268 diabetic and 500 non-diabetic) with several diabetes-related attributes. Six machine learning algorithms (artificial neural network (ANN), random forest (RF), support vector machine (SVM), logistic regression (LR), AdaBoost, and XGBoost) were used along with an ensemble classifier to diagnose diabetes. For each machine learning model, global and local explanations were produced using Shapley additive explanations (SHAP), represented in different types of plots to help physicians understand the model predictions. Median values were used for the imputation of missing values, and the synthetic minority oversampling technique combined with Tomek links (SMOTETomek) was used to balance the classes of the dataset. The balanced accuracy of the developed weighted ensemble model was 90% with an F1 score of 89% using five-fold cross-validation (CV). The proposed approach can improve the clinical understanding of a diabetes diagnosis and help in taking the necessary action at the very early stages of the disease.
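A hedged sketch of this kind of pipeline (median imputation, SMOTETomek balancing, a weighted soft-voting ensemble, and SHAP explanations) is shown below; the component models, weights, and the model-agnostic SHAP explainer are assumptions rather than the authors' exact configuration.

```python
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier
from imblearn.combine import SMOTETomek
import shap

# X, y: Pima Indian diabetes features/labels, with physiologically impossible zeros
# treated as missing values before imputation.
# X_imp = SimpleImputer(strategy="median").fit_transform(X)
# X_bal, y_bal = SMOTETomek(random_state=0).fit_resample(X_imp, y)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier()),
                ("lr", LogisticRegression(max_iter=1000)),
                ("xgb", XGBClassifier(eval_metric="logloss"))],
    voting="soft",
    weights=[2, 1, 2],  # illustrative weights only
)
# ensemble.fit(X_bal, y_bal)
# explainer = shap.Explainer(ensemble.predict_proba, X_bal)  # model-agnostic SHAP
# shap_values = explainer(X_bal[:100])  # local explanations for the first 100 samples
```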


Subjects
Diabetes Mellitus, Potassium Iodide, Diabetes Mellitus/diagnosis, Humans, Logistic Models, Machine Learning, Neural Networks (Computer)
8.
Sensors (Basel) ; 22(12)2022 Jun 08.
Article in English | MEDLINE | ID: mdl-35746136

ABSTRACT

Malaria is a life-threatening disease transmitted by the bites of female Anopheles mosquitoes. Plasmodium parasites spread through the victim's blood cells and can put their life in a critical condition. If not treated at an early stage, malaria can even cause death. Microscopy is the familiar process for diagnosing malaria, in which the victim's blood samples are collected and the parasites and red blood cells are counted. However, the microscopy process is time-consuming and can produce erroneous results in some cases. With the recent success of machine learning and deep learning in medical diagnosis, it is quite possible to minimize diagnosis costs and improve overall detection accuracy compared with the traditional microscopy method. This paper proposes a multi-headed attention-based transformer model to diagnose the malaria parasite from blood cell images. To demonstrate the effectiveness of the proposed model, the gradient-weighted class activation mapping (Grad-CAM) technique was implemented to identify which parts of an image the proposed model paid the most attention to, by generating a heatmap image. The proposed model achieved a testing accuracy, precision, recall, F1-score, and AUC score of 96.41%, 96.99%, 95.88%, 96.44%, and 99.11%, respectively, for the original malaria parasite dataset and 99.25%, 99.08%, 99.42%, 99.25%, and 99.99%, respectively, for the modified dataset. Various hyperparameters were also fine-tuned to obtain optimum results, which were compared with state-of-the-art (SOTA) methods for malaria parasite detection; the proposed method outperformed the existing methods.
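The core of a multi-headed attention model can be illustrated by the generic transformer encoder block below, written in Keras; the number of heads, dimensions, and dropout rate are assumptions, and this is not the authors' exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

def transformer_block(x, num_heads=8, key_dim=64, mlp_dim=256, dropout=0.1):
    """One encoder block: multi-head self-attention over patch embeddings, then an MLP."""
    attn = layers.MultiHeadAttention(num_heads=num_heads, key_dim=key_dim)(x, x)
    x = layers.LayerNormalization()(x + layers.Dropout(dropout)(attn))  # residual + norm
    mlp = layers.Dense(mlp_dim, activation="gelu")(x)
    mlp = layers.Dense(x.shape[-1])(mlp)
    return layers.LayerNormalization()(x + layers.Dropout(dropout)(mlp))

# Example usage with hypothetical patch embeddings of a blood-cell image:
# inputs = layers.Input(shape=(196, 256))   # 196 patches, 256-d embeddings (assumed)
# encoded = transformer_block(inputs)
```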


Subjects
Deep Learning, Malaria, Parasites, Plasmodium, Animals, Erythrocytes/parasitology, Female, Malaria/diagnosis, Malaria/parasitology
9.
Comput Biol Med ; 146: 105602, 2022 07.
Article in English | MEDLINE | ID: mdl-35569335

ABSTRACT

Diabetic retinopathy (DR) is a major eye complication among diabetic patients. Early detection of DR can save many patients from permanent blindness. Various artificial intelligence-based systems have been proposed, and they outperform human analysis in accurately detecting DR. In most traditional deep learning models, cross-entropy is used as the loss function in a single-stage, end-to-end training method. However, it has recently been identified that this loss function has limitations, such as poor margins leading to false results and sensitivity to noisy data and hyperparameter variations. To overcome these issues, supervised contrastive learning (SCL) has been introduced. In this study, an SCL method, a two-stage training method with a supervised contrastive loss function, was proposed, to the best of the authors' knowledge for the first time, to identify DR and its severity stages from fundus images (FIs) using the "APTOS 2019 Blindness Detection" dataset. The "Messidor-2" dataset was also used to conduct experiments for further validating the model's performance. Contrast-limited adaptive histogram equalization (CLAHE) was applied to enhance image quality, and the pre-trained Xception CNN model was deployed as the encoder with transfer learning. To interpret the SCL behaviour of the model, the t-SNE method was used to visualize the 128-dimensional embedding space (a unit hypersphere) in two dimensions. The proposed model achieved a test accuracy of 98.36% and an AUC score of 98.50% for identifying DR (binary classification), and a test accuracy of 84.364% and an AUC score of 93.819% for five-stage grading with the APTOS 2019 dataset. Other evaluation metrics (precision, recall, F1-score) were also determined with APTOS 2019 as well as with Messidor-2 to analyze the performance of the proposed model. It was also concluded that the proposed method achieved better performance in detecting DR than a conventional CNN without SCL and other state-of-the-art methods.
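For reference, a generic supervised contrastive loss over L2-normalized embeddings (in the style of Khosla et al.) is sketched below; the temperature value and the exact formulation used in the paper may differ.

```python
import tensorflow as tf

def supervised_contrastive_loss(features, labels, temperature=0.07):
    """Supervised contrastive loss: pull same-class embeddings together, push others apart."""
    features = tf.math.l2_normalize(features, axis=1)          # project onto the unit hypersphere
    logits = tf.matmul(features, features, transpose_b=True) / temperature
    n = tf.shape(features)[0]
    logits_mask = 1.0 - tf.eye(n)                              # exclude self-similarity
    labels = tf.reshape(labels, (-1, 1))
    pos_mask = tf.cast(tf.equal(labels, tf.transpose(labels)), tf.float32) * logits_mask
    exp_logits = tf.exp(logits) * logits_mask
    log_prob = logits - tf.math.log(tf.reduce_sum(exp_logits, axis=1, keepdims=True) + 1e-12)
    mean_log_prob_pos = (tf.reduce_sum(pos_mask * log_prob, axis=1)
                         / (tf.reduce_sum(pos_mask, axis=1) + 1e-12))
    return -tf.reduce_mean(mean_log_prob_pos)

# Stage 1: train the Xception encoder plus a projection head with this loss;
# stage 2: freeze the encoder and train a classifier head with cross-entropy.
```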


Subjects
Diabetes Mellitus, Diabetic Retinopathy, Artificial Intelligence, Blindness, Diabetic Retinopathy/diagnostic imaging, Fundus Oculi, Humans
10.
Sensors (Basel) ; 21(4)2021 Feb 20.
Article in English | MEDLINE | ID: mdl-33672585

ABSTRACT

COVID-19, caused by the novel coronavirus, is currently considered one of the most dangerous and deadly diseases affecting the human body. Beginning in December 2019, the coronavirus, thought to have originated in Wuhan, China, spread rapidly around the world and has been responsible for a large number of deaths. Earlier detection of COVID-19 through accurate diagnosis, particularly for cases with no obvious symptoms, may decrease the death rate. Chest X-ray images are primarily used for the diagnosis of this disease. This research proposed a machine vision approach to detect COVID-19 from chest X-ray images. The features extracted by the histogram of oriented gradients (HOG) and a convolutional neural network (CNN) from the X-ray images were fused to develop the classification model through training with a CNN (VGGNet). A modified anisotropic diffusion filtering (MADF) technique was employed for better edge preservation and reduced noise in the images. A watershed segmentation algorithm was used to mark the significant fracture region in the input X-ray images. The testing stage considered generalized data for the performance evaluation of the model. Cross-validation analysis revealed that a 5-fold strategy could successfully mitigate the overfitting problem. The proposed feature fusion using the deep learning technique achieved satisfactory performance in identifying COVID-19 compared to closely related works, with a testing accuracy of 99.49%, specificity of 95.7%, and sensitivity of 93.65%. Compared to other classification techniques, such as ANN, KNN, and SVM, the CNN technique used in this study showed better classification performance. K-fold cross-validation demonstrated that the proposed feature fusion technique (98.36%) provided higher accuracy than the individual feature extraction methods, such as HOG (87.34%) or CNN (93.64%).
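The HOG-plus-CNN feature fusion idea can be sketched as follows for a single grayscale CXR image; the HOG parameters and the use of a pretrained VGG16 backbone as the CNN feature extractor are assumptions, and the MADF and watershed preprocessing steps are omitted here.

```python
import numpy as np
import tensorflow as tf
from skimage.feature import hog
from skimage.transform import resize

def fused_features(gray_image, cnn_backbone):
    """Concatenate a HOG descriptor with pooled CNN features for one X-ray image."""
    img = resize(gray_image, (224, 224))                         # values in [0, 1]
    hog_vec = hog(img, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    rgb = np.repeat(img[..., None], 3, axis=-1)[None]            # (1, 224, 224, 3)
    cnn_in = tf.keras.applications.vgg16.preprocess_input(rgb * 255.0)
    cnn_vec = cnn_backbone(cnn_in).numpy().ravel()               # global-average-pooled features
    return np.concatenate([hog_vec, cnn_vec])

vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet", pooling="avg")
# feat = fused_features(xray_gray, vgg)  # feed the fused vector to the downstream classifier
```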


Subjects
COVID-19/diagnostic imaging, Deep Learning, Image Interpretation (Computer-Assisted), China, Humans, Radiography (Thoracic), X-Rays