Results 1 - 16 of 16
1.
Heliyon ; 10(9): e29802, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38707335

ABSTRACT

There is an increasing demand for efficient and precise plant disease detection methods that can quickly identify disease outbreaks. Researchers have developed various machine learning and image processing techniques for this purpose. However, real-field images present challenges due to complex backgrounds, similarities between different disease symptoms, and the need to detect multiple diseases simultaneously, all of which hinder the development of a reliable classification model. Attention mechanisms emerge as a critical factor in enhancing the robustness of classification models by selectively focusing on relevant regions or features within an infected image. This paper details the main types of attention mechanisms and explores how they have been applied in machine learning solutions for image segmentation, feature extraction, object detection, and classification in plant disease identification. Experiments are conducted on three models, MobileNetV2, EfficientNetV2, and ShuffleNetV2, to assess the effectiveness of attention modules. Squeeze-and-Excitation layers, the Convolutional Block Attention Module, and transformer modules are integrated into these models, and their performance is evaluated using different metrics. The outcomes show that adding attention modules enhances the original models' performance.
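To make the attention mechanism concrete, the following is a minimal PyTorch sketch (not the authors' code; layer sizes are assumptions) of a Squeeze-and-Excitation block of the kind integrated into MobileNetV2, EfficientNetV2, and ShuffleNetV2 in this paper.

```python
# Hypothetical sketch: a Squeeze-and-Excitation channel-attention block.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel attention: squeeze spatial dims, excite channels, rescale features."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: B x C x H x W -> B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                             # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # reweight feature maps by channel attention

# Usage: attach after a backbone feature map, e.g. 64-channel leaf-image features.
feat = torch.randn(8, 64, 28, 28)
print(SEBlock(64)(feat).shape)  # torch.Size([8, 64, 28, 28])
```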

2.
BMC Med Imaging ; 24(1): 110, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750436

ABSTRACT

Brain tumor classification using MRI images is a crucial yet challenging task in medical imaging. Accurate diagnosis is vital for effective treatment planning but is often hindered by the complex nature of tumor morphology and variations in imaging. Traditional methodologies rely primarily on manual interpretation of MRI images, supplemented by conventional machine learning techniques. These approaches often lack the robustness and scalability needed for precise and automated tumor classification; their major limitations include a high degree of manual intervention, potential for human error, limited ability to handle large datasets, and lack of generalizability to diverse tumor types and imaging conditions. To address these challenges, we propose a federated learning-based deep learning model that leverages the power of Convolutional Neural Networks (CNNs) for automated and accurate brain tumor classification. This approach emphasizes a modified VGG16 architecture optimized for brain MRI images and highlights the significance of federated learning and transfer learning in the medical imaging domain. Federated learning enables decentralized model training across multiple clients without compromising data privacy, addressing the critical need for confidentiality in medical data handling. The architecture also benefits from transfer learning: a pre-trained CNN significantly enhances its ability to classify brain tumors accurately by leveraging knowledge gained from vast and diverse datasets. Our model is trained on a dataset combining the figshare, SARTAJ, and Br35H datasets, employing a federated learning approach for decentralized, privacy-preserving model training. The adoption of transfer learning further bolsters performance, making the model adept at handling the intricate variations in MRI images associated with different types of brain tumors. The model demonstrates high precision (0.99 for glioma, 0.95 for meningioma, 1.00 for no tumor, and 0.98 for pituitary), recall, and F1-scores, outperforming existing methods. The overall accuracy stands at 98%, showcasing the model's efficacy in classifying various tumor types and highlighting the transformative potential of federated learning and transfer learning in enhancing brain tumor classification using MRI images.
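As an illustration of the decentralized training step (an assumed FedAvg-style sketch, not the authors' implementation), client models are trained locally and only their weights are averaged on the server:

```python
# Illustrative FedAvg sketch: weighted averaging of client model weights.
import copy
import torch
import torch.nn as nn

def fed_avg(client_states, client_sizes):
    """Weighted average of client state_dicts (FedAvg)."""
    total = sum(client_sizes)
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(s[key].float() * (n / total)
                       for s, n in zip(client_states, client_sizes))
    return avg

# Toy demo with a stand-in classifier head (a full run would use a modified VGG16).
model = nn.Linear(512, 4)                       # 4 classes: glioma, meningioma, no tumor, pituitary
clients = [copy.deepcopy(model) for _ in range(3)]
# ... each client trains locally on its private MRI data here ...
global_state = fed_avg([c.state_dict() for c in clients], client_sizes=[100, 80, 120])
model.load_state_dict(global_state)
```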


Subjects
Brain Neoplasms , Deep Learning , Magnetic Resonance Imaging , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/classification , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Machine Learning , Image Interpretation, Computer-Assisted/methods
3.
BMC Med Imaging ; 24(1): 118, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38773391

ABSTRACT

Brain tumor diagnosis using MRI scans poses significant challenges due to the complex nature of tumor appearances and variations. Traditional methods often require extensive manual intervention and are prone to human error, leading to misdiagnosis and delayed treatment. Current approaches primarily include manual examination by radiologists and conventional machine learning techniques. These methods rely heavily on feature extraction and classification algorithms, which may not capture the intricate patterns present in brain MRI images. Conventional techniques often suffer from limited accuracy and generalizability, mainly due to the high variability in tumor appearance and the subjective nature of manual interpretation. Additionally, traditional machine learning models may struggle with the high-dimensional data inherent in MRI images. To address these limitations, our research introduces a deep learning-based model utilizing convolutional neural networks (CNNs). Our model employs a sequential CNN architecture with multiple convolutional, max-pooling, and dropout layers, followed by dense layers for classification. The proposed model demonstrates a significant improvement in diagnostic accuracy, achieving an overall accuracy of 98% on the test dataset. Precision, recall, and F1-scores ranging from 97 to 98%, with ROC-AUC values ranging from 99 to 100% for each tumor category, further substantiate the model's effectiveness. Additionally, Grad-CAM visualizations provide insights into the model's decision-making process, enhancing interpretability. This research addresses the pressing need for enhanced diagnostic accuracy in identifying brain tumors through MRI imaging, tackling challenges such as variability in tumor appearance and the need for rapid, reliable diagnostic tools.
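A minimal Keras sketch of such a sequential CNN is shown below; the specific layer counts and sizes are assumptions, since the abstract does not state them.

```python
# Assumed sketch of a sequential CNN: conv / max-pool / dropout blocks + dense classifier.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(224, 224, 3), num_classes=4):
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),                                 # regularization between blocks
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),      # tumor class probabilities
    ])

model = build_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```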


Subjects
Brain Neoplasms , Deep Learning , Magnetic Resonance Imaging , Neural Networks, Computer , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/classification , Magnetic Resonance Imaging/methods , Algorithms , Image Interpretation, Computer-Assisted/methods , Male , Female
4.
BMC Med Imaging ; 24(1): 105, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38730390

ABSTRACT

Categorizing Artificial Intelligence of Medical Things (AIoMT) devices within the realm of standard Internet of Things (IoT) and Internet of Medical Things (IoMT) devices, particularly at the server and computational layers, poses a formidable challenge. In this paper, we present a novel methodology for categorizing AIoMT devices through the application of decentralized processing, referred to as "Federated Learning" (FL). Our approach involves deploying a system on standard IoT devices and labeled IoMT devices for training and attribute extraction. Through this process, we extract and map the interconnected attributes from a global federated aggregation server. The aim of this approach is to identify interdependent devices via federated learning while ensuring data privacy and adherence to operational policies. A global training dataset repository is then coordinated to establish a centralized indexing and synchronization knowledge repository. The categorization process employs generic labels for devices transmitting medical data through regular communication channels. We evaluate our proposed methodology across a variety of IoT, IoMT, and AIoMT devices, demonstrating effective classification and labeling. Our technique yields a reliable categorization index that facilitates efficient access to, and optimization of, medical devices within global servers.


Subjects
Artificial Intelligence , Blockchain , Internet of Things , Humans
5.
Heliyon ; 10(7): e28195, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38571667

ABSTRACT

People who work in dangerous environments include farmers, sailors, travelers, and mining workers. Because they must evaluate the changes taking place in their immediate surroundings, they need to gather information and data from the real world. It therefore becomes crucial to regularly monitor meteorological parameters such as air quality, rainfall, water level, pH value, wind direction and speed, temperature, atmospheric pressure, humidity, soil moisture, light intensity, and turbidity in order to avoid risks or calamities. IoT plays a major role in enhancing environmental standards, greatly advancing sustainable living with innovative techniques for monitoring air quality and treating water. With the aid of various sensors, a microcontroller (Arduino Uno), GSM, Wi-Fi, and HTTP protocols, the suggested system is a real-time smart monitoring system based on the Internet of Things. The proposed system also provides an HTTP-based webpage, enabled by Wi-Fi, to transfer the data to remote locations, making it feasible to track changes in the weather from any location at any distance. The proposed system is a sophisticated, efficient, accurate, cost-effective, and dependable weather station that will be valuable to anyone who wants to monitor environmental changes on a regular basis.
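The reporting path can be pictured with a small, hypothetical gateway-side Python sketch (endpoint URL and field names are placeholders, not from the paper) that posts readings over HTTP the way the Arduino/Wi-Fi firmware would:

```python
# Hypothetical gateway-side sketch: forward sensor readings to a web endpoint over HTTP.
import time
import random
import requests

ENDPOINT = "http://example.com/api/readings"   # placeholder URL, not from the paper

def read_sensors():
    # Stand-in for real sensor drivers (DHT, pH probe, anemometer, etc.).
    return {
        "temperature_c": round(random.uniform(15, 40), 1),
        "humidity_pct": round(random.uniform(20, 90), 1),
        "pressure_hpa": round(random.uniform(980, 1040), 1),
        "soil_moisture_pct": round(random.uniform(5, 60), 1),
    }

for _ in range(3):                              # three demo cycles; a deployment would loop forever
    reading = read_sensors()
    try:
        requests.post(ENDPOINT, json=reading, timeout=5)   # push to the remote dashboard
    except requests.RequestException as exc:
        print("upload failed:", exc)                       # retry on the next cycle
    time.sleep(60)                                         # report once a minute
```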

6.
Front Med (Lausanne) ; 11: 1373244, 2024.
Article in English | MEDLINE | ID: mdl-38515985

ABSTRACT

Breast cancer, a prevalent cancer among women worldwide, necessitates precise and prompt detection for successful treatment. While conventional histopathological examination is the benchmark, it is a lengthy process and prone to variations among different observers. Employing machine learning to automate the diagnosis of breast cancer presents a viable option, striving to improve both precision and speed. Previous studies have primarily focused on applying various machine learning and deep learning models for the classification of breast cancer images. These methodologies leverage convolutional neural networks (CNNs) and other advanced algorithms to differentiate between benign and malignant tumors from histopathological images. Current models, despite their potential, encounter obstacles related to generalizability, computational performance, and managing datasets with imbalances. Additionally, a significant number of these models do not possess the requisite transparency and interpretability, which are vital for medical diagnostic purposes. To address these limitations, our study introduces an advanced machine learning model based on EfficientNetV2. This model incorporates state-of-the-art techniques in image processing and neural network architecture, aiming to improve accuracy, efficiency, and robustness in classification. We employed the EfficientNetV2 model, fine-tuned for the specific task of breast cancer image classification. Our model underwent rigorous training and validation using the BreakHis dataset, which includes diverse histopathological images. Advanced data preprocessing, augmentation techniques, and a cyclical learning rate strategy were implemented to enhance model performance. The introduced model exhibited remarkable efficacy, attaining an accuracy rate of 99.68%, balanced precision and recall as indicated by a significant F1 score, and a considerable Cohen's Kappa value. These indicators highlight the model's proficiency in correctly categorizing histopathological images, surpassing current techniques in reliability and effectiveness. The research emphasizes improved accessibility, catering to individuals with disabilities and the elderly. By enhancing visual representation and interpretability, the proposed approach aims to make strides in inclusive medical image interpretation, ensuring equitable access to diagnostic information.
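A hedged sketch of this kind of fine-tuning setup (hyperparameters are assumptions; this is not the released code) using torchvision's EfficientNetV2 and a cyclical learning rate:

```python
# Assumed sketch: fine-tuning EfficientNetV2 for benign/malignant histopathology classification.
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_v2_s(weights=models.EfficientNet_V2_S_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)  # benign vs malignant

optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=1e-5, max_lr=1e-3, step_size_up=500)          # cyclical learning rate
criterion = nn.CrossEntropyLoss()

# One illustrative training step with a dummy batch standing in for BreakHis images.
images, labels = torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 1, 0])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
scheduler.step()
print(float(loss))
```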

7.
PLoS One ; 19(3): e0298731, 2024.
Article in English | MEDLINE | ID: mdl-38527047

ABSTRACT

A shell and tube heat exchanger (STHE) for heat recovery applications was studied to uncover the intricacies of its optimization. To optimize performance, a hybrid optimization methodology was developed by combining the Neural Fitting Tool (NFTool), Particle Swarm Optimization (PSO), and Grey Relational Analysis (GRA). The STHE was analyzed systematically using the Taguchi method to identify the critical factors affecting each response. To clarify the complex relationship between heat exchanger efficiency and operational parameters, grey relational grades (GRGs) are first computed. The grey relational coefficients are then forecast using NFTool to provide more insight into the complex dynamics. PSO analysis is subsequently applied to obtain an optimized parameter set, resulting in a higher grey relational coefficient and improved heat exchanger performance. The primary application of this study is heat recovery. The values estimated by the hybrid optimization algorithm were compared in detail with the experimental results. The results demonstrate that the proposed counter-flow shell and tube strategy is effective for optimizing performance.
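The grey relational grade computation can be sketched as follows (a generic formulation with a distinguishing coefficient of 0.5; the paper's exact factor levels and responses are not reproduced):

```python
# Generic grey relational grade (GRG) sketch; rows = experiments, columns = responses.
import numpy as np

def grey_relational_grade(responses, larger_is_better=True, zeta=0.5):
    x = np.asarray(responses, dtype=float)
    # Step 1: normalize each response column to [0, 1].
    if larger_is_better:
        norm = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))
    else:
        norm = (x.max(axis=0) - x) / (x.max(axis=0) - x.min(axis=0))
    # Step 2: deviation from the ideal sequence (all ones after normalization).
    delta = np.abs(1.0 - norm)
    # Step 3: grey relational coefficients with distinguishing coefficient zeta.
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    # Step 4: grade = mean coefficient over responses for each experiment.
    return coeff.mean(axis=1)

# Example: 4 experimental runs, 2 responses (e.g., heat transfer rate, effectiveness).
runs = [[120.0, 0.62], [135.0, 0.58], [128.0, 0.65], [140.0, 0.60]]
print(grey_relational_grade(runs))  # higher grade = closer to the ideal setting
```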


Subjects
Algorithms , Hot Temperature
8.
Diagnostics (Basel) ; 13(22)2023 Nov 09.
Article in English | MEDLINE | ID: mdl-37998555

ABSTRACT

The mortality rates of patients contracting the Omicron and Delta variants of COVID-19 are very high, making these among the most serious variants of the disease. Hence, our objective is to detect the COVID-19 Omicron and Delta variants from lung CT-scan images. We designed a unique ensemble model that combines the CNN architecture of a deep neural network, Capsule Network (CapsNet), with pre-trained architectures, i.e., VGG-16, DenseNet-121, and Inception-v3, to produce a reliable and robust model for diagnosing Omicron and Delta variant data. Although a single model can achieve remarkable accuracy, its results can be difficult to trust. The ensemble model, on the other hand, operates according to the scientific tenet of combining the majority votes of various models. Transfer learning is adopted in our work to benefit from previously learned parameters and to reduce the architecture's data requirements. Likewise, CapsNet performs consistently regardless of changes in the position, size, and orientation of the input image. The proposed ensemble model produced an accuracy of 99.93%, an AUC of 0.999 and a precision of 99.9%. Finally, the framework is deployed in a local cloud web application so that the diagnosis of these particular variants can be accomplished remotely.
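The majority-voting rule itself can be sketched as follows (model loading omitted; the probability arrays are toy values):

```python
# Majority-vote ensemble over per-model class predictions.
import numpy as np

def majority_vote(prob_list):
    """prob_list: list of (n_samples, n_classes) probability arrays, one per model."""
    votes = np.stack([p.argmax(axis=1) for p in prob_list])       # each model's hard label
    # Most frequent label per sample across models (ties fall to the lowest class index).
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Toy example: 3 models (e.g., VGG-16, DenseNet-121, Inception-v3) on 2 CT slices,
# classes = [Omicron, Delta].
p_vgg   = np.array([[0.9, 0.1], [0.4, 0.6]])
p_dense = np.array([[0.7, 0.3], [0.2, 0.8]])
p_incep = np.array([[0.3, 0.7], [0.1, 0.9]])
print(majority_vote([p_vgg, p_dense, p_incep]))  # -> [0 1]
```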

9.
Diagnostics (Basel) ; 13(22)2023 Nov 15.
Article in English | MEDLINE | ID: mdl-37998588

ABSTRACT

Prompt diagnostics and appropriate cancer therapy necessitate the use of gene expression databases. The integration of analytical methods can enhance detection precision by capturing intricate patterns and subtle connections in the data. This study proposes an integrated diagnostic approach combining Empirical Bayes Harmonization (EBS), Jensen-Shannon Divergence (JSD), deep learning, and contour mathematics for cancer detection using gene expression data. EBS preprocesses the gene expression data, while JSD measures the distributional differences between cancerous and non-cancerous samples, providing invaluable insights into gene expression patterns. Deep learning (DL) models are employed for automatic deep feature extraction and to discern complex patterns from the data. Contour mathematics is applied to visualize decision boundaries and regions in the high-dimensional feature space. JSD imparts significant information to the deep learning model, directing it to concentrate on pertinent features associated with cancerous samples. Contour visualization elucidates the model's decision-making process, bolstering interpretability. The amalgamation of JSD, deep learning, and contour mathematics in gene expression data analysis presents a promising pathway for precise cancer detection. This method taps into the prowess of deep learning for feature extraction while employing JSD to pinpoint distributional differences and contour mathematics for visual elucidation. The outcomes underscore its potential as a formidable instrument for cancer detection, furnishing crucial insights for timely diagnostics and tailor-made treatment strategies.
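For reference, a small numpy sketch of the Jensen-Shannon divergence between two expression distributions (the distributions shown are made up):

```python
# Jensen-Shannon divergence between tumor and normal expression distributions (toy data).
import numpy as np

def js_divergence(p, q, eps=1e-12):
    p = np.asarray(p, float); p = p / p.sum()
    q = np.asarray(q, float); q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)       # bounded in [0, 1] with log base 2

# Example: binned expression histograms of one gene in tumor vs. normal samples.
tumor  = [0.05, 0.10, 0.25, 0.40, 0.20]
normal = [0.30, 0.35, 0.20, 0.10, 0.05]
print(round(js_divergence(tumor, normal), 3))    # larger value = more discriminative gene
```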

10.
Diagnostics (Basel) ; 13(22)2023 Nov 16.
Article in English | MEDLINE | ID: mdl-37998597

ABSTRACT

One of the most prevalent cancers is oral squamous cell carcinoma, and preventing mortality from this disease primarily depends on early detection. Clinicians will greatly benefit from automated diagnostic techniques that analyze a patient's histopathology images to identify abnormal oral lesions. A deep learning framework was designed with an intermediate layer between feature extraction layers and classification layers for classifying the histopathological images into two categories, namely, normal and oral squamous cell carcinoma. The intermediate layer is constructed using the proposed swarm intelligence technique called the Modified Gorilla Troops Optimizer. While there are many optimization algorithms used in the literature for feature selection, weight updating, and optimal parameter identification in deep learning models, this work focuses on using optimization algorithms as an intermediate layer to convert extracted features into features that are better suited for classification. Three datasets comprising 2784 normal and 3632 oral squamous cell carcinoma subjects are considered in this work. Three popular CNN architectures, namely, InceptionV2, MobileNetV3, and EfficientNetB3, are investigated as feature extraction layers. Two fully connected Neural Network layers, batch normalization, and dropout are used as classification layers. With the best accuracy of 0.89 among the examined feature extraction models, MobileNetV3 exhibits good performance. This accuracy is increased to 0.95 when the suggested Modified Gorilla Troops Optimizer is used as an intermediary layer.

11.
Diagnostics (Basel) ; 13(22)2023 Nov 20.
Article in English | MEDLINE | ID: mdl-37998620

ABSTRACT

According to the WHO (World Health Organization), lung cancer is the leading cause of cancer deaths globally. In 2020, more than 2.2 million people were diagnosed with lung cancer worldwide, accounting for 11.4% of all new cancer cases, and lung cancer was the leading driver of cancer-related mortality, with an estimated 1.8 million fatalities. Statistics on lung cancer rates are not uniform among geographic areas, demographic subgroups, or age groups. The chance of an effective treatment outcome and the likelihood of patient survival can be greatly improved with the early identification of lung cancer. Lung cancer identification in medical images such as CT scans and MRIs is an area where deep learning (DL) algorithms have shown great potential. This study uses a Hybridized Faster R-CNN (HFRCNN) to identify lung cancer at an early stage. Faster R-CNN has been put to good use in identifying critical entities in medical imagery such as MRIs and CT scans. Many research investigations in recent years have examined various techniques to detect lung nodules (possible indicators of lung cancer) in scanned images, which may help in the early identification of lung cancer. One such model is HFRCNN, a two-stage, region-based entity detector. It begins by generating a collection of proposed regions, which are subsequently classified and refined with the aid of a convolutional neural network (CNN). A distinct dataset is used in the model's training process, producing valuable outcomes: a detection accuracy of more than 97% was achieved with the suggested model, making it far more accurate than several previously reported methods.
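A standard two-stage detector of this family can be set up with torchvision as sketched below; this is the plain Faster R-CNN fine-tuning recipe, not the paper's hybridized variant.

```python
# Plain torchvision Faster R-CNN recipe, repurposed here for nodule detection (sketch only).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")            # pre-trained on COCO
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)  # background + nodule

model.eval()
with torch.no_grad():
    preds = model([torch.rand(3, 512, 512)])                  # one dummy CT slice as RGB
print(preds[0]["boxes"].shape, preds[0]["scores"].shape)      # proposed regions + confidences
```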

12.
Diagnostics (Basel) ; 13(20)2023 Oct 18.
Article in English | MEDLINE | ID: mdl-37892065

ABSTRACT

Kidney tumors represent a significant medical challenge, characterized by their often-asymptomatic nature and the need for early detection to facilitate timely and effective intervention. Although neural networks have shown great promise in disease prediction, their computational demands have limited their practicality in clinical settings. This study introduces a novel methodology, the UNet-PWP architecture, tailored explicitly for kidney tumor segmentation and designed to optimize resource utilization and overcome computational complexity constraints. A key novelty in our approach is the application of adaptive partitioning, which deconstructs the intricate UNet architecture into smaller submodels. This partitioning strategy reduces computational requirements and enhances the model's efficiency in processing kidney tumor images. Additionally, we augment the UNet's depth by incorporating pre-trained weights, significantly boosting its capacity to handle intricate and detailed segmentation tasks. Furthermore, we employ weight-pruning techniques to eliminate redundant zero-weighted parameters, further streamlining the UNet-PWP model without compromising its performance. To rigorously assess the effectiveness of our proposed UNet-PWP model, we conducted a comparative evaluation alongside the DeepLab V3+ model, both trained on the KiTS 19, 21, and 23 kidney tumor datasets. Our results are promising, with the UNet-PWP model achieving an exceptional accuracy rate of 97.01% on both the training and test datasets, surpassing the DeepLab V3+ model in performance. Furthermore, to ensure that our model's results are easily understandable and explainable, we included a fusion of attention and Grad-CAM XAI methods. This approach provides valuable insights into the decision-making process of our model and the regions of interest that affect its predictions. In the medical field, this interpretability is crucial for healthcare professionals to trust and comprehend the model's reasoning.
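The weight-pruning step can be illustrated with a toy layer (not the full UNet-PWP) using PyTorch's pruning utilities:

```python
# Magnitude-based weight pruning on a toy conv layer, then making the mask permanent.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(64, 64, kernel_size=3, padding=1)
prune.l1_unstructured(conv, name="weight", amount=0.3)    # zero out the 30% smallest-magnitude weights
print("sparsity:", float((conv.weight == 0).float().mean()))
prune.remove(conv, "weight")                               # bake the pruning mask into the weights
```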

13.
Bioengineering (Basel) ; 10(8)2023 Aug 09.
Article in English | MEDLINE | ID: mdl-37627835

ABSTRACT

In this study, we use LSTM (Long Short-Term Memory) networks to evaluate Magnetic Resonance Imaging (MRI) data and overcome the shortcomings of conventional Alzheimer's disease (AD) detection techniques. Our method offers greater reliability and accuracy in predicting the likelihood of AD, in contrast to cognitive testing and brain structure analyses. We trained our LSTM network on an MRI dataset downloaded from Kaggle. Utilizing the temporal memory characteristics of LSTMs, the network was designed to efficiently capture and evaluate the sequential patterns inherent in MRI scans. Our model scored a remarkable AUC of 0.97 and an accuracy of 98.62%. During the training process, we used Stratified Shuffle-Split Cross-Validation to ensure that our findings were reliable and generalizable. Our study adds significantly to the body of knowledge by demonstrating the potential of LSTM networks in the specific field of AD prediction and by extending the variety of methods investigated for image classification in AD research. We have also designed a user-friendly web-based application to improve the accessibility of our developed model, bridging the gap between research and actual deployment.
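A minimal sketch of an LSTM classifier evaluated with stratified shuffle-split folds (the per-slice feature extraction is assumed, not taken from the paper):

```python
# Assumed sketch: LSTM over slice-wise MRI features, evaluated with stratified shuffle splits.
import torch
import torch.nn as nn
from sklearn.model_selection import StratifiedShuffleSplit

class LSTMClassifier(nn.Module):
    def __init__(self, n_features=64, hidden=128, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                     # x: (batch, seq_len, n_features)
        _, (h, _) = self.lstm(x)
        return self.fc(h[-1])                 # classify from the final hidden state

X = torch.randn(200, 32, 64)                  # 200 scans, 32 slices each, 64 features per slice
y = torch.randint(0, 2, (200,))               # 0 = non-demented, 1 = demented (toy labels)

splitter = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=42)
for train_idx, test_idx in splitter.split(X.numpy(), y.numpy()):
    model = LSTMClassifier()
    # ... train on X[train_idx], y[train_idx]; evaluate on the held-out split ...
    print(len(train_idx), len(test_idx))
```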

14.
Diagnostics (Basel) ; 13(16)2023 Aug 15.
Article in English | MEDLINE | ID: mdl-37627946

ABSTRACT

Deep learning plays a major role in identifying complicated structures, and it outperforms traditional algorithms in training and classification tasks. In this work, a local cloud-based solution is developed for the classification of Alzheimer's disease (AD) using MRI scans as the input modality. Multi-class classification is used to capture the variety of AD, which is divided into four stages. In order to leverage the capabilities of the pre-trained GoogLeNet model, transfer learning is employed. The GoogLeNet model, which is pre-trained for image classification tasks, is fine-tuned for the specific purpose of multi-class AD classification. Through this process, an accuracy of 98% is achieved. As a result, a local cloud web application for Alzheimer's prediction is developed using the proposed GoogLeNet architecture. This application enables doctors to remotely check for the presence of AD in patients.
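A hedged transfer-learning sketch for the four-stage classification (hyperparameters and class names are assumptions, not the paper's code):

```python
# Assumed sketch: pre-trained GoogLeNet with its final layer replaced for four AD stages.
import torch
import torch.nn as nn
from torchvision import models

model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                      # freeze the pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 4)    # e.g., non / very mild / mild / moderate dementia

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative step with a dummy MRI batch.
x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 4, (8,))
model.eval()                                     # keep frozen batch-norm statistics in this toy step
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```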

15.
Sensors (Basel) ; 23(14)2023 Jul 10.
Article in English | MEDLINE | ID: mdl-37514569

ABSTRACT

Recently, research into Wireless Body-Area Sensor Networks (WBASN) or Wireless Body-Area Networks (WBAN) has gained much importance in medical applications, and now plays a significant role in patient monitoring. Among the various operations, routing is still recognized as a resource-intensive activity. As a result, designing an energy-efficient routing system for WBAN is critical. The existing routing algorithms focus more on energy efficiency than security. However, security attacks will lead to more energy consumption, which will reduce overall network performance. To handle the issues of reliability, energy efficiency, and security in WBAN, a new cluster-based secure routing protocol called the Secure Optimal Path-Routing (SOPR) protocol has been proposed in this paper. This proposed algorithm provides security by identifying and avoiding black-hole attacks on one side, and by sending data packets in encrypted form on the other side to strengthen communication security in WBANs. The main advantages of implementing the proposed protocol include improved overall network performance by increasing the packet-delivery ratio and reducing attack-detection overheads, detection time, energy consumption, and delay.
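The encrypted-packet idea can be illustrated with a stand-in cipher (the SOPR protocol's actual cipher and key management are not specified here); this sketch uses the cryptography package's Fernet recipe:

```python
# Stand-in symmetric encryption of a sensor payload before it is forwarded along a route.
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # in a WBAN this key would be provisioned per cluster
cipher = Fernet(key)

payload = b'{"node": 7, "heart_rate": 82, "spo2": 97}'
token = cipher.encrypt(payload)          # encrypted packet sent along the selected route
print(cipher.decrypt(token) == payload)  # the sink node recovers the original reading
```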

16.
Sensors (Basel) ; 23(11)2023 Jun 05.
Article in English | MEDLINE | ID: mdl-37300076

ABSTRACT

The emergence of the Internet of Things (IoT) and its subsequent evolution into the Internet of Everything (IoE) is a result of the rapid growth of information and communication technologies (ICT). However, implementing these technologies comes with certain obstacles, such as the limited availability of energy resources and processing power. Consequently, there is a need for energy-efficient and intelligent load-balancing models, particularly in healthcare, where real-time applications generate large volumes of data. This paper proposes a novel, energy-aware artificial intelligence (AI)-based load balancing model that employs the Chaotic Horse Ride Optimization Algorithm (CHROA) and big data analytics (BDA) for cloud-enabled IoT environments. The CHROA technique enhances the optimization capacity of the Horse Ride Optimization Algorithm (HROA) using chaotic principles. The proposed CHROA model balances the load, optimizes available energy resources using AI techniques, and is evaluated using various metrics. Experimental results show that the CHROA model outperforms existing models. For instance, while the Artificial Bee Colony (ABC), Gravitational Search Algorithm (GSA), and Whale Defense Algorithm with Firefly Algorithm (WD-FA) techniques attain average throughputs of 58.247 Kbps, 59.957 Kbps, and 60.819 Kbps, respectively, the CHROA model achieves an average throughput of 70.122 Kbps. The proposed CHROA-based model presents an innovative approach to intelligent load balancing and energy optimization in cloud-enabled IoT environments. The results highlight its potential to address critical challenges and contribute to developing efficient and sustainable IoT/IoE solutions.
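As a rough illustration of how chaotic principles can drive an optimizer's moves (a generic chaos-enhanced random search, not the published CHROA update equations):

```python
# Generic chaos-enhanced search: a logistic map drives the perturbation sequence.
import numpy as np

def logistic_map(x, r=4.0):
    return r * x * (1.0 - x)                       # chaotic sequence in (0, 1)

def chaotic_search(objective, dim, iters=200, seed=0.7):
    rng = np.random.default_rng(42)
    best = rng.uniform(0, 1, dim)                  # candidate load-assignment vector
    best_val = objective(best)
    c = seed
    for _ in range(iters):
        c = logistic_map(c)
        candidate = np.clip(best + (c - 0.5) * 0.2, 0, 1)   # chaotic perturbation around the best
        val = objective(candidate)
        if val < best_val:
            best, best_val = candidate, val
    return best, best_val

# Toy objective: spread load evenly across 5 servers (minimize variance of the shares).
obj = lambda x: np.var(x / x.sum())
sol, val = chaotic_search(obj, dim=5)
print(sol.round(3), round(val, 6))
```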


Subjects
Algorithms , Artificial Intelligence , Animals , Horses , Intelligence , Awareness , Internet