Results 1 - 14 of 14
1.
Comput Biol Med ; 179: 108847, 2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39004046

ABSTRACT

The UNet architecture, which is widely used for biomedical image segmentation, has limitations such as blurred feature maps and over- or under-segmented regions. To overcome these limitations, we propose a novel network architecture called MACCoM (Multiple Attention and Convolutional Cross-Mixer), an end-to-end depthwise encoder-decoder fully convolutional network designed for binary and multi-class biomedical image segmentation and built upon a deeper UNet. We propose a multi-scope attention module (MSAM) that allows the model to attend to features at diverse scales, preserving fine details and high-level semantic information, which makes it useful at the encoder-decoder connection. As the depth increases, our proposed spatial multi-head attention (SMA) is added to facilitate inter-layer communication and information exchange, enabling the network to effectively capture long-range dependencies and global context. MACCoM is also equipped with a proposed convolutional cross-mixer to enhance the feature extraction capability of the model. By incorporating these modules, we effectively combine semantically similar features and reduce artifacts during the early stages of training. Experimental results on 4 biomedical datasets crafted from 3 datasets of varying modalities consistently demonstrate that MACCoM outperforms or matches state-of-the-art baselines on the segmentation tasks. On the Breast Ultrasound Image (BUSI) dataset, MACCoM recorded 99.06% Jaccard, 77.58% Dice, and 93.92% Accuracy, while recording 99.50%, 98.44%, and 99.29% respectively for Jaccard, Dice, and Accuracy on the Chest X-ray (CXR) images used. The Jaccard, Dice, and Accuracy for the High-Resolution Fundus (HRF) images are 95.77%, 74.35%, and 95.95%, respectively. These findings highlight MACCoM's effectiveness in improving segmentation performance and its valuable potential in image analysis.
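To make the multi-scope attention idea concrete, here is a minimal PyTorch sketch, written for this summary rather than taken from the paper: parallel depthwise convolutions at several dilation rates produce a gate that re-weights an encoder feature map. The module name, channel count, and dilation set are illustrative assumptions, not the authors' exact MSAM.

```python
import torch
import torch.nn as nn

# Parallel depthwise convolutions with different dilation rates attend to
# features at several receptive-field sizes; a 1x1 mix produces the gate.
class MultiScopeAttention(nn.Module):
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=d, dilation=d, groups=channels)   # depthwise branch
            for d in dilations
        ])
        self.mix = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)
        self.gate = nn.Sigmoid()

    def forward(self, x):
        multi_scope = torch.cat([b(x) for b in self.branches], dim=1)
        attention = self.gate(self.mix(multi_scope))   # per-pixel, per-channel weights
        return x * attention                           # re-weight the skip features

skip = torch.randn(1, 64, 128, 128)                    # e.g. an encoder feature map
print(MultiScopeAttention(64)(skip).shape)             # torch.Size([1, 64, 128, 128])
```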

2.
Sci Total Environ ; 947: 174302, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38945244

ABSTRACT

As the imperative to address climate change becomes more pressing, there is an increasing focus on limiting the global temperature increase to 1.5 °C by the end of the century relative to pre-industrial levels. During the recent Conference of the Parties (COP28), nations committed to tripling renewable energy generation to a minimum of 11,000 GW by 2030 and to increasing the global annual rate of energy-efficiency improvement from 2% to 4% per year until 2030. Additionally, the Food and Agriculture Organization (FAO) introduced a roadmap to transition the agri-food system from a net emitter to a carbon sink. The role of carbon dioxide removal (CDR) is important: first to accelerate the near-term reduction in net emissions, then to counterbalance residual emissions at the point of net zero by mid-century, and finally to sustain large net-negative emissions beyond mid-century to return warming to safe levels after decades of temporary overshoot. This paper assesses the impact of the COP28 agreements, alongside the complementary role of CDR, on emission levels, energy structure, land use, and global warming. The findings indicate that implementing the COP28 pledges and the FAO roadmap leads to warming of 2 °C, falling short of the ambitious 1.5 °C limit. Likewise, more stringent action on transitioning away from fossil fuel plants is a high-priority mitigation measure that drives significant emissions reduction. The modelled results show that agricultural soil carbon and biochar contribute a 47-58% share of the total CDR deployed in the stylized scenarios. In conclusion, CDR can expedite climate goals but must complement emission reduction efforts; hence, the transition away from fossil fuels should prompt the development of detailed roadmaps. More global effort should also be placed on nature-based CDR methods, as they offer diverse co-benefits.

3.
Network ; : 1-38, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38511557

ABSTRACT

Interpretable machine learning models are instrumental in disease diagnosis and clinical decision-making, shedding light on the most relevant features. In this study, Boruta, SHAP (SHapley Additive exPlanations), and BorutaShap were employed for feature selection, each contributing to the identification of crucial features. The selected features were then used to train six machine learning algorithms, including LR, SVM, ETC, AdaBoost, RF, and LGBM, on diverse medical datasets obtained from public sources after rigorous preprocessing. The performance of each feature selection technique was evaluated across multiple ML models using accuracy, precision, recall, and F1-score metrics. Among these, SHAP showed superior performance, achieving average accuracies of 80.17%, 85.13%, 90.00%, and 99.55% on the diabetes, cardiovascular, Statlog, and thyroid disease datasets, respectively. Notably, LGBM emerged as the most effective algorithm, with an average accuracy of 91.00% for most disease states. Moreover, SHAP enhanced the interpretability of the models, providing valuable insights into the underlying mechanisms driving disease diagnosis. This comprehensive study contributes significant insights into feature selection techniques and machine learning algorithms for disease diagnosis, benefiting researchers and practitioners in the medical field. Further exploration of feature selection methods and algorithms holds promise for advancing disease diagnosis methodologies, paving the way for more accurate and interpretable diagnostic models.
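As a hedged illustration of SHAP-based feature selection (not the authors' exact pipeline), the sketch below ranks features by mean absolute SHAP value on a public medical dataset and retrains a model on the top-ranked subset; the dataset, the random-forest model, and the cut-off of ten features are assumptions made for the example.

```python
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_tr)
# Depending on the SHAP version, binary classifiers may return one array per
# class or a 3D array; keep the positive-class attributions either way.
sv = sv[1] if isinstance(sv, list) else sv
if sv.ndim == 3:
    sv = sv[..., 1]

importance = np.abs(sv).mean(axis=0)            # mean |SHAP| per feature
top_k = np.argsort(importance)[::-1][:10]       # keep the 10 most influential features

refit = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr[:, top_k], y_tr)
print("accuracy on SHAP-selected features:", refit.score(X_te[:, top_k], y_te))
```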

4.
Biofactors ; 50(1): 114-134, 2024.
Article in English | MEDLINE | ID: mdl-37695269

ABSTRACT

Recent research indicates that early detection of breast cancer (BC) is critical to achieving favorable treatment outcomes and reducing the associated mortality rate. Because it is difficult to obtain a balanced dataset sourced primarily for diagnosis of the disease, many researchers have relied on data augmentation techniques, resulting in datasets of varying quality and varying results. The dataset we focus on in this study is crafted from SHapley Additive exPlanations (SHAP) augmentation and random augmentation (RA) approaches to dealing with imbalanced data. This was carried out on the Wisconsin BC dataset, and the effectiveness of this approach for the diagnosis of BC was checked using six machine-learning algorithms. RA synthetically generated parts of the dataset, while SHAP helped assess the quality of the attributes, which were selected and used for training the models. Our analysis shows that the performance of the models generally increased by more than 3% for most models when using the dataset obtained by integrating SHAP and RA. Additionally, after diagnosis, it is important to focus on providing quality care to ensure the best possible outcomes for patients. Proper management of the disease state is crucial to reduce recurrence and other associated complications. Thus, the interpretability provided by SHAP informs the management strategies in this study, focusing on the quality of care given to the patient and how timely that care is.
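The sketch below illustrates one plausible random-augmentation (RA) step on the Wisconsin BC data: minority-class samples are oversampled with small Gaussian jitter before training. The paper's exact RA operator and its SHAP-based attribute screening are not reproduced here, and the noise scale is an assumption.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)      # Wisconsin BC dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

rng = np.random.default_rng(0)
counts = np.bincount(y_tr)
minority_label = int(np.argmin(counts))
minority = np.flatnonzero(y_tr == minority_label)
need = int(abs(counts[0] - counts[1]))          # how many synthetic samples to add

picks = rng.choice(minority, size=need, replace=True)
noise = rng.normal(scale=0.01 * X_tr.std(axis=0), size=(need, X_tr.shape[1]))
X_aug = np.vstack([X_tr, X_tr[picks] + noise])  # real + jittered minority samples
y_aug = np.concatenate([y_tr, y_tr[picks]])

clf = GradientBoostingClassifier(random_state=0).fit(X_aug, y_aug)
print("accuracy with RA-balanced training set:", clf.score(X_te, y_te))
```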


Subjects
Breast Neoplasms, Humans, Female, Breast Neoplasms/diagnosis, Algorithms
5.
J King Saud Univ Comput Inf Sci ; 35(7): 101596, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37275558

ABSTRACT

COVID-19 is a contagious disease that affects the human respiratory system. Infected individuals may develop serious illness, and complications may result in death. Using medical images to detect COVID-19 among essentially identical thoracic anomalies is challenging because it is time-consuming, laborious, and prone to human error. This study proposes an end-to-end deep-learning framework based on deep feature concatenation and a multi-head self-attention network. Feature concatenation involves fine-tuning the pre-trained backbone models of DenseNet, VGG-16, and InceptionV3, which are trained on the large-scale ImageNet dataset, whereas the multi-head self-attention network is adopted for a performance gain. End-to-end training and evaluation are conducted on the COVID-19_Radiography_Dataset for binary and multi-class classification scenarios. The proposed model achieved overall accuracies of 96.33% and 98.67% and F1-scores of 92.68% and 98.67% for the multi-class and binary classification scenarios, respectively. In addition, this study highlights the difference in accuracy (98.0% vs. 96.33%) and F1-score (97.34% vs. 95.10%) between feature concatenation and the highest-performing individual model. Furthermore, a visual representation of the saliency maps of the employed attention mechanism, focusing on the abnormal regions, is presented using explainable artificial intelligence (XAI) techniques. The proposed framework provided better COVID-19 prediction results, outperforming other recent deep learning models on the same dataset.
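A minimal Keras sketch of the deep-feature-concatenation idea follows, assuming ImageNet-pretrained DenseNet201, VGG16, and InceptionV3 backbones whose pooled features are concatenated and passed through a multi-head self-attention layer; the input size, head count, and key dimension are illustrative, not the paper's exact settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet201, VGG16, InceptionV3

# Three ImageNet-pretrained backbones act as feature extractors; their pooled
# outputs are concatenated and refined by multi-head self-attention.
backbones = [
    DenseNet201(include_top=False, weights="imagenet", pooling="avg"),
    VGG16(include_top=False, weights="imagenet", pooling="avg"),
    InceptionV3(include_top=False, weights="imagenet", pooling="avg"),
]

inp = layers.Input(shape=(224, 224, 3))
feats = [b(inp) for b in backbones]                  # per-backbone feature vectors
x = layers.Concatenate()(feats)                      # deep feature concatenation
x = layers.Reshape((1, -1))(x)                       # give the vector a sequence axis
x = layers.MultiHeadAttention(num_heads=4, key_dim=64)(x, x)   # self-attention
x = layers.Flatten()(x)
out = layers.Dense(2, activation="softmax")(x)       # binary scenario: COVID vs. normal

model = Model(inp, out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```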

6.
Diagnostics (Basel) ; 13(2)2023 Jan 13.
Article in English | MEDLINE | ID: mdl-36673109

ABSTRACT

Breast cancer is one of the leading causes of death among women worldwide. Histopathological images have proven to be a reliable way to detect breast cancer over time; however, physical examination of these images can be time-consuming and resource-intensive. To lessen the burden on pathologists and save lives, there is a need for an automated system to effectively analyze the images and predict the diagnosis. In this paper, a lightweight separable convolution network (LWSC) is proposed to automatically learn and classify breast cancer from histopathological images. The proposed architecture addresses the problem of low image quality by extracting the visually trainable features of the histopathological image using a contrast enhancement algorithm. The LWSC model implements separable convolution layers stacked in parallel with multiple filters of different sizes in order to obtain wider receptive fields. Additionally, factorization and bottleneck convolution layers are introduced to reduce the model dimension. These methods substantially reduce the number of trainable parameters as well as the computational cost, while offering greater non-linear expressive capacity than plain convolutional networks. The evaluation results show that the proposed LWSC model performs optimally, obtaining 97.23% accuracy, 97.71% sensitivity, and 97.93% specificity on multi-class categories. Compared with other models, the proposed LWSC obtains comparable performance.
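The sketch below shows one possible LWSC-style block in Keras, assuming parallel separable convolutions with different kernel sizes and 1x1 bottleneck layers to keep the parameter count low; the filter counts, kernel sizes, and the eight-class head are assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

# One block: 1x1 bottleneck, parallel separable convolutions with different
# kernel sizes (wider receptive fields), then a 1x1 mix of the branches.
def lwsc_block(x, filters):
    x = layers.Conv2D(filters // 2, 1, activation="relu")(x)       # bottleneck
    branches = [
        layers.SeparableConv2D(filters // 2, k, padding="same",
                               activation="relu")(x)
        for k in (3, 5, 7)
    ]
    x = layers.Concatenate()(branches)
    return layers.Conv2D(filters, 1, activation="relu")(x)          # mix the branches

inp = layers.Input(shape=(128, 128, 3))
x = lwsc_block(inp, 64)
x = layers.GlobalAveragePooling2D()(x)
out = layers.Dense(8, activation="softmax")(x)                       # assumed class count
model = tf.keras.Model(inp, out)
model.summary()
```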

7.
J Adv Res ; 48: 191-211, 2023 06.
Article in English | MEDLINE | ID: mdl-36084812

ABSTRACT

INTRODUCTION: Pneumonia is a microbial infection that causes chronic inflammation of the human lung cells. Chest X-ray imaging is the most common screening approach used for detecting pneumonia in its early stages. Because chest X-ray images are often blurry with low illumination, a strong feature extraction approach is required for reliable identification performance. OBJECTIVES: A new hybrid explainable deep learning framework is proposed for accurate pneumonia identification from chest X-ray images. METHODS: The proposed hybrid workflow is developed by fusing the capabilities of ensemble convolutional networks and a Transformer Encoder mechanism. The ensemble learning backbone is used to extract strong features from the raw input X-ray images in two different scenarios: ensemble A (DenseNet201, VGG16, and GoogleNet) and ensemble B (DenseNet201, InceptionResNetV2, and Xception). The Transformer Encoder is built on the self-attention mechanism with a multilayer perceptron (MLP) for accurate disease identification. Visual explainable saliency maps are derived to emphasize the crucial predicted regions in the input X-ray images. End-to-end training of the proposed deep learning models is performed for binary and multi-class classification scenarios. RESULTS: The proposed hybrid deep learning model recorded 99.21% overall accuracy and F1-score for the binary classification task, and 98.19% accuracy with a 97.29% F1-score for the multi-class task. For the ensemble binary identification scenario, ensemble A recorded 97.22% accuracy and a 97.14% F1-score, while ensemble B achieved 96.44% for both accuracy and F1-score. For the ensemble multi-class identification scenario, ensemble A recorded 97.2% accuracy and a 95.8% F1-score, while ensemble B recorded 96.4% accuracy and a 94.9% F1-score. CONCLUSION: The proposed hybrid deep learning framework provides promising explainable identification performance compared with individual models, ensemble models, and recent AI models in the literature. The code is available at: https://github.com/chiagoziemchima/Pneumonia_Identificaton.
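As a rough sketch of the Transformer-Encoder head described above, the Keras snippet below applies layer normalization, multi-head self-attention, and an MLP, each with a residual connection, to a short sequence of ensemble features; the token layout, dimensions, and head count are assumptions made for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Standard transformer-encoder block: LayerNorm + MHSA + MLP, with residuals.
def transformer_encoder(tokens, dim=256, heads=4, mlp_dim=512):
    x = layers.LayerNormalization()(tokens)
    x = layers.MultiHeadAttention(num_heads=heads, key_dim=dim // heads)(x, x)
    tokens = layers.Add()([tokens, x])                 # residual 1
    x = layers.LayerNormalization()(tokens)
    x = layers.Dense(mlp_dim, activation="gelu")(x)
    x = layers.Dense(dim)(x)
    return layers.Add()([tokens, x])                   # residual 2

# Features from the CNN ensemble are treated as a short token sequence.
feature_tokens = layers.Input(shape=(3, 256))          # 3 backbones x 256-d each (assumed)
x = transformer_encoder(feature_tokens)
x = layers.GlobalAveragePooling1D()(x)
out = layers.Dense(2, activation="softmax")(x)         # binary: pneumonia vs. normal
model = tf.keras.Model(feature_tokens, out)
```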


Subjects
Pneumonia, Humans, X-Rays, Pneumonia/diagnostic imaging, Inflammation, Thorax, Electric Power Supplies
8.
Comput Biol Med ; 151(Pt A): 106324, 2022 12.
Article in English | MEDLINE | ID: mdl-36423531

ABSTRACT

Numerous machine learning and image processing algorithms, most recently deep learning, allow the recognition and classification of COVID-19 disease in medical images. However, feature extraction, or the semantic gap between the low-level visual information captured by imaging modalities and high-level semantics, is the fundamental shortcoming of these techniques. Moreover, several techniques rely only on first-order feature extraction from the chest X-ray, making the resulting models less accurate and robust. This study presents Dual_Pachi: an attention-based dual-path framework with intermediate second-order pooling for more accurate and robust chest X-ray feature extraction for COVID-19 detection. Dual_Pachi consists of four main building blocks. Block one converts the received chest X-ray image to CIE LAB coordinates (the L and AB channels, which are separated at the first three layers of a modified Inception V3 architecture). Block two further exploits the global features extracted from block one via global second-order pooling, while block three focuses on the low-level visual information and the high-level semantics of the chest X-ray features using multi-head self-attention and an MLP layer, without sacrificing performance. Finally, the fourth block is the classification block, where classification is done using fully connected layers and softmax activation. Dual_Pachi is designed and trained in an end-to-end manner. According to the results, Dual_Pachi outperforms traditional deep learning models and other state-of-the-art approaches described in the literature, with an accuracy of 0.96656 (Data_A) and 0.97867 (Data_B) for the full Dual_Pachi approach and an accuracy of 0.95987 (Data_A) and 0.968 (Data_B) for the Dual_Pachi model without the attention block. A Grad-CAM-based visualization is also provided to highlight where the applied attention mechanism is concentrated.
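The first Dual_Pachi step, converting the input to CIE LAB and splitting it into an achromatic L path and a chromatic AB path, can be sketched as follows; the downstream modified Inception V3 branches are not reproduced, and the image here is a random stand-in.

```python
import numpy as np
from skimage import color

# Convert an RGB chest X-ray to CIE LAB and split it into the two Dual_Pachi
# input paths. A random array stands in for a loaded image.
rgb = np.random.rand(299, 299, 3)
lab = color.rgb2lab(rgb)

l_path = lab[..., :1]                      # L channel: texture and edge information
ab_path = lab[..., 1:]                     # AB channels: colour variation
print(l_path.shape, ab_path.shape)         # (299, 299, 1) (299, 299, 2)
```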


Subjects
COVID-19, Humans, COVID-19/diagnostic imaging, X-Rays, Thorax, Machine Learning, Algorithms
9.
Bioengineering (Basel) ; 9(11)2022 Nov 18.
Article in English | MEDLINE | ID: mdl-36421110

ABSTRACT

According to research, classifiers and detectors are less accurate when images are blurry, have low contrast, or have other flaws, which raises questions about a machine learning model's ability to recognize items effectively. The chest X-ray has proven to be the preferred modality for medical imaging, as it contains much information about a patient; its interpretation, nevertheless, is quite difficult. The goal of this research is to construct a reliable deep-learning model capable of producing high classification accuracy on chest X-ray images for lung diseases. To enable a thorough study of the chest X-ray image, the suggested framework first derives richer features using an ensemble technique, and then applies global second-order pooling to derive higher-level global features of the images. Furthermore, the images are separated into patches with position embedding before the patches are analyzed individually via a vision transformer approach. The proposed model yielded 96.01% sensitivity, 96.20% precision, and 98.00% accuracy on the COVID-19 Radiography Dataset, while achieving 97.84% accuracy, 96.76% sensitivity, and 96.80% precision on the Covid-ChestX-ray-15k dataset. The experimental findings reveal that the presented models outperform traditional deep learning models and other state-of-the-art approaches in the literature.
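Global second-order pooling, which recurs in this and related entries, can be illustrated with the short NumPy sketch below: instead of averaging each channel, the channel-wise covariance of a feature map is used as a global descriptor. The feature-map size is a placeholder.

```python
import numpy as np

def global_second_order_pooling(feature_map):
    """feature_map: (H, W, C) array -> (C, C) covariance descriptor."""
    flat = feature_map.reshape(-1, feature_map.shape[-1]).astype(np.float64)
    flat -= flat.mean(axis=0, keepdims=True)        # center each channel
    return flat.T @ flat / flat.shape[0]            # pairwise channel interactions

features = np.random.rand(7, 7, 512)                # stand-in for a CNN feature map
descriptor = global_second_order_pooling(features)
print(descriptor.shape)                             # (512, 512)
```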

10.
Sci Rep ; 12(1): 9644, 2022 06 10.
Article in English | MEDLINE | ID: mdl-35688900

ABSTRACT

Solar energy-based technologies have developed rapidly in recent years; however, the inability to appropriately estimate solar energy resources remains a major drawback for these technologies. In this study, eight different artificial intelligence (AI) models, namely a convolutional neural network (CNN), an artificial neural network (ANN), a long short-term memory recurrent model (LSTM), the eXtreme gradient boosting algorithm (XGBoost), multiple linear regression (MLR), polynomial regression (PLR), decision tree regression (DTR), and random forest regression (RFR), are designed and compared for solar irradiance prediction. Additionally, two hybrid deep neural network models (ANN-CNN and CNN-LSTM-ANN) are developed for the same task. This study is novel in that each of the AI models was used to estimate solar irradiance at different timesteps (hourly, per minute, and daily average). Moreover, different solar irradiance datasets (from six countries in Africa) measured with various instruments were used to train and test the AI models. With the aim of checking whether there is a universal AI model for solar irradiance estimation in developing countries, the results show that different AI models are suitable for different solar irradiance estimation tasks. However, XGBoost has a consistently high performance across all the case studies and is the best model for 10 of the 13 case studies considered in this paper. The results also show that the prediction of hourly solar irradiance is more accurate for the models than the daily-average and per-minute timesteps. The specific performance of each model for all the case studies is explicated in the paper.
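Since XGBoost is reported as the most consistent model, here is a hedged sketch of an hourly-timestep irradiance regression with XGBRegressor; the features and data are synthetic placeholders rather than the study's measured datasets, and the hyperparameters are assumptions.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic hourly features: hour of day, temperature, relative humidity.
rng = np.random.default_rng(0)
n = 5000
hour = rng.integers(0, 24, n)
temp = 15 + 10 * rng.random(n)
humidity = rng.random(n)
X = np.column_stack([hour, temp, humidity])
y = np.clip(800 * np.sin(np.pi * hour / 24) - 200 * humidity + rng.normal(0, 30, n), 0, None)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05).fit(X_tr, y_tr)
print("MAE (W/m^2):", mean_absolute_error(y_te, model.predict(X_te)))
```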


Subjects
Artificial Intelligence, Solar Energy, Sunlight, Algorithms, Neural Networks, Computer, Time Factors
11.
Diagnostics (Basel) ; 12(5)2022 May 05.
Article in English | MEDLINE | ID: mdl-35626307

ABSTRACT

INTRODUCTION AND BACKGROUND: Despite fast developments in the medical field, histological diagnosis is still regarded as the benchmark in cancer diagnosis. However, the input-image feature extraction used to determine the severity of cancer at various magnifications is harrowing, since manual procedures are biased, time-consuming, labor-intensive, and error-prone. Current state-of-the-art deep learning approaches for breast histopathology image classification take features from entire images (generic features). Thus, they are likely to overlook essential image features in favor of unnecessary ones, resulting in an incorrect diagnosis of breast histopathology imaging and leading to mortality. METHODS: This discrepancy prompted us to develop DEEP_Pachi for classifying breast histopathology images at various magnifications. The suggested DEEP_Pachi collects the global and regional features that are essential for effective breast histopathology image classification. The proposed model backbone is an ensemble of the DenseNet201 and VGG16 architectures. The ensemble model extracts global features (generic image information), whereas DEEP_Pachi extracts spatial information (regions of interest). The proposed model was evaluated on publicly available datasets: the BreakHis and ICIAR 2018 Challenge datasets. RESULTS: A detailed evaluation of the proposed model's accuracy, sensitivity, precision, specificity, and F1-score revealed the usefulness of the backbone model and the DEEP_Pachi model for image classification. The suggested technique outperformed state-of-the-art classifiers, achieving an accuracy of 1.0 for the benign class and 0.99 for the malignant class at all magnifications of the BreakHis dataset, and an accuracy of 1.0 on the ICIAR 2018 Challenge dataset. CONCLUSIONS: The acquired findings were significantly resilient and proved helpful for the suggested system to assist experts at large medical institutions, supporting early breast cancer diagnosis and a reduction in the death rate.
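The metrics named in the RESULTS section can be computed from a binary confusion matrix as in the short sketch below; the labels are synthetic placeholders, not BreakHis or ICIAR 2018 predictions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])    # 1 = malignant, 0 = benign (toy labels)
y_pred = np.array([0, 0, 1, 1, 0, 0, 1, 1, 1, 1])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                           # recall on the malignant class
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)
f1          = 2 * precision * sensitivity / (precision + sensitivity)
print(accuracy, sensitivity, specificity, precision, f1)
```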

12.
Diagnostics (Basel) ; 12(2)2022 Jan 27.
Article in English | MEDLINE | ID: mdl-35204418

ABSTRACT

Pneumonia is a prevalent, severe respiratory infection that affects the distal airways and alveoli. Across the globe, it is a serious public health issue that has caused a high mortality rate among children below five years old and elderly citizens with pre-existing chronic conditions. Pneumonia can be caused by a wide range of microorganisms, including viruses, fungi, and bacteria, which vary greatly across the globe. The spread of the disease has drawn the attention of computer-aided diagnosis (CAD) research. This paper presents a multi-channel image processing scheme to automatically extract features and identify pneumonia from chest X-ray (CXR) images. The proposed approach addresses the problem of low image quality when identifying pneumonia in CXR images. Three channels of CXR images, namely the Local Binary Pattern (LBP), Contrast Enhanced Canny Edge Detection (CECED), and Contrast Limited Adaptive Histogram Equalization (CLAHE) channels, are processed by deep neural networks. CXR-related features of the LBP images are extracted using a shallow CNN, features of the CLAHE CXR images are extracted by a pre-trained Inception-V3, and the features of the CECED CXR images are extracted using a pre-trained MobileNet-V3. The final feature weights of the three channels are concatenated, and softmax classification is used to determine the final identification result. The proposed network can accurately classify pneumonia according to the experimental results. Tested on a publicly available dataset, the proposed method reports an accuracy of 98.3%, a sensitivity of 98.9%, and a specificity of 99.2%. Compared with single models and state-of-the-art models, our proposed network achieves comparable performance.
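A minimal sketch of the three input channels described above (LBP, contrast-enhanced Canny edges, and CLAHE) follows, using scikit-image; the parameter values and the sample image are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from skimage import data, exposure, feature

cxr = data.camera() / 255.0                 # stand-in for a grayscale chest X-ray

lbp = feature.local_binary_pattern(cxr, P=8, R=1, method="uniform")   # LBP channel
clahe = exposure.equalize_adapthist(cxr, clip_limit=0.02)             # CLAHE channel
ceced = feature.canny(clahe, sigma=2.0).astype(float)                 # Canny on the enhanced image

channels = np.stack([lbp / lbp.max(), ceced, clahe], axis=-1)
print(channels.shape)                       # each channel would feed its own CNN path
```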

13.
Comput Biol Med ; 150: 106195, 2022 11.
Article in English | MEDLINE | ID: mdl-37859288

ABSTRACT

According to the World Health Organization, an estimated more than five million infections and 355,000 deaths have been recorded worldwide since the emergence of the coronavirus disease (COVID-19). Various researchers have developed interesting and effective deep learning frameworks to tackle this disease. However, poor feature extraction from chest X-ray images and the high computational cost of the available models hinder an accurate and fast COVID-19 detection framework. Thus, the major purpose of this study is to offer an approach for extracting COVID-19 features from chest X-rays that is accurate, efficient, and less computationally expensive than earlier work. To achieve this goal, we explored the Inception V3 deep artificial neural network. This study proposes LCSB-Inception: a two-path (L and AB channel) Inception V3 network split along the first three convolutional layers. The RGB input image is first transformed to CIE LAB coordinates (the L channel, which is aimed at learning the textural and edge features of the chest X-ray, and the AB channels, which are aimed at learning the color variations of the chest X-ray images). The filters of the achromatic L channel and the AB channels are split 50%L-50%AB. This method saves between one-third and one-half of the parameters in the divided branches. We further introduce global second-order pooling at the last two convolutional blocks for more robust image feature extraction than conventional max-pooling. The detection accuracy of LCSB-Inception is further improved by applying the Contrast Limited Adaptive Histogram Equalization (CLAHE) image enhancement technique to the input images before feeding them to the network. The proposed LCSB-Inception network is tested with two loss functions (categorical smooth loss and categorical cross-entropy) and two learning rates, and Accuracy, Precision, Sensitivity, Specificity, F1-Score, and AUC score are used for evaluation on the chestX-ray-15k (Data_1) and COVID-19 Radiography (Data_2) datasets. The proposed models produced an acceptable outcome, with an accuracy of 0.97867 (Data_1) and 0.98199 (Data_2) according to the experimental findings. In terms of COVID-19 identification, the suggested models outperform conventional deep learning models and other state-of-the-art techniques presented in the literature.
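Assuming that "categorical smooth loss" refers to categorical cross-entropy with label smoothing, the two loss functions compared above could be set up as in the sketch below; the smoothing factor is an assumption.

```python
import tensorflow as tf

# Plain categorical cross-entropy vs. a label-smoothed variant.
plain_ce  = tf.keras.losses.CategoricalCrossentropy()
smooth_ce = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)

y_true = tf.constant([[0.0, 1.0]])
y_pred = tf.constant([[0.2, 0.8]])
print(float(plain_ce(y_true, y_pred)), float(smooth_ce(y_true, y_pred)))
```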


Subjects
COVID-19, Deep Learning, Humans, COVID-19/diagnostic imaging, X-Rays, SARS-CoV-2, Neural Networks, Computer
14.
Diagnostics (Basel) ; 13(1)2022 Dec 28.
Article in English | MEDLINE | ID: mdl-36611379

ABSTRACT

The development of automatic monitoring and diagnosis systems for cardiac patients over the internet has been facilitated by recent advances in wearable electrocardiograph (ECG) sensor devices, which require patient-specific approaches. Premature ventricular contraction (PVC) is a common chronic cardiovascular condition that can lead to potentially fatal outcomes. Therefore, for the diagnosis of likely heart failure, precise PVC detection from ECGs is crucial. In clinical settings, cardiologists typically use long-term ECGs to identify PVCs, but assessing long-term ECGs appropriately demands considerable time and effort and is cumbersome. To address these issues, we investigated a deep learning method that uses a pre-trained deep residual network, ResNet-18, to identify PVCs automatically via transfer learning. Here, features are extracted automatically by the inner layers of the network, in contrast to hand-crafted feature extraction methods, and the transfer learning mechanism handles the large volume of training data a deep model would otherwise require. The pre-trained model is evaluated on the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) Arrhythmia and Institute of Cardiological Technics (INCART) datasets. First, we used the Pan-Tompkins algorithm to segment 44,103 normal and 6423 PVC beats, as well as 106,239 normal and 9987 PVC beats, from the MIT-BIH Arrhythmia and INCART datasets, respectively. The pre-trained model used the segmented beats, converted into 2D (two-dimensional) images, as input. The method is optimized with weighted random sampling, on-the-fly augmentation, the Adam optimizer, and a callback feature. The results demonstrate satisfactory performance without any complex pre-processing, hand-crafted feature extraction, or model design complexity. Using LOSOCV (leave-one-subject-out cross-validation), the accuracies obtained on MIT-BIH and INCART are 99.93% and 99.77%, respectively, surpassing state-of-the-art methods for PVC recognition on unseen data. This demonstrates the efficacy and generalizability of the proposed method on imbalanced datasets. Because no device-specific (patient-specific) information was used at the evaluation stage on the target datasets, the method can serve as a general approach for situations in which ECG signals are obtained from different patients using a variety of smart sensor devices.
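A minimal sketch of the transfer-learning setup follows: a pre-trained ResNet-18 whose final layer is replaced with a two-class head (normal vs. PVC) and trained with Adam. Beat segmentation, image conversion, weighted sampling, and augmentation are omitted, and the weight specifier assumes a recent torchvision release.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pre-trained ResNet-18 with its final layer replaced by a two-class head.
model = models.resnet18(weights="IMAGENET1K_V1")     # assumes torchvision >= 0.13
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # Adam, as in the paper
criterion = nn.CrossEntropyLoss()

beat_images = torch.randn(8, 3, 224, 224)    # stand-in for beats rendered as 2D images
labels = torch.randint(0, 2, (8,))
loss = criterion(model(beat_images), labels)
loss.backward()
optimizer.step()
```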
