Results 1 - 13 of 13
1.
Network ; : 1-33, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38626055

ABSTRACT

Aiming at early detection and accurate prediction of cardiovascular disease (CVD) to reduce mortality rates, this study develops an intelligent predictive system to identify individuals at risk of CVD. The primary objective of the proposed system is to combine deep learning models with advanced data mining techniques to facilitate informed decision-making and precise CVD prediction. The approach involves several essential steps, including preprocessing of the acquired data, optimized feature selection, and disease classification, all aimed at enhancing the effectiveness of the system. The chosen optimal features are fed as input to the disease classification models, including several Machine Learning (ML) algorithms, for improved performance in CVD classification. The experiments were implemented in Python, and evaluation metrics such as accuracy, sensitivity, and F1-score were employed to assess the models' performance. The ML classifiers (Extra Trees (ET), Random Forest (RF), AdaBoost, and XGBoost) achieved high accuracies of 94.35%, 97.87%, 96.44%, and 99.00%, respectively, on the test set, while the proposed CardioVitalNet (CVN) achieved 87.45% accuracy. These results offer valuable insights into selecting models for medical data analysis, ultimately enabling more accurate diagnoses and predictions.
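The model-comparison workflow the abstract describes can be sketched with scikit-learn on synthetic data (the dataset, features, and hyperparameters below are placeholders, and XGBoost is omitted since it lives outside scikit-learn):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              RandomForestClassifier)
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# synthetic stand-in for the CVD feature table (13 placeholder features)
X, y = make_classification(n_samples=500, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

results = {}
for name, clf in [("ET", ExtraTreesClassifier(random_state=0)),
                  ("RF", RandomForestClassifier(random_state=0)),
                  ("AdaBoost", AdaBoostClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)                      # train on the selected features
    pred = clf.predict(X_te)
    results[name] = (accuracy_score(y_te, pred), f1_score(y_te, pred))
```

The same loop extends to any additional classifier with a `fit`/`predict` interface, which is how such comparisons are typically run.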

2.
Network ; : 1-38, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38511557

ABSTRACT

Interpretable machine learning models are instrumental in disease diagnosis and clinical decision-making, shedding light on relevant features. Boruta, SHAP (SHapley Additive exPlanations), and BorutaShap were employed for feature selection, each contributing to the identification of crucial features. The selected features were then used to train six machine learning algorithms (LR, SVM, ETC, AdaBoost, RF, and LGBM) on diverse medical datasets obtained from public sources after rigorous preprocessing. The performance of each feature selection technique was evaluated across the ML models using accuracy, precision, recall, and F1-score. Among the techniques, SHAP showed superior performance, achieving average accuracies of 80.17%, 85.13%, 90.00%, and 99.55% on the diabetes, cardiovascular, statlog, and thyroid disease datasets, respectively. Notably, LGBM emerged as the most effective algorithm, with an average accuracy of 91.00% for most disease states. Moreover, SHAP enhanced the interpretability of the models, providing valuable insights into the underlying mechanisms driving disease diagnosis. This comprehensive study contributes significant insights into feature selection techniques and machine learning algorithms for disease diagnosis, benefiting researchers and practitioners in the medical field. Further exploration of feature selection methods and algorithms holds promise for advancing disease diagnosis methodologies, paving the way for more accurate and interpretable diagnostic models.
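A minimal sketch of the select-then-retrain workflow described above, with permutation importance standing in for the SHAP/Boruta rankings (the synthetic dataset and the choice of five features are assumptions for illustration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# synthetic stand-in: 20 features, only 5 of them informative
X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

full = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)
imp = permutation_importance(full, X_te, y_te, n_repeats=5, random_state=1)
top = np.argsort(imp.importances_mean)[::-1][:5]    # keep 5 highest-ranked

# retrain on the selected subset and compare, as the study does per model
reduced = RandomForestClassifier(random_state=1).fit(X_tr[:, top], y_tr)
acc_full = full.score(X_te, y_te)
acc_reduced = reduced.score(X_te[:, top], y_te)
```

SHAP proper would rank features by their average marginal (Shapley) contribution rather than by the accuracy drop under permutation, but the downstream select-and-retrain loop is the same.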

3.
Diagnostics (Basel) ; 13(2)2023 Jan 13.
Article in English | MEDLINE | ID: mdl-36673109

ABSTRACT

Breast cancer is one of the leading causes of death among women worldwide. Histopathological images have proven to be a reliable way to detect breast cancer, but physical examination of them is time-consuming and resource-intensive. To lessen the burden on pathologists and save lives, an automated system is needed to effectively analyze the images and predict the diagnosis. In this paper, a lightweight separable convolution network (LWSC) is proposed to automatically learn and classify breast cancer from histopathological images. The proposed architecture addresses the problem of low image quality by extracting the visual trainable features of the histopathological image with a contrast enhancement algorithm. The LWSC model implements separable convolution layers stacked in parallel with multiple filters of different sizes in order to obtain wider receptive fields. Additionally, factorization and bottleneck convolution layers are introduced to reduce the model dimension. These methods substantially reduce the number of trainable parameters and the computational cost while retaining greater non-linear expressive capacity than plain convolutional networks. The evaluation results show that the proposed LWSC model performs well, obtaining 97.23% accuracy, 97.71% sensitivity, and 97.93% specificity on multi-class categories. Compared with other models, the proposed LWSC obtains comparable performance.
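The parameter savings that separable convolutions provide, which the abstract credits for the reduced computational cost, can be checked with simple counting (kernel size and channel counts below are illustrative, not LWSC's actual configuration):

```python
def standard_conv_params(k, c_in, c_out):
    # one k x k kernel per (input channel, output channel) pair
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # depthwise: one k x k kernel per input channel,
    # followed by a 1 x 1 pointwise mixing layer
    return k * k * c_in + c_in * c_out

# e.g. a 3x3 layer mapping 64 channels to 128
standard = standard_conv_params(3, 64, 128)    # 73728 weights
separable = separable_conv_params(3, 64, 128)  # 8768 weights
```

For this configuration the separable form needs roughly an eighth of the weights, which is the source of the "lightweight" claim.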

4.
Diagnostics (Basel) ; 12(11)2022 Oct 22.
Article in English | MEDLINE | ID: mdl-36359413

ABSTRACT

The COVID-19 pandemic has had a significant impact on many lives and the economies of many countries since late December 2019. Early detection with high accuracy is essential to help break the chain of transmission. Several radiological methodologies, such as CT scan and chest X-ray, have been employed in diagnosing and monitoring COVID-19 disease. Still, these methodologies are time-consuming and require trial and error. Machine learning techniques are currently being applied by several studies to deal with COVID-19. This study exploits the latent embeddings of variational autoencoders combined with ensemble techniques to propose three effective EVAE-Net models to detect COVID-19 disease. Two encoders are trained on chest X-ray images to generate two feature maps. The feature maps are concatenated and passed to either a combined or individual reparameterization phase to generate latent embeddings by sampling from a distribution. The latent embeddings are concatenated and passed to a classification head for classification. The COVID-19 Radiography Dataset from Kaggle is the source of chest X-ray images. The performances of the three models are evaluated. The proposed model shows satisfactory performance, with the best model achieving 99.19% and 98.66% accuracy on four classes and three classes, respectively.
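The reparameterization step the abstract describes, sampling latent embeddings from a distribution parameterized by the encoder outputs, can be sketched in NumPy (shapes and the split into mean/log-variance are illustrative, not the EVAE-Net architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps with eps ~ N(0, I): z stays a random sample,
    # while gradients can flow through mu and log_var during training
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# two hypothetical encoder outputs, concatenated as in the abstract
f1 = rng.standard_normal((4, 16))
f2 = rng.standard_normal((4, 16))
features = np.concatenate([f1, f2], axis=1)

# hypothetical split of the concatenated features into mu and log-variance
mu, log_var = features[:, :16], features[:, 16:]
z = reparameterize(mu, log_var, rng)   # latent embeddings for the classifier head
```

In the paper the "combined" and "individual" variants differ in whether the two encoders share this step or each run it separately before the final concatenation.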

5.
Diagnostics (Basel) ; 12(10)2022 Oct 13.
Article in English | MEDLINE | ID: mdl-36292173

ABSTRACT

Today, Magnetic Resonance Imaging (MRI) is a prominent technique in medicine: it produces a wide range of tissue contrasts across imaging modalities and is frequently employed by medical professionals to identify brain malignancies. Because brain tumors are deadly, early detection increases the likelihood that the patient will receive appropriate medical care, leading either to full elimination of the tumor or to prolongation of the patient's life. However, manually examining the enormous volume of MRI images to identify a brain tumor is extremely time-consuming and requires a trained medical expert to detect and diagnose brain cancer across multiple MRI modalities. This underlying issue creates a growing need to automate the detection and diagnosis of brain tumors without human intervention. Another major concern most research articles do not consider is the low quality of MRI images, which can be attributed to noise and artifacts. This article applies a Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm to handle low-quality MRI images by suppressing noisy elements and enhancing the visible trainable features of the image. The enhanced image is then fed to the proposed PCNN to learn features and classify the tumor with a sigmoid classifier. A publicly available dataset is collected and used to train the model, and different optimizers, dropout values, and learning rates are explored in the course of this study. The proposed PCNN with the CLAHE algorithm achieved an accuracy of 98.7%, sensitivity of 99.7%, and specificity of 97.4%.
In comparison with other state-of-the-art brain tumor methods and pre-trained deep transfer learning models, the proposed PCNN model obtained satisfactory performance.
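Contrast enhancement of the kind CLAHE performs can be illustrated with plain global histogram equalization in NumPy; CLAHE additionally tiles the image and clips the histogram, which this sketch omits:

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image.
    CLAHE applies the same idea per tile, with a clip limit on the histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # first occupied intensity level
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255)
    return lut.clip(0, 255).astype(np.uint8)[img]

# a low-contrast stand-in image: intensities squeezed into [100, 150)
img = np.tile(np.arange(100, 150, dtype=np.uint8), (10, 1))
out = hist_equalize(img)   # intensities stretched across the full 0-255 range
```

The lookup-table step is what "enhances the visible trainable features": intensities that were clustered in a narrow band are spread over the full range before the network sees them.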

6.
Diagnostics (Basel) ; 12(7)2022 Jul 09.
Article in English | MEDLINE | ID: mdl-35885573

ABSTRACT

Invasive carcinoma of no special type (IC-NST) is known to be one of the most prevalent kinds of breast cancer, hence the growing research interest in studying automated systems that can detect the presence of breast tumors and appropriately classify them into subtypes. Machine learning (ML) and, more specifically, deep learning (DL) techniques have been used to approach this problem. However, such techniques usually require massive amounts of data to obtain competitive results. This requirement makes their application in specific areas such as health problematic as privacy concerns regarding the release of patients' data publicly result in a limited number of publicly available datasets for the research community. This paper proposes an approach that leverages federated learning (FL) to securely train mathematical models over multiple clients with local IC-NST images partitioned from the breast histopathology image (BHI) dataset to obtain a global model. First, we used residual neural networks for automatic feature extraction. Then, we proposed a second network consisting of Gabor kernels to extract another set of features from the IC-NST dataset. After that, we performed a late fusion of the two sets of features and passed the output through a custom classifier. Experiments were conducted for the federated learning (FL) and centralized learning (CL) scenarios, and the results were compared. Competitive results were obtained, indicating the positive prospects of adopting FL for IC-NST detection. Additionally, fusing the Gabor features with the residual neural network features resulted in the best performance in terms of accuracy, F1 score, and area under the receiver operation curve (AUC-ROC). The models show good generalization by performing well on another domain dataset, the breast cancer histopathological (BreakHis) image dataset. Our method also outperformed other methods from the literature.
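A Gabor kernel of the kind the second network stacks, followed by the late-fusion concatenation step, can be sketched in NumPy (kernel parameters and feature sizes are illustrative, not the paper's):

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma, gamma=0.5):
    """Real part of a Gabor kernel: a sinusoid at angle theta modulated
    by a Gaussian envelope; good at picking up oriented texture."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

# late fusion: concatenate Gabor-derived and CNN-derived feature vectors
gabor_feats = np.abs(gabor_kernel(7, 0.0, 4.0, 2.0)).ravel()  # stand-in features
cnn_feats = np.zeros(16)                       # hypothetical ResNet features
fused = np.concatenate([gabor_feats, cnn_feats])
```

Late fusion here means the two feature extractors run independently and their outputs are only joined at the classifier input, exactly as the abstract describes.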

7.
Healthcare (Basel) ; 10(3)2022 Feb 23.
Article in English | MEDLINE | ID: mdl-35326900

ABSTRACT

Since it was first reported, coronavirus disease 2019, also known as COVID-19, has spread expeditiously around the globe. COVID-19 must be diagnosed as soon as possible in order to control the disease and provide proper care to patients. The chest X-ray (CXR) has been identified as a useful diagnostic tool, but the disease outbreak has put a lot of pressure on radiologists to read the scans, which could give rise to fatigue-related misdiagnosis. Reliable automatic classification algorithms can be extremely beneficial; however, they typically depend on a large amount of COVID-19 training data, which is difficult to obtain quickly. Therefore, we propose a novel neurowavelet capsule network for COVID-19 classification. First, we introduce a multi-resolution analysis of a discrete wavelet transform to filter noisy and inconsistent information from the CXR data in order to improve the feature-extraction robustness of the network. Secondly, the discrete wavelet transform of the multi-resolution analysis also performs a sub-sampling operation that minimizes the loss of spatial details, thereby enhancing the overall classification performance. We examined the proposed model on a public-sourced dataset of pneumonia-related illnesses, including COVID-19 confirmed cases and healthy CXR images. The proposed method achieves an accuracy of 99.6%, sensitivity of 99.2%, specificity of 99.1% and precision of 99.7%. According to the experimental results, our approach achieves state-of-the-art performance useful for COVID-19 screening. This latest paradigm will contribute significantly to the battle against COVID-19 and other diseases.
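One level of a 2-D discrete wavelet transform, the building block behind the multi-resolution analysis described above, can be sketched with Haar filters in NumPy (the paper's actual wavelet and network integration may differ):

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2-D Haar DWT: returns the approximation (LL) and the
    horizontal, vertical, and diagonal detail sub-bands, each half-size."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low-pass: smoothed, sub-sampled image
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh
```

The LL band is the noise-robust, down-sampled input the network can keep processing, while the detail bands isolate the high-frequency content where noise tends to live.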

8.
Diagnostics (Basel) ; 12(3)2022 Mar 15.
Article in English | MEDLINE | ID: mdl-35328271

ABSTRACT

Coronavirus disease has spread rapidly around the globe since early January 2020. With millions of deaths, an automated system is essential to aid clinical diagnosis and reduce the time spent on image analysis. This article presents a generative adversarial network (GAN)-based deep learning application for precisely regaining high-resolution (HR) CXR images from their low-resolution (LR) counterparts for COVID-19 identification. Using the building blocks of GAN, we introduce a modified enhanced super-resolution generative adversarial network plus (MESRGAN+) that implements a connected nonlinear mapping from noise-contaminated low-resolution input images to deblurred and denoised HR images. As opposed to the current trend of growing network complexity and computational cost, we incorporate an enhanced VGG19 fine-tuned twin network with a wavelet pooling strategy in order to extract distinct features for COVID-19 identification. We demonstrate the proposed model on a publicly available dataset of 11,920 chest X-ray images, with 2980 samples each of COVID-19, healthy, viral, and bacterial cases. The proposed model performs efficiently on both the binary and the four-class classification tasks. It achieves accuracy of 98.8%, precision of 98.6%, sensitivity of 97.5%, specificity of 98.9%, an F1 score of 97.8% and ROC AUC of 98.8% for the multi-class task, while, for the binary class, the model achieves accuracy of 99.7%, precision of 98.9%, sensitivity of 98.7%, specificity of 99.3%, an F1 score of 98.2% and ROC AUC of 99.7%. According to the experimental results, our method obtains state-of-the-art (SOTA) performance, which is helpful for COVID-19 screening. This new conceptual framework is proposed to play an influential role in addressing the issues facing COVID-19 examination and other diseases.
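The super-resolution problem setup, degrading an HR image to simulate the LR input and measuring how closely a reconstruction matches the original, can be sketched in NumPy; PSNR here is a generic fidelity gauge, not a metric the paper reports:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio: higher means the reconstruction is
    closer to the reference image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

def degrade(img, factor=2):
    """Simulate an LR input by dropping pixels, then nearest-neighbour
    upsampling back to the original grid."""
    lr = img[::factor, ::factor]
    return np.repeat(np.repeat(lr, factor, axis=0), factor, axis=1)

rng = np.random.default_rng(0)
hr = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)  # stand-in CXR patch
baseline = degrade(hr)     # what a learned SR model would try to improve on
score = psnr(hr, baseline)
```

A trained generator such as MESRGAN+ would replace the nearest-neighbour upsample and be judged by how far above this naive baseline its reconstructions score.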

9.
Diagnostics (Basel) ; 12(3)2022 Mar 18.
Article in English | MEDLINE | ID: mdl-35328294

ABSTRACT

Chest X-ray (CXR) is becoming a useful method in the evaluation of coronavirus disease 19 (COVID-19). Given the global spread of COVID-19, a computer-aided diagnosis approach for COVID-19 classification based on CXR images could significantly reduce the clinician burden. There is no doubt that low resolution, noise, and irrelevant annotations in chest X-ray images are major constraints on the performance of AI-based COVID-19 diagnosis. While a few studies have made notable progress, they underestimate these bottlenecks. In this study, we propose a super-resolution-based Siamese wavelet multi-resolution convolutional neural network called COVID-SRWCNN for COVID-19 classification using chest X-ray images. Concretely, we first propose a novel enhanced fast super-resolution convolutional neural network (EFSRCNN) that reconstructs high-resolution (HR) counterparts from low-resolution (LR) CXR images, capturing texture details in each chest X-ray image and enhancing the quality of the dataset. Exploiting a mutual learning approach, the HR images are then passed to the proposed Siamese wavelet multi-resolution convolutional neural network to learn high-level features for COVID-19 classification. We validate the proposed COVID-SRWCNN model on public-source datasets, achieving accuracy of 98.98%. Our screening technique achieves 98.96% AUC, 99.78% sensitivity, 98.53% precision, and 98.86% specificity. Even though COVID-19 chest X-ray datasets are low in quality, experimental results show that our proposed algorithm obtains up-to-date performance that is useful for COVID-19 screening.

10.
Diagnostics (Basel) ; 12(3)2022 Mar 21.
Article in English | MEDLINE | ID: mdl-35328318

ABSTRACT

Timely discovery of COVID-19 could aid in formulating a suitable treatment plan for disease mitigation and containment decisions. The widely used COVID-19 test necessitates a routine procedure and has a low sensitivity value. Computed tomography and chest X-ray are other methods utilized by numerous studies for detecting COVID-19. In this article, we propose a CNN called the depthwise separable convolution network with wavelet multiresolution analysis module (WMR-DepthwiseNet), which robustly learns details both spatial-wise and channel-wise for COVID-19 identification from a limited radiograph dataset, a critical setting given the rapid growth of COVID-19. The model employs an effective strategy to prevent the loss of spatial details, a prevalent issue in traditional convolutional neural networks, and its depthwise separable connectivity framework ensures reusability of feature maps by directly connecting previous layers to all subsequent layers, allowing feature representations to be extracted from few samples. We evaluate the proposed model on a public domain dataset of COVID-19 confirmed cases and other pneumonia illnesses. The proposed method achieves 98.63% accuracy, 98.46% sensitivity, 97.99% specificity, and 98.69% precision on the chest X-ray dataset, whereas on the computed tomography dataset the model achieves 96.83% accuracy, 97.78% sensitivity, 96.22% specificity, and 97.02% precision. According to the results of our experiments, our model achieves up-to-date accuracy with only a few training cases available, which is useful for COVID-19 screening. This latest paradigm is expected to contribute significantly to the battle against COVID-19 and other life-threatening diseases.
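The feature-map reuse the abstract attributes to the connectivity framework, each layer receiving all earlier feature maps concatenated forward, implies a simple channel-count progression (the numbers below are illustrative, not WMR-DepthwiseNet's):

```python
def dense_channels(c0, growth, layers):
    """Input channel count seen by each layer when every earlier feature
    map is concatenated forward (DenseNet-style reuse); each layer
    contributes `growth` new maps of its own."""
    return [c0 + i * growth for i in range(layers + 1)]

# e.g. 16 input maps, 12 new maps per layer, 4 layers deep
counts = dense_channels(16, 12, 4)
```

Because earlier maps are reused rather than recomputed, each layer only has to learn a small number of new maps, which is why this style of connectivity copes well with few training samples.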

11.
Diagnostics (Basel) ; 12(2)2022 Jan 27.
Article in English | MEDLINE | ID: mdl-35204418

ABSTRACT

Pneumonia is a prevalent severe respiratory infection that affects the distal airways and alveoli. Across the globe, it is a serious public health issue with a high mortality rate among children below five years old and elderly citizens with pre-existing chronic ailments. Pneumonia can be caused by a wide range of microorganisms, including viruses, fungi, and bacteria, which vary greatly across the globe. The spread of the ailment has drawn the attention of computer-aided diagnosis (CAD) research. This paper presents a multi-channel image processing scheme to automatically extract features and identify pneumonia from chest X-ray images. The proposed approach addresses the problem of low image quality while identifying pneumonia in CXR images. Three channels of CXR images, namely the Local Binary Pattern (LBP), Contrast Enhanced Canny Edge Detection (CECED), and Contrast Limited Adaptive Histogram Equalization (CLAHE) channels, are processed by deep neural networks. CXR-related features of the LBP images are extracted by a shallow CNN, features of the CLAHE images by a pre-trained Inception-V3, and features of the CECED images by a pre-trained MobileNet-V3. The final feature weights of the three channels are concatenated, and softmax classification determines the final identification result. The proposed network classifies pneumonia accurately according to the experimental results. Tested on a publicly available dataset, the proposed method reports accuracy of 98.3%, sensitivity of 98.9%, and specificity of 99.2%. Compared with the single models and the state-of-the-art models, our proposed network achieves comparable performance.
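The LBP channel mentioned above can be computed directly in NumPy; this is the basic 8-neighbour formulation with edge padding, which may differ in detail from the paper's variant:

```python
import numpy as np

def lbp8(img):
    """8-neighbour Local Binary Pattern codes (edge-padded borders).
    Each pixel gets a byte whose bits record which of its eight
    neighbours are >= the centre value."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    center = p[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros((h, w), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        code |= ((nb >= center).astype(np.uint8) << bit)
    return code
```

The resulting code image encodes local texture independently of absolute intensity, which is why it makes a useful complementary channel alongside the contrast-enhanced ones.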

12.
Diagnostics (Basel) ; 12(2)2022 Feb 19.
Article in English | MEDLINE | ID: mdl-35204628

ABSTRACT

It is a well-known fact that diabetic retinopathy (DR) is one of the most common causes of visual impairment between the ages of 25 and 74 around the globe. Diabetes is caused by persistently high blood glucose levels, which lead to blood vessel damage and vision loss. Early diagnosis can minimise the risk of proliferated diabetic retinopathy, the advanced stage of this disease, which carries a higher risk of severe impairment. Therefore, it becomes important to classify DR stages. To this effect, this paper presents a weighted fusion deep learning network (WFDLN) to automatically extract features and classify DR stages from fundus scans. The proposed framework addresses the issue of low image quality while identifying retinopathy symptoms in fundus images. Two channels of fundus images, namely the contrast-limited adaptive histogram equalization (CLAHE) fundus images and the contrast-enhanced Canny edge detection (CECED) fundus images, are processed by WFDLN. Fundus-related features of the CLAHE images are extracted by a fine-tuned Inception V3, whereas the features of the CECED fundus images are extracted using a fine-tuned VGG-16. The two channels' outputs are merged in a weighted approach, and softmax classification determines the final recognition result. Experimental results show that the proposed network identifies the DR stages with high accuracy. Tested on the Messidor dataset, the proposed method reports an accuracy level of 98.5%, sensitivity of 98.9%, and specificity of 98.0%, whereas on the Kaggle dataset, the proposed model reports an accuracy level of 98.0%, sensitivity of 98.7%, and specificity of 97.8%. Compared with other models, our proposed network achieves comparable performance.
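The weighted merge and softmax decision described above can be sketched in NumPy (the channel weights and logits below are made-up values, not the tuned weights of WFDLN):

```python
import numpy as np

def softmax(z):
    # numerically stabilized softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def weighted_fusion(logits_a, logits_b, w_a=0.6, w_b=0.4):
    """Merge the two channels' outputs with scalar weights, then softmax
    for the final recognition result."""
    return softmax(w_a * logits_a + w_b * logits_b)

# hypothetical per-class scores from the CLAHE and CECED branches
probs = weighted_fusion(np.array([[2.0, 0.5, 0.1]]),
                        np.array([[1.5, 0.2, 0.3]]))
pred = int(probs.argmax())   # index of the predicted DR stage
```

Letting one channel carry more weight than the other is the whole point of the "weighted approach": the fusion weights can be tuned so the more reliable channel dominates the decision.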

13.
Healthcare (Basel) ; 10(2)2022 Feb 21.
Article in English | MEDLINE | ID: mdl-35207017

ABSTRACT

Computed tomography has become a vital screening method for the detection of coronavirus disease 2019 (COVID-19). With the high mortality rate and the overload on domain experts, radiologists, and clinicians, there is a need for a computerized diagnostic technique. To this effect, we improve the performance of COVID-19 identification by tackling the low quality and resolution of computed tomography images. We report a technique named the modified enhanced super-resolution generative adversarial network for obtaining higher-resolution computed tomography images. Furthermore, in contrast to the fashion of increasing network depth and complexity to boost imaging performance, we incorporate a Siamese capsule network that extracts distinct features for COVID-19 identification. The qualitative and quantitative results establish that the proposed model is effective, accurate, and robust for COVID-19 screening. We demonstrate the proposed model for COVID-19 identification on COVID-CT, a publicly available dataset containing 349 COVID-19 and 463 non-COVID-19 computed tomography images. The proposed method achieves an accuracy of 97.92%, sensitivity of 98.85%, specificity of 97.21%, AUC of 98.03%, precision of 98.44%, and F1 score of 97.52%. According to the experimental results, our approach obtains state-of-the-art performance, which is helpful for COVID-19 screening. This new conceptual framework is proposed to play an influential role in addressing the issues facing COVID-19 and related ailments when few datasets are available.
