1.
Sci Rep ; 14(1): 5895, 2024 03 11.
Article in English | MEDLINE | ID: mdl-38467755

ABSTRACT

A significant issue in computer-aided diagnosis (CAD) for medical applications is brain tumor classification. Machine learning algorithms could allow radiologists to detect tumors reliably without extensive surgery. However, a few important challenges arise, such as (i) selecting the most suitable deep learning architecture for classification and (ii) the need for a domain expert to assess the output of deep learning models. These difficulties motivate us to propose an efficient and accurate system based on deep learning and evolutionary optimization for the classification of four brain MRI modalities (T1 tumor, T1CE tumor, T2 tumor, and FLAIR tumor) on a large-scale MRI database. A CNN architecture is modified based on domain knowledge and coupled with an evolutionary optimization algorithm to select hyperparameters. In parallel, a stacked encoder-decoder network is designed with ten convolutional layers. The features of both models are extracted and optimized using an improved version of Grey Wolf optimization with the update criteria of the Jaya algorithm; the improved version speeds up the learning process and improves accuracy. Finally, the selected features are fused using a novel parallel pooling approach and classified using machine learning and neural network classifiers. Two datasets, BraTS2020 and BraTS2021, were employed for the experiments, yielding an improved average accuracy of 98% and a maximum single-classifier accuracy of 99%. Comparison with several classifiers, techniques, and neural networks shows that the proposed method achieves improved performance.
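As a rough illustration of the feature-optimization step, the sketch below implements wrapper-based binary feature selection in the spirit of Grey Wolf optimization. It is a minimal reading of the abstract, not the authors' implementation: the KNN-based fitness function, the leader-averaging update, and all constants are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y):
    # Reward high cross-validated accuracy while penalizing large subsets.
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask == 1], y, cv=3).mean()
    return acc - 0.01 * mask.mean()

def grey_wolf_select(X, y, n_wolves=10, n_iter=30, rng=np.random.default_rng(0)):
    d = X.shape[1]
    wolves = (rng.random((n_wolves, d)) > 0.5).astype(int)
    for t in range(n_iter):
        scores = np.array([fitness(w, X, y) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(scores)[::-1][:3]]  # three leaders
        a = 2.0 * (1 - t / n_iter)              # exploration factor decays to zero
        for i in range(n_wolves):
            # Pull each wolf toward the leaders, then re-binarize stochastically.
            prob = (alpha + beta + delta) / 3.0 + a * (rng.random(d) - 0.5)
            wolves[i] = (rng.random(d) < np.clip(prob, 0, 1)).astype(int)
    scores = np.array([fitness(w, X, y) for w in wolves])
    return wolves[scores.argmax()]

# Usage: mask = grey_wolf_select(features, labels); selected = features[:, mask == 1]
```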


Subjects
Brain Neoplasms; Deep Learning; Delayed Emergence from Anesthesia; Humans; Neural Networks, Computer; Brain/diagnostic imaging; Brain Neoplasms/diagnostic imaging
2.
Diagnostics (Basel) ; 13(19)2023 Sep 26.
Article in English | MEDLINE | ID: mdl-37835807

ABSTRACT

Cancer is one of the leading causes of illness and chronic disease worldwide. Skin cancer, particularly melanoma, is becoming a severe health problem due to its rising prevalence, and the considerable death rate linked with melanoma requires early detection for immediate and successful treatment. Lesion detection and classification are challenging due to many forms of artifacts, such as hairs and noise, and the irregularity of lesion shape, color, irrelevant features, and textures. In this work, we propose a deep learning architecture for multiclass skin cancer classification and melanoma detection. The proposed architecture consists of four core steps: image preprocessing, feature extraction and fusion, feature selection, and classification. A novel contrast enhancement technique is proposed based on image luminance information. After that, two pre-trained deep models, DarkNet-53 and DenseNet-201, are modified in terms of a residual block at the end and trained through transfer learning. In the learning process, a genetic algorithm is applied to select the hyperparameters. The resultant features are fused using a two-step approach named serial-harmonic mean. This step increases the accuracy of correct classification, but some irrelevant information is also retained; therefore, an algorithm called marine predator optimization (MPA)-controlled Rényi entropy is developed to select the best features. The selected features are finally classified using machine learning classifiers. Two datasets, ISIC2018 and ISIC2019, were selected for the experimental process, on which maximum accuracies of 85.4% and 98.80%, respectively, were obtained. To prove the effectiveness of the proposed methods, a detailed comparison is conducted with several recent techniques, showing that the proposed framework outperforms them.
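One plausible reading of the two-step serial-harmonic mean fusion is sketched below: concatenate (serial step) the two deep feature vectors and append their element-wise harmonic mean over the overlapping dimensions. This is an illustrative assumption, not the authors' exact formulation.

```python
import numpy as np

def serial_harmonic_fusion(f1: np.ndarray, f2: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    n = min(f1.shape[0], f2.shape[0])
    a, b = np.abs(f1[:n]), np.abs(f2[:n])      # harmonic mean assumes positive values
    harmonic = 2.0 * a * b / (a + b + eps)     # element-wise harmonic mean
    return np.concatenate([f1, f2, harmonic])  # serial step plus harmonic-mean step

# Usage: fused = serial_harmonic_fusion(darknet53_vec, densenet201_vec)
```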

3.
Diagnostics (Basel) ; 13(18)2023 Sep 06.
Article in English | MEDLINE | ID: mdl-37761236

ABSTRACT

Background: Using artificial intelligence (AI) with the concept of a deep learning-based automated computer-aided diagnosis (CAD) system has shown improved performance for skin lesion classification. Although deep convolutional neural networks (DCNNs) have significantly improved many image classification tasks, it is still difficult to classify skin lesions accurately because of a lack of training data, inter-class similarity, intra-class variation, and the inability to concentrate on semantically significant lesion parts. Innovations: To address these issues, we proposed an automated deep learning and best feature selection framework for multiclass skin lesion classification in dermoscopy images. The proposed framework begins with a preprocessing step for contrast enhancement using a new technique based on dark channel haze and top-bottom filtering. Three pre-trained deep learning models are fine-tuned in the next step and trained using the transfer learning concept. In the fine-tuning process, we added and removed a few layers to reduce the number of parameters and then selected the hyperparameters using a genetic algorithm (GA) instead of manual assignment, the purpose being to improve the learning performance. After that, the deeper layer is selected for each network and deep features are extracted. The extracted deep features are fused using a novel serial correlation-based approach, which reduces the feature vector length relative to a plain serial approach but leaves a little redundant information. To address this issue, we proposed an improved ant lion optimization algorithm for best feature selection. The selected features are finally classified using machine learning algorithms. Main Results: The experimental process was conducted using two publicly available datasets, ISIC2018 and ISIC2019, on which we obtained accuracies of 96.1% and 99.9%, respectively. Comparison with state-of-the-art techniques shows that the proposed framework improves accuracy. Conclusions: The proposed framework successfully enhances the contrast of the cancer region. Moreover, the selection of hyperparameters using automated techniques improved the learning process of the proposed framework. The proposed fusion and the improved selection process maintain the best accuracy and shorten the computational time.
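A minimal sketch of a preprocessing step along these lines, combining a dark-channel estimate with morphological top-hat/bottom-hat (top-bottom) filtering, is shown below. The kernel size and blending weights are assumptions; the paper's exact enhancement is not reproduced here.

```python
import cv2
import numpy as np

def enhance(bgr: np.ndarray, patch: int = 15) -> np.ndarray:
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(bgr.min(axis=2), kernel)                    # dark channel estimate
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    top = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)       # small bright details
    bottom = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)  # small dark details
    # Suppress the haze estimated by the dark channel, then boost bright
    # structures and attenuate dark ones.
    dehazed = cv2.subtract(gray, (0.5 * dark).astype(np.uint8))
    return cv2.subtract(cv2.add(dehazed, top), bottom)

# Usage: out = enhance(cv2.imread("lesion.jpg"))
```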

4.
Diagnostics (Basel) ; 13(9)2023 May 03.
Article in English | MEDLINE | ID: mdl-37175009

ABSTRACT

The early detection of breast cancer using mammogram images is critical for lowering women's mortality rates and allowing for proper treatment. Deep learning techniques are commonly used for feature extraction and have demonstrated significant performance in the literature; however, these features do not perform well in several cases due to redundant and irrelevant information. We created a new framework for diagnosing breast cancer from mammogram images using entropy-controlled deep learning and flower pollination optimization. In the proposed framework, a filter fusion-based method for contrast enhancement is developed. The pre-trained ResNet-50 model is then improved and trained using transfer learning on both the original and enhanced datasets. Deep features are extracted and combined into a single vector in the next phase using a serial technique known as serial mid-value features. The top features are then classified using neural networks and machine learning classifiers; to select them, a technique for flower pollination optimization with entropy control has been developed. The experiments used three publicly available datasets: CBIS-DDSM, INbreast, and MIAS. On these datasets, the proposed framework achieved accuracies of 93.8, 99.5, and 99.8%, respectively. Compared with current methods, the framework increases accuracy and decreases computational time.
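For reference, a minimal transfer-learning sketch of the kind of fine-tuning described here is given below, using PyTorch. The three-class head, frozen backbone, and learning rate are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                    # freeze the pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 3)  # new head, e.g. normal/benign/malignant

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```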

5.
Diagnostics (Basel) ; 13(7)2023 Mar 25.
Article in English | MEDLINE | ID: mdl-37046456

ABSTRACT

One of the most frequent cancers in women is breast cancer; in 2022, approximately 287,850 new cases were diagnosed, and 43,250 women died from the disease. Early diagnosis can help reduce the mortality rate. However, manual diagnosis using mammogram images is not an easy process and always requires an expert. Several AI-based techniques have been suggested in the literature; however, they still face several challenges, such as similarities between cancerous and non-cancerous regions, irrelevant feature extraction, and weak training models. In this work, we proposed a new automated computerized framework for breast cancer classification. The proposed framework improves contrast using a novel enhancement technique called haze-reduced local-global. The enhanced images are then employed for dataset augmentation, a step aimed at increasing the diversity of the dataset and improving the training capability of the selected deep learning model. After that, a pre-trained model named EfficientNet-b0 was employed and fine-tuned by adding a few new layers. The fine-tuned model was trained separately on original and enhanced images using deep transfer learning with static hyperparameter initialization. Deep features were extracted from the average pooling layer in the next step and fused using a new serial-based approach. The fused features were then optimized using a feature selection algorithm known as Equilibrium-Jaya controlled Regula Falsi, in which Regula Falsi serves as the termination function. The selected features were finally classified using several machine learning classifiers. The experimental process was conducted on two publicly available datasets, CBIS-DDSM and INbreast, achieving average accuracies of 95.4% and 99.7%, respectively. A comparison with state-of-the-art (SOTA) methods shows that the proposed framework improves accuracy. Moreover, a confidence interval-based analysis shows consistent results for the proposed framework.
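The Regula Falsi (false position) rule that serves as the termination function is, in its textbook form, the bracketing root-finder below; how it is wired into the Equilibrium-Jaya loop is not detailed in the abstract, so this sketch only illustrates the rule itself.

```python
def regula_falsi(f, a: float, b: float, tol: float = 1e-6, max_iter: int = 100) -> float:
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must bracket a root"
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # x-intercept of the secant line
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:                     # root lies in [a, c]
            b, fb = c, fc
        else:                               # root lies in [c, b]
            a, fa = c, fc
    return c

# Usage: root = regula_falsi(lambda x: x**3 - 2, 0.0, 2.0)  # ~1.2599
```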

6.
Comput Intell Neurosci ; 2023: 4776770, 2023.
Article in English | MEDLINE | ID: mdl-36864930

ABSTRACT

Malfunctions in the immune system cause multiple sclerosis (MS), which initiates mild to severe nerve damage. MS disturbs signal communication between the brain and other body parts, and early diagnosis helps reduce its harshness in humankind. Magnetic resonance imaging (MRI)-supported MS detection is a standard clinical procedure in which the bio-image recorded with a chosen modality is used to assess the severity of the disease. The proposed research aims to implement a convolutional neural network (CNN)-supported scheme to detect MS lesions in the chosen brain MRI slices. The stages of this framework include (i) image collection and resizing, (ii) deep feature mining, (iii) hand-crafted feature mining, (iv) feature optimization with the firefly algorithm, and (v) serial feature integration and classification. In this work, five-fold cross-validation is executed, and the final result is considered for the assessment. Brain MRI slices with and without the skull section are examined separately, and the attained results are presented. The experimental outcome of this study confirms that VGG16 with a random forest (RF) classifier offered a classification accuracy of >98% on MRI with the skull, and VGG16 with K-nearest neighbor (KNN) provided an accuracy of >98% without the skull.
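A minimal sketch of stage (v) and the evaluation protocol, serial (column-wise) integration of deep and hand-crafted features followed by five-fold cross-validation with a random forest, mirroring the VGG16+RF configuration; the feature files are placeholders assumed to be precomputed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

deep_feats = np.load("vgg16_features.npy")        # (n_samples, d1), assumed precomputed
hand_feats = np.load("handcrafted_features.npy")  # (n_samples, d2), assumed precomputed
labels = np.load("labels.npy")

fused = np.hstack([deep_feats, hand_feats])       # serial feature integration
scores = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                         fused, labels, cv=5)     # five-fold cross-validation
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```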


Subjects
Multiple Sclerosis; Humans; Multiple Sclerosis/diagnostic imaging; Head; Brain/diagnostic imaging; Algorithms; Cluster Analysis
7.
Sensors (Basel) ; 23(5)2023 Mar 02.
Article in English | MEDLINE | ID: mdl-36904963

ABSTRACT

The performance of human gait recognition (HGR) is affected by partial obstruction of the human body caused by the limited field of view in video surveillance. The traditional method requires a bounding box to recognize human gait accurately in video sequences; however, this is a challenging and time-consuming approach. Owing to important applications such as biometrics and video surveillance, HGR has improved in performance over the last half-decade. Based on the literature, the challenging covariant factors that degrade gait recognition performance include walking while wearing a coat or carrying a bag. This paper proposes a new two-stream deep learning framework for human gait recognition. In the first step, a contrast enhancement technique based on the fusion of local and global filter information is proposed, and a high-boost operation is applied to highlight the human region in each video frame. Data augmentation is performed in the second step to increase the size of the preprocessed dataset (CASIA-B). In the third step, two pre-trained deep learning models, MobileNetV2 and ShuffleNet, are fine-tuned and trained on the augmented dataset using deep transfer learning. Features are extracted from the global average pooling layer instead of the fully connected layer. In the fourth step, the extracted features of both streams are fused using a serial-based approach and further refined in the fifth step using an improved equilibrium state optimization-controlled Newton-Raphson (ESOcNR) selection method. The selected features are finally classified using machine learning algorithms. The experimental process was conducted on 8 angles of the CASIA-B dataset, obtaining accuracies of 97.3, 98.6, 97.7, 96.5, 92.9, 93.7, 94.7, and 91.2%, respectively. Comparisons with state-of-the-art (SOTA) techniques showed improved accuracy and reduced computational time.
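A minimal sketch of the feature-extraction choice described here, reading activations from the global average pooling stage of MobileNetV2 rather than the fully connected classifier; the preprocessing pipeline is an assumption.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.eval()

@torch.no_grad()
def gap_features(images: torch.Tensor) -> torch.Tensor:
    x = model.features(images)         # (N, 1280, H, W) convolutional feature maps
    x = F.adaptive_avg_pool2d(x, 1)    # global average pooling
    return torch.flatten(x, 1)         # (N, 1280) feature vectors

# Usage: feats = gap_features(batch)  # batch: (N, 3, 224, 224), ImageNet-normalized
```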


Subjects
Deep Learning; Humans; Algorithms; Gait; Machine Learning; Biometry/methods
8.
Comput Intell Neurosci ; 2022: 1339469, 2022.
Article in English | MEDLINE | ID: mdl-36465951

ABSTRACT

Image processing is an important domain for identifying various crop varieties. Because of the large number of rice varieties, manually assessing their quality is a tedious and time-consuming task. In this work, we propose a two-stage deep learning framework for detecting and classifying multiclass rice grain varieties. The proposed framework includes a series of steps. The first step is to perform preprocessing on the selected dataset. The second step involves selecting and fine-tuning pre-trained deep models, Darknet19 and SqueezeNet, which are trained on the selected dataset using transfer learning. Half of the sample images are employed for training and the remaining half for testing. Features are extracted and fused using a maximum correlation-based approach. This approach improved classification performance; however, redundant information was also included. An improved butterfly optimization algorithm (BOA) is therefore proposed in the next step for selecting the best features, which are finally classified using several machine learning classifiers. The experimental process was conducted on selected rice datasets comprising five rice varieties, achieving a maximum accuracy of 100%, an improvement over recent methods. The average accuracy of the proposed method is 99.2%, and a confidence interval-based analysis shows the significance of this work.
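One plausible reading of the maximum correlation-based fusion is sketched below: concatenate both networks' features and keep those most correlated with the class labels. The rule and the number of retained features are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def max_corr_fuse(f1: np.ndarray, f2: np.ndarray, y: np.ndarray, keep: int = 400) -> np.ndarray:
    fused = np.hstack([f1, f2])   # stack Darknet19 and SqueezeNet features
    y_c = y - y.mean()
    f_c = fused - fused.mean(axis=0)
    # Absolute Pearson correlation of each feature column with the labels.
    corr = np.abs((f_c * y_c[:, None]).sum(axis=0) /
                  (np.linalg.norm(f_c, axis=0) * np.linalg.norm(y_c) + 1e-12))
    return fused[:, np.argsort(corr)[::-1][:keep]]  # keep the most label-correlated

# Usage: fused = max_corr_fuse(darknet_feats, squeezenet_feats, labels)
```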


Subjects
Oryza; Edible Grain; Intelligence; Algorithms; Data Accuracy
9.
Front Public Health ; 10: 1046296, 2022.
Article in English | MEDLINE | ID: mdl-36408000

ABSTRACT

The COVID-19 virus's rapid global spread has caused millions of illnesses and deaths, with disastrous consequences for people's lives, public health, and the global economy. Clinical studies have revealed a link between the severity of COVID-19 cases and the amount of virus present in infected people's lungs. Imaging techniques such as computed tomography (CT) and chest X-rays (CXR) can detect COVID-19. Manual inspection of these images is a difficult process, so computerized techniques are widely used. Deep convolutional neural networks (DCNNs) are a type of machine learning frequently used in computer vision applications, particularly in medical imaging, to detect and classify infected regions, and they can assist medical personnel in detecting patients with COVID-19. In this article, a Bayesian-optimized DCNN and explainable AI-based framework is proposed for the classification of COVID-19 from chest X-ray images. The proposed method starts with a multi-filter contrast enhancement technique that increases the visibility of the infected part. Two pre-trained deep models, EfficientNet-B0 and MobileNet-V2, are fine-tuned according to the target classes and then trained by employing Bayesian optimization (BO); through BO, hyperparameters are selected instead of statically initialized. Features are extracted from the trained model and fused using a slicing-based serial fusion approach. The fused features are classified using machine learning classifiers for the final classification. Moreover, visualization is performed using Grad-CAM, which highlights the infected part in the image. Three publicly available COVID-19 datasets are used for the experimental process, obtaining improved accuracies of 98.8, 97.9, and 99.4%, respectively.
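For the visualization step, a minimal Grad-CAM sketch over a pre-trained MobileNet-V2 is shown below; the target layer and normalization details are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1).eval()
acts, grads = {}, {}
layer = model.features[-1]   # last convolutional block
layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

def grad_cam(image: torch.Tensor, class_idx: int) -> torch.Tensor:
    logits = model(image.unsqueeze(0))
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = grads["v"].mean(dim=(2, 3), keepdim=True)  # channel importance
    cam = F.relu((weights * acts["v"]).sum(dim=1))       # weighted activation map
    cam = F.interpolate(cam.unsqueeze(0), size=image.shape[1:], mode="bilinear")
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]

# Usage: heatmap = grad_cam(normalized_image, predicted_class)  # (1, 1, H, W)
```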


Subjects
COVID-19; Deep Learning; Humans; X-Rays; COVID-19/diagnostic imaging; Bayes Theorem; Neural Networks, Computer
10.
Diagnostics (Basel) ; 12(11)2022 Nov 07.
Article in English | MEDLINE | ID: mdl-36359566

ABSTRACT

In the last few years, artificial intelligence (AI) has shown considerable promise in the medical domain for the diagnosis and classification of human infections. Several computerized techniques based on AI have been introduced in the literature for gastrointestinal tract (GIT) diseases such as ulcers, bleeding, and polyps. Manual diagnosis of these infections is time consuming, expensive, and always requires an expert; as a result, computerized methods that can assist doctors as a second opinion in clinics are widely required. The key challenge for a computerized technique is accurate segmentation of the infected region, because each infected region varies in shape and location; moreover, inaccurate segmentation affects feature extraction, which in turn impacts classification accuracy. In this paper, we propose an automated framework for GIT disease segmentation and classification based on deep saliency maps and Bayesian-optimized deep learning feature selection. The proposed framework comprises a few key steps, from preprocessing to classification. Original images are improved in the preprocessing step by employing a proposed contrast enhancement technique. In the following step, we propose a deep saliency map for segmenting infected regions. The segmented regions are then used to train a pre-trained, fine-tuned model called MobileNet-V2 using transfer learning, with the fine-tuned model's hyperparameters initialized using Bayesian optimization (BO). The average pooling layer is then used to extract features. However, several redundant features were discovered during the analysis phase and must be removed; we therefore propose a hybrid whale optimization algorithm for selecting the best features. Finally, the selected features are classified using an extreme learning machine classifier. The experiment was carried out on three datasets: Kvasir 1, Kvasir 2, and CUI Wah. The proposed framework achieved accuracies of 98.20, 98.02, and 99.61% on these three datasets, respectively. When compared with other methods, the proposed framework shows an improvement in accuracy.
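The extreme learning machine used as the final classifier has a simple closed form: random hidden weights and least-squares output weights. A minimal sketch (hidden size and activation are illustrative assumptions) is given below.

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden: int = 512, seed: int = 0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X: np.ndarray, y: np.ndarray) -> "ELM":
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)   # random, untrained hidden layer
        T = np.eye(int(y.max()) + 1)[y]    # one-hot targets
        self.beta = np.linalg.pinv(H) @ T  # closed-form least-squares output weights
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

# Usage: preds = ELM().fit(train_feats, train_labels).predict(test_feats)
```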

11.
Comput Math Methods Med ; 2022: 7502504, 2022.
Article in English | MEDLINE | ID: mdl-36276999

ABSTRACT

Melanoma is a dangerous form of skin cancer that results in the demise of patients when at an advanced stage. Researchers have attempted to develop automated systems for the timely recognition of this deadly disease. However, reliable and precise identification of melanoma moles is a tedious and complex activity, as there exist huge differences in the mass, structure, and color of skin lesions. Additionally, the presence of noise, blurring, and chrominance changes in the suspected images further increases the complexity of the detection procedure. In the proposed work, we try to overcome the limitations of existing work by presenting a deep learning (DL) model. Descriptively, after accomplishing the preprocessing task, we utilize an object detection approach named the CornerNet model to detect melanoma lesions. The localized moles are then passed as input to a fuzzy K-means (FKM) clustering approach to perform the segmentation task. To assess the segmentation power of the proposed approach, two standard databases, ISIC-2017 and ISIC-2018, are employed. Extensive experimentation has been conducted to demonstrate the robustness of the proposed approach through both numeric and pictorial results. The proposed approach is capable of detecting and segmenting moles of arbitrary shapes and orientations, and it can tackle the presence of noise, blurring, and brightness variations as well. We attained segmentation accuracy values of 99.32% and 99.63% over the ISIC-2017 and ISIC-2018 databases, respectively, which clearly depicts the effectiveness of our model for melanoma mole segmentation.
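A minimal fuzzy K-means (fuzzy c-means) sketch of the segmentation idea is shown below: pixels receive soft memberships to K clusters, and the lesion mask is taken from the darkest cluster. The fuzzifier m, K = 2, and the mask rule are illustrative assumptions.

```python
import numpy as np

def fuzzy_kmeans(X: np.ndarray, k: int = 2, m: float = 2.0, n_iter: int = 50,
                 rng=np.random.default_rng(0)):
    U = rng.random((X.shape[0], k))
    U /= U.sum(axis=1, keepdims=True)                   # memberships sum to one
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # fuzzy cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-10
        U = 1.0 / d ** (2.0 / (m - 1.0))                # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Usage on a grayscale lesion image `img` of shape (H, W):
# U, c = fuzzy_kmeans(img.reshape(-1, 1).astype(float))
# mask = (U.argmax(axis=1) == c.ravel().argmin()).reshape(img.shape)
```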


Subjects
Melanoma; Moles; Skin Neoplasms; Humans; Animals; Image Processing, Computer-Assisted/methods; Algorithms; Melanoma/diagnostic imaging; Cluster Analysis; Skin Neoplasms/diagnostic imaging; Dermoscopy/methods
12.
Comput Intell Neurosci ; 2022: 1465173, 2022.
Article in English | MEDLINE | ID: mdl-35965745

ABSTRACT

Early detection of brain tumors can save precious human life. This work presents a fully automated design to classify brain tumors. The proposed scheme employs optimal deep learning features for the classification of FLAIR, T1, T2, and T1CE tumors. Initially, we normalized the dataset and passed it to the pre-trained ResNet101 model to perform transfer learning, fine-tuning the model for brain tumor classification. The problem with this approach is the generation of redundant features, which degrade accuracy and cause computational overhead. To tackle this problem, we find optimal features by utilizing differential evolution and particle swarm optimization algorithms. The obtained optimal feature vectors are then serially fused into a single feature vector, and PCA is applied to this fused vector to obtain the final optimized feature vector. This optimized feature vector is fed as input to various classifiers to classify tumors. Performance is analyzed at various stages. The results show that the proposed technique achieved a speedup of 25.5x in prediction time on the medium neural network with an accuracy of 94.4%, a significant improvement over state-of-the-art techniques in terms of computational overhead while maintaining approximately the same accuracy.
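A minimal sketch of the final fusion step, serially concatenating the two optimized feature sets and compressing the result with PCA; the retained variance ratio and file names are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

de_feats = np.load("de_selected.npy")    # features kept by differential evolution
pso_feats = np.load("pso_selected.npy")  # features kept by particle swarm optimization

fused = np.hstack([de_feats, pso_feats])             # serial fusion into one vector
final = PCA(n_components=0.95).fit_transform(fused)  # keep 95% of the variance
print(fused.shape, "->", final.shape)
```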


Subjects
Brain Neoplasms; Deep Learning; Algorithms; Humans; Neural Networks, Computer
13.
Comput Math Methods Med ; 2022: 5869529, 2022.
Article in English | MEDLINE | ID: mdl-36017156

ABSTRACT

Breast cancer is one of the leading causes of increasing deaths in women worldwide. The complex nature (microcalcifications and masses) of breast cancer cells makes it quite difficult for radiologists to diagnose properly. Consequently, various computer-aided diagnosis (CAD) systems have been developed and are being used to aid radiologists in the diagnosis of cancer cells. However, due to the intrinsic risks associated with delayed and/or incorrect diagnosis, it is indispensable to improve these diagnostic systems. In this regard, machine learning has recently been playing a potential role in the early and precise detection of breast cancer. This paper presents a new machine learning-based framework that utilizes Random Forest, Gradient Boosting, Support Vector Machine, Artificial Neural Network, and Multilayer Perceptron approaches to efficiently predict breast cancer from patient data. For this purpose, the Wisconsin Diagnostic Breast Cancer (WDBC) dataset has been utilized and classified using a hybrid Multilayer Perceptron (MLP) model and a 5-fold cross-validation framework as a working prototype. For improved classification, a connection-based feature selection technique that also eliminates recursive features has been used. The proposed framework has been validated on two separate datasets, the Wisconsin Prognostic Breast Cancer (WPBC) and Wisconsin Original Breast Cancer (WOBC) datasets. The results demonstrate an improved accuracy of 99.12% due to efficient data preprocessing and feature selection applied to the input data.
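The WDBC data ships with scikit-learn as load_breast_cancer, so the MLP plus 5-fold cross-validation prototype can be sketched directly; the hidden-layer sizes and scaling step are illustrative assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # the WDBC dataset
clf = make_pipeline(StandardScaler(),       # scaling helps MLP convergence
                    MLPClassifier(hidden_layer_sizes=(64, 32),
                                  max_iter=1000, random_state=0))
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
print(f"5-fold accuracy: {scores.mean():.4f}")
```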


Subjects
Breast Neoplasms; Breast; Breast Neoplasms/diagnostic imaging; Diagnosis, Computer-Assisted/methods; Female; Humans; Neural Networks, Computer; Support Vector Machine
14.
Sensors (Basel) ; 22(3)2022 Jan 21.
Article in English | MEDLINE | ID: mdl-35161552

ABSTRACT

After lung cancer, breast cancer is the second leading cause of death in women. If breast cancer is detected early, mortality rates in women can be reduced. Because manual breast cancer diagnosis takes a long time, an automated system is required for early cancer detection. This paper proposes a new framework for breast cancer classification from ultrasound images that employs deep learning and the fusion of the best selected features. The proposed framework is divided into five major steps: (i) data augmentation is performed to increase the size of the original dataset for better learning of Convolutional Neural Network (CNN) models (sketched below); (ii) a pre-trained DarkNet-53 model is considered and its output layer is modified based on the augmented dataset classes; (iii) the modified model is trained using transfer learning and features are extracted from the global average pooling layer; (iv) the best features are selected using two improved optimization algorithms known as reformed differential evolution (RDE) and reformed gray wolf (RGW); and (v) the best selected features are fused using a new probability-based serial approach and classified using machine learning algorithms. The experiment was conducted on an augmented Breast Ultrasound Images (BUSI) dataset, and the best accuracy was 99.1%. When compared with recent techniques, the proposed framework outperforms them.
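A minimal sketch of an augmentation step of this kind, using torchvision transforms; the exact operations applied to BUSI are not specified in the abstract, so these are illustrative assumptions.

```python
from torchvision import transforms

# A small pool of label-preserving transforms for ultrasound frames.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

# Usage: tensor = augment(pil_image)  # apply repeatedly to enlarge the dataset
```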


Subjects
Breast Neoplasms; Deep Learning; Breast; Breast Neoplasms/diagnostic imaging; Female; Humans; Probability; Ultrasonography, Mammary
15.
Diagnostics (Basel) ; 13(1)2022 Dec 29.
Article in English | MEDLINE | ID: mdl-36611393

ABSTRACT

BACKGROUND AND OBJECTIVE: In 2019, a coronavirus disease (COVID-19) was detected in China that affected millions of people around the world. On 11 March 2020, the WHO declared this disease a pandemic. Currently, more than 200 countries in the world have been affected by this disease. The manual diagnosis of this disease using chest X-ray (CXR) images and magnetic resonance imaging (MRI) is time consuming and always requires an expert; therefore, researchers have introduced several computerized techniques using computer vision methods. Recent computerized techniques face some challenges, such as low-contrast CXR images, the manual initialization of hyperparameters, and redundant features that mislead classification accuracy. METHODS: In this paper, we propose a novel framework for COVID-19 classification using deep Bayesian optimization and improved canonical correlation analysis (ICCA). In this framework, we initially perform data augmentation for better training of the selected deep models. After that, two pre-trained deep models (ResNet50 and InceptionV3) are employed and trained using transfer learning, with the hyperparameters of both models initialized through Bayesian optimization. Both trained models are utilized for feature extraction and fused using an ICCA-based approach. The fused features are further optimized using an improved tree growth optimization algorithm and finally classified using a neural network classifier. RESULTS: The experimental process was conducted on five publicly available datasets and achieved accuracies of 99.6, 98.5, 99.9, 99.5, and 100%. CONCLUSION: The comparison with recent methods and a t-test-based analysis showed the significance of the proposed framework.
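For the fusion step, a plain canonical correlation analysis (CCA) fusion of the ResNet50 and InceptionV3 feature sets can be sketched with scikit-learn as below; the paper's improved CCA (ICCA) adds modifications not reproduced here, and the component count and file names are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

f_resnet = np.load("resnet50_feats.npy")     # (n_samples, d1), assumed precomputed
f_incept = np.load("inceptionv3_feats.npy")  # (n_samples, d2), assumed precomputed

cca = CCA(n_components=128)
z1, z2 = cca.fit_transform(f_resnet, f_incept)  # maximally correlated projections
fused = np.hstack([z1, z2])                     # fused representation for selection
```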

16.
Sensors (Basel) ; 21(23)2021 Nov 28.
Article in English | MEDLINE | ID: mdl-34883944

ABSTRACT

Human action recognition (HAR) has gained significant attention recently, as it can be adopted for smart surveillance systems in multimedia. However, HAR is a challenging task because of the variety of human actions in daily life. Various computer vision (CV)-based solutions have been proposed in the literature, but they did not prove successful because of the large video sequences that need to be processed in surveillance systems; the problem is exacerbated in the presence of multi-view cameras. Recently, the development of deep learning (DL)-based systems has shown significant success for HAR, even for multi-view camera systems. In this research work, a DL-based design is proposed for HAR that consists of multiple steps, including feature mapping, feature fusion, and feature selection. For the initial feature mapping step, two pre-trained models are considered: DenseNet201 and InceptionV3. The extracted deep features are then fused using the Serial based Extended (SbE) approach, and the best features are selected using Kurtosis-controlled Weighted KNN. The selected features are classified using several supervised learning algorithms. To show the efficacy of the proposed design, we used several datasets: KTH, IXMAS, WVU, and Hollywood. Experimental results showed that the proposed design achieved accuracies of 99.3%, 97.4%, 99.8%, and 99.9%, respectively, on these datasets. Furthermore, the feature selection step performed better in terms of computational time compared with the state-of-the-art.
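One plausible reading of Kurtosis-controlled Weighted KNN is sketched below: rank the fused features by kurtosis, keep the strongest, and classify with distance-weighted KNN. The number of kept features, k, and file names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.neighbors import KNeighborsClassifier

train_feats = np.load("sbe_train_feats.npy")  # fused SbE features, assumed precomputed
train_labels = np.load("train_labels.npy")
test_feats = np.load("sbe_test_feats.npy")

scores = np.abs(kurtosis(train_feats, axis=0))  # per-feature peakedness
idx = np.argsort(scores)[::-1][:500]            # keep the 500 most peaked features

clf = KNeighborsClassifier(n_neighbors=5, weights="distance")  # weighted KNN
clf.fit(train_feats[:, idx], train_labels)
pred = clf.predict(test_feats[:, idx])
```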


Subjects
Deep Learning; Algorithms; Human Activities; Humans; Pattern Recognition, Automated
17.
Sensors (Basel) ; 21(21)2021 Nov 02.
Article in English | MEDLINE | ID: mdl-34770595

ABSTRACT

In healthcare, a large volume of data is collected from medical sensors and devices, such as X-ray machines, magnetic resonance imaging, and computed tomography (CT), that can be analyzed by artificial intelligence methods for the early diagnosis of diseases. Recently, the outbreak of the COVID-19 disease caused many deaths. Computer vision researchers support medical doctors by employing deep learning techniques on medical images to diagnose COVID-19 patients, and various methods have been proposed for COVID-19 case classification. Here, a new automated technique is proposed using parallel fusion and optimization of deep learning models. The proposed technique starts with contrast enhancement using a combination of top-hat and Wiener filters. Two pre-trained deep learning models (AlexNet and VGG16) are employed and fine-tuned according to the target classes (COVID-19 and healthy). Features are extracted and fused using a parallel fusion approach, parallel positive correlation, and optimal features are selected using the entropy-controlled firefly optimization method. The selected features are classified using machine learning classifiers such as the multiclass support vector machine (MC-SVM). Experiments were carried out using the Radiopaedia database and achieved an accuracy of 98%. Moreover, a detailed analysis is conducted and shows the improved performance of the proposed scheme.
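A minimal sketch of the contrast-enhancement step, a morphological top-hat filter combined with Wiener denoising, is shown below; the kernel and window sizes are assumptions.

```python
import cv2
import numpy as np
from scipy.signal import wiener

def enhance_scan(gray: np.ndarray) -> np.ndarray:
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)  # small bright details
    boosted = cv2.add(gray, tophat)                            # emphasize infected regions
    denoised = wiener(boosted.astype(float), mysize=5)         # adaptive noise removal
    return cv2.normalize(denoised, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Usage: out = enhance_scan(cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE))
```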


Subjects
COVID-19; Deep Learning; Animals; Artificial Intelligence; Entropy; Fireflies; Humans; SARS-CoV-2; Tomography, X-Ray Computed
18.
Sensors (Basel) ; 21(22)2021 Nov 15.
Article in English | MEDLINE | ID: mdl-34833658

ABSTRACT

Human Gait Recognition (HGR) is a biometric technique that has been utilized for security purposes for the last decade. The performance of gait recognition can be influenced by various factors, such as wearing clothes, carrying a bag, and the walking surface. Furthermore, identification from differing views is a significant difficulty in HGR. Many techniques have been introduced in the literature for HGR using conventional and deep learning approaches; however, the traditional methods are not suitable for large datasets. Therefore, a new framework is proposed for human gait recognition using deep learning and best feature selection, comprising data augmentation, feature extraction, feature selection, feature fusion, and classification. In the augmentation step, three flip operations were used. In the feature extraction step, two pre-trained models, Inception-ResNet-V2 and NASNet Mobile, were fine-tuned and trained using transfer learning on the CASIA B gait dataset. The features of the selected deep models were optimized using a modified three-step whale optimization algorithm, and the best features were chosen. The selected best features were fused using the modified mean absolute deviation extended serial fusion (MDeSF) approach, and the final classification was performed using several classification algorithms. The experimental process was conducted on the entire CASIA B dataset and achieved an average accuracy of 89.0%. Comparison with existing techniques showed an improvement in accuracy, recall rate, and computational time.
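Reading the three flip operations as horizontal, vertical, and combined flips (an assumption; the abstract does not name them), the augmentation step amounts to:

```python
import numpy as np

def flip_augment(frame: np.ndarray) -> list:
    # Return the three flipped variants of a gait frame.
    return [np.fliplr(frame),             # horizontal flip
            np.flipud(frame),             # vertical flip
            np.flipud(np.fliplr(frame))]  # combined flip

# Usage: augmented = [frame] + flip_augment(frame)
```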


Subjects
Deep Learning; Algorithms; Gait; Humans
19.
Diagnostics (Basel) ; 10(8)2020 Aug 06.
Article in English | MEDLINE | ID: mdl-32781795

ABSTRACT

Manual identification of brain tumors is an error-prone and tedious process for radiologists; therefore, it is crucial to adopt an automated system. Binary classification (e.g., malignant vs. benign) is relatively trivial, whereas multimodal brain tumor classification (T1, T2, T1CE, and FLAIR) is a challenging task for radiologists. Here, we present an automated multimodal classification method using deep learning for brain tumor type classification. The proposed method consists of five core steps. In the first step, linear contrast stretching is employed using edge-based histogram equalization and the discrete cosine transform (DCT). In the second step, deep learning feature extraction is performed: utilizing transfer learning, two pre-trained convolutional neural network (CNN) models, VGG16 and VGG19, are used for feature extraction. In the third step, a correntropy-based joint learning approach is implemented along with the extreme learning machine (ELM) for the selection of the best features. In the fourth step, the partial least squares (PLS)-based robust covariant features are fused into one matrix. The combined matrix is fed to the ELM for final classification. The proposed method was validated on the BraTS datasets, achieving accuracies of 97.8%, 96.9%, and 92.5% for BraTS2015, BraTS2017, and BraTS2018, respectively.
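One plausible sketch of the PLS-based fusion step, projecting the VGG16 and VGG19 feature sets onto shared latent components and stacking them into one matrix for the ELM; the component count and file names are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

f_vgg16 = np.load("vgg16_feats.npy")  # assumed precomputed deep features
f_vgg19 = np.load("vgg19_feats.npy")

pls = PLSRegression(n_components=100)
x_scores, y_scores = pls.fit_transform(f_vgg16, f_vgg19)  # shared latent components
fused = np.hstack([x_scores, y_scores])                   # one combined matrix for the ELM
```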

20.
Sensors (Basel) ; 20(13)2020 Jul 06.
Article in English | MEDLINE | ID: mdl-32640710

ABSTRACT

Congenital heart disease (CHD) is a heart disorder associated with devastating indications that result in increased mortality, increased morbidity, increased healthcare expenditure, and decreased quality of life. Ventricular Septal Defects (VSDs) and Atrial Septal Defects (ASDs) are the most common types of CHD. CHDs can be controlled before reaching a serious phase with an early diagnosis. The phonocardiogram (PCG), or heart sound auscultation, is a simple and non-invasive technique that may reveal obvious variations among different CHDs. Diagnosis based on heart sounds is difficult and requires a high level of medical training and skill due to human hearing limitations and the non-stationary nature of PCGs. An automated computer-aided system may boost the diagnostic objectivity and consistency of PCG signals in the detection of CHDs. The objective of this research was to assess the effects of various pattern recognition modalities for the design of an automated system that effectively differentiates normal, ASD, and VSD categories using short-term PCG time series. The proposed model adopts three-stage processing: pre-processing, feature extraction, and classification. Empirical mode decomposition (EMD) was used to denoise the raw PCG signals acquired from subjects. One-dimensional local ternary patterns (1D-LTPs) and Mel-frequency cepstral coefficients (MFCCs) were extracted from the denoised PCG signal for a precise representation of data from the different classes. In the final stage, the fused feature vector of 1D-LTPs and MFCCs was fed to a support vector machine (SVM) classifier using 10-fold cross-validation. The PCG signals were acquired from subjects admitted to local hospitals and classified through various experiments. The proposed methodology achieves a mean accuracy of 95.24% in classifying ASD, VSD, and normal subjects, and can serve as a second opinion for cardiologists by providing more objective and faster interpretations of PCG signals.
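A minimal sketch of the signal pipeline, EMD-based denoising (dropping the noisiest intrinsic mode function), MFCC extraction, and an SVM evaluated with 10-fold cross-validation, is given below. Which IMFs to drop, the sampling rate, and all sizes are assumptions, and the 1D-LTP features are omitted for brevity.

```python
import numpy as np
import librosa
from PyEMD import EMD
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def denoise_pcg(signal: np.ndarray) -> np.ndarray:
    imfs = EMD().emd(signal)
    return imfs[1:].sum(axis=0)  # drop the first (highest-frequency) IMF

def mfcc_features(signal: np.ndarray, sr: int = 2000) -> np.ndarray:
    mfcc = librosa.feature.mfcc(y=signal.astype(np.float32), sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)     # summarize the frames into one vector

# X: stacked per-recording feature vectors, y: labels in {normal, ASD, VSD}
# scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=10)
```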


Subjects
Heart Defects, Congenital; Heart Sounds; Signal Processing, Computer-Assisted; Algorithms; Heart Defects, Congenital/diagnosis; Humans; Phonocardiography; Quality of Life; Support Vector Machine