2.
Diagnostics (Basel) ; 13(7)2023 Apr 06.
Article in English | MEDLINE | ID: mdl-37046581

ABSTRACT

Medical image analysis and classification is an important application of computer vision in which disease prediction based on an input image assists healthcare professionals. Many deep learning architectures accept different medical image modalities and provide diagnostic decisions for various cancers, including breast cancer, cervical cancer, and others. The Pap-smear test is the commonly used diagnostic procedure for early identification of cervical cancer, but it has a high rate of false-positive results due to human error. Therefore, computer-aided diagnostic systems based on deep learning need to be further researched to classify Pap-smear images accurately. A fuzzy min-max neural network is a neuro-fuzzy architecture with many advantages, such as training with a minimum number of passes, handling overlapping-class classification, and supporting online training and adaptation. This paper proposes a novel hybrid technique that uses deep learning architectures for feature extraction and machine learning classifiers, including a fuzzy min-max neural network, for Pap-smear image classification. The pretrained deep learning models used are AlexNet, ResNet-18, ResNet-50, and GoogLeNet. The benchmark datasets used for experimentation are Herlev and SIPaKMeD. The highest classification accuracy of 95.33% is obtained using the fine-tuned ResNet-50 architecture, followed by AlexNet, on the SIPaKMeD dataset. In addition to the improved accuracies, the proposed model exploits the advantages of fuzzy min-max neural network classifiers reported in the literature.
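The feature-extraction-plus-classifier pattern described above can be sketched as follows: a frozen, pretrained ResNet-50 produces deep features that a shallow classifier then consumes. This is a minimal illustration, not the paper's pipeline; a scikit-learn SVM stands in for the fuzzy min-max neural network (which has no standard library implementation), and the dataset path and folder layout are hypothetical.

```python
# Sketch of the deep-feature-plus-classifier pattern: a frozen ResNet-50
# extracts 2048-d features; an SVM stands in for the fuzzy min-max classifier.
import torch
import torch.nn as nn
import numpy as np
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader
from sklearn.svm import SVC

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained ResNet-50 with the final fully connected layer removed.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()            # output: 2048-d feature vectors
backbone.eval().to(device)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: one sub-directory per cervical cell class.
dataset = datasets.ImageFolder("pap_smear_images/", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=False)

features, labels = [], []
with torch.no_grad():
    for images, targets in loader:
        features.append(backbone(images.to(device)).cpu().numpy())
        labels.append(targets.numpy())

X, y = np.concatenate(features), np.concatenate(labels)

# Any shallow classifier can consume the deep features.
clf = SVC(kernel="rbf").fit(X, y)
```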

3.
Sensors (Basel) ; 23(5)2023 Mar 06.
Article in English | MEDLINE | ID: mdl-36905057

ABSTRACT

Word-level sign language recognition (WSLR) is the backbone of continuous sign language recognition (CSLR), which infers glosses from sign videos. Finding the relevant gloss in a sign sequence and detecting explicit gloss boundaries in sign videos are persistent challenges. In this paper, we propose a systematic approach for gloss prediction in WSLR using the Sign2Pose gloss prediction transformer model. The primary goal of this work is to enhance WSLR gloss prediction accuracy with reduced time and computational overhead. The proposed approach uses hand-crafted features rather than automated feature extraction, which is computationally expensive and less accurate. A modified key frame extraction technique is proposed that uses histogram-difference and Euclidean-distance metrics to select frames and drop redundant ones. To enhance the model's generalization ability, pose vector augmentation using perspective transformation along with joint angle rotation is performed. Further, for normalization, we employed YOLOv3 (You Only Look Once) to detect the signing space and track the signers' hand gestures in the frames. In experiments on the WLASL datasets, the proposed model achieved top-1 recognition accuracies of 80.9% on WLASL100 and 64.21% on WLASL300, surpassing state-of-the-art approaches. The integration of key frame extraction, augmentation, and pose estimation improved the performance of the proposed gloss prediction model by increasing its precision in locating minor variations in the signers' body posture. We observed that introducing YOLOv3 improved gloss prediction accuracy and helped prevent model overfitting. Overall, the proposed model showed a 17% performance improvement on the WLASL100 dataset.
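A key frame selection step of the kind described in this abstract can be illustrated with histogram differences between consecutive frames; the grayscale histogram, Euclidean distance on normalized histograms, and the threshold value below are assumptions for illustration, not the paper's exact procedure.

```python
# Illustrative key frame selection: keep a frame only if its histogram
# differs enough (Euclidean distance) from the last kept frame.
import cv2
import numpy as np

def key_frames(video_path: str, threshold: float = 0.15):
    cap = cv2.VideoCapture(video_path)
    kept, prev_hist = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is None or np.linalg.norm(hist - prev_hist) > threshold:
            kept.append(frame)      # frame is sufficiently different: keep it
            prev_hist = hist
    cap.release()
    return kept
```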

4.
Soft comput ; 27(5): 2635-2643, 2023.
Article in English | MEDLINE | ID: mdl-32904395

ABSTRACT

The novel coronavirus infection (COVID-19), first identified in China in December 2019, has spread across the globe rapidly, infecting over ten million people. The World Health Organization (WHO) declared it a pandemic on March 11, 2020. What makes it even more critical is the lack of vaccines available to control the disease, although many pharmaceutical companies and research institutions around the world are working toward effective solutions to battle this life-threatening disease. X-ray and computed tomography (CT) imaging is one of the most promising areas of investigation; it can support early diagnosis of disease and gives both quick and precise results. In this study, convolutional neural networks are used for binary pneumonia classification, comparing fine-tuned VGG-19 and Inception_V2 models and a decision tree model on an X-ray and CT scan image dataset containing 360 images. The results indicate that the fine-tuned VGG-19 model shows highly satisfactory performance, with training and validation accuracy of 91%, compared with the Inception_V2 (78%) and decision tree (60%) models.
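A minimal sketch of fine-tuning VGG-19 for binary chest-image classification is shown below, assuming a Keras environment, a hypothetical directory layout (xray_ct_dataset/train and /val), and generic hyperparameters; the study's exact configuration is not reproduced.

```python
# Minimal VGG-19 transfer-learning sketch: frozen convolutional base,
# new binary classification head, VGG-specific input preprocessing.
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                      # freeze the convolutional base

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.vgg19.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)   # pneumonia vs. normal
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

train_ds = tf.keras.utils.image_dataset_from_directory(
    "xray_ct_dataset/train", label_mode="binary",
    image_size=(224, 224), batch_size=16)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "xray_ct_dataset/val", label_mode="binary",
    image_size=(224, 224), batch_size=16)

model.fit(train_ds, validation_data=val_ds, epochs=10)
```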

5.
PeerJ Comput Sci ; 8: e1100, 2022.
Article in English | MEDLINE | ID: mdl-36262147

ABSTRACT

The exponential rise of social media via microblogging sites like Twitter has sparked interest in sentiment analysis that exploits user feedback toward a targeted product or service. Given its significance in business intelligence and decision-making, numerous efforts have been made in this area. However, the lack of dictionaries, unannotated data, large-scale unstructured data, and low accuracies have plagued these approaches. Sentiment classification through classifier ensembles has also been underexplored in the literature. In this article, we propose a Semantic Relational Machine Learning (SRML) model that automatically classifies the sentiment of tweets using a classifier ensemble and optimal features. The model employs the Cascaded Feature Selection (CFS) strategy, a novel statistical assessment approach based on the Wilcoxon rank sum test, a univariate logistic regression assisted significant predictor test, and a cross-correlation test. It further uses word2vec-based continuous bag-of-words and n-gram feature extraction in conjunction with SentiWordNet to find optimal features for classification. We experiment on six public Twitter sentiment datasets: the STS-Gold dataset, the Obama-McCain Debate (OMD) dataset, the healthcare reform (HCR) dataset, and SemEval2017 Tasks 4A, 4B, and 4C, with a heterogeneous classifier ensemble comprising fourteen individual classifiers from different paradigms. Results from the experimental study indicate that CFS helps attain higher classification accuracy with up to 50% fewer features than the count-vectorizer approach. In intra-model performance assessment, the Artificial Neural Network-Gradient Descent (ANN-GD) classifier performs comparatively better than the other individual classifiers, while the Best Trained Ensemble (BTE) strategy outperforms them on all metrics. In inter-model performance assessment against existing state-of-the-art systems, the proposed model achieves higher accuracy and outperforms models employing quantum-inspired sentiment representation (QSR), transformer-based methods such as BERT, BERTweet, and RoBERTa, and ensemble techniques. The research thus provides critical insights into implementing a similar strategy to build more generic and robust expert systems for sentiment analysis that can be leveraged across industries.
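The word2vec component can be illustrated as below: CBOW embeddings (sg=0 in gensim) averaged per tweet and fed to a simple classifier. The toy corpus, tokenization, vector size, and the logistic-regression stand-in are assumptions; the CFS statistical selection and SentiWordNet scoring are not reproduced.

```python
# Sketch of word2vec CBOW tweet features feeding a simple classifier.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

tweets = [["great", "service", "today"], ["worst", "app", "ever"]]  # toy corpus
labels = [1, 0]

# sg=0 selects the continuous bag-of-words (CBOW) training mode.
w2v = Word2Vec(sentences=tweets, vector_size=100, window=5, min_count=1, sg=0)

def tweet_vector(tokens, model):
    """Average the word vectors of the tokens present in the vocabulary."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

X = np.vstack([tweet_vector(t, w2v) for t in tweets])
clf = LogisticRegression().fit(X, labels)
```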

6.
Appl Intell (Dordr) ; 51(3): 1690-1700, 2021.
Article in English | MEDLINE | ID: mdl-34764553

ABSTRACT

COVID-19 is a rapidly spreading viral disease that infects not only humans but also animals. Daily life, human health, and national economies are affected by this deadly viral disease. COVID-19 spreads easily, and so far no country has been able to prepare a vaccine for it. Clinical studies of COVID-19 patients have shown that most of these patients develop a lung infection after contracting the disease. Chest X-ray (i.e., radiography) and chest CT are effective imaging techniques for diagnosing lung-related problems, and a chest X-ray is a substantially lower-cost procedure than a chest CT. Deep learning, the most successful machine learning technique, provides useful analysis of large numbers of chest X-ray images, which can critically impact COVID-19 screening. In this work, we use posteroanterior (PA) chest X-ray scans of COVID-19-affected patients as well as healthy patients. After cleaning up the images and applying data augmentation, we used deep learning-based CNN models and compared their performance. We compared the Inception V3, Xception, and ResNeXt models and examined their accuracy. To analyze model performance, 6432 chest X-ray scan samples were collected from the Kaggle repository, of which 5467 were used for training and 965 for validation. In the result analysis, the Xception model gives the highest accuracy (97.97%) for classifying chest X-ray images compared with the other models. This work only focuses on possible methods of classifying COVID-19-infected patients and does not claim any medical accuracy.
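The augmentation step mentioned above can be sketched with Keras preprocessing layers; the transform ranges and directory name below are assumptions, since the abstract does not specify the exact augmentation settings. Any of the compared backbones could then be attached, as in the fine-tuning sketch shown earlier.

```python
# Sketch of an image-augmentation pipeline applied to the training split only.
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),   # small rotations; chest anatomy is orientation-sensitive
    layers.RandomZoom(0.1),
    layers.RandomContrast(0.1),
])

train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/train", image_size=(299, 299), batch_size=32)

# Apply augmentation only to the training pipeline, not to validation data.
train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
```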

7.
Sensors (Basel) ; 19(15)2019 Jul 26.
Article in English | MEDLINE | ID: mdl-31357390

ABSTRACT

This paper presents a system dedicated to monitoring heart activity parameters using mobile electrocardiography (ECG) devices and a Wearable Heart Monitoring Inductive Sensor (WHMIS), a new method and device developed by us as an experimental model to assess the mechanical activity of the heart using inductive sensors inserted into the fabric of clothing. Only one inductive sensor, incorporated in the clothing in front of the apex area, is needed to assess cardiorespiratory activity, whereas prior art describes methods that rely on sensor arrays distributed over several parts of the body. The assessed parameters are heart rate and respiration. The results are considered preliminary and serve to prove the feasibility of the method. The main goal of the study is to extract the respiration and heart-rate parameters from the same output signal generated by the inductance-to-number converter using a suitable algorithm. The device is intended to be part of "wear and forget" equipment dedicated to continuous monitoring of vital signs.
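The idea of recovering both respiration and heart rate from a single sensor waveform can be illustrated with two band-pass filters applied to the same signal; the sampling rate, cut-off frequencies, and synthetic signal below are assumptions and do not reproduce the authors' algorithm.

```python
# Illustrative separation of respiratory and cardiac components from one waveform.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 100.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
# Synthetic stand-in for the inductance-to-number converter output:
# slow respiratory component plus faster cardiac component plus noise.
signal = (np.sin(2 * np.pi * 0.25 * t)
          + 0.3 * np.sin(2 * np.pi * 1.2 * t)
          + 0.05 * np.random.randn(t.size))

def bandpass(x, low, high, fs, order=3):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

respiration = bandpass(signal, 0.1, 0.5, fs)   # ~6-30 breaths per minute
cardiac = bandpass(signal, 0.8, 3.0, fs)       # ~48-180 beats per minute

beats, _ = find_peaks(cardiac, distance=fs * 0.4)
heart_rate_bpm = 60 * len(beats) / (t[-1] - t[0])
```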


Subject(s)
Biosensing Techniques , Electrocardiography/methods , Heart/physiology , Monitoring, Physiologic , Algorithms , Heart Rate/physiology , Humans , Respiration , Textiles , Wearable Electronic Devices
8.
J Med Syst ; 42(12): 247, 2018 Oct 31.
Article in English | MEDLINE | ID: mdl-30382410

ABSTRACT

Disease diagnosis from medical images has become increasingly important in medical science, and abnormality identification in retinal images is a challenging task. Effective machine learning and soft computing methods should be used to facilitate diabetic retinopathy diagnosis from retinal images, and artificial neural networks are widely preferred for this purpose. It has been observed that conventional neural networks, especially the Hopfield Neural Network (HNN), may be inaccurate because the weight values are not adjusted during the training process. This paper presents a new Modified Hopfield Neural Network (MHNN) for abnormality classification from human retinal images. It relies on the idea that both weight values and output values can be adjusted simultaneously. The novelty of the proposed method lies in the training algorithm: in the conventional method the weights remain fixed, whereas in the proposed method they are updated. Experiments performed on a dataset of 540 images collected from Lotus Eye Care Hospital showed that the proposed MHNN yields an average sensitivity and specificity of 0.99 and an accuracy of 99.25%. The proposed MHNN performs better than the HNN and other neural network approaches for diabetic retinopathy diagnosis from retinal images.
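For contrast with the MHNN, the sketch below shows a conventional discrete Hopfield network, in which the weights stay fixed after Hebbian storage, which is precisely the limitation the proposed method addresses; the MHNN's simultaneous weight and output updates are not reproduced here, and the toy patterns are hypothetical.

```python
# Conventional discrete Hopfield network: weights fixed after Hebbian storage.
import numpy as np

def hebbian_weights(patterns: np.ndarray) -> np.ndarray:
    """Store bipolar (+1/-1) patterns; the weights stay fixed afterwards."""
    W = patterns.T @ patterns / patterns.shape[0]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W: np.ndarray, state: np.ndarray, steps: int = 20) -> np.ndarray:
    """Asynchronous updates drive the state toward a stored pattern."""
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]])
W = hebbian_weights(patterns)
noisy = np.array([1, -1, 1, -1, -1, -1])
print(recall(W, noisy))
```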


Subject(s)
Diabetic Retinopathy/diagnosis , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Aged , Algorithms , Diabetic Retinopathy/diagnostic imaging , Female , Humans , Male , Middle Aged , Sensitivity and Specificity
9.
Comput Med Imaging Graph ; 68: 40-54, 2018 09.
Article in English | MEDLINE | ID: mdl-29890404

ABSTRACT

According to the American Cancer Society, melanoma is one of the most common types of cancer in the world. In 2017, approximately 87,110 new cases of skin cancer were diagnosed in the United States alone. A dermatoscope is a tool that captures lesion images at high resolution and is one of the main clinical tools used to diagnose, evaluate, and monitor this disease. This paper presents a new approach to classify melanoma automatically using a structural co-occurrence matrix (SCM) of the main frequencies extracted from dermoscopy images. The main advantage of this approach is that it turns the SCM into an adaptive feature extractor, improving its discriminative power using only the image as a parameter. The images were collected from the International Skin Imaging Collaboration (ISIC) 2016 and 2017 datasets and the Pedro Hispano Hospital (PH2) dataset. Specificity (Spe), sensitivity (Sen), positive predictive value, F-score, harmonic mean, accuracy (Acc), and area under the curve (AUC) were used to verify the efficiency of the SCM. The results show that the SCM in the frequency domain works automatically and obtains better results than local binary patterns, the gray-level co-occurrence matrix, and Hu's invariant moments, as well as recent works using the same datasets. The results of the proposed approach were: Spe 95.23%, 92.15%, and 99.4%; Sen 94.57%, 89.9%, and 99.2%; Acc 94.5%, 89.93%, and 99%; and AUC 92%, 90%, and 99% on the ISIC 2016, ISIC 2017, and PH2 datasets, respectively.
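The structural co-occurrence matrix has no standard library implementation; the NumPy sketch below computes a plain gray-level co-occurrence matrix, one of the baselines the authors compare against, only to illustrate the underlying co-occurrence idea. The SCM differs in that it operates on frequency-domain components and adapts to the input image; the image patch and quantization choices below are assumptions.

```python
# Plain gray-level co-occurrence matrix as an illustration of co-occurrence features.
import numpy as np

def cooccurrence(image: np.ndarray, levels: int = 8, dx: int = 1, dy: int = 0):
    """Count how often gray level i is followed by gray level j at offset (dy, dx)."""
    # Quantize the image to a small number of gray levels.
    q = (image.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    C = np.zeros((levels, levels), dtype=float)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            C[q[y, x], q[y + dy, x + dx]] += 1
    return C / C.sum()                       # normalize to joint probabilities

img = (np.random.rand(64, 64) * 256).astype(np.uint8)   # stand-in lesion patch
P = cooccurrence(img)
contrast = sum(P[i, j] * (i - j) ** 2 for i in range(8) for j in range(8))
```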


Subject(s)
Melanoma/classification , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology , Algorithms , Humans , Image Processing, Computer-Assisted , Machine Learning