1.
BMC Oral Health ; 24(1): 601, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38783295

ABSTRACT

PROBLEM: Oral squamous cell carcinoma (OSCC) is the eighth most prevalent cancer globally, leading to the loss of structural integrity within the oral cavity layers and membranes. Given its high prevalence, early diagnosis is crucial for effective treatment. AIM: This study aimed to utilize recent advancements in deep learning for medical image classification to automate the early diagnosis of oral histopathology images, thereby facilitating prompt and accurate detection of oral cancer. METHODS: A deep learning convolutional neural network (CNN) model was developed to categorize benign and malignant oral biopsy histopathology images. By leveraging 17 pretrained DL-CNN models, a two-step statistical analysis identified the pretrained EfficientNetB0 model as the best-performing architecture. EfficientNetB0 was further enhanced by incorporating a dual attention network (DAN) into the model architecture. RESULTS: The improved EfficientNetB0 model demonstrated impressive performance metrics, including an accuracy of 91.1%, sensitivity of 92.2%, specificity of 91.0%, precision of 91.3%, false-positive rate (FPR) of 1.12%, F1 score of 92.3%, Matthews correlation coefficient (MCC) of 90.1%, kappa of 88.8%, and computational time of 66.41%. Notably, this model surpasses the performance of state-of-the-art approaches in the field. CONCLUSION: Integrating deep learning techniques, specifically the enhanced EfficientNetB0 model with a DAN, shows promising results for the automated early diagnosis of oral cancer through oral histopathology image analysis. This advancement has significant potential for improving the efficacy of oral cancer treatment strategies.
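As a rough illustration of the pipeline described in this abstract, the sketch below builds an EfficientNetB0 backbone with a simple channel-plus-spatial attention head standing in for the dual attention network (DAN). The framework (Keras), input size, attention design, and training settings are assumptions, since the abstract does not specify them.

```python
# Sketch only: EfficientNetB0 backbone with an assumed dual-attention head
# (channel attention + spatial attention) for benign vs. malignant classification.
import tensorflow as tf
from tensorflow.keras import layers, models

def channel_attention(x, reduction=8):
    # Squeeze-and-excitation style channel branch (assumed form of the DAN)
    c = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)
    s = layers.Dense(c // reduction, activation="relu")(s)
    s = layers.Dense(c, activation="sigmoid")(s)
    return layers.Multiply()([x, layers.Reshape((1, 1, c))(s)])

def spatial_attention(x):
    # Spatial branch built from channel-wise average and max maps (assumed form)
    avg = tf.reduce_mean(x, axis=-1, keepdims=True)
    mx = tf.reduce_max(x, axis=-1, keepdims=True)
    att = layers.Conv2D(1, 7, padding="same", activation="sigmoid")(
        layers.Concatenate()([avg, mx]))
    return layers.Multiply()([x, att])

inputs = layers.Input((224, 224, 3))                      # input size assumed
backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_tensor=inputs)
x = spatial_attention(channel_attention(backbone.output))
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)        # benign vs. malignant
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```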


Subject(s)
Carcinoma, Squamous Cell; Deep Learning; Mouth Neoplasms; Neural Networks, Computer; Humans; Mouth Neoplasms/pathology; Mouth Neoplasms/diagnostic imaging; Mouth Neoplasms/diagnosis; Carcinoma, Squamous Cell/pathology; Carcinoma, Squamous Cell/diagnostic imaging; Carcinoma, Squamous Cell/diagnosis; Early Detection of Cancer/methods; Sensitivity and Specificity
2.
Contemp Oncol (Pozn) ; 28(1): 37-44, 2024.
Article in English | MEDLINE | ID: mdl-38800533

ABSTRACT

Introduction: This study introduces a novel methodology for classifying human papillomavirus (HPV) using colposcopy images, focusing on its potential in diagnosing cervical cancer, the second most prevalent malignancy among women globally. Addressing a crucial gap in the literature, this study highlights the unexplored territory of HPV-based colposcopy image diagnosis for cervical cancer. Colposcopy screening is emphasised as well suited to underdeveloped and low-income regions owing to its small, cost-effective setup, which eliminates the need for biopsy specimens. The methodological framework includes robust dataset augmentation and feature extraction using the EfficientNetB0 architecture. Material and methods: The optimal convolutional neural network model was selected through experimentation with 19 architectures, and fine-tuning with the fine k-nearest neighbour (KNN) algorithm enhanced the classification precision, enabling detailed distinctions with a single neighbour. Results: The proposed methodology achieved outstanding results, with a validation accuracy of 99.9% and an area under the curve (AUC) of 99.86%, and robust performance on test data (91.4% accuracy and an AUC of 91.76%). These remarkable findings underscore the effectiveness of the integrated approach, which offers a highly accurate and reliable system for HPV classification. Conclusions: This research sets the stage for advancements in medical imaging applications, prompting future refinement and validation in diverse clinical settings.
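The pipeline described above can be sketched roughly as follows, assuming EfficientNetB0 acts as a frozen 1,280-dimensional feature extractor in front of a single-neighbour ("fine") KNN; the framework (Keras/scikit-learn), image size, preprocessing, and augmentation step are assumptions.

```python
# Sketch only: EfficientNetB0 features + fine (k = 1) KNN for colposcopy images.
import tensorflow as tf
from sklearn.neighbors import KNeighborsClassifier

extractor = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg")    # 1280-d descriptor

def features(images):
    # images: float array of shape (n, 224, 224, 3) with values in [0, 255]
    return extractor.predict(
        tf.keras.applications.efficientnet.preprocess_input(images), verbose=0)

def train_fine_knn(train_images, train_labels):
    knn = KNeighborsClassifier(n_neighbors=1)    # single neighbour, as described
    return knn.fit(features(train_images), train_labels)
```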

3.
Int Ophthalmol ; 44(1): 174, 2024 Apr 13.
Article in English | MEDLINE | ID: mdl-38613630

ABSTRACT

PURPOSE: This study aims to address the challenge of identifying retinal damage in medical applications through a computer-aided diagnosis (CAD) approach. Data was collected from four prominent eye hospitals in India for analysis and model development. METHODS: Data was collected from Silchar Medical College and Hospital (SMCH), Aravind Eye Hospital (Tamil Nadu), LV Prasad Eye Hospital (Hyderabad), and Medanta (Gurugram). A modified version of the ResNet-101 architecture, named ResNet-RS, was utilized for retinal damage identification. In this modified architecture, the last layer's softmax function was replaced with a support vector machine (SVM). The resulting model, termed ResNet-RS-SVM, was trained and evaluated on each hospital's dataset individually and collectively. RESULTS: The proposed ResNet-RS-SVM model achieved high accuracies across the datasets from the different hospitals: 99.17% for Aravind, 98.53% for LV Prasad, 98.33% for Medanta, and 100% for SMCH. When considering all hospitals collectively, the model attained an accuracy of 97.19%. CONCLUSION: The findings demonstrate the effectiveness of the ResNet-RS-SVM model in accurately identifying retinal damage in diverse datasets collected from multiple eye hospitals in India. This approach presents a promising advancement in computer-aided diagnosis for improving the detection and management of retinal diseases.
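In outline, the ResNet-plus-SVM idea described above looks like the sketch below: a pretrained ResNet-101 supplies features and an SVM replaces the final softmax classifier. The framework, kernel, input size, and preprocessing are assumptions rather than the authors' exact ResNet-RS-SVM settings.

```python
# Sketch only: pretrained ResNet-101 features classified by an SVM
# in place of the softmax layer.
import tensorflow as tf
from sklearn.svm import SVC

backbone = tf.keras.applications.ResNet101(
    include_top=False, weights="imagenet", pooling="avg")    # 2048-d features

def train_resnet_svm(train_images, train_labels):
    x = tf.keras.applications.resnet.preprocess_input(train_images)
    feats = backbone.predict(x, verbose=0)
    return SVC(kernel="linear").fit(feats, train_labels)     # kernel assumed
```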


Subject(s)
Retinal Diseases; Support Vector Machine; Humans; India/epidemiology; Diagnosis, Computer-Assisted; Hospitals; Retinal Diseases/diagnosis
4.
Cancer Inform ; 22: 11769351231161477, 2023.
Article in English | MEDLINE | ID: mdl-37008072

ABSTRACT

The second most frequent malignancy in women worldwide is cervical cancer. In the transformation (transitional) zone, a region of the cervix, columnar cells are continuously converting into squamous cells; this transformation zone is the most typical location on the cervix for the development of aberrant cells. This article suggests a 2-phase method that includes segmenting and classifying the transformation zone to identify the type of cervical cancer. In the initial stage, the transformation zone is segmented from the colposcopy images. The segmented images are then subjected to the augmentation process and classified with an improved Inception-ResNet-v2. Here, a multi-scale feature fusion framework that utilizes 3 × 3 convolution kernels from the Reduction-A and Reduction-B blocks of Inception-ResNet-v2 is introduced. The features extracted from Reduction-A and Reduction-B are concatenated and fed to an SVM for classification. This way, the model combines the benefits of residual networks and Inception convolutions, increasing network width and resolving the deep network's training issue. The network can extract contextual information at several scales owing to the multi-scale feature fusion, which increases accuracy. The experimental results reveal 81.24% accuracy, 81.24% sensitivity, 90.62% specificity, 87.52% precision, 9.38% FPR, an 81.68% F1 score, 75.27% MCC, and a 57.79% kappa coefficient.
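A rough sketch of the fusion step described above: feature maps are tapped from the Reduction-A and Reduction-B blocks of Inception-ResNet-v2, pooled, concatenated, and passed to an SVM. The Keras layer names 'mixed_6a' and 'mixed_7a' are assumed to correspond to Reduction-A and Reduction-B, the segmentation and augmentation stages are omitted, and the SVM kernel is an assumption.

```python
# Sketch only: multi-scale feature fusion from Inception-ResNet-v2 + SVM.
import tensorflow as tf
from sklearn.svm import SVC

base = tf.keras.applications.InceptionResNetV2(include_top=False, weights="imagenet")
taps = [base.get_layer("mixed_6a").output,    # assumed Reduction-A output
        base.get_layer("mixed_7a").output]    # assumed Reduction-B output
pooled = [tf.keras.layers.GlobalAveragePooling2D()(t) for t in taps]
fusion = tf.keras.Model(base.input, tf.keras.layers.Concatenate()(pooled))

def train_fusion_svm(segmented_images, labels):
    x = tf.keras.applications.inception_resnet_v2.preprocess_input(segmented_images)
    feats = fusion.predict(x, verbose=0)             # fused multi-scale features
    return SVC(kernel="linear").fit(feats, labels)   # kernel assumed
```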

5.
J Xray Sci Technol ; 31(1): 211-221, 2023.
Article in English | MEDLINE | ID: mdl-36463485

ABSTRACT

Among malignant tumors, lung cancer has the highest morbidity and fatality rates worldwide. Screening for lung cancer has been investigated for decades in order to reduce the mortality rates of lung cancer patients, and treatment options have improved dramatically in recent years. Pathologists utilize various techniques to determine the stage, type, and subtype of lung cancers, but one of the most common is visual assessment of histopathology slides. The most common subtypes of lung cancer are adenocarcinoma and squamous cell carcinoma, and distinguishing these from benign lung tissue requires visual inspection by a skilled pathologist. The purpose of this article was to develop a hybrid network for the categorization of lung histopathology images by combining AlexNet, wavelet features, and support vector machines. In this study, we feed integrated discrete wavelet transform (DWT) coefficients and AlexNet deep features into linear support vector machines (SVMs) for lung nodule sample classification. The LC25000 lung and colon histopathology image dataset, which contains 5,000 digital histopathology images in three categories of benign (normal cells), adenocarcinoma, and squamous carcinoma cells (both cancerous), is used in this study to train and test the SVM classifiers. Using a 10-fold cross-validation method, the study achieves an accuracy of 99.3% and an area under the curve (AUC) of 0.99 in classifying these digital histopathology images of lung nodule samples.
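The fusion idea reads, in outline, like the sketch below: 4,096-dimensional AlexNet deep features are concatenated with flattened DWT approximation coefficients and classified by a linear SVM under 10-fold cross-validation. The wavelet family, grayscale conversion, and framework (PyTorch/PyWavelets) are illustrative assumptions.

```python
# Sketch only: AlexNet deep features fused with DWT coefficients, linear SVM, 10-fold CV.
import numpy as np
import pywt
import torch
from torchvision import models
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
# Everything up to the penultimate fully connected layer -> 4096-d deep feature
deep_extractor = torch.nn.Sequential(
    alexnet.features, alexnet.avgpool, torch.nn.Flatten(),
    *list(alexnet.classifier.children())[:-1])

def fused_features(rgb_batch):
    # rgb_batch: float tensor (n, 3, 224, 224), already ImageNet-normalized
    with torch.no_grad():
        deep = deep_extractor(rgb_batch).numpy()
    gray = rgb_batch.mean(dim=1).numpy()                 # crude grayscale for the DWT
    dwt = np.stack([pywt.dwt2(g, "haar")[0].ravel() for g in gray])  # approximation coeffs
    return np.hstack([deep, dwt])

def evaluate_10fold(rgb_batch, labels):
    scores = cross_val_score(SVC(kernel="linear"), fused_features(rgb_batch), labels, cv=10)
    return scores.mean()
```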


Subject(s)
Adenocarcinoma; Carcinoma, Squamous Cell; Lung Neoplasms; Humans; Tomography, X-Ray Computed/methods; Lung Neoplasms/diagnostic imaging; Diagnosis, Computer-Assisted/methods; Adenocarcinoma/diagnostic imaging; Carcinoma, Squamous Cell/diagnostic imaging; Lung/diagnostic imaging; Support Vector Machine
6.
J Digit Imaging ; 35(5): 1207-1216, 2022 10.
Article in English | MEDLINE | ID: mdl-35524077

ABSTRACT

The skin is the body's largest organ, weighing approximately 8 pounds in the average adult. It insulates us and shields our bodies from hazards. However, the skin is also vulnerable to damage and to changes from its original appearance: brown, black, or blue discolourations, or combinations of those colours, known as pigmented skin lesions. These common pigmented skin lesions (CPSL) are a leading cause of skin cancer. In the healthcare sector, the categorization of CPSL remains a challenge because of inaccurate outputs, overfitting, and high computational costs. Hence, we proposed a classification model based on multiple deep features and a support vector machine (SVM) for the classification of CPSL. The proposed system comprises two phases: first, the performance of 11 CNN models is evaluated in a deep-feature-extraction-with-SVM approach; then, the deep features of the three top-performing CNN models are concatenated and classified with an SVM to categorize the CPSL. In the second phase, 8,192- and 12,288-dimensional feature vectors are obtained by combining the 4,096-dimensional features of pairs and triples of the top-performing CNN models. These fused features are given to the SVM classifiers, and the SVM results are also evaluated after applying the principal component analysis (PCA) algorithm to the 8,192- and 12,288-dimensional combined features. The highest results are obtained with the 12,288 features: the combination of deep features from AlexNet, VGG16, and VGG19 achieved the highest accuracy of 91.7% using the SVM classifier. These results show that the proposed method is a useful tool for CPSL classification.
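The best-performing triple-network fusion can be sketched as follows: the 4,096-dimensional descriptors of AlexNet, VGG16, and VGG19 are concatenated into a 12,288-dimensional vector and classified with an SVM, optionally after PCA. The framework (PyTorch/scikit-learn), preprocessing, and kernel are assumptions.

```python
# Sketch only: concatenated AlexNet + VGG16 + VGG19 descriptors, PCA, SVM.
import torch
from torchvision import models
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def penultimate(net):
    # Drop the final 1000-way layer so the network emits its 4096-d descriptor
    net.classifier = torch.nn.Sequential(*list(net.classifier.children())[:-1])
    return net.eval()

extractors = [penultimate(models.alexnet(weights=models.AlexNet_Weights.DEFAULT)),
              penultimate(models.vgg16(weights=models.VGG16_Weights.DEFAULT)),
              penultimate(models.vgg19(weights=models.VGG19_Weights.DEFAULT))]

def fused_12288(batch):
    # batch: ImageNet-normalized tensor of shape (n, 3, 224, 224)
    with torch.no_grad():
        return torch.cat([net(batch) for net in extractors], dim=1).numpy()  # (n, 12288)

classifier = make_pipeline(PCA(n_components=0.95), SVC(kernel="linear"))
# classifier.fit(fused_12288(train_batch), train_labels)   # training call; data assumed
```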


Subject(s)
Skin Neoplasms; Support Vector Machine; Adult; Humans; Algorithms; Skin
7.
Contemp Oncol (Pozn) ; 26(4): 268-274, 2022.
Article in English | MEDLINE | ID: mdl-36816391

ABSTRACT

Introduction: Cancer of the nervous system is one of the most common types of cancer worldwide and is mostly due to the presence of a tumour in the brain. The symptoms and severity of a brain tumour depend on its location. A tumour within the brain may develop from the nerves, the dura (meningioma), the pituitary gland (pituitary adenoma), or the brain tissue itself (glioma). Material and methods: In this study we propose a feature engineering approach for classifying magnetic resonance imaging (MRI) scans into the 3 most common kinds of brain tumour, i.e. glioma, meningioma, and pituitary, plus a no-tumour class. Here 5 machine learning classifiers were used, i.e. support vector machine, K-nearest neighbour (KNN), Naive Bayes, Decision Tree, and an Ensemble classifier, together with their paradigms. Results: Handcrafted features, namely histogram of oriented gradients, local binary pattern, and grey-level co-occurrence matrix descriptors, are extracted from the MRI, and a feature fusion technique is adopted to enhance the dimension of the feature vector. The Fine KNN outperforms the other classifiers in recognising the 4 MRI classes (glioma, meningioma, pituitary, and no tumour), achieving 91.1% accuracy and a 0.95 area under the curve (AUC). Conclusions: The proposed method, i.e. Fine KNN, achieved 91.1% accuracy and 0.96 AUC. Furthermore, unlike deep learning, which requires a complex system, this model can be integrated into low-end devices.
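A compact sketch of the handcrafted-feature pipeline: HOG, LBP, and GLCM descriptors are extracted from each MRI slice, fused by concatenation, and classified with a fine (single-neighbour) KNN. All parameter values below are illustrative assumptions, not the paper's settings.

```python
# Sketch only: HOG + LBP + GLCM feature fusion with a fine (k = 1) KNN classifier.
import numpy as np
from skimage.feature import hog, local_binary_pattern, graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def handcrafted(mri_u8):
    # mri_u8: 2-D uint8 grayscale MRI slice
    h = hog(mri_u8, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    lbp = local_binary_pattern(mri_u8, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    glcm = graycomatrix(mri_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    texture = [graycoprops(glcm, p)[0, 0]
               for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.concatenate([h, lbp_hist, texture])        # fused feature vector

def train_fine_knn(mri_slices, labels):
    # labels: glioma / meningioma / pituitary / no-tumour
    X = np.stack([handcrafted(s) for s in mri_slices])
    return KNeighborsClassifier(n_neighbors=1).fit(X, labels)
```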

8.
J Xray Sci Technol ; 29(2): 197-210, 2021.
Article in English | MEDLINE | ID: mdl-33492267

ABSTRACT

The objective of this study is to conduct a critical analysis to investigate and compare a group of computer-aided screening methods for COVID-19 using chest X-ray images and computed tomography (CT) images. The computer-aided screening methods include deep feature extraction, transfer learning, and machine learning image classification approaches. The deep feature extraction and transfer learning methods considered 13 pre-trained CNN models. The machine learning approach includes three sets of handcrafted features and three classifiers. The pre-trained CNN models are AlexNet, GoogleNet, VGG16, VGG19, Densenet201, Resnet18, Resnet50, Resnet101, Inceptionv3, Inceptionresnetv2, Xception, MobileNetv2, and ShuffleNet. The handcrafted features are GLCM, LBP, and HOG, and the machine learning classifiers are KNN, SVM, and Naive Bayes. In addition, the different paradigms of these classifiers are also analyzed. Overall, the comparative analysis covers 65 classification models, i.e., 13 in deep feature extraction, 13 in transfer learning, and 39 in the machine learning approaches. All classification models perform better on the chest X-ray image set than on the CT scan image set. Among the 65 classification models, VGG19 with SVM achieved the highest accuracy of 99.81% when applied to the chest X-ray images. In conclusion, the findings of this analysis are beneficial for researchers who are working towards designing computer-aided tools for screening for COVID-19 infection.
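The best-performing configuration reported (VGG19 deep features with an SVM on chest X-rays) can be sketched as below; the framework, image size, preprocessing, and SVM kernel are assumptions.

```python
# Sketch only: VGG19 deep features + SVM, the top configuration reported.
import tensorflow as tf
from sklearn.svm import SVC

vgg19 = tf.keras.applications.VGG19(include_top=False, weights="imagenet", pooling="avg")

def train_vgg19_svm(xray_images, labels):
    # xray_images: float array (n, 224, 224, 3) with values in [0, 255]
    feats = vgg19.predict(
        tf.keras.applications.vgg19.preprocess_input(xray_images), verbose=0)
    return SVC(kernel="linear").fit(feats, labels)       # kernel assumed
```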


Subject(s)
COVID-19/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Deep Learning; Humans; Machine Learning; Neural Networks, Computer; Radiography, Thoracic; SARS-CoV-2; Tomography, X-Ray Computed
9.
J Electr Eng Technol ; 16(4): 1799-1819, 2021.
Article in English | MEDLINE | ID: mdl-38624776

ABSTRACT

This paper proposes a SUGPDS model based on a detection and isolation algorithm and smart sensors, namely micro phasor measurement units, smart sensing and switching devices, a phasor data concentrator, and ZigBee technology, for the identification, classification, and isolation of the various faults that occur in underground power cables in the distribution system. The proposed SUGPDS is a quick and smart tool for supervising, managing, and controlling various faults and issues and for maintaining reliability, stability, and an uninterrupted flow of electricity. First, the SUGPDS model is analyzed using a distributed parameter approach. Then, the proper arrangement of the system required for the implementation of SUGPDS is demonstrated using figures. The phasor data concentrator plays an essential role in developing the detection and classification report used for identification and classification. Finally, smart sensing and switching devices installed at different locations isolate the faulty phase from the healthy network. This approach helps to decrease power consumption. Hence, the SUGPDS offers superior capabilities compared to a conventional underground power distribution system. The effectiveness of the proposed method and model is demonstrated via figures and tables.
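A deliberately simplified sketch of the detect-classify-isolate loop described above: per-phase current magnitudes reported by the micro phasor measurement units are checked against an overcurrent pickup, the fault type is inferred from which phases are involved, and the smart switching device for the affected section is commanded to open. All names, thresholds, and the classification rule are illustrative assumptions, not the paper's algorithm.

```python
# Sketch only: threshold-based fault detection, classification, and isolation logic.
from dataclasses import dataclass

@dataclass
class PhasorSample:
    section: str
    currents: dict       # phase -> current magnitude in amperes, e.g. {"A": 120.0, ...}

OVERCURRENT_A = 400.0    # assumed pickup threshold

def classify_fault(sample):
    faulted = [ph for ph, i in sample.currents.items() if i > OVERCURRENT_A]
    if not faulted:
        return None
    if len(faulted) == 1:
        return f"single-phase ({faulted[0]}-to-ground)"
    return "phase-to-phase" if len(faulted) == 2 else "three-phase"

def isolate(sample, open_switch):
    fault = classify_fault(sample)
    if fault:
        open_switch(sample.section)      # command the smart switching device
    return fault
```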
