Results 1 - 20 of 37
1.
Mar Pollut Bull ; 205: 116616, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38936001

ABSTRACT

Accurately classifying microalgae species is vital for monitoring marine ecosystems and for managing the emergence of marine mucilage. Traditional methods have been inadequate because they are time-consuming and require expert knowledge. The purpose of this article is to employ convolutional neural networks (CNNs) and support vector machines (SVMs) to improve classification accuracy and efficiency. By employing advanced computational techniques, including the MobileNet and GoogleNet models alongside SVM classification, the study demonstrates significant advances over conventional identification methods. In classifying a dataset of 7820 images with four different SVM kernel functions, the linear kernel achieved the highest success rate at 98.79%, followed by the RBF kernel at 98.73%, the polynomial kernel at 97.84%, and the sigmoid kernel at 97.20%. This research not only provides a methodological framework for future studies in marine biodiversity monitoring but also highlights the potential for real-time applications in ecological conservation and in understanding mucilage dynamics amid climate change and environmental pollution.
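
As a concrete illustration of the pipeline described above, the following sketch extracts deep features with a pretrained GoogLeNet and compares the four SVM kernels; the dataset path, image size, and cross-validation setup are illustrative assumptions, not details from the paper.

```python
# Sketch: deep CNN features + SVM kernel comparison, assuming a folder of
# labeled microalgae images; paths and preprocessing are illustrative.
import torch
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Pretrained GoogLeNet as a fixed feature extractor (final FC layer removed).
net = models.googlenet(weights="IMAGENET1K_V1")
net.fc = torch.nn.Identity()
net.eval()

tf = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
data = ImageFolder("microalgae/", transform=tf)   # hypothetical dataset path
loader = torch.utils.data.DataLoader(data, batch_size=64)

feats, labels = [], []
with torch.no_grad():
    for x, y in loader:
        feats.append(net(x))                      # 1024-d feature per image
        labels.append(y)
X = torch.cat(feats).numpy()
y = torch.cat(labels).numpy()

# Compare the four SVM kernels reported in the abstract.
for kernel in ["linear", "rbf", "poly", "sigmoid"]:
    acc = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    print(f"{kernel}: {acc:.4f}")
```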

2.
Heliyon ; 10(11): e31827, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38845915

ABSTRACT

Epilepsy is one of the most common brain disorders, and epileptic seizures have severe adverse effects on patients. Real-time seizure detection from electroencephalography (EEG) signals is an important research area aimed at improving the diagnosis and treatment of epilepsy. This paper proposes a real-time approach for detecting epileptic seizures from EEG signals using the short-time Fourier transform (STFT) and the GoogLeNet convolutional neural network (CNN). Performance was evaluated on the CHB-MIT database, yielding 97.74% accuracy, 98.90% sensitivity, and a 1.94% false positive rate. Additionally, the proposed method was implemented in real time using a sliding-window technique: it processes each 2-s EEG episode in just 0.02 s and detects seizure onset with an average delay of 9.85 s.
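
A minimal sketch of the STFT front end and sliding-window inference described above; the sampling rate matches CHB-MIT (256 Hz), but the STFT window parameters and the 1-s stride are assumptions.

```python
# Sketch: turning a 2-s EEG window into an STFT image for CNN input.
import numpy as np
from scipy.signal import stft

fs = 256                                   # Hz, CHB-MIT sampling rate

def eeg_window_to_image(x):
    """x: 1-D array of 2*fs samples from one EEG channel."""
    f, t, Z = stft(x, fs=fs, nperseg=64, noverlap=32)
    spec = np.log1p(np.abs(Z))             # log-magnitude spectrogram
    spec = (spec - spec.min()) / (np.ptp(spec) + 1e-8)
    return spec                            # 2-D array, fed to the CNN

# Sliding-window inference: step a 2-s window across the recording.
signal = np.random.randn(10 * fs)          # placeholder EEG channel
for start in range(0, len(signal) - 2 * fs + 1, fs):   # 1-s stride (assumed)
    img = eeg_window_to_image(signal[start:start + 2 * fs])
    # img would be resized to 224x224x3 and passed to GoogLeNet here
```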

3.
Sensors (Basel) ; 24(10)2024 May 14.
Article in English | MEDLINE | ID: mdl-38793972

ABSTRACT

Delamination is one of the most significant and dangerous types of damage in composite plates. Many recent papers have demonstrated the capability of structural health monitoring (SHM) techniques for investigating structural delamination of various shapes and depths. However, few studies have addressed the use of convolutional neural network (CNN) methods to automate the analysis of non-destructive testing (NDT) databases for identifying delamination size and depth. This paper presents an automated system capable of distinguishing between pristine and damaged structures and of classifying three classes of delamination at various depths. The system combines a proposed CNN model with the Lamb wave technique. A unidirectional composite plate with three delamination samples inserted at different depths was prepared for numerical and experimental investigation. In the numerical part, guided wave propagation and interaction with the three delamination samples were studied to observe how delamination depth affects the scattered and trapped waves over the delamination region. This numerical study was validated experimentally using an efficient ultrasonic guided wave technique involving piezoelectric wafer active sensors (PWASs) and a scanning laser Doppler vibrometer (SLDV). Both the numerical and experimental studies demonstrate that delamination depth has a direct effect on the energy and distribution of the trapped waves. Three datasets were collected from the numerical and experimental studies: a numerical wavefield image dataset, an experimental wavefield image dataset, and an experimental wavenumber spectrum image dataset. Each dataset was used independently with the proposed CNN model to develop a system that automatically classifies four classes (one pristine class and three delamination classes). The results on all three datasets show that the proposed CNN model predicts delamination depth with high accuracy. These results were further validated with the GoogLeNet CNN, and the two methods show excellent agreement, confirming that wavefield image and wavenumber spectrum datasets can serve as CNN input data for detecting delamination depth.
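
For orientation, a compact PyTorch sketch of a four-class CNN of the kind described (pristine plus three delamination depths); the layer sizes are illustrative, not the authors' architecture.

```python
# Sketch: a small 4-class CNN for wavefield / wavenumber-spectrum images.
import torch.nn as nn

class DelaminationCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global pooling -> 64-d vector
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```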

4.
J Genet Eng Biotechnol ; 22(1): 100359, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38494268

ABSTRACT

BACKGROUND: Examining the functions and characteristics of DNA sequences is a highly challenging task, and it is even more challenging for the human genome, which is made up of exons and introns. Human exons and introns contain millions to billions of nucleotides, which contributes to the complexity observed in these sequences. Given the complexity of genomics, signal processing techniques and deep learning tools for building strong predictive models can substantially advance human genome research. RESULTS: After representing human exons and introns as color images using Frequency Chaos Game Representation, two pre-trained convolutional neural network models (ResNet-50 and GoogleNet) and a proposed CNN model with 13 hidden layers were used to classify the obtained images. The ResNet-50 model reached an accuracy of 92% with an execution time of about 7 h, the GoogleNet model reached 91.5% in 2.5 h, and our proposed CNN model reached 91.6% in 2 h and 37 min. CONCLUSIONS: Our proposed CNN model is faster than the ResNet-50 model in execution time and slightly exceeds the GoogleNet model in accuracy.
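
A sketch of the Frequency Chaos Game Representation step that turns a DNA sequence into an image for the CNNs; the k-mer length and quadrant convention here are common defaults and may differ from the paper's.

```python
# Sketch: Frequency Chaos Game Representation (FCGR) of a DNA sequence as
# a k-mer frequency image, the representation fed to the CNNs above.
import numpy as np

def fcgr(seq, k=6):
    """Return a 2^k x 2^k matrix of k-mer frequencies (CGR quadrant order)."""
    corner = {"A": (0, 0), "C": (0, 1), "G": (1, 1), "T": (1, 0)}
    n = 2 ** k
    img = np.zeros((n, n))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if any(b not in corner for b in kmer):
            continue                  # skip ambiguous bases such as N
        x = y = 0
        for b in reversed(kmer):      # last base sets the coarsest quadrant
            cx, cy = corner[b]
            x = (x << 1) | cx
            y = (y << 1) | cy
        img[x, y] += 1
    return img / max(img.sum(), 1)    # normalize counts to frequencies

image = fcgr("ATGCGTACGTTAGC" * 100)  # toy exon-like sequence
```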

5.
Nanotechnology ; 35(26)2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38513283

ABSTRACT

PIN diodes, owing to their simple structure and variable resistance under high-frequency, high-power excitation, are often used in radar front ends as limiters that filter high-power microwaves (HPM) to prevent HPM power from reaching and damaging the internal circuitry. This paper carries out a theoretical derivation and study of the HPM effects of PIN diodes and then uses an optimized neural network algorithm, in place of traditional physical modeling, to calculate and predict two types of HPM limiting indicators of PIN diode limiters. We propose a neural network model for each of two prediction scenarios: for junction-temperature-versus-time curves under different HPM irradiation, the weighted mean squared error (MSE) between the test-set predictions and the simulated values is below 0.004; for predicting the limiter's power limiting threshold, insertion loss, and maximum isolation under different HPM irradiation, the MSEs between test-set predictions and simulated values are all below 0.03. By replacing traditional physical modeling with an optimized neural network algorithm, the proposed method significantly improves computational and simulation speed, reduces calculation cost, and provides a new approach for studying the HPM effects of PIN diode limiters.
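
A hedged sketch of the general idea, an MSE-trained neural-network regressor standing in for physical simulation; the input and output dimensions and the training data are placeholders, not the authors' parameterization.

```python
# Sketch: a surrogate NN regressor for limiter indicators; inputs stand in
# for HPM excitation parameters (e.g., incident power, pulse width) and
# outputs for threshold / insertion loss / maximum isolation (assumptions).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),                # three limiting indicators
)
loss_fn = nn.MSELoss()               # the MSE criterion quoted above
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

X = torch.rand(512, 2)               # placeholder simulated excitations
Y = torch.rand(512, 3)               # placeholder simulated responses
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), Y)
    loss.backward()
    opt.step()
```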

6.
Med Sci Law ; 64(1): 8-14, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37063071

ABSTRACT

Determining sex is a critical step in estimating biological profiles from skeletal remains. The clavicle is of particular interest for sex determination because it is environmentally durable, slow to decay, and difficult to destroy, making it useful in autopsies and in identification that can lead to verification. The goal of this study was to use deep learning to determine sex from clavicles in the Thai population and to obtain validation-set accuracies using a convolutional neural network (GoogLeNet). A total of 200 pairs of clavicles were obtained from 200 Thai individuals (100 males and 100 females) as a training group. Each clavicle was photographed, and images of uniform size were submitted to the training model for sex determination. Validation-set accuracy was calculated in MATLAB, and GoogLeNet gave the best-performing training model. The highest validation accuracy, 95%, was obtained for the right lateral view of the clavicle. The validation accuracy of each clavicle view demonstrates the forensic value of this approach to sex determination, and the deep learning workflow is simple for forensic anthropology professionals to use.


Subject(s)
Deep Learning , Sex Determination by Skeleton , Male , Female , Humans , Clavicle/diagnostic imaging , Clavicle/anatomy & histology , Thailand , Sex Determination by Skeleton/methods , Forensic Anthropology
7.
Technol Health Care ; 2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37955065

ABSTRACT

BACKGROUND: Lung cancer is the most common type of cancer, accounting for 12.8% of cancer cases worldwide. Because the initial symptoms are non-specific, it is difficult to diagnose in the early stages. OBJECTIVE: Image processing techniques developed using machine learning methods have played a crucial role in the development of decision support systems. This study aimed to classify benign and malignant lung lesions with a deep learning approach based on convolutional neural networks (CNNs). METHODS: The image dataset comprised 4459 computed tomography (CT) scans (2242 benign; 2217 malignant). The study was retrospective with a case-control design. A method based on the GoogLeNet architecture, one of the deep learning approaches, was used to maximize automated inference from the images and minimize manual intervention. RESULTS: The dataset was split into training (3567) and testing (892) sets. The model's highest accuracy in the training phase was 0.98. Among the accuracy, sensitivity, specificity, positive predictive value, and negative predictive value on the testing data, the highest metric was the positive predictive value, at 0.984. CONCLUSION: Deep learning methods are beneficial for the diagnosis and classification of lung cancer from computed tomography images.

8.
Front Plant Sci ; 14: 1230886, 2023.
Article in English | MEDLINE | ID: mdl-37621882

ABSTRACT

Pepper leaf disease identification based on convolutional neural networks (CNNs) is an active research area. However, most existing CNN-based pepper leaf disease detection models are suboptimal in accuracy and computing performance. In particular, the large computation and memory demands of leaf disease recognition in large fields make CNNs challenging to deploy on embedded portable devices. This paper therefore introduces an enhanced lightweight model based on the GoogLeNet architecture. The first step compresses the Inception structure to reduce model parameters, yielding a remarkable improvement in recognition speed. The network also incorporates a spatial pyramid pooling structure to seamlessly integrate local and global features. The improved model was trained on a real dataset of 9183 images covering six types of pepper disease. Cross-validation shows a model accuracy of 97.87%, which is 6% higher than that of GoogLeNet based on Inception-V1 and Inception-V3. The model requires only 10.3 MB of memory, a reduction of 52.31%-86.69% compared to GoogLeNet. We also compared the model with existing CNN-based models, including AlexNet, ResNet-50, and MobileNet-V2; the average inference time of the proposed model decreases by 61.49%, 41.78%, and 23.81%, respectively. These results show that the proposed enhanced model significantly improves accuracy and computing efficiency, with the potential to improve productivity in the pepper farming industry.
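
A sketch of a spatial pyramid pooling layer of the kind the model incorporates, pooling a feature map at several scales and concatenating the results; the pyramid levels (1, 2, 4) are an assumption.

```python
# Sketch: spatial pyramid pooling (SPP) merging local and global features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPP(nn.Module):
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x):                       # x: (N, C, H, W)
        pooled = [F.adaptive_max_pool2d(x, l).flatten(1) for l in self.levels]
        return torch.cat(pooled, dim=1)         # (N, C * sum(l*l))

feat = torch.randn(8, 256, 14, 14)
print(SPP()(feat).shape)                        # torch.Size([8, 5376])
```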

9.
Diagnostics (Basel) ; 13(16)2023 Aug 15.
Article in English | MEDLINE | ID: mdl-37627946

ABSTRACT

Deep learning plays a major role in identifying complicated structures, and it outperforms traditional algorithms in training and classification tasks. In this work, a local cloud-based solution is developed for classifying Alzheimer's disease (AD) using MRI scans as the input modality. AD is classified into four stages in a multi-class setting. To leverage the capabilities of the pre-trained GoogLeNet model, transfer learning is employed: the GoogLeNet model, pre-trained for image classification tasks, is fine-tuned for multi-class AD classification. Through this process, an accuracy of 98% is achieved. Based on the proposed GoogLeNet architecture, a local cloud web application for Alzheimer's prediction is developed, enabling doctors to remotely check patients for the presence of AD.
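
A minimal transfer learning sketch matching the description above, swapping GoogLeNet's final layer for a four-class head; the freezing policy and learning rate are assumptions.

```python
# Sketch: fine-tuning pre-trained GoogLeNet for four AD stages.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.googlenet(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False                    # freeze pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 4)  # new head: four AD stages

# Only the new head is optimized; unfreeze deeper layers to fine-tune more.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
```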

10.
Bioengineering (Basel) ; 10(3)2023 Mar 21.
Article in English | MEDLINE | ID: mdl-36978774

ABSTRACT

Lung and colon cancer are among humanity's most common and deadly cancers. In 2020, 4.19 million people were diagnosed with lung or colon cancer, and more than 2.7 million died worldwide. Some people develop both cancers simultaneously, with smoking causing lung cancer and an associated abnormal diet contributing to colon cancer. Many techniques exist for diagnosing lung and colon cancer, most notably biopsy and its laboratory analysis, but health centers and medical staff are scarce, especially in developing countries; moreover, manual diagnosis takes a long time and is subject to differing opinions among doctors. Artificial intelligence techniques can address these challenges. In this study, three strategies, each with two systems, were developed for early diagnosis of histological images from the LC25000 dataset. The histological images were enhanced, and the contrast of affected areas was increased. Because the GoogLeNet and VGG-19 models in all systems produce high-dimensional features, redundant and unnecessary features were removed with the PCA method to reduce dimensionality while retaining essential features. The first strategy diagnoses the histological images with an ANN using the crucial features of the GoogLeNet and VGG-19 models separately. The second strategy uses an ANN with the combined features of GoogLeNet and VGG-19: one system reduced dimensionality before combining the features, while the other combined the high-dimensional features first and then reduced them. The third strategy uses an ANN with a fusion of CNN features (GoogLeNet and VGG-19) and handcrafted features. With the fused VGG-19 and handcrafted features, the ANN reached a sensitivity of 99.85%, a precision of 100%, an accuracy of 99.64%, a specificity of 100%, and an AUC of 99.86%.
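
A sketch of the "combine then reduce" variant of the second strategy, using placeholder feature arrays; the PCA dimensionality and ANN size are assumptions.

```python
# Sketch: concatenate GoogLeNet and VGG-19 deep features, compress with
# PCA, then classify with an ANN; arrays stand in for LC25000 extractions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

g_feats = np.random.rand(500, 1024)     # placeholder GoogLeNet features
v_feats = np.random.rand(500, 4096)     # placeholder VGG-19 features
labels = np.random.randint(0, 5, 500)   # LC25000 has five tissue classes

fused = np.hstack([g_feats, v_feats])   # combine first ...
reduced = PCA(n_components=200).fit_transform(fused)   # ... then reduce

ann = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300)
ann.fit(reduced, labels)
```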

11.
Entropy (Basel) ; 25(3)2023 Feb 24.
Article in English | MEDLINE | ID: mdl-36981301

ABSTRACT

To address the outdated means of monitoring coal mine gas and coal dust explosions, with their delayed reporting and missed detections, a sound recognition method for coal mine gas and coal dust explosions based on GoogLeNet was proposed. Mining pickups were installed in key monitoring areas of coal mines to collect the sounds of the working equipment and the environment, and the collected sound was analyzed by continuous wavelet transform to obtain its scale coefficient map. This map was then imported into GoogLeNet to obtain the recognition model for coal mine gas and coal dust explosions. Test sounds were likewise converted to scale coefficient maps by continuous wavelet analysis and fed into the trained recognition model to obtain the sound signal class, which was verified by experiment. First, the scale coefficient maps extracted from the sound signals show that the wavelet coefficient maps of gas explosion and coal dust explosion sounds are highly similar in both subjective and objective indicators, while their difference from the remaining coal mine sounds is clear, helping to effectively distinguish explosion sounds from other sounds. Second, parameter experiments on GoogLeNet show that the model performs best with a dropout of 0.5 and an initial learning rate of 0.001; with these parameters, the training loss, testing loss, training recognition rate, and testing recognition rate of the model are all in line with expectations. Finally, the experimental recognition results show a recognition rate of 97.38%, a recall of 86.1%, and an accuracy of 100% for a 9:1 ratio of test data to training data. The overall recognition performance of the proposed GoogLeNet model is significantly better than that of VGG and AlexNet, effectively addressing the under-sampling of coal mine gas and coal dust explosion sounds and meeting the need for intelligent recognition of such explosions.
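
A sketch of the continuous-wavelet step that produces the scale coefficient map fed to GoogLeNet; the wavelet choice ('morl'), scale range, and sampling rate are assumptions, not taken from the paper.

```python
# Sketch: mine-sound clip -> continuous-wavelet scale coefficient map.
import numpy as np
import pywt

fs = 8000                                  # assumed sampling rate, Hz
sound = np.random.randn(2 * fs)            # placeholder 2-s microphone clip

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(sound, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)                 # scale coefficient map
# scalogram (scales x time) is rendered/resized to an RGB image for GoogLeNet
```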

12.
Diagnostics (Basel) ; 13(5)2023 Feb 23.
Article in English | MEDLINE | ID: mdl-36900008

ABSTRACT

Refined hybrid convolutional neural networks are proposed in this work for classifying brain tumor classes from MRI scans. A dataset of 2880 T1-weighted contrast-enhanced MRI brain scans is used, containing three main classes of brain tumors (gliomas, meningiomas, and pituitary tumors) as well as a no-tumor class. First, two pre-trained, fine-tuned convolutional neural networks, GoogleNet and AlexNet, were used for classification, achieving validation accuracies of 91.5% and 90.21%, respectively. Then, to improve on the fine-tuned AlexNet, two hybrid networks (AlexNet-SVM and AlexNet-KNN) were applied, achieving validation accuracies of 96.9% and 98.6%, respectively; the AlexNet-KNN hybrid thus classified the present data with high accuracy. After exporting these networks, a selected dataset was used for testing, yielding accuracies of 88%, 85%, 95%, and 97% for the fine-tuned GoogleNet, the fine-tuned AlexNet, AlexNet-SVM, and AlexNet-KNN, respectively. The proposed system would help automate the detection and classification of brain tumors from MRI scans and save time in clinical diagnosis.

13.
Front Physiol ; 14: 1125952, 2023.
Article in English | MEDLINE | ID: mdl-36793418

ABSTRACT

Cloud computing is generally integrated with wireless sensor networks to enable monitoring systems and improve quality of service. Patient data are monitored with biosensors regardless of the patient data type, which reduces the workload of hospitals and physicians. Wearable sensor devices and the Internet of Medical Things (IoMT) have changed health services, enabling faster monitoring, prediction, diagnosis, and treatment; nevertheless, difficulties remain that AI methods must resolve. The primary goal of this study is to introduce an AI-powered IoMT telemedicine infrastructure for E-healthcare. In this paper, data are first collected from the patient's body using sensing devices, transmitted through a gateway over Wi-Fi, and stored in an IoMT cloud repository. The stored information is then retrieved and preprocessed to refine the collected data. Features are extracted from the preprocessed data by means of high-dimensional linear discriminant analysis (LDA), and the best features are selected using a reconfigured multi-objective cuckoo search algorithm (CSA). Abnormal/normal data are predicted using a hybrid ResNet-18 and GoogleNet classifier (HRGC), and a decision is then made whether to alert hospitals or healthcare personnel. If the expected results are satisfactory, the participant's information is saved online for later use. Finally, a performance analysis is carried out to validate the efficiency of the proposed mechanism.
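
One plausible reading of the hybrid ResNet-18 + GoogleNet classifier (HRGC) is a probability-averaging ensemble, sketched below; the fusion rule and the binary head are assumptions, since the abstract does not specify them.

```python
# Sketch: averaging class probabilities from ResNet-18 and GoogLeNet.
import torch
import torch.nn as nn
import torchvision.models as models

r18 = models.resnet18(weights="IMAGENET1K_V1")
goo = models.googlenet(weights="IMAGENET1K_V1")
r18.fc = nn.Linear(r18.fc.in_features, 2)    # normal vs. abnormal (assumed)
goo.fc = nn.Linear(goo.fc.in_features, 2)
r18.eval()
goo.eval()

def hrgc_predict(x):
    with torch.no_grad():
        p = (torch.softmax(r18(x), 1) + torch.softmax(goo(x), 1)) / 2
    return p.argmax(1)                        # 0 = normal, 1 = abnormal
```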

14.
Multimed Tools Appl ; 82(15): 22497-22523, 2023.
Article in English | MEDLINE | ID: mdl-36415331

ABSTRACT

Due to the rapid spread of coronavirus disease 2019 (COVID-19), identifying the disease and predicting mortality and recovery rates are critical challenges worldwide. This research analyzes the dissemination of COVID-19 across the world and proposes an artificial intelligence (AI)-based deep learning algorithm to detect positive COVID-19 cases and predict mortality and recovery rates from real-world datasets. First, unwanted data such as prepositions, links, and hashtags are removed using preprocessing techniques. Term frequency-inverse document frequency (TF-IDF) and Bag of Words (BoW) techniques are then used to extract features from the preprocessed dataset, and the Mayfly Optimization (MO) algorithm selects the relevant features. Finally, two deep learning models, ResNet and GoogleNet, are hybridized to perform the prediction. Our system examines two kinds of publicly available text datasets to identify COVID-19 and to predict mortality and recovery rates. Across the four datasets used to analyze performance, the proposed method achieves 97.56% accuracy on the first dataset, which is 1.40% higher than Linear Regression (LR) and Multinomial Naive Bayes (MNB), 3.39% higher than Random Forest (RF) and Stochastic Gradient Boosting (SGB), and 5.32% higher than Decision Tree (DT) and Bagging techniques. Compared with existing machine learning models, the simulation results indicate that the proposed hybrid deep learning method is valuable for coronavirus identification and future mortality forecasting.
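
A sketch of the TF-IDF and Bag-of-Words feature step on a toy corpus; the Mayfly Optimization selection is represented here only by a stand-in boolean mask.

```python
# Sketch: text features for the pipeline above; corpus is illustrative.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = ["patient recovered after mild covid symptoms",
          "severe covid case reported mortality risk high"]

bow = CountVectorizer().fit_transform(corpus)       # Bag of Words
tfidf = TfidfVectorizer().fit_transform(corpus)     # TF-IDF

X = np.hstack([bow.toarray(), tfidf.toarray()])
# A metaheuristic such as Mayfly Optimization would search over masks like:
mask = np.random.rand(X.shape[1]) > 0.5             # stand-in selection
X_selected = X[:, mask]
```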

15.
Front Artif Intell ; 6: 1230383, 2023.
Article in English | MEDLINE | ID: mdl-38174109

ABSTRACT

Introduction: Developing efficient methods to infer relations among different faces with numerous expressions, or on the same face at different times (e.g., disease progression), is an open issue in imaging-related research. In this study, we present a novel method for facial feature extraction, characterization, and identification based on classical computer vision coupled with deep learning, specifically convolutional neural networks. Methods: We describe the hybrid face characterization system FRetrAIval (FRAI), a hybrid of the GoogleNet and AlexNet neural network (NN) models. Images analyzed by the FRAI network are preprocessed with computer vision techniques, such as an oriented-gradient-based algorithm that extracts only the face region from any kind of picture. The Aligned Face dataset (AFD) was used to train and test the FRAI solution for extracting image features, and the Labeled Faces in the Wild (LFW) holdout dataset was used for external validation. Results and discussion: Overall, compared with previous techniques, our methodology showed much better results with k-Nearest Neighbors (KNN), yielding the maximum precision, recall, F1, and F2 scores (92.00%, 92.66%, 92.33%, and 92.52%, respectively) on AFD, used for training and testing, and 95.00% for each metric on the LFW dataset. The FRAI model may potentially be used in healthcare and criminology, as well as many other applications where face features must be identified quickly, much as a fingerprint identifies a specific target.
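
A sketch of the oriented-gradient-based face cropping step; dlib's HOG frontal-face detector is used here as one plausible implementation, since the paper does not name a library.

```python
# Sketch: crop the face region with a HOG-based detector before CNN
# feature extraction; the input path is hypothetical.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()   # HOG + linear SVM detector
img = cv2.imread("photo.jpg")                 # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for rect in detector(gray, 1):                # upsample once for small faces
    face = img[max(rect.top(), 0):rect.bottom(),
               max(rect.left(), 0):rect.right()]
    # 'face' is then passed to the GoogleNet/AlexNet feature extractor
```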

16.
Front Neurorobot ; 17: 1324831, 2023.
Article in English | MEDLINE | ID: mdl-38351965

ABSTRACT

The field of multimodal robotic musical performing arts has garnered significant interest due to its innovative potential. Conventional robots face limitations in understanding emotion and artistic expression in musical performances. This paper therefore explores multimodal robots that integrate visual and auditory perception to enhance the quality and artistic expression of music performance. Our approach integrates GRU (Gated Recurrent Unit) and GoogLeNet models for sentiment analysis. The GRU model processes audio data and captures the temporal dynamics of musical elements, including long-term dependencies, to extract emotional information; the GoogLeNet model excels in image processing, extracting complex visual details and aesthetic features. This synergy deepens the understanding of musical and visual elements, aiming to produce more emotionally resonant and interactive robot performances. Experimental results demonstrate the effectiveness of the approach: robots equipped with our method deliver high-quality, artistic performances that effectively evoke emotional engagement from the audience. Multimodal robots that merge audio-visual perception in music performance enrich the art form, offer diverse human-machine interactions, and promote the integration of technology and art, opening new realms in the performing arts and human-robot interaction. Our findings provide valuable insights for the development of multimodal robots in the performing arts sector.

17.
Entropy (Basel) ; 24(6)2022 Jun 08.
Article in English | MEDLINE | ID: mdl-35741521

ABSTRACT

A brain tumour is one of the major causes of death in humans, and it is the tenth most common type of tumour, affecting people of all ages. However, if detected early, it is one of the most treatable types of tumour. Brain tumours are classified using biopsy, which is not usually performed before definitive brain surgery. An image classification technique for tumour diseases is therefore important for accelerating treatment and avoiding surgery and the errors of manual diagnosis by radiologists. Advances in technology and machine learning (ML) can assist radiologists in tumour diagnosis from magnetic resonance imaging (MRI) images without invasive procedures. This work introduces a new hybrid CNN-based architecture to classify three brain tumour types from MRI images using two methods. The first combines a pre-trained GoogLeNet CNN model for feature extraction with an SVM for pattern classification; the second integrates a finely tuned GoogLeNet with a softmax classifier. The proposed approach was evaluated on MRI brain images comprising 1426 glioma, 708 meningioma, 930 pituitary tumour, and 396 normal brain images. The finely tuned GoogLeNet model achieved an accuracy of 93.1%, while the synergy of GoogLeNet as a feature extractor with an SVM classifier improved recognition accuracy to 98.1%.

18.
Med Phys ; 49(5): 3067-3079, 2022 May.
Article in English | MEDLINE | ID: mdl-35157332

ABSTRACT

BACKGROUND: The automatic recognition of human body parts in three-dimensional medical images is important in many clinical applications. However, methods presented in prior studies have mainly classified each two-dimensional (2D) slice independently rather than recognizing a batch of consecutive slices as a specific body part. PURPOSE: In this study, we aim to develop a deep learning-based method designed to automatically divide computed tomography (CT) and magnetic resonance imaging (MRI) scans into five consecutive body parts: head, neck, chest, abdomen, and pelvis. METHODS: A deep learning framework was developed to recognize body parts in two stages. In the first preclassification stage, a convolutional neural network (CNN) using the GoogLeNet Inception v3 architecture and a long short-term memory (LSTM) network were combined to classify each 2D slice; the CNN extracted information from a single slice, whereas the LSTM employed rich contextual information among consecutive slices. In the second postprocessing stage, the input scan was further partitioned into consecutive body parts by identifying the optimal boundaries between them based on the slice classification results of the first stage. To evaluate the performance of the proposed method, 662 CT and 1434 MRI scans were used. RESULTS: Our method achieved a very good performance in 2D slice classification compared with state-of-the-art methods, with overall classification accuracies of 97.3% and 98.2% for CT and MRI scans, respectively. Moreover, our method further divided whole scans into consecutive body parts with mean boundary errors of 8.9 and 3.5 mm for CT and MRI data, respectively. CONCLUSIONS: The proposed method significantly improved the slice classification accuracy compared with state-of-the-art methods, and further accurately divided CT and MRI scans into consecutive body parts based on the results of slice classification. The developed method can be employed as an important step in various computer-aided diagnosis and medical image analysis schemes.
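A sketch of the first-stage idea, per-slice CNN features passed to an LSTM that exploits context across consecutive slices; the feature and hidden dimensions are illustrative, not the authors' exact configuration.

```python
# Sketch: CNN (GoogLeNet) per-slice features -> LSTM over the slice sequence.
import torch
import torch.nn as nn
import torchvision.models as models

cnn = models.googlenet(weights="IMAGENET1K_V1")
cnn.fc = nn.Identity()                       # 1024-d feature per 2D slice
cnn.eval()

lstm = nn.LSTM(input_size=1024, hidden_size=256, batch_first=True,
               bidirectional=True)
head = nn.Linear(512, 5)                     # head/neck/chest/abdomen/pelvis

def classify_scan(slices):                   # slices: (T, 3, 224, 224)
    with torch.no_grad():
        feats = cnn(slices)                  # (T, 1024)
    out, _ = lstm(feats.unsqueeze(0))        # (1, T, 512), context over slices
    return head(out).argmax(-1).squeeze(0)   # per-slice body-part labels
```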


Subject(s)
Deep Learning , Algorithms , Human Body , Humans , Imaging, Three-Dimensional , Neural Networks, Computer
19.
Med Biol Eng Comput ; 60(3): 643-662, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35028864

ABSTRACT

Cancer is among the common causes of death around the world, and skin cancer is one of its most lethal types. Early diagnosis and treatment are vital in skin cancer. In addition to traditional methods, approaches such as deep learning are frequently used to diagnose and classify the disease. Because expert experience plays a major role in diagnosing skin cancer, deep learning algorithms can support more reliable diagnosis of skin lesions. In this study, we propose InSiNet, a deep learning-based convolutional neural network, to detect benign and malignant lesions. The method's performance is tested on International Skin Imaging Collaboration HAM10000 images (ISIC 2018), ISIC 2019, and ISIC 2020 under the same conditions. Computation time and accuracy were compared between the proposed algorithm and other machine learning techniques (GoogleNet, DenseNet-201, ResNet152V2, EfficientNetB0, RBF support vector machine, logistic regression, and random forest). The developed InSiNet architecture outperforms the other methods, achieving accuracies of 94.59%, 91.89%, and 90.54% on the ISIC 2018, 2019, and 2020 datasets, respectively. Since deep learning algorithms eliminate the human factor during diagnosis, they can give reliable results alongside traditional methods.


Subject(s)
Skin Diseases , Skin Neoplasms , Algorithms , Dermoscopy/methods , Humans , Neural Networks, Computer , Skin , Skin Neoplasms/diagnosis