Results 1 - 7 of 7
1.
Diagnostics (Basel) ; 13(22)2023 Nov 20.
Article in English | MEDLINE | ID: mdl-37998620

ABSTRACT

According to the WHO (World Health Organization), lung cancer is the leading cause of cancer deaths globally. In 2020, more than 2.2 million people were diagnosed with lung cancer worldwide, accounting for 11.4% of all newly diagnosed cancers, and lung cancer was the leading cause of cancer-related mortality that year, with an estimated 1.8 million deaths. Lung cancer rates are not uniform across geographic areas, demographic subgroups, or age groups. The chance of an effective treatment outcome and the likelihood of patient survival can be greatly improved by early identification of lung cancer. Detecting lung cancer in medical images such as CT and MRI scans is an area where deep learning (DL) algorithms have shown considerable potential. This study uses a Hybridized Faster R-CNN (HFRCNN) to identify lung cancer at an early stage. Faster R-CNN has been widely applied to identifying critical structures in medical imagery such as MRI and CT scans, and many recent studies have examined techniques for detecting lung nodules (possible indicators of lung cancer) in scanned images, which may aid early identification. HFRCNN is a two-stage, region-based object detector: it first generates a collection of proposed regions, which are then classified and refined with the aid of a convolutional neural network (CNN). A distinct dataset is used to train the model, producing valuable outcomes. The proposed model achieved a detection accuracy of more than 97%, making it considerably more accurate than several previously reported methods.
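A minimal sketch of the two-stage detect-then-classify pipeline described above, using torchvision's stock Faster R-CNN rather than the authors' HFRCNN; the single "nodule" class and the dummy CT slice are assumptions for illustration only.

```python
# Sketch only: stock torchvision Faster R-CNN, not the authors' HFRCNN.
# Stage 1 proposes regions; stage 2 classifies and refines them with a CNN head.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 2  # assumed labeling: background + "nodule"
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the box-classification head so it predicts the assumed classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.eval()
with torch.no_grad():
    dummy_ct_slice = [torch.rand(3, 512, 512)]   # stand-in for a preprocessed CT slice
    detections = model(dummy_ct_slice)           # list of dicts: boxes, labels, scores
print(detections[0]["boxes"].shape)
```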

2.
Diagnostics (Basel) ; 13(16)2023 Aug 15.
Article in English | MEDLINE | ID: mdl-37627946

ABSTRACT

Deep learning plays a major role in identifying complicated structures and outperforms traditional algorithms in training and classification tasks. In this work, a local cloud-based solution is developed for classifying Alzheimer's disease (AD) using MRI scans as the input modality. The problem is formulated as multi-class classification of AD into four stages. To leverage the capabilities of the pre-trained GoogLeNet model, transfer learning is employed: the GoogLeNet model, pre-trained for image classification tasks, is fine-tuned for the specific purpose of multi-class AD classification. Through this process, an accuracy of 98% is achieved. As a result, a local cloud web application for Alzheimer's prediction is developed using the proposed GoogLeNet architecture. This application enables doctors to remotely check patients for the presence of AD.
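A minimal sketch of the transfer-learning setup described above, assuming four AD stages as the output classes; data loading and the fine-tuning loop are omitted.

```python
# Sketch only: ImageNet-pretrained GoogLeNet adapted for 4-class AD staging (assumed).
import torch
import torch.nn as nn
from torchvision import models

model = models.googlenet(weights="DEFAULT")      # pre-trained GoogLeNet backbone
for p in model.parameters():                     # freeze the pretrained weights
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 4)    # new head: 4 AD stages (assumed)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# Fine-tune on MRI slices resized to 224x224 (standard GoogLeNet input size).
```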

3.
Bioengineering (Basel) ; 10(1)2023 Jan 09.
Article in English | MEDLINE | ID: mdl-36671659

ABSTRACT

Recently, artificial intelligence (AI) has profoundly transformed medical image processing, and image segmentation is a task that has driven much of this improvement. This progress has accelerated the translation of AI approaches from the research lab to real medical applications, particularly computer-aided diagnosis (CAD) and image-guided surgery. Estimates of mitotic nuclei in breast cancer cases carry prognostic value for assessing cancer aggressiveness and for grading. Automated analysis of mitotic nuclei is difficult because of their high similarity to nonmitotic nuclei and their heteromorphic appearance. This study designs an artificial hummingbird algorithm with transfer-learning-based mitotic nuclei classification (AHBATL-MNC) for histopathologic breast cancer images. The goal of the AHBATL-MNC technique is to identify mitotic and nonmitotic nuclei in histopathology images (HIs). For the HI segmentation process, the PSPNet model is utilized to identify candidate mitotic patches. Next, the residual network (ResNet) model is employed as the feature extractor, and an extreme gradient boosting (XGBoost) model is applied as the classifier. To enhance classification performance, the parameters of the XGBoost model are tuned using the AHBA approach. The AHBATL-MNC system is evaluated on medical imaging datasets and the outcomes are examined under distinct measures; the results demonstrate improved performance of the AHBATL-MNC method compared to other current approaches.
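A minimal sketch of the "ResNet feature extractor feeding an XGBoost classifier" stage described above, assuming candidate patches have already been produced by the segmentation step; the PSPNet segmentation and AHBA parameter tuning are omitted, and all inputs are stand-in data.

```python
# Sketch only: ResNet-50 as a frozen feature extractor, XGBoost as the patch classifier.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from xgboost import XGBClassifier

resnet = models.resnet50(weights="DEFAULT")
resnet.fc = nn.Identity()                 # drop the head -> 2048-d feature vectors
resnet.eval()

def extract_features(patches):            # patches: (N, 3, 224, 224) float tensor
    with torch.no_grad():
        return resnet(patches).numpy()

X = extract_features(torch.rand(16, 3, 224, 224))   # stand-in histopathology patches
y = np.random.randint(0, 2, 16)                      # dummy labels: 0 = nonmitotic, 1 = mitotic
clf = XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)
```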

4.
Front Psychol ; 13: 963765, 2022.
Article in English | MEDLINE | ID: mdl-36389517

ABSTRACT

Background and aims: Excessive pain during medical procedures is a worldwide medical problem. Most scald burns occur in children under 6, who are often undermedicated. Adjunctive virtual reality (VR) distraction has been shown to reduce pain in children aged 6-17, but little is known about VR analgesia in young children. This study tests whether desktop VR (VR Animal Rescue World) can reduce the just noticeable pressure pain of children aged 2-10. Methods: A within-subject repeated measures design was used. With treatment order randomized, each healthy volunteer pediatric participant underwent brief cutaneous pressure stimuli under three conditions: (1) no distraction, (2) a verbal color naming task (no VR), and (3) a large TV-based desktop VR distraction. A hand-held Wagner pressure pain stimulation device was used to generate just noticeable pain sensations. Participants indicated when a steadily increasing non-painful pressure stimulus first turned into a painful pressure sensation (just noticeable pain). Results: A total of 40 healthy children participated (43% aged 2-5 years and 57% aged 6-10 years). Compared to the no distraction condition, the 40 children showed significant VR analgesia (i.e., a significant reduction in pain sensitivity during the VR Animal Rescue World condition), t(39) = 9.83, p < 0.001, SD = 6.24. VR was also significantly more effective at reducing pain sensitivity than the verbal color naming task, t(39) = 5.42, p < 0.001, SD = 5.94. The subset of children aged 2-5 showed significant reductions in pain during VR. Children under 6 showed greater sensitivity to pain during no distraction than children aged 6-10. Conclusion: During no distraction, children under 6 years old were significantly more sensitive to pain than children aged 6-10. Virtual reality (VR) significantly reduced the "just noticeable" pressure pain sensitivity of children in both age groups.
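A minimal sketch of the within-subject comparison reported above, using a paired t-test as in the study's analysis; the threshold values below are simulated placeholders, not study data.

```python
# Sketch only: paired (repeated-measures) t-test on hypothetical pressure-pain thresholds.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
no_distraction = rng.normal(2.0, 0.5, 40)                     # dummy thresholds, N = 40 children
vr_distraction = no_distraction + rng.normal(0.6, 0.3, 40)    # assume higher thresholds under VR

t_stat, p_value = stats.ttest_rel(vr_distraction, no_distraction)
print(f"t({no_distraction.size - 1}) = {t_stat:.2f}, p = {p_value:.4g}")
```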

5.
Sci Rep ; 12(1): 15389, 2022 09 13.
Article in English | MEDLINE | ID: mdl-36100621

ABSTRACT

Accurate classification of brain tumor subtypes is important for prognosis and treatment. Researchers are developing tools based on static and dynamic feature extraction and applying machine learning and deep learning. However, static features require further analysis to determine the relevance, strength, and types of association. Recently, Bayesian inference approaches have gained attention for deeper analysis of static (hand-crafted) features to uncover hidden dynamics and relationships among features. We computed gray-level co-occurrence matrix (GLCM) features from MRIs of meningioma and pituitary brain tumors and ranked them using entropy-based methods. The highly ranked Energy feature was chosen as the target variable for further empirical analysis of dynamic profiling and optimization, in order to uncover the nonlinear intrinsic dynamics of the GLCM features extracted from brain MRIs. The proposed method provides a detailed analysis of the computed GLCM features, offering a better understanding of their hidden dynamics to support the diagnosis and prognosis of brain tumor types.
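A minimal sketch of GLCM texture-feature extraction with scikit-image, covering only the first step described above; the entropy-based ranking and the Bayesian dynamic analysis are omitted, and the input slice is random stand-in data.

```python
# Sketch only: compute GLCM properties (including "energy") from a stand-in MRI slice.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

mri_slice = np.random.randint(0, 256, (128, 128), dtype=np.uint8)   # stand-in MRI slice
glcm = graycomatrix(mri_slice, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("energy", "contrast", "homogeneity", "correlation")}
print(features)   # "energy" is the feature the study ranks highest
```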


Subject(s)
Brain Neoplasms , Meningeal Neoplasms , Meningioma , Algorithms , Bayes Theorem , Brain/diagnostic imaging , Brain/pathology , Brain Neoplasms/pathology , Humans , Magnetic Resonance Imaging/methods , Meningeal Neoplasms/pathology , Meningioma/diagnostic imaging , Meningioma/pathology
6.
Comput Intell Neurosci ; 2022: 1698137, 2022.
Article in English | MEDLINE | ID: mdl-35607459

ABSTRACT

Recently, bioinformatics and computational biology applications such as gene expression analysis, cellular restoration, medical image processing, protein structure examination, and medical data classification have utilized fuzzy systems to provide effective solutions and decisions. Recent developments combining fuzzy systems with artificial intelligence techniques enable the design of effective microarray gene expression classification models. In this context, this study introduces a novel feature subset selection with optimal adaptive neuro-fuzzy inference system (FSS-OANFIS) for gene expression classification. The major aim of the FSS-OANFIS model is to detect and classify gene expression data. To accomplish this, the FSS-OANFIS model designs an improved grey wolf optimizer-based feature selection (IGWO-FS) model to derive an optimal subset of features. The OANFIS model is then employed for gene classification, and the parameters of the ANFIS model are tuned using the coyote optimization algorithm (COA). The application of the IGWO-FS and COA techniques helps accomplish enhanced microarray gene expression classification outcomes. The experimental validation of the FSS-OANFIS model was performed using the Leukemia, Prostate, DLBCL Stanford, and Colon Cancer datasets; the proposed FSS-OANFIS model achieved a maximum classification accuracy of 89.47%.
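A minimal sketch of wrapper-style feature subset selection in the spirit of the IGWO-FS step. This is a simplified random-search stand-in, not the authors' grey wolf optimizer; the ANFIS/COA classifier is replaced here by k-NN for brevity, and the dataset is synthetic.

```python
# Sketch only: score candidate feature masks by cross-validated accuracy and keep the best.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=72, n_features=100, n_informative=10, random_state=0)
rng = np.random.default_rng(0)

best_mask, best_score = None, -np.inf
for _ in range(50):                           # evaluate candidate binary feature masks
    mask = rng.random(X.shape[1]) < 0.2       # keep roughly 20% of features
    if not mask.any():
        continue
    score = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    if score > best_score:
        best_mask, best_score = mask, score
print(f"selected {best_mask.sum()} features, CV accuracy {best_score:.3f}")
```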


Subject(s)
Artificial Intelligence , Fuzzy Logic , Animals , Male , Algorithms , Computational Biology , Gene Expression
7.
J Healthc Eng ; 2022: 3987494, 2022.
Article in English | MEDLINE | ID: mdl-35368960

ABSTRACT

Brain-computer interface (BCI) technology is commonly used to enable communication for people with movement disabilities. It allows them to communicate with, and control, assistive robots using electroencephalogram (EEG) or other brain signals. Although several approaches for learning EEG signal features are available in the literature, deep learning (DL) models need further exploration to generate novel representations of EEG features and achieve better outcomes for motor imagery (MI) classification. With this motivation, this study designs an arithmetic optimization with RetinaNet-based deep learning model for MI classification (AORNDL-MIC) technique for BCIs. The proposed AORNDL-MIC technique first applies a Multiscale Principal Component Analysis (MSPCA) approach for EEG signal denoising, and a Continuous Wavelet Transform (CWT) is used to transform the 1D EEG signal into a 2D time-frequency amplitude representation, which allows the DL model to be used via a transfer learning approach. In addition, the DL-based RetinaNet is applied to extract feature vectors from the EEG signal, which are then classified with the help of an ID3 classifier. To optimize the classification efficiency of the AORNDL-MIC technique, the arithmetic optimization algorithm (AOA) is employed for hyperparameter tuning of the RetinaNet. The experimental analysis of the AORNDL-MIC algorithm on benchmark data sets demonstrated its promising performance compared with recent state-of-the-art methodologies.
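A minimal sketch of the 1D EEG to 2D time-frequency step described above, using PyWavelets' continuous wavelet transform; the MSPCA denoising, RetinaNet feature extraction, ID3 classifier, and AOA tuning are omitted, and the signal, sampling rate, and scales are assumptions for illustration.

```python
# Sketch only: turn a 1D EEG trace into a 2D scalogram (time-frequency amplitude image).
import numpy as np
import pywt

fs = 250                                           # assumed EEG sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # synthetic 10 Hz rhythm

scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(eeg, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)                         # 2D amplitude image for the CNN stage
print(scalogram.shape)                             # (len(scales), len(eeg))
```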


Subject(s)
Brain-Computer Interfaces , Algorithms , Electroencephalography/methods , Humans , Imagination , Signal Processing, Computer-Assisted