Results 1 - 10 of 10

1.
PLoS One ; 19(7): e0301441, 2024.
Article in English | MEDLINE | ID: mdl-38995975

ABSTRACT

Multimodal medical image fusion is a perennially prominent research topic: it can produce informative medical images and help radiologists diagnose and treat disease more effectively. However, recent state-of-the-art methods extract and fuse features under subjectively defined constraints, which easily distorts the information exclusive to each source image. To overcome these problems and obtain a better fusion method, this study proposes a 2D data fusion method that uses salient structure extraction (SSE) and a swift algorithm via normalized convolution to fuse different types of medical images. First, SSE attenuates the effect of noise and irrelevant data in the source images by preserving their significant structures; it ensures that pixels with a higher gradient magnitude influence the choices of their neighbors and provides a way to restore sharply altered pixels toward their neighbors. In addition, the swift algorithm suppresses excessive pixel values and adjusts the contrast of the source images. Furthermore, the method performs edge-preserving filtering efficiently using normalized convolution. Finally, the fused image is obtained through a linear combination of the processed image and the input images based on the properties of the filters. A quantitative function composed of a structural loss and a region mutual-information loss is designed to impose constraints that preserve information at both the feature and structural levels. Extensive experiments on CT-MRI images demonstrate that the proposed algorithm outperforms several state-of-the-art methods in terms of detail, edge contours, and overall contrast.
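
The edge-preserving filtering step above relies on normalized convolution. As a hedged illustration of that idea (not the authors' code), the sketch below averages each pixel over its neighborhood weighted by a gradient-magnitude "certainty" map, so salient structures dominate their neighbors; the kernel size, the certainty definition, and the final fusion weight are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def normalized_convolution_filter(img, size=5, eps=1e-8):
    """Edge-preserving smoothing via normalized convolution (illustrative sketch)."""
    img = img.astype(np.float64)
    # Gradient magnitude acts as the "certainty" map: salient (high-gradient)
    # pixels carry more weight when their neighbours are re-estimated.
    # This particular certainty definition is an assumption, not the paper's.
    gx, gy = sobel(img, axis=0), sobel(img, axis=1)
    certainty = np.hypot(gx, gy)
    certainty /= (certainty.max() + eps)
    num = uniform_filter(img * certainty, size=size)
    den = uniform_filter(certainty, size=size)
    return num / (den + eps)

def fuse(ct, mri, alpha=0.5):
    # Linear combination of the filtered layers; the weight alpha is assumed.
    return alpha * normalized_convolution_filter(ct) + (1 - alpha) * normalized_convolution_filter(mri)
```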


Subjects
Algorithms , Neoplasms , Humans , Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Tomography, X-Ray Computed/methods , Multimodal Imaging/methods , Signal Processing, Computer-Assisted , Carcinoma/diagnostic imaging
2.
Front Artif Intell ; 7: 1269366, 2024.
Article in English | MEDLINE | ID: mdl-38510470

ABSTRACT

The emergence of social media has given rise to a variety of networking and communication opportunities, as well as the well-known problem of cyberbullying, which continues to grow. Researchers have long addressed cyberbullying by applying machine learning and deep learning techniques. However, although these algorithms perform well on artificial datasets, they do not deliver similar results on real-time datasets with high levels of noise and imbalance. Consequently, finding generic algorithms that can work on dynamic data across several platforms is critical. This study used a hybrid random-forest-based CNN model for text classification, combining the strengths of both approaches. Real-time datasets from Twitter and Instagram were collected and annotated to demonstrate the effectiveness of the proposed technique. The performance of various ML and DL algorithms was compared, and the RF-based CNN model outperformed them in accuracy and execution speed, which is particularly important for detecting bullying episodes in time to assist victims. The model achieved an accuracy of 96% and delivered results 3.4 seconds faster than standard CNN models.
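
As a rough illustration of the hybrid idea (not the published model), the sketch below uses a small 1-D CNN to turn a token-ID sequence into a fixed-length feature vector and a random forest to classify those features; the vocabulary size, embedding width, and filter counts are assumptions.

```python
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class CNNFeatureExtractor(nn.Module):
    """Small 1-D CNN mapping a padded token-ID sequence to a fixed-length vector."""
    def __init__(self, vocab_size=20000, embed_dim=64, n_filters=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size=5, padding=2)
        self.pool = nn.AdaptiveMaxPool1d(1)

    def forward(self, token_ids):                  # token_ids: (batch, seq_len) LongTensor
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))
        return self.pool(x).squeeze(-1)            # (batch, n_filters)

def train_hybrid(token_ids, labels):
    # Hypothetical wiring: CNN features (untrained here, for brevity) feed a random forest.
    extractor = CNNFeatureExtractor().eval()
    with torch.no_grad():
        feats = extractor(token_ids).numpy()
    rf = RandomForestClassifier(n_estimators=200).fit(feats, labels)
    return extractor, rf
```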

3.
Heliyon ; 10(2): e24224, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38293533

ABSTRACT

Agricultural Internet of Things (AIoT) deployments require high-efficiency Quality of Service (QoS) and security models that can keep network performance stable even under large-scale communication requests. Existing blockchain-based security models are either highly complex or incur large delays and higher energy consumption in larger networks. Moreover, their efficiency depends directly on consensus efficiency and miner efficiency, which limits their scalability in real-time scenarios. To overcome these limitations, this study proposes an efficient Q-learning bioinspired model that enhances the QoS of AIoT deployments via customized shards. The model initially collects temporal information about the deployed AIoT nodes and continuously updates individual recurring trust metrics. These trust metrics feed a Q-learning process that identifies the miners allowed to participate in block addition. Blocks are added via a novel Proof-of-Performance (PoP) consensus model, which uses a dynamic consensus function based on the temporal performance of miner nodes. PoP consensus is facilitated via customized shards, wherein each shard is deployed according to its deployment context, which determines the shard length, the hashing model, and the encryption technique used by the shard. This is facilitated by a Mayfly Optimization (MO) model that uses PoP scores to select shard configurations. These shards are further segregated into smaller shards via a Bacterial Foraging Optimization (BFO) model, which helps identify the optimal shard length for the underlying deployment context. Owing to these optimizations, the model improves mining speed by 4.5%, reduces the energy needed for mining by 10.4%, improves throughput during AIoT communications by 8.3%, and improves packet-delivery consistency by 2.5% compared with existing blockchain-based AIoT deployment models under similar scenarios. This performance remained consistent even under large-scale attacks.
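
A schematic, hedged sketch of how Q-learning might select miners from trust metrics, as described at a high level above, is given below; the trust discretization, the Proof-of-Performance-style reward, and the hyperparameters are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

class MinerSelector:
    """Epsilon-greedy Q-learning over discretized trust levels (illustrative sketch)."""
    def __init__(self, n_trust_levels=10, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = np.zeros((n_trust_levels, 2))   # actions: 0 = skip, 1 = select as miner
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.n = n_trust_levels

    def state(self, trust):
        # Map a trust score in [0, 1] to a discrete level (assumed discretization).
        return min(int(trust * self.n), self.n - 1)

    def choose(self, trust):
        s = self.state(trust)
        if np.random.rand() < self.eps:           # explore
            return int(np.random.randint(2))
        return int(np.argmax(self.q[s]))          # exploit

    def update(self, trust, action, pop_reward, next_trust):
        # pop_reward stands in for a Proof-of-Performance-style score (assumption).
        s, s2 = self.state(trust), self.state(next_trust)
        td_target = pop_reward + self.gamma * self.q[s2].max()
        self.q[s, action] += self.alpha * (td_target - self.q[s, action])
```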

4.
PeerJ Comput Sci ; 9: e1670, 2023.
Article in English | MEDLINE | ID: mdl-38077588

ABSTRACT

Deep learning, a subset of artificial intelligence, provides an easy way to perform analytical and physical tasks automatically, with little need for human intervention. Deep hybrid learning is a blended approach that combines machine learning with deep learning. A hybrid deep learning (HDL) model using a convolutional neural network (CNN), a residual network (ResNet), and long short-term memory (LSTM) is proposed for better course selection by candidates enrolled in an online learning platform. In this work, a hybrid framework that facilitates the analysis and design of a recommendation system for course selection is developed. A student's schedule for the next course should consist of classes in which the student has shown interest, and for universities to schedule classes optimally, they need to know which courses each student wants to take before each course begins. The proposed recommendation system selects the most appropriate courses and encourages students to base their selection on informed decision-making, enabling learners to make sound choices about the courses they study.
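
The abstract does not spell out how the CNN, ResNet, and LSTM components are wired together, so the following is only a hedged sketch of one plausible arrangement that scores candidate courses from a student's interaction-history sequence; all layer sizes and the input encoding are assumptions.

```python
import torch
import torch.nn as nn

class CourseRecommender(nn.Module):
    """One plausible CNN + residual + LSTM arrangement (assumed, not the paper's)."""
    def __init__(self, n_features=32, n_courses=100, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, hidden, kernel_size=3, padding=1)
        self.res = nn.Conv1d(hidden, hidden, kernel_size=3, padding=1)  # simple residual branch
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_courses)

    def forward(self, x):                 # x: (batch, seq_len, n_features) interaction history
        h = torch.relu(self.conv(x.transpose(1, 2)))
        h = torch.relu(h + self.res(h))   # ResNet-style skip connection
        out, _ = self.lstm(h.transpose(1, 2))
        return self.head(out[:, -1])      # scores over candidate courses
```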

5.
Sci Rep ; 13(1): 20671, 2023 Nov 24.
Article in English | MEDLINE | ID: mdl-38001139

ABSTRACT

The Internet of Things (IoT) is evolving across sectors such as industry, healthcare, smart homes, and smart societies. Billions of IoT devices are used in e-health systems, known as the Internet of Medical Things (IoMT), to improve communication processes in the network. Scientists and researchers have proposed various methods and schemes to enable automatic monitoring, communication, diagnosis, and even operating on patients at a distance. Several researchers have proposed security schemes and approaches to establish the legitimacy of the intelligent systems that maintain records in the network. However, existing schemes have performance issues, including delay, storage efficiency, and cost. This paper proposes a trust scheme that combines mean and subjective-logic aggregation methods to compute the trust of each communicating device in the network. Additionally, the network maintains a blockchain of legitimate devices to oversee the trusted devices in the network. The proposed mechanism is verified and analyzed against existing schemes using various security metrics, such as reliability, trust, delay, belief, and disbelief.
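
A minimal sketch of the two ingredients named above follows: a subjective-logic opinion (belief, disbelief, uncertainty) built from counts of positive and negative interactions, and a simple mean aggregation of the resulting expected trust across observers. The prior weight and base rate follow common subjective-logic conventions and are assumptions here, not the paper's exact parameters.

```python
def opinion(positive, negative, prior_weight=2.0, base_rate=0.5):
    """Subjective-logic opinion from interaction counts (conventional parameterization)."""
    total = positive + negative + prior_weight
    belief = positive / total
    disbelief = negative / total
    uncertainty = prior_weight / total
    expected_trust = belief + base_rate * uncertainty
    return belief, disbelief, uncertainty, expected_trust

def aggregate_trust(observations):
    """Mean aggregation of expected trust; observations = [(positive, negative), ...]."""
    expected = [opinion(p, n)[3] for p, n in observations]
    return sum(expected) / len(expected)

# Example: three nodes report their interaction history with one device.
print(aggregate_trust([(8, 1), (5, 0), (10, 3)]))
```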

6.
Sci Rep ; 13(1): 20712, 2023 Nov 24.
Article in English | MEDLINE | ID: mdl-38001149

ABSTRACT

Retinal vessel segmentation is a critical step in the automated analysis of fundus images to screen for and diagnose diabetic retinopathy, a widespread complication of diabetes that can cause sudden vision loss. Automated retinal vessel segmentation can detect these changes more accurately and quickly than manual evaluation by an ophthalmologist. The proposed approach aims to precisely segment blood vessels in retinal images while reducing the complexity and computational cost of the segmentation procedure, which can improve the accuracy and reliability of retinal image analysis and assist in diagnosing various eye diseases. Attention U-Net is a key architecture for retinal image segmentation in diabetic retinopathy and has produced promising improvements in segmentation accuracy, especially when training data and ground truth are limited. The proposed approach combines a U-Net with an attention mechanism, which focuses on the relevant regions of the input image, and the unfolded deep kernel estimation (UDKE) method to enhance the performance of semantic segmentation models. Extensive experiments were carried out on the STARE, DRIVE, and CHASE_DB datasets, and the proposed method achieved good performance compared with existing methods.
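
A hedged sketch of the attention-gate idea used in Attention U-Net follows: the skip-connection features are re-weighted by a mask computed from those features and a coarser gating signal, so the decoder focuses on vessel-relevant regions. Channel sizes are illustrative, and the UDKE step is not shown.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate in the Attention U-Net style (channel sizes assumed)."""
    def __init__(self, x_channels, g_channels, inter_channels):
        super().__init__()
        self.wx = nn.Conv2d(x_channels, inter_channels, kernel_size=1)
        self.wg = nn.Conv2d(g_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, x, g):
        # x: skip-connection features; g: gating signal, assumed already upsampled
        # to x's spatial size.
        attention = torch.relu(self.wx(x) + self.wg(g))
        mask = torch.sigmoid(self.psi(attention))   # per-pixel attention coefficients
        return x * mask                             # suppress irrelevant skip features

# Usage sketch: gated = AttentionGate(64, 128, 32)(skip_features, gating_signal)
```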


Subjects
Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnostic imaging , Algorithms , Reproducibility of Results , Retinal Vessels/diagnostic imaging , Image Processing, Computer-Assisted/methods , Fundus Oculi
7.
PLoS One ; 18(9): e0291911, 2023.
Article in English | MEDLINE | ID: mdl-37756296

ABSTRACT

Low-dose computed tomography (LDCT) has attracted significant attention in medical imaging because of the risks that the X-ray radiation of normal-dose computed tomography (NDCT) poses to patients. However, reducing the radiation dose in CT imaging produces noise and artifacts that degrade image quality and, in turn, hinder diagnostic performance. To address these problems, this article presents a low-dose CT image denoising algorithm based on a constructive non-local means filter with morphological residual processing. The recently proposed non-local means filter is modified to construct the denoising algorithm: it exploits the discrete structure of neighborhood filtering to enable fast vectorized and parallel implementation on contemporary shared-memory computing platforms while reducing computational complexity. The proposed method therefore computes faster than a non-vectorized, serial implementation and scales linearly with image size. In addition, morphological residual processing is employed for edge-preserving processing; it combines linear low-pass filtering with a nonlinear technique that extracts meaningful regions where edges can be preserved while removing residual artifacts from the images. Experimental results demonstrate that the proposed algorithm preserves more textural and structural features while reducing noise, enhances edges, and significantly improves image quality. The proposed method obtains better qualitative and quantitative results than comparable algorithms on publicly accessible datasets.
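
A compact, hedged sketch of the vectorized non-local means idea follows: for each candidate offset in the search window, patch distances are computed for every pixel at once with a box filter over squared differences, so the filter runs as a handful of whole-image operations instead of per-pixel loops. The search radius, patch radius, and smoothing parameter h are illustrative, and the morphological residual step is not included.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nlm_vectorized(img, search_radius=5, patch_radius=2, h=0.1):
    """Vectorized non-local means: whole-image operations per search offset (sketch)."""
    img = img.astype(np.float64)
    acc = np.zeros_like(img)
    weights = np.zeros_like(img)
    patch_size = 2 * patch_radius + 1
    for dx in range(-search_radius, search_radius + 1):
        for dy in range(-search_radius, search_radius + 1):
            # np.roll wraps at borders; a full implementation would pad instead.
            shifted = np.roll(np.roll(img, dx, axis=0), dy, axis=1)
            # Mean squared patch difference for every pixel simultaneously.
            dist = uniform_filter((img - shifted) ** 2, size=patch_size)
            w = np.exp(-dist / (h * h))
            acc += w * shifted
            weights += w
    return acc / weights
```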


Subjects
Embryo Implantation , Tomography, X-Ray Computed , Humans , Algorithms , Artifacts , Image Processing, Computer-Assisted
8.
Sensors (Basel) ; 23(10)2023 May 12.
Article in English | MEDLINE | ID: mdl-37430605

ABSTRACT

The increasing number of patients with obstructive sleep apnea and the lack of awareness of the condition are points of concern for the healthcare industry. Health experts recommend polysomnography to detect obstructive sleep apnea: the patient is connected to devices that track patterns and activity during sleep. Because polysomnography is a complex and expensive process, it cannot be adopted by the majority of patients, so an alternative is required. Researchers have devised various machine learning algorithms that use single-lead signals, such as the electrocardiogram or oxygen saturation, to detect obstructive sleep apnea, but these methods suffer from low accuracy, limited reliability, and high computation time. The authors therefore introduce two paradigms for the detection of obstructive sleep apnea. The first is MobileNet V1, and the second is the convergence of MobileNet V1 with one of two recurrent neural networks, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU). They evaluate the efficacy of the proposed methods on authentic medical cases from the PhysioNet Apnea-Electrocardiogram database. MobileNet V1 achieves an accuracy of 89.5%, the convergence of MobileNet V1 with LSTM achieves 90%, and the convergence of MobileNet V1 with GRU achieves 90.29%; these results demonstrate the superiority of the proposed approach over state-of-the-art methods. To showcase the devised methods in a real-life scenario, the authors design a wearable device that monitors ECG signals and classifies them as apnea or normal. The device employs a security mechanism to transmit the ECG signals securely over the cloud with the patient's consent.
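
For illustration only, the sketch below arranges a MobileNet-V1-style 1-D depthwise-separable convolution stack in front of a GRU over single-lead ECG segments, to show how the described convergence could look; filter counts, segment length, and the classifier head are assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class DepthwiseSeparable1d(nn.Module):
    """MobileNet-V1-style depthwise-separable convolution, adapted to 1-D ECG."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.depthwise = nn.Conv1d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels)
        self.pointwise = nn.Conv1d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return torch.relu(self.pointwise(torch.relu(self.depthwise(x))))

class ApneaNet(nn.Module):
    """Assumed arrangement: separable-conv feature stack feeding a GRU classifier."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1),
            DepthwiseSeparable1d(32, 64),
            DepthwiseSeparable1d(64, 128),
            nn.MaxPool1d(4),
        )
        self.gru = nn.GRU(128, 64, batch_first=True)
        self.head = nn.Linear(64, 2)          # apnea vs. normal

    def forward(self, ecg):                   # ecg: (batch, 1, samples)
        h = self.features(ecg).transpose(1, 2)
        out, _ = self.gru(h)
        return self.head(out[:, -1])
```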


Subjects
Deep Learning , Sleep Apnea, Obstructive , Humans , Reproducibility of Results , Sleep Apnea, Obstructive/diagnosis , Sleep , Algorithms
9.
Front Hum Neurosci ; 17: 1157155, 2023.
Article in English | MEDLINE | ID: mdl-37033909

ABSTRACT

Introduction: Brain tumors arise from abnormal cell growth at any location in the brain, with uneven boundaries and shapes. They usually proliferate rapidly, and their size can increase by approximately 1.4% a day, resulting in unnoticed illness and psychological and behavioral changes; they are one of the leading causes of rising adult mortality worldwide. Early prediction of brain tumors is therefore crucial for saving a patient's life, and selecting a suitable imaging sequence also plays a significant role in treatment. Among the available techniques, magnetic resonance (MR) imaging is widely used due to its noninvasive nature and its ability to represent the inherent details of brain tissue. Several computer-assisted diagnosis (CAD) approaches have recently been developed based on these observations. However, variations in tumor characteristics and image noise leave scope for improvement, so a new paradigm is needed. Methods: This paper develops a new medical decision-support system for detecting and differentiating brain tumors from MR images. In the implemented approach, contrast and brightness are first improved using the tuned single-scale retinex (TSSR) approach. The infected tumor region(s) are then extracted using maximum-entropy-based thresholding and morphological operations. Relevant texture features are obtained with the non-local binary pattern (NLBP) feature descriptor. Finally, the extracted features are fed to support vector machine (SVM), K-nearest neighbors (KNN), random forest (RF), and GentleBoost (GB) classifiers. Results: The presented CAD model achieved 99.75% classification accuracy with 5-fold cross-validation and a 91.88% Dice similarity score, which is higher than existing models. Discussion: The experimental outcomes indicate that the method can be used as a supportive clinical tool for physicians during the diagnosis of brain tumors.
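
As a hedged sketch of one step in the pipeline above, the code below implements maximum-entropy (Kapur) thresholding: the threshold that maximizes the summed entropies of the background and foreground histograms is chosen to separate the tumor region. The TSSR enhancement, NLBP features, and classifiers are not reproduced here.

```python
import numpy as np

def max_entropy_threshold(gray):
    """Kapur maximum-entropy threshold for an 8-bit grayscale image (sketch)."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    cdf = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = cdf[t], 1.0 - cdf[t]
        if p0 <= 0 or p1 <= 0:
            continue
        back = p[: t + 1] / p0                       # background distribution
        fore = p[t + 1 :] / p1                       # foreground distribution
        h0 = -np.sum(back[back > 0] * np.log(back[back > 0]))
        h1 = -np.sum(fore[fore > 0] * np.log(fore[fore > 0]))
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# Usage sketch: tumor_mask = mr_slice > max_entropy_threshold(mr_slice)
```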

10.
Curr Med Imaging ; 18(5): 546-562, 2022.
Article in English | MEDLINE | ID: mdl-34607547

ABSTRACT

OBJECTIVE: The objective of any multimodal medical image fusion algorithm is to assist the radiologist in making better decisions during diagnosis and therapy by integrating anatomical (magnetic resonance imaging) and functional (positron emission tomography / single-photon emission computed tomography) information. METHODS: We propose a new medical image fusion method based on content-based decomposition, Principal Component Analysis (PCA), and a sigmoid function. We use the Empirical Wavelet Transform (EWT) for content-based decomposition because it preserves crucial medical image information such as edges and corners. PCA is used to obtain initial weights for each detail layer. RESULTS: In our experiments, we found that using PCA directly for detail-layer fusion introduces severe artifacts into the fused image due to weight-scaling issues. To tackle this, we use the sigmoid function for better weight scaling. We considered 24 pairs of MRI-PET and 24 pairs of MRI-SPECT images for fusion, and the results are measured using four significant quantitative metrics. CONCLUSION: Finally, we compared the proposed method with other state-of-the-art transform-based fusion approaches using traditional and recent performance measures. An appreciable improvement is observed in both qualitative and quantitative results compared with other fusion methods.
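
A hedged sketch of the weighting idea described above follows: PCA over two detail layers yields initial per-image weights, which are passed through a sigmoid to temper extreme scaling before the layers are combined. The EWT decomposition itself is not shown, and the sigmoid parameters are assumptions not taken from the paper.

```python
import numpy as np

def sigmoid(x, k=5.0):
    # The steepness k is an assumption; the paper's exact scaling is not reproduced.
    return 1.0 / (1.0 + np.exp(-k * x))

def fuse_detail_layers(detail_a, detail_b):
    """PCA-derived weights, tempered by a sigmoid, for two detail layers (sketch)."""
    data = np.stack([detail_a.ravel(), detail_b.ravel()])
    eigvals, eigvecs = np.linalg.eigh(np.cov(data))
    principal = np.abs(eigvecs[:, np.argmax(eigvals)])   # leading principal component
    w = principal / principal.sum()                      # initial PCA weights
    w = sigmoid(w - 0.5)                                 # re-scale to avoid extreme weights
    w = w / w.sum()
    return w[0] * detail_a + w[1] * detail_b
```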


Subjects
Image Processing, Computer-Assisted , Wavelet Analysis , Algorithms , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Principal Component Analysis