Results 1 - 4 of 4
1.
Tomography; 9(6): 2158-2189, 2023 Dec 5.
Article in English | MEDLINE | ID: mdl-38133073

ABSTRACT

Computed tomography (CT) is used in a wide range of medical imaging diagnoses. However, the reconstruction of CT images from raw projection data is inherently complex and subject to artifacts and noise, which compromise image quality and accuracy. To address these challenges, deep learning developments have the potential to improve the reconstruction of computed tomography images. In this regard, our research aim is to determine the techniques used for 3D deep learning in CT reconstruction and to identify the training and validation datasets that are accessible. This research was performed on five databases. After a careful assessment of each record based on the objective and scope of the study, we selected 60 research articles for this review. This systematic literature review revealed that convolutional neural networks (CNNs), 3D convolutional neural networks (3D CNNs), and deep learning reconstruction (DLR) were the most suitable deep learning algorithms for CT reconstruction. Additionally, two major datasets appropriate for training and developing deep learning systems were identified: 2016 NIH-AAPM-Mayo and MSCT. These datasets are important resources for the creation and assessment of CT reconstruction models. According to the results, 3D deep learning may increase the effectiveness of CT image reconstruction, boost image quality, and lower radiation exposure. By using these deep learning approaches, CT image reconstruction may be made more precise and effective, improving patient outcomes, diagnostic accuracy, and healthcare system productivity.
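Deep-learning reconstruction (DLR) methods of the kind surveyed here often post-process a noisy reconstructed volume with a 3D CNN trained to predict the noise residual. A minimal NumPy sketch of that residual-learning pattern follows; the phantom, the fixed smoothing kernel (standing in for trained weights), and all sizes are illustrative assumptions, not taken from any article in this list:

```python
import numpy as np

def conv3d_same(vol, kernel):
    """Naive 3D convolution with zero padding (stand-in for a CNN layer)."""
    k = kernel.shape[0]
    p = k // 2
    padded = np.pad(vol, p)
    out = np.zeros_like(vol, dtype=float)
    for z in range(vol.shape[0]):
        for y in range(vol.shape[1]):
            for x in range(vol.shape[2]):
                out[z, y, x] = np.sum(padded[z:z + k, y:y + k, x:x + k] * kernel)
    return out

# Residual-learning pattern common in DLR: the network predicts the noise,
# and the denoised volume is the input minus the predicted noise.
rng = np.random.default_rng(0)
clean = np.zeros((8, 8, 8))
clean[2:6, 2:6, 2:6] = 1.0                        # toy cube phantom
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

smooth_kernel = np.ones((3, 3, 3)) / 27.0         # stands in for trained weights
noise_estimate = noisy - conv3d_same(noisy, smooth_kernel)
denoised = noisy - noise_estimate                 # here: the smoothed volume
```

In a real DLR system the fixed kernel would be replaced by a trained 3D CNN, and training pairs would come from datasets such as 2016 NIH-AAPM-Mayo.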


Subjects
Deep Learning, Humans, Image Processing, Computer-Assisted/methods, Tomography, X-Ray Computed/methods, Neural Networks, Computer, Algorithms
2.
Sensors (Basel); 23(19), 2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37836983

ABSTRACT

The Internet of Things (IoT) and network-enabled smart devices are crucial to today's digitally interconnected society. However, the increased reliance on IoT devices increases their susceptibility to malicious activity within network traffic, posing significant challenges to cybersecurity. As a result, both system administrators and end users are negatively affected by these malevolent behaviours. Intrusion-detection systems (IDSs) are commonly deployed as a cyber-attack defence mechanism to mitigate such risks, and an IDS plays a crucial role in identifying and preventing cyber hazards within IoT networks. However, the development of an efficient and rapid IDS for the detection of cyber attacks remains a challenging area of research. Moreover, IDS datasets contain many features, so feature selection (FS) is required to design an effective and timely IDS. The FS procedure seeks to eliminate irrelevant and redundant features from large IDS datasets, thereby improving the intrusion-detection system's overall performance. In this paper, we propose a hybrid wrapper-based feature-selection algorithm based on the concepts of a Cellular Automata (CA) engine and Tabu Search (TS)-based aspiration criteria. We used a Random Forest (RF) ensemble learning classifier to evaluate the fitness of the selected features. The proposed algorithm, CAT-S, was tested on the TON_IoT dataset. The simulation results demonstrate that CAT-S enhances classification accuracy while simultaneously reducing the number of features and the false positive rate.
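The wrapper-based FS idea behind CAT-S (search over feature subsets, score each subset with a classifier, and keep a tabu list with an aspiration criterion) can be sketched as follows. This is a toy illustration, not the authors' CAT-S: a nearest-centroid fitness stands in for their Random Forest, and the synthetic data, tabu tenure, and iteration count are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 6 features, only features 0 and 1 carry the label signal.
X = rng.standard_normal((200, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def fitness(mask):
    """Proxy for a wrapper's classifier accuracy: nearest-centroid
    training accuracy on the selected features."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return float(np.mean(pred == y))

def tabu_select(n_feat=6, iters=30, tenure=3):
    mask = rng.random(n_feat) < 0.5          # random initial feature subset
    best_mask, best_fit = mask.copy(), fitness(mask)
    tabu = {}                                # feature index -> tabu expiry
    for t in range(iters):
        cand = []
        for j in range(n_feat):
            m = mask.copy()
            m[j] = ~m[j]                     # neighbour: flip one feature
            f = fitness(m)
            # Aspiration criterion: a tabu move is allowed if it beats the best.
            if tabu.get(j, -1) < t or f > best_fit:
                cand.append((f, j, m))
        if not cand:
            continue
        f, j, mask = max(cand, key=lambda c: c[0])
        tabu[j] = t + tenure                 # forbid re-flipping for a while
        if f > best_fit:
            best_fit, best_mask = f, mask.copy()
    return best_mask, best_fit

best_mask, best_fit = tabu_select()
```

The tabu list prevents cycling between recently visited subsets, while the aspiration criterion ensures a genuinely better subset is never rejected just for being tabu.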

3.
Biomed Signal Process Control; 105026, 2023 May 15.
Article in English | MEDLINE | ID: mdl-37361196

ABSTRACT

Since 2019, the entire world has faced Coronavirus Disease 2019 (COVID-19), a highly hazardous and contagious disease. The virus can be identified and diagnosed from its symptoms, among which cough is the primary one used to detect COVID-19. Existing methods require long processing times, and early screening and detection is a complex task. To overcome these drawbacks, a novel ensemble-based deep learning model is designed using heuristic development. The main aim of the designed work is to detect COVID-19 using cough audio signals. Initially, the source signals are fetched and undergo a signal decomposition phase using Empirical Mean Curve Decomposition (EMCD). From the decomposed signal, "Mel Frequency Cepstral Coefficients (MFCC), spectral features, and statistical features" are extracted. Further, all three feature sets are fused into optimal weighted features, with the optimal weight values obtained with the help of the "Modified Cat and Mouse Based Optimizer (MCMBO)". Lastly, the optimal weighted features are fed as input to the Optimized Deep Ensemble Classifier (ODEC), which fuses various classifiers such as "Radial Basis Function (RBF), Long Short-Term Memory (LSTM), and Deep Neural Network (DNN)" models. To attain the best detection results, the parameters of ODEC are optimized by the MCMBO algorithm. In validation, the designed method attains 96% accuracy and 92% precision. The result analysis thus shows that the proposed work achieves the desired detection performance, aiding practitioners in the early diagnosis of COVID-19.
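The fusion-then-ensemble pattern described above (weight three feature sets, concatenate them, then average the member classifiers' scores) can be sketched minimally. Everything here is a hypothetical stand-in: the fixed weights replace MCMBO's optimized ones, random feature values replace real MFCC/spectral/statistical features, and random linear scorers replace the RBF/LSTM/DNN members:

```python
import numpy as np

rng = np.random.default_rng(2)

mfcc = rng.standard_normal(13)        # MFCC-like features (placeholder values)
spectral = rng.standard_normal(5)     # spectral features (placeholder)
statistical = rng.standard_normal(4)  # statistical features (placeholder)

# In the paper these weights come from MCMBO; fixed here for illustration.
w = np.array([0.5, 0.3, 0.2])
fused = np.concatenate([w[0] * mfcc, w[1] * spectral, w[2] * statistical])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stand-ins for the RBF / LSTM / DNN members: each maps features to a score.
members = [rng.standard_normal(fused.size) for _ in range(3)]
scores = [sigmoid(m @ fused) for m in members]
ensemble_score = float(np.mean(scores))  # soft-voting fusion of member scores
label = int(ensemble_score >= 0.5)       # 1 = COVID-19 cough, 0 = healthy
```

The design point is that the weight vector and the member parameters are both search variables for the optimizer, so the feature fusion and the ensemble are tuned jointly rather than in isolation.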

4.
Sensors (Basel); 24(1), 2023 Dec 26.
Article in English | MEDLINE | ID: mdl-38202990

ABSTRACT

In the context of 6G technology, the Internet of Everything aims to create a vast network that connects both humans and devices across multiple dimensions. The integration of smart healthcare, agriculture, transportation, and homes is incredibly appealing, as it allows people to effortlessly control their environment through touch or voice commands. Consequently, as Internet connectivity increases, so does the security risk. Moreover, the future is centered on a six-fold increase in connectivity, necessitating the development of stronger security measures to handle the rapidly expanding concept of IoT-enabled metaverse connections. Various types of attacks, often orchestrated using botnets, pose a threat to the performance of IoT-enabled networks. Detecting anomalies within these networks is crucial for safeguarding applications from potentially disastrous consequences. The voting classifier is a machine learning (ML) model known for its effectiveness, as it capitalizes on the strengths of individual ML models and has the potential to improve overall predictive performance. In this research, we propose a novel classification technique based on the DRX approach, which combines the advantages of the Decision tree, Random forest, and XGBoost algorithms. This ensemble voting classifier significantly enhances the accuracy and precision of network intrusion detection systems. Our experiments were conducted using the NSL-KDD, UNSW-NB15, and CIC-IDS2017 datasets. The findings of our study show that the DRX-based technique outperforms the other methods, achieving an accuracy of 99.88% on the NSL-KDD dataset, 99.93% on the UNSW-NB15 dataset, and 99.98% on the CIC-IDS2017 dataset. Additionally, there is a notable reduction in the false positive rates to 0.003, 0.001, and 0.00012 for the NSL-KDD, UNSW-NB15, and CIC-IDS2017 datasets, respectively.
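The voting-classifier idea behind DRX — combine the per-flow predictions of the Decision tree, Random forest, and XGBoost members and report accuracy alongside the false positive rate — reduces to a few lines. The predictions and ground truth below are made-up toy values, not results from the paper's datasets:

```python
import numpy as np

# Stand-in per-flow predictions (1 = attack, 0 = benign) for the three members.
pred_dt  = np.array([1, 0, 1, 1, 0, 0])
pred_rf  = np.array([1, 0, 0, 1, 0, 1])
pred_xgb = np.array([1, 1, 1, 1, 0, 0])

votes = np.vstack([pred_dt, pred_rf, pred_xgb])
majority = (votes.sum(axis=0) >= 2).astype(int)  # hard (majority) voting

truth = np.array([1, 0, 1, 1, 0, 0])             # toy ground truth
accuracy = float(np.mean(majority == truth))
fp = int(np.sum((majority == 1) & (truth == 0)))
tn = int(np.sum((majority == 0) & (truth == 0)))
fpr = fp / (fp + tn)                             # false positive rate
```

A soft-voting variant would average the members' class probabilities instead of their hard labels; the abstract does not specify which variant DRX uses, so the hard-voting form is shown as the simpler case.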
