Results 1 - 14 of 14
1.
Math Biosci Eng ; 20(8): 13491-13520, 2023 Jun 13.
Article in English | MEDLINE | ID: mdl-37679099

ABSTRACT

The Internet of Things (IoT) is a rapidly evolving technology with a wide range of potential applications, but the security of IoT networks remains a major concern. Existing systems need improvement in detecting intrusions in IoT networks. Several researchers have focused on intrusion detection systems (IDS) that address only one layer of the three-layered IoT architecture, which limits their effectiveness in detecting attacks across the entire network. To address these limitations, this paper proposes an intelligent IDS for IoT networks based on deep learning algorithms. The proposed model consists of a recurrent neural network with gated recurrent units (RNN-GRU), which can classify attacks across the physical, network, and application layers. The model is trained and tested on the ToN-IoT dataset, which was collected specifically for a three-layered IoT system and includes new types of attacks compared to other publicly available datasets. The performance of the proposed model was analyzed using a number of evaluation metrics, such as accuracy, precision, recall, and F1-measure. Two optimization techniques, Adam and Adamax, were applied during evaluation, and Adam was found to perform best. Moreover, the proposed model was compared with various advanced deep learning (DL) and traditional machine learning (ML) techniques. The results show that the proposed system achieves an accuracy of 99% on network flow datasets and 98% on application layer datasets, demonstrating its superiority over previous IDS models.
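The abstract above evaluates its model with accuracy, precision, recall, and F1-measure. As a minimal illustration (not the paper's code), these metrics can be computed from the confusion counts of binary attack/benign predictions:

```python
# Hypothetical sketch: computing the evaluation metrics named in the
# abstract from raw binary labels (1 = attack, 0 = benign).

def confusion_counts(y_true, y_pred):
    """Count true positives, false positives, false negatives, true negatives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def metrics(y_true, y_pred):
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```

F1 is the harmonic mean of precision and recall, which is why a model can report 99% accuracy yet be judged on all four numbers together.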

2.
Math Biosci Eng ; 20(8): 13824-13848, 2023 Jun 16.
Article in English | MEDLINE | ID: mdl-37679112

ABSTRACT

In recent years, industrial networks have seen a number of high-impact attacks. To counter these threats, several security systems have been implemented to detect attacks on industrial networks. However, these systems only address issues after they have occurred and do not proactively prevent them. Identifying malicious attacks is crucial for industrial networks, as these attacks can lead to system malfunctions, network disruptions, data corruption, and the theft of sensitive information. To remain effective in industrial networks, which require continuous operation and change over time, intrusion detection algorithms must be able to adapt to these changes automatically. Several researchers have focused on the automatic detection of these attacks, in which deep learning (DL) and machine learning algorithms play a prominent role. This study proposes a hybrid model that combines two DL algorithms, convolutional neural networks (CNN) and deep belief networks (DBN), for intrusion detection in industrial networks. To evaluate the effectiveness of the proposed model, we utilized the Multi-Step Cyber Attack (MSCAD) dataset and employed various evaluation metrics.
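A DBN, one half of the hybrid above, is built from stacked restricted Boltzmann machines (RBMs). As a hedged, framework-free sketch (the weights and the uniform draws below are illustrative, not the paper's trained values), one RBM Gibbs half-step maps visible units to hidden-unit probabilities and back:

```python
import math

# Illustrative RBM step: W[j][i] is the weight between hidden unit j and
# visible unit i; b_h and b_v are hidden and visible biases.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hidden_probs(v, W, b_h):
    """P(h_j = 1 | v) for each hidden unit j."""
    return [sigmoid(sum(W[j][i] * v[i] for i in range(len(v))) + b_h[j])
            for j in range(len(b_h))]

def sample(probs, draws):
    """Bernoulli sample each unit from a list of uniform draws in [0, 1)."""
    return [1 if d < p else 0 for p, d in zip(probs, draws)]

def visible_probs(h, W, b_v):
    """P(v_i = 1 | h) for each visible unit i (reconstruction step)."""
    return [sigmoid(sum(W[j][i] * h[j] for j in range(len(h))) + b_v[i])
            for i in range(len(b_v))]
```

In a DBN, each trained RBM's hidden activations become the visible input of the next RBM in the stack.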

3.
Sensors (Basel) ; 22(20)2022 Oct 21.
Article in English | MEDLINE | ID: mdl-36298423

ABSTRACT

The Internet of Things (IoT) is a complete ecosystem encompassing various communication technologies, sensors, hardware, and software. Cutting-edge IoT technologies and Artificial Intelligence (AI) have enhanced the traditional healthcare system considerably. The conventional healthcare system faces many challenges, including avoidable long wait times, high costs, conventional methods of payment, unnecessary long travel to medical centers, and mandatory periodic doctor visits. A smart healthcare system built on the IoT and AI is arguably the best-suited, tailor-made solution to these flaws of traditional healthcare systems. The primary goal of this study is to determine the impact of IoT, AI, various communication technologies, sensor networks, and disease detection/diagnosis on cardiac healthcare through a systematic analysis of scholarly articles. A total of 104 primary studies were analyzed against the research questions defined for this systematic study. The review results show that deep learning, combined with the IoT, emerges as a promising technology in the domain of E-Cardiac care, with enhanced accuracy and real-time clinical monitoring. This study also pins down the key benefits and significant challenges for E-Cardiology in the domains of IoT and AI. It further identifies the gaps and future research directions related to E-Cardiology, the monitoring of various cardiac parameters, and diagnostic patterns.


Subject(s)
Artificial Intelligence , Ecosystem , Wireless Technology , Delivery of Health Care , Technology
4.
Math Biosci Eng ; 19(10): 10550-10580, 2022 07 25.
Article in English | MEDLINE | ID: mdl-36032006

ABSTRACT

The Internet of Things (IoT) is a paradigm that connects a range of physical smart devices to provide ubiquitous services to individuals and automate their daily tasks. IoT devices collect data from the surrounding environment and communicate with other devices using different communication protocols such as CoAP, MQTT, and DDS. Studies show that these protocols are vulnerable to attack and pose a significant threat to IoT telemetry data. Within a network, IoT devices are interdependent, and the behaviour of one device depends on the data coming from another. An intruder can exploit this interdependence and alter the telemetry data to indirectly control the behaviour of other dependent devices in the network. Securing IoT devices has therefore become a significant concern in IoT networks. The research community often proposes intrusion detection systems (IDS) using different techniques, one of the most widely adopted being machine learning (ML)-based intrusion detection. This study proposes a stacking-based ensemble model that makes IoT devices more intelligent at detecting unusual behaviour in IoT networks. The ToN-IoT (2020) dataset is used to assess the effectiveness of the proposed model. The proposed model achieves significant improvements in accuracy and other evaluation measures in binary and multi-class classification scenarios for most of the sensors, compared to traditional ML algorithms and other ensemble techniques.
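The stacking idea above can be sketched in miniature: base models emit predictions, and a meta-learner is then fit on the stacked prediction vectors. The two threshold rules and the feature meanings below are hypothetical stand-ins, not the paper's base learners:

```python
# Toy stacking ensemble: two rule-based base models plus a perceptron
# meta-learner trained on their stacked outputs.

def base_a(x):  # hypothetical rule: flags a high value of feature x[0]
    return 1 if x[0] > 0.5 else 0

def base_b(x):  # hypothetical rule: flags a high value of feature x[1]
    return 1 if x[1] > 0.5 else 0

def stack_features(X):
    """Level-1 input: each sample becomes the vector of base predictions."""
    return [[base_a(x), base_b(x)] for x in X]

def train_meta(Z, y, epochs=20, lr=0.5):
    """Perceptron meta-learner over the base-model outputs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for z, t in zip(Z, y):
            pred = 1 if w[0] * z[0] + w[1] * z[1] + b > 0 else 0
            err = t - pred
            w = [wi + lr * err * zi for wi, zi in zip(w, z)]
            b += lr * err
    return w, b

def predict(X, w, b):
    return [1 if w[0] * z[0] + w[1] * z[1] + b > 0 else 0
            for z in stack_features(X)]
```

The meta-learner learns how much to trust each base model, which is what lets stacking outperform any single member on mixed attack types.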


Subject(s)
Internet of Things , Algorithms , Humans , Machine Learning , Telemetry
5.
Sensors (Basel) ; 22(10)2022 May 10.
Article in English | MEDLINE | ID: mdl-35632016

ABSTRACT

The Internet of Things (IoT) is a widely used technology in automated network systems across the world, and it has had a significant impact on many industries in recent years. Many IoT nodes collect, store, and process personal data, which makes them an ideal target for attackers. Several researchers have worked on this problem and presented many intrusion detection systems (IDSs); however, existing systems have difficulty improving performance and identifying subcategories of cyberattacks. This paper proposes a deep-convolutional-neural-network (DCNN)-based IDS, consisting of two convolutional layers and three fully connected dense layers. The proposed model aims to improve performance while reducing computational cost. Experiments were conducted on the IoTID20 dataset, and performance was analyzed with several metrics, such as accuracy, precision, recall, and F1-score. A number of optimization techniques were applied to the proposed model, of which Adam, AdaMax, and Nadam performed best. In addition, the proposed model was compared with various advanced deep learning (DL) and traditional machine learning (ML) techniques. All experimental analysis indicates that the proposed approach achieves high accuracy and is more robust than existing DL-based algorithms.
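The core operation of the convolutional layers mentioned above is a sliding dot product over the input. A minimal, framework-free sketch of a 2-D "valid" convolution (strictly, cross-correlation, as deep learning layers compute):

```python
# Minimal 2-D valid convolution: slide the kernel over the image and
# take the elementwise product-sum at each position.

def conv2d_valid(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1      # output height ("valid": no padding)
    ow = len(image[0]) - kw + 1   # output width
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            s = 0.0
            for u in range(kh):
                for v in range(kw):
                    s += image[i + u][j + v] * kernel[u][v]
            out[i][j] = s
    return out
```

A DCNN stacks such layers (with learned kernels, nonlinearities, and pooling) before the fully connected classification head.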


Subject(s)
Internet of Things , Algorithms , Machine Learning , Neural Networks, Computer
6.
Comput Methods Programs Biomed ; 218: 106731, 2022 May.
Article in English | MEDLINE | ID: mdl-35286874

ABSTRACT

Artificial intelligence (AI) and computer vision (CV) methods have become reliable for extracting features from radiological images, aiding COVID-19 diagnosis ahead of pathogenic tests and saving critical time for disease management and control. This review article therefore surveys numerous deep-learning-based studies of COVID-19 diagnosis from computed tomography (CT) imaging, providing a baseline for future research. Compared to previous review articles on the topic, this study organizes the collected literature differently, in a multi-level arrangement. For this purpose, 71 relevant studies were found using a variety of trustworthy databases and search engines, including Google Scholar, IEEE Xplore, Web of Science, PubMed, Science Direct, and Scopus. We classify the selected literature into multi-level machine learning groups, such as supervised and weakly supervised learning. Our review reveals that weak supervision has been adopted more extensively for COVID-19 CT diagnosis than supervised learning. Weakly supervised (conventional transfer learning) techniques can be used effectively in real-time clinical practice by reusing sophisticated features rather than over-parameterizing standard models. Few-shot and self-supervised learning are recent trends for addressing data scarcity and model efficacy. Deep learning (AI)-based models are mainly utilized for disease management and control, so this review helps readers comprehend the deep learning approaches underlying in-progress COVID-19 CT diagnosis research.


Subject(s)
COVID-19 , Deep Learning , Artificial Intelligence , COVID-19/diagnostic imaging , COVID-19 Testing , Humans , SARS-CoV-2 , Tomography, X-Ray Computed/methods
7.
Comput Biol Med ; 143: 105267, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35114445

ABSTRACT

Cancer is the second deadliest disease globally and can affect any organ of the human body. Early detection of cancer can increase the chances of survival. The morphometric appearance of histopathology images makes it difficult to segment nuclei effectively. We propose a model to segment overlapping nuclei from H&E-stained images. The U-Net model has achieved state-of-the-art performance in many medical image segmentation tasks; we modified the U-Net to learn a distinct set of consistent features. In this paper, we propose the DenseRes-Unet model, which integrates dense blocks into the last layers of the U-Net encoder, focusing on relevant features from previous layers of the model. Moreover, we use residual connections with Atrous blocks instead of conventional skip connections, which helps to reduce the semantic gap between the encoder and decoder paths. The distance map and binary threshold techniques intensify the nuclei interior and contour information in the images, respectively. The distance map is used to detect the center point of each nucleus and differentiates the nucleus's interior boundary from its core area. The distance map lacks contour information, which is resolved by the binary threshold: it enhances the pixels around nuclei. Afterward, we feed the images into the proposed DenseRes-Unet model, a deep, fully convolutional network, to segment the nuclei. We evaluated the model on four publicly available nuclei segmentation datasets. Our proposed model achieves 89.77% accuracy, 90.36% F1-score, and 78.61% Aggregated Jaccard Index (AJI) on the Multi-Organ Nucleus Segmentation (MoNuSeg) dataset.
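The two preprocessing steps named above can be sketched concretely: a binary threshold on a grayscale patch, and a distance map that gives each foreground pixel its distance to the nearest background pixel (here via a multi-source BFS with 4-connectivity; real pipelines typically use a Euclidean distance transform, and the threshold value is an assumption):

```python
from collections import deque

def binary_threshold(img, t):
    """1 where the pixel is at least t, else 0."""
    return [[1 if p >= t else 0 for p in row] for row in img]

def distance_map(mask):
    """4-neighbour BFS distance of every pixel to the nearest background pixel.
    Background pixels get 0; nuclei centres end up with the largest values."""
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for i in range(h):
        for j in range(w):
            if mask[i][j] == 0:
                dist[i][j] = 0
                q.append((i, j))
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and dist[ni][nj] is None:
                dist[ni][nj] = dist[i][j] + 1
                q.append((ni, nj))
    return dist
```

The local maxima of such a map mark nucleus centres, which is how the distance map separates touching nuclei that a plain binary mask merges.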

8.
Sensors (Basel) ; 22(4)2022 Feb 10.
Article in English | MEDLINE | ID: mdl-35214241

ABSTRACT

The Internet of Vehicles (IoV) is an application of the Internet of Things (IoT) that connects smart vehicles to the internet and to each other. With the emergence of IoV technology, customers have paid great attention to smart vehicles. However, the rapid growth of IoV has also caused many security and privacy challenges that can lead to fatal accidents. To reduce smart vehicle accidents and detect malicious attacks in vehicular networks, several researchers have presented machine learning (ML)-based models for intrusion detection in IoT networks. However, a proficient, faster, real-time algorithm is needed to detect malicious attacks in IoV. This article proposes a hybrid deep learning (DL) model for cyberattack detection in IoV, based on long short-term memory (LSTM) and gated recurrent units (GRU). The performance of the proposed model is analyzed using two datasets: a combined DDoS dataset that contains CIC DoS, CICIDS 2017, and CSE-CIC-IDS 2018, and a car-hacking dataset. The experimental results demonstrate that the proposed algorithm achieves high attack detection accuracies of 99.5% and 99.9% for the DDoS and car-hacking datasets, respectively. The other performance scores, precision, recall, and F1-score, also verify the superior performance of the proposed framework.
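To make the GRU half of the LSTM-GRU hybrid concrete, here is one GRU recurrence step in pure Python. The scalar weights stand in for the learned weight matrices of a real model; this is a sketch of the standard GRU equations, not the paper's implementation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h_prev, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step for a 1-D hidden state (scalar weights for clarity)."""
    z = sigmoid(Wz * x + Uz * h_prev)                 # update gate
    r = sigmoid(Wr * x + Ur * h_prev)                 # reset gate
    h_tilde = math.tanh(Wh * x + Uh * (r * h_prev))   # candidate state
    return (1 - z) * h_prev + z * h_tilde             # gated interpolation
```

Because the new state is a convex combination of the previous state and a tanh-bounded candidate, the hidden value stays in [-1, 1] whenever it starts there, which is what keeps the recurrence stable over long packet sequences.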


Subject(s)
Deep Learning , Internet of Things , Internet , Machine Learning , Neural Networks, Computer
9.
Sensors (Basel) ; 21(21)2021 Oct 22.
Article in English | MEDLINE | ID: mdl-34770322

ABSTRACT

A large number of smart devices in Internet of Things (IoT) environments communicate via different messaging protocols. Message Queuing Telemetry Transport (MQTT) is a widely used publish-subscribe protocol for communicating sensor or event data. The publish-subscribe strategy makes it attractive to intruders and thus increases the number of possible attacks over MQTT. In this paper, we propose a Deep Neural Network (DNN) for intrusion detection in the MQTT-based protocol and compare its performance with other machine learning (ML) and deep learning algorithms, such as Naive Bayes (NB), Random Forest (RF), k-Nearest Neighbour (kNN), Decision Tree (DT), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU). The performance is validated on two publicly available datasets: (1) MQTT-IoT-IDS2020 and (2) a dataset with three types of attacks, namely Man in the Middle (MitM), network intrusion, and Denial of Service (DoS). MQTT-IoT-IDS2020 contains three abstraction-level feature sets: Uni-Flow, Bi-Flow, and Packet-Flow. On the first dataset, for binary classification, the DNN-based model achieved accuracies of 99.92%, 99.75%, and 94.94% for Uni-Flow, Bi-Flow, and Packet-Flow, respectively; for multi-label classification, these accuracies dropped to 97.08%, 98.12%, and 90.79%. On the second dataset, the proposed DNN model attains the highest accuracy of 97.13%, outperforming LSTM and GRU.
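A DNN classifier of the kind described above is, at inference time, a chain of dense layers with nonlinearities and a softmax output over the attack classes. A minimal forward pass (the weights below are illustrative, not trained values):

```python
import math

def dense(x, W, b):
    """One fully connected layer: W is a list of rows, one per output unit."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def relu(v):
    return [max(0.0, a) for a in v]

def softmax(v):
    m = max(v)  # subtract the max for numerical stability
    exps = [math.exp(a - m) for a in v]
    s = sum(exps)
    return [e / s for e in exps]

def dnn_forward(x, W1, b1, W2, b2):
    """Input -> dense -> ReLU -> dense -> softmax class probabilities."""
    return softmax(dense(relu(dense(x, W1, b1)), W2, b2))
```

The softmax output is a probability distribution over classes (e.g. benign, MitM, intrusion, DoS), so the predicted label is simply the index of the largest probability.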


Subject(s)
Deep Learning , Internet of Things , Bayes Theorem , Humans , Neural Networks, Computer , Telemetry
10.
J Healthc Eng ; 2021: 6624764, 2021.
Article in English | MEDLINE | ID: mdl-33575018

ABSTRACT

In healthcare applications, deep learning is a highly valuable tool: it extracts features from raw data, saving time and effort for health practitioners. A deep learning model can learn and extract features from raw data by itself, without external intervention, whereas shallow-learning feature extraction depends on the user's experience in selecting a powerful feature extraction algorithm. In this article, we propose a multistage model based on the spectrogram of the biosignal. The proposed model provides an appropriate representation of the raw input biosignal that boosts training and testing accuracy. In the next stage, smaller datasets are augmented into larger ones to enhance classification accuracy for biosignal datasets. The augmented dataset is then implemented in TensorFlow, which provides additional services, functionality, and flexibility. The proposed model was compared with different approaches, and the results show that it is better in terms of testing and training accuracy.
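A spectrogram of the kind the model above is built on is computed by framing the signal and taking a magnitude spectrum per frame. A naive, quadratic-time DFT sketch (real pipelines use an FFT with overlapping, windowed frames; the non-overlapping frames here are a simplification):

```python
import math

def dft_magnitude(frame):
    """Magnitude of the discrete Fourier transform of one frame (naive O(n^2))."""
    n = len(frame)
    mags = []
    for k in range(n):
        re = sum(frame[t] * math.cos(-2 * math.pi * k * t / n) for t in range(n))
        im = sum(frame[t] * math.sin(-2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def spectrogram(signal, frame_len):
    """List of per-frame magnitude spectra over non-overlapping frames."""
    return [dft_magnitude(signal[i:i + frame_len])
            for i in range(0, len(signal) - frame_len + 1, frame_len)]
```

Each row of the result is a time slice and each column a frequency bin, which turns a 1-D biosignal into the 2-D image-like input that convolutional models consume.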


Subject(s)
Deep Learning , Delivery of Health Care
11.
Comput Math Methods Med ; 2020: 4015323, 2020.
Article in English | MEDLINE | ID: mdl-32411282

ABSTRACT

Previous work on the segmentation of SEM (scanning electron microscope) blood cell images has ignored the semantic segmentation approach to whole-slide blood cell segmentation. In this work, we address whole-slide blood cell segmentation using semantic segmentation. We design a novel convolutional encoder-decoder framework with VGG-16 as the pixel-level feature extraction model. The proposed framework comprises three main steps. First, all original images, along with manually generated ground-truth masks for each blood cell type, pass through a preprocessing stage in which pixel-level labeling, RGB-to-grayscale conversion of the masked image, pixel fusing, and unity-mask generation are performed. Second, VGG-16 is loaded into the system as a pretrained pixel-level feature extraction model. Third, training is initiated on the proposed model. We evaluated network performance on three evaluation metrics and obtained outstanding class-wise, global, and mean accuracies. Our system achieved class-wise accuracies of 97.45%, 93.34%, and 85.11% for RBCs, WBCs, and platelets, respectively, while global and mean accuracies were 97.18% and 91.96%, respectively.
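Two of the preprocessing steps named above can be sketched directly: RGB-to-grayscale conversion (using the common ITU-R BT.601 luma weights, an assumption since the abstract does not state the weights) and unity-mask generation, read here as the pixel-wise fusion of per-class binary masks into one foreground mask:

```python
def rgb_to_gray(image):
    """Convert a 2-D grid of (r, g, b) pixels to grayscale (BT.601 weights)."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in image]

def unity_mask(masks):
    """Pixel-wise OR of several equal-shape binary masks (e.g. RBC/WBC/platelet)."""
    h, w = len(masks[0]), len(masks[0][0])
    return [[1 if any(m[i][j] for m in masks) else 0 for j in range(w)]
            for i in range(h)]
```

The fused unity mask gives the network a single foreground/background target alongside the per-class labels.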


Subject(s)
Algorithms , Blood Cells/classification , Blood Cells/ultrastructure , Image Processing, Computer-Assisted/methods , Blood Platelets/ultrastructure , Computational Biology , Databases, Factual/statistics & numerical data , Deep Learning , Erythrocytes/ultrastructure , Humans , Image Enhancement/methods , Image Processing, Computer-Assisted/statistics & numerical data , Leukocytes/ultrastructure , Neural Networks, Computer , Precursor Cell Lymphoblastic Leukemia-Lymphoma/blood , Semantics
12.
PLoS One ; 12(2): e0171581, 2017.
Article in English | MEDLINE | ID: mdl-28146580

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pone.0162746.].

13.
PLoS One ; 11(11): e0162746, 2016.
Article in English | MEDLINE | ID: mdl-27851762

ABSTRACT

Processing large amounts of data in real time to identify security issues poses several performance challenges, especially when hardware infrastructure is limited. Managed Security Service Providers (MSSPs), mostly hosting their applications on the cloud, receive events at very high rates, varying from a few hundred to a couple of thousand events per second (EPS). It is critical to process this data efficiently so that attacks can be identified quickly and the necessary response initiated. This paper evaluates the performance of OSTROM, a security framework built on the Esper complex event processing (CEP) engine, under parallel and non-parallel computational frameworks. We explain three architectures under which Esper can be used to process events and investigate the effect on throughput, memory, and CPU usage in each configuration. The results indicate that the performance of the engine is limited by the rate of incoming events rather than by the queries being processed. The architecture in which one quarter of the total events is submitted to each instance, with all queries processed by all units, shows the best results in terms of throughput, memory, and CPU usage.
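The best-performing layout described above, splitting events evenly across four engine instances while every instance runs the full query set, can be sketched generically. The round-robin sharding and predicate "queries" below are illustrative stand-ins, not Esper's API:

```python
def partition_events(events, n_instances=4):
    """Round-robin events across n_instances shards (1/n of events each)."""
    shards = [[] for _ in range(n_instances)]
    for i, ev in enumerate(events):
        shards[i % n_instances].append(ev)
    return shards

def run_instance(shard, queries):
    """Apply every query (modelled as a predicate) to every event in a shard."""
    return [ev for ev in shard if any(q(ev) for q in queries)]
```

Because every instance runs all queries, no query result is lost by the split; only the event stream, the measured bottleneck, is divided.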


Subject(s)
Cloud Computing , Software , Computers , Electronic Data Processing
14.
J Med Syst ; 39(10): 128, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26306876

ABSTRACT

Retinal blood vessels supply oxygen and nutrition to the retina, and any change in their normal structure may lead to retinal abnormalities. Automated detection of the vascular structure is very important when designing a computer-aided diagnostic system for retinal diseases. The most popular methods for vessel segmentation are based on matched filters and Gabor wavelets, which give a good response to blood vessels. One major drawback of these techniques is that they also respond strongly to lesion boundaries (exudates, hemorrhages), giving rise to false vessels that may lead to incorrect detection of vascular changes. In this paper, we propose a new hybrid feature set along with a new classification technique for accurate detection of blood vessels. The main motivation is to lower false positives, especially in retinal images with severe disease. A novel region-based hybrid feature set is presented for proper discrimination between true and false vessels, together with a new modified m-mediods-based classification that uses the most discriminating features to categorize vessel regions as true or false. The proposed system is evaluated thoroughly on publicly available databases, as well as on a locally gathered database with images of advanced retinal disease. The results demonstrate the validity of the proposed system compared to existing state-of-the-art techniques.
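To illustrate the flavour of medoid-based classification used above (a hedged sketch only; the paper's modified m-mediods method models each class with multiple medoids and its own feature set), one can pick each class's single medoid, the member minimizing total distance to its classmates, and label new regions by the nearest medoid:

```python
def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def medoid(points):
    """The member of the class that minimizes total distance to the others."""
    return min(points, key=lambda p: sum(dist(p, q) for q in points))

def classify(x, class_points):
    """class_points: dict mapping label -> list of feature vectors."""
    medoids = {label: medoid(pts) for label, pts in class_points.items()}
    return min(medoids, key=lambda label: dist(x, medoids[label]))
```

Unlike a centroid, a medoid is always an actual class member, which makes the model robust to the outlier regions that lesion boundaries produce.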


Subject(s)
Image Processing, Computer-Assisted/methods , Retinal Diseases/diagnosis , Retinal Diseases/pathology , Retinal Vessels/pathology , Algorithms , False Positive Reactions , Fundus Oculi , Humans , Retina/pathology