Results 1 - 5 of 5
1.
Sensors (Basel) ; 23(20)2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37896705

ABSTRACT

With the increased use of automated systems, the Internet of Things (IoT), and sensors for real-time water quality monitoring, the timely detection of unexpected values has become essential: technical faults can introduce anomalies, and the high rate of incoming data makes manual detection of erroneous readings impractical. This research introduces and applies Multivariate Multiple Convolutional Networks with Long Short-Term Memory (MCN-LSTM), a deep learning technique designed to detect anomalies in complex time series, to real-world water quality monitoring. By integrating multiple convolutional networks with Long Short-Term Memory networks, MCN-LSTM provides efficient and effective anomaly detection in multivariate time series data, identifying and flagging unexpected patterns or values that may signal water quality issues. Precise anomaly identification matters because anomalous readings can propagate into subsequent analyses and lead to incorrect judgments, undermining the integrity of water quality assessments. Extensive experiments were carried out to validate MCN-LSTM on real-world data obtained from sensors deployed in water quality monitoring scenarios; the technique achieved an accuracy of 92.3%, demonstrating its capacity to discriminate between normal and abnormal data instances in real time. MCN-LSTM therefore represents a significant step forward in water quality monitoring: it can improve decision-making and reduce the adverse outcomes caused by undetected anomalies, contributing to the safety and sustainability of water supplies in an era of increasing reliance on automated monitoring systems and IoT technology.
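
The abstract does not give architecture details, so the following is only a minimal sketch of a generic convolutional-plus-LSTM anomaly detector for windowed multivariate sensor data; the layer sizes, window length, sensor count, and decision threshold are assumptions, not values from the paper.

```python
# Minimal sketch of a convolutional + LSTM anomaly detector for multivariate
# water-quality time series. Layer sizes, window length, and the anomaly
# threshold are illustrative assumptions, not the paper's MCN-LSTM design.
import numpy as np
import tensorflow as tf

WINDOW = 60        # time steps per input window (assumed)
N_SENSORS = 6      # e.g. pH, turbidity, temperature, ... (assumed)

def build_cnn_lstm():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, N_SENSORS)),
        # Stacked Conv1D blocks extract local temporal features from each window.
        tf.keras.layers.Conv1D(32, kernel_size=5, padding="same", activation="relu"),
        tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        # The LSTM models longer-range temporal dependence across the window.
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(window is anomalous)
    ])

model = build_cnn_lstm()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x: (n_windows, WINDOW, N_SENSORS); y: 0 = normal window, 1 = anomalous window.
x = np.random.rand(128, WINDOW, N_SENSORS).astype("float32")
y = np.random.randint(0, 2, size=(128, 1))
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
flags = (model.predict(x, verbose=0) > 0.5).astype(int)  # real-time flagging
```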

2.
Sensors (Basel) ; 23(12)2023 Jun 19.
Article in English | MEDLINE | ID: mdl-37420868

ABSTRACT

The latest version of ZigBee offers improvements in several respects, including low power consumption, flexibility, and cost-effective deployment. Challenges persist, however, as the upgraded protocol still suffers from a wide range of security weaknesses. Constrained wireless sensor network devices cannot use standard security mechanisms such as asymmetric cryptography, which is too resource-intensive for such networks. ZigBee instead uses the Advanced Encryption Standard (AES), the recommended symmetric-key block cipher for securing data in sensitive networks and applications; however, AES is expected to become vulnerable to some attacks in the near future, and symmetric cryptosystems have key management and authentication issues of their own. To address these concerns in wireless sensor networks, and in ZigBee communications in particular, this paper proposes a mutual authentication scheme that dynamically updates the secret key used in device-to-trust-center (D2TC) and device-to-device (D2D) communications. The proposed solution also strengthens ZigBee cryptography by enhancing the regular AES encryption process without resorting to asymmetric cryptography: a secure one-way hash function is applied when D2TC and D2D parties mutually authenticate each other, together with bitwise exclusive-OR operations. Once authentication is accomplished, the ZigBee participants agree on a shared session key and exchange a secure value; this value is then combined with the sensed data from the devices and used as input to regular AES encryption, giving the encrypted data robust protection against potential cryptanalysis attacks. Finally, a comparative analysis against eight competing schemes illustrates how the proposed scheme maintains efficiency, evaluating security features, communication cost, and computational cost.


Subjects
Communication , Computer Security , Humans , Computer Communication Networks
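
As an illustration of the hash-and-XOR style of lightweight key handling the abstract describes, the sketch below derives a session key from hashed nonces, updates the shared secret, and XORs a derived secure value into the sensed data before regular AES encryption. The message formats, field sizes, and update rule are assumptions; the paper's exact protocol is not reproduced here.

```python
# Illustrative sketch only: hash-based session-key derivation, dynamic key
# update, and XOR masking of sensed data before AES encryption.
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def h(*parts: bytes) -> bytes:
    """One-way hash used for authentication values and key derivation."""
    d = hashlib.sha256()
    for p in parts:
        d.update(p)
    return d.digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Device and trust center start from a pre-installed shared secret.
shared_secret = os.urandom(32)

# Mutual authentication (simplified): both sides contribute nonces and derive
# the same fresh session key from the shared secret.
nonce_dev, nonce_tc = os.urandom(16), os.urandom(16)
session_key = h(shared_secret,
                xor(nonce_dev.ljust(32, b"\0"), nonce_tc.ljust(32, b"\0")))

# Dynamic key update: both sides replace the stored secret after this run.
shared_secret = h(shared_secret, session_key)

# A secure value derived from the session key is XORed with the sensed data,
# and the result is fed into regular AES encryption.
secure_value = h(session_key, b"secure-value")
sensed = b"temp=21.7;humidity=44"
masked = xor(sensed, secure_value[:len(sensed)])
aes = AESGCM(session_key)          # 32-byte key -> AES-256
iv = os.urandom(12)
ciphertext = aes.encrypt(iv, masked, None)
```
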
3.
Comput Intell Neurosci ; 2022: 8154523, 2022.
Article in English | MEDLINE | ID: mdl-35387251

ABSTRACT

Data analytics is a massively parallel processing approach that can be used to forecast a wide range of illnesses, but many scientific workflows require a significant amount of time and processing effort, which degrades overall system performance. Virtual screening (VS) is a drug discovery approach that makes use of big data techniques: it is a time-consuming procedure that involves docking ligands from several databases against a target protein receptor and is used in the development of novel drugs. The proposed work is divided into two modules: image processing-based cancer segmentation, and analysis of the extracted features using big data analytics. This statistical approach is intended to support the development of new drugs for the treatment of liver cancer. Machine learning methods were utilised for liver cancer prediction, with MapReduce and Mahout algorithms used to prefilter the set of ligand filaments before prediction. This work proposes SMRF, an improved scalable random forest algorithm built on the MapReduce foundation, which classifies massive datasets on a computer cluster or in a cloud computing environment. With SMRF, small portions of the data are processed and optimised across a large number of machines, allowing for the highest possible throughput. Compared to the standard random forest method, the experimental results show that SMRF exhibits a comparable level of accuracy deterioration while delivering superior overall performance, and the performance-metric analysis reports an accuracy of around 80 percent for the liver cancer prediction used in the drug formulation studied here.


Subjects
Data Science , Liver Neoplasms , Algorithms , Cloud Computing , Humans , Image Processing, Computer-Assisted , Liver Neoplasms/diagnostic imaging
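
To make the MapReduce framing concrete, here is a hypothetical sketch of how a scalable random forest can be split into a map step (fit trees on each data partition) and a reduce step (aggregate the trees' votes). The partition count, tree count, and synthetic dataset are placeholders and do not correspond to the paper's SMRF implementation.

```python
# Hypothetical map/reduce decomposition of random forest training: map tasks
# fit bootstrap-sampled trees on their own data partition, the reduce step
# combines all trees by majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)

def map_fit(partition):
    """Map step: fit one bootstrap-sampled tree on a single data partition."""
    Xp, yp = partition
    idx = np.random.randint(0, len(Xp), size=len(Xp))
    return DecisionTreeClassifier(max_features="sqrt").fit(Xp[idx], yp[idx])

def reduce_vote(trees, X_new):
    """Reduce step: majority vote over the trees from all partitions."""
    votes = np.stack([t.predict(X_new) for t in trees])
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Simulate distributing the data over 4 workers (HDFS splits in a real cluster).
partitions = list(zip(np.array_split(X, 4), np.array_split(y, 4)))
forest = [map_fit(p) for p in partitions for _ in range(25)]   # 100 trees total
pred = reduce_vote(forest, X)
print("training-set accuracy:", (pred == y).mean())
```
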
4.
Comput Intell Neurosci ; 2022: 4454226, 2022.
Article in English | MEDLINE | ID: mdl-35126492

ABSTRACT

The digestive system is one of the essential systems in human physiology, in which the stomach plays a significant part along with the esophagus, duodenum, small intestine, and large intestine. Many individuals across the globe suffer from gastric dysrhythmia in combination with dyspepsia (impaired digestion), unexplained nausea, vomiting, abdominal discomfort, stomach ulcers, and gastroesophageal reflux disease. Techniques used to identify such anomalies include clinical analysis, endoscopy, electrogastrography, and imaging. The electrogastrogram (EGG) is a recording of the electrical impulses that pass through the stomach muscles and regulate their contraction: electrodes sense the impulses from the stomach muscles, the electrogastrogram is recorded, and a computer analyzes the captured EGG signals. The normal electrical rhythm produces an enhanced current in healthy stomach muscle after a meal, whereas the postprandial rhythm is abnormal in people with stomach muscle or nerve anomalies. This study analyzes EGG recordings from normal individuals and from subjects with bradycardia, dyspepsia, nausea, tachycardia, ulcer, and vomiting. Data were collected in collaboration with a physician under preprandial and postprandial conditions for both patients and healthy individuals. In the continuous wavelet transform (CWT) combined with a genetic algorithm, the db4 wavelet is used to obtain the EGG signal wave pattern as a 3D plot in MATLAB; the presence of peaks reflects the EGG signal cycle, and the number of peaks is used to categorize the EGG. An Adaptive Resonance Classifier Network (ARCN) then labels EGG signals as normal or abnormal, depending on the vigilance (alertness) parameter (µ). This study may serve as a medical tool for diagnosing digestive system problems before invasive treatments are proposed. The proposed approach achieves an accuracy of 95.45%, with a sensitivity of 92.45% and a specificity of 87.12%.


Subjects
Biosensing Techniques , Dyspepsia , Algorithms , Humans , Machine Learning , Stomach
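
For readers unfamiliar with the wavelet step, the sketch below decomposes a synthetic EGG trace with a db4 wavelet and counts peaks to estimate gastric cycles per minute. It uses the discrete wavelet transform as a stand-in for the paper's MATLAB CWT/genetic-algorithm pipeline, and the sampling rate, decomposition level, and peak spacing are assumed values.

```python
# Minimal sketch: db4 wavelet decomposition of an EGG trace and peak counting
# to estimate cycles per minute. The DWT is used here as a stand-in for the
# paper's CWT pipeline; FS, level, and peak spacing are assumptions.
import numpy as np
import pywt
from scipy.signal import find_peaks

FS = 2.0  # Hz, assumed cutaneous EGG sampling rate

def egg_cycles_per_minute(egg: np.ndarray) -> float:
    # Decompose with Daubechies-4 and keep only the low-frequency approximation,
    # which carries the ~3 cycles-per-minute gastric slow wave.
    coeffs = pywt.wavedec(egg, "db4", level=4)
    approx = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")
    peaks, _ = find_peaks(approx, distance=int(10 * FS))  # >= 10 s between peaks
    return 60.0 * len(peaks) / (len(egg) / FS)

# Synthetic 5-minute "normal" EGG: a 3 cpm slow wave plus noise.
t = np.arange(0, 300, 1 / FS)
egg = np.sin(2 * np.pi * (3 / 60) * t) + 0.3 * np.random.randn(len(t))
cpm = egg_cycles_per_minute(egg)
# A count near 3 cpm indicates a normal rhythm; rates well outside that band
# would be flagged as abnormal before the classification stage.
print(f"estimated {cpm:.1f} cycles/min")
```
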
5.
PeerJ Comput Sci ; 7: e646, 2021.
Article in English | MEDLINE | ID: mdl-34401475

ABSTRACT

Cardiovascular diseases (CVDs) are among the most critical heart conditions, and accurate real-time analytics for heart disease is therefore important. This paper develops a smart healthcare framework (SHDML) that uses deep and machine learning techniques, optimized with stochastic gradient descent (SGD), to predict the presence of heart disease. The SHDML framework consists of two stages. The first stage monitors a patient's heart rate in real time: a unit built around an ATmega32 microcontroller determines the heartbeat rate per minute from pulse rate sensors and broadcasts the acquired sensor data to a Firebase Cloud database every 20 seconds, while a smart application displays the sensor data. The second stage serves as a medical decision support system for predicting and diagnosing heart disease: deep or machine learning models are ported to the smart application to analyze user data and predict CVDs in real time. Two different deep and machine learning methods were evaluated for performance, trained and tested on a widely used open-access dataset. The proposed SHDML framework performed very well, with an accuracy of 0.99, sensitivity of 0.94, specificity of 0.85, and F1-score of 0.87.
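
As a rough illustration of the second (prediction) stage, the sketch below trains a linear classifier with stochastic gradient descent on heart-disease-style tabular features; the synthetic data, feature count, and hyperparameters are placeholders rather than the paper's SHDML configuration.

```python
# Hedged sketch of the prediction stage: a linear classifier whose weights are
# fitted by stochastic gradient descent. Data and settings are placeholders.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
# Placeholder for an open-access heart-disease dataset (e.g. age, resting BP,
# cholesterol, max heart rate, ...). Only the shape is meaningful here.
X = rng.normal(size=(1000, 13))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Standardize features, then fit a linear model with SGD updates.
model = make_pipeline(
    StandardScaler(),
    SGDClassifier(max_iter=1000, random_state=0),
)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred), "F1:", f1_score(y_te, pred))
```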
