Results 1 - 6 of 6
1.
Heliyon ; 10(12): e32400, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38975160

ABSTRACT

Pests are a significant challenge in paddy cultivation, causing a global loss of approximately 20% of rice yield. Early detection of paddy insects can help avert these losses. Several approaches have been suggested for identifying and categorizing insects in paddy fields using advanced, noninvasive, portable technologies, but none has combined feature-optimization techniques with deep learning and machine learning. The present research therefore provides a framework that uses these techniques to detect and categorize images of paddy insects promptly. First, the image dataset is gathered and divided into two groups: images without paddy insects and images with them. Pre-processing techniques such as augmentation and image filtering are then applied to enhance the quality of the dataset and eliminate unwanted noise. To extract and analyze the deep characteristics of an image, the architecture incorporates five pre-trained Convolutional Neural Network models. Feature-selection techniques, including Principal Component Analysis (PCA), Recursive Feature Elimination (RFE), and Linear Discriminant Analysis (LDA), together with the Lion Optimization algorithm, are then used to reduce the redundant features collected for the study. Identification of the paddy insects is subsequently carried out with seven ML algorithms. Finally, experimental analysis shows that the feature vectors extracted by ResNet50, combined with Logistic Regression and PCA, achieve the highest accuracy, precisely 99.28%. The proposed approach could significantly change how paddy insects are diagnosed in the field.
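The final stage of the pipeline above — PCA over deep feature vectors followed by Logistic Regression — can be sketched as follows. Synthetic random vectors stand in for the ResNet50 embeddings, and the dimensions, sample counts, and component count are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for ResNet50 feature vectors (2048-dim per image):
# class 0 = no paddy insects, class 1 = paddy insects, separated by a mean shift.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2048)),
               rng.normal(0.8, 1.0, (100, 2048))])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# PCA reduces the redundant deep features before classification.
pca = PCA(n_components=50).fit(X_tr)
clf = LogisticRegression(max_iter=1000).fit(pca.transform(X_tr), y_tr)

accuracy = clf.score(pca.transform(X_te), y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

In practice the feature matrix `X` would come from a frozen ResNet50's penultimate layer rather than a random generator; the PCA-then-classifier structure is the same.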

2.
Sci Rep ; 14(1): 9614, 2024 04 26.
Article in English | MEDLINE | ID: mdl-38671304

ABSTRACT

Abnormal heart conduction, known as arrhythmia, can contribute to cardiac diseases that carry the risk of fatal consequences. Healthcare professionals typically use electrocardiogram (ECG) signals and preliminary tests to identify abnormal patterns in a patient's cardiac activity, monitoring these activities separately to assess overall cardiac health. This procedure can be arduous and time-intensive, potentially affecting the patient's well-being. This study automates that process and introduces a novel solution for predicting cardiac health conditions, specifically identifying cardiac morbidity and arrhythmia, from invasive and non-invasive measurements. Experimental analyses in medical studies involve extremely sensitive data, and any partial or biased diagnosis in this field is unacceptable. This research therefore introduces the concept of quantifying the uncertainty level of machine learning algorithms using information entropy. Information entropy can serve as a distinctive performance evaluator for a machine learning algorithm, one that, to our knowledge, has not previously been used in bio-computational research. The experiment was conducted on arrhythmia and heart disease datasets collected from the Massachusetts Institute of Technology-Beth Israel Hospital arrhythmia database (DB-1) and the Cleveland Heart Disease database (DB-2), respectively. Our framework consists of four significant steps: 1) data acquisition, 2) feature preprocessing, 3) implementation of the learning algorithms, and 4) information entropy. The average classification accuracies achieved were: Neural Network (NN) 99.74%, K-Nearest Neighbor (KNN) 98.98%, Support Vector Machine (SVM) 99.37%, Random Forest (RF) 99.76%, and Naïve Bayes (NB) 98.66%.
We believe that this study paves the way for further research, offering a framework for identifying cardiac health conditions through machine learning techniques.
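The entropy-based uncertainty measure described above can be illustrated with a minimal sketch: the Shannon entropy of a classifier's predicted class probabilities, where higher entropy signals a less certain prediction. The function below is our own illustrative formulation, not the paper's exact implementation.

```python
import numpy as np

def prediction_entropy(probabilities):
    """Shannon entropy (in bits) of a classifier's predicted class
    probabilities; higher entropy means a less certain prediction."""
    p = np.asarray(probabilities, dtype=float)
    p = p[p > 0]  # 0 * log2(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

confident = prediction_entropy([0.99, 0.01])  # close to 0 bits
uncertain = prediction_entropy([0.5, 0.5])    # exactly 1 bit
```

Averaging this quantity over a test set would give a single uncertainty score per algorithm, which could then be reported alongside accuracy.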


Subjects
Cardiac Arrhythmias, Electrocardiography, Machine Learning, Humans, Electrocardiography/methods, Cardiac Arrhythmias/diagnosis, Algorithms, Physiologic Monitoring/methods, Heart Diseases/diagnosis
3.
Sci Rep ; 14(1): 7406, 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38548726

ABSTRACT

Software vulnerabilities pose a significant threat to system security, necessitating effective automatic detection methods. Current techniques face challenges such as dependency issues, language bias, and coarse detection granularity. This study presents a novel deep learning-based vulnerability detection system for Java code. Leveraging hybrid feature extraction through graph and sequence-based techniques enhances semantic and syntactic understanding. The system utilizes control flow graphs (CFG), abstract syntax trees (AST), program dependencies (PD), and greedy longest-match first vectorization for graph representation. A hybrid neural network (GCN-RFEMLP) and the pre-trained CodeBERT model extract features, feeding them into a quantum convolutional neural network with self-attentive pooling. The system addresses issues like long-term information dependency and coarse detection granularity, employing intermediate code representation and inter-procedural slice code. To mitigate language bias, a benchmark software assurance reference dataset is employed. Evaluations demonstrate the system's superiority, achieving 99.2% accuracy in detecting vulnerabilities, outperforming benchmark methods. The proposed approach comprehensively addresses vulnerabilities, including improper input validation, missing authorizations, buffer overflow, cross-site scripting, and SQL injection attacks listed by common weakness enumeration (CWE).
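The abstract names "greedy longest-match first vectorization" as part of the graph representation step. A plausible sketch of the matching rule — always consuming the longest vocabulary entry at the current position — is shown below; the exact rule, the vocabulary, and the fallback behavior are our assumptions, not the paper's specification.

```python
def greedy_longest_match(code, vocab):
    """Greedily tokenize a code string against a vocabulary, always
    consuming the longest matching entry first; characters not covered
    by the vocabulary fall back to single-character tokens."""
    by_length = sorted(vocab, key=len, reverse=True)
    tokens, i = [], 0
    while i < len(code):
        match = next((v for v in by_length if code.startswith(v, i)), code[i])
        tokens.append(match)
        i += len(match)
    return tokens

# Hypothetical vocabulary over a Java snippet: "int" wins over the
# shorter prefix "in" because longer matches are tried first.
vocab = {"int", "in", "=", ";", " ", "x"}
tokens = greedy_longest_match("int x = 1;", vocab)
```

The resulting token sequence could then be mapped to integer IDs and fed to the downstream neural feature extractors.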

4.
PeerJ Comput Sci ; 9: e1524, 2023.
Article in English | MEDLINE | ID: mdl-37705647

ABSTRACT

The use of offensive terms in user-generated content is a major concern for social media platforms. Offensive terms have a negative impact on individuals and may lead to a degradation of societal and civilized manners. The immense amount of content generated at high speed makes it humanly impossible to categorize and detect offensive terms, and detecting such terminology automatically remains an open challenge for natural language processing (NLP). Substantial efforts have been made for high-resource languages such as English, but the task becomes more challenging for resource-poor languages such as Urdu, owing to the lack of standard datasets and pre-processing tools for automatic offensive-term detection. This paper introduces a combinatorial pre-processing approach for building a classification model for cross-platform (Twitter and YouTube) use. Datasets from the two platforms are used for training and testing models built with decision tree, random forest, and naive Bayes algorithms. The combinatorial pre-processing approach examines how machine learning models behave under different combinations of standard pre-processing techniques for a low-resource language in the cross-platform setting. The experimental results demonstrate the effectiveness of the machine learning models over different subsets of traditional pre-processing approaches in building a classification model for automatic offensive-term detection for a low-resource language, i.e., Urdu, in the cross-platform scenario. When dataset D1 was used for training and D2 for testing, stopword removal produced the best results, with an accuracy of 83.27%; conversely, when D2 was used for training and D1 for testing, stopword removal combined with punctuation removal performed best, with an accuracy of 74.54%. The proposed combinatorial approach outperformed the benchmark on the considered datasets using both classical and ensemble machine learning, with accuracies of 82.9% and 97.2% for datasets D1 and D2, respectively.
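The combinatorial idea — trying every subset of standard pre-processing steps — can be sketched as below. The two steps shown (stopword removal and punctuation removal) are the ones the abstract reports; the tiny stopword list and the tokenization are illustrative placeholders, not the paper's Urdu resources.

```python
import itertools
import string

def remove_stopwords(tokens, stopwords=frozenset({"the", "a", "is"})):
    # Placeholder English stopword list; the study targets Urdu.
    return [t for t in tokens if t not in stopwords]

def remove_punctuation(tokens):
    stripped = (t.strip(string.punctuation) for t in tokens)
    return [t for t in stripped if t]

STEPS = {"stopword_removal": remove_stopwords,
         "punctuation_removal": remove_punctuation}

def preprocess(text, step_names):
    tokens = text.lower().split()
    for name in step_names:
        tokens = STEPS[name](tokens)
    return tokens

# Enumerate every subset of pre-processing steps, as in the
# combinatorial search over pipelines.
for r in range(len(STEPS) + 1):
    for combo in itertools.combinations(STEPS, r):
        processed = preprocess("The text, is a sample!", combo)
        print(combo, processed)
```

Each subset's output would then feed the classifiers, and the accuracy per subset and per train/test direction (D1→D2, D2→D1) would be compared.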

5.
Sensors (Basel) ; 23(11)2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37299987

ABSTRACT

A vehicular ad hoc network (VANET) is a technology in which vehicles sense data from the environment and use it for safety measures. Flooding, a common technique for disseminating network packets, can cause redundancy, delay, collisions, and incorrect delivery of messages to their destinations in a VANET. Weather information is one of the most important types of information used for network control and enriches network simulation environments. Network traffic delay and packet loss are the main problems identified inside the network. In this research, we propose a routing protocol that transmits weather-forecast information on demand from a source vehicle to destination vehicles with the minimum number of hop counts, while providing significant control over network performance parameters. We propose a BBSF-based routing approach that effectively enhances the routing information and provides secure and reliable service delivery. The results, measured in terms of hop count, network latency, network overhead, and packet delivery ratio, show that the proposed technique reliably reduces network latency and minimizes the hop count when transferring weather information.
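Minimum-hop routing between a source and a destination vehicle can be sketched with a breadth-first search over the vehicle adjacency graph. This is a generic minimum-hop sketch, not the BBSF protocol itself; the topology below is hypothetical.

```python
from collections import deque

def min_hop_route(graph, source, dest):
    """Breadth-first search for the route with the fewest hops between
    a source and a destination vehicle in an ad hoc topology."""
    queue, visited = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == dest:
            return path
        for neighbor in graph.get(path[-1], ()):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # destination unreachable

# Hypothetical directed adjacency: which vehicles are in radio range.
topology = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["E"], "E": []}
route = min_hop_route(topology, "A", "E")  # ["A", "C", "E"], 2 hops
```

A real protocol would layer the on-demand request/reply and security mechanisms on top of such a hop-minimizing path search.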


Subjects
Blockchain, Algorithms, Computer Communication Networks, Wireless Technology, Weather
6.
IEEE Sens J ; 23(2): 865-876, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36913223

ABSTRACT

Smart sensing has made notable contributions to the healthcare industry and has driven immense advancement. Present smart-sensing applications, such as Internet of Medical Things (IoMT) applications, were extended during the COVID-19 outbreak to assist victims and reduce the extensive transmission rate of this pathogenic virus. Although existing IoMT applications were used productively during the pandemic, their Quality of Service (QoS) metrics — a basic need of these applications for patients, physicians, nursing staff, and others — have been overlooked. In this review article, we give a comprehensive assessment of the QoS of IoMT applications used during the pandemic from 2019 to 2021, identifying their requirements and current challenges by taking into account various network components and communication metrics. To substantiate the contribution of this work, we explore layer-wise QoS challenges in the existing literature to identify particular requirements and set the footprint for future research. Finally, we compare each section with existing review articles to establish the uniqueness of this work and to explain why this survey is needed despite the current state-of-the-art review papers.
