Results 1 - 2 of 2
1.
Sensors (Basel); 24(1), 2023 Dec 28.
Article in English | MEDLINE | ID: mdl-38203051

ABSTRACT

In today's digitalized era, Android devices are used extensively across many sectors. Cybercriminals inevitably adapt to new security technologies and exploit vulnerabilities on these platforms for nefarious purposes, such as stealing users' sensitive and personal data. This may result in financial losses, reputational damage, ransomware, or the spread of infectious malware and other catastrophic cyber-attacks. Because ransomware encrypts user data and demands a ransom payment in exchange for the decryption key, it is one of the most devastating types of malicious software. The implications of ransomware attacks range from the loss of essential data to the disruption of business operations and significant monetary damage. Artificial intelligence (AI)-based techniques, namely machine learning (ML), have proven notable in the detection of Android ransomware attacks; however, ensemble models and deep learning (DL) models have not been sufficiently explored. Therefore, in this study, we used ML- and DL-based techniques to build efficient, precise, and robust models for binary classification. A publicly available Kaggle dataset consisting of 392,035 records with benign traffic and 10 different types of Android ransomware attacks was used to train and test the models. Two experiments were carried out: in experiment 1, all the features of the dataset were used; in experiment 2, only the best 19 features were used. The deployed models included a decision tree (DT), a support vector machine (SVM), a k-nearest neighbor (KNN) classifier, an ensemble of DT, SVM, and KNN, a feedforward neural network (FNN), and a tabular attention network (TabNet). Overall, the experiments yielded excellent results. DT outperformed the others, with an accuracy of 97.24%, a precision of 98.50%, and an F1-score of 98.45%, while SVM achieved the highest recall, at 100%. The results are thoroughly discussed, and limitations and potential directions for future work are addressed.
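
The abstract does not include code, but the described setup (binary classification with DT, SVM, and KNN, an ensemble of the three, and a 19-feature selection step) maps naturally onto a standard scikit-learn pipeline. The sketch below is an illustration of that setup, not the authors' implementation: it substitutes synthetic data for the Kaggle Android ransomware dataset, and the model hyperparameters, the choice of `SelectKBest` with an ANOVA F-test, and hard voting for the ensemble are all assumptions.

```python
# Minimal sketch (not the paper's code): benign-vs-ransomware binary classification
# with DT, SVM, KNN, and a hard-voting ensemble, keeping only the 19 best features.
# Synthetic data stands in for the Kaggle Android ransomware dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Placeholder data: label 0 = benign traffic, 1 = ransomware
X, y = make_classification(n_samples=5000, n_features=80, n_informative=25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

def with_top19(clf):
    # Experiment 2 analogue: scale, then keep the 19 highest-scoring features.
    return make_pipeline(StandardScaler(), SelectKBest(f_classif, k=19), clf)

dt = with_top19(DecisionTreeClassifier(random_state=0))
svm = with_top19(SVC(kernel="rbf"))
knn = with_top19(KNeighborsClassifier(n_neighbors=5))
ensemble = VotingClassifier([("dt", dt), ("svm", svm), ("knn", knn)], voting="hard")

for name, model in [("DT", dt), ("SVM", svm), ("KNN", knn), ("Ensemble", ensemble)]:
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name}: acc={accuracy_score(y_test, pred):.4f} "
          f"prec={precision_score(y_test, pred):.4f} "
          f"rec={recall_score(y_test, pred):.4f} "
          f"f1={f1_score(y_test, pred):.4f}")
```

Running experiment 1 (all features) would simply drop the `SelectKBest` step from the pipeline; the FNN and TabNet models mentioned in the abstract would require a separate DL framework and are omitted here.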

2.
Sensors (Basel); 22(14), 2022 Jul 16.
Article in English | MEDLINE | ID: mdl-35891005

ABSTRACT

In the oil and gas industry, predicting and classifying oil and gas production for hydrocarbon wells is difficult. Most oil and gas companies use reservoir simulation software to predict future oil and gas production and devise optimum field development plans. However, this process consumes an immense amount of resources and is time-consuming: each reservoir prediction experiment needs tens or hundreds of simulation runs, taking several hours or days to finish. In this paper, we attempt to overcome these issues by creating machine learning and deep learning models to expedite the forecasting of oil and gas production. The dataset was provided by the leading oil producer, Saudi Aramco. Our approach reduced the time cost to, at worst, a few minutes. Our study covered eight different ML and DL experiments and achieved its best R2 scores of 0.96 for XGBoost, 0.97 for ANN, and 0.98 for RNN across the experiments.
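
For readers unfamiliar with the approach, the core idea is to train a regression model as a fast surrogate for the reservoir simulator and to score it with R2, as reported above. The sketch below illustrates this with the XGBoost regressor named in the abstract; the synthetic features, the noise model, and the hyperparameters are illustrative assumptions, since the Saudi Aramco well data is proprietary.

```python
# Minimal sketch (not the authors' workflow): an XGBoost regressor as a fast
# surrogate for simulated production output, evaluated with the R^2 score.
# Features and target below are synthetic placeholders for proprietary well data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 4000
# Hypothetical predictors (e.g. pressure, porosity, permeability, choke size)
X = rng.normal(size=(n, 4))
# Hypothetical production response with additive noise
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] ** 2 - X[:, 2] * X[:, 3] + rng.normal(scale=0.5, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBRegressor(n_estimators=300, max_depth=5, learning_rate=0.05, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print(f"R2 on held-out samples (synthetic data): {r2_score(y_test, pred):.3f}")
```

Once trained, such a surrogate can be queried in seconds, which is the source of the time savings relative to running tens or hundreds of full simulations; the ANN and RNN experiments mentioned in the abstract would follow the same train/score pattern with a DL framework.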


Subjects
Deep Learning, Neural Networks (Computer), Machine Learning, Software, Water