Results 1 - 20 of 56
1.
Int J Cardiol ; 412: 132339, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38968972

ABSTRACT

BACKGROUND: The study aimed to determine the parameters most strongly associated with CVD and to employ a novel data ensemble refinement procedure to uncover the optimal pattern of these parameters for high prediction accuracy. METHODS AND RESULTS: Data were collected from 369 patients in total: 281 patients with CVD or at risk of developing it, and 88 otherwise healthy individuals. Within the group of 281 CVD or at-risk patients, 53 were diagnosed with coronary artery disease (CAD), 16 with end-stage renal disease, 47 newly diagnosed with type 2 diabetes mellitus, and 92 with chronic inflammatory disorders (21 rheumatoid arthritis, 41 psoriasis, 30 angiitis). The data were analyzed using an artificial intelligence-based algorithm with the primary objective of identifying the optimal pattern of parameters that define CVD. The study highlights the effectiveness of a six-parameter combination in discerning the likelihood of cardiovascular disease using the DERGA and Extra Trees algorithms. These parameters, ranked in order of importance, are platelet-derived microvesicles (PMVs), hypertension, age, smoking, dyslipidemia, and body mass index (BMI). Endothelial and erythrocyte MVs, along with diabetes, were the least important predictors. The highest prediction accuracy achieved was 98.64%. Notably, using PMVs alone yields 91.32% accuracy, while the optimal model employing all ten parameters yields a prediction accuracy of 97.83%. CONCLUSIONS: Our research showcases the efficacy of DERGA, an innovative data ensemble refinement greedy algorithm. DERGA accelerates the assessment of an individual's risk of developing CVD, allows for early diagnosis, significantly reduces the number of required lab tests, and optimizes resource utilization. It also helps identify the optimal parameters critical for assessing CVD susceptibility, thereby enhancing our understanding of the underlying mechanisms.
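The greedy-refinement idea behind an algorithm like DERGA can be sketched in a few lines: repeatedly add whichever parameter most improves a score until no addition helps. The sketch below is illustrative only; the scoring function and its weights are invented stand-ins for the cross-validated Extra Trees accuracy the authors actually use.

```python
def greedy_subset_search(features, score):
    """Greedily add the feature that most improves the score,
    stopping when no single addition helps (a DERGA-style refinement)."""
    chosen, best = [], float("-inf")
    remaining = list(features)
    while remaining:
        # Evaluate every one-feature extension of the current subset.
        trials = [(score(chosen + [f]), f) for f in remaining]
        s, f = max(trials)
        if s <= best:          # no extension improves the score: stop
            break
        best = s
        chosen.append(f)
        remaining.remove(f)
    return chosen, best

# Hypothetical per-feature weights standing in for predictive value;
# the size penalty mimics preferring smaller lab-test panels.
WEIGHTS = {"PMV": 0.5, "hypertension": 0.3, "age": 0.2, "smoking": 0.05}

def toy_score(subset):
    return sum(WEIGHTS.get(f, 0.0) for f in subset) - 0.06 * len(subset)

subset, acc = greedy_subset_search(
    ["PMV", "hypertension", "age", "smoking", "BMI"], toy_score)
```

With these toy weights the search keeps PMV, hypertension, and age and stops once adding a fourth parameter no longer pays for itself.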

2.
Sci Rep ; 14(1): 13723, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38877014

ABSTRACT

This paper proposes a novel multi-hybrid algorithm named DHPN, combining the best-known properties of the dwarf mongoose algorithm (DMA), honey badger algorithm (HBA), prairie dog optimizer (PDO), cuckoo search (CS), grey wolf optimizer (GWO), and naked mole rat algorithm (NMRA). It follows an iterative division for extensive exploration and incorporates major parametric enhancements for improved exploitation. To counter local optima, a stagnation phase using CS and GWO is added. Six new inertia weight operators have been analyzed to adapt the algorithmic parameters, and the best combination of these parameters has been found. The suitability of DHPN under population variations and higher dimensions has also been analyzed. For performance evaluation, the CEC 2005 and CEC 2019 benchmark data sets have been used. A comparison has been performed with differential evolution with active archive (JADE), self-adaptive DE (SaDE), success-history-based DE (SHADE), LSHADE-SPACMA, extended GWO (GWO-E), jDE100, and others. The DHPN algorithm is also used to solve the image fusion problem for four fusion quality metrics, namely the edge-based similarity index (Q^(AB/F)), sum of correlation difference (SCD), structural similarity index measure (SSIM), and artifact measure (N^(AB/F)). The average values Q^(AB/F) = 0.765508, SCD = 1.63185, SSIM = 0.726317, and N^(AB/F) = 0.006617 show that DHPN obtains the best combination of results with respect to existing algorithms such as DCH, CBF, GTF, JSR, and others. Experimental results and statistical Wilcoxon and Friedman tests show that the proposed DHPN algorithm performs significantly better than the other algorithms under test.
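The stagnation-phase trigger that DHPN uses can be reduced to a simple counter: switch search phases once the best fitness has failed to improve for a set number of iterations. A minimal, deterministic sketch (the patience value and trace are hypothetical, and the CS/GWO phase itself is elided):

```python
def stagnation_switch_point(fitness_trace, patience=3):
    """Return the iteration at which a DHPN-style optimizer would enter
    its stagnation phase: after `patience` consecutive iterations with
    no improvement of the best fitness (minimization)."""
    best = float("inf")
    stalled = 0
    for i, f in enumerate(fitness_trace):
        if f < best:
            best, stalled = f, 0      # improvement resets the counter
        else:
            stalled += 1
            if stalled >= patience:
                return i              # hand over to the CS/GWO phase here
    return None                       # never stagnated

# Fitness improves, then plateaus for three steps -> switch at index 6.
trace = [9.0, 7.5, 7.5, 6.0, 6.0, 6.0, 6.0]
switch_at = stagnation_switch_point(trace)
```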

3.
Article in English | MEDLINE | ID: mdl-38923476

ABSTRACT

In recent times, there has been a notable rise in the use of Internet of Medical Things (IoMT) frameworks, particularly those based on edge computing, to enhance remote monitoring in healthcare applications. Most existing models in this field have developed temperature screening methods using RCNN, a face temperature encoder (FTE), and a combination of data from wearable sensors for predicting respiratory rate (RR) and monitoring blood pressure. These methods aim to facilitate remote screening and monitoring of Severe Acute Respiratory Syndrome Coronavirus (SARS-CoV) and COVID-19. However, these models demand considerable computing resources and are not suitable for lightweight environments. We propose a multimodal screening framework that leverages deep learning-inspired data fusion models to enhance screening results. A Variation Encoder (VEN) is proposed to measure skin temperature using Regions of Interest (RoI) identified by YOLO. Subsequently, a multi-data fusion model integrates electronic record features with data from wearable human sensors. To optimize computational efficiency, a data reduction mechanism is added to eliminate unnecessary features. Furthermore, we employ a contingent probability method to estimate distinct feature weights for each cluster, deepening our understanding of variations in thermal and sensory data for predicting abnormal COVID-19 instances. Simulation results using our lab dataset demonstrate a precision of 95.2%, surpassing state-of-the-art models thanks to the design of the multimodal feature fusion model, weight prediction factor, and feature selection model.

4.
Sci Rep ; 14(1): 7833, 2024 04 03.
Article in English | MEDLINE | ID: mdl-38570560

ABSTRACT

Heart disease is a leading global cause of mortality and a major public health problem for a large number of individuals. A key issue in routine clinical data analysis is the recognition of cardiovascular illnesses, including heart attacks and coronary artery disease, even though early identification of heart disease can save many lives. Machine learning (ML) can deliver accurate forecasting and effective decision support. Big Data, the vast amounts of data generated by the health sector, may assist diagnostic models by revealing hidden information or intricate patterns. This paper describes a big data analysis and visualization approach for heart disease detection using a hybrid deep learning algorithm. The proposed approach is intended for use with big data systems such as Apache Hadoop. An extensive medical data collection is first subjected to an improved k-means clustering (IKC) method to remove outliers, and the remaining class distribution is then balanced using the synthetic minority over-sampling technique (SMOTE). After recursive feature elimination (RFE) has determined which features are most important, the disease is forecast using a bio-inspired hybrid mutation-based swarm intelligence (HMSI) method with an attention-based gated recurrent unit network (AttGRU) model. In our implementation, we compare four machine learning algorithms: SAE + ANN (sparse autoencoder + artificial neural network), LR (logistic regression), KNN (K-nearest neighbour), and naïve Bayes. The experimental results indicate that the proposed hybrid model attains a 95.42% accuracy rate for heart disease prediction, effectively outperforming the related work and addressing the research gap identified there.
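The SMOTE step in the pipeline above balances classes by interpolating new minority samples between existing ones. A minimal sketch of that idea, using the single nearest neighbour rather than the k-NN variant real SMOTE uses, and with invented 2-D data:

```python
import random

def smote_like(minority, n_new, seed=0):
    """Generate synthetic minority samples by interpolating between a
    randomly chosen sample and its nearest minority neighbour
    (a simplified SMOTE sketch)."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    out = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # nearest neighbour of `a` among the other minority points
        nn = min((p for p in minority if p is not a), key=lambda p: dist2(a, p))
        t = rng.random()   # random position along the connecting segment
        out.append(tuple(x + t * (y - x) for x, y in zip(a, nn)))
    return out

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]
synthetic = smote_like(minority, n_new=4)
```

Because each synthetic point lies on a segment between two real minority points, it stays inside the minority region rather than drifting into the majority class.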


Assuntos
Doença da Artéria Coronariana , Aprendizado Profundo , Cardiopatias , Humanos , Teorema de Bayes , Cardiopatias/diagnóstico , Cardiopatias/genética , Doença da Artéria Coronariana/diagnóstico , Doença da Artéria Coronariana/genética , Algoritmos , Inteligência
5.
J Environ Manage ; 358: 120756, 2024 May.
Article in English | MEDLINE | ID: mdl-38599080

ABSTRACT

Water quality indicators (WQIs), such as chlorophyll-a (Chl-a) and dissolved oxygen (DO), are crucial for understanding and assessing the health of aquatic ecosystems. Precise prediction of these indicators is fundamental for the efficient administration of rivers, lakes, and reservoirs. This research utilized two deep learning (DL) algorithms, namely convolutional neural networks (CNNs) and gated recurrent units (GRUs), alongside their combination, CNN-GRU, to estimate the concentration of these indicators within a reservoir. Moreover, to optimize the outcomes of the developed hybrid model, we considered the impact of a decomposition technique, specifically the wavelet transform (WT). In addition, we built two distinct machine learning (ML) algorithms, namely random forest (RF) and support vector regression (SVR), to demonstrate the superior performance of deep learning algorithms over individual ML ones. To achieve this, we initially gathered WQIs from diverse locations and varying depths within the reservoir using an AAQ-RINKO device. It is important to highlight that, despite the use of diverse data-driven models in water quality estimation, a significant gap persists in the existing literature regarding a comprehensive hybrid algorithm that integrates the wavelet transform, CNN, and GRU methodologies to estimate WQIs accurately within a spatiotemporal framework. Subsequently, the effectiveness of the developed models was assessed using various statistical metrics, encompassing the correlation coefficient (r), root mean square error (RMSE), mean absolute error (MAE), and Nash-Sutcliffe efficiency (NSE), throughout both the training and testing phases.
The findings demonstrated that, with R-squared as the evaluation index and DO as the target WQI, the WT-CNN-GRU model outperformed the other algorithms by 13% (SVR), 13% (RF), 9% (CNN), and 8% (GRU).
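The wavelet-decomposition step in a WT-CNN-GRU hybrid splits a series into a smooth trend and detail coefficients before the learner sees it. A one-level Haar transform, the simplest wavelet, illustrates the idea on a made-up DO series (the paper does not state which mother wavelet it uses):

```python
def haar_step(signal):
    """One level of the discrete Haar wavelet transform: pairwise
    averages (approximation/trend) and halved differences (detail)."""
    assert len(signal) % 2 == 0
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert one Haar step, recovering the original series exactly."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

do_series = [8.1, 8.3, 7.9, 7.5, 7.8, 8.0]   # hypothetical DO readings (mg/L)
approx, detail = haar_step(do_series)
```

In a hybrid model, `approx` and `detail` would each be fed to the CNN-GRU as separate sub-series; the transform is lossless, as the inverse confirms.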


Assuntos
Algoritmos , Redes Neurais de Computação , Qualidade da Água , Aprendizado de Máquina , Monitoramento Ambiental/métodos , Lagos , Clorofila A/análise , Análise de Ondaletas
6.
Eur J Intern Med ; 125: 67-73, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38458880

ABSTRACT

It is important to determine the risk of admission to the intensive care unit (ICU) in patients with COVID-19 presenting at the emergency department. Using artificial neural networks, we propose a new Data Ensemble Refinement Greedy Algorithm (DERGA) based on 15 easily accessible hematological indices. A database of 1596 patients with COVID-19 was used; it was divided into 1257 training datasets (80% of the database) for training the algorithms and 339 testing datasets (20% of the database) to check the reliability of the algorithms. The optimal combination of hematological indicators that gives the best prediction consists of only four indicators: neutrophil-to-lymphocyte ratio (NLR), lactate dehydrogenase, ferritin, and albumin. The best prediction corresponds to a particularly high accuracy of 97.12%. In conclusion, our novel approach provides a robust model based only on basic hematological parameters for predicting the risk of ICU admission and optimizing COVID-19 patient management in clinical practice.


Assuntos
Algoritmos , COVID-19 , Unidades de Terapia Intensiva , Aprendizado de Máquina , Índice de Gravidade de Doença , Humanos , COVID-19/diagnóstico , COVID-19/sangue , Masculino , Feminino , Pessoa de Meia-Idade , Prognóstico , Idoso , SARS-CoV-2 , Ferritinas/sangue , Redes Neurais de Computação , Neutrófilos , Adulto , L-Lactato Desidrogenase/sangue
7.
Sci Rep ; 14(1): 6942, 2024 Mar 23.
Article in English | MEDLINE | ID: mdl-38521848

ABSTRACT

Watermarking is one of the crucial techniques in the domain of information security, preventing the exploitation of 3D mesh models in the Internet era. In 3D mesh watermark embedding, the vertices are commonly perturbed moderately so that they retain a certain pre-arranged relationship with their neighboring vertices. This paper proposes a novel watermarking authentication method, called Nearest Centroid Discrete Gaussian and Levenberg-Marquardt (NCDG-LV), for distortion detection and recovery using salient point detection. In this method, the salient points are selected using the Nearest Centroid and Discrete Gaussian Geometric (NC-DGG) salient point detection model. Map segmentation is applied to the 3D mesh model to segment it into distinct sub-regions according to the selected salient points. Finally, the watermark is embedded by applying the multi-function barycenter to each spatially selected and segmented region. In the extraction process, the embedded watermark is extracted from each re-segmented region by means of Levenberg-Marquardt deep neural network watermark extraction. In the authentication stage, watermark bits are extracted by analyzing the geometry via Levenberg-Marquardt back-propagation. A performance evaluation shows that the proposed method exhibits high imperceptibility and tolerance against attacks such as smoothing, cropping, translation, and rotation. The experimental results further demonstrate that the proposed method is superior to state-of-the-art methods in terms of salient point detection time, distortion rate, true positive rate, peak signal-to-noise ratio, bit error rate, and root mean square error.
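The "moderate perturbation relative to a barycenter" idea can be made concrete with a generic quantization-index-modulation scheme: move a vertex radially so its distance to the region's barycenter lands in an even bin for bit 0 or an odd bin for bit 1. This is a textbook construction, not the NCDG-LV method itself; step size and vertices are invented.

```python
import math

def barycenter(vertices):
    """Barycenter (mean point) of a list of 3D vertices."""
    n = len(vertices)
    return tuple(sum(v[i] for v in vertices) / n for i in range(3))

def embed_bit(vertex, center, bit, step=0.01):
    """Scale `vertex` radially so its distance to the barycenter sits
    mid-way inside an even (bit 0) or odd (bit 1) quantization bin."""
    d = math.dist(vertex, center)
    q = int(d / step)
    if q % 2 != bit:
        q += 1
    target = (q + 0.5) * step        # bin centre: robust to tiny noise
    scale = target / d
    return tuple(c + (v - c) * scale for v, c in zip(vertex, center))

def extract_bit(vertex, center, step=0.01):
    """Read the embedded bit back from the quantized distance."""
    return int(math.dist(vertex, center) / step) % 2

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
c = barycenter(verts)
wm_vertex = embed_bit(verts[1], c, bit=1)
```

Targeting the centre of the bin rather than its edge is what buys tolerance to small perturbations such as light smoothing.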

8.
Sci Rep ; 14(1): 4877, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38418500

ABSTRACT

Differential evolution (DE) is a robust optimizer designed for solving complex research problems in the computational intelligence community. In the present work, a multi-hybrid DE (MHDE) is proposed to improve the overall working capability of the algorithm without compromising solution quality. Adaptive parameters, enhanced mutation, enhanced crossover, population reduction, iterative division, and Gaussian random sampling are the major characteristics of the proposed MHDE algorithm. First, an iterative division for improved exploration and exploitation is used; then an adaptive proportional population size reduction mechanism is followed to reduce computational complexity. Weibull distribution and Gaussian random sampling are also incorporated to mitigate premature convergence. The proposed framework is validated using the IEEE CEC benchmark suites (CEC 2005, CEC 2014, and CEC 2017). The algorithm is applied to four engineering design problems and to the weight minimization of three frame design problems. Experimental results are analysed and compared with recent hybrid algorithms such as Laplacian biogeography-based optimization, adaptive differential evolution with archive (JADE), success-history-based DE, self-adaptive DE, LSHADE, MVMO, the fractional-order calculus-based flower pollination algorithm, the sine cosine crow search algorithm, and others. Statistically, the Friedman and Wilcoxon rank-sum tests show that the proposed algorithm fares better than the others.
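For readers unfamiliar with the baseline that MHDE's enhanced mutation and crossover build on, here is classic DE/rand/1/bin on the sphere function. This is the standard textbook variant, not the paper's enhanced algorithm; the control parameters are conventional defaults.

```python
import random

def de_rand_1_bin(f, bounds, np_=20, F=0.6, CR=0.9, gens=100, seed=42):
    """Classic DE/rand/1/bin minimizer: rand/1 mutation, binomial
    crossover, greedy one-to-one selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)          # guarantees one mutated gene
            trial = [
                min(hi, max(lo, pop[a][j] + F * (pop[b][j] - pop[c][j])))
                if (rng.random() < CR or j == jrand) else pop[i][j]
                for j, (lo, hi) in enumerate(bounds)
            ]
            ft = f(trial)
            if ft <= fit[i]:                    # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=fit.__getitem__)
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
x, fx = de_rand_1_bin(sphere, [(-5.0, 5.0)] * 3)
```

MHDE's contributions (adaptive F/CR, population reduction, Weibull/Gaussian sampling) all slot into this skeleton.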

9.
J Cell Mol Med ; 28(4): e18105, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38339761

ABSTRACT

Complement inhibition has shown promise in various disorders, including COVID-19. A prediction tool that includes complement genetic variants is vital. This study aims to identify crucial complement-related variants and determine an optimal pattern for accurate prediction of disease outcome. Genetic data from 204 COVID-19 patients hospitalized between April 2020 and April 2021 at three referral centres were analysed using an artificial intelligence-based algorithm to predict disease outcome (ICU vs. non-ICU admission). A recently introduced alpha-index identified the 30 most predictive genetic variants. The DERGA algorithm, which employs multiple classification algorithms, determined the optimal pattern of these key variants, resulting in 97% accuracy for predicting disease outcome. Individual variation ranged from 40 to 161 variants per patient, with 977 variants detected in total. This study demonstrates the utility of the alpha-index in ranking a substantial number of genetic variants. This approach enables the implementation of well-established classification algorithms that effectively determine the relevance of genetic variants in predicting outcomes with high accuracy.


Assuntos
COVID-19 , Humanos , COVID-19/epidemiologia , COVID-19/genética , Inteligência Artificial , Algoritmos
10.
Sci Rep ; 14(1): 4816, 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38413614

ABSTRACT

Many real-world optimization problems, particularly engineering ones, involve constraints that make finding a feasible solution challenging. Numerous researchers have investigated this challenge for constrained single- and multi-objective optimization problems. In particular, this work extends the boundary update (BU) method proposed by Gandomi and Deb (Comput. Methods Appl. Mech. Eng. 363:112917, 2020) for constrained optimization. BU is an implicit constraint handling technique that cuts the infeasible search space over the iterations so that the feasible region is found faster. In doing so, the search space is twisted, which can make the optimization problem more challenging. In response, two switching mechanisms are implemented that transform the landscape, along with the variables, back to the original problem once the feasible region is found. To this end, two thresholds, representing distinct switching methods, are considered. In the first approach, the optimization process transitions to a BU-free state when the constraint violation reaches zero. In the second method, the optimization process shifts to a BU-free phase when no further change is observed in the objective space. For validation, benchmark and engineering problems are solved with well-known evolutionary single- and multi-objective optimization algorithms. Herein, the proposed method is benchmarked against runs with and without the BU approach over the whole search process. The results show that the proposed method can significantly boost both convergence speed and solution quality for constrained optimization problems.
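The first switching rule (drop BU once the constraint violation hits zero) can be illustrated on a single variable. The sketch below is a deliberately simplified, hypothetical stand-in: a scan of the interval plays the role of the evolutionary search, and bound-cutting plays the role of the BU space reduction.

```python
def boundary_update_search(f, g, lo, hi, steps=60):
    """Minimize f subject to g(x) <= 0. While all sampled points are
    infeasible, cut the interval from the side with the larger violation
    (a BU-style reduction); once a feasible point appears, switch the BU
    phase off, per the first switching rule, and keep the interval fixed."""
    bu_active = True
    best_x, best_f = None, float("inf")
    for k in range(steps):
        x = lo + (hi - lo) * (k % 10) / 9        # uniform scan of interval
        if g(x) <= 0:                            # feasible point found
            bu_active = False                    # violation is zero: drop BU
            if f(x) < best_f:
                best_x, best_f = x, f(x)
        elif bu_active:
            if g(lo) > g(hi):                    # cut the worse boundary
                lo += 0.1 * (hi - lo)
            else:
                hi -= 0.1 * (hi - lo)
    return best_x, best_f, bu_active

# minimize (x - 2)^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
best_x, best_f, bu_still_on = boundary_update_search(
    lambda x: (x - 2) ** 2, lambda x: 1 - x, lo=-4.0, hi=4.0)
```

Freezing the bounds after the switch is what "untwists" the landscape: the remaining search runs on the original problem.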

11.
Sci Rep ; 14(1): 676, 2024 01 05.
Article in English | MEDLINE | ID: mdl-38182607

ABSTRACT

Melanoma is a severe skin cancer that involves abnormal cell development. This study provides a new feature fusion framework for melanoma classification that includes a novel 'F' flag feature for early detection. This novel 'F' indicator efficiently distinguishes benign skin lesions from malignant ones (melanoma). The article proposes an architecture built on a Double Decker Convolutional Neural Network, called DDCNN feature fusion. The network's first deck, a convolutional neural network (CNN), finds difficult-to-classify hairy images using a confidence factor termed the intra-class variance score. These hairy image samples are combined to form a Baseline Separated Channel (BSC). After hair removal and data augmentation, the BSC is ready for analysis. The network's second deck trains on the pre-processed BSC and generates bottleneck features. The bottleneck features are merged with features generated from the ABCDE clinical bio-indicators to promote classification accuracy. The resulting hybrid fused features, together with the novel 'F' flag feature, are fed to different types of classifiers. The proposed system was trained on the ISIC 2019 and ISIC 2020 datasets to assess its performance. The empirical findings show that the DDCNN feature fusion strategy for detecting malignant melanoma achieved a specificity of 98.4%, an accuracy of 93.75%, a precision of 98.56%, and an Area Under the Curve (AUC) value of 0.98. This study proposes a novel approach that can accurately identify and diagnose fatal skin cancer and outperform other state-of-the-art techniques, which is attributed to the DDCNN 'F' feature fusion framework. This research also ascertained improvements in several classifiers when utilising the 'F' indicator, with specificity gains of up to 7.34%.


Assuntos
Melanoma , Neoplasias Cutâneas , Humanos , Melanoma/diagnóstico por imagem , Neoplasias Cutâneas/diagnóstico por imagem , Pele , Área Sob a Curva , Redes Neurais de Computação
12.
Sci Rep ; 14(1): 534, 2024 01 04.
Article in English | MEDLINE | ID: mdl-38177156

ABSTRACT

The most widely used method for detecting Coronavirus Disease 2019 (COVID-19) is real-time polymerase chain reaction. However, this method has several drawbacks, including high cost, lengthy turnaround time for results, and the potential for false-negative results due to limited sensitivity. To address these issues, additional technologies such as computed tomography (CT) or X-rays have been employed for diagnosing the disease. Chest X-rays are more commonly used than CT scans due to the widespread availability of X-ray machines, lower ionizing radiation, and lower cost of equipment. COVID-19 presents certain radiological biomarkers that can be observed through chest X-rays, making it necessary for radiologists to manually search for these biomarkers. However, this process is time-consuming and prone to errors. Therefore, there is a critical need to develop an automated system for evaluating chest X-rays. Deep learning techniques can be employed to expedite this process. In this study, a deep learning-based method called Custom Convolutional Neural Network (Custom-CNN) is proposed for identifying COVID-19 infection in chest X-rays. The Custom-CNN model consists of eight weighted layers and utilizes strategies like dropout and batch normalization to enhance performance and reduce overfitting. The proposed approach achieved a classification accuracy of 98.19% and aims to accurately classify COVID-19, normal, and pneumonia samples.


Assuntos
COVID-19 , Humanos , Raios X , Radiografia , COVID-19/diagnóstico por imagem , Redes Neurais de Computação , Biomarcadores
13.
iScience ; 27(1): 108709, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38269095

ABSTRACT

The increasing demand for food production due to the growing population is raising the need for more food-productive environments for plants. The genetic behavior of plant traits differs across growing environments. However, it is tedious, and often infeasible, to monitor individual plant component traits manually. Plant breeders need computer vision-based plant monitoring systems to analyze the productivity and environmental suitability of different plants. Such systems enable feasible quantitative analysis, geometric analysis, and yield rate analysis of the plants. Plant breeders have used many data collection methods according to their needs. In the presented review, most of them are discussed with their corresponding challenges and limitations. Furthermore, the traditional approaches to segmentation and classification in plant phenotyping are also discussed. The data limitation problems and their currently adopted solutions in the computer vision context are highlighted; these partially address the problem but are not complete solutions. The available datasets and current issues are outlined. The presented study covers plant phenotyping problems, suggested solutions, and current challenges from data collection to classification.

14.
BMC Bioinformatics ; 25(1): 33, 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38253993

ABSTRACT

Breast cancer remains a major public health challenge worldwide. The identification of accurate biomarkers is critical for the early detection and effective treatment of breast cancer. This study utilizes an integrative machine learning approach to analyze breast cancer gene expression data for superior biomarker and drug target discovery. Gene expression datasets, obtained from the GEO database, were merged post-preprocessing. From the merged dataset, differential expression analysis between breast cancer and normal samples revealed 164 differentially expressed genes. Meanwhile, a separate gene expression dataset revealed 350 differentially expressed genes. Additionally, the BGWO_SA_Ens algorithm, integrating binary grey wolf optimization and simulated annealing with an ensemble classifier, was employed on gene expression datasets to identify predictive genes including TOP2A, AKR1C3, EZH2, MMP1, EDNRB, S100B, and SPP1. From over 10,000 genes, BGWO_SA_Ens identified 1404 in the merged dataset (F1 score: 0.981, PR-AUC: 0.998, ROC-AUC: 0.995) and 1710 in the GSE45827 dataset (F1 score: 0.965, PR-AUC: 0.986, ROC-AUC: 0.972). The intersection of DEGs and BGWO_SA_Ens selected genes revealed 35 superior genes that were consistently significant across methods. Enrichment analyses uncovered the involvement of these superior genes in key pathways such as AMPK, Adipocytokine, and PPAR signaling. Protein-protein interaction network analysis highlighted subnetworks and central nodes. Finally, a drug-gene interaction investigation revealed connections between superior genes and anticancer drugs. Collectively, the machine learning workflow identified a robust gene signature for breast cancer, illuminated their biological roles, interactions and therapeutic associations, and underscored the potential of computational approaches in biomarker discovery and precision oncology.
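The two selection routes above, differential expression and optimizer-based selection, are intersected to obtain the "superior" genes. A minimal sketch of that workflow with invented expression values and a hypothetical optimizer output (real analyses would add a significance test, e.g. moderated t-statistics, to the fold-change filter):

```python
import math

def differential_genes(expr_cancer, expr_normal, lfc_cutoff=1.0):
    """Flag genes whose mean expression differs by at least `lfc_cutoff`
    in absolute log2 fold change between the two groups."""
    degs = set()
    for gene in expr_cancer:
        mc = sum(expr_cancer[gene]) / len(expr_cancer[gene])
        mn = sum(expr_normal[gene]) / len(expr_normal[gene])
        if abs(math.log2(mc / mn)) >= lfc_cutoff:
            degs.add(gene)
    return degs

# Hypothetical normalized expression values for three genes.
cancer = {"TOP2A": [8.0, 9.0], "EZH2": [6.0, 6.5], "ACTB": [5.0, 5.1]}
normal = {"TOP2A": [2.0, 2.5], "EZH2": [3.0, 3.2], "ACTB": [5.0, 5.0]}
degs = differential_genes(cancer, normal)

# Hypothetical output of a wrapper feature selector like BGWO_SA_Ens.
selected_by_optimizer = {"TOP2A", "AKR1C3", "ACTB"}
superior = degs & selected_by_optimizer   # consistent across both methods
```

Requiring a gene to pass both an expression-based filter and a classifier-driven selector is what makes the final signature robust to the quirks of either method alone.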


Assuntos
Biomarcadores Tumorais , Neoplasias da Mama , Humanos , Feminino , Biomarcadores Tumorais/genética , Medicina de Precisão , Algoritmos , Sistemas de Liberação de Medicamentos , Neoplasias da Mama/tratamento farmacológico , Neoplasias da Mama/genética
15.
Sci Rep ; 14(1): 2215, 2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38278836

ABSTRACT

Detecting potholes and traffic signs is crucial for driver assistance systems and autonomous vehicles, emphasizing real-time and accurate recognition. In India, approximately 2500 fatalities occur annually due to accidents linked to hidden potholes and overlooked traffic signs. Existing methods often overlook water-filled and illuminated potholes, as well as those shaded by trees. Additionally, they neglect perspective-distorted and illuminated (nighttime) traffic signs. To address these challenges, this study introduces a novel approach employing a cascade classifier along with a vision transformer. The cascade classifier identifies patterns associated with these elements, and the vision transformer conducts detailed analysis and classification. The proposed approach is trained and evaluated on the ICTS, GTSRDB, KAGGLE, and CCSAD datasets. Model performance is assessed using precision, recall, and mean Average Precision (mAP) metrics. Compared to state-of-the-art techniques like YOLOv3, YOLOv4, Faster R-CNN, and SSD, the method achieves impressive recognition with a mAP of 97.14% for traffic sign detection and 98.27% for pothole detection.

16.
Sci Rep ; 14(1): 1333, 2024 01 16.
Article in English | MEDLINE | ID: mdl-38228772

ABSTRACT

In previous studies, replicated and multiple types of speech data have been used for Parkinson's disease (PD) detection. However, two main problems in these studies are low PD detection accuracy and inappropriate validation methodologies leading to unreliable results. This study discusses the effects of the inappropriate validation methodologies used in previous studies and highlights appropriate alternative validation methods that ensure generalization. To enhance PD detection accuracy, we propose a two-stage diagnostic system that refines the extracted feature set through a regularized linear support vector machine and classifies the refined subset of features through a deep neural network. To rigorously evaluate the effectiveness of the proposed diagnostic system, experiments are performed on two different voice recording-based benchmark datasets. For both datasets, the proposed diagnostic system achieves 100% accuracy under leave-one-subject-out (LOSO) cross-validation (CV) and 97.5% accuracy under k-fold CV. The results show that the proposed system outperforms the existing methods in PD detection accuracy, suggesting that it can meaningfully improve non-invasive diagnostic decision support in PD.
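The validation hygiene the paper argues for, leave-one-subject-out rather than record-level splits, comes down to grouping every recording of a subject into the same fold, so no speaker appears on both sides of a split. A minimal sketch with invented subject IDs:

```python
def loso_splits(subject_ids):
    """Leave-one-subject-out splits over a list of per-recording subject
    IDs: all recordings of the held-out subject form the test fold."""
    folds = []
    for held_out in sorted(set(subject_ids)):
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        folds.append((train, test))
    return folds

# three subjects, several voice recordings each (hypothetical)
subjects = ["s1", "s1", "s2", "s2", "s2", "s3"]
folds = loso_splits(subjects)
```

With plain k-fold CV on replicated recordings, the same voice can leak into both train and test sets, inflating accuracy; LOSO rules that out by construction.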


Assuntos
Doença de Parkinson , Voz , Humanos , Algoritmos , Doença de Parkinson/diagnóstico , Máquina de Vetores de Suporte , Redes Neurais de Computação
17.
Sci Rep ; 13(1): 18335, 2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37884584

ABSTRACT

OAuth 2.0 is a Single Sign-On approach that helps authorize users to log into multiple applications without re-entering their credentials. Here, the OAuth service provider controls the central repository where data is stored, which may lead to third-party fraud and identity theft. To circumvent this problem, we need a distributed framework to authenticate and authorize the user without third-party involvement. This paper proposes a distributed authentication and authorization framework using a secret-sharing mechanism that comprises a blockchain-based decentralized identifier and private distributed storage via an interplanetary file system. We implemented our proposed framework on Hyperledger Fabric (a permissioned blockchain) and the Ethereum TestNet (a permissionless blockchain). Our performance analysis indicates that secret sharing-based authentication takes negligible time for share generation and for combining shares during verification. Moreover, security analysis shows that our model is robust, end-to-end secure, and compliant with the Universal Composability Framework.
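The secret-sharing primitive underlying such a framework is typically Shamir's scheme: a secret becomes the constant term of a random degree k-1 polynomial over a prime field, and any k of the n evaluation points reconstruct it by Lagrange interpolation. A self-contained sketch (the tiny prime and parameters are for illustration only; real deployments use cryptographically large fields):

```python
import random

P = 2087  # small prime field for illustration only

def make_shares(secret, n, k, seed=7):
    """Split `secret` into n shares; any k of them reconstruct it."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse (Fermat's little theorem)
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = make_shares(secret=1234, n=5, k=3)
```

Fewer than k shares reveal nothing about the secret, which is what lets verification proceed without any single third party holding the credential.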

18.
Sci Rep ; 13(1): 11052, 2023 Jul 08.
Article in English | MEDLINE | ID: mdl-37422487

ABSTRACT

The considerable improvement of technology produced for various applications has resulted in growing data sizes, for example in healthcare, a domain renowned for its large numbers of variables and data samples. Artificial neural networks (ANNs) have demonstrated adaptability and effectiveness in classification, regression, and function approximation tasks, and are used extensively in function approximation, prediction, and classification. Irrespective of the task, an ANN learns from the data by adjusting its edge weights to minimize the error between the actual and predicted values. Backpropagation is the most frequently used technique for learning the weights of an ANN. However, this approach is prone to sluggish convergence, which is especially problematic in the case of Big Data. In this paper, we propose a distributed genetic algorithm-based ANN learning algorithm to address the challenges of ANN learning for Big Data. The genetic algorithm is a well-utilized bio-inspired combinatorial optimization method. It can also be parallelized at multiple stages, which suits the distributed learning process very effectively. The proposed model is tested with various datasets to evaluate its realizability and efficiency. The experimental results show that, beyond a certain volume of data, the proposed learning method outperforms the traditional methods in convergence time and accuracy, improving computational time by almost 80%.
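Learning network weights with a GA instead of backpropagation means treating the weight vector as a chromosome and the training error as the (inverse) fitness. The sketch below does this for a toy one-neuron linear model; the GA operators (tournament selection, blend crossover, Gaussian mutation, elitism) are generic choices, not the paper's specific design, and in the distributed setting the fitness evaluations would be farmed out to workers.

```python
import random

def ga_learn_weights(data, pop_size=30, gens=120, seed=3):
    """Evolve the two parameters [slope, intercept] of a linear model
    to minimize mean squared error over `data`."""
    rng = random.Random(seed)

    def mse(w):
        return sum((w[0] * x + w[1] - y) ** 2 for x, y in data) / len(data)

    pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [min(pop, key=mse)]                    # elitism: keep the best
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=mse)    # tournament selection
            p2 = min(rng.sample(pop, 3), key=mse)
            t = rng.random()
            child = [t * a + (1 - t) * b for a, b in zip(p1, p2)]  # blend
            if rng.random() < 0.3:                   # Gaussian mutation
                child[rng.randrange(2)] += rng.gauss(0, 0.3)
            nxt.append(child)
        pop = nxt
    best = min(pop, key=mse)
    return best, mse(best)

# noiseless samples of y = 2x + 1 (hypothetical training data)
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, err = ga_learn_weights(data)
```

The inner fitness loop is embarrassingly parallel, which is exactly the stage a distributed GA splits across nodes.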


Assuntos
Big Data , Redes Neurais de Computação , Algoritmos
19.
Environ Sci Pollut Res Int ; 30(35): 84110-84125, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37355508

ABSTRACT

Effective air quality monitoring network (AQMN) design plays a prominent role in environmental engineering. An optimal AQMN design should consider the stations' mutual information and system uncertainties to be effective. This study develops a novel optimization model using the non-dominated sorting genetic algorithm II (NSGA-II). The Bayesian maximum entropy (BME) method generates potential stations as the input of a framework based on the transinformation entropy (TE) method to maximize coverage and minimize the probability of selecting stations. Also, the fuzzy degree of membership and the nonlinear interval number programming (NINP) approaches are used to assess the uncertainty of the joint information. To obtain the best Pareto optimal solution for the AQMN characterization, a robust ranking technique, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE) approach, is utilized to select the most appropriate AQMN properties. This methodology is applied to Los Angeles, Long Beach, and Anaheim in California, USA. The results suggest using 4, 4, and 5 stations to monitor CO, NO2, and ozone, respectively; implementing this recommendation reduces coverage by factors of 3.75, 3.75, and 3 for CO, NO2, and ozone, respectively. On the positive side, it substantially decreases the TE for CO, NO2, and ozone concentrations by factors of 8.25, 5.86, and 4.75, respectively.


Subjects
Air Pollution , Ozone , Models, Theoretical , Bayes Theorem , Environmental Monitoring/methods , Entropy , Nitrogen Dioxide/analysis , Air Pollution/analysis , Ozone/analysis
20.
Sci Rep ; 13(1): 8517, 2023 May 25.
Article in English | MEDLINE | ID: mdl-37231039

ABSTRACT

Large-scale solar energy production is still greatly hindered by the unpredictability of solar power. The intermittent, chaotic, and random nature of the solar energy supply must be handled by comprehensive solar forecasting technologies. Beyond long-term forecasting, it is even more essential to produce short-term forecasts minutes or even seconds ahead, because factors such as sudden cloud movement, instantaneous deviations in ambient temperature, increased relative humidity, uncertain wind velocities, haze, and rain cause undesired up- and down-ramping rates that strongly affect solar power generation. This paper presents an extended solar forecasting algorithm based on an artificial neural network. A three-layer feed-forward system is suggested, consisting of an input layer, a hidden layer, and an output layer, trained with backpropagation. To obtain a more precise forecast, the output forecast from 5 minutes earlier is fed back to the input layer to reduce the error. Weather remains the most vital input for this type of ANN modeling: forecasting errors can grow considerably on any given day due to variations in solar irradiation and temperature, affecting the solar power supply accordingly. A prior approximation of solar radiation carries some uncertainty depending on climatic conditions such as temperature, shading conditions, soiling effects, and relative humidity; all these environmental factors introduce uncertainty into the prediction of the output parameter. In such a case, approximating the PV output directly can be more suitable than forecasting solar radiation. This paper applies Gradient Descent (GD) and Levenberg-Marquardt artificial neural network (LM-ANN) techniques to data recorded at millisecond intervals from a 100 W solar panel.
The essential purpose of this paper is to establish the most useful time horizon for the output forecast of small solar power utilities. It has been observed that horizons from 5 ms to 12 h give the best short- to medium-term predictions for April. A case study was conducted in the Peer Panjal region. Data collected over four months with various parameters were applied randomly as inputs to the GD and LM types of artificial neural network and compared with actual solar energy data. The proposed ANN-based algorithm was used for reliable short-term forecasting. The model output is reported as root mean square error and mean absolute percentage error. The results exhibit improved agreement between the forecasted and real models. Forecasting solar energy and load variations assists in fulfilling cost-effectiveness objectives.
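The gradient-descent branch described above can be sketched minimally: a single sigmoid unit trained by batch gradient descent to map two weather-like inputs (normalized irradiance and temperature) to a normalized PV output. The synthetic samples, learning rate, and one-neuron model are illustrative assumptions; the paper's full hidden-layer network and its Levenberg-Marquardt variant are not shown.

```python
import math
import random

# Minimal sketch of ANN training by batch gradient descent for PV-output
# prediction. Inputs and targets below are synthetic, roughly monotone in
# irradiance, standing in for real weather and panel measurements.

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# (irradiance, temperature) -> normalized PV output, hypothetical samples
data = [((0.1, 0.5), 0.15), ((0.4, 0.6), 0.40),
        ((0.7, 0.7), 0.70), ((0.9, 0.8), 0.88)]

w = [random.uniform(-0.5, 0.5) for _ in range(3)]  # two weights + bias
lr = 0.5

def predict(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])

def rmse():
    return math.sqrt(sum((predict(x) - y) ** 2 for x, y in data) / len(data))

start = rmse()
for _ in range(2000):
    grad = [0.0, 0.0, 0.0]
    for (x0, x1), y in data:
        p = predict((x0, x1))
        d = (p - y) * p * (1 - p)   # gradient of 0.5*(p-y)^2 w.r.t. pre-activation
        grad[0] += d * x0
        grad[1] += d * x1
        grad[2] += d
    for i in range(3):
        w[i] -= lr * grad[i] / len(data)

print(round(start, 3), "->", round(rmse(), 3))
```

Levenberg-Marquardt replaces the fixed-step update with a damped Gauss-Newton step, which typically converges in far fewer iterations on small networks like this one.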
