Results 1 - 20 of 57
1.
Int J Cardiol ; 412: 132339, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38968972

ABSTRACT

BACKGROUND: The study aimed to determine the most crucial parameters associated with CVD and to employ a novel data ensemble refinement procedure to uncover the optimal pattern of these parameters that can yield high prediction accuracy. METHODS AND RESULTS: Data were collected from 369 patients in total: 281 patients with CVD or at risk of developing it, and 88 otherwise healthy individuals. Within the group of 281 CVD or at-risk patients, 53 were diagnosed with coronary artery disease (CAD), 16 with end-stage renal disease, 47 newly diagnosed with type 2 diabetes mellitus, and 92 with chronic inflammatory disorders (21 rheumatoid arthritis, 41 psoriasis, 30 angiitis). The data were analyzed using an artificial intelligence-based algorithm with the primary objective of identifying the optimal pattern of parameters that define CVD. The study highlights the effectiveness of a six-parameter combination in discerning the likelihood of cardiovascular disease using the DERGA and Extra Trees algorithms. These parameters, ranked in order of importance, are platelet-derived microvesicles (PMVs), hypertension, age, smoking, dyslipidemia, and body mass index (BMI). Endothelial and erythrocyte MVs, along with diabetes, were the least important predictors. The highest prediction accuracy achieved is 98.64%. Notably, using PMVs alone yields 91.32% accuracy, while the model employing all ten parameters yields 97.83%. CONCLUSIONS: Our research showcases the efficacy of DERGA, an innovative data ensemble refinement greedy algorithm. DERGA accelerates the assessment of an individual's risk of developing CVD, allowing for early diagnosis, significantly reduces the number of required lab tests, and optimizes resource utilization. It also helps identify the parameters most critical for assessing CVD susceptibility, thereby enhancing our understanding of the underlying mechanisms.
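The abstract names DERGA but gives no pseudocode. As a rough, purely illustrative sketch of a greedy feature-refinement loop of this kind (the parameter names come from the abstract, but the toy scoring function and all code details are assumptions, not the authors' method):

```python
# Sketch of a greedy feature-refinement loop in the spirit of DERGA:
# starting from the best single feature, repeatedly add the feature that
# most improves a scoring function, keeping the best subset seen so far.
# The scoring function is a stand-in for cross-validated accuracy.

def greedy_refine(features, score):
    """Return the feature subset with the highest score found greedily."""
    selected = []
    remaining = list(features)
    best_subset, best_score = [], float("-inf")
    while remaining:
        # Try adding each remaining feature and keep the best candidate.
        cand = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(cand)
        remaining.remove(cand)
        current = score(selected)
        if current > best_score:
            best_subset, best_score = list(selected), current
    return best_subset, best_score

# Toy score: rewards one specific combination, mimicking how a
# six-parameter pattern can beat using all ten parameters.
OPTIMAL = {"PMV", "hypertension", "age", "smoking", "dyslipidemia", "BMI"}

def toy_score(subset):
    hits = len(OPTIMAL & set(subset))
    penalty = len(set(subset) - OPTIMAL)  # extra features dilute accuracy
    return hits - 0.5 * penalty

all_params = ["PMV", "hypertension", "age", "smoking", "dyslipidemia",
              "BMI", "diabetes", "EMV", "ErMV", "CRP"]
subset, s = greedy_refine(all_params, toy_score)
print(sorted(subset))
```

A real implementation would replace `toy_score` with the cross-validated accuracy of an Extra Trees classifier on the patient data.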

2.
Sci Rep ; 14(1): 16438, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39013941

ABSTRACT

In arid regions like Oman, enhancing the quality of water discharged from reservoirs poses considerable challenges. This predicament is notably pronounced at Wadi Dayqah Dam (WDD), where meeting the demand for ample, high-quality water downstream is a formidable task. Accurately estimating and mapping water quality indicators (WQIs) is therefore paramount for sustainable planning of inland waters in the study area. Since traditional procedures for collecting water quality data are time-consuming, labor-intensive, and costly, water resources management has shifted from gathering field measurements to utilizing remote sensing (RS) data. WDD has been threatened by various driving forces in recent years, such as contamination from different sources, sedimentation, nutrient runoff, salinity intrusion, temperature fluctuations, and microbial contamination. Therefore, this study aimed to retrieve and map WQIs, namely dissolved oxygen (DO) and chlorophyll-a (Chl-a), of the WDD reservoir from Sentinel-2 (S2) satellite data using a new weighted-averaging procedure, namely Bayesian Maximum Entropy-based Fusion (BMEF). To do so, the outputs of four machine learning (ML) algorithms, namely Multiple Linear Regression (MLR), Random Forest Regression (RFR), Support Vector Regression (SVR), and XGBoost, were combined using this approach while accounting for uncertainty. Water samples from 254 systematic plots were obtained for temperature (T), electrical conductivity (EC), chlorophyll-a (Chl-a), pH, oxidation-reduction potential (ORP), and dissolved oxygen (DO) in WDD. The findings indicated that, throughout both the training and testing phases, the BMEF model outperformed the individual machine learning models. Considering Chl-a as the WQI and R-squared as the evaluation index, BMEF outperformed MLR, SVR, RFR, and XGBoost by 6%, 9%, 2%, and 7%, respectively.
Furthermore, the results were significantly enhanced when the best combination of various spectral bands was considered to estimate specific WQIs instead of using all S2 bands as input variables of the ML algorithms.
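The exact BMEF weighting scheme is not spelled out in the abstract; the following is a minimal sketch of the general idea behind uncertainty-aware weighted averaging of several regressors' outputs, using inverse-variance (precision) weights. The model names match the abstract, but the numbers and the weighting rule are illustrative assumptions:

```python
# Inverse-variance weighted averaging: each model's prediction is weighted
# by its precision (1/variance), so less certain models contribute less.
def fuse(predictions, variances):
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, predictions)) / total

# Hypothetical Chl-a estimates (mg/m^3) from four regressors, each with
# an estimated error variance (all numbers invented for illustration).
preds = {"MLR": 4.2, "SVR": 3.8, "RFR": 4.0, "XGBoost": 4.1}
varis = {"MLR": 0.50, "SVR": 0.40, "RFR": 0.10, "XGBoost": 0.25}

fused = fuse(list(preds.values()), list(varis.values()))
print(round(fused, 3))
```

The fused estimate lands closest to the most confident model (RFR here), which is the qualitative behavior one expects from Bayesian-style fusion.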

3.
Article in English | MEDLINE | ID: mdl-38923476

ABSTRACT

In recent times, there has been a notable rise in the utilization of Internet of Medical Things (IoMT) frameworks particularly those based on edge computing, to enhance remote monitoring in healthcare applications. Most existing models in this field have been developed temperature screening methods using RCNN, face temperature encoder (FTE), and a combination of data from wearable sensors for predicting respiratory rate (RR) and monitoring blood pressure. These methods aim to facilitate remote screening and monitoring of Severe Acute Respiratory Syndrome Coronavirus (SARS-CoV) and COVID-19. However, these models require inadequate computing resources and are not suitable for lightweight environments. We propose a multimodal screening framework that leverages deep learning-inspired data fusion models to enhance screening results. A Variation Encoder (VEN) design proposes to measure skin temperature using Regions of Interest (RoI) identified by YoLo. Subsequently, the multi-data fusion model integrates electronic records features with data from wearable human sensors. To optimize computational efficiency, a data reduction mechanism is added to eliminate unnecessary features. Furthermore, we employ a contingent probability method to estimate distinct feature weights for each cluster, deepening our understanding of variations in thermal and sensory data to assess the prediction of abnormal COVID-19 instances. Simulation results using our lab dataset demonstrate a precision of 95.2%, surpassing state-of-the-art models due to the thoughtful design of the multimodal data-based feature fusion model, weight prediction factor, and feature selection model.

4.
Sci Rep ; 14(1): 13723, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38877014

ABSTRACT

This paper proposes a novel multi-hybrid algorithm named DHPN, combining the best-known properties of the dwarf mongoose algorithm (DMA), honey badger algorithm (HBA), prairie dog optimizer (PDO), cuckoo search (CS), grey wolf optimizer (GWO), and naked mole rat algorithm (NMRA). It follows an iterative division for extensive exploration and incorporates major parametric enhancements for improved exploitation. To counter local optima problems, a stagnation phase using CS and GWO is added. Six new inertia weight operators have been analyzed to adapt algorithmic parameters, and the best combination of these parameters has been found. An analysis of the suitability of DHPN towards population variations and higher dimensions has been performed. For performance evaluation, the CEC 2005 and CEC 2019 benchmark data sets have been used. A comparison has been performed with differential evolution with active archive (JADE), self-adaptive DE (SaDE), success-history-based DE (SHADE), LSHADE-SPACMA, extended GWO (GWO-E), jDE100, and others. The DHPN algorithm is also used to solve the image fusion problem for four fusion quality metrics, namely the edge-based similarity index (Q^(AB/F)), sum of correlation difference (SCD), structural similarity index measure (SSIM), and artifact measure (N^(AB/F)). The average values Q^(AB/F) = 0.765508, SCD = 1.63185, SSIM = 0.726317, and N^(AB/F) = 0.006617 show the best combination of results obtained by DHPN with respect to existing algorithms such as DCH, CBF, GTF, JSR, and others. Experimental and statistical Wilcoxon's and Friedman's tests show that the proposed DHPN algorithm performs significantly better than the other algorithms under test.
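The six inertia weight operators are not listed in the abstract; a classic member of this family is the linearly decreasing inertia weight, sketched below (the 0.9/0.4 bounds are conventional textbook defaults, not values from the paper):

```python
# Linearly decreasing inertia weight: a standard operator that shifts a
# swarm-based optimizer from exploration (large w) to exploitation
# (small w) as iterations progress.
def linear_inertia(t, t_max, w_max=0.9, w_min=0.4):
    return w_max - (w_max - w_min) * t / t_max

# First, middle, and last iteration of a 100-iteration run.
print(linear_inertia(0, 100), linear_inertia(50, 100), linear_inertia(100, 100))
```

Other operators in the family (exponential, chaotic, random) vary only in how the weight decays between the same two bounds.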

5.
J Environ Manage ; 358: 120756, 2024 May.
Article in English | MEDLINE | ID: mdl-38599080

ABSTRACT

Water quality indicators (WQIs), such as chlorophyll-a (Chl-a) and dissolved oxygen (DO), are crucial for understanding and assessing the health of aquatic ecosystems. Precise prediction of these indicators is fundamental for the efficient administration of rivers, lakes, and reservoirs. This research utilized two deep learning (DL) algorithms, namely convolutional neural networks (CNNs) and gated recurrent units (GRUs), alongside their combination, CNN-GRU, to precisely gauge the concentration of these indicators within a reservoir. Moreover, to optimize the outcomes of the developed hybrid model, we considered the impact of a decomposition technique, specifically the wavelet transform (WT). In addition, we created two distinct machine learning (ML) algorithms, namely random forest (RF) and support vector regression (SVR), to demonstrate the superior performance of deep learning algorithms over individual ML ones. To achieve this, we initially gathered WQIs from diverse locations and varying depths within the reservoir using an AAQ-RINKO device. It is important to highlight that, despite the use of diverse data-driven models in water quality estimation, a significant gap persists in the existing literature regarding a comprehensive hybrid algorithm that integrates the wavelet transform, CNN, and GRU methodologies to estimate WQIs accurately within a spatiotemporal framework. Subsequently, the effectiveness of the developed models was assessed using various statistical metrics, encompassing the correlation coefficient (r), root mean square error (RMSE), mean absolute error (MAE), and Nash-Sutcliffe efficiency (NSE), throughout both the training and testing phases.
The findings demonstrated that the WT-CNN-GRU model outperformed the other algorithms by 13% (SVR), 13% (RF), 9% (CNN), and 8% (GRU) when R-squared and DO were considered as the evaluation index and WQI, respectively.
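The abstract does not say which mother wavelet the WT step used; a single-level Haar transform, the simplest discrete wavelet, illustrates what the decomposition does to a WQI series before it reaches the CNN-GRU (the DO readings below are invented):

```python
import math

# One level of the Haar discrete wavelet transform: the input series is
# split into a smooth approximation (scaled local sums) and a detail
# signal (scaled local differences); models are then trained on these
# sub-series instead of the raw signal.
def haar_step(x):
    assert len(x) % 2 == 0, "series length must be even"
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (a + b) for a, b in zip(x[0::2], x[1::2])]
    detail = [s * (a - b) for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

# Toy dissolved-oxygen readings (mg/L).
do_series = [8.0, 8.2, 7.9, 7.7, 8.1, 8.3, 7.8, 7.6]
approx, detail = haar_step(do_series)
print([round(v, 3) for v in approx])
```

In practice the approximation is decomposed recursively for multi-level analysis, and the transform is inverted after prediction.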


Subject(s)
Algorithms , Neural Networks, Computer , Water Quality , Machine Learning , Environmental Monitoring/methods , Lakes , Chlorophyll A/analysis , Wavelet Analysis
6.
Sci Rep ; 14(1): 7833, 2024 04 03.
Article in English | MEDLINE | ID: mdl-38570560

ABSTRACT

Heart disease is a leading global cause of mortality and a major public health problem for a large number of individuals. A major issue raised by regular clinical data analysis is the recognition of cardiovascular illnesses, including heart attacks and coronary artery disease, even though early identification of heart disease can save many lives. Accurate forecasting and decision assistance can be achieved effectively with machine learning (ML). Big Data, or the vast amounts of data generated by the health sector, may assist models used to make diagnostic choices by revealing hidden information or intricate patterns. This paper uses a hybrid deep learning algorithm to describe a big data analysis and visualization approach for heart disease detection. The proposed approach is intended for use with big data systems, such as Apache Hadoop. An extensive medical data collection is first subjected to an improved k-means clustering (IKC) method to remove outliers, and the remaining class distribution is then balanced using the synthetic minority over-sampling technique (SMOTE). The next step is to forecast the disease using a bio-inspired hybrid mutation-based swarm intelligence (HMSI) method with an attention-based gated recurrent unit network (AttGRU) model, after recursive feature elimination (RFE) has determined which features are most important. In our implementation, we compare four machine learning algorithms: SAE + ANN (sparse autoencoder + artificial neural network), LR (logistic regression), KNN (K-nearest neighbour), and naïve Bayes. The experimental results indicate that the hybrid model attains a 95.42% accuracy rate for heart disease prediction, effectively outperforming the compared methods and addressing the research gap identified in the related work.
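SMOTE's core interpolation step can be sketched in a few lines (this is the textbook idea, not the paper's exact implementation; the sample coordinates are made up):

```python
import random

# Sketch of SMOTE's core idea: synthesize new minority-class samples by
# interpolating between a minority sample and one of its nearest
# minority-class neighbours.
def smote_point(x, neighbour, gap):
    """Create one synthetic sample on the segment x -> neighbour."""
    return [a + gap * (b - a) for a, b in zip(x, neighbour)]

def nearest(x, pool):
    """Euclidean nearest neighbour of x among pool (excluding x itself)."""
    others = [p for p in pool if p != x]
    return min(others, key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)))

# Toy 2-D minority-class samples.
minority = [[1.0, 2.0], [1.2, 2.1], [5.0, 5.0]]
rng = random.Random(0)
x = minority[0]
nb = nearest(x, minority)
synthetic = smote_point(x, nb, rng.random())  # gap drawn from [0, 1)
print(synthetic)
```

Repeating this for many minority samples yields a balanced class distribution without duplicating existing records.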


Subject(s)
Coronary Artery Disease , Deep Learning , Heart Diseases , Humans , Bayes Theorem , Heart Diseases/diagnosis , Heart Diseases/genetics , Coronary Artery Disease/diagnosis , Coronary Artery Disease/genetics , Algorithms , Intelligence
7.
Eur J Intern Med ; 125: 67-73, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38458880

ABSTRACT

It is important to determine the risk for admission to the intensive care unit (ICU) in patients with COVID-19 presenting at the emergency department. Using artificial neural networks, we propose a new Data Ensemble Refinement Greedy Algorithm (DERGA) based on 15 easily accessible hematological indices. A database of 1596 patients with COVID-19 was used; it was divided into 1257 training datasets (80% of the database) for training the algorithms and 339 testing datasets (20% of the database) to check the reliability of the algorithms. The optimal combination of hematological indicators that gives the best prediction consists of only four indicators: neutrophil-to-lymphocyte ratio (NLR), lactate dehydrogenase, ferritin, and albumin. The best prediction corresponds to a particularly high accuracy of 97.12%. In conclusion, our novel approach provides a robust model based only on basic hematological parameters for predicting the risk for ICU admission and optimizing COVID-19 patient management in clinical practice.
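For reference, a plain reproducible 80/20 split can be sketched as below; note that a straight 80% cut of 1596 records gives 1276/320, while the study reports 1257/339, so the paper's exact partitioning rule evidently differs (the seed and function here are assumptions):

```python
import random

# Reproducible 80/20 train/test split: shuffle patient indices with a
# fixed seed, then cut at the 80% mark.
def split_80_20(n, seed=42):
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = int(0.8 * n)
    return idx[:cut], idx[cut:]

train, test = split_80_20(1596)
print(len(train), len(test))
```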


Subject(s)
Algorithms , COVID-19 , Intensive Care Units , Machine Learning , Severity of Illness Index , Humans , COVID-19/diagnosis , COVID-19/blood , Male , Female , Middle Aged , Prognosis , Aged , SARS-CoV-2 , Ferritins/blood , Neural Networks, Computer , Neutrophils , Adult , L-Lactate Dehydrogenase/blood
8.
Sci Rep ; 14(1): 6942, 2024 Mar 23.
Article in English | MEDLINE | ID: mdl-38521848

ABSTRACT

Watermarking is one of the crucial techniques in the domain of information security, preventing the exploitation of 3D mesh models in the Internet era. In 3D mesh watermark embedding, moderately perturbing the vertices is commonly required to retain them in a certain pre-arranged relationship with their neighboring vertices. This paper proposes a novel watermarking authentication method, called Nearest Centroid Discrete Gaussian and Levenberg-Marquardt (NCDG-LV), for distortion detection and recovery using salient point detection. In this method, the salient points are selected using the Nearest Centroid and Discrete Gaussian Geometric (NC-DGG) salient point detection model. Map segmentation is applied to the 3D mesh model to segment it into distinct sub-regions according to the selected salient points. Finally, the watermark is embedded using the multi-function barycenter in each spatially selected and segmented region. In the extraction process, the embedded 3D mesh watermark is extracted from each re-segmented region by means of Levenberg-Marquardt deep neural network watermark extraction. In the authentication stage, watermark bits are extracted by analyzing the geometry via Levenberg-Marquardt back-propagation. Based on a performance evaluation, the proposed method exhibits high imperceptibility and tolerance against attacks such as smoothing, cropping, translation, and rotation. The experimental results further demonstrate that the proposed method is superior in terms of salient point detection time, distortion rate, true positive rate, peak signal-to-noise ratio, bit error rate, and root mean square error compared to state-of-the-art methods.

9.
J Cell Mol Med ; 28(4): e18105, 2024 02.
Article in English | MEDLINE | ID: mdl-38339761

ABSTRACT

Complement inhibition has shown promise in various disorders, including COVID-19. A prediction tool including complement genetic variants is vital. This study aims to identify crucial complement-related variants and determine an optimal pattern for accurate disease outcome prediction. Genetic data from 204 COVID-19 patients hospitalized between April 2020 and April 2021 at three referral centres were analysed using an artificial intelligence-based algorithm to predict disease outcome (ICU vs. non-ICU admission). A recently introduced alpha-index identified the 30 most predictive genetic variants. DERGA algorithm, which employs multiple classification algorithms, determined the optimal pattern of these key variants, resulting in 97% accuracy for predicting disease outcome. Individual variations ranged from 40 to 161 variants per patient, with 977 total variants detected. This study demonstrates the utility of alpha-index in ranking a substantial number of genetic variants. This approach enables the implementation of well-established classification algorithms that effectively determine the relevance of genetic variants in predicting outcomes with high accuracy.


Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , COVID-19/genetics , Artificial Intelligence , Algorithms
10.
Sci Rep ; 14(1): 4877, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38418500

ABSTRACT

Differential evolution (DE) is a robust optimizer designed for solving complex domain research problems in the computational intelligence community. In the present work, a multi-hybrid DE (MHDE) is proposed for improving the overall working capability of the algorithm without compromising solution quality. Adaptive parameters, enhanced mutation, enhanced crossover, population reduction, iterative division, and Gaussian random sampling are some of the major characteristics of the proposed MHDE algorithm. Firstly, an iterative division for improved exploration and exploitation is used; then an adaptive proportional population size reduction mechanism is followed to reduce the computational complexity. Weibull distribution and Gaussian random sampling are also incorporated to mitigate premature convergence. The proposed framework is validated using the IEEE CEC benchmark suites (CEC 2005, CEC 2014 and CEC 2017). The algorithm is applied to four engineering design problems and to the weight minimization of three frame design problems. Experimental results are analysed and compared with recent hybrid algorithms such as Laplacian biogeography-based optimization, adaptive differential evolution with archive (JADE), success-history-based DE, self-adaptive DE, LSHADE, MVMO, the fractional-order calculus-based flower pollination algorithm, the sine cosine crow search algorithm, and others. Statistically, the Friedman and Wilcoxon rank-sum tests prove that the proposed algorithm fares better than the others.
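The abstract's "adaptive proportional population size reduction" is not formalized there; the linear population size reduction popularized by LSHADE is a well-known instance of the idea and can be sketched as follows (the initial and minimum sizes are typical defaults, not the paper's settings):

```python
# LSHADE-style linear population size reduction: the population shrinks
# linearly from n_init down to n_min as function evaluations (nfe) are
# consumed, cutting computational cost in later generations.
def population_size(nfe, max_nfe, n_init=100, n_min=4):
    return round(n_init + (n_min - n_init) * nfe / max_nfe)

# Population size at the start, midpoint, and end of a 10,000-evaluation
# budget.
print(population_size(0, 10000), population_size(5000, 10000),
      population_size(10000, 10000))
```

After each generation, the worst individuals are removed until the population matches the scheduled size.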

11.
Sci Rep ; 14(1): 4816, 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38413614

ABSTRACT

Many real-world optimization problems, particularly engineering ones, involve constraints that make finding a feasible solution challenging. Numerous researchers have investigated this challenge for constrained single- and multi-objective optimization problems. In particular, this work extends the boundary update (BU) method proposed by Gandomi and Deb (Comput. Methods Appl. Mech. Eng. 363:112917, 2020) for the constrained optimization problem. BU is an implicit constraint handling technique that aims to cut the infeasible search space over iterations to find the feasible region faster. In doing so, the search space is twisted, which can make the optimization problem more challenging. In response, two switching mechanisms are implemented that transform the landscape along with the variables back to the original problem once the feasible region is found. To achieve this objective, two thresholds, representing distinct switching methods, are taken into account. In the first approach, the optimization process transitions to a state without the BU approach when constraint violations reach zero. In the second method, the optimization process shifts to a BU-free optimization phase when there is no further change observed in the objective space. For validation, benchmarks and engineering problems are solved with well-known evolutionary single- and multi-objective optimization algorithms. Herein, the proposed method is benchmarked both with and without the BU approach over the whole search process. The results show that the proposed method can significantly boost both convergence speed and solution quality for constrained optimization problems.
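The two switching thresholds described above can be sketched as a simple predicate (the variable names and tolerance are assumptions; the real method operates inside an evolutionary loop):

```python
# Decide when to switch the optimizer back to the original (non-BU)
# landscape, per the two mechanisms described in the abstract.
def should_switch(violations, objective_history, tol=1e-12):
    # Mechanism 1: total constraint violation has reached zero.
    if sum(violations) <= tol:
        return True
    # Mechanism 2: no further change is observed in the objective space.
    if len(objective_history) >= 2 and \
            abs(objective_history[-1] - objective_history[-2]) <= tol:
        return True
    return False

print(should_switch([0.0, 0.0], [5.0, 4.2]),  # feasible -> switch
      should_switch([0.3], [5.0, 4.2]),       # infeasible, improving -> stay
      should_switch([0.3], [4.2, 4.2]))       # infeasible, stalled -> switch
```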

12.
BMC Bioinformatics ; 25(1): 33, 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38253993

ABSTRACT

Breast cancer remains a major public health challenge worldwide. The identification of accurate biomarkers is critical for the early detection and effective treatment of breast cancer. This study utilizes an integrative machine learning approach to analyze breast cancer gene expression data for superior biomarker and drug target discovery. Gene expression datasets, obtained from the GEO database, were merged post-preprocessing. From the merged dataset, differential expression analysis between breast cancer and normal samples revealed 164 differentially expressed genes. Meanwhile, a separate gene expression dataset revealed 350 differentially expressed genes. Additionally, the BGWO_SA_Ens algorithm, integrating binary grey wolf optimization and simulated annealing with an ensemble classifier, was employed on gene expression datasets to identify predictive genes including TOP2A, AKR1C3, EZH2, MMP1, EDNRB, S100B, and SPP1. From over 10,000 genes, BGWO_SA_Ens identified 1404 in the merged dataset (F1 score: 0.981, PR-AUC: 0.998, ROC-AUC: 0.995) and 1710 in the GSE45827 dataset (F1 score: 0.965, PR-AUC: 0.986, ROC-AUC: 0.972). The intersection of DEGs and BGWO_SA_Ens selected genes revealed 35 superior genes that were consistently significant across methods. Enrichment analyses uncovered the involvement of these superior genes in key pathways such as AMPK, Adipocytokine, and PPAR signaling. Protein-protein interaction network analysis highlighted subnetworks and central nodes. Finally, a drug-gene interaction investigation revealed connections between superior genes and anticancer drugs. Collectively, the machine learning workflow identified a robust gene signature for breast cancer, illuminated their biological roles, interactions and therapeutic associations, and underscored the potential of computational approaches in biomarker discovery and precision oncology.


Subject(s)
Biomarkers, Tumor , Breast Neoplasms , Humans , Female , Biomarkers, Tumor/genetics , Precision Medicine , Algorithms , Drug Delivery Systems , Breast Neoplasms/drug therapy , Breast Neoplasms/genetics
13.
Sci Rep ; 14(1): 2215, 2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38278836

ABSTRACT

Detecting potholes and traffic signs is crucial for driver assistance systems and autonomous vehicles, emphasizing real-time and accurate recognition. In India, approximately 2500 fatalities occur annually due to accidents linked to hidden potholes and overlooked traffic signs. Existing methods often miss water-filled, brightly illuminated, or tree-shaded potholes, and likewise neglect perspective-distorted and illuminated (nighttime) traffic signs. To address these challenges, this study introduces a novel approach employing a cascade classifier along with a vision transformer. The cascade classifier identifies patterns associated with these elements, and the Vision Transformer conducts detailed analysis and classification. The proposed approach undergoes training and evaluation on the ICTS, GTSRDB, KAGGLE, and CCSAD datasets. Model performance is assessed using precision, recall, and mean Average Precision (mAP) metrics. Compared to state-of-the-art techniques like YOLOv3, YOLOv4, Faster RCNN, and SSD, the method achieves impressive recognition with a mAP of 97.14% for traffic sign detection and 98.27% for pothole detection.

14.
iScience ; 27(1): 108709, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38269095

ABSTRACT

The increasing demand for food production due to the growing population is raising the need for more food-productive environments for plants. The genetic behavior of plant traits differs across growing environments. However, it is tedious, if not impossible, to track individual plant component traits manually. Plant breeders need computer vision-based plant monitoring systems to analyze the productivity and environmental suitability of different plants, enabling feasible quantitative analysis, geometric analysis, and yield rate analysis. Plant breeders have used many data collection methods according to their needs; in the presented review, most of them are discussed along with their corresponding challenges and limitations. Furthermore, the traditional approaches to segmentation and classification in plant phenotyping are also discussed. The data limitation problems and the solutions currently adopted in the computer vision context are highlighted; these partially address the problem but are not complete solutions. The available datasets and current issues are outlined. The presented study covers plant phenotyping problems, suggested solutions, and current challenges from data collection to classification.

15.
Sci Rep ; 14(1): 1333, 2024 01 16.
Article in English | MEDLINE | ID: mdl-38228772

ABSTRACT

In previous studies, replicated and multiple types of speech data have been used for Parkinson's disease (PD) detection. However, two main problems in these studies are lower PD detection accuracy and inappropriate validation methodologies leading to unreliable results. This study discusses the effects of inappropriate validation methodologies used in previous studies and highlights the use of appropriate alternative validation methods that would ensure generalization. To enhance PD detection accuracy, we propose a two-stage diagnostic system that refines the extracted set of features through [Formula: see text] regularized linear support vector machine and classifies the refined subset of features through a deep neural network. To rigorously evaluate the effectiveness of the proposed diagnostic system, experiments are performed on two different voice recording-based benchmark datasets. For both datasets, the proposed diagnostic system achieves 100% accuracy under leave-one-subject-out (LOSO) cross-validation (CV) and 97.5% accuracy under k-fold CV. The results show that the proposed system outperforms the existing methods regarding PD detection accuracy. The results suggest that the proposed diagnostic system is essential to improving non-invasive diagnostic decision support in PD.
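The validation issue flagged above comes down to how folds are formed; a leave-one-subject-out split, in which every recording of the held-out subject leaves the training set together, can be sketched as follows (the subject labels are invented):

```python
# Leave-one-subject-out CV: each fold holds out *all* recordings from one
# subject, preventing the subject-overlap leakage that inflates k-fold
# accuracy on replicated speech data.
def loso_folds(subject_ids):
    for held_out in sorted(set(subject_ids)):
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield held_out, train, test

subjects = ["A", "A", "B", "B", "C"]  # three subjects, replicated recordings
folds = list(loso_folds(subjects))
print([(s, test) for s, _, test in folds])
```

A plain record-level k-fold split, by contrast, can place recordings of the same subject in both train and test sets, which is the leakage the study warns against.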


Subject(s)
Parkinson Disease , Voice , Humans , Algorithms , Parkinson Disease/diagnosis , Support Vector Machine , Neural Networks, Computer
16.
Sci Rep ; 14(1): 534, 2024 01 04.
Article in English | MEDLINE | ID: mdl-38177156

ABSTRACT

The most widely used method for detecting Coronavirus Disease 2019 (COVID-19) is real-time polymerase chain reaction. However, this method has several drawbacks, including high cost, lengthy turnaround time for results, and the potential for false-negative results due to limited sensitivity. To address these issues, additional technologies such as computed tomography (CT) or X-rays have been employed for diagnosing the disease. Chest X-rays are more commonly used than CT scans due to the widespread availability of X-ray machines, lower ionizing radiation, and lower cost of equipment. COVID-19 presents certain radiological biomarkers that can be observed through chest X-rays, making it necessary for radiologists to manually search for these biomarkers. However, this process is time-consuming and prone to errors. Therefore, there is a critical need to develop an automated system for evaluating chest X-rays. Deep learning techniques can be employed to expedite this process. In this study, a deep learning-based method called Custom Convolutional Neural Network (Custom-CNN) is proposed for identifying COVID-19 infection in chest X-rays. The Custom-CNN model consists of eight weighted layers and utilizes strategies like dropout and batch normalization to enhance performance and reduce overfitting. The proposed approach achieved a classification accuracy of 98.19% and aims to accurately classify COVID-19, normal, and pneumonia samples.


Subject(s)
COVID-19 , Humans , X-Rays , Radiography , COVID-19/diagnostic imaging , Neural Networks, Computer , Biomarkers
17.
Sci Rep ; 14(1): 676, 2024 01 05.
Article in English | MEDLINE | ID: mdl-38182607

ABSTRACT

Melanoma is a severe skin cancer that involves abnormal cell development. This study aims to provide a new feature fusion framework for melanoma classification that includes a novel 'F' Flag feature for early detection. This novel 'F' indicator efficiently distinguishes benign skin lesions from malignant ones known as melanoma. The article proposes an architecture built on a Double-Decker Convolutional Neural Network, called DDCNN feature fusion. The network's deck one, a Convolutional Neural Network (CNN), finds difficult-to-classify hairy images using a confidence factor termed the intra-class variance score. These hairy image samples are combined to form a Baseline Separated Channel (BSC). By eliminating hair and using data augmentation techniques, the BSC is made ready for analysis. The network's second deck trains on the pre-processed BSC and generates bottleneck features. The bottleneck features are merged with features generated from the ABCDE clinical bio-indicators to promote classification accuracy. The resulting hybrid fused features, together with the novel 'F' Flag feature, are fed to different types of classifiers. The proposed system was trained using the ISIC 2019 and ISIC 2020 datasets to assess its performance. The empirical findings show that the DDCNN feature fusion strategy for detecting malignant melanoma achieved a specificity of 98.4%, accuracy of 93.75%, precision of 98.56%, and Area Under Curve (AUC) value of 0.98. This study proposes a novel approach that can accurately identify and diagnose fatal skin cancer and outperform other state-of-the-art techniques, which is attributed to the DDCNN 'F' feature fusion framework. This research also ascertained improvements in several classifiers when utilising the 'F' indicator, with the highest specificity gain being +7.34%.


Subject(s)
Melanoma , Skin Neoplasms , Humans , Melanoma/diagnostic imaging , Skin Neoplasms/diagnostic imaging , Skin , Area Under Curve , Neural Networks, Computer
18.
Sci Rep ; 13(1): 18335, 2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37884584

ABSTRACT

OAuth2.0 is a Single Sign-On approach that helps to authorize users to log into multiple applications without re-entering the credentials. Here, the OAuth service provider controls the central repository where data is stored, which may lead to third-party fraud and identity theft. To circumvent this problem, we need a distributed framework to authenticate and authorize the user without third-party involvement. This paper proposes a distributed authentication and authorization framework using a secret-sharing mechanism that comprises a blockchain-based decentralized identifier and a private distributed storage via an interplanetary file system. We implemented our proposed framework in Hyperledger Fabric (permissioned blockchain) and Ethereum TestNet (permissionless blockchain). Our performance analysis indicates that secret sharing-based authentication takes negligible time for generation and a combination of shares for verification. Moreover, security analysis shows that our model is robust, end-to-end secure, and compliant with the Universal Composability Framework.
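The abstract does not give the concrete secret-sharing construction; Shamir's threshold scheme is the standard primitive for this kind of distributed authentication and is sketched below over a prime field (the threshold, share count, and prime are illustrative, not the paper's parameters):

```python
import random

# Minimal Shamir secret sharing: any k of n shares reconstruct the
# secret; fewer than k reveal nothing. This illustrates the primitive,
# not the paper's actual blockchain/IPFS protocol.
P = 2**127 - 1  # a Mersenne prime used as the field modulus

def make_shares(secret, k, n, rng=random.Random(1)):
    # A random degree-(k-1) polynomial with the secret as its constant term.
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def combine(shares):
    # Lagrange interpolation of the polynomial at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
print(combine(shares[:3]) == 123456789)
```

Any three of the five shares recover the secret, while two alone are information-theoretically useless, which is what removes the single central repository.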

19.
Sci Rep ; 13(1): 11052, 2023 Jul 08.
Article in English | MEDLINE | ID: mdl-37422487

ABSTRACT

The considerable improvement of technology produced for various applications has resulted in a growth in data sizes, such as healthcare data, which is renowned for having a large number of variables and data samples. Artificial neural networks (ANNs) have demonstrated adaptability and effectiveness in classification, regression, and function approximation tasks, and are used extensively for these purposes. Irrespective of the task, an ANN learns from the data by adjusting the edge weights to minimize the error between the actual and predicted values. Backpropagation is the most frequently used technique for learning the weights of an ANN. However, this approach is prone to sluggish convergence, which is especially problematic in the case of Big Data. In this paper, we propose a distributed genetic algorithm-based ANN learning algorithm to address the challenges associated with ANN learning for Big Data. The genetic algorithm is a well-utilized bio-inspired combinatorial optimization method that can be parallelized at multiple stages, which makes it highly effective for the distributed learning process. The proposed model is tested with various datasets to evaluate its realizability and efficiency. The results obtained from the experiments show that, after a specific volume of data, the proposed learning method outperformed the traditional methods in terms of convergence time and accuracy, achieving an almost 80% improvement in computational time.
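A genetic algorithm learning a weight by selection, crossover, and mutation, instead of gradient descent, can be sketched on a toy one-weight "network" (everything here, from the target function to the GA settings, is an illustrative assumption, not the paper's distributed design):

```python
import random

# Sketch of GA-based weight learning: a population of weight vectors
# evolves by selection, crossover, and mutation to minimize prediction
# error. The tiny target is the linear function y = 2*x, standing in for
# a real network and dataset.
rng = random.Random(0)
data = [(x, 2.0 * x) for x in range(-5, 6)]

def fitness(w):
    # Negative sum of squared errors: higher is better.
    return -sum((w * x - y) ** 2 for x, y in data)

def evolve(pop_size=20, generations=60):
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2 + rng.gauss(0, 0.1)  # crossover + mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

w = evolve()
print(round(w, 2))
```

In the distributed setting, fitness evaluation over data partitions is what parallelizes naturally across workers.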


Subject(s)
Big Data , Neural Networks, Computer , Algorithms
20.
Environ Sci Pollut Res Int ; 30(35): 84110-84125, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37355508

ABSTRACT

Effective air quality monitoring network (AQMN) design plays a prominent role in environmental engineering. An optimal AQMN design should consider stations' mutual information and system uncertainties for effectiveness. This study develops a novel optimization model using a non-dominated sorting genetic algorithm II (NSGA-II). The Bayesian maximum entropy (BME) method generates potential stations as the input of a framework based on the transinformation entropy (TE) method to maximize coverage and minimize the probability of selecting stations. The fuzzy degree of membership and the nonlinear interval number programming (NINP) approaches are also used to survey the uncertainty of the joint information. To obtain the best Pareto optimal solution of the AQMN characterization, a robust ranking technique, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE) approach, is utilized to select the most appropriate AQMN properties. This methodology is applied to Los Angeles, Long Beach, and Anaheim in California, USA. Results suggest using 4, 4, and 5 stations to monitor CO, NO2, and ozone, respectively; however, implementing this recommendation reduces coverage by factors of 3.75, 3.75, and 3 for CO, NO2, and ozone, respectively. On the positive side, it substantially decreases TE for CO, NO2, and ozone concentrations by factors of 8.25, 5.86, and 4.75, respectively.
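Transinformation entropy is the mutual information between stations' readings; from a discretized joint frequency table it can be computed directly (the 2x2 counts below are invented for illustration):

```python
import math

# Transinformation (mutual information) between two stations' discretized
# readings, computed from a joint frequency table: high values mean the
# stations carry redundant information, which the network design penalizes.
def mutual_information(joint):
    n = sum(sum(row) for row in joint)
    px = [sum(row) / n for row in joint]            # marginal of station X
    py = [sum(col) / n for col in zip(*joint)]      # marginal of station Y
    mi = 0.0
    for i, row in enumerate(joint):
        for j, c in enumerate(row):
            if c:
                pxy = c / n
                mi += pxy * math.log2(pxy / (px[i] * py[j]))
    return mi

# Toy 2x2 joint counts of low/high CO readings at two candidate stations.
print(round(mutual_information([[40, 10], [10, 40]]), 4))
```

Independent stations give zero transinformation, so a design that minimizes TE while maximizing coverage spreads stations where their readings are least redundant.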


Subject(s)
Air Pollution , Ozone , Models, Theoretical , Bayes Theorem , Environmental Monitoring/methods , Entropy , Nitrogen Dioxide/analysis , Air Pollution/analysis , Ozone/analysis