Results 1 - 20 of 34,259
1.
J Colloid Interface Sci ; 677(Pt A): 273-281, 2025 Jan.
Article in English | MEDLINE | ID: mdl-39094488

ABSTRACT

Wearable electronics based on conductive hydrogels (CHs) offer remarkable flexibility, conductivity, and versatility. However, the flexibility, adhesiveness, and conductivity of traditional CHs deteriorate when they freeze, limiting their utility in harsh environments. In this work, we introduce a PHEA-NaSS/G hydrogel that can be conveniently fabricated into a freeze-resistant conductive hydrogel by weakening the hydrogen bonds between water molecules. This is achieved through the synergistic interaction between the charged polar end group (-SO3-) and the glycerol-water binary solvent system. The hydrogel is simultaneously endowed with tunable mechanical properties and conductive pathways by varying the material composition. Owing to the uniformly interconnected network structure arising from strong intermolecular interactions and the enhancement effect of the charged polar end groups, the resulting hydrogel exhibits a tensile strength of 174 kPa, a tensile strain of 2105 %, and excellent sensing ability (gauge factor GF = 2.86; response time 121 ms), making the sensor well suited for repeatable and stable monitoring of human motion. Additionally, using a Fully Convolutional Network (FCN) algorithm, the sensor can recognize handwritten English letters with an accuracy of 96.4 %. This hydrogel strain sensor provides a simple route to multi-functional electronic devices, with significant potential in fields such as soft robotics, health monitoring, and human-computer interaction.
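The gauge factor (GF) quoted above is conventionally defined as the relative resistance change per unit strain. As an illustrative sketch (not code from the paper; the resistance readings are hypothetical), it can be computed as:

```python
def gauge_factor(r0, r, strain):
    """Gauge factor GF = (dR / R0) / strain for a resistive strain sensor.
    r0: unstrained resistance, r: resistance at the applied strain."""
    return ((r - r0) / r0) / strain

# Hypothetical readings: resistance rising from 100 ohm to 128.6 ohm at 10 % strain
gf = gauge_factor(100.0, 128.6, 0.10)  # 2.86, matching the GF reported above
```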

2.
Methods Mol Biol ; 2847: 63-93, 2025.
Article in English | MEDLINE | ID: mdl-39312137

ABSTRACT

Machine learning algorithms, and in particular deep learning approaches, have recently garnered attention in the field of molecular biology due to remarkable results. In this chapter, we describe machine learning approaches specifically developed for the design of RNAs, with a focus on the learna_tools Python package, a collection of automated deep reinforcement learning algorithms for secondary structure-based RNA design. We explain the basic concepts of reinforcement learning and its extension, automated reinforcement learning, and outline how these concepts can be successfully applied to the design of RNAs. The chapter is structured to guide through the usage of the different programs with explicit examples, highlighting particular applications of the individual tools.


Subject(s)
Algorithms, Machine Learning, Nucleic Acid Conformation, RNA, Software, RNA/chemistry, RNA/genetics, Computational Biology/methods, Deep Learning
3.
Methods Mol Biol ; 2847: 153-161, 2025.
Article in English | MEDLINE | ID: mdl-39312142

ABSTRACT

Understanding the connection between complex structural features of RNA and biological function is a fundamental challenge in evolutionary studies and in RNA design. However, building datasets of RNA 3D structures and making appropriate modeling choices remain time-consuming and lack standardization. In this chapter, we describe the use of rnaglib to train supervised and unsupervised machine learning-based function prediction models on datasets of RNA 3D structures.


Subject(s)
Computational Biology, Nucleic Acid Conformation, RNA, Software, RNA/chemistry, RNA/genetics, Computational Biology/methods, Machine Learning, Models, Molecular
4.
Methods Mol Biol ; 2847: 241-300, 2025.
Article in English | MEDLINE | ID: mdl-39312149

ABSTRACT

Nucleic acid tests (NATs) are considered the gold standard in molecular diagnosis. To meet the demand for on-site, point-of-care, specific and sensitive detection of trace pathogens, genotypes, and pathogenic variants, various types of NATs have been developed since the discovery of PCR. As alternatives to traditional NATs (e.g., PCR), isothermal nucleic acid amplification techniques (INAATs) such as LAMP, RPA, SDA, HDA, NASBA, and HCA were developed. PCR and most of these techniques depend heavily on efficient, well-optimized primer and probe design to deliver accurate and specific results. This chapter starts with a discussion of traditional NATs and INAATs together with a description of the computational tools available to aid primer/probe design for them. Besides briefly covering nanoparticle-assisted NATs, a more comprehensive presentation is given of the role CRISPR-based technologies have played in molecular diagnosis. We provide examples of a few groundbreaking CRISPR assays developed to counter epidemics and pandemics and outline CRISPR biology, highlighting the role of the CRISPR guide RNA and its design in any successful CRISPR-based application. In this respect, we tabulate the computational tools available to aid the design of guide RNAs for CRISPR-based applications. In the second part of the chapter, we discuss machine learning (ML)- and deep learning (DL)-based computational approaches that facilitate the design of efficient primers and probes for NATs/INAATs and of guide RNAs for CRISPR-based applications. Given the potential of microRNAs (miRNAs) as future biomarkers of disease diagnosis, we also discuss ML/DL-based computational approaches for miRNA-target prediction.
Our chapter thus traces the evolution of nucleic acid-based diagnostic techniques from PCR and INAATs to more advanced CRISPR/Cas-based methodologies, alongside the evolution of ML- and DL-based computational tools in the most relevant application domains.
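As a small, generic illustration of rule-based primer design (not a tool from this chapter), the classic Wallace rule estimates a short primer's melting temperature from its base composition:

```python
def wallace_tm(primer):
    """Approximate melting temperature via the Wallace rule:
    Tm = 2*(A+T) + 4*(G+C), a rough estimate for short (<14 nt) primers."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

# Toy 12-mer with 6 A/T and 6 G/C bases
tm = wallace_tm("ACGTACGTACGT")  # 2*6 + 4*6 = 36 C
```

Real primer-design tools additionally score GC content, hairpins, and dimer formation; this sketch only shows the simplest thermodynamic rule.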


Subject(s)
Deep Learning, Humans, CRISPR-Cas Systems, Molecular Diagnostic Techniques/methods, Nucleic Acid Amplification Techniques/methods, RNA/genetics, Machine Learning, Clustered Regularly Interspaced Short Palindromic Repeats/genetics
5.
Methods Mol Biol ; 2834: 3-39, 2025.
Article in English | MEDLINE | ID: mdl-39312158

ABSTRACT

Quantitative structure-activity relationship (QSAR) modeling is a method for predicting the physical and biological properties of small molecules; it is in use in industry and public services. However, like any scientific method, it faces ever-increasing demands, especially considering its possible role in assessing the safety of new chemicals. To answer the question of whether QSAR, by exploiting available knowledge, can build new knowledge, this chapter reviews QSAR methods in search of a QSAR epistemology. QSAR stands on three pillars: biological data, chemical knowledge, and modeling algorithms. Usually the biological data, resulting from good experimental practice, are taken as a true picture of the world, and chemical knowledge has scientific bases; so if a QSAR model is not working, the blame falls on modeling. The role of modeling in developing scientific theories, and in producing knowledge, is therefore analyzed. QSAR is a mature technology and is part of a large body of in silico and other computational methods. An active debate about the acceptability of QSAR models, the way to communicate them, and the explanations to provide accompanies the development of today's QSAR models. An example of predicting possible endocrine-disrupting chemicals (EDCs) illustrates the many faces of modern QSAR methods.


Subject(s)
Quantitative Structure-Activity Relationship, Algorithms, Humans, Endocrine Disruptors/chemistry
6.
Methods Mol Biol ; 2847: 121-135, 2025.
Article in English | MEDLINE | ID: mdl-39312140

ABSTRACT

Fundamental to the diverse biological functions of RNA are its 3D structure and conformational flexibility, which enable single sequences to adopt a variety of distinct 3D states. Currently, computational RNA design tasks are often posed as inverse problems, where sequences are designed to adopt a single desired secondary structure without considering 3D geometry and conformational diversity. In this tutorial, we present gRNAde, a geometric RNA design pipeline operating on sets of 3D RNA backbone structures to design sequences that explicitly account for RNA 3D structure and dynamics. gRNAde is a graph neural network that uses an SE(3)-equivariant encoder-decoder framework to generate RNA sequences conditioned on backbone structures in which the identities of the bases are unknown. We demonstrate the utility of gRNAde for fixed-backbone re-design of existing RNA structures of interest from the PDB, including riboswitches, aptamers, and ribozymes. gRNAde achieves higher native sequence recovery while being significantly faster than existing physics-based tools for 3D RNA inverse design, such as Rosetta.
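Native sequence recovery, the metric referenced above, is simply the fraction of designed positions that match the native sequence. A minimal sketch (not gRNAde's implementation; the sequences are toy examples):

```python
def native_sequence_recovery(designed, native):
    """Fraction of aligned positions where the designed RNA sequence
    matches the native sequence (both given 5'->3', equal length)."""
    assert len(designed) == len(native)
    matches = sum(d == n for d, n in zip(designed, native))
    return matches / len(native)

# Toy 8-nt design differing from the native sequence at one position
rec = native_sequence_recovery("GAUCGAUC", "GAUCGACC")  # 7/8 = 0.875
```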


Subject(s)
Deep Learning, Nucleic Acid Conformation, RNA, Software, RNA/chemistry, RNA/genetics, Computational Biology/methods, RNA, Catalytic/chemistry, RNA, Catalytic/genetics, Models, Molecular, Neural Networks, Computer
7.
Methods Mol Biol ; 2856: 357-400, 2025.
Article in English | MEDLINE | ID: mdl-39283464

ABSTRACT

Three-dimensional (3D) chromatin interactions, such as enhancer-promoter interactions (EPIs), loops, topologically associating domains (TADs), and A/B compartments, play critical roles in a wide range of cellular processes by regulating gene expression. The recent development of chromatin conformation capture technologies has enabled genome-wide profiling of various 3D structures, even in single cells. However, current catalogs of 3D structures remain incomplete and unreliable due to differences in technology, tools, and low data resolution. Machine learning methods have emerged as an alternative for recovering missing 3D interactions and/or improving resolution. Such methods frequently use genome annotation data (ChIP-seq, DNase-seq, etc.), DNA sequence information (k-mers and transcription factor binding site (TFBS) motifs), and other genomic properties to learn the associations between genomic features and chromatin interactions. In this review, we discuss computational tools for predicting three types of 3D interactions (EPIs, chromatin interactions, and TAD boundaries) and analyze their pros and cons. We also point out obstacles to the computational prediction of 3D interactions and suggest future research directions.


Subject(s)
Chromatin, Deep Learning, Chromatin/genetics, Chromatin/metabolism, Humans, Computational Biology/methods, Machine Learning, Genomics/methods, Enhancer Elements, Genetic, Promoter Regions, Genetic, Binding Sites, Genome, Software
8.
Ophthalmol Sci ; 5(1): 100587, 2025.
Article in English | MEDLINE | ID: mdl-39380882

ABSTRACT

Purpose: To apply methods for quantifying the uncertainty of deep learning segmentation of geographic atrophy (GA). Design: Retrospective analysis of OCT images and model comparison. Participants: One hundred twenty-six eyes from 87 participants with GA in the SWAGGER cohort of the Nonexudative Age-Related Macular Degeneration Imaged with Swept-Source OCT (SS-OCT) study. Methods: Manual segmentations of GA lesions were performed on structural sub-retinal pigment epithelium en face images derived from the SS-OCT scans. Models were developed for two approximate Bayesian deep learning techniques, Monte Carlo dropout and ensembling, to assess the uncertainty of GA semantic segmentation, and were compared to a traditionally trained deep learning model. Main Outcome Measures: Model performance (Dice score) was compared, and uncertainty was calculated as Shannon entropy. Results: The output of both Bayesian technique models showed a greater number of pixels with high entropy than the standard model. Dice scores for the Monte Carlo dropout method (0.90, 95% confidence interval 0.87-0.93) and the ensemble method (0.88, 95% confidence interval 0.85-0.91) were significantly higher (P < 0.001) than for the traditional model (0.82, 95% confidence interval 0.78-0.86). Conclusions: Quantifying the uncertainty in a prediction of GA may improve the trustworthiness of the models and aid clinicians in decision-making. The Bayesian deep learning techniques generated pixel-wise estimates of model uncertainty for segmentation, while also improving model performance compared with traditionally trained deep learning models. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
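As a hedged sketch of the uncertainty computation described above (assuming binary foreground probabilities; not the study's code), pixel-wise Shannon entropy over the mean of Monte Carlo dropout passes can be computed as:

```python
import numpy as np

def predictive_entropy(prob_maps):
    """Shannon entropy (bits) of the mean foreground probability across
    T stochastic forward passes (Monte Carlo dropout), per pixel.
    prob_maps: array of shape (T, H, W) with values in [0, 1]."""
    p = np.clip(np.mean(prob_maps, axis=0), 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Three hypothetical MC passes over a 2x2 probability map; pixels where the
# passes agree confidently get low entropy, ambiguous pixels approach 1 bit.
passes = np.array([[[0.9, 0.5], [0.1, 0.6]],
                   [[0.8, 0.5], [0.2, 0.4]],
                   [[1.0, 0.5], [0.0, 0.5]]])
ent = predictive_entropy(passes)
```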

9.
J Biomed Opt ; 30(Suppl 1): S13706, 2025 Jan.
Article in English | MEDLINE | ID: mdl-39295734

ABSTRACT

Significance: Oral cancer surgery requires accurate margin delineation to balance complete resection with post-operative functionality. Current in vivo fluorescence imaging systems provide two-dimensional margin assessment yet fail to quantify tumor depth prior to resection. Harnessing structured light in combination with deep learning (DL) may provide near real-time three-dimensional margin detection. Aim: A DL-enabled fluorescence spatial frequency domain imaging (SFDI) system trained with in silico tumor models was developed to quantify the depth of oral tumors. Approach: A convolutional neural network was designed to produce tumor depth and concentration maps from SFDI images. Three in silico representations of oral cancer lesions were developed to train the DL architecture: cylinders, spherical harmonics, and composite spherical harmonics (CSHs). Each model was validated with in silico SFDI images of patient-derived tongue tumors, and the CSH model was further validated with optical phantoms. Results: The performance of the CSH model was superior when presented with patient-derived tumors (P < 0.05). The CSH model could predict depth and concentration within 0.4 mm and 0.4 µg/mL, respectively, for in silico tumors with depths less than 10 mm. Conclusions: A DL-enabled SFDI system trained with in silico CSH demonstrates promise in defining the deep margins of oral tumors.


Subject(s)
Computer Simulation, Deep Learning, Mouth Neoplasms, Optical Imaging, Phantoms, Imaging, Surgery, Computer-Assisted, Optical Imaging/methods, Humans, Mouth Neoplasms/diagnostic imaging, Mouth Neoplasms/surgery, Mouth Neoplasms/pathology, Surgery, Computer-Assisted/methods, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Margins of Excision
10.
Spectrochim Acta A Mol Biomol Spectrosc ; 324: 125001, 2025 Jan 05.
Article in English | MEDLINE | ID: mdl-39180971

ABSTRACT

The use of visible and near-infrared (Vis-NIR) spectroscopy in conjunction with chemometric methods is widespread for identifying plant diseases. However, a key obstacle is the extraction of relevant spectral characteristics. This study aimed to improve sugarcane disease recognition by combining a convolutional neural network (CNN) with continuous wavelet transform (CWT) spectrograms for spectral feature extraction within the Vis-NIR range (380-1400 nm). Using 130 sugarcane leaf samples, the one-dimensional CWT coefficients obtained from the Vis-NIR spectra were transformed into two-dimensional spectrograms. Spectrogram features were extracted with the CNN and incorporated into decision tree, K-nearest neighbour, partial least squares discriminant analysis, and random forest (RF) calibration models. The RF model integrating the spectrogram-derived features demonstrated the best performance, with an average precision of 0.9111, sensitivity of 0.9733, specificity of 0.9791, and accuracy of 0.9487. This study may offer a non-destructive, rapid, and accurate means of detecting sugarcane diseases, enabling farmers to receive timely, actionable insights on crop health, thus minimizing crop loss and optimizing yields.
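A minimal illustration of turning a 1D spectrum into a 2D CWT spectrogram, as described above (a toy Ricker-wavelet scalogram built by convolution; not the study's pipeline, whose wavelet choice and normalization are unspecified here):

```python
import numpy as np

def ricker(points, a):
    """Unnormalized Ricker (Mexican-hat) wavelet with width parameter a."""
    t = np.arange(points) - (points - 1) / 2
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_spectrogram(signal, widths):
    """Stack of CWT coefficient rows, one per wavelet width: a 2D scalogram
    of shape (len(widths), len(signal))."""
    signal = np.asarray(signal, dtype=float)
    rows = [np.convolve(signal, ricker(min(10 * w, len(signal)), w), mode="same")
            for w in widths]
    return np.stack(rows)

# Toy "spectrum": a sinusoid sampled at 256 points, analyzed at 4 scales
spec = cwt_spectrogram(np.sin(np.linspace(0, 8 * np.pi, 256)), widths=[1, 2, 4, 8])
```

Each row of `spec` can then be treated as one scale of the two-dimensional input image fed to a CNN.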


Subject(s)
Deep Learning, Plant Diseases, Saccharum, Spectroscopy, Near-Infrared, Wavelet Analysis, Saccharum/chemistry, Spectroscopy, Near-Infrared/methods, Plant Leaves/chemistry, Least-Squares Analysis, Discriminant Analysis
11.
Diagn Interv Imaging ; 2024 Oct 03.
Article in English | MEDLINE | ID: mdl-39366836

ABSTRACT

PURPOSE: The purpose of this study was to evaluate the diagnostic performance of automated deep learning in the detection of coronary artery disease (CAD) on photon-counting coronary CT angiography (PC-CCTA). MATERIALS AND METHODS: Consecutive patients with suspected CAD who underwent PC-CCTA between January 2022 and December 2023 were included in this retrospective, single-center study. Non-ultra-high resolution (UHR) PC-CCTA images were analyzed by artificial intelligence using two deep learning models (CorEx, Spimed-AI) and compared to human expert reader assessment using UHR PC-CCTA images. Diagnostic performance for global CAD assessment (at least one significant stenosis ≥ 50 %) was estimated at the patient and vessel levels. RESULTS: A total of 140 patients (96 men, 44 women) with a median age of 60 years (first quartile, 51; third quartile, 68) were evaluated. Significant CAD on UHR PC-CCTA was present in 36/140 patients (25.7 %). The sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of deep learning-based CAD detection were 97.2 %, 81.7 %, 85.7 %, 64.8 %, and 98.9 %, respectively, at the patient level and 96.6 %, 86.7 %, 88.1 %, 53.8 %, and 99.4 %, respectively, at the vessel level. The area under the receiver operating characteristic curve was 0.90 (95 % CI: 0.83-0.94) at the patient level and 0.92 (95 % CI: 0.89-0.94) at the vessel level. CONCLUSION: Automated deep learning shows remarkable performance for the diagnosis of significant CAD on non-UHR PC-CCTA images. AI pre-reading may be of supportive value to the human reader in daily clinical practice for targeting and validating coronary artery stenosis using UHR PC-CCTA.
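The patient-level metrics above follow directly from a 2x2 confusion matrix. A sketch with hypothetical counts chosen to approximately reproduce the reported values (36 diseased of 140 patients; the exact cell counts are an assumption, not taken from the study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from confusion-matrix cell counts:
    true/false positives (tp/fp) and true/false negatives (tn/fn)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts: 35 TP, 1 FN among 36 diseased; 85 TN, 19 FP among 104 healthy
m = diagnostic_metrics(tp=35, fp=19, tn=85, fn=1)
```

With these counts, sensitivity is 35/36 (about 97.2 %) and specificity 85/104 (about 81.7 %), close to the figures quoted above.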

12.
Network ; : 1-29, 2024 Oct 05.
Article in English | MEDLINE | ID: mdl-39367861

ABSTRACT

This research explores the improvements in predictive performance and computational efficiency that machine learning and deep learning methods have achieved over time. Specifically, the application of transfer learning within convolutional neural networks (CNNs) has proved useful for diagnosing and classifying the various stages of Alzheimer's disease. Using base architectures such as Xception, InceptionResNetV2, DenseNet201, InceptionV3, ResNet50, and MobileNetV2, this study extends these models by adding batch normalization (BN), dropout, and dense layers. These enhancements improve the model's effectiveness and precision on the specified medical task. The proposed model is rigorously validated and evaluated using publicly available Kaggle Alzheimer's MRI data consisting of 5120 training images and 1280 testing images. For comprehensive performance evaluation, precision, recall, F1-score, and accuracy metrics are utilized. The findings indicate that the Xception-based model is the most promising of those considered. Without 5-fold cross-validation, this model obtains 99% accuracy and a loss of 0.135; integrating 5-fold cross-validation enhances the accuracy to 99.68% while decreasing the loss to 0.120. The research further includes evaluation of the area under the receiver operating characteristic curve (ROC-AUC) for the various classes and models. As a result, our model may detect and diagnose Alzheimer's disease quickly and accurately.
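K-fold cross-validation, as used above, partitions the data into k validation folds so every sample is held out exactly once. A minimal index-splitting sketch (not the study's code):

```python
def kfold_indices(n_samples, k):
    """Split range(n_samples) into k contiguous, near-equal validation folds.
    Each sample appears in exactly one fold."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

# 10 samples split into 5 folds of 2; in practice one would shuffle first
folds = kfold_indices(10, 5)
```

For each fold, the model is trained on the remaining k-1 folds and evaluated on the held-out fold; the k accuracies are then averaged.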

13.
Comput Biol Med ; 183: 109223, 2024 Oct 04.
Article in English | MEDLINE | ID: mdl-39368312

ABSTRACT

Optical coherence tomography (OCT) is widely used for its high resolution. Accurate OCT image segmentation can significantly improve the diagnosis and treatment of retinal diseases such as diabetic macular edema (DME). However, in resource-limited regions, portable devices with low-quality output are more frequently used, severely degrading segmentation performance. To address this issue, we propose a novel methodology comprising a dedicated pre-processing pipeline and an end-to-end double U-shaped cascaded architecture, H-Unets. In addition, an Adaptive Attention Fusion (AAF) module is carefully designed to improve the segmentation performance of H-Unets. To demonstrate the effectiveness of our method, we conduct extensive ablation and comparative studies on three open-source datasets. The experimental results confirm the validity of the pre-processing pipeline and H-Unets, which achieve the highest Dice score of 90.60%±0.87% among popular methods with a relatively small model size.
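The Dice score reported above measures overlap between the predicted and ground-truth masks. A minimal sketch for binary masks (illustrative only):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient 2|A intersect B| / (|A| + |B|) for binary masks;
    eps guards against division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy 2x2 masks overlapping in one pixel: Dice = 2*1 / (2+1) = 2/3
d = dice_score([[1, 1], [0, 0]], [[1, 0], [0, 0]])
```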

14.
Comput Methods Programs Biomed ; 257: 108443, 2024 Sep 28.
Article in English | MEDLINE | ID: mdl-39368441

ABSTRACT

BACKGROUND AND OBJECTIVE: Accurate prostate dissection is crucial in transanal surgery for patients with low rectal cancer. Improper dissection can lead to adverse events such as urethral injury, severely affecting the patient's postoperative recovery. However, unclear boundaries, the irregular shape of the prostate, and obstructive factors such as smoke present significant challenges for surgeons. METHODS: Our contribution is a novel video semantic segmentation framework, IG-Net, which incorporates prior surgical-instrument features for real-time, precise prostate segmentation. Specifically, we designed an instrument-guided module that calculates the surgeon's region of attention based on instrument features, performs local segmentation, and integrates it with global segmentation to enhance performance. Additionally, we propose a keyframe selection module that calculates the temporal correlations between consecutive frames based on instrument features. This module adaptively selects non-keyframes for feature-fusion segmentation, reducing noise and optimizing speed. RESULTS: To evaluate the performance of IG-Net, we constructed the most extensive dataset known to date, comprising 106 video clips and 6153 images. The experimental results show that the method achieves favorable performance: 72.70% IoU and 82.02% Dice at 35 FPS. CONCLUSIONS: For the task of prostate segmentation from surgical videos, our proposed IG-Net surpasses all previous methods across multiple metrics. IG-Net balances segmentation accuracy and speed, demonstrating strong robustness against adverse factors.
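IoU, the first metric above, is the intersection-over-union of the predicted and ground-truth masks; a minimal sketch for binary masks (illustrative only):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union |A intersect B| / |A union B| for binary masks.
    Returns 1.0 by convention when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, target).sum() / union

# Toy 2x2 masks: intersection 2 pixels, union 3 pixels -> IoU = 2/3
score = iou([[1, 1], [1, 0]], [[1, 1], [0, 0]])
```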

15.
Structure ; 2024 Sep 24.
Article in English | MEDLINE | ID: mdl-39368461

ABSTRACT

Protein-protein interactions (PPIs) play pivotal roles in directing T cell fate. One key player is the non-receptor tyrosine protein kinase Lck, which helps to transduce T cell activation signals. Lck activity is modulated by other proteins through interactions that are inadequately understood. Here, we use the deep learning method AF2Complex to predict PPIs involving Lck, screening it against ∼1,000 proteins implicated in immune responses, followed by extensive structural modeling of selected interactions. Remarkably, we describe how Lck may be specifically targeted by a palmitoyltransferase using a phosphotyrosine motif. We uncover "hotspot" interactions between Lck and the tyrosine phosphatase CD45, leading to a significant conformational shift of Lck toward activation. Lastly, we present intriguing interactions between the phosphotyrosine-binding domain of Lck and the cytoplasmic tail of the immune checkpoint LAG3 and propose a molecular mechanism for its inhibitory role. Together, this multifaceted study provides valuable insights into T cell regulation and signaling.

16.
Skin Res Technol ; 30(10): e70088, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39366914

ABSTRACT

BACKGROUND: Skin tone assessment is critical in both cosmetic and medical fields, yet traditional methods like the individual typology angle (ITA) have limitations, such as sensitivity to illuminants and insensitivity to skin redness. METHODS: This study introduces an automated image-based method for skin tone mapping that combines optical approaches and deep learning. The method generates skin tone maps by leveraging the illuminant spectrum, segments the skin region from face images, and identifies the corresponding skin tone on the map. The method was evaluated by generating skin tone maps under three standard illuminants (D45, D65, and D85) and comparing the results with those obtained using ITA on skin tone simulation images. RESULTS: Skin tone maps generated under the same lighting conditions as the image acquisition (D65) provided the highest accuracy, with a color difference of around 6, less than half that observed under the other illuminants. The mapping positions also showed a clear correlation with pigment levels. Compared to ITA, the proposed approach was particularly effective in distinguishing skin tones related to redness. CONCLUSION: Despite the need to measure the illuminant spectrum and for further physiological validation, the proposed approach shows potential for enhancing skin tone assessment. Its ability to mitigate the effects of illuminants and distinguish between the two dominant pigments offers promising applications in both cosmetic and medical diagnostics.
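For reference, the ITA baseline mentioned above is conventionally computed from CIELAB coordinates as ITA = arctan((L* − 50) / b*), expressed in degrees; a sketch (illustrative values, not data from the study):

```python
import math

def individual_typology_angle(L_star, b_star):
    """ITA in degrees from CIELAB lightness L* and the yellow-blue axis b*:
    ITA = arctan((L* - 50) / b*) * 180 / pi. Higher ITA = lighter skin tone."""
    return math.degrees(math.atan((L_star - 50.0) / b_star))

# Hypothetical measurement: L* = 70, b* = 20 gives arctan(1) = 45 degrees
ita = individual_typology_angle(70.0, 20.0)
```

Because ITA depends only on L* and b*, it ignores the red-green a* axis, which is one reason it is insensitive to skin redness, as the abstract notes.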


Subject(s)
Deep Learning, Skin Pigmentation, Humans, Skin Pigmentation/physiology, Female, Adult, Skin/diagnostic imaging, Male, Young Adult, Lighting/methods, Face/physiology, Face/diagnostic imaging, Image Processing, Computer-Assisted/methods
17.
Sci Rep ; 14(1): 23107, 2024 10 04.
Article in English | MEDLINE | ID: mdl-39367046

ABSTRACT

Identification of retinal diseases in automated screening methods, such as those used in clinical settings or computer-aided diagnosis, usually depends on the localization and segmentation of the optic disc (OD) and fovea. However, this task is difficult because these anatomical features have irregular spatial, texture, and shape characteristics, sample sizes are limited, and domain shifts arise from different data distributions across datasets. This study proposes a novel Multiresolution Cascaded Attention U-Net (MCAU-Net) model that addresses these problems by optimally balancing receptive field size and computational efficiency. MCAU-Net utilizes two skip connections to accurately localize and segment the OD and fovea in fundus images. We incorporated a Multiresolution Wavelet Pooling Module (MWPM) into the CNN at each stage of the U-Net input to compensate for spatial information loss. Additionally, we integrated a cascaded connection of spatial and channel attentions as a skip connection in MCAU-Net to focus precisely on the target object and improve model convergence for segmenting and localizing the OD and fovea centers. The proposed model has a low parameter count of 0.8 million, improving computational efficiency and reducing the risk of overfitting. For OD segmentation, MCAU-Net achieves high IoU values of 0.9771, 0.945, and 0.946 on the DRISHTI-GS, DRIONS-DB, and IDRiD datasets, respectively, outperforming previous results on all three datasets. For the IDRiD dataset, MCAU-Net locates the OD center with a Euclidean distance (ED) of 16.90 pixels and the fovea center with an ED of 33.45 pixels, demonstrating its effectiveness in overcoming the common limitations of state-of-the-art methods.
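The center-localization error above is a plain Euclidean distance in pixel coordinates; a sketch with hypothetical center coordinates (not values from the study):

```python
import math

def localization_error(pred, true):
    """Euclidean distance in pixels between a predicted center (x, y)
    and the ground-truth center (x, y)."""
    return math.hypot(pred[0] - true[0], pred[1] - true[1])

# Hypothetical prediction 10 px right and 5 px below the true center
ed = localization_error((110.0, 205.0), (100.0, 200.0))  # sqrt(125) ~ 11.18 px
```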


Subject(s)
Fovea Centralis, Fundus Oculi, Optic Disk, Humans, Optic Disk/diagnostic imaging, Fovea Centralis/diagnostic imaging, Neural Networks, Computer, Algorithms, Image Processing, Computer-Assisted/methods
18.
Sci Rep ; 14(1): 23080, 2024 Oct 04.
Article in English | MEDLINE | ID: mdl-39367073

ABSTRACT

We evaluate the capability of convolutional neural networks (CNNs) to predict a velocity field as it relates to fluid flow around various arrangements of obstacles within a two-dimensional, rectangular channel. We base our network architecture on a gated residual U-Net template and train it on velocity fields generated from computational fluid dynamics (CFD) simulations. We then assess the extent to which our model can accurately and efficiently predict steady flows in terms of velocity fields associated with inlet speeds and obstacle configurations not included in our training set. Real-world applications often require fluid-flow predictions in larger and more complex domains that contain more obstacles than used in model training. To address this problem, we propose a method that decomposes a domain into subdomains for which our model can individually and accurately predict the fluid flow, after which we apply smoothness and continuity constraints to reconstruct velocity fields across the whole of the original domain. This piecewise, semicontinuous approach is computationally more efficient than the alternative, which involves generation of CFD datasets required to retrain the model on larger and more spatially complex domains. We introduce a local orientational vector field entropy (LOVE) metric, which quantifies a decorrelation scale for velocity fields in geometric domains with one or more obstacles, and use it to devise a strategy for decomposing complex domains into weakly interacting subsets suitable for application of our modeling approach. We end with an assessment of error propagation across modeled domains of increasing size.

19.
Sci Rep ; 14(1): 23092, 2024 Oct 04.
Article in English | MEDLINE | ID: mdl-39367098

ABSTRACT

Modern natural language processing (NLP) state-of-the-art (SoTA) deep learning (DL) models have hundreds of millions of parameters, making them extremely complex. Large datasets are required for training these models, and while pretraining has reduced this requirement, human-labelled datasets are still necessary for fine-tuning. Few-shot learning (FSL) techniques, such as meta-learning, aim to train models from smaller datasets to mitigate this cost. However, the tasks used to evaluate these meta-learners frequently diverge from the real-world problems they are meant to solve. This work applies meta-learning to a problem more pertinent to the real world: class incremental learning (IL), in which the model learns to classify newly introduced classes after completing its training. One unique quality of meta-learners is that they can generalise from a small sample size to classes never seen before, which makes them especially useful for class IL. The method describes how to emulate class IL using proxy new classes, allowing a meta-learner to complete the task without retraining. To generate predictions, a transformer-based aggregation function for the meta-learner, which processes data from examples across all classes, is proposed. The principal contributions of the model include concurrently considering the entire support and query sets and prioritising attention to crucial samples, such as the query, to increase their impact during inference. The outcomes demonstrate that the model surpasses prevailing benchmarks. Notably, most meta-learners show significant generalisation in the class IL setting even without specific training for this task. This paper establishes a high-performing baseline for subsequent transformer-based aggregation techniques, thereby emphasising the practical significance of meta-learners in class IL.

20.
Sci Rep ; 14(1): 23069, 2024 Oct 04.
Article in English | MEDLINE | ID: mdl-39367158

ABSTRACT

A smart grid (SG) is a cutting-edge electrical grid that uses digital communication technology and automation to manage electricity generation, distribution, and consumption effectively. It incorporates energy storage systems, smart meters, and renewable energy sources for bidirectional communication and enhanced energy flow between grid modules. Because of their vulnerability to cyberattacks, SGs need robust safety measures, including intrusion detection systems (IDSs), to protect sensitive data against malicious manipulation, unauthorized access, and data breaches, thereby ensuring the integrity, resilience, and reliability of the electricity supply chain. Deep learning (DL) improves intrusion recognition in SGs by effectively analyzing network data, recognizing complex attack patterns, and adapting to dynamic threats in real time, strengthening the grid's resilience against cyberattacks. This study develops a novel Mountain Gazelle Optimization with Deep Ensemble Learning based intrusion detection (MGODEL-ID) technique for the SG environment. The MGODEL-ID methodology combines ensemble learning with metaheuristic approaches to identify intrusions. First, it applies Z-score normalization to convert the input data into a uniform format; it then employs the MGO model for feature subset selection. Intrusions are detected by an ensemble of three classifiers: long short-term memory (LSTM), a deep autoencoder (DAE), and an extreme learning machine (ELM). Finally, the dung beetle optimizer (DBO) is used to tune the classifiers' hyperparameters. Extensive simulations demonstrate the improved security outcomes of the MGODEL-ID model, which outperforms comparison models.
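Z-score normalization, the first preprocessing step above, standardizes each feature to zero mean and unit variance; a minimal sketch (illustrative only):

```python
import numpy as np

def zscore(X, eps=1e-12):
    """Column-wise Z-score normalization: (x - mean) / std per feature.
    eps avoids division by zero for constant columns."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / (X.std(axis=0) + eps)

# Toy data: two features on very different scales become comparable
Z = zscore([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
```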
