Results 1 - 20 of 9,742
1.
PLoS One ; 19(5): e0302787, 2024.
Article in English | MEDLINE | ID: mdl-38718077

ABSTRACT

Monitoring the sharing of research data through repositories is of increasing interest to institutions and funders, as well as from a meta-research perspective. Automated screening tools exist, but they are based on either narrow or vague definitions of open data. Where manual validation has been performed, it was based on a small article sample. At our biomedical research institution, we developed detailed criteria for such a screening, as well as a workflow which combines an automated and a manual step and considers both fully open and restricted-access data. We use the results for an internal incentivization scheme, as well as for monitoring in a dashboard. Here, we describe in detail our screening procedure and its validation, based on automated screening of 11,035 biomedical research articles, of which 1,381 articles with potential data sharing were subsequently screened manually. The screening results were highly reliable, as evidenced by inter-rater reliability values of ≥0.8 (Krippendorff's alpha) in two different validation samples. We also report the results of the screening, both for our institution and for an independent sample from a meta-research study. In the largest of the three samples, the 2021 institutional sample, underlying data had been openly shared for 7.8% of research articles. For an additional 1.0% of articles, restricted-access data had been shared, resulting in 8.3% of articles overall having open and/or restricted-access data. The extraction workflow is then discussed with regard to its applicability in different contexts, limitations, possible variations, and future developments. In summary, we present a comprehensive, validated, semi-automated workflow for the detection of shared research data underlying biomedical article publications.
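The reported reliability statistic can be made concrete. For two raters assigning nominal categories (e.g. data shared: yes/no), Krippendorff's alpha reduces to a short computation over a coincidence matrix. The function below is a generic illustration of that formula, not the authors' validation code:

```python
from collections import Counter

def krippendorff_alpha_nominal(pairs):
    """Krippendorff's alpha for two raters, nominal data, no missing values.

    pairs: list of (rating_a, rating_b) tuples, one per rated unit.
    """
    # Coincidence matrix: each unit contributes both ordered pairs.
    coincidences = Counter()
    for a, b in pairs:
        coincidences[(a, b)] += 1
        coincidences[(b, a)] += 1
    # Marginal totals n_c and the grand total n.
    marginals = Counter()
    for (c, _k), count in coincidences.items():
        marginals[c] += count
    n = sum(marginals.values())
    # Observed disagreement: off-diagonal mass of the coincidence matrix.
    d_o = sum(count for (c, k), count in coincidences.items() if c != k)
    # Expected disagreement from the marginal value frequencies.
    d_e = sum(marginals[c] * marginals[k]
              for c in marginals for k in marginals if c != k) / (n - 1)
    return 1.0 - d_o / d_e if d_e else 1.0
```

For example, four articles rated by two screeners as shared (1) or not shared (0), with one disagreement, yields an alpha of about 0.53; values at or above 0.8, as reported here, indicate high agreement.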


Subject(s)
Biomedical Research , Workflow , Biomedical Research/methods , Humans , Information Dissemination/methods , Access to Information , Reproducibility of Results
2.
Methods Cell Biol ; 187: 43-56, 2024.
Article in English | MEDLINE | ID: mdl-38705629

ABSTRACT

Correlative Light Electron Microscopy (CLEM) encompasses a wide range of experimental approaches with different degrees of complexity and technical challenges, in which the attributes of both light and electron microscopy are combined in a single experiment. Although the biological question always determines which technology is most appropriate, we generally set out to apply the simplest workflow possible. For 2D cell cultures expressing fluorescently tagged molecules, we report a simple and very powerful CLEM approach using gridded finder imaging dishes. We first determine the gross localization of the fluorescence using light microscopy, and subsequently we retrace the origin/localization of the fluorescence by projecting it onto the ultrastructural reference space obtained by transmission electron microscopy (TEM). Here we describe this workflow and highlight some basic principles of sample preparation for such a simple CLEM experiment. We specifically focus on the steps following resin embedding for TEM and the introduction of the sample into the electron microscope.
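Computationally, the retracing step is a coordinate transform between the light- and electron-microscopy reference spaces. As a generic illustration (not part of the published protocol), an affine mapping can be estimated by least squares from matched finder-grid fiducials visible in both modalities:

```python
import numpy as np

def fit_affine(lm_points, em_points):
    """Least-squares affine transform mapping light-microscopy (LM)
    coordinates onto electron-microscopy (EM) coordinates.

    lm_points, em_points: (N, 2) arrays of matched fiducial positions
    (N >= 3, not collinear), e.g. corners of a finder-grid square.
    """
    lm = np.asarray(lm_points, dtype=float)
    em = np.asarray(em_points, dtype=float)
    # Augment LM coordinates with a constant column: em ~= [lm | 1] @ M
    design = np.hstack([lm, np.ones((len(lm), 1))])
    M, *_ = np.linalg.lstsq(design, em, rcond=None)
    return M  # (3, 2): rows are x-coefficients, y-coefficients, offset

def apply_affine(M, points):
    """Project LM points into the EM reference space."""
    pts = np.asarray(points, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M
```

With three or more fiducials this recovers rotation, scaling and translation between the two images, so a fluorescent spot's LM coordinates can be projected onto the TEM montage.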


Subject(s)
Workflow , Humans , Microscopy, Fluorescence/methods , Microscopy, Electron, Transmission/methods , Microscopy, Electron/methods , Animals
3.
Br Dent J ; 236(9): 718, 2024 May.
Article in English | MEDLINE | ID: mdl-38730170
4.
Curr Protoc ; 4(5): e1046, 2024 May.
Article in English | MEDLINE | ID: mdl-38717471

ABSTRACT

Whole-genome sequencing is widely used to investigate population genomic variation in organisms of interest. Assorted tools have been independently developed to call variants from short-read sequencing data aligned to a reference genome, including single nucleotide polymorphisms (SNPs) and structural variations (SVs). We developed SNP-SVant, an integrated, flexible, and computationally efficient bioinformatic workflow that predicts high-confidence SNPs and SVs in organisms without benchmarked variants, which are traditionally used for distinguishing sequencing errors from real variants. In the absence of these benchmarked datasets, we leverage multiple rounds of statistical recalibration to increase the precision of variant prediction. The SNP-SVant workflow is flexible, with user options to trade off accuracy for sensitivity. The workflow predicts SNPs and small insertions and deletions using the Genome Analysis ToolKit (GATK) and predicts SVs using the Genome Rearrangement IDentification Software Suite (GRIDSS), and it culminates in variant annotation using custom scripts. A key utility of SNP-SVant is its scalability. Variant calling is a computationally expensive procedure, and thus, SNP-SVant uses a workflow management system with intermediary checkpoint steps to ensure efficient use of resources by minimizing redundant computations and omitting steps where dependent files are available. SNP-SVant also provides metrics to assess the quality of called variants and converts between VCF and aligned FASTA format outputs to ensure compatibility with downstream tools to calculate selection statistics, which are commonplace in population genomics studies. By accounting for both small and large structural variants, users of this workflow can obtain a wide-ranging view of genomic alterations in an organism of interest.
Overall, this workflow advances our capabilities in assessing the functional consequences of different types of genomic alterations, ultimately improving our ability to associate genotypes with phenotypes. © 2024 The Authors. Current Protocols published by Wiley Periodicals LLC. Basic Protocol: Predicting single nucleotide polymorphisms and structural variations Support Protocol 1: Downloading publicly available sequencing data Support Protocol 2: Visualizing variant loci using Integrated Genome Viewer Support Protocol 3: Converting between VCF and aligned FASTA formats.
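Support Protocol 3 covers conversion between VCF and aligned FASTA formats. A minimal sketch of one direction of that conversion, applying single-base substitutions from VCF-format records to a reference sequence, is shown below; it is an illustration, not the SNP-SVant implementation (which also handles indels and SVs):

```python
def vcf_to_consensus(reference, vcf_lines):
    """Apply simple SNP records from VCF-format lines to a reference
    sequence, returning a consensus of the same length. Indels and
    multi-allelic sites are skipped so the output stays aligned.
    """
    seq = list(reference)
    for line in vcf_lines:
        if line.startswith("#") or not line.strip():
            continue  # headers and blank lines
        fields = line.rstrip("\n").split("\t")
        pos, ref, alt = int(fields[1]), fields[3], fields[4]
        # Only single-base substitutions in this sketch.
        if len(ref) == 1 and len(alt) == 1 and "," not in alt:
            assert seq[pos - 1] == ref, "VCF does not match the reference"
            seq[pos - 1] = alt  # VCF POS is 1-based
    return "".join(seq)
```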


Subject(s)
Polymorphism, Single Nucleotide , Software , Workflow , Polymorphism, Single Nucleotide/genetics , Computational Biology/methods , Genomics/methods , Molecular Sequence Annotation/methods , Whole Genome Sequencing/methods
5.
Article in English | MEDLINE | ID: mdl-38717248

ABSTRACT

A video can help highlight the real-time steps, anatomy and technical aspects of a case that may be difficult to convey with text or static images alone. Editing with a regimented workflow allows the transmission of only essential information to the viewer while maximizing efficiency throughout the editing process. This video tutorial breaks down the fundamentals of surgical video editing with tips and pointers to simplify the workflow.


Subject(s)
Video Recording , Humans , Surgical Procedures, Operative/methods , Workflow
6.
Bioinformatics ; 40(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38741151

ABSTRACT

MOTIVATION: Systems biology aims to better understand living systems through mathematical modelling of experimental and clinical data. A pervasive challenge in quantitative dynamical modelling is the integration of time series measurements, which often have high variability and low sampling resolution. Approaches are required to utilize such information while consistently handling uncertainties. RESULTS: We present BayModTS (Bayesian modelling of time series data), a new FAIR (findable, accessible, interoperable, and reusable) workflow for processing and analysing sparse and highly variable time series data. BayModTS consistently transfers uncertainties from data to model predictions, including process knowledge via parameterized models. Further, credible differences in the dynamics of different conditions can be identified by filtering noise. To demonstrate the power and versatility of BayModTS, we applied it to three hepatic datasets gathered from three different species and with different measurement techniques: (i) blood perfusion measurements by magnetic resonance imaging in rat livers after portal vein ligation, (ii) pharmacokinetic time series of different drugs in normal and steatotic mice, and (iii) CT-based volumetric assessment of human liver remnants after clinical liver resection. AVAILABILITY AND IMPLEMENTATION: The BayModTS codebase is available on GitHub at https://github.com/Systems-Theory-in-Systems-Biology/BayModTS. The repository contains a Python script for the executable BayModTS workflow and a widely applicable SBML (systems biology markup language) model for retarded transient functions. In addition, all examples from the paper are included in the repository. Data and code of the application examples are stored on DaRUS: https://doi.org/10.18419/darus-3876. The raw MRI ROI voxel data were uploaded to DaRUS: https://doi.org/10.18419/darus-3878. The steatosis metabolite data are published on FairdomHub: 10.15490/fairdomhub.1.study.1070.1.
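The core idea of propagating uncertainty from sparse, noisy time series to model predictions can be sketched with a toy random-walk Metropolis sampler fitting an exponential decay, then summarizing the spread of the sampled trajectories. This is an illustration only; BayModTS itself uses parameterized SBML models and the full workflow described above, and all numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse, noisy observations of y(t) = a * exp(-k t)
t_obs = np.array([0.0, 1.0, 3.0, 6.0])
y_obs = np.array([2.1, 1.1, 0.35, 0.05])
sigma = 0.1  # assumed measurement noise

def log_post(theta):
    a, k = theta
    if a <= 0 or k <= 0:               # flat priors on positive parameters
        return -np.inf
    resid = y_obs - a * np.exp(-k * t_obs)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis sampling of the posterior
theta, lp = np.array([1.0, 1.0]), log_post([1.0, 1.0])
samples = []
for i in range(20000):
    prop = theta + rng.normal(scale=0.05, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i >= 5000:                       # discard burn-in
        samples.append(theta.copy())
samples = np.array(samples)

# Posterior predictive band on a dense time grid
t_grid = np.linspace(0, 6, 50)
curves = samples[:, :1] * np.exp(-samples[:, 1:] * t_grid)
lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)
```

The band (`lo`, `hi`) carries the data's uncertainty into the prediction, which is the behaviour the abstract describes at workflow scale.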


Subject(s)
Bayes Theorem , Workflow , Animals , Rats , Humans , Mice , Systems Biology/methods , Liver/metabolism , Software , Magnetic Resonance Imaging/methods
7.
Molecules ; 29(9)2024 May 01.
Article in English | MEDLINE | ID: mdl-38731577

ABSTRACT

Recently, benchtop nuclear magnetic resonance (NMR) spectrometers utilizing permanent magnets have emerged as versatile tools with applications across various fields, including food and pharmaceuticals. Their efficacy is further enhanced when coupled with chemometric methods. This study presents an innovative approach to leveraging a compact benchtop NMR spectrometer coupled with chemometrics for screening honey-based food supplements adulterated with active pharmaceutical ingredients. Initially, fifty samples seized by French customs were analyzed using a 60 MHz benchtop spectrometer. The investigation unveiled the presence of tadalafil in 37 samples, sildenafil in 5 samples, and a combination of flibanserin with tadalafil in 1 sample. After conducting comprehensive qualitative and quantitative characterization of the samples, we propose a chemometric workflow to provide an efficient screening of honey samples using the NMR dataset. This pipeline, utilizing partial least squares discriminant analysis (PLS-DA) models, enables the classification of samples as either adulterated or non-adulterated, as well as the identification of the presence of tadalafil or sildenafil. Additionally, PLS regression models are employed to predict the quantitative content of these adulterants. Through blind analysis, this workflow allows for the detection and quantification of adulterants in these honey supplements.
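PLS-DA and quantitative PLS regression share the same underlying construction; a one-component PLS1 fit, the elementary building block, is a few lines of linear algebra. The sketch below is generic and is not the authors' chemometric pipeline:

```python
import numpy as np

def pls1_fit(X, y):
    """One-component PLS1 regression on mean-centered data.
    Returns (w, b, x_mean, y_mean) so that
    y_hat = y_mean + b * ((x - x_mean) @ w).
    """
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    w = Xc.T @ yc
    w /= np.linalg.norm(w)        # weight vector: direction of max covariance
    t = Xc @ w                    # latent scores
    b = (yc @ t) / (t @ t)        # inner regression coefficient
    return w, b, x_mean, y_mean

def pls1_predict(model, X):
    w, b, x_mean, y_mean = model
    return y_mean + b * ((np.asarray(X, float) - x_mean) @ w)
```

For PLS-DA, `y` is a dummy-coded class label (e.g. adulterated = 1, non-adulterated = 0) and the prediction is thresholded; for quantification, `y` is the adulterant concentration.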


Subject(s)
Dietary Supplements , Honey , Magnetic Resonance Spectroscopy , Honey/analysis , Dietary Supplements/analysis , Magnetic Resonance Spectroscopy/methods , Sildenafil Citrate/analysis , Workflow , Chemometrics/methods , Tadalafil/analysis , Least-Squares Analysis , Drug Contamination/prevention & control , Discriminant Analysis
8.
BMJ Case Rep ; 17(5)2024 May 15.
Article in English | MEDLINE | ID: mdl-38749520

ABSTRACT

This case report focuses on the replacement of ceramic laminate veneers with suboptimal marginal fit and design, employing a digital workflow and CAD-CAM technology. The patient, a woman in her 30s, expressed concerns about the appearance and hygiene challenges of her existing veneers. A comprehensive assessment, including clinical examination, facial photographs and intraoral scanning, was conducted. Utilising CAD software, facial photographs and 3D models were merged to create a digital wax-up, which was crucial in designing suitable veneers and addressing issues such as overcontouring and a poor emergence profile. Following the removal of the old veneers, a mock-up was performed and approved. Preparations ensured space for restorations with well-defined margins. The final restorations, milled from leucite-reinforced vitreous ceramic, were cemented. At the 1-year follow-up, improved aesthetics, gingival health and functional restorations were observed. This report highlights the efficacy of digital workflows in achieving consistent and aesthetically pleasing outcomes in ceramic laminate veneer replacement.


Subject(s)
Ceramics , Computer-Aided Design , Dental Veneers , Workflow , Humans , Female , Adult , Esthetics, Dental , Dental Prosthesis Design/methods , Dental Porcelain
9.
J Pathol Clin Res ; 10(3): e12376, 2024 May.
Article in English | MEDLINE | ID: mdl-38738521

ABSTRACT

The identification of gene fusions has become an integral part of soft tissue and bone tumour diagnosis. We investigated the added value of targeted RNA-based sequencing (targeted RNA-seq, Archer FusionPlex) to our current molecular diagnostic workflow of these tumours, which is based on fluorescence in situ hybridisation (FISH) for the detection of gene fusions using 25 probes. In a series of 131 diagnostic samples targeted RNA-seq identified a gene fusion, BCOR internal tandem duplication or ALK deletion in 47 cases (35.9%). For 74 cases, encompassing 137 FISH analyses, concordance between FISH and targeted RNA-seq was evaluated. A positive or negative FISH result was confirmed by targeted RNA-seq in 27 out of 49 (55.1%) and 81 out of 88 (92.0%) analyses, respectively. While negative concordance was high, targeted RNA-seq identified a canonical gene fusion in seven cases despite a negative FISH result. The 22 discordant FISH-positive analyses showed a lower percentage of rearrangement-positive nuclei (range 15-41%) compared to the concordant FISH-positive analyses (>41% of nuclei in 88.9% of cases). Six FISH analyses (in four cases) were finally considered false positive based on histological and targeted RNA-seq findings. For the EWSR1 FISH probe, we observed a gene-dependent disparity (p = 0.0020), with 8 out of 35 cases showing a discordance between FISH and targeted RNA-seq (22.9%). This study demonstrates an added value of targeted RNA-seq to our current diagnostic workflow of soft tissue and bone tumours in 19 out of 131 cases (14.5%), which we categorised as altered diagnosis (3 cases), added precision (6 cases), or augmented spectrum (10 cases). In the latter subgroup, four novel fusion transcripts were found for which the clinical relevance remains unclear: NAB2::NCOA2, YAP1::NUTM2B, HSPA8::BRAF, and PDE2A::PLAG1. Overall, targeted RNA-seq has proven extremely valuable in the diagnostic workflow of soft tissue and bone tumours.


Subject(s)
Bone Neoplasms , In Situ Hybridization, Fluorescence , Soft Tissue Neoplasms , Workflow , Humans , Bone Neoplasms/genetics , Bone Neoplasms/diagnosis , Bone Neoplasms/pathology , Soft Tissue Neoplasms/genetics , Soft Tissue Neoplasms/diagnosis , Soft Tissue Neoplasms/pathology , Female , Adult , Male , Middle Aged , Adolescent , Aged , Sequence Analysis, RNA , Child , Young Adult , Gene Fusion , Biomarkers, Tumor/genetics , Child, Preschool , Aged, 80 and over , Oncogene Proteins, Fusion/genetics
10.
BMC Bioinformatics ; 25(1): 184, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724907

ABSTRACT

BACKGROUND: Major advances in sequencing technologies and the sharing of data and metadata in science have resulted in a wealth of publicly available datasets. However, working with and especially curating public omics datasets remains challenging despite these efforts. While a growing number of initiatives aim to re-use previous results, these initiatives present limitations that often lead to the need for further in-house curation and processing. RESULTS: Here, we present the Omics Dataset Curation Toolkit (OMD Curation Toolkit), a python3 package designed to accompany and guide the researcher during the curation process of metadata and fastq files of public omics datasets. This workflow provides a standardized framework with multiple capabilities (collection, control check, treatment and integration) to facilitate the arduous task of curating public sequencing data projects. While centered on the European Nucleotide Archive (ENA), the majority of the provided tools are generic and can be used to curate datasets from different sources. CONCLUSIONS: The toolkit thus offers valuable tools for the in-house curation previously needed to re-use public omics data. Due to its workflow structure and capabilities, it can be easily used by, and benefit, investigators developing novel omics meta-analyses based on sequencing data.
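A control-check step of the kind described, verifying that downloaded fastq files are structurally sound before integration, can be illustrated in pure Python. This is a generic sketch, not the OMD Curation Toolkit API:

```python
def check_fastq(lines):
    """Minimal structural check of FASTQ text: records are 4 lines,
    headers start with '@', the separator line with '+', and the
    sequence/quality lengths match. Returns (n_records, errors)."""
    errors, n = [], 0
    lines = [ln.rstrip("\n") for ln in lines if ln.strip()]
    if len(lines) % 4:
        errors.append("line count is not a multiple of 4")
    for i in range(0, len(lines) - len(lines) % 4, 4):
        head, seq, sep, qual = lines[i:i + 4]
        n += 1
        if not head.startswith("@"):
            errors.append(f"record {n}: header lacks '@'")
        if not sep.startswith("+"):
            errors.append(f"record {n}: separator lacks '+'")
        if len(seq) != len(qual):
            errors.append(f"record {n}: seq/qual length mismatch")
    return n, errors
```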


Subject(s)
Data Curation , Software , Workflow , Data Curation/methods , Metadata , Databases, Genetic , Genomics/methods , Computational Biology/methods
11.
Anal Chem ; 96(19): 7373-7379, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38696819

ABSTRACT

Cross-linking mass spectrometry (XL-MS) has evolved into a pivotal technique for probing protein interactions. This study describes the implementation of Parallel Accumulation-Serial Fragmentation (PASEF) on timsTOF instruments, enhancing the detection and analysis of protein interactions by XL-MS. Addressing the challenges in XL-MS, such as the interpretation of complex spectra, low abundant cross-linked peptides, and a data acquisition bias, our current study integrates a peptide-centric approach for the analysis of XL-MS data and presents the foundation for integrating data-independent acquisition (DIA) in XL-MS with a vendor-neutral and open-source platform. A novel workflow is described for processing data-dependent acquisition (DDA) of PASEF-derived information. For this, software by Bruker Daltonics is used, enabling the conversion of these data into a format that is compatible with MeroX and Skyline software tools. Our approach significantly improves the identification of cross-linked products from complex mixtures, allowing the XL-MS community to overcome current analytical limitations.


Subject(s)
Cross-Linking Reagents , Mass Spectrometry , Software , Workflow , Cross-Linking Reagents/chemistry , Peptides/chemistry , Peptides/analysis , Humans
12.
Anal Chem ; 96(19): 7460-7469, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38702053

ABSTRACT

Natural products (or specialized metabolites) are historically the main source of new drugs. However, the current drug discovery pipelines require miniaturization and speeds that are incompatible with traditional natural product research methods, especially in the early stages of the research. This article introduces the NP3 MS Workflow, a robust open-source software system for liquid chromatography-tandem mass spectrometry (LC-MS/MS) untargeted metabolomic data processing and analysis, designed to rank bioactive natural products directly from complex mixtures of compounds, such as bioactive biota samples. NP3 MS Workflow allows minimal user intervention as well as customization of each step of LC-MS/MS data processing, with diagnostic statistics to allow interpretation and optimization of LC-MS/MS data processing by the user. NP3 MS Workflow adds improved computing of the MS2 spectra in an LC-MS/MS data set and provides tools for automatic [M + H]+ ion deconvolution using fragmentation rules; chemical structural annotation against MS2 databases; and relative quantification of the precursor ions for bioactivity correlation scoring. The software will be presented with case studies and comparisons with equivalent tools currently available. NP3 MS Workflow shows a robust and useful approach to select bioactive natural products from complex mixtures, improving the set of tools available for untargeted metabolomics. It can be easily integrated into natural product-based drug-discovery pipelines and to other fields of research at the interface of chemistry and biology.
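The bioactivity correlation scoring step, which ranks precursor ions by how closely their relative quantification tracks a bioactivity readout across samples, reduces at its simplest to a per-ion correlation. The sketch below is a simplified illustration; NP3's actual scoring is more elaborate, and the example values are invented:

```python
import numpy as np

def bioactivity_scores(intensities, activity):
    """Pearson correlation of each ion's intensity profile against a
    bioactivity profile measured over the same samples/fractions.

    intensities: (n_ions, n_samples) peak areas per precursor ion
    activity:    (n_samples,) bioactivity readout (e.g. % inhibition)
    Returns an array of scores in [-1, 1], one per ion.
    """
    X = np.asarray(intensities, float)
    a = np.asarray(activity, float)
    Xc = X - X.mean(axis=1, keepdims=True)
    ac = a - a.mean()
    denom = np.linalg.norm(Xc, axis=1) * np.linalg.norm(ac)
    return (Xc @ ac) / denom

# Rank ions: the highest-scoring ion best tracks the bioactivity.
scores = bioactivity_scores([[1, 5, 9], [4, 4, 4.5], [9, 5, 1]],
                            [10, 50, 90])
```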


Subject(s)
Biological Products , Drug Discovery , Metabolomics , Software , Tandem Mass Spectrometry , Biological Products/chemistry , Biological Products/metabolism , Biological Products/analysis , Chromatography, Liquid/methods , Workflow
13.
Med Sci Monit ; 30: e943526, 2024 May 12.
Article in English | MEDLINE | ID: mdl-38734884

ABSTRACT

BACKGROUND A significant number of atrial fibrillation (AF) recurrences occur after initial ablation, often due to pulmonary vein reconnections or triggers from non-pulmonary veins. MATERIAL AND METHODS Patients with paroxysmal AF who underwent radiofrequency catheter ablation for the first time were enrolled. Based on propensity score matching (1:1 matching), 118 patients were selected for an optimized workflow for the radiofrequency catheter ablation of paroxysmal AF (OWCA) group and a conventional group. Comparative analysis of the acute and 12-month clinical outcomes was conducted. Moreover, an artificial intelligence analytics platform was used to evaluate the quality of pulmonary vein isolation (PVI) circles. RESULTS PVI was successfully achieved in all patients. The incidence of first-pass isolation of bilateral PVI circles was higher (P=0.009), and that of acute pulmonary vein reconnections lower (P=0.027), in the OWCA group than in the conventional group. The OWCA group displayed a significant reduction in the number of fractured points (P<0.001) and stacked points (P=0.003), and a greater proportion of cases in which the radiofrequency index achieved the target value (P=0.003). Additionally, the contact force more consistently met the force-over-time criteria (P<0.001) for bilateral PVI circles in the OWCA group, accompanied by a shorter operation time (P=0.017). During the 12-month follow-up period, the OWCA group exhibited a higher atrial arrhythmia-free survival rate following the initial ablation procedure than did the conventional group. CONCLUSIONS The optimized workflow for radiofrequency catheter ablation of paroxysmal AF could play a crucial role in creating higher-quality PVI circles. This improvement is reflected in a significantly elevated 12-month atrial arrhythmia-free survival rate.


Subject(s)
Atrial Fibrillation , Catheter Ablation , Pulmonary Veins , Workflow , Humans , Atrial Fibrillation/surgery , Catheter Ablation/methods , Female , Male , Middle Aged , Treatment Outcome , Pulmonary Veins/surgery , Aged , Propensity Score , Recurrence
14.
Sci Rep ; 14(1): 11018, 2024 05 14.
Article in English | MEDLINE | ID: mdl-38744902

ABSTRACT

Antibody-drug conjugate (ADC) payloads are cleavable drugs that act as the warhead exerting an ADC's cytotoxic effects on cancer cells intracellularly. A simple and highly sensitive workflow is developed and validated for the simultaneous quantification of six ADC payloads, namely SN-38, MTX, DXd, MMAE, MMAF and calicheamicin (CM). The workflow consists of a short and simple sample extraction using a methanol-ethanol mixture, followed by a fast liquid chromatography tandem mass spectrometry (LC-MS/MS) analysis. The results showed that well-validated linear response ranges of 0.4-100 nM for SN-38, MTX and DXd, 0.04-100 nM for MMAE and MMAF, and 0.4-1000 nM for CM were achieved in mouse serum. Recoveries for all six payloads at three different concentrations (low, medium and high) were more than 85%. An ultra-low sample volume of only 5 µL of serum is required due to the high sensitivity of the method. This validated method was successfully applied to a pharmacokinetic study to quantify MMAE in mouse serum samples.
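Linear response ranges and recoveries of this kind come from calibration standards and spiked QC samples. The two underlying calculations can be sketched generically; the numbers below are simulated for illustration and are not the study's data:

```python
import numpy as np

def calibration_fit(conc, response):
    """Unweighted linear fit of detector response vs concentration.
    Returns (slope, intercept, r_squared)."""
    slope, intercept = np.polyfit(conc, response, 1)
    pred = slope * np.asarray(conc) + intercept
    ss_res = np.sum((np.asarray(response) - pred) ** 2)
    ss_tot = np.sum((response - np.mean(response)) ** 2)
    return slope, intercept, 1 - ss_res / ss_tot

def recovery_percent(measured, nominal):
    """Recovery (%) of a spiked QC sample."""
    return 100.0 * measured / nominal

# Hypothetical MMAE calibration over the 0.04-100 nM range
conc = np.array([0.04, 0.4, 4.0, 40.0, 100.0])
resp = 250.0 * conc + 3.0                 # simulated ideal response
slope, intercept, r2 = calibration_fit(conc, resp)
back_calc = (resp - intercept) / slope    # back-calculated concentrations
```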


Subject(s)
Immunoconjugates , Tandem Mass Spectrometry , Animals , Mice , Chromatography, Liquid/methods , Immunoconjugates/pharmacokinetics , Immunoconjugates/chemistry , Tandem Mass Spectrometry/methods , Workflow , Liquid Chromatography-Mass Spectrometry
15.
Methods Mol Biol ; 2726: 85-104, 2024.
Article in English | MEDLINE | ID: mdl-38780728

ABSTRACT

The structures of RNA molecules and their complexes are crucial for understanding biology at the molecular level. Resolving these structures holds the key to understanding their manifold structure-mediated functions, ranging from regulating gene expression to catalyzing biochemical processes. Predicting RNA secondary structure is a prerequisite and a key step in accurately modelling their three-dimensional structure. Although dedicated modelling software is making fast and significant progress, predicting an accurate secondary structure from the sequence remains a challenge. Performance can be significantly improved by the incorporation of experimental RNA structure probing data. Many different chemical and enzymatic probes have been developed; however, typically only one set of quantitative data can be incorporated as constraints for computer-assisted modelling. IPANEMAP is a recent workflow based on RNAfold that can take into account several quantitative or qualitative datasets to model RNA secondary structure. This chapter details the methods for popular chemical probing (DMS, CMCT, SHAPE-CE, and SHAPE-MaP) and the subsequent analysis and structure prediction using IPANEMAP.


Subject(s)
Models, Molecular , Nucleic Acid Conformation , RNA , Software , Workflow , RNA/chemistry , RNA/genetics , Computational Biology/methods
16.
Sci Data ; 11(1): 530, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38783061

ABSTRACT

The identification of lead molecules and the exploration of novel pharmacological drug targets are major challenges of medical life sciences today. Genome-wide association studies, multi-omics, and systems pharmacology steadily reveal new protein networks, extending the known and relevant disease-modifying proteome. Unfortunately, the vast majority of the disease-modifying proteome consists of 'orphan targets' of which intrinsic ligands/substrates, (patho)physiological roles, and/or modulators are unknown. Undruggability is a major challenge in drug development today, and medicinal chemistry efforts cannot keep up with hit identification and hit-to-lead optimization studies. New 'thinking-outside-the-box' approaches are necessary to identify structurally novel and functionally distinctive ligands for orphan targets. Here we present a unique dataset that includes critical information on the orphan target ABCA1, from which a novel cheminformatic workflow - computer-aided pattern scoring (C@PS) - for the identification of novel ligands was developed. Providing a hit rate of 95.5% and molecules with high potency and molecular-structural diversity, this dataset represents a suitable template for general deorphanization studies.


Subject(s)
Drug Design , Drug Discovery , Humans , Ligands , Workflow
17.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38783705

ABSTRACT

Tumor mutational signatures have gained prominence in cancer research, yet the lack of standardized methods hinders reproducibility and robustness. Leveraging colorectal cancer (CRC) as a model, we explored the influence of computational parameters on mutational signature analyses across 230 CRC cell lines and 152 CRC patients. Results were validated in three independent datasets: 483 endometrial cancer patients stratified by mismatch repair (MMR) status, 35 lung cancer patients by smoking status and 12 patient-derived organoids (PDOs) annotated for colibactin exposure. Assessing various bioinformatic tools, reference datasets and input data sizes including whole genome sequencing, whole exome sequencing and a pan-cancer gene panel, we demonstrated significant variability in the results. We report that the use of distinct algorithms and references led to statistically different results, highlighting how arbitrary choices may induce variability in the mutational signature contributions. Furthermore, we found a differential contribution of mutational signatures between coding and intergenic regions and defined the minimum number of somatic variants required for reliable mutational signature assignment. To facilitate the identification of the most suitable workflows, we developed Comparative Mutational Signature analysis on Coding and Extragenic Regions (CoMSCER), a bioinformatic tool which allows researchers to easily perform comparative mutational signature analysis by coupling the results from several tools and public reference datasets and to assess mutational signature contributions in coding and non-coding genomic regions. In conclusion, our study provides a comparative framework to elucidate the impact of distinct computational workflows on mutational signatures.
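Signature contributions are commonly obtained by non-negative refitting of a mutation catalogue against fixed reference signatures (v ≈ Sw with w ≥ 0). The sketch below uses Lee-Seung multiplicative updates as one generic way to solve this refitting problem; it is not the CoMSCER implementation, and the tiny three-channel example is invented:

```python
import numpy as np

def refit_signatures(catalogue, signatures, n_iter=2000):
    """Non-negative refit of a mutation catalogue against fixed
    reference signatures using multiplicative updates.

    catalogue:  (n_channels,) mutation counts, e.g. the 96 SBS channels
    signatures: (n_channels, n_signatures) reference signature matrix
    Returns normalized per-signature contributions (sums to 1).
    """
    v = np.asarray(catalogue, float)
    S = np.asarray(signatures, float)
    w = np.ones(S.shape[1])
    for _ in range(n_iter):
        # Fixed point of this update satisfies S.T v = S.T S w (w >= 0).
        w *= (S.T @ v) / (S.T @ (S @ w) + 1e-12)
    return w / w.sum()
```

Because the result depends on the chosen reference set and the number of input variants, exactly the sensitivities this study quantifies, the same catalogue refit against different references can yield statistically different contributions.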


Subject(s)
Colorectal Neoplasms , Computational Biology , Mutation , Humans , Colorectal Neoplasms/genetics , Colorectal Neoplasms/pathology , Computational Biology/methods , Workflow , Cell Line, Tumor , Exome Sequencing/methods , Female , Algorithms
18.
Article in English | MEDLINE | ID: mdl-38765212

ABSTRACT

The presentation of pulmonary embolism (PE) varies from asymptomatic to life-threatening, and management involves multiple specialists. Timely diagnosis of PE is based on clinical presentation, D-dimer testing, and computed tomography pulmonary angiogram (CTPA), and assessment by a Pulmonary Embolism Response Team (PERT) is critical to management. Artificial intelligence (AI) technology plays a key role in the PE workflow with automated detection and flagging of suspected PE in CTPA imaging. HIPAA-compliant communication features of mobile and web-based applications may facilitate PERT workflow with immediate access to imaging, team activation, and real-time information sharing and collaboration. In this review, we describe contemporary diagnostic tools, specifically AI, that are important in the triage and diagnosis of PE.


Subject(s)
Artificial Intelligence , Biomarkers , Computed Tomography Angiography , Fibrin Fibrinogen Degradation Products , Predictive Value of Tests , Pulmonary Embolism , Humans , Pulmonary Embolism/diagnostic imaging , Pulmonary Embolism/diagnosis , Fibrin Fibrinogen Degradation Products/analysis , Fibrin Fibrinogen Degradation Products/metabolism , Biomarkers/blood , Workflow , Prognosis , Radiographic Image Interpretation, Computer-Assisted , Pulmonary Artery/diagnostic imaging , Pulmonary Artery/physiopathology
19.
J Pak Med Assoc ; 74(4 (Supple-4)): S109-S116, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38712418

ABSTRACT

Breast cancer (BC) assessment has evolved from traditional morphological analysis to molecular profiling, identifying new subtypes. Ki-67, a prognostic biomarker, helps classify subtypes and guide chemotherapy decisions. This review explores how artificial intelligence (AI) can optimize Ki-67 assessment, improving precision and workflow efficiency in BC management. The study presents a critical analysis of the current state of AI-powered Ki-67 assessment. Results demonstrate high agreement between AI and standard Ki-67 assessment methods, highlighting AI's potential as an auxiliary tool for pathologists. Despite these advancements, the review acknowledges limitations such as the restricted timeframe and diverse study designs, emphasizing the need for further research to address these concerns. In conclusion, AI holds promise in enhancing the precision and workflow efficiency of Ki-67 assessment in BC diagnosis. While challenges persist, the integration of AI can revolutionize BC care, making it more accessible and precise, even in resource-limited settings.


Subject(s)
Artificial Intelligence , Breast Neoplasms , Ki-67 Antigen , Workflow , Humans , Breast Neoplasms/metabolism , Breast Neoplasms/pathology , Breast Neoplasms/diagnosis , Ki-67 Antigen/metabolism , Female , Biomarkers, Tumor/metabolism
20.
Nat Commun ; 15(1): 3675, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38693118

ABSTRACT

The wide applications of liquid chromatography-mass spectrometry (LC-MS) in untargeted metabolomics demand an easy-to-use, comprehensive computational workflow to support efficient and reproducible data analysis. However, current tools were primarily developed to perform specific tasks in LC-MS based metabolomics data analysis. Here we introduce MetaboAnalystR 4.0 as a streamlined pipeline covering raw spectra processing, compound identification, statistical analysis, and functional interpretation. The key features of MetaboAnalystR 4.0 include an auto-optimized feature detection and quantification algorithm for LC-MS1 spectra processing, efficient MS2 spectra deconvolution and compound identification for data-dependent or data-independent acquisition, and more accurate functional interpretation through integrated spectral annotation. Comprehensive validation studies using LC-MS1 and MS2 spectra obtained from standards mixtures, dilution series and clinical metabolomics samples have shown its excellent performance across a wide range of common tasks such as peak picking, spectral deconvolution, and compound identification, with good computing efficiency. Together with its existing statistical analysis utilities, MetaboAnalystR 4.0 represents a significant step toward a unified, end-to-end workflow for LC-MS based global metabolomics in the open-source R environment.
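Peak picking, one of the validated tasks, amounts at its simplest to locating local maxima that rise above a noise estimate. The toy stand-in below illustrates the idea only; MetaboAnalystR's auto-optimized algorithm is far more involved:

```python
import numpy as np

def pick_peaks(signal, snr=3.0):
    """Indices of local maxima exceeding snr times the median absolute
    deviation, a crude noise estimate. A toy stand-in for real
    chromatographic peak detection."""
    x = np.asarray(signal, float)
    noise = np.median(np.abs(x - np.median(x))) + 1e-12
    # Interior points higher than both neighbours
    idx = np.flatnonzero((x[1:-1] > x[:-2]) & (x[1:-1] >= x[2:])) + 1
    return idx[x[idx] > snr * noise]
```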


Subject(s)
Mass Spectrometry , Metabolomics , Workflow , Algorithms , Chromatography, Liquid/methods , Liquid Chromatography-Mass Spectrometry , Mass Spectrometry/methods , Metabolomics/methods , Software