Results 1 - 3 of 3
1.
PLoS Comput Biol ; 19(5): e1011137, 2023 May.
Article in English | MEDLINE | ID: mdl-37253059

ABSTRACT

Gene editing characterization with currently available tools does not always yield precise relative proportions of the different types of gene edits present in an edited bulk of cells. We have developed CRISPR-Analytics (CRISPR-A), a comprehensive and versatile genome editing web application and Nextflow pipeline that supports the design and analysis of gene editing experiments. CRISPR-A provides a robust gene editing analysis pipeline composed of data analysis tools and simulation. It achieves higher accuracy than current tools and expands their functionality. The analysis includes mock-based noise correction, spike-in calibrated amplification bias reduction, and advanced interactive graphics. This added robustness makes the tool well suited to highly sensitive cases such as clinical samples or experiments with low editing efficiencies. It also supports experimental design through simulation of gene editing results. CRISPR-A is therefore well suited to multiple kinds of experiments, such as double-stranded DNA break-based engineering, base editing (BE), prime editing (PE), and homology-directed repair (HDR), without requiring the experimental approach to be specified.
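The abstract does not spell out how the mock-based noise correction is implemented; the sketch below only illustrates the general idea, which is to subtract the apparent edit rates observed in an unedited mock control from those of the edited sample. All function names and edit-class labels here are hypothetical and are not CRISPR-A's actual API.

    from collections import Counter

    def edit_proportions(read_classes):
        # Fraction of reads per edit class, e.g. "wt", "deletion", "insertion", "hdr".
        counts = Counter(read_classes)
        total = sum(counts.values())
        return {cls: n / total for cls, n in counts.items()}

    def mock_corrected(edited_classes, mock_classes):
        # Subtract the background rate seen in the unedited mock control from each
        # non-wild-type class, clamp at zero, and let wild type absorb the remainder.
        edited = edit_proportions(edited_classes)
        mock = edit_proportions(mock_classes)
        corrected = {cls: max(p - mock.get(cls, 0.0), 0.0)
                     for cls, p in edited.items() if cls != "wt"}
        corrected["wt"] = 1.0 - sum(corrected.values())
        return corrected

In this simplified view, sequencing noise or PCR artifacts that appear in the mock sample are not counted as real edits in the treated sample; the actual pipeline additionally handles spike-in calibration and amplification bias, which this sketch omits.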


Subject(s)
CRISPR-Cas Systems , Gene Editing , Gene Editing/methods , CRISPR-Cas Systems/genetics , Clustered Regularly Interspaced Short Palindromic Repeats/genetics , Recombinational DNA Repair , DNA Breaks, Double-Stranded
2.
J Med Imaging (Bellingham) ; 10(6): 061403, 2023 Nov.
Article in English | MEDLINE | ID: mdl-36814939

ABSTRACT

Purpose: Deep learning has shown great promise as the backbone of clinical decision support systems. Synthetic data generated by generative models can enhance the performance and capabilities of data-hungry deep learning models. However, (1) the availability of (synthetic) datasets is limited and (2) generative models are complex to train, which hinders their adoption in research and clinical applications. To reduce this entry barrier, we explore generative model sharing to allow more researchers to access, generate, and benefit from synthetic data.

Approach: We propose medigan, a one-stop shop for pretrained generative models implemented as an open-source, framework-agnostic Python library. After gathering end-user requirements, design decisions based on usability, technical feasibility, and scalability are formulated. Subsequently, we implement medigan based on modular components for generative model (i) execution, (ii) visualization, (iii) search & ranking, and (iv) contribution. We integrate pretrained models with applications across modalities such as mammography, endoscopy, x-ray, and MRI.

Results: The scalability and design of the library are demonstrated by its growing number of integrated and readily usable pretrained generative models, which include 21 models utilizing nine different generative adversarial network architectures trained on 11 different datasets. We further analyze three medigan applications: (a) enabling community-wide sharing of restricted data, (b) investigating generative model evaluation metrics, and (c) improving clinical downstream tasks. In (b), we extract Fréchet inception distances (FID), demonstrating FID variability depending on image normalization and on the use of radiology-specific feature extractors.

Conclusion: medigan allows researchers and developers to create, increase, and domain-adapt their training data in just a few lines of code. We show medigan's viability as a platform for generative model sharing, capable of enriching and accelerating the development of clinical machine learning models. Our multi-model synthetic data experiments uncover standards for assessing and reporting metrics, such as FID, in image synthesis studies.
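The FID analysis in (b) uses the standard Fréchet distance between Gaussian fits of real and synthetic feature distributions. A minimal sketch follows, assuming feature matrices of shape (N, D) have already been extracted with some encoder; the abstract's point is precisely that this choice of feature extractor, and the image normalization applied beforehand, change the resulting score.

    import numpy as np
    from scipy import linalg

    def frechet_distance(feats_real, feats_synth):
        # Fit a Gaussian (mean, covariance) to each feature set.
        mu1, mu2 = feats_real.mean(axis=0), feats_synth.mean(axis=0)
        sigma1 = np.cov(feats_real, rowvar=False)
        sigma2 = np.cov(feats_synth, rowvar=False)
        # Matrix square root of the covariance product.
        covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
        if np.iscomplexobj(covmean):
            covmean = covmean.real  # drop tiny imaginary parts from numerical error
        diff = mu1 - mu2
        return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

Because the score is computed on encoder features rather than pixels, reporting FID without stating the feature extractor and the preprocessing makes values from different studies hard to compare, which is the reporting standard the conclusion argues for.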

3.
Artif Intell Med ; 132: 102386, 2022 10.
Article in English | MEDLINE | ID: mdl-36207090

ABSTRACT

Computer-aided detection systems based on deep learning have shown great potential in breast cancer detection. However, the lack of domain generalization of artificial neural networks is an important obstacle to their deployment in changing clinical environments. In this study, we explored the domain generalization of deep learning methods for mass detection in digital mammography and analyzed in depth the sources of domain shift in a large-scale multi-center setting. To this end, we compared the performance of eight state-of-the-art detection methods, including Transformer-based models, trained in a single domain and tested in five unseen domains. Moreover, a single-source mass detection training pipeline was designed to improve domain generalization without requiring images from the new domain. The results show that our workflow generalized better than state-of-the-art transfer learning-based approaches in four out of five domains, while reducing the domain shift caused by differences in acquisition protocols and scanner manufacturers. Subsequently, an extensive analysis was performed to identify the covariate shifts with the greatest effect on detection performance, such as those due to differences in patient age, breast density, mass size, and mass malignancy. Ultimately, this comprehensive study provides key insights and best practices for future research on domain generalization in deep learning-based breast cancer detection.
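The abstract does not describe how the covariate-shift analysis was implemented; one common way to run such a stratified comparison is sketched below. The per-finding table, its column names, and the bin edges are assumptions for illustration only, not the paper's actual data format.

    import pandas as pd

    # Hypothetical per-finding table: one row per annotated mass, with a flag
    # indicating whether the detector found it at the chosen operating point.
    findings = pd.DataFrame({
        "domain":         ["A", "A", "B", "B", "B", "A"],
        "breast_density": ["dense", "fatty", "dense", "dense", "fatty", "fatty"],
        "mass_size_mm":   [8, 22, 11, 30, 15, 6],
        "detected":       [0, 1, 1, 1, 1, 0],
    })

    # Bin the continuous covariate so strata are comparable across domains.
    findings["size_bin"] = pd.cut(findings["mass_size_mm"],
                                  bins=[0, 10, 20, 100],
                                  labels=["small", "medium", "large"])

    # Per-stratum sensitivity: a large gap between domains within the same
    # stratum points to that covariate as a source of performance shift.
    sensitivity = (findings
                   .groupby(["domain", "breast_density", "size_bin"], observed=True)["detected"]
                   .mean())
    print(sensitivity)

Comparing within-stratum sensitivity across domains separates true acquisition-related domain shift (same covariates, different scanners) from differences in case mix, which is the distinction the study's covariate analysis is after.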


Subject(s)
Breast Neoplasms , Deep Learning , Breast Neoplasms/diagnostic imaging , Female , Humans , Machine Learning , Mammography/methods , Neural Networks, Computer