Results 1 - 7 of 7
1.
PLoS Comput Biol; 19(7): e1011323, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37490493

ABSTRACT

Fluorescence staining techniques, such as Cell Painting, together with fluorescence microscopy have proven invaluable for visualizing and quantifying the effects that drugs and other perturbations have on cultured cells. However, fluorescence microscopy is expensive, time-consuming, labor-intensive, and the stains applied can be cytotoxic, interfering with the activity under study. The simplest form of microscopy, brightfield microscopy, lacks these downsides, but the images produced have low contrast and the cellular compartments are difficult to discern. Nevertheless, by harnessing deep learning, these brightfield images may still be sufficient for various predictive purposes. In this study, we compared the predictive performance of models trained on fluorescence images to those trained on brightfield images for predicting the mechanism of action (MoA) of different drugs. We also extracted CellProfiler features from the fluorescence images and used them to benchmark the performance. Overall, we found comparable and largely correlated predictive performance for the two imaging modalities. This is promising for future studies of MoAs in time-lapse experiments for which using fluorescence images is problematic. Explorations based on explainable AI techniques also provided valuable insights regarding compounds that were better predicted by one modality over the other.
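The comparison described above amounts to training the same classifier on features derived from each imaging modality and contrasting the resulting MoA predictions. The sketch below illustrates that idea only; the feature arrays, labels and random-forest classifier are placeholders, not the models or data used in the paper.

```python
# Minimal sketch (not the authors' code): compare MoA classifiers trained on
# features extracted from fluorescence vs. brightfield images. Feature arrays,
# labels and the choice of classifier are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_cells, n_moa = 600, 5
moa_labels = rng.integers(0, n_moa, n_cells)

# Stand-ins for per-cell feature vectors (e.g. CNN embeddings or CellProfiler features).
fluorescence_feats = rng.normal(size=(n_cells, 128)) + moa_labels[:, None] * 0.5
brightfield_feats = rng.normal(size=(n_cells, 128)) + moa_labels[:, None] * 0.3

for name, feats in [("fluorescence", fluorescence_feats), ("brightfield", brightfield_feats)]:
    scores = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                             feats, moa_labels, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```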


Subjects
Image Processing, Computer-Assisted; Microscopy, Fluorescence/methods; Cells, Cultured; Image Processing, Computer-Assisted/methods
2.
PLoS One; 16(10): e0258546, 2021.
Article in English | MEDLINE | ID: mdl-34653209

ABSTRACT

Fluorescence microscopy, which visualizes cellular components with fluorescent stains, is an invaluable method in image cytometry. From these images various cellular features can be extracted. Together these features form phenotypes that can be used to determine effective drug therapies, such as those based on nanomedicines. Unfortunately, fluorescence microscopy is time-consuming, expensive, labour-intensive, and toxic to the cells. Bright-field images lack these downsides but also lack the clear contrast of the cellular components and hence are difficult to use for downstream analysis. Generating the fluorescence images directly from bright-field images using virtual staining (also known as "label-free prediction" and "in-silico labeling") offers the best of both worlds, but it is very challenging for cellular structures that are poorly visible in the bright-field images. To tackle this problem, deep learning models were explored to learn the mapping between bright-field and fluorescence images for adipocyte cell images. The models were tailored for each imaging channel, paying particular attention to the various challenges in each case, and those with the highest fidelity in extracted cell-level features were selected. The solutions included utilizing privileged information for the nuclear channel, and using image gradient information and adversarial training for the lipids channel. The former resulted in better morphological and count features and the latter resulted in more faithfully captured defects in the lipids, which are key features required for downstream analysis of these channels.
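The channel-specific solutions mentioned above can be pictured as image-to-image regression with additional loss terms. The following sketch shows a toy convolutional network trained with an L1 loss plus an image-gradient term of the kind mentioned for the lipid channel; the architecture, shapes and loss weighting are illustrative assumptions, not the published models.

```python
# Minimal sketch (assumed architecture, not the paper's models): a small
# convolutional network mapping a bright-field image to one fluorescence
# channel, trained with an L1 loss plus an image-gradient term.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVirtualStainer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def gradient_loss(pred, target):
    # Penalize differences in horizontal and vertical image gradients.
    dx = lambda img: img[..., :, 1:] - img[..., :, :-1]
    dy = lambda img: img[..., 1:, :] - img[..., :-1, :]
    return F.l1_loss(dx(pred), dx(target)) + F.l1_loss(dy(pred), dy(target))

model = TinyVirtualStainer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
brightfield = torch.rand(4, 1, 64, 64)   # placeholder input batch
fluorescence = torch.rand(4, 1, 64, 64)  # placeholder target channel

pred = model(brightfield)
loss = F.l1_loss(pred, fluorescence) + 0.5 * gradient_loss(pred, fluorescence)
loss.backward()
optimizer.step()
print(float(loss))
```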


Subjects
Adipocytes/pathology; Microscopy, Fluorescence/methods; Cell Nucleus/pathology; Cytoplasm/pathology; Humans; Image Processing, Computer-Assisted; Models, Biological; Staining and Labeling
3.
Nanomedicine (Lond); 16(13): 1097-1110, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33949890

ABSTRACT

Background: Early prediction of time-lapse microscopy experiments enables intelligent data management and decision-making. Aim: Using time-lapse data of HepG2 cells exposed to lipid nanoparticles loaded with mRNA for expression of GFP, the authors hypothesized that it is possible to predict in advance whether a cell will express GFP. Methods: The first modeling approach used a convolutional neural network extracting per-cell features at early time points. These features were then combined and explored using either a long short-term memory network (approach 2) or time series feature extraction and gradient boosting machines (approach 3). Results: Accounting for the temporal dynamics significantly improved performance. Conclusion: The results highlight the benefit of accounting for temporal dynamics when studying drug delivery using high-content imaging.
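The second approach combines per-cell features from early time points with a recurrent model. A minimal sketch of that idea is given below, assuming placeholder feature dimensions and synthetic data rather than the study's implementation.

```python
# Minimal sketch (illustrative, not the study's code): an LSTM over per-cell
# feature vectors from early time points, predicting whether the cell will
# later express GFP. Feature dimensions and data are placeholders.
import torch
import torch.nn as nn

class EarlyGFPPredictor(nn.Module):
    def __init__(self, n_features=32, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):                    # x: (batch, time, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1]).squeeze(-1)  # one logit per cell

model = EarlyGFPPredictor()
features = torch.randn(8, 5, 32)             # 8 cells, 5 early time points
labels = torch.randint(0, 2, (8,)).float()   # placeholder GFP outcome
loss = nn.functional.binary_cross_entropy_with_logits(model(features), labels)
loss.backward()
print(float(loss))
```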


Subjects
Deep Learning; Nanoparticles; Pharmaceutical Preparations; Lipids; Neural Networks, Computer
4.
Gigascience; 10(3), 2021 Mar 19.
Article in English | MEDLINE | ID: mdl-33739401

ABSTRACT

BACKGROUND: Large streamed datasets, characteristic of life science applications, are often resource-intensive to process, transport and store. We propose a pipeline model, a design pattern for scientific pipelines, where an incoming stream of scientific data is organized into a tiered or ordered "data hierarchy". We introduce the HASTE Toolkit, a proof-of-concept cloud-native software toolkit based on this pipeline model, to partition and prioritize data streams to optimize use of limited computing resources. FINDINGS: In our pipeline model, an "interestingness function" assigns an interestingness score to data objects in the stream, inducing a data hierarchy. From this score, a "policy" guides decisions on how to prioritize computational resource use for a given object. The HASTE Toolkit is a collection of tools to adopt this approach. We evaluate it with two microscopy imaging case studies. The first is a high-content screening experiment, where images are analyzed in an on-premise container cloud to prioritize storage and subsequent computation. The second considers edge processing of images for upload into the public cloud for real-time control of a transmission electron microscope. CONCLUSIONS: Through our evaluation, we created smart data pipelines capable of effective use of storage, compute, and network resources, enabling more efficient data-intensive experiments. We note a beneficial separation between the scientific concerns of data priority and the implementation of this behaviour for different resources in different deployment contexts. The toolkit allows intelligent prioritization to be "bolted on" to new and existing systems, and is intended for use with a range of technologies in different deployment scenarios.
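The core pipeline-model idea is a function that scores incoming objects and a policy that maps the score to a tier. The sketch below illustrates this with hypothetical names and a toy focus-score metric; it is not the HASTE Toolkit API.

```python
# Minimal sketch of the pipeline-model idea (names are hypothetical, not the
# HASTE Toolkit API): an interestingness function scores each incoming object
# and a policy maps the score to a tier that decides storage/compute priority.
from dataclasses import dataclass

@dataclass
class DataObject:
    name: str
    focus_score: float   # placeholder image-quality metric

def interestingness(obj: DataObject) -> float:
    # Example: better-focused images are considered more interesting.
    return max(0.0, min(1.0, obj.focus_score))

def policy(score: float) -> str:
    # Map the interestingness score to a tier in the data hierarchy.
    if score >= 0.8:
        return "tier-1: analyse at full resolution now"
    if score >= 0.4:
        return "tier-2: keep, analyse later"
    return "tier-3: downsample or discard"

stream = [DataObject("img_001", 0.93), DataObject("img_002", 0.55), DataObject("img_003", 0.12)]
for obj in stream:
    print(obj.name, "->", policy(interestingness(obj)))
```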


Subjects
Biological Science Disciplines; Software; Diagnostic Imaging
5.
PLoS One; 16(2): e0246336, 2021.
Article in English | MEDLINE | ID: mdl-33524053

ABSTRACT

Microscopy imaging experiments generate vast amounts of data, and there is a high demand for smart acquisition and analysis methods. This is especially true for transmission electron microscopy (TEM), where terabytes of data are produced if imaging a full sample at high resolution, and analysis can take several hours. One way to tackle this issue is to collect a continuous stream of low-resolution images whilst moving the sample under the microscope, and thereafter use this data to find the parts of the sample deemed most valuable for high-resolution imaging. However, such image streams are degraded by both motion blur and noise. Building on deep learning-based approaches developed for deblurring videos of natural scenes, we explore the opportunities and limitations of deblurring and denoising images captured from a fast image stream collected by a TEM. We start from existing neural network architectures and make adjustments to convolution blocks and loss functions to better fit TEM data. We present deblurring results on two real datasets of images of kidney tissue and a calibration grid. Both datasets consist of low-quality images from a fast image stream captured by moving the sample under the microscope, and the corresponding high-quality images of the same region, captured after stopping the movement at each position to let all motion settle. We also explore the generalizability and overfitting on real and synthetically generated data. The quality of the restored images, evaluated both quantitatively and visually, shows that using deep learning for image restoration of TEM live image streams has great potential but also comes with some limitations.
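Such restoration can be framed as learning a mapping from a degraded fast-stream frame to the corresponding sharp frame captured after the motion settled. The sketch below assumes a small residual CNN and a Charbonnier loss, both common choices in deblurring work; it is not the paper's networks.

```python
# Minimal sketch (assumed setup, not the paper's architectures): a small
# residual CNN mapping a motion-blurred, noisy TEM frame to its sharp
# counterpart, trained with a Charbonnier (smooth L1-like) loss.
import torch
import torch.nn as nn

class ResidualDeblurrer(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )
    def forward(self, x):
        return x + self.body(x)   # predict a correction to the degraded frame

def charbonnier(pred, target, eps=1e-3):
    return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))

model = ResidualDeblurrer()
blurred = torch.rand(2, 1, 64, 64)   # placeholder fast-stream frame
sharp = torch.rand(2, 1, 64, 64)     # placeholder frame captured after motion settled
loss = charbonnier(model(blurred), sharp)
loss.backward()
print(float(loss))
```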


Subjects
Image Processing, Computer-Assisted; Microscopy, Electron, Transmission/methods; Image Processing, Computer-Assisted/methods; Models, Statistical; Neural Networks, Computer; Video Recording/methods
6.
IEEE J Biomed Health Inform; 25(2): 371-380, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32750907

ABSTRACT

With the increasing amount of image data collected from biomedical experiments there is an urgent need for smarter and more effective analysis methods. Many scientific questions require analysis of image sub-regions related to some specific biology. Finding such regions of interest (ROIs) at low resolution and limiting the data subjected to final quantification at full resolution can reduce computational requirements and save time. In this paper we propose a three-step pipeline: First, bounding boxes for ROIs are located at low resolution. Next, ROIs are subjected to semantic segmentation into sub-regions at mid-resolution. We also estimate the confidence of the segmented sub-regions. Finally, quantitative measurements are extracted at full resolution. We use deep learning for the first two steps in the pipeline and conformal prediction for confidence assessment. We show that limiting final quantitative analysis to sub-regions with full confidence reduces noise and increases separability of observed biological effects.
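The confidence step relies on conformal prediction: sub-regions are kept for quantification only when the prediction set at a chosen significance level contains a single class. The following sketch illustrates that calculation on synthetic softmax outputs; it is not the paper's code.

```python
# Minimal sketch of the conformal-prediction step (illustrative, not the
# paper's implementation): calibrate nonconformity scores from softmax
# outputs, then keep only sub-regions whose prediction set contains a single
# class at the chosen significance level ("full confidence").
import numpy as np

rng = np.random.default_rng(1)
n_classes, significance = 3, 0.1

# Calibration set: softmax outputs and (placeholder) true labels for sub-regions.
cal_probs = rng.dirichlet(np.ones(n_classes) * 0.5, size=200)
cal_labels = cal_probs.argmax(axis=1)
cal_noncf = 1.0 - cal_probs[np.arange(200), cal_labels]   # nonconformity scores

def prediction_set(probs):
    # p-value per class: fraction of calibration scores at least as nonconforming.
    pvals = [(np.sum(cal_noncf >= 1.0 - probs[c]) + 1) / (len(cal_noncf) + 1)
             for c in range(n_classes)]
    return [c for c, p in enumerate(pvals) if p > significance]

new_region_probs = np.array([0.85, 0.10, 0.05])
pred_set = prediction_set(new_region_probs)
fully_confident = len(pred_set) == 1
print(pred_set, "keep for quantification" if fully_confident else "exclude")
```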


Subjects
Deep Learning; Humans; Image Processing, Computer-Assisted; Semantics
7.
Cytometry A; 95(4): 366-380, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30565841

ABSTRACT

Artificial intelligence, deep convolutional neural networks, and deep learning are all niche terms that are increasingly appearing in scientific presentations as well as in the general media. In this review, we focus on deep learning and how it is applied to microscopy image data of cells and tissue samples. Starting with an analogy to neuroscience, we aim to give the reader an overview of the key concepts of neural networks, and an understanding of how deep learning differs from more classical approaches for extracting information from image data. We aim to increase the understanding of these methods, while highlighting considerations regarding input data requirements, computational resources, challenges, and limitations. We do not provide a full manual for applying these methods to your own data, but rather review previously published articles on deep learning in image cytometry, and guide the readers toward further reading on specific networks and methods, including new methods not yet applied to cytometry data. © 2018 The Authors. Cytometry Part A published by Wiley Periodicals, Inc. on behalf of International Society for Advancement of Cytometry.


Subjects
Deep Learning; Image Cytometry/methods; Animals; Artificial Intelligence/trends; Deep Learning/trends; Humans; Image Cytometry/instrumentation; Image Cytometry/trends; Image Processing, Computer-Assisted/methods; Machine Learning; Microscopy/instrumentation; Microscopy/methods; Neural Networks, Computer