Results 1-9 of 9
1.
Comput Methods Programs Biomed ; 208: 106291, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34333205

ABSTRACT

BACKGROUND AND OBJECTIVE: Computerized pathology image analysis is an important tool in research and clinical settings, which enables quantitative tissue characterization and can assist a pathologist's evaluation. The aim of our study is to systematically quantify and minimize uncertainty in the output of computer-based pathology image analysis. METHODS: Uncertainty quantification (UQ) and sensitivity analysis (SA) methods, such as Variance-Based Decomposition (VBD) and Morris One-At-a-Time (MOAT), are employed to track and quantify uncertainty in a real-world application with large Whole Slide Imaging datasets: 943 Breast Invasive Carcinoma (BRCA) and 381 Lung Squamous Cell Carcinoma (LUSC) patients. Because these studies are compute intensive, high-performance computing systems and efficient UQ/SA methods were combined to provide efficient execution. UQ/SA has been able to highlight parameters of the application that impact the results, as well as nuclear features that carry most of the uncertainty. Using this information, we built a method for selecting stable features that minimize application output uncertainty. RESULTS: The results show that input parameter variations significantly impact all stages (segmentation, feature computation, and survival analysis) of the use case application. We then identified and classified features according to their robustness to parameter variation and, using the proposed feature selection strategy, improved patient grouping stability in survival analysis by 17% and 34% for BRCA and LUSC, respectively. CONCLUSIONS: This strategy created more robust analyses, demonstrating that SA and UQ are important methods that may increase confidence in digital pathology.
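As an illustration of the Morris One-At-a-Time screening named in this abstract, here is a minimal pure-Python sketch of elementary-effect estimation over the unit hypercube; the function names, trajectory count, and grid levels are illustrative, not the authors' implementation:

```python
import random

def morris_oat(f, k, levels=4, trajectories=8, seed=0):
    """Estimate Morris mu* (mean absolute elementary effect) per parameter.

    Each trajectory starts at a random grid point and perturbs one of the
    k parameters at a time by a fixed step delta, recording the change in f.
    """
    rng = random.Random(seed)
    delta = levels / (2 * (levels - 1))
    grid = [i / (levels - 1) for i in range(levels)]
    effects = [[] for _ in range(k)]
    for _ in range(trajectories):
        # Start in the lower half of the grid so x + delta stays in [0, 1].
        x = [rng.choice(grid[: levels // 2]) for _ in range(k)]
        y = f(x)
        for i in rng.sample(range(k), k):  # perturb parameters in random order
            x2 = list(x)
            x2[i] += delta
            y2 = f(x2)
            effects[i].append((y2 - y) / delta)
            x, y = x2, y2
    return [sum(abs(e) for e in es) / len(es) for es in effects]

# Toy model: the first parameter dominates the output.
mu = morris_oat(lambda x: 10 * x[0] + 0.1 * x[1], 2)
```

A real UQ study would feed actual pipeline parameters and outputs into `f`; the screening then ranks parameters by mu* so the influential ones can be studied further.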


Subjects
Image Processing, Computer-Assisted; Humans; Uncertainty
2.
Concurr Comput ; 32(2)2020 Jan 25.
Article in English | MEDLINE | ID: mdl-32669980

ABSTRACT

Parameter sensitivity analysis (SA) is an effective tool to gain knowledge about complex analysis applications and assess the variability in their analysis results. However, it is an expensive process as it requires the execution of the target application multiple times with a large number of different input parameter values. In this work, we propose optimizations to reduce the overall computation cost of SA in the context of analysis applications that segment high-resolution slide tissue images, i.e., images with resolutions of 100k × 100k pixels. Two cost-cutting techniques are combined to efficiently execute SA: use of distributed hybrid systems for parallel execution and computation reuse at multiple levels of an analysis pipeline to reduce the amount of computation. These techniques were evaluated using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual socket CPU. Our parallel execution method attained an efficiency of over 90% on 256 nodes. The hybrid execution on the CPU and Intel Phi improved the performance by 2×. Multilevel computation reuse led to performance gains of over 2.9×.
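The multilevel computation-reuse idea can be illustrated with a toy two-stage pipeline: when a stage depends only on a subset of the parameters, caching its output lets SA runs that share those parameter values skip recomputation. The stage names and costs below are hypothetical stand-ins, not the paper's actual segmentation stages:

```python
from functools import lru_cache

calls = {"a": 0, "b": 0}  # count how often each stage actually executes

@lru_cache(maxsize=None)
def stage_a(p1):
    """Early pipeline stage that depends only on parameter p1 (cached)."""
    calls["a"] += 1
    return p1 * 2  # stand-in for an expensive preprocessing step

def stage_b(a_out, p2):
    """Later stage that depends on stage_a's output and parameter p2."""
    calls["b"] += 1
    return a_out + p2  # stand-in for an expensive segmentation step

def run(p1, p2):
    return stage_b(stage_a(p1), p2)

# Six SA runs over a 2 x 3 parameter grid: stage_a executes only twice,
# because runs sharing p1 reuse its cached result.
results = [run(p1, p2) for p1 in (1, 2) for p2 in (10, 20, 30)]
```

The same principle extends to deeper pipelines: each stage is keyed by the parameters it actually depends on, so reuse compounds across levels.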

3.
Comput Biol Med ; 108: 371-381, 2019 05.
Article in English | MEDLINE | ID: mdl-31054503

ABSTRACT

Digital pathology imaging enables valuable quantitative characterizations of tissue state at the sub-cellular level. While there is a growing set of methods for analysis of whole slide tissue images, many of them are sensitive to changes in input parameters. Evaluating how analysis results are affected by variations in input parameters is important for the development of robust methods. Executing algorithm sensitivity analyses by systematically varying input parameters is an expensive task because a single evaluation run with a moderate number of tissue images may take hours or days. Our work investigates the use of Surrogate Models (SMs) along with parallel execution to speed up parameter sensitivity analysis (SA). This approach significantly reduces the SA cost, because the SM execution is inexpensive. The evaluation of several SM strategies with two image segmentation workflows demonstrates that an SA study with SMs attains results close to an SA with real application runs (mean absolute error lower than 0.022), while the SM accelerates the SA execution by 51×. We also show that, although the number of parameters in the example workflows is high, most of the uncertainty can be associated with a few parameters. In order to identify the impact of variations in segmentation results on downstream analyses, we carried out a survival analysis with 387 Lung Squamous Cell Carcinoma cases. This analysis was repeated using 3 values for the most significant parameters identified by the SA for the two segmentation algorithms; about 600 million cell nuclei were segmented per run. The results show that the significance of the survival correlations of patient groups, assessed by a logrank test, is strongly affected by the segmentation parameter changes. This indicates that sensitivity analysis is an important tool for evaluating the stability of conclusions from image analyses.
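The surrogate-model idea boils down to fitting a cheap emulator on a handful of expensive application runs and then sampling the emulator densely in place of the real application. The deliberately simple linear surrogate below is a stand-in for the SM strategies evaluated in the paper; the model and parameter values are made up for illustration:

```python
def fit_linear_surrogate(xs, ys):
    """Least-squares line through (xs, ys): a cheap emulator of an expensive model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    a = my - b * mx
    return lambda x: a + b * x

def expensive_model(p):
    """Stand-in for a segmentation run with parameter p (hours in reality)."""
    return 3.0 * p + 1.0

# Fit on a few expensive runs, then probe the cheap surrogate densely for SA.
train = [0.0, 0.25, 0.5, 0.75, 1.0]
surrogate = fit_linear_surrogate(train, [expensive_model(p) for p in train])
dense = [surrogate(i / 1000) for i in range(1001)]  # 1001 cheap evaluations
```

In practice the emulator would be a higher-capacity model (e.g., Gaussian process or tree ensemble) over many parameters, but the cost asymmetry between training runs and surrogate evaluations is the same.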


Subjects
Algorithms; Carcinoma, Squamous Cell; Cell Nucleus/pathology; Image Processing, Computer-Assisted; Lung Neoplasms; Pattern Recognition, Automated; Workflow; Carcinoma, Squamous Cell/diagnostic imaging; Carcinoma, Squamous Cell/mortality; Carcinoma, Squamous Cell/pathology; Databases, Factual; Female; Humans; Lung Neoplasms/diagnosis; Lung Neoplasms/mortality; Lung Neoplasms/pathology; Male
4.
J Comput Biol ; 26(9): 908-922, 2019 09.
Article in English | MEDLINE | ID: mdl-30951368

ABSTRACT

Most of the exact algorithms for biological sequence comparison obtain the optimal result by calculating dynamic programming (DP) matrices with quadratic time and space complexity. Fickett prunes the DP matrices by only computing values inside a band of size k, thus reducing time and space complexity to O(kn). Myers and Miller (MM) proposed a linear space algorithm that splits a sequence comparison into multiple comparisons of subsequences, using a divide-and-conquer approach. In this article, we propose a parallel strategy that combines the Fickett and MM algorithms, thus adding pruning capability to the MM algorithm. By using an appropriate Fickett band in each subsequence comparison, we can significantly reduce the number of cells computed in the DP matrices. Our strategy was integrated into stages 3 and 4 of CUDAlign, a state-of-the-art parallel tool for optimal biological sequence comparison, generating two implementations: Fickett-MM-4 and Fickett-MM-3-4. These implementations were used to compare real DNA sequences, reaching a speedup of 101.19× in the comparison of two 10-million-base-pair sequences, relative to CUDAlign stages 3 and 4. In this case, the execution time was reduced from 71.42 to 0.7 seconds.
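The band pruning can be illustrated with a banded edit-distance kernel: only cells within half-width k of the main diagonal are computed, giving O(kn) work instead of the full O(nm) matrix. This is a generic sketch of Fickett-style banding, not CUDAlign's actual scoring scheme:

```python
def banded_distance(s, t, k):
    """Edit distance computed only inside a diagonal band of half-width k
    (Fickett-style pruning). Exact whenever the optimal alignment path
    stays inside the band; O(k*n) cells instead of O(n*m)."""
    INF = float("inf")
    n, m = len(s), len(t)
    if abs(n - m) > k:
        return None  # no alignment path fits inside the band
    prev = {j: j for j in range(min(m, k) + 1)}  # row i = 0
    for i in range(1, n + 1):
        cur = {}
        for j in range(max(0, i - k), min(m, i + k) + 1):
            best = prev.get(j, INF) + 1                   # deletion from s
            if j > 0:
                best = min(best, cur.get(j - 1, INF) + 1)  # insertion into s
                best = min(best, prev.get(j - 1, INF)
                           + (s[i - 1] != t[j - 1]))       # match/substitution
            cur[j] = best
        prev = cur
    return prev[m]
```

Choosing k just large enough to contain the optimal path (as the Fickett-MM strategy does per subsequence) keeps the result exact while skipping most of the matrix.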


Subjects
Algorithms; Computational Biology/methods; Sequence Alignment/methods; Sequence Analysis/methods
5.
J Digit Imaging ; 32(3): 521-533, 2019 06.
Article in English | MEDLINE | ID: mdl-30402669

ABSTRACT

We propose a software platform that integrates methods and tools for multi-objective parameter auto-tuning in tissue image segmentation workflows. The goal of our work is to provide an approach for improving the accuracy of nucleus/cell segmentation pipelines by tuning their input parameters. The shape, size, and texture features of nuclei in tissue are important biomarkers for disease prognosis, and accurate computation of these features depends on accurate delineation of boundaries of nuclei. Input parameters in many nucleus segmentation workflows affect segmentation accuracy and have to be tuned for optimal performance. This is a time-consuming and computationally expensive process; automating this step facilitates more robust image segmentation workflows and enables more efficient application of image analysis in large image datasets. Our software platform adjusts the parameters of a nuclear segmentation algorithm to maximize the quality of image segmentation results while minimizing the execution time. It implements several optimization methods to search the parameter space efficiently. In addition, the methodology is developed to execute on high-performance computing systems to reduce the execution time of the parameter tuning phase. These capabilities are packaged in a Docker container for easy deployment and can be used through a user-friendly interface extension in 3D Slicer. Our results using three real-world image segmentation workflows demonstrate that the proposed solution is able to (1) search a small fraction (about 100 points) of the parameter space, which contains billions to trillions of points, and improve the quality of segmentation output by 1.20×, 1.29×, and 1.29× on average; (2) decrease the execution time of a segmentation workflow by up to 11.79× while improving output quality; and (3) effectively use parallel systems to accelerate parameter tuning and segmentation phases.
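The multi-objective trade-off (maximize segmentation quality, minimize execution time) can be sketched with a Pareto filter over candidate parameter points: a candidate is kept only if no other candidate is at least as good on both objectives and strictly better on one. The candidate values below are made up for illustration:

```python
def pareto_front(points):
    """Keep (quality, runtime) points not dominated by any other point,
    where domination means quality >= and runtime <= with one strict."""
    return [
        (q, t)
        for q, t in points
        if not any(
            q2 >= q and t2 <= t and (q2 > q or t2 < t) for q2, t2 in points
        )
    ]

# Hypothetical tuner candidates: (segmentation quality, runtime in minutes).
candidates = [(0.90, 5.0), (0.80, 2.0), (0.70, 1.0), (0.60, 3.0)]
front = pareto_front(candidates)  # (0.60, 3.0) is dominated by (0.80, 2.0)
```

A tuner then only needs to present the front to the user, who picks the quality/time operating point appropriate for their study.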


Subjects
Cell Nucleus; Cell Tracking/methods; Image Processing, Computer-Assisted/methods; Algorithms; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/pathology; Glioblastoma/diagnostic imaging; Glioblastoma/pathology; Humans; Software; User-Computer Interface; Workflow
6.
Concurr Comput ; 30(14)2018 Jul 25.
Article in English | MEDLINE | ID: mdl-30344454

ABSTRACT

The Irregular Wavefront Propagation Pattern (IWPP) is a core computing structure in several image analysis operations. Efficient implementation of IWPP on the Intel Xeon Phi is difficult because of the irregular data access and computation characteristics. The traditional IWPP algorithm relies on atomic instructions, which are not available in the SIMD set of the Intel Phi. To overcome this limitation, we have proposed a new IWPP algorithm that can take advantage of non-atomic SIMD instructions supported on the Intel Xeon Phi. We have also developed and evaluated methods to use the CPU and Intel Phi cooperatively for parallel execution of the IWPP algorithms. Our new cooperative IWPP version is also able to handle large out-of-core images that would not fit into the memory of the accelerator. The new IWPP algorithm is used to implement the Morphological Reconstruction and Fill Holes operations, which are commonly found in image analysis applications. The vectorization implemented with the new IWPP has attained improvements of up to about 5× on top of the original IWPP and significant gains compared to state-of-the-art CPU and GPU versions. The new version running on an Intel Phi is 6.21× and 3.14× faster than running on a 16-core CPU and on a GPU, respectively. Finally, the cooperative execution using two Intel Phi devices and a multi-core CPU has reached performance gains of 2.14× as compared to the execution using a single Intel Xeon Phi.
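The wavefront-propagation idea behind IWPP can be sketched in 1-D for Morphological Reconstruction: active pixels try to raise their neighbors toward min(own value, mask), and any pixel that changes joins the wavefront until the image is stable. This is a scalar queue-based version for illustration, not the vectorized SIMD kernel described above:

```python
from collections import deque

def morphological_reconstruction(marker, mask):
    """1-D grayscale reconstruction by dilation via wavefront propagation.

    Each pixel popped from the wavefront pushes min(its value, mask[j]) to
    its neighbors j; changed neighbors re-enter the queue (the irregular
    wavefront), and the loop ends when no pixel changes anymore.
    """
    out = [min(a, b) for a, b in zip(marker, mask)]
    wavefront = deque(range(len(out)))
    while wavefront:
        i = wavefront.popleft()
        for j in (i - 1, i + 1):
            if 0 <= j < len(out):
                v = min(out[i], mask[j])
                if v > out[j]:
                    out[j] = v
                    wavefront.append(j)
    return out
```

The 2-D version used in image analysis is identical in structure, with 4- or 8-connected neighborhoods; the paper's contribution is executing many such propagations with non-atomic SIMD lanes.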

7.
Proc IEEE Int Conf Clust Comput ; 2017: 25-35, 2017 Sep.
Article in English | MEDLINE | ID: mdl-29081725

ABSTRACT

We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very computationally demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to efficiently speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. The cooperative execution using the CPUs and the Phi available in each node with smart task assignment strategies resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies.
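Cooperative CPU/accelerator execution rests on deciding which device runs which task. One simple illustration is a longest-task-first greedy assignment to whichever device would finish the task earliest; this is a hypothetical stand-in for "smart task assignment", not necessarily the scheduler used in the paper:

```python
import heapq

def assign_tasks(task_costs, device_speeds):
    """Greedy longest-task-first assignment: each task (largest first) goes
    to the device with the earliest current finish time. Returns the
    task -> device mapping and the resulting makespan."""
    finish = [(0.0, d) for d in range(len(device_speeds))]  # (busy-until, device)
    heapq.heapify(finish)
    assignment = {}
    for tid, cost in sorted(enumerate(task_costs), key=lambda kv: -kv[1]):
        t, d = heapq.heappop(finish)
        assignment[tid] = d
        heapq.heappush(finish, (t + cost / device_speeds[d], d))
    return assignment, max(t for t, _ in finish)

# Four tasks split across two equal-speed devices.
assignment, makespan = assign_tasks([4.0, 3.0, 2.0, 1.0], [1.0, 1.0])
```

Unequal `device_speeds` model the CPU/Phi asymmetry: a faster device accumulates less busy time per unit of task cost, so the greedy rule naturally sends it more work.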

8.
Bioinformatics ; 33(7): 1064-1072, 2017 04 01.
Article in English | MEDLINE | ID: mdl-28062445

ABSTRACT

Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high-performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high-performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary information: Supplementary data are available at Bioinformatics online.
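The Dice and Jaccard metrics used above to score segmentation quality are straightforward to compute on binary masks; a minimal sketch on flat 0/1 lists:

```python
def dice(a, b):
    """Dice similarity of two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = sum(x * y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

def jaccard(a, b):
    """Jaccard index of two binary masks: |A∩B| / |A∪B|."""
    inter = sum(x * y for x, y in zip(a, b))
    union = sum(max(x, y) for x, y in zip(a, b))
    return inter / union
```

The two are monotonically related (J = D / (2 - D)), so a tuner maximizing one also maximizes the other; reporting both simply eases comparison across studies.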


Subjects
Algorithms; Image Processing, Computer-Assisted/methods; Brain Neoplasms/pathology; Glioblastoma/pathology; Humans
9.
Bioinformatics ; 32(8): 1238-40, 2016 04 15.
Article in English | MEDLINE | ID: mdl-26704597

ABSTRACT

MOTIVATION: Structured RNAs can be hard to search for, as they often are not well conserved in their primary structure and are local in their genomic or transcriptomic context. Thus, the need for tools that can make local structural alignments of RNAs in particular is only increasing. RESULTS: To meet the demand for both large-scale screens and hands-on analysis through web servers, we present a new multithreaded version of Foldalign. We substantially improve execution time while maintaining all previous functionalities, including carrying out local structural alignments of sequences with low similarity. Furthermore, the improvements make it possible to compare longer RNAs and increase the sequence length; for sequences in the range of 2000-6000 nucleotides, execution time improves by up to a factor of five. AVAILABILITY AND IMPLEMENTATION: The Foldalign software and the web server are available at http://rth.dk/resources/foldalign. CONTACT: gorodkin@rth.dk. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
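A multithreaded all-against-all screen of the kind Foldalign performs can be sketched with a thread pool over sequence pairs; the scoring function here is a toy identity count, not Foldalign's structural alignment, and the sequences are made up:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def score(pair):
    """Toy stand-in for an expensive pairwise structural alignment score:
    counts matching positions between two equal-length RNA sequences."""
    a, b = pair
    return sum(x == y for x, y in zip(a, b))

seqs = ["GAUC", "GACC", "GGUC"]
pairs = list(combinations(seqs, 2))  # all-against-all comparisons

# Each pairwise comparison is independent, so a thread pool can run them
# concurrently; map preserves the input order of the pairs.
with ThreadPoolExecutor(max_workers=4) as pool:
    scores = list(pool.map(score, pairs))
```

For a CPU-bound scorer implemented in C (as in Foldalign itself), worker threads scale with cores; in pure Python one would use processes instead because of the GIL.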


Subjects
RNA/chemistry; Sequence Alignment/methods; Sequence Analysis, RNA/methods; Software; Transcriptome