Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-20875985

ABSTRACT

We have developed several methods of designing sparse periodic arrays based on the polynomial factorization method. In these methods, transmit and receive aperture polynomials are selected such that their product is a polynomial representing the desired combined transmit/receive (T/R) effective aperture function. A desired combined T/R effective aperture is simply an aperture of appropriate width whose spectrum corresponds to the desired two-way radiation pattern. At least one of the two aperture functions that constitute the combined T/R effective aperture function will be a sparse polynomial. A measure of sparsity of the designed array is defined in terms of the element reduction factor. We show that the element count of a linear array can be reduced with varying trade-offs between beam mainlobe width and sidelobe reduction.
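A minimal sketch of the factorization idea in Python (the uniform 16-element aperture, the two 4-element factors, and the sparsity measure below are illustrative assumptions, not the paper's design examples):

```python
import numpy as np

# The uniform 16-element effective aperture 1 + z + ... + z^15 factors as
# (1 + z + z^2 + z^3) * (1 + z^4 + z^8 + z^12): a dense transmit aperture
# times a sparse receive aperture.
transmit = np.ones(4)                        # dense 4-element transmit aperture
receive = np.zeros(13)
receive[::4] = 1.0                           # sparse receive: 4 active of 13 positions

# Polynomial multiplication == convolution of the aperture weight sequences.
effective = np.convolve(transmit, receive)
assert np.allclose(effective, np.ones(16))   # desired combined T/R aperture

# Two-way radiation pattern = spectrum of the effective aperture.
pattern = np.abs(np.fft.fftshift(np.fft.fft(effective, 512)))
pattern_db = 20 * np.log10(pattern / pattern.max() + 1e-12)

# One plausible sparsity measure (an assumption; the paper defines its own
# element reduction factor): active elements versus a fully filled T/R pair.
active = np.count_nonzero(transmit) + np.count_nonzero(receive)
print(f"active elements: {active}, filled equivalent: {16 + 16}")
```

Here the receive polynomial 1 + z^4 + z^8 + z^12 is the sparse factor; any factorization of the desired aperture polynomial with at least one sparse factor serves the same purpose.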

2.
IEEE Trans Image Process; 17(10): 1783-94, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18784027

ABSTRACT

This paper presents a new technique for color enhancement in the compressed domain. The proposed technique is simple yet more effective than several existing techniques reported earlier. Its novelty lies in the treatment of the chromatic components, whereas previous techniques processed only the luminance component. The results of the previous techniques and of the proposed one are compared against those obtained by applying a spatial-domain color enhancement technique that appears to provide very good enhancement. The proposed technique is computationally more efficient than the spatial-domain method and provides better enhancement than the other compressed-domain approaches.


Subjects
Algorithms; Color; Colorimetry/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Reproducibility of Results; Sensitivity and Specificity
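A hedged sketch of the compressed-domain idea in Python (the 8x8 DCT blocks mirror JPEG-style compression, but the gain values and the uniform scaling rule are illustrative assumptions, not the paper's enhancement function):

```python
import numpy as np
from scipy.fft import dctn, idctn

# Scale the DC and AC DCT coefficients of each 8x8 block, applying the
# enhancement to the chromatic planes as well as the luminance plane,
# which is the point the abstract highlights.
def enhance_blocks(channel, dc_gain=1.2, ac_gain=1.1, block=8):
    out = np.empty_like(channel, dtype=float)
    h, w = channel.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs = dctn(channel[i:i+block, j:j+block], norm="ortho")
            coeffs *= ac_gain                   # boost local contrast (AC terms)
            coeffs[0, 0] *= dc_gain / ac_gain   # net gain dc_gain on the DC term
            out[i:i+block, j:j+block] = idctn(coeffs, norm="ortho")
    return out

# Usage on a synthetic YCbCr-like image: enhance Y and both chroma planes.
rng = np.random.default_rng(0)
ycbcr = rng.uniform(0, 255, size=(3, 64, 64))
enhanced = np.stack([enhance_blocks(plane) for plane in ycbcr])
```

Applying enhance_blocks to the chroma planes as well as the luminance plane reflects the abstract's main claim: the chromatic components are processed in the compressed domain rather than left untouched.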
3.
Article in English | MEDLINE | ID: mdl-17048389

ABSTRACT

With the growing volume of biological measurements, integrating and analyzing different types of genomic data has become an immediate challenge for elucidating events at the molecular level. To address this integration problem, we present a framework that locates variation patterns in two biological inputs based on the generalized singular value decomposition (GSVD). In this work, we jointly examine gene expression and copy number data, iteratively projecting the data onto different decomposition directions defined by the projection angle theta in the GSVD. With a proper choice of theta, we locate similar and dissimilar patterns of variation between the two data types. We discuss the properties of our algorithm using simulated data and conduct a case study with biologically verified results. Finally, we demonstrate the efficacy of our method on two genome-wide breast cancer studies, identifying genes with large variation in expression and copy number across numerous cell line and tumor samples. Our method identifies genes that are statistically significant in both input measurements. The proposed method is useful for a wide variety of joint copy number and expression-based studies. Supplementary information, including software implementations and experimental data, is available online.


Subjects
Biomarkers, Tumor/genetics; Breast Neoplasms/genetics; Gene Dosage/genetics; Gene Expression Profiling/methods; Gene Expression/genetics; Genetic Markers/genetics; Neoplasm Proteins/genetics; Cell Line, Tumor; Databases, Genetic; Humans; Information Storage and Retrieval/methods; Models, Genetic; Oligonucleotide Array Sequence Analysis/methods; Reproducibility of Results; Sensitivity and Specificity
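A simplified GSVD-style sketch in Python (the synthetic data, the regularization term, and the use of a generalized eigenproblem in place of a full GSVD routine are all assumptions for illustration, not the authors' implementation):

```python
import numpy as np
from scipy.linalg import eigh

# For an expression matrix E and a copy-number matrix C sharing the same
# genes, the generalized eigenproblem (E E^T) x = lam (C C^T) x yields
# shared gene-space directions x; the value tan(theta) = sqrt(lam)
# measures how strongly each direction appears in E relative to C.
rng = np.random.default_rng(1)
genes, samples_e, samples_c = 200, 30, 25
shared = rng.normal(size=(genes, 5))          # common source of variation
E = shared @ rng.normal(size=(5, samples_e)) + 0.1 * rng.normal(size=(genes, samples_e))
C = shared @ rng.normal(size=(5, samples_c)) + 0.1 * rng.normal(size=(genes, samples_c))

# Generalized eigendecomposition over gene space; regularize C C^T so the
# right-hand matrix is positive definite.
A = E @ E.T
B = C @ C.T + 1e-6 * np.eye(genes)
lam, X = eigh(A, B)
theta = np.arctan(np.sqrt(np.clip(lam, 0, None)))

# Directions with theta near pi/4 vary similarly in both data types;
# theta near 0 or pi/2 flags variation specific to one input.
print(theta[-5:])
```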
4.
BMC Bioinformatics; 5: 194, 2004 Dec 09.
Article in English | MEDLINE | ID: mdl-15588297

ABSTRACT

BACKGROUND: Microarray data normalization is an important step for obtaining data that are reliable and usable for subsequent analysis. One of the most commonly utilized normalization techniques is the locally weighted scatterplot smoothing (LOWESS) algorithm. However, an often overlooked concern with the LOWESS normalization strategy is the choice of appropriate parameters. Parameters are usually chosen arbitrarily, which may reduce the efficiency of the normalization and result in non-optimally normalized data. Thus, there is a need to explore LOWESS parameter selection in greater detail.

RESULTS AND DISCUSSION: In this work, we discuss how to choose parameters for the LOWESS method. Moreover, we present an optimization approach for obtaining the fraction of data points utilized in the local regression, and we analyze results for local print-tip normalization. The optimization procedure determines the bandwidth parameter for the local regression by minimizing a cost function that represents the mean-squared difference between the LOWESS estimates and the normalization reference level. We demonstrate the utility of the systematic parameter selection using two publicly available data sets. The first consists of three self-versus-self hybridizations, which allow a quantitative study of the optimization method. The second contains DNA microarray data from a breast cancer study utilizing four breast cancer cell lines. Our results show that different choices of the bandwidth window yield dramatically different calibration results in both studies.

CONCLUSIONS: Results from the self-versus-self experiment indicate that the proposed optimization approach is a plausible solution for estimating the LOWESS parameters, while results from the breast cancer experiment show that the procedure is readily applicable to real-life microarray data normalization. In summary, the systematic approach to obtaining critical parameters of the LOWESS technique is likely to produce data that optimally meet the assumptions made in the preprocessing step, thereby making studies utilizing the LOWESS method unambiguous and easier to repeat.


Subjects
Oligonucleotide Array Sequence Analysis/methods; Oligonucleotide Array Sequence Analysis/statistics & numerical data; Algorithms; Breast Neoplasms/genetics; Breast Neoplasms/pathology; Calibration/standards; Cell Line, Tumor; Gene Expression Profiling/methods; Gene Expression Profiling/standards; Gene Expression Profiling/statistics & numerical data; Gene Expression Regulation, Neoplastic/genetics; Humans; Normal Distribution; Oligonucleotide Array Sequence Analysis/standards; Reverse Transcriptase Polymerase Chain Reaction/methods
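A sketch of the bandwidth selection in Python (grid search stands in for the paper's optimizer, and a known synthetic bias curve plays the role of the reference level available in a self-versus-self experiment; all parameter values are assumptions):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Score each candidate LOWESS fraction by the mean-squared difference
# between the LOWESS estimate of the M-vs-A trend and the reference bias
# curve, which is known exactly in this simulated self-vs-self setting.
rng = np.random.default_rng(2)
A = np.sort(rng.uniform(6, 14, 1000))            # mean log-intensity
true_bias = 0.5 * np.sin(A / 2)                  # known intensity-dependent dye bias
M = true_bias + rng.normal(scale=0.3, size=A.size)

def cost(frac):
    fit = lowess(M, A, frac=frac, return_sorted=False)
    return np.mean((fit - true_bias) ** 2)       # distance from the reference level

fracs = np.linspace(0.05, 0.8, 16)
best = min(fracs, key=cost)                      # grid search stands in for their optimizer
M_normalized = M - lowess(M, A, frac=best, return_sorted=False)
print(f"selected LOWESS fraction: {best:.2f}")
```

Too small a fraction chases noise and too large a fraction misses the bias curve, so the cost has an interior minimum; this bias-variance trade-off is why arbitrary parameter choices can yield dramatically different calibrations.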
5.
IEEE Trans Image Process; 13(4): 534-48, 2004 Apr.
Article in English | MEDLINE | ID: mdl-15376588

ABSTRACT

This paper presents a new framework for chromatic filtering of color images. The chromatic content of a color image is encoded in the CIE u'v' chromaticity coordinates, whereas the achromatic content is encoded as the CIE Y tristimulus value. Within the u'v' chromaticity diagram, colors are added according to the well-known center-of-gravity law of additive color mixtures, which is generalized here into a nonlinear filtering scheme for processing the two chromatic signals u' and v'. The achromatic channel Y can be processed with traditional filtering schemes, either linear or nonlinear, depending on the task at hand. The most interesting characteristics of the new filtering scheme are: 1) the elimination of color-smearing effects along edges between bright and dark areas; 2) the possibility of processing the chromatic components noniteratively through linear convolution operations; and 3) the consequent amenability to computationally efficient implementations with the fast Fourier transform. The paper includes several examples with both synthetic and real images in which the performance of the new filtering method is compared with that of other color image processing algorithms.


Subjects
Algorithms; Color; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Information Storage and Retrieval/methods; Pattern Recognition, Automated; Signal Processing, Computer-Assisted; Reproducibility of Results; Sensitivity and Specificity; Stochastic Processes
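A sketch of the center-of-gravity chromatic filter in Python (the box kernel and the synthetic u'v'Y planes are assumptions; the weighting follows the luminance-weighted mixture the abstract describes):

```python
import numpy as np
from scipy.ndimage import convolve

# Per the center-of-gravity law, chromaticities mix with luminance as the
# weight, so the nonlinear filter reduces to two linear convolutions per
# chromatic channel followed by a pointwise division.
def chromatic_filter(Y, u, v, kernel):
    wY = convolve(Y, kernel, mode="reflect")           # total mixing weight per pixel
    u_f = convolve(Y * u, kernel, mode="reflect") / wY
    v_f = convolve(Y * v, kernel, mode="reflect") / wY
    return u_f, v_f                                    # filtered u'v' chromaticities

# Usage with a 5x5 box kernel on synthetic planes; Y itself can be filtered
# separately with any linear or nonlinear scheme, as the paper notes.
rng = np.random.default_rng(3)
Y = rng.uniform(0.1, 1.0, (64, 64))
u = rng.uniform(0.15, 0.25, (64, 64))
v = rng.uniform(0.4, 0.55, (64, 64))
box = np.ones((5, 5)) / 25.0
u_f, v_f = chromatic_filter(Y, u, v, box)
```

Because both numerator and denominator are plain convolutions, large kernels can be applied via the FFT, which is the efficiency property the abstract points out.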
6.
IEEE Trans Image Process; 11(12): 1337-48, 2002.
Article in English | MEDLINE | ID: mdl-18249702

ABSTRACT

Lattice vector quantization (LVQ) solves the complexity problem of LBG-based vector quantizers, yielding very general codebooks. However, a single-stage LVQ, when applied to high-resolution quantization of a vector, may produce very large and unwieldy indices, making it unsuitable for applications requiring successive refinement. The goal of this work is to develop a unified framework for progressive uniform quantization of vectors without sacrificing the mean-squared-error advantage of lattice quantization. A successive-refinement uniform vector quantization methodology is developed, where the codebooks in successive stages are all lattice codebooks, each in the shape of the Voronoi region of the lattice at the previous stage. Such Voronoi-shaped geometric lattice codebooks are named Voronoi lattice VQs (VLVQs). Measures of the efficiency of successive refinement are developed based on the entropy of the indices transmitted by the VLVQs. Additionally, a constructive method for asymptotically optimal uniform quantization is developed using tree-structured subset VLVQs in conjunction with entropy coding. The methodology essentially yields the optimal vector counterpart of scalar "bitplane-wise" refinement, although it is not as trivial to implement as in the scalar case. Furthermore, the benefits of asymptotic optimality of tree-structured subset VLVQs remain elusive in practical, nonasymptotic situations. Nevertheless, because scalar bitplane-wise refinement is extensively used in modern wavelet image coders, we have applied the VLVQ techniques to successively refine vectors of wavelet coefficients in the vector set-partitioning (VSPIHT) framework. The results are compared against SPIHT and against the earlier successive-approximation wavelet vector quantization (SA-W-VQ) results of Sampson, da Silva and Ghanbari.
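A toy successive-refinement sketch in Python using the integer lattice Z^n (the paper's VLVQs use general lattices and entropy-coded indices; the cubic lattice here reduces to the scalar bitplane-like special case the abstract mentions, and the stage count and scaling ratio are assumptions):

```python
import numpy as np

# Each stage re-quantizes the residual with a finer scaled copy of the
# same lattice. For Z^n the Voronoi cell is a cube, so every stage's
# codebook is shaped like the previous stage's cell -- the defining
# property of the Voronoi lattice VQ, in its simplest instance.
def refine(x, base_step=1.0, stages=4, ratio=4):
    approximation = np.zeros_like(x)
    indices = []
    step = base_step
    for _ in range(stages):
        residual = x - approximation
        idx = np.round(residual / step).astype(int)   # nearest scaled Z^n point
        indices.append(idx)                           # index transmitted this stage
        approximation = approximation + idx * step
        step /= ratio                                 # shrink the lattice each stage
    return indices, approximation

rng = np.random.default_rng(4)
x = rng.normal(size=8)                       # e.g., a vector of wavelet coefficients
indices, x_hat = refine(x)
print("max error:", np.abs(x - x_hat).max())  # bounded by half the final step size
```

Each stage's index is small and bounded (here at most ratio/2 per coordinate after the first stage), which is exactly what makes the representation progressive rather than one unwieldy single-stage index.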
