Results 1 - 20 of 24
1.
Proc Natl Acad Sci U S A ; 121(10): e2313719121, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38416677

ABSTRACT

Single-cell data integration can provide a comprehensive molecular view of cells, and many algorithms have been developed to remove unwanted technical or biological variations and integrate heterogeneous single-cell datasets. Despite their wide usage, existing methods suffer from several fundamental limitations. In particular, we lack a rigorous statistical test for whether two high-dimensional single-cell datasets are alignable (and therefore should even be aligned). Moreover, popular methods can substantially distort the data during alignment, making the aligned data and downstream analysis difficult to interpret. To overcome these limitations, we present a spectral manifold alignment and inference (SMAI) framework, which enables principled and interpretable alignability testing and structure-preserving integration of single-cell data with the same type of features. SMAI provides a statistical test to robustly assess the alignability between datasets to avoid misleading inference and is justified by high-dimensional statistical theory. On a diverse range of real and simulated benchmark datasets, it outperforms commonly used alignment methods. Moreover, we show that SMAI improves various downstream analyses such as identification of differentially expressed genes and imputation of single-cell spatial transcriptomics, providing further biological insights. SMAI's interpretability also enables quantification and a deeper understanding of the sources of technical confounders in single-cell data.
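
The integration problem above centers on aligning two datasets without distorting their internal geometry. As a loose illustration only (not SMAI's algorithm or its alignability test), the sketch below aligns two matched expression matrices with an orthogonal Procrustes rotation, the kind of similarity transformation that provably preserves within-dataset distances; the toy data, sizes, and variable names are assumptions.

```python
# Minimal sketch: rigid (structure-preserving) alignment of two matched
# single-cell matrices via orthogonal Procrustes. This is NOT the SMAI
# algorithm or its alignability test; it only illustrates aligning datasets
# with a transform that cannot distort within-dataset geometry.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_genes = 200, 50
X = rng.normal(size=(n_cells, n_genes))             # reference dataset (toy)
Q_true, _ = np.linalg.qr(rng.normal(size=(n_genes, n_genes)))
Y = X @ Q_true + 0.05 * rng.normal(size=X.shape)    # rotated + noisy copy

# Center both datasets, then solve min_R ||Xc - Yc R||_F over orthogonal R.
Xc, Yc = X - X.mean(0), Y - Y.mean(0)
U, _, Vt = np.linalg.svd(Yc.T @ Xc)
R_hat = U @ Vt                                      # optimal rotation
Y_aligned = Yc @ R_hat + X.mean(0)

# A rigid map preserves pairwise distances within Y exactly.
print(np.allclose(np.linalg.norm(Yc[0] - Yc[1]),
                  np.linalg.norm(Y_aligned[0] - Y_aligned[1])))
print("alignment error:", np.linalg.norm(Y_aligned - X) / np.linalg.norm(X))
```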


Subjects
Algorithms, Gene Expression Profiling, Gene Expression, Single-Cell Analysis
2.
Magn Reson (Gott) ; 2(2): 843-861, 2021.
Article in English | MEDLINE | ID: mdl-37905225

ABSTRACT

Although the concepts of nonuniform sampling (NUS) and non-Fourier spectral reconstruction in multidimensional NMR began to emerge four decades ago, it is only relatively recently that NUS has become more commonplace. Advantages of NUS include the ability to tailor experiments to reduce data collection time and to improve spectral quality, whether through detection of closely spaced peaks (i.e., "resolution") or peaks of weak intensity (i.e., "sensitivity"). Wider adoption of these methods is the result of improvements in computational performance, a growing abundance and flexibility of software, support from NMR spectrometer vendors, and the increased data sampling demands imposed by higher magnetic fields. However, the identification of best practices remains a significant and unmet challenge. Unlike the discrete Fourier transform, non-Fourier methods used to reconstruct spectra from NUS data are nonlinear, depend on the complexity and nature of the signals, and lack a quantitative or formal theory describing their performance. Seemingly subtle algorithmic differences may lead to significant variability in spectral quality and artifacts. A community-based critical assessment of NUS challenge problems, called the "Nonuniform Sampling Contest" (NUScon), has been initiated with the objective of determining best practices for processing and analyzing NUS experiments. We address this objective by constructing challenges from NMR experiments that we inject with synthetic signals, and we process these challenges using workflows submitted by the community. In the initial rounds of NUScon our aim is to establish objective criteria for evaluating the quality of spectral reconstructions. We present here a software package for performing the quantitative analyses, and we present the results from the first two rounds of NUScon. We discuss the challenges that remain and present a roadmap for continued community-driven development, with the ultimate aim of providing best practices in this rapidly evolving field. The NUScon software package and all data from evaluating the challenge problems are hosted on the NMRbox platform.

3.
Proc Natl Acad Sci U S A ; 117(48): 30029-30032, 2020 12 01.
Article in English | MEDLINE | ID: mdl-33229565
4.
Proc Natl Acad Sci U S A ; 117(40): 24652-24663, 2020 10 06.
Article in English | MEDLINE | ID: mdl-32958680

ABSTRACT

Modern practice for training classification deepnets involves a terminal phase of training (TPT), which begins at the epoch where training error first vanishes. During TPT, the training error stays effectively zero, while training loss is pushed toward zero. Direct measurements of TPT, for three prototypical deepnet architectures and across seven canonical classification datasets, expose a pervasive inductive bias we call neural collapse (NC), involving four deeply interconnected phenomena. (NC1) Cross-example within-class variability of last-layer training activations collapses to zero, as the individual activations themselves collapse to their class means. (NC2) The class means collapse to the vertices of a simplex equiangular tight frame (ETF). (NC3) Up to rescaling, the last-layer classifiers collapse to the class means, or, in other words, to the simplex ETF (i.e., to a self-dual configuration). (NC4) For a given activation, the classifier's decision collapses to simply choosing whichever class has the closest training-class mean (i.e., the nearest class center [NCC] decision rule). The symmetric and very simple geometry induced by the TPT confers important benefits, including better generalization performance, better robustness, and better interpretability.
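
Two of the four phenomena (NC1, NC2) are simple geometric statistics of last-layer activations, so they can be measured directly. The hedged sketch below computes them on synthetic activations standing in for a trained deepnet's penultimate-layer features; the cluster model, sizes, and names are illustrative assumptions.

```python
# Hedged sketch: measuring two neural-collapse diagnostics (NC1, NC2) from a
# matrix of last-layer activations. The synthetic activations stand in for a
# trained network's penultimate-layer features; in the paper these quantities
# are tracked during the terminal phase of training.
import numpy as np

rng = np.random.default_rng(1)
C, n_per_class, d = 5, 100, 64
# Hypothetical activations: tight clusters around random class means.
means = rng.normal(size=(C, d))
H = np.concatenate([means[c] + 0.01 * rng.normal(size=(n_per_class, d))
                    for c in range(C)])
labels = np.repeat(np.arange(C), n_per_class)

class_means = np.stack([H[labels == c].mean(0) for c in range(C)])
global_mean = H.mean(0)

# NC1: within-class variability relative to between-class variability.
within = np.mean([np.sum((H[labels == c] - class_means[c]) ** 2)
                  for c in range(C)])
between = np.sum((class_means - global_mean) ** 2)
print("NC1 ratio (shrinks toward 0 under collapse):", within / between)

# NC2: centered class means approach a simplex ETF, i.e. equal norms and
# pairwise cosines equal to -1/(C-1).
M = class_means - global_mean
Mn = M / np.linalg.norm(M, axis=1, keepdims=True)
cos = Mn @ Mn.T
off_diag = cos[~np.eye(C, dtype=bool)]
print("mean off-diagonal cosine:", off_diag.mean().round(3),
      "| ETF target:", round(-1 / (C - 1), 3))
```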

5.
Ann Stat ; 46(4): 1742-1778, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30258255

ABSTRACT

We show that in a common high-dimensional covariance model, the choice of loss function has a profound effect on optimal estimation. In an asymptotic framework based on the spiked covariance model and the use of orthogonally invariant estimators, we show that optimal estimation of the population covariance matrix boils down to the design of an optimal shrinker η that acts elementwise on the sample eigenvalues. Indeed, to each loss function there corresponds a unique admissible eigenvalue shrinker η* dominating all other shrinkers. The shape of the optimal shrinker is determined by the choice of loss function and, crucially, by the inconsistency of both the eigenvalues and eigenvectors of the sample covariance matrix. Details of these phenomena and closed-form formulas for the optimal eigenvalue shrinkers are worked out for a menagerie of 26 loss functions for covariance estimation found in the literature, including the Stein, Entropy, Divergence, Fréchet, Bhattacharya/Matusita, Frobenius Norm, Operator Norm, Nuclear Norm and Condition Number losses.
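
For a concrete sense of what an elementwise eigenvalue shrinker looks like, the sketch below implements one commonly cited case, the Frobenius-norm loss, under a spiked covariance model with noise level 1 and aspect ratio gamma = p/n. The closed forms are the standard spiked-model relations (invert the eigenvalue-bias map, form the limiting eigenvector cosine, combine them); treat this as an illustration of the shrinkage recipe, not a substitute for the paper's catalogue of 26 losses.

```python
# Hedged sketch: a Frobenius-loss eigenvalue shrinker in a spiked covariance
# model with noise level 1 and aspect ratio gamma = p/n. The formulas are the
# standard spiked-model relations; sizes and example eigenvalues are arbitrary.
import numpy as np

def frobenius_shrinker(sample_evals, gamma):
    """Shrink sample eigenvalues elementwise; eigenvalues inside the
    Marchenko-Pastur bulk edge (1 + sqrt(gamma))**2 are mapped to 1."""
    lam = np.asarray(sample_evals, dtype=float)
    out = np.ones_like(lam)
    edge = (1 + np.sqrt(gamma)) ** 2
    above = lam > edge
    b = lam[above] + 1 - gamma
    ell = (b + np.sqrt(b ** 2 - 4 * lam[above])) / 2             # estimated spike
    c2 = (1 - gamma / (ell - 1) ** 2) / (1 + gamma / (ell - 1))  # cos^2 angle
    out[above] = ell * c2 + (1 - c2)                             # shrunken value
    return out

# Example: p/n = 0.5, a few sample eigenvalues (two above the bulk edge).
print(frobenius_shrinker([5.0, 3.2, 2.0, 1.1], gamma=0.5))
```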

6.
Proc Natl Acad Sci U S A ; 110(21): 8405-10, 2013 May 21.
Article in English | MEDLINE | ID: mdl-23650360

ABSTRACT

Let X_0 be an unknown M by N matrix. In matrix recovery, one takes n < MN linear measurements y_1, ..., y_n of X_0, where y_i = Tr(A_i^T X_0) and each A_i is an M by N matrix. A popular approach for matrix recovery is nuclear norm minimization (NNM): solving the convex optimization problem min ||X||_* subject to y_i = Tr(A_i^T X) for all 1 ≤ i ≤ n, where ||·||_* denotes the nuclear norm, namely, the sum of singular values. Empirical work reveals a phase transition curve, stated in terms of the undersampling fraction δ(n, M, N) = n/(MN), rank fraction ρ = rank(X_0)/min{M, N}, and aspect ratio β = M/N. Specifically, when the measurement matrices A_i have independent standard Gaussian random entries, a curve δ*(ρ) = δ*(ρ; β) exists such that, if δ > δ*(ρ), NNM typically succeeds for large M, N, whereas if δ < δ*(ρ), it typically fails. An apparently quite different problem is matrix denoising in Gaussian noise, in which an unknown M by N matrix X_0 is to be estimated based on direct noisy measurements Y = X_0 + Z, where the matrix Z has independent and identically distributed Gaussian entries. A popular matrix denoising scheme solves the unconstrained optimization problem min ||Y - X||_F^2/2 + λ||X||_*. When optimally tuned, this scheme achieves the asymptotic minimax mean-squared error M(ρ; β) = lim_{M,N→∞} inf_λ sup_{rank(X) ≤ ρ·M} MSE(X, X̂_λ), where M/N → β. We report extensive experiments showing that the phase transition δ*(ρ) in the first problem, matrix recovery from Gaussian measurements, coincides with the minimax risk curve M(ρ) = M(ρ; β) in the second problem, matrix denoising in Gaussian noise: δ*(ρ) = M(ρ), for any rank fraction 0 < ρ < 1 (at each common aspect ratio β). Our experiments considered matrices belonging to two constraint classes: real M by N matrices, of various ranks and aspect ratios, and real symmetric positive-semidefinite N by N matrices, of various ranks.
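
The denoising problem quoted above, min ||Y - X||_F^2/2 + λ||X||_*, has a well-known closed-form solution: soft-threshold the singular values of Y by λ. A minimal sketch follows, with an arbitrary toy low-rank signal and an assumed choice of λ.

```python
# Hedged sketch: nuclear-norm-regularized matrix denoising,
#   minimize ||Y - X||_F^2 / 2 + lam * ||X||_*,
# solved in closed form by soft-thresholding the singular values of Y.
# The toy low-rank example and the choice of lam are assumptions.
import numpy as np

def nuclear_norm_denoise(Y, lam):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(2)
M, N, r = 60, 40, 3
X0 = rng.normal(size=(M, r)) @ rng.normal(size=(r, N))   # rank-3 signal
Y = X0 + 0.5 * rng.normal(size=(M, N))                   # Gaussian noise
X_hat = nuclear_norm_denoise(Y, lam=0.5 * (np.sqrt(M) + np.sqrt(N)))

print("relative error, noisy   :", np.linalg.norm(Y - X0) / np.linalg.norm(X0))
print("relative error, denoised:", np.linalg.norm(X_hat - X0) / np.linalg.norm(X0))
```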

7.
Proc Natl Acad Sci U S A ; 110(4): 1181-6, 2013 Jan 22.
Article in English | MEDLINE | ID: mdl-23277588

ABSTRACT

In compressed sensing, one takes n samples of an N-dimensional vector x_0 using an n × N matrix A, obtaining undersampled measurements y = Ax_0. For random matrices with independent standard Gaussian entries, it is known that, when x_0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (k/n, n/N) phase diagram, convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property, with the same phase transition location, holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to X^N for four different sets X ∈ {[0, 1], R_+, R, C}, and the results establish our finding for each of the four associated phase transitions.
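
Each point of such a phase diagram is estimated from trials like the one sketched below: draw a k-sparse x_0, take y = Ax_0, solve basis pursuit (min ||x||_1 subject to Ax = y) as a linear program via the split x = u - v with u, v ≥ 0, and check recovery. A Gaussian A and arbitrary problem sizes are used here for simplicity; substituting one of the deterministic ensembles named above would require constructing that matrix.

```python
# Hedged sketch: one Monte Carlo trial of the kind used to map the phase
# transition. Gaussian A, signed k-sparse x0, basis pursuit solved as an LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, n, k = 200, 100, 20                       # ambient dim, measurements, sparsity
A = rng.normal(size=(n, N)) / np.sqrt(n)

x0 = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x0[support] = rng.choice([-1.0, 1.0], size=k)
y = A @ x0

# min 1'(u + v)  s.t.  A(u - v) = y,  u, v >= 0
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]
print("recovered:", np.allclose(x_hat, x0, atol=1e-4))
```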

9.
Philos Trans A Math Phys Eng Sci ; 367(1906): 4273-93, 2009 Nov 13.
Article in English | MEDLINE | ID: mdl-19805445

ABSTRACT

We review connections between phase transitions in high-dimensional combinatorial geometry and phase transitions occurring in modern high-dimensional data analysis and signal processing. In data analysis, such transitions arise as abrupt breakdown of linear model selection, robust data fitting or compressed sensing reconstructions, when the complexity of the model or the number of outliers increases beyond a threshold. In combinatorial geometry, these transitions appear as abrupt changes in the properties of face counts of convex polytopes when the dimensions are varied. The thresholds in these very different problems appear in the same critical locations after appropriate calibration of variables. These thresholds are important in each subject area: for linear modelling, they place hard limits on the degree to which the now ubiquitous high-throughput data analysis can be successful; for robustness, they place hard limits on the degree to which standard robust fitting methods can tolerate outliers before breaking down; for compressed sensing, they define the sharp boundary of the undersampling/sparsity trade-off curve in undersampling theorems. Existing derivations of phase transitions in combinatorial geometry assume that the underlying matrices have independent and identically distributed Gaussian elements. In applications, however, it often seems that Gaussianity is not required. We conducted an extensive computational experiment and formal inferential analysis to test the hypothesis that these phase transitions are universal across a range of underlying matrix ensembles. We ran millions of linear programs using random matrices spanning several matrix ensembles and problem sizes; visually, the empirical phase transitions do not depend on the ensemble, and they agree extremely well with the asymptotic theory assuming Gaussianity. Careful statistical analysis reveals discrepancies that can be explained as transient terms, decaying with problem size. The experimental results are thus consistent with an asymptotic large-n universality across matrix ensembles; finite-sample universality can be rejected.

10.
Philos Trans A Math Phys Eng Sci ; 367(1906): 4449-70, 2009 Nov 13.
Article in English | MEDLINE | ID: mdl-19805453

ABSTRACT

We consider two-class linear classification in a high-dimensional, small-sample-size setting. Only a small fraction of the features are useful, these being unknown to us, and each useful feature contributes weakly to the classification decision. This was called the rare/weak (RW) model in our previous study (Donoho, D. & Jin, J. 2008 Proc. Natl Acad. Sci. USA 105, 14790-14795). We select features by thresholding feature Z-scores. The threshold is set by higher criticism (HC). For 1

11.
Proc Natl Acad Sci U S A ; 106(45): 18914-9, 2009 Nov 10.
Article in English | MEDLINE | ID: mdl-19858495

ABSTRACT

Compressed sensing aims to undersample certain high-dimensional signals yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity-undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately, known fast algorithms offer substantially worse sparsity-undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding, making the sparsity-undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity-undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity-undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this apparently very different theoretical formalism.
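
A hedged sketch of the kind of iteration described above, soft thresholding plus a correction term added to the residual update, is given below; the threshold rule (a fixed multiple of the estimated noise level ||z||/sqrt(n)), the problem sizes, and the tuning constant are illustrative assumptions rather than the paper's prescriptions.

```python
# Hedged sketch of approximate message passing (AMP) with soft thresholding:
# iterative thresholding plus the correction ("Onsager") term in the residual
# update. Threshold rule, sizes, and tau are illustrative choices.
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(4)
N, n, k = 1000, 500, 40
A = rng.normal(size=(n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.normal(size=k) * 3
y = A @ x0
delta, tau = n / N, 1.5

x, z = np.zeros(N), y.copy()
for _ in range(50):
    theta = tau * np.linalg.norm(z) / np.sqrt(n)      # threshold level
    pseudo = x + A.T @ z                              # pseudo-data
    x_new = soft(pseudo, theta)
    onsager = (z / delta) * np.mean(np.abs(pseudo) > theta)
    z = y - A @ x_new + onsager                       # corrected residual
    x = x_new

print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```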


Subjects
Algorithms, Statistical Models, Sample Size, Statistics as Topic/methods
12.
Proc Natl Acad Sci U S A ; 105(39): 14790-5, 2008 Sep 30.
Article in English | MEDLINE | ID: mdl-18815365

ABSTRACT

In important application fields today, genomics and proteomics being examples, selecting a small subset of useful features is crucial for the success of linear classification analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection based on the notion of higher criticism (HC). For i = 1, 2, ..., p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p - π_(i)) / sqrt{(i/p)(1 - i/p)}. We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT.
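
A minimal sketch of HCT as defined above: convert Z-scores to two-sided P-values, maximize the HC objective over the ordered P-values, and keep every feature whose |Z| exceeds the threshold. Restricting the maximization to the smallest half of the P-values, and the simulated rare/weak Z-scores, are assumptions of this sketch.

```python
# Hedged sketch of higher-criticism thresholding (HCT) on simulated Z-scores.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
p, k, mu = 10_000, 100, 3.0
z = rng.normal(size=p)
z[:k] += mu                                   # a few weak, useful features

pvals = 2 * norm.sf(np.abs(z))                # two-sided P-values
p_sorted = np.sort(pvals)
half = p // 2                                 # search the smallest half only
i = np.arange(1, half + 1)
hc = (i / p - p_sorted[:half]) / np.sqrt(i / p * (1 - i / p))

imax = int(np.argmax(hc))
z_sorted = np.sort(np.abs(z))[::-1]           # |Z| in decreasing order
threshold = z_sorted[imax]                    # |Z| paired with the maximizing P-value
selected = np.abs(z) >= threshold
print("HC threshold:", round(float(threshold), 3),
      "| features kept:", int(selected.sum()))
```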


Assuntos
Viés , Coleta de Dados/estatística & dados numéricos , Genômica/estatística & dados numéricos , Modelos Lineares , Proteômica/estatística & dados numéricos
13.
IEEE Trans Image Process ; 16(11): 2675-81, 2007 Nov.
Article in English | MEDLINE | ID: mdl-17990744

ABSTRACT

In a recent paper, a method called morphological component analysis (MCA) was proposed to separate the texture from the natural part in images. MCA relies on an iterative thresholding algorithm, using a threshold that decreases linearly toward zero over the iterations. This paper shows how the convergence of MCA can be drastically improved using the mutual incoherence of the dictionaries associated with the different components. This modified MCA algorithm is then compared to basis pursuit (BP), and experiments show that MCA and BP solutions are similar in terms of sparsity, as measured by the l1 norm, but MCA is much faster and makes it possible to handle large-scale data sets.
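
A toy, hedged version of the scheme described above is sketched below: two mutually incoherent orthonormal dictionaries (Dirac spikes and an orthonormal DCT) and alternating thresholding with a threshold that decreases linearly toward zero. The 1-D signal, dictionaries, and iteration count are illustrative choices, not those of the paper.

```python
# Hedged toy MCA: separate a 1-D signal into a spiky part (identity/Dirac
# dictionary) and an oscillatory part (orthonormal DCT dictionary) using a
# linearly decreasing threshold. All choices below are illustrative.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(6)
n = 256
spikes = np.zeros(n)
spikes[rng.choice(n, 8, replace=False)] = rng.normal(size=8) * 5
coef = np.zeros(n)
coef[3], coef[17] = 10.0, 8.0
texture = idct(coef, norm="ortho")            # two cosine components
y = spikes + texture

def soft(a, t):
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

x_spk, x_tex = np.zeros(n), np.zeros(n)
n_iter = 100
lam_max = max(np.abs(y).max(), np.abs(dct(y, norm="ortho")).max())
for t in range(n_iter):
    lam = lam_max * (1 - t / n_iter)          # linearly decreasing threshold
    x_spk = soft(y - x_tex, lam)              # update spikes in the time domain
    x_tex = idct(soft(dct(y - x_spk, norm="ortho"), lam), norm="ortho")

print("spike recovery error  :", np.linalg.norm(x_spk - spikes) / np.linalg.norm(spikes))
print("texture recovery error:", np.linalg.norm(x_tex - texture) / np.linalg.norm(texture))
```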


Subjects
Algorithms, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Artificial Intelligence, Principal Component Analysis, Reproducibility of Results, Sensitivity and Specificity
14.
Magn Reson Med ; 58(6): 1182-95, 2007 Dec.
Article in English | MEDLINE | ID: mdl-17969013

ABSTRACT

The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain, for example, in terms of spatial finite differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed sensing, images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise-like interference. In the sparse transform domain the significant coefficients stand out above the interference. A nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo-random variable-density undersampling of phase-encodes. The reconstruction is performed by minimizing the ℓ1 norm of a transformed image, subject to data fidelity constraints. Examples demonstrate improved spatial resolution and accelerated acquisition for multislice fast spin-echo brain imaging and 3D contrast-enhanced angiography.
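
The "aliasing interference" of a candidate undersampling pattern can be examined through its point spread function, as sketched below for a 1-D variable-density pseudo-random schedule; the density law, acceleration factor, and fully sampled center are assumptions, not the paper's exact scheme.

```python
# Hedged sketch: draw a variable-density pseudo-random undersampling mask
# over phase encodes and inspect its point spread function, whose noise-like
# sidelobes correspond to the aliasing interference discussed above.
import numpy as np

rng = np.random.default_rng(7)
n, accel = 256, 4                               # phase encodes, acceleration
k = np.arange(n) - n // 2
density = (1 - np.abs(k) / (n // 2 + 1)) ** 4   # heavier sampling near k = 0
density *= (n / accel) / density.sum()          # expected #samples = n/accel
mask = rng.random(n) < np.clip(density, 0, 1)
mask[n // 2 - 8 : n // 2 + 8] = True            # fully sample the center

# Point spread function of the mask: inverse FFT of the sampling pattern.
psf = np.fft.ifft(np.fft.ifftshift(mask.astype(float)))
psf = np.abs(psf) / np.abs(psf).max()
print("sampled fraction:", mask.mean().round(3))
print("peak PSF sidelobe (lower = more incoherent):", np.sort(psf)[-2].round(4))
```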


Subjects
Brain/anatomy & histology, Data Compression/methods, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Three-Dimensional Imaging/methods, Magnetic Resonance Imaging/methods, Automated Pattern Recognition/methods, Algorithms, Artificial Intelligence, Humans, Magnetic Resonance Imaging/instrumentation, Imaging Phantoms, Reproducibility of Results, Sensitivity and Specificity
15.
J Magn Reson ; 188(2): 295-300, 2007 Oct.
Article in English | MEDLINE | ID: mdl-17723313

ABSTRACT

Iterative thresholding algorithms have a long history of application to signal processing. Although they are intuitive and easy to implement, their development was heuristic and mainly ad hoc. Using a special form of the thresholding operation, called soft thresholding, we show that the fixed point of iterative thresholding is equivalent to minimum ℓ1-norm reconstruction. We illustrate the method for spectrum analysis of a time series. This result helps to explain the success of these methods and illuminates connections with maximum entropy and minimum area methods, while also showing that there are more efficient routes to the same result. The power of the ℓ1 norm and related functionals as regularizers of solutions to underdetermined systems will likely find numerous useful applications in NMR.
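
In the same spirit, the hedged sketch below runs iterative soft thresholding on a nonuniformly sampled synthetic time series: enforce consistency with the measured points, soft-threshold in the frequency domain, and repeat while lowering the threshold. The sampling schedule, threshold schedule, and signal are illustrative assumptions.

```python
# Hedged sketch: iterative soft thresholding for spectral estimation from
# nonuniformly sampled data (enforce data consistency, threshold in the
# frequency domain, repeat). All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(8)
n = 256
freqs, amps = [20, 55, 130], [1.0, 0.7, 0.4]
t = np.arange(n)
fid = sum(a * np.exp(2j * np.pi * f * t / n - t / 150) for f, a in zip(freqs, amps))

keep = np.sort(rng.choice(n, size=n // 4, replace=False))   # 25% NUS schedule
measured = fid[keep]

x = np.zeros(n, dtype=complex)                              # estimated time series
for it in range(200):
    x[keep] = measured                                      # data consistency
    spec = np.fft.fft(x)
    lam = 0.9 * np.abs(spec).max() * (1 - it / 200) + 1e-3  # decreasing threshold
    mag = np.abs(spec)
    shrink = np.maximum(1 - lam / np.maximum(mag, 1e-12), 0.0)
    spec = spec * shrink                                    # complex soft threshold
    x = np.fft.ifft(spec)

recovered = np.argsort(np.abs(np.fft.fft(x)))[-3:]
print("true peak bins:", sorted(freqs), "| recovered bins:", sorted(recovered.tolist()))
```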


Subjects
Magnetic Resonance Spectroscopy/methods, Proteins/chemistry, Computer-Assisted Signal Processing, Algorithms, Fourier Analysis
16.
PLoS One ; 2(5): e460, 2007 May 23.
Article in English | MEDLINE | ID: mdl-17520019

ABSTRACT

BACKGROUND: We applied the Virtual Northern technique to human brain mRNA to systematically measure human mRNA transcript lengths on a genome-wide scale. METHODOLOGY/PRINCIPAL FINDINGS: We used separation by gel electrophoresis followed by hybridization to cDNA microarrays to measure 8,774 mRNA transcript lengths representing at least 6,238 genes at high (>90%) confidence. By comparing these transcript lengths to the Refseq and H-Invitational full-length cDNA databases, we found that nearly half of our measurements appeared to represent novel transcript variants. Comparison of length measurements determined by hybridization to different cDNAs derived from the same gene identified clones that potentially correspond to alternative transcript variants. We observed a close linear relationship between ORF and mRNA lengths in human mRNAs, identical in form to the relationship we had previously identified in yeast. Some functional classes of protein are encoded by mRNAs whose untranslated regions (UTRs) tend to be longer or shorter than average; these functional classes were similar in both human and yeast. CONCLUSIONS/SIGNIFICANCE: Human transcript diversity is extensive and largely unannotated. Our length dataset can be used as a new criterion for judging the completeness of cDNAs and annotating mRNA sequences. Similar relationships between the lengths of the UTRs in human and yeast mRNAs and the functions of the proteins they encode suggest that UTR sequences serve an important regulatory role among eukaryotes.


Subjects
Northern Blotting, Human Genome, Cluster Analysis, Humans, Oligonucleotide Array Sequence Analysis, Open Reading Frames, Messenger RNA/genetics, Untranslated Regions
17.
IEEE Trans Image Process ; 14(10): 1570-82, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16238062

ABSTRACT

The separation of image content into semantic parts plays a vital role in applications such as compression, enhancement, restoration, and more. In recent years, several pioneering works suggested basing such a separation on a variational formulation, while others used independent component analysis and sparsity. This paper presents a novel method for separating images into texture and piecewise smooth (cartoon) parts, exploiting both the variational and the sparsity mechanisms. The method combines the basis pursuit denoising (BPDN) algorithm and the total-variation (TV) regularization scheme. The basic idea presented in this paper is the use of two appropriate dictionaries, one for the representation of textures and the other for the natural scene parts assumed to be piecewise smooth. Both dictionaries are chosen such that they lead to sparse representations over one type of image content (either texture or piecewise smooth). The use of BPDN with the two amalgamated dictionaries leads to the desired separation, along with noise removal as a by-product. As the need to choose proper dictionaries is generally hard, a TV regularization is employed to better direct the separation process and reduce ringing artifacts. We present a highly efficient numerical scheme to solve the combined optimization problem posed by our model and show several experimental results that validate the algorithm's performance.


Subjects
Algorithms, Artificial Intelligence, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Information Storage and Retrieval/methods, Automated Pattern Recognition/methods, Statistical Models
18.
Proc Natl Acad Sci U S A ; 102(27): 9446-51, 2005 Jul 05.
Article in English | MEDLINE | ID: mdl-15976026

ABSTRACT

Consider an underdetermined system of linear equations y = Ax with known y and d × n matrix A. We seek the nonnegative x with the fewest nonzeros satisfying y = Ax. In general, this problem is NP-hard. However, for many matrices A there is a threshold phenomenon: if the sparsest solution is sufficiently sparse, it can be found by linear programming. We explain this by the theory of convex polytopes. Let a_j denote the jth column of A, 1 ≤ j ≤ n, let a_0 = 0, and let P denote the convex hull of the a_j. We say the polytope P is outwardly k-neighborly if every subset of k vertices not including 0 spans a face of P. We show that outward k-neighborliness is equivalent to the statement that, whenever y = Ax has a nonnegative solution with at most k nonzeros, it is the nonnegative solution to y = Ax having minimal sum. We also consider weak neighborliness, where the overwhelming majority of k-sets of a_j's not containing 0 span a face of P. This implies that most nonnegative vectors x with k nonzeros are uniquely recoverable from y = Ax by linear programming. Numerous corollaries follow by invoking neighborliness results. For example, for most large n by 2n underdetermined systems having a solution with fewer nonzeros than roughly half the number of equations, the sparsest solution can be found by linear programming.
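
The linear program in question is small enough to state directly: minimize the coefficient sum subject to the data constraints and nonnegativity. The hedged sketch below runs one such recovery trial with a Gaussian A and arbitrary sizes chosen to sit inside the regime the last sentence describes.

```python
# Hedged sketch: recover a sparse nonnegative x0 from y = A x0 by solving
#   min 1'x  subject to  Ax = y, x >= 0,
# and check whether the LP solution coincides with x0. Sizes are illustrative.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(9)
d, n, k = 100, 200, 25                       # equations, unknowns, nonzeros
A = rng.normal(size=(d, n))
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.random(k) + 0.5
y = A @ x0

res = linprog(c=np.ones(n), A_eq=A, b_eq=y, bounds=(0, None), method="highs")
print("LP found the sparsest nonnegative solution:",
      np.allclose(res.x, x0, atol=1e-5))
```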

19.
Proc Natl Acad Sci U S A ; 102(27): 9452-7, 2005 Jul 05.
Article in English | MEDLINE | ID: mdl-15972808

ABSTRACT

Let A be a d × n matrix and T = T^{n-1} be the standard simplex in R^n. Suppose that d and n are both large and comparable: d ≈ δn, δ ∈ (0, 1). We count the faces of the projected simplex AT when the projector A is chosen uniformly at random from the Grassmann manifold of d-dimensional orthoprojectors of R^n. We derive ρ_N(δ) > 0 with the property that, for any ρ < ρ_N(δ), with overwhelming probability for large d, the number of k-dimensional faces of P = AT is exactly the same as for T, for 0 ≤ k ≤ ρd. This implies that P is ⌊ρd⌋-neighborly, and its skeleton Skel_{⌊ρd⌋}(P) is combinatorially equivalent to Skel_{⌊ρd⌋}(T). We also study a weaker notion of neighborliness, where the numbers of k-dimensional faces satisfy f_k(P) ≥ f_k(T)(1 - ε). Vershik and Sporyshev previously showed existence of a threshold ρ_VS(δ) > 0 at which a phase transition occurs in k/d. We compute and display ρ_VS and compare it with ρ_N. Corollaries are as follows. (1) The convex hull of n Gaussian samples in R^d, with n large and proportional to d, has the same k-skeleton as the (n-1)-simplex, for k < ρ_N(d/n)d(1 + o_P(1)). (2) There is a "phase transition" in the ability of linear programming to find the sparsest nonnegative solution to systems of underdetermined linear equations. For most systems having a solution with fewer than ρ_VS(d/n)d(1 + o(1)) nonzeros, linear programming will find that solution.

20.
IEEE Trans Image Process ; 14(2): 200-12, 2005 Feb.
Article in English | MEDLINE | ID: mdl-15700525

ABSTRACT

A new class of related algorithms for deblocking block-transform compressed images and video sequences is proposed in this paper. The algorithms apply weighted sums to pixel quartets that are symmetrically aligned with respect to block boundaries. The basic weights, which are aimed at very low bit-rate images, are obtained from a two-dimensional function that obeys predefined constraints. Using these weights on images compressed at higher bit rates produces a deblocked image which contains blurred "false" edges near real edges. We refer to this phenomenon as the ghosting effect. In order to prevent its occurrence, the weights of pixels belonging to nonmonotone areas are modified by dividing each pixel's weight by a predefined factor called a grade. This scheme is referred to as weight adaptation by grading (WABG). Better deblocking of monotone areas is achieved by applying three iterations of the WABG scheme on such areas followed by a fourth iteration applied on the rest of the image. We refer to this scheme as deblocking frames of variable size (DFOVS). DFOVS automatically adapts itself to the activity of each block. This new class of algorithms produces very good subjective results and PSNR results that are competitive with available state-of-the-art methods.


Subjects
Algorithms, Artifacts, Data Compression/methods, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Computer-Assisted Signal Processing, Video Recording/methods, Computer Communication Networks, Computer Simulation, Computer-Assisted Numerical Analysis, Reproducibility of Results, Sensitivity and Specificity