ABSTRACT
OBJECTIVES: Severe combined immunodeficiency (SCID) is one of the most severe forms of primary immunodeficiency (PID), characterized by T cell lymphopenia (TCL) and absent cellular and humoral immune responses. However, not all patients with low T cell counts have abnormal T cell immunity; the observed TCL may instead be a temporary suppression resulting from transient lymphopenia secondary to severe infection. In such cases, the severity of the observed TCL should be estimated by assessing thymic output. METHODS: In this study, patients clinically suspected of having SCID were evaluated by lymphocyte subset analysis, naïve T cell quantification, and measurement of T cell receptor excision circles (TREC). RESULTS: Patients with transient lymphopenia had detectable TREC levels and normal naïve T cell subsets. Absolute lymphocyte counts and T cell numbers normalized in these patients after a short duration. CONCLUSIONS: The authors highlight the importance of detailed immunological investigation of an infant with severe infections and lymphopenia before labeling the infant as having SCID.
Subjects
Lymphopenia/complications, Lymphopenia/immunology, Severe Combined Immunodeficiency/complications, Severe Combined Immunodeficiency/immunology, Female, Humans, Humoral Immunity, Infant, Lymphocyte Count, Male
ABSTRACT
In many applications of regression, one is concerned with the efficiency of evaluating the estimated function in addition to the accuracy of the regression. For efficiency, it is common to represent the estimated function as a rectangular lattice of values (a lookup table, or LUT) that can be linearly interpolated for any needed value. Typically, a LUT is constructed from data in a two-step process that first fits a function to the data and then evaluates that fitted function at the nodes of the lattice. We present an approach, termed lattice regression, that directly optimizes the values of the lattice nodes to minimize the post-interpolation training error. Additionally, we propose a second-order difference regularizer to promote smoothness. We demonstrate the effectiveness of this approach on two image processing tasks that require both accurate regression and efficient function evaluation: inverse device characterization for color management and omnidirectional super-resolution for visual homing.
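The core idea can be illustrated with a minimal 1-D sketch (not the paper's implementation; the data, lattice size, and regularization weight below are illustrative): the lattice node values are fit jointly so that linear interpolation of the lattice minimizes training error, with a second-order difference penalty promoting smoothness.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)                    # training inputs in [0, 1]
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)

m = 17                                            # number of lattice nodes
nodes = np.linspace(0.0, 1.0, m)
h = nodes[1] - nodes[0]

# W[i, j] = linear-interpolation weight of node j for training sample i,
# so the post-interpolation training error is ||W @ theta - y||^2.
idx = np.clip(((x - nodes[0]) / h).astype(int), 0, m - 2)
frac = (x - nodes[idx]) / h
W = np.zeros((x.size, m))
W[np.arange(x.size), idx] = 1.0 - frac
W[np.arange(x.size), idx + 1] = frac

# Second-order difference operator: a discrete curvature penalty.
D = np.diff(np.eye(m), n=2, axis=0)

lam = 1e-2
theta = np.linalg.solve(W.T @ W + lam * D.T @ D, W.T @ y)

# Evaluate the resulting LUT at a query point by linear interpolation.
q = 0.25
j = min(int((q - nodes[0]) / h), m - 2)
t = (q - nodes[j]) / h
pred = (1 - t) * theta[j] + t * theta[j + 1]
```

Because the objective is quadratic in the node values, the fit reduces to a single linear solve, and evaluation afterwards costs only one interpolation per query.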
ABSTRACT
This paper addresses the problem of classifying signals that have been corrupted by noise and unknown linear time-invariant (LTI) filtering such as multipath, given labeled uncorrupted training signals. A maximum a posteriori approach to joint deconvolution and classification is considered, which produces estimates of the desired signal, the unknown channel, and the class label. For cases in which only a class label is needed, classification accuracy can be improved by not committing to an estimate of the channel or signal. A variant of the quadratic discriminant analysis (QDA) classifier is proposed that probabilistically accounts for the unknown LTI filtering and avoids deconvolution. The proposed QDA classifier can work either directly on the signal or on features whose transformation by LTI filtering can be analyzed; as an example, a classifier for subband-power features is derived. Results on simulated data and real bowhead whale vocalizations show that jointly considering deconvolution with classification can dramatically improve performance over traditional methods across a range of signal-to-noise ratios.
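A plain QDA classifier, the baseline that the proposed variant extends, can be sketched as follows (this sketch does not include the paper's compensation for unknown LTI filtering, and the 2-D synthetic features below are illustrative stand-ins for subband powers): each class is modeled by a Gaussian fit to its training features, and a test point receives the label with the higher log-density.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 2-D features for two classes (illustrative only).
X0 = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], 300)
X1 = rng.multivariate_normal([2.0, 2.0], [[1.0, -0.2], [-0.2, 1.5]], 300)

def gaussian_params(X):
    """Fit class-conditional Gaussian: sample mean and covariance."""
    return X.mean(axis=0), np.cov(X, rowvar=False)

def qda_score(x, mu, cov):
    """Log Gaussian density up to a constant (equal priors assumed)."""
    d = x - mu
    return -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d))

p0, p1 = gaussian_params(X0), gaussian_params(X1)
test_point = np.array([1.8, 2.1])
label = int(qda_score(test_point, *p1) > qda_score(test_point, *p0))
```

The paper's variant replaces these fixed class-conditional densities with densities that marginalize over the unknown channel, so no explicit deconvolution step is needed.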
Subjects
Acoustics, Animals, Discriminant Analysis, Likelihood Functions, Linear Models, Noise, Psychological Signal Detection, Water
ABSTRACT
A robust parallelized halftoning method is proposed that is insensitive to dot loss, a known problem in xerographic and high-resolution printing. The method uses standard clustered-dot dither for smooth image blocks but uses the rankings of the dither thresholds to halftone image blocks with high spatial activity. Experiments compare ranked dither to standard dither, error diffusion, and green-noise error diffusion for four-color printing, using standard clustered-dot dither masks at the standard angles for the cyan, magenta, yellow, and black planes.
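Ordered dithering with a threshold mask, the building block that ranked dither modifies, can be sketched as follows (a simple 4x4 Bayer mask stands in for a clustered-dot mask here, and the image is a flat synthetic gray; both are illustrative):

```python
import numpy as np

# 4x4 Bayer threshold mask, normalized to [0, 1).
bayer = np.array([[ 0,  8,  2, 10],
                  [12,  4, 14,  6],
                  [ 3, 11,  1,  9],
                  [15,  7, 13,  5]]) / 16.0

def ordered_dither(img):
    """Binarize a grayscale image in [0, 1] against a tiled threshold mask."""
    h, w = img.shape
    mask = np.tile(bayer, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (img > mask).astype(np.uint8)

gray = np.full((8, 8), 0.5)       # flat 50% gray test patch
halftone = ordered_dither(gray)
```

Because each pixel is thresholded independently, the method parallelizes trivially; the ranked-dither variant described above reorders which thresholds fire within a block in regions of high spatial activity.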
ABSTRACT
Local learning methods, such as local linear regression and nearest neighbor classifiers, base their estimates on nearby training samples, or neighbors. Usually, the number of neighbors used in estimation is fixed to a global "optimal" value chosen by cross-validation. This paper proposes adapting the number of neighbors to the local geometry of the data, without the need for cross-validation. The term enclosing neighborhood is introduced to describe a set of neighbors whose convex hull contains the test point when possible. It is proven that enclosing neighborhoods yield bounded estimation variance under some assumptions. Three such enclosing neighborhood definitions are presented: natural neighbors, natural neighbors inclusive, and enclosing k-NN. The effectiveness of these neighborhood definitions with local linear regression is tested by estimating lookup tables for color management. Significant improvements in error metrics are shown, indicating that, depending on the density of training samples, enclosing neighborhoods may be a promising adaptive neighborhood definition for other local learning tasks as well.
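The enclosing k-NN idea can be sketched in 1-D, where the convex hull of a neighbor set is simply an interval (the function, noise level, and constants below are illustrative, not from the paper): grow the neighborhood until it encloses the test point, then fit a local linear regression on those neighbors.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.sort(rng.uniform(0.0, 1.0, 50))
Y = 3.0 * X + 1.0 + 0.05 * rng.standard_normal(50)   # noisy linear data

def enclosing_knn_predict(x0, X, Y, k_max=10):
    """Grow k until the neighbors' hull (an interval in 1-D) contains x0."""
    order = np.argsort(np.abs(X - x0))
    for k in range(2, k_max + 1):
        nb = order[:k]
        if X[nb].min() <= x0 <= X[nb].max():          # x0 is enclosed
            A = np.column_stack([X[nb], np.ones(k)])
            slope, intercept = np.linalg.lstsq(A, Y[nb], rcond=None)[0]
            return slope * x0 + intercept
    return Y[order[0]]    # fall back to 1-NN if x0 is never enclosed

pred = enclosing_knn_predict(0.5, X, Y)
```

In higher dimensions the containment test requires checking membership in a convex hull (e.g. via a small linear program), but the principle is the same: enclosure, not a fixed k, decides the neighborhood size.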
Subjects
Algorithms, Color, Colorimetry/methods, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Printing/methods, Computer Simulation, Linear Models, Statistical Models, Regression Analysis, Reproducibility of Results, Sensitivity and Specificity
ABSTRACT
Nonparametric neighborhood methods for learning entail estimating class-conditional probabilities from the relative frequencies of samples that are "near-neighbors" of a test point. We propose and explore the behavior of a learning algorithm that uses linear interpolation and the principle of maximum entropy (LIME). We consider some theoretical properties of the LIME algorithm: LIME weights have exponential form, the estimates are consistent, and the estimates are robust to additive noise. In relation to bias reduction, we show that near-neighbors contain a test point in their convex hull asymptotically. The common linear interpolation solution used for regression on grids or lookup tables is shown to solve a related maximum entropy problem. Simulation results support use of the method, and performance on a pipeline integrity classification problem demonstrates that the proposed algorithm has practical value.
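The exponential form of maximum-entropy interpolation weights can be sketched in 1-D (an illustrative sketch, not the paper's algorithm): among all weight vectors satisfying sum(w) = 1 and sum(w * x) = x0, the maximum-entropy solution is w_i proportional to exp(lam * x_i), and the single parameter lam can be found by bisection because the weighted mean is monotone increasing in lam.

```python
import numpy as np

def maxent_weights(x, x0, iters=100):
    """Max-entropy weights on points x reproducing x0 as their weighted mean."""
    lo, hi = -50.0, 50.0                 # bracket for the exponential parameter
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        a = lam * x
        w = np.exp(a - a.max())          # subtract max for numerical stability
        w /= w.sum()
        if w @ x < x0:                   # weighted mean too small: raise lam
            lo = lam
        else:
            hi = lam
    return w

x = np.array([0.0, 1.0, 2.0, 4.0])
w = maxent_weights(x, 1.5)
```

The weights are strictly positive and concentrate on the samples nearest the constraint, which is the behavior the consistency and noise-robustness properties above rely on.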