Results 1 - 12 of 12
1.
Entropy (Basel) ; 23(4)2021 Mar 27.
Article in English | MEDLINE | ID: mdl-33801733

ABSTRACT

Transfer learning seeks to improve the generalization performance of a target task by exploiting the knowledge learned from a related source task. Central questions include deciding what information one should transfer and when transfer can be beneficial. The latter question is related to the so-called negative transfer phenomenon, where the transferred source information actually reduces the generalization performance of the target task. This happens when the two tasks are sufficiently dissimilar. In this paper, we present a theoretical analysis of transfer learning by studying a pair of related perceptron learning tasks. Despite the simplicity of our model, it reproduces several key phenomena observed in practice. Specifically, our asymptotic analysis reveals a phase transition from negative transfer to positive transfer as the similarity of the two tasks moves past a well-defined threshold.
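
A toy simulation can illustrate the negative-to-positive transfer transition described above. This is not the paper's asymptotic analysis: two teacher perceptrons with controllable overlap rho generate source and target data, and a ridge-regression proxy (an assumption made here for simplicity) is fit either on the scarce target data alone or on the pooled data; all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_src, n_tgt, n_test = 50, 500, 50, 2000

def teacher_pair(rho):
    """Two unit-norm teacher vectors with overlap <u, v> = rho."""
    u = rng.standard_normal(d); u /= np.linalg.norm(u)
    z = rng.standard_normal(d); z -= (z @ u) * u; z /= np.linalg.norm(z)
    return u, rho * u + np.sqrt(1.0 - rho ** 2) * z

def sample(w, n):
    X = rng.standard_normal((n, d))
    return X, np.sign(X @ w)

def fit(X, y, lam=1e-2):
    """Ridge regression on +/-1 labels as a simple linear-classifier proxy."""
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

for rho in [0.0, 0.3, 0.6, 0.9, 1.0]:
    u, v = teacher_pair(rho)
    Xs, ys = sample(u, n_src)   # plentiful source data
    Xt, yt = sample(v, n_tgt)   # scarce target data
    Xe, ye = sample(v, n_test)  # fresh target test data
    w_tgt = fit(Xt, yt)
    w_pool = fit(np.vstack([Xs, Xt]), np.concatenate([ys, yt]))
    acc = lambda w: (np.sign(Xe @ w) == ye).mean()
    print(f"rho={rho:.1f}  target-only acc={acc(w_tgt):.3f}  pooled acc={acc(w_pool):.3f}")
```

For small rho, pooling the dissimilar source data tends to hurt target accuracy (negative transfer); past a similarity threshold, pooling helps.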

2.
Proc IEEE Inst Electr Electron Eng ; 106(8): 1293-1310, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30828106

ABSTRACT

For many modern applications in science and engineering, data are collected in a streaming fashion carrying time-varying information, and practitioners need to process them with a limited amount of memory and computational resources in a timely manner for decision making. This is often coupled with the missing data problem, in which only a small fraction of the data attributes is observed. These complications impose significant, and unconventional, constraints on the problem of streaming Principal Component Analysis (PCA) and subspace tracking, which is an essential building block for many inference tasks in signal processing and machine learning. This survey article reviews a variety of classical and recent algorithms for solving this problem with low computational and memory complexities, particularly those applicable in the big data regime with missing data. We illustrate that streaming PCA and subspace tracking algorithms can be understood through algebraic and geometric perspectives, and that they need to be adjusted carefully to handle missing data. Both asymptotic and non-asymptotic convergence guarantees are reviewed. Finally, we benchmark the performance of several competitive algorithms in the presence of missing data for both well-conditioned and ill-conditioned systems.
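
A minimal sketch of the streaming, missing-data setting, assuming a rank-one spiked stream and an Oja-style update with rescaled zero-filling of unobserved entries. The algorithms surveyed in the article handle missing entries far more carefully; this is only a rough illustration of the problem shape.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, eta, p_obs = 40, 10000, 0.01, 0.5

u = rng.standard_normal(d); u /= np.linalg.norm(u)   # true leading direction

w = rng.standard_normal(d); w /= np.linalg.norm(w)
for t in range(T):
    x = 3.0 * rng.standard_normal() * u + rng.standard_normal(d)  # spiked stream
    mask = rng.random(d) < p_obs              # entries observed this round
    x_obs = np.where(mask, x, 0.0) / p_obs    # rescaled zero-fill (biased on the
                                              # diagonal, but serviceable when
                                              # the spike is strong)
    w += eta * (x_obs @ w) * x_obs            # Oja-style stochastic update
    w /= np.linalg.norm(w)

print("alignment |<w, u>|:", abs(w @ u))
```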

3.
IEEE Trans Image Process ; 26(11): 5107-5121, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28742038

ABSTRACT

Many patch-based image denoising algorithms can be formulated as applying a smoothing filter to the noisy image. Expressed as matrices, the smoothing filters must be row normalized, so that each row sums to unity. Surprisingly, if we apply a column normalization before the row normalization, the performance of the smoothing filter can often be significantly improved. Prior work showed that this performance gain is related to the Sinkhorn-Knopp balancing algorithm, an iterative procedure that symmetrizes a row-stochastic matrix to a doubly stochastic matrix. However, a complete understanding of the performance gain phenomenon is still lacking. In this paper, we study the performance gain phenomenon from a statistical learning perspective. We show that Sinkhorn-Knopp is equivalent to an expectation-maximization (EM) algorithm of learning a Gaussian mixture model of the image patches. By establishing the correspondence between the steps of Sinkhorn-Knopp and the EM algorithm, we provide a geometrical interpretation of the symmetrization process. This observation allows us to develop a new denoising algorithm called Gaussian mixture model symmetric smoothing filter (GSF). GSF extends Sinkhorn-Knopp and generalizes the original smoothing filters. Despite its simple formulation, GSF outperforms many existing smoothing filters and performs comparably to several state-of-the-art denoising algorithms.
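
A minimal sketch of the column-then-row balancing that the paper analyzes, applied to a toy Gaussian affinity matrix; the affinity model and all parameters below are illustrative, not taken from the paper.

```python
import numpy as np

def sinkhorn_knopp(W, n_iter=100, tol=1e-10):
    """Alternate column and row normalization to drive a positive matrix
    toward a doubly stochastic one."""
    W = np.asarray(W, dtype=float)
    for _ in range(n_iter):
        W = W / W.sum(axis=0, keepdims=True)   # column normalization
        W = W / W.sum(axis=1, keepdims=True)   # row normalization
        if np.abs(W.sum(axis=0) - 1.0).max() < tol:
            break
    return W

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 3))                  # six toy "patches"
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / 2.0)                            # Gaussian affinity (illustrative)
Wds = sinkhorn_knopp(W)
print(Wds.sum(axis=0))   # columns ≈ 1
print(Wds.sum(axis=1))   # rows = 1 (the last step row-normalizes)
```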

4.
Neuroimage ; 125: 587-600, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26481679

ABSTRACT

Motivated by recent progress in signal processing on graphs, we have developed a matched signal detection (MSD) theory for signals with intrinsic structures described by weighted graphs. First, we regard graph Laplacian eigenvalues as frequencies of graph signals and assume that the signal is in a subspace spanned by the first few graph Laplacian eigenvectors associated with lower eigenvalues. The conventional matched subspace detector can be applied to this case. Furthermore, we study signals that may not merely live in a subspace. Concretely, we consider signals with bounded variation on graphs and more general signals that are randomly drawn from a prior distribution. For bounded variation signals, the test is a weighted energy detector. For the random signals, the test statistic is the difference of signal variations on associated graphs, if a degenerate Gaussian distribution specified by the graph Laplacian is adopted. We evaluate the effectiveness of the MSD on graphs with both simulated and real data sets. Specifically, we apply MSD to the brain imaging data classification problem of Alzheimer's disease (AD) based on two independent data sets: 1) positron emission tomography data with Pittsburgh compound-B tracer of 30 AD and 40 normal control (NC) subjects, and 2) resting-state functional magnetic resonance imaging (R-fMRI) data of 30 early mild cognitive impairment and 20 NC subjects. Our results demonstrate that the MSD approach is able to outperform the traditional methods and help detect AD at an early stage, likely because it exploits the manifold structure of the data.
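
A hedged sketch of the matched-subspace idea on a path graph: the candidate subspace is spanned by the first k Laplacian eigenvectors (the low graph frequencies), and the detector compares how much of a signal's energy falls inside that subspace. The energy-ratio statistic here is a simplification of the formal test, and the graph is illustrative.

```python
import numpy as np

# Laplacian of a path graph on n nodes; its eigenvectors play the role of
# graph Fourier modes, with small eigenvalues = low graph frequencies
n, k = 64, 8
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1
_, V = np.linalg.eigh(L)
U = V[:, :k]                    # candidate low-frequency subspace

def energy_ratio(x, U):
    """Fraction of the signal's energy captured by the subspace."""
    return np.linalg.norm(U.T @ x) ** 2 / np.linalg.norm(x) ** 2

rng = np.random.default_rng(3)
smooth = U @ rng.standard_normal(k)          # a signal living in the subspace
noise = rng.standard_normal(n)
print(energy_ratio(smooth + 0.1 * noise, U)) # close to 1
print(energy_ratio(noise, U))                # close to k/n
```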


Subject(s)
Alzheimer Disease/diagnosis; Brain Mapping/methods; Brain/pathology; Image Interpretation, Computer-Assisted/methods; Models, Neurological; Algorithms; Humans; Machine Learning; Magnetic Resonance Imaging; Models, Theoretical; Positron-Emission Tomography
5.
J Neurophysiol ; 115(1): 39-59, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26467513

ABSTRACT

Perceptual decision making is fundamental to a broad range of fields including neurophysiology, economics, medicine, advertising, and law. Although recent findings have yielded major advances in our understanding of perceptual decision making, decision making as a function of time and frequency (i.e., decision-making dynamics) is not well understood. To limit the review length, we focus most of this review on human findings. Animal findings, which are extensively reviewed elsewhere, are included when beneficial or necessary. We attempt to put these various findings and data sets, which can appear to be unrelated in the absence of a formal dynamic analysis, into context using published models. Specifically, by adding appropriate dynamic mechanisms (e.g., high-pass filters) to existing models, it appears that a number of otherwise seemingly disparate findings from the literature might be explained. One hypothesis that arises through this dynamic analysis is that decision making includes phasic (high-pass) neural mechanisms, an evidence accumulator, and/or some sort of midtrial decision-making mechanism (e.g., a peak detector and/or a decision boundary).
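
The hypothesized architecture can be sketched in a few lines: noisy momentary evidence feeds both a classical accumulator and a first-order high-pass (phasic) stage whose peak could serve as a midtrial cue. All time constants, thresholds, and noise levels below are arbitrary illustrative values, not fitted to any data set from the review.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, T, drift, bound, tau = 0.01, 1000, 0.4, 1.0, 0.2

# noisy momentary evidence; the stimulus switches on at step T//4
e = rng.standard_normal(T)
e[T // 4:] += drift

# first-order high-pass (phasic) filter of the evidence
alpha = tau / (tau + dt)
y = np.zeros(T)
for k in range(1, T):
    y[k] = alpha * (y[k - 1] + e[k] - e[k - 1])

a = np.cumsum(e) * dt                     # classical evidence accumulator
hits = np.flatnonzero(a >= bound)         # decision-boundary crossings
print("accumulator decision at step:", hits[0] if hits.size else "none")
print("peak phasic response:", y.max())   # candidate midtrial peak-detector cue
```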


Subject(s)
Brain/physiology; Decision Making; Perception; Animals; Humans; Sensory Thresholds
6.
PLoS One ; 10(5): e0128136, 2015.
Article in English | MEDLINE | ID: mdl-26024224

ABSTRACT

Understanding network features of brain pathology is essential to reveal underpinnings of neurodegenerative diseases. In this paper, we introduce a novel graph regression model (GRM) for learning structural brain connectivity of Alzheimer's disease (AD) measured by amyloid-β deposits. The proposed GRM regards 11C-labeled Pittsburgh Compound-B (PiB) positron emission tomography (PET) imaging data as smooth signals defined on an unknown graph. This graph is then estimated through an optimization framework, which fits the graph to the data with an adjustable level of uniformity of the connection weights. Under the assumed data model, results based on simulated data illustrate that our approach can accurately reconstruct the underlying network, often more accurately than either sample correlation or ℓ1-regularized partial correlation estimation. Evaluations performed upon PiB-PET imaging data of 30 AD and 40 elderly normal control (NC) subjects demonstrate that the connectivity patterns revealed by the GRM are easy to interpret and consistent with known pathology. Moreover, the hubs of the reconstructed networks match the cortical hubs given by functional MRI. The discriminative network features, including both global connectivity measurements and degree statistics of specific nodes, discovered from the AD and NC amyloid-β networks provide new potential biomarkers for preclinical and clinical AD.
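
A rough sketch of smoothness-based graph learning in the spirit of the GRM. The objective below (quadratic smoothness plus a log-degree barrier and a Frobenius penalty, solved by projected gradient descent) is a generic stand-in, not the authors' exact formulation; the scaling of Z and all hyperparameters are assumptions made for the demo.

```python
import numpy as np

def learn_graph(X, alpha=1.0, beta=0.5, eta=0.01, n_iter=500):
    """Learn a nonnegative, symmetric, zero-diagonal weight matrix W so that
    the signals in X (nodes x signals) are smooth on the resulting graph:
      minimize  sum_ij W_ij ||x_i - x_j||^2 - alpha * sum_i log(deg_i)
                + beta * ||W||_F^2   subject to  W >= 0."""
    n = X.shape[0]
    Z = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Z = Z / Z.max()                          # scale for numerical convenience
    W = np.ones((n, n)) - np.eye(n)
    for _ in range(n_iter):
        deg = W.sum(axis=1) + 1e-9
        grad = Z - alpha * (1.0 / deg[:, None] + 1.0 / deg[None, :]) + 2 * beta * W
        W = np.maximum(W - eta * grad, 0.0)  # projected gradient step
        W = 0.5 * (W + W.T)
        np.fill_diagonal(W, 0.0)
    return W

# demo: signals that vary smoothly around a ring of 12 nodes
rng = np.random.default_rng(0)
n, m = 12, 200
t = np.arange(n)
X = np.cos(2 * np.pi * np.outer(t, rng.integers(1, 3, m)) / n)
X += 0.1 * rng.standard_normal((n, m))
W = learn_graph(X)
print(np.argsort(W[0])[-2:])   # strongest edges of node 0; expect ring neighbors 1 and 11
```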


Subject(s)
Alzheimer Disease/pathology; Brain/pathology; Models, Biological; Regression Analysis; Aged; Alzheimer Disease/metabolism; Amyloid beta-Peptides/metabolism; Aniline Compounds; Brain/metabolism; Female; Fourier Analysis; Humans; Male; Positron-Emission Tomography/methods; Reference Values; Thiazoles
7.
IEEE Trans Image Process ; 23(8): 3711-25, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25122743

ABSTRACT

We propose a randomized version of the nonlocal means (NLM) algorithm for large-scale image filtering. The new algorithm, called Monte Carlo nonlocal means (MCNLM), speeds up the classical NLM by computing a small subset of image patch distances, which are randomly selected according to a designed sampling pattern. We make two contributions. First, we analyze the performance of the MCNLM algorithm and show that, for large images or large external image databases, the random outcomes of MCNLM are tightly concentrated around the deterministic full NLM result. In particular, our error probability bounds show that, at any given sampling ratio, the probability for MCNLM to have a large deviation from the original NLM solution decays exponentially as the size of the image or database grows. Second, we derive explicit formulas for optimal sampling patterns that minimize the error probability bound by exploiting partial knowledge of the pairwise similarity weights. Numerical experiments show that MCNLM is competitive with other state-of-the-art fast NLM algorithms for single-image denoising. When applied to denoising images using an external database containing ten billion patches, MCNLM returns a randomized solution that is within 0.2 dB of the full NLM solution while reducing the runtime by three orders of magnitude.
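
A minimal 1-D sketch of the Monte Carlo idea: compute NLM weights only for a random subset of patches. Uniform sampling is used here purely for simplicity, whereas the paper derives optimal non-uniform sampling patterns; the toy signal, patch size, and bandwidth are all illustrative.

```python
import numpy as np

def mcnlm_pixel(patches, ref, vals, ratio, h, rng):
    """NLM estimate of one pixel, computing patch distances only for a random
    subset of patches (uniform sampling, for simplicity)."""
    sel = rng.random(len(patches)) < ratio
    sel[ref] = True                                   # always include self
    d2 = ((patches[sel] - patches[ref]) ** 2).mean(axis=1)
    w = np.exp(-d2 / (h * h))
    return (w * vals[sel]).sum() / w.sum()

rng = np.random.default_rng(5)
# toy 1-D "image": piecewise-constant signal plus noise, 5-sample patches
clean = np.repeat([0.0, 1.0, 0.5], 200)
noisy = clean + 0.1 * rng.standard_normal(clean.size)
P = np.lib.stride_tricks.sliding_window_view(noisy, 5)
vals = noisy[2:-2]                                    # patch-center values
print(mcnlm_pixel(P, 300, vals, 1.0, 0.2, rng))       # full NLM
print(mcnlm_pixel(P, 300, vals, 0.1, 0.2, rng))       # 10% of the distances
print(clean[302])                                     # ground truth at that pixel
```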


Subject(s)
Algorithms; Artifacts; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Models, Statistical; Computer Simulation; Monte Carlo Method; Numerical Analysis, Computer-Assisted; Reproducibility of Results; Sample Size; Sensitivity and Specificity; Signal Processing, Computer-Assisted
8.
Proc Natl Acad Sci U S A ; 110(30): 12186-91, 2013 Jul 23.
Article in English | MEDLINE | ID: mdl-23776236

ABSTRACT

Imagine that you are blindfolded inside an unknown room. You snap your fingers and listen to the room's response. Can you hear the shape of the room? Some people can do it naturally, but can we design computer algorithms that hear rooms? We show how to compute the shape of a convex polyhedral room from its response to a known sound, recorded by a few microphones. Geometric relationships between the arrival times of echoes enable us to "blindfoldedly" estimate the room geometry. This is achieved by exploiting the properties of Euclidean distance matrices. Furthermore, we show that under mild conditions, first-order echoes provide a unique description of convex polyhedral rooms. Our algorithm starts from the recorded impulse responses and proceeds by learning the correct assignment of echoes to walls. In contrast to earlier methods, the proposed algorithm reconstructs the full 3D geometry of the room from a single sound emission, and with an arbitrary geometry of the microphone array. As long as the microphones can hear the echoes, we can position them as we want. Besides answering a basic question about the inverse problem of room acoustics, our results find applications in areas such as architectural acoustics, indoor localization, virtual reality, and audio forensics.
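
The echo-to-wall assignment can be sketched through the EDM property the abstract mentions: augmenting the microphone distance matrix with a candidate set of echo distances yields a valid 3-D EDM only if those echoes are consistent with a single image source. The residual score below is a simplified consistency measure, not the paper's full reconstruction algorithm, and the geometry is made up for the demo.

```python
import numpy as np

def edm(X):
    """Matrix of squared pairwise distances between the rows of X."""
    g = (X ** 2).sum(axis=1)
    return g[:, None] + g[None, :] - 2 * X @ X.T

def edm_residual(D):
    """Residual eigenvalue energy beyond rank 3 of the doubly centered
    matrix; approximately zero iff D is an EDM of points in R^3."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    ev = np.sort(np.abs(np.linalg.eigvalsh(-0.5 * J @ D @ J)))[::-1]
    return ev[3:].sum() / ev.sum()

rng = np.random.default_rng(6)
mics = rng.random((5, 3))                     # known microphone positions
src = np.array([2.0, 1.0, 1.5])               # hypothetical image source
d_true = np.linalg.norm(mics - src, axis=1)   # echo distances (delay * c)

def augmented_residual(dists):
    Da = np.zeros((6, 6))
    Da[:5, :5] = edm(mics)
    Da[5, :5] = Da[:5, 5] = dists ** 2
    return edm_residual(Da)

print(augmented_residual(d_true))                   # ≈ 0: consistent assignment
print(augmented_residual(rng.permutation(d_true)))  # > 0: echoes mis-assigned
```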

9.
Inf Process Med Imaging ; 23: 1-12, 2013.
Article in English | MEDLINE | ID: mdl-24683953

ABSTRACT

We develop a matched signal detection (MSD) theory for signals with an intrinsic structure described by a weighted graph. Hypothesis tests are formulated under different signal models. In the simplest scenario, we assume that the signal is deterministic with noise in a subspace spanned by a subset of eigenvectors of the graph Laplacian. The conventional matched subspace detection can be easily extended to this case. Furthermore, we study signals with a certain level of smoothness. The test turns out to be a weighted energy detector when the noise variance is negligible. More generally, we presume that the signal follows a prior distribution, which could be learned from training data. The test statistic is then the difference of signal variations on associated graph structures, if an Ising model is adopted. The effectiveness of MSD on graphs is evaluated with both simulated and real data. In particular, we apply it to the network classification problem of Alzheimer's disease (AD). The preliminary results demonstrate that our approach is able to exploit the sub-manifold structure of the data, and therefore achieve a better performance than traditional principal component analysis (PCA).
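
A sketch of the difference-of-variations statistic described above: classify a signal by comparing its quadratic variation x^T L x on two candidate graphs. The random graphs and subspace construction are toy stand-ins, and the Ising-model derivation that motivates the statistic is omitted.

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

def variation(x, L):
    """Quadratic signal variation x^T L x: small when x is smooth on the graph."""
    return x @ L @ x

rng = np.random.default_rng(7)
n = 30
def random_graph():
    A = np.triu((rng.random((n, n)) < 0.2).astype(float), 1)
    return A + A.T
L1, L2 = laplacian(random_graph()), laplacian(random_graph())

# a signal smooth on graph 1: combination of its low-frequency eigenvectors
_, V = np.linalg.eigh(L1)
x = V[:, :4] @ rng.standard_normal(4)

s = variation(x, L1) - variation(x, L2)   # difference-of-variations statistic
print("statistic:", s, "->", "graph 1" if s < 0 else "graph 2")
```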


Subject(s)
Alzheimer Disease/diagnostic imaging; Brain Mapping/methods; Brain/diagnostic imaging; Connectome/methods; Nerve Net/diagnostic imaging; Pattern Recognition, Automated/methods; Positron-Emission Tomography/methods; Algorithms; Alzheimer Disease/metabolism; Aniline Compounds; Benzothiazoles/pharmacokinetics; Brain/metabolism; Humans; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Neural Pathways/diagnostic imaging; Reproducibility of Results; Sensitivity and Specificity; Thiazoles; Tissue Distribution
10.
IEEE Trans Image Process ; 21(4): 1421-36, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22180507

ABSTRACT

We study a new image sensor that is reminiscent of traditional photographic film. Each pixel in the sensor has a binary response, giving only a 1-bit quantized measurement of the local light intensity. To analyze its performance, we formulate the oversampled binary sensing scheme as a parameter estimation problem based on quantized Poisson statistics. We show that, with a single-photon quantization threshold and large oversampling factors, the Cramér-Rao lower bound (CRLB) of the estimation variance approaches that of an ideal unquantized sensor, i.e., as if there were no quantization in the sensor measurements. Furthermore, the CRLB is shown to be asymptotically achievable by the maximum-likelihood estimator (MLE). By showing that the log-likelihood function of our problem is concave, we guarantee the global optimality of iterative algorithms in finding the MLE. Numerical results on both synthetic data and images taken by a prototype sensor verify our theoretical analysis and demonstrate the effectiveness of our image reconstruction algorithm. They also suggest the potential application of the oversampled binary sensing scheme in high dynamic range photography.
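
For the single-photon threshold the MLE has a closed form, which makes the scheme easy to sketch. The CRLB expressions below follow from the Bernoulli likelihood of this illustrative case (they are standard consequences of the model, not formulas quoted from the paper), and the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
lam, K = 0.25, 10000   # mean photons per binary pixel, oversampling factor

# single-photon threshold: a pixel fires iff at least one photon arrives
bits = (rng.poisson(lam, size=K) >= 1).astype(float)

# MLE: P(bit = 1) = 1 - exp(-lam)  =>  lam_hat = -log(1 - mean(bits))
p_hat = bits.mean()
lam_hat = -np.log(max(1.0 - p_hat, 1e-12))   # guard against all-ones
print("lam_hat:", lam_hat)

# CRLB of the 1-bit scheme vs. an ideal unquantized photon counter;
# the ratio tends to 1 as lam -> 0, i.e., with large oversampling factors
crlb_binary = (np.exp(lam) - 1) / K
crlb_ideal = lam / K
print("CRLB ratio (binary / ideal):", crlb_binary / crlb_ideal)
```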


Subject(s)
Image Interpretation, Computer-Assisted/methods; Photography/instrumentation; Photometry/instrumentation; Semiconductors; Signal Processing, Computer-Assisted/instrumentation; Transducers; Computer-Aided Design; Data Interpretation, Statistical; Equipment Design; Equipment Failure Analysis; Image Enhancement/instrumentation; Image Enhancement/methods; Image Interpretation, Computer-Assisted/instrumentation; Pilot Projects; Poisson Distribution; Reproducibility of Results; Sample Size; Sensitivity and Specificity
11.
IEEE Trans Image Process ; 19(8): 2085-98, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20236886

ABSTRACT

Color image demosaicking is a key process in the digital imaging pipeline. In this paper, we study a well-known and influential demosaicking algorithm based upon alternating projections (AP), proposed by Gunturk, Altunbasak, and Mersereau in 2002. Since its publication, the AP algorithm has been widely cited and compared against in a series of more recent papers in the demosaicking literature. Despite its good performance, a limitation of the AP algorithm is its high computational complexity. We provide three main contributions in this paper. First, we present a rigorous analysis of the convergence property of the AP demosaicking algorithm, showing that it is a contraction mapping, with a unique fixed point. Second, we show that this fixed point is in fact the solution to a constrained quadratic minimization problem, thus establishing the optimality of the AP algorithm. Finally, using the tool of polyphase representation, we show how to obtain the results of the AP algorithm in a single step, implemented as linear filtering in the polyphase domain. Replacing the original iterative procedure by the proposed one-step solution leads to substantial computational savings, by about an order of magnitude in our experiments.
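
The "one step instead of iterating" conclusion rests on the contraction property, which a generic linear example illustrates: a contraction of the form x -> Ax + b has a unique fixed point that can be computed directly. The paper's actual one-step solution is a polyphase-domain filter, not the toy solve below.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 8
A = rng.standard_normal((n, n))
A *= 0.9 / np.linalg.norm(A, 2)   # spectral norm < 1: the map is a contraction
b = rng.standard_normal(n)

# iterate x <- A x + b, as an alternating-projections-style scheme would
x = np.zeros(n)
for _ in range(200):
    x = A @ x + b

# one-step replacement: the unique fixed point x* = (I - A)^{-1} b
x_star = np.linalg.solve(np.eye(n) - A, b)
print("gap between iterate and fixed point:", np.linalg.norm(x - x_star))
```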


Subject(s)
Algorithms; Color; Colorimetry/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Numerical Analysis, Computer-Assisted; Reproducibility of Results; Sensitivity and Specificity; Signal Processing, Computer-Assisted
12.
IEEE Trans Image Process ; 16(4): 918-31, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17405426

ABSTRACT

In 1992, Bamberger and Smith proposed the directional filter bank (DFB) for an efficient directional decomposition of 2-D signals. Due to the nonseparable nature of the system, extending the DFB to higher dimensions while still retaining its attractive features is a challenging and previously unsolved problem. We propose a new family of filter banks, named NDFB, that can achieve the directional decomposition of arbitrary N-dimensional (N ≥ 2) signals with a simple and efficient tree-structured construction. In 3-D, the ideal passbands of the proposed NDFB are rectangular-based pyramids radiating out from the origin at different orientations and tiling the entire frequency space. The proposed NDFB achieves perfect reconstruction via an iterated filter bank with a redundancy factor of N in N-D. The angular resolution of the proposed NDFB can be iteratively refined by invoking more levels of decomposition through a simple expansion rule. By combining the NDFB with a new multiscale pyramid, we propose the surfacelet transform, which can be used to efficiently capture and represent surface-like singularities in multidimensional data.
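
Directional frequency partitioning itself is easy to illustrate in 2-D with ideal (non-realizable) FFT-domain wedge masks, which trivially give perfect reconstruction; the NDFB's contribution is achieving a comparable tiling with a realizable tree-structured filter bank in N dimensions. The sketch below is that idealized illustration, not the NDFB construction.

```python
import numpy as np

def directional_subbands(img, n_dir=4):
    """Split an image into directional subbands using ideal FFT-domain wedge
    masks; because the masks partition the frequency plane, the subbands
    sum back to the original image exactly."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    theta = np.mod(np.arctan2(fy, fx), np.pi)   # orientation in [0, pi)
    F = np.fft.fft2(img)
    edges = np.linspace(0.0, np.pi, n_dir + 1)
    return [np.fft.ifft2(F * ((theta >= lo) & (theta < hi))).real
            for lo, hi in zip(edges[:-1], edges[1:])]

rng = np.random.default_rng(10)
img = rng.random((64, 64))
bands = directional_subbands(img)
print(np.abs(sum(bands) - img).max())   # ≈ 0: perfect reconstruction
```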


Subject(s)
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Information Storage and Retrieval/methods; Reproducibility of Results; Sensitivity and Specificity