1.
Appl Opt ; 50(21): 3773-80, 2011 Jul 20.
Article in English | MEDLINE | ID: mdl-21772358

ABSTRACT

The local model fitting (LMF) method is a single-shot surface profiling algorithm. Its measurement principle is based on the assumption that the target surface to be profiled is locally flat, which allows the information carried by nearby pixels in a single interference image to be exploited for robust fitting. Provided that the shape and size of the local area are chosen appropriately, the LMF method has been demonstrated to provide very accurate measurement results. However, an appropriate choice of the local area often requires prior knowledge of the target surface profile or manual parameter tuning. To cope with this problem, we propose a method for automatically determining the shape and size of the local regions from the single interference image alone. The effectiveness of the proposed method is demonstrated through experiments.
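As an illustration of the LMF principle described above (a generic sketch, not the authors' implementation), one can fit the fringe model g(x) ≈ a + b·cos(2πfx) + c·sin(2πfx) by least squares in a small window around each pixel; under the locally flat assumption, the local phase follows from the fitted coefficients. The function name, the fixed one-dimensional window, and the known carrier frequency are all assumptions made for illustration:

```python
import numpy as np

def lmf_phase(image, carrier_freq, half_win=4):
    """Estimate a per-pixel phase by locally fitting the fringe model
    g(x) ~ a + b*cos(2*pi*f*x) + c*sin(2*pi*f*x)
    over a small horizontal window (locally flat assumption)."""
    rows, cols = image.shape
    x = np.arange(cols)
    phase = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            lo, hi = max(0, j - half_win), min(cols, j + half_win + 1)
            xs = x[lo:hi]
            A = np.column_stack([np.ones(hi - lo),
                                 np.cos(2 * np.pi * carrier_freq * xs),
                                 np.sin(2 * np.pi * carrier_freq * xs)])
            a, b, c = np.linalg.lstsq(A, image[i, lo:hi], rcond=None)[0]
            phase[i, j] = np.arctan2(-c, b)  # g = a + R*cos(2*pi*f*x + phi)
    return phase

# Synthetic fringe image for a flat surface with constant phase 0.5 rad.
x = np.arange(40)
img = (1.0 + 0.5 * np.cos(2 * np.pi * 0.1 * x + 0.5))[None, :]
phase = lmf_phase(img, carrier_freq=0.1)
```

In an actual interferometer the phase would then be converted to height (e.g., h = φλ/4π for wavelength λ); the question of how to choose the window shape and size is exactly what this paper's automatic selection addresses.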

2.
Appl Opt ; 49(22): 4270-7, 2010 Aug 01.
Article in English | MEDLINE | ID: mdl-20676182

ABSTRACT

The local model fitting (LMF) method is a useful single-shot surface profiling algorithm. Its measurement principle relies on the assumption that the target surface is locally flat: based on this assumption, the height of the surface at each pixel is estimated from the pixel values in its vicinity. Flat areas of the target surface can therefore be estimated precisely, whereas measurement accuracy may degrade in areas where the assumption is violated, such as on curved surfaces or at sharp steps. In this paper, we propose to overcome this problem by weighting the contribution of each pixel according to how well it satisfies the locally flat assumption. However, since no information on the surface profile is available beforehand, we estimate it iteratively and use the current estimate to determine the weights. This algorithm is named the iteratively reweighted LMF (IRLMF) method. Experimental results show that the proposed algorithm works very well.
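The abstract does not give the exact weighting scheme, so the following is only a generic sketch of the iteratively-reweighted idea: pixels whose residuals against the fitted model are large are down-weighted in the next round. The function name and the 1/|r| weighting are illustrative assumptions, not the paper's rule:

```python
import numpy as np

def irls_fit(A, y, n_iter=10, eps=1e-6):
    """Iteratively reweighted least squares: rows (pixels) that fit the
    local model poorly get smaller weights on the next iteration."""
    w = np.ones(len(y))
    theta = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        sw = np.sqrt(w)
        theta = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)[0]
        r = y - A @ theta
        w = 1.0 / (np.abs(r) + eps)  # large residual -> small weight
    return theta

# A line y = 2x with one gross outlier: plain least squares is pulled
# toward the outlier, while the reweighted fit ignores it.
x = np.arange(10.0)
A = np.column_stack([x, np.ones(10)])
y = 2.0 * x
y[5] = 100.0
theta = irls_fit(A, y)
```

The same mechanism applied to the local fringe-model fit is what lets curved regions and step edges contribute less to each height estimate.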

3.
Appl Opt ; 48(18): 3497-508, 2009 Jun 20.
Article in English | MEDLINE | ID: mdl-19543360

ABSTRACT

The local model fitting (LMF) method is a useful single-shot surface profiling algorithm based on spatial carrier frequency fringe patterns. The measurement principle of the LMF method relies on the assumption that the target surface is locally flat. In this paper, we first analyze the measurement error of the LMF method caused by violation of the locally flat assumption. More specifically, we theoretically prove that the measurement error is zero at fringe intensity extrema in an interference pattern even when the locally flat assumption is violated. Based on this theoretical finding, we propose a new surface profiling method called the interpolated LMF (iLMF) algorithm, which is more accurate and computationally efficient than the original LMF method. The practical usefulness of the iLMF method is shown through experiments.
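A hedged sketch of the interpolation idea: keep the estimates at fringe intensity extrema, where the analysis above shows the error vanishes, and interpolate between them. The discrete extremum test and the linear interpolation below are illustrative simplifications of the iLMF algorithm:

```python
import numpy as np

def interpolate_at_extrema(fringe, estimates):
    """Retain `estimates` only at local extrema of the fringe intensity
    and linearly interpolate everywhere else."""
    s = np.asarray(fringe, float)
    mid = np.arange(1, len(s) - 1)
    is_ext = (s[1:-1] - s[:-2]) * (s[1:-1] - s[2:]) > 0  # strict extremum
    ext = mid[is_ext]
    return np.interp(np.arange(len(s)), ext, np.asarray(estimates)[ext])

# Fringe with extrema every 5 samples; the raw estimates here are a
# simple ramp, so interpolation between extrema reproduces it exactly.
x = np.arange(40)
fringe = np.cos(2 * np.pi * 0.1 * x)
refined = interpolate_at_extrema(fringe, x.astype(float))
```

Because the fit only has to be evaluated near extrema, this structure also hints at why the method can be cheaper than fitting at every pixel.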

4.
Appl Opt ; 45(31): 7999-8005, 2006 Nov 01.
Article in English | MEDLINE | ID: mdl-17068539

ABSTRACT

A new surface profiling algorithm called the local model fitting (LMF) method is proposed. LMF is a single-shot method that employs only a single image, so it is fast and robust against vibration. LMF does not require the conventional assumption that the target surface is smooth in a band-limited sense; instead, we assume that the target surface is locally constant. This enables sharp edges on the surface to be recovered. LMF employs only local image data, so objects covered with heterogeneous materials can also be measured. The LMF algorithm is simple to implement and computationally efficient. Experimental results showed that the proposed LMF method works very well.

5.
IEEE Trans Neural Netw ; 17(2): 345-56, 2006 Mar.
Article in English | MEDLINE | ID: mdl-16566463

ABSTRACT

This paper presents an analysis of the recently proposed modulated Hebb-Oja (MHO) method, which performs a linear mapping to a lower-dimensional subspace; the principal component subspace is the case analyzed. Compared with some other well-known methods for extracting the principal component subspace (e.g., Oja's Subspace Learning Algorithm), the proposed method has one feature that can be seen as desirable from the biological point of view: the synaptic efficacy learning rule does not need explicit information about the values of the other efficacies to modify an individual efficacy. Also, the simplicity of the "neural circuits" that perform global computations, and the fact that their number does not depend on the number of input and output neurons, can be seen as good features of the proposed method.


Subject(s)
Algorithms; Artificial Intelligence; Decision Support Techniques; Models, Theoretical; Numerical Analysis, Computer-Assisted; Pattern Recognition, Automated/methods; Computer Simulation; Neural Networks, Computer
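The MHO rule itself is not reproduced in the abstract. As a point of reference, the classic single-neuron Oja rule, the Hebbian building block that Hebb-Oja variants modulate, can be sketched as follows (the data scales, learning rate, and epoch count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def oja_neuron(X, eta=0.01, epochs=30):
    """Single-neuron Oja rule:  y = w.x,  w += eta * y * (x - y*w).
    The weight vector converges to +/- the first principal eigenvector
    of the input covariance while staying approximately unit-norm."""
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += eta * y * (x - y * w)  # Hebbian term minus decay
    return w

# Data whose variance is dominated by the first coordinate.
X = rng.normal(size=(1500, 3)) * np.array([2.0, 0.5, 0.5])
w = oja_neuron(X)
```

Note the locality property the abstract highlights: each update of w touches only that neuron's own weights, with no explicit access to other efficacies.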
6.
Int J Neural Syst ; 14(5): 313-23, 2004 Oct.
Article in English | MEDLINE | ID: mdl-15593379

ABSTRACT

Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction, and data compression. Given a set of multivariate measurements, PCA provides a smaller set of "basis vectors" with less redundancy, and PSA provides a subspace spanned by them. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Owing to their low complexity, such algorithms and their neural-network implementations are potentially useful for tracking slow changes in the correlations of the input data or for updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to the neurons. The algorithm is obtained by modifying one of the best-known PSA learning algorithms, the Subspace Learning Algorithm (SLA), using the Time-Oriented Hierarchical Method (TOHM). The method uses two distinct time scales. On the faster time scale, the PSA algorithm governs the "behavior" of all output neurons; on the slower scale, the output neurons compete to fulfill their "own interests," and the basis vectors of the principal subspace are rotated toward the principal eigenvectors. At the end of the paper we briefly analyze how (and why) the time-oriented hierarchical method can be used to transform any existing neural-network PSA method into a PCA method.


Subject(s)
Computer Simulation; Learning/physiology; Neural Networks, Computer; Principal Component Analysis; Algorithms; Humans; Models, Neurological; Neurons/physiology; Probability; Time Factors
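The unmodified SLA, which the TOHM two-time-scale scheme builds on, can be sketched as follows. Plain SLA finds only an orthonormal basis of the principal subspace, not the individual eigenvectors, which is exactly the gap the rotation on the slower time scale is meant to close. The learning rate, data, and epoch count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sla(X, m, eta=0.01, epochs=20):
    """Oja's Subspace Learning Algorithm:
        y = W^T x,   W += eta * (x - W y) y^T.
    Columns of W converge to an orthonormal basis of the top-m principal
    subspace (an arbitrary rotation of the leading eigenvectors)."""
    W = 0.1 * rng.normal(size=(X.shape[1], m))
    for _ in range(epochs):
        for x in X:
            y = W.T @ x
            W += eta * np.outer(x - W @ y, y)
    return W

# Variance concentrated in the first two coordinates, so the principal
# subspace is (approximately) the e1-e2 plane.
X = rng.normal(size=(2000, 3)) * np.array([2.0, 1.5, 0.1])
W = sla(X, m=2)
```

At convergence W^T W ≈ I and the component of each basis vector along the low-variance direction is near zero; any further rotation of W within the subspace, as TOHM performs, leaves these properties intact.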
7.
Int J Neural Syst ; 13(4): 215-23, 2003 Aug.
Article in English | MEDLINE | ID: mdl-12964209

ABSTRACT

This paper presents one possible implementation of a transformation that performs linear mapping to a lower-dimensional subspace. Principal component subspace will be the one that will be analyzed. Idea implemented in this paper represents generalization of the recently proposed infinity OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally--a feature which is usually considered as desirable from the biological point of view. Comparing to some other wellknown methods, proposed synaptic efficacy learning rule requires less information about the value of the other efficacies to make single efficacy modification. Synaptic efficacies are modified by implementation of Modulated Hebb-type (MH) learning rule. Slightly modified MH algorithm named Modulated Hebb Oja (MHO) algorithm, will be also introduced. Structural similarity of the proposed network with part of the retinal circuit will be presented, too.


Subject(s)
Neural Networks, Computer; Principal Component Analysis; Models, Neurological; Neurons/physiology; Retina/physiology
8.
Appl Opt ; 41(23): 4876-83, 2002 Aug 10.
Article in English | MEDLINE | ID: mdl-12197656

ABSTRACT

We propose a fast surface-profiling algorithm based on white-light interferometry by use of sampling theory. We first provide a generalized sampling theorem that reconstructs the squared-envelope function of the white-light interferogram from sampled values of the interferogram and then propose the new algorithm based on the theorem. The algorithm extends the sampling interval to 1.425 µm when an optical filter with a center wavelength of 600 nm and a bandwidth of 60 nm is used. The sampling interval is 6-14 times wider than those used in conventional systems. The algorithm has been installed in a commercial system that achieved the world's fastest scanning speed of 80 µm/s. The height resolution of the system is of the order of 10 nm for a measurement range of greater than 100 µm.
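The paper's generalized sampling theorem is not reproduced in the abstract. For reference, the squared-envelope function it reconstructs can be computed from a densely sampled interferogram via the analytic signal (an FFT-based Hilbert transform), with the surface height read off at the envelope peak; all signal parameters below are illustrative:

```python
import numpy as np

def squared_envelope(interferogram):
    """Squared envelope of a fringe signal via the analytic signal
    (FFT-based Hilbert transform)."""
    g = np.asarray(interferogram, float)
    g = g - g.mean()                    # drop the DC background
    n = len(g)
    h = np.zeros(n)                     # spectral mask for the analytic signal
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(np.fft.fft(g) * h)
    return np.abs(analytic) ** 2

# Synthetic white-light interferogram: fringes under a Gaussian envelope
# centered at the (illustrative) surface height of 1.2 um.
z = np.linspace(-5.0, 5.0, 512)  # scan position, um
g = 1.0 + 0.5 * np.exp(-((z - 1.2) / 1.5) ** 2) * np.cos(2 * np.pi * z / 0.6)
height = z[np.argmax(squared_envelope(g))]
```

The paper's contribution is recovering this envelope from far coarser samples than the dense scan assumed here, which is what permits the 1.425 µm sampling interval and the 80 µm/s scan speed.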

9.
Neural Netw ; 15(3): 349-61, 2002 Apr.
Article in English | MEDLINE | ID: mdl-12125890

ABSTRACT

The problem of designing the regularization term and regularization parameter for linear regression models is discussed. Previously, we derived an approximation to the generalization error called the subspace information criterion (SIC), which is an unbiased estimator of the generalization error with finite samples under certain conditions. In this paper, we apply SIC to regularization learning and use it for: (a) choosing the optimal regularization term and regularization parameter from the given candidates; (b) obtaining the closed form of the optimal regularization parameter for a fixed regularization term. The effectiveness of SIC is demonstrated through computer simulations with artificial and real data.


Subject(s)
Linear Models; Models, Theoretical
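SIC's unbiased estimator of the generalization error is not given in the abstract. As a stand-in for illustration, the sketch below computes the closed-form solution for quadratic (ridge) regularization and selects the regularization parameter from a candidate set by validation error rather than by SIC; all data and candidate values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def ridge(A, y, lam):
    """Closed-form quadratically regularized least squares:
    theta = (A^T A + lam * I)^{-1} A^T y."""
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ y)

# Illustrative data: linear model with Gaussian noise.
A = rng.normal(size=(60, 8))
theta_true = rng.normal(size=8)
y = A @ theta_true + 0.3 * rng.normal(size=60)

# Score candidate regularization parameters on an independent validation
# set (where SIC would instead estimate the generalization error from
# the training data alone).
A_val = rng.normal(size=(60, 8))
y_val = A_val @ theta_true + 0.3 * rng.normal(size=60)
candidates = [1e-3, 1e-2, 1e-1, 1.0, 10.0]
best_lam = min(candidates,
               key=lambda l: np.mean((A_val @ ridge(A, y, l) - y_val) ** 2))
theta_hat = ridge(A, y, best_lam)
```

Point (b) of the abstract goes further than this grid search: for a fixed regularization term, minimizing SIC yields the optimal parameter in closed form.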