Results 1 - 12 of 12
1.
J Microsc ; 283(2): 93-101, 2021 08.
Article in English | MEDLINE | ID: mdl-33797077

ABSTRACT

Through-focus scanning optical microscopy (TSOM) is a model-based nanoscale metrology technique that combines conventional bright-field microscopy with the relevant numerical simulations. A TSOM image is generated after through-focus scanning and data processing. However, the mechanical vibration and optical noise introduced into the TSOM image during image generation can affect the measurement accuracy. To reduce this effect, this paper proposes an imaging error compensation method for TSOM images based on deep learning with a U-Net. Here, the simulated TSOM image is regarded as the ground truth, and the U-Net is trained on the experimental TSOM images with a supervised learning strategy. The experimental TSOM image is first encoded and then decoded by the U-shaped structure of the U-Net. The difference between the experimental and simulated TSOM images is minimised by iteratively updating the weights and bias factors of the network, yielding the compensated TSOM image. The proposed method is applied to optimising TSOM images for nanoscale linewidth estimation. The results demonstrate that the method performs as expected and provides a significant improvement in accuracy.
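The abstract includes no code; the following is a minimal PyTorch sketch of the supervised scheme it describes, using a deliberately small U-shaped network. Layer sizes, tensor shapes and the training loop are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): train a small U-Net-style
# encoder-decoder so that an experimental TSOM image is mapped towards its
# simulated (noise-free) counterpart, which serves as the ground truth.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.MaxPool2d(2),
                                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        e1 = self.enc1(x)                      # encoder level 1
        e2 = self.enc2(e1)                     # encoder level 2 (downsampled)
        d = self.up(e2)                        # decoder upsampling
        d = torch.cat([d, e1], dim=1)          # U-shaped skip connection
        return self.dec(d)                     # compensated TSOM image

def train_step(model, optimiser, experimental, simulated):
    """One supervised update: minimise the difference between the network
    output for an experimental TSOM image and its simulated counterpart."""
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(model(experimental), simulated)
    loss.backward()
    optimiser.step()
    return loss.item()

model = TinyUNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
# experimental, simulated: tensors of shape (batch, 1, H, W), e.g. from a DataLoader
```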


Subjects
Deep Learning, Microscopy, Optical Phenomena
2.
J Microsc ; 283(2): 117-126, 2021 08.
Article in English | MEDLINE | ID: mdl-33826151

ABSTRACT

Through-focus scanning optical microscopy (TSOM) is an economical, non-contact and nondestructive method for rapid measurement of three-dimensional nanostructures. Two approaches are commonly used to infer the dimensions of a sample from its TSOM image: the library-matching method and the machine-learning regression method. The former suffers from a small measurement range and strict environmental requirements; the latter relies on feature extraction that is strongly influenced by human subjectivity and offers low measurement accuracy. To address these problems, a TSOM dimensional measurement method based on a deep-learning classification model is proposed. In this paper, TSOM images are used to train ResNet50 and DenseNet121 classification models, and test images are then fed to the trained models, with the classification result taken as the measurement value. The test results show that, as the number of training linewidths increases, the mean square error (MSE) on the test images reaches 21.05 nm² for the DenseNet121 model and 31.84 nm² for the ResNet50 model, both far lower than with the machine-learning regression method, so the measurement accuracy is significantly improved. This verifies the feasibility of using a deep-learning classification model, instead of a machine-learning regression model, for dimensional measurement, and it provides a basis for further improving the accuracy of dimensional measurement.
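As an illustration of the classification-as-measurement idea, here is a hedged torchvision sketch in which each discrete training linewidth becomes a class for DenseNet121. The linewidth set, the single-channel input adaptation and the scoring helpers are assumptions, not the paper's setup.

```python
# Illustrative sketch only: each discrete training linewidth is a class label,
# and the predicted class is read back as the measured linewidth.
import torch
import torch.nn as nn
from torchvision import models

LINEWIDTHS_NM = [30, 35, 40, 45, 50]          # hypothetical set of training linewidths

model = models.densenet121(weights=None)
model.features.conv0 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                 padding=3, bias=False)   # TSOM images are single channel
model.classifier = nn.Linear(model.classifier.in_features, len(LINEWIDTHS_NM))

def measure(model, tsom_image):
    """Return the linewidth (nm) associated with the predicted class."""
    model.eval()
    with torch.no_grad():
        logits = model(tsom_image.unsqueeze(0))            # input shape (1, 1, H, W)
        return LINEWIDTHS_NM[int(logits.argmax(dim=1))]

def mse_nm2(model, images, true_nm):
    """Mean square error (nm^2) between predicted and true linewidths."""
    errors = [(measure(model, img) - t) ** 2 for img, t in zip(images, true_nm)]
    return sum(errors) / len(errors)
```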


Subjects
Deep Learning, Humans, Machine Learning, Microscopy, Optical Phenomena
3.
Opt Express ; 28(5): 6294-6305, 2020 Mar 02.
Article in English | MEDLINE | ID: mdl-32225881

ABSTRACT

Through-focus scanning optical microscopy (TSOM) is a highly efficient, low-cost and nondestructive model-based optical method capable of measuring semiconductor targets at the nanometre to micrometre scale. However, instability caused by lateral movement of the target and non-uniform angular illumination during the collection of through-focus (TF) images restricts TSOM's potential applications: considerable effort is needed to align the optical elements before collection and to correct the experimental TSOM image before it can be compared with the simulated TSOM image. This paper presents an improved TSOM correction method based on the Fourier transform. First, a series of experimental TF images is collected by scanning the objective of the optical microscope, and the corresponding ideal TF images are simulated with a full-vector formulation. Then, each experimental image is aligned to its simulated counterpart before the TSOM image is constructed. An analysis of precision and repeatability demonstrates that this method improves the performance of TSOM and shows promise for online and in-machine measurements.
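The abstract does not specify the registration algorithm beyond "using Fourier transform"; a common way to realise such an alignment is Fourier-domain phase correlation, sketched below with scikit-image. The sub-pixel upsampling factor and the simplified TSOM-image construction are assumptions.

```python
# Sketch of the alignment idea: each experimental through-focus (TF) image is
# registered to its simulated counterpart by phase correlation before the
# TSOM image is built.  The TSOM construction here is deliberately simplified.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_tf_stack(experimental_stack, simulated_stack):
    """Shift every experimental TF image onto its simulated counterpart."""
    aligned = []
    for exp_img, sim_img in zip(experimental_stack, simulated_stack):
        offset, _, _ = phase_cross_correlation(sim_img, exp_img,
                                               upsample_factor=20)
        aligned.append(nd_shift(exp_img, offset))        # sub-pixel correction
    return np.stack(aligned)

def build_tsom_image(aligned_stack):
    """One simplified TSOM construction: take an intensity profile through the
    centre of each corrected TF image and stack the profiles over focus."""
    centre_row = aligned_stack.shape[1] // 2
    return aligned_stack[:, centre_row, :]               # (focus position, x)
```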

4.
Opt Express ; 27(23): 33978-33998, 2019 Nov 11.
Article in English | MEDLINE | ID: mdl-31878456

ABSTRACT

Through-focus scanning optical microscopy (TSOM) is an economical and nondestructive method for measuring three-dimensional nanostructures. After obtaining a TSOM image, a library-matching method is typically used to interpret the optical intensity information and determine the dimensions of a measurement target. To further improve dimensional measurement accuracy, this paper proposes a machine learning method that extracts texture information from TSOM images. The method extracts feature vectors of TSOM images in terms of the Gray-Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP), and Histogram of Oriented Gradients (HOG). We tested models trained with these vectors in isolation, in pairs, and in a combination of all three, giving seven possible feature vectors. Once normalized, these feature vectors were used to train and test three machine-learning regression models: random forest, GBDT, and AdaBoost. Compared with the results of the library-matching method, the measurement accuracy of the machine learning method is considerably higher. When detecting dimensional features that fall into a wide range of sizes, the AdaBoost model used with the combined LBP and HOG feature vectors performs better than the others. For detecting dimensional features within a narrower range of sizes, the AdaBoost model combined with the HOG feature extraction algorithm performs better.
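A sketch of the described feature-extraction and regression pipeline, assuming 8-bit grayscale TSOM images; the GLCM/LBP/HOG parameter values and the AdaBoost settings are illustrative choices, not the paper's.

```python
# Texture features (GLCM, LBP, HOG) from TSOM images feeding an AdaBoost regressor.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern, hog
from sklearn.ensemble import AdaBoostRegressor

def glcm_features(img):
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p)[0, 0]
                     for p in ("contrast", "homogeneity", "energy", "correlation")])

def lbp_features(img, points=8, radius=1):
    lbp = local_binary_pattern(img, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def hog_features(img):
    return hog(img, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

def lbp_hog_vector(img):
    # The combined LBP + HOG vector reported to work best over a wide size range.
    return np.concatenate([lbp_features(img), hog_features(img)])

# X: list of TSOM images (uint8 arrays), y: known dimensions in nm
# model = AdaBoostRegressor(n_estimators=200).fit([lbp_hog_vector(i) for i in X], y)
```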

5.
Microsc Res Tech ; 82(2): 101-113, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30451353

ABSTRACT

Inserting an electrically tunable lens (ETL), such as a liquid lens or a tunable acoustic gradient lens, into a microscope can enable fast axial scanning, autofocusing, and an extended depth of field. However, placing the ETL at different positions affects image quality in different ways. In particular, in a wide-field microscope used for measurement, the magnification must remain constant when an ETL is introduced, otherwise measurement accuracy suffers. To determine the best position for the ETL, the axial scanning range and the magnification variation are quantitatively analysed and discussed for finite and infinity-corrected microscopes through theoretical analysis, optical simulation, and experiment for four configurations: the ETL placed at the back focal plane of the objective, at the conjugate plane of the objective's back focal plane between two relay lenses, behind two relay lenses, or at the imaging detector plane. The results are as follows. When the ETL is placed at the back focal plane, the system has a large scanning range, but the magnification varies because the back focal plane lies inside the objective. When the ETL is placed between two relay lenses, the magnification stays constant, but the scanning range is small. When the ETL is placed behind two relay lenses, the magnification remains invariant and the scanning range is large, but the ETL and the two relay lenses sit inside the microscope, so the system must be customised. Finally, when the ETL is placed at the imaging detector plane, the magnification stays constant, but the scanning range is zero, meaning the system has no axial scanning capability. RESEARCH HIGHLIGHTS: An electrically tunable lens (ETL) is introduced into a wide-field microscope for measurement. The axial scanning range and the magnification variation are analysed and discussed. Theoretical analysis, ZEMAX optical simulation and experiments are performed.
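To make the trade-off concrete, here is a paraxial ABCD-matrix sketch (not the authors' ZEMAX model) for one of the four configurations: an ETL a short distance behind the objective, since the true back focal plane lies inside the objective housing. All focal lengths and spacings are placeholder values; the output shows the in-focus object plane shifting while the magnification drifts.

```python
# Paraxial (ABCD ray-matrix) sketch: scan the ETL focal length, find which
# object plane is imaged onto the detector, and read off the magnification.
import numpy as np
from scipy.optimize import brentq

def lens(f):  return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])
def space(d): return np.array([[1.0, d], [0.0, 1.0]])

def system(object_distance, f_etl):
    """Object -> objective (f = 10 mm) -> ETL 20 mm after the objective
    (the back focal plane itself sits inside the objective) -> tube lens
    (f = 200 mm) -> detector.  All spacings in mm and purely illustrative."""
    m = space(object_distance)
    m = lens(10.0) @ m            # objective
    m = space(20.0) @ m           # to the ETL (10 mm beyond the back focal plane)
    m = lens(f_etl) @ m           # electrically tunable lens
    m = space(190.0) @ m          # to the tube lens
    m = lens(200.0) @ m           # tube lens
    m = space(200.0) @ m          # to the detector
    return m

def in_focus(f_etl):
    """Object distance imaged onto the detector (B = 0) and its magnification."""
    b = lambda s: system(s, f_etl)[0, 1]
    s = brentq(b, 5.0, 20.0)                 # bracket is a placeholder guess
    return s, system(s, f_etl)[0, 0]         # (in-focus plane, magnification A)

for f in (1e9, 400.0, 200.0):                # 1e9 mm ~ ETL switched off
    s, mag = in_focus(f)
    print(f"f_ETL = {f:10.0f} mm -> object plane {s:6.3f} mm, magnification {mag:6.2f}")
```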

6.
Microsc Res Tech ; 81(12): 1434-1442, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30351513

ABSTRACT

Monocular rotational microscopes suffer from a shallow depth of field and a reduced field of view. Consequently, we propose monocular wide-field optical microscopy with an extended depth of field to enable accurate 3D measurements of micro-objects. We use a liquid lens and a 4f system to extend the system's depth of field, and two additional rotational reflectors to capture multi-view image sequences. In addition, we increase the number of expansion directions in the patch-based multi-view stereo algorithm and optimise the expansion radius. Our approach outperforms the one based on the standard patch-based multi-view stereo method and achieves a high relative measurement accuracy of 1.9%. RESEARCH HIGHLIGHTS: This study proposes 3D measurement of micro-objects via monocular rotational microscopy. The depth of field is extended by the use of a liquid lens and a 4f system. The approach outperforms the patch-based multi-view stereo method and achieves a high relative measurement accuracy of 1.9%.

7.
Sensors (Basel) ; 18(1)2018 Jan 14.
Article in English | MEDLINE | ID: mdl-29342908

ABSTRACT

In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle (UAV) camera and to improve processing speed, we propose a rapid 3D reconstruction method based on an image queue that exploits the continuity and relevance of UAV images. The proposed approach first compresses the feature points of each image into three principal component points using the principal component analysis (PCA) method. To select key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images, and the structural calculation for all images can be performed by repeating the previous steps. Finally, a dense 3D point cloud is obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method improves the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in calculation speed becomes more noticeable.
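One plausible reading of the key-image selection step, sketched in Python: the feature points of each image are summarised by three "principal component points" (taken here as the centroid plus one point along each principal axis), and a fixed-length deque stands in for the image queue. The summary definition and the threshold are assumptions rather than the paper's exact criterion.

```python
# Sketch of PCA-based key-image selection feeding a fixed-length image queue.
from collections import deque
import numpy as np

def principal_component_points(points):
    """points: (N, 2) array of feature-point coordinates in one image."""
    centre = points.mean(axis=0)
    cov = np.cov(points - centre, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    p1 = centre + np.sqrt(eigvals[1]) * eigvecs[:, 1]   # point along the major axis
    p2 = centre + np.sqrt(eigvals[0]) * eigvecs[:, 0]   # point along the minor axis
    return np.stack([centre, p1, p2])

def is_key_image(prev_pcp, new_pcp, threshold_px=40.0):
    """Accept a frame if its summary points moved enough since the last key image."""
    return np.linalg.norm(new_pcp - prev_pcp, axis=1).mean() > threshold_px

image_queue = deque(maxlen=8)      # fixed-length queue of key images
# for frame, points in video_frames:             # points from a feature detector
#     pcp = principal_component_points(points)
#     if not image_queue or is_key_image(image_queue[-1][1], pcp):
#         image_queue.append((frame, pcp))        # old frames drop off automatically
```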

8.
Opt Express ; 25(21): 25004-25022, 2017 Oct 16.
Article in English | MEDLINE | ID: mdl-29041173

ABSTRACT

Photographic images taken in foggy or hazy weather (hazy images) exhibit poor visibility and detail because of the scattering and attenuation of light by suspended particles; image dehazing has therefore attracted considerable research attention. Current polarization-based dehazing algorithms rely strongly on the presence of a "sky area", so the selection of model parameters is susceptible to interference from high-brightness objects and strong light sources, and the restored images are noisy. To solve these problems, we propose a polarization-based dehazing algorithm that does not rely on a sky area ("non-sky"). First, a linear polarizer is used to collect three polarized images. The maximum- and minimum-intensity images are then obtained by calculation, assuming that the polarization of light emanating from objects is negligible in most scenarios involving non-specular objects. Subsequently, the polarization difference of the two images is used to determine a sky area and to calculate the atmospheric light at infinity. Next, using the global features of the image, and based on the assumption that the airlight and the object radiance are uncorrelated, the degree of polarization of the airlight (DPA) is calculated by solving for the optimal solution of the correlation-coefficient equation between airlight and object radiance; the optimal solution is obtained by setting the right-hand side of the equation to zero. The hazy image is then dehazed. Finally, a denoising filter that combines the polarization difference information with block-matching and 3D (BM3D) filtering is designed to smooth the image. Our experimental results show that the proposed polarization-based dehazing algorithm does not depend on whether the image includes a sky area and does not require complex models. Moreover, except in specular-object scenarios, the dehazed images are superior to those obtained by the methods of Tarel, Fattal, Ren, and Berman based on the criteria of no-reference quality assessment (NRQA), blind/referenceless image spatial quality evaluator (BRISQUE), blind anisotropic quality index (AQI), and e.
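A condensed sketch of the polarization-difference recovery in the spirit of the classic Schechner-style formulation; the paper's non-sky estimators for the atmospheric light at infinity and for the DPA are replaced here by explicit parameters, so a_inf and p_a are inputs rather than the authors' method.

```python
# I_max / I_min from three polarizer angles, then radiance recovery.
import numpy as np

def stokes_from_three(i0, i45, i90):
    """Max/min intensity images from polarizer angles 0, 45 and 90 degrees."""
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = 2.0 * i45 - s0
    dolp = np.sqrt(s1 ** 2 + s2 ** 2)
    return 0.5 * (s0 + dolp), 0.5 * (s0 - dolp)      # I_max, I_min

def dehaze(i_max, i_min, p_a, a_inf, t_min=0.1):
    """Recover object radiance given the airlight degree of polarization p_a
    and the atmospheric light at infinity a_inf (both estimated elsewhere)."""
    total = i_max + i_min
    airlight = (i_max - i_min) / max(p_a, 1e-6)       # estimated airlight map
    transmission = np.clip(1.0 - airlight / a_inf, t_min, 1.0)
    return (total - airlight) / transmission
```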

9.
Sensors (Basel) ; 17(7)2017 Jun 28.
Article in English | MEDLINE | ID: mdl-28657609

ABSTRACT

High-accuracy target recognition and tracking systems that use a single sensor or a passive multisensor set are susceptible to external interference and exhibit environmental dependencies. These difficulties stem mainly from limitations of the available imaging frequency bands and a general lack of coherent diversity in the available target-related data. This paper proposes an active multimodal sensor system for target recognition and tracking, consisting of a visible, an infrared, and a hyperspectral sensor. The system makes full use of its multisensor information collection abilities; furthermore, it can actively control the different sensors to collect additional data according to the needs of the real-time target recognition and tracking processes. This level of integration between hardware collection control and data processing is experimentally shown to effectively improve the accuracy and robustness of the target recognition and tracking system.

10.
Microsc Res Tech ; 79(11): 1112-1122, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27582009

ABSTRACT

In the design of passive autofocusing (AF) systems for optical microscopes, many time-consuming and tedious experiments have to be performed to determine and design a suitable focus criterion function, because this function is sample dependent. To accelerate the development of AF systems for optical microscopes and to increase AF speed while maintaining AF accuracy, this study proposes a self-adaptive, nonmechanical-motion AF system. The presented AF system does not require the selection and design of a focus criterion function during development. Instead, the system automatically determines a suitable focus criterion function for an observed sample by analysing the texture features of the sample, and subsequently performs an AF procedure to bring the sample into focus under the objective of the optical microscope. In addition, to increase AF speed, the mechanical Z-axis scanning of the sample or the objective is replaced by focus scanning with a liquid lens, which is driven by an electrical current and involves no mechanical motion. Experiments show that the reproducibility of the results obtained with the proposed self-adaptive, nonmechanical-motion AF system is better than that of traditional AF systems, and that the AF speed is 10 times higher. Also, the self-adaptive function increases the speed of the AF process by an average of 10.5% compared with the Laplacian and Tenengrad functions.
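For reference, the two classic focus criterion functions named in the abstract can be written compactly with OpenCV; how the proposed system selects between criteria from the sample's texture is not detailed in the abstract, so the selection rule below is only a placeholder.

```python
# Laplacian and Tenengrad focus measures, plus a placeholder texture-based choice.
import cv2
import numpy as np

def laplacian_measure(gray):
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def tenengrad_measure(gray):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return np.mean(gx ** 2 + gy ** 2)

def choose_criterion(gray):
    """Placeholder texture rule: busy samples -> Laplacian, smooth -> Tenengrad."""
    edge_density = np.mean(cv2.Canny(gray, 50, 150) > 0)
    return laplacian_measure if edge_density > 0.05 else tenengrad_measure

# Autofocus loop: sweep the liquid-lens driving current, keep the sharpest setting.
# criterion = choose_criterion(first_frame)
# best = max(candidate_currents, key=lambda c: criterion(grab_frame(c)))
```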

11.
J Microsc ; 258(3): 212-22, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25817930

ABSTRACT

The axial imaging range of optical microscopy is restricted by its fixed working plane and limited depth of field. In this paper, the axial capabilities of an off-the-shelf microscope are improved by inserting a liquid lens, whose focus is controlled by a driving electrical voltage, into the optical path of the microscope. First, numerical formulas for the working distance and the magnification as functions of the focus of the liquid lens are derived using a ray-tracing method, and it is concluded that the best positions for inserting a liquid lens with consistent magnification are the aperture plane and the rear focal plane of the objective lens. Second, with the liquid lens embedded in the microscope, the numerical relationship between the magnification, the working distance of the proposed flexible-axial-capability microscope, and the liquid-lens driving voltage is calibrated and fitted using the derived formulas. Third, techniques including autofocus, extended depth of field, and three-dimensional imaging are investigated and applied, enabling the designed microscope not only to flexibly control its working distance but also to extend the depth of field near the variable working plane. Experiments show that the presented flexible-axial-capability microscope has a long working-distance range of 8 mm, and that, by calibrating the magnification curve within this working-distance range, samples can be observed and measured precisely. The depth of field can be extended to 400 µm from the variable working plane, 20 times that of the off-the-shelf microscope.
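A small sketch of the calibration step: fit the measured working distance and magnification against the liquid-lens driving voltage so that both can be looked up at run time. The polynomial order and the sample data below are invented for illustration, not the paper's calibration values.

```python
# Fit working distance and magnification versus liquid-lens driving voltage.
import numpy as np

voltage = np.array([30.0, 35.0, 40.0, 45.0, 50.0])      # driving voltage (V), example values
work_dist = np.array([10.2, 11.8, 13.5, 15.3, 17.4])    # measured working distance (mm), example
magnif = np.array([4.05, 4.02, 4.00, 3.98, 3.95])       # measured magnification, example

wd_of_v = np.polynomial.Polynomial.fit(voltage, work_dist, deg=2)
mag_of_v = np.polynomial.Polynomial.fit(voltage, magnif, deg=2)

v = 42.0                                   # example driving voltage
print(f"WD ~ {wd_of_v(v):.2f} mm, magnification ~ {mag_of_v(v):.3f}")
# The inverse lookup (target working distance -> voltage) follows by root-finding
# on the fitted working-distance polynomial.
```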

12.
Appl Opt ; 46(15): 3046-51, 2007 May 20.
Article in English | MEDLINE | ID: mdl-17514256

ABSTRACT

A new acoustic grating fringe projector (AGFP) was developed for high-speed, high-precision 3D measurement. A new acoustic grating fringe projection theory is also proposed to describe the optical system. The AGFP instrument can adjust the spatial phase and period of the fringes with unprecedented speed and accuracy. Using RF power proportional-integral-derivative (PID) control and CCD synchronous control, we obtain fringes with fine sinusoidal characteristics and achieve high-speed acquisition of image data. Using the device, we obtained a precise phase map for a 3D profile. In addition, the AGFP can work in a running-fringe mode, which could be applied in other measurement fields.
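Fringe-projection profilometry of this kind ultimately reduces to a phase map; below is a minimal worked example of the standard four-step phase-shifting computation, with the pi/2 shifts that the acoustic grating would provide. The RF PID and CCD synchronisation hardware control is outside this sketch.

```python
# Four-step phase-shifting: wrapped phase from four fringe images, then unwrapping.
import numpy as np
from skimage.restoration import unwrap_phase

def wrapped_phase(i0, i1, i2, i3):
    """Phase shifts of 0, pi/2, pi and 3*pi/2 between the four fringe images."""
    return np.arctan2(i3 - i1, i0 - i2)

# phase = unwrap_phase(wrapped_phase(i0, i1, i2, i3))   # continuous phase map
# The 3D profile follows from the phase difference between the object and a
# flat reference plane, via the projector/camera geometry.
```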
