Results 1 - 20 of 66
1.
Sensors (Basel) ; 24(13)2024 Jun 21.
Article in English | MEDLINE | ID: mdl-39000817

ABSTRACT

Parallax handling and structure preservation have long been important and challenging tasks in image stitching. In this paper, we propose an image stitching method that uses a sliding camera to eliminate perspective deformation and asymmetric optical flow to resolve parallax. By maintaining the viewpoint of the two input images in the non-overlapping areas of the mosaic and creating virtual cameras by interpolation in the overlapping area, the viewpoint is gradually transformed from one to the other, giving a smooth transition between the two image viewpoints and reducing perspective deformation. Two coarsely aligned warped images are generated with the help of a global projection plane. Optical flow propagation and gradient descent are then used to quickly calculate the bidirectional asymmetric optical flow between the two warped images, and an optical-flow-based method further aligns them to reduce parallax. During image blending, the softmax function and the registration error are used to adjust the width of the blending area, further eliminating ghosting and reducing parallax. Finally, comparisons with APAP, AANAP, SPHP, SPW, TFT, and REW show that our method not only effectively resolves perspective deformation but also produces more natural transitions between images. At the same time, our method robustly reduces local misalignment in various scenarios and achieves a higher structural similarity index. A scoring method combining subjective and objective evaluations of perspective deformation, local alignment, and runtime is defined and used to rate all methods; our method ranks first.
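
A minimal sketch of the flow-based alignment step is given below. It is not the authors' implementation: OpenCV's Farneback estimator stands in for their optical-flow propagation and gradient-descent scheme, and the half-way warp (the function name flow_halfway_warp, the alpha parameter, and the BGR-input assumption are all illustrative) is only approximate.

```python
# Hedged sketch of flow-based parallax reduction between two coarsely
# aligned warped images warp_a and warp_b (equal-size BGR uint8 arrays).
import cv2
import numpy as np

def flow_halfway_warp(warp_a, warp_b, alpha=0.5):
    gray_a = cv2.cvtColor(warp_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(warp_b, cv2.COLOR_BGR2GRAY)
    # Bidirectional (asymmetric) dense flow between the two warped images.
    flow_ab = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                           0.5, 3, 31, 3, 5, 1.2, 0)
    flow_ba = cv2.calcOpticalFlowFarneback(gray_b, gray_a, None,
                                           0.5, 3, 31, 3, 5, 1.2, 0)
    h, w = gray_a.shape
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Remapping warp_b with a fraction of flow_ab pulls it part-way toward
    # warp_a's viewpoint (and vice versa), shrinking residual parallax
    # before blending in the overlap. The partial warp is an approximation.
    b_to_mid = cv2.remap(warp_b, gx + alpha * flow_ab[..., 0],
                         gy + alpha * flow_ab[..., 1], cv2.INTER_LINEAR)
    a_to_mid = cv2.remap(warp_a, gx + alpha * flow_ba[..., 0],
                         gy + alpha * flow_ba[..., 1], cv2.INTER_LINEAR)
    return a_to_mid, b_to_mid
```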

2.
Sensors (Basel) ; 24(11)2024 May 29.
Article in English | MEDLINE | ID: mdl-38894303

ABSTRACT

The most critical aspect of panorama generation is maintaining local semantic consistency. Objects in the captured images may be projected from different depths, so when the images are warped to a unified canvas, pixels at the semantic boundaries of the different views can be significantly misaligned. We propose two lightweight strategies to address this challenge efficiently. First, the original image is segmented into superpixels rather than regular grids to preserve the structure of each cell, and effective cost functions are proposed to generate the warp matrix for each superpixel. The warp matrix varies progressively to give a smooth projection, which contributes to a more faithful reconstruction of object structures. Second, to deal with artifacts introduced by stitching, we use a seam-line method tailored to superpixels. The algorithm takes into account the feature similarity of neighboring superpixels, including color difference, structure, and entropy, and also considers semantic information to avoid semantic misalignment. The optimal solution constrained by the cost functions is obtained under a graph model, and the resulting stitched images exhibit improved naturalness. The algorithm is tested extensively on common panorama stitching datasets. Experimental results show that it effectively mitigates artifacts, preserves semantic completeness, and produces panoramic images whose subjective quality is superior to that of alternative methods.
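
The superpixel-based warping idea can be illustrated with a short, hedged sketch (not the paper's code): segment the source image with SLIC and group matched feature points by the superpixel they fall in, so a local warp can later be fitted per superpixel. The function name, n_segments, and the (x, y) point format are assumptions.

```python
# Hedged sketch: SLIC superpixels + per-superpixel grouping of matched points.
import numpy as np
from skimage.segmentation import slic

def group_matches_by_superpixel(image_rgb, src_pts, n_segments=400):
    """image_rgb: HxWx3 array; src_pts: Nx2 array of (x, y) source coordinates."""
    labels = slic(image_rgb, n_segments=n_segments, compactness=10, start_label=0)
    groups = {}
    for i, (x, y) in enumerate(src_pts):
        sp = labels[int(round(y)), int(round(x))]
        groups.setdefault(sp, []).append(i)
    # Each group's points can then drive a local warp for that superpixel.
    return labels, groups
```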

3.
Sensors (Basel) ; 24(12)2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38931562

ABSTRACT

Efficient image stitching plays a vital role in the Non-Destructive Evaluation (NDE) of infrastructures. An essential challenge in the NDE of infrastructures is precisely visualizing defects within large structures. The existing literature predominantly relies on high-resolution close-distance images to detect surface or subsurface defects. While the automatic detection of all defect types represents a significant advancement, understanding the location and continuity of defects is imperative. It is worth noting that some defects may be too small to capture from a considerable distance. Consequently, multiple image sequences are captured and processed using image stitching techniques. Additionally, visible and infrared data fusion strategies prove essential for acquiring comprehensive information to detect defects across vast structures. Hence, there is a need for an effective image stitching method appropriate for infrared and visible images of structures and industrial assets, facilitating enhanced visualization and automated inspection for structural maintenance. This paper proposes an advanced image stitching method appropriate for dual-sensor inspections. The proposed image stitching technique employs self-supervised feature detection to enhance the quality and quantity of feature detection. Subsequently, a graph neural network is employed for robust feature matching. Ultimately, the proposed method results in image stitching that effectively eliminates perspective distortion in both infrared and visible images, a prerequisite for subsequent multi-modal fusion strategies. Our results substantially enhance the visualization capabilities for infrastructure inspection. Comparative analysis with popular state-of-the-art methods confirms the effectiveness of the proposed approach.

4.
Sci Rep ; 14(1): 13304, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38858367

ABSTRACT

The limited field of view of high-resolution microscopic images hinders the study of biological samples in a single shot. Stitching of microscope images (tiles) captured by the whole-slide imaging (WSI) technique solves this problem. However, stitching is challenging because of the repetitive textures of tissues, the non-informative background parts of the slide, and the large number of tiles, which affect both performance and computational time. To address these challenges, we propose the Fast and Robust Microscopic Image Stitching (FRMIS) algorithm, which relies on pairwise and global alignment. Speeded-Up Robust Features (SURF) are extracted and matched within a small part of the overlapping region to compute the transformation and align two neighboring tiles. In cases where the transformation cannot be computed because too few features are matched, features are extracted from the entire overlapping region. This improves efficiency, since most of the computational load lies in pairwise registration, and reduces the misalignment that can arise from matching duplicated features in tiles with repetitive textures. Global alignment is then achieved by constructing a weighted graph in which the weight of each edge is the normalized inverse of the number of matched features between two tiles. FRMIS has been evaluated on experimental and synthetic datasets from different modalities with varying numbers of tiles and overlaps, demonstrating faster stitching than existing algorithms such as the Microscopy Image Stitching Tool (MIST) toolbox: in stitching time, FRMIS outperforms MIST by 481% for bright-field, 259% for phase-contrast, and 282% for fluorescence modalities, while also being robust to uneven illumination.
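
A hedged sketch of overlap-restricted pairwise registration in this spirit is shown below. ORB stands in for SURF (which needs OpenCV's non-free contrib build), the tiles are assumed to be 8-bit grayscale with tile_b to the right of tile_a, and the strip fractions and thresholds are illustrative.

```python
# Sketch: register a neighboring tile pair using features from a small part
# of the overlap first, falling back to the whole overlap if needed.
import cv2
import numpy as np

def register_pair(tile_a, tile_b, overlap_px, min_matches=10):
    orb = cv2.ORB_create(2000)

    def match_in_strips(strip_frac):
        w = int(overlap_px * strip_frac)
        roi_a = tile_a[:, tile_a.shape[1] - w:]   # right edge of A
        roi_b = tile_b[:, :w]                      # left edge of B
        ka, da = orb.detectAndCompute(roi_a, None)
        kb, db = orb.detectAndCompute(roi_b, None)
        if da is None or db is None:
            return None
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
        if len(matches) < min_matches:
            return None
        src = np.float32([ka[m.queryIdx].pt for m in matches])
        dst = np.float32([kb[m.trainIdx].pt for m in matches])
        src[:, 0] += tile_a.shape[1] - w           # back to full-tile coords
        # Transform maps tile_b coordinates into tile_a coordinates;
        # RANSAC rejects bad matches.
        M, _ = cv2.estimateAffinePartial2D(dst, src, method=cv2.RANSAC)
        return M

    M = match_in_strips(0.5)          # small central part of the overlap
    if M is None:
        M = match_in_strips(1.0)      # fall back to the entire overlap
    return M
```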

5.
Comput Biol Med ; 178: 108456, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38909449

ABSTRACT

Large-scale electron microscopy (EM) has enabled the reconstruction of brain connectomes at the synaptic level by serially scanning massive areas of sample sections. The resulting big EM datasets raise the great challenge of image mosaicking at high accuracy. Current practice simply follows conventional algorithms designed for natural images, which usually consist of only a few tiles, and relies on a single type of keypoint feature, sacrificing speed for stronger performance. Even so, when stitching hundreds of thousands of tiles of large EM data, errors remain inevitable and diverse. Moreover, there has not yet been an appropriate metric to quantitatively evaluate the stitching of biomedical EM images. Here we propose a two-stage error detection method to improve EM image mosaicking. It first uses point-based error detection in combination with a hybrid feature framework to speed up the stitching computation while maintaining high accuracy. The second stage then detects unresolved errors with a newly designed metric for EM stitched image quality assessment (EMSIQA). The detection-based mosaicking pipeline is tested on large EM datasets and proven to be more effective than, and as accurate as, existing methods.

6.
J Imaging ; 10(5)2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38786559

ABSTRACT

This study quantified the value of color spaces and channels as potential replacements for standard grayscale images, as well as the relative performance of open-source detectors and descriptors for general feature-based image registration, based on a large benchmark dataset. The public dataset UDIS-D, with 1106 diverse image pairs, was selected. In total, 21 color spaces or channels, including RGB, XYZ, Y'CrCb, HLS, L*a*b* and their corresponding channels in addition to grayscale; nine feature detectors, including AKAZE, BRISK, CSE, FAST, HL, KAZE, ORB, SIFT, and TBMR; and 11 feature descriptors, including AKAZE, BB, BRIEF, BRISK, DAISY, FREAK, KAZE, LATCH, ORB, SIFT, and VGG, were evaluated according to reprojection error (RE), root mean square error (RMSE), structural similarity index measure (SSIM), registration failure rate, and feature number, based on 1,950,984 image registrations. No meaningful benefit from color space or channel was observed, although the XYZ and RGB color spaces and the L* channel outperformed grayscale by a very small margin. On this dataset, the best-performing color space or channel, detector, and descriptor were XYZ/RGB, SIFT/FAST, and AKAZE; the most robust were L*a*b*, TBMR, and VGG; and the color channel, detector, and descriptor yielding the most initial detector features and final homography features were Z/L*, FAST, and KAZE. In terms of the best overall unfailing combinations, XYZ/RGB+SIFT/FAST+VGG/SIFT provided the highest image registration quality, while Z+FAST+VGG provided the most image features.
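
The kind of per-detector measurement behind such a benchmark can be sketched as follows (an illustrative snippet, not the study's evaluation code): feature count and detection time for a handful of OpenCV detectors on one grayscale image; the color-space conversions and the full metric suite are omitted.

```python
# Sketch: profile several OpenCV feature detectors on a single image.
import time
import cv2

def profile_detectors(img_gray):
    detectors = {
        "AKAZE": cv2.AKAZE_create(),
        "BRISK": cv2.BRISK_create(),
        "FAST": cv2.FastFeatureDetector_create(),
        "ORB": cv2.ORB_create(),
        "SIFT": cv2.SIFT_create(),
        "KAZE": cv2.KAZE_create(),
    }
    results = {}
    for name, det in detectors.items():
        t0 = time.perf_counter()
        kps = det.detect(img_gray, None)
        results[name] = (len(kps), time.perf_counter() - t0)
    return results  # {detector: (feature count, seconds)}
```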

7.
J Appl Clin Med Phys ; : e14373, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38696704

ABSTRACT

PURPOSE: The lateral response artifact (LRA) is caused by the interaction between film and flatbed scanner in the direction perpendicular to the scanning direction. It can significantly affect the accuracy of patient-specific quality assurance (QA) in cases involving large irradiation fields. We hypothesized that the LRA could be mitigated effectively by using the central area of the flatbed scanner, where its magnitude is relatively small. This study proposes a practical solution that uses an image-stitching technique to correct the LRA for patient-specific QA involving large irradiation fields. METHODS: Gafchromic™ EBT4 film and an Epson Expression ES-G11000 flatbed scanner were used. The image-stitching algorithm requires a common spot between adjacent images to combine them. The film was scanned at three locations on the flatbed scanner, and these images were combined using the image-stitching technique. The combined film dose was then calculated and compared with the treatment planning system (TPS)-calculated dose using gamma analysis (3%/2 mm). The proposed LRA correction was applied to several films exposed to 18 × 18 cm2 open fields at doses of 200, 400, and 600 cGy, as well as to four clinical volumetric modulated arc therapy (VMAT) treatment plans involving large fields. RESULTS: For doses of 200, 400, and 600 cGy, the gamma pass rates with and without LRA correction were 95.7% versus 67.8%, 95.5% versus 66.2%, and 91.8% versus 35.9%, respectively. For the clinical VMAT treatment plans, the average pass rate ± standard deviation was 94.1% ± 0.4% with LRA correction and 72.5% ± 1.5% without. CONCLUSIONS: The proposed LRA correction using the image-stitching technique significantly improves the accuracy of patient-specific QA for VMAT treatment plans involving large irradiation fields.
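
A heavily hedged sketch of the scan-combination idea appears below; it is not the paper's algorithm. It assumes two float32 grayscale scans sharing a known number of overlapping columns, estimates the residual shift in the shared region with phase correlation, and concatenates the scans at the middle of the overlap. The sign convention of cv2.phaseCorrelate and the crop geometry would need checking against real data.

```python
# Sketch: combine two partial film scans using their shared overlap.
import cv2
import numpy as np

def stitch_scans(scan_a, scan_b, overlap_px):
    """scan_a, scan_b: float32 grayscale scans sharing `overlap_px` columns."""
    a_ov = scan_a[:, -overlap_px:].astype(np.float32)
    b_ov = scan_b[:, :overlap_px].astype(np.float32)
    # Sub-pixel residual shift between the shared (spot) regions.
    (dx, dy), _ = cv2.phaseCorrelate(a_ov, b_ov)
    # NOTE: the sign of (dx, dy) depends on phaseCorrelate's convention;
    # flip it if the aligned scan drifts the wrong way on real data.
    shift = np.float32([[1, 0, -dx], [0, 1, -dy]])
    b_aligned = cv2.warpAffine(scan_b, shift,
                               (scan_b.shape[1], scan_b.shape[0]))
    mid = overlap_px // 2
    # Keep scan_a up to the middle of the overlap, then the aligned scan_b.
    return np.hstack([scan_a[:, :scan_a.shape[1] - mid], b_aligned[:, mid:]])
```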

8.
Sci Rep ; 14(1): 9215, 2024 04 22.
Article in English | MEDLINE | ID: mdl-38649426

ABSTRACT

Stitching of microscopic images is a technique used to combine multiple overlapping images (tiles) of biological samples, acquired with a limited field of view at high resolution, into a whole-slide image. Image stitching involves two main steps: pairwise registration and global alignment. Most of the computational load, and the accuracy of the stitching algorithm, depend on the pairwise registration method, so choosing an efficient, accurate, robust, and fast pairwise registration method is crucial in whole-slide imaging. This paper presents a detailed comparative analysis of different pairwise registration techniques in terms of execution time and quality. These techniques include feature-based methods such as Harris, Shi-Tomasi, FAST, ORB, BRISK, SURF, SIFT, KAZE, MSER, and deep-learning-based SuperPoint features, as well as region-based methods built on normalized cross-correlation (NCC) and on the combination of phase correlation and NCC. The investigation covers microscopy images from different modalities, including bright-field, phase-contrast, and fluorescence. The feature-based methods were highly robust to uneven illumination in tiles, and some were more accurate and faster than the region-based methods, with SURF identified as the most effective technique. This study provides valuable insights into selecting the most efficient and accurate pairwise registration method for creating whole-slide images, which is essential for the advancement of computational pathology and biology.
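
One of the region-based approaches compared above, registration by normalized cross-correlation, can be sketched in a few lines (illustrative only; the patch location, patch size, and the assumption that tile_b overlaps tile_a's right side are placeholders).

```python
# Sketch: NCC-based pairwise registration of two overlapping tiles.
import cv2
import numpy as np

def ncc_translation(tile_a, tile_b, patch_size=128):
    """tile_a, tile_b: 8-bit grayscale tiles; tile_b overlaps tile_a's right side."""
    # Template taken from the top-left corner of tile_b (inside the overlap).
    template = tile_b[:patch_size, :patch_size]
    response = cv2.matchTemplate(tile_a, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(response)
    # max_loc gives where tile_b's origin falls inside tile_a; max_val is the
    # NCC score, useful for rejecting unreliable (e.g. background-only) pairs.
    return max_loc, max_val
```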


Subjects
Algorithms, Computer-Assisted Image Processing, Microscopy, Computer-Assisted Image Processing/methods, Microscopy/methods, Humans, Deep Learning
9.
Math Biosci Eng ; 21(1): 494-522, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38303432

ABSTRACT

To address the challenges of repetitive and low-texture features in intraoral endoscopic images, a novel methodology for stitching panoramic half-jaw images of the oral cavity is proposed. Initially, an enhanced self-attention mechanism guided by Time-Weighting concepts is employed to increase the clustering potential of feature points, thereby increasing the number of matched features. Subsequently, a combination of the Sinkhorn algorithm and Random Sample Consensus (RANSAC) is used to maximize the number of matched feature pairs, accurately remove outliers, and minimize error. Finally, to handle the particular spatial arrangement of intraoral endoscopic images, a wavelet-transform and weighted-fusion algorithm based on the dental arch arrangement is developed specifically for the fusion stage. This allows the local oral images to be positioned precisely along the dental arch, and seamless stitching is achieved through wavelet transformation and gradual weighted fusion. Experimental results show that this method yields promising outcomes in panoramic stitching of intraoral endoscopic images, achieving a matching accuracy of 84.6% and a recall rate of 78.4% on a dataset with an average overlap of 35%. The method thus provides a novel solution for panoramic stitching of intraoral endoscopic images.
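
The Sinkhorn step can be illustrated with a small, self-contained sketch (the similarity input, eps, the iteration count, and the mutual-best selection are assumptions; RANSAC-based outlier removal would follow as in the paper).

```python
# Sketch: Sinkhorn balancing of a feature-similarity matrix before matching.
import numpy as np

def sinkhorn(scores, n_iters=50, eps=0.1):
    """scores: MxN similarity matrix (roughly in [0, 1]); returns a
    doubly-normalized assignment matrix and mutual-best candidate matches."""
    K = np.exp(scores / eps)
    u = np.ones(K.shape[0])
    v = np.ones(K.shape[1])
    for _ in range(n_iters):
        u = 1.0 / (K @ v)      # rescale rows to sum to ~1
        v = 1.0 / (K.T @ u)    # rescale columns to sum to ~1
    P = np.diag(u) @ K @ np.diag(v)
    # Keep mutual best assignments as candidate matches.
    row_best = P.argmax(axis=1)
    mutual = [(i, j) for i, j in enumerate(row_best) if P[:, j].argmax() == i]
    return P, mutual
```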


Subjects
Dental Arch, Endoscopy, Algorithms, Research Design
10.
Entropy (Basel) ; 26(1)2024 Jan 10.
Article in English | MEDLINE | ID: mdl-38248186

ABSTRACT

Image stitching aims to synthesize a wider and more informative whole image and has been widely used in many fields. This study focuses on improving the accuracy of image mosaicking and proposes an image mosaic method based on local edge-contour matching constraints. Because the accuracy and quantity of feature matches directly influence the stitching result, an incorrect image warping model is often estimated when feature points are difficult to detect and matching errors occur easily. To address this issue, geometric invariance is used to expand the number of matched feature points and thus enrich the matching information. Based on Canny edge detection, significant local edge-contour features are constructed through operations such as structure separation and edge-contour merging to improve image registration. The method also introduces spatially varying warping to ensure local alignment of the overlapping area, keeps line structures in the image from bending through short- and long-line constraints, and eliminates distortion in the non-overlapping area with a global line-guided warping method. The proposed method is compared with other approaches in experiments on multiple datasets, and excellent stitching results are obtained.
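
A rough sketch of building local edge-contour features is shown below; Canny detection plus contour extraction and length filtering only approximate the paper's structure-separation and contour-merging operations, and the thresholds are assumptions.

```python
# Sketch: extract salient local edge contours as candidate matching features.
import cv2

def local_edge_contours(img_gray, low=50, high=150, min_len=80):
    edges = cv2.Canny(img_gray, low, high)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    # Keep only significant contours; short fragments carry little structure.
    significant = [c for c in contours if cv2.arcLength(c, False) >= min_len]
    # Sort by length so the most salient structures are matched first.
    significant.sort(key=lambda c: cv2.arcLength(c, False), reverse=True)
    return edges, significant
```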

11.
Sensors (Basel) ; 23(23)2023 Nov 28.
Article in English | MEDLINE | ID: mdl-38067854

ABSTRACT

Image stitching involves combining multiple images of the same scene, captured from different viewpoints, into a single image with an expanded field of view. While this technique has various applications in computer vision, traditional methods rely on successively stitching image pairs taken from multiple cameras. This approach is effective for organized camera arrays but can pose challenges for unstructured ones, especially when handling scene overlaps. This paper presents a deep-learning-based approach for stitching images from large unstructured camera sets covering complex scenes. Our method processes images concurrently by using the SandFall algorithm to transform data from multiple cameras into a reduced fixed array, thereby minimizing data loss. A customized convolutional neural network then processes these data to produce the final image. By stitching images simultaneously, our method avoids the cascading errors that can occur in sequential pairwise stitching while offering improved time efficiency. In addition, we describe an unsupervised training method for the network that uses metrics from generative adversarial networks, supplemented with supervised learning. Our tests show that the proposed approach runs in roughly 1/7th the time of many traditional methods on both CPU and GPU platforms, achieving results consistent with established methods.

12.
Math Biosci Eng ; 20(9): 17356-17383, 2023 09 08.
Article in English | MEDLINE | ID: mdl-37920058

ABSTRACT

To address the limitation that local oral-cavity images have a narrow field of view and cannot capture large-area targets at once, this paper presents a method for generating natural dental panoramas from oral endoscopic images that consists of two main stages: anti-perspective transformation feature extraction and coarse-to-fine global optimization matching. In the first stage, we increase the number of matched pairs and improve the robustness of the algorithm to viewpoint changes by normalizing the anti-affine transformation regions extracted from the Gaussian scale space and using log-polar coordinates to compute gradient histograms over octagonal regions, yielding a set of perspective-transformation-resistant feature points. In the second stage, we design a coarse-to-fine global optimization matching strategy: we first incorporate motion-smoothness constraints and improve the Fast Library for Approximate Nearest Neighbors (FLANN) algorithm by exploiting neighborhood information for coarse matching, then eliminate mismatches via homography-guided Random Sample Consensus (RANSAC), and finally refine the matching with the Levenberg-Marquardt (L-M) algorithm to reduce cumulative errors and achieve global optimization. Multi-band blending is used to eliminate ghosting due to misalignment and make the image transitions more natural. Experiments show that the visual quality of dental panoramas generated by the proposed method is significantly better than that of other methods, addressing the discontinuities caused by sparse keypoints, the ghosting caused by parallax, and the distortion caused by accumulated errors in multi-image stitching of oral endoscopic images.
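
The coarse matching stage can be sketched, with caveats, as FLANN (KD-tree) matching of SIFT descriptors with a ratio test followed by homography-guided RANSAC; the neighborhood/motion-smoothness filtering and the Levenberg-Marquardt refinement described above are not reproduced, and the ratio and RANSAC threshold are assumptions.

```python
# Sketch: FLANN matching + ratio test + homography-guided RANSAC.
import cv2
import numpy as np

def coarse_match(img1_gray, img2_gray, ratio=0.7):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1_gray, None)
    k2, d2 = sift.detectAndCompute(img2_gray, None)
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5),  # KD-tree index
                                  dict(checks=64))
    good = [m for m, n in flann.knnMatch(d1, d2, k=2)
            if m.distance < ratio * n.distance]
    if len(good) < 4:
        return None, []
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Homography-guided RANSAC removes the remaining mismatches.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 4.0)
    if H is None:
        return None, []
    return H, [g for g, keep in zip(good, inliers.ravel()) if keep]
```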


Subjects
Algorithms, Computer-Assisted Image Processing, Computer-Assisted Image Processing/methods, Diagnostic Imaging, Motion (Physics), Research Design
13.
Front Biosci (Landmark Ed) ; 28(10): 249, 2023 10 19.
Article in English | MEDLINE | ID: mdl-37919069

ABSTRACT

BACKGROUND: Owing to antibiotic abuse, bacterial resistance is becoming an increasingly serious problem, and rapid detection of bacterial resistance has become an urgent issue. Because bacteria with different activity metabolize heavy water differently under the action of antibiotics, antibiotic resistance can be identified from the presence of a C-D peak in the 2030-2400 cm-1 range of the Raman spectrum. METHODS: To ensure data veracity, a large number of bacteria need to be measured; however, owing to the limited field of view of the high-magnification objective, the number of single cells in a single field of view is very small. By combining an image stitching algorithm, an image recognition algorithm, and Raman spectrum processing with a peak-seeking algorithm, the proposed approach can identify and locate single cells across multiple fields of view at one time and determine whether they are antimicrobial-resistant bacteria. RESULTS: In experiments 1 and 2, 2706 bacteria in 9 × 11 fields of view and 2048 bacteria in 11 × 11 fields of view were detected. In experiment 1, there were 1137 antibiotic-resistant bacteria (42%) and 1569 sensitive bacteria (58%); in experiment 2, there were 1087 antibiotic-resistant bacteria (53%) and 961 sensitive bacteria (47%). The approach showed excellent speed and recognition accuracy compared with traditional manual detection, and it overcomes the low data accuracy, heavy manual workload, and low efficiency caused by the small number of single cells in a high-magnification field of view and by the different peak-seeking parameters required for different Raman spectra. CONCLUSIONS: The detection and analysis of bacterial Raman spectra based on image stitching enables unattended, automatic, rapid, and accurate detection of single cells at high magnification across multiple fields of view. With its automatic, high-throughput, rapid, and accurate identification, it can serve as an unattended, universal, and non-invasive means of measuring antibiotic-resistant bacteria to screen for effective antibiotics, which is of great importance for studying the persistence and spread of antibiotic resistance in bacterial pathogens.
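
The C-D band check itself reduces to peak detection in a fixed wavenumber window, as in the hedged sketch below (the prominence threshold and the assumption of a baseline-corrected spectrum are placeholders that would need calibration).

```python
# Sketch: test a single-cell Raman spectrum for a C-D peak (2030-2400 cm-1).
import numpy as np
from scipy.signal import find_peaks

def has_cd_peak(wavenumbers, intensities, lo=2030.0, hi=2400.0, prominence=0.05):
    """wavenumbers, intensities: 1-D arrays for one single-cell spectrum."""
    window = (wavenumbers >= lo) & (wavenumbers <= hi)
    segment = intensities[window]
    if segment.size == 0:
        return False
    # Normalize the window so the prominence threshold is scale-independent.
    segment = (segment - segment.min()) / (np.ptp(segment) + 1e-12)
    peaks, _ = find_peaks(segment, prominence=prominence)
    # A C-D peak indicates heavy-water metabolism under antibiotic treatment,
    # i.e. a resistant phenotype in this assay.
    return peaks.size > 0
```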


Subjects
Bacterial Infections, Raman Spectrum Analysis, Humans, Raman Spectrum Analysis/methods, Bacteria/metabolism, Bacterial Infections/microbiology, Anti-Bacterial Agents/pharmacology, Anti-Bacterial Agents/metabolism
14.
Sensors (Basel) ; 23(17)2023 Aug 29.
Article in English | MEDLINE | ID: mdl-37687944

ABSTRACT

As an important way of representing scenes in virtual and augmented reality, image stitching aims to generate a panoramic image with a natural field of view by stitching together multiple images captured by different visual sensors. Existing deep-learning-based methods for image stitching perform alignment with only a single deep homography, which may produce unavoidable alignment distortions. To address this issue, we propose a content-seam-preserving multi-alignment network (CSPM-Net) for visual-sensor-based image stitching, which preserves image content consistency and avoids seam distortions simultaneously. First, a content-preserving deep homography estimation is designed to pre-align the input image pairs and reduce content inconsistency. Second, an edge-assisted mesh warping further aligns the image pairs, with edge information introduced to eliminate seam artifacts. Finally, to predict the final stitched image accurately, a content consistency loss is designed to preserve the geometric structure of the overlapping regions between image pairs, and a seam smoothness loss is proposed to eliminate edge distortions at image boundaries. Experimental results demonstrate that the proposed method provides favorable stitching results for visual-sensor-based images and outperforms other state-of-the-art methods.

15.
Entropy (Basel) ; 25(9)2023 Aug 31.
Article in English | MEDLINE | ID: mdl-37761582

ABSTRACT

Image stitching technology aligns and fuses a series of images with common pixel areas, taken from different viewpoints of the same scene, to produce a wide-field-of-view panoramic image with natural structure. Night scenes are an important part of everyday environments, and nighttime image stitching has pressing practical significance in fields such as security monitoring and intelligent night driving. Under artificial light sources at night, image brightness is unevenly distributed and large dark regions appear, yet these dark regions often contain rich structural information. The structural features hidden in the darkness are difficult to extract, causing ghosting and misalignment during stitching and making it difficult to meet practical application requirements. Therefore, a nighttime image stitching method based on image decomposition and enhancement is proposed to address the insufficient extraction of line features when stitching nighttime images. The proposed algorithm enhances luminance on the structure layer, smooths nighttime image noise with a denoising algorithm on the texture layer, and finally complements the texture of the fused image with an edge enhancement algorithm. Experimental results show that, compared with other algorithms, the proposed algorithm improves image quality in terms of information entropy, contrast, and noise suppression. Moreover, it extracts the most line features from the processed nighttime images, which is more helpful for stitching them.
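
A hedged sketch of the decomposition-then-enhancement idea follows; the bilateral filter, CLAHE, and non-local-means settings are illustrative stand-ins for the paper's structure/texture decomposition, luminance enhancement, and denoising steps.

```python
# Sketch: structure/texture split, structure-layer brightening, recombination.
import cv2
import numpy as np

def enhance_night_image(img_bgr):
    """img_bgr: uint8 BGR night image."""
    # Texture-layer denoising approximated by denoising the full image first.
    denoised = cv2.fastNlMeansDenoisingColored(img_bgr, None, 3, 3, 7, 21)
    # Structure layer: edge-preserving smoothing of the denoised image.
    structure = cv2.bilateralFilter(denoised, d=9, sigmaColor=75, sigmaSpace=75)
    # Texture layer: signed residual detail.
    texture = denoised.astype(np.int16) - structure.astype(np.int16)
    # Luminance enhancement of the structure layer (CLAHE on the L channel).
    lab = cv2.cvtColor(structure, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    structure = cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_LAB2BGR)
    # Recombine enhanced structure with the preserved texture detail.
    out = np.clip(structure.astype(np.int16) + texture, 0, 255)
    return out.astype(np.uint8)
```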

16.
Sensors (Basel) ; 23(16)2023 Aug 21.
Article in English | MEDLINE | ID: mdl-37631838

ABSTRACT

Autonomous robots heavily rely on simultaneous localization and mapping (SLAM) techniques and sensor data to create accurate maps of their surroundings. When multiple robots are employed to expedite exploration, the resulting maps often have varying coordinates and scales. To achieve a comprehensive global view, the utilization of map merging techniques becomes necessary. Previous studies have typically depended on extracting image features from maps to establish connections. However, it is important to note that maps of the same location can exhibit inconsistencies due to sensing errors. Additionally, robot-generated maps are commonly represented in an occupancy grid format, which limits the availability of features for extraction and matching. Therefore, feature extraction and matching play crucial roles in map merging, particularly when dealing with uncertain sensing data. In this study, we introduce a novel method that addresses image noise resulting from sensing errors and applies additional corrections before performing feature extraction. This approach allows for the collection of features from corresponding locations in different maps, facilitating the establishment of connections between different coordinate systems and enabling effective map merging. Evaluation results demonstrate the significant reduction of sensing errors during the image stitching process, thanks to the proposed image pre-processing technique.

17.
Sensors (Basel) ; 23(10)2023 May 09.
Article in English | MEDLINE | ID: mdl-37430497

ABSTRACT

Image stitching is of great importance for fields such as moving-object detection and tracking, ground reconnaissance, and augmented reality. To improve the stitching result and reduce the mismatch rate, an effective image stitching algorithm based on color difference and an improved KAZE with a fast guided filter is proposed. First, the fast guided filter is introduced to reduce the mismatch rate before feature matching. Second, the KAZE algorithm with improved random sample consensus is used for feature matching. The color and brightness differences of the overlapping area are then calculated to adjust the original images globally and reduce the nonuniformity of the stitching result. Finally, the warped images with color-difference compensation are fused to obtain the stitched image. The proposed method is evaluated both visually and quantitatively, and compared with other popular stitching algorithms. The results show that the proposed algorithm is superior to the others in terms of the number of feature-point pairs, matching accuracy, root mean square error, and mean absolute error.
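
The pre-filter plus KAZE detection stage can be sketched as below (requires the opencv-contrib-python build for cv2.ximgproc; the radius and eps values are illustrative, and a dedicated fast guided filter implementation may differ in detail).

```python
# Sketch: guided-filter preprocessing followed by KAZE feature extraction.
import cv2

def kaze_features_with_guided_filter(img_bgr, radius=8, eps=0.02 * 255 ** 2):
    # Guided filtering (image used as its own guide) suppresses noise while
    # keeping edges, which reduces spurious matches later on.
    filtered = cv2.ximgproc.guidedFilter(img_bgr, img_bgr, radius, eps)
    gray = cv2.cvtColor(filtered, cv2.COLOR_BGR2GRAY)
    kaze = cv2.KAZE_create()
    keypoints, descriptors = kaze.detectAndCompute(gray, None)
    return keypoints, descriptors
```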

18.
Sensors (Basel) ; 23(12)2023 Jun 06.
Article in English | MEDLINE | ID: mdl-37420530

ABSTRACT

We propose an algorithm for generating a panoramic image of a pipe's inner surface based on inverse perspective mapping (IPM). The objective of this study is to generate a panoramic image of the entire inner surface of a pipe for efficient crack detection, without relying on high-performance capturing equipment. Frontal images taken while passing through the pipe were converted to images of the inner surface of the pipe using IPM. We derived a generalized IPM formula that considers the slope of the image plane to correct the image distortion caused by the tilt of the plane; this IPM formula was derived based on the vanishing point of the perspective image, which was detected using optical flow techniques. Finally, the multiple transformed images with overlapping areas were combined via image stitching to create a panoramic image of the inner pipe surface. To validate our proposed algorithm, we restored images of pipe inner surfaces using a 3D pipe model and used these images for crack detection. The resulting panoramic image of the internal pipe surface accurately demonstrated the positions and shapes of cracks, highlighting its potential utility for crack detection using visual inspection or image-processing techniques.
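
A simplified, hedged illustration of the rectification step is given below: a quadrilateral patch of the tilted wall surface seen in a frontal frame is flattened with a homography. The real method instead derives the mapping analytically from the vanishing point and the plane's slope; the corner ordering and output size here are assumptions.

```python
# Sketch: rectify a wall patch from a frontal frame before stitching.
import cv2
import numpy as np

def rectify_wall_patch(frame, quad_pts, out_w=400, out_h=200):
    """quad_pts: 4x2 float32 array of the patch corners in the frame,
    ordered top-left, top-right, bottom-right, bottom-left."""
    dst = np.float32([[0, 0], [out_w - 1, 0],
                      [out_w - 1, out_h - 1], [0, out_h - 1]])
    H = cv2.getPerspectiveTransform(np.float32(quad_pts), dst)
    # The rectified strips from successive frames can then be stitched into
    # a panoramic view of the inner surface.
    return cv2.warpPerspective(frame, H, (out_w, out_h))
```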


Subjects
Algorithms, Computer-Assisted Image Processing, Computer-Assisted Image Processing/methods
19.
Micromachines (Basel) ; 14(4)2023 Mar 30.
Article in English | MEDLINE | ID: mdl-37421012

ABSTRACT

In order to improve the positioning accuracy of the micromanipulation system, a comprehensive error model is first established that takes into account the microscope's nonlinear imaging distortion, the camera installation error, and the mechanical displacement error of the motorized stage. A novel error compensation method is then proposed, with distortion compensation coefficients obtained by the Levenberg-Marquardt optimization algorithm combined with the derived nonlinear imaging model. The compensation coefficients for the camera installation error and mechanical displacement error are derived from a rigid-body translation technique and an image stitching algorithm. To validate the error compensation model, single-shot and cumulative error tests were designed. The experimental results show that, after error compensation, the displacement errors were kept within 0.25 µm when moving in a single direction and within 0.02 µm per 1000 µm when moving in multiple directions.
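
The Levenberg-Marquardt fitting of distortion-compensation coefficients can be sketched as follows (a two-coefficient radial polynomial is assumed for illustration and is not the paper's exact error model; observed_xy, ideal_xy, and center_xy are hypothetical inputs).

```python
# Sketch: fit radial-distortion coefficients with Levenberg-Marquardt.
import numpy as np
from scipy.optimize import least_squares

def fit_radial_distortion(observed_xy, ideal_xy, center_xy):
    """observed_xy, ideal_xy: Nx2 arrays (pixels); center_xy: distortion center."""
    def residuals(k):
        d = ideal_xy - center_xy
        r2 = np.sum(d ** 2, axis=1, keepdims=True)
        # Assumed model: p_obs = c + (p_ideal - c) * (1 + k1*r^2 + k2*r^4).
        predicted = center_xy + d * (1.0 + k[0] * r2 + k[1] * r2 ** 2)
        return (predicted - observed_xy).ravel()

    result = least_squares(residuals, x0=[0.0, 0.0], method="lm")
    return result.x  # [k1, k2]
```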

20.
Microscopy (Oxf) ; 72(5): 425-432, 2023 Oct 09.
Article in English | MEDLINE | ID: mdl-36786473

ABSTRACT

We have developed a method to quantitatively measure image distortion, one of the five Seidel aberrations, in transmission electron microscopes without using a standard sample of known structure. Displacements of small local segments in an image, caused by the image distortion of the intermediate and projection lens system, are first measured by comparing images taken before and after a given shift at the first image plane of the objective lens. The sum of the second partial derivatives, i.e., the Laplacian, of the displacement field is then computed, and the radial and azimuthal distortion parameters are determined from the measured results. Using numerically distorted images, we confirmed that the proposed method can measure image distortion with a relative error ratio within 0.04 over a wide range of distortion amounts from 0.1% to 5.0%. The distortion measurement and correction were confirmed to work correctly on experimental images, and the iterative measurement-and-correction procedure reduced the distortion to a level at which the average image displacement was < 0.05 pixels.
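
The core measurement, the Laplacian of the displacement field, can be sketched with plain finite differences (the arrays u and v hold the measured segment displacements; the subsequent derivation of the radial and azimuthal parameters is not reproduced).

```python
# Sketch: Laplacian of a dense displacement field via finite differences.
import numpy as np

def displacement_laplacian(u, v):
    """u, v: 2-D arrays of x- and y-displacements of local image segments."""
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    lap_u = np.gradient(du_dx, axis=1) + np.gradient(du_dy, axis=0)
    lap_v = np.gradient(dv_dx, axis=1) + np.gradient(dv_dy, axis=0)
    return lap_u, lap_v  # the Laplacians of u and v over the field
```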
