Results 1 - 11 of 11
1.
Article in English | MEDLINE | ID: mdl-38959150

ABSTRACT

Despite acceleration in the use of 3D meshes, it is difficult to find effective mesh quality assessment algorithms that can produce predictions highly correlated with human subjective opinions. Defining mesh quality features is challenging due to the irregular topology of meshes, which are defined on vertices and triangles. To address this, we propose a novel 3D projective structural similarity index (3D-PSSIM) for meshes that is robust to differences in mesh topology. We address topological differences between meshes by introducing multi-view and multi-layer projections that can densely represent the mesh textures and geometrical shapes irrespective of mesh topology, and that also address occlusion problems that occur during projection. We propose visual sensitivity weights that capture the perceptual sensitivity to the degree of mesh surface curvature. 3D-PSSIM computes perceptual quality predictions by aggregating quality-aware features that are computed in multiple projective spaces onto the mesh domain, rather than on 2D spaces. This allows 3D-PSSIM to determine which parts of a mesh surface are distorted by geometric or color impairments. Experimental results show that 3D-PSSIM can predict mesh quality with high correlation against human subjective judgments, even in the presence of noise and large topological differences, outperforming existing mesh quality assessment models.
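The abstract builds on SSIM-style statistics computed over projected views. As a rough illustration only (not the authors' 3D-PSSIM, whose projections, weights, and aggregation are more elaborate), a single-window structural similarity score between two projected grayscale views can be computed and averaged across views; the pixel values and constants below are illustrative assumptions.

```python
def ssim_global(x, y, c1=6.5025, c2=58.5225):
    """Single-window SSIM between two equal-length grayscale pixel lists.

    A one-window simplification of the structural similarity index; the
    stabilizing constants c1, c2 are the conventional values for 8-bit data.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def multiview_score(views_a, views_b):
    """Average an image-space quality score over multiple projected views."""
    scores = [ssim_global(a, b) for a, b in zip(views_a, views_b)]
    return sum(scores) / len(scores)
```

Identical projections score exactly 1.0, and any distortion lowers the score, which is the behavior a projection-based mesh quality index relies on.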

2.
Sensors (Basel) ; 24(10)2024 May 10.
Article in English | MEDLINE | ID: mdl-38793878

ABSTRACT

Many countries use low-cost sensors for high-resolution monitoring of particulate matter (PM2.5 and PM10) to manage public health. To enhance the accuracy of low-cost sensors, studies have been conducted to calibrate them considering environmental variables. Previous studies have considered various variables to calibrate seasonal variations in the PM concentration but have limitations in properly accounting for seasonal variability. This study considered the meridian altitude to account for seasonal variations in the PM concentration. In the PM10 calibration, we considered the calibrated PM2.5 as a subset of PM10. To validate the proposed methodology, we used the feedforward neural network, support vector machine, generalized additive model, and stepwise linear regression algorithms to analyze the results for different combinations of input variables. The inclusion of the meridian altitude enhanced the accuracy and explanatory power of the calibration model. For PM2.5, the combination of relative humidity, temperature, and meridian altitude yielded the best performance, with an average R2 of 0.93 and root mean square error of 5.6 µg/m3. For PM10, the average mean absolute percentage error decreased from 27.41% to 18.55% when considering the meridian altitude and further decreased to 15.35% when calibrated PM2.5 was added.
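The meridian (solar-noon) altitude used above as a seasonal covariate can be derived from only the day of year and the site latitude. A minimal sketch using Cooper's declination approximation; the latitude in the usage note is an assumed example, not a value from the paper.

```python
import math

def solar_declination(day_of_year):
    """Approximate solar declination in degrees (Cooper's formula)."""
    return 23.45 * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year)))

def meridian_altitude(latitude_deg, day_of_year):
    """Sun's altitude above the horizon at solar noon, in degrees."""
    return 90.0 - abs(latitude_deg - solar_declination(day_of_year))
```

At an assumed latitude of 37.5°, the noon altitude swings from roughly 29° at the winter solstice to roughly 76° at the summer solstice, giving a calibration model a smooth, repeatable seasonal signal.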

3.
Front Plant Sci ; 15: 1320969, 2024.
Article in English | MEDLINE | ID: mdl-38410726

ABSTRACT

Machine learning (ML) techniques offer a promising avenue for improving the integration of remote sensing data into mathematical crop models, thereby enhancing crop growth prediction accuracy. A critical variable for this integration is the leaf area index (LAI), which can be accurately assessed using proximal or remote sensing data based on plant canopies. This study aimed to (1) develop a machine learning-based method for estimating the LAI in rice and soybean crops using proximal sensing data and (2) evaluate the performance of a Remote Sensing-Integrated Crop Model (RSCM) when integrated with the ML algorithms. To achieve these objectives, we analyzed rice and soybean datasets to identify the most effective ML algorithms for modeling the relationship between LAI and vegetation indices derived from canopy reflectance measurements. Our analyses employed a variety of ML regression models, including ridge, lasso, support vector machine, random forest, and extra trees. Among these, the extra trees regression model demonstrated the best performance, achieving test scores of 0.86 and 0.89 for rice and soybean crops, respectively. This model closely replicated observed LAI values under different nitrogen treatments, achieving Nash-Sutcliffe efficiencies of 0.93 for rice and 0.97 for soybean. Our findings show that incorporating ML techniques into RSCM effectively captures seasonal LAI variations across diverse field management practices, offering significant potential for improving crop growth and productivity monitoring.
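Two quantities in the abstract are easy to make concrete: a canopy vegetation index serving as a regression input, and the Nash-Sutcliffe efficiency used to score the LAI predictions. A sketch under the assumption that NDVI is among the indices derived from canopy reflectance (the abstract does not name the specific indices):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from canopy reflectance."""
    return (nir - red) / (nir + red)

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1.0 is a perfect match; values at or
    below 0 mean the model is no better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    residual = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    variance = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - residual / variance
```

The reported efficiencies of 0.93 (rice) and 0.97 (soybean) sit close to the perfect score of 1.0 on this scale.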

4.
Sensors (Basel) ; 23(9)2023 Apr 30.
Article in English | MEDLINE | ID: mdl-37177636

ABSTRACT

Object detection is a fundamental task in computer vision. Over the past several years, convolutional neural network (CNN)-based object detection models have significantly improved detection accuracy in terms of average precision (AP). Furthermore, feature pyramid networks (FPNs) are essential modules for object detection models to handle various object scales. However, the AP for small objects is lower than the AP for medium and large objects. Small objects are difficult to recognize because they carry little information, and that information is lost in deeper CNN layers. This paper proposes a new FPN model named ssFPN (scale sequence (S2) feature-based feature pyramid network) to detect multi-scale objects, especially small objects. Motivated by scale-space theory, we regard the FPN as a scale space and extract a new scale sequence (S2) feature by three-dimensional convolution along the level axis of the FPN, strengthening the information on small objects. The defined feature is basically scale-invariant and is built on a high-resolution pyramid feature map for small objects. Additionally, the designed S2 feature can be extended to most FPN-based object detection models. We also designed a feature-level super-resolution approach to show the efficiency of the scale sequence (S2) feature, verifying that it could improve classification accuracy for low-resolution images by training a feature-level super-resolution model. To demonstrate the effect of the scale sequence (S2) feature, experiments with the scale sequence (S2) feature built into object detection models, including both one-stage and two-stage models, were conducted on the MS COCO dataset.
For the two-stage object detection models Faster R-CNN and Mask R-CNN with the S2 feature, AP improvements of up to 1.6% and 1.4%, respectively, were achieved. Additionally, the APS of each model was improved by 1.2% and 1.1%, respectively. Furthermore, the one-stage object detection models in the YOLO series were improved. For YOLOv4-P5, YOLOv4-P6, YOLOR-P6, YOLOR-W6, and YOLOR-D6 with the S2 feature, 0.9%, 0.5%, 0.5%, 0.1%, and 0.1% AP improvements were observed. For small object detection, the APS increased by 1.1%, 1.1%, 0.9%, 0.4%, and 0.1%, respectively. Experiments using the feature-level super-resolution approach with the proposed scale sequence (S2) feature were conducted on the CIFAR-100 dataset. By training the feature-level super-resolution model, we verified that ResNet-101 with the S2 feature trained on LR images achieved a 55.2% classification accuracy, which was 1.6% higher than for ResNet-101 trained on HR images.
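The AP and APS figures above are computed by matching predictions to ground truth via intersection-over-union (IoU), the thresholding criterion of the MS COCO evaluation protocol. A minimal sketch for axis-aligned boxes given as (x1, y1, x2, y2):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

COCO averages AP over IoU thresholds from 0.5 to 0.95, and APS restricts the evaluation to objects with area below 32x32 pixels, which is exactly the regime the S2 feature targets.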

5.
Article in English | MEDLINE | ID: mdl-36374893

ABSTRACT

Single-image 3-D reconstruction has long been a challenging problem. Recent deep learning approaches have been introduced to this 3-D area, but the ability to generate point clouds still remains limited due to inefficient and expensive 3-D representations, the dependency between the output and the number of model parameters, or the lack of a suitable computing operation. In this article, we present a novel deep-learning-based method to reconstruct a point cloud of an object from a single still image. The proposed method can be decomposed into two steps: feature fusion and deformation. The first step extracts both global and point-specific shape features from a 2-D object image, and then injects them into a randomly generated point cloud. In the second step, deformation, we introduce a new layer termed GraphX that considers the interrelationship between points like common graph convolutions but operates on unordered sets. The framework is applicable to realistic image data with backgrounds, as we optionally learn a mask branch to segment objects from input images. To improve the quality of the point clouds, we further propose an objective function to control point uniformity. In addition, we introduce different variants of GraphX that span from best performance to smallest memory budget. Moreover, the proposed model can generate an arbitrary-sized point cloud, making it the first deep method to do so. Extensive experiments demonstrate that our model outperforms existing models and sets a new state of the art across several performance metrics in single-image 3-D reconstruction.
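Point-cloud reconstructions of the kind described above are commonly scored with the Chamfer distance between the predicted and ground-truth point sets. The abstract does not list its exact metrics, so this is a generic sketch of the standard measure, using a brute-force O(|P|·|Q|) nearest-neighbor search:

```python
def chamfer_distance(p, q):
    """Symmetric Chamfer distance between two 3-D point sets.

    For each point, find the squared distance to its nearest neighbor in
    the other set, and average in both directions.
    """
    def sq(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    d_pq = sum(min(sq(a, b) for b in q) for a in p) / len(p)
    d_qp = sum(min(sq(b, a) for a in p) for b in q) / len(q)
    return d_pq + d_qp
```

Because the metric is defined over unordered sets, it pairs naturally with set-based layers such as GraphX, which likewise impose no ordering on the points.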

6.
Sensors (Basel) ; 22(11)2022 May 30.
Article in English | MEDLINE | ID: mdl-35684765

ABSTRACT

It is possible to construct cost-efficient three-dimensional (3D) or four-dimensional (4D) scanning systems using multiple affordable off-the-shelf RGB-D sensors to produce high-quality reconstructions of 3D objects. However, the quality of these systems' reconstructions is sensitive to a number of factors in the reconstruction pipeline, such as multi-view calibration, depth estimation, 3D reconstruction, and color mapping accuracy, because the successive stages used to reconstruct 3D meshes from multiple active stereo sensors are strongly correlated with each other. This paper categorizes the pipeline into sub-procedures and analyzes various factors that can significantly affect reconstruction quality, thereby providing analytical and practical guidelines for high-quality 3D reconstruction with off-the-shelf sensors. For each sub-procedure, this paper compares and evaluates several methods using data captured by 18 RGB-D sensors, and provides analyses and discussion towards robust 3D reconstruction. Through various experiments, it has been demonstrated that significantly more accurate 3D scans can be obtained when these factors are considered along the pipeline. We believe our analyses, benchmarks, and guidelines will help anyone build their own scanning studio and support further research on 3D reconstruction.
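Fusing depth data from 18 sensors hinges on the multi-view calibration step: each sensor's points must be mapped into a common world frame through its extrinsic rotation R and translation t. A minimal sketch of that mapping (the names and values are illustrative, not from the paper):

```python
def transform_point(R, t, p):
    """Map a 3-D point from a sensor's local frame into the world frame
    using a 3x3 row-major rotation matrix R and a translation vector t."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i]
                 for i in range(3))
```

Calibration error in R or t shifts every point a sensor contributes, which is why the paper treats multi-view calibration as one of the dominant factors in overall reconstruction quality.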


Subject(s)
Algorithms , Imaging, Three-Dimensional , Calibration , Imaging, Three-Dimensional/methods
7.
Sensors (Basel) ; 22(9)2022 Apr 26.
Article in English | MEDLINE | ID: mdl-35591022

ABSTRACT

Disparity and depth at corresponding pixels are inversely proportional. Thus, to estimate depth accurately from stereo vision, it is important to obtain accurate disparity maps, which encode the difference between the horizontal coordinates of corresponding image points. Stereo vision can be classified as either passive or active. Active stereo vision projects a pattern texture, which passive stereo vision does not have, onto the scene to fill in textureless regions. In passive stereo vision, many surveys have found that disparity accuracy is heavily reliant on attributes such as radiometric variation and color variation, and have identified the best-performing conditions. In active stereo matching, however, the accuracy of the disparity map is influenced not only by the factors affecting the passive stereo technique, but also by the attributes of the generated pattern textures. Therefore, in this paper, we analyze and evaluate the relationship between the performance of the active stereo technique and the attributes of pattern texture. The experiments are conducted under various settings, such as changing the pattern intensity, pattern contrast, number of pattern dots, and global gain, each of which may affect the overall performance of active stereo matching. Through this evaluation, our findings can serve as a useful reference for constructing an active stereo system.
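The inverse relationship in the first sentence is the standard triangulation formula for a rectified stereo pair, Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity in pixels. A minimal sketch (the parameter values in the test are illustrative):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth Z = f * B / d for a rectified stereo pair."""
    return focal_px * baseline_m / disparity_px
```

Because depth varies as 1/d, a fixed disparity error translates into a depth error that grows with distance, which is why matching accuracy in low-texture regions, and hence the projected pattern, matters so much.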


Subject(s)
Algorithms , Imaging, Three-Dimensional , Imaging, Three-Dimensional/methods , Vision, Ocular
8.
Sensors (Basel) ; 21(18)2021 Sep 18.
Article in English | MEDLINE | ID: mdl-34577483

ABSTRACT

When reconstructing a 3D object, it is difficult to obtain accurate 3D geometric information using a single camera. In order to capture detailed geometric information of a 3D object, it is inevitable to increase the number of cameras capturing the object. However, the cameras need to be synchronized in order to capture frames simultaneously; if they are incorrectly synchronized, many artifacts are produced in the reconstructed 3D object. The RealSense RGB-D camera, which is commonly used for obtaining geometric information of a 3D object, provides synchronization modes to mitigate synchronization errors. However, the synchronization modes provided by the RealSense cameras can only sync the depth cameras, and they limit the number of cameras that can be synchronized on a single host because of the hardware requirements of stable data transmission. Therefore, in this paper, we propose a novel synchronization method that synchronizes an arbitrary number of RealSense cameras by adjusting the number of hosts to support stable data transmission. Our method establishes a master-slave architecture to synchronize the system clocks of the hosts. While synchronizing the system clocks, the delays introduced by the synchronization process itself are estimated so that the difference between the system clocks can be minimized. With the system clocks synchronized, cameras connected to different hosts can be synchronized based on the timestamps of the data received by the hosts. Thus, our method synchronizes the RealSense cameras to simultaneously capture accurate 3D information of an object at a constant frame rate without dropped frames.
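The delay-aware clock alignment described above resembles the classic NTP exchange: from one request/response round trip, a slave host estimates its offset from the master while cancelling the symmetric part of the network delay. A generic sketch of that estimate, not the authors' exact procedure:

```python
def clock_offset_and_delay(t1, t2, t3, t4):
    """NTP-style estimate from a request/response exchange.

    t1: request sent (slave clock),  t2: request received (master clock),
    t3: response sent (master clock), t4: response received (slave clock).
    Returns (estimated clock offset, round-trip network delay).
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay
```

With per-host offsets estimated this way, timestamps from cameras attached to different hosts can be mapped onto a common timeline, which is the basis for matching frames across hosts.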

9.
IEEE Trans Image Process ; 27(12): 5933-5946, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30072325

ABSTRACT

In this paper, we propose a novel vessel tracking method, called active vessel tracking (AVT). The proposed method retains the major advantages that most 2D segmentation methods have demonstrated for 3D tracking while overcoming the drawbacks of previous 3D vessel tracking methods. Under the assumption that the vessel is cylindrical, and its cross-section therefore elliptical, AVT finds a plane perpendicular to the vessel axis while tracking the vessel along its length. We also propose a vessel branch detection method to automatically track complete vascular networks from a single starting point, whereas previously proposed solutions have usually been limited in handling vessel bifurcations precisely in 3D or have required considerable user interaction. Our results show that the method is robust and accurate in both synthetic and clinical cases. In an experiment on synthetic data sets, the proposed method achieved a tracking accuracy of 96.1±0.5, detecting 99.1% of the branches. In an experiment on abdominal CTA data sets, it achieved a tracking accuracy of 98.4±0.5 for six target vessels, detecting 98.3% of the branches. These results show that the proposed method can outperform previous methods for vessel tracking.
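The geometric idea behind the perpendicular-plane search can be made concrete: a plane tilted by angle θ away from the true cross-sectional plane cuts a cylinder of radius r in an ellipse with semi-axes (r, r/cos θ), so the tilt that minimizes the major axis recovers the perpendicular plane. A sketch of that relation only, as an illustration of the cylindrical assumption, not the AVT algorithm itself:

```python
import math

def cross_section_axes(radius, tilt_rad):
    """Semi-minor and semi-major axes of the ellipse cut from a cylinder
    by a plane tilted by tilt_rad away from the perpendicular plane."""
    return radius, radius / math.cos(tilt_rad)
```

At zero tilt the section is a circle; the ellipse elongates monotonically as the plane tilts, giving a well-behaved objective for orienting the tracking plane.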

10.
Comput Methods Programs Biomed ; 148: 99-112, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28774443

ABSTRACT

BACKGROUND AND OBJECTIVES: A robust vessel segmentation and tracking method based on a particle-filtering framework is proposed to cope with increasing demand for a method that can detect and track vessel anomalies. METHODS: We apply the level set method to segment the vessel boundary and a particle filter to track the position and shape variations in the vessel boundary between two adjacent slices. To enhance the segmentation and tracking performances, the importance density of the particle filter is localized by estimating the translation of an object's boundary. In addition, to minimize problems related to degeneracy and sample impoverishment in the particle filter, a newly proposed weighting policy is investigated. RESULTS: Compared to conventional methods, the proposed algorithm demonstrates better segmentation and tracking performances. Moreover, the stringent weighting policy we proposed demonstrates a tendency of suppressing degeneracy and sample impoverishment, and higher tracking accuracy can be obtained. CONCLUSIONS: The proposed method is expected to be applied to highly valuable applications for more accurate three-dimensional vessel tracking and rendering.
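The degeneracy the weighting policy targets is conventionally diagnosed with the effective sample size, N_eff = 1/Σ wᵢ² over normalized weights: it equals the particle count for uniform weights and collapses toward 1 when a single particle dominates. A minimal sketch of the diagnostic (the standard measure, not the paper's specific policy):

```python
def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) over normalized particle weights.

    Values near len(weights) indicate healthy weight diversity; values
    near 1 signal degeneracy and the need to resample.
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    return 1.0 / sum(w * w for w in norm)
```

Filters typically trigger resampling when N_eff drops below a fraction of the particle count, trading some sample impoverishment for relief from degeneracy, the two failure modes the abstract's weighting policy is designed to balance.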


Subject(s)
Blood Vessels/diagnostic imaging , Image Processing, Computer-Assisted , Algorithms , Angiography , Bayes Theorem , Humans , Tomography, X-Ray Computed
11.
Methods Cell Biol ; 119: 55-72, 2014.
Article in English | MEDLINE | ID: mdl-24439279

ABSTRACT

Microscope projection photolithography (MPP) based on a protein-friendly photoresist is a versatile tool for the fabrication of protein- and cell-micropatterned surfaces. Photomasks containing various features can be economically produced by printing features on transparency films. Features in photomasks are projected by the objective lens of a microscope, resulting in a significant reduction of the feature size to as small as ~1 µm, close to the practical limit of light-based microfabrication. A fluorescence microscope used in most biology labs can be used for the fabrication process with some modifications. Using such a microscope, multistep MPP can be readily performed with precise registration of each micropattern on transparency film masks. Here, we describe methods of the synthesis and characterization of a protein-friendly photoresist poly(2,2-dimethoxy nitrobenzyl methacrylate-r-methyl methacrylate-r-poly(ethylene glycol) methacrylate) and the setups of fluorescence microscopes and the MPP procedures. In addition, we describe the protocols used in the micropatterning of multiple lymphocytes and the dynamic micropatterning of adherent cells.


Subject(s)
Microtechnology/methods , Printing , Proteins/chemistry , Imaging, Three-Dimensional , Light , Photography , Surface Properties