Results 1 - 4 of 4
1.
Article in English | MEDLINE | ID: mdl-38843060

ABSTRACT

Over the past few years, the manufacturing industry has increasingly embraced Augmented Reality (AR) for inspecting real products, yet it faces challenges with visualization modalities. AR content presentation significantly impacts user performance, especially when virtual object colors lack real-world context, and the scarcity of studies in this area compounds the uncertainty about how visualization affects user performance in inspection tasks. This study introduces a novel AR recoloring technique to enhance user performance during industrial assembly inspection tasks. The technique automatically recolors virtual components based on their physical counterparts, improving their distinctiveness. Experimental comparisons with AR experts and representative users, using objective and subjective metrics, demonstrate that the proposed AR recoloring technique enhances task performance and reduces mental burden during inspection activities. The approach outperforms established presentation modes such as CAD and random coloring, showcasing its potential for advancing AR applications in manufacturing, particularly in product inspection.
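The abstract does not give the recoloring algorithm itself. As a hedged sketch of the underlying idea — choosing overlay colors that stand out against the colors of the physical scene — one naive formulation picks, from a candidate palette, the color whose minimum RGB distance to any sampled real-scene color is largest. The function name, palette, and sample data below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def pick_distinct_color(background_colors, palette):
    """Return the palette color whose minimum Euclidean RGB distance
    to any sampled background color is largest (maximin selection)."""
    bg = np.asarray(background_colors, dtype=float)   # (n, 3) sampled scene colors
    pal = np.asarray(palette, dtype=float)            # (m, 3) candidate overlay colors
    # pairwise distance from each palette color to each background sample
    d = np.linalg.norm(pal[:, None, :] - bg[None, :, :], axis=2)  # (m, n)
    return tuple(int(c) for c in pal[np.argmax(d.min(axis=1))])

# Example: a background dominated by red tones favors a blue overlay
bg = [(200, 30, 30), (180, 50, 40)]
palette = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
print(pick_distinct_color(bg, palette))   # (0, 0, 255)
```

A perceptual color space (e.g. CIELAB) would be a better fit than raw RGB for this maximin choice, but the structure of the selection is the same.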

2.
Sensors (Basel) ; 16(4)2016 Apr 14.
Article in English | MEDLINE | ID: mdl-27089344

ABSTRACT

The integration of underwater 3D data captured by acoustic and optical systems is a promising technique for applications such as mapping or vehicle navigation. It compensates for the low resolution of acoustic sensors and the limitations of optical sensors in poor visibility conditions. Aligning these data is a challenging problem, as it is hard to establish a point-to-point correspondence. This paper presents a multi-sensor registration method for the automatic integration of 3D data acquired from a stereovision system and a 3D acoustic camera in close-range acquisition. A purpose-built rig has been used in the laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on acquiring the rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimate of the unknown geometric transformation is obtained by registering the two 3D point clouds, but it turns out to be strongly affected by noise and data dispersion. A robust and optimal estimate is then obtained by statistically processing the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experiment with the proposed 3D opto-acoustic camera.
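The initial step the abstract describes — estimating a rigid transformation by registering two corresponding 3D point clouds — is commonly solved with the SVD-based (Kabsch) least-squares fit. The sketch below illustrates that generic building block under the assumption of known correspondences; it is not the paper's implementation, and the function name and synthetic data are assumptions.

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto B
    via the standard SVD (Kabsch) solution. A, B: (n, 3) corresponding points."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)          # centroids
    H = (A - ca).T @ (B - cb)                         # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Example: recover a known 30-degree rotation about z plus a translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
A = np.random.default_rng(0).normal(size=(20, 3))
B = A @ R_true.T + t_true
R, t = rigid_transform(A, B)
print(np.allclose(R, R_true), np.allclose(t, t_true))   # True True
```

The paper's robust refinement — statistically combining the per-pose transformations — would sit on top of estimates like this one, e.g. by averaging rotations and rejecting outlier poses.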

3.
Sensors (Basel) ; 13(8): 11007-31, 2013 Aug 20.
Article in English | MEDLINE | ID: mdl-23966193

ABSTRACT

In some application fields, such as underwater archaeology or marine biology, there is a need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environments. In this work we compared, in water, two whole-field 3D imaging techniques based on active and passive approaches, respectively. The comparison was performed under poor visibility conditions, produced in the laboratory by suspending different quantities of clay in a water tank. For a fair comparison, a stereo configuration was adopted for both techniques, using the same setup, working distance, calibration, and objects. At present the proposed setup is not suitable for real-world applications, but it allowed us to conduct a preliminary analysis of the performance of the two techniques and to assess their ability to acquire 3D points in the presence of turbidity. Performance was evaluated in terms of accuracy and density of the acquired 3D points. Our results can serve as a reference for further comparisons in the analysis of other 3D techniques and algorithms.
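The abstract evaluates the two techniques by "accuracy and density of the acquired 3D points" without defining the metrics. A minimal sketch of plausible definitions — accuracy as nearest-neighbour distance to a reference cloud, density as points per unit area — is given below; the function name, metric definitions, and test geometry are assumptions, not the paper's exact protocol (requires NumPy and SciPy).

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_metrics(measured, reference, area):
    """Accuracy: mean and RMS nearest-neighbour distance from each measured
    point to the reference cloud. Density: measured points per unit area."""
    measured = np.asarray(measured, float)
    d, _ = cKDTree(np.asarray(reference, float)).query(measured)
    return d.mean(), np.sqrt((d ** 2).mean()), len(measured) / area

# Reference: a 10x10 planar grid (spacing 1); measured: same grid lifted by 0.1,
# so every nearest-neighbour distance is exactly 0.1
xy = np.stack(np.meshgrid(np.arange(10.0), np.arange(10.0)), -1).reshape(-1, 2)
reference = np.hstack([xy, np.zeros((100, 1))])
measured = reference + np.array([0.0, 0.0, 0.1])
mean_d, rms_d, density = cloud_metrics(measured, reference, area=81.0)
print(mean_d, rms_d, density)
```

For a real evaluation the reference would be a ground-truth scan or CAD model of the target object rather than a synthetic grid.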


Subject(s)
Algorithms; Artifacts; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Radar; Immersion; Water
4.
IEEE Trans Vis Comput Graph ; 19(1): 159-72, 2013 Jan.
Article in English | MEDLINE | ID: mdl-22508901

ABSTRACT

Visuo-haptic mixed reality consists of adding to a real scene the ability to see and touch virtual objects. It requires the use of see-through display technology for visually mixing real and virtual objects, and haptic devices for adding haptic interaction with the virtual objects. Unfortunately, the use of commodity haptic devices poses obstruction and misalignment issues that complicate the correct integration of a virtual tool and the user's real hand in the mixed reality scene. In this work, we propose a novel mixed reality paradigm where it is possible to touch and see virtual objects in combination with a real scene, using commodity haptic devices, and with a visually consistent integration of the user's hand and the virtual tool. We discuss the visual obstruction and misalignment issues introduced by commodity haptic devices, and then propose a solution that relies on four simple technical steps: color-based segmentation of the hand, tracking-based segmentation of the haptic device, background repainting using image-based models, and misalignment-free compositing of the user's hand. We have developed a successful proof-of-concept implementation, where a user can touch virtual objects and interact with them in the context of a real scene, and we have evaluated the impact on user performance of obstruction and misalignment correction.
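Of the four steps listed, the first — color-based segmentation of the user's hand — can be sketched as a simple HSV threshold. This is a generic illustration, not the authors' implementation; the threshold values and function name are assumptions.

```python
import numpy as np

def skin_mask_hsv(hsv_image, h_range=(0, 25), s_min=40, v_min=60):
    """Naive color-based hand segmentation: boolean mask of pixels whose
    HSV values fall inside a configurable skin-tone range. The default
    thresholds are illustrative, not values from the paper."""
    h, s, v = hsv_image[..., 0], hsv_image[..., 1], hsv_image[..., 2]
    return (h >= h_range[0]) & (h <= h_range[1]) & (s >= s_min) & (v >= v_min)

# Tiny synthetic example: one "skin" pixel (hue 10) and one background pixel (hue 100)
img = np.array([[[10, 120, 200], [100, 120, 200]]], dtype=np.uint8)
print(skin_mask_hsv(img))   # [[ True False]]
```

In a live pipeline the mask would be computed per frame on the camera image (after an RGB-to-HSV conversion) and composited with the tracked haptic-device silhouette to produce the occlusion-correct rendering the paper describes.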
