Results 1 - 7 of 7
1.
Heliyon ; 10(4): e26042, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38390062

ABSTRACT

In this paper, we present a new generation of omnidirectional automated guided vehicle (omniagv) used for transporting materials within a manufacturing factory, able to navigate autonomously and intelligently by interacting with the environment, including people and other entities. The robot has to be integrated into the operating environment without significant changes to the existing facilities or heavy redefinition of the logistics processes already running. For this purpose, several vision-based systems and advanced methods from mobile and cognitive robotics are developed and integrated. In this context, vision and perception are key factors. Dedicated modules support the robot during its navigation in the environment. Specifically, the localization module provides the robot pose by combining visual odometry and wheel odometry; the obstacle avoidance module detects obstacles and recognizes selected object classes for adaptive navigation; and the tag detection module aids the robot during the cart-picking phase and provides information for global localization. The smart integration of vision and perception is paramount for effectively using the robot in an industrial context. Extensive qualitative and quantitative results prove the capability and effectiveness of the proposed AGV to navigate in the considered industrial environment.
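
As an illustration of the kind of odometry fusion described in this abstract, the following minimal sketch blends wheel-odometry and visual-odometry pose increments for a planar robot with a fixed-weight complementary filter. The weights, increments, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): fusing wheel-odometry and
# visual-odometry pose increments for a planar AGV with a hypothetical
# fixed-weight complementary filter. All numbers are illustrative.

def compose(pose, delta):
    """Compose a planar pose (x, y, theta) with a relative increment."""
    x, y, th = pose
    dx, dy, dth = delta
    return np.array([x + dx * np.cos(th) - dy * np.sin(th),
                     y + dx * np.sin(th) + dy * np.cos(th),
                     th + dth])

def fuse(delta_wheel, delta_visual, w_visual=0.7):
    """Blend two odometry increments; w_visual is an assumed trust weight."""
    return w_visual * np.asarray(delta_visual) + (1.0 - w_visual) * np.asarray(delta_wheel)

pose = np.zeros(3)
for d_wheel, d_vis in [((0.10, 0.0, 0.01), (0.09, 0.005, 0.012)),
                       ((0.10, 0.0, 0.00), (0.11, -0.004, 0.001))]:
    pose = compose(pose, fuse(d_wheel, d_vis))
print(pose)  # fused planar pose after two steps
```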

2.
Front Plant Sci ; 14: 1305292, 2023.
Article in English | MEDLINE | ID: mdl-38449576

ABSTRACT

Introduction: Drought detection, spanning from early stress to severe conditions, plays a crucial role in maintaining productivity, facilitating recovery, and preventing plant mortality. While handheld thermal cameras have been widely employed to track changes in leaf water content and stomatal conductance, research on thermal image classification remains limited, due mainly to the low resolution and blurry images produced by handheld cameras. Methods: In this study, we introduce a computer vision pipeline to enhance the significance of leaf-level thermal images across 27 distinct cotton genotypes cultivated in a greenhouse under progressive drought conditions. Our approach employs a customized software pipeline to process raw thermal images, generate leaf masks, and extract a range of statistically relevant thermal features (e.g., minimum and maximum temperature, median, quartiles). These features were then used to develop machine learning algorithms capable of assessing leaf hydration status and distinguishing between well-watered (WW) and dry-down (DD) conditions. Results: Two classifiers, a random forest and a multilayer perceptron neural network, were trained to predict the plant treatment, reaching 75% and 78% accuracy, respectively. Furthermore, we evaluated the predicted versus true labels against classic physiological indicators of drought in plants, including volumetric soil water content, leaf water potential, and chlorophyll a fluorescence, to provide further insight into and possible explanations of the classification outputs. Discussion: Interestingly, mislabeled leaves mostly exhibited notable responses in fluorescence, water uptake from the soil, and/or leaf hydration status. Our findings emphasize the potential of AI-assisted thermal image analysis in enhancing the informative value of common heterogeneous datasets for drought detection. This application suggests widening the experimental settings in which deep learning models are used, informing future investigations into genotypic variation in plant drought response and the potential optimization of water management in agricultural settings.
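
The feature-extraction and classification step described here can be illustrated with a short sketch: summary statistics of masked leaf temperatures feed a random-forest classifier separating WW from DD plants. The synthetic data, feature list, and hyperparameters below are assumptions for illustration only, not the study's dataset or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Illustrative sketch only: statistical features from masked leaf temperatures
# (min, max, median, quartiles, spread) feeding a random-forest WW/DD classifier.

def thermal_features(temps):
    """Summary statistics of leaf-pixel temperatures (1-D array, deg C)."""
    return [temps.min(), temps.max(), np.median(temps),
            np.percentile(temps, 25), np.percentile(temps, 75), temps.std()]

rng = np.random.default_rng(0)
# Synthetic leaves: dry-down (DD) leaves assumed slightly warmer than well-watered (WW).
X = [thermal_features(rng.normal(loc=25 + 2 * label, scale=1.0, size=500))
     for label in (0, 1) for _ in range(100)]
y = [label for label in (0, 1) for _ in range(100)]   # 0 = WW, 1 = DD

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```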

3.
Sensors (Basel) ; 20(3), 2020 Feb 06.
Article in English | MEDLINE | ID: mdl-32041371

ABSTRACT

In this paper we tackle the problem of indoor robot localization using a vision-based approach. Specifically, we propose a visual odometer able to recover the relative pose of an omnidirectional automatic guided vehicle (AGV) moving inside an indoor industrial environment. A monocular downward-looking camera, with its optical axis nearly perpendicular to the ground floor, is used to collect floor images. After a preliminary image analysis detects robust point features (keypoints), descriptors associated with the keypoints are used to match the detected points across consecutive frames. A robust correspondence filter based on statistical and geometrical information is devised to reject incorrect matches, thus delivering better pose estimates. A camera pose compensation is further introduced to ensure better positioning accuracy. The effectiveness of the proposed methodology has been proven through several experiments, in the laboratory as well as in an industrial setting. Both quantitative and qualitative evaluations have been made. The outcomes show that the method provides a final positioning percentage error of 0.21% over an average distance of 17.2 m. A longer run in an industrial context has provided comparable results (a percentage error of 0.94% after about 80 m). The average relative positioning error is about 3%, which is in good agreement with the current state of the art.
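
A minimal sketch of the keypoint matching and outlier-rejection idea, assuming OpenCV's ORB detector, brute-force matching, and RANSAC in place of the paper's specific detector, descriptors, and statistical/geometrical correspondence filter:

```python
import cv2
import numpy as np

# Hedged sketch of keypoint-based floor visual odometry between two consecutive
# downward-looking frames; not the paper's exact pipeline.

def relative_motion(prev_gray, curr_gray):
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects incorrect correspondences before estimating the planar motion.
    M, inliers = cv2.estimateAffinePartial2D(pts1, pts2, method=cv2.RANSAC,
                                             ransacReprojThreshold=2.0)
    dx, dy = M[0, 2], M[1, 2]               # translation in pixels
    dtheta = np.arctan2(M[1, 0], M[0, 0])   # rotation in radians
    return dx, dy, dtheta

# Usage: dx, dy, dtheta = relative_motion(frame_t, frame_t1) on grayscale frames;
# pixel translations are converted to metres with the calibrated ground sampling distance.
```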

4.
J Imaging ; 6(7), 2020 Jul 02.
Article in English | MEDLINE | ID: mdl-34460655

ABSTRACT

An accurate estimate of passenger attendance in each metro car contributes to the safe coordination and sorting of passenger crowds in each metro station. In this work we propose a multi-head Convolutional Neural Network (CNN) architecture trained to estimate passenger attendance in a metro car. The proposed network architecture consists of two main parts: a convolutional backbone, which extracts features over the whole input image, and a set of multi-head layers that estimate a density map, needed to predict the number of people within the crowd image. The network performance is first evaluated on publicly available crowd counting datasets, including ShanghaiTech part_A, ShanghaiTech part_B, and UCF_CC_50, and then trained and tested on our dataset acquired in subway cars in Italy. In both cases, a comparison is made against the most relevant and recent state-of-the-art crowd counting architectures, showing that our proposed MH-MetroNet architecture outperforms them in terms of Mean Absolute Error (MAE), Mean Square Error (MSE), and passenger count prediction.
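
A schematic sketch of a convolutional backbone plus multi-head density-map counter in the spirit of the described architecture; the layer sizes, head count, and input resolution are assumptions, not the published MH-MetroNet configuration.

```python
import torch
import torch.nn as nn

# Schematic backbone + multi-head density-map counter (illustrative sizes only).

class MultiHeadCounter(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(            # shared feature extractor
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.heads = nn.ModuleList([              # each head predicts a density map
            nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 1, 1))
            for _ in range(2)
        ])

    def forward(self, x):
        feats = self.backbone(x)
        return torch.stack([h(feats) for h in self.heads]).mean(0)

model = MultiHeadCounter()
img = torch.randn(1, 3, 256, 256)                 # dummy metro-car image
density_map = model(img)
print("estimated count:", density_map.sum().item())  # count = integral of density
```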

5.
Appl Opt ; 58(34): G155-G161, 2019 Dec 01.
Article in English | MEDLINE | ID: mdl-31873498

ABSTRACT

Digital holography is widely used in many fields for imaging, display, and metrology, by exploiting its capability to furnish quantitative phase contrast maps. The full processing pipeline that yields phase contrast images consists of a cascade of numerical steps, such as zero-order and twin-image suppression, automatic refocusing, phase extraction by aberration compensation, and, if necessary, phase unwrapping. In this paper, we propose a new method, to the best of our knowledge, based on singular value decomposition filtering, to suppress the zero-order and twin images in the off-axis configuration, thus automatically selecting the desired real diffraction order. We demonstrate the proposed approach in the case where the reference beam's frequency and curvature are unknown, as typically occurs in portable off-axis holographic microscope systems for lab-on-a-chip applications. We validate the proposed strategy through a comparison with common Fourier spatial filtering under different experimental conditions and for several biological samples.
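
As a toy illustration of the SVD-filtering idea, the sketch below removes the dominant low-rank contribution of a synthetic 2-D pattern by zeroing its leading singular values; the hologram model, the number of discarded components, and the test data are assumptions, not the method as published.

```python
import numpy as np

# Toy illustration only: suppressing a dominant low-rank (zero-order-like)
# contribution by zeroing the k largest singular values of a 2-D array.

def svd_suppress(hologram, k=1):
    """Zero out the k largest singular components of a real-valued array."""
    U, s, Vt = np.linalg.svd(hologram, full_matrices=False)
    s_filtered = s.copy()
    s_filtered[:k] = 0.0                      # drop the strongest components
    return (U * s_filtered) @ Vt

# Synthetic test: a smooth background (stand-in for the zero-order term)
# plus weak fringes (stand-in for the useful diffraction order).
background = np.outer(np.hanning(256), np.hanning(256))
fringes = 0.05 * np.cos(np.add.outer(np.arange(256), np.arange(256)) * 0.5)
filtered = svd_suppress(background + fringes, k=1)
print("residual background energy:", np.linalg.norm(filtered - fringes))
```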

6.
Sci Rep ; 8(1): 17185, 2018 11 21.
Article in English | MEDLINE | ID: mdl-30464205

ABSTRACT

The Risso's dolphin is a widely distributed species, found in deep temperate and tropical waters. Estimates of its abundance are available in a few regions, details of its distribution are lacking, and its status in the Mediterranean Sea is ranked as Data Deficient by the IUCN Red List. In this paper, a synergy between bio-ecological analysis and innovative strategies has been applied to construct a digital platform, DolFin. It contains a collection of sighting data and geo-referred photos of Grampus griseus, acquired from 2013 to 2016 in the Gulf of Taranto (Northern Ionian Sea, North-eastern Central Mediterranean Sea), and the first automated tool for Smart Photo Identification of the Risso's dolphin (SPIR). This approach provides the capability to collect and analyse significant amounts of data acquired over wide areas and extended periods of time. This effort establishes the baseline for future large-scale studies, essential to providing further information on the distribution of G. griseus. Our data and analysis results corroborate the hypothesis of a resident Risso's dolphin population in the Gulf of Taranto, showing site fidelity in a relatively restricted area characterized by a steep slope to around 800 m in depth, north of the Taranto Valley canyon system.


Subjects
Dolphins/growth & development, Phylogeography/methods, Zoology/methods, Animals, Mediterranean Sea
7.
Sensors (Basel) ; 15(2): 2283-308, 2015 Jan 22.
Article in English | MEDLINE | ID: mdl-25621605

ABSTRACT

In this paper, an accurate range sensor for the three-dimensional reconstruction of environments is designed and developed. Following the principles of laser profilometry, the device exploits a set of optical transmitters to project a laser line onto the environment. A high-resolution, high-frame-rate camera assisted by a telecentric lens collects the laser light reflected by a parabolic mirror, whose shape is designed ad hoc to achieve a maximum measurement error of 10 mm when the target is placed 3 m away from the laser source. Measurements are derived by means of an analytical model whose parameters are estimated during a preliminary calibration phase. The geometrical parameters, analytical modeling, and image processing steps are validated through several experiments, which show the capability of the proposed device to recover the shape of a target with high accuracy. Experimental measurements follow Gaussian statistics, with a standard deviation of 1.74 mm within the measurable range. The results prove that the presented range sensor is a good candidate for environmental inspections and measurements.
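
The triangulation principle underlying laser profilometry can be sketched in a simplified planar form, without the parabolic mirror or telecentric optics of the actual device; the focal length, baseline, and laser angle below are illustrative values, not the sensor's calibrated parameters.

```python
import numpy as np

# Minimal planar laser-triangulation sketch (not the paper's catadioptric model):
# a pinhole camera at the origin and a laser emitter offset by a baseline observe
# the same laser spot; intersecting the two rays yields the range.

def triangulate_depth(u, f=800.0, cx=640.0, baseline=0.20, laser_angle_deg=30.0):
    """Depth z (metres) of the laser spot seen at image column u (pixels)."""
    tan_theta = np.tan(np.radians(laser_angle_deg))
    return baseline * f / ((u - cx) + f * tan_theta)

# Usage: a spot detected at column 700 maps to a range estimate.
print("z =", triangulate_depth(700.0), "m")
```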
