Results 1 - 20 of 121
1.
Insects ; 15(6)2024 May 21.
Article in English | MEDLINE | ID: mdl-38921088

ABSTRACT

Pest control is crucial in crop production; however, the use of chemical pesticides, the primary method of pest control, poses environmental issues and leads to insecticide resistance in pests. To overcome these issues, laser zapping has been studied as a clean pest control technology against the nocturnal cotton leafworm, Spodoptera litura, which has high fecundity and causes severe damage to various crops. For accurate targeting during laser zapping, it is important to measure the coordinates and speed of moths under low-light conditions. To achieve this, we developed an automatic detection pipeline based on point cloud time series data from stereoscopic images. We obtained 3D point cloud data from disparity images recorded under infrared and low-light conditions. To identify S. litura, we removed noise from the data using multiple filters and a support vector machine. We then computed the bounding-box size and directional angle of each 3D point cloud time series to identify noisy point clouds. Visual inspection of the flight trajectories confirmed that bounding-box size and movement direction were good indicators of noisy data. After removing noisy data, we obtained 68 flight trajectories, and the average flight speed of free-flying S. litura was 1.81 m/s.
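The trajectory-speed step above reduces to summing distances between consecutive 3D samples. A minimal sketch in Python, assuming a fixed (hypothetical) camera frame rate and a trajectory given as a list of metric (x, y, z) points — the paper's actual pipeline and parameters are not reproduced here:

```python
import math

def average_speed(points, fps=100.0):
    """Mean speed (m/s) of a trajectory given as (x, y, z) coordinates
    in metres, sampled at a fixed frame rate.
    The 100 fps value is a hypothetical assumption, not from the paper."""
    dt = 1.0 / fps
    dist = sum(
        math.dist(points[i], points[i + 1])  # Euclidean step length
        for i in range(len(points) - 1)
    )
    return dist / (dt * (len(points) - 1))

# A straight 1 m flight covered in 50 frames at 100 fps -> 2.0 m/s
traj = [(0.02 * i, 0.0, 1.5) for i in range(51)]
print(round(average_speed(traj), 3))  # -> 2.0
```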

2.
J Imaging ; 10(6)2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38921621

ABSTRACT

Owing to the line-scanning camera, measurement methods based on line-scanning stereo vision offer high optical accuracy, efficient data transmission, and a wide field of view, making them well suited to continuous operation and high-speed inspection on industrial production lines. However, the one-dimensional imaging characteristics of the line-scanning camera cause motion distortion during image acquisition, which directly affects detection accuracy. Effectively reducing the influence of motion distortion is therefore the primary problem in ensuring detection accuracy. To obtain two-dimensional color images and three-dimensional contour data of the heavy-rail surface simultaneously, a binocular color line-scanning stereo vision system is designed that collects heavy-rail surface data under the bright-field illumination of a symmetrical linear light source. To address the image motion distortion caused by system installation error and mismatched collaborative acquisition frame rates, this paper uses a checkerboard target and a two-step cubature Kalman filter algorithm to solve for the nonlinear parameters in the motion distortion model, estimate the real motion, and correct the image information. Experiments show that the accuracy of the data contained in the image improves by 57.3% after correction.

3.
Sensors (Basel) ; 24(6)2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38544137

ABSTRACT

This paper presents an innovative dataset designed explicitly for challenging agricultural environments, such as greenhouses, where precise localization is crucial but GNSS accuracy may be compromised by construction elements and the crop itself. The dataset was collected using a mobile platform equipped with a set of sensors typically used on mobile robots as it was driven through all the corridors of a typical Mediterranean greenhouse featuring tomato crops. The dataset presents a unique opportunity for constructing detailed 3D models of plants in such indoor-like spaces, with potential applications such as robotized spraying. For the first time, to the authors' knowledge, a dataset suitable for testing simultaneous localization and mapping (SLAM) methods is presented for a greenhouse environment, which poses unique challenges. The suitability of the dataset for this purpose is assessed by presenting SLAM results with state-of-the-art algorithms. The dataset is available online.

4.
Sensors (Basel) ; 24(6)2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38544205

ABSTRACT

Automated precision weed control requires visual methods to discriminate between crops and weeds. State-of-the-art plant detection methods fail to reliably detect weeds, especially in dense and occluded scenes. In the past, using hand-crafted detection models, both color (RGB) and depth (D) data were used for plant detection in dense scenes. Remarkably, the combination of color and depth data is not widely used in current deep learning-based vision systems in agriculture. Therefore, we collected an RGB-D dataset using a stereo vision camera. The dataset contains sugar beet crops in multiple growth stages with varying weed densities. The dataset was made publicly available and was used to evaluate two novel plant detection models: the D-model, using the depth data as the input, and the CD-model, using both the color and depth data as inputs. For compatibility with existing 2D deep learning architectures, the depth data were transformed into a 2D image using color encoding. As a reference, the C-model, which uses only color data as the input, was included. The limited availability of suitable training data for depth images demands the use of data augmentation and transfer learning. Using our three detection models, we studied the effectiveness of data augmentation and transfer learning for depth data transformed into 2D images. It was found that geometric data augmentation and transfer learning were equally effective for the reference model and the novel models using the depth data. This demonstrates that combining color-encoded depth data with geometric data augmentation and transfer learning can improve the RGB-D detection model. However, when testing our detection models on the use case of volunteer potato detection in sugar beet farming, the addition of depth data did not improve plant detection at high vegetation densities.
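The color-encoding step mentioned above can be sketched as a simple mapping from depth values to RGB triples. The red-to-blue linear ramp below is an illustrative choice; the paper does not specify which encoding it uses:

```python
def encode_depth_to_rgb(depth, d_min, d_max):
    """Encode a scalar depth map (list of rows, metres) into an 8-bit
    RGB image using a red-to-blue linear colour ramp.
    The ramp is illustrative; the paper's encoding may differ."""
    img = []
    for row in depth:
        out_row = []
        for d in row:
            # Normalise depth into [0, 1], clamping out-of-range values
            t = min(max((d - d_min) / (d_max - d_min), 0.0), 1.0)
            out_row.append((int(255 * (1 - t)), 0, int(255 * t)))
        img.append(out_row)
    return img

rgb = encode_depth_to_rgb([[0.5, 1.0], [1.5, 2.0]], 0.5, 2.0)
print(rgb[0][0], rgb[1][1])  # nearest pixel pure red, farthest pure blue
```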


Subjects
Plant Weeds , Weed Control , Humans , Agriculture , Crops, Agricultural , Sugars
5.
ACS Appl Mater Interfaces ; 15(50): 58631-58642, 2023 Dec 20.
Article in English | MEDLINE | ID: mdl-38054897

ABSTRACT

The neuromorphic vision system (NVS) equipped with optoelectronic synapses integrates perception, storage, and processing and is expected to address the issues of traditional machine vision. However, owing to their lack of stereo vision, existing NVSs focus on 2D image processing, which makes it difficult to solve problems such as spatial cognition errors and low-precision interpretation. Consequently, inspired by the human visual system, an NVS with stereo vision is developed to achieve 3D object recognition, enabled by the prepared ReS2 optoelectronic synapse with an ultralow power consumption of 12.12 fJ. This device exhibits excellent optical synaptic plasticity derived from the persistent photoconductivity effect. As the cornerstone for 3D vision, color planar information is successfully discriminated and stored in situ at the sensor end, benefiting from the device's wavelength-dependent plasticity in the visible region. Importantly, the dependence of the channel conductance on the target distance is revealed experimentally, implying that structural information about the object can be directly captured and stored by the synapse. The 3D image of the object is successfully reconstructed via fusion of its planar and depth images. The proposed 3D-NVS based on ReS2 synapses achieves a recognition accuracy of 97.0% for 3D objects, much higher than that for 2D objects (32.6%), demonstrating its strong ability to prevent 2D-photo spoofing in applications such as face payment and entrance guard systems.

6.
Adv Mater ; : e2310134, 2023 Dec 03.
Article in English | MEDLINE | ID: mdl-38042993

ABSTRACT

Fluid flow behavior is visualized through particle image velocimetry (PIV) for understanding and studying experimental fluid dynamics. However, traditional PIV methods require multiple cameras and conventional lens systems for image acquisition to resolve multi-dimensional velocity fields, which introduces complexity to the entire system. Meta-lenses are advanced flat optical devices composed of artificial nanoantenna arrays that can manipulate the wavefront of light, with the advantages of being ultrathin, compact, and free of spherical aberration. Meta-lenses offer novel functionalities and promise to replace traditional optical imaging systems. Here, a binocular meta-lens PIV technique is proposed, in which a pair of GaN meta-lenses are fabricated on one substrate and integrated with an imaging sensor to form a compact binocular PIV system. The meta-lens weighs only 116 mg, much lighter than commercial lenses. The 3D velocity field is obtained from the binocular disparity and the particle image displacement of the fluid flow. A measurement error of ≈1.25% in vortex-ring diameter is experimentally validated using a Reynolds-number (Re) 2000 vortex ring. This work demonstrates a new development direction for the PIV technique, rejuvenating traditional flow diagnostic tools toward a more compact, easy-to-deploy form, and enables further miniaturized, low-power systems for portable, field-use, and space-constrained PIV applications.
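The binocular-disparity step rests on the standard stereo triangulation relation Z = f·b/d. A minimal sketch, with illustrative parameter values rather than those of the GaN meta-lens rig:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth (m) from binocular disparity: Z = f * b / d.
    Parameter values in the example are illustrative assumptions, not
    those of the meta-lens system described in the abstract."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# f = 1000 px, baseline = 5 mm, disparity = 10 px -> 0.5 m
print(depth_from_disparity(10.0, 1000.0, 0.005))  # -> 0.5
```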

7.
Sensors (Basel) ; 23(15)2023 Aug 03.
Article in English | MEDLINE | ID: mdl-37571684

ABSTRACT

The low-light conditions, abundant dust, and rocky terrain on the lunar surface pose challenges for scientific research. To effectively perceive the surrounding environment, lunar rovers are equipped with binocular cameras. In this paper, with the aim of accurately detecting obstacles on the lunar surface under complex conditions, an Improved Semi-Global Matching (I-SGM) algorithm for the binocular cameras is proposed. The proposed method first carries out a cost calculation based on an improved Census transform and an adaptive window based on connected components. Then, cost aggregation is performed using the cross-based cost aggregation of the AD-Census algorithm, and the initial disparity of the image is calculated via the Winner-Takes-All (WTA) strategy. Finally, disparity optimization is performed using left-right consistency checking and disparity padding. Tests on standard image pairs provided by the Middlebury website reveal that the algorithm effectively improves the matching accuracy of the SGM algorithm while reducing running time and enhancing noise immunity. Furthermore, when applying the I-SGM algorithm to a simulated lunar environment, the results show that it is applicable in dim conditions on the lunar surface and can better help a lunar rover detect obstacles during its travel.
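The Census-transform cost and Winner-Takes-All steps can be sketched in simplified form. The toy below uses a 1-D census over a single scanline rather than the paper's 2-D adaptive windows and cross-based aggregation:

```python
def census_1d(row, x, r=2):
    # 1-D census signature: compare each neighbour at offsets -r..r
    # (excluding 0) against the centre pixel; one bit per comparison.
    bits = 0
    for dx in range(-r, r + 1):
        if dx == 0:
            continue
        bits = (bits << 1) | (1 if row[x + dx] < row[x] else 0)
    return bits

def wta_disparity(left_row, right_row, x, max_d, r=2):
    # Winner-Takes-All: pick the disparity whose census signatures
    # have the smallest Hamming distance.
    target = census_1d(left_row, x, r)
    best_d, best_cost = 0, float("inf")
    for d in range(max_d + 1):
        if x - d < r:  # right-image window would leave the image
            break
        cost = bin(target ^ census_1d(right_row, x - d, r)).count("1")
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

left = [9, 3, 7, 1, 8, 2, 6, 0, 5, 4, 9, 3]   # left scanline
right = left[2:]                              # right view: scene shifted 2 px
print(wta_disparity(left, right, 5, 4))       # -> 2
```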

8.
Sensors (Basel) ; 23(15)2023 Aug 03.
Article in English | MEDLINE | ID: mdl-37571707

ABSTRACT

Model-based stereo vision methods can estimate the 6D poses of rigid objects. They can help robots to achieve a target grip in complex home environments. This study presents a novel approach, called the variable photo-model method, to estimate the pose and size of an unknown object using a single photo of the same category. By employing a pre-trained You Only Look Once (YOLO) v4 weight for object detection and 2D model generation in the photo, the method converts the segmented 2D photo-model into 3D flat photo-models assuming different sizes and poses. Through perspective projection and model matching, the method finds the best match between the model and the actual object in the captured stereo images. The matching fitness function is optimized using a genetic algorithm (GA). Unlike data-driven approaches, this approach does not require multiple photos or pre-training time for single object pose recognition, making it more versatile. Indoor experiments demonstrate the effectiveness of the variable photo-model method in estimating the pose and size of the target objects within the same class. The findings of this study have practical implications for object detection prior to robotic grasping, particularly due to its ease of application and the limited data required.

9.
Sensors (Basel) ; 23(14)2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37514869

ABSTRACT

Acoustic and optical sensing modalities represent two of the primary sensing methods within underwater environments, and both have been researched extensively in previous works. Acoustic sensing is the premier method due to its high transmissivity in water and its relative immunity to environmental factors such as water clarity. Optical sensing is, however, valuable for many operational and inspection tasks and is readily understood by human operators. In this work, we quantify and compare the operational characteristics and the environmental effects of turbidity and illumination on two commercial off-the-shelf sensors and an additional augmented optical method: a high-frequency, forward-looking inspection sonar; a stereo camera with built-in stereo depth estimation; and color imaging with an added laser for distance triangulation. The sensors were compared in a controlled underwater environment with known target objects to quantify operational performance. Optical stereo depth estimation and laser triangulation operate satisfactorily at low and medium turbidities up to a distance of approximately one meter, with errors below 2 cm and 12 cm, respectively; acoustic measurements are almost completely unaffected up to two meters even under high turbidity, with an error below 5 cm. Moreover, the stereo vision algorithm is slightly more robust than laser-line triangulation across turbidity and lighting conditions. Future work will concern the improvement of stereo reconstruction and laser triangulation through algorithm enhancement and the fusion of the two sensing modalities.
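Laser distance triangulation follows the same geometry as stereo: with the beam tilted by a known angle toward the camera axis, Z = f·b/(u + f·tan α). The model and numbers below are illustrative assumptions, not the actual rig parameters:

```python
import math

def laser_depth(u_px, focal_px, baseline_m, laser_angle_rad=0.0):
    """Depth from laser-spot triangulation. With the beam tilted by
    `laser_angle_rad` toward the camera axis:
        Z = f * b / (u + f * tan(a))
    Geometry and parameter names are illustrative; the abstract does
    not give the rig's actual model."""
    return focal_px * baseline_m / (u_px + focal_px * math.tan(laser_angle_rad))

# Parallel beam: f = 800 px, b = 0.1 m, spot 100 px off-axis -> 0.8 m
print(laser_depth(100.0, 800.0, 0.1))  # -> 0.8
```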

10.
IEEE Open J Eng Med Biol ; 4: 45-54, 2023.
Article in English | MEDLINE | ID: mdl-37223053

ABSTRACT

Goal: The modern way of living has significantly influenced the daily diet. The ever-increasing number of people with obesity, diabetes, and cardiovascular disease stresses the need for tools that can help with the daily intake of the necessary nutrients. Methods: In this paper, we present an automated image-based dietary assessment system for Mediterranean food, based on: 1) an image dataset of Mediterranean foods; 2) a pre-trained Convolutional Neural Network (CNN) for food image classification; and 3) stereo vision techniques for the volume and nutrition estimation of the food. We use a CNN pre-trained on the Food-101 dataset to train a deep learning classification model on our Mediterranean Greek Food (MedGRFood) dataset. From the EfficientNet family of CNNs, we use EfficientNetB2 both for the pre-trained model and its weight evaluation and for classifying food images in the MedGRFood dataset. Next, we estimate the volume of the food through 3D food reconstruction from two images taken by a smartphone camera. The proposed volume estimation subsystem uses stereo vision techniques and algorithms; it takes two food images as input to reconstruct the point cloud of the food and compute its quantity. Results: For the food classification subsystem, the accuracy with which the true class matches the most probable predicted class (Top-1 accuracy) is 83.8%, while the accuracy with which the true class matches any of the 5 most probable predicted classes (Top-5 accuracy) is 97.6%. The food volume estimation subsystem achieves an overall mean absolute percentage error of 10.5% across 148 different food dishes. Conclusions: The proposed automated image-based dietary assessment system provides the capability of continuous recording of health data in real time.

11.
Math Biosci Eng ; 20(4): 6294-6311, 2023 Jan 31.
Article in English | MEDLINE | ID: mdl-37161107

ABSTRACT

Estimating the volume of food plays an important role in diet monitoring. However, it is difficult to perform this estimation automatically and accurately. A new method based on the multi-layer superpixel technique is proposed in this paper to avoid tedious human-computer interaction and improve estimation accuracy. Our method includes the following steps: 1) obtain a pair of food images along with depth information using a stereo camera; 2) reconstruct the plate plane from the disparity map; 3) warp the input image and the disparity map to form a new viewing direction parallel to the plate plane; 4) cut the warped image into a series of slices according to the depth information and estimate the occluded part of the food; and 5) rescale superpixels for each slice and estimate the food volume by accumulating all available slices in the segmented food region. By combining image data and the disparity map, the influence of noise and visual error in existing interactive food volume estimation methods is reduced, and estimation accuracy is improved. Our experiments show that the method is effective, accurate, and convenient, providing a new tool for promoting a balanced diet and maintaining health.
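Step 5 — accumulating slices into a volume — amounts to summing per-slice areas times the slice thickness. A simplified sketch that folds the superpixel rescaling into a per-slice pixel-footprint area; all values are illustrative:

```python
def food_volume(slices, slice_thickness):
    """Accumulate volume as sum(area_i) * thickness, where each slice
    is (pixel_count, pixel_area_m2) and pixel_area_m2 is the footprint
    of one pixel at that slice's depth. A simplified stand-in for the
    per-slice superpixel rescaling described in the paper."""
    return sum(n * a for n, a in slices) * slice_thickness

# Three 1 cm slices of 100 pixels, each pixel covering 1 mm^2 -> 3 cm^3
slices = [(100, 1e-6)] * 3
print(round(food_volume(slices, 0.01) * 1e6, 6))  # -> 3.0 (cm^3)
```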

12.
Int J Comput Assist Radiol Surg ; 18(6): 1043-1051, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37067752

ABSTRACT

PURPOSE: Tissue deformation recovery is to reconstruct the change in shape and surface strain caused by tool-tissue interaction or respiration, which is essential for providing motion and shape information that benefits the improvement of the safety of minimally invasive surgery. The binocular vision-based approach is a practical candidate for deformation recovery as no extra devices are required. However, previous methods suffer from limitations such as the reliance on biomechanical priors and the vulnerability to the occlusion caused by surgical instruments. To address the issues, we propose a deformation recovery method incorporating mesh structures and scene flow. METHODS: The method can be divided into three modules. The first one is the implementation of the two-step scene flow generation module to extract the 3D motion from the binocular sequence. Second, we propose a strain-based filtering method to denoise the original scene flow. Third, a mesh optimization model is proposed that strengthens the robustness to occlusion by employing contextual connectivity. RESULTS: In a phantom and an in vivo experiment, the feasibility of the method in recovering surface deformation in the presence of tool-induced occlusion was demonstrated. Surface reconstruction accuracy was quantitatively evaluated by comparing the recovered mesh surface with the 3D scanned model in the phantom experiment. Results show that the overall error is 0.70 ± 0.55 mm. CONCLUSION: The method has been demonstrated to be capable of continuously recovering surface deformation using mesh representation with robustness to the occlusion caused by surgical forceps and promises to be suitable for the application in actual surgery.


Subjects
Algorithms , Surgery, Computer-Assisted , Humans , Surgical Mesh , Surgery, Computer-Assisted/methods , Minimally Invasive Surgical Procedures/methods , Motion (Physics)
13.
Sensors (Basel) ; 23(8)2023 Apr 10.
Article in English | MEDLINE | ID: mdl-37112198

ABSTRACT

Consider the case of a small, unmanned boat performing an autonomous mission. Such a platform may need to approximate the ocean surface of its surroundings in real time. Much like obstacle mapping in autonomous (off-road) rovers, a real-time approximation of the ocean surface around a vessel can be used for improved control and optimized route planning. Unfortunately, such an approximation seems to require either expensive and heavy sensors or external logistics that are mostly unavailable to small or low-cost vessels. In this paper, we present a real-time method for detecting and tracking ocean waves around a floating object based on stereo vision sensors. Based on a large set of experiments, we conclude that the presented method allows reliable, real-time, and cost-effective ocean surface mapping suitable for small autonomous boats.

14.
Sensors (Basel) ; 23(5)2023 Feb 24.
Article in English | MEDLINE | ID: mdl-36904743

ABSTRACT

The research and development of intelligent magnetic levitation transportation has become an important branch of the intelligent transportation system (ITS) field and can provide technical support for state-of-the-art applications such as intelligent magnetic levitation digital twins. First, we applied unmanned aerial vehicle oblique photography to acquire magnetic levitation track image data and preprocessed them. We then extracted and matched image features based on the incremental structure from motion (SFM) algorithm, recovered the camera pose parameters of the image data and the 3D scene structure of key points, and performed bundle adjustment to output a sparse 3D point cloud of the magnetic levitation track. Next, we applied multi-view stereo (MVS) techniques to estimate depth maps and normal maps. Finally, we extracted dense point clouds that precisely express the physical structure of the magnetic levitation track, including turnouts, curves, and linear structures. By comparing the dense point cloud model with a traditional building information model, experiments verified that the magnetic levitation image 3D reconstruction system based on the incremental SFM and MVS algorithms has strong robustness and accuracy and can express a variety of physical structures of the magnetic levitation track with high accuracy.

15.
Sensors (Basel) ; 23(5)2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36904917

ABSTRACT

Smart farming (SF) applications rely on robust and accurate computer vision systems. An important computer vision task in agriculture is semantic segmentation, which aims to classify each pixel of an image and can be used for selective weed removal. State-of-the-art implementations use convolutional neural networks (CNNs) trained on large image datasets. In agriculture, publicly available RGB image datasets are scarce and often lack detailed ground-truth information. In contrast to agriculture, other research areas feature RGB-D datasets that combine color (RGB) with additional distance (D) information, and results there show that including distance as an additional modality can further improve model performance. Therefore, we introduce WE3DS as the first RGB-D image dataset for multi-class plant species semantic segmentation in crop farming. It contains 2568 RGB-D images (color image and distance map) and corresponding hand-annotated ground-truth masks. Images were taken under natural light conditions using an RGB-D sensor consisting of two RGB cameras in a stereo setup. Further, we provide a benchmark for RGB-D semantic segmentation on the WE3DS dataset and compare it with a solely RGB-based model. Our trained models achieve up to 70.7% mean Intersection over Union (mIoU) for discriminating between soil, seven crop species, and ten weed species. Finally, our work confirms the finding that additional distance information improves segmentation quality.
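The mIoU metric reported above is computed per class from a pixel-level confusion matrix and then averaged. A minimal sketch with a hypothetical two-class matrix:

```python
def mean_iou(conf):
    """Mean Intersection over Union from a confusion matrix
    conf[true][pred] of pixel counts; classes absent from both
    prediction and ground truth are skipped."""
    n = len(conf)
    ious = []
    for c in range(n):
        tp = conf[c][c]
        fp = sum(conf[r][c] for r in range(n)) - tp  # predicted c, wrongly
        fn = sum(conf[c]) - tp                       # true c, missed
        denom = tp + fp + fn
        if denom:
            ious.append(tp / denom)
    return sum(ious) / len(ious)

# Hypothetical 2-class case: class 0 perfect, class 1 half confused
conf = [[50, 0], [25, 25]]
print(round(mean_iou(conf), 4))  # -> 0.5833
```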

16.
Sensors (Basel) ; 23(6)2023 Mar 21.
Article in English | MEDLINE | ID: mdl-36992022

ABSTRACT

The use of stationary underwater cameras is a modern and well-adapted approach to provide a continuous and cost-effective long-term solution for monitoring underwater habitats of particular interest. A common goal of such monitoring systems is to gain better insight into the dynamics and condition of populations of various marine organisms, such as migratory or commercially relevant fish taxa. This paper describes a complete processing pipeline to automatically determine the abundance and type of biological taxa, and to estimate their size, from stereoscopic video data captured by the stereo camera of a stationary Underwater Fish Observatory (UFO). A calibration of the recording system was carried out in situ and afterward validated using synchronously recorded sonar data. The video data were recorded continuously for nearly one year in the Kiel Fjord, an inlet of the Baltic Sea in northern Germany. They show underwater organisms in their natural behavior, as passive low-light cameras were used instead of active lighting to dampen attraction effects and allow for the least invasive recording possible. The recorded raw data are pre-filtered by an adaptive background estimation to extract sequences with activity, which are then processed by a deep detection network, i.e., Yolov5. This provides the location and type of organisms detected in each video frame of both cameras, which are used to calculate stereo correspondences following a basic matching scheme. In a subsequent step, the size and distance of the depicted organisms are approximated using the corner coordinates of the matched bounding boxes. The Yolov5 model employed in this study was trained on a novel dataset comprising 73,144 images and 92,899 bounding box annotations for 10 categories of marine animals. The model achieved a mean detection accuracy of 92.4%, a mean average precision (mAP) of 94.8%, and an F1 score of 93%.
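Approximating size and distance from matched bounding boxes can be sketched as follows, using box-center disparity in rectified images. The focal length, baseline, and box coordinates are hypothetical, and the paper works from corner coordinates rather than centers:

```python
def object_size_and_distance(box_left, box_right, focal_px, baseline_m):
    """Approximate distance (m) and width (m) of a detected animal from
    matched bounding boxes (x_min, y_min, x_max, y_max) in rectified
    left/right images. A simplified version of the corner-coordinate
    approximation described in the abstract; all numbers illustrative."""
    cxl = (box_left[0] + box_left[2]) / 2
    cxr = (box_right[0] + box_right[2]) / 2
    disparity = cxl - cxr                      # px, rectified pair
    z = focal_px * baseline_m / disparity      # stereo triangulation
    width_px = box_left[2] - box_left[0]
    return z, width_px * z / focal_px          # back-project pixel width

# f = 1000 px, b = 0.2 m, 50 px disparity -> 4 m away; 100 px box -> 0.4 m
z, w = object_size_and_distance((400, 300, 500, 350),
                                (350, 300, 450, 350), 1000.0, 0.2)
print(z, w)  # -> 4.0 0.4
```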


Subjects
Aquatic Organisms , Deep Learning , Animals , Environment , Fishes , Bays
17.
Neural Netw ; 162: 502-515, 2023 May.
Article in English | MEDLINE | ID: mdl-36972650

ABSTRACT

Multi-view stereo reconstruction aims to construct 3D scenes from multiple 2D images. In recent years, learning-based multi-view stereo methods have achieved significant results in depth estimation for multi-view stereo reconstruction. However, the currently popular multi-stage processing methods cannot satisfactorily solve the efficiency problem owing to their use of 3D convolution, which still involves significant amounts of computation. Therefore, to further balance efficiency and generalization performance, this study proposes multi-scale iterative probability estimation with refinement, a highly efficient method for multi-view stereo reconstruction. It comprises three main modules: 1) a high-precision probability estimator, dilated-LSTM, that encodes the pixel probability distribution of depth in the hidden state; 2) an efficient interactive multi-scale update module that fully integrates multi-scale information and improves parallelism by exchanging information between adjacent scales; and 3) a Pi-error Refinement module that converts the depth error between views into a grayscale error map and refines the edges of objects in the depth map. Simultaneously, we introduce a large amount of high-frequency information to ensure the accuracy of the refined edges. Among the most efficient methods (e.g., in runtime and memory), the proposed method achieves the best generalization on the Tanks & Temples benchmark. Additionally, the performance of Miper-MVS is highly competitive on the DTU benchmark. Our code is available at https://github.com/zhz120/Miper-MVS.


Subjects
Benchmarking , Generalization, Psychological , Learning , Probability
18.
Sensors (Basel) ; 23(3)2023 Jan 18.
Article in English | MEDLINE | ID: mdl-36772173

ABSTRACT

In this work, a new method for aerial robot remote sensing using stereo vision is proposed. A variable baseline and flexible configuration stereo setup is achieved by separating the left camera and right camera on two separate quadrotor aerial robots. Monocular cameras, one on each aerial robot, are used as a stereo pair, allowing independent adjustment of the pose of the stereo pair. In contrast to conventional stereo vision where two cameras are fixed, having a flexible configuration system allows a large degree of independence in changing the configuration in accordance with various kinds of applications. Larger baselines can be used for stereo vision of farther away targets while using a vertical stereo configuration in tasks where there would be a loss of horizontal overlap caused by a lack of suitable horizontal configuration. Additionally, a method for the practical use of variable baseline stereo vision is introduced, combining multiple point clouds from multiple stereo baselines. Issues from using an inappropriate baseline, such as estimation error induced by insufficient baseline, and occlusions from using too large a baseline can be avoided with this solution.
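The benefit of a longer baseline for distant targets follows from the first-order stereo depth-uncertainty relation dZ ≈ Z²/(f·b)·dd. A sketch with illustrative numbers:

```python
def depth_error(z, focal_px, baseline_m, disparity_err_px=1.0):
    """First-order depth uncertainty of a stereo rig:
        dZ ~= Z^2 / (f * b) * dd
    Illustrates why the flexible two-robot rig benefits from longer
    baselines on distant targets; the numbers below are illustrative,
    not the actual rig parameters."""
    return z * z / (focal_px * baseline_m) * disparity_err_px

# At Z = 20 m with f = 1000 px: 0.5 m baseline vs 5 m baseline
print(depth_error(20, 1000, 0.5), depth_error(20, 1000, 5.0))  # -> 0.8 0.08
```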

19.
ArXiv ; 2023 Jan 11.
Article in English | MEDLINE | ID: mdl-36713236

ABSTRACT

Classical good continuation for image curves is based on 2D position and orientation. It is supported by the columnar organization of cortex, by psychophysical experiments, and by rich models of (differential) geometry. Here we extend good continuation to stereo. We introduce a neurogeometric model, in which the parametrizations involve both spatial and orientation disparities. Our model provides insight into the neurobiology, suggesting an implicit organization for neural interactions and a well-defined 3D association field. Our model sheds light on the computations underlying the correspondence problem, and illustrates how good continuation in the world generalizes good continuation in the plane.

20.
Philos Trans R Soc Lond B Biol Sci ; 378(1869): 20210455, 2023 01 30.
Article in English | MEDLINE | ID: mdl-36511406

ABSTRACT

Since Kepler and Descartes in the early-1600s, vision science has been committed to a triangulation model of stereo vision. But in the early-1800s, we realized that disparities are responsible for stereo vision. And we have spent the past 200 years trying to shoe-horn disparities back into the triangulation account. The first part of this article argues that this is a mistake, and that stereo vision is a solution to a different problem: the eradication of rivalry between the two retinal images, rather than the triangulation of objects in space. This leads to a 'minimal theory of 3D vision', where 3D vision is no longer tied to estimating the scale, shape, and direction of objects in the world. The second part of this article then asks whether the other aspects of 3D vision, which go beyond stereo vision, really operate at the same level of visual experience as stereo vision? I argue they do not. Whilst we want a theory of real-world 3D vision, the literature risks giving us a theory of picture perception instead. And I argue for a two-stage theory, where our purely internal 'minimal' 3D percept (from stereo vision) is linked to the world through cognition. This article is part of a discussion meeting issue 'New approaches to 3D vision'.


Subjects
Cognition , Vision, Ocular