Results 1 - 20 of 26
1.
Article in English | MEDLINE | ID: mdl-37527324

ABSTRACT

Canonical correlation analysis (CCA) is a correlation analysis technique that is widely used in statistics and the machine-learning community. However, the high complexity involved in the training process lays a heavy burden on the processing units and memory system, making CCA nearly impractical on large-scale data. To overcome this issue, this article develops a novel CCA method that carries out the analysis in the Fourier domain. By applying the Fourier transform to the data, the traditional eigenvector computation of CCA is converted into finding some predefined discriminative Fourier bases that can be learned with only element-wise products and sums, without complex time-consuming calculations. As the eigenvalues come from a sum of individual sample products, they can be estimated in parallel. Besides, thanks to the pattern repeatability of the data, the eigenvalues can be well estimated from partial samples. Accordingly, a progressive estimation scheme is proposed, in which the eigenvalues are estimated by feeding the data batch by batch until the ordering of the eigenvalue sequence is stable. As a result, the proposed method is extraordinarily fast and memory-efficient. Furthermore, we extend this idea to nonlinear kernel and deep models and obtain satisfactory accuracy with extremely fast training, as expected. An extensive discussion of the fast Fourier transform (FFT)-CCA is given in terms of time and memory efficiency. Experimental results on several large-scale correlation datasets, such as MNIST8M, X-RAY MICROBEAM SPEECH, and Twitter Users Data, demonstrate the superiority of the proposed algorithm over state-of-the-art (SOTA) large-scale CCA methods: it achieves almost the same accuracy while training roughly 1000 times faster. This makes the proposed models a strong practical choice for large-scale correlation datasets. The source code is available at https://github.com/Mrxuzhao/FFTCCA.
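
A minimal sketch of the batch-wise idea the abstract describes, not the paper's FFT-CCA itself (see https://github.com/Mrxuzhao/FFTCCA): classical linear CCA whose sufficient statistics (auto- and cross-covariances) are accumulated as sums of per-batch sample products before a single eigen/SVD step. All names, shapes, and the per-batch centering shortcut are assumptions.

```python
import numpy as np

def cca_progressive(batches_x, batches_y, k=2, eps=1e-6):
    """batches_x/_y: iterables of (n_i, dx)/(n_i, dy) arrays with matching rows."""
    Sxx = Syy = Sxy = None
    n = 0
    for X, Y in zip(batches_x, batches_y):
        X = X - X.mean(0); Y = Y - Y.mean(0)          # per-batch centering (approximation)
        Sxx = X.T @ X if Sxx is None else Sxx + X.T @ X
        Syy = Y.T @ Y if Syy is None else Syy + Y.T @ Y
        Sxy = X.T @ Y if Sxy is None else Sxy + X.T @ Y
        n += len(X)
    Sxx = Sxx / n + eps * np.eye(Sxx.shape[0])
    Syy = Syy / n + eps * np.eye(Syy.shape[0])
    Sxy = Sxy / n
    # Whiten the cross-covariance; its singular values are the canonical correlations.
    iLx = np.linalg.inv(np.linalg.cholesky(Sxx))
    iLy = np.linalg.inv(np.linalg.cholesky(Syy))
    U, s, Vt = np.linalg.svd(iLx @ Sxy @ iLy.T)
    return s[:k], iLx.T @ U[:, :k], iLy.T @ Vt[:k].T   # correlations, projection matrices

rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 2))                          # shared latent signal
X = np.hstack([Z, rng.normal(size=(1000, 3))])
Y = np.hstack([Z @ rng.normal(size=(2, 2)), rng.normal(size=(1000, 4))])
rho, Wx, Wy = cca_progressive(np.array_split(X, 5), np.array_split(Y, 5))
print(rho)                                              # top canonical correlations, near 1
```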

2.
IEEE Trans Pattern Anal Mach Intell ; 45(3): 3604-3616, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35687620

ABSTRACT

To reveal the mystery behind deep neural networks (DNNs), optimization may offer a good perspective. There are already some clues showing a strong connection between DNNs and optimization problems; e.g., under a mild condition, a DNN's activation function is indeed a proximal operator. In this paper, we are committed to providing a unified optimization-induced interpretability for a special class of networks, equilibrium models, i.e., neural networks defined by fixed-point equations, which have become increasingly attractive recently. To this end, we first decompose DNNs into a new class of unit layer that is the proximal operator of an implicit convex function while keeping its output unchanged. The equilibrium model of the unit layer can then be derived; we name it Optimization Induced Equilibrium Networks (OptEq). The equilibrium point of OptEq can be theoretically connected to the solution of a convex optimization problem with explicit objectives. Based on this, we can flexibly introduce prior properties to the equilibrium points: 1) modifying the underlying convex problem explicitly so as to change the architecture of OptEq; and 2) merging the information into the fixed-point iteration, which guarantees that the desired equilibrium point is chosen when the fixed-point set is not a singleton. We show that OptEq outperforms previous implicit models even with fewer parameters.
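
A minimal generic equilibrium ("fixed-point") layer, not OptEq itself: the layer's output is the solution z* of z = relu(Wz + Ux + b), found by plain fixed-point iteration. Scaling W to a small spectral norm makes the map a contraction so the iteration converges; all names and sizes are illustrative assumptions.

```python
import numpy as np

def equilibrium_layer(x, W, U, b, tol=1e-8, max_iter=500):
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = np.maximum(W @ z + U @ x + b, 0.0)   # relu is itself a proximal operator
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
d_in, d_hid = 4, 8
W = rng.normal(size=(d_hid, d_hid))
W *= 0.5 / np.linalg.norm(W, 2)                       # enforce a contraction
U = rng.normal(size=(d_hid, d_in))
b = rng.normal(size=d_hid)
x = rng.normal(size=d_in)
z_star = equilibrium_layer(x, W, U, b)
print(np.allclose(z_star, np.maximum(W @ z_star + U @ x + b, 0)))  # True: a fixed point
```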

3.
IEEE Trans Image Process ; 32: 13-28, 2023.
Article in English | MEDLINE | ID: mdl-36459602

ABSTRACT

Human action recognition (HAR) is one of the most important tasks in video analysis. Since video clips distributed on networks are usually untrimmed, a given untrimmed video must be accurately segmented into a set of action segments for HAR. As an unsupervised temporal segmentation technology, subspace clustering learns codes from each video to construct an affinity graph and then cuts the affinity graph to cluster the video into a set of action segments. However, most existing subspace clustering schemes not only ignore the sequential information of frames in code learning but also neglect the negative effects of noise when cutting the affinity graph, which leads to inferior performance. To address these issues, we propose a sequential order-aware coding-based robust subspace clustering (SOAC-RSC) scheme for HAR. By feeding the motion features of video frames into multi-layer neural networks, two expressive code matrices are learned in a sequential order-aware manner from unconstrained and constrained videos, respectively, to construct the corresponding affinity graphs. Then, taking the existence of noise into account, a simple yet robust cutting algorithm is proposed to cut the constructed affinity graphs and accurately obtain the action segments for HAR. Extensive experiments demonstrate that the proposed SOAC-RSC scheme achieves state-of-the-art performance on the Keck Gesture and Weizmann datasets and competitive performance on six other public datasets, such as UCF101 and URADL, compared with recent related approaches.
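
A minimal sketch of the final stage shared by affinity-graph pipelines like the one above: given a symmetric, non-negative affinity graph built from learned codes, cut it with ordinary spectral clustering. This is generic normalized-cut clustering, not the paper's robust cutting algorithm; the toy affinity and all parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_cut(A, k):
    d = A.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_sym = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt       # normalized Laplacian
    _, vecs = np.linalg.eigh(L_sym)
    V = vecs[:, :k]                                             # k smallest eigenvectors
    V = V / np.maximum(np.linalg.norm(V, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(V)

# Toy affinity: two blocks of "frames" strongly connected within themselves.
A = np.block([[np.full((5, 5), 0.9), np.full((5, 5), 0.05)],
              [np.full((5, 5), 0.05), np.full((5, 5), 0.9)]])
np.fill_diagonal(A, 0)
print(spectral_cut(A, 2))   # e.g. [0 0 0 0 0 1 1 1 1 1]
```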

4.
Article in English | MEDLINE | ID: mdl-37015390

ABSTRACT

The ability to capture joint connections in complicated motion is essential for skeleton-based action recognition. However, earlier approaches may not be able to fully explore this connection in either the spatial or temporal dimension due to fixed or single-level topological structures and insufficient temporal modeling. In this paper, we propose a novel multilevel spatial-temporal excited graph network (ML-STGNet) to address the above problems. In the spatial configuration, we decouple the learning of the human skeleton into general and individual graphs by designing a multilevel graph convolution (ML-GCN) network and a spatial data-driven excitation (SDE) module, respectively. ML-GCN leverages joint-level, part-level, and body-level graphs to comprehensively model the hierarchical relations of a human body. Based on this, SDE is further introduced to handle the diverse joint relations of different samples in a data-dependent way. This decoupling approach not only increases the flexibility of the model for graph construction but also enables the generality to adapt to various data samples. In the temporal configuration, we apply the concept of temporal difference to the human skeleton and design an efficient temporal motion excitation (TME) module to highlight the motion-sensitive features. Furthermore, a simplified multiscale temporal convolution (MS-TCN) network is introduced to enrich the expression ability of temporal features. Extensive experiments on the four popular datasets NTU-RGB+D, NTU-RGB+D 120, Kinetics Skeleton 400, and Toyota Smarthome demonstrate that ML-STGNet gains considerable improvements over the existing state of the art.
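
A minimal sketch of two building blocks the abstract describes, not ML-STGNet itself: (1) a joint-level graph convolution over the skeleton, X' = A_hat X W, with a symmetrically normalized adjacency; (2) a temporal-difference feature, the frame-to-frame difference that motion-excitation modules emphasize. The toy skeleton and all sizes are assumptions.

```python
import numpy as np

def normalize_adjacency(A):
    A_hat = A + np.eye(len(A))                 # add self-loops
    d = A_hat.sum(1)
    D = np.diag(1.0 / np.sqrt(d))
    return D @ A_hat @ D

def graph_conv(X, A_hat, W):
    """X: (num_joints, in_ch), W: (in_ch, out_ch)."""
    return np.maximum(A_hat @ X @ W, 0.0)      # ReLU activation

def temporal_difference(X_seq):
    """X_seq: (T, num_joints, ch) -> frame-to-frame differences, zero-padded at t=0."""
    diff = X_seq[1:] - X_seq[:-1]
    return np.concatenate([np.zeros_like(X_seq[:1]), diff], axis=0)

rng = np.random.default_rng(0)
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (1, 3), (3, 4)]:  # tiny 5-joint skeleton
    A[i, j] = A[j, i] = 1.0
X_seq = rng.normal(size=(10, 5, 3))            # 10 frames, 5 joints, 3-D coordinates
W = rng.normal(size=(3, 8))
A_hat = normalize_adjacency(A)
out = graph_conv(X_seq[0], A_hat, W)           # per-frame spatial features, (5, 8)
motion = temporal_difference(X_seq)            # motion-sensitive temporal features
print(out.shape, motion.shape)
```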

5.
IEEE Trans Neural Netw Learn Syst ; 32(3): 947-961, 2021 Mar.
Article in English | MEDLINE | ID: mdl-32310782

ABSTRACT

The projective dictionary pair learning (DPL) model jointly seeks a synthesis dictionary and an analysis dictionary by extracting the block-diagonal coefficients with an incoherence-constrained analysis dictionary. However, DPL fails to discover the underlying subspaces and salient features at the same time, and it cannot encode the neighborhood information of the embedded coding coefficients, let alone adaptively. In addition, although the data can be well reconstructed via the minimization of the reconstruction error, useful discriminative salient feature information may be lost and incorporated into the noise term. In this article, we propose a novel self-expressive adaptive locality-preserving framework: twin-incoherent self-expressive latent DPL (SLatDPL). To capture the salient features of the samples, SLatDPL minimizes a latent reconstruction error by integrating coefficient learning and salient feature extraction into a unified model, which can also be used to simultaneously discover the underlying subspaces and salient features. To make the coefficients block diagonal and ensure that the salient features are discriminative, SLatDPL regularizes them by imposing a twin-incoherence constraint. Moreover, SLatDPL utilizes a self-expressive adaptive weighting strategy that uses normalized block-diagonal coefficients to preserve the locality of the codes and salient features. SLatDPL can use the class-specific reconstruction residual to handle new data directly. Extensive simulations on several public databases demonstrate the satisfactory performance of SLatDPL compared with related methods.
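
A heavily simplified, unsupervised sketch of the dictionary-pair idea behind DPL and SLatDPL: learn a synthesis dictionary D and an analysis dictionary P by alternating closed-form updates of the relaxed objective min ||X - DA||^2 + tau ||A - PX||^2 + lam ||P||^2. The class-wise block-diagonal, twin-incoherence, and locality terms of the paper are omitted; all parameter values are assumptions.

```python
import numpy as np

def dpl_sketch(X, n_atoms=10, tau=1.0, lam=1e-3, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = rng.normal(size=(d, n_atoms))
    P = rng.normal(size=(n_atoms, d)) * 0.01
    I_a = np.eye(n_atoms)
    for _ in range(n_iter):
        A = np.linalg.solve(D.T @ D + tau * I_a, D.T @ X + tau * P @ X)   # codes
        D = X @ A.T @ np.linalg.inv(A @ A.T + 1e-8 * I_a)                 # synthesis dictionary
        P = A @ X.T @ np.linalg.inv(X @ X.T + (lam / tau) * np.eye(d))    # analysis dictionary
    return D, P

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 200))
D, P = dpl_sketch(X)
print(np.linalg.norm(X - D @ (P @ X)) / np.linalg.norm(X))   # relative reconstruction error
```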

6.
IEEE Trans Pattern Anal Mach Intell ; 43(2): 549-566, 2021 02.
Article in English | MEDLINE | ID: mdl-31478840

ABSTRACT

In some significant applications, such as data forecasting, the locations of missing entries cannot obey any non-degenerate distribution, which questions the validity of the prevalent assumption that the missing data are randomly chosen according to some probabilistic model. To break through the limits of random sampling, we explore in this paper the problem of real-valued matrix completion under the setup of deterministic sampling. We propose two conditions, the isomeric condition and relative well-conditionedness, that guarantee an arbitrary matrix to be recoverable from a sampling of its entries. It is provable that the proposed conditions are weaker than the assumption of uniform sampling and, most importantly, that the isomeric condition is necessary for the completion of any partial matrix to be identifiable. Equipped with these new tools, we prove a collection of theorems for missing data recovery as well as convex/nonconvex matrix completion. Among other things, we study in detail a Schatten quasi-norm induced method termed isomeric dictionary pursuit (IsoDP), and we show that IsoDP exhibits some distinct behaviors absent in traditional bilinear programs.
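
Not IsoDP itself: a minimal nuclear-norm "soft-impute" baseline for matrix completion, which only illustrates the setting of recovering a low-rank matrix from an arbitrary subset of observed entries. It iterates singular-value thresholding with the observed entries re-imposed; the threshold, sizes, and sampling ratio are assumptions.

```python
import numpy as np

def soft_impute(M_obs, mask, tau=1.0, n_iter=200):
    """M_obs: observed entries (zeros elsewhere); mask: boolean array of observed positions."""
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        Y = np.where(mask, M_obs, X)                 # keep observed entries fixed
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt      # shrink singular values
    return X

rng = np.random.default_rng(0)
L = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))   # rank-3 ground truth
mask = rng.random(L.shape) < 0.5                           # ~50% of entries observed
M_obs = np.where(mask, L, 0.0)
L_hat = soft_impute(M_obs, mask, tau=0.5)
print(np.linalg.norm((L_hat - L)[~mask]) / np.linalg.norm(L[~mask]))  # error on missing entries
```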

7.
Article in English | MEDLINE | ID: mdl-31944974

ABSTRACT

In this paper, we investigate robust dictionary learning (DL) to discover a hybrid salient low-rank and sparse representation in a factorized compressed space. A Joint Robust Factorization and Projective Dictionary Learning (J-RFDL) model is presented. J-RFDL aims to improve data representations by enhancing robustness to outliers and noise in the data, encoding the reconstruction error more accurately, and obtaining hybrid salient coefficients with accurate reconstruction ability. Specifically, J-RFDL performs robust representation by DL in a factorized compressed space to eliminate the negative effects of noise and outliers on the results, which also makes the DL process efficient. To make the encoding process robust to noise in the data, J-RFDL explicitly uses the sparse L2,1-norm, which can minimize the factorization and reconstruction errors jointly by forcing rows of the reconstruction errors to zero. To deliver salient coefficients with good structure that reconstruct the given data well, J-RFDL imposes joint low-rank and sparse constraints on the embedded coefficients with a synthesis dictionary. Based on the hybrid salient coefficients, we also extend J-RFDL to joint classification and propose a discriminative J-RFDL model, which can improve the discriminating ability of the learned coefficients by minimizing the classification error jointly. Extensive experiments on public datasets demonstrate that our formulations deliver superior performance over other state-of-the-art methods.
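
A small illustration of the sparse L2,1-norm mentioned above and why it zeroes whole rows of an error matrix: its proximal operator shrinks each row's Euclidean norm, so rows with little energy collapse exactly to zero. A generic sketch, not the J-RFDL solver; the toy data are assumptions.

```python
import numpy as np

def l21_norm(E):
    return np.sum(np.linalg.norm(E, axis=1))           # sum of row-wise 2-norms

def prox_l21(E, lam):
    norms = np.linalg.norm(E, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return E * scale                                    # row-wise soft shrinkage

rng = np.random.default_rng(0)
E = rng.normal(size=(6, 4)) * 0.1
E[2] += 5.0                                             # one genuinely corrupted row
shrunk = prox_l21(E, lam=0.5)
print(l21_norm(E), l21_norm(shrunk))
print(np.where(np.linalg.norm(shrunk, axis=1) == 0)[0]) # small rows driven exactly to zero
```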

8.
Neural Netw ; 117: 201-215, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31174048

ABSTRACT

Most existing low-rank and sparse representation models cannot preserve the local manifold structures of samples adaptively, or they separate locality preservation from the coding process, which may result in decreased performance. In this paper, we propose an inductive Robust Auto-weighted Low-Rank and Sparse Representation (RALSR) framework by joint feature embedding for the salient feature extraction of high-dimensional data. Technically, the RALSR model seamlessly integrates joint low-rank and sparse recovery with robust salient feature extraction. Specifically, RALSR integrates adaptive locality-preserving weighting, joint low-rank/sparse representation, and robustness-promoting representation into a unified model. For accurate similarity measurement, RALSR computes the adaptive weights by minimizing the joint reconstruction errors over the recovered clean data and salient features simultaneously, where the L1-norm is also applied to ensure that the learned weights are sparse. The joint minimization also potentially enables the weight matrix to remove noise and unfavorable features adaptively through reconstruction. The underlying projection is encoded by a joint low-rank and sparse regularization, which ensures that it is powerful for salient feature extraction. Thus, the calculated low-rank sparse features of high-dimensional data are more accurate for subsequent classification. Visual and numerical comparison results demonstrate the effectiveness of RALSR for data representation and classification.


Subjects
Image Processing, Computer-Assisted/methods; Machine Learning; Pattern Recognition, Automated/methods; Image Processing, Computer-Assisted/standards; Pattern Recognition, Automated/standards
9.
Article in English | MEDLINE | ID: mdl-31144634

ABSTRACT

Dimension reduction is widely regarded as an effective way of decreasing the computation, storage, and communication loads of data-driven intelligent systems, leading to a growing demand for statistical methods that allow analysis (e.g., clustering) of compressed data. In this paper, we therefore study a novel problem called compressive robust subspace clustering, which performs robust subspace clustering on compressed data generated by projecting the original high-dimensional data onto a lower-dimensional subspace chosen at random. Given only the compressed data and the sensing matrix, the proposed method, row space pursuit (RSP), recovers the authentic row space, which gives correct clustering results under certain conditions. Extensive experiments show that RSP is distinctly better than the competing methods in terms of both clustering accuracy and computational efficiency.
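
A sketch of only the compression step described above, not the RSP recovery itself: project high-dimensional data through a random Gaussian sensing matrix and check that pairwise geometry is roughly preserved, which is what makes clustering the compressed data plausible. Dimensions and the toy subspace data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 200, 2000, 100                          # samples, ambient dim, compressed dim
basis = rng.normal(size=(d, 5))
X = basis @ rng.normal(size=(5, n))               # columns: data lying in a 5-dim subspace
Phi = rng.normal(size=(m, d)) / np.sqrt(m)        # random sensing matrix
Y = Phi @ X                                       # compressed data handed to clustering

def pairwise_dists(Z):
    G = Z.T @ Z
    sq = np.diag(G)
    return np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * G, 0))

D_orig, D_comp = pairwise_dists(X), pairwise_dists(Y)
idx = np.triu_indices(n, 1)
ratio = D_comp[idx] / np.maximum(D_orig[idx], 1e-12)
print(ratio.mean(), ratio.std())                  # close to 1 with small spread
```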

10.
IEEE Trans Neural Netw Learn Syst ; 29(6): 2441-2449, 2018 06.
Article in English | MEDLINE | ID: mdl-28489554

ABSTRACT

Hashing is emerging as a powerful tool for building highly efficient indices in large-scale search systems. In this paper, we study spectral hashing (SH), a classical method of unsupervised hashing. In general, SH solves for the hash codes by minimizing an objective function that tries to preserve the similarity structure of the given data. Although computationally simple, SH very often performs unsatisfactorily and lags distinctly behind state-of-the-art methods. We observe that the inferior performance of SH is mainly due to its imperfect formulation; that is, the optimization of the minimization problem in SH cannot actually ensure that the similarity structure of the high-dimensional data is really preserved in the low-dimensional hash code space. In this paper, we therefore introduce reversed SH (ReSH), which is SH with its input and output interchanged. Unlike SH, which estimates the similarity structure from the given high-dimensional data, ReSH defines the similarities between data points according to the unknown low-dimensional hash codes. Equipped with such a reversal mechanism, ReSH can seamlessly overcome the drawback of SH. More precisely, the minimization problem in ReSH can be optimized if and only if similar data points are mapped to adjacent hash codes and, most importantly, dissimilar data points are considerably separated from each other in the code space. Finally, we solve the minimization problem in ReSH with multilayer neural networks and obtain state-of-the-art retrieval results on three benchmark data sets.
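
A minimal sketch of classical spectral hashing, the baseline that ReSH reverses, not ReSH itself: build a similarity graph on the data, take the smallest non-trivial eigenvectors of the graph Laplacian as real-valued codes, and threshold them to bits. Kernel width, code length, and the toy clusters are assumptions.

```python
import numpy as np

def spectral_hash_bits(X, n_bits=4, sigma=1.0):
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T            # squared distances
    W = np.exp(-np.maximum(D2, 0) / (2 * sigma**2))          # similarity graph
    L = np.diag(W.sum(1)) - W                                 # (unnormalized) Laplacian
    _, vecs = np.linalg.eigh(L)
    codes = vecs[:, 1:n_bits + 1]                              # skip the constant eigenvector
    return (codes > 0).astype(int)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, size=(10, 5)), rng.normal(3, 0.3, size=(10, 5))])
bits = spectral_hash_bits(X)
print(bits[:3])    # nearby points tend to share hash bits
print(bits[-3:])
```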

11.
IEEE Trans Image Process ; 27(1): 477-489, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29053462

ABSTRACT

While current block-diagonal-constrained subspace clustering methods operate explicitly in the original data space, in practice it is often more desirable to embed the block-diagonal prior into a reproducing kernel Hilbert feature space via kernelization techniques, as the underlying data structure in reality is usually nonlinear. However, it is still unknown how to carry out the embedding and kernelization in models with block-diagonal constraints. In this paper, we take a step in this direction. First, we establish a novel model termed implicit block diagonal low-rank representation (IBDLR) by incorporating the implicit feature representation and block-diagonal prior into the prevalent low-rank representation method. Second, and most importantly, we show that the IBDLR model can be kernelized by making use of a smoothed dual representation and the specifics of a proximal gradient-based optimization algorithm. Finally, we provide some theoretical analysis of the convergence of our optimization algorithm. Comprehensive experiments on synthetic and real-world data sets demonstrate the superiority of IBDLR over state-of-the-art methods.

12.
Appl Opt ; 56(22): 6079-6086, 2017 Aug 01.
Article in English | MEDLINE | ID: mdl-29047799

ABSTRACT

Limitations of beam steering in Risley prisms induced by total internal reflection are investigated for the four typical configurations. The incident angles at the exit surfaces of the double prisms are calculated by nonparaxial ray tracing and compared with the critical angle. On this basis, the limitations on the opening angle, the relative orientation of the prisms, and the ray deviation power of the system are derived. It is shown that the ray deviation power reaches its extreme value when the opening angles increase to a certain limit value for a given prism material. As the opening angles exceed the limit value, the prisms' relative orientation becomes limited. With increasing refractive index, the limit value of the opening angles decreases, while the extreme value of the deviation power increases. In comparison with the 21-12 and 12-12 configurations, the 21-21 and 12-21 configurations have a larger limit value of the opening angles and also a larger extreme value (90°) of deviation power, so they leave a wider margin for the design of a wide-angle beam-steering system. This research can provide guidance on prism material and geometry choices in the design of a wide-angle Risley-prism-based beam-steering system.
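
A back-of-the-envelope check of the total-internal-reflection limit discussed above, not the paper's full nonparaxial analysis: a ray inside a prism of index n escapes an exit surface only if its angle of incidence stays below the critical angle arcsin(1/n). The sample angles and indices are purely illustrative.

```python
import numpy as np

def critical_angle_deg(n):
    return np.degrees(np.arcsin(1.0 / n))

for n in (1.5, 1.7, 2.0, 4.0):        # higher index -> smaller critical angle
    print(f"n = {n:.1f}: critical angle = {critical_angle_deg(n):.2f} deg")

n = 1.7
incidence_at_exit = 40.0              # hypothetical incidence angle at the exit surface (deg)
print("ray escapes" if incidence_at_exit < critical_angle_deg(n)
      else "total internal reflection")
```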

13.
IEEE Trans Pattern Anal Mach Intell ; 39(1): 47-60, 2017 01.
Article in English | MEDLINE | ID: mdl-26978552

ABSTRACT

This paper studies the problem of recovering the authentic samples that lie on a union of multiple subspaces from their corrupted observations. Given the high-dimensional and massive nature of today's data, it is arguable that the target matrix (i.e., the authentic sample matrix) to be recovered is often low-rank. In this case, the recently established Robust Principal Component Analysis (RPCA) method already provides a convenient way to solve the problem of recovering mixture data. In general, however, RPCA is not good enough, because the incoherence condition assumed by RPCA is not consistent with the mixture structure of multiple subspaces: as the number of subspaces grows, the row-coherence of the data keeps increasing and, accordingly, RPCA degrades. To overcome the challenges arising from mixture data, we suggest considering low-rank representation (LRR) in this paper. We elucidate that LRR can handle mixture data well, as long as its dictionary is configured appropriately. More precisely, we mathematically prove that LRR can weaken the dependence on the row-coherence, provided that the dictionary is well-conditioned and its rank is not too high. In particular, if the dictionary itself is sufficiently low-rank, then the dependence on the row-coherence can be completely removed. These results provide some elementary principles for dictionary learning and naturally lead to a practical algorithm for recovering mixture data. Our experiments on randomly generated matrices and real motion sequences show promising results.
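
A sketch of the noise-free special case of LRR with the data itself as dictionary, which is a standard result rather than the paper's corrupted-data algorithm: min ||Z||_* s.t. X = XZ has the closed-form minimizer Z* = V V^T (the shape-interaction matrix), where X = U S V^T is the skinny SVD. For samples drawn from independent subspaces Z* is block-diagonal, which is what makes LRR useful for segmenting mixture data. Sizes and the toy subspaces are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 20))    # 20 samples in subspace 1
X2 = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 20))    # 20 samples in subspace 2
X = np.hstack([X1, X2])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = np.sum(s > 1e-8)                                        # numerical rank
V = Vt[:r].T
Z = V @ V.T                                                 # minimum nuclear-norm solution
print(np.allclose(X, X @ Z))                                # True: exact self-expression
print(np.abs(Z[:20, 20:]).max())                            # ~0: block-diagonal structure
```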

14.
Appl Opt ; 55(19): 5149-57, 2016 Jul 01.
Article in English | MEDLINE | ID: mdl-27409203

ABSTRACT

Laser beam scanning can be realized using two independently rotating, inline polarization gratings, termed Risley gratings, in a fashion similar to Risley prisms. The analytical formulas for the pointing position, as well as their inverse solutions, are described. On this basis, the beam scanning is investigated and the performance of scanning imaging is evaluated. It is shown that the scanning function in 1D scanning evolves from a sinusoidal to a triangular scan, and the duty cycle increases rapidly, as the ratio of grating period to wavelength is reduced toward 2. The scan pattern in 2D scanning is determined by the ratio k of the gratings' rotation frequencies. In imaging applications, as k tends toward 1 or -1, the scan pattern becomes dense and tends to be spiral-like or rose-like, respectively, which is desirable for enhancing spatial resolution. There is a direct trade-off between spatial resolution and frame rate. Spiral and rose scanning enable multiresolution imaging, providing a preview of the scanned area in a fraction of the overall scan time, which is extremely useful for fast, real-time imaging applications.
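
An illustrative small-deflection sketch, not the paper's exact pointing formulas: model each rotating grating as a fixed-magnitude transverse deflection vector and add the two, so the scan pattern depends on the rotation-frequency ratio k. As described above, k near 1 gives a spiral-like pattern and k near -1 a rose-like one; amplitudes, ratios, and sampling are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def scan_pattern(k, rho1=1.0, rho2=1.0, n=4000, turns=40):
    t = np.linspace(0, 2 * np.pi * turns, n)
    x = rho1 * np.cos(t) + rho2 * np.cos(k * t)
    y = rho1 * np.sin(t) + rho2 * np.sin(k * t)
    return x, y

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, k in zip(axes, (0.95, -0.95)):          # frequency ratios near +1 and -1
    ax.plot(*scan_pattern(k), linewidth=0.3)
    ax.set_title(f"k = {k}")
    ax.set_aspect("equal")
plt.show()
```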

15.
IEEE Trans Pattern Anal Mach Intell ; 38(3): 417-30, 2016 Mar.
Article in English | MEDLINE | ID: mdl-27046488

ABSTRACT

The recently proposed low-rank representation (LRR) method has been empirically shown to be useful in various tasks such as motion segmentation, image segmentation, saliency detection, and face recognition. While potentially powerful, LRR depends heavily on the configuration of its key parameter, λ. In realistic environments where prior knowledge about the data is lacking, however, it is still unknown how to choose λ in a suitable way. Moreover, there is a lack of rigorous analysis of the conditions under which the method succeeds, so the significance of LRR remains somewhat vague. In this paper we therefore establish a theoretical analysis for LRR, striving to figure out under which conditions LRR can be successful and to derive a moderately good estimate of the key parameter λ. Simulations on synthetic data points and experiments on real motion sequences verify our claims.

16.
Appl Opt ; 53(25): 5775-83, 2014 Sep 01.
Article in English | MEDLINE | ID: mdl-25321377

ABSTRACT

Based on the vector form of Snell's law, ray tracing is performed to quantify the pointing errors of Risley-prism-based beam-steering systems induced by component errors, prism orientation errors, and assembly errors. Case examples are given to elucidate the pointing-error distributions in the field of regard and to evaluate the allowances of the error sources for a given pointing accuracy. It is found that assembly errors of the second prism result in larger pointing errors than those of the first prism. The pointing errors induced by prism tilt depend on the tilt direction. The allowances of bearing tilt and prism tilt are almost identical if the same pointing accuracy is required. These conclusions provide a theoretical foundation for practical work.
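
A minimal sketch of the vector form of Snell's law used for the nonparaxial ray tracing described above (the standard refraction formula, not the paper's full error model): d is the unit incident ray direction, nrm the unit surface normal pointing toward the incident side; the function returns the refracted unit direction, or None on total internal reflection. The sample ray and indices are assumptions.

```python
import numpy as np

def refract(d, nrm, n1, n2):
    d = d / np.linalg.norm(d)
    nrm = nrm / np.linalg.norm(nrm)
    eta = n1 / n2
    cos_i = -np.dot(nrm, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                          # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * nrm

d = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])   # 30 deg incidence
nrm = np.array([0.0, 0.0, 1.0])                                        # surface normal (+z)
t = refract(d, nrm, 1.0, 1.5)
print(np.degrees(np.arcsin(np.linalg.norm(np.cross(nrm, t)))))         # ~19.47 deg, per Snell's law
```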

17.
IEEE Trans Image Process ; 23(12): 5047-56, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25312928

ABSTRACT

Blind deconvolution aims to recover a sharp version of a given blurry image or signal when the blur kernel is unknown. Because this problem is ill-conditioned in nature, effectual criteria pertaining to both the sharp image and the blur kernel are required to constrain the space of candidate solutions. While the problem has been studied extensively, it is still unclear how to regularize the blur kernel in an elegant, effective fashion. In this paper, we show that the blurry image itself actually encodes rich information about the blur kernel, and such information can indeed be found by exploring and utilizing a well-known phenomenon: sharp images are often high-pass, whereas blurry images are usually low-pass. More precisely, we show that the blur kernel can be retrieved by analyzing and comparing how the spectrum of an image, viewed as a convolution operator, changes before and after blurring. Subsequently, we establish a convex kernel regularizer that depends only on the given blurry image. Interestingly, the minimizer of this regularizer is guaranteed to give a good estimate of the desired blur kernel if the original image is sharp enough. By combining this regularizer with prevalent nonblind deconvolution techniques, we show how the deblurring results can be significantly improved, through simulations on synthetic images and experiments on realistic images.
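
A tiny numerical illustration of the observation the method builds on, not the paper's convex kernel regularizer: blurring is a convolution, so by the convolution theorem the blurry signal's spectrum is the sharp spectrum multiplied by the kernel's spectrum, which suppresses high frequencies. A 1-D example with an assumed Gaussian kernel and an arbitrary energy cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
sharp = np.zeros(n)
sharp[rng.integers(0, n, 12)] = rng.normal(size=12)                        # spiky "sharp" signal
x = np.arange(n)
kernel = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)
kernel /= kernel.sum()                                                      # Gaussian blur kernel
blurry = np.real(np.fft.ifft(np.fft.fft(sharp) * np.fft.fft(np.fft.ifftshift(kernel))))

def high_freq_energy(sig, cutoff=0.25):
    spec = np.abs(np.fft.rfft(sig)) ** 2
    k = int(cutoff * len(spec))
    return spec[k:].sum() / spec.sum()

print(high_freq_energy(sharp))    # a sizable fraction of energy above the cutoff
print(high_freq_energy(blurry))   # sharply reduced after blurring
```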

18.
IEEE Trans Image Process ; 23(11): 4786-98, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25248181

ABSTRACT

In this paper, we utilize structured learning to simultaneously address two intertwined problems: 1) human pose estimation (HPE) and 2) garment attribute classification (GAC), which are valuable for a variety of computer vision and multimedia applications. Unlike previous works that usually handle the two problems separately, our approach aims to produce an optimal joint estimation for both HPE and GAC via a unified inference procedure. To this end, we adopt a preprocessing step to detect potential human parts from each image (i.e., a set of candidates), which gives us a manageable input space. In this way, the simultaneous inference of HPE and GAC is converted into a structured learning problem, where the inputs are the collections of candidate ensembles, the outputs are the joint labels of human parts and garment attributes, and the joint feature representation involves various cues such as pose-specific features, garment-specific features, and cross-task features that encode correlations between human parts and garment attributes. Furthermore, we explore the strong edge evidence around the potential human parts so as to derive more powerful representations for oriented human parts. Such evidence can be seamlessly integrated into our structured learning model as a kind of energy function, and learning can be performed by the standard structured support vector machine algorithm. However, the joint structure of the two problems is a cyclic graph, which hinders efficient inference. To resolve this issue, we instead compute approximate optima using an iterative procedure in which, at each iteration, the variables of one problem are fixed. In this way, satisfactory solutions can be efficiently computed by dynamic programming. Experimental results on two benchmark data sets show the state-of-the-art performance of our approach.


Subjects
Clothing; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Pattern Recognition, Automated/methods; Posture/physiology; Whole Body Imaging/methods; Algorithms; Artificial Intelligence; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
19.
IEEE Trans Image Process ; 22(11): 4380-93, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23893719

ABSTRACT

We address the following subspace learning problem: given a set of labeled, corrupted training data points, how can we learn the underlying subspace, which contains three components: an intrinsic subspace that captures certain desired properties of a data set, a penalty subspace that fits the undesired properties of the data, and an error container that models the gross corruptions possibly existing in the data? Given a set of data points, these three components can be learned by solving a nuclear norm regularized optimization problem, which is convex and can be efficiently solved in polynomial time. Using the method as a tool, we propose a new discriminant analysis (i.e., supervised subspace learning) algorithm called Corruptions Tolerant Discriminant Analysis (CTDA), in which the intrinsic subspace is used to capture the features with high within-class similarity, the penalty subspace takes the role of modeling the undesired features with high between-class similarity, and the error container takes charge of fitting the possible corruptions in the data. We show that CTDA can well handle the gross corruptions possibly existing in the training data, whereas previous linear discriminant analysis algorithms arguably fail in such a setting. Extensive experiments conducted on two benchmark human face data sets and one object recognition data set show that CTDA outperforms the related algorithms.


Subjects
Algorithms; Artifacts; Artificial Intelligence; Biometry/methods; Face/anatomy & histology; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Humans; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
20.
Appl Opt ; 52(12): 2849-57, 2013 Apr 20.
Article in English | MEDLINE | ID: mdl-23669697

ABSTRACT

Two exact inverse solutions for Risley prisms have been given by previous authors; based on these, we calculate the gradients of the scan field, which open a way to investigate the nonlinear relationship between the slewing rate of the beam and the required angular velocities of the two wedge prisms in a Risley-prism-based beam-steering system for target tracking. The limited regions and the singularity points at the center and the edge of the field of regard are discussed. It is found that the maximum required rotational velocities of the two prisms for target tracking are nearly the same and depend on the altitude angle. The central limited region is almost independent of the prism parameters. The control singularity on paths crossing the center can be avoided by switching between the two solutions.
