Results 1 - 20 of 26
1.
Article in English | MEDLINE | ID: mdl-36129871

ABSTRACT

Detecting forged handwriting is important in a wide variety of machine learning applications, and it is challenging when the input images are degraded by noise and blur. This article presents a new model based on conformable moments (CMs) and deep ensemble neural networks (DENNs) for forged handwriting detection in noisy and blurry environments. Because CMs involve fractional calculus, they can model nonlinearities and geometrical moments while preserving spatial relationships between pixels, so fine details in images are retained. This motivates us to introduce a DENN classifier, which integrates steganographic kernels and spatial features to classify input images as normal (original, clean images), altered (handwriting changed through copy-paste and insertion operations), noisy (noise added to the original image), blurred (blur added to the original image), altered-noise (noise added to the altered image), or altered-blurred (blur added to the altered image). To evaluate our model, we use a newly introduced dataset comprising handwritten words altered at the character level, as well as several standard datasets, namely ACPR 2019, ICPR 2018-FDC, and the IMEI dataset. The first two of these include handwriting samples altered at the character and word levels, and the third comprises forged International Mobile Equipment Identity (IMEI) numbers. Experimental results demonstrate that the proposed method outperforms existing methods in terms of classification rate.
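Since CMs generalize classical geometric moments via fractional calculus, a minimal sketch of the ordinary geometric moments they build on may help; the conformable (fractional-order) extension and the DENN itself are beyond this illustration, and the function names here are our own, not the paper's.

```python
import numpy as np

def geometric_moment(img, p, q):
    """Ordinary geometric moment m_pq = sum_x sum_y x^p * y^q * I(x, y)."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]          # y indexes rows, x indexes columns
    return float(np.sum((x ** p) * (y ** q) * img))

def centroid(img):
    """Image centroid from low-order moments; illustrates how moments
    encode spatial relationships between pixel intensities."""
    m00 = geometric_moment(img, 0, 0)
    return geometric_moment(img, 1, 0) / m00, geometric_moment(img, 0, 1) / m00
```

A single bright pixel yields a centroid at that pixel's (x, y) position, which is a quick sanity check of the index convention.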

2.
Nucleic Acids Res; 49(18): e106, 2021 Oct 11.
Article in English | MEDLINE | ID: mdl-34291293

ABSTRACT

Raw sequencing reads of miRNAs contain machine-made substitution errors, and even insertions and deletions (indels). Although the error rate can be as low as 0.1%, precise rectification of these errors is critically important, because isoform variation analyses at single-base resolution, such as novel isomiR discovery, understanding of editing events, differential expression analysis, or tissue-specific isoform identification, are very sensitive to the base positions and copy counts of the reads. Existing error correction methods do not work for miRNA sequencing data, because miRNAs' length and per-read coverage properties are distinct from those of DNA or mRNA sequencing reads. We present a novel lattice structure combining k-mers, (k-1)-mers, and (k+1)-mers to address this problem. The method is particularly effective for the correction of indel errors. Extensive tests on datasets with known ground-truth errors demonstrate that the method removes almost all of the errors, without introducing any new error, improving the data quality from one error in every 50 reads to one error in every 1300 reads. Studies on experimental miRNA sequencing datasets show that the errors are often rectified at the 5' ends and the seed regions of the reads, and that there are remarkable changes after correction in miRNA isoform abundance, volume of singleton reads, overall entropy, isomiR families, tissue-specific miRNAs, and rare-miRNA quantities.
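The lattice couples count tables at three neighbouring k values; a minimal sketch of how those tables could be built (function names and the "solid = high count" reading are our own simplifications, not the paper's implementation):

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count all k-mers across reads; 'solid' k-mers are high-count,
    while sequencing errors tend to create rare k-mers."""
    c = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            c[r[i:i + k]] += 1
    return c

def lattice(reads, k):
    """The three coupled tables the lattice combines: (k-1)-, k-, and (k+1)-mers.
    Intuitively, a deletion in a read leaves a solid (k-1)-mer that no solid
    k-mer extends, which is how indels become visible across levels."""
    return {kk: kmer_counts(reads, kk) for kk in (k - 1, k, k + 1)}
```

On a toy read set with one deletion, the correct k-mer stays dominant at every level while the indel-bearing k-mers remain singletons.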


Subjects
Computational Biology/methods , High-Throughput Nucleotide Sequencing/methods , MicroRNAs/analysis , Sequence Analysis, DNA/methods , Algorithms , Animals , Databases, Genetic , Humans , Salmon/genetics
3.
BMC Bioinformatics; 22(Suppl 6): 142, 2021 Jun 02.
Article in English | MEDLINE | ID: mdl-34078284

ABSTRACT

BACKGROUND: Genomic reads from sequencing platforms contain random errors. Global correction algorithms have been developed that aim to rectify all possible errors in the reads using generic genome-wide patterns. However, non-uniform sequencing depths hinder the global approach from conducting effective error removal. Because some genes may be under-corrected or over-corrected by the global approach, we conduct instance-based error correction for short reads of disease-associated genes or pathways. The paramount requirement is to ensure that the relevant reads, rather than the whole genome, are error-free, providing significant benefits for single-nucleotide polymorphism (SNP) or variant calling studies on the specific genes. RESULTS: To rectify possible errors in the short reads of disease-associated genes, our novel idea is to exploit local sequence features and statistics directly related to these genes. Extensive experiments were conducted in comparison with state-of-the-art methods on both simulated and real datasets of lung cancer-associated genes (including single-end and paired-end reads). The results demonstrate the superiority of our method, with the best performance on precision, recall, and gain rate, as well as on sequence assembly results (e.g., N50, contig length, and contig quality). CONCLUSION: The instance-based strategy makes it possible to explore fine-grained patterns focusing on specific genes, providing high-precision error correction and convincing gene sequence assembly. SNP case studies show that errors occurring at some traditional SNP areas can be accurately corrected, providing high precision and sensitivity for investigations of disease-causing point mutations.
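Precision, recall, and gain are the standard read-correction metrics named above; a sketch using the definitions common in the error-correction literature (TP = errors fixed, FP = new errors introduced, FN = errors missed) — the function itself is ours, not the paper's:

```python
def correction_metrics(true_positives, false_positives, false_negatives):
    """Read-correction metrics: precision, recall, and gain, where
    gain = (TP - FP) / (TP + FN) explicitly penalizes newly introduced errors."""
    tp, fp, fn = true_positives, false_positives, false_negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    gain = (tp - fp) / (tp + fn)
    return precision, recall, gain
```

Note how a corrector that fixes 90 of 100 errors but introduces 10 new ones keeps 0.9 precision and recall yet drops to 0.8 gain, which is why gain is reported alongside the other two.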


Subjects
Genome , High-Throughput Nucleotide Sequencing , Algorithms , Genomics , Sequence Analysis, DNA
4.
J Environ Manage; 288: 112377, 2021 Jun 15.
Article in English | MEDLINE | ID: mdl-33780820

ABSTRACT

Advanced householder profiling using digital water metering data analytics has been acknowledged as a core strategy for promoting water conservation, because of its ability to provide near real-time feedback to customers and instil long-term conservation behaviours. Customer profiling based on household water consumption data collected through digital water meters helps to identify the water consumption patterns and habits of customers. This study employed advanced customer profiling techniques adapted from the machine learning research domain to analyse high-resolution data collected from residential digital water meters. Data analytics techniques were applied to already disaggregated end-use water consumption data (e.g., showers and taps) to create in-depth customer profiles at various intervals (e.g., 15, 30, and 60 min). The developed user profiling approach has some learning functionality, as it can ascertain and accommodate the changing behaviours of residential customers. The developed technique was shown to be beneficial, since it identified previously unseen residential customer behaviours. Furthermore, it can identify and address novel changes in behaviour, an important feature for promoting and sustaining long-term water conservation behaviours. The research has implications for researchers in data analytics and water demand management, and also for practitioners and government policy advisors seeking to conserve valuable potable-water resources.
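The interval profiles described above can be sketched as binned daily vectors of end-use consumption; the binning function below is illustrative only (the study's actual feature construction is not specified in this abstract):

```python
import numpy as np

def interval_profile(event_minutes, event_litres, interval=30, day_minutes=1440):
    """Aggregate per-event consumption (minute-of-day, litres) into a
    fixed-interval daily profile vector, e.g. 48 bins at 30-minute resolution.
    Profiles like this can then be clustered or tracked over time."""
    n_bins = day_minutes // interval
    profile = np.zeros(n_bins)
    for minute, litres in zip(event_minutes, event_litres):
        profile[minute // interval] += litres
    return profile
```

Changing `interval` to 15 or 60 reproduces the other resolutions mentioned in the abstract.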


Subjects
Water Resources , Water , Water Supply
5.
IEEE Trans Image Process; 30: 150-162, 2021.
Article in English | MEDLINE | ID: mdl-33112745

ABSTRACT

Traditional tensor decomposition methods, e.g., two-dimensional principal component analysis and two-dimensional singular value decomposition, minimize mean squared errors and are therefore sensitive to outliers. To overcome this problem, in this paper we propose a new robust tensor decomposition method using a generalized correntropy criterion (Corr-Tensor). A Lagrange multiplier method is used to effectively optimize the generalized correntropy objective function in an iterative manner. Corr-Tensor effectively improves the robustness of tensor decomposition in the presence of outliers without introducing any extra computational cost. Experimental results demonstrate that the proposed method significantly reduces the reconstruction error in face reconstruction and improves accuracy in handwritten digit recognition and facial image clustering.
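Correntropy replaces the squared error with a bounded kernel similarity, which is why large outliers stop dominating the objective; a minimal sketch of a generalized correntropy measure (the kernel form and parameter names follow the general correntropy literature, not this paper's exact formulation):

```python
import numpy as np

def generalized_correntropy(A, B, alpha=2.0, beta=1.0):
    """Generalized correntropy similarity between two arrays, using a
    generalized Gaussian kernel exp(-|e|^alpha / beta) on elementwise errors.
    A huge outlier error saturates the kernel toward 0, so it barely moves
    the objective, unlike a squared-error term that grows without bound."""
    err = np.abs(np.asarray(A, float) - np.asarray(B, float))
    return float(np.mean(np.exp(-(err ** alpha) / beta)))
```

Identical inputs give similarity 1.0; one extreme outlier among n entries can reduce the measure by at most 1/n.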

6.
Neural Netw; 121: 441-451, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31610415

ABSTRACT

Principal component analysis and its variants are sensitive to outliers, which affects their performance and applicability in the real world, so several variants have been proposed to improve robustness. However, most of the existing methods are still sensitive to outliers and are unable to select useful features. To overcome the sensitivity of PCA to outliers, in this paper we introduce two-dimensional outliers-robust principal component analysis (ORPCA) by imposing joint constraints on the objective function. ORPCA relaxes the orthogonal constraints and penalizes the regression coefficients; thus, it selects important features and ignores features that recur across other principal components. It is well known that the squared Frobenius norm is sensitive to outliers. To overcome this issue, we devise an alternative way to derive the objective function. Experimental results on four publicly available benchmark datasets show the effectiveness of joint feature selection and better performance compared with state-of-the-art dimensionality-reduction methods.
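To see why the abstract calls the squared Frobenius norm outlier-sensitive, it helps to compare it with the L2,1 norm, a common robust alternative in the robust-PCA literature (our illustrative choice; the paper's actual reformulation may differ):

```python
import numpy as np

def frobenius_sq(E):
    """Squared Frobenius norm of an error matrix: an outlier entry enters
    quadratically, so a single corrupted sample can dominate the objective."""
    return float(np.sum(E ** 2))

def l21_norm(E):
    """L2,1 norm (sum of row-wise Euclidean norms): an outlier row contributes
    only linearly, which is what makes it a popular robust surrogate."""
    return float(np.sum(np.linalg.norm(E, axis=1)))
```

With a single error of magnitude 10 in an otherwise-zero residual, the squared Frobenius objective sees 100 while the L2,1 objective sees only 10.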


Subjects
Machine Learning/standards , Principal Component Analysis
7.
IEEE Trans Image Process; 28(12): 5963-5976, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31199259

ABSTRACT

Effectively describing and recognizing leaf shapes under arbitrary variations, particularly from a large database, remains an unsolved problem. In this research, we attempt a new strategy of describing leaf shapes by walking and measuring along a bunch of chords that pass through the shape. A novel chord bunch walks (CBW) descriptor is developed through the chord walking behavior, which integrates the shape image function over the walked chord to reflect both the contour features and the inner properties of the shape. For each contour point, the chord bunch groups multiple pairs of chords to build a hierarchical framework for a coarse-to-fine description that can effectively characterize not only the subtle differences among leaf margin patterns but also the interior part of the shape contour formed inside a self-overlapped or compound leaf. Instead of matching by optimal correspondence, a Log-Min distance that encourages one-to-one correspondences is proposed for efficient and effective CBW matching. The proposed CBW shape analysis method is invariant to rotation, scaling, translation, and mirror transforms. Five experiments, including image retrieval of compound leaves, image retrieval of naturally self-overlapped leaves, and retrieval of mixed leaves, are conducted on three large-scale datasets. The proposed method achieves large accuracy increases at low computational cost over state-of-the-art benchmarks, indicating the potential of this research direction.

8.
IEEE Trans Neural Syst Rehabil Eng; 27(6): 1117-1127, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31021801

ABSTRACT

Accurate classification of electroencephalogram (EEG) signals plays an important role in the diagnosis of different types of mental activity. One of the most important challenges associated with classification of EEG signals is designing an efficient classifier with strong generalization capability. Aiming to improve classification performance, in this paper we propose a novel multiclass support matrix machine (M-SMM) from the perspective of maximizing inter-class margins. The objective function combines a binary hinge loss that operates on C matrices with a spectral elastic net penalty as the regularization term. This regularization term, a combination of the Frobenius and nuclear norms, promotes structural sparsity and shares similar sparsity patterns across multiple predictors. It also maximizes the inter-class margin, which helps to deal with complex, high-dimensional noisy data. Extensive experimental results, supported by theoretical analysis and statistical tests, show the effectiveness of the M-SMM for classifying EEG signals associated with motor imagery in brain-computer interface applications.
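The spectral elastic net named above combines the Frobenius norm with the nuclear norm (the sum of singular values); a sketch of evaluating such a penalty for a predictor matrix — the weights alpha and beta and the use of the squared Frobenius term are our assumptions, not necessarily the paper's exact form:

```python
import numpy as np

def spectral_elastic_net(W, alpha=1.0, beta=1.0):
    """Spectral elastic net penalty: a weighted sum of the squared Frobenius
    norm and the nuclear norm of W. The nuclear norm encourages low rank
    (structural sparsity in the spectrum), while the Frobenius term keeps
    the weights small, mirroring the vector elastic net's L1 + L2 mix."""
    frob_sq = float(np.sum(W ** 2))
    nuclear = float(np.sum(np.linalg.svd(W, compute_uv=False)))
    return alpha * frob_sq + beta * nuclear
```

For a diagonal matrix the two terms are easy to check by hand, e.g. diag(3, 4) gives 25 + 7 = 32 with unit weights.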


Subjects
Electroencephalography/classification , Support Vector Machine , Algorithms , Brain-Computer Interfaces , Humans , Machine Learning , Mental Processes/physiology , Signal Processing, Computer-Assisted
9.
Bioinformatics; 34(18): 3069-3077, 2018 Sep 15.
Article in English | MEDLINE | ID: mdl-29672669

ABSTRACT

Motivation: The CRISPR/Cas9 system is a widely used genome editing tool. A prediction problem of great interest for this system is how to select optimal single-guide RNAs (sgRNAs) such that cleavage efficiency is high while the off-target effect is low. Results: This work proposes a two-step averaging method (TSAM) for regressing the cleavage efficiencies of a set of sgRNAs, by averaging the efficiency scores predicted by a boosting algorithm and those predicted by a support vector machine (SVM). We also propose profiled Markov properties as novel features to capture the global characteristics of sgRNAs. These new features are combined with the outstanding features ranked by the boosting algorithm for the training of the SVM regressor. TSAM improves the mean Spearman correlation coefficients compared with the state-of-the-art performance on benchmark datasets containing thousands of human, mouse, and zebrafish sgRNAs. Our method can also be converted to make binary distinctions between efficient and inefficient sgRNAs, with performance superior to existing methods. The analysis reveals that highly efficient sgRNAs have a lower melting temperature at the middle of the spacer, cut at parts of the genome closer to the 5'-end, and contain more 'A' but fewer 'G' compared with inefficient ones. Comprehensive further analysis also demonstrates that our tool predicts an sgRNA's cutting efficiency with consistently good performance whether it is expressed from a U6 promoter in cells or from a T7 promoter in vitro. Availability and implementation: An online tool is available at http://www.aai-bioinfo.com/CRISPR/. Python and Matlab source codes are freely available at https://github.com/penn-hui/TSAM. Supplementary information: Supplementary data are available at Bioinformatics online.
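The two-step averaging itself is simple to sketch with scikit-learn stand-ins for the two regressors; the specific estimators, hyperparameters, and the equal 0.5/0.5 weighting are our assumptions (the paper additionally feeds Markov-profile and boosting-ranked features into the SVM, which is omitted here):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR

def two_step_average(X_train, y_train, X_test):
    """Two-step averaging in the spirit of TSAM: fit a boosting regressor
    and an SVM regressor on sgRNA features, then average their predicted
    cleavage-efficiency scores."""
    gb = GradientBoostingRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)
    svm = SVR(kernel="rbf").fit(X_train, y_train)
    return 0.5 * (gb.predict(X_test) + svm.predict(X_test))
```

Averaging two regressors with uncorrelated errors typically reduces variance, which is the intuition behind the two-step design.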


Subjects
Algorithms , Animals , CRISPR-Cas Systems , Humans , Mice , RNA, Guide, Kinetoplastida/genetics , Regression Analysis , Software , Support Vector Machine , Zebrafish/genetics
10.
Oncotarget; 8(45): 78901-78916, 2017 Oct 03.
Article in English | MEDLINE | ID: mdl-29108274

ABSTRACT

Disease-related protein-coding genes have been widely studied, but disease-related non-coding genes remain largely unknown. This work introduces a new vector representation for diseases and applies the newly vectorized data in a positive-unlabeled learning algorithm to predict and rank disease-related long non-coding RNA (lncRNA) genes. The representation consists of two sub-vectors. The first is composed of 45 elements characterizing the information entropies of the disease gene distribution over 45 chromosome substructures. This idea is supported by our observation that some substructures (e.g., the chromosome 6 p-arm) are highly preferred by disease-related protein-coding genes, while some (e.g., the chromosome 21 p-arm) are not favored at all. The second sub-vector is 30-dimensional, characterizing the distribution of disease-gene-enriched KEGG pathways in comparison with our manually created pathway groups. The second sub-vector complements the first to differentiate between various diseases. Our prediction method outperforms state-of-the-art methods on benchmark datasets for prioritizing disease-related lncRNA genes. The method also works well when only the sequence information of an lncRNA gene is known, or even when a given disease has no currently recognized long non-coding genes.
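The entropy elements of the first sub-vector can be sketched as Shannon entropies of disease-gene counts over chromosome substructures; exactly how the 45 elements are parameterized is not specified in this abstract, so the function below is a plain illustration with our own naming:

```python
import math

def substructure_entropy(counts):
    """Shannon entropy (bits) of a disease's gene counts over chromosome
    substructures (e.g. p/q arms). A disease whose genes pile up on one
    favored substructure scores 0; an even spread scores log2(n)."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```

This captures the abstract's observation: a strong preference for one arm (all genes on the chromosome 6 p-arm, say) yields minimal entropy, which becomes a discriminative feature.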

11.
Article in English | MEDLINE | ID: mdl-29990239

ABSTRACT

The latest sequencing technologies, such as the Pacific Biosciences (PacBio) and Oxford Nanopore machines, can generate long reads thousands of bases in length, much longer than the reads of hundreds of bases generated by Illumina machines. However, these long reads are prone to much higher error rates, for example 15%, making downstream analysis and applications very difficult. Error correction is a process to improve the quality of sequencing data. Hybrid correction strategies have recently been proposed that combine low-error-rate Illumina reads to fix sequencing errors in the noisy long reads, with good performance. In this paper, we propose a new method named Bicolor, a bi-level framework of hybrid error correction for further improving the quality of PacBio long reads. At the first level, our method uses a de Bruijn graph-based error correction idea to iteratively search paths between pairs of solid k-mers with an increasing k-mer length. At the second level, we combine the processed results obtained under different parameters from the first level. In particular, a multiple sequence alignment algorithm is used to align similar long reads, followed by a voting algorithm that determines the final base at each position of the reads. We compare the performance of Bicolor with three state-of-the-art methods on three real datasets. Results demonstrate that Bicolor always achieves the highest identity ratio. Bicolor also achieves a higher alignment ratio and a higher number of aligned reads than the current methods on two datasets. On the third dataset, our method is closely competitive with the current methods in terms of the number of aligned reads and genome coverage. The C++ source codes of our algorithm are freely available at https://github.com/yuansliu/Bicolor.
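The second-level voting step can be sketched as a position-wise majority vote over already-aligned reads; the gap handling and tie-breaking below are our simplifications of what a consensus caller does:

```python
from collections import Counter

def column_vote(aligned_reads):
    """Position-wise majority vote over already-aligned reads of equal
    length, as in the second level of Bicolor's framework; columns whose
    winning symbol is a gap ('-') are dropped from the consensus."""
    consensus = []
    for column in zip(*aligned_reads):
        base, _ = Counter(column).most_common(1)[0]
        if base != "-":
            consensus.append(base)
    return "".join(consensus)
```

A single divergent read is outvoted at each column, which is how combining runs under different first-level parameters suppresses residual errors.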

12.
IEEE Trans Image Process; 25(12): 5622-5634, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27623587

ABSTRACT

Text recognition in video/natural scene images has gained significant attention in the field of image processing for many computer vision applications, and it is much more challenging than recognition in plain-background images. In this paper, we aim to restore complete character contours in video/scene images from gray values, in contrast to conventional techniques that take edge images/binary information as inputs for text detection and recognition. We explore and utilize the strengths of zero-crossing points given by the Laplacian to identify stroke candidate pixels (SCP). For each SCP pair, we propose new symmetry features based on gradient magnitude and Fourier phase angles to identify probable stroke candidate pairs (PSCP). The same symmetry properties are proposed at the PSCP level to choose seed stroke candidate pairs (SSCP). Finally, an iterative algorithm is proposed for SSCP to restore complete character contours. Experimental results on benchmark databases, namely the ICDAR family of video and natural scene datasets, Street View Data, and the MSRA datasets, show that the proposed technique outperforms existing techniques in terms of both quality measures and recognition rate. We also show that character contour restoration is effective for text detection in video and natural scene images.
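Zero crossings of the Laplacian, the stroke-candidate cue named above, can be sketched with a discrete 4-neighbour Laplacian; the wrap-around boundary handling via np.roll is a simplification of ours, not the paper's implementation:

```python
import numpy as np

def laplacian_zero_crossings(img):
    """Boolean mask of zero-crossing pixels of the discrete 4-neighbour
    Laplacian: a pixel qualifies when its Laplacian response changes sign
    against its right or lower neighbour. Zero crossings sit on intensity
    transitions, which is what makes them stroke-candidate cues."""
    img = np.asarray(img, float)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    zc = np.zeros(img.shape, bool)
    zc[:, :-1] |= np.signbit(lap[:, :-1]) != np.signbit(lap[:, 1:])
    zc[:-1, :] |= np.signbit(lap[:-1, :]) != np.signbit(lap[1:, :])
    return zc
```

On a vertical step edge the mask fires in the columns bracketing the edge and stays silent in flat regions.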

13.
Stud Health Technol Inform; 214: 22-8, 2015.
Article in English | MEDLINE | ID: mdl-26210413

ABSTRACT

Computer-based decision support information systems have been promoted for their potential to improve physician performance and patient outcomes and to support clinical decision making. The current case study reports the design and implementation of a high-level decision support system (DSS) that facilitates the flow of data from the operational level to top managers and the leadership level of hospitals. The results show that the DSS improves data connectivity, timeliness, and responsiveness via centralised sourcing and storage of principal health-related information in the hospital. The implementation of the system has resulted in significant enhancements in the management of outpatient waiting times.


Subjects
Decision Support Systems, Clinical/statistics & numerical data , Efficiency, Organizational/statistics & numerical data , Electronic Health Records/organization & administration , Hospital Information Systems/statistics & numerical data , Hospitals, University/statistics & numerical data , Waiting Lists , Queensland , Utilization Review
14.
Stud Health Technol Inform; 204: 38-46, 2014.
Article in English | MEDLINE | ID: mdl-25087525

ABSTRACT

This study conducts a systematic literature review on the application of three-dimensional virtual worlds (3DVWs) in the healthcare context. During the past decade, 3DVWs have emerged as a cutting-edge technology with much to offer the healthcare sector. Our systematic review began with an initial set of 1088 studies published from 1990 to 2013 that used 3DVWs for healthcare-specific purposes. We found a variety of areas of application for 3DVWs in healthcare and categorised them into the following categories: education, treatment, evaluation, lifestyle, and simulation. The big picture of 3DVW application areas presented in this study can be valuable and insightful for researchers and the healthcare community.


Subjects
Biomedical Technology/methods , Delivery of Health Care/methods , Ecosystem , Educational Technology/methods , Imaging, Three-Dimensional/methods , Internet , User-Computer Interface , Computer Graphics , Cooperative Behavior , Models, Theoretical
15.
J Med Internet Res; 16(2): e47, 2014 Feb 18.
Article in English | MEDLINE | ID: mdl-24550130

ABSTRACT

BACKGROUND: A three-dimensional virtual world (3DVW) is a computer-simulated electronic 3D virtual environment that users can explore and inhabit, communicating and interacting via avatars, which are graphical representations of the users. Since the early 2000s, 3DVWs have emerged as a technology that has much to offer the health care sector. OBJECTIVE: The purpose of this study was to characterize the different application areas of various 3DVWs in health and medical contexts and to categorize them into meaningful categories. METHODS: This study employs a systematic literature review of the application areas of 3DVWs in health care. Our search resulted in 62 papers from five top-ranking scientific databases, published from 1990 to 2013, that describe the use of 3DVWs for health care-specific purposes. We noted a growth in the number of academic studies on the topic since 2006. RESULTS: We found a wide range of application areas for 3DVWs in health care and classified them into the following six categories: academic education, professional education, treatment, evaluation, lifestyle, and modeling. The education category, comprising professional and academic education, contains the largest number of papers (n=34), of which 23 relate to academic education and 11 to professional education. Nine papers are allocated to the treatment category, and 8 papers have content related to evaluation. In 4 of the papers the authors used 3DVWs for modeling, and 3 papers targeted lifestyle purposes. The results indicate that most of the research to date has focused on education in health care. We also found that most studies were undertaken in just two countries, the United States and the United Kingdom. CONCLUSIONS: 3D virtual worlds present several innovative ways to carry out a wide variety of health-related activities. The big picture of application areas of 3DVWs presented in this review could be of value and offer insights to both the health care community and researchers.


Subjects
Computer Simulation , Health Services Research , User-Computer Interface , Humans
16.
Nucl Med Biol; 35(7): 755-61, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18848660

ABSTRACT

INTRODUCTION: There is considerable interest in creating therapies and vaccines against Bacillus anthracis, the bacterium that causes anthrax in humans and whose spores can be made into potent biological weapons. Systemic injection of lethal factor (LF), edema factor (EF), and protective antigen (PA) in mice produces toxicity, and this protocol is commonly used to investigate the efficacy of specific antibodies in passive protection and vaccine studies. The availability of toxins labeled with imageable radioisotopes would allow their tissue distribution to be demonstrated after intravenous injection at concentrations below pharmacologically significant levels, avoiding masking by toxic effects. METHODS: LF, EF, and PA were radiolabeled with (188)Re and (99m)Tc, and their performance in vitro was evaluated by macrophage and Chinese hamster ovary cell toxicity assays and by binding to macrophages. Scintigraphic imaging and biodistribution of intravenously (IV) injected (99m)Tc- and (123)I-labeled toxins were performed in BALB/c mice. RESULTS: The radiolabeled toxins preserved their biological activity. Scatchard-type analysis of the binding of radiolabeled PA to J774.16 macrophage-like cells revealed 6.6 x 10(4) binding sites per cell with a dissociation constant of 6.7 nM. Comparative scintigraphic imaging of mice injected intravenously with either (99m)Tc- or (123)I-labeled PA, EF, and LF toxins demonstrated similar biodistribution patterns, with early localization of radioactivity in the liver, spleen, and intestines and excretion through the kidneys. The finding of renal excretion shortly after IV injection strongly suggests that the toxins are rapidly degraded, which could contribute to the variability of mouse toxigenic assays. Biodistribution studies confirmed that all three toxins concentrated in the liver, and the presence of high levels of radioactivity again implied rapid degradation in vivo. CONCLUSIONS: The availability of (188)Re- and (99m)Tc-labeled PA, LF, and EF toxins allowed us to confirm the number of PA binding sites per cell, to estimate the dissociation constant of PA for its receptor, and to demonstrate the tissue distribution of the toxins in mice after intravenous injection.
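The Scatchard-type analysis mentioned above plots bound/free against bound: the slope of the resulting line is -1/Kd and its x-intercept is the total number of binding sites. A sketch of that fit (the linear-regression approach and unit conventions are our illustrative choices):

```python
import numpy as np

def scatchard_fit(bound, free):
    """Scatchard-type analysis: regress bound/free against bound.
    For single-site binding, slope = -1/Kd and the x-intercept of the
    line gives B_max, the total number of binding sites."""
    bound = np.asarray(bound, float)
    ratio = bound / np.asarray(free, float)
    slope, intercept = np.polyfit(bound, ratio, 1)
    kd = -1.0 / slope
    b_max = -intercept / slope          # x-intercept of the Scatchard line
    return kd, b_max
```

On synthetic single-site data the fit recovers the generating Kd and B_max, which is the sanity check behind reporting values such as 6.7 nM and 6.6 x 10(4) sites per cell.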


Subjects
Bacterial Toxins/pharmacokinetics , Radioisotopes , Rhenium , Animals , Antigens, Bacterial , CHO Cells , Cricetinae , Cricetulus , Female , Iodine Radioisotopes , Mice , Mice, Inbred BALB C , Technetium , Tissue Distribution
17.
Org Lett; 10(2): 301-4, 2008 Jan 17.
Article in English | MEDLINE | ID: mdl-18085786

ABSTRACT

A new mechanistic principle for reporting the phosphorylation of tyrosine is described, which should prove applicable to even the most fastidious of protein tyrosine kinases, as demonstrated by the acquisition of a fluorescent sensor for the extraordinarily demanding anaplastic lymphoma kinase.


Subjects
Protein-Tyrosine Kinases/metabolism , Tyrosine/metabolism , Amino Acid Sequence , Anaplastic Lymphoma Kinase , Antineoplastic Agents/chemistry , Fluorescent Dyes , Molecular Structure , Phosphorylation , Protein-Tyrosine Kinases/analysis , Protein-Tyrosine Kinases/chemistry , Receptor Protein-Tyrosine Kinases
19.
J Am Chem Soc; 128(43): 14016-7, 2006 Nov 01.
Article in English | MEDLINE | ID: mdl-17061870

ABSTRACT

Protein tyrosine kinases serve as key mediators of signaling pathways, biochemical highways that control various aspects of cell behavior. Although fluorescent reporters of tyrosine kinases have been described, these species can suffer immediate phosphorylation upon exposure to the cellular milieu, thereby hindering a detailed analysis of kinase activity as a function of the cell cycle or exposure to environmental stimuli. The first example of a light-regulated tyrosine kinase reporter is described herein, which allows the investigator to control when kinase activity is sampled. In addition, the set of sensors created in this study contain different fluorophores, each with its own unique photophysical properties, thereby furnishing an array of choices that can be used in combination with other intracellular probes.


Subjects
Light , Protein-Tyrosine Kinases/metabolism , Cell Line , Humans , Nuclear Magnetic Resonance, Biomolecular , Phosphorylation
20.
Anal Bioanal Chem; 386(6): 1773-9, 2006 Nov.
Article in English | MEDLINE | ID: mdl-17031623

ABSTRACT

In this study, an investigation was undertaken to determine whether the predictive accuracy of an indirect, multiwavelength spectroscopic technique for rapidly determining oxygen demand (OD) values is affected by the use of unfiltered and turbid samples, as well as by the use of absorbance values measured below 200 nm. The rapid OD technique uses UV-Vis spectroscopy and artificial neural networks (ANNs) to indirectly determine chemical oxygen demand (COD) levels. The most accurate results were obtained when the 190-350 nm spectral range was provided as input to the ANN and when unfiltered samples below a turbidity of 150 NTU were used; with these data, correlations above 0.90 were obtained against the standard COD method. This indicates that samples can be measured directly, without the additional need for preprocessing by filtering. Samples with turbidity values higher than 150 NTU produced poor correlations with the standard COD method, making them unsuitable for accurate, real-time, on-line monitoring of OD levels.
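The pipeline above (window the absorbance spectrum to 190-350 nm, then regress COD with an ANN) can be sketched as follows; the network size, wavelength grid, and the scikit-learn stand-in for the ANN are our assumptions, not the study's configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_cod_model(absorbance, wavelengths, cod, lo=190, hi=350):
    """Sketch of the indirect COD model: keep only the 190-350 nm absorbance
    window found most accurate in the study, then fit a small feed-forward
    ANN mapping spectra to COD values measured by the standard method."""
    mask = (wavelengths >= lo) & (wavelengths <= hi)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(absorbance[:, mask], cod)
    return model, mask
```

In use, the same `mask` must be applied to new spectra before prediction, mirroring the study's finding that sub-200 nm and far-visible wavelengths are best excluded.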


Subjects
Oxygen/analysis , Waste Disposal, Fluid , Time Factors