Results 1 - 20 of 29
1.
IEEE Trans Cybern ; 51(2): 829-838, 2021 Feb.
Article in English | MEDLINE | ID: mdl-31902791

ABSTRACT

Single-image dehazing has been an important topic given the image degradation commonly caused by adverse atmospheric aerosols. The key to haze removal relies on an accurate estimation of the global air-light and the transmission map. Most existing methods estimate these two parameters using separate pipelines, which reduces efficiency and accumulates errors, thus leading to a suboptimal approximation, hurting model interpretability, and degrading performance. To address these issues, this article introduces a novel generative adversarial network (GAN) for single-image dehazing. The network consists of a novel compositional generator and a novel deeply supervised discriminator. The compositional generator is a densely connected network, which combines fine-scale and coarse-scale information. Benefiting from the new generator, our method can directly learn the physical parameters from data and recover clean images from hazy ones in an end-to-end manner. The proposed discriminator is deeply supervised, which enforces the output of the generator to resemble clean images from low-level details to high-level structures. To the best of our knowledge, this is the first end-to-end generative adversarial model for image dehazing that simultaneously outputs clean images, transmission maps, and air-lights. Extensive experiments show that our method remarkably outperforms state-of-the-art methods. Furthermore, to facilitate future research, we create the HazeCOCO dataset, currently the largest dataset for single-image dehazing.
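The physical parameters mentioned above follow the standard atmospheric scattering model (assumed here; the abstract does not spell it out), which writes a hazy image as I = J·t + A·(1 − t). Once the air-light A and transmission map t are estimated, the clean image J is recovered by inversion, as in this minimal sketch (function name illustrative):

```python
import numpy as np

def recover_scene(I, A, t, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    I: hazy image, H x W x 3 floats in [0, 1]
    A: global air-light, length-3 vector
    t: transmission map, H x W
    """
    t = np.clip(t, t_min, 1.0)[..., None]  # floor t to avoid amplifying noise in dense haze
    return (I - A) / t + A
```

Composing a synthetic hazy image and inverting it recovers the original exactly, which illustrates why joint estimation matters: any error in A or t propagates directly into the recovered J.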

2.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 116-119, 2020 07.
Article in English | MEDLINE | ID: mdl-33017944

ABSTRACT

Many prior studies on EEG-based emotion recognition did not consider the spatial-temporal relationships among brain regions and across time. In this paper, we propose a Regionally-Operated Domain Adversarial Network (RODAN) to learn spatial-temporal relationships that correlate between brain regions and time. Moreover, we incorporate an attention mechanism to enable cross-domain learning that captures spatial-temporal relationships among the EEG electrodes, and an adversarial mechanism to reduce domain shift in EEG signals. To evaluate the performance of RODAN, we conduct subject-dependent, subject-independent, and subject-biased experiments on both the DEAP and SEED-IV data sets, which yield encouraging results. In addition, we discuss the biased-sampling issue often observed in EEG-based emotion recognition and present an unbiased benchmark for both DEAP and SEED-IV.


Subjects
Electroencephalography , Emotions , Brain , Electrodes , Learning
3.
Sci Rep ; 10(1): 1447, 2020 01 29.
Article in English | MEDLINE | ID: mdl-31996715

ABSTRACT

Lifelog photo review is considered to enhance the recall of personal events. While a sizable body of research has explored the neural basis of autobiographical memory (AM), there is limited neural evidence on the retrieval-based enhancement effect on event memory among older adults in real-world environments. This study examined the neural processes of AM as modulated by retrieval practice through lifelog photo review in older adults. In the experiment, the blood-oxygen-level-dependent response during subjects' recall of recent events was recorded, where events were cued by photos that may or may not have been exposed to prior retrieval practice (training). Subjects remembered more episodic details under the trained relative to the non-trained condition. Importantly, the neural correlates of AM were exhibited by (1) dissociable cortical areas related to recollection and familiarity, and (2) a positive correlation between the amount of recollected episodic details and cortical activation within several lateral temporal and parietal regions. Further analysis of the brain activation pattern at a few regions of interest within the core remember network showed a training_condition × event_detail interaction effect, suggesting that the boosting effect of retrieval practice depended on the level of recollected event details.


Subjects
Cerebral Cortex/physiology , Episodic Memory , Long-Term Memory/physiology , Mental Recall/physiology , Neurons/physiology , Adult , Aged , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Photic Stimulation , Synaptic Transmission
4.
IEEE Trans Image Process ; 29(1): 2052-2065, 2020.
Article in English | MEDLINE | ID: mdl-31647436

ABSTRACT

While analyzing the performance of state-of-the-art R-CNN-based generic object detectors, we find that the detection performance for objects with low object-region-percentages (ORPs) of the bounding boxes is much lower than the overall average. Elongated objects are examples. To address the problem of low ORPs for elongated object detection, we propose a hybrid approach which employs a Faster R-CNN to achieve robust detections of object parts, and a novel model-driven clustering algorithm to group the related partial detections and suppress false detections. First, we train a Faster R-CNN with partial region proposals of suitable and stable ORPs. Next, we introduce a deep CNN (DCNN) for orientation classification on the partial detections. Then, on the outputs of the Faster R-CNN and DCNN, the adaptive model-driven clustering algorithm first initializes a model of an elongated object with a data-driven process on local partial detections, and refines the model iteratively by model-driven clustering and data-driven model updating. By exploiting Faster R-CNN to produce robust partial detections and model-driven clustering to form a global representation, our method is able to generate a tight oriented bounding box for elongated object detection. We evaluate the effectiveness of our approach on two typical elongated objects in the COCO dataset, and on other typical elongated objects, including rigid objects (pens, screwdrivers and wrenches) and non-rigid objects (cracks). Experimental results show that, compared with the state-of-the-art approaches, our method achieves a large margin of improvement for both detection and localization of elongated objects in images.
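One illustrative way to form an oriented extent from grouped part detections is to fit the dominant axis of their box centres by PCA; the paper's model-driven clustering is considerably more elaborate, so treat this as a toy sketch with hypothetical names:

```python
import numpy as np

def fuse_partial_boxes(centers):
    # centers: (N, 2) array of part-detection box centres assumed to belong
    # to one elongated object. PCA on the centres gives the elongation
    # direction; projecting onto it gives the oriented length.
    c = np.asarray(centers, float)
    mean = c.mean(axis=0)
    cov = np.cov((c - mean).T)
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    axis = vecs[:, np.argmax(vals)]       # dominant (elongation) direction
    proj = (c - mean) @ axis
    angle = np.arctan2(axis[1], axis[0])  # sign of axis is arbitrary
    length = proj.max() - proj.min()
    return mean, angle, length
```

For part boxes strung along a 45° line, this recovers the 45° orientation and the full extent; a real pipeline would also have to split centres into per-object clusters first.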

5.
IEEE Trans Pattern Anal Mach Intell ; 41(8): 1783-1796, 2019 08.
Article in English | MEDLINE | ID: mdl-30273143

ABSTRACT

We introduce a new problem of gaze anticipation on future frames, which extends the conventional gaze prediction problem to go beyond current frames. To solve this problem, we propose a new generative adversarial network based model, Deep Future Gaze (DFG), encompassing two pathways: DFG-P anticipates gaze prior maps conditioned on the input frame, which provides task influences; DFG-G learns to model both semantic and motion information for future frame generation. DFG-P and DFG-G are then fused to anticipate future gazes. DFG-G consists of two networks: a generator and a discriminator. The generator uses a two-stream spatial-temporal convolution architecture (3D-CNN) for explicitly untangling the foreground and background to generate future frames. It then attaches another 3D-CNN for gaze anticipation based on these synthetic frames. The discriminator plays against the generator by distinguishing the synthetic frames of the generator from the real frames. Experimental results on publicly available egocentric and third-person video datasets show that DFG significantly outperforms all competitive baselines. We also demonstrate that DFG achieves better gaze-prediction performance on current frames in egocentric and third-person videos than state-of-the-art methods.

6.
Nat Commun ; 9(1): 3730, 2018 09 13.
Article in English | MEDLINE | ID: mdl-30213937

ABSTRACT

Searching for a target object in a cluttered scene constitutes a fundamental challenge in daily vision. Visual search must be selective enough to discriminate the target from distractors, invariant to changes in the appearance of the target, efficient to avoid exhaustive exploration of the image, and must generalize to locate novel target objects with zero-shot training. Previous work on visual search has focused on searching for perfect matches of a target after extensive category-specific training. Here, we show for the first time that humans can efficiently and invariantly search for natural objects in complex scenes. To gain insight into the mechanisms that guide visual search, we propose a biologically inspired computational model that can locate targets without exhaustive sampling and which can generalize to novel objects. The model provides an approximation to the mechanisms integrating bottom-up and top-down signals during search in natural scenes.


Subjects
Attention , Visual Pattern Recognition , Ocular Vision , Visual Perception/physiology , Adult , Computer Simulation , Cues (Psychology) , Female , Humans , Male , Psychophysics , Reaction Time , Time Factors , Young Adult
7.
IEEE Trans Cybern ; 48(5): 1540-1552, 2018 May.
Article in English | MEDLINE | ID: mdl-29621004

ABSTRACT

Social working memory (SWM) plays an important role in navigating social interactions. Inspired by studies in psychology, neuroscience, cognitive science, and machine learning, we propose a probabilistic model of SWM to mimic human social intelligence for personal information retrieval (IR) in social interactions. First, we establish a semantic hierarchy as social long-term memory to encode personal information. Next, we propose a semantic Bayesian network as the SWM, which integrates the cognitive functions of accessibility and self-regulation. One subgraphical model implements the accessibility function to learn the social consensus about IR based on social information concepts, clustering, social context, and similarity between persons. Beyond accessibility, one more layer is added to simulate the function of self-regulation, performing personal adaptation to the consensus based on human personality. Two learning algorithms are proposed to train the probabilistic SWM model on a raw dataset of high uncertainty and incompleteness. One is an efficient learning algorithm based on Newton's method, and the other is a genetic algorithm. Systematic evaluations show that the proposed SWM model is able to learn human social intelligence effectively and outperforms the baseline Bayesian cognitive model. Toward real-world applications, we implement our model on Google Glass as a wearable assistant for social interaction.

8.
IEEE Trans Cybern ; 47(4): 841-854, 2017 Apr.
Article in English | MEDLINE | ID: mdl-26955058

ABSTRACT

Inspired by progress in cognitive science, artificial intelligence, computer vision, and mobile computing technologies, we propose and implement a wearable virtual usher for cognitive indoor navigation based on egocentric visual perception. A novel computational framework of cognitive wayfinding in an indoor environment is proposed, which contains a context model, a route model, and a process model. A hierarchical structure is proposed to represent the cognitive context knowledge of indoor scenes. Given a start position and a destination, a Bayesian network model is proposed to represent the navigation route derived from the context model. A novel dynamic Bayesian network (DBN) model is proposed to accommodate the dynamic process of navigation based on real-time first-person-view visual input, which involves multiple asynchronous temporal dependencies. To adapt to large variations in travel time through trip segments, we propose an online adaptation algorithm for the DBN model, leading to a self-adaptive DBN. A prototype system is built and tested for technical performance and user experience. The quantitative evaluation shows that our method achieves over 13% improvement in accuracy compared to baseline approaches based on hidden Markov models. In the user study, our system guides the participants to their destinations, emulating a human usher in multiple aspects.

9.
Cogn Process ; 16 Suppl 1: 319-22, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26216757

ABSTRACT

During wayfinding in a novel environment, we encounter many new places. Some of those places are encoded by our spatial memory. But how does the human brain "decide" which locations are more important than others, and how do backtracking and repetition priming enhance memorization of these scenes? In this work, we explore how backtracking improves encoding of encountered locations. We also check whether repetition priming helps with further memory enhancement. We recruited 20 adults. Each participant was guided through an unfamiliar indoor environment. The participants were instructed to remember the path, as they would need to backtrack by themselves. Two groups were defined: the first group performed a spatial memory test at the goal destination and after backtracking; the second group performed the test only after backtracking. The mean spatial memory scores of the first group improved significantly after backtracking: from 49.8% to 60.8%. The score of the second group was 62%. No difference was found in performance between the first group and the second group. Backtracking alone significantly improves spatial memory of visited places. Surprisingly, repetition priming does not further enhance memorization of these places. This result may suggest that spatial reasoning causes significant cognitive load that thwarts further improvement of spatial memory of locations.


Subjects
Environment , Repetition Priming/physiology , Spatial Memory/physiology , Adult , Female , Humans , Male , Photic Stimulation , Young Adult
10.
IEEE Trans Neural Netw Learn Syst ; 25(12): 2212-25, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25420244

ABSTRACT

In this paper, we propose a hybrid architecture that combines the image modeling strengths of the bag-of-words framework with the representational power and adaptability of learning deep architectures. Local gradient-based descriptors, such as SIFT, are encoded via a hierarchical coding scheme composed of spatial aggregating restricted Boltzmann machines (RBM). For each coding layer, we regularize the RBM by encouraging representations to fit both sparse and selective distributions. Supervised fine-tuning is used to enhance the quality of the visual representation for the categorization task. We performed a thorough experimental evaluation using three image categorization data sets. The hierarchical coding scheme achieved competitive categorization accuracies of 79.7% and 86.4% on the Caltech-101 and 15-Scenes data sets, respectively. The visual representations learned are compact and the model's inference is fast, as compared with sparse coding methods. The low-level representations of descriptors learned using this method result in generic features that we empirically found to be transferable between different image data sets. Further analysis reveals the significance of supervised fine-tuning when the architecture has two layers of representations as opposed to a single layer.

11.
IEEE Trans Pattern Anal Mach Intell ; 36(1): 195-201, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24231877

ABSTRACT

This paper presents a visual saliency modeling technique that is efficient and tolerant to the image scale variation. Different from existing approaches that rely on a large number of filters or complicated learning processes, the proposed technique computes saliency from image histograms. Several two-dimensional image co-occurrence histograms are used, which encode not only "how many" (occurrence) but also "where and how" (co-occurrence) image pixels are composed into a visual image, hence capturing the "unusualness" of an object or image region that is often perceived by either global "uncommonness" (i.e., low occurrence frequency) or local "discontinuity" with respect to the surrounding (i.e., low co-occurrence frequency). The proposed technique has a number of advantageous characteristics. It is fast and very easy to implement. At the same time, it involves minimal parameter tuning, requires no training, and is robust to image scale variation. Experiments on the AIM dataset show that a superior shuffled AUC (sAUC) of 0.7221 is obtained, which is higher than the state-of-the-art sAUC of 0.7187.
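The occurrence/co-occurrence idea above can be made concrete with a toy sketch that uses a single horizontal pixel-pair histogram (the paper uses several 2-D co-occurrence histograms and more careful normalization); every name here is illustrative, not the paper's code:

```python
import numpy as np

def cooccurrence_saliency(img, levels=64, d=1):
    # Quantize intensities, build a co-occurrence histogram of horizontal
    # neighbour pairs at distance d, then score each pixel by the rarity
    # (-log frequency) of the pairs it participates in: rare pairs mark
    # globally uncommon or locally discontinuous regions.
    q = (img.astype(np.float64) / 256 * levels).astype(int)
    H = np.zeros((levels, levels))
    a, b = q[:, :-d], q[:, d:]
    np.add.at(H, (a, b), 1)          # accumulate pair counts
    P = H / H.sum()
    rarity = -np.log(P[a, b] + 1e-12)
    sal = np.zeros(img.shape, dtype=float)
    sal[:, :-d] += rarity            # each pixel inherits its pairs' rarity
    sal[:, d:] += rarity
    return sal / sal.max()
```

On an image that is uniform except for one bright pixel, the lone pixel's pairs are rare and it receives the highest saliency, which is the "unusualness" intuition in miniature.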

12.
Article in English | MEDLINE | ID: mdl-24110606

ABSTRACT

The study of stem cells is one of the most important areas of biomedical research. Understanding their development could enable multiple applications in regenerative medicine. For this purpose, automated solutions for observing the stem cell development process are needed. This study introduces an on-line analysis method for modelling neurosphere evolution during the early stages of development under phase contrast microscopy. From the corresponding phase contrast time-lapse sequences, we extract information from the neurosphere using a combination of phase contrast physics deconvolution and curve detection to locate the cells inside the neurosphere. Then, based on prior biological knowledge, we generate possible and optimal 3-dimensional configurations using 2D-to-3D registration methods and an evolutionary optimisation algorithm.


Subjects
Phase-Contrast Microscopy/methods , Neural Stem Cells/cytology , Algorithms , Cell Differentiation/physiology , Factual Databases , Computer-Assisted Image Processing , Theoretical Models
13.
IEEE Trans Biomed Eng ; 58(1): 88-94, 2011 Jan.
Article in English | MEDLINE | ID: mdl-20952329

ABSTRACT

Under the framework of computer-aided eye disease diagnosis, this paper presents an automatic optic disc (OD) detection technique. The proposed technique makes use of the unique circular brightness structure associated with the OD, i.e., the OD usually has a circular shape and is brighter than the surrounding pixels, whose intensity becomes gradually darker with distance from the OD center. A line operator is designed to capture such circular brightness structure, which evaluates the image brightness variation along multiple line segments of specific orientations that pass through each retinal image pixel. The orientation of the line segment with the minimum/maximum variation has a specific pattern that can be used to locate the OD accurately. The proposed technique has been tested over four public datasets that include 130, 89, 40, and 81 images of healthy and pathological retinas, respectively. Experiments show that the designed line operator is tolerant to different types of retinal lesion and imaging artifacts, and an average OD detection accuracy of 97.4% is obtained.
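The core measurement of the line operator can be sketched in a few lines. This is a simplified reading of the abstract, not the paper's implementation: at each pixel, sample brightness along several oriented line segments and look at the spread of the per-orientation means. Inside a circular bright region the spread is near zero; at its rim, lines crossing the boundary differ from lines along it:

```python
import numpy as np

def line_variation(img, y, x, half=7, n_orient=12):
    # Sample brightness along line segments of n_orient orientations centred
    # on (y, x); return the max-min spread of the per-orientation means.
    offsets = np.arange(-half, half + 1)
    means = []
    for k in range(n_orient):
        th = np.pi * k / n_orient
        ys = np.clip((y + offsets * np.sin(th)).round().astype(int),
                     0, img.shape[0] - 1)
        xs = np.clip((x + offsets * np.cos(th)).round().astype(int),
                     0, img.shape[1] - 1)
        means.append(img[ys, xs].mean())
    return max(means) - min(means)
```

A disc-locating pipeline would evaluate this (and the orientation achieving the extremes) densely over the image; the sketch only shows the per-pixel statistic.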


Subjects
Ophthalmological Diagnostic Techniques , Computer-Assisted Image Processing/methods , Optic Disk/anatomy & histology , Retina/anatomy & histology , Factual Databases , Computer-Assisted Diagnosis/methods , Humans
14.
Article in English | MEDLINE | ID: mdl-22255744

ABSTRACT

Live imaging of neural stem cells and progenitors is important to follow the biology of these cells. Non-invasive imaging techniques, such as phase contrast microscopy, are preferred, as neural stem cells are very sensitive to phototoxic damage caused by excitation of fluorescent molecules. However, large illumination variations and weak foreground/background contrast make phase contrast images challenging for image processing. In the current work, we propose a new method to segment neurospheres imaged under phase contrast microscopy by employing high dynamic range imaging and an advanced level-set method. The use of high dynamic range imaging enhances the fused image by expressing cell signatures from various exposure captures. We apply the advanced level-set method in cell segmentation to improve the detection rate over simple methods such as thresholding. Validation experiments in the analysis of 21 images containing over 400 cells have demonstrated accuracy improvements over existing techniques.


Subjects
Phase-Contrast Microscopy/methods , Neurons/pathology , Algorithms , Animals , Diagnostic Imaging/methods , False-Positive Reactions , Humans , Computer-Assisted Image Processing , Light , Lighting , Statistical Models , Neural Stem Cells/cytology , Optics and Photonics , Reproducibility of Results , Sensitivity and Specificity , Stem Cells/cytology
15.
Invest Ophthalmol Vis Sci ; 52(3): 1314-9, 2011 Mar 10.
Article in English | MEDLINE | ID: mdl-21051727

ABSTRACT

PURPOSE: To validate a new computer-aided diagnosis (CAD) imaging program for the assessment of nuclear lens opacity. METHODS: Slit-lamp lens photographs from the Singapore Malay Eye Study (SiMES) were graded using both the CAD imaging program and manual assessment method by a trained grader using the Wisconsin Cataract Grading System. Cataract was separately assessed clinically during the study using Lens Opacities Classification System III (LOCS III). The repeatability of CAD and Wisconsin grading methods were assessed using 160 paired images. The agreement between the CAD and Wisconsin grading methods, and the correlations of CAD with Wisconsin and LOCS III were assessed using the SiMES sample (5547 eyes from 2951 subjects). RESULTS: In assessing the repeatability, the coefficient of variation (CoV) was 8.10% (95% confidence interval [CI], 7.21-8.99), and the intraclass correlation coefficient (ICC) was 0.96 (95% CI, 0.93-0.96) for the CAD method. There was high agreement between the CAD and Wisconsin methods, with a mean difference (CAD minus Wisconsin) of -0.02 (95% limit of agreement, -0.91 and 0.87) and an ICC of 0.81 (95% CI, 0.80-0.82). CAD parameters were also significantly correlated with LOCS III grading (all P < 0.001). CONCLUSIONS: This new CAD imaging program assesses nuclear lens opacity with results comparable to the manual grading using the Wisconsin System. This study shows that an automated, precise, and quantitative assessment of nuclear cataract is possible.
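The agreement statistics reported above (mean difference with 95% limits of agreement) are the standard Bland-Altman quantities; a small illustrative sketch, not the study's actual analysis code:

```python
import numpy as np

def bland_altman(a, b):
    # Agreement between two grading methods on the same eyes: the mean of
    # the paired differences and the 95% limits of agreement, i.e.
    # mean ± 1.96 * SD of the differences.
    d = np.asarray(a, float) - np.asarray(b, float)
    m, s = d.mean(), d.std(ddof=1)
    return m, (m - 1.96 * s, m + 1.96 * s)
```

For the study's figures (mean difference -0.02, limits -0.91 and 0.87), the two methods' grades rarely differ by more than about 0.9 grade units, which is what "high agreement" quantifies here.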


Subjects
Cataract/diagnosis , Computer-Assisted Diagnosis , Diagnostic Imaging/methods , Ophthalmological Diagnostic Techniques , Crystalline Lens Nucleus/pathology , Area Under Curve , Cataract/classification , Female , Humans , Male , Middle Aged , Photography , Reproducibility of Results
16.
IEEE Trans Med Imaging ; 30(1): 94-107, 2011 Jan.
Article in English | MEDLINE | ID: mdl-20679026

ABSTRACT

In clinical diagnosis, a grade indicating the severity of nuclear cataract is often manually assigned by a trained ophthalmologist to a patient after comparing the opacity severity of the lens in his/her slit-lamp images with a set of standard photos. This grading scheme is often subjective and time-consuming. In this paper, a novel computer-aided diagnosis method via ranking is proposed to facilitate nuclear cataract grading, following the conventional clinical decision-making process. The grade of nuclear cataract in a slit-lamp image is predicted using its neighboring labeled images in a ranked image list, which is obtained using a learned ranking function. This ranking function is learned via direct optimization of a newly proposed approximation to a ranking evaluation measure. Our proposed method has been evaluated on a large dataset composed of 1000 different cases, collected from an ongoing clinical population-based study. Both experimental results and comparison with several existing methods demonstrate the benefit of grading via ranking by our proposed method.
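The prediction step (a grade inferred from neighbours in the ranked list) can be sketched as follows. The ranking function itself is learned in the paper, so a raw score stands in for it here, and all names are illustrative:

```python
def grade_by_ranking(query_score, labeled, k=3):
    # labeled: (score, grade) pairs, where score comes from a learned
    # ranking function. Predict the grade of a new image as the mean grade
    # of its k nearest neighbours in the ranked list.
    ranked = sorted(labeled, key=lambda p: abs(p[0] - query_score))
    nearest = ranked[:k]
    return sum(g for _, g in nearest) / len(nearest)
```

The quality of the prediction therefore rests entirely on how well the learned ranking orders images by severity, which is why the paper optimizes a ranking evaluation measure directly.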


Subjects
Cataract/diagnosis , Computer-Assisted Diagnosis/methods , Diagnostic Imaging/methods , Ophthalmological Diagnostic Techniques , Nonparametric Statistics , Humans , Crystalline Lens Nucleus/pathology , Photography/methods , Physical Examination/instrumentation , Physical Examination/methods , Research Design
17.
Article in English | MEDLINE | ID: mdl-22255472

ABSTRACT

Cataract remains a leading cause for blindness worldwide. Cataract diagnosis via human grading is subjective and time-consuming. Several methods of automatic grading are currently available, but each of them suffers from some drawbacks. In this paper, a new approach for automatic detection based on texture and intensity analysis is proposed to address the problems of existing methods and improve the performance from three aspects, namely ROI detection, lens mask generation and opacity detection. In the detection method, image clipping and texture analysis are applied to overcome the over-detection problem for clear lens images and global thresholding is exploited to solve the under-detection problem for severe cataract images. The proposed method is tested on 725 retro-illumination lens images randomly selected from a database of a community study. Experiments show improved performance compared with the state-of-the-art method.


Subjects
Algorithms , Cataract/diagnosis , Computer-Assisted Image Interpretation/methods , Lighting/methods , Ophthalmoscopy/methods , Automated Pattern Recognition/methods , Humans , Reproducibility of Results , Sensitivity and Specificity
18.
Article in English | MEDLINE | ID: mdl-21096260

ABSTRACT

Cataract is the leading cause of blindness and posterior subcapsular cataract (PSC) leads to significant visual impairment. An automatic approach for detecting PSC opacity in retro-illumination images is investigated. The features employed include intensity, edge, size and spatial location. The system was tested using 441 images. The automatic detection was compared with the human expert. The sensitivity and specificity are 82.6% and 80% respectively. The preliminary research indicates it is feasible to apply automatic detection in the clinical screening of PSC in the future.
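The reported sensitivity and specificity follow the standard definitions against the human expert's labels; a minimal sketch of the computation (not the study's code):

```python
def sensitivity_specificity(pred, truth):
    # pred, truth: parallel sequences of booleans (detected / expert label).
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    tn = sum(not p and not t for p, t in zip(pred, truth))  # true negatives
    fp = sum(p and not t for p, t in zip(pred, truth))      # false alarms
    fn = sum(not p and t for p, t in zip(pred, truth))      # misses
    return tp / (tp + fn), tn / (tn + fp)
```

Sensitivity (82.6% here) is the fraction of expert-confirmed PSC opacities the system finds; specificity (80%) is the fraction of opacity-free cases it correctly leaves alone.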


Subjects
Automation/methods , Capsule Opacification/diagnosis , Capsule Opacification/pathology , Computer-Assisted Image Interpretation/methods , Crystalline Lens Capsule/pathology , Aged , Aged, 80 and over , Algorithms , Humans , Lighting , Middle Aged , Pupil
19.
Article in English | MEDLINE | ID: mdl-20879355

ABSTRACT

A video recording of an examination by Wireless Capsule Endoscopy (WCE) may typically contain more than 55,000 video frames, which makes manual visual screening by an experienced gastroenterologist a highly time-consuming task. In this paper, we propose a novel method of epitomized summarization of WCE videos for efficient visualization to a gastroenterologist. For each short sequence of a WCE video, an epitomized frame is generated. New constraints are introduced into the epitome formulation to achieve the necessary visual quality for manual examination, and an EM algorithm for learning the epitome is derived. First, local context weights are introduced to generate the epitomized frame. The epitomized frame preserves the appearance of all the input patches from the frames of the short sequence. Furthermore, by introducing spatial distributions for semantic interpretation of image patches in our epitome formulation, we show that it also provides a framework to facilitate the semantic description of visual features to generate organized visual summarization of WCE video, where the patches in different positions correspond to different semantic information. Our experiments on real WCE videos show that, using epitomized summarization, the number of frames that have to be examined by the gastroenterologist can be reduced to less than one-tenth of the original frames in the video.


Subjects
Algorithms , Capsule Endoscopy/methods , Image Enhancement/methods , Computer-Assisted Image Interpretation/methods , Automated Pattern Recognition/methods , Subtraction Technique , Telemetry/methods , Humans , Reproducibility of Results , Sensitivity and Specificity
20.
IEEE Trans Biomed Eng ; 57(10): 2605-8, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20595078

ABSTRACT

Under the framework of computer-aided diagnosis, optical coherence tomography (OCT) has become an established ocular imaging technique that can be used in glaucoma diagnosis by measuring the retinal nerve fiber layer thickness. This letter presents an automated retinal layer segmentation technique for OCT images. In the proposed technique, an OCT image is first cut into multiple vessel and nonvessel sections by the retinal blood vessels that are detected through an iterative polynomial smoothing procedure. The nonvessel sections are then filtered by a bilateral filter and a median filter that suppress the local image noise but keep the global image variation across the retinal layer boundary. Finally, the layer boundaries of the filtered nonvessel sections are detected, which are further classified to different retinal layers to determine the complete retinal layer boundaries. Experiments over OCT for four subjects show that the proposed technique segments an OCT image into five layers accurately.


Subjects
Computer-Assisted Image Processing/methods , Retina/anatomy & histology , Retinal Vessels/anatomy & histology , Optical Coherence Tomography/methods , Algorithms , Computer-Assisted Diagnosis , Humans , Optic Disk/anatomy & histology