Results 1 - 5 of 5
1.
Appl Plant Sci ; 2(8)2014 Aug.
Article in English | MEDLINE | ID: mdl-25202648

ABSTRACT

PREMISE OF THE STUDY: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community with a computational framework for information sharing and knowledge transfer.

METHODS: Mathematical methods were used to assign trait semantics (abstract morphological representations) to the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content.

RESULTS: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture.

DISCUSSION: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery.
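The retrieval approach outlined above, reducing each image to visual features and answering queries by similarity search, can be illustrated with a minimal sketch. The grey-level histogram feature and linear nearest-neighbour scan below are hypothetical stand-ins for the richer color, shape, and texture descriptors and the database-indexing structures the study describes, not the authors' actual pipeline.

```python
# Minimal content-based image retrieval sketch: each image becomes a feature
# vector (here a normalized grey-level histogram) and a query is answered by
# nearest-neighbour search over those vectors. Illustrative assumptions only.
import numpy as np

def histogram_feature(image, bins=32):
    """Normalized grey-level histogram of a 2-D image array."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def retrieve(query, database, k=5):
    """Indices of the k database images closest to the query in feature space."""
    q = histogram_feature(query)
    dists = [np.linalg.norm(q - histogram_feature(img)) for img in database]
    return list(np.argsort(dists)[:k])

# Usage with random stand-in "pollen images":
rng = np.random.default_rng(0)
db = [rng.integers(0, 256, (64, 64)) for _ in range(100)]
print(retrieve(db[0], db, k=3))
```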

2.
BMC Bioinformatics ; 12: 260, 2011 Jun 24.
Article in English | MEDLINE | ID: mdl-21702966

ABSTRACT

BACKGROUND: The ability to search for and precisely compare similar phenotypic appearances within and across species has vast potential in plant science and genetic research. The difficulty lies in the fact that many visual phenotypic data, especially visually observed phenotypes that often cannot be directly measured quantitatively, are in the form of text annotations, and these descriptions are plagued by semantic ambiguity, heterogeneity, and low granularity. Though several bio-ontologies have been developed to standardize phenotypic (and genotypic) information and permit comparisons across species, these semantic issues persist and prevent precise analysis and retrieval of information. A framework suitable for modeling and analyzing precise, computable representations of such phenotypic appearances is needed.

RESULTS: We have developed a new framework called the Computable Visually Observed Phenotype Ontological Framework for plants. This work provides a novel quantitative view of descriptions of plant phenotypes that leverages existing bio-ontologies and uses a computational approach to capture and represent domain knowledge in a machine-interpretable form. This is accomplished by a robust and accurate semantic mapping module that automatically maps high-level semantics to low-level measurements computed from phenotype imagery. The framework was applied to two different plant species; semantic rules were mined, an ontology was constructed, and evaluation showed high-quality rules for most semantics. The framework also facilitates automatic annotation of phenotype images and can be adopted by different plant communities to aid in their research.

CONCLUSIONS: The Computable Visually Observed Phenotype Ontological Framework for plants enables more efficient and accurate management of visually observed phenotypes, which play a significant role in plant genomics research. The uniqueness of this framework is its ability to bridge the knowledge of informaticians and plant science researchers by translating descriptions of visually observed phenotypes into standardized, machine-understandable representations, thus enabling the development of advanced information retrieval and phenotype annotation analysis tools for the plant science community.
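The semantic mapping step described above, rules over low-level image measurements that emit high-level phenotype terms, might look roughly like the following sketch. The measurement names, thresholds, and phenotype terms are hypothetical placeholders; the framework mines its rules from annotated imagery rather than hard-coding them as done here.

```python
# Sketch of rule-based semantic mapping: each rule tests low-level measurements
# and, if satisfied, contributes an ontology-style phenotype term.
# All names and thresholds are illustrative placeholders, not rules from the paper.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SemanticRule:
    term: str                          # high-level phenotype label
    predicate: Callable[[dict], bool]  # test over low-level measurements

RULES = [
    SemanticRule("kernel color: dark", lambda m: m["mean_intensity"] < 80),
    SemanticRule("fruit shape: elongated", lambda m: m["aspect_ratio"] > 1.8),
]

def annotate(measurements: dict) -> list:
    """Return every semantic term whose rule fires on the given measurements."""
    return [r.term for r in RULES if r.predicate(measurements)]

print(annotate({"mean_intensity": 62.0, "aspect_ratio": 2.1}))
# -> ['kernel color: dark', 'fruit shape: elongated']
```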


Subject(s)
Phenotype , Plants/anatomy & histology , Plants/genetics , Vocabulary, Controlled , Algorithms , Databases, Genetic , Fruit/anatomy & histology , Genomics , Genotype , Semantics , Zea mays/anatomy & histology , Zea mays/genetics
3.
IEEE Trans Geosci Remote Sens ; 45(4): 839-852, 2007 Apr.
Article in English | MEDLINE | ID: mdl-18270555

ABSTRACT

Searching for relevant knowledge across heterogeneous geospatial databases requires an extensive knowledge of the semantic meaning of images, a keen eye for visual patterns, and efficient strategies for collecting and analyzing data with minimal human intervention. In this paper, we present our recently developed content-based multimodal Geospatial Information Retrieval and Indexing System (GeoIRIS), which includes automatic feature extraction, visual content mining from large-scale image databases, and high-dimensional database indexing for fast retrieval. Using these underpinnings, we have developed techniques for complex queries that merge information from heterogeneous geospatial databases, retrieval of objects based on shape and visual characteristics, analysis of multiobject relationships for the retrieval of objects in specific spatial configurations, and semantic models that link low-level image features with high-level visual descriptors. GeoIRIS brings this diverse set of technologies together into a coherent system with the aim of allowing image analysts to identify relevant imagery more rapidly. GeoIRIS can answer analysts' questions in seconds, such as "given a query image, show me database satellite images that contain similar objects in a similar spatial relationship, within a certain radius of a landmark."
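The query quoted at the end of the abstract combines visual similarity with a spatial constraint. The sketch below shows that combination with a linear scan over synthetic records; GeoIRIS itself relies on high-dimensional index structures for speed, and the record layout, landmark, and distances here are illustrative assumptions.

```python
# Combined content/spatial query sketch: keep only records within a radius of a
# landmark, then rank them by visual-feature similarity to the query.
# Synthetic data and a brute-force scan stand in for GeoIRIS's real indexes.
import numpy as np

def query_near_landmark(query_feat, records, landmark_xy, radius, k=5):
    """records: dicts with 'feature' (vector) and 'xy' (planar coordinates)."""
    hits = []
    for i, rec in enumerate(records):
        if np.linalg.norm(np.asarray(rec["xy"]) - np.asarray(landmark_xy)) <= radius:
            hits.append((np.linalg.norm(query_feat - rec["feature"]), i))
    return [i for _, i in sorted(hits)[:k]]

rng = np.random.default_rng(1)
records = [{"feature": rng.random(8), "xy": rng.uniform(0, 100, 2)} for _ in range(200)]
print(query_near_landmark(rng.random(8), records, landmark_xy=(50, 50), radius=20))
```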

4.
J Bioinform Comput Biol ; 5(6): 1193-213, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18172925

ABSTRACT

There are thousands of maize mutants, which are invaluable resources for plant research. Geneticists use them to study underlying mechanisms of biochemistry, cell biology, cell development, and cell physiology. To streamline the understanding of such complex processes, researchers need the most current versions of genetic and physical maps, tools that can recognize novel phenotypes or classify known phenotypes, and an intimate knowledge of the biochemical processes generating physiological and phenotypic effects. They must also know how all of these factors change and differ among species, diverse alleles, germplasms, and environmental conditions. While there are robust databases, such as MaizeGDB, for some of these types of raw data, other crucial components are missing. Moreover, the management of visually observed mutant phenotypes is still in its infancy, let alone the complex query methods that can draw upon high-level, aggregated information to answer the questions of geneticists. In this paper, we address this scientific challenge and propose a robust framework for managing the knowledge of visually observed phenotypes, mining the correlation of visual characteristics with genetic maps, and discovering knowledge about the cross-species conservation of visual and genetic patterns. The ultimate goal of this research is to allow a geneticist to submit phenotypic and genomic information on a mutant to a knowledge base and ask, "What genes or environmental factors cause this visually observed phenotype?"
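The target query, asking which genes are linked to an observed phenotype, amounts to ranking knowledge-base entries by how well their associated phenotype descriptors overlap with the observation. A toy version, with placeholder gene and descriptor names rather than real maize associations, could look like this:

```python
# Toy phenotype-to-gene lookup: genes are ranked by how many of the observed
# phenotype descriptors they are linked to. Gene names and descriptors are
# placeholders, not real maize data.
KNOWLEDGE_BASE = {
    "gene_A": {"leaf: striped", "plant: dwarf"},
    "gene_B": {"kernel: shrunken"},
    "gene_C": {"leaf: striped", "kernel: pale"},
}

def candidate_genes(observed):
    """Rank genes by the number of observed descriptors they are linked to."""
    scores = [(gene, len(descriptors & observed))
              for gene, descriptors in KNOWLEDGE_BASE.items()]
    return sorted((s for s in scores if s[1] > 0), key=lambda s: -s[1])

print(candidate_genes({"leaf: striped", "kernel: pale"}))
# -> [('gene_C', 2), ('gene_A', 1)]
```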


Subject(s)
Mutation , Phenotype , Zea mays/genetics , Computational Biology , Databases, Genetic , Genes, Plant , Image Processing, Computer-Assisted , Knowledge Bases , Zea mays/anatomy & histology
5.
IEEE Trans Inf Technol Biomed ; 9(4): 538-53, 2005 Dec.
Article in English | MEDLINE | ID: mdl-16379371

ABSTRACT

Information technology offers great opportunities for supporting radiologists in decision support and training. However, this task is challenging due to the difficulty of articulating and modeling visual patterns of abnormalities in a computational way. To address these issues, well-established approaches to content management and image retrieval have been studied and applied to assist physicians in diagnosis. Unfortunately, most of these studies lack the flexibility to share both the explicit and the tacit knowledge involved in the decision-making process while adapting to each individual's opinion. In this paper, we propose a knowledge repository and exchange framework for diagnostic image databases called the "evolutionary system for semantic exchange of information in collaborative environments" (Essence). This framework uses semantic methods to describe visual abnormalities and offers a solution for tacit knowledge elicitation and exchange in the medical domain. Our approach also provides a computational and visual mechanism for associating synonymous semantics of visual abnormalities. We conducted several experiments to demonstrate the system's capability of matching synonymous terms and the benefit of using tacit knowledge to improve the meaningfulness of semantic queries.
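One way to read the synonym-association idea is that two terms used by different readers are candidate synonyms when the image features they annotate cluster together. The sketch below illustrates that reading with synthetic terms and feature vectors; it is a conceptual illustration under those assumptions, not the Essence implementation.

```python
# Synonym-association sketch: compute a feature centroid per term and treat
# term pairs with nearby centroids as candidate synonyms. Synthetic data only.
import numpy as np

def term_centroids(annotations):
    """annotations: list of (term, feature_vector). Returns term -> mean feature."""
    sums, counts = {}, {}
    for term, feat in annotations:
        sums[term] = sums.get(term, np.zeros_like(feat)) + feat
        counts[term] = counts.get(term, 0) + 1
    return {t: sums[t] / counts[t] for t in sums}

def candidate_synonyms(annotations, threshold=0.5):
    """Pairs of terms whose annotated-feature centroids lie within `threshold`."""
    cents = term_centroids(annotations)
    terms = sorted(cents)
    return [(a, b) for i, a in enumerate(terms) for b in terms[i + 1:]
            if np.linalg.norm(cents[a] - cents[b]) < threshold]

rng = np.random.default_rng(2)
base = rng.random(4)
ann = ([("opacity", base + rng.normal(0, 0.05, 4)) for _ in range(5)]
       + [("density", base + rng.normal(0, 0.05, 4)) for _ in range(5)]
       + [("nodule", rng.random(4) + 2) for _ in range(5)])
print(candidate_synonyms(ann))  # -> [('density', 'opacity')]
```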


Subject(s)
Databases, Factual , Decision Support Systems, Clinical , Expert Systems , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Medical Records Systems, Computerized , User-Computer Interface , Artificial Intelligence , Humans , Information Dissemination/methods , Pattern Recognition, Automated/methods , Radiology Information Systems , Semantics