Results 1 - 20 of 59
1.
Stud Health Technol Inform ; 314: 3-13, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38784996

ABSTRACT

Health and social care systems around the globe are currently undergoing a transformation towards personalized, preventive, predictive, participative precision medicine (5PM), considering the individual health status, conditions, and genetic and genomic dispositions in personal, social, occupational, environmental and behavioral context. This transformation is strongly supported by technologies such as micro- and nanotechnologies, advanced computing, artificial intelligence, and edge computing. To enable communication and cooperation between actors from different domains who use different methodologies, languages and ontologies grounded in different education and experiences, we have to understand the transformed health ecosystems and all their components in structure, function and relationships in the necessary detail, ranging from elementary particles up to the universe. That way, we advance design and management of the complex and highly dynamic ecosystem from the data level to the knowledge level. The challenge is the consistent, correct and formalized representation of the transformed health ecosystem from the perspectives of all domains involved, representing and managing them based on related ontologies. The resulting business view of the real-world ecosystem must be interrelated using the ISO/IEC 21838 Top-Level Ontologies standard. Thereafter, the outcome can be transformed into implementable solutions using the ISO/IEC 10746 Open Distributed Processing Reference Model. The model and framework for this system-oriented, architecture-centric, ontology-based, policy-driven approach have been developed by the first author and have since been standardized as ISO 23903, the Interoperability and Integration Reference Architecture.


Subjects
Precision Medicine, Humans, Artificial Intelligence
2.
J Am Soc Mass Spectrom ; 34(12): 2857-2863, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-37874901

ABSTRACT

Liquid chromatography-mass spectrometry (LC-MS) metabolomics studies produce high-dimensional data that must be processed by a complex network of informatics tools to generate analysis-ready data sets. Data processing, the first computational step in metabolomics, increasingly challenges researchers to develop customized computational workflows applicable to LC-MS metabolomics analysis. Ontology-based automated workflow composition (AWC) systems provide a feasible approach for developing computational workflows that consume high-dimensional molecular data. We used the Automated Pipeline Explorer (APE) to create an AWC system for LC-MS metabolomics data processing across three use cases. Our results show that APE predicted 145 data processing workflows across all three use cases. We identified six traditional workflows and six novel workflows. Through manual review, we found that one-third of the novel workflows were executable, meaning the data processing function could be completed without error. When selecting the top six workflows from each use case, the computationally viable rate of our predicted workflows reached 45%. Collectively, our study demonstrates the feasibility of developing an AWC system for LC-MS metabolomics data processing.
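To illustrate the core idea behind automated workflow composition, the sketch below enumerates tool chains from annotated inputs and outputs using a simple breadth-first search; the tool names and data types are invented placeholders, and the sketch does not reflect APE's actual API or its ontology-backed, constraint-based synthesis.

from collections import deque

# Hypothetical tool annotations: each tool consumes one data type and produces another.
# Real AWC systems annotate tools against domain ontologies with richer type semantics.
TOOLS = {
    "peak_picking":  ("raw_spectra", "peak_list"),
    "alignment":     ("peak_list", "aligned_peaks"),
    "gap_filling":   ("aligned_peaks", "feature_table"),
    "normalization": ("feature_table", "normalized_table"),
    "annotation":    ("feature_table", "annotated_table"),
}

def compose_workflows(source_type, target_type, max_len=4):
    """Enumerate tool chains that transform source_type into target_type."""
    queue = deque([(source_type, [])])
    workflows = []
    while queue:
        current_type, chain = queue.popleft()
        if current_type == target_type:
            workflows.append(chain)
            continue
        if len(chain) >= max_len:
            continue
        for tool, (t_in, t_out) in TOOLS.items():
            if t_in == current_type and tool not in chain:
                queue.append((t_out, chain + [tool]))
    return workflows

if __name__ == "__main__":
    for workflow in compose_workflows("raw_spectra", "normalized_table"):
        print(" -> ".join(workflow))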


Subjects
Hominidae, Software, Animals, Workflow, Metabolomics/methods, Mass Spectrometry, Liquid Chromatography/methods
3.
J Biomed Semantics ; 14(1): 14, 2023 09 20.
Article in English | MEDLINE | ID: mdl-37730667

ABSTRACT

BACKGROUND: Clinical early warning scoring systems have improved patient outcomes in a range of specializations and global contexts. These systems are used to predict patient deterioration. A multitude of patient-level physiological decompensation data has been made available through the widespread integration of early warning scoring systems within EHRs across national and international health care organizations. These data can be used to promote secondary research. The diversity of early warning scoring systems and of EHR systems is one barrier to secondary analysis of early warning score data: because early warning score parameters vary, it is difficult to query across providers and EHR systems, and mapping and merging the parameters is challenging. To overcome these problems, we develop and validate the Early Warning System Scores Ontology (EWSSO), representing three commonly used early warning scores: the National Early Warning Score (NEWS), the six-item modified Early Warning Score (MEWS), and the quick Sequential Organ Failure Assessment (qSOFA). METHODS: We apply the Software Development Lifecycle Framework, conceived by Winston Royce in 1970, to model the activities involved in organizing, producing, and evaluating the EWSSO. We also follow the OBO Foundry principles and the principles of best practice for domain ontology design, terms, definitions, and classifications to meet BFO requirements for ontology building. RESULTS: We developed twenty-nine new classes and reused four classes and four object properties to create the EWSSO. When we queried the data, our ontology-based process differentiated between necessary and unnecessary features for score calculation 100% of the time. Further, our process applied the proper temperature conversions for the early warning score calculator 100% of the time. CONCLUSIONS: Using synthetic datasets, we demonstrate that the EWSSO can be used to generate and query health system data on vital signs and provide input to calculate the NEWS, six-item MEWS, and qSOFA. Future work includes extending the EWSSO by introducing additional early warning scores for adult and pediatric patient populations and creating patient profiles that contain clinical, demographic, and outcomes data regarding the patient.
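For orientation, a minimal sketch of the kind of score arithmetic such an ontology-backed pipeline feeds, limited to the published qSOFA criteria and a Fahrenheit-to-Celsius conversion; the function names are illustrative and are not taken from the EWSSO or its query layer.

def fahrenheit_to_celsius(temp_f: float) -> float:
    """Convert a temperature recorded in degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

def qsofa(respiratory_rate: float, systolic_bp: float, gcs: int) -> int:
    """Quick SOFA: one point each for RR >= 22/min, SBP <= 100 mmHg, and GCS < 15."""
    return int(respiratory_rate >= 22) + int(systolic_bp <= 100) + int(gcs < 15)

if __name__ == "__main__":
    print(round(fahrenheit_to_celsius(98.6), 1))                # 37.0
    print(qsofa(respiratory_rate=24, systolic_bp=95, gcs=15))   # 2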


Subjects
Early Warning Score, Adult, Child, Humans, Software
4.
J Pers Med ; 13(8)2023 Jul 30.
Article in English | MEDLINE | ID: mdl-37623460

ABSTRACT

The ongoing transformation of health systems around the world aims at personalized, preventive, predictive, participative precision medicine, supported by technology. It considers individual health status, conditions, and genetic and genomic dispositions in personal, social, occupational, environmental and behavioral contexts. In this way, it transforms health and social care from art to science by fully understanding the pathology of diseases, and it turns health and social care from reactive to proactive. The challenge is understanding, and formally as well as consistently representing, the world of sciences and practices, i.e., multidisciplinary and dynamic systems in variable contexts. This enables mapping between the different disciplines, methodologies, perspectives, intentions and languages, as philosophy or the cognitive sciences do. The approach requires the deployment of advanced technologies, including autonomous systems and artificial intelligence, which poses important ethical and governance challenges. This paper describes the aforementioned transformation of health and social care ecosystems as well as the related challenges and solutions, resulting in a sophisticated, formal reference architecture. This reference architecture provides a system-theoretical, architecture-centric, ontology-based, policy-driven model and framework for designing and managing intelligent and ethical ecosystems in general and health ecosystems in particular.

5.
Front Med (Lausanne) ; 10: 1073313, 2023.
Article in English | MEDLINE | ID: mdl-37007792

ABSTRACT

This paper provides an overview of the current linguistic and ontological challenges that have to be met in order to fully support the transformation of health ecosystems toward precision medicine (5 PM) standards. It highlights standardization and interoperability aspects regarding formal, controlled representations of clinical and research data, as well as requirements for smart support to produce and encode content in a way that humans and machines can understand and process. Starting from the current text-centered communication practices in healthcare and biomedical research, it addresses the state of the art in information extraction using natural language processing (NLP). An important aspect of the language-centered perspective on managing health data is the integration of heterogeneous data sources that employ different natural languages and different terminologies. This is where biomedical ontologies, in the sense of formal, interchangeable representations of types of domain entities, come into play. The paper discusses the state of the art of biomedical ontologies, addresses their importance for standardization and interoperability, and sheds light on current misconceptions and shortcomings. Finally, the paper points out next steps and possible synergies between the field of NLP and the area of Applied Ontology and the Semantic Web to foster data interoperability for 5 PM.

6.
J Clin Transl Sci ; 7(1): e3, 2023.
Article in English | MEDLINE | ID: mdl-36755541

ABSTRACT

Background/Objective: Informed consent forms (ICFs) and practices vary widely across institutions. This project expands on previous work at the University of Arkansas for Medical Sciences (UAMS) Center for Health Literacy to develop a plain language ICF template. Our interdisciplinary team of researchers, composed of biomedical informaticists, health literacy experts, and stakeholders in the Institutional Review Board (IRB) process, has developed the ICF Navigator, a novel tool to facilitate the creation of plain language ICFs that comply with all relevant regulatory requirements. Methods: Our team first developed requirements for the ICF Navigator tool. The tool was then implemented by a technical team of informaticists and software developers, in consultation with an informed consent legal expert. We developed and formalized a detailed knowledge map modeling regulatory requirements for ICFs, which drives workflows within the tool. Results: The ICF Navigator is a web-based tool that guides researchers through creating an ICF as they answer questions about their project. The navigator uses those responses to produce a clear and compliant ICF, displaying a real-time preview of the final form as content is added. Versioning and edits can be tracked to facilitate collaborative revisions by the research team and communication with the IRB. The navigator guides the creation of study-specific language, ensures compliance with regulatory requirements, and ensures that the resulting ICF is easy to read and understand. Conclusion: The ICF Navigator is an innovative, customizable, open-source software tool that helps researchers produce custom, readable, and compliant ICFs for research studies involving human subjects.

7.
Metabolomics ; 19(2): 11, 2023 02 06.
Article in English | MEDLINE | ID: mdl-36745241

ABSTRACT

BACKGROUND: Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a popular approach for metabolomics data acquisition and requires many data processing software tools. The FAIR Principles - Findability, Accessibility, Interoperability, and Reusability - were proposed to promote open science and reusable data management, and to maximize the benefit obtained from contemporary and formal scholarly digital publishing. More recently, the FAIR principles were extended to cover Research Software (FAIR4RS). AIM OF REVIEW: This study facilitates open science in metabolomics by providing an implementation solution for adopting FAIR4RS in LC-HRMS metabolomics data processing software. We believe our evaluation guidelines and results can help improve the FAIRness of research software. KEY SCIENTIFIC CONCEPTS OF REVIEW: We evaluated 124 LC-HRMS metabolomics data processing software tools identified through a systematic review and selected 61 tools for detailed evaluation using FAIR4RS-related criteria, which were extracted from the literature and internal discussions. We assigned each criterion one or more FAIR4RS categories through discussion. The minimum, median, and maximum percentages of criteria fulfillment across tools were 21.6%, 47.7%, and 71.8%. Statistical analysis revealed no significant improvement in FAIRness over time. We identified four criteria that cover multiple FAIR4RS categories but had low fulfillment: (1) no software had semantic annotation of key information; (2) only 6.3% of evaluated tools were registered with Zenodo and received DOIs; (3) only 14.5% of selected tools provided an official software container or virtual machine image; (4) only 16.7% of evaluated tools had fully documented functions in code. Based on these results, we discuss improvement strategies and future directions.
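The fulfillment arithmetic behind such an evaluation is straightforward; the sketch below is a toy illustration with invented tool names and a reduced criteria set, not the paper's actual evaluation data.

import statistics

# Invented criteria and evaluations (True = fulfilled); the real study used a larger criteria set.
CRITERIA = ["has_doi", "containerized", "documented_functions", "semantic_annotation"]

EVALUATIONS = {
    "tool_a": {"has_doi": True,  "containerized": False, "documented_functions": True,  "semantic_annotation": False},
    "tool_b": {"has_doi": False, "containerized": True,  "documented_functions": False, "semantic_annotation": False},
    "tool_c": {"has_doi": True,  "containerized": True,  "documented_functions": True,  "semantic_annotation": False},
}

def fulfillment(scores: dict) -> float:
    """Percentage of criteria a single tool fulfills."""
    return 100.0 * sum(scores[c] for c in CRITERIA) / len(CRITERIA)

percentages = [fulfillment(scores) for scores in EVALUATIONS.values()]
for tool, scores in EVALUATIONS.items():
    print(f"{tool}: {fulfillment(scores):.1f}%")
print("min/median/max:", min(percentages), statistics.median(percentages), max(percentages))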


Assuntos
Metabolômica , Software , Metabolômica/métodos , Cromatografia Líquida/métodos , Espectrometria de Massas/métodos , Gerenciamento de Dados
8.
J Clin Transl Sci ; 7(1): e32, 2023.
Article in English | MEDLINE | ID: mdl-36845317

ABSTRACT

Background: The murder of George Floyd created a national outcry that pushed national institutions, including universities and academic systems, to take a hard look at systematic and systemic racism in higher education. This motivated the creation of a fear- and tension-minimizing curricular offering, "Courageous Conversations," collaboratively engaging students, staff, and faculty in matters of diversity, equity, and inclusion (DEI) in the Department of Health Outcomes and Biomedical Informatics at the University of Florida. Methods: A qualitative design was employed to assess narrative feedback from participants during the Fall semester of 2020. Additionally, the ten-factor model implementation framework was applied and assessed. Data collection included two focus groups and document analysis with member-checking. Thematic analysis (i.e., organizing, coding, synthesizing) was used to analyze a priori themes based on the four agreements of the Courageous Conversations framework: stay engaged, expect to experience discomfort, speak your truth, and expect and accept non-closure. Results: There were a total of 41 participants, of whom 20 (48.78%) were department staff members, 11 (26.83%) were department faculty members, and 10 (24.39%) were graduate students. The thematic analysis revealed 1) that many participants credited their learning experiences to what their peers had said about their own personal lived experiences during group sessions, and 2) that several participants said they would either retake the course or recommend it to a colleague. Conclusion: With structured implementation, courageous conversations can be an effective approach to creating more diverse, equitable, and inclusive spaces in training programs with similar DEI ecosystems.

9.
Phys Med Biol ; 68(1)2022 12 23.
Article in English | MEDLINE | ID: mdl-36279873

ABSTRACT

The Cancer Imaging Archive (TCIA) receives and manages an ever-increasing quantity of clinical (non-image) data containing valuable information about subjects in imaging collections. To harmonize and integrate these data, we first cataloged the types of information occurring across public TCIA collections. We then produced mappings for these diverse instance data using ontology-based representation patterns and transformed the data into a knowledge graph in a semantic database. This repository combines the transformed instance data with relevant background knowledge from domain ontologies. The resulting repository of semantically integrated data is a rich source of information about subjects that can be queried across imaging collections. Building on this work, we have implemented and deployed a REST API and a user-facing semantic cohort builder tool. This tool allows researchers and other users to search for and identify groups of subject-level records based on non-image data that were not queryable prior to this work. The search results produced by this interface link to images, allowing users to quickly identify and view images matching the selection criteria, as well as to export the harmonized clinical data.
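As a rough sketch of how a semantically integrated repository of this kind can be queried over SPARQL, the example below uses the SPARQLWrapper library; the endpoint URL, class IRIs, and predicates are placeholders, not the actual vocabulary of the TCIA knowledge graph.

from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint and vocabulary; the real repository defines its own IRIs.
ENDPOINT = "https://example.org/tcia-kg/sparql"

QUERY = """
PREFIX ex: <http://example.org/tcia#>
SELECT ?subject ?collection WHERE {
    ?subject a ex:ImagingSubject ;
             ex:memberOfCollection ?collection ;
             ex:hasDiagnosis ex:Adenocarcinoma .
}
LIMIT 25
"""

def fetch_cohort():
    """Run the cohort query and return (subject IRI, collection IRI) pairs."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(QUERY)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [(b["subject"]["value"], b["collection"]["value"])
            for b in results["results"]["bindings"]]

if __name__ == "__main__":
    for subject_iri, collection_iri in fetch_cohort():
        print(subject_iri, collection_iri)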


Subjects
Neoplasms, Software, Humans, Semantics, Neoplasms/diagnostic imaging, Diagnostic Imaging, Factual Databases
10.
Stud Health Technol Inform ; 295: 302-303, 2022 Jun 29.
Article in English | MEDLINE | ID: mdl-35773868

ABSTRACT

Integrating the clinical-pathological information of biobanks with genomics and epidemiological data and inferences in a structured and consistent manner, while mitigating the inherent heterogeneities of data and sample collection sites and sources, processing, and information storage, is primary to achieving an automated surveillance system. The Genomics Integrated Biobanking Ontology (GIBO) presents a solution for preserving the contextual meaning of heterogeneous data while interlinking different genomics and epidemiological concepts, in machine-comprehensible format, with the biobank framework. GIBO, an OWL ontology, introduces 84 new classes to integrate genomics data relevant to public health.
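For readers who want to inspect an OWL ontology such as GIBO programmatically, a minimal rdflib sketch is shown below; the file name is a placeholder and the snippet is not part of the GIBO release.

from rdflib import Graph, URIRef
from rdflib.namespace import OWL, RDF, RDFS

g = Graph()
g.parse("gibo.owl")  # placeholder path; rdflib guesses the RDF serialization from the suffix

# Collect named OWL classes (skipping anonymous class expressions, which are blank nodes).
named_classes = [c for c in g.subjects(RDF.type, OWL.Class) if isinstance(c, URIRef)]
print(f"{len(named_classes)} named classes")

for cls in named_classes:
    label = g.value(cls, RDFS.label)
    print(cls, "-", label if label else "(no label)")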


Subjects
Biological Specimen Banks, Genomics, Information Storage and Retrieval, Public Health, Specimen Handling
11.
J Pers Med ; 12(5)2022 May 07.
Article in English | MEDLINE | ID: mdl-35629179

ABSTRACT

To improve patient outcomes after trauma, the need to decrypt the post-traumatic immune response has been identified. One prerequisite for driving advancement in understanding that domain is the implementation of surgical biobanks. This paper focuses on the outcomes of patients with one of two diagnoses: post-traumatic arthritis and osteomyelitis. Currently, many obstacles must be overcome when creating surgical biobanks. Roadblocks exist around the scoping of the data to be collected and the semantic integration of these data. In this paper, the Generic Component Model and the Semantic Web technology stack are used to solve issues related to data integration. The results are twofold: (a) a scoping analysis of the data and of the ontologies required to harmonize and integrate them, and (b) the resolution of common data integration issues in integrating data relevant to trauma surgery.

12.
Metabolites ; 12(1)2022 Jan 17.
Article in English | MEDLINE | ID: mdl-35050209

ABSTRACT

Clinical metabolomics has emerged as a novel approach for biomarker discovery, with the translational potential to guide next-generation therapeutics and precision health interventions. However, reproducibility in clinical research employing metabolomics data is challenging. Checklists are a helpful tool for promoting reproducible research. Existing checklists that promote reproducible metabolomics research primarily focus on metadata and may not be sufficient to ensure reproducible metabolomics data processing. This paper provides a checklist of actions that researchers need to take to make computational steps reproducible for clinical metabolomics studies. We developed an eight-item checklist that includes criteria related to reusable data sharing and reproducible computational workflow development. We also provide recommended tools and resources to complete each item, as well as a GitHub project template to guide the process. The checklist is concise and easy to follow. Studies that follow this checklist and use the recommended resources may make it easier and more efficient for other researchers to reproduce metabolomics results.

13.
Stud Health Technol Inform ; 285: 3-14, 2021 Oct 27.
Article in English | MEDLINE | ID: mdl-34734847

ABSTRACT

To meet the challenges of aging, multi-diseased societies, cost containment, workforce development and consumerism through improved care quality and patient safety as well as more effective and efficient care processes, health and social care systems around the globe are undergoing an organizational, methodological and technological transformation towards personalized, preventive, predictive, participative precision medicine (P5 medicine). This paper addresses the opportunities, challenges and risks of specific disruptive methodologies and technologies for the transformation of health and social care systems, focusing especially on the deployment of intelligent and autonomous systems.


Subjects
Artificial Intelligence, Precision Medicine, Humans
14.
Stud Health Technol Inform ; 285: 159-164, 2021 Oct 27.
Article in English | MEDLINE | ID: mdl-34734868

ABSTRACT

The widespread use of Common Data Models and information models in biomedical informatics encourages the assumption that these models could provide the entirety of what is needed for knowledge representation purposes. Given the lack of computable semantics in frequently used Common Data Models, there appears to be a gap between knowledge representation requirements and these models. In this use-case-oriented approach, we explore how a system-theoretic, architecture-centric, ontology-based methodology can help to better understand this gap. We show how using the Generic Component Model helps to analyze the data management system in a way that accounts for data management procedures inside the system and for knowledge representation of the real world at the same time.


Subjects
Biological Ontologies, Semantics, Data Management
16.
Database (Oxford) ; 2021, 2021 07 09.
Article in English | MEDLINE | ID: mdl-34244718

ABSTRACT

The Ontology for Biomedical Investigations (OBI) underwent a focused review of assay term annotations, logic and hierarchy, with the goal of improving and standardizing these terms. As a result, inconsistencies in W3C Web Ontology Language (OWL) expressions were identified and corrected; in addition, standardized design patterns and a formalized template to maintain them were developed. We describe here this informative and productive process, the specific benefits and obstacles for OBI, and the universal lessons for similar projects.


Subjects
Biological Ontologies, Language, Reference Standards
17.
Trauma Surg Acute Care Open ; 5(1): e000473, 2020.
Article in English | MEDLINE | ID: mdl-32789188

ABSTRACT

BACKGROUND: During the past several decades, the American College of Surgeons has led efforts to standardize trauma care through its trauma center verification process and Trauma Quality Improvement Program. Despite these endeavors, great variability remains among trauma centers functioning at the same level. Little research has been conducted on the correlation between trauma center organizational structure and patient outcomes. We are attempting to close this knowledge gap with the Comparative Assessment Framework for Environments of Trauma Care (CAFE) project. METHODS: Our first action was to establish a shared terminology that we then used to build the Ontology of Organizational Structures of Trauma centers and Trauma systems (OOSTT). OOSTT underpins the web-based CAFE questionnaire that collects detailed information on the particular organizational attributes of trauma centers and trauma systems. This tool allows users to compare their organizations to an aggregate of other organizations of the same type, while collecting their data. RESULTS: In collaboration with the American College of Surgeons Committee on Trauma, we tested the system by entering data from three trauma centers and four trauma systems. We also tested retrieval of answers to competency questions. DISCUSSION: The data we gather will be made available to public health and implementation science researchers using visualizations. In the next phase of our project, we plan to link the gathered data about trauma center attributes to clinical outcomes.

18.
Med Phys ; 47(11): 5953-5965, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32772385

ABSTRACT

PURPOSE: The dataset contains annotations for lung nodules collected by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), stored as standard DICOM objects. The annotations accompany a collection of computed tomography (CT) scans for over 1000 subjects annotated by multiple expert readers, and correspond to "nodules ≥ 3 mm", defined as any lesion considered to be a nodule with greatest in-plane dimension in the range 3-30 mm regardless of presumed histology. The present dataset aims to simplify reuse of the data with readily available tools, and is targeted toward researchers interested in the analysis of lung CT images. ACQUISITION AND VALIDATION METHODS: Open-source tools were utilized to parse the project-specific XML representation of the LIDC-IDRI annotations and save the result as standard DICOM objects. Validation procedures focused on establishing compliance of the resulting objects with the standard, consistency of the data between the DICOM and project-specific representations, and evaluating interoperability with existing tools. DATA FORMAT AND USAGE NOTES: The dataset utilizes DICOM Segmentation objects for storing annotations of the lung nodules, and DICOM Structured Reporting objects for communicating qualitative evaluations (nine attributes) and quantitative measurements (three attributes) associated with the nodules. In total, 875 subjects contain 6859 nodule annotations. Clustering of neighboring annotations resulted in 2651 distinct nodules. The data are available in TCIA at https://doi.org/10.7937/TCIA.2018.h7umfurq. POTENTIAL APPLICATIONS: The standardized dataset maintains the content of the original contribution of the LIDC-IDRI consortium, and should be helpful in developing automated tools for characterization of lung lesions and image phenotyping. In addition, the representation of the present dataset makes it more FAIR (Findable, Accessible, Interoperable, Reusable) for the research community, and enables its integration with other standardized data collections.
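A short sketch of how the DICOM Segmentation objects in such a collection can be opened with pydicom is shown below; the file path is a placeholder, while the attribute keywords (SegmentSequence, SegmentLabel) follow the DICOM standard.

import pydicom

# Placeholder path to one DICOM Segmentation (SEG) object from the collection.
ds = pydicom.dcmread("nodule_seg.dcm")

print("Modality:", ds.Modality)  # expected to be "SEG" for segmentation objects
print("Series description:", ds.get("SeriesDescription", "(not present)"))

# Each item in SegmentSequence describes one stored segment (e.g., one nodule annotation).
for segment in ds.SegmentSequence:
    print(segment.SegmentNumber, segment.SegmentLabel)

# The per-frame binary masks are exposed as a NumPy array (requires numpy to be installed).
print("Frame array shape:", ds.pixel_array.shape)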


Subjects
Lung Neoplasms, Factual Databases, Humans, Lung/diagnostic imaging, Lung Neoplasms/diagnostic imaging, X-Ray Computed Tomography
19.
Stud Health Technol Inform ; 270: 1089-1093, 2020 Jun 16.
Article in English | MEDLINE | ID: mdl-32570549

ABSTRACT

The paper introduces a structured approach to transforming healthcare towards personalized, preventive, predictive, participative precision (P5) medicine and the related organizational, methodological and technological requirements. In this transformation, the deployment of autonomous systems and artificial intelligence is inevitable. The paper discusses opportunities and challenges of those technologies from a humanistic and ethical perspective. It briefly introduces the essential concepts and principles, and critically discusses some relevant projects. Finally, it offers ways to correctly represent, specify, implement and deploy autonomous and intelligent systems from an ethical perspective.


Subjects
Artificial Intelligence, Medicine, Delivery of Health Care, Morals
20.
AMIA Annu Symp Proc ; 2020: 554-563, 2020.
Article in English | MEDLINE | ID: mdl-33936429

ABSTRACT

A longstanding issue with knowledge bases that discuss drug-drug interactions (DDIs) is that they are inconsistent with one another. Computerized support might help experts be more objective in assessing DDI evidence. A requirement for such systems is accurate automatic classification of evidence types. In this pilot study, we developed a hierarchical classifier to classify clinical DDI studies into formally defined evidence types. The area under the ROC curve for sub-classifiers in the ensemble ranged from 0.78 to 0.87. The entire system achieved F1 scores of 0.83 and 0.63 on two held-out datasets, the latter consisting of drugs completely novel relative to the system's training data. The results suggest that it is feasible to accurately automate the classification of a subset of DDI evidence types and that the hierarchical approach shows promise. Future work will test more advanced feature engineering techniques while expanding the system to classify a more complex set of evidence types.
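The abstract does not give implementation details; as a generic sketch of a two-level (hierarchical) text classifier of the kind described, the example below uses scikit-learn with invented labels and toy training snippets, and is not the authors' system.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; a real system would use annotated DDI study abstracts.
texts = [
    "pharmacokinetic crossover trial measuring AUC and Cmax",
    "randomized clinical trial of interaction outcomes",
    "retrospective cohort study of co-prescription records",
    "case-control study using claims data",
]
top_labels = ["clinical_trial", "clinical_trial", "observational", "observational"]
sub_labels = ["pk_trial", "outcome_trial", "cohort", "case_control"]

# Level 1: broad evidence category.
top_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
top_clf.fit(texts, top_labels)

# Level 2: one sub-classifier per broad category, trained only on its slice of the data.
sub_clfs = {}
for category in set(top_labels):
    idx = [i for i, y in enumerate(top_labels) if y == category]
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit([texts[i] for i in idx], [sub_labels[i] for i in idx])
    sub_clfs[category] = clf

def classify(text: str) -> str:
    """Route a document through the top-level classifier, then the matching sub-classifier."""
    category = top_clf.predict([text])[0]
    return sub_clfs[category].predict([text])[0]

print(classify("prospective cohort study of warfarin co-prescription"))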


Subjects
Data Mining/methods, Factual Databases, Drug Interactions, Machine Learning, Publications, Computers, Data Mining/statistics & numerical data, Factual Databases/statistics & numerical data, Humans, Natural Language Processing, Pilot Projects