Results 1 - 20 of 43
1.
PeerJ Comput Sci ; 10: e2097, 2024.
Article in English | MEDLINE | ID: mdl-38983207

ABSTRACT

With the rapid advancement of robotics technology, an increasing number of researchers are exploring the use of natural language as a communication channel between humans and robots. In language-conditioned manipulation grounding scenarios, prevailing methods rely heavily on supervised multimodal deep learning. In this paradigm, robots assimilate knowledge from both language instructions and visual input. However, these approaches lack external knowledge for comprehending natural language instructions and are hindered by the substantial demand for paired data, where vision and language are usually linked through manual annotation for the creation of realistic datasets. To address these problems, we propose the knowledge-enhanced bottom-up affordance grounding network (KBAG-Net), which enhances natural language understanding through external knowledge, improving accuracy in object grasping affordance segmentation. In addition, we introduce a semi-automatic data generation method aimed at facilitating the quick establishment of language-following manipulation grounding datasets. Experimental results on two standard datasets demonstrate that our method, with its external knowledge, outperforms existing methods. Specifically, it outperforms the two-stage method by 12.98% and 1.22% mIoU on the two datasets, respectively. For broader community engagement, we will make the semi-automatic data construction method publicly available at https://github.com/wmqu/Automated-Dataset-Construction4LGM.
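As a reminder of how the mIoU metric reported above is computed, here is a minimal sketch (the masks and class labels are invented for illustration, not taken from the paper):

```python
def iou(pred, target):
    """Intersection-over-union of two binary masks given as sets of pixel indices."""
    union = pred | target
    if not union:
        return 1.0
    return len(pred & target) / len(union)

def mean_iou(per_class_masks):
    """mIoU: the IoU averaged over classes; each entry is (predicted, ground truth)."""
    scores = [iou(p, t) for p, t in per_class_masks]
    return sum(scores) / len(scores)

# Hypothetical affordance masks for two classes (e.g. "grasp", "support").
masks = [
    ({1, 2, 3, 4}, {2, 3, 4, 5}),  # IoU = 3/5
    ({10, 11}, {10, 11}),          # IoU = 1.0
]
print(round(mean_iou(masks), 2))  # 0.8
```

A percentage-point gain such as the reported 12.98% is a difference between two such averages computed on the same test set.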

2.
Artif Intell Rev ; : 1-32, 2023 Apr 03.
Article in English | MEDLINE | ID: mdl-37362886

ABSTRACT

With the explosive growth of artificial intelligence (AI) and big data, it has become vitally important to organize and represent the enormous volume of knowledge appropriately. As graph data, knowledge graphs accumulate and convey knowledge of the real world. It is well recognized that knowledge graphs effectively represent complex information; hence, they have rapidly gained the attention of academia and industry in recent years. Thus, to develop a deeper understanding of knowledge graphs, this paper presents a systematic overview of this field. Specifically, we focus on the opportunities and challenges of knowledge graphs. We first review the opportunities of knowledge graphs in terms of two aspects: (1) AI systems built upon knowledge graphs; (2) potential application fields of knowledge graphs. Then, we thoroughly discuss severe technical challenges in this field, such as knowledge graph embeddings, knowledge acquisition, knowledge graph completion, knowledge fusion, and knowledge reasoning. We expect that this survey will shed new light on future research and the development of knowledge graphs.

3.
Int J Soc Robot ; 15(3): 445-472, 2023.
Article in English | MEDLINE | ID: mdl-34804257

ABSTRACT

Social companion robots are receiving increasing attention for assisting elderly people to stay independent at home and for decreasing their social isolation. When developing solutions, one remaining challenge is designing the right applications, ones that are usable by elderly people. For this purpose, co-creation methodologies involving multiple stakeholders and a multidisciplinary researcher team (e.g., elderly people, medical professionals, and computer scientists such as roboticists or IoT engineers) are designed within the ACCRA (Agile Co-Creation of Robots for Ageing) project. This paper addresses the following research question: How can Internet of Robotic Things (IoRT) technology and co-creation methodologies help to design emotion-based robotic applications? This is supported by the ACCRA project, which develops advanced social robots to support active and healthy ageing, co-created by various stakeholders such as ageing people and physicians. We demonstrate this with three robots, Buddy, ASTRO, and RoboHon, used for daily life, mobility, and conversation. The three robots understand and convey emotions in real time using Internet of Things and Artificial Intelligence technologies (e.g., knowledge-based reasoning).

4.
Biotechnol Adv ; 62: 108069, 2023.
Article in English | MEDLINE | ID: mdl-36442697

ABSTRACT

Metabolic engineering encompasses several widely used strategies and currently holds a high seat in the field of biotechnology, with its potential manifesting through a plethora of research and commercial products with a strong societal impact. The genomic revolution that occurred almost three decades ago initiated the generation of large omics datasets, which has helped in gaining a better understanding of cellular behavior. The itinerary of metabolic engineering based on these large datasets has allowed researchers to gain detailed insights and a reasonable understanding of the intricacies of biosystems. However, the existing trial-and-error approaches to metabolic engineering are laborious and time-intensive when it comes to the production of target compounds with high yields through genetic manipulations in host organisms. Machine learning (ML), coupled with the available metabolic engineering test instances and omics data, brings a comprehensive and multidisciplinary approach that enables scientists to evaluate various parameters for effective strain design. This vast amount of biological data should be standardized through knowledge engineering to train different ML models for providing accurate predictions in gene circuit design, modification of proteins, optimization of bioprocess parameters for scaling up, and screening of hyper-producing robust cell factories. This review outlines the premise of ML, then describes various ML methods and algorithms alongside the numerous omics datasets available to train ML models for predicting metabolic outcomes with high accuracy. The combinative interplay between ML algorithms and biological datasets through knowledge engineering has guided recent advancements in applications such as CRISPR/Cas systems, gene circuits, protein engineering, metabolic pathway reconstruction, and bioprocess engineering. Finally, this review addresses the probable challenges of applying ML in metabolic engineering, which will guide researchers toward novel techniques for overcoming the limitations.


Subjects
Biotechnology, Metabolic Engineering, Metabolic Engineering/methods, CRISPR-Cas Systems, Protein Engineering, Machine Learning
5.
Front Psychol ; 13: 996609, 2022.
Article in English | MEDLINE | ID: mdl-36507004

ABSTRACT

Personality disorders are psychological ailments with a major negative impact on patients, their families, and society in general, especially those of the dramatic and emotional type. Despite all the research, there is still no consensus on the best way to assess and treat them. Traditional assessment of personality disorders has focused on a limited number of psychological constructs or behaviors using structured interviews and questionnaires, without an integrated and holistic approach. We present a novel methodology for the study and assessment of personality disorders consisting of the development of a Bayesian network whose parameters have been obtained by the Delphi consensus method from a group of experts in the diagnosis and treatment of personality disorders. The result is a probabilistic graphical model that represents the psychological variables related to personality disorders along with their relations and conditional probabilities, which allows the symptoms with the highest diagnostic potential to be identified. This model can be used, among other applications, as a decision support system for the assessment and treatment of personality disorders of the dramatic or emotional cluster. In this paper, we discuss the need to validate this model in the clinical population along with its strengths and limitations.
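The core idea of ranking symptoms by diagnostic potential can be sketched with a single application of Bayes' rule; all variable names, sensitivities, and false-positive rates below are hypothetical illustrations, not the expert-elicited parameters from the paper:

```python
# Hypothetical sensitivities and false-positive rates for two symptoms;
# these are NOT the Delphi-elicited parameters from the paper.
PRIOR = 0.10                             # assumed prevalence of the disorder
SYMPTOMS = {
    "impulsivity":         (0.80, 0.20),  # (P(symptom | disorder), P(symptom | no disorder))
    "fear_of_abandonment": (0.70, 0.05),
}

def posterior(sens, fpr, prior=PRIOR):
    """P(disorder | symptom present), by Bayes' rule."""
    num = sens * prior
    return num / (num + fpr * (1 - prior))

# Rank symptoms by how much observing them raises the diagnostic posterior.
ranked = sorted(SYMPTOMS, key=lambda s: posterior(*SYMPTOMS[s]), reverse=True)
print(ranked[0])  # fear_of_abandonment (lower false-positive rate -> higher posterior)
```

In the full model this computation runs over a network of conditioned variables rather than a single prior, but the ranking principle is the same.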

6.
Artif Intell Med ; 129: 102324, 2022 07.
Article in English | MEDLINE | ID: mdl-35659389

ABSTRACT

BACKGROUND: Traditionally, guideline (GL)-based Decision Support Systems (DSSs) use a centralized infrastructure to generate recommendations to care providers, rather than to patients at home. However, managing patients at home is often preferable, reducing costs and empowering patients. Thus, we wanted to explore an option in which patients, in particular chronic patients, might be assisted by a local DSS, which interacts as needed with the central DSS engine, to manage their disease outside the standard clinical settings. OBJECTIVES: To design, implement, and demonstrate the technical and clinical feasibility of a new architecture for a distributed DSS that provides patients with evidence-based guidance, offered through applications running on the patients' mobile devices, monitoring and reacting to changes in the patient's personal environment, and providing the patients with appropriate GL-based alerts and personalized recommendations; and to increase the overall robustness of the distributed application of the GL. METHODS: We have designed and implemented a novel projection-callback (PCB) model, in which small portions of the evidence-based guideline's procedural knowledge are projected from a projection engine within the central DSS server to a local DSS that resides on each patient's mobile device. The local DSS applies the knowledge using the mobile device's local resources. The GL projections generated by the projection engine are adapted to the patient's previously defined preferences and, implicitly, to the patient's current context, in a manner that is embodied in the projected therapy plans. When appropriate, as defined by a temporal pattern within the projected plan, the local DSS calls back the central DSS, requesting further assistance, possibly another projection. To support the new model, the initial specification of the GL includes two levels: one for the central DSS, and one for the local DSS.
We have implemented a distributed GL-based DSS using the projection-callback model within the MobiGuide EU project, which automatically manages chronic patients at home using sensors on the patients and their mobile phones. We assessed the new GL specification process by specifying two very different, complex GLs: for Gestational Diabetes Mellitus, and for Atrial Fibrillation. Then, we evaluated the new computational architecture by applying the two GLs to the automated clinical management, in real time, of patients in two different countries: Spain and Italy, respectively. RESULTS: The specification using the new projection-callback model was found to be quite feasible. We found significant differences between the distributed versions of the two GLs, suggesting further research directions and possibly additional ways to analyze and characterize GLs. Applying the two GLs to the two patient populations proved highly feasible as well. The mean time between the central and local interactions was quite different for the two GLs: 3.95 ± 1.95 days in the case of the gestational diabetes domain, and 23.80 ± 12.47 days in the case of the atrial fibrillation domain, probably corresponding to the difference in the distributed specifications of the two GLs. Most of the interaction types were due to projections to the local DSS (83%); others were data notifications, mostly to change context (17%). Some of the data notifications were triggered due to technical errors. The robustness of the distributed architecture was demonstrated through the successful recovery from multiple crashes of the local DSS. CONCLUSIONS: The new projection-callback model has been demonstrated to be feasible, from specification to distributed application. Different GLs might significantly differ, however, in their distributed specification and application characteristics.
Distributed medical DSSs can facilitate the remote management of chronic patients by enabling the central DSSs to delegate, in a dynamic fashion, determined by the patient's context, much of the monitoring and treatment management decisions to the mobile device. Patients can be kept in their home environment, while still maintaining, through the projection-callback mechanism, several of the advantages of a central DSS, such as access to the patient's longitudinal record, and to an up-to-date evidence-based GL repository.
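The projection-callback interaction described above can be sketched as a pair of cooperating components; this is a loose structural illustration under invented names (guideline content, contexts, and trigger patterns are all hypothetical), not the MobiGuide implementation:

```python
# Sketch of the projection-callback idea: the central DSS projects a small
# portion of the guideline to the local DSS, which applies it on the device
# and calls back when a predefined pattern is observed.

class CentralDSS:
    def __init__(self, guideline):
        self.guideline = guideline          # full GL procedural knowledge

    def project(self, patient_context):
        # Send only the plan portion relevant to the current context.
        return self.guideline[patient_context]

class LocalDSS:
    def __init__(self, central, context):
        self.central = central
        self.plan = central.project(context)

    def step(self, observation):
        # Apply the projected plan locally; call back on the trigger pattern.
        if observation in self.plan["callback_on"]:
            self.plan = self.central.project(self.plan["next_context"])
            return "callback"
        return self.plan["advice"]

# Hypothetical two-context guideline fragment (not from the actual GLs).
guideline = {
    "gdm_monitoring": {"advice": "log glucose", "callback_on": {"high_glucose"},
                       "next_context": "gdm_escalation"},
    "gdm_escalation": {"advice": "contact clinician", "callback_on": set(),
                       "next_context": None},
}
local = LocalDSS(CentralDSS(guideline), "gdm_monitoring")
print(local.step("normal_glucose"))  # log glucose
print(local.step("high_glucose"))    # callback (a new projection is received)
print(local.step("anything"))        # contact clinician
```

The point of the split is that routine decisions stay on the device, and only the pattern-triggered callbacks cross back to the central server.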


Subjects
Mobile Applications, Decision Making, Computer-Assisted, Humans
7.
Comput Biol Med ; 145: 105313, 2022 06.
Article in English | MEDLINE | ID: mdl-35405400

ABSTRACT

Rare disease data are often fragmented across multiple heterogeneous, siloed regional disease registries, each containing a small number of cases. These data are particularly sensitive, as low subject counts make the identification of patients more likely, meaning registries are not inclined to share subject-level data outside their registries. At the same time, access to multiple rare disease datasets is important, as it will lead to new research opportunities and analysis over larger cohorts. To enable this, two major challenges must be overcome. The first is to integrate data at a semantic level, so that it is possible to query across registries and return results that are comparable. The second is to enable queries that do not take subject-level data from the registries. To meet the first challenge, this paper presents the FAIRVASC ontology to manage data related to the rare disease anti-neutrophil cytoplasmic antibody (ANCA) associated vasculitis (AAV), which is based on the harmonisation of terms in seven European data registries. It has been built upon a set of key clinical questions developed by a team of experts in vasculitis selected from the registry sites, and it makes use of several standard classifications, such as Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT) and Orphacode. It also presents the method for adding semantic meaning to AAV data across the registries using the declarative Relational to Resource Description Framework Mapping Language (R2RML). To meet the second challenge, a federated querying approach is presented for accessing aggregated and pseudonymized data, which supports analysis of AAV data in a manner that protects patient privacy. For additional security, the federated querying approach is augmented with a method for auditing queries (and the uplift process) using the provenance ontology (PROV-O) to track when queries and changes occur and by whom.
The main contribution of this work is the successful application of semantic web technologies and federated queries to provide a novel infrastructure that can readily incorporate additional registries, thus providing access to harmonised data relating to unprecedented numbers of patients with rare disease, while also meeting data privacy and security concerns.
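The privacy property of the federated approach — only aggregates, never subject-level records, cross the registry boundary — can be sketched in a few lines. The registry data, diagnosis codes, and suppression threshold below are invented for illustration:

```python
# Sketch of a federated count query over siloed registries: each registry
# returns only an aggregate, and small counts are suppressed so that
# near-unique subjects cannot be re-identified.

K_ANON = 5   # hypothetical suppression threshold for small cells

registries = {
    "registry_A": [{"diagnosis": "GPA"}, {"diagnosis": "MPA"}, {"diagnosis": "GPA"}],
    "registry_B": [{"diagnosis": "GPA"}] * 7,
}

def local_count(records, diagnosis):
    """Computed inside the registry; only the (possibly suppressed) count leaves."""
    n = sum(1 for r in records if r["diagnosis"] == diagnosis)
    return n if n >= K_ANON else 0

def federated_count(diagnosis):
    # Only aggregates cross the registry boundary, never subject-level rows.
    return sum(local_count(recs, diagnosis) for recs in registries.values())

print(federated_count("GPA"))  # 7: registry_A's count of 2 is suppressed
```

In the actual infrastructure the local step is a SPARQL query over R2RML-uplifted data, but the aggregation-and-suppression pattern is the same.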


Subjects
Semantic Web, Vasculitis, Humans, Rare Diseases, Registries, Systematized Nomenclature of Medicine
8.
Bioengineering (Basel) ; 9(3)2022 Feb 23.
Article in English | MEDLINE | ID: mdl-35324779

ABSTRACT

Shape memory materials have been playing an important role in a wide range of bioengineering applications. At the same time, recent developments of graphene-based nanostructures, such as nanoribbons, have demonstrated that, due to the unique properties of graphene, they can manifest superior electronic, thermal, mechanical, and optical characteristics ideally suited for their potential usage in the next generation of diagnostic devices, drug delivery systems, and other biomedical applications. One of the most intriguing parts of these new developments lies in the fact that certain types of such graphene nanoribbons can exhibit shape memory effects. In this paper, we apply machine learning tools to build an interatomic potential from DFT calculations for highly ordered graphene oxide nanoribbons, a material that has demonstrated shape memory effects with a recovery strain up to 14.5% for 2D layers. The graphene oxide layer can shrink to a metastable phase with a lower lattice constant through the application of an electric field, and returns to the initial phase through an external mechanical force. The deformation leads to an electronic rearrangement and induces magnetization around the oxygen atoms. DFT calculations show no magnetization for sufficiently narrow nanoribbons, while the machine learning model can predict the suppression of the metastable phase for the same narrower nanoribbons. We can improve the prediction accuracy by analyzing only the evolution of the metastable phase, where no magnetization is found according to DFT calculations. The model developed here also allows us to study the evolution of the phases for wider nanoribbons, which would be computationally inaccessible through a pure DFT approach. Moreover, we extend our analysis to realistic systems that include vacancies and boron or nitrogen impurities at the oxygen atomic positions.
Finally, we provide a brief overview of the current and potential applications of the materials exhibiting shape memory effects in bioengineering and biomedical fields, focusing on data-driven approaches with machine learning interatomic potentials.

9.
Article in English | MEDLINE | ID: mdl-35055616

ABSTRACT

Accident, injury, and fatality rates remain disproportionately high in the construction industry. Information from past mishaps provides an opportunity to acquire insights, gather lessons learned, and systematically improve safety outcomes. Advances in data science and Industry 4.0 present unprecedented new opportunities for the industry to leverage, share, and reuse safety information more efficiently. However, the potential benefits of information sharing are missed because accident data are inconsistently formatted, non-machine-readable, and inaccessible. Hence, learning opportunities and insights cannot be captured and disseminated to proactively prevent accidents. To address these issues, a novel information sharing system is proposed utilizing linked data, ontologies, and knowledge graph technologies. An ontological approach is developed to semantically model safety information and formalize knowledge pertaining to accident cases. A multi-algorithmic approach is developed for automatically processing and converting accident case data to the Resource Description Framework (RDF), and the SPARQL protocol is deployed to enable query functionalities. Trials and test scenarios utilizing a dataset of 200 real accident cases confirm the effectiveness and efficiency of the system in improving information access, retrieval, and reusability. The proposed development facilitates a new "open" information sharing paradigm with major implications for Industry 4.0 and data-driven applications in construction safety management.


Subjects
Biological Ontologies, Semantic Web, Information Dissemination, Pattern Recognition, Automated, Semantics
10.
JAMIA Open ; 4(4): ooab106, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34927003

ABSTRACT

OBJECTIVE: Clinical Knowledge Authoring Tools (CKATs) are integral to the computerized Clinical Decision Support (CDS) development life cycle. CKATs enable authors to generate accurate, complete, and reliable digital knowledge artifacts in a relatively efficient and affordable manner. This scoping review aims to compare knowledge authoring tools and derive the common features of CKATs. MATERIALS AND METHODS: We performed a keyword-based literature search, followed by a snowball search, to identify peer-reviewed publications describing the development or use of CKATs. We used the PubMed and Embase search engines to perform the initial search (n = 1579). After removing duplicate articles, nonrelevant manuscripts, and non-peer-reviewed publications, we identified 47 eligible studies describing 33 unique CKATs. The reviewed CKATs were further assessed, and salient characteristics were extracted and grouped as common CKAT features. RESULTS: Among the identified CKATs, 55% use an open-source platform, 70% provide an application programming interface for CDS system integration, and 79% provide features to validate/test the knowledge. The majority of the reviewed CKATs describe the flow of information, offer a graphical user interface for knowledge authors, and provide IntelliSense coding features (94%, 97%, and 97%, respectively). The composed list of criteria for CKATs included topics such as simulating the clinical setting, validating the knowledge, standardized clinical models and vocabulary, and domain independence. None of the reviewed CKATs met all common criteria. CONCLUSION: Our scoping review highlights the key specifications for a CKAT. The CKAT specification proposed in this review can guide CDS authors in developing more targeted CKATs.

11.
Article in English | MEDLINE | ID: mdl-34886304

ABSTRACT

Three key challenges to a whole-system approach to process improvement in health systems are the complexity of socio-technical activity, the capacity to change purposefully, and the consequent capacity to proactively manage and govern the system. The literature on healthcare improvement demonstrates the persistence of these problems. In this project, the Access-Risk-Knowledge (ARK) Platform, which supports the implementation of improvement projects, was deployed across three healthcare organisations to address risk management for the prevention and control of healthcare-associated infections (HCAIs). In each organisation, quality and safety experts initiated an ARK project and participated in a follow-up survey and focus group. The platform was then evaluated against a set of fifteen needs related to complex system transformation. While the results highlighted concerns about the platform's usability, feedback was generally positive regarding its effectiveness and potential value in supporting HCAI risk management. The ARK Platform addresses the majority of identified needs for system transformation; other needs were validated in the trial or are undergoing development. This trial provided a starting point for a knowledge-based solution to enhance organisational governance and develop shared knowledge through a Community of Practice that will contribute to sustaining and generalising that change.


Subjects
Delivery of Health Care, Knowledge, Government Programs, Health Facilities, Organizations
12.
Stud Health Technol Inform ; 285: 277-280, 2021 Oct 27.
Article in English | MEDLINE | ID: mdl-34734886

ABSTRACT

The aim of the work presented in this article was to develop a conceptual model of behavior change progress that could be used for automated assessment of the reasons for progress or non-progress. The model was developed based on theories of behavior change and evaluated by domain experts. The information models of two prototype systems of a digital coach under development, for preventing cardiovascular diseases and stress respectively, were evaluated by comparing the content of the prototypes with concepts in the model. The conceptual model was found useful as an instrument to evaluate the extent to which the prototypes are based on theories of behavior change, whether vital information is missing, and to identify mechanisms for short- and long-term goal setting. Moreover, the connection between the ontology underpinning the prototypes and the conceptual model could be defined. Future work includes integrating the conceptual model to function as a meta-ontology, which could be used for capturing causal relationships between information collected by the applications at baseline and at runtime.


Subjects
Models, Theoretical
13.
Procedia Comput Sci ; 192: 3580-3589, 2021.
Article in English | MEDLINE | ID: mdl-34630752

ABSTRACT

The Covid-19 pandemic caused serious turbulence in most aspects of human activity. Because epidemic developments must be addressed at extreme scales, ranging from the entire population of a country down to the level of individual citizens, the construction of adequate mathematical models faces substantial difficulties caused by a lack of knowledge about the mechanisms driving transmission of the infection and the very nature of the resulting disease. Therefore, in modeling Covid-19 and its effects, a shift from the knowledge-intensive systems paradigm to the data-intensive one is needed. The current paper is devoted to the architecture of ProME, a data-intensive system for forecasting Covid-19 and supporting the decision making needed to mitigate the pandemic's effects. The system has been constructed to address the mentioned challenges and to allow relatively easy further adaptation to the dynamically changing situation. The system is mainly based on open-source solutions, so it can be reproduced whenever similar challenges occur.

14.
MethodsX ; 8: 101477, 2021.
Article in English | MEDLINE | ID: mdl-34434876

ABSTRACT

A method is proposed for generating application-domain-agnostic data for training and evaluating machine learning systems. The proposed method randomly generates an expert system network based upon user-specified parameters. This expert system serves as a generic model of an unspecified phenomenon. The expert system is run to determine the ideal output from a set of random inputs. These inputs and ideal outputs are used for training and testing a machine learning system. This allows a machine learning technology to be developed and tested without requiring compatible test data to be collected, or before data collection, as a proof-of-concept validation of system operations. It also allows systems to be tested without data error noise, or with known levels of noise and other perturbations, to facilitate analysis. It may also facilitate testing system security, adversarial attacks, and conducting other types of research into machine learning systems.
• Provides an application-domain-agnostic way to test machine learning technologies and facilitates the generalization of results.
• Allows technologies to be tested on data with different characteristics without having to locate datasets that have those characteristics.
• Utilizes a randomly generated network to represent non-specific phenomena, which can be used for training and testing machine learning techniques.
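The generate-then-label workflow described above can be sketched as follows; the rule structure (conjunctive conditions over binary inputs) and all parameters are illustrative assumptions, not the paper's exact generator:

```python
import random

# Sketch: randomly generate a small "expert system" network, then use it
# to label random inputs, producing a synthetic ML training/testing set.

def make_expert_system(n_inputs, n_rules, seed=0):
    rng = random.Random(seed)
    # Each rule is a random subset of inputs that must all be 1 to fire.
    return [frozenset(rng.sample(range(n_inputs), rng.randint(1, n_inputs)))
            for _ in range(n_rules)]

def ideal_output(rules, inputs):
    """Output 1 iff any rule's conditions are fully satisfied."""
    return int(any(all(inputs[i] for i in rule) for rule in rules))

def make_dataset(rules, n_inputs, n_examples, seed=1):
    rng = random.Random(seed)
    data = []
    for _ in range(n_examples):
        x = [rng.randint(0, 1) for _ in range(n_inputs)]
        data.append((x, ideal_output(rules, x)))
    return data

rules = make_expert_system(n_inputs=4, n_rules=3)
dataset = make_dataset(rules, n_inputs=4, n_examples=100)
print(len(dataset))  # 100 labeled examples, no real-world data needed
```

Because the labeling function is fully known, controlled label noise or input perturbations can be injected at a chosen rate to study robustness.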

15.
Genes (Basel) ; 12(7)2021 06 29.
Article in English | MEDLINE | ID: mdl-34209818

ABSTRACT

This study builds a coronavirus knowledge graph (KG) by merging two information sources. The first source is Analytical Graph (AG), which integrates more than 20 different public datasets related to drug discovery. The second source is CORD-19, a collection of published scientific articles related to COVID-19. We combined the chemogenomic entities in AG with entities extracted from CORD-19 to expand knowledge in the COVID-19 domain. Before populating the KG with those entities, we performed entity disambiguation on the CORD-19 collection using Wikidata. Our newly built KG contains at least 21,700 genes, 2500 diseases, 94,000 phenotypes, and other biological entities (e.g., compounds, species, and cell lines). We define 27 relationship types and use them to label each edge in our KG. This research presents two cases to evaluate the KG's usability: analyzing a subgraph (an ego-centered network) around the angiotensin-converting enzyme (ACE) and revealing paths between biological entities (hydroxychloroquine and the IL-6 receptor; chloroquine and STAT1). The ego-centered network captured information related to COVID-19. We also found significant COVID-19-related information in top-ranked paths with a depth of three based on our path evaluation.
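Path discovery at a bounded depth, as used in the evaluation above, can be sketched with a simple traversal; the toy edges below (including the intermediate node) are hypothetical and not taken from the actual AG/CORD-19 graph:

```python
# Sketch: enumerate all simple paths of at most max_depth edges between
# two entities in a (toy, hypothetical) knowledge graph.

graph = {
    "hydroxychloroquine": ["TLR9", "ACE2"],
    "TLR9": ["IL6R"],
    "ACE2": ["SARS-CoV-2"],
    "IL6R": [],
    "SARS-CoV-2": [],
}

def paths_up_to_depth(graph, src, dst, max_depth=3):
    """All simple src->dst paths with at most max_depth edges."""
    out, stack = [], [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst:
            out.append(path)
            continue
        if len(path) - 1 < max_depth:      # edges used so far
            for nxt in graph.get(node, []):
                if nxt not in path:        # keep paths simple
                    stack.append((nxt, path + [nxt]))
    return out

print(paths_up_to_depth(graph, "hydroxychloroquine", "IL6R"))
# [['hydroxychloroquine', 'TLR9', 'IL6R']]
```

The study's depth-three ranking adds edge-type labels and a scoring step on top of an enumeration like this one.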


Subjects
COVID-19, Knowledge Bases, COVID-19/epidemiology, COVID-19/etiology, Chloroquine/pharmacology, Computer Graphics, Databases, Factual, Hemorrhagic Fever, Ebola/drug therapy, Humans, Hydroxychloroquine/pharmacology, Pattern Recognition, Automated, Peptidyl-Dipeptidase A/genetics, PubMed, Receptors, Interleukin-6/blood, SARS-CoV-2, STAT1 Transcription Factor
16.
Sensors (Basel) ; 21(6)2021 Mar 17.
Article in English | MEDLINE | ID: mdl-33803046

ABSTRACT

The copper mining industry is increasingly using artificial intelligence methods to improve copper production processes. Recent studies reveal the use of algorithms such as Artificial Neural Network, Support Vector Machine, and Random Forest, among others, to develop models for predicting product quality. Other studies compare the predictive models developed with these machine learning algorithms in the mining industry as a whole. However, few published copper mining studies compare the results of machine learning techniques for copper recovery prediction. This study makes a detailed comparison between three models for predicting copper recovery by leaching, using four datasets resulting from mining operations in Northern Chile. The algorithms used for developing the models were Random Forest, Support Vector Machine, and Artificial Neural Network. To validate these models, four indicators or figures of merit were used: accuracy (acc), precision (p), recall (r), and the Matthews correlation coefficient (mcc). This paper describes the dataset preparation and the refinement of the threshold values used for the predictive variable most influential on the class (the copper recovery). Results show a precision of over 98.50% and identify the model with the best agreement between predicted and real values. Finally, the obtained models have the following mean values: acc = 0.943, p = 88.47, r = 0.995, and mcc = 0.232. These values are highly competitive when compared with those obtained in similar studies using other approaches in this context.
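The abstract's combination of high accuracy and recall with a low MCC is characteristic of class-imbalanced data. A minimal sketch of the four figures of merit, computed from a binary confusion matrix with invented counts, shows the same pattern:

```python
import math

def figures_of_merit(tp, fp, fn, tn):
    """Accuracy, precision, recall, and Matthews correlation coefficient."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, p, r, mcc

# Invented counts for an imbalanced binary class (most samples positive):
acc, p, r, mcc = figures_of_merit(tp=990, fp=60, fn=5, tn=5)
print(f"acc={acc:.3f} p={p:.3f} r={r:.3f} mcc={mcc:.3f}")
# high accuracy and recall, but low MCC, because MCC also rewards
# correct predictions on the rare negative class
```

This is why MCC is often reported alongside accuracy for imbalanced classification problems such as this one.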

17.
Neural Netw ; 139: 168-178, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33721699

ABSTRACT

Although zero-shot learning (ZSL) has the inferential capability to recognize new classes that have never been seen before, it faces two fundamental challenges: cross-modality and cross-domain. To alleviate these problems, we develop a generative network-based ZSL approach equipped with the proposed Cross Knowledge Learning (CKL) scheme and Taxonomy Regularization (TR). In our approach, semantic features are taken as inputs, and the output is the synthesized visual features generated from the corresponding semantic features. CKL enables more relevant semantic features to be trained for semantic-to-visual feature embedding in ZSL, while TR significantly improves the intersections with unseen images through more generalized visual features generated by the generative network. Extensive experiments on several benchmark datasets (i.e., AwA1, AwA2, CUB, NAB, and aPY) show that our approach is superior to state-of-the-art methods in terms of ZSL image classification and retrieval.


Subjects
Classification/methods, Knowledge Bases, Machine Learning, Humans, Semantics
18.
Brain Inform ; 7(1): 2, 2020 Mar 26.
Article in English | MEDLINE | ID: mdl-32219575

ABSTRACT

Research advancements in neuroscience entail the production of a substantial amount of data requiring interpretation, analysis, and integration. The complexity and diversity of neuroscience data necessitate the development of specialized databases and associated standards and protocols. NeuroMorpho.Org is an online repository of over one hundred thousand digitally reconstructed neurons and glia shared by hundreds of laboratories worldwide. Every entry of this public resource is associated with essential metadata describing animal species, anatomical region, cell type, experimental condition, and additional information relevant to contextualize the morphological content. Until recently, the lack of a user-friendly, structured metadata annotation system relying on standardized terminologies constituted a major hindrance in this effort, limiting the data release pace. Over the past 2 years, we have transitioned the original spreadsheet-based metadata annotation system of NeuroMorpho.Org to a custom-developed, robust, web-based framework for extracting, structuring, and managing neuroscience information. Here we release the metadata portal publicly and explain its functionality to enable usage by data contributors. This framework facilitates metadata annotation, improves terminology management, and accelerates data sharing. Moreover, its open-source development provides the opportunity of adapting and extending the code base to other related research projects with similar requirements. This metadata portal is a beneficial web companion to NeuroMorpho.Org which saves time, reduces errors, and aims to minimize the barrier for direct knowledge sharing by domain experts. The underlying framework can be progressively augmented with the integration of increasingly autonomous machine intelligence components.

19.
Artif Intell Med ; 109: 101896, 2020 09.
Article in English | MEDLINE | ID: mdl-34756213

ABSTRACT

Atrial Fibrillation (AF) at an early stage has a short duration and is sometimes asymptomatic, making it difficult to detect. Although mobile sensing devices have made real-time cardiac monitoring possible, they are highly susceptible to noise generated by body movement. It is therefore important to study noise-immune early AF detection for mobile terminals. Extracting effective features is critical to AF detection, but most existing studies use shallow time, frequency, or time-frequency energy (TFE) features with weak representational power: these rely on long ECG signals to capture variation and cannot sensitively detect the subtle changes caused by early AF. In addition, most studies only consider discriminating AF from normal sinus rhythm (SR), ignoring interference from noise and other signals. This study proposes three new deep features that accurately capture the subtle variation in short ECG segments caused by early AF, examines the interference of noise and other signals generated by the mobile terminal, and proposes a new feature set for early AF detection. We use six popular classifiers to evaluate the effectiveness of the proposed deep features against features extracted by two conventional time-frequency methods, and the performance of the proposed feature set for detecting early AF. The best results for classifying AF versus SR are obtained by Random Forest (RF), with an F1 score of 0.96. The best results for classifying the four signal types are obtained by Extreme Gradient Boosting (XGBoost), with an overall F1 score of 0.88 and individual F1 scores of 0.91, 0.90, 0.73, and 0.96 for SR, AF, Other, and Noisy, respectively.
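The evaluation protocol described above — training ensemble classifiers on extracted ECG features and comparing them by F1 score — can be sketched as follows. This is an illustrative outline only, not the paper's pipeline: the synthetic feature vectors stand in for the deep features the study extracts from short ECG segments, and scikit-learn's GradientBoostingClassifier is used as a stand-in for XGBoost.

```python
# Minimal sketch: score two ensemble classifiers with F1, as in the study.
# Synthetic data replaces the paper's deep ECG features (an assumption).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Two classes model the AF-vs-SR binary setting from the abstract.
X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("RF", RandomForestClassifier(random_state=0)),
                  ("GB", GradientBoostingClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, round(f1_score(y_te, clf.predict(X_te)), 3))
```

The four-class variant (SR / AF / Other / Noisy) would use the same loop with `f1_score(..., average=None)` to recover per-class scores alongside a macro average.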


Subjects
Atrial Fibrillation, Algorithms, Atrial Fibrillation/diagnosis, Early Diagnosis, Electrocardiography, Humans, Signal Processing, Computer-Assisted, Time Factors
20.
BMC Med Inform Decis Mak ; 19(Suppl 4): 152, 2019 08 08.
Article in English | MEDLINE | ID: mdl-31391056

ABSTRACT

BACKGROUND: The existing community-wide bodies of biomedical ontologies are known to contain quality and content problems, and past research has revealed various errors in their semantics and logical structure. Automated tools may ease the ontology construction, maintenance, assessment, and quality assurance processes; however, relatively few tools exist that provide this support to knowledge engineers. METHOD: We introduce OntoKeeper, a web-based tool that automates quality scoring for ontology developers. We enlisted 5 experienced ontologists to test the tool and then administered the System Usability Scale to measure their assessment. RESULTS: In this paper, we present usability results from the 5 ontologists revealing high system usability of OntoKeeper, along with use-cases that demonstrate its capabilities in previously published biomedical ontology research. CONCLUSION: To the best of our knowledge, OntoKeeper is among the first ontology evaluation tools to provide this functionality to knowledge engineers with good usability.
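The System Usability Scale used in the evaluation above follows a fixed scoring rule: each of ten 1-5 Likert responses is adjusted (odd-numbered items contribute response minus 1, even-numbered items contribute 5 minus response) and the sum is scaled by 2.5 to a 0-100 range. A minimal sketch of that standard computation (not code from the OntoKeeper study):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (1-indexed) contribute (response - 1); even-numbered
    items contribute (5 - response). The sum is scaled by 2.5 to 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    adjusted = [(r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses)]
    return sum(adjusted) * 2.5

# A uniformly positive respondent: 5 on positive (odd) items, 1 on negative
# (even) items, yielding the maximum score.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Scores above roughly 68 are conventionally read as above-average usability, which is the sense in which "high system usability" is reported here.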


Subjects
Biological Ontologies, Software, Humans, Knowledge, Semantics