1.
NPJ Digit Med; 5(1): 194, 2022 Dec 26.
Article in English | MEDLINE | ID: mdl-36572766

ABSTRACT

There is an increasing interest in developing artificial intelligence (AI) systems to process and interpret electronic health records (EHRs). Natural language processing (NLP) powered by pretrained language models is the key technology for medical AI systems utilizing clinical narratives. However, there are few clinical language models, and the largest one trained in the clinical domain is comparatively small at 110 million parameters (compared with billions of parameters in the general domain). It is not clear how large clinical language models with billions of parameters can help medical AI systems utilize unstructured EHRs. In this study, we develop from scratch a large clinical language model, GatorTron, using >90 billion words of text (including >82 billion words of de-identified clinical text) and systematically evaluate it on five clinical NLP tasks: clinical concept extraction, medical relation extraction, semantic textual similarity, natural language inference (NLI), and medical question answering (MQA). We examine how (1) scaling up the number of parameters and (2) scaling up the size of the training data could benefit these NLP tasks. GatorTron models scale up the clinical language model from 110 million to 8.9 billion parameters and improve performance on all five clinical NLP tasks (e.g., 9.6% and 9.5% accuracy improvements for NLI and MQA); these models can be applied to medical AI systems to improve healthcare delivery. The GatorTron models are publicly available at https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/models/gatortron_og.
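
The abstract includes no code, but the sketch below illustrates, under stated assumptions, how a pretrained clinical encoder of this kind might be used for one of the listed tasks (semantic textual similarity): hidden states from a Hugging Face transformers checkpoint are mean-pooled into sentence vectors and compared with cosine similarity. The model identifier and the example sentences are illustrative assumptions, not part of the paper; substitute the checkpoint actually obtained (e.g., from the catalog above).

    # Minimal sketch, assuming a GatorTron-style checkpoint is available
    # through the Hugging Face transformers API. The identifier below is
    # an assumption for illustration only.
    import torch
    from transformers import AutoTokenizer, AutoModel

    MODEL_NAME = "UFNLP/gatortron-base"  # hypothetical identifier; see lead-in

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModel.from_pretrained(MODEL_NAME)
    model.eval()

    def embed(sentence: str) -> torch.Tensor:
        """Mean-pool the last hidden states into a single sentence vector."""
        inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state   # (1, seq_len, dim)
        mask = inputs["attention_mask"].unsqueeze(-1)     # (1, seq_len, 1)
        return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

    a = embed("Patient denies chest pain or shortness of breath.")
    b = embed("No chest pain or dyspnea reported by the patient.")
    print(f"cosine similarity: {torch.cosine_similarity(a, b).item():.3f}")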

2.
Cognition; 149: 104-20, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26836401

ABSTRACT

Our starting point is the apparently contradictory results in the psycholinguistic literature regarding whether, when interpreting a definite referring expression, listeners process it relative to the common ground from the earliest moments of processing. We propose that referring expressions are not interpreted relative solely to the common ground or solely to one's private (or egocentric) knowledge; rather, their interpretation reflects the simultaneous integration of the two perspectives. We implement this proposal in a Bayesian model of reference resolution, focusing on the model's predictions for two prior studies: Keysar, Barr, Balin, and Brauner (2000) and Heller, Grodner, and Tanenhaus (2008). We test the model's predictions in a visual-world eye-tracking experiment, demonstrating that the original results cannot simply be attributed to different perspective-taking strategies, and showing how they can arise from the same perspective-taking behavior.


Subjects
Communication, Interpersonal Relations, Psycholinguistics, Bayes Theorem, Comprehension, Eye Movements, Humans
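
The abstract does not spell out the model's equations; the toy sketch below shows one way the "simultaneous integration of the two perspectives" could be formalized, as a weighted mixture of Bayesian posteriors computed under the common-ground and private (egocentric) perspectives. The referent set, likelihoods, and mixing weight are illustrative assumptions, not values from the paper.

    # Toy Bayesian mixture of perspectives for resolving "the small candle".
    # All numbers are assumptions for illustration.
    import numpy as np

    referents = ["shared small candle", "hidden smaller candle", "shared truck"]
    prior = np.full(3, 1 / 3)

    # Likelihood of the expression given each intended referent, evaluated
    # separately from each perspective.
    lik_common  = np.array([1.0, 0.0, 0.0])  # hidden object is not in common ground
    lik_private = np.array([0.5, 0.5, 0.0])  # listener privately sees two candles

    def posterior(likelihood, prior):
        unnorm = likelihood * prior
        return unnorm / unnorm.sum()

    weight_common = 0.7  # assumed mixing weight; a free parameter in this sketch
    combined = (weight_common * posterior(lik_common, prior)
                + (1 - weight_common) * posterior(lik_private, prior))

    for name, p in zip(referents, combined):
        print(f"P({name!r} | 'the small candle') = {p:.2f}")
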
3.
Neural Comput; 20(6): 1473-94, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18254696

ABSTRACT

In cortical neural networks, connections from a given neuron are either inhibitory or excitatory but not both. This constraint is often ignored by theoreticians who build models of these systems. There is currently no general solution to the problem of converting such unrealistic network models into biologically plausible models that respect this constraint. We demonstrate a constructive transformation of models that solves this problem for both feedforward and dynamic recurrent networks. The resulting models give a close approximation to the original network functions and temporal dynamics of the system, and they are biologically plausible. More precisely, we identify a general form for the solution to this problem. As a result, we also describe how the precise solution for a given cortical network can be determined empirically.


Subjects
Cerebral Cortex/cytology; Neural Inhibition/physiology; Neural Networks, Computer; Synapses/physiology; Animals; Computer Simulation; Models, Neurologic
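
The abstract describes the transformation only at a high level. The sketch below conveys the general flavor of such a construction for a purely linear feedforward map: the negative part of a mixed-sign weight matrix is routed through added inhibitory relay units, so that every unit's outgoing weights share a single sign. This is an illustrative toy under those assumptions, not the paper's actual transformation for feedforward and recurrent dynamic networks.

    # Toy re-expression of y = W @ x so every unit's outgoing weights have
    # a single sign (Dale's principle). Illustration only; see lead-in.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 6))     # original mixed-sign weights (4 targets, 6 sources)
    x = rng.uniform(size=6)         # nonnegative source firing rates

    W_exc = np.maximum(W, 0.0)      # excitatory part, all weights >= 0
    W_rel = np.maximum(-W, 0.0)     # magnitude of the negative part, also >= 0

    # Excitatory source neurons drive the targets with W_exc and drive a
    # population of inhibitory relay interneurons with W_rel (one relay per
    # target in this toy).
    relay = W_rel @ x

    # Inhibitory relay units project to their targets with weight -1.
    y_dale = W_exc @ x - relay

    assert np.allclose(y_dale, W @ x)  # constrained circuit reproduces the original map
    print(y_dale)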