1.
Behav Res Methods ; 2023 Oct 03.
Article in English | MEDLINE | ID: mdl-37789187

ABSTRACT

Tools like ChatGPT, which allow people to unlock the potential of large language models (LLMs), have taken the world by storm. ChatGPT's ability to produce written output of remarkable quality has inspired, or forced, academics to consider its consequences for both research and education. In particular, the question of what constitutes authorship, and how to evaluate (scientific) contributions, has received a lot of attention. However, its impact on (online) human data collection has mostly flown under the radar. The current paper examines how ChatGPT can be (mis)used in the context of generating norming data. We found that ChatGPT is able to produce sensible output, resembling that of human participants, for a typicality rating task. Moreover, the test-retest reliability of ChatGPT's ratings was similar to that of human participants tested 1 day apart. We discuss the relevance of these findings in the context of (online) human data collection, focusing both on opportunities (e.g., (risk-)free pilot data) and challenges (e.g., data fabrication).
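
As an aside on the method in this abstract: the following minimal Python sketch shows a test-retest reliability check of the kind described, assuming the model's typicality ratings for a handful of exemplars have already been collected in two sessions (e.g., the same prompts sent one day apart). The exemplars and numbers are invented for illustration and are not data from the paper.

    # Hypothetical ratings: "How typical is X of the category BIRD?" on a 1-20 scale,
    # collected from the model in two sessions one day apart. Values are illustrative only.
    from scipy.stats import spearmanr

    exemplars = ["robin", "sparrow", "eagle", "penguin", "ostrich"]
    ratings_day1 = [19, 18, 15, 6, 5]
    ratings_day2 = [20, 17, 14, 7, 4]

    # Test-retest reliability estimated as the rank correlation between sessions.
    rho, p_value = spearmanr(ratings_day1, ratings_day2)
    print(f"Test-retest reliability (Spearman rho): {rho:.2f}")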

2.
Q J Exp Psychol (Hove) ; 72(8): 2084-2109, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30704340

ABSTRACT

Recent advances in the field of computational linguistics have led to the development of various prediction-based models of semantics. These models seek to infer word representations from large text collections by predicting target words from neighbouring words (or vice versa). The resulting representations are vectors in a continuous space, collectively called word embeddings. Although psychological plausibility was not a primary concern for the developers of predictive models, it has been the topic of several recent studies in the field of psycholinguistics. That is, word embeddings have been linked to similarity ratings, word associations, semantic priming, word recognition latencies, and so on. Here, we build on this work by investigating category structure. Across seven experiments, we sought to predict human typicality judgements in two languages, Dutch and English, using different semantic spaces. More specifically, we extracted a number of predictor variables and evaluated how well they could capture the typicality gradient of common categories (e.g., birds, fruit, vehicles). Overall, the performance of the predictive models was rather modest and did not compare favourably with that of an older count-based model. These results are somewhat disappointing given the enthusiasm surrounding predictive models. Possible explanations and future directions are discussed.


Subjects
Concept Formation; Models, Psychological; Psycholinguistics; Semantics; Adult; Humans
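
To make the kind of predictor in this abstract concrete, here is a minimal sketch under assumptions of my own (random stand-in vectors and invented human ratings, not the study's semantic spaces or norms): each exemplar is scored by the cosine similarity between its word vector and the centroid of its category's members, and the scores are rank-correlated with human typicality judgements.

    import numpy as np
    from scipy.stats import spearmanr

    # Stand-in word vectors (random) and invented human typicality norms; the real
    # study used trained semantic spaces and published Dutch/English norms.
    rng = np.random.default_rng(0)
    category_members = ["robin", "sparrow", "eagle", "penguin", "ostrich"]
    embeddings = {w: rng.normal(size=300) for w in category_members}
    human_typicality = {"robin": 19, "sparrow": 18, "eagle": 15, "penguin": 6, "ostrich": 5}

    # One candidate predictor variable: similarity to the centroid of the category's members.
    centroid = np.mean([embeddings[w] for w in category_members], axis=0)

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    predicted = [cosine(embeddings[w], centroid) for w in category_members]
    observed = [human_typicality[w] for w in category_members]
    rho, _ = spearmanr(predicted, observed)
    print(f"Rank correlation with human typicality: {rho:.2f}")
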
3.
BMC Bioinformatics ; 19(1): 259, 2018 Jul 09.
Article in English | MEDLINE | ID: mdl-29986664

ABSTRACT

BACKGROUND: Bilingual lexicon induction (BLI) is an important task in the biomedical domain as translation resources are usually available for general language usage, but are often lacking in domain-specific settings. In this article we consider BLI as a classification problem and train a neural network composed of a combination of recurrent long short-term memory and deep feed-forward networks in order to obtain word-level and character-level representations.

RESULTS: The results show that the word-level and character-level representations each improve state-of-the-art results for BLI and biomedical translation mining. The best results are obtained by exploiting the synergy between these word-level and character-level representations in the classification model. We evaluate the models both quantitatively and qualitatively.

CONCLUSIONS: Translation of domain-specific biomedical terminology benefits from character-level representations compared with relying solely on word-level representations. It is beneficial to take a deep learning approach and learn character-level representations rather than rely on the handcrafted representations that are typically used. Our combined model captures semantics at the word level while also taking into account that specialized terminology often originates from a common root form (e.g., from Greek or Latin).


Subjects
Data Mining/methods; Deep Learning; Natural Language Processing; Semantics; Humans; Knowledge Bases; Multilingualism
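
A minimal PyTorch sketch of the modelling idea in this abstract, under assumptions of my own (class name, layer sizes, and arguments are illustrative; this is not the authors' architecture or code): each word in a candidate translation pair is encoded as a word embedding concatenated with the final states of a character-level BiLSTM, and a deep feed-forward network classifies the pair as a translation or not.

    import torch
    import torch.nn as nn

    class BLIPairClassifier(nn.Module):
        """Classify a candidate (source, target) word pair as translation / non-translation."""

        def __init__(self, word_vocab, char_vocab, word_dim=300, char_dim=32, char_hidden=64):
            super().__init__()
            self.word_emb = nn.Embedding(word_vocab, word_dim)
            self.char_emb = nn.Embedding(char_vocab, char_dim)
            # Character-level BiLSTM: lets the model exploit shared roots (e.g., Greek/Latin stems).
            self.char_lstm = nn.LSTM(char_dim, char_hidden, batch_first=True, bidirectional=True)
            pair_dim = 2 * (word_dim + 2 * char_hidden)  # source + target word representations
            # Deep feed-forward classifier over the concatenated pair representation.
            self.ffn = nn.Sequential(nn.Linear(pair_dim, 256), nn.ReLU(), nn.Linear(256, 1))

        def encode_word(self, word_id, char_ids):
            w = self.word_emb(word_id)                         # (batch, word_dim)
            _, (h, _) = self.char_lstm(self.char_emb(char_ids))
            c = torch.cat([h[0], h[1]], dim=-1)                # forward + backward final states
            return torch.cat([w, c], dim=-1)

        def forward(self, src_word, src_chars, tgt_word, tgt_chars):
            pair = torch.cat([self.encode_word(src_word, src_chars),
                              self.encode_word(tgt_word, tgt_chars)], dim=-1)
            return self.ffn(pair).squeeze(-1)                  # logit per candidate pair

Training would presumably pair known translations with sampled negative pairs and optimise a binary cross-entropy loss on the logits.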