Results 1 - 20 of 41
1.
Cogn Sci ; 48(5): e13450, 2024 May.
Article in English | MEDLINE | ID: mdl-38747458

ABSTRACT

A word often expresses many different morphological functions. Which part of a word contributes to which part of the overall meaning is not always clear, which raises the question of how such functions are learned. While linguistic studies tacitly assume the co-occurrence of cues and outcomes to suffice in learning these functions (Baer-Henney, Kügler, & van de Vijver, 2015; Baer-Henney & van de Vijver, 2012), error-driven learning suggests that contingency rather than contiguity is crucial (Nixon, 2020; Ramscar, Yarlett, Dye, Denny, & Thorpe, 2010). In error-driven learning, cues gain association strength if they predict a certain outcome, and they lose strength if the outcome is absent. This reduction of association strength is called unlearning. So far, it is unclear whether such unlearning has consequences for cue-outcome associations beyond the ones that get reduced. To test for such consequences of unlearning, we taught participants morphophonological patterns in an artificial language learning experiment. In one block, the cues to two morphological outcomes, plural and diminutive, co-occurred within the same word forms. In another block, a single cue to only one of these two outcomes was presented in a different set of word forms. We wanted to find out whether participants unlearn this cue's association with the outcome that is not predicted by the cue alone, and whether this allows the absent cue to be associated with the absent outcome. Our results show that if unlearning was possible, participants learned that the absent cue predicts the absent outcome better than if no unlearning was possible. This effect was stronger if the unlearned cue was more salient. This shows that unlearning takes place even if no alternative cues to an absent outcome are provided, which highlights that learners take both positive and negative evidence into account, as predicted by domain-general error-driven learning.
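The cue-outcome dynamics this abstract describes follow the general Rescorla-Wagner scheme of error-driven learning. As a rough sketch only, the toy simulation below shows associations growing when a cue predicts a present outcome and shrinking (unlearning) when the predicted outcome is absent; the cue names, block structure, and learning rate are invented for illustration and do not reproduce the study's design.

```python
import numpy as np

# Illustrative cue and outcome labels (not the study's actual stimuli).
CUES = ["suffix_a", "suffix_b"]
OUTCOMES = ["plural", "diminutive"]
ETA = 0.1  # learning rate (arbitrary)

# weights[i, j]: association strength from cue i to outcome j
weights = np.zeros((len(CUES), len(OUTCOMES)))

def rw_update(weights, present_cues, present_outcomes, eta=ETA):
    """One Rescorla-Wagner trial: present cues gain strength toward
    present outcomes and lose strength toward absent ones (unlearning)."""
    cue_vec = np.array([c in present_cues for c in CUES], dtype=float)
    out_vec = np.array([o in present_outcomes for o in OUTCOMES], dtype=float)
    prediction = cue_vec @ weights
    error = out_vec - prediction  # positive where outcomes are under-predicted
    weights += eta * np.outer(cue_vec, error)
    return weights

# Block 1: both cues co-occur with both outcomes.
for _ in range(100):
    rw_update(weights, {"suffix_a", "suffix_b"}, {"plural", "diminutive"})

# Block 2: suffix_a alone signals only the plural; its association with
# the now-absent diminutive is unlearned.
for _ in range(100):
    rw_update(weights, {"suffix_a"}, {"plural"})

print(np.round(weights, 3))
```

In the printed matrix, the cue presented alone ends up strongly associated with its own outcome and dissociated from the absent one, while the cue that never appeared in block 2 keeps its block-1 associations.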


Subject(s)
Cues , Learning , Humans , Female , Language , Adult , Male , Young Adult , Linguistics
2.
Med Image Anal ; 95: 103159, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38663318

ABSTRACT

We have developed a United framework that integrates three self-supervised learning (SSL) ingredients (discriminative, restorative, and adversarial learning), enabling collaborative learning among the three ingredients and yielding three transferable components: a discriminative encoder, a restorative decoder, and an adversarial encoder. To leverage this collaboration, we redesigned nine prominent self-supervised methods, including Rotation, Jigsaw, Rubik's Cube, Deep Clustering, TransVW, MoCo, BYOL, PCRL, and Swin UNETR, and augmented each with its missing components in a United framework for 3D medical imaging. However, such a United framework increases model complexity, making 3D pretraining difficult. To overcome this difficulty, we propose stepwise incremental pretraining, a strategy that unifies the pretraining: a discriminative encoder is first trained via discriminative learning; the pretrained discriminative encoder is then attached to a restorative decoder, forming a skip-connected encoder-decoder, for further joint discriminative and restorative learning; finally, the pretrained encoder-decoder is associated with an adversarial encoder for full discriminative, restorative, and adversarial learning. Our extensive experiments demonstrate that stepwise incremental pretraining stabilizes the pretraining of United models, resulting in significant performance gains and annotation cost reduction via transfer learning in six target tasks, ranging from classification to segmentation, across diseases, organs, datasets, and modalities. This performance improvement is attributed to the synergy of the three SSL ingredients in our United framework, unleashed through stepwise incremental pretraining. Our codes and pretrained models are available at GitHub.com/JLiangLab/StepwisePretraining.
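The staging logic, pretrain a discriminative encoder alone, then reuse it to initialise an encoder-decoder trained restoratively, can be caricatured with linear stand-ins for the deep networks. This is a schematic sketch, not the authors' implementation: the pretext task, toy data, dimensions, and learning rates are invented, and the adversarial stage is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))       # toy "images"
y = (X[:, 0] > 0).astype(float)      # invented discriminative pretext label

d_in, d_code = 16, 4
E = rng.normal(scale=0.1, size=(d_in, d_code))  # encoder
w = np.zeros(d_code)                            # pretext classification head
eta = 0.01

# Stage 1: discriminative pretraining of the encoder alone
# (logistic regression on the pretext label, gradient ascent).
for _ in range(200):
    z = X @ E
    p = 1 / (1 + np.exp(-(z @ w)))
    err = y - p
    grad_w = z.T @ err / len(X)
    grad_E = np.outer(X.T @ err, w) / len(X)
    w += eta * grad_w
    E += eta * grad_E

# Stage 2: attach a decoder and continue with restorative
# (reconstruction) training, starting from the pretrained encoder.
D = rng.normal(scale=0.1, size=(d_code, d_in))  # decoder
losses = []
for _ in range(200):
    Z = X @ E
    R = Z @ D - X                     # reconstruction error
    losses.append((R ** 2).mean())
    D -= eta * Z.T @ R / len(X)       # gradient step on decoder
    E -= eta * X.T @ (R @ D.T) / len(X)  # joint gradient step on encoder

print(losses[0], losses[-1])
```

The point of the staging is simply that stage 2 starts from an encoder that already carries discriminative structure rather than from a random initialisation, which is what stabilises joint training in the full-scale setting.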


Subject(s)
Imaging, Three-Dimensional , Supervised Machine Learning , Humans , Imaging, Three-Dimensional/methods , Algorithms
3.
Cogn Sci ; 48(2): e13404, 2024 02.
Article in English | MEDLINE | ID: mdl-38294059

ABSTRACT

Sequence learning is fundamental to a wide range of cognitive functions. Explaining how sequences, and the relations between the elements they comprise, are learned is a fundamental challenge to cognitive science. However, although hundreds of articles addressing this question are published each year, the actual learning mechanisms involved in the learning of sequences are rarely investigated. We present three experiments that examine these mechanisms during a typing task. Experiments 1 and 2 tested learning while participants typed single letters on each trial. Experiment 3 tested for "chunking" of these letters into "words." The results of these experiments were used to examine which mechanisms could best account for them, with a focus on two particular proposals: statistical transitional probability learning and discriminative error-driven learning. Experiments 1 and 2 showed that error-driven learning was a better predictor of response latencies than either n-gram frequencies or transitional probabilities. No evidence for chunking was found in Experiment 3, probably because visual cues were interspersed with the motor responses. In addition, learning occurred across a greater distance in Experiment 1 than in Experiment 2, suggesting that the greater predictability that comes with increased structure leads to greater learnability. These results shed new light on the mechanism responsible for sequence learning. Despite the widely held assumption that transitional probability learning is essential to this process, the present results suggest instead that sequences are learned through a process of discriminative learning, involving prediction and feedback from prediction error.
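For reference, the transitional-probability baseline against which error-driven learning was compared can be estimated from simple bigram counts; the letter sequence below is made up for illustration.

```python
from collections import Counter

# First-order transitional probabilities P(next | current) from a toy
# letter sequence (invented for the example, not the study's stimuli).
sequence = list("abcabcabdabc")

bigrams = Counter(zip(sequence, sequence[1:]))
unigrams = Counter(sequence[:-1])  # count of each letter in first position

def transitional_probability(a, b):
    """P(b | a) = count(ab) / count(a)."""
    return bigrams[(a, b)] / unigrams[a]

print(transitional_probability("a", "b"))  # 'a' is always followed by 'b'
print(transitional_probability("b", "c"))  # 'b' is usually followed by 'c'
```

Statistical accounts predict response latencies from such conditional probabilities; error-driven accounts instead predict them from cue-outcome association weights updated by prediction error, which is the contrast the experiments test.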


Subject(s)
Learning , Serial Learning , Humans , Serial Learning/physiology , Learning/physiology , Cognition , Reaction Time/physiology , Cues
4.
J Multidiscip Healthc ; 16: 4039-4051, 2023.
Article in English | MEDLINE | ID: mdl-38116305

ABSTRACT

Introduction: The paper presents a hybrid generative/discriminative classification method aimed at identifying abnormalities, such as cancer, in lung X-ray images. Methods: The proposed method involves a generative model that performs generative embedding in Probabilistic Component Analysis (PrCA). The primary goal of PrCA is to model co-existing information within a probabilistic framework, with the intent of locating the feature vector space for X-ray data based on a defined kernel structure. A kernel-based classifier, grounded in information-theoretic principles, was employed in this study. Results: The performance of the proposed method was evaluated against nearest neighbour (NN) and support vector machine (SVM) classifiers, which use a diagonal covariance matrix and incorporate normal linear and non-linear kernels, respectively. Discussion: The method achieves superior accuracy, offering a viable solution to the class of problems presented. Accuracy rates achieved by the kernels in the NN and SVM models were 95.02% and 92.45%, respectively, suggesting the method's competitiveness with state-of-the-art approaches.

5.
Cogn Psychol ; 146: 101598, 2023 11.
Article in English | MEDLINE | ID: mdl-37716109

ABSTRACT

Trial-to-trial effects have been found in a number of studies, indicating that processing a stimulus influences responses in subsequent trials. A special case is priming effects, which have been modelled successfully with error-driven learning (Marsolek, 2008), implying that participants are continuously learning during experiments. This study investigates whether trial-to-trial learning can be detected in an unprimed lexical decision experiment. We used the Discriminative Lexicon Model (DLM; Baayen et al., 2019), a model of the mental lexicon with meaning representations from distributional semantics, which models error-driven incremental learning with the Widrow-Hoff rule. We used data from the British Lexicon Project (BLP; Keuleers et al., 2012) and simulated the lexical decision experiment with the DLM on a trial-by-trial basis for each subject individually. Reaction times were then predicted with Generalized Additive Models (GAMs), using measures derived from the DLM simulations as predictors. We extracted measures from two simulations per subject (one with learning updates between trials and one without) and used them as input to two GAMs. Learning-based models showed a better model fit than the non-learning ones for the majority of subjects. Our measures also provide insights into lexical processing and individual differences. This demonstrates the potential of the DLM to model behavioural data and leads to the conclusion that trial-to-trial learning can indeed be detected in unprimed lexical decision. Our results support the possibility that our lexical knowledge is subject to continuous change.
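The Widrow-Hoff (delta) rule used for the incremental updates is a linear error-driven rule: a form-to-meaning matrix is nudged after every learning event in proportion to the prediction error. A minimal sketch, with toy binary form cues and semantic vectors standing in for the DLM's actual representations:

```python
import numpy as np

rng = np.random.default_rng(0)

n_form, n_sem = 5, 3
F = np.zeros((n_form, n_sem))  # linear form -> meaning mapping
eta = 0.05                     # learning rate (arbitrary)

def widrow_hoff(F, c, s, eta):
    """One learning event: F += eta * outer(c, s - cF)."""
    error = s - c @ F          # semantic prediction error
    return F + eta * np.outer(c, error)

# Two toy "words": (form cue vector, semantic vector). Invented values.
lexicon = [(np.array([1., 0, 0, 1, 0]), np.array([1., 0, 0])),
           (np.array([0., 1, 1, 0, 0]), np.array([0., 1, 0]))]

# Trial-by-trial presentation in random order, updating after each trial.
for _ in range(500):
    c, s = lexicon[rng.integers(len(lexicon))]
    F = widrow_hoff(F, c, s, eta)

# After learning, each form vector retrieves its own meaning.
print(np.round(lexicon[0][0] @ F, 2))
```

Because each event moves the mapping only a little, measures taken from the evolving matrix differ between a simulation with between-trial updates and one without, which is the contrast the study exploits.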


Subject(s)
Discrimination Learning , Semantics , Humans , Learning , Reaction Time/physiology , Individuality , Decision Making
6.
Behav Sci (Basel) ; 13(6)2023 Jun 07.
Article in English | MEDLINE | ID: mdl-37366731

ABSTRACT

Fear generalization is a crucial mechanism underlying maladaptive behavior, but factors influencing this process are not fully understood. We investigated the effects of cue training and context on fear generalization and how cognitive rules influence responses to different conditions. We also examined the role of stimulus intensity in fear generalization to provide insight into fear generalization mechanisms. Participants (n = 104) completed a fear emotion task with two stages: acquisition and generalization testing. Subjective fear expectancy ratings were used as outcome measures. Participants who received single threat cue training exhibited stronger fear generalization responses than those who received discrimination training with threat and safe cues. Participants who received discrimination training and used linear rules had the strongest fear response to the largest stimulus. Therefore, a safe cue may mitigate fear generalization but could increase fear responses to more intense stimuli. Altering context did not change the fear generalization response because fear generalization is mainly governed by the association between the conditioned stimulus and the unconditioned fear stimulus. The present study emphasizes the multifaceted nature of fear generalization and the importance of examining multiple factors to understand this phenomenon. These findings elucidate fear learning and provide insights needed for effective interventions for maladaptive behavior.

7.
Neuropsychologia ; 180: 108468, 2023 02 10.
Article in English | MEDLINE | ID: mdl-36610492

ABSTRACT

Despite its widespread use to measure functional lateralization of language in healthy subjects, the neurocognitive bases of the visual field effect in lateralized reading are still debated. Crucially, the lack of knowledge on the nature of the visual field effect is accompanied by a lack of knowledge on the relative impact of psycholinguistic factors on its measurement, potentially casting doubt on its validity as a functional laterality measure. In this study, an eye-tracking-controlled tachistoscopic lateralized lexical decision task (Experiment 1) was administered to 60 right-handed and 60 left-handed volunteers, and word length, orthographic neighborhood, word frequency, and imageability were manipulated. The magnitude of the visual field effect was bigger in right-handed than in left-handed participants. Across the whole sample, a visual field-by-frequency interaction was observed, whereby a comparatively smaller effect of word frequency was detected in the left visual field/right hemisphere (LVF/RH) than in the right visual field/left hemisphere (RVF/LH). In a subsequent computational study (Experiment 2), efficient (LH) and inefficient (RH) activation of lexical orthographic nodes was modelled by means of the Naïve Discriminative Learning approach. The computational data simulated the effect of visual field and its interaction with frequency observed in Experiment 1. The data suggest that the visual field effect can be biased by word frequency. Less distinctive connections between orthographic cues and lexical/semantic output units in the RH than in the LH can account for the emergence of the visual field effect and its interaction with word frequency.


Subject(s)
Reading , Visual Fields , Humans , Brain , Language , Functional Laterality/physiology , Reaction Time
8.
Front Hum Neurosci ; 17: 1242720, 2023.
Article in English | MEDLINE | ID: mdl-38259337

ABSTRACT

Word frequency is a strong predictor in most lexical processing tasks. Thus, any model of word recognition needs to account for how word frequency effects arise. The Discriminative Lexicon Model (DLM) models lexical processing with mappings between words' forms and their meanings. Comprehension and production are modeled via linear mappings between the two domains. So far, the mappings within the model can either be obtained incrementally via error-driven learning, a computationally expensive process able to capture frequency effects, or with an efficient but frequency-agnostic solution modeling the theoretical endstate of learning (EL), where all words are learned optimally. In the present study we show how an efficient yet frequency-informed mapping between form and meaning can be obtained (frequency-informed learning; FIL). We find that FIL approximates an incremental solution well while being computationally much cheaper. FIL shows a relatively low type- and high token-accuracy, demonstrating that the model is able to correctly process most word tokens encountered by speakers in daily life. We use FIL to model reaction times in the Dutch Lexicon Project by means of a Gaussian Location Scale Model and find that FIL predicts the S-shaped relationship between frequency and the mean of reaction times well, but underestimates the variance of reaction times for low-frequency words. FIL is also better able to account for priming effects in an auditory lexical decision task in Mandarin Chinese, compared to EL. Finally, we used ordered data from CHILDES to compare mappings obtained with FIL and incremental learning. We show that the mappings are highly correlated, but that with FIL some nuances based on word-ordering effects are lost. Our results show how frequency effects in a learning model can be simulated efficiently, and they raise questions about how to best account for low-frequency words in cognitive models.
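The contrast between the frequency-agnostic endstate of learning and a frequency-informed mapping can be illustrated with ordinary versus frequency-weighted least squares. This is a generic sketch of the idea, not the paper's exact FIL derivation; the form matrix, semantic matrix, and token frequencies below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

C = rng.normal(size=(8, 3))             # 8 word forms x 3 form features
S = rng.normal(size=(8, 2))             # their semantic vectors
freq = np.geomspace(1000.0, 1.0, 8)     # toy token frequencies

# Endstate of learning: frequency-agnostic least squares,
# every word type weighted equally.
F_el, *_ = np.linalg.lstsq(C, S, rcond=None)

# Frequency-informed mapping: weight each word's row by sqrt(frequency)
# before solving, so high-frequency words dominate the solution.
w = np.sqrt(freq)[:, None]
F_fi, *_ = np.linalg.lstsq(C * w, S * w, rcond=None)

# Per-word squared error of a mapping.
sq_err = lambda F: ((C @ F - S) ** 2).sum(axis=1)

print(np.round(sq_err(F_el), 3))
print(np.round(sq_err(F_fi), 3))
```

By construction, the weighted solution achieves the lower frequency-weighted (token) error, while the unweighted solution achieves the lower unweighted (type) error, mirroring the low type- but high token-accuracy trade-off described above.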

9.
Domain Adapt Represent Transf (2022) ; 13542: 66-76, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36507899

ABSTRACT

Uniting three self-supervised learning (SSL) ingredients (discriminative, restorative, and adversarial learning) enables collaborative representation learning and yields three transferable components: a discriminative encoder, a restorative decoder, and an adversarial encoder. To leverage this advantage, we have redesigned five prominent SSL methods, including Rotation, Jigsaw, Rubik's Cube, Deep Clustering, and TransVW, and formulated each in a United framework for 3D medical imaging. However, such a United framework increases model complexity and pretraining difficulty. To overcome this difficulty, we develop a stepwise incremental pretraining strategy: a discriminative encoder is first trained via discriminative learning; the pretrained discriminative encoder is then attached to a restorative decoder, forming a skip-connected encoder-decoder, for further joint discriminative and restorative learning; finally, the pretrained encoder-decoder is associated with an adversarial encoder for full discriminative, restorative, and adversarial learning. Our extensive experiments demonstrate that stepwise incremental pretraining stabilizes the training of United models, resulting in significant performance gains and annotation cost reduction via transfer learning for five target tasks, encompassing both classification and segmentation, across diseases, organs, datasets, and modalities. This performance is attributed to the synergy of the three SSL ingredients in our United framework, unleashed via stepwise incremental pretraining. All codes and pretrained models are available at GitHub.com/JLiangLab/StepwisePretraining.

10.
Interspeech ; 2022: 2018-2022, 2022.
Article in English | MEDLINE | ID: mdl-36341466

ABSTRACT

Major Depressive Disorder (MDD) is a severe illness that affects millions of people, and it is critical to diagnose this disorder as early as possible. Detecting depression from voice signals can be of great help to physicians and can be done without any invasive procedure. Since relevant labelled data are scarce, we propose a modified Instance Discriminative Learning (IDL) method, an unsupervised pre-training technique, to extract augment-invariant and instance-spread-out embeddings. In terms of learning augment-invariant embeddings, various data augmentation methods for speech are investigated, and time-masking yields the best performance. To learn instance-spread-out embeddings, we explore methods for sampling instances for a training batch (distinct speaker-based and random sampling). We find that distinct speaker-based sampling provides better performance than random sampling, and we hypothesize that this is because relevant speaker information is preserved in the embedding. Additionally, we propose a novel sampling strategy, Pseudo Instance-based Sampling (PIS), based on clustering algorithms, to enhance the spread-out characteristics of the embeddings. Experiments are conducted with DepAudioNet on the DAIC-WOZ (English) and CONVERGE (Mandarin) datasets, and statistically significant improvements in the detection of MDD relative to the baseline without pre-training, with p-values of 0.0015 and 0.05, respectively, are observed using PIS.

11.
Biol Lett ; 18(11): 20220321, 2022 11.
Article in English | MEDLINE | ID: mdl-36382372

ABSTRACT

Transitive inference (TI) describes the ability to infer relationships between stimuli that have never been seen together before. Social cichlids can use TI in a social setting where observers assess dominance status after witnessing contests between different dyads of conspecifics. If cognitive processes are domain-general, animals should use abilities evolved in a social context also in a non-social context. Therefore, if TI is domain-general in fish, social fish should also be able to use TI in non-social tasks. Here we tested whether the cooperatively breeding cichlid Neolamprologus pulcher can infer transitive relationships between artificial stimuli in a non-social context. We used an associative learning paradigm where the fish received a food reward when correctly solving a colour discrimination task. Eleven of 12 subjects chose the predicted outcome for TI in the first test trial and five subjects performed with 100% accuracy in six successive test trials. We found no evidence that the fish solved the TI task by value transfer. Our findings show that fish also use TI in non-social tasks with artificial stimuli, thus generalizing past results reported in a social context and hinting toward a domain-general cognitive mechanism.


Subject(s)
Cichlids , Cues , Animals , Color , Reward
12.
Front Psychol ; 13: 754395, 2022.
Article in English | MEDLINE | ID: mdl-35548492

ABSTRACT

The uncertainty associated with paradigmatic families has been shown to correlate with their phonetic characteristics in speech, suggesting that representations of complex sublexical relations between words are part of speaker knowledge. To better understand this, recent studies have used two-layer neural network models to examine the way paradigmatic uncertainty emerges in learning. However, to date this work has largely ignored the way choices about the representation of inflectional and grammatical functions (IFS) in models strongly influence what they subsequently learn. To explore the consequences of this, we investigate how representations of IFS in the input-output structures of learning models affect the capacity of uncertainty estimates derived from them to account for phonetic variability in speech. Specifically, we examine whether IFS are best represented as outputs to neural networks (as in previous studies) or as inputs, by building models that embody both choices and examining their capacity to account for uncertainty effects in the formant trajectories of word-final [ɐ], which in German discriminates around sixty different IFS. Overall, we find that formants are enhanced as the uncertainty associated with IFS decreases. This result dovetails with a growing number of studies of morphological and inflectional families showing that enhancement is associated with lower uncertainty in context. Importantly, we also find that in models where IFS serve as inputs, as our theoretical analysis suggests they ought to, the uncertainty measures provide better fits to the empirical variance observed in [ɐ] formants than in models where IFS serve as outputs. This supports our suggestion that IFS serve as cognitive cues during speech production and should be treated as such in modeling. It is also consistent with the idea that when IFS serve as inputs to a learning network, the distinction is maintained between those parts of the network that represent the message and those that represent the signal. We conclude by describing how maintaining a "signal-message-uncertainty distinction" can allow us to reconcile a range of apparently contradictory findings about the relationship between articulation and uncertainty in context.

13.
Behav Res Methods ; 54(5): 2221-2251, 2022 10.
Article in English | MEDLINE | ID: mdl-35032022

ABSTRACT

Error-driven learning algorithms, which iteratively adjust expectations based on prediction error, are the basis for a vast array of computational models in the brain and cognitive sciences that often differ widely in their precise form and application: they range from simple models in psychology and cybernetics to the complex deep learning models currently dominating discussions in machine learning and artificial intelligence. However, despite the ubiquity of this mechanism, detailed analyses of its basic workings uninfluenced by existing theories or specific research goals are rare in the literature. To address this, we present an exposition of error-driven learning, focusing on its simplest form for clarity, and relate this to the historical development of error-driven learning models in the cognitive sciences. Although error-driven models have historically been thought of as associative, such that learning combines preexisting elemental representations, our analysis highlights the discriminative nature of learning in these models and the implications of this for how learning is conceptualized. We complement our theoretical introduction to error-driven learning with a practical guide to the application of simple error-driven learning models, in which we discuss a number of example simulations that are also presented in detail in an accompanying tutorial.


Subject(s)
Artificial Intelligence , Discrimination Learning , Humans , Machine Learning , Algorithms , Brain
14.
Front Psychol ; 12: 720713, 2021.
Article in English | MEDLINE | ID: mdl-34867600

ABSTRACT

This study addresses a series of methodological questions that arise when modeling inflectional morphology with Linear Discriminative Learning. Taking the semi-productive German noun system as an example, we illustrate how decisions made about the representation of form and meaning influence model performance. We clarify that for modeling frequency effects in learning, it is essential to make use of incremental learning rather than the endstate of learning. We also discuss how the model can be set up to approximate the learning of inflected words in context. In addition, we illustrate how the wug task can be modeled in this approach. The model provides an excellent memory for known words, but appropriately shows more limited performance on unseen data, in line with the semi-productivity of German noun inflection and the generalization performance of native German speakers.

15.
Front Psychol ; 12: 678712, 2021.
Article in English | MEDLINE | ID: mdl-34408699

ABSTRACT

Recent evidence for the influence of morphological structure on the phonetic output goes unexplained by established models of speech production and by theories of the morphology-phonology interaction. Linear discriminative learning (LDL) is a recent computational approach in which such effects can be expected. We predict the acoustic duration of 4,530 English derivative tokens with the morphological functions DIS, NESS, LESS, ATION, and IZE in natural speech data by using predictors derived from a linear discriminative learning network. We find that the network is accurate in learning speech production and comprehension, and that the measures derived from it are successful in predicting duration. For example, words are lengthened when the semantic support of the word's predicted articulatory path is stronger. Importantly, differences between morphological categories emerge naturally from the network, even when no morphological information is provided. The results imply that morphological effects on duration can be explained without postulating theoretical units like the morpheme, and they provide further evidence that LDL is a promising alternative for modeling speech production.

16.
SN Comput Sci ; 2(6): 420, 2021.
Article in English | MEDLINE | ID: mdl-34426802

ABSTRACT

Deep learning (DL), a branch of machine learning (ML) and artificial intelligence (AI), is nowadays considered a core technology of the Fourth Industrial Revolution (4IR or Industry 4.0). Due to its capability of learning from data, DL technology, which originated from artificial neural networks (ANNs), has become a hot topic in the context of computing and is widely applied in application areas like healthcare, visual recognition, text analytics, cybersecurity, and many more. However, building an appropriate DL model is a challenging task, due to the dynamic nature of and variations in real-world problems and data. Moreover, the lack of core understanding turns DL methods into black-box machines that hamper development at the standard level. This article presents a structured and comprehensive view of DL techniques, including a taxonomy considering various types of real-world tasks, such as supervised or unsupervised. In our taxonomy, we take into account deep networks for supervised or discriminative learning and unsupervised or generative learning, as well as hybrid learning and other relevant approaches. We also summarize real-world application areas where deep learning techniques can be used. Finally, we point out ten potential aspects of future-generation DL modeling, with research directions. Overall, this article aims to draw a big picture of DL modeling that can be used as a reference guide for both academia and industry professionals.

17.
Front Psychol ; 12: 680889, 2021.
Article in English | MEDLINE | ID: mdl-34434139

ABSTRACT

Recent research has shown that seemingly identical suffixes such as word-final /s/ in English show systematic differences in their phonetic realisations. Most recently, durational differences between different types of /s/ have been found to also hold for pseudowords: the duration of /s/ is longest in non-morphemic contexts, shorter with suffixes, and shortest in clitics. At the theoretical level, such systematic differences are unexpected and unaccounted for in current theories of speech production. Following a recent approach, we implemented a linear discriminative learning network trained on real-word data in order to predict the duration of word-final non-morphemic and plural /s/ in pseudowords, using production data from a previous production study. We demonstrate that the duration of word-final /s/ in pseudowords can be predicted by LDL networks trained on real-word data. That is, the duration of word-final /s/ in pseudowords can be predicted based on the pseudowords' relations to the lexicon.

18.
Cognition ; 212: 104697, 2021 07.
Article in English | MEDLINE | ID: mdl-33798952

ABSTRACT

In the last two decades, statistical clustering models have emerged as a dominant model of how infants learn the sounds of their language. However, recent empirical and computational evidence suggests that purely statistical clustering methods may not be sufficient to explain speech sound acquisition. To model early development of speech perception, the present study used a two-layer network trained with Rescorla-Wagner learning equations, an implementation of discriminative, error-driven learning. The model contained no a priori linguistic units, such as phonemes or phonetic features. Instead, expectations about the upcoming acoustic speech signal were learned from the surrounding speech signal, with spectral components extracted from an audio recording of child-directed speech as both inputs and outputs of the model. To evaluate model performance, we simulated infant responses in the high-amplitude sucking paradigm using vowel and fricative pairs and continua. The simulations were able to discriminate vowel and consonant pairs and predicted the infant speech perception data. The model also showed the greatest amount of discrimination in the expected spectral frequencies. These results suggest that discriminative error-driven learning may provide a viable approach to modelling early infant speech sound acquisition.


Subject(s)
Speech Perception , Speech , Child , Humans , Infant , Language Development , Learning , Phonetics
19.
Mol Neurobiol ; 58(3): 1248-1259, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33123980

ABSTRACT

Olfactory perception and learning play a vital role throughout an animal's life in habituation and survival. Insulin and insulin receptor signaling is well known to modulate olfactory function and is also involved in the regulation of neurogenesis. A very high density of insulin receptors is present in the olfactory bulb (OB), the brain area involved in olfactory function, where active adult neurogenesis also takes place. Hence, our study aimed to explore the effect of intranasal insulin treatment and the involvement of subventricular zone-olfactory bulb (SVZ-OB) neurogenesis in olfactory discriminative learning and memory in an intracerebroventricular streptozotocin (ICV STZ) rat model. Our findings revealed that intranasal insulin treatment significantly reversed the ICV STZ-induced decrease in olfactory discriminative learning. No significant change was observed in post-treatment olfactory memory upon ICV STZ and intranasal insulin treatment. ICV STZ also caused a substantial decline in SVZ-OB neurogenesis, as indicated by the reduction in the number of 5-bromo-2'-deoxyuridine-positive (BrdU+) cells, BrdU+ Nestin+ cells, and Doublecortin-positive (DCX+) cells, which was reversed by intranasal insulin treatment. Intranasal insulin treatment also increased the number of immature neurons reaching the OB, as indicated by an increase in DCX expression in the OB compared to the ICV STZ-administered group. ICV STZ administration also modulated the expression of genes regulating postnatal SVZ-OB neurogenesis, such as Mammalian achaete-scute homolog 1 (Mash1), Neurogenin 2 (Ngn2), Neuronal differentiation 1 (NeuroD1), and T-box brain protein 2 (Tbr2). Intranasal insulin treatment reverted these changes in gene expression, which might be responsible for the observed increase in SVZ-OB neurogenesis and hence in olfactory discriminative learning.


Subject(s)
Discrimination Learning , Insulin/administration & dosage , Lateral Ventricles/pathology , Neurogenesis , Olfactory Bulb/pathology , Up-Regulation , Administration, Intranasal , Animals , Bromodeoxyuridine/metabolism , Discrimination Learning/drug effects , Disease Models, Animal , Doublecortin Domain Proteins , Doublecortin Protein , Gene Expression Regulation/drug effects , Insulin/pharmacology , Male , Microtubule-Associated Proteins/metabolism , Nestin/metabolism , Neurogenesis/drug effects , Neurogenesis/genetics , Neuropeptides/metabolism , Olfactory Bulb/drug effects , Rats, Sprague-Dawley , Streptozocin , Up-Regulation/drug effects , Up-Regulation/genetics
20.
Behav Res Methods ; 53(3): 945-976, 2021 06.
Article in English | MEDLINE | ID: mdl-32377973

ABSTRACT

Pseudowords have long served as key tools in psycholinguistic investigations of the lexicon. A common assumption underlying the use of pseudowords is that they are devoid of meaning: comparing words and pseudowords may then shed light on how meaningful linguistic elements are processed differently from meaningless sound strings. However, pseudowords may in fact carry meaning. On the basis of a computational model of lexical processing, linear discriminative learning (LDL; Baayen et al., 2019), we compute numeric vectors representing the semantics of pseudowords. We demonstrate that quantitative measures gauging the semantic neighborhoods of pseudowords predict reaction times in the Massive Auditory Lexical Decision (MALD) database (Tucker et al., 2018). We also show that the model successfully predicts the acoustic durations of pseudowords. Importantly, model predictions hinge on the hypothesis that the mechanisms underlying speech production and comprehension interact. Thus, pseudowords emerge as an outstanding tool for gauging the resonance between production and comprehension. Many pseudowords in the MALD database contain inflectional suffixes. Unlike many contemporary models, LDL captures the semantic commonalities of forms sharing inflectional exponents without using the linguistic construct of morphemes. We discuss methodological and theoretical implications for models of lexical processing and morphological theory. The results of this study, complementing those on real words reported in Baayen et al. (2019), thus provide further evidence for the usefulness of LDL both as a cognitive model of the mental lexicon and as a tool for generating new quantitative measures that are predictive of human lexical processing.
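The core mechanism, assigning a semantic vector to a pseudoword by passing its form vector through a mapping learned from real words, can be sketched as follows. All matrices here are toy stand-ins for the model's actual form and meaning representations, and the neighborhood measure is a generic correlation, not necessarily the measure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

C = rng.normal(size=(10, 8))    # 10 known word forms x 8 form features
S = rng.normal(size=(10, 5))    # their semantic vectors
# Comprehension mapping estimated from the real words (least squares).
F, *_ = np.linalg.lstsq(C, S, rcond=None)

c_pseudo = rng.normal(size=8)   # a pseudoword's form vector (invented)
s_pseudo = c_pseudo @ F         # its predicted semantic vector

# Semantic neighborhood: correlations with known word meanings.
corrs = [np.corrcoef(s_pseudo, s)[0, 1] for s in S]
print(round(max(corrs), 3))
```

Measures summarizing such neighborhoods (for example, the similarity to the nearest known meaning) are the kind of quantitative predictors the study relates to reaction times and acoustic durations.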


Subject(s)
Comprehension , Discrimination Learning , Humans , Psycholinguistics , Semantics , Speech