1.
Cogn Sci ; 46(2): e13079, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35122314

ABSTRACT

Subject-verb agreement errors are common in sentence production. Many studies have used experimental paradigms targeting the production of subject-verb agreement from a sentence preamble (The key to the cabinets) and eliciting verb errors (… *were shiny). Through reanalysis of previous data (50 experiments; 102,369 observations), we show that this paradigm also results in many errors in preamble repetition, particularly of local noun number (The key to the *cabinet). We explore the mechanisms of both errors in Parallelism in Producing Syntax (PIPS), a model in the Gradient Symbolic Computation framework. PIPS models sentence production using a continuous-state stochastic dynamical system that optimizes grammatical constraints (shaped by previous experience) over vector representations of symbolic structures. At intermediate stages in the computation, grammatical constraints allow multiple competing parses to be partially activated, resulting in stable but transient conjunctive blend states. In the context of the preamble completion task, memory constraints reduce the strength of the target structure, allowing for co-activation of non-target parses where the local noun controls the verb (notional agreement and locally agreeing relative clauses) and non-target parses that include structural constituents with contrasting number specifications (e.g., plural instead of singular local noun). Simulations of the preamble completion task reveal that these partially activated non-target parses, as well as the need to balance accurate encoding of lexical and syntactic aspects of the prompt, result in errors. In other words: Because sentence processing is embedded in a processor with finite memory and prior experience with production, interference from non-target production plans causes errors.
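
To make the model's dynamics concrete, the following minimal Python sketch implements Gradient Symbolic Computation-style optimization in one dimension. It is not the authors' PIPS implementation: the harmony terms, constraint weights, noise level, and quantization schedule are all illustrative assumptions. A blend state between the target parse and a non-target (local-noun-controlled) parse follows a noisy harmony gradient while growing quantization pressure forces commitment to one discrete parse.

    # A minimal GSC-style sketch, not the authors' PIPS model.
    # All weights and schedules below are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def harmony(x, w_target, w_local, q):
        # x in [0, 1]: activation of the non-target parse (local noun
        # controls number); 1 - x is the target parse.
        h_grammar = w_target * (1 - x) + w_local * x      # constraint support
        h_quant = -q * (x ** 2) * ((1 - x) ** 2)          # penalize blend states
        return h_grammar + h_quant

    def grad(x, w_target, w_local, q, eps=1e-4):
        # Simple numeric gradient of the harmony surface.
        return (harmony(x + eps, w_target, w_local, q)
                - harmony(x - eps, w_target, w_local, q)) / (2 * eps)

    def run_trial(w_target=1.0, w_local=0.8, noise=0.15, steps=2000, dt=0.01):
        x = 0.5                                   # fully blended start state
        for t in range(steps):
            q = 4.0 * t / steps                   # commitment pressure grows
            x += dt * grad(x, w_target, w_local, q)
            x += noise * np.sqrt(dt) * rng.standard_normal()
            x = min(max(x, 0.0), 1.0)
        return x > 0.5                            # True = non-target parse wins

    errors = sum(run_trial() for _ in range(500))
    print(f"non-target parse chosen on {errors / 500:.1%} of trials")

Raising the noise or weakening w_target (as memory constraints would) increases the share of trials on which the non-target parse wins, mirroring the error mechanism the abstract describes.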


Subject(s)
Language, Semantics, Humans
2.
Lang Acquis ; 24(4): 283-306, 2017.
Article in English | MEDLINE | ID: mdl-33033424

ABSTRACT

In this paper, two dimensions of handshape complexity are analyzed as potential building blocks of phonological contrast: joint complexity and finger group complexity. We ask whether sign language patterns are elaborations of those seen in the gestures produced by hearing people without speech (pantomime) or a more radical re-organization of them. Data from adults and children are analyzed to address issues of cross-linguistic variation, emergence, and acquisition. Study 1 addresses these issues in adult signers and gesturers from the United States, Italy, China, and Nicaragua. Study 2 addresses these issues in child and adult groups (signers and gesturers) from the United States, Italy, and Nicaragua. We argue that handshape undergoes a fairly radical reorganization, including loss and reorganization of iconicity and feature redistribution, as phonologization takes place in both of these dimensions. Moreover, while the patterns investigated here are not evidence of duality of patterning, we conclude that they are indeed phonological, and that they appear earlier than related morphosyntactic patterns that use the same types of handshape.

3.
Front Psychol ; 7: 867, 2016.
Article in English | MEDLINE | ID: mdl-27375543

ABSTRACT

Learning is typically understood as a process in which the behavior of an organism is progressively shaped until it closely approximates a target form. It is easy to comprehend how a motor skill or a vocabulary can be progressively learned: in each case, one can conceptualize a series of intermediate steps which lead to the formation of a proficient behavior. With grammar, it is more difficult to think in these terms. For example, center-embedded recursive structures seem to involve a complex interplay between multiple symbolic rules which have to be in place simultaneously for the system to work at all, so it is not obvious how the mechanism could gradually come into being. Here, we offer empirical evidence from a new artificial language (or "artificial grammar") learning paradigm, Locus Prediction, that, despite the conceptual conundrum, recursion acquisition occurs gradually, at least for a simple formal language. In particular, we focus on a variant of the simplest recursive language, a^n b^n, and find evidence that (i) participants trained on two levels of structure (essentially ab and aabb) generalize to the next higher level (aaabbb) more readily than participants trained on one level of structure (ab) combined with a filler sentence; nevertheless, they do not generalize immediately; (ii) participants trained up to three levels (ab, aabb, aaabbb) generalize more readily to four levels than participants trained on two levels generalize to three; (iii) when we present the levels in succession, starting with the lower levels and including more and more of the higher levels, participants show evidence of transitioning between the levels gradually, exhibiting intermediate patterns of behavior on which they were not trained; (iv) the intermediate patterns of behavior are associated with perturbations of an attractor in the sense of dynamical systems theory. We argue that all of these behaviors indicate a theory of mental representation in which recursive systems lie on a continuum of grammar systems, organized so that grammars producing similar behaviors are near one another, and that people learning a recursive system navigate progressively through the space of these grammars.
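
For concreteness, here is a small Python sketch of the a^n b^n levels used in the contrasts above; the function names and the level-1-to-2 training split are illustrative, not the authors' Locus Prediction materials.

    # Illustrative a^n b^n levels; not the authors' actual stimuli.
    def anbn(n):
        """Level-n string of the language a^n b^n, e.g. anbn(2) == 'aabb'."""
        return "a" * n + "b" * n

    def is_anbn(s):
        """Return True iff s is a^n b^n for some n >= 1."""
        n = len(s) // 2
        return len(s) % 2 == 0 and n >= 1 and s == anbn(n)

    # Train on levels 1-2, probe generalization to level 3,
    # mirroring contrast (i) in the abstract.
    train = [anbn(n) for n in (1, 2)]      # ['ab', 'aabb']
    probe = anbn(3)                        # 'aaabbb'
    assert all(is_anbn(s) for s in train) and is_anbn(probe)
    print(train, "->", probe)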

4.
J Exp Psychol Learn Mem Cogn ; 40(2): 326-47, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24245535

ABSTRACT

Psycholinguistic research spanning a number of decades has produced diverging results with regard to the nature of constraint integration in online sentence processing. For example, evidence that language users anticipatorily fixate likely upcoming referents in advance of evidence in the speech signal supports rapid context integration. By contrast, evidence that language users activate representations that conflict with contextual constraints, or only indirectly satisfy them, supports nonintegration or late integration. Here we report on a self-organizing neural network framework that addresses 1 aspect of constraint integration: the integration of incoming lexical information (i.e., an incoming word) with sentence context information (i.e., from preceding words in an unfolding utterance). In 2 simulations, we show that the framework predicts both classic results concerned with lexical ambiguity resolution (Swinney, 1979; Tanenhaus, Leiman, & Seidenberg, 1979), which suggest late context integration, and results demonstrating anticipatory eye movements (e.g., Altmann & Kamide, 1999), which support rapid context integration. We also report 2 experiments using the visual world paradigm that confirm a new prediction of the framework. Listeners heard sentences like "The boy will eat the white …" while viewing visual displays with objects like a white cake (i.e., a predictable direct object of "eat"), white car (i.e., an object not predicted by "eat," but consistent with "white"), and distractors. In line with our simulation predictions, we found that while listeners fixated white cake most, they also fixated white car more than unrelated distractors in this highly constraining sentence (and visual) context.
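
As a toy illustration of rapid, graded context integration (not the authors' self-organizing network), this Python sketch combines the support each heard word lends to each display object and renormalizes after every word; the three-object display and the support values are assumed.

    # Toy constraint-integration sketch; the lexicon and support
    # values are illustrative assumptions, not the authors' model.
    import numpy as np

    objects = ["white cake", "white car", "distractor"]
    # Support each word lends to each object (assumed values).
    support = {
        "eat":   np.array([1.0, 0.1, 0.1]),   # cake is edible
        "white": np.array([1.0, 1.0, 0.1]),   # cake and car are white
    }

    act = np.ones(3) / 3                       # uniform starting activation
    for word in ["eat", "white"]:
        act = act * support[word]              # multiplicative integration
        act = act / act.sum()                  # renormalize to a distribution
        print(word, dict(zip(objects, act.round(3))))

After "eat" and "white", the cake dominates but the car retains more activation than the distractor, the graded pattern the eye-movement data above show.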


Subject(s)
Attention/physiology, Concept Formation/physiology, Models, Psychological, Semantics, Verbal Behavior, Eye Movements/physiology, Female, Humans, Male, Photic Stimulation, Psycholinguistics, Students, Universities
5.
Top Cogn Sci ; 5(3): 634-67, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23798028

ABSTRACT

We examine two connectionist networks, a fractal learning neural network (FLNN) and a Simple Recurrent Network (SRN), that are trained to process center-embedded symbol sequences. Previous work provides evidence that connectionist networks trained on infinite-state languages tend to form fractal encodings. Most such work focuses on simple counting recursion cases (e.g., a^n b^n), which are not comparable to the complex recursive patterns seen in natural language syntax. Here, we consider exponential state growth cases (including mirror recursion), describe a new training scheme that seems to facilitate learning, and note that the connectionist learning of these cases has a continuous metamorphosis property that looks very different from what is achievable with symbolic encodings. We identify a property, ragged progressive generalization, which helps make this difference clearer. We suggest two conclusions. First, the fractal analysis of these more complex learning cases reveals the possibility of comparing connectionist networks and symbolic models of grammatical structure in a principled way; this helps remove the black-box character of connectionist networks and indicates how the theory they support differs from symbolic approaches. Second, the findings indicate the value of future, linked mathematical and empirical work on these models, something that is more possible now than it was 10 years ago.
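
For orientation, below is a minimal Elman-style SRN doing next-symbol prediction on the simple counting-recursion case a^n b^n (with an end marker). The layer sizes, learning rate, one-step gradient, and symbol coding are illustrative assumptions, far simpler than the FLNN and SRN setups analyzed in the paper.

    # Minimal Elman SRN sketch; illustrative, not the paper's setup.
    import numpy as np

    rng = np.random.default_rng(1)
    SYMS = "ab#"                                  # '#' marks string end
    NI, NH, NO = len(SYMS), 10, len(SYMS)

    Wxh = rng.normal(0, 0.5, (NH, NI))
    Whh = rng.normal(0, 0.5, (NH, NH))
    Who = rng.normal(0, 0.5, (NO, NH))

    def one_hot(c):
        v = np.zeros(NI)
        v[SYMS.index(c)] = 1.0
        return v

    def sequence(n):
        return "a" * n + "b" * n + "#"

    lr = 0.1
    for epoch in range(3000):
        s = sequence(rng.integers(1, 4))          # train on levels 1-3
        h = np.zeros(NH)
        for x_c, y_c in zip(s[:-1], s[1:]):       # (input, target) pairs
            x, y = one_hot(x_c), one_hot(y_c)
            h_prev = h
            h = np.tanh(Wxh @ x + Whh @ h_prev)
            z = Who @ h
            p = np.exp(z - z.max()); p /= p.sum() # softmax prediction
            # One-step (Elman-style) gradients: context treated as input.
            dz = p - y
            dh = (Who.T @ dz) * (1 - h ** 2)
            Who -= lr * np.outer(dz, h)
            Wxh -= lr * np.outer(dh, x)
            Whh -= lr * np.outer(dh, h_prev)

    # Probe: after 'aaabbb' a successfully trained net should favor '#'.
    h = np.zeros(NH)
    for c in sequence(3)[:-1]:
        h = np.tanh(Wxh @ one_hot(c) + Whh @ h)
    z = Who @ h
    p = np.exp(z - z.max()); p /= p.sum()
    print(dict(zip(SYMS, p.round(3))))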


Subject(s)
Computer Simulation, Language, Models, Theoretical, Neural Networks, Computer, Algorithms, Fractals, Generalization, Psychological, Humans