Results 1-20 of 64
1.
Perspect Behav Sci ; 46(1): 119-136, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37006601

ABSTRACT

The evolutionary theory of behavior dynamics (ETBD) is a complexity theory, which means that it is stated in the form of simple low-level rules, the repeated operation of which generates high-level outcomes that can be compared to data. The low-level rules of the theory implement Darwinian processes of selection, reproduction, and mutation. This tutorial is an introduction to the ETBD for a general audience, and illustrates how the theory is used to animate artificial organisms that can behave continuously in any experimental environment. Extensive research has shown that the theory generates behavior in artificial organisms that is indistinguishable in qualitative and quantitative detail from the behavior of live organisms in a wide variety of experimental environments. An overview and summary of this supporting evidence is provided. The theory may be understood to be computationally equivalent to the biological nervous system, which means that the algorithmic operation of the theory and the material operation of the nervous system give the same answers. The applied relevance of the theory is also discussed, including the creation of artificial organisms with various forms of psychopathology that can be used to study clinical problems and their treatment. Finally, possible future directions are discussed, such as the extension of the theory to behavior in a two-dimensional grid world.
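The low-level Darwinian rules described above can be illustrated with a toy sketch. This is not the ETBD's published implementation; the population size, bit length, reinforced-class bounds, and mutation rate below are all illustrative values chosen for the demonstration.

```python
import random

random.seed(0)

BITS, POP_SIZE = 10, 100                 # behaviors represented as 10-bit integers (0-1023)
TARGET_LOW, TARGET_HIGH = 471, 512       # an illustrative reinforced class of behaviors

def fitness(b):
    """Distance from the reinforced class; smaller is fitter."""
    if TARGET_LOW <= b <= TARGET_HIGH:
        return 0
    return min(abs(b - TARGET_LOW), abs(b - TARGET_HIGH))

def generation(pop, mutation_rate=0.05):
    """One repetition of the low-level rules: selection, reproduction, mutation."""
    # selection: fitter behaviors are more likely to become parents
    weights = [1.0 / (1 + fitness(b)) for b in pop]
    parents = random.choices(pop, weights=weights, k=2 * len(pop))
    children = []
    for p1, p2 in zip(parents[::2], parents[1::2]):
        # reproduction: single-point bitwise crossover of the two parents
        point = random.randrange(1, BITS)
        mask = (1 << point) - 1
        child = (p1 & mask) | (p2 & ~mask) & (2**BITS - 1)
        # mutation: each bit flips with small probability
        for bit in range(BITS):
            if random.random() < mutation_rate:
                child ^= 1 << bit
        children.append(child)
    return children

pop0 = [random.randrange(2**BITS) for _ in range(POP_SIZE)]
mean_fit0 = sum(map(fitness, pop0)) / POP_SIZE

pop = pop0
for _ in range(50):                      # repeated operation of the low-level rules
    pop = generation(pop)
mean_fit = sum(map(fitness, pop)) / POP_SIZE  # high-level outcome: adapted population
```

The point of the sketch is the architecture: simple selection, reproduction, and mutation rules, iterated, drive the population of potential behaviors toward the reinforced class, with no high-level rule describing that outcome anywhere in the code.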

2.
J Exp Anal Behav ; 119(1): 117-128, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36416717

ABSTRACT

A test of the evolutionary theory was conducted by replicating Bradshaw et al.'s (1977, 1978, 1979) experiments in which human participants worked on single-alternative variable-interval (VI) schedules of reinforcement under three punishment conditions: no punishment, superimposed VI punishment, and superimposed variable-ratio (VR) punishment. Artificial organisms (AOs) animated by the theory worked in the same environments. Four principal findings were reported for the human participants: (1) their behavior was well described by a hyperbola in all conditions, (2) the asymptote of the hyperbola under VI punishment was equal to the asymptote in the absence of punishment, but the asymptote under VR punishment was lower than the asymptote in the absence of punishment, (3) the parameter in the denominator of the hyperbola was larger under both VI and VR punishment than in the absence of punishment, and (4) response suppression under punishment was greater at lower than at higher reinforcement frequencies. These four outcomes were also observed in the behavior of the AOs working in the same environments, thereby confirming the theory's first-order predictions about the effects of punishment on single-alternative responding.
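The hyperbola in question is Herrnstein's equation, R = kr/(r + r_e), where R is response rate, r the obtained reinforcement rate, k the asymptote, and r_e the denominator parameter. A minimal sketch with illustrative parameter values (not the study's fitted estimates) shows how findings (3) and (4) connect: enlarging r_e suppresses responding proportionally more at low reinforcement rates.

```python
def herrnstein(r, k, r_e):
    """Herrnstein's hyperbola: R = k*r / (r + r_e)."""
    return k * r / (r + r_e)

# Illustrative parameter values, not the study's estimates:
k, r_e, r_e_punished = 100.0, 50.0, 150.0   # punishment enlarges r_e (finding 3)

# Proportional suppression at a low and a high reinforcement rate:
supp_low = 1 - herrnstein(20.0, k, r_e_punished) / herrnstein(20.0, k, r_e)
supp_high = 1 - herrnstein(200.0, k, r_e_punished) / herrnstein(200.0, k, r_e)
# suppression is greater at the lower reinforcement rate (finding 4)
```

Algebraically, suppression here equals (r_e' - r_e)/(r + r_e'), which shrinks as r grows — so finding (4) follows directly from finding (3).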


Subjects
Punishment , Reinforcement (Psychology) , Humans , Reinforcement Schedule , Biological Evolution
5.
J Exp Anal Behav ; 116(2): 225-242, 2021 09.
Article in English | MEDLINE | ID: mdl-34383960

ABSTRACT

Artificial organisms (AOs) animated by an evolutionary theory of behavior dynamics (ETBD) worked on concurrent interval schedules with a standard reinforcer magnitude on 1 alternative and a range of reinforcer magnitudes on the other. The reinforcer magnitudes on the second alternative were hedonically scaled using the generalized matching law. The AOs then worked on single interval schedules that arranged various combinations of the scaled reinforcer magnitudes and a range of nominal schedule values. This produced bivariate response rate data to which 5 candidate equations were fitted. One equation was found to provide the best description of the bivariate data in terms of percentage of variance accounted for, information criterion value, and residual profile. This equation consisted of 2 factors, 1 entailing the scaled magnitude, 1 entailing the obtained reinforcement rate, and both expressed in the form of exponentiated hyperbolas. The theory's prediction of the bivariate equation, along with additional predictions of the theory, were tested on data from an experiment in which rats pressed levers for various concentrations of sucrose pellets. The bivariate equation predicted by the theory was confirmed, as were all the additional predictions of the theory that could be tested on this data set.


Subjects
Choice Behavior , Reinforcement (Psychology) , Animals , Biological Evolution , Rats , Reinforcement Schedule , Sucrose
6.
J Exp Anal Behav ; 115(3): 747-768, 2021 05.
Article in English | MEDLINE | ID: mdl-33711206

ABSTRACT

We performed three experiments to improve the quality and retention of data obtained from a Procedure for Rapidly Establishing Steady-State Behavior (PRESS-B; Klapes et al., 2020). In Experiment 1, 120 participants worked on nine concurrent random-interval random-interval (conc RI RI) schedules and were assigned to four conditions of varying changeover delay (COD) length. The 0.5-s COD condition group exhibited the fewest instances of exclusive reinforcer acquisition. Importantly, this group did not differ in generalized matching law (GML) fit quality from the other groups. In Experiment 2, 60 participants worked on nine conc RI RI schedules with a wider range of scheduled reinforcement rate ratios than was used in Experiment 1. Participants showed dramatic reductions in exclusive reinforcer acquisition. Experiment 3 entailed a replication of Experiment 2 wherein blackout periods were implemented between the schedule presentations and each schedule remained in operation until at least one reinforcer was acquired on each alternative. GML fit quality was slightly more consistent in Experiment 3 than in the previous experiments. Thus, these results suggest that future PRESS-B studies should implement a shorter COD, a wider and richer scheduled reinforcement rate ratio range, and brief blackouts between schedule presentations for optimal data quality and retention.


Subjects
Operant Conditioning , Reinforcement (Psychology) , Choice Behavior , Humans , Reinforcement Schedule
7.
Perspect Behav Sci ; 44(4): 561-580, 2021 Dec.
Article in English | MEDLINE | ID: mdl-35098025

ABSTRACT

This article provides an overview of highlights from 60 years of basic research on choice that are relevant to the assessment and treatment of clinical problems. The quantitative relations developed in this research provide useful information about a variety of clinical problems including aggressive, antisocial, and delinquent behavior, attention-deficit/hyperactivity disorder (ADHD), bipolar disorder, chronic pain syndrome, intellectual disabilities, pedophilia, and self-injurious behavior. A recent development in this field is an evolutionary theory of behavior dynamics that is used to animate artificial organisms (AOs). The behavior of AOs animated by the theory has been shown to conform to the quantitative relations that have been developed in the choice literature over the years, which means that the theory generates these relations as emergent outcomes, and therefore provides a theoretical basis for them. The theory has also been used to create AOs that exhibit specific psychopathological behavior, the assessment and treatment of which has been studied virtually. This modeling of psychopathological behavior has contributed to our understanding of the nature and treatment of the problems in humans.

8.
Perspect Behav Sci ; 44(4): 581-603, 2021 Dec.
Article in English | MEDLINE | ID: mdl-35098026

ABSTRACT

The subtypes of automatically reinforced self-injurious behavior (ASIB) delineated by Hagopian and colleagues (Hagopian et al., 2015; 2017) demonstrated how functional-analysis (FA) outcomes may predict the efficacy of various treatments. However, the mechanisms underlying the different patterns of responding obtained during FAs and corresponding differences in treatment efficacy have remained unclear. A central cause of this lack of clarity is that some proposed mechanisms, such as differences in the reinforcing efficacy of the products of ASIB, are difficult to manipulate. One solution may be to model subtypes of ASIB using mathematical models of behavior in which all aspects of the behavior can be controlled. In the current study, we used the evolutionary theory of behavior dynamics (ETBD; McDowell, 2019) to model the subtypes of ASIB, evaluate predictions of treatment efficacy, and replicate recent research aiming to test explanations for subtype differences. Implications for future research related to ASIB are discussed.

9.
J Exp Anal Behav ; 114(3): 430-446, 2020 11.
Article in English | MEDLINE | ID: mdl-33025598

ABSTRACT

The axiomatic principle that all behavior is choice was incorporated into a revised implementation of an evolutionary theory's account of behavior on single schedules. According to this implementation, target responding occurs in the context of background responding and reinforcement. In Phase 1 of the research, the target responding of artificial organisms (AOs) animated by the revised theory was found to be well described by an exponentiated hyperbola, the parameters of which varied as a function of the background reinforcement rate. In Phase 2, the effect of reinforcer magnitude on the target behavior of the AOs was studied. As in Phase 1, the AOs' behavior was well described by an exponentiated hyperbola, the parameters of which varied with both the target reinforcer magnitude and the background reinforcement rate. Evidence from experiments with live organisms was found to be consistent with the Phase-1 predictions of the revised theory. The Phase-2 predictions have not been tested. The revised implementation of the theory can be used to study the effects of superimposing punishment on single-schedule responding, and it may lead to the discovery of a function that relates response rate to both the rate and magnitude of reinforcement on single schedules.


Subjects
Biological Evolution , Choice Behavior , Animals , Behavior , Humans , Biological Models , Psychological Theory , Reinforcement Schedule , Reinforcement (Psychology)
10.
J Exp Anal Behav ; 114(1): 142-159, 2020 07.
Article in English | MEDLINE | ID: mdl-32543721

ABSTRACT

Previous continuous choice laboratory procedures for human participants are either prohibitively time-intensive or result in inadequate fits of the generalized matching law (GML). We developed a rapid-acquisition laboratory procedure (Procedure for Rapidly Establishing Steady-State Behavior, or PRESS-B) for studying human continuous choice that reduces participant burden and produces data that are well described by the GML. To test the procedure, 27 human participants were exposed to 9 independent concurrent random-interval random-interval reinforcement schedules over the course of a single, 37-min session. Fits of the GML to the participants' data accounted for large proportions of variance (median R2: 0.94), with parameter estimates that were similar to those previously found in human continuous choice studies [median a: 0.67; median log(b): -0.02]. In summary, PRESS-B generates human continuous choice behavior in the laboratory that conforms to the GML with limited experimental duration.
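The GML fits reported here take the form log(B1/B2) = a·log(r1/r2) + log b, fit by ordinary least squares in log-log space, where a indexes sensitivity (undermatching when a < 1) and log b indexes bias. A minimal sketch with synthetic data (the numbers below are made up for illustration, not the study's):

```python
import math

def fit_gml(behavior_ratios, reinforcer_ratios):
    """OLS fit of log(B1/B2) = a*log(r1/r2) + log(b); returns (a, log_b)."""
    xs = [math.log10(r) for r in reinforcer_ratios]
    ys = [math.log10(b) for b in behavior_ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # closed-form simple-regression slope and intercept
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    log_b = my - a * mx
    return a, log_b

# Synthetic choice data with undermatching (a = 0.67) and no bias:
r_ratios = [0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
b_ratios = [r ** 0.67 for r in r_ratios]
a, log_b = fit_gml(b_ratios, r_ratios)
```

On noiseless synthetic data the fit recovers a = 0.67 and log b = 0 exactly; real session data adds scatter around the line, and R2 of this regression is the fit-quality measure the abstract reports.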


Subjects
Choice Behavior , Discrimination Learning , Operant Conditioning , Humans , Photic Stimulation , Reinforcement Schedule , Reinforcement (Psychology)
11.
J Exp Anal Behav ; 112(2): 128-143, 2019 09.
Article in English | MEDLINE | ID: mdl-31385310

ABSTRACT

An implementation of punishment in the evolutionary theory of behavior dynamics is proposed, and is applied to responding on concurrent schedules of reinforcement with superimposed punishment. In this implementation, punishment causes behaviors to mutate, and to do so with a higher probability in a lean reinforcement context than in a rich one. Computational experiments were conducted in an attempt to replicate three findings from experiments with live organisms. These are (1) when punishment is superimposed on one component of a concurrent schedule, response rate decreases in the punished component and increases in the unpunished component, (2) when punishment is superimposed on both components at equal scheduled rates, preference increases over its no-punishment baseline, and (3) when punishment is superimposed on both components at rates that are proportional to the scheduled rates of reinforcement, preference remains unchanged from the baseline preference. Artificial organisms animated by the theory, and working on concurrent schedules with superimposed punishment, reproduced all of these findings. Given this outcome, it may be possible to discover a steady-state mathematical description of punished choice in live organisms by studying the punished choice behavior of artificial organisms animated by the evolutionary theory.


Subjects
Biological Evolution , Psychological Theory , Punishment/psychology , Animals , Choice Behavior , Columbidae , Operant Conditioning , Psychological Models , Rats , Reinforcement Schedule , Reinforcement (Psychology)
12.
J Exp Anal Behav ; 111(2): 166-182, 2019 03.
Article in English | MEDLINE | ID: mdl-30706474

ABSTRACT

Regularization, or shrinkage estimation, refers to a class of statistical methods that constrain the variability of parameter estimates when fitting models to data. These constraints move parameters toward a group mean or toward a fixed point (e.g., 0). Regularization has gained popularity across many fields for its ability to increase predictive power over classical techniques. However, articles published in JEAB and other behavioral journals have yet to adopt these methods. This paper reviews some common regularization schemes and speculates as to why articles published in JEAB do not use them. In response, we propose our own shrinkage estimator that avoids some of the possible objections associated with the reviewed regularization methods. Our estimator works by mixing weighted individual and group (WIG) data rather than by constraining parameters. We test this method on a problem of model selection. Specifically, we conduct a simulation study on the selection of matching-law-based punishment models, comparing WIG with ordinary least squares (OLS) regression, and find that, on average, WIG outperforms OLS in this context.
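The core idea of mixing weighted individual and group data can be sketched schematically. The article specifies the actual WIG estimator; the mixing rule and weight below are assumptions made for illustration only.

```python
def wig_mix(individual, group, weight):
    """Schematically mix one subject's data points with the group-average
    data before model fitting (weight = share given to the individual).
    The mixing rule here is illustrative, not the article's estimator."""
    return [weight * ind + (1 - weight) * grp
            for ind, grp in zip(individual, group)]

# Three subjects' measurements at three matched conditions (made-up numbers):
subjects = [[2.0, 4.0, 6.0], [4.0, 8.0, 12.0], [3.0, 6.0, 9.0]]
group_mean = [sum(vals) / len(subjects) for vals in zip(*subjects)]

# An extreme individual's data are pulled toward the group before fitting:
mixed = wig_mix(subjects[1], group_mean, weight=0.5)
```

The shrinkage happens at the data level rather than the parameter level: a model fitted to `mixed` yields estimates intermediate between the individual's and the group's, which is how the approach sidesteps direct constraints on parameters.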


Subjects
Applied Behavior Analysis/statistics & numerical data , Statistical Models , Statistics as Topic , Computer Simulation , Least-Squares Analysis , Punishment
13.
J Exp Anal Behav ; 111(1): 130-145, 2019 01.
Article in English | MEDLINE | ID: mdl-30656712

ABSTRACT

The evolutionary theory of behavior dynamics is a complexity theory that instantiates the Darwinian principles of selection, reproduction, and mutation in a genetic algorithm. The algorithm is used to animate artificial organisms that behave continuously in time and can be placed in any experimental environment. The present paper is an update on the status of the theory. It includes a summary of the evidence supporting the theory, a list of the theory's untested predictions, and a discussion of how the algorithmic operations of the theory may correspond to material reality. Based on the evidence reviewed here, the evolutionary theory appears to be a strong candidate for a comprehensive theory of adaptive behavior.


Subjects
Behavior , Biological Evolution , Psychological Theory , Algorithms , Animals , Animal Behavior , Humans
14.
J Exp Anal Behav ; 110(3): 323-335, 2018 11.
Article in English | MEDLINE | ID: mdl-30195256

ABSTRACT

An evolutionary theory of adaptive behavior dynamics was tested by studying the behavior of artificial organisms (AOs) animated by the theory, working on concurrent ratio schedules with unequal and equal ratios in the components. The evolutionary theory implements Darwinian rules of selection, reproduction, and mutation in the form of a genetic algorithm that causes a population of potential behaviors to evolve under the selection pressure of consequences from the environment. On concurrent ratio schedules with unequal ratios in the components, the AOs tended to respond exclusively on the component with the smaller ratio, provided that ratio was not too large and the difference between the ratios was not too small. On concurrent ratio schedules with equal ratios in the components, the AOs tended to respond exclusively on one component, provided the equal ratios were not too large. In addition, the AOs' preference on the latter schedules adjusted rapidly when the equal ratios were changed between conditions, but their steady-state preference was a continuous function of the value of the equal ratios. Most of these outcomes are consistent with the results of experiments with live organisms, and consequently support the evolutionary theory.


Subjects
Behavior , Biological Evolution , Animals , Computer Simulation , Environment , Humans , Psychological Theory , Reproduction
15.
J Exp Anal Behav ; 109(2): 336-348, 2018 03.
Article in English | MEDLINE | ID: mdl-29509286

ABSTRACT

A direct-suppression, or subtractive, model of punishment has been supported as the qualitatively and quantitatively superior matching-law-based punishment model (Critchfield, Paletz, MacAleese, & Newland, 2003; de Villiers, 1980; Farley, 1980). However, this conclusion was reached without testing the model against its predecessors, including the original (Herrnstein, 1961) and generalized (Baum, 1974) matching laws, which have different numbers of parameters. To rectify this issue, we reanalyzed a set of data collected by Critchfield et al. (2003) using information theoretic model selection criteria. We found that the most advanced version of the direct-suppression model (Critchfield et al., 2003) does not convincingly outperform the generalized matching law, an account that does not include punishment rates in its prediction of behavior allocation. We hypothesize that this failure to outperform the generalized matching law is due to significant theoretical shortcomings in model development. To address these shortcomings, we present a list of requirements that all punishment models should satisfy. The requirements include formal statements of flexibility, efficiency, and adherence to theory. We compare all past punishment models to the items on this list through algebraic arguments and model selection criteria. None of the models presented in the literature thus far meets all of the requirements.


Subjects
Psychological Models , Punishment/psychology , Animals , Classical Conditioning , Psychological Inhibition , Statistical Models , Psychological Theory , Rats , Reinforcement (Psychology)
16.
Behav Processes ; 140: 61-68, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28373055

ABSTRACT

Two competing predictions of matching theory and an evolutionary theory of behavior dynamics, and one additional prediction of the evolutionary theory, were tested in a critical experiment in which human participants worked on concurrent schedules for money (Dallery et al., 2005). The three predictions concerned the descriptive adequacy of matching theory equations, and of equations describing emergent equilibria of the evolutionary theory. Tests of the predictions falsified matching theory and supported the evolutionary theory.


Subjects
Biological Evolution , Psychological Models , Psychological Theory , Humans , Reinforcement Schedule , Reinforcement (Psychology)
18.
J Exp Anal Behav ; 105(3): 445-58, 2016 05.
Article in English | MEDLINE | ID: mdl-27193244

ABSTRACT

A survey of residual analysis in behavior-analytic research reveals that existing methods are problematic in one way or another. A new test for residual trends is proposed that avoids the problematic features of the existing methods. It entails fitting cubic polynomials to sets of residuals and comparing their effect sizes to those that would be expected if the sets of residuals were random. To this end, sampling distributions of effect sizes for fits of a cubic polynomial to random data were obtained by generating sets of random standardized residuals of various sizes, n. A cubic polynomial was then fitted to each set of residuals and its effect size was calculated. This yielded a sampling distribution of effect sizes for each n. To test for a residual trend in experimental data, the median effect size of cubic-polynomial fits to sets of experimental residuals can be compared to the median of the corresponding sampling distribution of effect sizes for random residuals using a sign test. An example from the literature, which entailed comparing mathematical and computational models of continuous choice, is used to illustrate the utility of the test.
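The proposed test can be sketched in outline: fit a cubic polynomial to a residual series by ordinary least squares, take R2 as the effect size, and build the sampling distribution by Monte Carlo on random residuals. This is a schematic reconstruction, not the article's code; the sample sizes and the trended example series are made up for illustration.

```python
import random

def cubic_r2(res):
    """Effect size (R^2) of a cubic polynomial fitted by OLS to a residual
    series, with the observation index as the predictor."""
    n = len(res)
    xs = list(range(n))
    # normal equations X'X c = X'y for a degree-3 polynomial (with intercept)
    A = [[float(sum(x ** (i + j) for x in xs)) for j in range(4)] for i in range(4)]
    v = [sum((x ** i) * y for x, y in zip(xs, res)) for i in range(4)]
    # Gaussian elimination with partial pivoting
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for row in range(col + 1, 4):
            f = A[row][col] / A[col][col]
            for j in range(col, 4):
                A[row][j] -= f * A[col][j]
            v[row] -= f * v[col]
    c = [0.0] * 4
    for i in range(3, -1, -1):
        c[i] = (v[i] - sum(A[i][j] * c[j] for j in range(i + 1, 4))) / A[i][i]
    fit = [sum(c[k] * x ** k for k in range(4)) for x in xs]
    my = sum(res) / n
    ss_tot = sum((y - my) ** 2 for y in res)
    ss_res = sum((y - f) ** 2 for y, f in zip(res, fit))
    return 1 - ss_res / ss_tot if ss_tot else 0.0

random.seed(1)
# sampling distribution of effect sizes for random residual sets of size n = 20
sample = sorted(cubic_r2([random.gauss(0, 1) for _ in range(20)])
                for _ in range(200))
median_random = sample[len(sample) // 2]

# a residual series with an obvious cubic trend yields a far larger effect size
trended = [0.01 * (x - 10) ** 3 + random.gauss(0, 0.5) for x in range(20)]
```

In the full test, the median effect size over several experimental residual sets would be compared to `median_random` with a sign test; the trended series above illustrates the kind of residual pattern the test is designed to flag.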


Subjects
Statistical Data Interpretation , Statistics as Topic , Analysis of Variance , Behavioral Research/methods , Humans , Statistical Models , Monte Carlo Method
19.
Behav Processes ; 127: 52-61, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27018201

ABSTRACT

The unified theory of reinforcement has been used to develop models of behavior over the last 20 years (Donahoe et al., 1993). Previous research has focused on the theory's concordance with the respondent behavior of humans and animals. In this experiment, neural networks were developed from the theory to extend the unified theory of reinforcement to operant behavior on single-alternative variable-interval schedules. This area of operant research was selected because previously developed neural networks could be applied to it without significant alteration. Previous research with humans and animals indicates that the pattern of their steady-state behavior is hyperbolic when plotted against the obtained rate of reinforcement (Herrnstein, 1970). A genetic algorithm was used in the first part of the experiment to determine parameter values for the neural networks, because values that were used in previous research did not result in a hyperbolic pattern of behavior. After finding these parameters, hyperbolic and other similar functions were fitted to the behavior produced by the neural networks. The form of the neural network's behavior was best described by an exponentiated hyperbola (McDowell, 1986; McLean and White, 1983; Wearden, 1981), which was derived from the generalized matching law (Baum, 1974). In post-hoc analyses the addition of a baseline rate of behavior significantly improved the fit of the exponentiated hyperbola and removed systematic residuals. The form of this function was consistent with human and animal behavior, but the estimated parameter values were not.


Subjects
Operant Conditioning , Neural Networks (Computer) , Psychological Theory , Reinforcement (Psychology) , Choice Behavior , Reinforcement Schedule
20.
J Exp Anal Behav ; 105(2): 270-90, 2016 Mar.
Article in English | MEDLINE | ID: mdl-27002687

ABSTRACT

McDowell's evolutionary theory of behavior dynamics (McDowell, 2004) instantiates populations of behaviors (abstractly represented by integers) that evolve under the selection pressure of the environment in the form of positive reinforcement. Each generation gives rise to the next via low-level Darwinian processes of selection, recombination, and mutation. The emergent patterns can be analyzed and compared to those produced by biological organisms. The purpose of this project was to explore the effects of high mutation rates on behavioral variability in environments that arranged different reinforcer rates and magnitudes. Behavioral variability increased with the rate of mutation. High reinforcer rates and magnitudes reduced these effects; low reinforcer rates and magnitudes augmented them. These results are in agreement with live-organism research on behavioral variability. Various combinations of mutation rates, reinforcer rates, and reinforcer magnitudes produced similar high-level outcomes (equifinality). These findings suggest that the independent variables that describe an experimental condition interact; that is, they do not influence behavior independently. These conclusions have implications for the interpretation of high levels of variability, mathematical undermatching, and the matching theory. The last part of the discussion centers on a potential biological counterpart for the rate of mutation, namely spontaneous fluctuations in the brain's default mode network.


Subjects
Animal Behavior , Biological Evolution , Animals , Environment , Psychological Models , Mutation , Reinforcement (Psychology)