1.
Behav Processes ; 208: 104860, 2023 May.
Article in English | MEDLINE | ID: mdl-36967093

ABSTRACT

McDowell's Evolutionary Theory of Behavior Dynamics (ETBD) has been shown to model a wide range of live-organism behavior with excellent descriptive accuracy. Recently, artificial organisms (AOs) animated by the ETBD were shown to replicate the resurgence of a target response following downshifts in the density of reinforcement for an alternative response, and to do so across repeated iterations of the traditional three-phase resurgence paradigm, in a manner commensurate with nonhuman subjects. In the current investigation, we successfully replicated an additional study that used this traditional three-phase resurgence paradigm with human participants. We fitted two models based on the Resurgence as Choice (RaC) theory to the data generated by the AOs. Because the models had different numbers of free parameters, we used an information-theoretic approach to compare them. We found that a version of the Resurgence as Choice in Context model that incorporates aspects of Davison and colleagues' Contingency Discriminability Model provided the best description of the resurgence data emitted by the AOs when accounting for the models' complexity. Lastly, we discuss considerations for developing and testing new quantitative models of resurgence that account for the ever-growing resurgence literature.
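For orientation, one widely used criterion for comparing least-squares fits of models with different numbers of free parameters is the corrected Akaike Information Criterion; the abstract does not specify which criterion was used, so the formula below is given only as a representative example:

\[
\mathrm{AIC}_c \;=\; n \ln\!\left(\frac{SS_{\mathrm{res}}}{n}\right) \;+\; 2k \;+\; \frac{2k(k+1)}{n-k-1},
\]

where n is the number of data points, SS_res is the residual sum of squares, and k is the number of estimated parameters (conventions differ on whether the error variance is counted in k). The model with the lowest value is preferred, so extra free parameters are rewarded only if they improve the fit enough to offset the penalty.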


Subject(s)
Conditioning, Operant , Reinforcement, Psychology , Humans , Conditioning, Operant/physiology , Reinforcement Schedule , Biological Evolution , Extinction, Psychological/physiology
2.
J Exp Anal Behav ; 119(1): 117-128, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36416717

ABSTRACT

A test of the evolutionary theory was conducted by replicating Bradshaw et al.'s (1977, 1978, 1979) experiments in which human participants worked on single-alternative variable-interval (VI) schedules of reinforcement under three punishment conditions: no punishment, superimposed VI punishment, and superimposed variable-ratio (VR) punishment. Artificial organisms (AOs) animated by the theory worked in the same environments. Four principal findings were reported for the human participants: (1) their behavior was well described by a hyperbola in all conditions, (2) the asymptote of the hyperbola under VI punishment was equal to the asymptote in the absence of punishment, but the asymptote under VR punishment was lower than the asymptote in the absence of punishment, (3) the parameter in the denominator of the hyperbola was larger under both VI and VR punishment than in the absence of punishment, and (4) response suppression under punishment was greater at lower than at higher reinforcement frequencies. These four outcomes were also observed in the behavior of the AOs working in the same environments, thereby confirming the theory's first-order predictions about the effects of punishment on single-alternative responding.
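For reference, the hyperbola referred to in findings (1)-(3) is Herrnstein's classic single-alternative form, stated here for clarity rather than quoted from the article:

\[
B \;=\; \frac{k\,r}{r + r_{e}},
\]

where B is response rate, r is reinforcement rate, k is the asymptote (the parameter at issue in finding 2), and r_e is the denominator parameter (finding 3), commonly interpreted as the rate of background reinforcement.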


Subject(s)
Punishment , Reinforcement, Psychology , Humans , Reinforcement Schedule , Biological Evolution
3.
Behav Processes ; 203: 104776, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36336310

ABSTRACT

Recently, Redner et al. (2022) examined the nature of resurgence across repeated iterations of the traditional three-phase resurgence procedure with four pigeons. Although extant research findings in this area are mixed, Redner et al. found that resurgence generally increased in magnitude with repetition. These findings provide a baseline against which future research examining resurgence using this three-phase procedure can be compared and contrasted. The purpose of the present investigation was to examine resurgence via concurrent schedule arrangements similar to those described by Redner et al. with 30 artificial organisms (AOs) animated by the Evolutionary Theory of Behavior Dynamics (McDowell, 2004). We quantified the prevalence of resurgence across iterations and found that resurgence occurred in 86.7% (156 of 180) of iterations across all 30 AOs. This is strikingly similar to the resurgence prevalence estimate of 87.5% reported by Redner et al. (2022). However, we also found that the magnitude of target responding generally did not change significantly with repetition. This finding is inconsistent with Redner et al. but is consistent with the predictions of prominent quantitative models of behavioral persistence and with a number of relevant studies (Volkert et al., 2009; Gratz et al., 2019). We also conducted exploratory analyses to examine how several variables (e.g., sensitivity to reinforcement, reinforcer magnitude, number of sessions of exposure to the various phases) affect the prevalence and magnitude of resurgence among AOs.


Subject(s)
Conditioning, Operant , Extinction, Psychological , Animals , Reinforcement Schedule , Reinforcement, Psychology , Columbidae
5.
Behav Processes ; 197: 104623, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35318109

ABSTRACT

McDowell's (2004) Evolutionary Theory of Behavior Dynamics (ETBD) is a computational theory that has reproduced a wide variety of behavioral phenomena observed in material reality. Here, we extended the generality of the ETBD by successfully replicating laboratory studies of resurgence with live animals using artificial organisms (AOs) animated by the theory. We ran AOs on concurrent random-interval random-interval (conc RI RI) schedules of reinforcement wherein one alternative (i.e., a target behavior) was reinforced while the other alternative (i.e., an alternative behavior) was not reinforced. Then, we placed the target behavior on extinction and reinforced the alternative response, producing a shift in allocation of responding from the target behavior to the alternative response. Finally, schedule thinning of the alternative response (i.e., downshifts) resulted in resurgence of target behavior. Our findings indicated that resurgence increased as a function of the relative downshift in reinforcement rate and magnitude, replicating findings from previous studies with live animals. These results further illustrate the utility of the ETBD for generating dynamic behavioral data and serve as a proof-of-concept for a novel computational approach for studying and understanding resurgence in future studies.


Subject(s)
Conditioning, Operant , Extinction, Psychological , Animals , Conditioning, Operant/physiology , Extinction, Psychological/physiology , Reinforcement Schedule , Reinforcement, Psychology
7.
J Exp Anal Behav ; 115(3): 747-768, 2021 05.
Article in English | MEDLINE | ID: mdl-33711206

ABSTRACT

We performed three experiments to improve the quality and retention of data obtained from a Procedure for Rapidly Establishing Steady-State Behavior (PRESS-B; Klapes et al., 2020). In Experiment 1, 120 participants worked on nine concurrent random-interval random-interval (conc RI RI) schedules and were assigned to four conditions of varying changeover delay (COD) length. The 0.5-s COD condition group exhibited the fewest instances of exclusive reinforcer acquisition. Importantly, this group did not differ in generalized matching law (GML) fit quality from the other groups. In Experiment 2, 60 participants worked on nine conc RI RI schedules with a wider range of scheduled reinforcement rate ratios than was used in Experiment 1. Participants showed dramatic reductions in exclusive reinforcer acquisition. Experiment 3 entailed a replication of Experiment 2 wherein blackout periods were implemented between the schedule presentations and each schedule remained in operation until at least one reinforcer was acquired on each alternative. GML fit quality was slightly more consistent in Experiment 3 than in the previous experiments. Thus, these results suggest that future PRESS-B studies should implement a shorter COD, a wider and richer scheduled reinforcement rate ratio range, and brief blackouts between schedule presentations for optimal data quality and retention.
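For reference, the GML fit quality discussed here refers to fits of the generalized matching law (Baum, 1974), stated below in its standard logarithmic form (not quoted from the article):

\[
\log\!\left(\frac{B_{1}}{B_{2}}\right) \;=\; a \,\log\!\left(\frac{r_{1}}{r_{2}}\right) \;+\; \log b,
\]

where B1 and B2 are the response rates on the two alternatives, r1 and r2 are the obtained reinforcement rates, a is the sensitivity parameter, and log b is the bias parameter.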


Subject(s)
Conditioning, Operant , Reinforcement, Psychology , Choice Behavior , Humans , Reinforcement Schedule
8.
Perspect Behav Sci ; 44(4): 641-665, 2021 Dec.
Article in English | MEDLINE | ID: mdl-35098029

ABSTRACT

The generalized matching law (GML) has been used to describe the behavior of individual organisms in operant chambers, artificial environments, and nonlaboratory human settings. Most of these analyses have used a handful of participants to determine how well the GML describes choice in the experimental arrangement or how some experimental manipulation influences estimated matching parameters. Though the GML accounts very well for choice in a variety of contexts, the generality of the GML to all individuals in a population is unknown. That is, no known studies have used the GML to describe the individual behavior of all individuals in a population. This is likely because data from every individual in a population have not historically been available, or because time and computational constraints made population-level analyses prohibitive. In this study, we use open data on baseball pitches to provide an example of how big data methods can be combined with the GML to: (1) scale within-subjects designs to the population level; (2) track individual members of a population over time; (3) easily segment the population into subgroups for further analyses within and between groups; and (4) compare GML fits and estimated parameters to performance. These were accomplished for each of 2,374 individuals in a population using 8,467,473 observations of behavior-environment relationships spanning 11 years. In total, this study is a proof of concept for how behavior analysts can use data-science techniques to extend individual-level quantitative analyses of behavior to the population level in domains of social relevance.
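A minimal sketch of how such population-scale GML fitting might be organized is given below. The column and variable names are hypothetical illustrations, not the dataset or pipeline used in the study, and each fit is an ordinary least-squares line in log-ratio coordinates.

import numpy as np
import pandas as pd

def fit_gml(group: pd.DataFrame) -> pd.Series:
    """Least-squares fit of log(B1/B2) = a*log(r1/r2) + log b for one individual."""
    x = group["log_reinforcer_ratio"].to_numpy()
    y = group["log_behavior_ratio"].to_numpy()
    a, log_b = np.polyfit(x, y, deg=1)            # slope = sensitivity, intercept = bias
    y_hat = a * x + log_b
    ss_res = float(np.sum((y - y_hat) ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot if ss_tot > 0 else float("nan")
    return pd.Series({"a": a, "log_b": log_b, "r2": r2, "n_obs": len(group)})

# With a long-format table of one row per behavior-environment observation,
# per-individual (and per-year) fits reduce to an ordinary groupby, e.g.:
# fits = observations.groupby(["player_id", "year"]).apply(fit_gml)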

9.
J Exp Anal Behav ; 114(3): 430-446, 2020 11.
Article in English | MEDLINE | ID: mdl-33025598

ABSTRACT

The axiomatic principle that all behavior is choice was incorporated into a revised implementation of an evolutionary theory's account of behavior on single schedules. According to this implementation, target responding occurs in the context of background responding and reinforcement. In Phase 1 of the research, the target responding of artificial organisms (AOs) animated by the revised theory was found to be well described by an exponentiated hyperbola, the parameters of which varied as a function of the background reinforcement rate. In Phase 2, the effect of reinforcer magnitude on the target behavior of the AOs was studied. As in Phase 1, the AOs' behavior was well described by an exponentiated hyperbola, the parameters of which varied with both the target reinforcer magnitude and the background reinforcement rate. Evidence from experiments with live organisms was found to be consistent with the Phase-1 predictions of the revised theory. The Phase-2 predictions have not been tested. The revised implementation of the theory can be used to study the effects of superimposing punishment on single-schedule responding, and it may lead to the discovery of a function that relates response rate to both the rate and magnitude of reinforcement on single schedules.
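One common statement of the exponentiated hyperbola is given below for orientation; the article's exact parameterization is not quoted here, so this should be read as a representative form rather than the authors' equation:

\[
B \;=\; \frac{k\,r^{a}}{r^{a} + r_{e}^{\,a}},
\]

where B is the target response rate, r is the target reinforcement rate, k is the asymptote, r_e reflects background reinforcement, and the exponent a governs the curvature of the function.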


Subject(s)
Biological Evolution , Choice Behavior , Animals , Behavior , Humans , Models, Biological , Psychological Theory , Reinforcement Schedule , Reinforcement, Psychology
10.
J Exp Anal Behav ; 114(1): 142-159, 2020 07.
Article in English | MEDLINE | ID: mdl-32543721

ABSTRACT

Previous continuous choice laboratory procedures for human participants are either prohibitively time-intensive or result in inadequate fits of the generalized matching law (GML). We developed a rapid-acquisition laboratory procedure (Procedure for Rapidly Establishing Steady-State Behavior, or PRESS-B) for studying human continuous choice that reduces participant burden and produces data that are well described by the GML. To test the procedure, 27 human participants were exposed to 9 independent concurrent random-interval random-interval reinforcement schedules over the course of a single, 37-min session. Fits of the GML to the participants' data accounted for large proportions of variance (median R2: 0.94), with parameter estimates that were similar to those previously found in human continuous choice studies [median a: 0.67; median log(b): -0.02]. In summary, PRESS-B generates human continuous choice behavior in the laboratory that conforms to the GML with limited experimental duration.
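As a worked illustration of what the reported median parameters imply (an interpretation added here, not taken from the article), substituting a = 0.67 and log b = -0.02 into the generalized matching law for a 4:1 obtained reinforcement ratio gives

\[
\log\!\left(\frac{B_{1}}{B_{2}}\right) \;\approx\; 0.67\,\log(4) \;-\; 0.02 \;\approx\; 0.38,
\qquad
\frac{B_{1}}{B_{2}} \;\approx\; 10^{0.38} \;\approx\; 2.4,
\]

i.e., undermatching: a 4:1 reinforcement ratio yields only about a 2.4:1 response ratio.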


Subject(s)
Choice Behavior , Discrimination Learning , Conditioning, Operant , Humans , Photic Stimulation , Reinforcement Schedule , Reinforcement, Psychology
11.
J Exp Anal Behav ; 112(2): 128-143, 2019 09.
Article in English | MEDLINE | ID: mdl-31385310

ABSTRACT

An implementation of punishment in the evolutionary theory of behavior dynamics is proposed, and is applied to responding on concurrent schedules of reinforcement with superimposed punishment. In this implementation, punishment causes behaviors to mutate, and to do so with a higher probability in a lean reinforcement context than in a rich one. Computational experiments were conducted in an attempt to replicate three findings from experiments with live organisms. These are (1) when punishment is superimposed on one component of a concurrent schedule, response rate decreases in the punished component and increases in the unpunished component, (2) when punishment is superimposed on both components at equal scheduled rates, preference increases over its no-punishment baseline, and (3) when punishment is superimposed on both components at rates that are proportional to the scheduled rates of reinforcement, preference remains unchanged from the baseline preference. Artificial organisms animated by the theory, and working on concurrent schedules with superimposed punishment, reproduced all of these findings. Given this outcome, it may be possible to discover a steady-state mathematical description of punished choice in live organisms by studying the punished choice behavior of artificial organisms animated by the evolutionary theory.
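A purely illustrative sketch of the qualitative rule described above (punished behaviors mutate, and do so more readily in a lean reinforcement context) is shown below. The functional form and the constants are assumptions chosen for illustration, not the theory's actual parameterization.

import random

def mutation_probability(recent_reinforcers: int, base: float = 0.1) -> float:
    """Illustrative rule: mutation is more probable when recent reinforcement is lean."""
    return min(1.0, base * (1.0 + 1.0 / (1.0 + recent_reinforcers)))

def apply_punishment(behavior: int, recent_reinforcers: int, n_phenotypes: int = 1024) -> int:
    """A punished behavior is replaced by a random phenotype with context-dependent probability."""
    if random.random() < mutation_probability(recent_reinforcers):
        return random.randrange(n_phenotypes)
    return behavior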


Subject(s)
Biological Evolution , Psychological Theory , Punishment/psychology , Animals , Choice Behavior , Columbidae , Conditioning, Operant , Models, Psychological , Rats , Reinforcement Schedule , Reinforcement, Psychology
12.
J Exp Anal Behav ; 110(3): 323-335, 2018 11.
Article in English | MEDLINE | ID: mdl-30195256

ABSTRACT

An evolutionary theory of adaptive behavior dynamics was tested by studying the behavior of artificial organisms (AOs) animated by the theory, working on concurrent ratio schedules with unequal and equal ratios in the components. The evolutionary theory implements Darwinian rules of selection, reproduction, and mutation in the form of a genetic algorithm that causes a population of potential behaviors to evolve under the selection pressure of consequences from the environment. On concurrent ratio schedules with unequal ratios in the components, the AOs tended to respond exclusively on the component with the smaller ratio, provided that ratio was not too large and the difference between the ratios was not too small. On concurrent ratio schedules with equal ratios in the components, the AOs tended to respond exclusively on one component, provided the equal ratios were not too large. In addition, the AOs' preference on the latter schedules adjusted rapidly when the equal ratios were changed between conditions, but their steady-state preference was a continuous function of the value of the equal ratios. Most of these outcomes are consistent with the results of experiments with live organisms, and consequently support the evolutionary theory.
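A schematic sketch of the Darwinian selection-reproduction-mutation cycle described above follows. The population size, target-class boundaries, fitness weighting, and mutation rate are illustrative assumptions, not the theory's published parameter values; in particular, the simple weighting below stands in for the theory's fitness-based parental selection.

import random

POP_SIZE, N_BITS, MUT_RATE = 100, 10, 0.05
TARGET_CLASS = range(471, 512)     # behaviors in this range operate the response key (illustrative)

def generation(population: list[int], reinforced: bool) -> list[int]:
    """One selection-reproduction-mutation cycle over a population of integer behaviors."""
    if reinforced:
        # selection: parents drawn with extra weight on members of the reinforced class
        weights = [2.0 if b in TARGET_CLASS else 1.0 for b in population]
        parents = random.choices(population, weights=weights, k=2 * POP_SIZE)
    else:
        # no reinforcement: parents drawn at random
        parents = random.choices(population, k=2 * POP_SIZE)
    children = []
    full_mask = (1 << N_BITS) - 1
    for i in range(POP_SIZE):
        p1, p2 = parents[2 * i], parents[2 * i + 1]
        mask = random.getrandbits(N_BITS)                 # reproduction: bitwise recombination
        child = (p1 & mask) | (p2 & ~mask & full_mask)
        if random.random() < MUT_RATE:                    # mutation: flip one random bit
            child ^= 1 << random.randrange(N_BITS)
        children.append(child)
    return children

# population = [random.getrandbits(N_BITS) for _ in range(POP_SIZE)]
# population = generation(population, reinforced=True)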


Subject(s)
Behavior , Biological Evolution , Animals , Computer Simulation , Environment , Humans , Psychological Theory , Reproduction
13.
J Exp Anal Behav ; 109(2): 336-348, 2018 03.
Article in English | MEDLINE | ID: mdl-29509286

ABSTRACT

A direct-suppression, or subtractive, model of punishment has been supported as the qualitatively and quantitatively superior matching law-based punishment model (Critchfield, Paletz, MacAleese, & Newland, 2003; de Villiers, 1980; Farley, 1980). However, this conclusion was reached without testing the model against its predecessors, including the original (Herrnstein, 1961) and generalized (Baum, 1974) matching laws, which have different numbers of parameters. To rectify this issue, we reanalyzed a set of data collected by Critchfield et al. (2003) using information-theoretic model selection criteria. We found that the most advanced version of the direct-suppression model (Critchfield et al., 2003) does not convincingly outperform the generalized matching law, an account that does not include punishment rates in its prediction of behavior allocation. We hypothesize that this failure to outperform the generalized matching law is due to significant theoretical shortcomings in model development. To address these shortcomings, we present a list of requirements that all punishment models should satisfy. The requirements include formal statements of flexibility, efficiency, and adherence to theory. We compare all past punishment models to the items on this list through algebraic arguments and model selection criteria. None of the models presented in the literature thus far meets all of the requirements.


Subject(s)
Models, Psychological , Punishment/psychology , Animals , Conditioning, Classical , Inhibition, Psychological , Models, Statistical , Psychological Theory , Rats , Reinforcement, Psychology
14.
Behav Processes ; 140: 61-68, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28373055

ABSTRACT

Two competing predictions of matching theory and an evolutionary theory of behavior dynamics, and one additional prediction of the evolutionary theory, were tested in a critical experiment in which human participants worked on concurrent schedules for money (Dallery et al., 2005). The three predictions concerned the descriptive adequacy of matching theory equations, and of equations describing emergent equilibria of the evolutionary theory. Tests of the predictions falsified matching theory and supported the evolutionary theory.


Subject(s)
Biological Evolution , Models, Psychological , Psychological Theory , Humans , Reinforcement Schedule , Reinforcement, Psychology
15.
J Exp Anal Behav ; 105(3): 445-58, 2016 05.
Article in English | MEDLINE | ID: mdl-27193244

ABSTRACT

A survey of residual analysis in behavior-analytic research reveals that existing methods are problematic in one way or another. A new test for residual trends is proposed that avoids the problematic features of the existing methods. It entails fitting cubic polynomials to sets of residuals and comparing their effect sizes to those that would be expected if the sets of residuals were random. To this end, sampling distributions of effect sizes for fits of a cubic polynomial to random data were obtained by generating sets of random standardized residuals of various sizes, n. A cubic polynomial was then fitted to each set of residuals and its effect size was calculated. This yielded a sampling distribution of effect sizes for each n. To test for a residual trend in experimental data, the median effect size of cubic-polynomial fits to sets of experimental residuals can be compared to the median of the corresponding sampling distribution of effect sizes for random residuals using a sign test. An example from the literature, which entailed comparing mathematical and computational models of continuous choice, is used to illustrate the utility of the test.
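A minimal sketch of the proposed test, as read from the description above, is given below. The number of Monte Carlo replications and the two-sided sign-test construction are illustrative choices, not the article's exact values.

import numpy as np
from scipy.stats import binom

def cubic_effect_size(residuals: np.ndarray) -> float:
    """R^2 of a cubic polynomial fitted to residuals taken in order."""
    x = np.arange(len(residuals))
    coeffs = np.polyfit(x, residuals, deg=3)
    fitted = np.polyval(coeffs, x)
    ss_res = np.sum((residuals - fitted) ** 2)
    ss_tot = np.sum((residuals - residuals.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def random_effect_sizes(n: int, reps: int = 10_000, seed: int = 0) -> np.ndarray:
    """Sampling distribution of effect sizes for random standardized residuals of size n."""
    rng = np.random.default_rng(seed)
    return np.array([cubic_effect_size(rng.standard_normal(n)) for _ in range(reps)])

def sign_test(experimental_sets: list[np.ndarray]) -> float:
    """Two-sided sign test: do experimental effect sizes exceed the random medians?"""
    exceed = sum(
        cubic_effect_size(res) > np.median(random_effect_sizes(len(res)))
        for res in experimental_sets
    )
    m = len(experimental_sets)
    p = 2 * min(binom.cdf(exceed, m, 0.5), 1 - binom.cdf(exceed - 1, m, 0.5))
    return min(1.0, p)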


Subject(s)
Data Interpretation, Statistical , Statistics as Topic , Analysis of Variance , Behavioral Research/methods , Humans , Models, Statistical , Monte Carlo Method
16.
Eat Weight Disord ; 21(3): 487-492, 2016 Sep.
Article in English | MEDLINE | ID: mdl-26545593

ABSTRACT

PURPOSE: The purpose of this study was to examine the differences among actual body size, perceived body size, and ideal body size in overweight and obese young adult women. METHODS: Actual body size was assessed by body mass index (BMI), while self-perceived and ideal body sizes were assessed with the Body Image Assessment tool-body dimension. Descriptive statistics were calculated and an analysis of variance (ANOVA) was performed on actual BMI as a function of perceived BMI. RESULTS: Of the 42 participants included in the study, 12 were overweight (25 ≤ BMI < 30), 18 were obese 1 (30 ≤ BMI < 35), and 12 were obese 2 (35 ≤ BMI ≤ 39.48). The mean ideal body size of participants was 25.34 ± 1.33. Participants in general perceived their body size (BMI: 35.82 ± 1.06) to be larger than their actual body size (32.84 ± 0.95). Overweight participants had a significantly higher mean body-size misperception than obese 2 individuals (mean difference = -6.68, p < .001). CONCLUSION: Perception accuracy of body size differs in women by BMI. Weight-loss programs need to be tailored to account for body-size misperception in order to improve treatment outcomes for overweight and obese young women.


Subject(s)
Body Image/psychology , Body Size/physiology , Obesity/psychology , Overweight/psychology , Self Concept , Size Perception/physiology , Body Mass Index , Female , Humans , Young Adult