Results 1 - 20 of 34
1.
J Exp Psychol Anim Behav Process ; 27(1): 79-94, 2001 Jan.
Article in English | MEDLINE | ID: mdl-11199517

ABSTRACT

Some studies have found that extinction leaves response structures unaltered; others have found that response variability is increased. Responding by Long-Evans rats was extinguished after 3 schedules. In one, reinforcement depended on repetitions of a particular response sequence across 3 operanda. In another, sequences were reinforced only if they varied. In the third, reinforcement was yoked: not contingent upon repetitions or variations. In all cases, rare sequences increased during extinction--variability increased--but the ordering of sequence probabilities was generally unchanged, the most common sequences during reinforcement continuing to be most frequent in extinction. The rats' combination of generally doing what worked before but occasionally doing something very different may maximize the possibility of reinforcement from a previously bountiful source while providing necessary variations for new learning.


Subject(s)
Discrimination Learning , Extinction, Psychological , Animals , Conditioning, Operant , Male , Rats , Rats, Long-Evans , Reinforcement Schedule , Reproducibility of Results
2.
Behav Res Methods Instrum Comput ; 32(3): 407-16, 2000 Aug.
Article in English | MEDLINE | ID: mdl-11029813

ABSTRACT

Two pairs of experiments enabled students to compare their own operant behaviors with those of rats. The students played computer games for points, and the rats pressed levers for food. The first pair of experiments showed that, under concurrent schedules of reinforcement, relative frequencies of choices between two alternatives increased linearly in rats and people as functions of relative frequencies of reinforcement, with similar biases and undermatching observed in both species. The second pair of experiments showed that behavioral variability was controlled by reinforcers contingent on variability, again in both species. These experiments helped demonstrate the relevance of animal operant research to an explanation of human operant behavior.
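
The choice relation described here corresponds to the generalized matching law; a minimal sketch of that relation, with hypothetical sensitivity and bias parameters (not values reported in the article), is:

# Generalized matching law: B1/B2 = b * (R1/R2)^s, where s < 1 indicates
# undermatching and b is a response bias. Parameter values below are hypothetical.

def predicted_choice_ratio(r1, r2, sensitivity=0.8, bias=1.1):
    """Predicted ratio of responses to alternative 1 vs. 2, given
    reinforcement frequencies r1 and r2."""
    return bias * (r1 / r2) ** sensitivity

print(predicted_choice_ratio(30, 10))  # choice ratio when alternative 1 pays off 3x as often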


Subject(s)
Choice Behavior , Animals , Behavior , Behavior, Animal , Conditioning, Operant , Food , Humans , Rats , Reinforcement, Psychology , Video Games
3.
J Appl Behav Anal ; 33(2): 151-65, 2000.
Article in English | MEDLINE | ID: mdl-10885524

ABSTRACT

Five adolescents with autism, 5 adult control participants, and 4 child controls received rewards for varying their sequences of responses while playing a computer game. In preceding and following phases, rewards were provided at approximately the same rate but were independent of variability. The most important finding was that, when reinforced, variability increased significantly in all groups. Reinforced variability could provide the necessary behavioral substrate for individuals with autism to learn new responses.


Subject(s)
Autistic Disorder/psychology , Reinforcement, Psychology , Stereotypic Movement Disorder/epidemiology , Adolescent , Child , Female , Humans , Male
4.
Psychon Bull Rev ; 7(2): 284-91, 2000 Jun.
Article in English | MEDLINE | ID: mdl-10909135

ABSTRACT

Pigeons' choice reaction times (RTs) increased as a linear function of the log2 of the number of potential target stimuli (Experiments 1-3), as would be predicted by Hick's law. The values of intercepts and slopes decreased with training (Experiments 2 and 3) and with differential reinforcement of short RTs under percentile reinforcement contingencies (Experiment 3). RT functions obtained from human subjects were also consistent with Hick's law, but slopes for pigeons were significantly lower than those for humans (Experiments 4 and 5). These results extend the generality of Hick's law to pigeons but are inconsistent with Jensen's claim that the parameters of the Hick function are related to intelligence.
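
Hick's law, as invoked in this abstract, predicts that choice RT grows linearly with the log2 of the number of alternatives; a minimal sketch with hypothetical intercept and slope values (not the fitted parameters from these experiments) is:

# Hick's law: RT = a + b * log2(N). The intercept and slope below are hypothetical.
import math

def hick_rt(n_alternatives, intercept=0.3, slope=0.15):
    """Predicted reaction time in seconds for a choice among n_alternatives."""
    return intercept + slope * math.log2(n_alternatives)

for n in (1, 2, 4, 8):
    print(n, round(hick_rt(n), 3))  # RT increases by a constant step each time N doubles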


Subject(s)
Choice Behavior , Columbidae , Intelligence , Psychomotor Performance , Reaction Time , Adult , Animals , Conditioning, Operant , Female , Humans , Male , Models, Psychological , Practice, Psychological , Reinforcement, Psychology , Species Specificity
5.
J Exp Psychol Anim Behav Process ; 26(1): 98-111, 2000 Jan.
Article in English | MEDLINE | ID: mdl-10650547

ABSTRACT

Reinforcement of variability may help to explain operant learning. Three groups of rats were reinforced, in different phases, whenever the following target sequences of left (L) and right (R) lever presses occurred: LR, RLL, LLR, RRLR, RLLRL, and, in Experiment 2, LLRRL. One group (variability [VAR]) was concurrently reinforced once per minute for sequence variations; a second group was also reinforced once per minute but independently of variations, that is, for any sequence (ANY); and a control group (CON) received no additional reinforcers. The 3 groups learned the easiest targets equally. For the most difficult targets, CON animals' responding extinguished, whereas both VAR and ANY animals responded at high rates. Only the VAR animals learned, however. Thus, concurrent reinforcers--contingent on variability or not--helped to maintain responding when difficult sequences were reinforced, but learning those sequences depended on reinforcement of variations.


Subject(s)
Conditioning, Operant , Reinforcement Schedule , Animals , Learning , Male , Rats , Rats, Long-Evans
6.
Behav Brain Res ; 94(1): 51-9, 1998 Jul.
Article in English | MEDLINE | ID: mdl-9708839

ABSTRACT

To test whether the instrumental behavior of some children with attention deficit hyperactivity disorder (ADHD) is more variable than that of control subjects, the sequences of responses by three groups of children were compared: (i) those diagnosed with ADHD who lived in a residential treatment facility for children demonstrating aggressive behavior; (ii) age-matched children from the same facility who did not have an ADHD diagnosis but who had also demonstrated abnormal levels of aggression; and (iii) normal children from a local school. The experiment consisted of rewarding children for responding in a computer 'game' on two keys of a keyboard during each of three phases. In phase 1, rewards were provided independently of sequence variability (IND). In phase 2, rewards depended upon highly variable sequences of left and right responses (VARY). Phase 3 was a return to the IND contingencies. The results showed that sequence variability was higher in the VARY phase than in the preceding IND phase, but remained high when the contingencies returned to IND, thus replicating previous findings. In none of the phases did the ADHD children respond more variably than the controls. However, ADHD subjects made more off-task responses than did controls. Thus, although the present research showed differences between ADHD and control subjects, there was no evidence of higher behavioral variability in the ADHD subjects.


Subject(s)
Aggression/psychology , Attention Deficit Disorder with Hyperactivity/diagnosis , Attention Deficit and Disruptive Behavior Disorders/diagnosis , Conditioning, Operant , Individuality , Attention Deficit Disorder with Hyperactivity/psychology , Attention Deficit and Disruptive Behavior Disorders/psychology , Attention Deficit and Disruptive Behavior Disorders/therapy , Child , Humans , Motivation , Psychomotor Performance , Reinforcement Schedule , Residential Treatment
7.
J Exp Psychol Anim Behav Process ; 22(4): 497-508, 1996 Oct.
Article in English | MEDLINE | ID: mdl-8865615

ABSTRACT

Anticipation of rewards had different effects on operant variability than on operant repetition. We reinforced variable (VAR) response sequences in groups of rats and pigeons and repetitive (REP) response sequences in separate groups. A fixed number of variations or repetitions was required per food reinforcer (e.g., fixed-ratio 4). Although VAR contingencies resulted in high levels of variability and REP contingencies in high repetition, opposite patterns of performance accuracy were observed as rewards were approached. Likelihood of satisfying REP contingencies increased within the fixed ratio, whereas likelihood of satisfying VAR contingencies decreased. These opposite patterns of accuracy were also generated by conditioned reinforcing stimuli correlated with food. Constraints on variability by proximity to reinforcers may explain some detrimental effects of reward.


Subject(s)
Conditioning, Operant , Motivation , Reinforcement Schedule , Serial Learning , Animals , Columbidae , Male , Mental Recall , Rats , Species Specificity
8.
J Exp Anal Behav ; 65(1): 129-44, 1996 Jan.
Article in English | MEDLINE | ID: mdl-8583193

ABSTRACT

The spontaneously hypertensive rat (SHR) may model aspects of human attention deficit hyperactivity disorder (ADHD). For example, just as responses by children with ADHD tend to be variable, so too SHRs often respond more variably than do Wistar-Kyoto (WKY) control rats. The present study asked whether behavioral variability in the SHR strain is influenced by rearing environment, a question related to hypotheses concerning the etiology of human ADHD. Some rats from each strain were reared in an enriched environment (housed socially), and others were reared in an impoverished environment (housed in isolation). Four groups--enriched SHR, impoverished SHR, enriched WKY, and impoverished WKY--were studied under two reinforcement contingencies, one in which reinforcement was independent of response variability and the other in which reinforcement depended upon high variability. The main finding was that rearing environment did not influence response variability (enriched and impoverished subjects responded similarly throughout). However, rearing environment affected body weight (enriched subjects weighed more than impoverished subjects) and response rate (impoverished subjects generally responded faster than enriched subjects). In addition, SHRs tended to respond variably throughout the experiment, whereas WKYs were more sensitive to the variability contingencies. Thus, behavioral variability was affected by genetic strain and by reinforcement contingency but not by the environment in which the subjects were reared.


Subject(s)
Arousal , Attention Deficit Disorder with Hyperactivity/psychology , Conditioning, Operant , Social Environment , Animals , Arousal/genetics , Attention Deficit Disorder with Hyperactivity/genetics , Body Weight , Disease Models, Animal , Genotype , Humans , Male , Rats , Rats, Inbred SHR , Rats, Inbred WKY , Social Isolation
9.
Physiol Behav ; 56(5): 939-44, 1994 Nov.
Article in English | MEDLINE | ID: mdl-7824595

ABSTRACT

The spontaneously hypertensive rat (SHR) may serve as an animal model of human attention deficit hyperactivity disorder (ADHD). We compared performances of SHRs and Wistar-Kyoto normotensive control rats (WKY) in two experiments. When rewarded for varying sequences of responses across two manipulanda, the SHRs were more likely to vary than the WKYs. On the other hand, when rewarded for repetitions of a small number of sequences, the WKYs were more likely to learn to repeat. Both of these results confirm previous findings. Injecting 0.75 mg/kg d-amphetamine facilitated learning by SHRs to repeat the required sequences, with amphetamine-injected SHRs learning as rapidly as saline-injected control WKYs. On the other hand, amphetamine tended to increase variability in both strains when high levels of variation were required for reward, and to decrease it in both strains when low levels of variability were required. Thus, amphetamine may have different effects on reinforced repetitions vs. reinforced variations.


Subject(s)
Arousal/drug effects , Attention Deficit Disorder with Hyperactivity/psychology , Conditioning, Operant/drug effects , Dextroamphetamine/pharmacology , Mental Recall/drug effects , Motor Activity/drug effects , Serial Learning/drug effects , Animals , Disease Models, Animal , Dose-Response Relationship, Drug , Male , Rats , Rats, Inbred SHR , Rats, Inbred WKY , Reinforcement Schedule , Retention, Psychology/drug effects , Species Specificity
10.
J Exp Anal Behav ; 62(1): 149-56, 1994 Jul.
Article in English | MEDLINE | ID: mdl-8064210

ABSTRACT

An apparatus was developed to study social reinforcement in the rat. Four Long-Evans female rats were trained to press a lever via shaping, with the reinforcer being access to a castrated male rat. Responding under a fixed-ratio schedule and in extinction was also observed. Social access was found to be an effective reinforcer. When social reinforcement was compared with food reinforcement under similar conditions of deprivation and reinforcer duration, no significant differences were observed.


Subject(s)
Conditioning, Operant , Rats , Reinforcement, Psychology , Social Environment , Animals , Behavior, Animal , Extinction, Psychological , Female , Habituation, Psychophysiologic
11.
Behav Neural Biol ; 59(2): 126-35, 1993 Mar.
Article in English | MEDLINE | ID: mdl-8476380

ABSTRACT

When spontaneously hypertensive rats (SHR) and Wistar-Kyoto normotensive control rats (WKY) were rewarded in a 12-arm radial maze (Experiment 1), the SHRs varied their arm choices more, making fewer repetition errors than the WKYs. Similarly, when rewards depended on variable sequences of responses on two levers in an operant chamber (Experiment 2), SHRs' sequences were more variable than those of WKYs. A requirement for response variability was then combined with a requirement to repeat selected responses in the radial maze (Experiment 3) and operant chamber (Experiment 4). WKYs learned to repeat more readily than the SHRs, whereas SHRs varied more readily. Thus, when subjects had to repeat responses, SHRs were at a disadvantage, but when variability was adaptive, SHRs excelled. The high variability of SHRs, together with their difficulty in learning to repeat, may have parallels in children diagnosed with attention deficit disorder with hyperactivity (ADDH).


Subject(s)
Blood Pressure , Learning , Animals , Behavior, Animal , Conditioning, Operant , Male , Rats , Rats, Inbred WKY , Reinforcement, Psychology
12.
Physiol Behav ; 51(1): 145-9, 1992 Jan.
Article in English | MEDLINE | ID: mdl-1741441

ABSTRACT

Operant variability was compared in four groups of Long-Evans rats (young males, young females, mature males, and mature females) under two different conditions. Under VAR contingencies, where response variability was required for reinforcement, a sequence of four responses on left (L) and right (R) levers had to differ from each of the preceding four sequences. If LLLL, LRLL, RRRR, and RRLL had just occurred, then an RLRL sequence, for example, would be reinforced in the next trial, but LRLL would not. Sequence variability was compared to that under YOKE contingencies, where reinforcement was provided whether or not the rats varied their responses. We found that young rats behaved more variably than mature rats, an effect most pronounced under the YOKE contingencies, where variability was not required. On the other hand, variability was not related to gender under either VAR or YOKE conditions. Third, all groups were sensitive to the schedule contingencies, behaving more variably under VAR than under YOKE. Thus, age and schedule requirements influenced operant variability, but gender did not.
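
The VAR contingency described here is a lag-4 criterion; a minimal sketch (an assumed implementation, not the authors' software) that reproduces the worked example from the abstract is:

# Lag-N variability (VAR) criterion: a four-response sequence is reinforced
# only if it differs from each of the preceding N sequences.

def meets_vary_criterion(sequence, recent_sequences, lag=4):
    """Return True if `sequence` differs from each of the last `lag` sequences."""
    return all(sequence != prior for prior in recent_sequences[-lag:])

recent = ["LLLL", "LRLL", "RRRR", "RRLL"]    # the four most recent sequences
print(meets_vary_criterion("RLRL", recent))  # True  -> reinforced
print(meets_vary_criterion("LRLL", recent))  # False -> not reinforced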


Subject(s)
Aging/psychology , Conditioning, Operant , Motivation , Reinforcement Schedule , Animals , Association Learning , Female , Male , Rats , Sex Factors
13.
Behav Anal ; 14(1): 1-13, 1991.
Article in English | MEDLINE | ID: mdl-22478072
15.
J Exp Anal Behav ; 54(1): 1-12, 1990 Jul.
Article in English | MEDLINE | ID: mdl-2398323

ABSTRACT

Response sequences emitted by five Long-Evans rats were reinforced under a two-component multiple schedule. In the REPEAT component, food pellets were contingent upon completion of a left-left-right-right (LLRR) sequence on two levers. In the VARY component, pellets were contingent upon variable sequences (i.e., a sequence was reinforced only if it differed from each of the previous five sequences). The rats learned to emit LLRR sequences in the REPEAT component and variable sequences in VARY. Intraperitoneal injections of ethanol (1.25, 1.75, and 2.25 g/kg) significantly increased sequence variability in REPEAT, thereby lowering reinforcement probability, but had little effect on sequence variability in the VARY component. These results extend previous findings that alcohol impairs the performance of reinforced repetitions but not of reinforced variations in response sequences.


Subject(s)
Appetitive Behavior/drug effects , Association Learning/drug effects , Ethanol/pharmacology , Learning/drug effects , Memory/drug effects , Mental Recall/drug effects , Reinforcement Schedule , Animals , Dose-Response Relationship, Drug , Injections, Intraperitoneal , Male , Rats , Rats, Inbred Strains
16.
Biol Psychol ; 30(3): 203-17, 1990 Jun.
Article in English | MEDLINE | ID: mdl-2282369

ABSTRACT

Two groups of students attempted to learn sequences of letter-number pairs. For both groups, a tone signalled each error. However, for aversive punishment subjects, a mildly painful electric shock followed the tone 20% of the time, whereas the informational punishment subjects received only the tone. Skin conductance responses (SCRs) and cardiac interbeat intervals indicated the presence of an orienting response to the tone in informational punishment subjects and a defense response to the tone in aversive punishment subjects. Accompanying these were significant differences in behavior: aversive punishment subjects completed fewer sequences and had higher error rates. The two groups did not differ in measures of tonic arousal. Session trends suggested that the cardiac orienting response developed in both groups as subjects learned to use the information in the punishment contingency. Defense responses to aversive punishers may compete with the orienting responses necessary for the efficient learning of complex tasks.


Subject(s)
Arousal/physiology , Learning/physiology , Orientation/physiology , Punishment , Adolescent , Adult , Behavior Therapy/methods , Electroshock , Female , Galvanic Skin Response , Heart Rate , Humans , Male
17.
Psychopharmacology (Berl) ; 102(1): 49-55, 1990.
Article in English | MEDLINE | ID: mdl-2392507

ABSTRACT

Six rats were rewarded with food pellets for repeating a particular sequence of four responses on two levers, namely left-left-right-right. Ethanol (0.75 g/kg and 2.0 g/kg injected IP) increased the variability of sequences under these "repeat" contingencies, resulting in fewer rewarded trials. Six other rats were rewarded only if their sequence of left and right responses in the current trial differed from each of the previous five trials. Ethanol had little effect on sequence variability and no effect on reward probability under these "vary" contingencies. The relative difficulties of the repeat and vary tasks were manipulated to show that task difficulty did not account for the results. Thus alcohol increased or maintained behavioral variability, thereby impairing reinforced repetitions but not reinforced variations. When previously reported results from rats in radial arm mazes were compared with a simulated random model, alcohol was found to increase behavioral variability under that procedure as well.


Subject(s)
Conditioning, Operant/drug effects , Ethanol/pharmacology , Animals , Male , Random Allocation , Rats , Reinforcement Schedule
18.
J Exp Anal Behav ; 42(3): 397-406, 1984 Nov.
Article in English | MEDLINE | ID: mdl-16812398

ABSTRACT

Operant researchers rarely use the arena of applied psychology to motivate or to judge their research. Absence of tests by application weakens the field of basic operant research. Early in their development, the physical and biological sciences emphasized meliorative aspects of research. Improvement of human life was a major goal of these young sciences. This paper argues that if basic operant researchers analogously invoked a melioration criterion, the operant field might avoid its tendency toward ingrowth and instead generate a broadly influential science. Operant researchers could incorporate melioration by (a) creating animal models to study applied problems; (b) confronting questions raised by applied analysts and testing hypotheses in applied settings; or (c) performing self-experiments--that is, using experimental methods and behavioral techniques to study and change the experimenter's behavior.

20.
Physiol Behav ; 30(6): 863-6, 1983 Jun.
Article in English | MEDLINE | ID: mdl-6611690

ABSTRACT

Five pigeons were allowed one hour of access to food after variable intervals of deprivation averaging 23 hours. Five other pigeons were allowed one hour of food after fixed 23-hour intervals. It was found that the amount eaten by birds in an environment continually alternating between deprivation and one-hour ad lib feeding could be better predicted as a linear function of their body weights than as a second-degree polynomial function of the number of hours they were deprived.
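
The model comparison reported here (intake as a linear function of body weight versus a second-degree polynomial in hours deprived) could be set up as follows; the arrays are placeholder values, not the study's data:

# Sketch of the reported model comparison using least-squares polynomial fits.
# All numbers below are hypothetical placeholders.
import numpy as np

body_weight = np.array([420.0, 450.0, 480.0, 500.0, 530.0])  # grams (hypothetical)
hours_deprived = np.array([18.0, 21.0, 23.0, 25.0, 28.0])    # hours (hypothetical)
grams_eaten = np.array([30.0, 33.0, 35.0, 36.0, 39.0])       # intake (hypothetical)

linear_fit = np.polyfit(body_weight, grams_eaten, 1)          # intake ~ weight (degree 1)
quadratic_fit = np.polyfit(hours_deprived, grams_eaten, 2)    # intake ~ deprivation (degree 2)

linear_sse = np.sum((grams_eaten - np.polyval(linear_fit, body_weight)) ** 2)
quadratic_sse = np.sum((grams_eaten - np.polyval(quadratic_fit, hours_deprived)) ** 2)
print(linear_sse, quadratic_sse)  # smaller residual error = better-predicting model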


Subject(s)
Body Weight , Eating , Food Deprivation , Animals , Columbidae , Time Factors