Results 1 - 12 of 12
1.
Behav Processes ; 172: 104059, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31954811

ABSTRACT

Many studies have investigated how variability in animal behavior is induced by different reinforcement schedules. However, the animal species and experimental settings have varied between these studies. The present study investigated the variability of pigeons' pecking locations on an operandum under fixed-ratio, fixed-interval, variable-ratio, and variable-interval schedules. A circular response area 22.4 cm in diameter was used so that pecking responses would be effective over a wide range. Pigeons were exposed to a multiple-ratio yoked-interval schedule; the pairs of schedules were fixed-ratio and fixed-interval, or variable-ratio and variable-interval. We used a Bayesian statistical approach to probabilistically evaluate the effects of reinforcement schedules on the variability. The Bayesian analysis showed the following: (1) interval schedules produced greater variability than ratio schedules under the same interreinforcement intervals; (2) fixed schedules produced greater variability than variable schedules for both the ratio and interval schedule requirements; (3) a larger schedule requirement generated greater variability under the fixed schedules, but this effect was not observed under the variable schedules; and (4) the variability did not change as time elapsed within each trial. These results suggest that each reinforcement schedule has a specific effect on the variability of the response location.
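The Bayesian comparison described above can be illustrated with a minimal grid-approximation sketch: estimate a posterior over the standard deviation of peck locations under two schedules and compute the probability that one exceeds the other. All data and parameter values below are hypothetical, and the grid posterior is a simple stand-in for whatever estimation method the study actually used.

```python
import math
import random

def sd_posterior(data, grid):
    """Grid-approximated posterior over sigma (flat prior, known mean 0)."""
    logps = []
    for s in grid:
        logps.append(sum(-0.5 * math.log(2 * math.pi * s * s)
                         - x * x / (2 * s * s) for x in data))
    m = max(logps)                      # subtract the max for numerical stability
    ws = [math.exp(lp - m) for lp in logps]
    z = sum(ws)
    return [w / z for w in ws]

rng = random.Random(0)
# hypothetical peck-location deviations (cm) under two schedule types
fixed_interval = [rng.gauss(0.0, 2.0) for _ in range(200)]
fixed_ratio = [rng.gauss(0.0, 1.2) for _ in range(200)]
grid = [0.5 + 0.01 * i for i in range(300)]
post_fi = sd_posterior(fixed_interval, grid)
post_fr = sd_posterior(fixed_ratio, grid)
# posterior probability that variability is greater under the interval schedule
p_greater = sum(post_fi[i] * post_fr[j]
                for i in range(len(grid))
                for j in range(len(grid))
                if grid[i] > grid[j])
print(p_greater)
```

With well-separated hypothetical spreads like these, the posterior probability comes out near 1, which is the kind of probabilistic statement the abstract's conclusions rest on.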


Subjects
Bayes Theorem, Columbidae, Operant Conditioning, Reinforcement Schedule, Animals, Statistical Models, Psychological Reinforcement
2.
Behav Processes ; 157: 346-353, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30059765

ABSTRACT

Streams of operant responses are organized into bouts separated by pauses, and differences in performance between reinforcement schedules with identical inter-reinforcement intervals (IRIs) are primarily due to differences in within-bout response rate, not in bout-initiation rate. The present study used hierarchical Bayesian modeling as a new method to quantify the properties of response bouts. A Bernoulli distribution was used to express the probability of staying in a bout versus pausing, while a Poisson distribution was used to quantify within-bout response rates. We compared bout/pause patterns between variable-ratio (VR) and variable-interval (VI) schedules across IRIs. The model estimation revealed no difference in within-bout staying probability between schedules. However, within-bout response rates were higher in VR than in VI across IRIs. These results are consistent with previous analyses using log-survivor plots to describe within-bout and bout-initiation responses. In addition, a simulation study was performed to examine how sensitively the model estimates the parameters under different bout-initiation rates. The results showed that the within-bout staying probability was affected by changes in the bout-initiation rate, while the within-bout response rate parameter was not, suggesting that the model estimation robustly dissociates within-bout and between-bout parameters under different reinforcement schedules.
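A toy generator for the bout-and-pause structure the model assumes (a Bernoulli choice to stay in the current bout, with exponential within-bout IRTs corresponding to a Poisson response rate) might look like the following. All parameter values are illustrative, not estimates from the study.

```python
import random

def simulate_bouts(p_stay=0.8, within_rate=2.0, bout_init_rate=0.2,
                   n_responses=10_000, seed=0):
    """Generate inter-response times (IRTs) from a bout-and-pause process.

    p_stay         -- Bernoulli probability of staying within the current bout
    within_rate    -- within-bout response rate (responses/s); IRTs ~ Exponential
    bout_init_rate -- bout-initiation rate governing pause durations
    All values are illustrative, not estimates from the study.
    """
    rng = random.Random(seed)
    irts = []
    for _ in range(n_responses):
        if rng.random() < p_stay:               # stay in the bout: short IRT
            irts.append(rng.expovariate(within_rate))
        else:                                   # pause, then initiate a new bout
            irts.append(rng.expovariate(bout_init_rate))
    return irts

irts = simulate_bouts()
frac_short = sum(1 for t in irts if t < 1.0) / len(irts)
print(frac_short)   # mostly short, within-bout IRTs when p_stay is high
```

Fitting the two distributions to a stream like this recovers the within-bout rate and staying probability separately, which is the dissociation the abstract's simulation study probes.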


Subjects
Operant Conditioning/physiology, Psychological Reinforcement, Animals, Bayes Theorem, Theoretical Models, Reinforcement Schedule
3.
Behav Processes ; 132: 12-21, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27619956

ABSTRACT

The purpose of this study was to analyze the difference in response patterns between variable-ratio (VR) and variable-interval (VI) schedules of reinforcement by modeling the interresponse-time distributions of rats' lever presses. All eight rats showed higher response rates under VR 30 than under a VI schedule with yoked inter-reinforcement intervals. Thirty models were compared: single Exponential (with and without a lower limit on interresponse times), Weibull, Normal, Log-Normal, and Gamma distributions; all possible two-component combinations of these; and three- and four-component models consisting of combinations of the Weibull, Normal, Log-Normal, and Gamma distributions. The four-component Log-Normal model was the best in terms of the Akaike information criterion and visual inspection of the fits. Parameter estimates for this model showed that the VR-VI response rate difference is due to a difference in short interresponse times, that is, within-bout responses. This result suggests that the VR-VI response rate difference is not an indication of a difference in the overall tendency to respond but rather a difference in the types of response patterns engendered by the two schedules.
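The model-comparison logic here (fit candidate IRT distributions by maximum likelihood, then rank them by AIC) can be sketched for the simplest case, a one- versus two-component log-normal fit via EM on log-IRTs. The data and starting values are synthetic; the study itself compared 30 candidate models, not two.

```python
import math
import random

def normal_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma * sigma) - (x - mu) ** 2 / (2 * sigma * sigma)

def fit_lognormal(logs):
    """MLE of a single log-normal: a normal fit to the log IRTs (2 parameters)."""
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / n)
    return sum(normal_logpdf(x, mu, sigma) for x in logs), 2

def fit_lognormal_mixture(logs, iters=100):
    """EM for a two-component log-normal mixture (5 free parameters)."""
    xs = sorted(logs)
    n = len(xs)
    mu1, mu2, s1, s2, w = xs[n // 4], xs[3 * n // 4], 1.0, 1.0, 0.5
    for _ in range(iters):
        # E step: responsibility of component 1 for each log-IRT
        r = []
        for x in logs:
            a = math.log(w) + normal_logpdf(x, mu1, s1)
            b = math.log(1.0 - w) + normal_logpdf(x, mu2, s2)
            m = max(a, b)
            r.append(math.exp(a - m) / (math.exp(a - m) + math.exp(b - m)))
        # M step: responsibility-weighted parameter updates
        n1 = sum(r)
        mu1 = sum(ri * x for ri, x in zip(r, logs)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, logs)) / (n - n1)
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, logs)) / n1)
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, logs)) / (n - n1))
        w = n1 / n
    ll = 0.0
    for x in logs:
        a = math.log(w) + normal_logpdf(x, mu1, s1)
        b = math.log(1.0 - w) + normal_logpdf(x, mu2, s2)
        m = max(a, b)
        ll += m + math.log(math.exp(a - m) + math.exp(b - m))
    return ll, 5

def aic(ll, k):
    return 2 * k - 2 * ll

rng = random.Random(1)
# synthetic IRTs: short within-bout times mixed with longer between-bout times
irts = ([math.exp(rng.gauss(-1.5, 0.3)) for _ in range(700)]
        + [math.exp(rng.gauss(1.0, 0.5)) for _ in range(300)])
logs = [math.log(t) for t in irts]
aic1 = aic(*fit_lognormal(logs))
aic2 = aic(*fit_lognormal_mixture(logs))
print(aic1, aic2)   # the two-component mixture should win (lower AIC)
```

AIC penalizes each extra free parameter by 2, so the mixture only wins when the extra components genuinely capture structure in the IRT distribution, as they do for bimodal data like this.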


Subjects
Reaction Time, Reinforcement Schedule, Animals, Operant Conditioning, Male, Psychological Models, Rats
4.
Behav Processes ; 114: 72-7, 2015 May.
Article in English | MEDLINE | ID: mdl-25783804

ABSTRACT

The strengthening view of reinforcement attributes behavior change to changes in response strength or in the value of the reinforcer. In contrast, the shaping view explains behavior change as the shaping of different response units through differential reinforcement. In this paper, we evaluate how well these two views explain: (1) the response-rate difference between variable-ratio and variable-interval schedules that provide the same reinforcement rate; and (2) the phenomenon of matching in choice. The copyist model (Tanno and Silberberg, 2012), a shaping-view account, provides accurate predictions of these phenomena without a strengthening mechanism; however, the model has limitations. It cannot explain the relations between behavior change and stimulus control, reinforcer amount, and reinforcer quality. These relations seem easily explained by a strengthening view. Future work should be directed at a model that combines the strengths of these two types of accounts.


Subjects
Psychological Models, Psychological Reinforcement, Animals, Choice Behavior
5.
Learn Behav ; 43(1): 54-71, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25515591

ABSTRACT

In Conditions 1 and 3 of our Experiment 1, rats pressed levers for food in a two-component multiple schedule. The first component was concurrent variable-ratio (VR) 20 variable-interval (VI) 90 s, and the second was concurrent yoked VI (its reinforcement rate equaled that of the prior component's VR) VI 90 s. In Condition 2, the VR was changed to tandem VR 20, differential reinforcement of low rates (DRL) 0.8 s. Local response rates were higher in the VR than in the yoked VI schedule, and this difference disappeared between tandem VR DRL and yoked VI. The relative time allocations to VR and yoked VI, as well as to tandem VR DRL and yoked VI, were approximately the same across conditions. In Experiment 2, rats chose in a single session between five different VI pairs, each lasting for 12 reinforcer presentations (variable-environment procedure). The across-schedule hourly reinforcement rates were 120 and 40, respectively, in Conditions 1-3 and 4-6. During Conditions 2 and 5, one lever's VI was changed to tandem VI, DRL 2 s. High covariation between relative time allocations and relative reinforcer frequencies, as well as invariance in local response rates to the schedules, was evident in all conditions. In addition, the relative local response rates were biased toward the unchanged VI in Conditions 2 and 5. These results demonstrate two-process control of choice: Inter-response-time reinforcement controls the local response rate, and relative reinforcer frequency controls relative time allocations.


Subjects
Choice Behavior, Psychological Reinforcement, Animals, Operant Conditioning, Male, Rats, Wistar Rats, Reaction Time, Reinforcement Schedule, Time Factors
6.
Psychopharmacology (Berl) ; 231(1): 85-95, 2014 Jan.
Article in English | MEDLINE | ID: mdl-23963529

ABSTRACT

RATIONALE: Drug effects on delay discounting are thought to reflect changes in sensitivity to reinforcer delay, although other behavioral mechanisms might be involved. One strategy for revealing the influence of different behavioral mechanisms is to alter features of the procedures in which they are studied. OBJECTIVE: This experiment examined whether the order of delay presentation under within-session delay discounting procedures impacts drug effects on discounting. METHODS: Rats responded under a discrete-trial choice procedure in which responses on one lever delivered one food pellet immediately and responses on the other lever delivered three food pellets either immediately or after a delay. The delay to the larger reinforcer (0, 4, 8, 16, and 32 s) was varied within session and the order of delay presentation (ascending or descending) varied between groups. RESULTS: Amphetamine (0.1-1.78 mg/kg) and methylphenidate (1.0-17.8 mg/kg) shifted delay functions upward in the ascending group (increasing choice of the larger reinforcer) and downward in the descending group (decreasing choice of the larger reinforcer). Morphine (1.0-10.0 mg/kg) and delta-9-tetrahydrocannabinol (0.32-5.6 mg/kg) tended to shift the delay functions downward, regardless of order of delay presentation, thereby reducing choice of the larger reinforcer, even when both reinforcers were delivered immediately. CONCLUSION: The effects of amphetamine and methylphenidate under delay discounting procedures differed depending on the order of delay presentation, indicating that drug-induced changes in discounting were due, in part, to mechanisms other than altered sensitivity to reinforcer delay. Instead, amphetamine and methylphenidate altered responding in a manner consistent with increased behavioral perseveration.


Subjects
Amphetamine/pharmacology, Central Nervous System Stimulants/pharmacology, Operant Conditioning/drug effects, Methylphenidate/pharmacology, Animals, Area Under Curve, Statistical Data Interpretation, Dose-Response Relationship (Drug), Dronabinol/pharmacology, Hallucinogens/pharmacology, Male, Morphine/pharmacology, Narcotics/pharmacology, Rats, Sprague-Dawley Rats, Psychological Reinforcement, Reward
7.
J Exp Anal Behav ; 98(3): 341-54, 2012 Nov.
Article in English | MEDLINE | ID: mdl-23144509

ABSTRACT

In Experiment 1, food-deprived rats responded to one of two schedules that were, with equal probability, associated with a sample lever. One schedule was always variable ratio, while the other schedule, depending on the trial within a session, was: (a) a variable-interval schedule; (b) a tandem variable-interval, differential-reinforcement-of-low-rate schedule; or (c) a tandem variable-interval, differential-reinforcement-of-high-rate schedule. Completion of a sample-lever schedule, which took approximately the same time regardless of schedule, presented two comparison levers, one associated with each sample-lever schedule. Pressing the comparison lever associated with the schedule just presented produced food, while pressing the other produced a blackout. Conditional-discrimination accuracy was related to the size of the difference in reinforced interresponse times and those that preceded it (predecessor interresponse times) between the variable-ratio and other comparison schedules. In Experiment 2, control by predecessor interresponse times was accentuated by requiring rats to discriminate between a variable-ratio schedule and a tandem schedule that required emission of a sequence of a long, then a short interresponse time in the tandem's terminal schedule. These discrimination data are compatible with the copyist model from Tanno and Silberberg (2012) in which response rates are determined by the succession of interresponse times between reinforcers weighted so that each interresponse time's role in rate determination diminishes exponentially as a function of its distance from reinforcement.


Subjects
Discrimination Learning, Reaction Time, Reinforcement Schedule, Psychological Reinforcement, Time Perception, Animals, Operant Conditioning, Male, Physiological Pattern Recognition, Rats, Wistar Rats
8.
Psychon Bull Rev ; 19(5): 759-78, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22673925

ABSTRACT

The variety of different performances maintained by schedules of reinforcement complicates comprehensive model creation. The present account assumes the simpler goal of modeling the performances of only variable reinforcement schedules because they tend to maintain steady response rates over time. The model presented assumes that rate is determined by the mean of interresponse times (time between two responses) between successive reinforcers, averaged so that their contribution to that mean diminishes exponentially with distance from reinforcement. To respond, the model randomly selects an interresponse time from the last 300 of these mean interresponse times, the selection likelihood arranged so that the proportion of session time spent emitting each of these 300 interresponse times is the same. This interresponse time defines the mean of an exponential distribution from which one is randomly chosen for emission. The response rates obtained approximated those found on several variable schedules. Furthermore, the model reproduced three effects: (1) the variable ratio maintaining higher response rates than does the variable interval; (2) the finding for variable schedules that when the reinforcement rate varies from low to high, the response rate function has an ascending and then descending limb; and (3) matching on concurrent schedules. Because these results are due to an algorithm that reproduces reinforced interresponse times, responding to single and concurrent schedules is viewed as merely copying what was reinforced before.
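The copying algorithm described above can be sketched, much simplified: keep the last 300 stored mean IRTs, select one with likelihood inversely weighted by its duration (so equal session time is spent emitting each remembered value), and emit an IRT from an exponential distribution with that mean. This is a loose paraphrase run on a single VI schedule; the parameter values, the VI implementation, and the exponential-decay bookkeeping are all simplifications for illustration, not the published model.

```python
import random
from collections import deque

def copyist_vi(vi_mean=30.0, session_time=3600.0, memory=300, decay=0.9, seed=0):
    """Much-simplified copyist-style simulation on a single VI schedule.

    Picks a remembered mean IRT with probability inversely proportional to
    its duration, emits an IRT from an exponential with that mean, and on
    each reinforcer stores an exponentially weighted mean of recent IRTs.
    Parameter values are illustrative.
    """
    rng = random.Random(seed)
    means = deque([1.0] * memory, maxlen=memory)   # remembered mean IRTs (s)
    ema = 1.0            # exponentially weighted mean of recent IRTs
    t, responses, reinforcers = 0.0, 0, 0
    next_setup = rng.expovariate(1.0 / vi_mean)    # VI timer
    while t < session_time:
        weights = [1.0 / m for m in means]         # equal-time selection weighting
        chosen = rng.choices(list(means), weights=weights)[0]
        irt = rng.expovariate(1.0 / chosen)        # emit from exponential w/ that mean
        t += irt
        responses += 1
        # recent IRTs dominate: older IRTs' contribution decays exponentially
        ema = decay * ema + (1.0 - decay) * irt
        if t >= next_setup:                        # reinforcer: store the current mean
            reinforcers += 1
            means.append(ema)
            next_setup = t + rng.expovariate(1.0 / vi_mean)
    return responses / session_time, reinforcers

rate, n_food = copyist_vi()
print(rate, n_food)
```

Because the stored values are themselves copies of previously reinforced IRTs, steady-state responding in a sketch like this is, as the abstract puts it, "merely copying what was reinforced before."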


Subjects
Operant Conditioning, Reinforcement Schedule, Animals, Humans, Learning, Psychological Models, Probability, Time Factors
9.
Learn Behav ; 38(4): 382-93, 2010 Nov.
Article in English | MEDLINE | ID: mdl-21048229

ABSTRACT

In the first condition of Experiment 1, 6 rats were exposed to concurrent variable-ratio (VR) 30, variable-interval (VI) 30-sec schedules. In the next two conditions, the subjects were exposed to concurrent VI VI schedules and to concurrent tandem VI differential-reinforcement-of-high-rate VI schedules. In the latter conditions, the overall and relative reinforcer rates equaled those in the first condition. Only minor differences appeared in time allocation (a molar measure) across conditions. However, local response rate differences (a molecular measure) appeared between schedule types, consistent with the interresponse times these schedules reinforced. In Experiment 2, these findings reappeared when the prior experiment was replicated with 5 subjects, except that the VR schedule was replaced by a VI-plus-linear-feedback schedule. These results suggest that, within the context tested, the molar factor of relative reinforcement rate controls preference, whereas the molecular factor of the relation between interresponse times and reinforcer probability controls the local response rate.


Subjects
Attention, Choice Behavior, Discrimination Learning, Reaction Time, Reinforcement Schedule, Animals, Association Learning, Male, Motivation, Rats, Wistar Rats
10.
J Exp Anal Behav ; 91(2): 157-67, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19794831

ABSTRACT

Food-deprived rats in Experiment 1 responded to one of two tandem schedules that were, with equal probability, associated with a sample lever. The tandem schedules' initial links were different random-interval schedules. Their values were adjusted to approximate equality in time to completing each tandem schedule's response requirements. The tandem schedules differed in their terminal links: One reinforced short interresponse times; the other reinforced long ones. Tandem-schedule completion presented two comparison levers, one of which was associated with each tandem schedule. Pressing the lever associated with the sample-lever tandem schedule produced a food pellet. Pressing the other produced a blackout. The difference between terminal-link reinforced interresponse times varied across 10-trial blocks within a session. Conditional-discrimination accuracy increased with the size of the temporal difference between terminal-link reinforced interresponse times. In Experiment 2, one tandem schedule was replaced by a random ratio, while the comparison schedule was either a tandem schedule that only reinforced long interresponse times or a random-interval schedule. Again, conditional-discrimination accuracy increased with the temporal difference between the two schedules' reinforced interresponse times. Most rats mastered the discrimination between random ratio and random interval, showing that the interresponse times reinforced by these schedules can serve to discriminate between these schedules.


Subjects
Psychological Discrimination, Reaction Time, Reinforcement Schedule, Animals, Operant Conditioning, Discrimination Learning, Male, Rats, Wistar Rats
11.
J Exp Anal Behav ; 89(1): 5-14, 2008 Jan.
Article in English | MEDLINE | ID: mdl-18338672

ABSTRACT

This study focused on variables that may account for response-rate differences under variable-ratio (VR) and variable-interval (VI) schedules of reinforcement. Four rats were exposed to VR, VI, tandem VI differential-reinforcement-of-high-rate, regulated-probability-interval, and negative-feedback schedules of reinforcement that provided the same rate of reinforcement. Response rates were higher under the VR schedule than the VI schedule, and the rates on all other schedules approximated those under the VR schedule. The median reinforced interresponse time (IRT) under the VI schedule was longer than for the other schedules. Thus, differences in reinforced IRTs correlated with differences in response rate, an outcome suggestive of the molecular control of response rate. This conclusion was complemented by the additional finding that the differences in molar reinforcement-feedback functions had little discernible impact on responding.
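The molecular account above, in which response rate tracks the IRTs each schedule reinforces, can be illustrated by simulating one and the same responder on a VR and on a rate-matched VI and comparing the IRTs that end in reinforcement. The responder model and all parameter values below are illustrative, not taken from the study.

```python
import random
import statistics

def reinforced_irts(schedule, n_responses=50_000, seed=0):
    """Collect the IRTs that end in reinforcement for a simulated responder
    emitting exponential IRTs (mean 1 s) on VR 30 or on a rate-matched VI 30 s.
    Parameter values are illustrative."""
    rng = random.Random(seed)
    t = 0.0
    next_setup = rng.expovariate(1.0 / 30.0)   # VI timer (unused for VR)
    out = []
    for _ in range(n_responses):
        irt = rng.expovariate(1.0)
        t += irt
        if schedule == "VR":
            if rng.random() < 1.0 / 30.0:      # constant probability per response
                out.append(irt)
        elif t >= next_setup:                  # VI: food iff the interval has elapsed
            out.append(irt)
            next_setup = t + rng.expovariate(1.0 / 30.0)
    return out

vr = reinforced_irts("VR")
vi = reinforced_irts("VI", seed=1)
print(statistics.median(vr), statistics.median(vi))   # VI reinforces longer IRTs
```

Even with identical emitted behavior, the VI selectively reinforces longer IRTs (longer waits are more likely to span the interval's setup), which is the direction of the median difference the abstract reports.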


Subjects
Operant Conditioning, Probability Learning, Reinforcement Schedule, Animals, Psychological Feedback, Male, Rats, Wistar Rats
12.
Behav Processes ; 78(1): 10-6, 2008 May.
Article in English | MEDLINE | ID: mdl-18178339

ABSTRACT

In Experiment 1, each of three humans knowledgeable about operant schedules used mouse clicks to respond on a "work key" presented on a monitor. On a random half of the presentations, work-key responses that completed a variable-ratio (VR) 12 produced a tone. After five tones, the work key was replaced by two report keys. Pressing the right or left report key, respectively, added or subtracted ¥50 from a counter and restored the work key. On the other half of the presentations, a variable-interval (VI) schedule associated with the work key was defined so that its interreinforcer intervals approximated the time it took to complete the variable ratio. After five tone-producing completions of this schedule, the report keys were presented. Left or right report-key presses, respectively, added or subtracted ¥50 from the counter. Subjects achieved high yen totals. In Experiment 2, the procedure was changed by requiring, after completion of the variable interval, an interresponse time that approximated the duration of the reinforced interresponse time on the variable ratio. Before beginning, subjects were shown how a sequence of response bouts and pauses could be used to predict the schedule type. Subjects again achieved high levels of accuracy. These results show that humans can discriminate ratio from interval schedules even when those schedules provide the same rate of reinforcement and reinforced interresponse times.
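The yoking arrangement above (VI interreinforcer intervals set to approximate VR completion times) can be sketched as follows. The response rate, the uniform ratio draw, and the function name are hypothetical stand-ins, since the abstract does not specify how the intervals were generated.

```python
import random

def yoked_vi_intervals(vr_mean=12, response_rate=3.0, n=20, seed=0):
    """Build yoked VI interreinforcer intervals approximating VR completion times.

    Each interval is the simulated time to emit one VR run of responses at
    `response_rate` responses/s. The uniform ratio draw and the response rate
    are illustrative assumptions, not values from the study.
    """
    rng = random.Random(seed)
    intervals = []
    for _ in range(n):
        ratio = rng.randint(1, 2 * vr_mean - 1)   # crude VR draw, mean ~ vr_mean
        completion = sum(rng.expovariate(response_rate) for _ in range(ratio))
        intervals.append(completion)
    return intervals

ivs = yoked_vi_intervals()
print(sum(ivs) / len(ivs))   # mean yoked interval ~ vr_mean / response_rate
```

Yoking in this way equates overall reinforcement rate across the two schedule types, leaving the response-dependency of the ratio versus the time-dependency of the interval as the only discriminable difference.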


Subjects
Operant Conditioning, Discrimination Learning, Reaction Time, Reinforcement Schedule, Time Perception, Association Learning, Self-Experimentation, Humans, Male, Reference Values, Reward