Results 1 - 20 of 25
1.
J Exp Anal Behav ; 119(1): 25-35, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36346194

ABSTRACT

Tversky and Kahneman (1981) told participants to imagine they were at a store about to purchase an item. They were asked if they would be willing to drive 20 min to another store to receive a $5 discount on the item's price. Most participants were willing, but only when the original price of the item was small ($15); when the original price was relatively large ($125), most said they would not drive 20 min for a $5 discount. We examined this framing effect in 296 participants, but instead used a psychophysical-adjustment procedure to obtain quantitative estimates of the discount required with different (a) item prices, (b) delays until the item's receipt, and (c) opportunity costs (in "driving" vs. "delivery" tasks). We systematically replicated Tversky and Kahneman's results, but also extended them by showing a substantial influence of opportunity costs on the consumer discounts required. A behavioral model of delay discounting-additive-utility theory-accounted for 97% of the variance in these consumer discounts.
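The framing result above reduces to a proportion comparison: the same $5 looks large against $15 and small against $125. Here is a minimal arithmetic sketch (illustrative numbers only, not the additive-utility model the study actually fits):

```python
# Illustrative arithmetic for the framing effect: the same $5 discount
# is a large share of a $15 price but a small share of a $125 price.
# (A proportion sketch only, not the study's additive-utility model.)

def proportional_saving(discount, price):
    """Return the discount as a fraction of the item's price."""
    return discount / price

small_item = proportional_saving(5, 15)    # 1/3 of the price
large_item = proportional_saving(5, 125)   # 1/25 of the price
```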


Subject(s)
Choice Behavior , Consumer Behavior , Humans , Costs and Cost Analysis , Commerce
2.
J Exp Anal Behav ; 111(3): 387-404, 2019 May.
Article in English | MEDLINE | ID: mdl-31038743

ABSTRACT

Four hundred and fifty participants were recruited from Amazon Mechanical Turk across 3 experiments to test the predictions of a hyperbolic discounting equation in accounting for human choices involving variable delays or multiple rewards (Mazur, 1984, 1986). In Experiment 1, participants made hypothetical choices between 2 monetary alternatives, 1 consisting of a fixed delay and another consisting of 2 delays of equal probability (i.e., a variable-delay procedure). In Experiment 2, participants made hypothetical monetary choices between a single, immediate reward and 2 rewards, 1 immediate and 1 delayed (i.e., a double-reward procedure). Experiment 3 also used a double-reward procedure, but with 2 delayed rewards. Participants in all 3 experiments also completed a standard delay-discounting task. Finally, 3 reward amounts were tested in each type of task ($100, $1000, and $5000). In the double-reward conditions (Experiments 2 and 3), the results were in good qualitative and quantitative agreement with Mazur's model (1984, 1986). In contrast, when participants made choices involving variable delays (Experiment 1), there was relatively poor qualitative and quantitative agreement with this model. These results, along with our previous findings, suggest the structure of questions in hypothetical tasks with humans can be a strong determinant of the choice pattern.
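Mazur's hyperbolic equation can be sketched as follows; the discounting rate K = 0.1 and the amounts and delays are illustrative assumptions, not fitted parameters from these experiments:

```python
# Mazur's (1984, 1986) hyperbolic equation, V = A/(1 + K*D), applied
# to a variable-delay alternative: the alternative's value is the
# probability-weighted mean of the values at each possible delay.
# K = 0.1 and the amounts/delays are illustrative values.

def hyperbolic_value(amount, delay, k=0.1):
    return amount / (1 + k * delay)

def variable_delay_value(amount, delays, k=0.1):
    """Mean hyperbolic value over equiprobable delays."""
    return sum(hyperbolic_value(amount, d, k) for d in delays) / len(delays)

# Because the hyperbola is convex in delay, a 50/50 mix of 0-s and
# 20-s delays is worth more than a fixed 10-s delay with the same mean:
variable = variable_delay_value(100, [0, 20])  # (100 + 33.3)/2, about 66.7
fixed = hyperbolic_value(100, 10)              # 100/2 = 50.0
```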


Subject(s)
Choice Behavior , Delay Discounting , Reward , Adult , Female , Humans , Male , Models, Psychological , Probability
3.
J Exp Anal Behav ; 106(1): 1-21, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27353633

ABSTRACT

Prior research has shown that nonhumans show an extreme preference for variable over fixed delays to reinforcement. This well-established preference for variability occurs because a reinforcer's strength or "value" decreases according to a curvilinear function as its delay increases. The purpose of the present experiments was to investigate whether this preference for variability occurs with human participants making hypothetical choices. In three experiments, participants recruited from Amazon Mechanical Turk made choices between variable and fixed monetary rewards. In a variable-delay procedure, participants repeatedly chose between a reward delivered either immediately or after a delay (with equal probability) and a reward after a fixed delay (Experiments 1 and 2). In a double-reward procedure, participants made choices between an alternative consisting of two rewards, one delivered immediately and one after a delay, and a second alternative consisting of a single reward delivered after a delay (Experiments 1 and 3). Finally, all participants completed a standard delay-discounting task. Although we observed both curvilinear discounting and magnitude effects in the standard discounting task, we found no consistent evidence of a preference for variability--as predicted by two prominent models of curvilinear discounting (i.e., a simple hyperbola and a hyperboloid)--in our variable-delay and double-reward procedures. This failure to observe a preference for variability may be attributed to the hypothetical, rule-governed nature of choices in the present study. In such contexts, participants may adopt relatively simple strategies for making more complex choices.
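The two curvilinear models named above can be sketched with illustrative parameter values (k and s here are assumptions, not fits); both are convex in delay and so both predict the variability preference that the experiments failed to find:

```python
# Two curvilinear discounting forms: a simple hyperbola
# V = A/(1 + kD) and a hyperboloid V = A/(1 + kD)**s. Both are
# convex in delay, so both predict preference for a variable delay
# over a fixed delay with the same mean. Parameters are illustrative.

def hyperbola(amount, delay, k=0.05):
    return amount / (1 + k * delay)

def hyperboloid(amount, delay, k=0.05, s=0.8):
    return amount / (1 + k * delay) ** s

predicts_variability = []
for fn in (hyperbola, hyperboloid):
    mixed = (fn(100, 0) + fn(100, 20)) / 2  # variable: 0 s or 20 s
    fixed = fn(100, 10)                     # fixed: 10 s, same mean
    predicts_variability.append(mixed > fixed)
```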


Subject(s)
Choice Behavior , Delay Discounting , Reward , Adult , Female , Humans , Male , Probability , Reinforcement, Psychology
4.
J Exp Anal Behav ; 102(2): 198-212, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25130299

ABSTRACT

Four rats responded on concurrent variable-interval schedules that delivered token stimuli (stimulus lights arranged vertically above each of two side levers). During exchange periods, each token could be exchanged for one food pellet by responding on a center lever, with one response required for each pellet delivery. In different conditions, the exchange requirements (number of tokens that had to be earned before they could be exchanged for food) varied between one and four for the two response levers. The experiment was closely patterned after research with pigeons by Mazur and Biondi (2013), and the results from the rats in the present experiment were similar. Response percentages on the two levers changed as each additional token was earned, and these patterns indicated that choice was controlled by both the time to the exchange periods and the number of food pellets that were delivered in the exchange period. In some conditions, the exchange requirement was three tokens for each lever, but the token lights were not turned on as they were earned for one of the two levers. The rats showed a slight preference for the lever without the token lights, which may indicate that the token lights were not serving as conditioned reinforcers (a result also found by Mazur and Biondi with pigeons). Overall, these results suggest that, in this choice procedure, the token stimuli served primarily as discriminative stimuli that signaled the temporal proximity and quantity of the primary reinforcer, food.


Subject(s)
Choice Behavior , Reinforcement Schedule , Animals , Conditioning, Operant , Cues , Male , Rats , Rats, Long-Evans , Token Economy
5.
J Exp Anal Behav ; 99(2): 159-78, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23460072

ABSTRACT

Twelve pigeons responded on concurrent variable-interval schedules that delivered token stimuli (stimulus lights for some pigeons, and white circles on the response keys for others). During exchange periods, each token could be exchanged for food on a fixed-ratio 1 schedule. Across conditions, the exchange requirements (number of tokens that had to be earned before they could be exchanged for food) varied between one and four for the two response keys. The main findings were that the pigeons' response percentages varied as a function of the number of tokens earned at any given moment, and they were determined by both the delays to food and by the number of food deliveries in the exchange periods. In some conditions, tokens had to be earned but were not visible during the variable-interval schedules for one or both keys. When one key had visible tokens and the other did not, the pigeons showed a preference for the key without visible tokens. A model based on the matching law and a hyperbolic delay-discounting equation could account for the main patterns of choice responding, and for how response percentages changed as successive tokens were earned. The results are consistent with the view that the token stimuli served as discriminative stimuli that signaled the current delays to food.


Subject(s)
Choice Behavior , Token Economy , Animals , Columbidae , Conditioning, Operant , Male , Models, Psychological , Reinforcement Schedule
6.
J Comp Psychol ; 126(4): 407-20, 2012 Nov.
Article in English | MEDLINE | ID: mdl-22582816

ABSTRACT

In the Monty Hall dilemma, an individual chooses between three options, only one of which will deliver a prize. After the initial choice, one of the nonchosen options is revealed as a losing option, and the individual can choose to stay with the original choice or switch to the other remaining option. Previous studies have found that most adults stay with their initial choice, although the chances of winning are 2/3 for switching and 1/3 for staying. Pigeons, college students, and preschool children were given many trials on this task to examine how their choices might change with experience. The college students began to switch on a majority of trials much sooner than the pigeons, contrary to the findings by Herbranson and Schroeder (2010) that pigeons perform better than people on this task. In all three groups, some individuals approximated the optimal strategy of switching on every trial, but most did not. Many of the preschoolers immediately showed a pattern of always switching or always staying and continued this pattern throughout the experiment. In a condition where the probability of winning was 90% after a switch, all college students and all but one pigeon learned to switch on nearly every trial. The results suggest that one main impediment to learning the optimal strategy in the Monty Hall task, even after repeated trials, is the difficulty in discriminating the different reinforcement probabilities for switching versus staying.
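The switching advantage cited above (2/3 vs. 1/3) can be checked with a short simulation; the door numbering and random seed are arbitrary:

```python
import random

# A minimal Monty Hall simulation: the host always opens a losing
# door that the player did not choose. Switching wins on about 2/3
# of trials; staying wins on about 1/3.

def play(switch, rng):
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    first_pick = rng.choice(doors)
    revealed = next(d for d in doors if d != prize and d != first_pick)
    if switch:
        final = next(d for d in doors if d not in (first_pick, revealed))
    else:
        final = first_pick
    return final == prize

rng = random.Random(0)
n = 10_000
switch_wins = sum(play(True, rng) for _ in range(n)) / n   # near 2/3
stay_wins = sum(play(False, rng) for _ in range(n)) / n    # near 1/3
```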


Subject(s)
Choice Behavior , Columbidae , Adolescent , Animals , Child, Preschool , Female , Games, Experimental , Humans , Male , Probability , Reinforcement Schedule , Reward , Students/psychology , Young Adult
7.
J Exp Anal Behav ; 97(2): 215-30, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22389527

ABSTRACT

Parallel experiments with rats and pigeons examined whether the size of a pre-trial ratio requirement would affect choices in a self-control situation. In different conditions, either 1 response or 40 responses were required before each trial. In the first half of each experiment, an adjusting-ratio schedule was used, in which subjects could choose a fixed-ratio schedule leading to a small reinforcer, or an adjusting-ratio schedule leading to a larger reinforcer. The size of the adjusting ratio requirement was increased and decreased over trials based on the subject's responses, in order to estimate an indifference point-a ratio at which the two alternatives were chosen about equally often. The second half of each experiment used an adjusting-delay procedure-fixed and adjusting delays to the small and large reinforcers were used instead of ratio requirements. In some conditions, particularly with the reinforcer delays, the rats had consistently longer adjusting delays with the larger pre-trial ratios, reflecting a greater tendency to choose the larger, delayed reinforcer when more responding was required to reach the choice point. No consistent effects of the pre-trial ratio were found for the pigeons in any of the conditions. These results may indicate that rats are more sensitive to the long-term reinforcement rates of the two alternatives, or they may result from a shallower temporal discounting rate for rats than for pigeons, a difference that has been observed in previous studies.
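The adjusting (titration) logic used in these experiments can be sketched as follows; the simulated chooser, the hyperbolic value rule, and all parameter values are illustrative assumptions rather than details of the experiments:

```python
import random

# Sketch of an adjusting (titration) procedure: the adjusting delay
# is raised after each choice of the adjusting alternative and
# lowered after each choice of the standard, so it converges on the
# indifference point. The simulated "subject" picks the alternative
# with the higher hyperbolic value plus noise; all parameters are
# illustrative assumptions.

def hyperbolic_value(amount, delay, k=0.1):
    return amount / (1 + k * delay)

def titrate(std_amount, std_delay, adj_amount, trials=2000, step=0.5, seed=0):
    rng = random.Random(seed)
    adj_delay = std_delay
    for _ in range(trials):
        v_std = hyperbolic_value(std_amount, std_delay)
        v_adj = hyperbolic_value(adj_amount, adj_delay) + rng.gauss(0, 0.01)
        if v_adj > v_std:
            adj_delay += step                       # chose adjusting: lengthen it
        else:
            adj_delay = max(0.0, adj_delay - step)  # chose standard: shorten it
    return adj_delay

# One pellet at 10 s (standard) vs two pellets at the adjusting delay:
# hyperbolic values are equal when 2/(1 + 0.1*d) = 1/(1 + 0.1*10),
# i.e., at d = 30 s, so the titrated delay should settle near 30.
indiff = titrate(std_amount=1, std_delay=10, adj_amount=2)
```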


Subject(s)
Choice Behavior , Conditioning, Operant , Reinforcement Schedule , Animals , Columbidae , Male , Rats , Rats, Long-Evans
8.
J Exp Anal Behav ; 95(1): 41-56, 2011 Jan.
Article in English | MEDLINE | ID: mdl-21541170

ABSTRACT

Parallel experiments with rats and pigeons examined reasons for previous findings that in choices with probabilistic delayed reinforcers, rats' choices were affected by the time between trials whereas pigeons' choices were not. In both experiments, the animals chose between a standard alternative and an adjusting alternative. A choice of the standard alternative led to a short delay (1 s or 3 s), and then food might or might not be delivered. If food was not delivered, there was an "interlink interval," and then the animal was forced to continue to select the standard alternative until food was delivered. A choice of the adjusting alternative always led to food after a delay that was systematically increased and decreased over trials to estimate an indifference point--a delay at which the two alternatives were chosen about equally often. Under these conditions, the indifference points for both rats and pigeons increased as the interlink interval increased from 0 s to 20 s, indicating decreased preference for the probabilistic reinforcer with longer time between trials. The indifference points from both rats and pigeons were well described by the hyperbolic-decay model. In the last phase of each experiment, the animals were not forced to continue selecting the standard alternative if food was not delivered. Under these conditions, rats' choices were affected by the time between trials whereas pigeons' choices were not, replicating results of previous studies. The differences between the behavior of rats and pigeons appear to be the result of procedural details, not a fundamental difference in how these two species make choices with probabilistic delayed reinforcers.


Subject(s)
Choice Behavior , Memory, Short-Term , Probability Learning , Reinforcement Schedule , Animals , Columbidae , Conditioning, Operant , Male , Motivation , Psychomotor Performance , Rats , Rats, Long-Evans , Species Specificity , Time Perception
10.
J Exp Psychol Anim Behav Process ; 36(3): 321-33, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20658863

ABSTRACT

Two experiments on discrete-trial choice examined the conditions under which pigeons would exhibit exclusive preference for the better of two alternatives as opposed to distributed preference (making some choices for each alternative). In Experiment 1, pigeons chose between red and green response keys that delivered food after delays of different durations, and in Experiment 2 they chose between red and green keys that delivered food with different probabilities. Some conditions of Experiment 1 had fixed delays to food and other conditions had variable delays. In both experiments, exclusive or nearly exclusive preference for the better alternative was found in some conditions, but distributed preference was found in other conditions, especially in Experiment 2 when key location varied randomly over trials. The results were used to evaluate several different theories about discrete-trial choice. The results suggest that exclusive preference for one alternative is a frequent outcome in discrete-trial choice. When distributed preference does occur, it is not the result of inherent tendencies to sample alternatives or to match response percentages to the values of the alternatives. Rather, distributed preference may occur when two factors (such as reinforcer delay and position bias) compete for the control of choice, or when the consequences for the two alternatives are similar and difficult to discriminate.


Subject(s)
Choice Behavior/physiology , Conditioning, Operant/physiology , Reinforcement Schedule , Reinforcement, Psychology , Animals , Columbidae , Male , Probability , Reaction Time/physiology , Time Factors
11.
J Exp Anal Behav ; 91(2): 197-211, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19794834

ABSTRACT

An adjusting-delay procedure was used to study the choices of pigeons and rats when both delay and amount of reinforcement were varied. In different conditions, the choice alternatives included one versus two reinforcers, one versus three reinforcers, and three versus two reinforcers. The delay to one alternative (the standard alternative) was kept constant in a condition, and the delay to the other (the adjusting alternative) was increased or decreased many times a session so as to estimate an indifference point--a delay at which the two alternatives were chosen about equally often. Indifference functions were constructed by plotting the adjusting delay as a function of the standard delay for each pair of reinforcer amounts. The experiments were designed to test the prediction of a hyperbolic decay equation that the slopes of the indifference functions should increase as the ratio of the two reinforcer amounts increased. Consistent with the hyperbolic equation, the slopes of the indifference functions depended on the ratios of the two reinforcer amounts for both pigeons and rats. These results were not compatible with an exponential decay equation, which predicts slopes of 1 regardless of the reinforcer amounts. Combined with other data, these findings provide further evidence that delay discounting is well described by a hyperbolic equation for both species, but not by an exponential equation. Quantitative differences in the y-intercepts of the indifference functions from the two species suggested that the rate at which reinforcer strength decreases with increasing delay may be four or five times slower for rats than for pigeons.
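The slope prediction tested here follows directly from the two discounting equations. In this sketch, k = 0.2 is an illustrative value and the number of reinforcers is treated as a single amount for simplicity:

```python
import math

# At indifference the two alternatives have equal value. Under
# hyperbolic discounting, V = A/(1 + kD), the adjusting delay is a
# linear function of the standard delay with slope equal to the
# amount ratio; under exponential discounting, V = A*exp(-kD), the
# slope is 1 regardless of the amounts. k = 0.2 is illustrative.

K = 0.2

def hyperbolic_indifference(a_adj, a_std, d_std, k=K):
    """Adjusting delay that equates the two hyperbolic values."""
    return ((a_adj / a_std) * (1 + k * d_std) - 1) / k

def exponential_indifference(a_adj, a_std, d_std, k=K):
    """Adjusting delay that equates the two exponential values."""
    return d_std + math.log(a_adj / a_std) / k

# Three reinforcers (adjusting) vs one (standard): hyperbolic slope = 3.
slope_hyp = hyperbolic_indifference(3, 1, 11) - hyperbolic_indifference(3, 1, 10)
slope_exp = exponential_indifference(3, 1, 11) - exponential_indifference(3, 1, 10)
```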


Subject(s)
Choice Behavior , Reinforcement Schedule , Animals , Columbidae , Conditioning, Operant , Male , Models, Psychological , Rats , Rats, Long-Evans , Reward
12.
Learn Behav ; 36(4): 301-10, 2008 Nov.
Article in English | MEDLINE | ID: mdl-18927053

ABSTRACT

Pigeons responded in a successive-encounters procedure that consisted of a search period, a choice period, and a handling period. The search period was either a fixed-interval or a mixed-interval schedule presented on the center key of a three-key chamber. Upon completion of the search period, the center key was turned off and the two side keys were lit. A pigeon could either accept a delay followed by food (by pecking the right key) or reject this option and return to the search period (by pecking the left key). During the choice period, a red right key represented the long alternative (a long handling delay followed by food), and a green right key represented the short alternative (a short handling delay followed by food). The experiment consisted of a series of comparisons for which optimal diet theory predicted no changes in preference for the long alternative (because the overall rates of reinforcement were unchanged), whereas the hyperbolic-decay model predicted changes in preference (because the delays to the next possible reinforcer were varied). In all comparisons, the results supported the predictions of the hyperbolic-decay model, which states that the value of a reinforcer is inversely related to the delay between a choice response and reinforcer delivery.


Subject(s)
Choice Behavior , Reinforcement, Psychology , Animals , Behavior, Animal , Columbidae , Male , Time Factors
13.
J Exp Anal Behav ; 89(1): 1-3, 2008 Jan.
Article in English | MEDLINE | ID: mdl-18338671
14.
Learn Behav ; 35(3): 169-76, 2007 Aug.
Article in English | MEDLINE | ID: mdl-17918422

ABSTRACT

Rats chose between alternatives that differed in the number of reinforcers and in the delay to each reinforcer. A left leverpress led to two reinforcers, each delivered after a fixed delay. A right leverpress led to one reinforcer after an adjusting delay. The adjusting delay was increased or decreased many times in a session, depending on the rat's choices, in order to estimate an indifference point--a delay at which the two alternatives were chosen about equally often. Both the number of reinforcers and their individual delays affected the indifference points. The overall pattern of results was well described by the hyperbolic-decay model, which states that each additional reinforcer delivered by an alternative increases preference for that alternative but that a reinforcer's effect is inversely related to its delay. Two other possible delay-discounting equations, an exponential equation and a reciprocal equation, did not produce satisfactory predictions for these data. Adding an additional free parameter to the hyperbolic equation as an exponent for delay did not appreciably improve the predictions, suggesting that raising delay to some power other than 1.0 was unnecessary. The results were qualitatively similar to those from a previous experiment with pigeons, but quantitative differences suggested that the rates of delay discounting were several times slower for rats than for pigeons.
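The hyperbolic-decay model's treatment of a two-reinforcer alternative can be sketched as follows; K = 0.1 and the delays are illustrative assumptions, not values from the experiment:

```python
# Hyperbolic-decay model with multiple reinforcers: each reinforcer
# contributes A/(1 + K*D) with its own delay, and the contributions
# add. K = 0.1 and the delays are illustrative.

K = 0.1

def alternative_value(delays, amount=1.0, k=K):
    """Summed hyperbolic value of one reinforcer per listed delay."""
    return sum(amount / (1 + k * d) for d in delays)

# Two reinforcers at 10 s and 30 s vs one reinforcer at delay d:
# the pair is worth 0.5 + 0.25 = 0.75, and a single reinforcer
# matches that when 1/(1 + 0.1*d) = 0.75, i.e., d = 10/3 s.
pair = alternative_value([10, 30])
d_indifference = (1 / pair - 1) / K
```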


Subject(s)
Choice Behavior , Conditioning, Operant , Motivation , Reinforcement Schedule , Time Perception , Animals , Male , Models, Theoretical , Rats , Rats, Sprague-Dawley
15.
J Exp Anal Behav ; 88(1): 73-85, 2007 Jul.
Article in English | MEDLINE | ID: mdl-17725052

ABSTRACT

Pigeons responded in a successive-encounters procedure that consisted of a search state, a choice state, and a handling state. The search state was either a fixed-interval or mixed-interval schedule presented on the center key of a three-key chamber. Upon completion of the search state, the choice state was presented, in which the center key was off and the two side keys were lit. A pigeon could either accept a delay followed by food (by pecking the right key) or reject this option and return to the search state (by pecking the left key). During the choice state, a red right key represented the long alternative (a long handling delay followed by food), and a green right key represented the short alternative (a short handling delay followed by food). In some conditions, both the short and long alternatives were fixed-time schedules, and in other conditions both were mixed-time schedules. Contrary to the predictions of both optimal foraging theory and delay-reduction theory, the percentage of trials on which pigeons accepted the long alternative depended on whether the search and handling schedules were fixed or mixed. They were more likely to accept the long alternative when the search states were fixed-interval rather than mixed-interval schedules, and more likely to reject the long alternative when the handling states were fixed-time rather than mixed-time schedules. This pattern of results was in qualitative agreement with the predictions of the hyperbolic-decay model, which states that the value of a reinforcer is inversely related to the delay between a choice response and reinforcer delivery.


Subject(s)
Choice Behavior , Reinforcement, Psychology , Animals , Columbidae , Male
16.
Behav Processes ; 75(2): 220-4, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17343994

ABSTRACT

An adjusting-delay procedure was used to study rats' choices with probabilistic and delayed reinforcers, and to compare them with previous results from pigeons. A left lever press led to a 5-s delay signaled by a light and a tone, followed by a food pellet on 50% of the trials. A right lever press led to an adjusting delay signaled by a light followed by a food pellet on 100% of the trials. In some conditions, the light and tone for the probabilistic reinforcer were present only on trials that delivered food. In other conditions, the light and tone were present on all trials on which the left lever was chosen. Similar studies with pigeons [Mazur, J.E., 1989. Theories of probabilistic reinforcement. J. Exp. Anal. Behav. 51, 87-99; Mazur, J.E., 1995. Conditioned reinforcement and choice with delayed and uncertain primary reinforcers. J. Exp. Anal. Behav. 63, 139-150] found that choice of the probabilistic reinforcer increased dramatically when the delay-interval stimuli were omitted on no-food trials, but this study found no such effect with the rats. In other conditions, the probability of food was varied, and comparisons to previous studies with pigeons indicated that rats showed greater sensitivity to decreasing reinforcer probabilities. The results support the hypothesis that rats' choices in these situations depend on the total time between a choice response and a reinforcer, whereas pigeons' choices are strongly influenced by the presence of delay-interval stimuli.


Subject(s)
Association Learning/physiology , Choice Behavior/physiology , Probability Learning , Reinforcement, Psychology , Time Perception/physiology , Animals , Columbidae , Rats , Rats, Sprague-Dawley , Species Specificity , Time Factors
17.
J Exp Anal Behav ; 86(2): 211-22, 2006 Sep.
Article in English | MEDLINE | ID: mdl-17002228

ABSTRACT

Pigeons responded on concurrent-chains schedules with equal variable-interval schedules as initial links. One terminal link delivered a single reinforcer after a fixed delay, and the other terminal link delivered either three or five reinforcers, each preceded by a fixed delay. Some conditions included a postreinforcer delay after the single reinforcer to equate the total durations of the two terminal links, but other conditions did not include such a postreinforcer delay. With short initial links, preference for the single-reinforcer alternative decreased when a postreinforcer delay was present, but with long initial links, the postreinforcer delays had no significant effect on preference. In conditions with a postreinforcer delay, preference for the single-reinforcer alternative frequently switched from above 50% to below 50% as the initial links were lengthened. This pattern of results was consistent with delay-reduction theory (Squires & Fantino, 1971), but not with the contextual-choice model (Grace, 1994) or the hyperbolic value-added model (Mazur, 2001) as they have usually been applied. However, the hyperbolic value-added model could account for the results if its calculations were expanded to include reinforcers delivered in later terminal links. The implications of these findings for models of concurrent-chains performance are discussed.


Subject(s)
Appetitive Behavior , Choice Behavior , Motivation , Reinforcement Schedule , Animals , Association Learning , Columbidae , Discrimination Learning , Psychomotor Performance
18.
J Exp Anal Behav ; 85(2): 275-91, 2006 Mar.
Article in English | MEDLINE | ID: mdl-16673829

ABSTRACT

The use of mathematical models in the experimental analysis of behavior has increased over the years, and they offer several advantages. Mathematical models require theorists to be precise and unambiguous, often allowing comparisons of competing theories that sound similar when stated in words. Sometimes different mathematical models may make equally accurate predictions for a large body of data. In such cases, it is important to find and investigate situations for which the competing models make different predictions because, unless two models are actually mathematically equivalent, they are based on different assumptions about the psychological processes that underlie an observed behavior. Mathematical models developed in basic behavioral research have been used to predict and control behavior in applied settings, and they have guided research in other areas of psychology. A good mathematical model can provide a common framework for understanding what might otherwise appear to be diverse and unrelated behavioral phenomena. Because psychologists vary in their quantitative skills and in their tolerance for mathematical equations, it is important for those who develop mathematical models of behavior to find ways (such as verbal analogies, pictorial representations, or concrete examples) to communicate the key premises of their models to nonspecialists.


Subject(s)
Models, Theoretical , Social Behavior , Humans , Reinforcement, Psychology
19.
J Exp Anal Behav ; 83(3): 263-79, 2005 May.
Article in English | MEDLINE | ID: mdl-16050037

ABSTRACT

In Experiment 1 with rats, a left lever press led to a 5-s delay and then a possible reinforcer. A right lever press led to an adjusting delay and then a certain reinforcer. This delay was adjusted over trials to estimate an indifference point, or a delay at which the two alternatives were chosen about equally often. Indifference points increased as the probability of reinforcement for the left lever decreased. In some conditions with a 20% chance of food, a light above the left lever was lit during the 5-s delay on all trials, but in other conditions, the light was only lit on those trials that ended with food. Unlike previous results with pigeons, the presence or absence of the delay light on no-food trials had no effect on the rats' indifference points. In other conditions, the rats showed less preference for the 20% alternative when the time between trials was longer. In Experiment 2 with rats, fixed-interval schedules were used instead of simple delays, and the presence or absence of the fixed-interval requirement on no-food trials had no effect on the indifference points. In Experiment 3 with rats and Experiment 4 with pigeons, the animals chose between a fixed-ratio 8 schedule that led to food on 33% of the trials and an adjusting-ratio schedule with food on 100% of the trials. Surprisingly, the rats showed less preference for the 33% alternative in conditions in which the ratio requirement was omitted on no-food trials. For the pigeons, the presence or absence of the ratio requirement on no-food trials had little effect. The results suggest that there may be differences between rats and pigeons in how they respond in choice situations involving delayed and probabilistic reinforcers.


Subject(s)
Behavior, Animal/physiology , Choice Behavior , Reaction Time , Reinforcement, Psychology , Animals , Columbidae , Male , Probability , Rats , Rats, Sprague-Dawley , Species Specificity
20.
Behav Processes ; 69(2): 137-8; author reply 159-63, 2005 May 31.
Article in English | MEDLINE | ID: mdl-15845298

ABSTRACT

This research on decision-making heuristics is similar to research on animal learning in at least two ways. First, optimality modeling has not proven to be very useful for either research area. Second, both of these research areas seek to find general principles (or heuristics) that are applicable to different species in different settings. However, the basic principles of classical and operant conditioning seem to be more uniform across species and situations, whereas decision-making heuristics can vary for different species and different situations, even for tasks with very similar characteristics.


Subject(s)
Algorithms , Decision Making , Learning , Models, Psychological , Animals , Humans