Abstract
We study a dynamic game in which players compete for a prize. In a waiting game with two-sided private information about strength levels, players choose fighting, fleeing, or waiting. Players earn a “deterrence value” on top of the prize if their opponent escapes without a battle. We show that this value is a key determinant of the type of equilibrium. For intermediate values, sorting takes place, with weaker players fleeing before others fight. Time then helps to reduce battles. In an experiment, we find support for the key theoretical predictions and document suboptimal predatory fighting.
I. Introduction
Following Maynard Smith's (1974) seminal contribution, competition for a prize is often modeled as the war of attrition. In this game, players choose the time at which they intend to flee. Time is costly, and players may differ in their opportunity costs. The player who waits the longest wins the prize, and both players pay a cost proportional to the time it takes for the losing player to flee. Maynard Smith refers to this type of interaction as a “display.” In a display, no physical contact takes place; if it does occur, it does not settle the battle or convey information about which player would win an escalated conflict.
The main contribution of our paper is that we develop and analyze a game in which at any moment, players can not only wait or flee; they also have the option to start a fight. In case of a fight, a battle ensues and the stronger player wins the prize while the losing player incurs a loss. This dynamic fight-or-flight game allows us to make sense of a much wider variety of competitions. It captures the essence of many types of interactions in which the timing of actions plays a crucial role, such as R&D races, litigation, the launch of political or advertising campaigns, and firm acquisitions. It also fits situations in the animal kingdom, where animals fight over territory or prey. In all these examples, players can “flee” (e.g., reduce R&D spending, settle), wait to see if the other gives in, or initiate a fight (e.g., sue the opponent, start a hostile takeover), forcing the other into a battle.
Our dynamic game helps to understand why in some situations players want to wait and see if the other flees without a battle, while in other circumstances both want to act as quickly as possible. To illustrate the former type of situation, consider two political candidates who may wait a long time before they officially announce that they are running for office. If the other flees without a battle, they avoid the costs of the campaign that would be required to win a fight. Male elephant seals that contest the right of exclusive access to a harem usually wait a couple of minutes to allow the other to flee without a bloody fight.
In other instances, players want to act as quickly as possible. A firm that wants to expand its market by acquiring a competitor should act quickly to prevent the prospective target from selling its valuable assets. Another possible interpretation is that, compared to letting the other escape, winning a fight sends a stronger signal about the player's strength to other players, thereby discouraging them from ever making a challenge. A firm that drives out another firm by force will deter potential future competitors more than if the other firm left voluntarily. In a lawless society without a state monopoly of violence, people may want to rob each other if they can. In an encounter, the stronger player prefers to act as quickly as possible to prevent the other from fleeing without losing his money.
Notice that neither type of example is well described by the war of attrition. In the first type of example, it may happen that players fight after a waiting period, which is not a possibility in the war of attrition. The war of attrition also does not capture the essence of the second type of interaction. In particular, it does not accommodate strong players deciding to fight in a split second.
In this paper, we analyze the fight-or-flight game theoretically and experimentally. Theoretically, we identify a key parameter, the deterrence value, that determines how the competition between two players will unfold. The deterrence value is the amount that a player earns on top of the prize if the other player manages to escape. Our theoretical analysis based on standard preferences yields two main novel insights. First, if the deterrence value is negative, all player types will rush and act in a split second. A negative deterrence value is illustrated by the takeover example, where the prospective target may sell its valuable assets while fleeing. If the deterrence value is positive, players prefer to avoid the costly fight and wait before they act. In the example where two political candidates engage in a battle for office, the costs of organizing a campaign represent a positive deterrence value.
The second insight is that if the deterrence value is positive but not too large, sorting will occur in the dynamic fight-or-flight game. That is, the weakest players will flee just before the end. Thus, the dynamic structure helps players to avoid costly fights, in comparison to a static version of the game that is stripped of its time element. These two results cannot be obtained in a standard war of attrition. In that game, players' waiting correlates positively with their strength, and rushing by all types is never observed in equilibrium. Moreover, the dynamic standard war of attrition does not help players to sort and avoid costly fights in comparison to the static version (Hörisch & Kirchkamp, 2010).
We also investigate what happens in a behavioral model in which players differ in their degree of risk aversion. This model yields two additional testable implications. First, it predicts that sorting will occur in a wider set of circumstances than in the standard model. Second, it predicts that the more risk-averse players flee more frequently before the end.
We test the predictions in an experiment in which we systematically vary the deterrence value and the dynamic or static nature of the game between treatments. Our experimental findings support some of the key features of the theory, at least in terms of its comparative statics. With a negative deterrence value, subjects quickly learn to decide in a split-second. With a positive deterrence value, subjects tend to wait much longer and indeed use time to sort. In agreement with the model of heterogeneous risk aversion, we find that endogenous timing reduces the likelihood of costly battles in a wider set of circumstances than predicted by standard theory. Subjects classified as more risk averse on the basis of an independent task are indeed the ones who tend to flee more often early in the game. Thus, while not all results are consistent with the point predictions of the model, in terms of comparative statics, behavior often moves in the expected direction.
An interesting finding that deviates from the predictions is that a sizable minority of subjects fight early when the deterrence value is positive. This is the case even after ample time to learn. This finding is in stark contrast with some behavioral findings in related dynamic games. For instance, Roth, Murnighan, and Schoumaker (1988) report that the deadline effect, a striking concentration of agreements in the final seconds of the game, is the most robust behavioral finding in a class of games designed to test axiomatic models of Nash bargaining. Roth and Ockenfels (2002) and Ockenfels and Roth (2006) identify substantial last-minute bidding in second-price auctions. They attribute this phenomenon of sniping to both strategic and naive considerations of the bidders. We discuss some potential explanations for the anomaly of early fighting in our contest game at the end of section IV.
One feature of our experimental design is that time is discrete but with very short time intervals. This makes it hard for subjects to precisely time their actions and could be one of the reasons behind the decrease in costly battles in the dynamic games. In a follow-up experiment, we make it easier for subjects to time their actions by making the time intervals longer. Consistent with the theoretical predictions, we no longer observe a decrease in battles compared to the static games when the deterrence value is negative. In other respects, the results closely resemble those of the original experiment.
Our paper contributes to the literature on dynamic games in which players compete for a prize. Several studies compare dynamic with static environments. Hörisch and Kirchkamp (2010) investigate how experimental subjects behave in static and dynamic versions of the war of attrition and some closely related games. Theoretically, the dynamic version of a war of attrition does not help players to sort, and indeed, they do not observe such a difference in their experiments.1 In an auction with symmetric interdependent valuations, Goeree and Offerman (2003a) likewise do not find that the efficiency of a dynamic English auction improves upon that of the static second-price auction. In contrast, Kirchkamp and Moldovanu (2004) investigate a setup where a bidder's value is determined by his own signal in combination with the signal of his right neighbor. In this setting, bidders can retrieve valuable information in a dynamic auction process. In an experiment, they find that the efficiency of the English auction is higher than in a second-price auction in which no such information can be retrieved, which accords with theory.2
II. Theory
A. Dynamic Fight-or-Flight Game
We first describe the dynamic version of the fight-or-flight game. In this section, we present a basic version of the game. In section IIC and online appendix B, we discuss several extensions.
Time is discrete, with a finite number of periods T. In each period t = 1, ..., T, as long as the game has not ended, the two players independently decide to wait, flee (R, for “retreat”), or fight (F). In the final period T, players can no longer wait and have to choose R or F. The game ends with at least one player choosing R or F, at which point the action set becomes null. At the start, each player is privately informed of her fighting ability a. It is common knowledge that a is independently drawn from a uniform distribution over the unit interval. A player's strategy lists, for every ability, the number of periods in which she chooses to wait and her choice of action if play reaches the period in which she wants to act. A player type's strategy is described as (t, X), where X ∈ {F, R}. This means that a player with this strategy will choose action X (fight or flee) in period t if the other player did not fight or flee earlier.
The game ends as soon as one of the players decides to fight or flee. The outcome can be a battle or an escape. A battle occurs if the player with the shorter waiting time chooses to fight or if both choose to fight at the same time. An escape occurs if the player with the shorter waiting time chooses to flee or if both choose to flee at the same time. If one of the players chooses to fight and the other chooses to flee at the same time, an escape occurs with probability q and a battle with probability 1 − q.
Payoffs.
In case of a battle, the player with the higher ability receives W (the prize), and the other earns L, where L < 0 < W. In case of an escape, the player who chose to flee earns 0 while the other earns W + D, the prize plus a deterrence payoff D. This deterrence value can be positive or negative. A positive deterrence value captures situations where fighting is costly, so that players prefer to get the prize without fighting for it. A negative deterrence value captures situations in which beating the other generates a higher value than when the other escapes. We restrict the analysis to D > −W, so that if the other escapes, this always gives a higher payoff (W + D > 0) than escaping oneself. As tie-breaking rules, we assume that if there is a battle between equally strong players, it is randomly determined (with equal probability) which player receives W and which player receives L. If both players decided to flee at the same time, it is randomly determined (with equal probability) who earns 0 and who earns W + D. Alternatively, players could be allowed to share the prize equally in case both flee. This would not affect the theoretical analysis if players are risk neutral.
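To make the rules above concrete, the following minimal Python sketch resolves a single match given two strategies of the form (act period, action). The function and variable names are ours and purely illustrative; the tie-breaking draws follow the description above rather than any implementation used in the experiment.

```python
import random

def resolve(strat1, strat2, a1, a2, W, L, D, q):
    """Resolve one match of the fight-or-flight game.

    strat_i = (t_i, action_i): wait until period t_i, then play 'F' (fight)
    or 'R' (flee). a_i in [0, 1] are the privately known abilities.
    Returns the pair of payoffs (player 1, player 2)."""
    (t1, x1), (t2, x2) = strat1, strat2
    escape_payoffs = lambda fleer: (0, W + D) if fleer == 1 else (W + D, 0)

    def battle():
        if a1 == a2:                       # tie: winner drawn at random
            return (W, L) if random.random() < 0.5 else (L, W)
        return (W, L) if a1 > a2 else (L, W)

    if t1 < t2:                            # player 1 acts first
        return battle() if x1 == 'F' else escape_payoffs(fleer=1)
    if t2 < t1:                            # player 2 acts first
        return battle() if x2 == 'F' else escape_payoffs(fleer=2)
    # simultaneous actions in the same period
    if x1 == 'F' and x2 == 'F':
        return battle()
    if x1 == 'R' and x2 == 'R':            # both flee: one of them gets the deterrence bonus
        return escape_payoffs(fleer=1) if random.random() < 0.5 else escape_payoffs(fleer=2)
    fleer = 1 if x1 == 'R' else 2          # one fights, one flees
    return escape_payoffs(fleer) if random.random() < q else battle()
```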
We assume that players maximize their expected utility and do not discount the future. In online appendix B, we analyze the case with discounting, but here our aim is to show how time per se affects the ability of players to sort themselves according to their strength. The case without discounting is also relevant for many cases, such as when the cost of waiting is small compared to the prize, the maximum duration of the game is short, or the consumption of the prize happens at a fixed point in time.3
B. Equilibrium
We look for pure-strategy Bayesian Nash equilibria. In this section, we derive equilibria under the assumption that players have threshold strategies, where types below a certain threshold flee and types above that threshold fight. Intuitively, stronger types have more to gain from fighting. We also assume that no type acts after the period in which the stronger type acts. In appendix A, we show that all equilibrium profiles satisfy these properties.
Negative deterrence value (D < 0).
For a negative deterrence value, the payoff of winning a battle exceeds that of allowing the other to escape. In this case, there is a unique equilibrium outcome in which all players fight or flee immediately. The very strong types will want to fight, and the very weak types will want to flee. If the weakest types were to flee after the first period, the strongest types would have an incentive to fight before that, so that the opponent cannot escape. But then the weakest types would deviate to fleeing earlier. This implies that the strongest types fight immediately, and the weakest types flee immediately. Any other type will then act immediately as well. Acting later is costly: it does not reduce battles with stronger types (who fight immediately anyway), and it gives weaker types the possibility of escaping.
The threshold ability a* below which types flee is increasing in the probability of an escape q. As q increases, fighting against weaker types becomes less attractive, since they become more likely to escape. More types will then flee in equilibrium. The effect of D on a* depends on the value of q. For q < 1/2, an increase in D has a larger impact on the fleeing payoff than on the fighting payoff. This means fleeing becomes more attractive, and more types will flee in equilibrium. For q > 1/2, the reverse is true.
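As an illustration of where the threshold comes from, the indifference condition below equates the expected payoffs of fighting and fleeing at t = 1 for the marginal type a*, using the payoff rules stated above. This is our own sketch; the formal characterization is in appendix A.

```latex
% Sketch: indifference condition for the rushing threshold a* at t = 1,
% when opponent types below a* flee and opponent types above a* fight.
\begin{align*}
U_{\text{fight}}(a^*) &= a^*\bigl[q(W+D) + (1-q)W\bigr] + (1-a^*)L,\\
U_{\text{flee}}(a^*)  &= a^*\,\tfrac{1}{2}(W+D) + (1-a^*)(1-q)L,\\
U_{\text{fight}}(a^*) = U_{\text{flee}}(a^*)
  \;&\Longleftrightarrow\;
  a^*\Bigl[\tfrac{W}{2} + D\bigl(q - \tfrac{1}{2}\bigr)\Bigr] = -(1-a^*)\,qL .
\end{align*}
```

Differentiating this condition reproduces the comparative statics just described: the marginal type's net gain from fighting falls in q, and the effect of D switches sign at q = 1/2.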
Positive deterrence value (D > 0).
With a positive deterrence value, players are better off when the other manages to escape than when they win a battle. In this case, all the action will be concentrated in the final two periods of the game. Intuitively, sufficiently strong players will wait until the last period to give other players the option to escape. Fighting should take place only in the last period. Weaker types will then also prefer to wait until at least the penultimate period, since waiting until then gives opponents the option to escape without the risk of ending up in a fight.
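To make this intuition precise, the following sketch (our own, not a verbatim statement from appendix A) compares fighting in some period t < T with the deviation of waiting and fighting in period T, holding the opponent's strategy fixed; with D > 0 and no discounting, the deviation is weakly better against every opponent type, so fighting early is weakly dominated.

```latex
% Sketch: with D > 0 and no discounting, fighting in period t < T is weakly
% dominated by waiting and fighting in period T. Case comparison against a
% fixed opponent strategy:
\begin{align*}
&\text{opponent acts before period } t: && \text{the deviation does not change the outcome},\\
&\text{opponent would flee in period } t \text{ or later}: && \text{waiting yields } W + D,\ \text{fighting now yields at most } W + qD \le W + D,\\
&\text{opponent would fight in period } t \text{ or later}: && \text{a battle occurs in either case, with the same expected payoff.}
\end{align*}
```

Because no type fights before the last period, fleeing before the penultimate period in turn only gives up the chance that the opponent flees first, which is why all activity concentrates in periods T − 1 and T.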
Thus there can be three types of equilibrium outcomes. If D < 0, there is a rushing equilibrium in which all types immediately fight or flee. For intermediate positive values of D, there is a timing equilibrium in which some types wait until the penultimate period and then flee, while all others wait until the final period and then fight or flee. For high values of D, there is a waiting equilibrium in which all types wait until the last period and then fight or flee. While we derived these equilibria under the assumption that players have threshold strategies, in appendix A, we show that no other equilibria exist. The equilibrium outcome is generically unique, except at the boundary values of D that separate the three cases.
Proposition 1 (Equilibrium).
- (i) If D < 0, the unique equilibrium outcome is a rushing equilibrium in which all players act immediately: players with abilities below a threshold a* flee at t = 1, and players with abilities above a* fight at t = 1.
- (ii) If D is positive but below a threshold value, the unique equilibrium outcome is a timing equilibrium in which a fraction of the weakest types flee in period T − 1, a fraction of intermediate types flee in period T, and all types above a cutoff fight in period T.
- (iii) If D exceeds that threshold value, the unique equilibrium outcome is a waiting equilibrium in which types below a cutoff flee in period T, types above it fight in period T, and no player acts in any period t < T.
All proofs are in appendix A.
(a) The solid dots indicate the experimentally implemented parameter values. (b) Rushing occurs to the left of the vertical axis, timing occurs between the vertical axis and the shaded area, and waiting occurs in the shaded area. The dashed lines show the effect of a decrease in the escape probability q; the dark shaded area shows the waiting equilibrium for the lower value of q.
To shed light on whether the dynamic time element of the fight-or-flight game decreases costly battles, we use a static version of the game as benchmark. In the static game, players choose simultaneously between fight and flee, and the same payoffs result as when players reach the final period of the dynamic game. The Bayesian Nash equilibrium of the static game coincides with the equilibrium of the dynamic game for parameters where all players act in the same period (that is, either case i or case iii described in proposition 1).
An interesting feature of the timing equilibrium of the dynamic game is that sorting takes place over time, resulting in fewer battles compared to what happens in the static game. In the dynamic game, the strongest types remain in the game until the last period, while some weaker types flee before any battle may take place. Moreover, a smaller fraction of types will fight; fighting becomes less attractive with fewer relatively weak players remaining.
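The following Monte Carlo sketch illustrates this effect with purely hypothetical threshold values (C1 and C2 for the dynamic timing profile, C_STATIC for the static game, and an escape probability Q); these numbers are ours for illustration and are not the equilibrium thresholds derived in appendix A.

```python
import random

# Hypothetical illustrative parameters (not the equilibrium values from the paper).
C1, C2, C_STATIC, Q = 0.3, 0.6, 0.5, 0.5
N = 100_000

def static_outcome(a1, a2):
    """Both choose simultaneously: flee if ability < C_STATIC, else fight."""
    f1, f2 = a1 >= C_STATIC, a2 >= C_STATIC
    if f1 and f2:
        return "battle"
    if not f1 and not f2:
        return "escape"
    return "escape" if random.random() < Q else "battle"   # one fights, one flees

def timing_outcome(a1, a2):
    """Timing profile: a < C1 flees at T-1, C1 <= a < C2 flees at T, a >= C2 fights at T."""
    if min(a1, a2) < C1:
        return "escape"                                    # the earliest mover flees at T-1
    f1, f2 = a1 >= C2, a2 >= C2
    if f1 and f2:
        return "battle"
    if not f1 and not f2:
        return "escape"
    return "escape" if random.random() < Q else "battle"

def battle_frequency(outcome_fn):
    battles = sum(outcome_fn(random.random(), random.random()) == "battle" for _ in range(N))
    return battles / N

print("battle frequency, static :", battle_frequency(static_outcome))
print("battle frequency, dynamic:", battle_frequency(timing_outcome))
# Under these threshold profiles the fleeing player is always the weaker one,
# so the weaker player's escape rate is simply one minus the battle frequency.
```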
Proposition 2 (Battles and Sorting). Compared to a static (simultaneous-move) version of the game:
- (i) The frequency of battles is reduced in case of a timing equilibrium and the same in case of a rushing or waiting equilibrium.
- (ii) The rate at which the weaker player in a pair manages to escape is increased in case of a timing equilibrium and the same in case of a rushing or waiting equilibrium.
C. Extensions
Heterogeneous risk aversion.
A surprising feature of the analysis with a homogeneous population is that the set of deterrence values for which the timing equilibrium materializes does not depend on players' risk aversion. This result changes when the population is heterogeneous in the degree of risk aversion. Intuitively, players who are relatively averse to risks will want to flee earlier. Indeed, a population that is heterogeneous in the degree of risk aversion can sustain a timing equilibrium for a larger set of deterrence values. We show this in a simple framework with two levels of risk aversion and outline the two main strategic features of this model.
Suppose that a fraction of the population is relatively risk averse and the remaining fraction is less risk averse. A player's degree of risk aversion is private information, but all players know the distribution. Consider the case where the fraction of more risk-averse players is very small. In that case, the threshold levels derived above under homogeneous risk aversion are hardly affected for the less risk-averse types. Fix an equilibrium with a deterrence value for which all the less risk-averse types wait until period T. Even then, sufficiently risk-averse weak types prefer to flee in period T − 1, so a timing equilibrium can be sustained for deterrence values at which a homogeneous population would produce a waiting equilibrium.
Another feature of this model is that the more risk-averse types will be the ones who flee more frequently before the end. To see this, note that for the ability level at which the less risk-averse type is indifferent between fleeing in period T − 1 and fleeing in period T, the more risk-averse type still strictly prefers to flee in period T − 1. The reason is that the expected payoff of fleeing in period T − 1 is not affected by the degree of risk aversion (since it involves no negative payoffs), while the expected payoff of fleeing in period T decreases in a player's risk aversion (since the negative payoff when a battle is lost weighs more heavily). In the experiment, we will test these two implications of the model with heterogeneous risk aversion.
Other extensions.
We also considered some natural extensions of the model. Here we describe the main qualitative features of these extensions. In appendix B, we provide further details of these extensions, as well as discussion of the pros and cons of discrete versus continuous modeling in waiting games.
So far we simply assumed that the stronger player always wins a battle. A natural possibility is that stronger types are more likely to win but do not win with certainty. When relative strength correlates sufficiently strongly with winning a battle, the results are qualitatively the same. That is, with a positive deterrence value, all the action will be concentrated in the final two periods. The strongest types still want to fight in the final period, while no type wants to flee before the penultimate period. Likewise, with a negative deterrence value, all types will still act immediately. When the link between relative strength and winning a battle becomes weak, other types of equilibria exist. In the extreme case, where each type has an almost equal chance of winning a battle against any other type, there can be equilibria where all types prefer to fight, possibly at different periods. There can also be an equilibrium in which all types prefer to flee in the last period.
Another natural extension is discounting of future payoffs. Provided that the discount factor is sufficiently close to 1, our main theoretical findings remain qualitatively similar. That is, we find a rushing equilibrium when the deterrence value is negative, a timing equilibrium when it is positive but not too large, and a waiting equilibrium when it is large. As in the timing equilibrium without discounting, all the action happens in the penultimate and last periods. The main difference with the model without discounting is that the thresholds now also depend on the discount factor. When discounting is important, the comparison between the static and dynamic case becomes less clear-cut in terms of welfare: the higher degree of sorting comes at the cost of waiting longer.
A final variant that yields somewhat different predictions is the one where players face a known cost for time. Here, it may happen that weak players decide to drop out earlier than the penultimate period. In a recent paper, Song and Houser (2021) study the interesting case of costly waiting in detail.
In the experiment, we focus on the variant where time is not costly for two reasons. First, it allows us to investigate in a meaningful way how the dynamic game helps players avoid costly battles compared to the static game where time plays no role. Second, we think that it is a stronger result if players use time as a sorting device when time is not costly.
III. Experimental Design and Procedures
A. Design
Subjects participated in a laboratory experiment in which they played the fight-or-flight game. In all treatments, the value of winning a battle (W), the value of losing a battle (L), and the probability of an escape when one player decided to fight and the other decided to flee at the same time (q) were held constant at commonly known levels. Each subject played the game for forty rounds, with random rematching after every round within a matching group of eight subjects. At the start of each round, the subjects were informed of their fighting ability for that round, an integer drawn from a fixed range. They knew that each number was equally likely, that each subject faced the same distribution, and that draws were independent across subjects and rounds. At the end of a round, each subject was informed of the outcome, the paired subject's fighting ability, and the resulting payoffs.
We implemented two treatment variations. The first treatment variable was the deterrence value D, which was either −6, 6, or 12. The second concerned the dynamic or static nature of the fight-or-flight game. This gives a 3 × 2 design. Every subject participated in only one of the treatments. In total, 360 subjects participated, with seven or eight independent matching groups per treatment. Table 1 presents an overview.
Table 1. Overview of Treatments

| Treatment Version | Deterrence Value (D) | N Subjects | N Matching Groups |
|---|---|---|---|
| Dynamic | −6 | 64 | 8 |
| Dynamic | 6 | 56 | 7 |
| Dynamic | 12 | 64 | 8 |
| Static | −6 | 56 | 7 |
| Static | 6 | 64 | 8 |
| Static | 12 | 56 | 7 |
In the dynamic fight-or-flight game, a 5-second countdown started after all subjects in the laboratory had indicated that they were ready to start. This ensured that subjects knew exactly when the game would start. During the game itself, a clock counted down from 10 seconds to 0. The program divided the 10 seconds into 50 periods of 200 milliseconds each. Subjects implemented their strategies in real time. For instance, a subject could decide to wait for 5 seconds (i.e., for the first 25 periods) and then choose to fight, which would determine the outcome of the game (unless the other subject had already terminated the game earlier). This way she would implement the strategy of waiting for 25 periods and then fighting. If subjects let the time run down to 0, they entered the endgame, in which they simultaneously decided between fight and flee (with no time constraints, as they decided simultaneously anyway).
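As a small illustration of this discretization (our own sketch, not the experimental software), a decision made x milliseconds into the 10-second window can be mapped to a period as follows, under the assumption that period 1 covers the first 200 ms.

```python
def period_of_click(elapsed_ms: float, period_ms: int = 200, n_periods: int = 50) -> int:
    """Map a click time within the 10-second window to a 1-indexed period."""
    period = int(elapsed_ms // period_ms) + 1
    return min(max(period, 1), n_periods)

# Example: a decision after waiting 5 seconds (25 full periods) falls in period 26.
assert period_of_click(5000) == 26
```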
Our dynamic game has 50 periods, more than the minimum required to test the theoretical predictions of the model, and short time intervals of 200 ms. Our goal was to have a design that is closer to the examples that motivated our research. A disadvantage is that rational subjects might find it hard to exactly implement equilibrium strategies in our setup. A follow-up experiment with longer time intervals addresses this concern (see section IVD).
The static version of the game abstracted from the time element and only consisted of the endgame of the dynamic version. That is, in this version of the game, subjects were immediately put in the same position as the players of the dynamic game who had both decided to wait until the end of the game. So in the static game, both subjects simultaneously chose between fight and flee.
After the main part, we obtained additional measurements. We assessed subjects' risk aversion using the method of Gächter, Johnson, and Herrmann (2007). A subject chooses whether to accept or reject six different lotteries. In each lottery, the winning amount is 6 euros, and the losing amount varies across lotteries from 2 to 7 euros; the winning and the losing amounts are equally likely. If a subject rejects a lottery, she receives 0 euros for sure. At the end of the experiment, one of the six lotteries is selected at random and played out for actual payment. The number of rejected lotteries is our measure of a subject's degree of risk aversion.
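As a worked illustration (our own arithmetic based on the description above), the expected value of accepting the lottery with losing amount x is

```latex
% Win 6 euros or lose x euros, each with probability 1/2:
\mathbb{E}[\text{lottery}_x] = \tfrac{1}{2}\cdot 6 + \tfrac{1}{2}\cdot(-x) = \tfrac{6 - x}{2}, \qquad x \in \{2, \dots, 7\},
```

so a risk-neutral subject would accept the lotteries with losses up to 5 euros, be indifferent at a loss of 6, and reject only the lottery with a loss of 7; a larger number of rejections therefore indicates greater risk aversion.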
We also measured physical strength. We asked subjects to press a hand dynamometer as hard as they could, following the procedure of Sell et al. (2009). This measurement was obtained twice, and the best attempt was rewarded with 5 eurocents per kilo pushed. Finally, we obtained some self-reported measurements on social dominance and prestige (from Cheng, Tracy, & Henrich, 2010), perceived masculinity, sex, and age.4
B. Procedures
The experiment was computerized and run at CREED (University of Amsterdam). The instructions are in appendix E. Subjects read the instructions at their own pace. They could continue only after correctly answering test questions at the end of the instructions. To ease understanding, we used nonneutral labels such as “fight” and “escape.” Subjects were informed that there would be two parts, receiving new instructions at the start of each part.
During the experiment, subjects earned points, where 1 point was worth €0.70 ($0.84). To avoid a net loss at the end of the experiment, they received a starting capital of 21 points, and any profits or losses were added to or subtracted from this. At the end of the experiment, one round of the main part was randomly selected for payment. Total earnings averaged €19.09, ranging from €5.30 to €38.20.5 A session took approximately 65 to 75 minutes.6
IV. Results
In sections IVA and IVB, we consider the testable predictions following from propositions 1 and 2, respectively. Then, in section IVC, we turn to decisions at the individual level. All statistical tests comparing treatment differences use matching group averages as the independent unit of observation, unless indicated otherwise.
A. Timing of Actions
Lines are moving averages of three rounds.
Period T indicates the endgame. Left panels are for the first twenty rounds and right panels for the final twenty rounds. Only observations where a player made a decision to fight or flee are included in the graph; observations where a player was waiting when the other moved are omitted.
When the deterrence value is negative, players act immediately. When the deterrence value is positive, players are more likely to wait until the end of the game and they learn to wait longer.
In contrast to the theoretical predictions, some subjects move at the very beginning of the game when the deterrence value is positive. This fraction decreases over time, but even in the final twenty rounds (the right-hand panels of figure 3), we do observe such behavior. This behavior is not in line with the timing equilibrium or waiting equilibrium. We return to this anomaly when we discuss individual behavior (section IVC). The comparative static results of increasing are in line with the theoretical predictions.
B. Frequency of Battles and Sorting
Error bars indicate 95% confidence intervals, based on matching groups as the independent unit of observation.
The reduction of battles for D = 6 is in line with the comparative static prediction following from proposition 2. For D = 6, the unique equilibrium outcome in the dynamic game is a timing equilibrium, resulting in fewer battles than in the equilibrium of the static game. Although we observe deviations from the timing equilibrium (in particular, some subjects move at the beginning of the game), we do find that the number of battles is reduced compared to the static case. The observed lower frequency of battles for D = 12 is not expected if players are homogeneous in their risk aversion, but it is consistent with the comparative static prediction of our version of the model in which players differ in their degree of risk aversion.8 In contrast to the theoretical predictions, we also observe a decrease in battles when the deterrence value is negative. This result is, however, partly mechanical; even if all subjects wanted to act immediately, some subjects might be a fraction of a second slower than others, resulting in more escapes.9
It is also possible that random noise reduces the frequency of battles in the dynamic game. For instance, if players in the dynamic game choose fight, flee, and wait in each period with equal probabilities while players in the static game choose between flee and fight with equal probabilities, fewer battles will be observed in the dynamic game than in the static game.10 However, as we will illustrate in section IVC, the behavior of our subjects is very remote from this random benchmark. Our subjects respond in a sensible way to their private strength parameters. Moreover, in agreement with theory but in contrast to the random benchmark, we find that the dynamic nature matters most for reducing the frequency of battles when the deterrence value is positive.
Also following proposition 2, we expect that players sort themselves according to their fighting ability in case of a timing equilibrium. The strongest players should wait longer than weaker players, giving weaker players the opportunity to escape. Hence, weaker players should manage to escape more frequently in the dynamic games than in the static games if the deterrence value is positive. Our results are in line with this prediction. The right panel of figure 4 shows how often the weaker subject in a pair escapes. Subjects sort on fighting ability more often in the dynamic than in the static game, and the increase is larger for dynamic games with a positive deterrence value. For D = −6, the weaker player escapes in 12% of the matches in the static game and 26% of the matches in the dynamic game. For D = 6 (D = 12), the weaker player escapes in 15% (18%) of the matches in the static game and 38% (45%) of the matches in the dynamic game. The difference-in-difference analysis reported in table A1 in appendix C shows that the larger increase for positive deterrence values is statistically significant.11
There are fewer battles in the dynamic game than in the static game. The dynamic version of the game helps players to sort themselves according to their fighting ability, and this effect is stronger when the deterrence value is positive.
C. Individual Behavior
The category “wait” refers to subjects who made it to the endgame, and “other moves” refers to subjects who did not make a move before the endgame while the other subject did.
In the appendix, we provide further details on individual strategies. In appendix D, we estimate individual cutoff strategies. We find that most behavior is consistent with the use of cutoff strategies: around 90% of all decisions are captured by individual cutoff strategies. There is substantial heterogeneity in the type of cutoff strategies that individuals employ. Although the estimated cutoffs organize the data very well, for a substantial number of subjects, the estimated cutoffs are remote from the theoretical prediction.
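To fix ideas, the following sketch shows one simple way to estimate such a cutoff from a subject's choices, by picking the cutoff that minimizes the number of misclassified decisions; this is our own illustration with hypothetical data, not the exact procedure of appendix D.

```python
def estimate_cutoff(observations):
    """observations: list of (ability, action) with action in {'flee', 'fight'}.

    Returns the cutoff c in [0, 1] minimizing the number of decisions that
    violate 'flee if ability < c, fight otherwise', together with the share
    of decisions consistent with that cutoff."""
    candidates = sorted({0.0, 1.0} | {a for a, _ in observations})
    best_c, best_errors = 0.0, len(observations) + 1
    for c in candidates:
        errors = sum((a < c and x == 'fight') or (a >= c and x == 'flee')
                     for a, x in observations)
        if errors < best_errors:
            best_c, best_errors = c, errors
    consistency = 1 - best_errors / len(observations)
    return best_c, consistency

# Example with hypothetical data: a subject who flees at low abilities only.
data = [(0.1, 'flee'), (0.3, 'flee'), (0.45, 'fight'), (0.8, 'fight'), (0.35, 'flee')]
print(estimate_cutoff(data))   # -> (0.45, 1.0)
```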
In section IVB, we reported that sorting was observed not only for D = 6 but also for D = 12. Although behavior in both treatments does not exactly follow the predictions from the timing equilibrium (notably, some subjects move early on in the game), the finding that subjects sort when D = 12 is consistent with the idea that heterogeneous risk aversion enlarges the set of environments for which the timing equilibrium applies. A more direct implication of heterogeneous risk aversion is that the more risk-averse players should flee early more often. Table 2 presents panel data probit regressions of how the probability of choosing to flee before the endgame depends on a subject's level of risk aversion, together with some controls. In agreement with the model of heterogeneous risk aversion, more risk-averse subjects are more likely to flee before the endgame when D = −6 and when D = 12, and the effect survives when we combine all three treatments.12
Table 2. Fleeing before Endgame

| | (1) D = −6 | (2) D = 6 | (3) D = 12 | (4) All |
|---|---|---|---|---|
| Risk aversion | 0.037*** (0.007) | −0.001 (0.018) | 0.041** (0.018) | 0.026*** (0.009) |
| Female | −0.025 (0.031) | −0.012 (0.052) | 0.003 (0.060) | −0.008 (0.030) |
| Dominance | 0.007 (0.011) | −0.028 (0.023) | −0.001 (0.015) | −0.006 (0.010) |
| Physical strength | −0.015 (0.019) | −0.004 (0.020) | 0.003 (0.020) | −0.002 (0.011) |
| Fighting ability | −0.980*** (0.003) | −0.881*** (0.034) | −0.889*** (0.016) | −0.935*** (0.013) |
| Round | 0.000 (0.001) | −0.000 (0.001) | −0.000 (0.001) | −0.000 (0.000) |
| Treatment D = 6 | | | | −0.127*** (0.030) |
| Treatment D = 12 | | | | −0.112*** (0.022) |
| Observations | 2,520 | 2,080 | 2,520 | 7,120 |
Panel data probit regressions, with random effects at the subject level. Coefficients are average marginal effects. The dependent variable is a dummy indicating whether the player decided to flee before the endgame. Risk aversion is measured as the number of rejected lotteries. Dominance and physical strength are normalized (mean 0 and SD 1). Fighting ability takes on values between 0 and 1. Standard errors (clustered at the matching group level) in parentheses. Additional specifications with fewer or more controls are reported in table A4 in appendix C. ***p < 0.01, **p < 0.05, and *p < 0.1.
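For readers who want to reproduce the flavor of these regressions, the sketch below estimates a pooled probit with standard errors clustered at the matching-group level and reports average marginal effects on synthetic data. It is a simplified stand-in: the paper's specifications are random-effects panel probits, and all variable names and coefficients here are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "matching_group": rng.integers(0, 8, n),          # cluster identifier
    "risk_aversion": rng.integers(0, 7, n),           # number of rejected lotteries
    "fighting_ability": rng.uniform(0, 1, n),
})
# Synthetic outcome: weaker and more risk-averse players flee before the endgame more often.
latent = 1.0 - 2.5 * df["fighting_ability"] + 0.15 * df["risk_aversion"]
df["flee_early"] = (latent + rng.normal(size=n) > 0).astype(int)

X = sm.add_constant(df[["risk_aversion", "fighting_ability"]])
probit = sm.Probit(df["flee_early"], X).fit(
    cov_type="cluster", cov_kwds={"groups": df["matching_group"]}, disp=False
)
print(probit.get_margeff().summary())                  # average marginal effects
```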
An anomaly is the fighting behavior early on in the game when there are benefits to letting the other escape, that is, when D > 0. In this case, fighting early is weakly dominated. Given the observed actions in the experiment, the losses from fighting early are substantial. Consider the strongest possible type, who wins every fight. This type would earn 14% higher expected payoffs by waiting to fight in the endgame if D = 6 and 42% higher expected payoffs if D = 12. Note that fighting early is even more costly for weaker types. One possible reason why we observe this anomalous behavior is that subjects may need some time to learn. As figure 3 shows, we do indeed observe less of this behavior in the final twenty rounds than in the first twenty rounds. Another, more psychological, explanation for fighting early on in the game might be a preference for social dominance. The evidence does not support this. Table 3 shows that the survey measure of social dominance is not a predictor of fighting early. We also do not find an association with physical strength, but we do find that women are more likely to fight early than men.13
Table 3. Fighting in the First Second

Columns (1)–(3) use all rounds; columns (4)–(6) use the final twenty rounds.

| | (1) D = 6 | (2) D = 12 | (3) All | (4) D = 6 | (5) D = 12 | (6) All |
|---|---|---|---|---|---|---|
| Risk aversion | −0.030** (0.015) | −0.013 (0.018) | −0.020** (0.010) | −0.020 (0.017) | −0.015 (0.013) | −0.018* (0.011) |
| Female | 0.130 (0.100) | 0.029 (0.058) | 0.066 (0.050) | 0.161 (0.111) | 0.046 (0.049) | 0.097** (0.048) |
| Dominance | 0.002 (0.019) | 0.018 (0.021) | 0.015 (0.016) | −0.025 (0.021) | 0.011 (0.014) | 0.006 (0.011) |
| Physical strength | 0.009 (0.054) | 0.001 (0.036) | −0.001 (0.029) | 0.025 (0.053) | 0.022 (0.027) | 0.022 (0.025) |
| Fighting ability | 0.415*** (0.067) | 0.368*** (0.060) | 0.389*** (0.044) | 0.272*** (0.072) | 0.218** (0.090) | 0.242*** (0.048) |
| Round | −0.004*** (0.001) | −0.005*** (0.001) | −0.005*** (0.001) | −0.002 (0.001) | −0.003* (0.001) | −0.002*** (0.001) |
| Treatment D = 12 | | | −0.017 (0.042) | | | −0.026 (0.036) |
| Observations | 2,080 | 2,520 | 4,600 | 1,040 | 1,260 | 2,300 |

Panel data probit regressions with random effects at the subject level. Coefficients are average marginal effects. The dependent variable is a dummy indicating whether the player decided to fight in the first second. Risk aversion is measured as the number of rejected lotteries. Dominance and physical strength are normalized (mean 0 and SD 1). Fighting ability takes on values between 0 and 1. Standard errors (clustered at the matching group level) in parentheses. Additional specifications with fewer or more controls are reported in table A5 in appendix C. ***p < 0.01, **p < 0.05, and *p < 0.1.
It may be that some of our subjects start playing the game with a misguided behavioral rule that in contests, it generally pays to strike first. Myerson (1991) proposes that apparently suboptimal behavior can sometimes be understood by assuming that observed behavior is optimal in a related but more familiar environment, which he calls a “salient perturbation” (see Myerson, 1991; Samuelson, 2001; Jehiel, 2005). Alternatively, it could be that intuition favors fighting behavior. According to the social heuristics hypothesis (Rand, Greene, & Nowak, 2012; Rand et al., 2014) applied to our setting, if fighting is typically advantageous, it could become the intuitive response. Note that subjects who fight early on have limited opportunities to learn, since they never experience the benefits of waiting. This could explain why they do not converge fully to waiting until the end of the game.
The fact that we observe an approximately equal frequency of early battles when D = 6 as when D = 12 suggests that this behavior is not due to a separate utility component reflecting (for instance) a desire to control the outcome or a joy of winning. If people have a preference to control the outcome, we would expect fewer early battles when doing so becomes more costly, as is the case when D = 12.14 Still, when play has not yet converged to equilibrium, we cannot exclude that early fighting is encouraged by players who experience a joy of winning when they beat the other in a battle. In our follow-up experiment reported in section IVD, we include some measures of joy of winning to get direct evidence on this possibility.15
A sizable minority of players acts immediately when the deterrence value is positive. This behavior decreases with experience.
D. Experiment 2
In the dynamic treatments, a period lasted 200 ms. Such short periods can make it hard for participants to precisely time their actions. This could potentially explain why even for D = −6, we observe fewer battles and more escapes in the dynamic game compared to the static game. We address this in a follow-up experiment.16
Experimental design and procedures.
The design of experiment 2 closely follows that of the first experiment. We collected data for all dynamic treatments, using periods of 5 seconds instead of 200 ms and four periods per round (with forty rounds in total). This gives subjects more scope to time their actions. We also added two items to the survey, measuring subjects' joy of winning. The first (incentivized) measure is taken from Sheremeta (2010, 2018). In this task, subjects can bid to win a contest with a prize of 0 points, so that any positive bid indicates a willingness to pay purely for winning. For the second (nonincentivized) measure, subjects indicated how strongly they agreed with the statement: “I enjoy winning an amount by competing against another person more than I enjoy receiving that same amount without having to compete for it” (rated on a 7-point Likert scale).
The experiment was run online. Participants were recruited from the same subject pool as for the first experiment (excluding subjects who had already participated). As in the first experiment, we included test questions at the end of the instructions. We showed the correct answers after two failed attempts on a question. We did this to prevent subjects from logging out if they had to wait too long. We kept track of the mistakes they made so that we can control for this in the analysis.
In total, 168 subjects participated, with seven matching groups of eight subjects in each of the three dynamic treatments (D = −6, D = 6, and D = 12).17 Sessions lasted around 60 minutes in total, and earnings varied between €4.20 and €35.70 (€19.00 on average).
Results.
Left: First twenty rounds. Right: Final twenty rounds. Only observations where a player made a decision to fight or flee are included (omitting observations where a player was waiting when the other moved).
Figure 4 plots the frequency of battles and escapes by the weaker player in both experiments. The results are very comparable to those of experiment 1. In particular, for positive values of D, the dynamic game leads to a reduction in battles and an increase in escapes by the weaker player compared to the static version. The difference between the static and dynamic game is significant in all those cases (Mann-Whitney tests). The main difference with experiment 1 is that for a negative deterrence value, there is no reduction in battles or increase in escapes compared to the static games (Mann-Whitney tests, not significant for either battles or escapes). This supports the idea that in experiment 1, the decrease in battles and increase in escapes were driven by coordination failures: subjects may have attempted to fight immediately but were not always able to precisely time their action.18
In experiment 2, we again observe anomalous early fighting if the deterrence value is positive. The two measures of joy of winning do not explain this early fighting, while the number of mistakes in the test questions and the social dominance score do explain some of the anomalous behavior (see table A7 in appendix C). In experiment 2, we do not replicate the finding that risk aversion correlates with fleeing before the endgame (see table A8 in appendix C).19
V. Conclusion
In this paper, we present a dynamic fight-or-flight game that makes sense of a large range of conflicts observed in practice. We highlight the crucial role of the deterrence value, the payoff that players receive on top of the prize when the other player successfully escapes. If it is negative, players will act in a split second. When it is positive, players will be patient and try to make the other player flee. An interesting feature of the analysis is that if the deterrence value is positive but not too large, sorting will occur. That is, the weakest players will flee just before the end and thereby avoid costly battles. Thus, this paper clarifies how time can help people reach better outcomes in dynamic games, even when time is not costly. The important role of the deterrence value is confirmed in our experiments. Compared to a static version of the game, players are better able to avoid costly battles.
In the experiment, we find support for a behavioral version of the model that allows for heterogeneous risk aversion. In agreement with this model, sorting occurs for a wider range of situations than predicted by the model with standard preferences. In addition, subjects who appear to be more risk averse in an independent task tend to be the ones who more frequently flee early, although we do not replicate this in the follow-up experiment. We also observe an interesting anomaly. A fraction of the players choose to fight early even in situations where the strategic incentive is to be patient. Our conjecture is that some subjects come to the interaction with a homegrown notion that it generally pays off to strike early in contests. Over time, this costly behavior diminishes but does not disappear.
We think that our setup provides a lower bound on the amount of sorting that can be expected in practice. In our game, players manage to sort even though they do not receive any sensory input about the ability of the opponent. Particularly when there is a strategic incentive to wait, sensory cues before or during the contest may help players to avoid costly fights. In an actual display, body odor or a high-pitched voice may reveal fear and help identify the weaker player (Mujica-Parodi et al., 2009; Sobin & Alpert, 1999). A dominant performance in a television show by a candidate running for presidential office may convince a weaker opponent that it is better to flee early. In the future, artificial intelligence may further help players to agree on how they are ranked in terms of ability before they engage in a costly battle. Relevant information about the opponent's ability will also affect players' decisions when the deterrence value is negative. However, in such situations, a positive frequency of battles cannot be avoided. Even when information about the opponent helps players to perfectly forecast who will win the fight, the stronger player will still want to catch the weaker player in a battle. We think that extending the analysis in this direction is an interesting avenue for future research.
Notes
There is a large literature on static contest games. Carrillo and Palfrey (2009) study a contest game that is quite close to our static benchmark. They find that subjects compromise more often than in equilibrium, and they discuss some explanations based on cognitive limitations. De Dreu et al. (2016) investigate a game in which a group of attackers competes with a group of defenders. They find that in-group defense is stronger and better coordinated than out-group aggression. Oprea, Henwood, and Friedman (2011) show how the matching protocol affects outcomes in continuous time Hawk-Dove games. Dechenaux, Kovenock, and Sheremeta (2015) provide a survey of the experimental literature on contest games.
The war of attrition has been applied to various settings, including versions with private information (Fudenberg & Tirole, 1986; Ponsati & Sákovics, 1995) and applications to public good provision (Bliss & Nalebuff, 1984; Weesie, 1993). Oprea, Wilson, and Zillante (2013) experimentally study war of attrition games with two-sided private information (as in Fudenberg & Tirole, 1986) and observe behavior close to theoretical predictions. More generally, the study of dynamic games reveals novel insights that significantly surpass what we know from the study of static games. Recent contributions include Potters, Sefton, and Vesterlund (2005), Levin and Peck (2008), Ivanov, Levin, and Peck (2009), Kolb (2015), and Agranov and Elliott (2017). The recent experimental literature on continuous time experiments shows that outcomes in continuous time may substantially differ from outcomes in discrete time (Friedman & Oprea, 2012; Oprea, Charness, & Friedman, 2014; Bigoni et al., 2015; Calford & Oprea, 2017).
By design, discounting also cannot play a role in the experiment.
Perceived masculinity is measured by the answer to the question: “On a scale from 1 (very feminine) to 7 (very masculine), how would you describe yourself?”
The payment subjects received consisted of the starting capital and their earnings in the fight-or-flight game, the lottery task, and the physical strength task.
In addition to the forty decision rounds (which lasted around 20 minutes), subjects spent time on the instructions and test questions (25 minutes), the lottery task, questionnaire and physical strength task (15 minutes), and payment of subjects (10 minutes).
Figure A1 in appendix C shows the distribution of actions by 200 ms periods.
As for D = 6, we also observe deviations from a timing equilibrium when D = 12, as a number of subjects move early in the game. We discuss these deviations in more detail in section IVC.
Of the 15 percentage point difference in battles between static and dynamic games when D = −6, 6 percentage points can be attributed to escapes that occur just because the subject who wanted to fight is a fraction slower than the subject who wanted to flee. The remaining 9 percentage points can be attributed to more subjects fighting in the static games.
We thank a referee for this insight.
Figure A2 in appendix C shows decision times for weak and strong players separately. It confirms the comparative static prediction that stronger subjects wait longer than weaker subjects if the deterrence value is positive. Moreover, with experience, both weak and strong players learn to wait longer.
When we regress the estimated cutoff fighting ability below which subjects flee before the endgame on risk aversion and other individual characteristics, we obtain qualitatively similar results. The regressions are reported in table A2 in appendix C.
When we regress the estimated cutoff fighting ability above which subjects fight before the endgame on risk aversion and other individual characteristics, we obtain qualitatively similar results. The regressions are reported in table A3 in appendix C.
The same argumentation would apply to a distaste for surprise or suspense.
Sheremeta (2010), Price and Sheremeta (2011), and Cason, Masters, and Sheremeta (2018) all report evidence that joy of winning and risk aversion are important factors in driving subjects' behavior in contest games. In a second price auction with value uncertainty, Goeree and Offerman (2003b) find that bidders tend to submit bids below the expected value of the object, which suggests that risk aversion may be the stronger force. Sheremeta (2013) provides a survey.
We thank a referee for this suggestion.
We have some missing data for five subjects who lost the connection. If a subject could not be paired in a round because of this, he or she received the maximal payoff.
A regression analysis confirms these results. The interaction effects between dynamic timing and positive deterrence values are statistically significant, indicating that the effect of dynamic timing on battles and escapes matters more when the deterrence value is positive (table A6 in appendix C).
If we combine the data of both experiments, risk aversion is significantly correlated with fleeing before the endgame and dominance with early fighting. See tables A9 and A10 in appendix C.
Author notes
We thank the editor, two anonymous referees, Jian Song, and audiences at the University of Arizona, the University of Cologne, the University of Lyon, the University of Manchester, Middlesex University, New York University, MPI Bonn, NHH Bergen, the University of Oxford, UC San Diego, Utrecht University, the University of Vienna, WZB Berlin, and at IMEBESS Florence, M-BEES, NAG Toulouse, and TIBER for helpful suggestions and comments. Financial support from the Research Priority Area Behavioral Economics of the U. of Amsterdam, ANR–Labex IAST (Institute for Advanced Study in Toulouse), and CentER (Tilburg U.) is gratefully acknowledged.
A supplemental appendix is available online at https://doi.org/10.1162/rest_a_00961.