Abstract
The negotiator's dilemma is a vexing problem (structurally similar to the prisoner's dilemma), but it is made more complicated by the fact that many negotiations involve teams of negotiators. Although it is widely recognized that the behavior of groups of people can deviate in important ways from the behavior of individuals, the implications of this for the negotiator's dilemma are unclear.
For this article, I used a computer simulation to explore how different decision rules (majority rule, weakest link, and strongest link, the last of which corresponds to requiring consensus) shape the approach that groups take to negotiations. The results suggest that the use of different decision rules can greatly complicate negotiations. While majority rule is quite versatile across multiple contexts, a pairing of the weakest link and strongest link (consensus) decision rules is highly problematic for finding a negotiated settlement.
Introduction
Negotiations are mixed‐motive situations in which both parties cooperate to build an agreement while simultaneously competing over the terms of that agreement (Schelling 1960). When negotiations involve many issues, it is often possible to find creative solutions that create value for all parties, but some negotiations are primarily questions of claiming value. In these zero‐sum situations, negotiators face a particularly acute dilemma. Should a negotiator place emphasis on cooperation (a soft approach) or on the competitive aspect of negotiations (a hardline approach)?
A negotiator who is too soft may succeed in reaching a settlement, but it may be a settlement that produces less value for that negotiator than the one that might have resulted had she or he used more hard‐nosed tactics. On the other hand, a negotiator who is too tough and unwilling to compromise may succeed in producing a favorable settlement but risks having negotiations break down with no settlement reached. This is the negotiator's dilemma, as it applies to value‐claiming situations.
While the negotiator's dilemma is a vexing problem in its own right, it is compounded when negotiators work in teams or in groups, which is common in modern international negotiations (Zartman and Berman 1982). For example, multilateral trade negotiations often involve large delegations of negotiators who must coordinate as they work on different subparts of a problem. Similarly, in negotiations to address disputes or build new partnerships, it is common for an organization to send a team of negotiators rather than assigning the task to a single individual.1
While group decision making and decision rules have been aggressively studied by scholars seeking to understand the choices that groups make (e.g., see Nurmi 1987; Klamler 2005; McCabe‐Dansted and Slinko 2006), the interaction between group decision rules and negotiation tactics has been less well explored.
In this article, I seek to provide insight into how different group decision rules (majoritarian, strongest link, and weakest link) can alter the outcomes reached by teams of negotiators. A better understanding of group behavior and the formal and informal rules by which groups make decisions could aid in our understanding of why negotiations break down and why conflicts that could be solved persist.
To explore group dynamics as they relate to the negotiator's dilemma, I have adapted a computer simulation that has been used previously in exploring the prisoner's dilemma.2 The prisoner's dilemma involves individuals (or in this case groups) struggling with the choice to exploit each other or to work together, making it an interesting environment in which to study cooperation.
Background
In the closing days of August 1995, American diplomat Richard Holbrooke met with Serbian leader Slobodan Milosevic as part of prenegotiations in the lead up to the Dayton peace conference, which sought to end the war in Bosnia–Herzegovina. At this meeting, Milosevic unveiled the rules that would govern the Serb delegation. The delegation was to comprise six members, three from Yugoslavia (including Milosevic) and three representing the Bosnian Serbs. The delegation would operate under a majoritarian rule with ties to be broken by the delegation leader. Milosevic presented the delegation rules to Holbrooke with his characteristic pride and panache because, as the delegation head and the selector of the Yugoslav delegation, he had essentially apportioned to himself the power to make peace. Holbrooke knew immediately what the delegation rules meant: a real chance for a deal (Holbrooke 1999).
The lead up to the Dayton negotiations illustrates how different decision rules can affect negotiations. Because of the diversity of perspectives among different Serbian political factions, finding a settlement that would be acceptable to the most hardline of the Serb delegation would have been exceedingly difficult. But by shaping the Serbian delegation's decision rule, Milosevic was able to lower the bar for a settlement — from winning over the most hard‐nosed of the Serb delegation to winning the majority of the delegation. A different set of decision rules might have scuttled the Dayton peace process because it would have been harder to force the hand of the Bosnian Serbs to accept concessions. The Milosevic‐led majority was more amenable to peace and had the clout to deliver a deal.
The relationship between group dynamics and negotiation can have multiple dimensions. In one prominent study of team negotiation, Leigh Thompson, Erika Peterson, and Susan Brodt (1996) found that while negotiation teams are perceived by adversaries to have an advantage in negotiations relative to single negotiators, such an advantage has not been borne out in negotiation experiments, which is unsurprising given the literature on group information processing and group decision making. Groups frequently do not exhibit the kind of rational cost‐benefit‐driven decision making that is often posited in negotiation analysis. Indeed, social psychology research (see Hogg and Tindale 2008) suggests that the behaviors of collections of people deviate in important ways from the rational‐unitary actor model that is often used in rational choice analyses of human behavior.
Simon Hug (1999), for example, explored the extent to which group decision making mirrors the rational‐unitary actor model. He concluded that even if one accepts that individuals will make rational decisions on their own, when those same rational decision makers form groups, those groups appear to deviate from expectations of the rational‐unitary model, which further compounds the complexity of group behavior. Indeed, in the seminal work on groupthink, Irving Janis found that people working collectively may perform far worse than we might expect they would have as individuals. Groupthink, Janis (1982: 9) argued, produces a “deterioration of mental efficiency, reality testing, and moral judgment.”
Empirical studies have similarly found that cohesive groups under specific conditions perform noticeably worse at making decisions. The problem appears to become more pronounced as group size increases (Mullen et al. 1994). For example, R. Scott Tindale (1993) noted that in situations in which there is no obviously correct answer, groups are frequently less successful in forming judgments than the most able members of each group would be as individuals.
On the other hand, group judgments tend to be better on average than the judgments of individuals. Similarly, Thompson, Peterson, and Brodt (1996) found that negotiation teams were quite good at developing creative solutions to problems but struggled with the value‐claiming aspects of negotiations, a finding that they attribute to the internal coordination problems of groups.
Thompson, Peterson, and Brodt (1996) also explained how a group's internal coordination problems can reduce the strength of its position to the level of its weakest link. In other words, groups are moved to make concessions by the most conciliatory member of the group, which makes it possible for shrewd negotiators to target the more conciliatory members of an adversary's team. I. William Zartman and Maureen Berman (1982: 213) described this phenomenon as "singleshooting" or "leapfrogging." The remedy that Zartman and Berman propose is for groups to alter their governing decision rules so that consensus is required. Such a move would switch the conditions for extracting concessions from a group from a "weakest link" decision rule to a "strongest link" decision rule.
While requiring consensus to make concessions is potentially an effective solution to the problem of adversaries targeting the weakest link in a group, this is not necessarily a desirable model from the perspective of group performance. Hug (1999) found that consensus decision rules produced considerable deviation from the results one would expect from the rational‐unitary actor model. By contrast, the decisions made by larger groups operating under a majoritarian rule were less problematic. In an earlier study, I (Urlacher 2008) similarly found that larger groups using a majoritarian decision rule performed better in establishing patterns of cooperation than did smaller groups. Others have found little difference between a majoritarian group decision rule and a consensus group decision rule in jury‐style decisions (Davis et al. 1975; Nemeth 1977; Kameda 1991).
Given this previous work, it is clear that groups create several distinct problems for negotiation. Some problems relate to internal group coordination and information sharing, some to social dynamics and peer pressure, and some to the role of group decision structures and institutions. In this article, I seek to better understand the last of these three challenges, paying attention to how group decision rules affect group performance in negotiations.
Simulating Negotiations
Computer simulations of negotiations are inevitably limited in their realism because of the dynamic, interactive, and creative nature of negotiations. The simulation used for this article was designed to work within those limits by simulating a simple, single‐issue negotiation process, a “fixed‐pie” scenario rather than one that offered opportunities to develop creative solutions to complex problems. As noted earlier, some experimental studies (Thompson, Peterson, and Brodt 1996) have suggested that dividing a fixed pool of value is particularly challenging for groups.
This simulation, which examines group performance in negotiations, seeks to represent the negotiator's dilemma and the related prisoner's dilemma scenario. The negotiator's dilemma captures the tension that negotiators face between taking too hard or too soft an approach. The prisoner's dilemma refers to a problem faced by two accused criminals who must choose between confessing or staying silent. The authorities offer each one a deal: testify against your accomplice and go free. If both parties stay silent, the prosecution will only be able to prove a far lesser charge, but if both betray the other, both end up serving long sentences.
The prisoner's dilemma has a tension similar to the negotiator's dilemma in that mutual cooperation is beneficial, but parties can gain more through defection (i.e., a hardline strategy). This temptation to gain from a defection strategy can lead both parties to a worse outcome than would have resulted from mutual cooperation. Making use of the similarities between the prisoner's dilemma and the negotiator's dilemma, I adapted an existing computer simulation of the prisoner's dilemma to explore negotiations. The details of the simulation are presented at length below, starting with a simplified overview.
The simulation presented in this article involves groups of computer agents who are trying to agree on how to divide a fixed pie. This fixed pie is represented by a continuum, or number line, that runs from 0 to 1. The groups' ideal points lie at opposite ends of the continuum, but each group is willing to accept a range of other points on the continuum (see Figure One). A zone of possible agreement (ZOPA) exists, so it is possible for groups to come to an agreement about how to divide the value represented by the number line.
Over the course of the simulation, groups choose to make or to withdraw concessions, which correspond to offers at different points on the continuum.3 This continues until proposals have been made that both groups can accept. Identical simulations are run thousands of times with the decision rules that groups use to select conciliatory or hardline strategies adjusted.
Running identical simulations with different decision rule combinations makes it possible to explore the effect of decision rules on group behavior and the consequences of that behavior on the negotiation process. This approach, relying upon probabilistic decision making, differs somewhat from other studies of group negotiations that focus less on decision rules and more on intragroup deliberations (see Coser 1956; Pruitt and Rubin 1986; Majeski 2008).
Earlier Modeling Efforts
Two approaches to designing computer simulations of the prisoner's dilemma have become most prominent. Michael W. Macy (1991) developed a “stochastic learning” simulation designed to explicate the ability (or inability) of computer agents to build cooperative patterns over the course of many rounds or iterations.
In each iteration, computer‐generated agents select one of two possible strategies: cooperation or defection. The choices are driven by probabilities. Computer agents adjust these probabilities in response to the success or failure of each strategy in past iterations: agents who succeeded in exploiting an adversary through defection would thus become less likely to cooperate in future iterations. Similarly, failure to successfully take advantage of an adversary prompts a greater likelihood of future cooperation. In other words, agents are more likely to do what has worked in the past and less likely to try strategies that have failed.
Robert Axelrod's (1997, 1984) work differs from Macy's simulation strategy. Axelrod hosted a round robin “tournament” in which computer programs using different strategies competed against each other in a prisoner's dilemma game. The winning program used a strategy described as “tit‐for‐tat.” The tit‐for‐tat program would start each simulation with cooperation, but in all subsequent rounds or iterations of the simulation, the tit‐for‐tat program would mimic an adversary's move in the previous round. The tit‐for‐tat program did not involve learning by agents, but instead achieved cooperation through a reactive strategy of rewarding cooperation and punishing defection.
The agent learning aspect of Macy's simulation, along with the probabilistic behavior of agents, lends itself to exploring changes to agent behavior when different features of the game (or of the agents themselves) are changed. Joseph Young and I (Young and Urlacher 2007; Urlacher 2008) adapted Macy's approach to allow for groups of agents rather than individual agents to select strategies of cooperation and defection. The Young and Urlacher (2007) study found that a majoritarian decision rule produced higher levels of cooperation than an authoritarian decision rule. While these earlier studies provide the basic framework for thinking about how to build a simulation of the negotiator's dilemma, a number of adaptations were needed to apply the stochastic learning approach developed by Macy to negotiations.
For example, in prisoner's dilemma simulations, computer agents must make a choice to cooperate or defect in any given iteration of the simulation, and the payoffs for each agent are determined at the end of each iteration. Negotiations often feature a process of back and forth maneuvering as parties press for information, offer concessions, explore options, take proposals off the table, or pull back from talks completely. The payoff to different parties from these interactions is not known until a deal is concluded.
To capture this dynamic, the simulation presented in this article involves groups of agents proposing possible settlements in each iteration of the simulation. These proposals are specific points on the continuum representing all possible solutions. These proposals are adjusted incrementally in each iteration. In the first iteration, each group proposes its ideal point. With each subsequent iteration, groups have the opportunity to adjust their proposals incrementally, either away from a group's ideal point (i.e., a concession) or back toward the group's ideal point (i.e., the withdrawal of earlier proposed concessions). These incremental moves correspond to decisions to cooperate or defect in the prisoner's dilemma and to decisions to take a hardline or more conciliatory approach in the negotiator's dilemma.
If the two groups propose the same solution or make offers that overlap within the ZOPA, then a deal is struck. Overlap occurs in the simulation when Group A proposes a solution more favorable to Group B than what Group B had proposed (and vice versa). The overlap scenario would be the equivalent of offering a used car salesperson $5,000 for a car at the same moment that the salesperson offers to sell the car for $4,000. At this point, all parties should recognize that a deal is possible. In the simulation, the parties immediately split the difference in the overlap (i.e., the car would be sold for $4,500). In the real world, there would likely still be some discussions over the final dollar amount for the car.
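The overlap-and-split mechanic can be sketched in a few lines of code. This is a minimal illustration rather than the article's actual program, and it assumes (following the continuum described above) that Group A's ideal point is 0 and Group B's is 1, so Group A concedes by raising its proposal and Group B by lowering its proposal:

```python
def settlement(proposal_a, proposal_b):
    """Check whether two proposals on the [0, 1] continuum produce a deal.

    Assumes Group A's ideal point is 0 and Group B's is 1.
    Returns the settlement point, or None if there is no deal yet.
    """
    if proposal_a == proposal_b:
        return proposal_a              # identical proposals: immediate deal
    if proposal_a > proposal_b:        # offers overlap: A concedes past B
        return (proposal_a + proposal_b) / 2   # split the difference
    return None                        # no overlap; negotiations continue
```

In the used-car terms above, `settlement` returns the midpoint the moment the buyer's offer meets or exceeds the seller's asking price, mirroring the simulation's immediate split of any overlap.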
With a general strategy for simulating a negotiating process in place, the next step is to lay out the process by which groups of computer agents decide on a hardline or conciliatory approach for each iteration of the simulation.
Group Decision Rules
In the simulation presented in this article, groups follow one of three different decision rules to select between a hardline and conciliatory strategy. The underlying process for individual agents to select a strategy is the same, regardless of how the group makes decisions. Each agent has a cooperation tendency, which can range from 0 to 1. This sets the probability that an agent will support a conciliatory approach (as opposed to a hardline approach).
The cooperation tendency of an agent changes over the course of the simulation as the agent learns from past experiences. When an agent's cooperation tendency is 0, it is mathematically impossible for an agent to vote in favor of offering concessions in the group decision process. When an agent's cooperation tendency is 1, it is mathematically impossible for an agent to vote for a hardline strategy. An agent with a current cooperation tendency of 0.5 will support offering concessions half the time and will support a hardline strategy of withdrawing concessions half the time.
This approach to selecting strategies by individual agents is, as Macy (1991: 811) points out, stochastic or random rather than deterministic. In Axelrod's tournament, the tit‐for‐tat computer program used a deterministic strategy: if an opponent offers to cooperate in the previous round, the tit‐for‐tat computer program will always respond with cooperation. With Macy's approach, the probability that an agent will choose to make concessions is simply that, a probability. Agents retain the ability to do unlikely things because their behavior is governed by probability.
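Macy's stochastic choice mechanism amounts to a single random draw against the agent's cooperation tendency. A minimal sketch (the function name is mine, not from the original simulation):

```python
import random

def agent_vote(cooperation_tendency, rng=random):
    """Return True if the agent votes for a conciliatory strategy.

    The cooperation tendency (0 to 1) is the probability of supporting
    concessions. The vote itself is a random draw, so an agent whose
    tendency lies strictly between 0 and 1 can always do unlikely things.
    """
    return rng.random() < cooperation_tendency
```

A tendency of 0 can never produce a cooperative vote and a tendency of 1 always does, matching the boundary cases described above.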
It should also be stressed again that cooperation tendencies for agents are not fixed in the simulation. Rather, agents learn in response to feedback about the success or failure of different strategies. This process of learning results in a change to an agent's cooperation tendency, which in turn makes an agent more conciliatory or more hard‐nosed. As the cooperation tendencies of individual agents change over the course of the simulation, this has an effect on the behavior of groups, but the effect is not direct. Each group has a decision rule through which the choices made by individual agents are translated into a group decision.
Three different decision rules were used in this simulation:
Majority Rule
If more than half of the agents in a group using majority rule vote for cooperation, the group will make concessions. If fewer than half of the agents vote for cooperation, the group takes a hardline approach. In the case of a tie, no change is made from the previous proposal.
Weakest Link
If any agent in a group that is using the weakest link rule votes to cooperate, the group will make concessions. The group only selects a hardline strategy if all group members choose not to cooperate.
Strongest Link
A decision to make concessions under the strongest link decision rule requires that all group members vote for concessions. A decision not to offer concessions by any one group member leads to a hardline approach.
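The three rules reduce to thresholds on the count of cooperative votes. A hedged sketch (the function and rule names are mine; "hold" represents the no-change outcome of a majority-rule tie):

```python
def group_choice(votes, rule):
    """Translate agent votes (True = cooperate) into a group move.

    Returns "concede", "hardline", or "hold" (majority-rule tie only).
    """
    yes = sum(votes)
    if rule == "majority":
        if yes * 2 > len(votes):
            return "concede"           # more than half voted to cooperate
        if yes * 2 < len(votes):
            return "hardline"          # fewer than half voted to cooperate
        return "hold"                  # tie: keep the previous proposal
    if rule == "weakest":
        # a single cooperative vote is enough to pull the group into conceding
        return "concede" if yes >= 1 else "hardline"
    if rule == "strongest":
        # unanimity is required before the group will concede
        return "concede" if yes == len(votes) else "hardline"
    raise ValueError(f"unknown rule: {rule}")
```

Note how the weakest link rule makes a single conciliatory agent decisive for concessions, while the strongest link rule makes a single hardliner decisive against them.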
Agent Learning
At the start of each simulation, all agents have a cooperation tendency of zero, making them unwilling to offer concessions. As groups interact, individual agents adjust their cooperation tendency in response to their perceptions that cooperation, or the offering of concessions, is a viable strategy or merely a necessary strategy. Concessions may be seen as more viable when rivals reciprocate, but if a hardline strategy fails to move an adversary, concessions will be seen as increasingly necessary for talks to move forward. The learning mechanism articulated by Macy (1989) is used in this simulation.4

When a group chooses cooperation, the consequences of this choice will shape how individual agents view cooperation. If the group effort to cooperate succeeds and both groups move closer to a deal, then cooperation will be more likely in the future as agents in both groups will view cooperation as a viable and fair strategy for moving toward a final agreement. This satisfaction with offering concessions increases the cooperation tendencies of agents. On the other hand, if a group's choice of cooperation is met with a hardline strategy, this prompts agents in the cooperating group to reduce their cooperation tendencies, so they are less likely to be taken advantage of in the next round of negotiations.
A similar logic applies for group decisions to defect or take a hardline approach. If the other group cooperates, a hardline approach is seen as successful, and agents in the defecting group will be less likely to cooperate in the future. Finally, if a hardline approach produces stalemate because both groups opt not to cooperate, then agents will become more likely to consider cooperation or concessions in the future. Given this configuration of feedback, which parallels the logic of feedback and learning in Macy's prisoner's dilemma simulation, mutual cooperation and mutual defection both produce increases in the cooperation tendency of agents in both groups. A cooperate‐defect or defect‐cooperate outcome results in a reduction in the cooperation tendency of agents in both groups. Even though no payoffs are awarded to agents until a deal is struck, agents adjust their willingness to make concessions based on the movement of proposals made by groups in each iteration.
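The direction of this feedback can be sketched as a simple proportional update. The text specifies only the direction of adjustment (symmetric outcomes raise cooperation tendencies, asymmetric outcomes lower them); the proportional Bush-Mosteller-style formula and the learning rate below are my assumptions, not Macy's exact specification:

```python
def update_tendency(tendency, own_move, other_move, learning_rate=0.1):
    """Adjust an agent's cooperation tendency after one iteration.

    Symmetric outcomes (both groups concede, or both take a hardline)
    raise the tendency; asymmetric outcomes lower it. The proportional
    update rule is an assumed stand-in for Macy's learning mechanism.
    """
    if own_move == other_move:
        # mutual cooperation or mutual stalemate: cooperation looks viable
        # (or increasingly necessary), so move the tendency toward 1
        return tendency + learning_rate * (1 - tendency)
    # one side conceded while the other held firm: cooperation was punished
    # (or the hardline was rewarded), so move the tendency toward 0
    return tendency - learning_rate * tendency
```

The proportional form keeps every tendency within the [0, 1] bounds without explicit clipping, which is one common way to implement stochastic learning.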
The Simulation Process
Given the logic discussed above, a computer program to simulate negotiations was created. This program requires the user to set a number of parameters before a simulation can be run. I selected informed but still arbitrary parameter values for the simulations presented in this article.5 Diagnostic testing that I ran before this study suggested that the size of the ZOPA, the maximum rate of adjustment to proposals made from one iteration to another, and the rate at which agents learn all have the effect of slowing down or speeding up convergence on a settlement. In other words, if the values defining the ZOPA, the learning rate, and the proposal adjustment rate are set low, it will take more iterations before groups arrive at a settlement.
Indeed, as Macy noted in discussing his simulation, parameters can be set so low that it is virtually impossible for agents to reach a stable cooperative pattern. This is noteworthy, but demonstrating that a smaller ZOPA makes a deal more difficult is not a particularly novel insight; it would be a point of concern if this were not the case! Thus, the simulations presented in this article are calibrated so that the effects of different decision rules can be observed and reasonably distinguished.
I analyzed six different combinations of decision rules for this study. Each combination was analyzed with one thousand simulations of two groups. In each simulation, groups interacted for two hundred iterations. This produced a total of six thousand simulations, each with traceable histories of interactions between two groups. In analyzing the interaction histories from each of the simulations, only the iterations up to and including the first instance of a settlement were analyzed. All groups in each of the six thousand simulations were made up of five agents.
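The pieces described above can be composed into a single run. The sketch below is a minimal stand-in for the article's program: the step size, learning rate, and seed are illustrative values of my own choosing (the article's calibrated parameter values are not reported here), and the function names are hypothetical:

```python
import random

def simulate(rule_a, rule_b, n_agents=5, max_iters=200, step=0.02,
             lr=0.1, seed=0):
    """One negotiation between two groups of agents (a hedged sketch).

    Group A's ideal point is 0 and Group B's is 1. Returns
    (iterations_to_settlement, settlement_point), or (None, None)
    if no settlement is reached within max_iters iterations.
    """
    rng = random.Random(seed)
    tend_a = [0.0] * n_agents      # all agents start unwilling to concede
    tend_b = [0.0] * n_agents
    prop_a, prop_b = 0.0, 1.0      # each group opens at its ideal point

    def choice(tendencies, rule):
        yes = sum(rng.random() < t for t in tendencies)
        if rule == "majority":
            if yes * 2 > len(tendencies):
                return "concede"
            return "hardline" if yes * 2 < len(tendencies) else "hold"
        if rule == "weakest":
            return "concede" if yes >= 1 else "hardline"
        return "concede" if yes == len(tendencies) else "hardline"

    for i in range(1, max_iters + 1):
        move_a = choice(tend_a, rule_a)
        move_b = choice(tend_b, rule_b)
        # concessions move a proposal toward the rival's ideal point;
        # hardline moves pull it back toward the group's own ideal point
        if move_a == "concede":
            prop_a = min(prop_a + step, 1.0)
        elif move_a == "hardline":
            prop_a = max(prop_a - step, 0.0)
        if move_b == "concede":
            prop_b = max(prop_b - step, 0.0)
        elif move_b == "hardline":
            prop_b = min(prop_b + step, 1.0)
        if prop_a >= prop_b:       # proposals meet or overlap: deal struck
            return i, (prop_a + prop_b) / 2
        # symmetric moves raise cooperation tendencies, asymmetric lower them
        for tend in (tend_a, tend_b):
            for j, t in enumerate(tend):
                if move_a == move_b:
                    tend[j] = t + lr * (1 - t)
                else:
                    tend[j] = t - lr * t
    return None, None
```

Running this sketch thousands of times per rule pairing, and recording whether and when each run settles, mirrors the experimental design described in this section, though the outcomes depend on the illustrative parameter values.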
Expectations
The crux of the negotiator's dilemma is that taking a stance that is too hard may make it impossible to reach a deal, but taking a stance that is too soft may produce a deal of less value than might have been possible. In this simulation, negotiations do not break down but continue until a settlement is reached or the simulation ends. In theory, if a ZOPA exists, then with enough iterations every simulation should eventually produce a settlement.
Assuming that a weakest link decision rule is the most prone to make concessions and that a strongest link decision rule is least prone to make concessions, the following three propositions are offered:
Proposition One: The proportion of groups achieving a settlement should be lowest for strongest link groups and highest for weakest link groups.
Proposition Two: The number of iterations to a settlement should be highest for strongest link groups and lowest for weakest link groups.
Proposition Three: The terms of settlement should be worse for weakest link groups and better for strongest link groups.
The Simulations
In the analysis of these simulations, I examined three variables: the proportion of simulations that resulted in a settlement, the number of iterations taken to reach a settlement in each simulation, and the terms of the settlement reached by the two groups. For each group, the absolute distance of the settlement point from the group's ideal point is used. Thus, smaller values indicate that a group was able to reach a deal with fewer concessions, and larger values indicate that more concessions were needed to reach a deal.
I have separated the discussion of results into simulations in which groups employ the same decision rule and simulations in which groups use different decision rules. This separation helps illustrate the paths to settlement that emerge from the different decision rules before exploring the interplay of different decision rules.
Negotiations with Like Decision Rules
Three thousand of the simulations run for this study involved groups with like decision rules. Each of these three thousand simulations was run for two hundred iterations. All simulations resulted in groups finding a settlement within two hundred iterations. Although this might suggest that the parameters of the simulation were set too loosely to observe variation across decision rules, the next section shows that the simulation conditions did not guarantee a settlement. Indeed, some of the other combinations of decision rules did quite poorly under these same conditions.
The three decision rules performed somewhat differently in regard to how quickly settlements occurred. Table One shows the mean number of iterations taken to reach a settlement for each of the three combinations of like decision rules. As anticipated, weakest link decision rules produce rapid convergence in just under twenty iterations, and strongest link decision rules take longer, with slightly over twenty‐eight iterations on average to a settlement. The surprising result from this comparison is that majoritarian decision rules were the slowest of the three, taking on average more than thirty iterations to produce a settlement. Although the difference between the strongest link and majoritarian decision rules may not seem substantively large, it is statistically significant at the 0.05 level, suggesting it is unlikely to be a statistical fluke.
Table One: Iterations to a Settlement by Group Decision Rule Combinations

| Group A | Group B | Mean | Standard Error | 95% CI Lower Bound | 95% CI Upper Bound |
| --- | --- | --- | --- | --- | --- |
| Majority | Majority | 30.260 | 0.432 | 29.396 | 31.124 |
| Weakest link | Weakest link | 19.67 | 0.267 | 19.136 | 20.204 |
| Strongest link | Strongest link | 28.06 | 0.255 | 27.55 | 28.57 |
The final metric used to evaluate the performance of each decision rule was the terms of the agreement struck by the two groups. The greater the absolute distance of a settlement from a group's ideal point, the worse the terms of the settlement are for the group. For the simulations examined in this section of the article, I found effectively no difference between decision rules (see Table Two). The differences that do exist are not statistically significant at the 0.05 level.
Table Two: Terms of Settlement by Group Decision Rule Combinations

| Group A | Group B | Mean | Standard Error | 95% CI Lower Bound | 95% CI Upper Bound |
| --- | --- | --- | --- | --- | --- |
| Majority | Majority | 0.5295 | 0.0025 | 0.5245 | 0.5345 |
| Weakest link | Weakest link | 0.5285 | 0.003 | 0.5225 | 0.5345 |
| Strongest link | Strongest link | 0.53 | 0.004 | 0.522 | 0.538 |
At this stage of the analysis, it appears that there is little evidence that different types of decision rules are an important factor in understanding group performance in negotiations. While a weakest link decision rule may speed groups to a deal, the simulations do not suggest that the deals reached are substantively different, or even that the likelihood of reaching a settlement is affected by decision rules.
This pattern is largely consistent with earlier simulations of the prisoner's dilemma. Macy (1989) dug deeply into the dynamics of these stochastic computer simulations, and he (1991: 821) pointed out that the secret to solving the prisoner's dilemma from a stochastic perspective is a "fortuitous sequence of consecutive symmetric moves." Thus, we might reasonably expect groups operating with the same decision rule to perform far better than groups whose decision rules differ from each other.
Negotiations between Parties with Different Decision Rules
In general, when groups are using similar decision rules, they are able to arrive at negotiated settlements. When groups use different decision rules, it becomes less likely that groups will converge on a mutually acceptable solution. When a majority decision rule is paired with a strongest link or weakest link rule, groups are able to converge on a settlement in roughly 70 percent of simulations. A pairing of the strongest link and weakest link decision rules, however, is cataclysmic in terms of achieving a settlement. In the one thousand simulations run for this pairing of decision rules, not once did groups achieve a negotiated settlement!
A review of the iteration histories from these simulations indicates that weakest link groups regularly make concessions even when agents have relatively low average cooperation tendencies. By contrast, high average cooperation tendencies are needed to overcome the institutional resistance to compromise embedded in the strongest link decision rule. This dynamic can be seen in Figure Two (below), which shows the probability that groups using different decision rules will make concessions. The probabilities for each decision rule change across the continuum of cooperation tendencies, but they follow markedly different paths.
Because the concession decisions of groups using the strongest link decision rule and groups using the weakest link decision rule are almost invariably out of sync, stable patterns of cooperation do not emerge. Groups using the weakest link rule may find their offers of concessions met by hardline tactics, but they have minimal ability to punish other groups for failing to reciprocate. Agents in a weakest link group may see their individual cooperation tendencies plummet toward zero, which can produce one or two iterations in which the group employs a hardline approach. Given the group's decision rule and the shape of the probability curve depicted in Figure Two, however, a hardline approach cannot be sustained once the cooperation tendencies of agents begin to rise.
By contrast, the strongest link decision rule makes it difficult for groups to make sustained concessions when hardline tactics appear even occasionally successful. The probability curve for strongest link groups illustrates the very low probability of a strongest link group choosing cooperation until the average cooperation tendency of the group members has risen to approximately 0.9.
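The qualitative shape of these probability curves can be illustrated with a back-of-envelope calculation. Suppose, purely for illustration, that each of a group's five agents independently chooses to concede with probability equal to the group's average cooperation tendency p. Then a weakest link group (any agent suffices) concedes with high probability even at low p, while a strongest link group (all agents required) rarely concedes until p is very high:

```python
# Back-of-envelope group concession probabilities, assuming each of N agents
# independently concedes with probability p. This is a simplification for
# illustration; the article's simulated agents learn and are not i.i.d.
from math import comb

N = 5  # five agents per group, matching the simulation parameters

def p_weakest_link(p):
    # Group concedes if any agent concedes: 1 - (1 - p)^N
    return 1 - (1 - p) ** N

def p_strongest_link(p):
    # Group concedes only if every agent concedes: p^N
    return p ** N

def p_majority(p):
    # Group concedes if at least 3 of 5 agents concede (binomial tail)
    return sum(comb(N, k) * p**k * (1 - p) ** (N - k) for k in range(3, N + 1))

for p in (0.2, 0.5, 0.9):
    print(f"p={p}: weakest={p_weakest_link(p):.3f}, "
          f"majority={p_majority(p):.3f}, strongest={p_strongest_link(p):.3f}")
```

Under this simplification, a strongest link group concedes only about 59 percent of the time even when every agent's cooperation tendency is 0.9, which is consistent with the very high threshold visible in Figure Two.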
When considering the time it takes for groups to reach settlements, there appears to be little difference between the majoritarian‐strongest link and majoritarian‐weakest link simulations (see Table Three). Majoritarian groups take roughly 88.5 iterations to reach a settlement with weakest link groups and 90.6 iterations to reach a settlement with strongest link groups. This, however, is three to four times longer than is seen when groups operate under the same decision rule. Furthermore, the average number of iterations to settlement excludes the 30 percent of simulations in which a settlement was not reached at all. Clearly, different decision rules can complicate the negotiation process, but does this change the terms of settlements?
Table Three: Iterations to a Settlement by Group Decision Rule Combinations

| Group A Rule | Group B Rule | Mean | Standard Error | 95% CI Lower Bound | 95% CI Upper Bound |
|---|---|---|---|---|---|
| Majority | Weakest link | 88.46 | 1.885 | 84.69 | 92.23 |
| Majority | Strongest link | 90.60 | 1.884 | 86.832 | 94.368 |
| Weakest link | Strongest link | — | — | — | — |
The evidence from Table Four suggests that different decision rules do have a powerful substantive impact on negotiation outcomes. When majority decision rule groups interact with weakest link groups, the majoritarian groups are able to push the settlement to the reservation point of weakest link groups. Weakest link groups are able to forge settlements with majoritarian groups, but the terms of the settlements tend to favor the majoritarian groups.
Table Four: Terms of Settlement by Group Decision Rule Combinations

| Group A Rule | Group B Rule | Group A Mean | Group A Std. Error | Group B Mean | Group B Std. Error | Difference |
|---|---|---|---|---|---|---|
| Majority | Weakest link | 0.412 | 0.003 | 0.635 | 0.003 | −0.223*** |
| Majority | Strongest link | 0.621 | 0.003 | 0.431 | 0.003 | 0.190*** |
| Weakest link | Strongest link | — | — | — | — | — |
*p < 0.05; **p < 0.01; ***p < 0.001 for a two‐tailed test of significance.
When majoritarian groups interact with strongest link decision rule groups, the metaphorical negotiating tables are turned. The strongest link decision rule produces outcomes tilted in favor of the group relying on the strongest link decision rule. Majoritarian groups are able to reach a settlement, but that settlement is very close to the reservation points of majoritarian groups.
While the terms of the settlements might be highly tilted toward one group when decision rules vary, all of the settlements reached created value for the parties involved and were preferable to no agreement. From this perspective, the pairing of strongest link and weakest link decision rules produced the worst outcome by far. No value was created across the one thousand simulations using this combination of group decision rules.
Conclusion
This study has shown that how groups make decisions can alter how they perform in value‐claiming negotiations, at least when these negotiations are framed in terms of the negotiator's dilemma. While like decision rules lead to minimal substantive difference in how groups perform in negotiations, the use of different decision rules by groups can complicate or even doom the prospects for a settlement. This finding has potential tactical implications for structuring negotiation processes, but it also has larger theoretical implications for the study of group behavior in negotiations.
A great deal of work on group dynamics focuses on how intragroup interactions and decision making affect group outcomes. This simulation further reinforces what has become a widely recognized finding: group dynamics matter for group choices. The simulation presented here, however, feeds group outcomes into a larger interaction process. Intragroup dynamics clearly matter, but depending upon the nature of the intergroup interaction and the internal dynamics of other groups, the consequences of intragroup dynamics can be muted or magnified. This finding is important and points to the need for greater theoretical differentiation between studies of group performance on specific tasks and studies that frame group decision making as part of a larger process.
While computer agent simulations, such as the simulation presented here, often struggle to demonstrate the generalizability of outcomes to human interaction processes, it is somewhat easier to argue that the findings about institutional decision rules are generalizable. Still, additional work on this question could help validate (or refute) some of this study's findings and implications. One area to look at is adjusting the assumptions made in the simulation about how agents learn. Different learning mechanisms or learning rates may mute or magnify the effects of different decision rules.
Furthermore, replication of the group negotiation process simulated here using human participants would help assess the robustness of this study's findings when the simplifying assumptions needed for computer simulations are relaxed. Additionally, the stark impact on negotiations seen in the use of different decision rules by groups should be visible to some extent in real‐world interactions. Follow‐up case studies of real negotiations may provide a sense of how negotiation processes that involve groups play out in specific contexts. Case study work could also be valuable in teasing out the tactical implications of internal group decision rules and could be helpful in developing better insights into how group decision structures may be adapted to better navigate the negotiator's dilemma.
NOTES
I would like to offer thanks and recognition to those who have provided feedback on this project through its multiple incarnations. Some were anonymous but provided invaluable criticism. In particular, I would like to thank Angela Harrison‐Urlacher and Carissa Green for their efforts and assistance. This project is based on a computer simulation. The simulation software can be downloaded at http://business.und.edu/political‐science‐public‐administration/brian‐urlacher.cfm.
Fen Hampson and William Zartman (2012) discussed at length the complexity of multiparty (i.e., team) negotiations. Similarly, Peter Harris and Ben Reilly (1998) discussed how to structure negotiations to facilitate the resolution of long‐running conflicts with the assumption that in many cases each party will be represented by a team rather than a single negotiator.
David Lax and James Sebenius (1986) argued that when rendered in game theoretic terms, the negotiator's dilemma is structurally similar to the prisoner's dilemma.
While proposals offered by groups are adjusted up or down in each round, the simulation rules keep proposals within the 0 to 1 range. Furthermore, groups are unable to make a proposal that is larger than their own reservation point.
Macy's learning mechanism is used with slight modifications to address Piotr Swistak's (1990) critique of Macy's equation. Specifically, Macy's original equation can produce negative values for the cooperation tendency. An algorithm was included in the simulation code to prevent this from occurring, ensuring that the cooperation tendency for all agents is bounded at 0 and 1. See Macy (1989) for a detailed discussion of the learning mechanism. In Macy's equation, the speed of learning changes depending on the nature of the feedback, an agent's current cooperation tendency, and the “magnitude” of reinforcement, which in turn is linked to the payoff structure of the prisoner's dilemma. The magnitude and payoff structures embedded in the simulation mirror Macy's (1989) simulation. Users may alter the learning speed of agents by setting the maximum learning rate at the start of the simulation. The learning rate for the simulations presented in this article was 0.1, which caps the change in an agent's cooperation tendency at 10 percent in each iteration; the actual shift in a given iteration is typically smaller, as determined by Macy's learning mechanism.
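The bounding described in this note can be sketched as follows. This is a simplified stand-in, not Macy's actual equation: it captures only the two properties the note specifies, namely that the step size is capped by the maximum learning rate and that the resulting tendency is clamped to the interval [0, 1].

```python
def update_tendency(tendency, reinforcement, max_learning_rate=0.1):
    """Nudge an agent's cooperation tendency in the direction of feedback.

    Simplified illustration of the bounded update described in the note,
    NOT Macy's full learning equation. `reinforcement` is a signed value
    in [-1, 1]: positive feedback raises the tendency, negative feedback
    lowers it. Clamping to [0, 1] mirrors the fix for Swistak's critique
    that the raw equation can produce negative tendencies.
    """
    step = max_learning_rate * reinforcement
    return min(1.0, max(0.0, tendency + step))

print(update_tendency(0.05, -1.0))  # clamped at 0.0 rather than -0.05
print(update_tendency(0.95, 1.0))   # clamped at 1.0 rather than 1.05
```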
To facilitate replication efforts, the parameters used in all simulations are as follows. The ZOPA for this simulation was set to 0.4 (centered on 0.4 and 0.6 of the continuum). The learning rate for agents was set to 0.1. The proposal adjustment rate was set to 0.1. The adjustment rate reflects the maximum amount that groups can increment their proposals from one iteration to another. The actual incremental change in an iteration is a random value between 0 and the maximum adjustment rate. All groups include five agents each.
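The proposal adjustment process described in these notes can be sketched as follows. The function below is an illustrative reconstruction from the parameters and constraints stated in the notes (a maximum adjustment rate of 0.1, a random increment between 0 and that maximum, proposals bounded to the 0 to 1 continuum, and a cap at the group's own reservation point); it is not the published simulation code.

```python
import random

MAX_ADJUST = 0.1  # proposal adjustment rate reported in the notes

def adjust_proposal(proposal, direction, reservation_point):
    """Move a group's proposal up (direction=+1) or down (direction=-1)
    by a random increment between 0 and the maximum adjustment rate,
    then enforce the two constraints from the simulation rules:
    proposals stay within [0, 1] and cannot exceed the group's own
    reservation point. Illustrative sketch only."""
    increment = random.uniform(0, MAX_ADJUST)
    proposal += direction * increment
    proposal = min(1.0, max(0.0, proposal))   # bounded to the continuum
    return min(proposal, reservation_point)   # capped at reservation point

random.seed(1)  # seeded only so the example is repeatable
print(adjust_proposal(0.55, +1, reservation_point=0.6))
```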