The Tragedy of the Coffeehouse:

Costly Riding, and How To Avert It

Louis Marinoff


1. My claim: box B is the only rational choice.

2. There is no dominant choice.

3. Probabilism eschews box A.

4. Maximizing expected utilities strongly prescribes box B.

5. Free versus costly riding.

6. A prediction concerning trial #3.

1. My claim: box B is the only rational choice.

There have been two empirical trials of Zeno's Coffeehouse challenge #1, and in both trials the players realized the worst of three possible outcomes. Listed in decreasing order of monetary value, these outcomes are:

(O1) $1,000 to each player, if and only if all players choose box A;

(O2) $100 to each player who chooses box B, if and only if at least one-fourth of the players choose box B;

(O3) $0 to each player, otherwise.

Uncontroversially, I claim:

(C1) that it is rational to prefer the best possible outcome;

(C2) that it is rational to seek the best attainable outcome;

(C3) that it is rational to avoid the worst possible outcome.

It is uncontroversial that (O1) is the best possible outcome, and (O3) the worst. The purpose of this analysis is to demonstrate that (O2) is the best attainable outcome. While it remains rational to prefer $1,000 to $100, I will show that the $1,000 outcome is, on probabilistic and empirical grounds, virtually unattainable. The only rational alternative, then, is to seek the $100 outcome, which remains preferable to nothing at all. This analysis should convince a sufficient number of players (i.e. at least one-fourth) that box B is the only rational choice, so that in a subsequent trial the best attainable outcome (O2) would be realized.

2. There is no dominant choice.

This problem partly resembles a tragedy of the commons, or a many-player prisoner's dilemma, in that it is a non-cooperative game in which summed individual interest leads to collective disadvantage (Hardin 1973). However, the problem differs in one key respect: its payoff matrix lacks a dominant choice. There is no option homologous to defection in the PD, where the dominance argument states (validly) that, with regard to outcomes, each player is better off defecting no matter what the other players choose. Dominance prescribes unconditional defection in a PD. But in our problem, no such prescription obtains. Each player is better off choosing box A if and only if all other players choose box A; better off choosing box B if and only if at least one-fourth of all players choose box B. Neither choice dominates the other unconditionally. We may suppose that choosing box A is roughly analogous to cooperating, and box B to defecting; but we derive no unequivocal game-theoretic prescription from the supposition.

In PDs, Newcomb's problems and the like, maximizing expected utilities (MEU) is a classic and robust decision-theoretic alternative to the dominance principle (Nozick 1969, Marinoff 1992). It is extensible from two-player to many-player scenarios (Irvine 1993, Marinoff 1996). It often, though neither necessarily nor invariably, conflicts with dominance, prescribing cooperation instead of defection. In this problem, however, in conjunction with empirical data from the two trials, MEU will prescribe overwhelmingly that players choose box B, which is roughly analogous to mass defection.

3. Probabilism eschews box A.

Prior to analyzing results from the trials, we can offer a strong hypothetical reason for eschewing box A. The empirical results themselves will strengthen this argument much further, almost beyond contestation. Recall that all players must choose box A in order that the best possible payoff of $1,000 obtain. Now consider the probability that all players will actually do so. Assume that each player does so with some average probability p (0 ≤ p ≤ 1), and that there are n players involved. Then the probability that all players will choose box A is p raised to the power of n, or p^n. The question is: can you be certain that all other players will choose box A? In other words: can you reliably set the value of p at unity? If so, then you should choose box A; if not, then you shouldn't.

I claim that one can be reasonably certain only about one's own imminent choice, but that one is necessarily uncertain about the choices of others. This uncertainty may be quite small, but it drives the corresponding value of p below unity. This in turn diminishes the probability that all players will choose box A, even if p remains very close to unity. For example, suppose that p equals 0.999. Given 10 such players, what is the probability that all will choose box A? It is (.999)^10, or about 99 percent. Given 100 such players, the probability falls to (.999)^100, or about 90 percent. Given 1,000 such players, the probability plummets to (.999)^1000, or about 37 percent. The moral of this story is compelling: if the average probability that a player will choose box A is less than unity, then the probability that all players will choose box A falls sharply, as an exponentially decreasing function of the number of players.
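For readers who wish to check the arithmetic, here is a minimal sketch in Python (the script and its variable names are mine, added purely for illustration):

    # Probability that all n players choose box A, when each does so
    # independently with average probability p (here p = 0.999).
    p = 0.999
    for n in (10, 100, 1000):
        print(n, p ** n)
    # n = 10:   ~0.990 (about 99 percent)
    # n = 100:  ~0.905 (about 90 percent)
    # n = 1000: ~0.368 (about 37 percent)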

Now let us examine the empirical data, which are far more pronounced than in the foregoing hypothetical case. In trial #1, 134 of 156 players selected box A. Whether on a Laplacean or a frequentist view of probabilism, we naturally interpret this empirical frequency as an average probability. Thus p = 134/156 = .859. Given n = 156, the probability that all players choose box A is (.859)^156, or about 5×10^-11. These odds are equivalent to one in twenty billion, which is to say they are not favorable.

In trial #2, which saw both an increase in n and a decrease in p, 198 of 247 players selected box A. This frequency yields an average probability of p = 198/247 = .802, which is significantly smaller than in trial #1 (but still large enough to secure a zero payoff). I surmise that proportionally more trial #2 players, aware of the numerical results of trial #1, either understood or intuited the probabilistic implications of the problem, and were thus impelled to choose box B. The probability that all trial #2 players choose box A is (.802)^247, or 2×10^-24. These odds are equivalent to one in about five hundred sextillion (1 in 5×10^23), which is the order of magnitude of Avogadro's number. So this probability corresponds to the chance of randomly picking one particular atom from about twenty-two liters (roughly one mole) of an atomic gas at standard temperature and pressure. If you prefer a temporal metaphor, imagine that trial #2 could have been conducted in an elapsed time of one second, and further imagine a continuous and non-terminating succession of such trials. On these odds, we would expect to wait an average of nearly sixteen quadrillion (1.58×10^16) years before encountering a trial on which all players choose box A.
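The same sketch, extended to the two trials (again merely illustrative; the waiting-time figure assumes one trial per second, as in the metaphor above):

    # Trial frequencies interpreted as average probabilities.
    p1, n1 = 134 / 156, 156   # trial #1
    p2, n2 = 198 / 247, 247   # trial #2
    print(p1 ** n1)           # ~5e-11
    print(p2 ** n2)           # ~2e-24

    # Temporal metaphor: one trial per second, expected wait in years.
    seconds_per_year = 365.25 * 24 * 3600
    print(1 / (p2 ** n2) / seconds_per_year)   # ~1.6e16 years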

In other words, for the numbers of players participating in trials #1 and #2, the probability that all players choose box A is effectively zero. Thus the probability that some players choose (that is, that at least one player chooses) box B is effectively unity. It follows that the best possible outcome (O1) is virtually unattainable. The best attainable outcome (O2) obtains just in case at least one-fourth of the players choose box B. In consequence, a rational player should choose box B.

4. Maximizing expected utilities strongly prescribes box B.

Purely probabilistic considerations suggest that we eschew box A; maximizing expected utilities (MEU) prescribes that we choose box B. The expected utility (EU) of a choice is a sum of products: for each possible outcome of a given choice, we multiply the probability of that outcome by the value of that outcome, then sum these products over all possible outcomes. In our problem, the calculations are simplified owing to the zero values of two of the three possible outcomes for each choice, whose corresponding products are then identically zero, irrespective of the associated probabilities.

The expected utility of choosing box A (EUA) is just the probability that all (n) players choose box A, multiplied by the value to each player of that outcome ($1,000). Assume that the value of money is linear in its amount. Then EUA = p^n × 1000. Using the values of n and p from trial #1, EUA1 = (.859)^156 × 10^3 = 5×10^-8.
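In Python, this is a one-line check (an illustrative sketch only, with linear utility assumed as above):

    p1, n1 = 0.859, 156
    EUA1 = p1 ** n1 * 1000   # expected utility of box A, trial #1
    print(EUA1)              # ~5e-8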

The expected utility of choosing box B is more complicated to compute, because we require the probability that at least one-fourth of the players choose box B. Call it p(B: m ≥ n/4). We can abbreviate the computation somewhat by finding the complement of the probability that more than three-fourths of the players choose box A. Call that p(A: m > 3n/4). If it is not the case that more than three-fourths of the players choose box A, then it must be the case that at least one-fourth of the players choose box B. Thus p(B: m ≥ n/4) = 1 - p(A: m > 3n/4). The latter probability is easier to compute because it entails fewer and smaller factorial terms.

Trial #1 had 156 players, three-fourths of which is exactly 117. So the probability that more than three-fourths choose box A is the sum of the probabilities that 118 players choose box A, that 119 players choose box A, ..., that 156 players choose box A. The general formula for the probability p(m) of m occurrences of an attribute in n events, when each occurrence has probability p, is

p(m) = [n! / (m!(n-m)!)] × p^m (1-p)^(n-m)

With respect to trial #1, then,

p(118) = [156! / (118! 38!)] × (.859)^118 (.141)^38

p(119) = [156! / (119! 37!)] × (.859)^119 (.141)^37

...

p(156) = (.859)^156

The result p(156) = (.859)^156, which is the probability that all choose box A, derives from the special case where m = n. (By definition, 0! = 1.) Computing and summing all these terms, we find that

p(A: m > 117) = .9997842

Hence p(B: m ≥ 39) = 1 - .9997842 = .0002158. This latter probability, recall, is the one we seek: it is the probability that at least one-fourth of the players choose box B. Since the value to each player of this outcome is $100, the expected utility of choosing box B in trial #1 is EUB1 = (2.16×10^-4)×10^2 = 2.16×10^-2.

To maximize expected utilities, we select the larger of EUA1 and EUB1. We find that the latter not only exceeds the former, but also does so excessively. (Nothing exceeds like excess.) The ratio EUB1:EUA1 = 432,000:1. Thus MEU emphatically prescribes choosing box B over box A.
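The complement method just described can be checked with a short Python sketch; the function name p_at_least_quarter_B is mine, introduced only for illustration:

    from math import comb

    def p_at_least_quarter_B(p_A, n):
        """Probability that at least one-fourth of n players choose box B,
        computed as 1 minus P(more than 3n/4 of them choose box A)."""
        threshold = (3 * n) // 4   # "more than 3n/4" means m >= threshold + 1
        tail_A = sum(comb(n, m) * p_A ** m * (1 - p_A) ** (n - m)
                     for m in range(threshold + 1, n + 1))
        return 1 - tail_A

    pB1 = p_at_least_quarter_B(0.859, 156)
    EUB1 = pB1 * 100                 # ~2.16e-2, as computed in the text
    EUA1 = 0.859 ** 156 * 1000       # ~5e-8
    print(EUB1 / EUA1)               # ~4.3e5, i.e. roughly 432,000:1

Note that math.comb requires Python 3.8 or later; summing the exact binomial terms avoids any normal approximation.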

With respect to trial #2, the result is even more conclusive. Recall that there were 247 players, of whom 198 chose box A. Via the prior method, the expected utility of choosing box A in trial #2 is EUA2 = 2.14×10^-21.

To find the probability that more than three-quarters of the players choose box A, we need to sum the probabilities from 186 to 247 players inclusive:

p(186) = [247! / (186! 61!)] × (.802)^186 (.198)^61

p(187) = [247! / (187! 60!)] × (.802)^187 (.198)^60

...

p(247) = (.802)^247

Computing and summing all these terms, we find that

p(A: m ≥ 186) = .9754612

Hence p(B: m ≥ 62) = 1 - .9754612 = .0245388. Again via the prior method, the expected utility of choosing box B is EUB2 = 2.45. The ratio EUB2:EUA2 is about 1.1×10^21:1. MEU's prescription for taking box B is overwhelmingly forceful.
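Reusing the hypothetical p_at_least_quarter_B function from the trial #1 sketch, the trial #2 figures follow directly:

    pB2 = p_at_least_quarter_B(0.802, 247)
    EUB2 = pB2 * 100               # ~2.45
    EUA2 = 0.802 ** 247 * 1000     # ~2.1e-21
    print(EUB2 / EUA2)             # ~1.1e21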

5. Free versus costly riding.

In many-player games where dominance conflicts with maximizing expected utilities, a minority of defecting players can exploit a majority of cooperating ones. The tragedy of the commons affords one such example: a majority of farmers may decide not to increase their herds, realizing that widespread escalation of grazing would ruin the common land for all; but then a minority of farmers could increase their herds with impunity, and profit from the restraint of the majority. In an urban example (Glance and Huberman 1994), a group of office workers goes to lunch, and they agree before ordering their food to share the check in equal lots. If a majority orders salads while a minority orders steaks, the latter will profit at the expense of the former ("profit" in the sense of eating steak for little more than the cost of salad). In these and other many-player prisoner's dilemmas in which the majority cooperates, the few defectors are deservedly termed "free riders" (Pettit 1986).

With respect to our problem, however, the situation is inverted. Dominance is inapplicable, and maximizing expected utilities strongly prescribes choosing box B, the analog of defection. Why then, in both empirical trials, did the majority (i.e. more than three-fourths of the players) choose box A? Some, perhaps, were motivated by the gambler's fallacy: that the wager with the largest payoff is best (regardless of respective odds). I hypothesize that most who chose box A did so out of a misguided cooperative predisposition. Prisoner's dilemmas and free riding are much discussed of late; free riders attract moral censure; Pareto-optimal outcomes in such problems are attained through cooperation; choosing box A is the analog of cooperating; and thus a well-intentioned majority chose box A. Or so I surmise.

But since our problem is manifestly not a prisoner's dilemma (there is no dominant choice, hence no diverging prescriptions of dominance and MEU, hence no dilemma in the classic sense), the foregoing line of reasoning not only is unsound but also is inimical to realizing the best attainable outcome (O2). I hypothesize that the minority which chose box B realized full well the extreme improbability of attaining the best possible outcome (O1), and so settled rationally on the best attainable one (O2). The would-be "cooperative" majority actually compelled the worst possible outcome (O3). So in our problem, which might be called "the tragedy of the coffeehouse," a rational minority of defectors is undermined by a well-intentioned but misguided majority of cooperators, which results in the worst outcome for all. Players who choose box A in this context might deservedly be called "costly riders."

6. A prediction concerning trial #3.

It is a well-known game-theoretic maxim that people are free to falsify any prediction about their behavior which is made known to them. Nonetheless, I assert that if a third empirical trial were held, and if each player were rational and contemplated this analysis before choosing, then the tragedy of the coffeehouse would be averted. Sufficient numbers of formerly costly riders would switch to box B, such that at least one-fourth of the players (and possibly very many more) would select that option. So my prediction about human behavior in our problem is recursive: I predict that at least one-fourth of those who reason through my prediction will corroborate it.

Department of Philosophy

The City College of New York

Acknowledgements: I am grateful to Ron Barnette for his encouragement, and to all the players for providing such interesting data.

References

Glance, N. and Huberman, B. (1994) The dynamics of social dilemmas, Scientific American, March 1994, pp. 76-81.

Hardin, G. (1973) The tragedy of the commons, in: A. Baer (Ed.) Heredity and Society (New York, Macmillan).

Irvine, A. (1993) How Braess' paradox solves Newcomb's problem, International Studies in the Philosophy of Science, 7, pp. 141-160.

Marinoff, L. (1992) Maximizing expected utilities in the prisoner's dilemma, Journal of Conflict Resolution, 36, pp. 183-216.

Marinoff, L. (1996) How Braess' paradox solves Newcomb's problem: not!, International Studies in the Philosophy of Science, forthcoming.

Nozick, R. (1969) Newcomb's problem and two principles of choice, in: N. Rescher (Ed.) Essays in Honor of Carl Hempel (Dordrecht, Reidel).

Pettit, P. (1986) Free riding and foul dealing, Journal of Philosophy, 83, pp. 361-379.