Moral Commitment Problems

Let’s talk about a moral dilemma.

PIRG and similar organizations (“Grassroots Campaigns, Inc.” being the most important) are notorious for exploiting their canvassers, up to and including outright union-busting (from left-wing organizations!). Yet I’m given to understand that many of their employees are paid on commission. When a canvasser approaches me, do I give them money, to help a cause that I support as well as to make life a little better for the worker? Or do I refuse to give them money, on the grounds that if enough people refuse to give the PIRGs money until they stop mistreating their workers, they’ll be forced to straighten up?

I think this poses what we can think of as a moral commitment problem. Suppose that the moral worth of the boycott comes primarily from its efficacy in making life better for the workers. Then, if everyone else can be relied upon to boycott, my most moral option is to boycott. On the other hand, if everyone else (or enough others) cannot be relied upon to boycott, then my most moral action is to donate money.
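The structure resembles what game theorists call an assurance (or stag-hunt) game: boycotting is best only if enough others boycott too. A minimal sketch, with illustrative payoff numbers and a threshold that are my assumptions rather than anything in the scenario:

```python
# Toy model of the boycott as a threshold ("assurance") problem.
# Payoffs and threshold are illustrative assumptions:
#   - The boycott succeeds iff at least `threshold` people join it,
#     in which case each boycotter realizes the collective good (10).
#   - Donating always realizes the small individual good (1) of
#     helping one worker, whether or not the boycott succeeds.

def best_action(others_boycotting: int, threshold: int = 100) -> str:
    """Return the morally best action for one agent, given how many
    *other* agents can be relied upon to boycott."""
    succeeds_if_i_join = others_boycotting + 1 >= threshold
    boycott_value = 10 if succeeds_if_i_join else 0
    donate_value = 1
    return "boycott" if boycott_value > donate_value else "donate"

print(best_action(others_boycotting=150))  # enough others boycott
print(best_action(others_boycotting=10))   # too few others boycott
```

As in the post: when others can be relied upon, boycotting dominates; when they can't, the small certain good of donating wins.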

We might think of similar situations in other spheres of ethical life. For example, I might have a choice between donating money to the poor in another country or making life a little better for my family, and think that the donation will only be effective if lots of people do it. If an effective donation is morally better than support for my family, which is morally better than an ineffective donation, I’m in a similar fix.

Can we draw insights from game theory in order to solve this kind of problem? Can moral commitment problems be solved the same way we solve strategic commitment problems?*

(Recommended reading: Peter Singer, Famine, Affluence and Morality; Liam Murphy, Moral Demands in Nonideal Theory; Mancur Olson, The Logic of Collective Action.)

* One thing it might suggest is that we ought to encourage the kind of moral theory that focuses on individual virtue rather than collective improvement. Why? Suppose I’m a utilitarian. I might have an incentive to be a moral free-rider — that is, I might have an incentive to support good social ends (moral public goods) but not sacrifice anything to bring them about (vote for the redistributive tax but demand an exemption for myself, say). It doesn’t seem like a utilitarian can condemn moral free-riding directly: if the good end is brought about, I don’t do anything wrong by not contributing. (However, the utilitarian might be able to condemn moral free-riding if it means the good doesn’t get provided, by the usual collective action problems.) By contrast, if I’m a Kantian, I might see myself as having an obligation to actually contribute. In effect, a group of Kantians has selective moral incentives, in a modified version of Olson’s terms. Collective moral goods are excludable to Kantians, but not to utilitarians. Therefore, Kantians can provide more of them.
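The footnote's contrast can be sketched as a threshold public-goods model. The setup (population size, threshold, and the crude modeling of the utilitarian free-rider as a non-contributor) is my illustrative assumption, not the post's:

```python
# Toy comparison: a public good is provided iff at least `threshold`
# of the agents contribute.
#   - A "kantian" contributes unconditionally (acting on the
#     universalizable maxim).
#   - A "utilitarian" free-rider reasons that the good will be
#     provided (or not) without her, so - modeled crudely here -
#     she never contributes.

def good_provided(population: list[str], threshold: int) -> bool:
    """True iff enough unconditional contributors exist."""
    contributions = sum(1 for agent in population if agent == "kantian")
    return contributions >= threshold

kantians = ["kantian"] * 10
free_riders = ["utilitarian"] * 10

print(good_provided(kantians, threshold=8))     # True
print(good_provided(free_riders, threshold=8))  # False
```

A population of unconditional contributors clears the threshold; a population of would-be free-riders does not, which is the footnote's point about selective moral incentives.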


12 Responses to “Moral Commitment Problems”

  1. Richard Says:

    A global or ‘valoric’ utilitarian can assess characters and dispositions as well as acts. And it’s open to consequentialists to assess individual tokens or universal types (e.g. whether a disposition is good for everyone to have). It sounds like you’re after something like the latter form of utilitarianism, rather than Kantianism per se.

  2. Paul Gowder Says:

    Richard, I’m not quite sure I understand your comment — can you elaborate on how this gives utilitarians the resources to deal with moral free-riding?

    It seems to me that ultimately, what consequentialists are assessing are states of affairs — whether they criticize individuals for only acts as well as dispositions, or whether they assess tokens or types of acts or dispositions, doesn’t seem to show a way to solve the problem.

    That is, suppose that there are two otherwise-identical worlds — world 1 where N people have the disposition to act to bring about some collective moral good, and world 2 where N-1 people have that disposition. Let’s say that the good gets provided equally well in each world.

    Perhaps I’m misunderstanding something about valoric utilitarianism, or type-consequentialism, but it seems to me that any utilitarian is committed to evaluating those two worlds as on a par — the only morally relevant fact about the two worlds, the presence of the good, is the same.

    (In fact, in my PIRG case, world 2 might actually be better — almost everyone does the boycott, so the boycott is effective, but some people additionally help individual workers get commissions — it looks like more goodness is achieved in the free-rider world.)

    If that’s true, I’m not sure where the utilitarian gets the resources to scold the free-riding Nth person. (As noted, the utilitarian could scold the Nth person if the collective action problem caused the good not to be provided, but if the good *is* provided, well…) A Kantian, of course, could effortlessly do so via the universalization principle.

  3. Paul Gowder Says:

    I did, however, confusingly misspeak in the last sentence of the post – if utilitarians can criticize dispositions that lead to less good-filled states of affairs, then they can deal with the moral free-rider when less good results – so if that’s what your comment was directed at, you’re right, I erred.

    Is there something still objectionable about moral free-riding when it doesn’t cost the ultimate good? I’m not sure, but my Kantian intuitions say yes… isn’t there something wrong with taking unfair advantage of others’ willingness to make sacrifices, even moral ones (forgoing the individual moral good of helping the one employee in order to participate in the collective moral good of the boycott)?

  4. Richard Says:

    [A couple of background points]

    “ultimately, what consequentialists are assessing are states of affairs”

    Kind of, though I’d say it slightly differently: Consequentialists assess everything with ultimate reference to states of affairs. But insofar as it’s a moral theory, not a theory of value, what it’s assessing is people.

    “any utilitarian is committed to evaluating those two worlds as on a par”

    That’s an odd remark, since the evaluation is nothing to do with utilitarianism. Presumably anyone is going to evaluate the two worlds on a par, since they’re stipulated to contain an equal amount of the good. Utilitarianism only comes into the picture when we use this fact to inform our assessments of moral agents and their actions.

    [Back to the main issue]

    My thought was that the following is a form of utilitarianism:

    X is a good act (disposition) iff it would promote the good for everyone to do (have) X.

    That is, it’s perfectly possible to incorporate certain kinds of universalization constraints into consequentialist theories. (See, e.g., rule utilitarianism.) This seems to better capture what you have in mind than does Kantianism, since the kind of universalization involved in Kantianism is nothing to do with maximizing value.

  5. GNZ Says:

    “Is there something still objectionable about moral free-riding when it doesn’t cost the ultimate good?”

    Surely there would be something wrong with placing moral pressure on a person to make a sacrifice that they did not need to make (imagine someone being encouraged to cut off their hand because someone else had lost theirs in a heroic sacrifice).

    I think some of the issue with utilitarianism here is created by only considering the consequences in relation to the particular union negotiation (what matters is all the consequences, including, for example, long-term ones for the union). Generally a unionist utilitarian might consider free-riding a long-term threat.

  6. Paul Gowder Says:

    hmm… excellent points — time for me to rethink.

    GNZ, what about in the non-moral context? Is it right for me to take advantage of the cooperation of others in producing some non-moral good? (Suppose everyone else pays the police tax, and my skipping it won’t cause the police to go away…)

  7. GNZ Says:

    Generally you should pay the tax (based on realistic assumptions), but you could stipulate assumptions where it is OK not to pay.

    I suggest the situations that need to be excluded are for the most part the same niggling worries that make you uncomfortable with the free-riding solution.

    In utilitarianism the answers are more complex but that doesn’t mean they aren’t there.

  8. Daniel Says:

    “Can we draw insights from game theory in order to solve this kind of problem? Can moral commitment problems be solved the same way we solve strategic commitment problems?”

    Short answer: No to both.

    Longer answer: Not sure I understand why one would surmise that game theory would supply us with robust normative bases for resolving difficult moral problems. Admittedly, I don’t know much about game theory, but what I do know suggests that it has to do with strategic advisability.

    What is strategically preferable may be relevant to what is moral, but they hardly seem equivalent to me. As such, responding to some classically difficult moral commitment problem — like, say, the trolley problem — by arguing in the context of what is strategically advisable seems to sidestep the central moral problem. Sometimes, it seems to me, what is strategically advisable — what is efficient, perhaps — is unethical, and sometimes what is ethical is strategically inadvisable.

    So perhaps game theory may be a useful tool for thinking about intractable moral problems, but alone it seems unlikely to do the heavy lifting needed to actually resolve difficult moral problems.


  9. Paul Gowder Says:

    Daniel: you might (or might not) be right — I genuinely wasn’t trying to pre-suggest an answer to the question. Rather, it seems interesting that the moral problem has the same basic structure as the parallel strategic problem, and thus one naturally wonders if the solution to the strategic problem can offer any insight on the moral…

  10. GNZ Says:

    For a consequentialist, what is strategically advisable (to achieve ideal ends as opposed to selfish ends) and what is ethical seem to coincide. Of course, they probably don’t for many other ethical systems.

  11. Daniel Says:

    Understood, and I certainly don’t doubt that game theory might supply some insight on intractable moral problems. I simply doubt that it is likely to resolve such problems as satisfactorily as it might seem with regard to problems of rational choice (yes, this does commit me — happily, I might add — to the view that morality is not exhausted by the set of rational choices). If so, perhaps this explains Matt’s observation that Posner’s writings on ethics/justice/moral philosophy are, shall we say, unconvincing?

  12. Uncommon Priors » When does non-ideal political theory really exist? How moral and political theory come apart. OR: Why Gerry Cohen is Right About Everything, Part. 9823948790. Says:

    [...] concludes that their non-ideal duties are different from their ideal duties, we have just created a moral collective action problem where everyone is — dutifully! — not giving to charity just because everyone else is [...]
