Doxastic Voluntarism and some Puzzles About Rationality

There is a debate in philosophy about whether direct doxastic voluntarism is true — that is, whether people can deliberately change beliefs just by deciding to do so. (It’s a problematic position for several reasons. For example, if you believe P at the time that you decide to believe ~P, it seems like you must at some point in there hold the belief that you believe a false proposition, while believing that proposition — a contradiction, at least if you think that believing that P means believing that P is true.) The Internet Encyclopedia of Philosophy has an excellent article on it.

Many (most?) think, however, that indirect doxastic voluntarism is true: that people can change their beliefs as part of a course of action, and do so knowingly or even deliberately. For example, if you believe that social context plays a strong role in how human belief-formation actually works, then you can put yourself in a social context that will encourage a belief different from the one you have. In an extreme case, you can turn yourself over to someone else for coercive brainwashing, as in, for example, a cult, the KGB, or George Orwell’s Ministry of Truth from 1984.

This seems to raise problems for people who want to act and believe rationally, however. For if we can voluntarily do things that change our beliefs, then there might be cases where instrumental rationality conflicts with epistemic rationality. That is, it might be the case that rational action can predictably lead someone to false beliefs, so that someone has to choose between rational action and rational belief. This is not just the familiar case of motivated irrationality, where people unconsciously cling to beliefs that are in their interest (such as beliefs about the inferiority of other races, or the just-world hypothesis). Rather, people might knowingly adopt false beliefs in the interest of some good result.

The following scenario should both illustrate the problem and show that it’s realistic, not merely academic. Imagine Allen. He’s an alcoholic. He makes an all-things-considered judgment that it would be best for him to stop drinking. Let’s stipulate that this judgment is rational. He also holds the belief, which we can again stipulate is rational in the sense of being supported by the best evidence available to him after a rational search process, that the only way for someone with his psychological characteristics to stop drinking is to join Alcoholics Anonymous. Allen is also an atheist (again, rationally). However, he believes (again, rationally) that if he joins Alcoholics Anonymous, his psychological characteristics are such that he will be induced by social pressure to believe in God. Because he’s an atheist, he believes that if that belief change happens, it will be because his reasoning process has been warped by social pressure, and his new belief will be false and (more importantly) unwarranted by the evidence.

Is Allen’s decision to join AA rational? What about his belief in God, should he (as predicted) acquire one?

One way to think of this problem is to say that his belief in God will be irrational, but his plan overall will be rational. That is, one can adopt a rational plan of action (a plan of action leading to an all-things-considered best outcome, given one’s preferences) even though that plan leads to irrational beliefs (beliefs not justified by the best evidence available to one), on the ground that the irrationality of the belief isn’t central to the rationality of the plan of action.

But there are cases where there seems to be a sufficiently close connection between the belief and the plan that it’s harder to sustain that distinction.

The case just noted is different from simple self-deception. Self-deception involves adopting a plan with the very object of changing one’s beliefs, whereas in Allen’s case the belief change is a foreseen side effect of a plan adopted for other reasons. Call Allen’s kind of case incidental belief change, and call self-deception intentional belief change. Intentional belief change seems harder to deal with.

I think intentional belief change takes two forms: self-fulfilling and non-self-fulfilling self-deception. The classic case of self-fulfilling self-deception is something like the man who believes he’s lousy at attracting women and, knowing that self-confidence is attractive, consciously deceives himself into believing he’s good at attracting women, with the result that he actually becomes good at attracting women.

Non-self-fulfilling self-deception is more like Allen’s case, but with the additional stipulation that he wants to adopt the false belief in God, because he believes that adopting that false and unwarranted belief will help him achieve his goal of stopping drinking. That is, the false belief is no longer a side effect of an otherwise rational plan; it’s central to it.

What do we say about rationality in such cases? Do we give different evaluations for cases of a) incidental belief change, b) self-fulfilling self-deception, and c) non-self-fulfilling self-deception?

(cross-posted to Overcoming Bias, where it may eventually show up. maybe.)
