Saturday, May 2, 2009

Normal Beliefs: An Insanity Defense?

I am a primate running patchwork cognitive algorithms on relatively fragile wetware. We know that brains, considered as devices, fail at fairly high rates: 19% of the US population has a mental illness of some sort, with a small fraction of these cases involving serious insanity or delusion. In addition, some people simply lack certain normal abilities, such as the 7% of males who are colorblind.

I and many of my associates have extraordinarily strange beliefs. Many of these are weird facts -- e.g., that an exact copy of me exists within a radius of 10^(10^29) meters. But others are logical conclusions (e.g., that libertarian free will is incoherent) and methodological notions (e.g., that Occam's razor makes the parallelism solution to the mind-body problem astronomically improbable). These latter kinds of beliefs theoretically involve certainty or near certainty.

But given my understanding of the frailty of human beliefs in general -- to say nothing of the tempting possibility that correct knowledge is out of the question, or that all of these statements are entirely meaningless -- should I assign nonzero probability to the possibility that I'm wrong about these conceptual matters?

One answer is to say "no": We all start with assumptions, and I'm making the assumptions that I'm making. This is my attitude toward things like Bayes' theorem and Occam's razor. In the same way that my impulse to prevent suffering is ultimately something that I want to do, "just because," so my faith in math and Bayesian epistemology is simply something the collection of atoms in my brain has chosen to have, and that's that. (I wonder: Is there any sense in which it would be possible to assign probability less than 1 to the Bayesian framework itself? Prima facie, this would be simply incoherent.)

But what about other, less foundational conclusions, like the incoherence of libertarian free will? It's not obvious to me that the negation of this conclusion would contradict my epistemological framework, since my position on the issue may stem from lack of imagination (I can't conceive of anything other than determinism or random behavior) rather than clear logical contradiction. On this point itself I'm uncertain -- maybe libertarian free will is logically impossible. But I'm not smart enough to be sure. And even if I felt sure, I very well might be mistaken, or even -- as suggested in the first paragraph -- completely insane.

Can probability be used to capture uncertainties of this type? In practice, the answer is clearly yes. I've done enough math homework problems to know that my probability of making an algebra mistake is not only nonzero but fairly high. And it's not incoherent to reason about errors of this type. For instance, if I do a utility calculation involving a complex algebraic formula, I may be uncertain as to whether I've made a sign error, in which case the answer would be negated. It's perfectly reasonable for me to assign, say, 90% probability to having done the computation correctly and 10% to having made the sign error, and then weight the utility under each hypothesis by its probability. There's no mystery here: I'm just assigning probabilities over the conceptually unproblematic hypotheses "Brian got the right answer" vs. "Brian made a sign error."
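Here's a minimal sketch of that sign-error arithmetic, using the 90/10 split from above; the utility value itself is invented for illustration:

```python
# Hedged sketch: expected utility when a sign error may have negated the answer.
p_correct = 0.9      # assumed probability the algebra was done correctly
computed_u = 42.0    # hypothetical output of the utility calculation

# Under "correct," the true utility is computed_u; under "sign error,"
# it is -computed_u. Weight each hypothesis's utility by its probability:
expected_u = p_correct * computed_u + (1 - p_correct) * (-computed_u)
print(expected_u)    # 0.9*42.0 + 0.1*(-42.0) = 33.6 (up to float rounding)
```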

In practice, of course, it's rarely useful to apply this sort of reasoning, because the number of wrong math answers is, needless to say, infinite. (Still, it might be useful to study the distribution of correct and incorrect answers that occur in practice. This reminds me of the suggestion by a friend that mathematicians might study the rates at which conjectures of certain types turn out to be true, in order to better estimate probabilities of theorems they can't actually prove. Indeed, statistical techniques have been used within the domain of automated theorem proving.) When someone objects to a rationalist's conclusion about such and such on the grounds that "Your cognitive algorithm might be flawed," the rationalist can usually reply, "Well, maybe, sure. But what am I going to do about it? Which element of the huge space of alternatives am I going to pick instead?"
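One crude way to cash out that conjecture-base-rate idea, with invented counts -- the Beta(1, 1) prior and Laplace's rule of succession are my choices here, not anything proposed in the post:

```python
# Hedged sketch: estimate the chance the next conjecture of some type is true,
# given how past conjectures of that type have fared. A uniform Beta(1, 1)
# prior yields Laplace's rule of succession as the posterior mean.
def prob_next_conjecture_true(k_true: int, n_total: int) -> float:
    """Posterior mean of Beta(1 + k_true, 1 + (n_total - k_true))."""
    return (k_true + 1) / (n_total + 2)

# Invented data: 37 of 50 past conjectures of this type were eventually proved.
print(prob_next_conjecture_true(37, 50))  # 38 / 52 ~= 0.73
```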

Perhaps one answer to that question could be "Beliefs that fellow humans, running their own cognitive algorithms, have arrived at." After all, those people are primates trying to make sense of their environment just like you are, and it doesn't seem inconceivable that not only are you wrong but they're actually right. This would seem to suggest some degree of philosophical majoritarianism. Obviously we need to weight different people's beliefs according to the probability that their cognitive algorithms are sound, but we should keep in mind that those weights are themselves circular: we estimate them using the very cognitive algorithms whose soundness is in question.
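A toy version of that weighting -- a simple linear opinion pool with made-up credences and reliability weights, none of which come from the post itself:

```python
# Hedged sketch: pool several people's credences in a claim, weighting each
# person by an (itself uncertain) estimate of how sound their cognition is.
beliefs = {"me": 0.01, "peer_a": 0.30, "peer_b": 0.55}   # invented credences
weights = {"me": 0.5, "peer_a": 0.3, "peer_b": 0.2}      # invented soundness weights

pooled = sum(beliefs[p] * weights[p] for p in beliefs) / sum(weights.values())
print(pooled)  # (0.005 + 0.09 + 0.11) / 1.0 ~= 0.205
```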

How concerned should we be that, say, people who believe in parallelism of mind and body are actually correct?

1 comment:

  1. "(I wonder: Is there any sense in which it would be possible to assign probability less than 1 to the Bayesian framework itself? Prima facie, this would be simply incoherent.)"

    I don't think that this has been done elegantly, but it's definitely an important problem for multiple purposes.

    With respect to Occam assigning exponentially diminishing probability to special miracles, I tend to think of this in terms of the broad set of hypotheses to be considered: if my probabilities are to sum to 1, I can't coherently assign all my 'the world is a lie' probability mass to whatever hypothesis has been brought to my attention in the last five minutes. The code of a short program can be contained in astronomically many ways within a larger program. An indifference principle gets you going from there.
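    [Formalizing that point loosely, with notation that is assumed rather than given in the comment: suppose a length-based prior gives each program of length $\ell$ bits mass proportional to $2^{-\ell}$. Then appending $k$ unexplained "miracle" bits to a short world-program multiplies its prior by

    \[
    \frac{P(\ell + k)}{P(\ell)} = \frac{2^{-(\ell + k)}}{2^{-\ell}} = 2^{-k},
    \]

    so no single freshly-suggested conspiracy can coherently absorb more than an exponentially small share of the total probability mass.]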

    What do you mean by nonvanishing probability? After you consider cosmically many other conspiracy/world is a lie hypotheses, if you don't have a gerrymandered prior-equivalent (e.g. from standard psychological causes of confident religious belief) it looks to me like probability adjusted for impact becomes negligible for decision purposes, but what numbers are you thinking about?
