Monthly Archives: October 2008

Lemmata

Shocking fact: computers can’t do everything.

I know, I know, all of those years living in delusion. But get up off the floor, it’s not that bad. You see, what a computer can’t do tells us at least as much about the computer as what it can do. Actually, to be more exact, what a computer can’t do is the exact opposite (the complement) of what a computer can do, so the two things delineate the same boundary. By knowing the limitations on a computer’s power, I feel like I know much more about it than if you gave me a list of 100 things the computer actually can do.

For example, an extremely simple type of computer called a finite state automaton can be devised to do things like recognize all sequences of letters that end with the letter ‘z’, or that have five ‘a’s in a row. But, strangely, a finite state machine (that’s another name for finite state automaton) cannot do something so simple as telling you whether that sequence of letters has the same number of ‘a’s and ‘b’s in it. Weird.
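To make that concrete, here's a minimal sketch (in Python, with function names of my own invention) of why the first task fits in a finite state machine and the second doesn't:

```python
# A finite state machine has only finitely many states, so it can track
# "does the string end in 'z'?" but not an unbounded count of letters.

def ends_in_z(s):
    """A two-state DFA: state 0 = 'last char was not z', 1 = 'last char was z'."""
    state = 0
    for ch in s:
        state = 1 if ch == 'z' else 0
    return state == 1

def equal_ab(s):
    """Equal numbers of 'a's and 'b's -- NOT a finite-state computation,
    because the counter below can grow without bound."""
    count = 0
    for ch in s:
        if ch == 'a':
            count += 1
        elif ch == 'b':
            count -= 1
    return count == 0
```

The counter in `equal_ab` is the giveaway: no fixed, finite set of states can remember an arbitrarily large running difference, which is exactly what the pumping lemma formalizes.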

For each more sophisticated type of computer, there is a well-defined limit to its ability to solve problems. Usually this limit is demonstrated with the help of a theorem called a lemma (the pumping lemma, in the finite-state case). These lemmata are learned and utilized by computer science students the world over. An interesting question resulting from this observation is whether the human mind has a similar limit. In other words, is there a problem that a human being, using reason alone, could not solve even with unlimited amounts of time and an infinite ability to remember things (and an infinite desire to just sit there crunching numbers)?

I’ve been discussing what’s known as computability theory. A related field, known as computational complexity theory, has similar implications. Rather than focusing on what a computer could theoretically compute with infinite time and memory, complexity theory focuses on how much time it takes to solve different problems. It turns out that many problems fall into a few basic classes, namely Polynomial (P), Non-deterministic Polynomial (NP), and NP-complete. (For WAY more than you ever wanted to know about this topic, check out Stanford’s Complexity Garden and “Petting Zoo”.)
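For a taste of why NP-complete problems are considered hard, consider Subset Sum, a classic NP-complete problem: checking a proposed answer takes only a moment, but the only known general approach tries exponentially many candidates. A rough sketch (the function names are my own):

```python
from itertools import combinations

def verify(subset, target):
    # Checking a proposed certificate takes linear time -- the "NP" part.
    return sum(subset) == target

def subset_sum(nums, target):
    # Brute force over all 2^n subsets -- exponential time as n grows.
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if verify(combo, target):
                return combo
    return None
```

Four numbers means only sixteen subsets, but forty numbers means over a trillion; that gap between easy verification and explosive search is the heart of the P-versus-NP question.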

I wouldn’t be at all surprised if human rationality, as its own special type of computer, runs into similar limits of complexity and computability. In “So Open It’s Closed” we ran into the difficulty of proving that genocide is bad, and yet the vast majority of human beings (myself definitely included) would agree that it is. What would happen if everybody took that lack of proof to heart and just went about rampaging and slaughtering the ethnic group that most recently got on their nerves?

I consider this wall—against which human rationality can pound its head forever but never demonstrate anything—to be good evidence that our rationality is bounded. Not just in the sense of making expedient assumptions about things in order to save time and energy, but rather in the sense of what we can figure out over the long haul if we really put our minds to it. There’s a limit. We can see beyond it to a wider world around us, but we can’t quite escape from our complexity class prison, so most people just make long-term assumptions, assumptions which are vital to the development of both individuals and civilizations.

Revealed religions claim a source of knowledge—a means of proof—other than human rationality. Western philosophy and folk argument frequently make claims that cannot be substantiated rationally, and yet are still believed in. Ultimately, whether people like it or not, beliefs in something nobody can prove keep us from destroying each other!

Belief that people “are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness” would have steered humanity around such profound disasters as the Holocaust, Stalin’s Gulag death camps, the Chinese Cultural Revolution, and the needless slaughter of European conquest in the Americas. I see this as support for the validity of those who teach such morals, especially Christ and the prophets, and for the necessity of seeking out “[our] Creator” in order to reach beyond the limits of our own rationality.

There’s much to see, and I hope to see it. No finite state machine ever aspired so.

Priority

In Bayesian statistics [based on those opening words I know I’ve already lost 50% of you] there is a concept of a priori and a posteriori beliefs. A prior probability distribution (also simply called a prior) indicates that prior to making an observation I already have some beliefs about the nature of the system being observed. For example, when you pick up a coin with the intent to flip it, in your mind you expect that the likelihood of heads and tails will be more or less equal. You have a uniform prior because you consider the probability of heads to be 50% and the probability of tails to be 50%.

Other sorts of priors are possible. You might give your favorite sports team higher odds of winning the national championship than another team, simply because they’re your favorite (or, alternately, because they’re awesome!). You could suppose the likelihood of a Democratic candidate winning the presidency to be much higher than that of a Republican candidate due to the unpopularity of the incumbent party. Priors are essentially preexisting beliefs, and they get us pretty far in many situations. But usually we want to leave the blissful world of prior beliefs and make some observations.

Depending on the strength of your prior, even after seeing a coin land heads up five thousand times in a row (something like that happened in Rosencrantz and Guildenstern Are Dead, which is one of the few things I actually got myself to learn in my statistics class) you might still pretty much believe that the coin was a fair coin; or, you might have already adjusted your beliefs and concluded that the coin is extremely biased in favor of heads. In either case, whether your new set of beliefs is identical to your prior beliefs, or whether they are radically different, you are now dealing with a posterior distribution, or posterior for short. A posterior is what you believe after examining your prior assumptions in the light of new evidence.
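The standard textbook version of this coin example uses a Beta prior, which updates by simple addition: start with Beta(a, b), observe h heads and t tails, and the posterior is Beta(a + h, b + t). A sketch (the particular prior strengths are made up for illustration):

```python
# Beta-Binomial updating: the posterior mean for the coin's
# heads-probability after observing h heads and t tails.

def posterior_mean(a, b, heads, tails):
    """Prior Beta(a, b) -> posterior Beta(a + heads, b + tails)."""
    a2, b2 = a + heads, b + tails
    return a2 / (a2 + b2)

# A weak uniform prior, Beta(1, 1), is swamped by 5000 straight heads:
weak = posterior_mean(1, 1, 5000, 0)            # nearly 1: the coin looks rigged

# An absurdly strong fair prior, Beta(10**7, 10**7), barely budges:
strong = posterior_mean(10**7, 10**7, 5000, 0)  # still about 0.5: "fair coin"
```

The same evidence, filtered through different priors, yields opposite conclusions, which is exactly the Rosencrantz-and-Guildenstern predicament.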

Whether we like it or not, we humans tend to operate in this Bayesian manner. Try as we might, we bring prior beliefs to the table in any situation. Whether or not you had tabula rasa at birth, you certainly don’t now, and you are affected by previous experiences. We update our beliefs in the light of new evidence. A stubbornly held belief might never really change, which indicates a nigh-unto infinite strength of prior relative to updated evidence. But in many situations we eventually acknowledge the weight of the evidence and change our beliefs. Or, you might say, our beliefs change, and they change us in turn.

So this guarantees a neverending march towards truth, as long as people allow evidence to influence them, right? Well, maybe. That assumes that your observations of the system are really representative of the nature of the system, and that they represent the entire system and not just one part of it. It assumes that the system even is something akin to what we think it is, or that we model the system in a way that is at least remotely connected to its actual makeup. And it assumes that we didn’t just by chance get a very strange sample that pushes our beliefs in the wrong direction.

In other words, there are lots of pitfalls for the would-be Bayesian perfection of knowledge. And to think that we make so many choices based on such an imperfect system of prior and posterior belief, perception, and evaluation!

So Open It’s Closed

There is a point at which an open mind becomes so open that it closes in on itself. At least, that’s what I think. At some point, the willingness to consider any and every thought as at least initially equivalent can wash out any willingness to evaluate the accuracy of those viewpoints. At that point, the open thinker steps quietly away from the great stone fortress of knowledge and toward the art deco pavilion of perception, where the seeming unapproachability of truth leads some to abandon its pursuit.

Rothesay Pavilion is the finest of Scottish Art Deco (cc-by:yellow book ltd)

I try almost to the point of obsession to be open-minded. To me this means to never dismiss or discount anything simply out of hand, due to unfounded prior beliefs or biases. Everything, in theory at least, deserves a fair hearing. This is effective at avoiding the evils of ignorance and prejudice.

The more layers you are convinced divide you from an understanding of truth, the more difficulty you have committing to any one viewpoint. This acts as a hedge against becoming convinced of falsehoods. What happens, though, when I begin weighing one viewpoint, say something repulsive like “Genocide is good,” as an equal alongside something else like “It would be good to find a cure for HIV infection”? Easy, everyone knows that genocide is actually bad, so we drop that one right away.

But wait, isn’t that simply dismissing things out of hand? Perhaps there is some sort of redeeming quality of the pro-genocide position??? Sure, sure, in theory, everything is possible, so I suppose conceivably there could be support for the “genocide is good” viewpoint (though I seriously doubt it). But don’t we already have a good idea that genocide is bad? It seems like we’ve got a heuristic for that already. You know, something along the lines of: actions that lead to unnecessary suffering should be discouraged. No, no, that’s not it. I mean, we didn’t have to go in and do a thorough study of genocide and its demographic and societal implications (Does increased ethnic homogeneity decrease frictions internal to a nation? Does reduced population benefit the survivors by decreasing competition for limited natural resources?), we didn’t have to interview the perpetrators (How do you believe committing acts of genocide has helped you achieve your goals in life?) or the victims, nor did we have to commission a series of essays in memory of some obscure academic who died twenty-three years ago to explore it all from the Marxist angle. We didn’t have to form a blue ribbon commission to aggregate all of the disparate sources of information and come to one final determination of whether genocide was good or bad. No, we just sort of figured it out, based on something more like this: murder is bad, and genocide equals lots of murder, therefore genocide is lots of bad.

So where does that “murder is bad” thing come from? Well, everyone except for hardened mobsters and twisted modernist philosophers seems to think it’s a bad thing, so shouldn’t I think so, too? No. At least, not for that reason. That’s just following the crowd, the bandwagon fallacy. How about this: murder (and, by implication, genocide) is bad because it causes another person to cease to live, and in particular it does so contrary to their will, and we all know that depriving somebody of life, especially when they don’t want you to, is bad.

And why is that bad? I mean, is there some sort of imperative that should make me regret that? Everyone in the Western liberal tradition seems to agree that that is a bad thing, but no thought leader today ever gives a reason why.

It can’t be the fact that murder is a nigh-unto-universal taboo amongst all cultures, places, and times. No, because a large number of people holding a particular belief does not make that belief more or less true. The truth of an idea is independent of whether it’s believed in or not. So why then is it wrong to murder?

I wouldn’t want somebody else to murder me. Maybe that’s a basis for objecting to murder? But why should my desire to avoid being murdered really mean that I should not murder another?

Now wait a minute. Not doing something to another person if I wouldn’t want that person to do the same thing to me sounds awfully familiar. It’s very Golden Rule-ish, isn’t it? Seems that there was some popular moralist a few millennia ago who argued in favor of that position, but that’s it, just another viewpoint to be dealt with from a distance but most definitely not believed in.

Do you see, now, how being so obsessively open has backed me into a corner? Insisting on evaluating every viewpoint, I am left now with only two possibilities: either genocide and murder are wrong, or they aren’t. Either I attribute some sort of a priori wrongness to these acts of violence, or I am forced to conclude that genocide is morally neutral—which, ironically, would be a very morally non-neutral assertion to make.

Castle Bodiam is an impregnable stone fortress! (cc-by:Misterzee)

So what will it be? Should I follow my obsession with open-mindedness to the brink of human depravity? Or should I believe that there really was a God who said “Thou shalt not kill”? Should I simply wave my hand at Auschwitz as if it carried the same moral significance as a supermarket? Or should I accept the teachings of one Jesus who said “All things whatsoever ye would that men should do to you, do ye even so to them”?

When rationality cannot prove that naked evils like systematic mass murder are actually bad, you’re forced to ask: What are the limits of rational thought? If rationality fails, then where else can we turn?