Sunday, September 20, 2009

Reflecting on Your Cognitive Algorithms

This post is largely a personal musing; the substantive content has been discussed elsewhere by many other authors.

One of the things that has most transformed the way I look at the world has been cognitive science, specifically the philosophical understanding that grounds it: seeing the brain as a collection of cognitive algorithms running on biological hardware. Focusing not just on what the brain does but on how it might do it changes the whole picture.

For as long as I can remember, I had known about the types of psychological facts commonly reported in the news: For instance, that this particular region of the brain controls this particular function, or that certain drugs can treat certain brain disorders by acting in certain ways. And it's basic knowledge to almost everyone on the planet that operations inside the head are somehow important for cognitive function, because when people damage their brains, they lose certain abilities.

While I knew all of this abstractly, I never thought much about what it implied philosophically. I saw myself largely as a homunculus, a black box that performed various behaviors and had various emotions over time. Psychology, then, was like macroeconomics or population biology: What sort of trends do these black boxes tend to exhibit in given circumstances? I didn't think about the fact that my behaviors could be reduced further to particular cognitive-processing steps inside my brain.

Yet it seems pretty clear that such a reduction is possible. Think about computers, for instance. Like a human, a computer exhibits particular behaviors in particular circumstances, and certain types of damage cause certain, predictable malfunctions. Yet I don't think I ever pictured a computer as a distinct inner self that might potentially have free will; there were no ghosts or phantoms inside the machine. Once I had some exposure to computer architecture and software design, I could imagine what kinds of operations might be going on behind, say, my text-editor program. So why did I picture other people and myself differently? My conceptions reflected how an algorithm feels from inside; I simply stopped at the basic homunculus intuition without breaking it apart.
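To make this concrete, here's a toy sketch of my own (purely illustrative, and nothing like how a real editor is implemented) of the kind of mechanical operations that might sit behind a text-editor program. What feels from the outside like a little agent "holding" my document reduces to bookkeeping on a buffer:

    # A toy text editor: its observable "behavior" reduces to a few
    # mechanical operations on a list of characters plus a cursor index.
    class ToyEditor:
        def __init__(self):
            self.buffer = []   # the document, one character per slot
            self.cursor = 0    # index where the next edit happens

        def insert(self, text):
            for ch in text:
                self.buffer.insert(self.cursor, ch)
                self.cursor += 1

        def delete(self, count=1):
            # Remove characters just before the cursor, like backspace.
            for _ in range(count):
                if self.cursor > 0:
                    self.cursor -= 1
                    self.buffer.pop(self.cursor)

        def text(self):
            return "".join(self.buffer)

    editor = ToyEditor()
    editor.insert("Hello, world!")
    editor.delete(6)           # backspace over "world!"
    editor.insert("brain!")
    print(editor.text())       # prints: Hello, brain!

Nothing in this sketch calls for an inner homunculus; every behavior is just a data structure being updated step by step.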

Picturing yourself as a (really complicated and kludgey) computer program casts life in a new light. Rather than simply doing a particular, habitual action in a particular situation, I like to ask: what sort of cognitive algorithm might be causing this behavior? Of course, I rarely have good answers -- working them out is what cognitive science is for -- but the fact that the question is answerable in principle gives a new angle on my own psychology. It's perhaps like the Buddhist notion of looking at yourself from the outside, distanced from the in-the-trenches raw experience of an emotion. And, optimistically, such a perspective might suggest ways to improve your psychology, perhaps by adopting new cognitive rituals. That is, of course, what self-help books have done for ages; the computer analogy (e.g., "brain hacks" or "mind hacks," as they're sometimes called) is just one more metaphor for describing the same thing.

Related is the realization that thought isn't a magical, instantaneous operation but, rather, requires physical work. Planning, envisioning scenarios, calculating the results of possible actions, acquiring information, debating different hypotheses about the way the world works, proving theorems, and so on are not -- as, say, logicians or economists often imagine them -- immediate and obvious; they involve computational effort that requires moving atoms around in the real world. For instance, the fact that you considered an option and then disregarded it is not "wasted effort," because there's no way to figure out the right answer other than actually doing the calculation. Similarly, you're not at fault for failing to know something or for temporarily holding a misconception; the process of acquiring correct (or at least "less wrong") beliefs about the world requires substantial computation and physical interaction with other people. Changing your opinions when you discover you're in error isn't something to be embarrassed about -- it's an intrinsic step in the very algorithm of acquiring better opinions.
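As a loose illustration of this point (my own toy example, not a model from cognitive science), consider a brute-force search for the best option: every candidate, including the ones ultimately rejected, has to be evaluated, and those "wasted" evaluations are exactly what justify the final choice.

    # Finding the best option requires evaluating every candidate.
    # The rejected ones aren't wasted work; they're how the answer is found.
    def evaluate(option):
        # Stand-in for some costly deliberation, e.g. simulating an outcome.
        return -(option - 7) ** 2  # options nearer 7 score higher

    def choose_best(options):
        best, best_score = None, float("-inf")
        evaluations = 0
        for option in options:
            score = evaluate(option)   # each step costs real computation
            evaluations += 1
            if score > best_score:
                best, best_score = option, score
        print(f"Evaluated {evaluations} options to justify choosing {best}")
        return best

    choose_best(range(20))  # Evaluated 20 options to justify choosing 7

The nineteen discarded options weren't errors; evaluating them was the only way to learn that 7 was best.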
