Sunday, September 20, 2009

Reflecting on Your Cognitive Algorithms

This post is largely a personal musing; the substantive content has been discussed elsewhere by many other authors.

One of the things that has most transformed the way I look at the world has been cognitive science, specifically the philosophical understanding that grounds it: Seeing the brain as a collection of cognitive algorithms running on biological hardware. The focus not just on what the brain does but on how it might do it is fundamentally transformative.

For as long as I can remember, I have known about the types of psychological facts commonly reported in the news: For instance, that a particular region of the brain controls a particular function, or that certain drugs can treat certain brain disorders by acting in certain ways. And it's basic knowledge to almost everyone on the planet that operations inside the head are somehow important for cognitive function, because when people damage their brains, they lose certain abilities.

While I knew all of this abstractly, I never thought much about what it implied philosophically. I saw myself largely as a homunculus, a black box that performed various behaviors and had various emotions over time. Psychology, then, was like macroeconomics or population biology: What sort of trends do these black boxes tend to exhibit in given circumstances? I didn't think about the fact that my behaviors could be reduced further to particular cognitive-processing steps inside my brain.

Yet it seems pretty clear that such a reduction is possible. Think about computers, for instance. Like a human, a computer exhibits particular behaviors in particular circumstances, and certain types of damage cause certain, predictable malfunctions. Yet I don't think I ever pictured a computer as a distinct inner self that might potentially have free will; there were no ghosts or phantoms inside the machine. Once I had some exposure to computer architecture and software design, I could imagine what kinds of operations might be going on behind, say, my text-editor program. So why did I picture other people and myself differently? My conceptions reflected how an algorithm feels from inside; I simply stopped at the basic homunculus intuition without breaking it apart.

Picturing yourself as a (really complicated and kludgey) computer program casts life in a new light. Rather than simply doing a particular, habitual action in a particular situation, I like to reflect: What sort of cognitive algorithm might be causing this behavior? Of course, I rarely have good answers -- studying that is what cognitive science is for -- but the fact that the question is soluble in principle gives a new angle on my own psychology. It's perhaps like the Buddhist notion of looking at yourself from the outside, distanced from the in-the-trenches raw experience of an emotion. And, optimistically, such a perspective might suggest ways to improve your psychology, perhaps by adopting new cognitive rituals. That is, of course, what self-help books have done for ages; the computer analogy (e.g., "brain hacks" or "mind hacks," as they're sometimes called) is just one more metaphor for describing the same thing.

Related is the realization that thought isn't a magical, instantaneous operation but, rather, requires physical work. Planning, envisioning scenarios, calculating the results of possible actions, acquiring information, debating different hypotheses about the way the world works, proving theorems, and so on are not -- as, say, logicians or economists often imagine them -- immediate and obvious; they involve computational effort that requires moving atoms around in the real world. For instance, the fact that you considered an option and then disregarded it is not a "wasted effort," because there's no other way to figure out the right answer than actually to do the calculation. Similarly, you're not at fault for failing to know something or for temporarily holding a misconception; the process of acquiring correct (or at least "less wrong") beliefs about the world requires substantial computation and physical interaction with other people. Changing your opinions when you discover you're in error isn't something to be embarrassed about -- it's an intrinsic step in the very algorithm of acquiring better opinions.
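
To make the "no wasted effort" point concrete, here's a toy sketch (in Python; my own illustration, not from any of the sources above, with a made-up example of choosing a commute): even the simplest exhaustive search has to spend work evaluating every candidate it will eventually reject before it can know which one is best.

    # Toy illustration: finding the best option requires evaluating the options
    # you end up discarding -- the rejected evaluations are part of the algorithm,
    # not wasted effort. The routes and scores below are made up.

    def best_option(options, score):
        """Return the highest-scoring option by evaluating every candidate."""
        best, best_score = None, float("-inf")
        for option in options:
            s = score(option)  # work spent even on options we ultimately reject
            if s > best_score:
                best, best_score = option, s
        return best

    # Hypothetical example: pick a commute by (negative) estimated minutes.
    routes = {"highway": -35, "back roads": -28, "train": -41}
    print(best_option(routes, lambda r: routes[r]))  # -> back roads

The loop can't skip the highway or the train and still know that the back roads win; doing the comparison just is the computation.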

Saturday, September 12, 2009

Pain-free Animals?

The current Vegan Outreach newsletter contains a link to a New Scientist piece (as well as an unfortunate editorial) based on a fascinating article: "Knocking Out Pain in Livestock: Can Technology Succeed Where Morality Has Stalled?" by Adam Shriver. The moral urgency of such a proposal seems to me obvious, so I was most interested in the discussion of its scientific plausibility.

Shriver presents two example proposals for what might be done. First, we might
create knockouts of other mammals (cows and pigs for starters) lacking the AC1 and AC8 enzymes. Interfering with the cAMP cycle in the brain reduces the affective dimension of chronic or persistent pain, rather than pain full stop, but this would still be an improvement over current circumstances. If we could eliminate the sensitization that occurs as a result of painful or traumatic experiences, the animals would still be better off than they are now.
Secondly,
Zhou-Feng Chen and colleagues searched the Allen Brain Atlas to find genes that were highly expressed in the ACC but not other areas of the brain [29]. One strong candidate was the peptide P311. The researchers created knockout mice lacking the expression of P311 and found that heat and mechanical sensitivity were normal in the animals. However, they then performed a conditioned place aversion test on the animals and found that the knockouts no longer demonstrated the conditioned place aversion caused by formalin injections, in stark contrast to control rats. Thus, at first glance, it appears that knocking out P311 in mice strongly diminishes the affective dimension of pain while keeping acute responses intact.

Furthermore, P311 is likely to play a similar role in all mammals (Chen, personal communication), so one presumably could engineer other mammals that have a reduced affective dimension of pain while maintaining the sensory dimension of pain.
Since I'm even more interested in wild-animal suffering than farm-animal suffering, in view of the vast difference in numbers of animals involved, my immediate question was whether similar techniques might one day be applicable there. Applying them in the wild would be a lot trickier, because evolution produced the badness of pain for a reason. Shriver mentions this concern:
Since it seems likely that the affective dimension of pain played some role in determining the evolutionary fitness of organisms, we might question whether knockout livestock could really survive up through the point where they are normally slaughtered. However, it appears that the experimental rats were able to survive without complication at least in their cages (Chen, personal communication). This would be a good model for sows or veal calves who spend most of their lives confined in small pens where they can’t do much of anything that would injure or otherwise harm themselves.
Producing genetically fit wildlife without pain might require not just knocking out pain but replacing the "pain" - "pleasure" axis with a "less pleasure" - "more pleasure" axis, which could be much more difficult.

I mentioned that Shriver's proposal seems obviously valuable from my perspective, but unfortunately the general public doesn't necessarily agree. As the New Scientist article notes:
[Alan] Goldberg also contends that public attitudes may make pain-free livestock a non-starter. He and colleague Renee Gardner conducted an online survey on the use of pain-free animals in research and found little public support, even among researchers who experiment on animals (Alternatives to Animal Testing and Experimentation, vol 14, p 145).
This underscores the importance of public outreach to change hearts and minds about wild-animal suffering and how it could be prevented.