Sunday, August 26, 2012

Pain vs. suffering and animals vs. humans

People sometimes ask me whether I make a distinction between "pain" and "suffering." The answer is "yes, I do," although one reason this might not be clear is that I have the following quotation from George Orwell at the top of my page called "On the Seriousness of Suffering":
Nothing in the world was so bad as physical pain.
Katja Grace wrote a blog post based on this quote, and in the comments, I made the following clarification:

First, I don’t entirely agree with Orwell’s choice of words, but I included the quote as he wrote it for the sake of readability. In particular, as many have pointed out, what matters is not “pain” directly but “suffering,” i.e., the response that “this feels really awful and I want it to stop.” The commenters raised several examples where pain itself isn’t aversive: pain asymbolia, masochism, people given morphine, etc., not to mention self-cutting and other things people do in order to release endorphins/opioids to make themselves feel better.
I would also omit Orwell’s word “physical,” because mental pain can be just as bad.

Pain asymbolia is the clearest proof that pain and suffering are distinct, because unlike masochism, where one can imagine that pleasure chemicals merely outweigh pain signals, in pain asymbolia the quale of pain itself is not aversive.

This raises a broader question: what gives valence to qualia? I think the details of how this happens are largely unknown, but presumably there are brain processes that "paint" a suffering gloss onto experiences, just as certain brain processes paint a hedonic gloss onto pleasure. It's these painting operations that I count as suffering and that I want to reduce.

A related theme is the classic distinction between nociception and conscious pain. As Jane A. Smith explains in "A Question of Pain in Invertebrates":
Invertebrates, it seems, exhibit nociceptive responses analogous to those shown by vertebrates. They can detect and respond to noxious stimuli, and in some cases, these responses can be modified by opioid substances. However, in humans, at least, there is a distinction to be made between the "registering" of a noxious stimulus and the "experience" of pain. In humans, pain "may be seen as the response of the whole awake conscious organism to noxious stimuli, seated ... at the highest levels in the central nervous system, involving emotional and other psychological components" (Iggo, 1984). Experiments on decorticate mammals have shown that complex, though stereotyped, motor responses to noxious stimuli may occur in the absence of consciousness and, therefore, of pain (Iggo, 1984). Thus, it is possible that invertebrates' responses to noxious stimuli (and modifications of these responses) could be simple reflexes, occurring without the animals being aware of experiencing something unpleasant, that is, without "suffering" something akin to what humans call pain.
And from Antonio Damasio, The Feeling of What Happens, as excerpted here:
Would one or all of those neural patter[n]s of injured tissue be the same thing as knowing one had pain? And the answer is, not really. Knowing that you have pain requires something else that occurs after the neural patterns that correspond to the substrate of pain – the nociceptive signals – are displayed in the appropriate areas of the brain stem, thalamus, and cerebral cortex and generate an image of pain, a feeling of pain.
So when I ask whether insects might be able to suffer, I don't mean just whether they can react against physical injury and learn to avoid it in the future. I'm asking whether they can perceive this injury as something that is happening to them and that they want to have stopped. I agree that the jury is very much still out on this question. If it seems as though I believe otherwise, it's because I'm trying to track the expected value rather than the most likely point estimate.

Now, given that suffering is different from pain and that suffering can involve strong non-physical emotional components, does this mean animals matter less than we might think because they don't suffer in high-level mental ways?

First, it's unclear whether the claim is true that animals have substantially less sophisticated mentation, at least for "higher" animals like mammals. Animals show many of the psychopathologies that humans do and are used as models for depression when testing drugs. Elephants have death rituals. Crows appear to go sledding for fun. Marc Bekoff, Jonathan Balcombe, and other ethologists have written numerous books documenting the complex emotional lives of mammals, birds, fish, and even octopuses.

But, suppose it is true that non-human animals don't have a similar degree of psychological depth to their experiences. It's not obvious that this means they suffer less intensely. Maybe the brain applies normalization to its experiences, so that it can appropriately encode relative priorities of various drives without using excessive amounts of energy/storage. For example, say a mouse's suffering is between 0 and -10, while a human's would be between 0 and -50 due to emotional depth. However, maybe the human brain doesn't care about perfect granularity among all of the values between 0 and -50; it only needs a sufficient granularity to make the right tradeoffs, so it downplays the importance of physical pain. In other words, a physical pain that would have been -10 for the mouse might be -2 for the human, because the human has so much else to worry about. This is pure speculation, and I wouldn't rest my argument on this point, but it seems possible. This discussion also gets into philosophical issues about how we want to care about and measure emotional intensity, which lie beyond the scope of the current post.
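The normalization speculation can be made concrete with a toy sketch. Everything below is invented purely for illustration: the `allocate` function, the concern categories, and every number are hypothetical devices chosen to reproduce the -10 vs. -2 figures from the paragraph above, not anything drawn from neuroscience.

```python
# Toy model of the normalization speculation (all numbers invented).
# Idea: each brain encodes the badness of its concerns on a bounded
# scale, splitting that scale among whatever it needs to trade off.

def allocate(scale_floor, weights):
    """Divide a species' hedonic range [0, scale_floor] among concerns
    in proportion to invented 'importance' weights; return the
    worst-case (most negative) value assigned to each concern."""
    total = sum(weights.values())
    return {concern: scale_floor * w / total
            for concern, w in weights.items()}

# A mouse's whole negative range might be occupied by physical pain.
mouse = allocate(-10, {"physical pain": 10})

# A human's wider range must also encode rich emotional concerns,
# so the same worst-case injury gets a smaller share of the scale.
human = allocate(-50, {"physical pain": 2, "grief": 20,
                       "injustice": 15, "fear of death": 13})

print(mouse["physical pain"])   # -10.0
print(human["physical pain"])   # -2.0
```

Under these made-up weights, the identical worst-case injury lands at -10 on the mouse's scale but only -2 on the human's, because the human's scale also has to make room for grief, injustice, and the rest.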

Finally, what if animals do suffer less, even after taking account of the brain's normalization processes? Well, I guess I would ask, How much less do they suffer? I don't think it's orders of magnitude less, and if not, then the basic calculations showing that, at the margin, animal welfare takes priority over human welfare would remain. Suppose you were a chicken being scalded and drowned alive in a boiling defeathering tank. How much less bad would this experience be if you didn't have broader thoughts about the end of your life, the injustice of your situation, how much you'll miss your friends, etc.? I suspect that the raw physical pain would overwhelm these subsidiary thoughts during the moment, and even if not, I don't think the higher-level thoughts would be 10 times stronger than the raw pain.
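The structure of that marginal comparison can be sketched with placeholder numbers. None of the figures below are real estimates from this post or elsewhere; the per-dollar quantities are hypothetical inputs chosen only to show why a modest suffering discount does not flip the conclusion.

```python
# Back-of-the-envelope structure of the marginal comparison
# (every figure is an invented placeholder, not an estimate).

suffering_discount = 1 / 5        # suppose an animal suffers 5x less than a human
animals_helped_per_dollar = 10    # hypothetical animals spared per marginal dollar
humans_helped_per_dollar = 0.001  # hypothetical comparable human-welfare impact

animal_value = animals_helped_per_dollar * suffering_discount  # 2.0
human_value = humans_helped_per_dollar * 1.0                   # 0.001

# Only a discount of several orders of magnitude -- not a factor of
# 5 or 10 -- would make human_value exceed animal_value here.
print(animal_value > human_value)   # True
```

The point is not the particular numbers but the shape of the argument: so long as the discount is a small constant factor while the per-dollar difference is orders of magnitude, the conclusion survives.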

Moreover, there are many times when humans may in fact suffer less because of their understanding of the situation. Humans enduring a bout of food poisoning can know that the agony will end after a day or two and that their friends and family will help them in the meantime. Animals going through the same experience may have no idea what's happening to them, whether it will end, or what will become of their lives.

The points discussed above are fascinating to ponder, and it's valuable to hear from other people which of their own experiences they've found most unpleasant. That said, we modern humans live extremely comfortable lives compared with factory-farmed or wild animals, so it isn't surprising that most of our worst memories may be of purely emotional injury. In any event, regardless of where we settle on the question of the relative magnitudes of animal and human pain, physical and psychological pain, I don't think it's likely to tip the balance of our calculations about where our dollars and hours will do the most good.

11 comments:

  1. > I'm asking whether they can perceive this injury as something that is happening to them and that they want to have stopped

    I agree this is the central question.

    > I suspect that the raw physical pain would overwhelm these subsidiary thoughts during the moment, and even if not, I don't think the higher-level thoughts would be 10 times stronger than the raw pain.

    This also seems true--in the moment.

    However, humans suffer a lot throughout their lives, and I do not think that the majority of their suffering is at the moment of their death, even if it is a relatively bad one such as drowning.

    Agree that animals that live in painful circumstances such as factory farm cages probably suffer a lot throughout their lives.

    > we modern humans live extremely comfortable lives compared with factory-farmed or wild animals, so it isn't surprising that most of our worst memories may be of purely emotional injury

    People often talk of wanting to go back to hunter-gatherer days. Perhaps they are wrong and are falling victim to the "grass is always greener" effect, but perhaps we should give their professed desires some weight.

    ReplyDelete
  2. Hi Brian. Great post! One comment; you write:

    "I'm asking whether they can perceive this injury as something that is happening to them and that they want to have stopped"

    You seem to be construing the difference between suffering and mere pain as involving higher mental processes. However, I don't think the difference should be construed in such a way, if your goal is to identify the mental properties that we as utilitarians care about and to distinguish them from other mental properties with which they might be conflated. Instead, I'd say this.

    Experiences called ‘pains’ typically involve a number of different properties. In particular, such experiences typically involve a property related to the location of the experience, a property related to the "quality" of the experience, and a property related to the "badness" of the experience. As science shows, however, such properties may come apart, and an experience may have some of the typical properties of pain without having others. We may then reserve the word “suffering” to designate typical experiences of pain that include the property of “badness”. Still, even here it is not suffering as such that we would care for; rather, it is the specific property that all painful experiences must have in order to count as experiences of suffering. We may call this property ‘phenomenal badness’, ‘negative phenomenal valence’ or simply ‘unpleasantness’.

Once we define our terms in this way and use them to express our moral concerns, it is clear that we are attaching no moral importance to higher mental processes. For instance, we don’t care whether a creature is “aware” of having an unpleasant experience, in the sense of being able to cognitively recognize it as such. Nor do we care whether the creature experiences this episode “as something that is happening to [it]”. Rather, all we care about is that the creature can feel the badness of the experience.

    Of course, under certain circumstances, only creatures in possession of certain higher mental faculties will experience unpleasantness. But in this case, these higher faculties will be mere causes of the experience we care about; they will not be themselves the object of our moral concern. Moreover, as you note, under other circumstances only creatures that lack such faculties will suffer. So, not only are higher mental faculties intrinsically unimportant; instrumentally, it is unclear whether we should on the whole care more about creatures that have them than about creatures that lack them.

    ReplyDelete
  3. Thanks for the comments, Andy and Pablo!

    It's true that people sometimes claim to wish to go back to hunter-gatherer days, but few of them actually seem to do it. Is that merely inertia preventing them from doing what would be in their long-term interest? I do think people are fairly mistaken about how pleasant hunter-gatherer living would be, especially when we get into issues like high infant mortality, disease, famine, inter-tribe warfare, etc.

Thanks for the clarification, Pablo! Yes, what you said is roughly what I mean. I guess I would just add that it's unclear which higher cognitive processes are truly superfluous and which are crucial for consciously feeling the badness of suffering. Presumably things like language or numeracy are irrelevant, but maybe some amount of awareness of oneself is not? It's hard to know without further study.

    ReplyDelete
You mention "qualia" and "experiences" here. I thought you didn't believe there were any qualia or experiences; have you changed your mind on the issue, or did I misunderstand you earlier?

    ReplyDelete
  5. I can make sense of certain understandings of "qualia" -- e.g., Gary Drescher's comparison of qualia with gensyms from Lisp. It's still meaningful to talk about particular kinds of sensations that an organism can recognize when we understand them in a non-confused way.

    Similarly, "experiences" correspond to particular cognitive processes that are very real, even if they're purely mechanical and third-person.

    In any event, I think I'm at greater risk of misunderstanding your position, since you're much better read in philosophy of mind than I. :)

    ReplyDelete
  6. It seems to me that Brian's position is best described as a form of reductive physicalism, rather than eliminativism. So qualia do exist, there is something it's like to be in pain, etc., but all these experiences are ultimately reducible to physical processes in the brain (or some other suitable substrate).

    Jesper (or anyone else), do you think the debate over qualia and the mind-body problem has interesting moral implications? Does something important follow from some of the rival views, but not from the others?

    ReplyDelete
  7. To clarify: I don't mean the very useful scientific debate about the neural correlates of consciousness, but the philosophical debate about whether there is an explanatory gap between the mental and the physical.

    ReplyDelete
Brian: Hehe, interestingly I still do not understand Gary's formulation. But I might take some time thinking about it later. If I do, I'll write a mail/comment.

    Pablo:
    Well, as I see it there are two philosophical issues. First there is the typical debate about physicalism. Second, there is the debate about eliminativism.

I have been inclined to think that the first has no significance at all (though I am not completely sure; it might affect some methodological issues).

Now, I am not even sure whether the eliminativism issue matters. If you, like me, believe that only qualia can have any significant value, then scenarios lacking qualia lack all value and thus factor out of expected utility calculations. Thus the answer to the philosophical debate shouldn't matter.

    On the other hand, even if you do not have this belief, you would probably think the things which would correlate with qualia if qualia existed have value. Thus neither here should the answer matter.

    (well, it strikes me also that if you value other things than qualia also, these might get higher relative value if you believe eliminativism is likely)

    ReplyDelete
  9. This comment has been removed by the author.

    ReplyDelete
"If you, like me, believe that only qualia can have any significant value, then scenarios lacking qualia lack all value and thus factor out of expected utility calculations. Thus the answer to the philosophical debate shouldn't matter."

    Sorry, I don't understand this. If I believe that only qualia could have value and I become persuaded that there are no qualia, my conclusion should be that moral nihilism is true: nothing has value. This conclusion seems extremely important. So the philosophical debate about eliminativism does seem to have interesting implications.

    [The comment which I originally posted had two additional paragraphs which now seem to me to be wrong.]

    ReplyDelete
  11. I hope Pablo can better classify my position than I can. All I can do is describe my thoughts on the matter in layman's terms until I'm able to grok the thick philosophy-of-mind terminology.

    I think what Jesper was saying in the quote that confused Pablo was that we have a Pascalian wager for the existence of qualia, because if qualia don't exist, nothing matters, but if they do exist, things do matter, so we should act as though they do exist. Clarification aside, I don't agree with this conclusion because of what Jesper said: "you would probably think the things which would correlate with qualia if qualia existed have value."

    I don't know if the explanatory-gap question has practical significance, but I think there are numerous other philosophical -- and not just scientific -- questions to be hammered out regarding what sorts of physical operations we want to regard as counting as conscious suffering.

    ReplyDelete