Monday, December 2, 2013

Posts moved to "Essays on Reducing Suffering"

In Oct.-Nov. 2013, I revamped my main website, "Essays on Reducing Suffering," to improve its appearance, add pictures, and rewrite significant portions of several essays. I also moved some of the higher-quality posts from this blog over there. I plan to close out this blog and publish further writings on my main website because
  • I think readers find essays on my main website more authoritative -- lots of people have blogs but not as many have standalone sites like that one,
  • I prefer the fact that my website is static and lays out essays by topic rather than by date -- I fear that on blogs, the old posts get lost and unread even if they're well written and not stale, and
  • my website allows for more customization with formatting, etc.
One feature my static site lacks is comments, but I find that most discussion happens on Facebook nowadays, and sadly, my writings may appear more credible without comments. (As an example, an academic would not have a comments section for the papers on her website.)

Thanks to the readers of this blog for past contributions. I welcome continued feedback on my writings by email or Facebook.

Monday, October 14, 2013

Beauty-driven morality

In a waiting room today, I talked with someone about the suffering of animals in nature. His reply was that suffering isn't really bad and that, because nature is beautifully complex and intricate, we should try to keep it the way it is as much as possible. I've gotten this reaction many times, including from several close friends. For these people, nature's aesthetic appeal outweighs all the suffering of the individual insects and minnows that have to live through it.

Jonathan Haidt's Moral Foundations Theory describes five principal values that seem to underlie many moral intuitions:

1) Care/harm
2) Fairness/cheating
3) Loyalty/betrayal
4) Authority/subversion
5) Sanctity/degradation

The last of these is partly driven by feelings of disgust, which seem to move from the visceral realm to the moral realm in some people by acquiring a higher sense of "absolute wrongness." A classic example is a thought experiment involving completely safe and harmless sex between a sister and brother. Some people say, "I can't explain why, but it's just wrong."

It seems there's a reverse side of disgust-driven morality, one which probably has much more sway over liberal-minded types. It's what I'm calling "beauty-driven morality," and it's slightly different from Haidt's "moral elevation" concept. In beauty-driven morality, outcomes are evaluated based on how aesthetically pleasing, complex, amazing, and sublime they seem to the observer. So, for example, the intricacies of ecosystem dynamics -- complete with brutal predation and Malthusian mass deaths shortly after birth -- are seen as so elegant, such a wonderfully harmonious balance, that to replace them with anything more bland, sterile, or civilized would be morally tragic.

Our sense of beauty and awe is part of a reward system designed to encourage exploration and discovery. Identifying patterns, figuring things out, and otherwise tickling our aesthetic and intellectual senses make us feel good. In those with beauty-driven moral intuitions, this feel-good emotion seems to be not just a personal experience, like the pleasant taste of chocolate, but also a morally laden experience: the sense that "this is right; this is how the world should be."

Of course, care/harm-based morality is fundamentally very similar. Our brains feel reward upon helping others and punishment upon seeing others in pain, and we regard this not just as a private emotion but as a reflection of how the world should be, i.e., it should contain more helping and less suffering.

A pure care/harm moralist like myself can tell the beauty-based moralist: "You don't understand. Beauty is just a reaction you have to imagining something. It doesn't mean we should actually work toward the scenario you picture as beautiful. The real deep importance of acting morally comes from improving the subjective experiences of other beings." The beauty-based moralist can reply: "No, you don't understand how transcendent this higher beauty is. It's so fundamentally important that it's worth many beings suffering to bring it about. This is where the deepest moral purpose lies."

Of course, I don't agree with the beauty-based moralist, but this fundamentally comes down to a difference in our brain wiring. Similarly, I can't talk a paperclip maximizer out of pursuing its metallic purpose in life. The paperclip maximizer tells us: "No, you both don't understand. The ineffably profound value of paperclips rises far above both of your petty concerns. I hope one day you see the shiny truth."

That said, there is more room to change the minds of beauty-based moralists than of paperclip maximizers, insofar as the former are humans who also tend to have care/harm intuitions. The aesthetic approach makes most sense from a "far mode" perspective -- looking at whole ecosystems or inter-agent evolutionary dynamics on large time scales. But if you see, in near mode, this particular gazelle having its intestines ripped out while still conscious, even the aesthetics of the situation may seem different, and if not, hopefully care/harm sentiments can enter in.

Since beauty-based morality presumably originates from aesthetic reward circuits, we would predict that people with more of these circuits (artists, poets, mathematicians, physicists, etc.?) would, ceteris paribus, tend to care more than average about making the future beautiful.

As a postscript, I should add that even if we don't agree with beauty-driven morality, there are good strategic reasons to compromise with people who do subscribe to it. For that matter, there are even good strategic reasons to compromise with paperclip maximizers if and when they emerge.

In addition, if we're preference utilitarians, we may place intrinsic weight on agents' desires for beauty or paperclips. In general, we should strive for a society in which other values are respected and in which we do cheap things to help other values, even if we don't care about them ourselves.

Monday, July 29, 2013

Should we worry about 1984 futures?

Summary: It seems that oppressive totalitarian regimes shouldn't be needed in the long-term future, although they might be prevalent in simulations.


When you hear the phrase "dystopic futures," one of the first images that may come to mind is a society like Oceania from Orwell's 1984. Big Brother eliminates any opportunity for privacy, and orthodoxy is enforced through brainwashing and torture of those who fail to conform. As far as future suffering is concerned, the most troubling of these features is torture.

In the short run, futures of this type are certainly possible, and indeed, governments like this already exist to some degree. However, my guess is that in the long run, enforcing discipline by torture would become unnecessary. Torture is needed among humans as a hack to restrain motivations that would otherwise wander from those the authorities want to enforce. For arbitrary artificial minds, the subjects/slaves of the ruling AI can have whatever motivations the designer builds in. We don't need to torture our computers to do what we ask. Even for more advanced computers of the future that have conscious thoughts and motivations, the motivations can simply be to want to follow orders. Organisms/agents that don't feel this way can just be killed and replaced.

Huxley's Brave New World approximates this idea somewhat for non-digital minds in the form of drugs and social memes/rituals that inspire conformity. 1984 has plenty of these as well, and they don't represent an intrinsic concern for suffering reducers.

If we encountered aliens, it seems unlikely there would be much torture either (except maybe to extract some information before killing the other side). The side with more powerful technology would just decimate the one with less powerful technology.
Whatever happens, we have got
The Maxim gun, and they have not. (Hilaire Belloc, "The Modern Traveller")
Just wiping out your enemies is a lot cheaper than keeping them around subject to totalitarian rule.

The main context in which I would worry about 1984-style torture is actually in simulations. AIs of the future may find it useful to run vast numbers of sims of evolved societies in order to study the distribution of kinds of ETs in the universe, as well as to learn basic science. Depending on the AI's values, it might also run such sims because it finds them intrinsically worthwhile.

Sunday, July 21, 2013

Counterfactual credit assignment

Introduction

Effective altruists tend to assign credit based on counterfactuals: If I do X, how much better will the world be than if I didn't do X? This is the intuition behind the idea that the work you do in your job is at least somewhat replaceable, as well as the reason to seek out do-gooding activities that aren't likely to be done without you.

Perils of adding credit

We can get into tricky issues when trying to add up counterfactual credit, though. Let me give an example. Alice and Bob find themselves in a building that contains buttons. Each person is allowed to press only one button, at which point she/he is transported elsewhere and has no further access to the buttons. Thus, Alice and Bob want to maximize the effectiveness of their button pressing. There's a green button that, when pressed once, prevents 2 chickens from enduring life on a factory farm. There's also a red button that, when pressed twice in a row, prevents 3 chickens from enduring life on a factory farm. In order to make the red button effective, both Alice and Bob have to use their button press on it.

Alice goes first. Suppose she thinks it's very likely (say 99% likely) that Bob will press the red button. That means that if she presses the red button, she'll save 3 chickens, while if she presses the green button, she'll only save 2. There's more counterfactual credit for pressing the red button, so it seems she should do that. Then, Bob sees that Alice has pressed the red button. Now he faces the same comparison: If he presses red, he saves 3 chickens, while if he presses green, he saves only 2. He should thus press red. In this process, each person computed a counterfactual value of 3 for the red button vs. 2 for the green button. Added together, this implies a value of 3+3=6 vs. 2+2=4.

Unfortunately, in terms of the actual number of saved chickens, the comparison is 3 vs. 4. Both Alice and Bob should have pressed green to save 2+2=4 chickens. This shows that individual credit assignments can't just be added together naively.

Of course, the situation here depended on what Alice thought Bob would do. If Alice had thought it extremely likely that Bob would press green, her counterfactual credit would have been 2 for green vs. 0 for red (her lone red press would accomplish nothing). Or, if she thought Bob would switch to red if and only if she pressed red, then the comparison was 2 for pressing green herself vs. 3-2=1 for pressing red, since Bob's switching to red would give up his green press worth 2.
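To make the arithmetic concrete, here's a minimal Python sketch of this example. The outcome rule (green saves 2 chickens per press; red saves 3 but only if pressed twice) and the beliefs come from the paragraphs above; the function and variable names are my own:

```python
def chickens_saved(alice, bob):
    """Total chickens saved, given each choice: 'red', 'green', or 'none'."""
    total = 0
    if alice == "green":
        total += 2
    if bob == "green":
        total += 2
    if alice == "red" and bob == "red":  # red works only if pressed twice
        total += 3
    return total

def alice_expected(choice, p_bob_red):
    """Alice's expected total, given her belief that Bob presses red."""
    return (p_bob_red * chickens_saved(choice, "red")
            + (1 - p_bob_red) * chickens_saved(choice, "green"))

def alice_credit(choice, p_bob_red):
    """Counterfactual credit: expected total with Alice's press vs. without it."""
    return alice_expected(choice, p_bob_red) - alice_expected("none", p_bob_red)

# Main scenario: Alice is 99% sure Bob will press red.
print(alice_expected("red", 0.99), alice_expected("green", 0.99))  # 2.99 vs. 2.02

# Bob moves second and sees Alice pressed red, so his comparison is exact:
print(chickens_saved("red", "red"), chickens_saved("red", "green"))  # 3 vs. 2

# Naively adding the two credits gives 3+3=6 for red vs. 2+2=4 for green,
# but the actual outcomes are only 3 (both red) vs. 4 (both green).

# Alternative belief: Alice is nearly sure Bob will press green.
print(alice_credit("green", 0.01), alice_credit("red", 0.01))  # ~2 vs. ~0
```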

Joint decision analysis

The decision analysis becomes clearer using a payoff matrix, as in game theory, except that in this case both Alice and Bob, being altruists, share the same payoff, namely the total number of chickens helped:

                     Bob presses red    Bob presses green
Alice presses red           3                   2
Alice presses green         2                   4

Alice and Bob should coordinate to each press green. Of course, if Alice has pressed red, at that point Bob should as well.
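The coordination point can be checked the same way. Since both altruists share one payoff (total chickens saved), finding the best joint plan is just a maximization over the matrix above; the values come from the table, and the dictionary encoding is mine:

```python
from itertools import product

# Shared payoff: total chickens saved for each (Alice, Bob) pair of presses.
payoff = {
    ("red", "red"): 3,
    ("red", "green"): 2,
    ("green", "red"): 2,
    ("green", "green"): 4,
}

# The jointly best pair of presses:
best_pair = max(product(["red", "green"], repeat=2), key=payoff.get)
print(best_pair, payoff[best_pair])  # ('green', 'green') 4

# Bob's best response once Alice has already pressed red:
bob_best = max(["red", "green"], key=lambda b: payoff[("red", b)])
print(bob_best)  # 'red'
```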

In this example, reasoning based on individual counterfactual credit still works. Imagine that Alice was going to press red but was open to suggestions from Bob. If he convinces her to press green and then presses green himself, the value will be 4 instead of the 3 it would have been otherwise, so he gets more counterfactual credit for persuading Alice to press green and then doing the same himself than for going along with her choice of red.

Acknowledgements

This post was inspired by comments in "The Haste Consideration," which is a concrete case where counterfactual credit assignments can get tricky.

Tuesday, January 8, 2013

Mr. Rogers on unconditional love

Summary: Unconditional love is an attitude we adopt and a feeling we cultivate because of its salutary effects on people.

Fred Rogers ended many episodes of Mister Rogers' Neighborhood with the reminder that "people can like you exactly as you are," an expression he learned from his grandfather.

Episode 1606 of the program featured Lady Aberlin singing to Daniel the following song:
I'm glad
You're the way you are
I'm glad
You're you
I'm glad
You can do the things that you can do
I like
How you look
I like the way
That you feel
I feel that you
Have a right to be quite pleased with you
I'm glad
You're the way you are
I think
You're fine
I'm glad
You're the way you are
The pleasure's mine
It's good
That you look the way you should
Wouldn't change you if I could
'Cause I'm happy you are you.
Do these statements mean people shouldn't bother improving themselves? If others like them as they are, is there no incentive to get better at things?

Well, it's possible that conditional love could force people to try harder in order to seek approval, but at what cost and for what benefit? I think the cost is big: If you're not certain that anyone loves you, life can seem very scary, hopeless, and pointless. And I think there are plenty of other factors motivating people to improve in areas that matter without trying to use love as another carrot and stick. When people are in a rough emotional situation, they may not even have the motivation or support to undertake self-improvement, and might either wallow in despair or seek approval in unproductive ways -- including, as the song hints, through trying to look more attractive on the outside.

There's a time and place for incentives, but love by and for another person is one domain where trying to introduce incentives does more harm than good because of the nature of human psychology. Consider how popular the theme is in Christianity that God loves you no matter what: This is a powerful idea that can transform people's lives.

I feel unconditional love for a person even at the same time that I might prefer him/her to be different. If the person is open to advice on changing, I'll suggest things, but at the same time, I feel that even if the person doesn't change, it's okay -- s/he is still a special individual whose feelings matter just the same. In my mind, unconditional love is closely tied with hedonistic utilitarianism: When I realize that an organism feels happiness and suffering, at that point I realize that the organism matters and deserves care and kindness. In this sense, you could say the only "condition" of my love is sentience.

From "Then Your Heart is Full of Love" by Josie Carey Franz and Fred Rogers (1984):
When your heart can sing another's gladness,
Then your heart is full of love.
When your heart can cry another's sadness,
Then your heart is full of love.
[...]
When your heart has room for everybody,
Then your heart is full of love.

I'll close with another Fred Rogers song, possibly my favorite. It hints at this idea that the other person's feelings are the reason for our love of him or her.
It's you I like,
It's not the things you wear,
It's not the way you do your hair--
But it's you I like.
The way you are right now,
The way down deep inside you--
Not the things that hide you,
[...]
I hope that you'll remember
Even when you're feeling blue
That it's you I like,
It's you yourself,
It's you, it's you I like.

Monday, December 31, 2012

Agile projects

Give feedback early; give feedback often. Especially the early part.

When it comes to writing a paper or planning a campaign or picking a cause to focus on, a little bit of feedback at the beginning is worth hundreds of micro-edits or small optimizations later on. The topic that you write about can matter more than everything else in your whole article. If you complete a research paper about something unimportant, it doesn't much matter how well-written and well-researched the piece is (unless your goal is to establish prestige as a writer or build an audience that you can then direct toward your more important essays). If you pick an inefficient activism campaign, it doesn't much matter how well you carry it out (except for getting practice, personal experience, etc.).

Most of the time, feedback won't have the dramatic effect of reorienting the entire direction of a paper or campaign, but it may have smaller impacts, such as whether the author considers a given argument or whether the campaign undertakes measurement of its impact. A stitch in time saves nine, and it's easier (both physically and cognitively) to improve something at the beginning than near the end.

So why is it that people sometimes hesitate to share drafts, ideas, plans, etc. until they're almost complete? Maybe one reason is that slow feedback is sometimes more customary, and people fear that if they share totally incomplete drafts or brainstorms, others will judge them for not being thorough and polished and for not having considered such-and-such objection. If this is the case, we should work to change the culture of feedback among the people we know, to make it clear that preliminary drafts can offer more benefit from feedback per unit time than polished products.

Another reason can be that when people comment on a rough draft, the author may already know that he needs to fix most of what the reviewer points out. But this concern can be largely allayed if the reviewer understands what stage the project is at. You don't (usually) give sentence-level edits on a paper outline. Also, the author can sketch out the areas he knows are incomplete so that the reviewer won't comment on those.

The title of this post comes from agile software development, which is one area where the principles I described have been well recognized.

Sunday, August 26, 2012

Pain vs. suffering and animals vs. humans

People sometimes ask me whether I make a distinction between "pain" and "suffering." The answer is "yes, I do," although one reason this might not be clear is that I have the following quotation from George Orwell at the top of my page called "On the Seriousness of Suffering":
Nothing in the world was so bad as physical pain.
Katja Grace wrote a blog post based on this quote, and in the comments, I made the following clarification:

First, I don’t entirely agree with Orwell’s choice of words, but I included the quote as he wrote it for the sake of readability. In particular, as many have pointed out, what matters is not “pain” directly but “suffering,” i.e., the response that “this feels really awful and I want it to stop.” The commenters raised several examples where pain itself isn’t aversive: Pain asymbolia, masochism, people given morphine, etc., not to mention self-cutting and other things people do in order to release endorphins/opioids to make themselves feel better.
I would also omit Orwell’s word “physical,” because mental pain can be just as bad.

Pain asymbolia is the clearest proof that pain and suffering are distinct, because unlike masochism, where one can imagine that pleasure chemicals merely outweigh pain signals, in pain asymbolia the quale of pain itself is not aversive.

This suggests a broader question: What gives valence to qualia? I think the details of how this happens are largely unknown, but presumably there are brain processes which "paint" a suffering gloss onto experiences in the same way that other brain processes paint a hedonic gloss onto pleasant ones. It's these painting operations that I count as suffering and that I want to reduce.

A related theme is the classic distinction between nociception and conscious pain. As Jane A. Smith explains in "A Question of Pain in Invertebrates":
Invertebrates, it seems, exhibit nociceptive responses analogous to those shown by vertebrates. They can detect and respond to noxious stimuli, and in some cases, these responses can be modified by opioid substances. However, in humans, at least, there is a distinction to be made between the "registering" of a noxious stimulus and the "experience" of pain. In humans, pain "may be seen as the response of the whole awake conscious organism to noxious stimuli, seated ... at the highest levels in the central nervous system, involving emotional and other psychological components" (Iggo, 1984). Experiments on decorticate mammals have shown that complex, though stereotyped, motor responses to noxious stimuli may occur in the absence of consciousness and, therefore, of pain (Iggo, 1984). Thus, it is possible that invertebrates' responses to noxious stimuli (and modifications of these responses) could be simple reflexes, occurring without the animals being aware of experiencing something unpleasant, that is, without "suffering" something akin to what humans call pain.
And from Antonio Damasio, The Feeling of What Happens:
Would one or all of those neural patter[n]s of injured tissue be the same thing as knowing one had pain? And the answer is, not really. Knowing that you have pain requires something else that occurs after the neural patterns that correspond to the substrate of pain – the nociceptive signals – are displayed in the appropriate areas of the brain stem, thalamus, and cerebral cortex and generate an image of pain, a feeling of pain.
So when I ask whether insects might be able to suffer, I don't mean just whether they can react against physical injury and learn to avoid it in the future. I'm asking whether they can perceive this injury as something that is happening to them and that they want to have stopped. I agree that the jury is very much still out on this question. If it seems as though I believe otherwise, it's because I'm trying to track the expected value rather than the most likely point estimate.

Now, given that suffering is different from pain and that suffering can involve strong non-physical emotional components, does this mean animals matter less than we might think because they don't suffer in high-level mental ways?

First, it's unclear whether the claim is true that animals have substantially less sophisticated mentation, at least for "higher" animals like mammals. Animals show many of the psychopathologies that humans do and are used as models for depression when testing drugs. Elephants have death rituals. Crows appear to go sledding for fun. Marc Bekoff, Jonathan Balcombe, and other ethologists have written numerous books documenting the complex emotional lives of mammals, birds, fish, and even octopuses.

But, suppose it is true that non-human animals don't have a similar degree of psychological depth to their experiences. It's not obvious that this means they suffer less intensely. Maybe the brain applies normalization to its experiences, so that it can appropriately encode relative priorities of various drives without using excessive amounts of energy/storage. For example, say a mouse's suffering is between 0 and -10, while a human's would be between 0 and -50 due to emotional depth. However, maybe the human brain doesn't care about perfect granularity among all of the values between 0 and -50; it only needs a sufficient granularity to make the right tradeoffs, so it downplays the importance of physical pain. In other words, a physical pain that would have been -10 for the mouse might be -2 for the human, because the human has so much else to worry about. This is pure speculation, and I wouldn't rest my argument on this point, but it seems possible. This discussion also gets into philosophical issues about how we want to care about and measure emotional intensity, which lie beyond the scope of the current post.

Finally, what if animals do suffer less, even after taking account of the brain's normalization processes? Well, I guess I would ask, How much less do they suffer? I don't think it's orders of magnitude less, and if not, then the basic calculations showing that, at the margin, animal welfare takes priority over human welfare would remain intact. Suppose you were a chicken being scalded and drowned alive in a boiling defeathering tank. How much less bad would this experience be if you didn't have broader thoughts about the end of your life, the injustice of your situation, how much you'll miss your friends, etc.? I suspect that the raw physical pain would overwhelm these subsidiary thoughts in the moment, and even if not, I don't think the higher-level thoughts would be 10 times stronger than the raw pain.

Moreover, there are many times when humans may in fact suffer less because of their understanding of the situation. Humans enduring a bout of food poisoning can know that the agony will end after a day or two and that their friends and family will help them in the meantime. Animals going through the same experience may have no idea what's happening to them, whether it will end, or what will become of their lives.

The points discussed above are fascinating to ponder, and it's valuable to hear from other people which of their own experiences they've found most unpleasant. That said, we modern humans live extremely comfortable lives compared with factory-farmed or wild animals, so it isn't surprising that most of our worst memories may be of purely emotional injury. In any event, regardless of where we settle on the questions of the relative magnitudes of animal vs. human pain and physical vs. psychological pain, I don't think the answer is likely to tip the balance of our calculations about where our dollars and hours will do the most good.