Monday, December 2, 2013

Posts moved to "Essays on Reducing Suffering"

In Oct.-Nov. 2013, I revamped my main website, "Essays on Reducing Suffering," to improve its appearance, add pictures, and rewrite significant portions of several essays. I also moved some of the higher-quality posts from this blog to that site. I plan to close out this blog and publish further writings on my main website because
  • I think readers find essays on my main website more authoritative -- lots of people have blogs but not as many have standalone sites like that one,
  • I prefer the fact that my website is static and lays out essays by topic rather than by date -- I fear that on blogs, the old posts get lost and unread even if they're well written and not stale, and
  • my website allows for more customization with formatting, etc.
One feature my static site lacks is comments, but I find that most discussion happens on Facebook nowadays, and sadly, my writings may appear more credible without comments. (As an example, an academic would not have a comments section for the papers on her website.)

Thanks to the readers of this blog for past contributions. I welcome continued feedback on my writings by email or Facebook.

Monday, October 14, 2013

Beauty-driven morality

In a waiting room today, I talked with someone about the suffering of animals in nature. He replied that suffering isn't really bad and that, because nature is beautifully complex and intricate, we should try to keep it the way it is as much as possible. I've gotten this reaction many times, including from several close friends. For these people, nature's aesthetic appeal outweighs all the suffering of the individual insects and minnows that have to live through it.

Jonathan Haidt's Moral Foundations Theory describes five principal values that seem to underlie many moral intuitions:

1) Care/harm
2) Fairness/cheating
3) Loyalty/betrayal
4) Authority/subversion
5) Sanctity/degradation

The last of these is partly driven by feelings of disgust, which seem to move from the visceral realm to the moral realm in some people by acquiring a higher sense of "absolute wrongness." A classic example is a thought experiment involving completely safe and harmless sex between a sister and brother. Some people say, "I can't explain why, but it's just wrong."

It seems there's a reverse side of disgust-driven morality, one which probably has much more sway over more liberal-minded types. It's what I'm calling "beauty-driven morality," and it's slightly different from Haidt's "moral elevation" concept. In beauty-driven morality, outcomes are evaluated based on how aesthetically pleasing, complex, amazing, and sublime they seem to the observer. So, for example, the intricacies of ecosystem dynamics -- complete with brutal predation and Malthusian mass deaths shortly after birth -- are seen as so elegant, such a wonderfully harmonious balance, that to replace them with anything more bland, sterile, or civilized would be morally tragic.

Our sense of beauty and awe is part of a reward system designed to encourage exploration and discovery. Identifying patterns, figuring things out, and otherwise tickling our aesthetic and intellectual senses make us feel good. In those with beauty-driven moral intuitions, this feel-good emotion seems to be not just a personal experience, like the pleasant taste of chocolate, but also a morally laden experience: the sense that "this is right; this is how the world should be."

Of course, care/harm-based morality is fundamentally very similar. Our brains feel reward upon helping others and punishment upon seeing others in pain, and we regard this not just as a private emotion but as a reflection of how the world should be, i.e., that it should contain more helping and less suffering.

A pure care/harm moralist like myself can tell the beauty-based moralist: "You don't understand. Beauty is just a reaction you have to imagining something. It doesn't mean we should actually work toward the scenario you picture as beautiful. The real deep importance of acting morally comes from improving the subjective experiences of other beings." The beauty-based moralist can reply: "No, you don't understand how transcendent this higher beauty is. It's so fundamentally important that it's worth many beings suffering to bring it about. This is where the deepest moral purpose lies."

Of course, I don't agree with the beauty-based moralist, but this fundamentally comes down to a difference in our brain wiring. Similarly, I can't talk a paperclip maximizer out of pursuing its metallic purpose in life. The paperclip maximizer tells us: "No, you both don't understand. The ineffably profound value of paperclips rises far above both of your petty concerns. I hope one day you see the shiny truth."

That said, there is more room to change the minds of beauty-based moralists than of paperclip maximizers, insofar as the former are humans who also tend to have care/harm intuitions. The aesthetic approach makes the most sense from a "far mode" perspective -- looking at whole ecosystems or inter-agent evolutionary dynamics on large time scales. But if you see, in near mode, this particular gazelle having its intestines ripped out while still conscious, even the aesthetics of the situation may seem different; and if not, care/harm sentiments can hopefully enter in.

Since beauty-based morality presumably originates from aesthetic reward circuits, we would predict that people with more of these circuits (artists, poets, mathematicians, physicists, etc.?) would, ceteris paribus, tend to care more than average about making the future beautiful.

As a postscript, I should add that even if we don't agree with beauty-driven morality, there are good strategic reasons to compromise with people who do subscribe to it. For that matter, there are even good strategic reasons to compromise with paperclip maximizers if and when they emerge.

In addition, if we're preference utilitarians, we may place intrinsic weight on agents' desires for beauty or paperclips. In general, we should strive for a society in which other values are respected and in which we do cheap things to help other values, even if we don't care about them ourselves.

Monday, July 29, 2013

Should we worry about 1984 futures?

Summary. It seems that oppressive totalitarian regimes shouldn't be needed in the long-term future, although they might be prevalent in simulations.


When you hear the phrase "dystopic futures," one of the first images that may come to mind is a society like that of Oceania from Orwell's 1984. Big Brother eliminates any opportunity for privacy, and orthodoxy is enforced through the brainwashing and torture of those who fail to conform. As far as future suffering is concerned, the most troubling of these practices is torture.

In the short run, futures of this type are certainly possible, and indeed, governments like this already exist to some degree. However, my guess is that in the long run, enforcing discipline by torture would become unnecessary. Among humans, torture is needed as a hack to restrain motivations that would otherwise wander from those the authorities want to enforce. For arbitrary artificial minds, the subjects/slaves of the ruling AI can have whatever motivations their designers build in. We don't need to torture our computers to do what we ask. Even for more advanced computers of the future that have conscious thoughts and motivations, the motivations can simply be to want to follow orders. Organisms/agents that don't feel this way can just be killed and replaced.

Huxley's Brave New World approximates this idea somewhat for non-digital minds in the form of drugs and social memes/rituals that inspire conformity. 1984 has plenty of these as well, and they don't represent an intrinsic concern for suffering reducers.

If we encountered aliens, it seems unlikely there would be much torture either (except maybe to extract some information before killing the other side). The side with more powerful technology would just decimate the one with less powerful technology.
Whatever happens, we have got
The Maxim gun, and they have not. (source)
Just wiping out your enemies is a lot cheaper than keeping them around subject to totalitarian rule.

The main context in which I would worry about 1984-style torture is actually in simulations. AIs of the future may find it useful to run vast numbers of sims of evolved societies in order to study the distribution of kinds of ETs in the universe, as well as to learn basic science. Depending on the AI's values, it might also run such sims because it finds them intrinsically worthwhile.

Sunday, July 21, 2013

Counterfactual credit assignment

Introduction

Effective altruists tend to assign credit based on counterfactuals: If I do X, how much better will the world be than if I didn't do X? This is the intuition behind the idea that the work you do in your job is at least somewhat replaceable, as well as the reason to seek out do-gooding activities that aren't likely to be done without you.

Perils of adding credit

We can get into tricky issues when trying to add up counterfactual credit, though. Let me give an example. Alice and Bob find themselves in a building that contains buttons. Each person is allowed to press only one button, at which point she/he is transported elsewhere and has no further access to the buttons. Thus, Alice and Bob want to maximize the effectiveness of their button pressing. There's a green button that, when pressed once, prevents 2 chickens from enduring life on a factory farm. There's also a red button that, when pressed twice in a row, prevents 3 chickens from enduring life on a factory farm. In order to make the red button effective, both Alice and Bob have to use their button press on it.

Alice goes first. Suppose she thinks it's very likely (say 99% likely) that Bob will press the red button. That means that if she presses the red button, she'll save 3 chickens, while if she presses the green button, she'll only save 2. There's more counterfactual credit for pressing the red button, so it seems she should do that. Then, Bob sees that Alice has pressed the red button. Now he faces the same comparison: If he presses red, he saves 3 chickens, while if he presses green, he saves only 2. He should thus press red. In this process, each person computed a counterfactual value of 3 for the red button vs. 2 for the green button. Added together, this implies a value of 3+3=6 vs. 2+2=4.

Unfortunately, in terms of the actual number of saved chickens, the comparison is 3 vs. 4. Both Alice and Bob should have pressed green to save 2+2=4 chickens. This shows that individual credit assignments can't just be added together naively.

Of course, the situation here depended on what Alice thought Bob would do. If Alice thought it was extremely likely Bob would press green, her counterfactual credit would have been 2 for green vs. 0 for red. Or, if she thought Bob would switch to red if and only if she pressed red, then the comparison was 2 for pressing green herself vs. 3-2=1 for pressing red, since inducing Bob to switch to red means giving up the 2 chickens from his green press.
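To make the arithmetic concrete, here is a minimal Python sketch of the button example (my own illustration, not from the original post; the helper names chickens_saved and alice_credit are invented). It computes Alice's counterfactual credit (the outcome given her press minus the outcome had she pressed nothing) under each of the assumptions about Bob discussed above.

# Illustrative sketch, not the post's own code. Green saves 2 chickens per
# press; red saves 3 chickens total, but only if both Alice and Bob press it.
# A choice of None means "pressed nothing".
def chickens_saved(alice, bob):
    total = 0
    total += 2 if alice == "green" else 0
    total += 2 if bob == "green" else 0
    total += 3 if alice == "red" and bob == "red" else 0
    return total

def alice_credit(alice_choice, bob_policy):
    """Counterfactual credit: outcome with Alice's press minus the outcome if
    she hadn't pressed at all, where bob_policy maps her choice to Bob's."""
    return (chickens_saved(alice_choice, bob_policy(alice_choice))
            - chickens_saved(None, bob_policy(None)))

bob_always_red = lambda _: "red"
bob_always_green = lambda _: "green"
bob_mirrors_alice = lambda a: "red" if a == "red" else "green"

# Credit for (red, green) under each assumption about Bob:
print(alice_credit("red", bob_always_red), alice_credit("green", bob_always_red))       # 3 2
print(alice_credit("red", bob_always_green), alice_credit("green", bob_always_green))   # 0 2
print(alice_credit("red", bob_mirrors_alice), alice_credit("green", bob_mirrors_alice)) # 1 2

In the first case, Bob's calculation is symmetric, so adding the two individual credits for red gives 3+3=6 even though the realized outcome of both pressing red is only 3 chickens saved; that gap is the adding-up problem described above.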

Joint decision analysis

The decision analysis becomes clearer with a payoff matrix, as in game theory, except that in this case Alice and Bob, being altruists, share the same payoff, namely the total number of chickens helped:

                      Bob presses red    Bob presses green
Alice presses red            3                    2
Alice presses green          2                    4

Alice and Bob should coordinate to each press green. Of course, if Alice has pressed red, at that point Bob should as well.
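As a quick check in code (again my own illustration, reusing the hypothetical chickens_saved helper), we can enumerate the four strategy pairs from the matrix and confirm both claims: the jointly best pair is (green, green), but once Alice has pressed red, Bob's best response is red.

from itertools import product

def chickens_saved(alice, bob):
    # Shared payoff from the matrix: 2 per green press, plus 3 if both press red.
    total = 2 * [alice, bob].count("green")
    total += 3 if alice == "red" and bob == "red" else 0
    return total

# Joint optimization: both altruists maximize the same total.
pairs = list(product(["red", "green"], repeat=2))
best_joint = max(pairs, key=lambda p: chickens_saved(*p))
print(best_joint, chickens_saved(*best_joint))    # ('green', 'green') 4

# Best response for Bob once Alice has already pressed red.
bob_best = max(["red", "green"], key=lambda b: chickens_saved("red", b))
print(bob_best, chickens_saved("red", bob_best))  # red 3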

In this example, reasoning based on individual counterfactual credit still works. Imagine that Alice was going to press red but was open to suggestions from Bob. If he convinces her to press green and then presses green himself, the value is 4 rather than the 3 it would have been otherwise, so he gets more counterfactual credit for persuading Alice to press green and then doing the same himself than for going along with her choice of red.

Acknowledgements

This post was inspired by comments in "The Haste Consideration," which is a concrete case where counterfactual credit assignments can get tricky.

Tuesday, January 8, 2013

Mr. Rogers on unconditional love

Summary: Unconditional love is an attitude we adopt and a feeling we cultivate because of its salutary effects on people.

Fred Rogers ended many episodes of Mister Rogers' Neighborhood with the reminder that "people can like you exactly as you are," an expression he learned from his grandfather.

Episode 1606 of the program featured Lady Aberlin singing to Daniel the following song:
I'm glad
You're the way you are
I'm glad
You're you
I'm glad
You can do the things that you can do
I like
How you look
I like the way
That you feel
I feel that you
Have a right to be quite pleased with you
I'm glad
You're the way you are
I think
You're fine
I'm glad
You're the way you are
The pleasure's mine
It's good
That you look the way you should
Wouldn't change you if I could
'Cause I'm happy you are you.
Do these statements mean people shouldn't bother improving themselves? If others like them as they are, is there no incentive to get better at things?

Well, it's possible that conditional love could force people to try harder in order to seek approval, but at what cost and for what benefit? I think the cost is big: If you're not certain that anyone loves you, life can seem very scary, hopeless, and pointless. And I think there are plenty of other factors motivating people to improve in areas that matter without trying to use love as another carrot and stick. When people are in a rough emotional situation, they may not even have the motivation or support to undertake self-improvement, and might either wallow in despair or seek approval in unproductive ways -- including, as the song hints, through trying to look more attractive on the outside.

There's a time and place for incentives, but love by and for another person is one domain where trying to introduce incentives does more harm than good because of the nature of human psychology. Consider how popular the theme is in Christianity that God loves you no matter what: This is a powerful idea that can transform people's lives.

I feel unconditional love for a person even at the same time that I might prefer him/her to be different. If the person is open to advice on changing, I'll suggest things, but at the same time, I feel that even if the person doesn't change, it's okay -- s/he is still a special individual whose feelings matter just the same. In my mind, unconditional love is closely tied with hedonistic utilitarianism: When I realize that an organism feels happiness and suffering, at that point I realize that the organism matters and deserves care and kindness. In this sense, you could say the only "condition" of my love is sentience.

From "Then Your Heart is Full of Love" by Josie Carey Franz and Fred Rogers (1984):
When your heart can sing another's gladness,
Then your heart is full of love.
When your heart can cry another's sadness,
Then your heart is full of love.
[...]
When your heart has room for everybody,
Then your heart is full of love.

I'll close with another Fred Rogers song, possibly my favorite. It hints at this idea that the other person's feelings are the reason for our love of him or her.
It's you I like,
It's not the things you wear,
It's not the way you do your hair--
But it's you I like.
The way you are right now,
The way down deep inside you--
Not the things that hide you,
[...]
I hope that you'll remember
Even when you're feeling blue
That it's you I like,
It's you yourself,
It's you, it's you I like.