I quite enjoyed Anna Salamon's talk, "Shaping the Intelligence Explosion," from the Singularity Summit 2009. Unlike many futurist speakers and authors, Salamon presented the basic claims motivating the Singularity Institute (SIAI) free from the unnecessary transhumanist baggage (pet concerns like life extension or multiple-universe hypotheses) that can turn away people from other backgrounds who care no less about these issues.
Salamon presented (~1:17 in the video) "Four Key Claims":
1. Intelligence can radically transform the world.
2. An intelligence explosion may be sudden.
3. An uncontrolled intelligence explosion would kill us and destroy practically everything we care about.
4. A controlled intelligence explosion could save us, and protect practically everything else we care about. It is difficult, but worth the attempt.
I'm personally rather skeptical that an intelligence explosion will ever occur -- indeed, I assign the scenario a very low probability. On the other hand, if one did occur, the magnitude of its impact on our region of the cosmos would be so profound that I think focusing our efforts on preparing for such possibilities has high expected value. (Think about why you wear a seat belt the next time you drive to your friend's house down the street.) I liked the way Salamon explained SIAI's core mission as something that almost anyone, even skeptics like me, ought to care about -- not just computer geeks and sci-fi aficionados. (As for the plausibility of an intelligence explosion itself, I thought the discussion around 18:00 of whole-brain emulation and the Hansonian takeoff scenario was well done.)
Of course, SIAI is fundamentally an academic organization, and most of its research is highly valuable whether or not an "intelligence explosion" ever occurs. Indeed, I encourage donations to SIAI mainly to fund projects that will help us better understand how to reduce massive amounts of suffering in our multiverse. The fundamental questions SIAI explores -- about physics, Bayesian statistics, anthropics, decision theory, infinitarian consequentialism, consciousness, and cognitive science -- need to be studied regardless of what happens with AI.
Finally, readers may be interested in this other post on SIAI's matching-grant challenge, in which donors can choose their own research projects to support.