Salamon presented (~1:17 in the video) "Four Key Claims":
1. Intelligence can radically transform the world.
2. An intelligence explosion may be sudden.
3. An uncontrolled intelligence explosion would kill us and destroy practically everything we care about.
4. A controlled intelligence explosion could save us, and protect practically everything else we care about. It is difficult, but worth the attempt.

I'm personally rather skeptical that an intelligence explosion will ever occur -- indeed, I assign the scenario a very low probability. On the other hand, if one did occur, the magnitude of its impact on our region of the cosmos would be so profound that I think focusing our efforts on preparing for such possibilities has high expected value. (Think about why you wear a seat belt the next time you drive to your friend's house down the street.) I liked the way Salamon explained SIAI's core mission as something that almost anyone, even skeptics like me, ought to care about -- not just computer geeks and sci-fi aficionados. (As for the plausibility of an intelligence explosion itself, I do think the discussion around 18:00 of whole-brain emulation and the Hansonian takeoff scenario was well done.)
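To make the seat-belt logic concrete, here is a toy expected-value sketch. The probability and stakes below are made-up illustrative numbers of mine, not figures from Salamon's talk or from SIAI:

$$
\mathbb{E}[\text{benefit}] \;=\; p \times V \;\approx\; 0.001 \times 10^{10}\ \text{lives} \;=\; 10^{7}\ \text{lives in expectation}.
$$

Even at one-in-a-thousand odds, astronomical stakes can dominate the calculation, which is the sense in which a skeptic like me can still think preparation is worthwhile.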
Of course, SIAI is fundamentally an academic organization, and most of its research is highly valuable whether or not an "intelligence explosion" ever occurs. Indeed, I encourage donations to SIAI mainly to fund projects that will help us better understand how to reduce massive amounts of suffering in our multiverse. The fundamental questions SIAI explores -- about physics, Bayesian statistics, anthropics, decision theory, infinitarian consequentialism, consciousness, and cognitive science -- need to be studied regardless of what happens with AI.
Finally, readers may be interested in this other post on SIAI's matching-grant challenge, in which donors can choose their own research projects to support.