Ah! Never mind. My questions are answered in this document:
Coherent Extrapolated Volition, Eliezer S. Yudkowsky, Singularity Institute for Artificial Intelligence, May 2004
Eliezer writes:
As an experiment, I am instituting the following policy on the SL4 mailing list:
None may argue on the SL4 mailing list about the output of CEV, or what kind of world it will create, unless they donate to the Singularity Institute:
- $10 to argue for 48 hours.
- $50 to argue for one month.
- $200 to argue for one year.
- $1000 to get a free pass until the Singularity.
Past donations count toward this total. It's okay to have fun, and speculate, so long as you're not doing it at the expense of actually helping.
It is a good deal, as Eliezer explains later in the Q&A:
Q2. Removing the ability of humanity to do itself in and giving it a much better chance of surviving Singularity is of course a wonderful goal. But even if you call the FAI "optimizing processes" or some such it will still be a solution outside of humanity rather than humanity growing into being enough to take care of its problems. Whether the FAI is a "parent" or not it will be an alien "gift" to fix what humanity cannot. Why not have humanity itself recursively self-improve? (Samantha Atkins)
A2. For myself, the best solution I can imagine at this time is to make CEV our Nice Place to Live, not forever, but to give humanity a breathing space to grow up. Perhaps there is a better way, but this one still seems pretty good. As for it being a solution outside of humanity, or humanity being unable to fix its own problems... on this one occasion I say, go ahead and assign the moral responsibility for the fix to the Singularity Institute and its donors.
Moral responsibility for specific choices within a CEV is hard to track down, in the era before direct voting. No individual human may have formulated such an intention and acted with intent to carry it out. But as for the general fact that a bunch of stuff gets fixed: the programming team and SIAI's donors are human and it was their intention that a bunch of stuff get fixed. I should call this a case of humanity solving its own problems, if on a highly abstract level. [emphasis mine]
Q3. Why are you doing this? Is it because your moral philosophy says that what you want is what everyone else wants? (XR7)
A3. Where would be the base-case recursion? But in any case, no. I'm an individual, and I have my own moral philosophy, which may or may not pay any attention to what our extrapolated volition thinks of the subject. Implementing CEV is just my attempt not to be a jerk.
I do value highly other people getting what they want, among many other values that I hold. But there are certain things such that if people want them, even want them with coherent volitions, I would decline to help; and I think it proper for a CEV to say the same. That is only one person's opinion, however.
So, as you see, contributing as little as a thousand dollars gives you enormous power over the future of mankind, at least if your ideals regarding the future are "coherent" with Eliezer's.
What, you mean: in the hypothetical case where arguing about the topic on SL4 has any influence on the builders at all, AND the SIAI's plans pan out?!?
That seems like a conjunction of some very unlikely unstated premises you have there.