It was interesting to see the really negative comment from (presumably the real) Greg Egan:

The Yudkowsky/Bostrom strategy is to contrive probabilities for immensely unlikely scenarios, and adjust the figures until the expectation value for the benefits of working on — or donating to — their particular pet projects exceed the benefits of doing anything else. Combined with the appeal to vanity of “saving the universe”, some people apparently find this irresistible, but frankly, their attempt to prescribe what rational altruists should be doing with their time and money is just laughable, and it’s a shame you’ve given it so much air time.


Previous arguments by Egan:

http://metamagician3000.blogspot.com/2009/09/interview-with-greg-egan.html

Sept. 2009, from an interview in Aurealis.

http://metamagician3000.blogspot.com/2008/04/transhumanism-still-at-crossroads.html

From April 2008. Only in the last few comments does Egan actually express an argument for the key intuition that has been driving the entire rest of his reasoning.

(To my eyes, this intuition of Egan's refers to a completely irrelevant hypothetical, in which humans somehow magically and reliably are always able to acquire possession of...)

XiXiDu: I think Greg Egan makes an important point there that I have mentioned before [http://lesswrong.com/lw/52n/q_what_has_rationality_done_for_you/3tko], and John Baez [http://johncarlosbaez.wordpress.com/2011/04/24/what-to-do/#comment-5514] seems to agree. Actually, this was what I had in mind when I voiced my first attempt [http://lesswrong.com/lw/2l0/should_i_believe_what_the_siai_claims/] at criticizing the whole endeavour of friendly AI; I just didn't know what exactly [http://lesswrong.com/lw/52n/q_what_has_rationality_done_for_you/3two] was causing my uneasiness. I am still confused about it, but I think it isn't much of a problem as long as friendly AI research is not being funded at the cost of other risks that are more thoroughly based on empirical evidence rather than on the observation of logically valid arguments. To be clear, as I wrote in the post above, I think that there are very strong arguments [http://lesswrong.com/lw/3sy/your_best_arguments_for_risks_from_ai/] in support of friendly AI research. I believe that it is currently the most important cause one could support, but I also think that there is a limit to what one should do in the name of mere logical implications. Therefore I partly agree with Greg Egan.

ETA: There's now another comment by Greg Egan [http://johncarlosbaez.wordpress.com/2011/04/24/what-to-do/#comment-5515].
[anonymous]: Greg Egan's view was discussed here [http://lesswrong.com/lw/2ti/greg_egan_disses_standins_for_overcoming_bias/] a few months ago.

What To Do: Environmentalism vs Friendly AI (John Baez)

by XiXiDu, 2 min read, 24th Apr 2011, 63 comments



In a comment on my last interview with Yudkowsky, Eric Jordan wrote:

John, it would be great if you could follow up at some point with your thoughts and responses to what Eliezer said here. He’s got a pretty firm view that environmentalism would be a waste of your talents, and it’s obvious where he’d like to see you turn your thoughts instead. I’m especially curious to hear what you think of his argument that there are already millions of bright people working for the environment, so your personal contribution wouldn’t be as important as it would be in a less crowded field.

I’ve been thinking about this a lot.

[...]

This is a big question. It's a bit self-indulgent to discuss it publicly… or maybe not. It is, after all, a question we all face. I'll talk about me, because I'm not up to tackling this question in its universal abstract form. But it could be you asking this, too.

[...]

I’ll admit I’d be happy to sit back and let everyone else deal with these problems. But the more I study them, the more that seems untenable… especially since so many people are doing just that: sitting back and letting everyone else deal with them.

[...]

I think so far the Azimuth Project is proceeding in a sufficiently unconventional way that while it may fall flat on its face, it’s at least trying something new.

[...]

The most visible here is the network theory project, which is a step towards the kind of math I think we need to understand a wide variety of complex systems.

[...]

I don’t feel satisfied, though. I’m happy enough—that’s never a problem these days—but once you start trying to do things to help the world, instead of just have fun, it’s very tricky to determine the best way to proceed.

Link: johncarlosbaez.wordpress.com/2011/04/24/what-to-do/

His answer, as far as I can tell, is that his Azimuth Project trumps the possibility of working directly on friendly AI, or of supporting it indirectly by earning and contributing money.

It seems that he and other people who understand all the arguments in favor of friendly AI, and yet decide to ignore it or disregard it as infeasible, are rationalizing.

I myself took a different route: rather than coming up with justifications for why it would be better to work on something else, I tried to prove to myself that the whole idea of AI going FOOM is somehow flawed.

I still have some doubts, though. Is it really enough to observe that the arguments in favor of AI going FOOM are logically valid? When should one disregard tiny probabilities of vast utilities and wait for empirical evidence? Yet I think that, compared to the alternatives, the arguments in favor of friendly AI are watertight.

The reason why I and other people seem reluctant to accept that it is rational to support friendly AI research is that the consequences are unbearable. Robin Hanson recently described the problem:

Reading the novel Lolita while listening to Winston’s Summer, thinking of a fond friend’s companionship, and sitting next to my son, all on a plane traveling home, I realized how vulnerable I am to needing such things. I’d like to think that while I enjoy such things, I could take them or leave them. But that’s probably not true. I like to think I’d give them all up if needed to face and speak important truths, but well, that seems unlikely too. If some opinion of mine seriously threatened to deprive me of key things, my subconscious would probably find a way to see the reasonableness of the other side.

So if my interests became strongly at stake, and those interests deviated from honesty, I’ll likely not be reliable in estimating truth.

I believe that people like me feel that to fully accept the importance of friendly AI research would deprive us of the things we value and need.

I feel that I wouldn't be able to justify what I value on the grounds of needing such things. It feels as though I could and should give up everything that doesn't either directly contribute to FAI research or help me earn more money to contribute.

Some of us value and need things that consume a lot of time... that's the problem.
