Why do people spend much, much more time worrying about their retirement plans than the intelligence explosion if they are a similar distance in the future? I understand that people spend less time worrying about the intelligence explosion than would be socially optimal, because the vast majority of its benefits will be in the very far future, which people care little about. However, it seems probable that the intelligence explosion will still have a substantial effect on many people in the near-ish future (within the next 100 years). Yet hardly anyone worries about it. Why?

Why do people spend much, much more time worrying about their retirement plans than the intelligence explosion if they are a similar distance in the future?

Why do you think they are a similar distance in the future? If you take the LW median estimate for the likely arrival of the intelligence explosion, it is later than when most people are going to retire.

If you look at the general population, most people consider the intelligence explosion even less likely than LW does.

gjm: First: Most people haven't encountered the idea (note: watching Terminator does not constitute encountering the idea), and most who have encountered it have only a very hazy notion of it and haven't given it serious thought.

Second: Suppose you decide that both pension savings and the intelligence explosion have a real chance of making a difference to your future life. Which can you do more about? Well, you can adjust your future wealth considerably by changing how much you spend and how much you save, and the tradeoff between present and future is reasonably clear. What can you do to make it more likely that a future intelligence explosion will improve your life, and less likely that it will make it worse? Personally, I can't think of anything I can do that seems likely to have non-negligible impact, nor anything for which I am confident about the sign of whatever impact it would have.

- Go and work for Google and hope to get on a team working on AI? Probably unachievable, not clear I could actually help, and who knows whether anything they produce will be friendly?
- Donate to MIRI? There's awfully little evidence that anything they're doing is actually going to be of any use, and if at some point they decide they should start building AI systems to experiment with their ideas, those systems might themselves be dangerous.
- Lobby for government-imposed AI safety regulations? Unlikely to succeed, and if it did succeed it might turn out to impede carefully done AI research more than actually dangerous AI research, not least because one can do AI research in more than one of the world's countries.
- Try to build a friendly AI myself? Ha ha ha.
- Assassinate AI researchers? Aside from being illegal, immoral, and dangerous, this is probably just as likely to stop someone having a crucial insight needed for friendly AI as to stop someone making something that will kill us all.
- Try to persuade other people to worry about unfriendly AI? OK, but they don't have any more options than I do.
CellBioGuy: Because most people don't agree that "it seems probable that the intelligence explosion will still have a substantial effect on many people in the near-ish future".

Open Thread, Feb 8 - Feb 15, 2016

by Elo · 1 min read · 8th Feb 2016 · 224 comments



If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.