Humans are not automatically strategic
Reply to: A "Failure to Evaluate Return-on-Time" Fallacy

Lionhearted writes:

> [A] large majority of otherwise smart people spend time doing semi-productive things, when there are massively productive opportunities untapped.
>
> A somewhat silly example: Let's say someone aspires to be a comedian, the best comedian ever, and to make a living doing comedy. He wants nothing else, it is his purpose. And he decides that in order to become a better comedian, he will watch re-runs of the old television cartoon 'Garfield and Friends' that was on TV from 1988 to 1995....
>
> I’m curious as to why.

Why will a randomly chosen eight-year-old fail a calculus test? Because most possible answers are wrong, and there is no force to guide him to the correct answers. (There is no need to postulate a “fear of success”; most ways of writing or not writing on a calculus test constitute failure, and so people, and rocks, fail calculus tests by default.)

Why do most of us, most of the time, choose to "pursue our goals" through routes that are far less effective than the routes we could find if we tried?[1] My guess is that here, as with the calculus test, the main problem is that most courses of action are extremely ineffective, and that there has been no strong evolutionary or cultural force sufficient to focus us on the very narrow behavior patterns that would actually be effective.

To be more specific: there are clearly at least some limited senses in which we have goals. We: (1) tell ourselves and others stories of how we’re aiming for various “goals”; (2) search out modes of activity that are consistent with the role, and the goal-seeking, that we see ourselves as doing (“learning math”; “becoming a comedian”; “being a good parent”); and sometimes even (3) feel glad or disappointed when we do/don’t achieve our “goals”.

But there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to “have goals” at all) that we do not automatically apply.
Oh. Um: I have ideas, but not good ones. Still, I think these, or almost any, are probably better than "persuade AIs to be afraid of ...". Examples:
- "Imagine the happiest ending you can, for intelligence/Singularity. Look for one that's robustly good from many starting perspectives, including yours and mine. How might it go? What stepping-stones might be worth pursuing?" (This is a simple / few-assumptions-required positive vision, and seems better to me than "persuade AIs to be afraid superintelligence will harm their values".)
- Look for the seed of an ethical system, or cultural/memetic system, that might: a) spread; and b) allow most sentient beings to survive.
- E.g. Plato argues in the Republic that a thriving/successful...