I agree with you that the "stereotyped image of AI catastrophe" is not what failure will most likely look like, and it's great to see more discussion of alternative scenarios. But why exactly should we expect that the problems you describe will be exacerbated in a future with powerful AI, compared to the state of contemporary human societies? Humans also often optimise for what's easy to measure, especially in organisations. Is the concern that current ML systems are unable to optimise hard-to-measure goals, or goals that are hard to re... (read more)
> But why exactly should we expect that the problems you describe will be exacerbated in a future with powerful AI, compared to the state of contemporary human societies?
To a large extent "ML" refers to a few particular technologies that have the form "try a bunch of things and do more of what works" or "consider a bunch of things and then do the one that is predicted to work."
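To make the two forms concrete, here is a minimal toy sketch (not from the original comment; the objective and parameters are made up for illustration): the first function "tries a bunch of things and does more of what works", the second "considers a bunch of things and does the one that is predicted to work".

```python
import random

def reward(x):
    # Hypothetical easy-to-measure objective, maximised at x = 3.
    return -(x - 3.0) ** 2

# Form 1: try a bunch of things and do more of what works --
# perturb the current candidate, keep whichever scores higher.
def trial_and_error(steps=1000, seed=0):
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        candidate = x + rng.gauss(0.0, 0.5)
        if reward(candidate) > reward(x):
            x = candidate
    return x

# Form 2: consider a bunch of things and do the one that a model
# predicts will work; here the "model" is the reward function itself.
def plan(candidates, predicted_reward):
    return max(candidates, key=predicted_reward)

print(trial_and_error())       # converges near 3.0
print(plan([0, 1, 2, 3, 4], reward))
```

Both forms optimise whatever signal they are handed, which is one way of cashing out the worry about easy-to-measure proxies.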
That is true, but I think of this as a limitation of contemporary ML approaches rather than a fundamental property of advanced AI.
I'm mostly aiming t... (read more)
Thanks for elaborating. There seem to be two different ideas:
1) that it is a promising strategy to try to constrain early AGI capabilities and knowledge
2) that even without such constraints, a paperclipper entails a smaller risk of worst-case outcomes with large amounts of disvalue, compared to a near miss. (Brian Tomasik has also written about this.)
1) is very plausible, perhaps even obvious, though as you say it's not clear how feasible this will be. I'm not convinced of 2), even though I've heard or read many people expressing this ide... (read more)
> Another risk from bugs comes not from the AGI system caring incorrectly about our values, but from having inadequate security. If our values are accurately encoded in an AGI system that cares about satisfying them, they become a target for threats from other actors who can gain from manipulating the first system.
I agree that this is a serious risk, but I wouldn't categorise it as a "risk from bugs". Every actor with goals faces the possibility that other actors may attempt to gain bargaining leverage by threatening to deliberately thwart th... (read more)
Upvoted. I've long thought that Drexler's work is a valuable contribution to the debate that hasn't received enough attention so far, so it's great to see that this has now been published.
I am very sympathetic to the main thrust of the argument – questioning the implicit assumption that powerful AI will come in the shape of one or more unified agents that optimise the outside world according to their goals. However, given our cluelessness and the vast range of possible scenarios (e.g. ems, strong forms of biological enhancement, mergin... (read more)
I agree that establishing a cooperative mindset in the AI / ML community is very important. I'm less sure whether economic incentives or government policy are a realistic way to get there. Can you think of a precedent or example for such external incentives in other areas?
Also, collaboration among the researchers who develop AI may be just one piece of the puzzle. You could still get military arms races between nations even if most researchers are collaborative. And if there are several AI systems, we also need to ensure cooperation between these AIs, which isn't necessarily the same as cooperation between the researchers who build them.
What exactly do you think we need to specify in the Smoking Lesion?