It seems intuitive to me why that would be the case. And I've seen Eliezer make the claim a few times. But I can't find an article describing the idea. Does anyone have a link?


riceissa

May 10, 2020


And I've seen Eliezer make the claim a few times. But I can't find an article describing the idea. Does anyone have a link?

Eliezer talks about this in Do Earths with slower economic growth have a better chance at FAI? e.g.

Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces. This means that UFAI parallelizes better than FAI.

Moved this to answers, because it's a direct answer.

wonderful, thank you!

Matthew Barnett

May 11, 2020


For an alternative view, you may find this exchange from an 80,000 Hours podcast interesting. Here, Paul Christiano appears to reject the claim that AI safety research is less parallelizable.

Robert Wiblin: I guess there’s this open question of whether we should be happy if AI progress across the board just goes faster. What if yes, we can just speed up the whole thing by 20%. Both all of the safety and capabilities. As far as I understand there’s kind of no consensus on this. People vary quite a bit on how pleased they’d be to see everything speed up in proportion.
Paul Christiano: Yes. I think that's right. I think my take, which is a reasonably common take, is it doesn't matter that much from an alignment perspective. Mostly, it will just accelerate the time at which everything happens and there's some second-order terms that are really hard to reason about like, "How good is it to have more computing hardware available?" Or "How good is it for there to be more or less kinds of other political change happening in the world prior to the development of powerful AI systems?"
There's these higher order questions where people are very uncertain of whether that's good or bad, but I guess my take would be the net effect there is kind of small, and the main thing is I think accelerating AI matters much more on the like next 100 years perspective. If you care about welfare of people and animals over the next 100 years, then acceleration of AI looks reasonably good.
I think that’s like the main upside. The main upside of faster AI progress is that people are going to be happy over the short term. I think if we care about the long term, it is roughly awash and people could debate whether it’s slightly positive or slightly negative and mostly it’s just accelerating where we’re going.

Moved this to answers, because it seems like a very useful answer.