We're not running out of data to train on, just text.
Why did I not need a trillion language examples to speak (debatably) intelligently? I suspect the reason is partly training inherited from my ancestors, but more importantly, that language output is only the surface layer.
In order for language models to get much better, I suspect they need to be training on more than just language. It's difficult to talk intelligently about complex subjects if you've only ever read about them. Especially if you have no eyes, ears, or any other sense data. The best language models are still missing crucial context/info which could be gained through video, audio, and robotic IO.
Combined with this post, this would also suggest our hardware can already train more parameters than we need for much more intelligent models, if we can get that data from non-text sources.
> I would need to understand why early AIs would become so much more powerful than corporations, terrorists or nation-states
One argument I removed to make it shorter was approximately: "It doesn't have to take over the world to cause you harm." And since early misaligned AI is more likely to appear in a developed country, your odds of being harmed by it are higher compared to someone in an undeveloped country. If ISIS suddenly found itself 500 strong in Silicon Valley and in control of Google's servers, surely you would have the right to be concerned before it had a good chance of taking over the whole world. And you'd be doubly worried if you did not understand how it went from 0 to 500 "strong", or what the next increase in strength might be. You understand how nation-states and terrorist organizations grow. I don't think anyone currently understands, well, how AI grows in intelligence. There were a million other arguments I wanted to "head off" in this post, but the whole point of introductory material is to be short.

> there is no reason to believe that rogue AI will be dramatically more powerful than corporations or terrorists
I don't think that's true. If our AI ends up no more powerful than existing corporations or terrorists, why are we spending billions on it? It had better be more powerful than something. I agree alignment might not be "solvable" for the reasons you mention, and I don't claim that it is. I am specifically claiming that AI will be unusually dangerous.
As another short argument:
We don't need an argument for why AI is dangerous, because dangerous is the default state of powerful things. There needs to be a reason AI would be safe.