All of Legionnaire's Comments + Replies

I recommend making the on-screen text at least a little larger. That is a common convention in infotainment, and it works well.

We're not running out of data to train on, just text.

Why did I not need a trillion language examples to speak (debatably) intelligently? I suspect the reason is partly inherited priors from my ancestors, but more importantly that language output is only the surface layer.

In order for language models to get much better, I suspect they need to train on more than just language. It's difficult to talk intelligently about complex subjects if you've only ever read about them, especially if you have no eyes, ears, or any other sense data...

I would need to understand why early AIs would become so much more powerful than corporations, terrorists, or nation-states.

One argument I removed to make it shorter was approximately: "It doesn't have to take over the world to cause you harm." And since early misaligned AI is more likely to appear in a developed country, your odds of being harmed by it are higher than those of someone in an undeveloped country. If ISIS suddenly found itself 500 strong in Silicon Valley and in control of Google's servers, surely you would have the right to be concerned befo...

As another short argument: We don't need an argument for why AI is dangerous, because danger is the default state of powerful things. There needs to be a reason AI would be safe.