Are you struggling to get someone to understand why AGI might be very dangerous?


Rather than talking about nanomachines, Skynet, AGI easily persuading people, etc.


I suggest using a more grounded idea:


Meta, Google, Microsoft, OpenAI, etc. have yet to create or release a single AI model that doesn't have something go wrong within the first week. Meta had the most embarrassing moment, with LLaMA's weights leaking to the public a week after the announcement. (And now Meta's chief AI scientist is posting on Twitter about how AGI and AI will never be dangerous and there's nothing to worry about, likely trying to avoid responsibility for when LLaMA is inevitably used in the next big AI cybercrime.) YouTube and Facebook have yet to figure out how to get their much simpler recommendation algorithms to stop promoting terrorists, anti-vaxxers, etc., either because they don't know how or because it isn't a high enough priority to fix. Do you trust these companies to correctly create, or even to care about correctly creating, a much, much more powerful AI?

Edit: changed the title from "Are you struggling to get someone to understand why AGI might kill us all?" to "Are you struggling to get someone to understand why AGI might be very dangerous?" after mukashi pointed out this is an argument for danger, not extinction.

4 comments

This seems to be neither about AI nor about doom. It's about LLMs accelerating some human trends that are unpleasant.

I actually agree that this is a bigger threat to short-term, wide-scale human happiness than actual AI doom, but I don't want to mix up the two topics.

It's not directly about AGI, no. But it could be a way to change a skeptic's mind about AI risk, which could be useful if they're a regulator or politician.

I don't see how that implies that everyone dies.

It's like saying weapons are dangerous, so imagine what would happen if they fell into the wrong hands. Well, it does happen, and sometimes that has bad consequences, but there is no logical connection between that and everyone dying, which is what doom means. Do you want to argue that LLMs are dangerous? Fine. No problem with that. But doom is not that.

That's fair. Edited to reflect that.

I do think it could be a useful way to convince someone who is completely skeptical of risk from AI.