Why wouldn't their leadership be capable of personally evaluating arguments that this community has repeatedly demonstrated can be compressed into sub-10-minute nontechnical talks? And why assume that whichever experts they're taking advice from would uniformly interpret it as "craziness", especially when surveys show most AI researchers in the West are now taking existential risk seriously? It's really not such a difficult or unintuitive concept to grasp that building a more intelligent species could go badly.

My take is that the lack of AI safety activity in China is due almost entirely to the language barrier; I don't see much reason they wouldn't be about as receptive to the fundamental arguments as a Western audience, once presented with them competently.

Honestly, I would probably be more concerned about convincing Western leaders, whose "being on board" this debate seems to take as a given.

This post is quite strange and at odds with your first one. Your own point 5 contradicts your point 6: if they're so good at taking ideas seriously, why wouldn't they respond to coherent reasoning presented by a US president? Points 7 and 8, to be quite frank, just read like hysterical Orientalist Twitter-China-watcher nonsense. There is absolutely nothing substantiating the claim that China would recklessly pursue nothing but "superiority" in AI at all costs (up to and including national suicide), beyond simplistic narratives of the CCP as a cartoonish evil force seeking world domination.

Instead of invoking tired tropes like the Century of Humiliation, I would point to the tech and economic restrictions recently levied by the US (which are, not inaccurately, broadly seen in China as an attempt to suppress its national development, with comments by Xi to that effect). Any negotiated slowdown in AI would have to be demonstrated to China not to be a component of that suppression, which shouldn't be hard if the US is also verifiably halting its own progress and the AGI x-risk arguments are clearly communicated.

Not sure if he took him up on that (or even saw the tweet reply). I'm just hoping we have someone proactively reaching out to him to coordinate, is all. He commands a lot of respect in this industry, as I'm sure most know.

I think people in the LW/alignment community should reach out to Hinton to coordinate messaging, now that he's suddenly become the most high-profile and credible public voice on AI risk. Not sure who specifically should be doing this, but I hope someone's on it.

Yup. I commented here on how outreach pieces are generally too short on their own and should always lead to something else.

I'm pretty opposed to public outreach aimed at building support for alignment, but the alternative goal of whipping up enough hysteria to destroy the field of AI/the AGI development groups killing us seems much more doable. The reason: from my lifelong experience observing public discourse on topics I have expert knowledge of (e.g. nuclear weapons, China), it seems completely impossible to implant the exact right ideas into the public mind, especially for a complex subject. Once you attract attention to a topic, no matter how much effort you put into presenting the proper arguments, the conversation and people's beliefs inevitably trend toward simple, meme-y, emotionally riveting ideas instead of accurate ones. (The popular discourse on climate change is another good illustration of this.)

But in this case, even if people latch onto misguided fears about Terminator or whatever, as long as they have some sort of intense fear of AI it may still produce the intended actions. To be clear, I'm still very unsure whether such a campaign is a good idea at this point; it's just a thought.

I think reaching out to governments is a more direct lever: civilians don't have the power to shut down AI themselves (unless mobs literally burn down all the AGI offices), so the goal of public messaging would be to convince them to pressure the leadership into banning it, right? Why not cut out the middleman and make the leaders see the dire danger directly?

The downvotes on my comment reflect a threat we all need to be extremely mindful of: people who are so terrified of death that they'd rather flip a coin on condemning us all to hell than die. They'll only grow more desperate, and more willing to resort to hideously reckless Hail Marys, as we draw closer.

Never even THINK ABOUT trying a Hail Mary if it also comes with an increased chance of s-risk. I'd much rather just die.
