We automatically change people's minds on the AI threat

by Mikhail Samin
3rd Oct 2025
This is a linkpost for https://whycare.aisgf.us

After each message, the chatbot asks how helpful the response was, on a scale from "Not at all" to "Completely changed my mind". n=24, median=5, average=4.46.
One of the zeros came from someone who added a comment saying they had already been convinced.

Something I've been doing for a while with random normal people (from Uber drivers to MP staffers) is paying close attention to the diff I need to communicate to them on the danger that AI would kill everyone. Usually their questions reveal what they're curious about and what information they're missing; you can then supply what they're missing in a way they'll find intuitive.

We've made a lot of progress automating this. A chatbot we've created makes arguments that are more valid and convincing than you'd expect from current systems.

We've crafted the context to make the chatbot grok, as much as possible, the generators for why the problem is hard. I think the result is pretty good. Around a third of the bot's responses are basically perfect.

We encourage you to try it yourself: https://whycare.aisgf.us. Have a counterargument that a normal person might hold for why AI is unlikely to kill everyone? Ask it!

If you know normal people who have counterarguments, try giving them the chatbot and see how they interact with it and whether it helps.

We're looking for volunteers (especially anyone who can help with design), for ideas for a good domain name, and for funding.