How are you approaching cognitive security as AI becomes more capable?
I'm worried about how increasingly capable AI could hijack my brain. Already:

* LLMs drive some people into psychosis.
* AI-generated content racks up countless views.
* Voice cloning lets scammers impersonate loved ones, bosses, etc.
* AI engagement is difficult to distinguish from real user engagement on social...
Them: "I think X"
You: "That's wrong because Z"
Them: "I think you're just disagreeing because you're not open-minded enough"
You: "What makes you think that?"
Them: "I think it because Y"
What do they say for 'Y'? That seems to be the part that actually constitutes their argument, and the part you'll be able to call out if they're making a mistake.