Remove AI from this picture and it says, "Participating in the public sphere is risky. Someone could hurt you or trick you. They could listen to what you say and use it against you. If you openly admit what you value, someone could hold it for ransom to compel your cooperation in their schemes."
Which is all true.
But it's also true that participating in the public sphere enables cooperation; enables mutual aid; enables creating reputation and credibility; enables joining with others to achieve your values.
Given that AIs will get more powerful and more significant pretty soon, it seems pretty relevant to ask whether one should take actions to prevent AIs from knowing about you. It also seems plausible that while the particular problems with being known publicly will stick around, they may get worse, and we may end up with a very different distribution of upsides and downsides.
I think your comment ignores the fact that OP is stating what problems (and benefits) they expect to be more salient if an AI is able to train on your posts.
Note that the risks of being modeled are EXACTLY the reasons that you want to publish in the first place. You want other people to be able to understand you and to jointly update models with information each other provides. And you risk that people will misuse your posts in order to hurt or manipulate you.
I'd argue that AI increases both sides of the equation: you benefit greatly by not having to re-explain yourself, and from people AND AIs engaging with you and helping you refine and update your models. But AIs also have more time and focus to use your writing against you.
These may not be symmetric, so it may flip your personal equilibrium from public to private. But it should never have been binary anyway. What you publish has always been curated and limited to things you want in your permanent record, and for which you expect more benefit than risk. If that line shifts by a little, that's reasonable. If it shifts entirely one way (or the other), you're overreacting.
It's sometimes suggested that as a defense against superpersuasion, one could get a trusted AI to paraphrase all of one's incoming messages. That way an attacker can't subliminally steer the target by choosing exactly the right words and phrases to push their buttons.
Maybe one could also defend against superhuman persuasion and manipulation by getting a trusted AI to paraphrase all of one's outgoing messages. That way an attacker can't pick up on subtle clues about the target's psychology embedded in their writing style. If this paraphrasing strategy works, the attacker is denied the training data they need to figure out how to push the target's buttons.
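To make this concrete, here's a minimal sketch of what an outgoing-message paraphraser could look like, assuming the OpenAI Python SDK; the model name, system prompt, and temperature are illustrative placeholders, and a locally hosted model would fit the threat model better (you'd rather not send everything you write to a third party).

```python
# Minimal sketch of an outgoing-message paraphrasing filter.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and prompt are placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Rewrite the user's message so that it preserves the literal content "
    "but discards idiosyncratic word choice, rhythm, and formatting. "
    "Return only the rewritten message."
)

def paraphrase_outgoing(message: str, model: str = "gpt-4o-mini") -> str:
    """Strip stylistic fingerprints from a message before it is sent."""
    response = client.chat.completions.create(
        model=model,
        temperature=1.0,  # extra sampling noise further blurs style cues
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(paraphrase_outgoing("honestly i think we shld just ship it friday, vibes are good"))
```

The same wrapper, pointed at incoming messages instead, covers the original suggestion of paraphrasing what you receive.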
I'd probably say not to worry too much. The question isn't whether public posting creates additional risk; it's whether that marginal risk matters given the baseline risk.
If there is a superpersuader around, it has many avenues of attack, such as a $5 wrench in the hands of a more gullible person.
I like being open on the internet. But I have been getting increasingly worried (for the last few years, without ever really acting on it) that public posting is dangerous: in the future, your public corpus will expose you to increased risk of persuasion, extortion, and other dangers from AIs that are good at understanding and exploiting cues and information deduced from your writing.
This is scary to think about because, at least for me, the implications are very negative: I value being able to post and share without concern. I would like to see more engagement and discussion on this issue, and to hear people's opinions.
Some examples of risks:
This data can be used to (a) replace literal individuals, (b) manipulate them, and (c) impersonate them.
You can defend yourself by trying to block any kind of unwanted outreach, but even then, your information could be exploited and used by the people you interact with in unnatural and harmful ways.
Should you be worried about this? I am curious about people's opinions. Acting on this worry would mean closing off your public internet presence and maybe designing and using communities that have defense mechanisms (e.g., they can't be scraped).
For being worried:
Against:
I value being able to post and engage openly so much that even walled gardens feel like a middling compromise. But there is a big problem here if deep patterns about individuals, their weaknesses, vulnerabilities, etc. can be read off easily. Even in terms of general societal robustness this has big implications, for example for more advanced espionage, political maneuvering, and so on. So are we cooked? What kinds of solutions can address this given malicious actors? The law? Subtle effects of this seem very hard to check.
I would also like to see more research effort on quantifying and understanding the upper bound on persuasion, and on what kinds of patterns can already be detected from user personas.
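As a small illustration of how low the bar already is, here is a toy sketch (authors and example texts are invented) of a standard stylometric baseline: character n-gram features plus a linear classifier for authorship attribution. The same machinery, given more data, is the kind of thing that could link an anonymous account back to a public corpus.

```python
# Toy baseline for "what can already be read off from someone's public writing":
# character n-gram stylometry for authorship attribution with scikit-learn.
# The posts and author labels below are placeholders; a real evaluation would
# use a corpus of posts per author and proper cross-validation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: a few posts each from two authors.
posts = [
    "tbh I think the tradeoff here is mostly fine, people overindex on tail risk",
    "people overindex on vibes imo, tbh the base rates are what matter",
    "One should, I think, be rather careful before drawing strong conclusions.",
    "It would be prudent, I believe, to reserve judgement until more data arrives.",
]
authors = ["alice", "alice", "bob", "bob"]

# Character n-grams capture punctuation habits, contractions, and spelling
# quirks: stylistic fingerprints that tend to survive topic changes.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(posts, authors)

print(model.predict(["imo the tail risk thing is overindexed, base rates ftw"]))
# With enough data per author, simple baselines like this can attribute
# anonymous text to known authors well above chance.
```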