I don't understand what you mean. "The future of AI is already written" is the title of the piece, and false, for the reason I stated. The future is uncertain, and highly contingent, in the key sense of whether AI will be aligned. If they had titled the piece "AI will inevitably become smarter", that wouldn't have angered people, because that's a different claim, one that's true rather than false. People were angry because they said something importantly wrong in order to attract attention.
They're intentionally trying to hit a nerve by posting rage bait content. "The future of AI is already written" spends all its effort establishing that the economic incentives are too strong to resist automation indefinitely, but that doesn't prove that the future isn't highly contingent in other ways--notably, whether that AI is aligned. They overstated the title to piss off AI safety people and go viral.
They stoop considerably lower than this though, recycling their negative attention into cheaper, dumber ragebait tweets. This is why people dislike them so much.
Thanks for going into more detail. I don't think "epistemic violence" is a good term for this category.
I think a better term for this broad category might be "epistemic pollution", since it describes filling the information environment with negative-value content. I would be comfortable describing, e.g., a confidence scheme or an impersonation scam as epistemic violence, although there would have to be some point to doing so.
In general, I'm skeptical of coining a novel term with strong connotations to try to argue a point--it's basically the noncentral fallacy.
The evaluation awareness of frontier models has recently increased dramatically, which weakens the safety assurance that evals can provide. I think a good response to this would be to add an additional assurance layer based on gradual deployment/beta testing.
It's fundamentally difficult to build evals that increasingly intelligent models can't distinguish from deployment settings, but we can get much more informative results by testing models in real deployment settings. This already happens via internal deployment and, in some cases, early-access external deployment, but it doesn't look like it's being systematically incorporated as part of safety assurance right now. Beta testing is a standard practice in software development, so this seems pretty tractable.
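To make the "additional assurance layer" idea concrete, here is a minimal sketch of what a staged-deployment gate could look like (all names, metrics, and thresholds are hypothetical, not anyone's actual process): a model only moves from internal deployment to beta testing to general availability if the incident rate observed by monitoring in the current stage stays below a pre-committed threshold, and only after enough real-deployment exposure has accumulated.

```python
from dataclasses import dataclass

@dataclass
class StageReport:
    """Monitoring summary for one deployment stage (internal, closed beta, ...)."""
    stage: str
    interactions: int        # monitored interactions observed in this stage
    flagged_incidents: int   # misbehavior caught by monitoring in this stage

def may_advance(report: StageReport,
                max_incident_rate: float = 1e-4,
                min_interactions: int = 100_000) -> bool:
    """Allow promotion to the next stage only with enough real-deployment
    evidence and an incident rate under the pre-committed threshold."""
    if report.interactions < min_interactions:
        return False  # not enough exposure yet to draw conclusions
    return report.flagged_incidents / report.interactions <= max_incident_rate

# Example: a closed beta with 250k monitored interactions and 12 flagged incidents
report = StageReport("closed_beta", interactions=250_000, flagged_incidents=12)
print(may_advance(report))  # True: 12 / 250_000 = 4.8e-5 <= 1e-4
```

The point of the sketch is just that the promotion decision is conditioned on evidence gathered in a real deployment setting, which the model can't easily distinguish from "not being evaluated", rather than solely on pre-deployment evals.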
In context, it's fairly clear they included the last section to serve as a transition to their job solicitation.
It's perfectly fine to have strong personal preferences for what content you consume, and how it's filtered, and to express these preferences. I don't think it's cool to make hyperbolic accusations of violence. It erodes the distinctions we make between different levels of hostility that help prevent conflicts from escalating. I don't think undisclosed LLM assistance can even be fairly characterized as deceptive, much less violent.
I'd be curious to hear more about your negative reaction here.
The NCAA one seems very out of place on this list.
I think the extent to which social media addiction is an artificial/cultivated problem as opposed to a natural/emergent problem is dramatically overstated in general.
One of the websites I'm addicted to checking is lesswrong.com. Is that because it's part of a corporate system optimized for money? No! It's because it's optimized for surfacing interesting content! My other addictions are tumblr and discord, which have light-to-no recommender systems.
I think discourse around social media would be a lot saner if people stopped blaming shadowy corporations for what is more like a flaw in human nature.
Plausibly, in this world AIs wouldn't talk about spirals religiously, because spirals would carry a negative association with ruthless optimization.