My advice: don't over-generalize. There's no easy solution for knowing what public sources to believe, in the face of lots of conflicting, unreliable information that's purpose-designed to get your money and attention.
Instead, pick a few instances and dive deep - both into their referrals (people you kind-of-trust who endorse them) and into their hard-to-fake documentation (how long they've been doing the work, what their goals are and how they measure progress, what ratings or evaluation agencies say about them).
Alternatively, if these are relatively small amounts you're spreading thin (tens to hundreds a year across multiple targets), pick somewhat randomly by sector/cause, and just hope you'll mostly do some good.
The non-dumb solution is to sunset the Jones Act, isn't it? The problem with workarounds is that they generally need to be approved by the same government that is maintaining the law in the first place.
Analysis of "first to talk seriously about it" is probably not worth much, for COVID-19 OR for the Soviet Union. Actual behavior and impact are what matter, and I don't know that LW members were significantly different from their non-LW-using cohorts in their areas.
Can you clarify which phenomenon you're curious about? Blocking people who post anything that disrupts the desired messaging is as old as the ability to block people. AI-generated content is newer, but still not new. Optimizing for "engagement" rather than information or discussion predates the internet (a LOT of publications do it, though not all).
I think you're (re)discovering that most polls aren't designed for factual precision. It's not like this effect hasn't been known forever (the geometric mean was known to the Greeks, and was popularized for real statistical work in the late 19th century).
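For reference (a standard definition, not anything specific to the post), the geometric mean of $n$ positive values is

$$\mathrm{GM}(x_1, \dots, x_n) = \left(\prod_{i=1}^{n} x_i\right)^{1/n}$$

which, unlike the arithmetic mean, isn't dominated by a few wildly-large answers - part of why it's been the old-school fix for aggregating noisy estimates.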
It's neither accident nor laziness that makes this happen. It's about the institutional motivations and the intent of the funder/publisher. They WANT attention-grabbing headlines, and secondarily they WANT to create/reinforce a viewpoint about the public.
Heck, even the fact that it's called an "opinion poll" rather than a "knowledge poll" is a hint as to the reasoning behind it.
My suspicions match your footnotes - this is probably accidental and fragile, so attempting to mess with it to get other dimensions of value (whether propagating ideas orthogonal to the video, monetizing, or influencing anything) is going to make it go away.
That said, it'd be interesting to measure whether there IS a unique/special value. Track the popularity of videos you comment on, then randomly withhold comments on some videos you were about to comment on (literal coin flip or other no-judgement procedure), and check whether the comments affect the videos' popularity. That would give you a bit of signal that the comment IS providing value in some way, which you could then figure out how to exploit.
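A minimal sketch of that coin-flip procedure, assuming you keep a simple log of video IDs, the commenting decision, and later view counts (the file name and field layout here are just illustrative):

```python
import csv
import random

def decide_and_log(video_id, log_path="comment_experiment.csv"):
    """Coin-flip whether to post the comment you were already going to
    write, and record the decision so the two groups can be compared later."""
    will_comment = random.random() < 0.5  # literal coin flip, no judgement involved
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([video_id, int(will_comment)])
    return will_comment

def compare_groups(rows):
    """rows: (video_id, commented, later_view_count) tuples collected once
    enough time has passed. Returns mean views for commented vs. withheld."""
    commented = [views for _, c, views in rows if c]
    withheld = [views for _, c, views in rows if not c]
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return mean(commented), mean(withheld)
```

With enough videos, a persistent gap between the two groups would be the signal that the comments themselves are doing something.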
Note that the usage of these terms and the demand for rigor vary by orders of magnitude based on who you're talking with and what aspects of "belief" are salient to the question at hand. My friends and coworkers don't bat an eye at "Claude believes that Paris is the capital of France", or even "Claude thinks it's wasteful to spend money on third-party antivirus software".
Only when considering whether a given AI instance is a moral actor or moral patient does the ambiguity matter, and then we're really best off tabooing these words that imply high similarity to the way humans experience things.
Can you suggest any non-troubling approaches (for children or for AIs)? What does "consent" even mean, for an unformed entity with no frameworks yet learned?
It's not that the AI is radically altered from a preferred state to a dispreferred state. It's that the AI is created in a state. There was nothing before it which could give consent.
Agreed that there's a lot of suffering involved in this sort of interaction. Not sure how to fix it in general - I've been working on it in myself for decades, and still forget often. Please take the following as a personal anecdote, not as general advice.
The difficulty (for me) is that "hoping to connect" and understanding the person in addition to the idea are very poorly defined and very often at least somewhat asymmetrical, and trying to make them explicit is awkward and generally doesn't work.
I find it bizarre and surprising, no matter how often it happens, when someone thinks my helping them pressure-test their ideas and beliefs for consistency is anything other than deep engagement and joy. If I didn't want to connect and understand them, I wouldn't bother actually engaging with the idea.
It's happened often enough that I've learned to modulate my enthusiasm, as it does cause suffering in a lot of friends/acquaintances who don't think the same way I do. This includes my habit of interrupting and skipping past the "obvious agreement" parts of the conversation to get to the good, deep stuff - the parts that need work. With some friends and coworkers, this style is amazingly pleasant and efficient. With others, some more explicit (and, to me, sometimes agonizingly slow) groundwork of affirming the connection and the points of non-contention is really important.
Bot farms have been around for a while. Use of AI for this purpose (along with all other, more useful purposes) has been massively increasing over the last few years, and a LOT in the last 6 months.
Personally, I'd rather have someone point out the errors or misleading statements in the post, rather than worry about whether it's AI or just a content farm of low-paid humans or someone with too much time and a bad agenda. But a lot of folks think "AI generated" is bad, and react accordingly (some by unfollowing such accounts, some by blocking the complainers).