The embedded and underlying falsehood is that "the poverty line" or "the cost of living" is a useful tool for policy or personal decision-making. Two major issues confound the desire for simplicity:
1) One size does not fit all. Both across and within geographies, across and within families, and across time for the same individuals, variance in expectations, community/family support, and behaviors can change the requirements to experience poverty, comfort, or wealth by more than an order of magnitude.
2) Poverty is multidimensional, and it's a continuum, not a line. It's quite possible to be impoverished in some elements (education or entertainment) at a VERY different level than in others (nutrition or leisure time).
Bot farms have been around for a while. Use of AI for this purpose (along with all other, more useful purposes) has been massively increasing over the last few years, and a LOT in the last 6 months.
Personally, I'd rather have someone point out the errors or misleading statements in the post than worry about whether it's AI, a content farm of low-paid humans, or someone with too much time and a bad agenda. But a lot of folks think "AI generated" is bad, and react as such (some by unfollowing such accounts, some by blocking the complainers).
My advice: don't over-generalize. There's no easy solution for knowing what public sources to believe, in the face of lots of conflicting, unreliable information that's purpose-designed to get your money and attention.
Instead, pick a few instances and dive deep - both into their referrals (people you kind-of-trust who endorse them) and into their hard-to-fake documentation (how long they've been doing their work, what their goals are and how they measure progress, what ratings or evaluation agencies say about them).
Alternatively, if these are relatively small amounts you're spreading thin (tens to hundreds of dollars a year across multiple targets), pick somewhat randomly by sector/cause, and just hope you'll mostly do some good.
The non-dumb solution is to sunset the Jones Act, isn't it? The problem with workarounds is that they generally need to be approved by the same government that is maintaining the law in the first place.
Analysis of "first to talk seriously about it" is probably not worth much, for COVID-19 OR for the Soviet Union. Actual behavior and impact are what matter, and I don't know that LW members were significantly different from their non-LW-using cohorts in their areas.
Can you clarify which phenomenon you're curious about? Blocking people who post anything that disrupts the desired messaging is as old as the ability to block people. AI-generated content is newer, but still not new. Optimizing for "engagement" rather than information or discussion predates the internet (it describes a LOT of publications, though not all).
I think you're (re)discovering that most polls aren't designed for factual precision. It's not like this effect hasn't been known forever (the geometric mean was known to the Greeks, and was popularized in the late 19th century for real statistical work).
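For concreteness, a minimal Python sketch (the guesses are invented) of why the aggregation method matters: a few wild overestimates dominate the arithmetic mean but barely move the geometric mean.

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # nth root of the product, computed via logs to avoid overflow
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical responses to a "how many X are there?" poll question.
guesses = [80, 100, 120, 150, 10_000]
print(arithmetic_mean(guesses))  # 2090.0 -- dragged up by the one outlier
print(geometric_mean(guesses))   # ~270 -- close to the typical guess
```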
It's neither accident nor laziness that makes this happen. It's about the institutional motivations and the intent of the funder/publisher. They WANT attention-grabbing headlines, and secondarily they WANT to create/reinforce a viewpoint about the public.
Heck, even the fact that it's called an "opinion poll" rather than a "knowledge poll" is a hint as to the reasoning behind it.
My suspicions match your footnotes - this is probably accidental and fragile, so attempting to mess with it to get other dimensions of value (propagating ideas orthogonal to the video, monetizing, or influencing anything) is going to make it go away.
That said, it'd be interesting to measure whether there IS a unique/special value. Track the popularity of videos you comment on, then randomize NOT commenting on some videos you were about to (literal coin flip or other no-judgement procedure). Comparing the two groups would give you a bit of signal on whether the comments impact the videos' popularity - that is, whether the comment IS providing value in some way, which you could then figure out how to exploit. A sketch of that protocol follows.
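A minimal sketch, assuming you can log view counts before and after some fixed window (the names, the 50/50 split, and the log structure are all illustrative, not any real API):

```python
import random

# Hypothetical in-memory log of experiment records; persist it in practice.
experiment_log = []

def decide_and_record(video_id, views_before):
    """Coin-flip whether to post the comment you were about to post."""
    commented = random.random() < 0.5  # literal coin flip, no judgement
    experiment_log.append({"video_id": video_id,
                           "commented": commented,
                           "views_before": views_before,
                           "views_after": None})  # fill in after the window
    return commented  # only actually post the comment when this is True

def estimate_comment_effect():
    """Difference in average view growth: commented vs. skipped videos."""
    def mean_growth(rows):
        rows = [r for r in rows if r["views_after"] is not None]
        if not rows:
            return 0.0
        return sum(r["views_after"] - r["views_before"] for r in rows) / len(rows)
    commented = [r for r in experiment_log if r["commented"]]
    skipped = [r for r in experiment_log if not r["commented"]]
    return mean_growth(commented) - mean_growth(skipped)
```

With enough videos, a persistently positive difference would be the signal that the comments themselves are doing something.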
Note that the usage of these terms and the demand for rigor vary by orders of magnitude based on who you're talking with and what aspects of "belief" are salient to the question at hand. My friends and coworkers don't bat an eye at "Claude believes that Paris is the capital of France", or even "Claude thinks it's wasteful to spend money on third-party antivirus software".
Only when considering whether a given AI instance is a moral actor or moral patient does the ambiguity matter, and then we're really best off tabooing these words that imply high similarity to the way humans experience things.
All of them are lovely and extremely useful in my world-models. And all of them have failed me at one time or another. Here are some modes where I thought I understood something and it seemed obvious, but I was missing important assumptions: