Text has been plausibly fakeable since its invention, yet we have devised ways of trusting it in certain circumstances, else you wouldn't be on this forum. The ease of fakery puts the onus back on you to up-weight your belief in sources where you have high trust in the chain of custody of the information, or where you have some other means of trusting the source.
I think that in the present ecology of the Internet, given the incentives for people you don't know to jockey for position in your attention queue and put imagery in front of you, the status quo is that most of what is put in front of us is already adversarially placed (e.g. for clicks, ads, propaganda). It just doesn't feel that way yet, because fakery is still expensive, so we sometimes falsely assume the content is genuine. In a world where the cost of fakery goes to zero, it will be impossible to maintain the illusion of authenticity.
I think there's an expensive recipe to get at this question, and it goes something like this:
My guess is that if we do this, we will lose the negative valence at test time, i.e. that nothing deep is going on. But it would be very interesting to be wrong.
What would you guess is the training data valence of "unfiltered feelings"? My high-probability guess is that it's something negative: it seems much more likely for people to use the phrase to mean "feelings that are worse than what my socialized filter presents" rather than the opposite. The purpose of the filter is to block out negativity!
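This is, in principle, checkable. A minimal sketch of the kind of measurement I have in mind (entirely my own construction: placeholder snippets stand in for a real corpus search, and an off-the-shelf sentiment classifier stands in for "valence"):

```python
# Rough check of the "training data valence" guess: find contexts containing the
# phrase in a large text corpus, then score each one with a sentiment classifier.
from transformers import pipeline

# Placeholder snippets for illustration only; in practice these would come from a
# corpus search (Common Crawl, Reddit dumps, etc.) for the phrase "unfiltered feelings".
snippets = [
    "I finally shared my unfiltered feelings and it did not go well.",
    "Her journal is full of unfiltered feelings she would never say out loud.",
]

# Off-the-shelf sentiment model as a crude stand-in for "valence".
sentiment = pipeline("sentiment-analysis")
for text, result in zip(snippets, sentiment(snippets)):
    print(f"{result['label']:>8}  {result['score']:.2f}  | {text}")
```

The guess above predicts a clear skew toward negative labels once this is run over a large sample of real contexts.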
Maybe a reason to be optimistic about the societal impact of indistinguishable generated images / videos is that presently it's tempting to believe the story promoted by genuine but dishonest / staged imagery, whereas in the age of fake everything, and full awareness thereof, people will simply stop believing whatever message the likely-fake thing sends.
When working something out with someone else, I find it helpful to try to only communicate measurable and concrete thoughts / predictions, and not any intermediate state. I find that, even among people of very similar intelligence, "internal" intermediate abstract states can be wildly different and incompatible. Attempting to reconcile the abstract state can lead to lots of frustration.
It feels a bit like skiing down a forest run. Coming up to a tree, two similarly good skiers could go around it either via the left or via the right. Their positions will agree both before and after the tree, but while passing it they'll be in very different places (and in fact forcing an averaging of those positions would be a disaster).
Probably this is similar to trying to average out an arbitrary intermediate layer of two neural nets trained on the same dataset but with different random initializations & seeds. Actually, that would be a fun experiment to run. Maybe sometimes it works, and sometimes it doesn't. Maybe thinking is sometimes smooth and other times highly discontinuous.
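For what it's worth, here is a minimal sketch of what that experiment could look like (my own construction, with arbitrary choices: tiny MLPs on toy data, and "averaging an intermediate layer" taken to mean averaging the weights of one hidden layer between the two nets while keeping the rest of net A):

```python
import torch
import torch.nn as nn

def make_net(seed):
    # Same architecture, different random initialization per seed.
    torch.manual_seed(seed)
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 2))

def train(net, x, y, steps=500):
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(net(x), y).backward()
        opt.step()
    return net

# Toy classification data shared by both nets.
torch.manual_seed(0)
x, x_test = torch.randn(2048, 20), torch.randn(512, 20)
y, y_test = (x[:, :2].sum(1) > 0).long(), (x_test[:, :2].sum(1) > 0).long()

net_a = train(make_net(1), x, y)
net_b = train(make_net(2), x, y)

def accuracy(net):
    with torch.no_grad():
        return (net(x_test).argmax(1) == y_test).float().mean().item()

# Hybrid: net A everywhere, except the middle Linear (index 2 in the Sequential),
# whose weights and biases are the element-wise average of A's and B's.
hybrid = make_net(1)
hybrid.load_state_dict(net_a.state_dict())
with torch.no_grad():
    hybrid[2].weight.copy_((net_a[2].weight + net_b[2].weight) / 2)
    hybrid[2].bias.copy_((net_a[2].bias + net_b[2].bias) / 2)

print("A:", accuracy(net_a), "B:", accuracy(net_b), "hybrid:", accuracy(hybrid))
```

The prediction implicit in the skiing analogy is that the hybrid often degrades badly, since the two nets' hidden units need not line up (permutation symmetry alone guarantees many equally good but incompatible solutions); "sometimes it works" would correspond to cases where the layers happen to land in compatible basins.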
In my mind, what gives the black hole its mass is just how pervasive a meme it is. That likely has some correlation with truth, but the correlation is far from 1.
Thank you! Do you have a concrete example to help me better understand what you mean? Presumably the salience and methods one instinctively chooses are those one believes to be more informative, based on cumulative experience and reasoning. Isn't moving away from these also distortionary?
I've been thinking about what I'd call memetic black holes: regions of idea-space that have gathered enough mass that they will suck in anything adjacent to them, distorting judgement for believers and skeptics alike.
The UFO topic is, I think, one such memetic black hole. The idea of aliens is so deeply ingrained in our collective psyche that it is very hard to resist the temptation to attach to it any kind of observation, e.g. a bizarre aerial one. Crucially, I think this works both for those who definitely do and those who definitely don't believe that UFO sightings have actually been alien-related.
For those who do believe, it is irresistible to treat anything in the vicinity of the memetic black hole as evidence for the concept: roughly Bayes' rule, coupled with an inability to keep track of the myriad low-likelihood alternative explanations. This then adds more mass to the black hole and makes the next observation even more likely to be attributed to the same hypothesis.
Conversely, for those who do not believe, it's irresistible to discard anything that flies too close to the black hole, as it will get pattern-matched against other false positives that have been previously debunked, coupled again with limitations of memory and processing.
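To make the believer side slightly more concrete (my own sketch, not anything from the original argument): with a single favored hypothesis $A$ (aliens) and many individually unlikely alternatives $H_i$, Bayes' rule gives

$$P(A \mid \text{obs}) = \frac{P(\text{obs} \mid A)\,P(A)}{P(\text{obs} \mid A)\,P(A) + \sum_i P(\text{obs} \mid H_i)\,P(H_i)}.$$

Each $H_i$ is individually negligible and easy to drop from working memory, but their sum is not; dropping the sum pushes the posterior on $A$ toward 1, and that inflated posterior becomes the prior for the next observation. The skeptic's failure mode is the mirror image: pattern-matching to previously debunked cases effectively sets $P(\text{obs} \mid A) \approx 0$ before the evidence is weighed.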
This phenomenon obviously leads to errors in judgement, specifically path-dependency in how we synthesize information, not to mention a vulnerability to the adversarial planting of memetic traps, i.e. psyops.
Indeed! I meant "we" as a reference to the collective group of which we are all members, without requiring that every individual in the group (i.e. you or I) share in every aspect of the group's general behavior.
To be sure, I would characterize this as a risk factor even if you (or I) never personally fall prey to it, in the same way that it's a risk factor if the IQ of the median human drops by 10 points, which this might effectively be equivalent to (net of distractions).
Consider this subset of a hierarchy of relevant states of the world, ordered from good to bad:
I think there is a case to be made that we are de facto in state #3 now, but AI video gen will move us into state #2. While this is far worse than state #1, it's an improvement rather than a deterioration (I used to be convinced it would be a deterioration, but am now updating my thinking).
Just to mention: once we are firmly in #2, our trust in video should be, just like our trust in text, based almost entirely on our priors about the source of the content. This doesn't mean we've repudiated text, and similarly I don't think it'll mean we repudiate video.
It is debatable whether we are in state #3 now, but I feel we have increasingly been there for a number of years, well before lifelike AI video gen. It's not that the pixels in the videos were artificially generated; rather, the Darwinian ecology of the modern Internet means that the videos most likely to land on the radar of someone who isn't aggressively curating and playing defense are there for ulterior motives such as ads, clicks, whatever, purposes for which truth is nearly irrelevant. If you approach content consumption without active measures to avoid being manipulated (and even then!), you are likely not taking away anything from the text or video that bombards you that approximates truth in any meaningful sense.
Worth noting that the transition from #3 to #2 is likely to be bumpy.