I've been thinking about what I'd call memetic black holes: regions of idea-space that have accumulated enough mass to suck in anything adjacent to them, distorting judgement for believers and skeptics alike.
The UFO topic is, I think, one such memetic black hole. The idea of aliens is so deeply ingrained in our collective psyche that it is very hard to resist the temptation to attach any bizarre aerial observation, for example, to it. Crucially, I think this works both for those who definitely do and those who definitely don't believe that UFO sightings have actually been alien-related.
For those who do believe, it is irresistible to treat anything in the vicinity of the memetic black hole as evidence for the concept: something like a Bayesian update (a toy version is sketched below), coupled with an inability to keep track of the myriad low-likelihood alternative explanations. This adds more mass to the black hole and makes the next observation even more likely to be attributed to the same hypothesis.
Conversely, for those who do not believe, it's irresistible to discard anything that flies too close to the black hole, because it gets pattern-matched against other false positives that have previously been debunked, coupled again with the limitations of memory and processing.
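To make the believer's failure mode concrete, here is a toy sketch of the update (all of the numbers, hypothesis names, and likelihoods below are invented purely for illustration): once most of the low-likelihood alternatives drop out of the denominator of Bayes' rule, the posterior on the favored hypothesis inflates.

```python
# Toy Bayesian update on a weird sighting E. All numbers are invented.
prior = {"aliens": 0.01, "balloon": 0.30, "drone": 0.30,
         "sensor_glitch": 0.29, "other": 0.10}
# P(E | H): how well each hypothesis "explains" the sighting.
likelihood = {"aliens": 0.90, "balloon": 0.05, "drone": 0.05,
              "sensor_glitch": 0.05, "other": 0.05}

def posterior(h, remembered):
    # Bayes' rule, but normalized only over the hypotheses we
    # actually manage to keep in mind.
    z = sum(prior[k] * likelihood[k] for k in remembered)
    return prior[h] * likelihood[h] / z

print(posterior("aliens", prior))                  # ~0.15 over the full set
print(posterior("aliens", ["aliens", "balloon"]))  # ~0.38 after forgetting most alternatives
```

The update rule itself is fine; the distortion comes from running it over a shrinking set of hypotheses, and each inflated posterior adds mass for the next round.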
This phenomenon obviously leads to errors in judgement, specifically a path-dependency in how we synthesize information, not to mention a vulnerability to the adversarial planting of memetic traps, i.e. psyops.
Indeed! I meant "we" as a reference to the collective group of which we are all members, without requiring that every individual in the group (i.e. you or I) share in every aspect of the group's general behavior.
To be sure, I would characterize this as a risk factor even if you (or I) never personally fall prey to it, in the same way that it's a risk factor if the IQ of the median human drops by 10 points, which this might effectively be equivalent to (net of distractions).
I suppose there are varying degrees of strength to the statement.
I guess I implicitly subscribe to the medium form.
For me, a crux about the impact of AI on education broadly is how our appetite for entertainment behaves at the margin, near entertainment saturation.
Possibility 1: it will always be very tempting to direct our attention to the most entertaining alternative, even at very high levels of entertainment.
Possibility 2: there is some absolute threshold of entertainment above which we become indifferent between unequally entertaining alternatives.
If Possibility 1 holds, I have a hard time seeing how any kind of informational or educational content, which is constrained by having to provide information or education, will ever compete with slop, which is totally unconstrained and can purely optimize for grabbing your attention.
If Possibility 2 holds, and we get really good at making anything more entertaining (this seems like a very doable hill to climb, as it plays directly into the kinds of RL behaviors we are already economically rewarded for monitoring and encouraging), then I'd be very optimistic that a few years from now we can simply make super entertaining education or news, and lots of us might consume that if it gets us our entertainment "fill" plus life benefits to boot.
Which is it?
Maybe there's a deep connection between:
(a) the human propensity to emotionally adjust to the goodness / badness of our recent circumstances, such that we arrive at emotional homeostasis and it's mostly the relative level / the change in circumstances that we "feel"
(b) batch normalization, a common operation in neural network training
Our trailing experiences form a kind of batch of "training data" on which we update, and perhaps we batchnorm their goodness since that's the superior way to update on data without all the pathologies of not normalizing.
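A minimal sketch of the analogy, assuming an invented trailing window of experience "goodness" scores and plain batchnorm without the learned scale and shift:

```python
import numpy as np

def batchnorm(x, eps=1e-5):
    # Standard batch normalization: subtract the batch mean,
    # divide by the batch standard deviation.
    return (x - x.mean()) / np.sqrt(x.var() + eps)

# Hypothetical "goodness" of recent experiences, on a made-up 0-10 scale.
great_week = np.array([9.0, 9.5, 9.2, 9.8, 9.4])
rough_week = np.array([2.0, 2.5, 2.2, 2.8, 2.4])

# After normalization the two weeks "feel" the same: the absolute
# level washes out and only the within-batch contrast survives.
print(batchnorm(great_week))  # ~[-1.40  0.44 -0.66  1.55  0.07]
print(batchnorm(rough_week))  # same values
```

If something like this is going on, it would explain why uniformly great and uniformly rough stretches end up feeling surprisingly similar, with only deviations from the recent baseline registering.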
One can say that being intellectually honest, which often comes packaged with being transparent about the messiness and nuance of things, is anti-memetic.
Having young kids is mind-bending because it's not uncommon to find yourself simultaneously experiencing contradictory feelings, such as:
This is a plausible rational reason to be skeptical of one's own rational calculations: that there is uncertainty, and that one should rationally have a conservativeness bias to account for it. What I think is happening, though, is that there's an emotional blocker that is then being cleverly back-solved by finding plausible rational (rather than emotional and irrational) reasons for it, of which this is one. So it's not that this is a totally bogus reason; it's that it provides a plausible excuse for what is really motivated by something different.
Thank you. I think, even upon identifying the reasons why the emotional mind believes the things it does, I hit a twofold sticking point:
If it sounds like I'm trying to find reasons to not make the change, perhaps that's another symptom of the problem. There's a saboteur in the machine!
Thank you! Do you have a concrete example to help me better understand what you mean? Presumably the salience and methods that one instinctively chooses are those one believes to be more informative, based on cumulative experience and reasoning. Isn't moving away from these also distortionary?