I suppose there are varying degrees of strength to the statement.
I guess I implicitly subscribe to the medium form.
For me, a crux about the impact of AI on education broadly is how our appetite for entertainment behaves at the margin, near entertainment saturation.
Possibility 1: it will always be very tempting to direct our attention to the most entertaining alternative, even at very high levels of entertainment
Possibility 2: there is some absolute threshold of entertainment above which we become indifferent between unequally entertaining alternatives
If Possibility 1 holds, I have a hard time seeing how any kind of informational or educational content, which is constrained by having to provide information or education, will ever compete with slop, which is totally unconstrained and can purely optimize for grabbing your attention.
If Possibility 2 holds, and we get really good at making anything more entertaining (this seems like a very doable hill to climb, as it directly plays into the kinds of RL behaviors we are already economically rewarded for monitoring and encouraging), then I'd be very optimistic that a few years from now we can simply make super entertaining education or news, and lots of us might consume that if it gets us our entertainment "fill", plus life benefits to boot.
Which is it?
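To make the distinction concrete, here's a toy sketch of the two possibilities as choice rules (the options, numbers, and threshold below are all hypothetical, purely for illustration):

```python
# Each option is (name, entertainment value, life benefit); numbers are made up.
options = [
    ("pure slop", 10.0, 0.0),
    ("super entertaining education", 9.5, 8.0),
]

def choose_possibility_1(options):
    # Possibility 1: attention always goes to the most entertaining option,
    # even at very high absolute levels of entertainment.
    return max(options, key=lambda o: o[1])

def choose_possibility_2(options, threshold=9.0):
    # Possibility 2: above an absolute entertainment threshold we become
    # indifferent, so other benefits get to break the tie.
    saturated = [o for o in options if o[1] >= threshold]
    pool = saturated if saturated else options
    return max(pool, key=lambda o: o[2])

print(choose_possibility_1(options)[0])  # -> pure slop
print(choose_possibility_2(options)[0])  # -> super entertaining education
```

Under the first rule the most entertaining option always wins; under the second, anything above the saturation threshold ties, and the life benefits decide.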
Maybe there's a deep connection between:
(a) the human propensity to emotionally adjust to the goodness / badness of our recent circumstances such that we arrive at emotional homeostasis, and it's mostly the relative level / the change in circumstances that we "feel"
(b) batch normalization, the common operation for training neural networks
Our trailing experiences form a kind of batch of "training data" on which we update, and perhaps we batchnorm their goodness since that's the superior way to update on data without all the pathologies of not normalizing.
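As a toy illustration of the analogy (all numbers made up), here's what batchnorm does to a trailing window of experiences: the absolute level of goodness washes out, and only the relative variation within the window survives.

```python
import numpy as np

def batchnorm(x, eps=1e-5):
    # Plain batch normalization (no learned scale/shift):
    # center and rescale the batch to zero mean, unit variance.
    return (x - x.mean()) / np.sqrt(x.var() + eps)

# Hypothetical "goodness" of recent circumstances on an absolute scale:
great_week = np.array([8.0, 8.5, 9.0, 8.2, 8.8])  # a uniformly great stretch
rough_week = np.array([2.0, 2.5, 3.0, 2.2, 2.8])  # a uniformly rough one

# After normalizing over the trailing batch, the two weeks "feel" identical;
# only the ups and downs relative to the window remain.
print(batchnorm(great_week))
print(batchnorm(rough_week))
```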
One can say that being intellectually honest, which often comes packaged with being transparent about the messiness and nuance of things, is anti-memetic.
Having young kids is mind-bending because it's not uncommon to find yourself simultaneously experiencing contradictory feelings, such as:
This is a plausible rational reason to be skeptical of one's own rational calculations: that there is uncertainty, and that one should rationally have a conservatism bias to account for it. What I think is happening, though, is that there's an emotional blocker that is then being cleverly back-solved by finding plausible rational (rather than emotional and irrational) reasons for it, of which this is one. So it's not that this is a totally bogus reason; it's that it provides a plausible excuse for what is actually motivated by something different.
Thank you. I think, even upon identifying the reasons why the emotional mind believes the things it does, I hit a twofold sticking point:
If it sounds like I'm trying to find reasons to not make the change, perhaps that's another symptom of the problem. There's a saboteur in the machine!
This is both a declaration of a wish and a question, should anyone want to share their own experience with this idea and perhaps tactics for getting through it.
I often find myself with a disconnect between what I know intellectually to be the correct course of action, and what I feel intuitively is the correct course of action. Typically this might arise because I'm just not in the habit of / didn't grow up doing X, but now when I sit down and think about it, it seems overwhelmingly likely to be the right thing to do. Yet, it's often my "gut" and not my mind that provides me with the activation energy needed to take action.
I wish I had some toolkit for taking things I intellectually know to be right / true, and making them "feel" true in my deepest self, so that I can then more readily act on them. I just don't know how to do that -- how to move something from my head to my stomach, so to speak.
Any suggestions?
Something that gets in the way of my making better decisions is that I have strong empathy that "caps out" the negative disutility that a decision might cause to someone, which makes it hard to compare across decisions with big implications.
In the example of the trolley problem, both branches feel maximally negative (imagine my utility from each of them is negative infinity), so I have trouble comparing them, and I am very likely to simply want not to be involved. This makes it hard for me to perform the basic utility calculation in my head, perhaps not in the literal trolley problem where the quantities are obvious, but certainly in any situation that's more ambiguous.
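As a toy illustration of why this degenerates (the cap and the utilities below are made-up numbers): once the felt disutility saturates, unequally bad branches become indistinguishable.

```python
# Hypothetical utilities for the two branches of a hard decision.
branch_a = -1000.0  # e.g. harm to five people
branch_b = -200.0   # e.g. harm to one person

def felt_utility(u, cap=-100.0):
    # Empathy that "caps out": anything worse than the cap registers
    # as the same maximal negative feeling.
    return max(u, cap)

print(branch_a < branch_b)                               # True: the raw numbers can be compared
print(felt_utility(branch_a) == felt_utility(branch_b))  # True: the felt values are identical
```

With both felt values pinned at the cap, the comparison carries no signal, and "stay out of it" wins by default.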
Indeed! I meant "we" as a reference to the collective group of which we are all members, without requiring that every individual in the group (i.e. you or I) share in every aspect of the general behavior of the group.
To be sure, I would characterize this as a risk factor even if you (or I) do not personally fall prey to it, in the same way that it's a risk factor if the IQ of the median human drops by 10 points, which this might effectively be equivalent to (net of distractions).