Decaeneus

Comments

Sorted by Newest
Decaeneus's Shortform
Decaeneus · 2mo

Indeed! I meant "we" as a reference to the collective group of which we are all members, without requiring that every individual in the group (i.e. you or I) share in every aspect of the group's general behavior.
 

To be sure, I would characterize this as a risk factor even if you (or I) never personally fall prey to it, in the same way that it's a risk factor if the median human's IQ drops by 10 points, which this might effectively be equivalent to (net of distractions).

Decaeneus's Shortform
Decaeneus · 2mo

I suppose there are varying degrees of the strength of the statement.

  • Strong form: sufficiently compelling entertainment is irresistible for almost anyone (and of course it may disguise itself as different things to seduce different people, etc.)
  • Medium form: it's not theoretically irresistible, and if you're really willful about it you can resist it, but people by and large will (perhaps by choice, ultimately) not resist it, much as they (we?) have not resisted dedicating an increasing fraction of their time to digital entertainment so far.
  • Weak form: it'll be totally easy to resist, and a significant fraction of people will.

I guess I implicitly subscribe to the medium form.

Decaeneus's Shortform
Decaeneus · 2mo

For me, a crux about the impact of AI on education broadly is how our appetite for entertainment behaves at the margins close to entertainment saturation.

Possibility 1: it will always be very tempting to direct our attention to the most entertaining alternative, even at very high levels of entertainment

Possibility 2: there is some absolute threshold of entertainment above which we become indifferent between unequally entertaining alternatives

If Possibility 1 holds, I have a hard time seeing how any kind of informational or educational content, which is constrained by having to provide information or education, will ever compete with slop, which is totally unconstrained and can purely optimize for grabbing your attention.

If Possibility 2 holds, and we get really good at making anything more entertaining (this seems like a very doable hill to climb, as it directly plays into the kinds of RL behaviors we are already economically rewarded for monitoring and encouraging), then I'd be very optimistic that a few years from now we can simply make super entertaining education or news, and lots of us might consume that if it gets us our entertainment "fill" plus life benefits to boot.

Which is it?
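One way to make the two possibilities concrete is a toy choice model (my own illustration, not from the comment; the names and numbers are hypothetical):

```python
# Toy model: how attention allocates between "slop" and "education"
# under the two possibilities about entertainment at the margin.

def choose(entertainment_slop, entertainment_edu, threshold=None):
    """Return which option wins attention.

    threshold=None -> Possibility 1: the more entertaining option always wins.
    threshold=t    -> Possibility 2: once both options clear t, we're
                      indifferent on entertainment, so side benefits
                      (here, education) tip the choice.
    """
    if threshold is not None and min(entertainment_slop, entertainment_edu) >= threshold:
        return "education"  # both are "entertaining enough"; benefits decide
    return "slop" if entertainment_slop > entertainment_edu else "education"

# Slop, being unconstrained, can always out-optimize on raw entertainment (9 vs 8):
print(choose(9, 8))               # Possibility 1: slop wins
print(choose(9, 8, threshold=7))  # Possibility 2: education wins past saturation
```

The crux is just whether the comparison is always on raw entertainment (Possibility 1) or saturates at a threshold (Possibility 2).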

Decaeneus's Shortform
Decaeneus · 2mo

Maybe there's a deep connection between:

(a) the human propensity to emotionally adjust to the goodness / badness of our recent circumstances, such that we arrive at emotional homeostasis and it's mostly the relative level / the change in circumstances that we "feel"

(b) batch normalization, a common operation in training neural networks

 

Our trailing experiences form a kind of batch of "training data" on which we update, and perhaps we batchnorm their goodness since that's the superior way to update on data without all the pathologies of not normalizing.
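A minimal sketch of the analogy (my framing, not the commenter's): treat a trailing window of experiences as a "batch" and normalize it the way batch norm normalizes activations. After normalization only the relative shape of recent circumstances survives; the absolute level is gone, which mirrors hedonic adaptation.

```python
import numpy as np

def batchnorm(batch, eps=1e-5):
    """Standard batch normalization (no learned scale/shift):
    subtract the batch mean, divide by the batch standard deviation."""
    return (batch - batch.mean()) / np.sqrt(batch.var() + eps)

# Two trailing windows of experience at very different absolute levels...
rough_patch = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
good_times  = np.array([101.0, 102.0, 103.0, 102.0, 101.0])

# ...feel identical once normalized: only relative change remains.
print(np.allclose(batchnorm(rough_patch), batchnorm(good_times)))  # True
```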

Kabir Kumar's Shortform
Decaeneus · 2mo

One can say that being intellectually honest, which often comes packaged with being transparent about the messiness and nuance of things, is anti-memetic.

Decaeneus's Shortform
Decaeneus · 2mo

Having young kids is mind-bending because it's not uncommon to find yourself simultaneously experiencing contradictory feelings, such as:

  • I'm really bored and would like to be doing pretty much anything else right now.
  • There will likely come a point in my future when I would trade anything, anything to be able to go back in time and re-live an hour of this.
Decaeneus's Shortform
Decaeneus · 2mo

This is a plausible rational reason to be skeptical of one's own rational calculations: there is uncertainty, and one should rationally have a conservativeness bias to account for it. What I think is happening, though, is that there's an emotional blocker that is then being cleverly back-solved by finding plausible rational (rather than emotional and irrational) reasons for it, of which this is one. So it's not that this is a totally bogus reason; it's that it provides a plausible excuse for what is actually motivated by something different.

Decaeneus's Shortform
Decaeneus · 2mo

Thank you. I think, even upon identifying the reasons for why the emotional mind believes the things it does, I hit a twofold sticking point:

  • I consider the constraints themselves (rarely in isolation, but more as the personality milieu they are enmeshed with) to be part of my identity, and attempting to break them is scary both in a deep existential loss-of-self sense and in a "this may well be load-bearing in ways I can't fully think through" sense
  • Even orthogonal to the first bullet, it's somehow hard to change them even though with my analytical mind I can see what's going on. It's almost as if emotional Bayesian updating brought these beliefs / tendencies to a very sharp peak long ago; circumstances have since changed, but the peak is too sharp to believe away with new experience.

If it sounds like I'm trying to find reasons to not make the change, perhaps that's another symptom of the problem. There's a saboteur in the machine!

Decaeneus's Shortform
Decaeneus · 2mo

This is both a declaration of a wish, and a question, should anyone want to share their own experience with this idea and perhaps tactics for getting through it.

I often find myself with a disconnect between what I know intellectually to be the correct course of action, and what I feel intuitively is the correct course of action. Typically this might arise because I'm just not in the habit of / didn't grow up doing X, but now when I sit down and think about it, it seems overwhelmingly likely to be the right thing to do. Yet, it's often my "gut" and not my mind that provides me with the activation energy needed to take action.

I wish I had some toolkit for taking things I intellectually know to be right / true, and making them "feel" true in my deepest self, so that I can then more readily act on them. I just don't know how to do that -- how to move something from my head to my stomach, so to speak.

Any suggestions?

Decaeneus's Shortform
Decaeneus · 2mo

Something that gets in the way of my making better decisions is that I have strong empathy that "caps out" the disutility a decision might cause to someone, which makes it hard to compare across decisions with big implications.

In the example of the trolley problem, both branches feel maximally negative (imagine my utility from each of them is negative infinity) so I have trouble comparing them, and I am very likely to simply want to not be involved. This makes it hard for me to perform the basic utility calculation in my head, perhaps not in the literal trolley problem where the quantities are obvious, but certainly in any situation that's more ambiguous.
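The problem with capping can be made explicit with a toy sketch (mine, not the poster's; the floor value and stakes are hypothetical):

```python
# If empathy "caps" disutility at some floor, options with very different
# stakes collapse to the same felt value and become incomparable.

FLOOR = -10.0  # everything worse than this feels equally, maximally bad

def capped_utility(u, floor=FLOOR):
    return max(u, floor)

one_death, five_deaths = -100.0, -500.0

# Uncapped utilities still rank the options:
print(one_death > five_deaths)  # True

# Capped utilities both hit the floor, so the comparison carries no signal:
print(capped_utility(one_death) == capped_utility(five_deaths))  # True
```

This is the trolley-problem situation described above: once both branches read as "negative infinity," the utility calculation returns a tie and the pull toward non-involvement takes over.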

Posts

  • Self-censoring on AI x-risk discussions? [Question] (1y)
  • Decaeneus's Shortform (2y)
  • Daisy-chaining epsilon-step verifiers [Question] (2y)