MalcolmOcean

Creator of the Intend app (formerly known as Complice), a system for orienting to each day with intentionality, in service of long-term careabouts. It features coworking rooms, the longest-running of which is the Less Wrong Study Hall: https://intend.do/room/lesswrong

I'm working full-time on solving human coordination at the mindset & trust level. You can maybe get a sense of my thinking there via this 10min video.

Comments

Great post. A few typos that weren't worth separate comments ("LMM" instead of "LLM"), but I felt it was worth noting that

In our ‘casual’ or ‘actually thinking’ mode

probably wants to be "causal".

I overall like what you're trying to point at here — you're raising a real and important concern about what's happening with the weakening of protection from random angry people in a wide range of places including tenure, due to cultural shifts and changes in media (eg social media).

At the same time, the Rainbowland example is a terrible example for making this point here. Or at least, for making it in the way you describe. As jaspax and ChristianKl note, "it's about accepting people" obscures the meaning of the song that actually got it banned, and the objection to that meaning is one many people agree with.

It's totally plausible to me, given what I've seen of people being afraid of children being exposed to trans ideology, that the school administrators themselves banned the song as part of doing their job to create a good learning environment for kids, no kowtowing to an angry complainer required. I agree that banning kids from singing the song is not useful and perhaps counterproductive, but if ordinary people getting upset about it seems absurd to you, then I suspect you're out of touch with what a substantial and growing fraction of people think, including many people "on the left" and some trans people: we need to keep trans ideology out of schools in order to keep kids safe & sane. Not because trans people aren't real or don't deserve respect, but because kids are getting memed not just into accepting people but into positions like "it's not cool to be straight", which is non-acceptance and a dumb reason for experimental medical treatments. From this perspective, Rainbowland looks like a song that's ostensibly about motivation & discipline but is subtextually about how cool it is to be anorexic.

But most people who think this are staying quiet because they don't want to attract the very attacks you're talking about here, from the small minority of hostile, vindictive people! I only recently got enough clarity on the subject, enough sense of its importance, and enough sense that I'm not alone, that I decided it was important to voice my relatively boring view that's somehow controversial.

I resonate a lot with this, and it makes me feel slightly less alone.

I've started making some videos where I rant about products that fail to achieve the main thing they're designed to do and get worse with successive iterations, and I've found a few appreciative commenters:

Rant successful, it made someone else feel like they weren't alone

And part of my experience of the importance of ranting about it, even if nobody appreciates it, is that it keeps me from forgetting my homeland, to use your metaphor.

My most recent published blog post had in the 2nd paragraph "I bet there’s nobody reading this who has never used a phrase like..." and this article made me think it would be kind to change it.

Then I searched your facebook posts and you have indeed used the phrase, so in this case at least you aren't nobody. But I'm still changing the post.

(The phrase is "part of me". If any of my friends were to somehow have never once used it, I wouldn't have been surprised to discover it was you.)

Right, yeah. And that (eventually) requires input of food into the person, but in principle they could be in a physically closed system that already has food & air in it... although that's sort of beside the point. And it isn't that different from someone meditating for a few hours between meals: the energy is already in the system for now, and it can use that to untangle adaptive entropy.

Huh, reading this I noticed that counterintuitively, alignment requires letting go of the outcome. Like, what defines a non-aligned AI (not an enemy-aligned one but one that doesn't align to any human value) is its tendency to keep forcing the thing it's forcing rather than returning to some deeper sense of what matters.

Humans do the same thing when they pursue a goal while having lost touch with what matters, and depending on how it shows up we call it "goodharting" or "lost purposes". The mere fact that we can identify the existence of goodharting and so on indicates that we have some ability to tell what's important to us, that's separate from whatever we're "optimizing" for. It seems to me like this is the "listening" you're talking about.

And so unalignment can refer both to a person who isn't listening to all parts of themselves, and to eg corporations that aren't listening to people who are trying to raise concerns about the ethics of the company's behavior.

The question of where an AI would get its true source of "what matters" from seems like a bit of a puzzle. One answer would be to have it "listen to the humans" but that seems to miss the part where the AI needs to itself be able to tell the difference between actually listening to the humans and goodharting on "listen to the humans".

Maybe instead of "shut up and do the impossible" we need "listen, and do the impossible" 😆

Sort of flips where the agency needs to point.

This "it gets worse if you try to deal with it" isn't necessarily true in every case. In this way adaptive entropy is actually unlike thermodynamic entropy: it's possible to reduce adaptive entropy within a closed system.

Actually answering whether this bolded part is true would require defining what "closed" means in the context of an adaptive system—it's clearly different from a closed system in the physical sense, since all adaptive systems have to be open in order to live.

This is great and I'm looking forward to your book.

Some adjacent ideas:

I feel like I've been appreciating the nature of wisdom (as you describe it here) more and more over the past couple of years. One thing this has led me to is looking at tautologies, where the sentence in some sense makes no claim but directs your attention to something that's self-evident once you look. For example, "the people you spend time with will end up being the people you've spent time with".

In 2017, I wrote an article about transcending regret, and a few years later I shared it with a friend and said:

at the time I wrote this, I hadn't gotten the insight as deep into my bones as I now have, & I still have much further to go

but the insight is still legit
& the articulation is good
& your integration will be yours anyway no matter how well I had it integrated when I wrote it

This feels like sort of a dual of the sazen, and also maybe relates to the comment Kaj made about experiences that are hard to point at verbally even once you have experienced them.

Huh—it suddenly struck me that Peter Singer is doing the exact same thing in the drowning child thought experiment, by the way, as Tyler Alterman points out beautifully in Effective altruism in the garden of ends. He takes for granted that the frame of "moral obligation" is relevant to why someone might save the child, then uses our intuitions towards saving the child to suggest that we agree with him about this obligation being present and relevant, then he uses logic to argue that this obligation applies elsewhere too. All of that is totally explicit and rational within that frame, but he chose the frame.

In both cases, everyone agrees about what actually happens (a child dies, or doesn't; you contribute, or you don't).

In both cases, everyone agrees because within the frame that has been presented there is no difference! Meanwhile there is a difference in many other useful frames! And this choice of frame is NOT, as far as I can recall, explicit. Rather than recall, let me actually just go check... watching this video, he doesn't use the phrase "moral obligation", but asks "[if I walked past,] would I have done something wrong?". This interactive version offers a forced choice "do you have a moral obligation to rescue the child?"

In both cases, the question assumes the frame, and is not explicit about the arbitrariness of doing so. So yes, he is explicit about setting the zero point, but focusing on that part of the move obscures the larger inexplicit move he's making beforehand.
