"Realityfluid" - terrible name by the way, let's call it "fundamentals"
I don't think that quite captures what I was pointing at. I'll buy there are better words for it, but I don't just mean "fundamentals". Or at least that phrasing feels meaningfully inaccurate to me.
I picked up the phrase "magical reality fluid" from a friend who was deep into mathematical physics. He used it the same way rationalists use (or at least used to use?) "magic": "By some magic cognitive process…." The idea being to name it in a silly & mysterious-sounding way to emphasize ...
Also, my being a cofounder of CFAR doesn't mean I'm immune to sufficiently complex basic confusions! This might be simple to clear up. But my mind is organized right now such that saying "map vs. territory" just moves the articulation around. It doesn't address the core issue whatsoever from what I can tell.
Yep. The trouble is that all maps are in the territory. Even "territory" in "map vs. territory" is actually a map embedded in… something. ("The referent of 'territory'", although saying it this way just recurses the problem. Like reference itself is a more fundamental reality than either maps or the referent of "territory".)
So solving this by clearing up the map/territory distinction is about creating a map within which you can have "map" separate from a "territory". The true territory (whatever that is) doesn't seem to me to make such a distinction.
The is...
I don't know if it does. It's not that kind of shift AFAICT. It strikes me as more like the shift from epicycles to heliocentrism. If I recall right, at the time the point wasn't that heliocentrism made better predictions. I think it might have made exactly the same predictions at first. The real impact was something more like the mythic reframe on humanity's role in the cosmos. It just turned out to generalize better too.
Post-reductionism (as I understand it) is an invitation to not be locked in the paradigm of reductionism. To view reductionism as a...
Post-reductionist: ...well, I don't know what to write here to pass the ITT.
I can't speak for the post-reductionist view in general. But I can name one angle:
Atoms aren't any more real than apples. What you're observing is that in theory the map using atoms can derive apples, but not the other way around. Which is to say, the world you build out of atoms (plus other stuff) is a strictly richer ontology — in theory.
But in a subtle way, even that claim about richer ontology is false.
In an important way, atoms are made of apples (plus other stuff). We use met...
In fact, as we get better sensors, the UFOs move out to the edge of our new sensor ranges.
That's actually just false, just FYI. By reports, fairly often they show up specifically as though they're trying to be seen.
There's also a whole set of incidents where UFOs showed up to fuck with nuclear machinery, demonstrating that (a) they knew exactly where the "hidden" bases were and (b) they could control the launch process better than the people at the control panels. Understandably, this isn't something that gets advertised very much and can be explain...
Assuming, just for the sake of argument, that these entities were "real". How could these events happen?
One possibility:
Suppose that our 3D-ish reality is actually a tiny part of something much, much larger.
And when I say "larger", I don't mean just "more dimensions" or "parallel universes". It's worth remembering that our impressions of space, time, object, etc. are basically bits of software interface that let us interact with… something… in ways that seem to be relevant to our survival. That doesn't mean they represent reality as it actually is, a...
In which case their behavior makes absolutely no sense to me. Either completely hiding themselves or a full outright reveal would make sense to me, but this weird "let humans have sneak peeks but never any actual proof" is just weird.
For whatever it's worth: Jacques Vallée highlighted how the baffling & seemingly nonsensical nature of these encounters is one of the few constants. One I recall (off the top of my head — I was told this one, I have no idea how to offer references here) was a report of some ship landing in a farmer's field and then pe...
Is anything uniformly praised in the rationalist community? IME having over half the community think something is between "awesome" and "probably correct" is about as uniform as it gets.
That… makes a lot of sense actually. A lot. PT Barnum style advertising. I had not considered that. Thank you.
What would be a form of right-messaging that would be less alienating to the public than left-messaging?
How about pride in America? An expression of the nobility of the country we built, our resilience, the Pax Americana, the fact that we ended WWII, etc.
It doesn't strike me as too strange or difficult to do this.
But that's after about 20 seconds of thought. I'm sure I'm missing something important here.
What tendencies specifically would you classify as "woke"? Having an intentionally diverse cast? Progressive messaging? Other things? And which of these tendencies do you think would alienate a significant portion of the consumer base, and why?
By "woke" I'm referring to a pretty specific memeplex. I don't know how to name memeplexes with precision, but I can gesture at some of its key features:
You're right, I could have been clearer about what structure was confusing me.
I keep encountering these detailed claims & explanations about how the movement toward "woke" (for lack of a better word — apparently the left has tagged what was once their word as now strongly right-coded) is having negative effects on viewership and profit. Not overwhelmingly like a lot of the right insists ("Get woke, go broke"), but still pretty significantly.
Like apparently in the Disney+ show where the Falcon became the new Captain America, there was a pretty dramatic ...
If someone feels resonance with what I'm pointing out but needs more, they're welcome to comment and/or PM me to ask for more.
Glad you liked it!
No, I hadn't encountered these folk. Thanks for the referral!
You might like Perri Chase's breakdown of what's wrong with modern business and how to do business differently. (That's a Facebook Live replay link.) That video was what gave me the missing piece of the puzzle to work out how to build actually effective training spaces.
(I then went on to take her courses in "Magic Led Business" — but (a) I don't advise most LWers to go that route and (b) I don't think a Beisutsu dojo needs to be a business to work really well.)
This strikes me as a core application of rationality. Learning to notice implicit "should"s and tabooing them. The example set is great.
Some of the richness is in the comments. Raemon's in particular highlights an element that strikes me as missing: The point is to notice the feeling of judging part of the territory as inherently good or bad, as opposed to recognizing the judgment as about your assessment of how you and/or others relate to the territory.
But it's an awful lot to ask of a rationality technique to cover all cases related to its domain.
If all ...
I just really like the clarity of this example. Noticing concrete lived experience at this level of detail. It highlights the feeling in my own experience and makes me more likely to notice it in real time when it's happening in my own life.
As a 2021 "best of" post, the call for people to share their experiences doesn't make as much sense, particularly if this post ends up included in book form. I'm not sure how that fits with the overall process though. I don't wish Anna hadn't asked for more examples!
I really, really liked this idea. In some sense it's just reframing the idea of trade-offs. But it's a really helpful (for me) reframe that makes it feel concrete and real to me.
I'd long been familiar with "the expert blind spot" — the issue where experts will forget what it's like to see like a non-expert and will try to teach from there. Like when aikido teachers would tell me to "just relax, act natural, and let the technique just happen on its own." That makes sense if you've been practicing that technique for a decade! But it's awful advice to give a ...
Partly I just want to signal-boost this kind of message.
But I also just really like the way this post covers the topic. I didn't have words for some of these effects before, like how your goals and strategies might change even if your values stay the same.
The whole post feels like a great invitation to the topic IMO.
I didn't reread it in detail just now. I might have more thoughts were I to do so. I just want this to have a shot at inclusion in final voting. Getting unconfused about self-love is, IMO, way more important than most models people discuss on this site.
I suppose, with one day left to review 2021 posts, I can add my 2¢ to my own here.
Overall I still like this post. I still think it points at true things and says them pretty well.
I had intended it as a kind of guide or instruction manual for anyone who felt inspired to create a truly potent rationality dojo. I'm a bit saddened that, to the best of my knowledge, no one seems to have taken what I named here and made it their own enough to build a Beisutsu dojo. I would really have liked to see that.
But this post wasn't meant to persuade anyone to do it. It w...
I'm just not familiar with snare traps. A quick search doesn't give me the sense that it's a better analogy than entropy or technical debt. But maybe I'm just not gleaning its nature.
In any case, not an intentional omission.
The thing that this post doesn't really do, which I do think is important, is actually work some (metaphorical) math on "does this actually add up to 'stop trying to directly accomplish things'?" in aggregate.
I like your inquiry.
A nitpick: I'm not saying to stop trying to directly accomplish things (in highly adaptive-entropic domains). I'm saying that trying to directly accomplish things instead of orienting to adaptive entropy is a fool's errand. It'll at best leave the net problem-ness unaffected.
I have very little idea how someone would orient to syste...
Just curious:
Do you mean "Do the impossible, which is to listen"?
Or "Do the impossible, and then listen"?
Or something else?
Ah. Yeah, I'd prefer people don't feel bad about any of this. My ideal would be that people receive all this as a purely pressure-free description of what simply is. That will result in some changes, but kind of like nudging a rock off a cliff results in it falling. Or maybe more like noticing a truck barreling down the road causes people to move off the road. There's truly no reason to feel defective or like a failure here even if one can't "move".
I'm about to give up on this branch of conversation. I'm having trouble parsing what you're saying. It's feeling weirdly abstract to me.
If you have an example of something humans actually do that is more of this "positive addiction" thing, in a way that isn't rooted in the "negative addiction" pattern I describe, I'm open to learning about that.
You gave a hypothetical example type. I noted that in practice when that actually happens it strikes me as always rooted in the "negative addiction" thing. So it doesn't (yet) work for me as an example.
If there's so...
Yep. These seem like true statements. I'm missing why you're saying them or how they're a response to the part you're quoting. Clarify?
I'm not interested in this branch of conversation. Just letting you know that I see this and am choosing not to continue the exchange.
I like this question.
I have an in-practice answer. I don't have a universal theoretical answer though. I'll offer what I see, but not to override your question. Just putting this forward for consideration.
In practice, every time I've identified a subagent that wants something "actually bad" for me, it's because of a kind of communication gap (which I'm partly maintaining with my judgment). It's not like the subagent has a terminal value that's intrinsically bad. It's more that the goal is the only way it can see to achieve something it cares about, but I c...
Well, I mean that there's something like a "more closed" to "more entangled with larger systems" spectrum for adaptive systems, and that untangling adaptive entropy seems to be possible along the whole spectrum in roughly the same way. Easier with high entanglement with low-entropy environments obviously! But if the entropy doesn't crush the system into a complete shutdown spiral, it seems to often be possible for said system to rearrange itself and end up net less entropic.
I don't know how that relates to things like thermodynamic energy, other than that all adaptive systems require it to function.
The main distinction I wanted to get across is that while many behaviors fall under the "addiction from" umbrella, there is a whole spectrum of how productive they are, both on their own terms and with respect to the original root cause.
Yep. I'm receiving that. Thank you. That update is still propagating and will do so for a while.
...I think, but am not sure, I understand what you mean by [let go of the outcome], and my interpretation is different from how the words are received by default. At least for me I cannot actually let go of the outcome
If you have a mental algorithm that seeks deeper until an instance of a pet idea is encountered and then stops, in an area where things are multifaceted and many-layered, that is going to favour finding the pet idea useful.
This lands for me like a fully general counterargument. If I'm just describing something real that's the underlying cause of a cluster of phenomena, of course I'm going to keep seeing it. Calling it a "pet idea" seems to devalue this possibility without orienting to it.
it feels like [the OP] implicitly and automatically rejects that something like a coffee habit can be the correct move even if you look several levels up.
Ah. Got it.
That's not what I mean whatsoever.
I don't think it's a mistake to incur adaptive entropy. When it happens, it's because that's literally the best move the system in question (person, culture, whatever) can make, given its constraints.
Like, incurring technical debt isn't a mistake. It's literally the best move available at the time given the constraints. There's no blame in my saying that whatso...
The text can be taken in a way where the need for coffee is because of an unreasonable demand or previous screwup.
Ah. To me that interpretation misses the core point, so it didn't cross my mind.
Judgments like "unreasonable" and "screwup" are coming from inside an adaptive-entropic system. That doesn't define how that kind of entropy works. The mechanism is just true. It's neutral, the way reality is neutral.
The need for coffee (in the example I gave) arises because of a tension between two adaptive systems: the one being identified with, and the one being im...
I found this super helpful. Thank you.
I think the reason the OP raised ADHD specifically in this context is that this habitualized conscious forcing/manipulation of our internal state (i.e. dopamine) is a crutch we can't afford to relinquish - without it we fall down, and we don't get back up.
Gotcha. I don't claim to fully understand — I have trouble imagining the experience you're describing from the inside — but this gives me a hint.
FWIW, I interpret this as "Oh, so this kind of ADHD is a condition where your adaptive capacity is too low to avoid incurring adaptive entropy from the culture."
...I think you're missing an important piece of the picture. One path (and the path most likely to succeed in my experience) out of these traps is to shimmy towards addictive avoidance behaviors which optimize you out of the hole in a roundabout way. E.g. addictively work out to avoid dealing with relationship issues => accidentally improve energy levels, confidence, and mood, creating slack to solve relationship issues. E.g. obsessively work on proving theorems to procrastinate on grant applications => accidentally solve famous problem that renders gra
Hmm. I guess I just disagree when I look at concrete cases. Inspired from them, I zoom in on this spot in your hypothetical example:
a high schooler dropping out in order to "become a pro" on a recent new video game thinks he is improving his life but could be starting a tailspin. Doing 0 friendship upkeep towards anybody else while pursuing an infatuation has its downsides.
My attention immediately goes to: Why the infatuation? Why does this seem more important to him than friendship upkeep? What's driving that?
If it's a calculated move… well, first off,...
I agree. I was being fast and loose there. But I think it's possible for, say, someone to sit in meditation and undo a bunch of entropic physical tension without just moving the problem-ness around.
I like this.
I know you know the following, but sharing for the sake of the public conversation here:
I wrote an essay about this several years ago, but aimed mostly at a yoga community. "The coming age of prayer". It's not quite the same thing but it's awfully close.
I guess I kind of disagree with the "do the impossible" part too! It's more like "Listen, and do the emergently obvious."
It seems you’re saying “everything is psychology; nothing is neurology”
I like the rest of your example, but this line confuses me. I don't think I'm saying this, I don't agree with the statement even if I somehow said it, and either way I don't see how it connects to what you're saying about ADHD.
...…ADHD exists, and for someone with it to a significant degree, there is a real lack of slack (e.g. inability to engage in long-term preparations that require consistent boring effort, brought about by chronically low dopamine), and coffee (or other stimulant
In the condition of "engagement makes it worse", lurking is seriously potent. The outcome of "doesn't do anything to the problem" is a massive win of keeping it level instead of spiraling further.
Agreed.
I can see a minor reason why letting go of the problemness is not trivial. You have to consider or be new things, so your sense of identity can become undermined. At least in the suffering loop you know it and are comfortable suffering that way.
Yep. Exactly.
...Instead of negative avoidance, positive attraction addiction is quite a big cluster. If you ha
If this wasn't a useful distinction for you, then why comment on it? To tell me not to have made it at all?
I'm not available for critiques of how I've said what I've said here.
You're welcome to translate it into your preferred frame. I might even like that, and might learn from it.
But I'm not going to engage with challenges to how I speak.
Well, I found her request surprising. I was kind of stunned. After a moment I kind of fumbled out words like "Uh, I'm not sure how to do that. I'll… try?" But that was well outside the purview of dream powers I was used to.
I've done my best by remembering this story. One day I hope to get deep enough into lucid dreaming skill again that I can resurrect her.
And yeah, I remember roughly what she looked like and how she felt. I don't think she was high on details. But if I went back to that apartment with intent to encounter her, I'm sure the dreaming would r...
I don't really have an answer per se. Just a related story:
In a lucid dream many years ago, I was having trouble sort of clicking into my dream powers (flight, making objects levitate, etc.). It occurred to me that I wasn't conscious of creating the young woman who was standing next to me, which meant she had access to parts of my mind that I didn't.
So I turned to her and asked
"I'm having trouble getting my dream powers to work. Could you help me?"
She gave me some instructions (which I no longer remember) and walked into the next room while I tried to foll...
I like the reframe. The part I best like is the removal of the illusion of certainty.
I don't know if describing proofs as just evidence really captures it though. In many cases the point of a proof isn't just to know that the statement is one you can rely on. It's also to show you why you can rely on it. The process of understanding a proof can teach you something about how math works. The effect is stronger if you produce the proof yourself.
Yep. I don't like your proposed test (what's going to define "progress"?), but yes.
My main purpose for this post wasn't to make amazing AI safety researchers though. It was to offer people who want out of the inner doomsday trap a way of exiting. That part is a little more tricky to test. But if someone wants to test it and wants to put in the effort of designing such a test, I think it's probably doable.
I like the spirit with which you're meeting me here.
In all honesty I'm probably not going to respond in detail. That's just a matter of respecting my time & energy.
But thank you for this. This feels quite good to me. And I'm grateful for you meeting me this way.
RE "no command validity": Basically just… yes? I totally agree with where I think you're pointing there as a guideline. I'm gonna post something soon that'll add the detail that I'd otherwise add here. (Not in response to you. It just so happens to be related and relevant.)
Yeah, this was a meaningful update for me in this area. The importance of concrete examples in areas where the info commons have become a memetic battleground. It seems kind of obvious once said but I'd never thought about it this way before. I just wanted to point and ask "Why does that thing move the way it's moving?"
…especially since you've used the word "woke" which is in fact generally a pejorative in current usage as far as I'm aware.
I think I said this somewhere else, but you might find it a helpful aside anyway:
I originally learned the term "...
Ah, cool, this sounds like maybe the right kind of thing. Your step 4 particularly jumps out at me: it highlights the self-reference in the answer, which makes it sound plausible as a path to an answer.
Thank you!