My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms

by Kaj_Sotala · 17 min read · 8th Mar 2018 · 131 comments


Tags: Meditation, Alief, Rationality

Epistemic status: pretty confident. Based on several years of meditation experience combined with various pieces of Buddhist theory as popularized in various sources, including but not limited to books like The Mind Illuminated, Mastering the Core Teachings of the Buddha, and The Seeing That Frees; also discussions with other people who have practiced meditation, and scatterings of cognitive psychology papers that relate to the topic. The part that I’m the least confident of is the long-term nature of enlightenment; I’m speculating on what comes next based on what I’ve experienced, but have not actually had a full enlightenment. I also suspect that different kinds of traditions and practices may produce different kinds of enlightenment states.

While I liked Valentine’s recent post on kensho and its follow-ups a lot, one thing that annoyed me was the comments claiming that the whole thing can’t be explained from a reductionist, third-person perspective. I agree that such an explanation can’t produce the necessary mental changes that the explanation is talking about. But it seemed wrong to me to claim that all of this would be somehow intrinsically mysterious and impossible to explain on a level that would give people at least an intellectual understanding of what Looking and enlightenment and all that are. Especially not after I spoke to Val and realized that hey, I actually do know how to Look, and that thing he’s calling kensho, that’s happened to me too.

(Note however that kensho is a Zen term and I'm unfamiliar with Zen; I don't want to use a term which might imply that I was going with whatever theoretical assumptions Zen might have, so I will just talk about “my experience” when it comes up.)

So here is my attempt to give an explanation. I don’t know if I’ve succeeded, but here goes anyway.

----

One of my key concepts is going to be cognitive fusion.

Cognitive fusion is a term from Acceptance and Commitment Therapy (ACT), which refers to a person “fusing together” with the content of a thought or emotion, so that the content is experienced as an objective fact about the world rather than as a mental construct. The most obvious example of this might be if you get really upset with someone else and become convinced that something was all their fault (even if you had actually done something blameworthy too).

In this example, your anger isn’t letting you see clearly, and you can’t step back from your anger to question it, because you have become “fused together” with it and experience everything in terms of the anger’s internal logic.

Another emotional example might be feelings of shame, where it’s easy to experience yourself as a horrible person and feel that this is the literal truth, rather than being just an emotional interpretation.

Cognitive fusion isn’t necessarily a bad thing. If you suddenly notice a car driving towards you at a high speed, you don’t want to get stuck pondering about how the feeling of danger is actually a mental construct produced by your brain. You want to get out of the way as fast as possible, with minimal mental clutter interfering with your actions. Likewise, if you are doing programming or math, you want to become at least partially fused together with your understanding of the domain, taking its axioms as objective facts so that you can focus on figuring out how to work with those axioms and get your desired results.

On the other hand, even when doing math, it can sometimes be useful to question the axioms you’re using. In programming, taking the guarantees of your abstractions as literal axioms can also lead to trouble. And while it is useful to perceive a genuine threat as objectively life-threatening and out to get you, that perception is going to get you in a lot of trouble if it’s actually false. Such as if you get into a fight with your romantic partner and assume that they actively want to hurt you, when they’re just feeling hurt over something that you said.

Cognitive fusion trades flexibility for focus. You will be strongly driven and capable of focusing on just the thing that’s in your mind, at the cost of being less likely to notice when that thing is actually wrong.

Some simple defusion techniques suggested by ACT include things like noticing when you’re thinking something bad about yourself, and prefacing it with “I’m having the thought that”. So if you find yourself thinking “I am a terrible person”, you can change that into “I’m having the thought that I am a terrible person”. Or you can repeat the word “terrible” a hundred times, until it stops having any meaning. Or you can see if you can manipulate the way that the thought sounds in your head, such as turning it into a comical whine that sounds like it’s from a cartoon, until you can no longer take it seriously. (Eliezer’s cognitive trope therapy should also be considered as a cognitive defusion technique.) In one way or the other, all of these highlight the fact that the thought or emotion is just a mental construct, making it easier to question its truthfulness.
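The prefixing technique is mechanical enough to sketch literally (the function name is my own invention, not ACT terminology): the transformation simply restates a thought as a report about having the thought.

```python
# A toy sketch of the ACT "I'm having the thought that..." technique:
# restating a thought as a report about the thought highlights that it
# is a mental construct rather than an objective fact.

def defuse(thought: str) -> str:
    """Wrap a fused thought in an observation about having it."""
    return "I'm having the thought that " + thought

print(defuse("I am a terrible person"))
# I'm having the thought that I am a terrible person
```

The point of the restatement, of course, is not the string itself but the stance-shift it invites: the thought becomes an object you observe rather than a fact you inhabit.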

However, managing to defuse from a thought that is actively bothering you is a relatively superficial level of defusion. We must go deeper.

Meditation as cognitive defusion practice

While there are many different forms of meditation, many of them could be reasonably characterized as practicing the skill of intentional cognitive defusion.

One of the most basic forms of meditation is to just concentrate on your breath - or on any other focus that you have happened to choose. Soon, a distraction will come up in your mind - something that says that there’s a more important thing to do, or that you are bored, or that this isn’t leading anywhere.

If you start engaging with the content of that distraction, you’re already failing to keep your focus. That is, if a thought comes to you saying that there’s a more important thing to do, and you start arguing with yourself and trying to make a logical case for why meditation is actually the most important thing, then you’ve already been distracted from whatever it was that you were supposed to be focusing on. On some level, you have bought into the internal logic of the distraction, and into the belief that the argument must be beaten on its own terms.

What you must do instead is to disregard the content of the distraction. Instead of becoming fused with its contents, defuse and redirect your attention back towards your focus. Whenever a new distraction arises, do this again.

As your skill improves and your attention becomes more reliably anchored on the focus, you can start learning additional skills. If you are doing something like the meditation program outlined in e.g. The Mind Illuminated, one of the next steps is to develop an awareness of distractions that are just on the edge of your consciousness, which are not yet distracting you but are going to steal your attention any moment now. By cultivating a sensitivity to those subtle movements of your mind, you are increasing your ability to notice lower-level details of what’s going on in your consciousness, in a way which helps with cognitive defusion by making you more aware of the ways in which your experience is constructed.

As an example of such increased sensitivity, some time back I was doing concentration meditation, using an app which plays the sound of something hitting a woodblock, 50 times per minute. As I was concentrating on listening to the sound, I noticed that what had originally been just one thing in my experience - a discrete sound event - was actually composed of many smaller parts. The beginning and end of the sound were different, so there were actually two sound sensations; and there was a subtle visualization of something hitting something else; and a sense of motion accompanying that visualization. I had not previously even been fully aware that my mind was automatically creating a mental image of what it thought that the sound represented.

Continuing to observe those different components, I became more aware of the fact that my visualization of the sound changed over time and between meditation sessions, in a rather arbitrary way. Sometimes my mind conjured up a vision of a hammer hitting a rock in a dwarven mine; sometimes it was two wooden sticks hitting each other; sometimes it was drops of water falling on the screen of my phone.

By itself, this would mostly just be a curiosity. However, developing the kind of mental precision that actually lets you separate your experience into these kinds of small subcomponents, seems like a prerequisite for slicing your various mental outputs in a way which lets you see what they’re made of.

Last summer, I noticed myself having the thought that I couldn’t be happy, which made me feel bad. And then I noticed that associated with that thought, was a mental image of what a happy person was like - that image was of a young, cheerful, outgoing and extraverted girl.

In other words, my prototypical concept of a happy person included not just happiness, but extraversion and high energy as well. And so my mind was comparing my self-concepts with this concept of happiness, noticing that I wasn’t that kind of a person, and so concluding that I couldn’t be happy. Realizing that my concept of a “happy person” was uselessly narrow allowed me to fix the problem.

But if we break down what happened with the dysfunctional “happiness concept” into slightly smaller steps, something like this seems to have happened:

1) me feeling unhappy -> 2) mental image of a happy person -> 3) thought that I can’t be happy

Notice that this has a similarity with the way my mind automatically produced a visualization for the woodblock sound:

1) sensation of the woodblock sound -> 2) mental image of two woodblocks hitting each other -> 3) thought of “oh, it’s two woodblocks hitting each other”

In both cases, some stimulus seemed to have produced a subtle mental image as a preliminary interpretation of what the stimulus meant, which then translated into a higher-level abstract concept. In both cases, something was off about the middle step. In the case of the happiness example, I had a too narrow view of what happy people are like. With the sound, the problem was that my mind was making up various interpretations of what was making the sound, despite having too little data to actually determine what it was.
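The shared shape of the two chains can be sketched as a toy pipeline (the stage names and the "interpretation table" are purely illustrative, not a claim about any real cognitive model): a raw stimulus passes through an automatic interpretation step, and the final thought inherits whatever distortions that hidden middle step contains.

```python
# Toy model of the stimulus -> mental image -> thought pipeline.
# The interpretation table stands in for the automatic, mostly invisible
# middle step; editing an entry there changes the conscious conclusion,
# which is the kind of intervention that noticing the middle step enables.

def process(stimulus: str, interpret: dict) -> str:
    mental_image = interpret[stimulus]      # preconscious interpretation
    return "thought: " + mental_image       # what reaches awareness

# A too-narrow interpretation yields a mistaken conclusion...
narrow = {"feeling unhappy": "I'm no cheerful extravert, so I can't be happy"}
print(process("feeling unhappy", narrow))

# ...while a corrected middle step yields a different conclusion
# from the very same stimulus.
broad = {"feeling unhappy": "happy people come in many temperaments"}
print(process("feeling unhappy", broad))
```

The sketch makes one thing vivid: nothing about the stimulus forces the conclusion; the work is all done by the middle step, which is exactly the step that usually escapes notice.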

Having developed the ability to notice those earlier steps in my mental processes allowed me to notice a potential problem, as opposed to only being aware of the final output of the process.

I believe that this kind of thing is what Valentine means when he talks about Looking: being able to develop the necessary mental sharpness to notice slightly lower-level processing stages in your cognitive processes, and study the raw concepts which then get turned into higher-level cognitive content, rather than only seeing the high-level cognitive content.

This seems like a core rationality skill, since seeing slightly earlier stages of your cognitive process helps question its validity, which is to say it makes it easier for you to engage in cognitive defusion when desired. (If the process seems valid, you can still choose to fuse with it if that provides a benefit.) And being able to apply selective cognitive defusion means being able to not believe everything that you think, which is an essential requirement for things like actually changing your mind.

Understanding suffering

Understanding suffering is a special case of Looking, but a sufficiently important one that it deserves to be discussed in some detail.

Usually, most of us are - on some implicit level - operating off a belief that we need to experience pleasant feelings and need to avoid experiencing unpleasant feelings. In a sense, thinking about getting into an unpleasant or painful situation may feel almost like death: if we think that the experience would be unpleasant enough, then no matter how brief it might be, we might do almost anything to avoid ending up there.

There’s a sense in which this is absurd. After all, a moment of discomfort is just that - a moment of discomfort. By itself, it won’t do us any lasting damage, and trying to avoid it can produce worse results even on its own terms.

For instance, consider the person who keeps putting off making a doctor’s appointment because they suspect that there’s something wrong with them. If there really is something seriously wrong, then the best thing would be to get a diagnosis as fast as possible. And even if it is something harmless, it would still be better to find out about that earlier rather than later, so as to stop feeling nervous about it. Not going to the doctor, and continuing to feel nervous about it, is about the worst possible outcome - even if all you cared about was avoiding discomfort.

On a conscious level, we realize that this kind of behavior is absurd. Then we go on doing it.

You might say that it’s because there’s a part of us that remains cognitively fused with the alief that all painful experiences need to be avoided, and that there’s something vaguely death-like about them.

Typically, if we are only talking about relatively mild discomfort, then that alief doesn’t manifest itself very strongly. We are okay with the thought of facing mild discomfort. But just as it’s easy to remain calm and defused from feelings of anger as long as there isn’t anything strongly upsetting going on, on some level we will tend to experience cognitive fusion with the “pain is death” alief more and more strongly the worse we expect the pain to be.

The general way by which incorrect aliefs are changed is by giving the parts of your brain that hold them experiences of what the world is really like. If you have a dog phobia, you might do desensitization therapy, gradually exposing yourself to dogs in controlled circumstances. Eventually, seeing that you have encountered dogs many times and that it’s safe, your brain updates and ceases to have the phobia.

Similarly, if you Look at the process of yourself flinching away from thoughts of painful experiences, you will come to directly experience the fact that it’s the flinching away from them that actually produces suffering, and that the thoughts would be harmless by themselves.

The dog doesn’t hurt you: it’s your own fear that hurts you. Similarly, pain isn’t bad by itself, but turns into suffering when we come to believe that we need to avoid it. Seeing this, the parts of your mind that have been doing the flinching away, will gradually start updating towards not habitually flinching away.

When I say that it is the automatic flinching away that actually produces suffering, I don’t mean that just in the sense of “putting off painful experiences causes us to experience more pain in the long run”. I mean that the processes involved with the flinching away are literally what turns pain into suffering: if you can get the flinching away to stop, pain (whether physical or emotional) will still be present as an attention signal that flags important things into your awareness. But neither the experience of pain, nor the thought of experiencing pain in the future, will be experienced as aversive anymore. The alief / belief of “pain is death” will not be active.

Now, Looking at your process-of-flinching-away in order to stop flinching away, is a long and slow process. We can again compare it with getting desensitized to a phobia: even after you have learned to be okay with a mild phobia trigger (say, a toy dog in the same room with you), you will continue to be freaked out by worse versions of the trigger (such as a real dog). It’s very possible to have setbacks if a dog attacks you or if your life just generally gets more stressful, and sometimes you might show up at a session and get freaked out by things you thought you were already desensitized to. Learning to Look at suffering in order to reduce it is similar.

So what’s all this “look up” and “get out of the car” stuff?

Here’s an analogy.

Suppose that one day, you happen to run into a complete stranger. You don’t think very much about needing to impress them, and as a result, you come off as relaxed and charming.

The next day, you’re going on a date with someone you’re really strongly attracted to. You feel that it’s really really important for you to make a good impression, and because you keep obsessing about this thought, you can’t relax, act normal, and actually make a good impression.

Suppose that you remember all that stuff about cognitive fusion. You might (correctly) think that if you managed to defuse from the thought of this being an important encounter, then all of this would be less stressful and you might actually make a good impression.

But this brings up a particular difficulty: it can be relatively easy to defuse from a thought that you on some level believe is, or at least may be, false. But it’s a lot harder to defuse from a thought which you believe on a deep level to actually be true, but which it’s just counterproductive to think about.

After all, if you really are strongly interested in this person, but might not have an opportunity to meet with them again if you make a bad impression... then it is important for you to make a good impression on them now. Defusing from the thought of this being important, would mean that you believed less in this being important, meaning that you might do something that actually left a bad impression on them!

You can’t defuse from the content of a belief, if your motivation for wanting to defuse from it is the belief itself. In trying to reject the belief that making a good impression is important, and trying to do this with the motive of making a good impression, you just reinforce the belief that this is important. If you want to actually defuse from the belief, your motive for doing so has to come from somewhere else than the belief itself.

The general form of this thing is what makes big green bats complain that you’re still not getting out of the car. Or people who are aware of their cell phones, that you’re still not looking up. You are fused with some belief or conceptual system while trying to use that very same belief or conceptual system to defuse yourself from it, which keeps you trapped in it. Instead, you could just stop using it, and then you’d be free.

Of course, this is easier said than done. Even if you know that this is what you’re doing, knowing it isn’t enough to stop doing it. Essentially, you have to somehow distract yourself from the belief you’re caught up with… but if your belief is that this thing is really important, then before you could distract yourself from it, you’d need to distract yourself from it, so as to stop worrying about the potential consequences of having distracted yourself from it.

Yeah.

All of this particularly applies to trying to overcome suffering. Because remember, suffering is caused by a belief that pain is intrinsically bad. That belief is what causes you to try to flinch away from pain in a way which, by itself, creates the suffering.

So if you are experiencing some really powerful emotion that’s causing you a lot of suffering, making you want to defuse from it so that you could stop feeling those bad things?

Well, then you are trying to be okay with feeling bad things, so that you could stop feeling bad things. Again, your motive for wanting to defuse from a belief, is digging you deeper into the belief.

On the surface, this would seem to suggest that you can only use Looking to stop suffering in cases of relatively mild pain, where you don’t really even care all that much about whether you’re in pain or not. Looking would only help you feel better in the cases when you’d need it the least anyway.

And to be honest, a lot of the time it does feel that way.

Fortunately, there is a solution.

The three marks

I previously mentioned that there’s something absurd about the belief that pain would need to be avoided: after all, if something really painful happens, then that won’t kill us: usually it only means that, well, something really painful has happened. We might be left traumatized, but that trauma is by itself also just more pain.

It’s as if a deep part of our minds is deluded about just how world-ending the pain is in the first place.

Buddhist theory states that that delusion arises from deep parts of our minds being wrong about some fundamental aspects of existence, traditionally called the three marks: impermanence, unsatisfactoriness, and no-self. If we can make ourselves curious about the true nature of existence, and Look deeply enough into just how our mind works, we can eventually witness things about how our mind works which contradict those delusions.

Do that often and deep enough, and the delusions shatter.

This allows us to actually overcome suffering, because in order to explore the nature of the self, we do not need to always be motivated by a desire to make the suffering stop. Rather, we can be motivated by things like curiosity or a desire to help other people, and explore the workings of our mind during times when we are not in terrible pain.

At some point, this will happen on a sufficiently deep level that a person becomes convinced that full enlightenment is possible. Typically, the first time will be enough to let them get a taste of what it’s like to live without delusions; but their insights are not yet deep enough to cause a permanent change, and the delusions will soon regenerate themselves.

Still, the delusions will not regenerate entirely: something will have shifted permanently, in a way that makes it easier to make further progress on dissolving them.

While it is impossible to use words to convey the experience of getting insight into the three marks of existence, it is possible to offer a third-person perspective on what exactly it is that our minds are mistaken about. Of the three marks, no-self may be the easiest to explain in these terms.

In the book The Mind Illuminated, the Buddhist model of psychology is described as one where our minds are composed of a large number of subagents, which share information by sending various percepts into consciousness. There's one particular subagent, the 'narrating mind', which takes these percepts and binds them together by generating a story of there being one single agent, an I, to which everything happens. The fundamental delusion is when this fictional construct of an I is mistaken for an actually-existing entity, which needs to be protected by acquiring percepts with a positive emotional tone and avoiding percepts with a negative one.

When a person becomes capable of observing in sufficient detail the mental process by which this sense of an I is constructed, the delusion of its independent existence is broken. Afterwards, while the mind will continue to use the concept "I" as an organizing principle, it becomes correctly experienced as a theoretical fiction rather than something that could be harmed or helped by the experience of “bad” or “good” emotions. As a result, desire and aversion towards having specific states of mind (and thus suffering) cease. We cease to flinch away from pain, seeing that we do not need to avoid it in order to protect the “I”.
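To make the third-person description concrete, here is a minimal toy sketch of that subagent model (the subagent names and percepts are invented for illustration, and the sketch is a cartoon, not a claim about neuroscience): no component called "I" exists anywhere in the system, yet the narrator's output is a first-person story.

```python
# Toy sketch of the subagent model described above: independent subagents
# post percepts into a shared buffer, and the narrating mind binds them
# afterwards into a story about a single "I". The "I" is an output of the
# narration, not an entity anywhere in the system.

consciousness = []  # shared buffer of percepts

def emit(subagent: str, percept: str) -> None:
    """A subagent sends a percept into consciousness."""
    consciousness.append((subagent, percept))

def narrating_mind() -> str:
    """Bind the separate percepts into one first-person narrative."""
    return ", and ".join("I " + percept for _, percept in consciousness)

emit("vision", "see a dog")
emit("threat-detector", "feel a flash of fear")
emit("motor-planner", "step back")

print(narrating_mind())
# I see a dog, and I feel a flash of fear, and I step back
```

The design point the sketch is trying to carry: the first-person story is generated after the fact from events that have many separate sources, which is why mistaking the story's protagonist for a pre-existing entity counts as a delusion in this model.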

On why enlightenment may not be very visible in one’s behavior

In the comments of the kensho post, cousin_it mentioned having read several reports of people claiming enlightenment… yet not seeming to really demonstrate it by having better emotional skills. A paper also reported on various people having achieved some kinds of advanced meditative states… but still not being all that different when viewed from the outside:

There seemed to be a clear distinction between a PNSE participant’s personality and their underlying sense of having an individualized sense of self. When the latter is absent, the former seems to be able to continue to function relatively unabated. There are exceptions. For example, the change in well-being in participants who were depressed prior to the onset of PNSE was obviously spotted by those around them. Generally, however, the external changes were not significant enough to be detected, even by those closest to the participant.

Based on how I experienced things when I had the experience that made enlightenment seem within reach, something like a lack of noticeable change is in fact exactly what I would expect from many people who become enlightened.

Remember, enlightenment means that you no longer experience emotional pain as aversive. In other words, you continue to have “negative” emotions like fear, anger, jealousy, and so on - you just don’t mind having them.

This does end up changing some of your emotional landscape. My experience was that since feeling crappy felt like an okay thing to happen, the thought of having negative experiences in the future no longer stressed me out. This brought with it a sense of calm, since I knew that I was in some sense "invulnerable" to anything that might happen. But the state of calmness was more of a result of everything being okay - a consequence of there no longer being anything that would be a genuine threat - rather than a permanent emotional state.

That emotion of calm could still be momentarily replaced by other emotional states as normal, it was just that one particular source of negative feelings (the fear of future negative feelings) was eliminated. I would still feel sadness about the things I normally feel sad about, anger about the things I normally feel angry about, and so on. And because those emotions no longer felt aversive, I didn’t have a reason to invest in not feeling those things - unless I had some other reason than the intrinsic aversiveness of an emotion to do so.

My model here is that enlightenment doesn’t automatically make you a good person, nor particularly emotionally balanced, or anything like that. If you were a jealous wreck before, but felt like it was totally justified and right for you to behave jealously… then seeing through the illusion of the self isn’t going to clear those cognitive structures from your head. It can help you defuse from them enough to see that your justifications are essentially arbitrary - but at the same time, you may also have defused from any cognitive structures that say that there’s something bad about having essentially arbitrary justifications.

To put it differently: one way of describing my experience was that it felt like an extreme moment of cognitive defusion, where I defused from my entire motivational system, and could just watch its operation from the outside.

But the thing is, if you truly step outside your entire motivational system, then that leaves the part that just stepped out with no motivational system, leaving the existing one operating as normal.

Suppose that you are thinking something like, “aha! stepping outside my whole motivational system means that I’m finally free to do thing X, which stupid internal conflicts have been blocking me from doing so far!”

But if you are thinking that, then you are still working inside a motivational system where it’s important to achieve X. (Still not stepping out of the car.) If you have truly defused from your motivational system, then you have no particular desire to change the things in your mind that influence whether you are going to achieve X or not.

Even if you manage to step outside the system, the system is still going to keep doing various things - like taking your body to the store to get food - that it has learned to do: being defused from a motivation doesn’t mean that the motivation would necessarily disappear or stop influencing your behavior. It just means that you can examine its validity as it goes on.

And if you see yourself going to the store to get some food, well, why not go along with that? After all, to stop acting as you always have, would require some special motivation to do so. All of your motivations exist within the system. If you previously had a motivation to change something about your own behavior, but also had underlying psychological reasons why you hadn’t changed your behavior yet, then enlightenment may leave that balance of competing motivations basically unaltered. You may still have mental processes struggling against each other and you may experience internal conflict as normal: the only difference is that you won’t suffer from that internal conflict.

Does this contradict the people who say that meditation will make you actively happy?

No: it only means that Looking at the nature of suffering might not make you actively happy (in the sense of experiencing lots of positive emotions). Remember that there are many things that you can Look at: meditation is essentially focusing your attention on something, and what you focus on makes a major difference.

I think in terms of meditative practices that work within an existing system (of pleasure and pain), versus ones that try to move you outside the system entirely. Some traditions focus on working inside the system, and may involve things like conditioning your mind for constant pleasure. Some systems combine the two, involving both practices which increase the amount of pleasure you’ll experience, while also helping you be okay even with experiencing less pleasure. The Mind Illuminated takes this approach, for example.

And if enlightenment leaves your existing personality mostly intact, does it mean that Looking and meditation are useless for improving your rationality after all?

No: it only means that Looking at the things which cause suffering will not change your behavior as much as you might expect. There are many different things about the functioning of your mind that you can Look at, and getting to the point where you're enlightened requires training up a lot of mental precision, which you can then use to Look at various things.

Even if you do manage to defuse from everything that causes you suffering, your existing personality and motivational system will still be in charge of what it is that you Look at in the future. If all you cared about was ceasing to suffer, well, you’re done! You might not have the motivation to do any more Looking on top of that, since it already got you what you wanted. You’ll just go on living as normal, with your existing personality.

But if you cared about things like saving the world, then you will still continue to work on saving the world, and you will be Looking at things which will help you save the world - including ones that increase your rationality.

It’s just that if the world ends up ending, it won’t feel like the end of the world.

Of course, you will still feel intense grief and disappointment and everything that you’d expect to feel about the world ending.

Intense grief and disappointment just won’t be the end of the world.

[Edited to add: for my more detailed, later explanation of this topic, see the series of posts starting from A non-mystical explanation of insight meditation and the three characteristics of existence.]


This does not work. (At least, it didn't work for me, and I doubt it works for the average person.) The "no-self" thing was still getting interpreted in terms of my existing ontology, rather than the ontology updating. What I ended up with was some kind of a notion, temporarily and imperfectly believed on an emotional level, that every second of existence involved me dying and a new entity being created, and that every consciousness-moment would be my last.

That was not a healthy state of mind to be in; fortunately, my normal thinking patterns pretty quickly overrod... (read more)

The "no-self" thing was still getting interpreted in terms of my existing ontology, rather than the ontology updating.

This.

I'll finish reading the other comments and then, time permitting, I'll add my own.

I'll just note for now that there's a kind of "being clear" that I think is dangerous for rationality, in a way analogous to what you describe here about no-self. The sketch is something like: if an epistemology is built on top of an ontology, then that epistemology is going to have a hard time with a wide swath of ontological updates. Getting around this seems to require Looking at one's ontologies and somehow integrating Looking into one's epistemology. Being required to explain that in terms of a very specific ontology seems to give an illusion of understanding that often becomes sticky.

but before we get started on that, can we quickly step out of mythic mode/metaphor land/narrative thinking for a moment, just to make sure that we are all still on the same page as far as basic ontology goes, and agree that, for instance, physics and mathematics and logic are still true?"

Now it seems to me like there was some straight up miscommunication in that thread. My recollection is that everywhere I saw this question explicitly asked, it was explicitly answered "yes" (e.g. JenniferRM asked it at some point). I don't remember Said asking this question.

I like this. The terminology I was exposed to for what you're calling cognitive fusion is being "subject to" something (I think it comes from Kegan but I learned it from Pete Michaud), and defusion is taking the thing "as object." (Actually these might not quite line up; if someone who's familiar with both terms can explain any possible differences to me I'd appreciate it.) And the practice I've been using for getting experience with this is circling.

Example: I spent most of the last year being subject to aliefs along the lines of "if X happens then that means I'm bad, which means that nobody will ever love me," which constantly surfaced in most of the circles I was in. The point of working with this alief in a circle was 1) to get exposed to situations that might trigger the alief, 2) to notice when other people started being confused about what I was saying because it no longer made sense to them, and 3) possibly to get actual experiences that contradict the alief (people still loving me even though X had happened). I was gradually able to take this alief as object but it took a while; it had a very, very strong grip on me. T... (read more)

Reading this comment made me feel really happy for you.

From what you've been writing here and on Facebook, I feel like I can relate to a lot of the stuff you've been going through and fixing. I'm glad that you're getting through it.

Thanks. I've really appreciated your writing about what you've gone through as well, especially the core transformation stuff and the self-concept stuff above.

I like this. I largely agree.

I'd like to pinpoint a few differences I notice. I hope the collective here takes this as me coming from a spirit of "Here's the delta I see" rather than "I disagree and here's why." By and large I really like the clarity Kaj has brought to this.

First, a meta thing:

While I liked Valentine’s recent post on kensho and its follow-ups a lot, one thing that I was annoyed by were the comments that the whole thing can’t be explained from a reductionist, third-person perspective.

I didn't mean to convey that it can't be explained this way. I now think I was combining a few different things in a way that accidentally made it hard to understand:

  • One key thing I now see is that Looking doesn't require self-reference — but most of the interesting applications of Looking that I'm aware of do require navigating self-reference. An example of this is the "get out of the car" problem. (I'll have more to say about Kaj's interpretation of that in a bit.)
  • The main thrust of what I'm poking at is a collection of results of Looking at ontology (whereas here Kaj focuses mostly on Looking at suffering
... (read more)

On the inside, before you Look, the thing you’re about to Look at doesn’t look on the inside like “high-level cognitive content”. It looks like how things are. This ends up with me saying things that sound kind of crazy or nonsensical, but to me are obvious once I Look at them. (E.g., there are no objects. We create objects in order to think. Because language is suffused with object-ness, though, I don’t know of any coherent way of talking about this.)

This sounds very familiar. To quote from How An Algorithm Feels From Inside:

Because we don't instinctively see our intuitions as "intuitions", we just see them as the world. When you look at a green cup, you don't think of yourself as seeing a picture reconstructed in your visual cortex—although that is what you are seeing—you just see a green cup. You think, "Why, look, this cup is green," not, "The picture in my visual cortex of this cup is green."

And in the same way, when people argue over whether the falling tree makes a sound, or whether Pluto is a planet, they don't see themselves as arguing over whether a categorization should be active in their neural networks. It seems like either the tre

... (read more)
It sounds like Looking is a skill that lets someone have more introspective access to their own neural network structures. If this is a correct understanding, it seems perfectly compatible with LW's current approach to ontology, or at least the approach laid out in Eliezer's Sequences (with one caveat being that I think we should be careful/skeptical about whether someone purporting to be Looking is really introspecting parts of their neural network structures, or merely doing some form of epistemic wireheading). Do you agree?

Hmm. I need to answer this in two pieces simultaneously:

  • The short and slightly deceptive answer is "Yes I agree." A more careful answer: From within LW's current approach to ontology, the restriction of Looking to that ontology works perfectly well, although there are some things (like what Eric S. Raymond refers to in Dancing With the Gods) that will at best make sense while remaining largely inaccessible.
  • Your very first sentence here presupposes the standard LW ontology: "It sounds like Looking is a skill that lets someone have more introspective access to their own neural network structures." The structure of your question
... (read more)

Hmm... So going back to the paragraph I was responding to:

On the inside, before you Look, the thing you’re about to Look at doesn’t look on the inside like “high-level cognitive content”. It looks like how things are. This ends up with me saying things that sound kind of crazy or nonsensical, but to me are obvious once I Look at them. (E.g., there are no objects. We create objects in order to think. Because language is suffused with object-ness, though, I don’t know of any coherent way of talking about this.)

Are you saying that LW's approach to ontology has a different problem from this (which causes it to not be able to create an ontology that captures everything that's important about Looking)? (In other words, this paragraph wasn't meant to apply to LW; LW has a different problem.) Or is it something more like, LW's approach appreciates "we create objects in order to think" on an intellectual level but not on a practical level?

Or is it something more like, LW's approach appreciates "we create objects in order to think" on an intellectual level but not on a practical level?

That one.

Though to be clear, I'm not trying to talk specifically about the "there are no objects" thing exactly. I was using that as an example of something seen via Looking that I imagine sounds kind of crazy or nonsensical.

But I do mean that LW culture occurs to me as being subject to its ontology, and to the extent that there's discussion of this, that discussion is pretty reliably done within that ontology. This gives the illusion of it being justified (when that's actually just a consistency check) and makes the ontology's blindspots incredibly difficult to point out.

I don't think that sentence exactly presupposes the standard LW ontology. Rather, Wei_Dai is saying: "It currently looks to me as if this Looking stuff is compatible with standard LW ontology, and here's what it looks like; if that's wrong, please explain how".

I have largely lost hope, though, that any of the Enlightened[1] will seriously attempt to explain how, rather than just continuing to tell us Unenlightened[2] folks that our ontology, or paperclip-maximizer-like brain subagents, or whatever, block us from understanding. Of course they may well be right; perhaps explaining what we're wrong about really is futile because we just need to Get Out Of The Car but nothing they tell us will help us know what a "car" is or in what way we're "in" one or how to "get out", until enlightenment strikes and we see -- sorry, See -- for ourselves. That doesn't stop it being frustrating, though. Still, I continue to harbour some hope that Valentine's future articles may be, um, enlightening.

[1] I don't mean to imply (1) that the people in question have achieved True Enlightenment, whatever that may be, or (2) that... (read more)

I'm confused. You seem to be expressing frustration at not getting a clear explanation of how Looking is incompatible with standard LW ontology, but Val just said that it is compatible with it?

Of course they may well be right; perhaps explaining what we're wrong about really is futile because we just need to Get Out Of The Car but nothing they tell us will help us know what a "car" is or in what way we're "in" one or how to "get out",

... my post was an attempt to explain some of exactly that? A "car" is any belief or conceptual structure that you're fused with; how you get out of it depends on the exact nature of the belief. For things like simple emotional reactions, just managing to distract yourself may be enough; for deeper conceptual structures, developing skill at being able to break down your cognitive processes into more basic building blocks is typically required.

I don't think Valentine did quite say that (his notion of) Looking is compatible with standard LW ontology. He speaks of "the restriction of Looking to that ontology" and indicates that from within the standard LW ontology other things will "remain largely inaccessible". He says that what Wei_Dai is saying "presupposes the standard LW ontology" and that this produces a "Get out of the car" problem. (While, yes, conceding that within that ontology "yes, it's compatible" is the best available answer.)

I agree that your post is an attempt to explain those things. (And my slightly snarky comments about what "the Enlightened" are and aren't willing to do was -- I should have been explicit about this, sorry -- not meant to apply to you: your clarity and explicitness on this stuff is extremely welcome.) But my impression is that, while Valentine has expressed approval of your post and said that he feels understood and so forth, he thinks there are important aspects of Looking/enlightenment/kensho/... that it doesn't (and maybe can't) cover.

Obvious disclaimer: I am not Valentine, and I may very well be misunderstanding him.

But my impression is that, while Valentine has expressed approval of your post and said that he feels understood and so forth, he thinks there are important aspects of Looking/enlightenment/kensho/... that it doesn't (and maybe can't) cover.

Doesn't: yes, for sure.

Can't: mmm, maybe? I expect that by the end of the sequence I'm writing, we'll return to Kaj's interpretation of Looking and basically just use it as a given — but it'll mean something slightly different. Right now, I expect that if we just assume Kaj's interpretation, we're going to encounter a logjam when we apply Looking to the favored LW ontology, and the social web will have a kind of allergic reaction to the logjam that prevents collective understanding of where it came from. Once we collectively understand the structure of that whole process, we can smash face-first into the logjam, notice the confusion that results, and then make some meaningful progress on making our epistemic methods up to tackling serious meta-ontological challenges. At that point I think it'll be just fine to say "Yep, we can think of Looking as compatible with the standard LW ontology." Just not before.

gjm: Interesting. Let's see what the sequence holds...

Got it. Apology accepted and appreciated. :)

Valentine: I really am trying. When I talk about paperclip-maximizer-like subagents or ontological self-reference, it's not my intent to say "You can't understand because of XYZ." I'm trying to say something more like, "I'd like you to notice the structure of XYZ and how it interferes with understanding, so that you notice and understand XYZ's influence while we talk about the thing." Right now there's too large of an inferential gap for me to answer the "how" question directly, and I can see specific ways in which my trying will just generate confusion, because of XYZs. But I really am trying to get there. It's just going to take me a little while.

Strong agreement.
Valentine: Meta: Okay, I'm super confused what just happened. The webpage refreshed before I submitted my reply and from what I could tell just erased it. Then I wrote this one, submitted it, and the one I had thought was erased appeared as though I'd posted it. (And also, I can't erase either one…?)
Valentine: I really am sincerely trying. In this case there's a pretty epic inferential gap, and I'm working on bridging that gap… and it requires first talking about paperclip-maximizing-like mechanisms and illusions created by self-reference within ontologies that one is subject to. Then I can point at the Gödelian loophole, and we can watch our minds do somersaults, and we'll recognize the somersaults and can step back and talk coherently about what the existence of the ontological wormhole might mean for epistemology. Or at least that's the plan.

And… I recognize it's frustrating in the middle. And if I were more clever and/or more knowledgeable, I might have seen a way to make it less frustrating. I'd rather not create that experience for y'all.

FWIW, I don't think the Unenlightened[2] can't understand where I'm going. I just need some conceptual structures, like the social web thing [https://www.lesserwrong.com/posts/AqbWna2S85pFTsHH4/the-real-world-omega], to make where I'm going even possible to say — at least given my current skill with expressing this stuff.

Ha! :-) I hope so too.
I'm particularly concerned here because the culture around LW-style rationality seems to emphasize a very specific and almost mathematically precise ontology in a way that is often super useful but that I don't think is a necessary consequence of the epistemic orientation. That made me really hesitant to put a bunch of effort into spelling out what Looking might be within that favored ontology, since the whole point is to notice restrictions on epistemic strength imposed by ontological rigidity.

Yeah, this thing. In another comment I used the phrase "LW epistemic game" to describe the pattern in the local social web around deciding who's epistemically trustworthy and what flavors of arguments are epistemically acceptable. I'm not sure how to get people to Look at the game without looking like I'm defecting in the game.

Alright, but it is actually true that some flavors of arguments are acceptable (i.e. serve as evidence for truth) whereas other flavors of arguments are not acceptable (i.e. don't serve as evidence for truth). A lot of rationalist wisdom revolves precisely around distinguishing one from the other. Your comment sounds like someone who is comparing science and religion and saying that, both are just "patterns in a social web" that deem different sort of arguments as acceptable. However, they are not really symmetric. One of them is more correct than the other. So, saying that Looking cannot be explained by the sort of arguments that rationalists tend to accept does not strike me as a point in favor of Looking as a useful concept? I might be misunderstanding your intent.

acceptable (i.e. serve as evidence for truth)

I hope even in the LW epistemic frame we can appreciate how dangerous it is to conflate "acceptable," which is fundamentally a social notion, and "serve[s] as evidence for truth." The point of being able to Look at the LW epistemic game, from within the point of view of the LW epistemic game, is precisely to see the ways in which playing it well is Goodharting on truth-seeking.

Your comment sounds like someone who is comparing science and religion and saying that, both are just "patterns in a social web" that deem different sort of arguments as acceptable.

Yes, that's a sort of thing I might say.

However, they are not really symmetric. One of them is more correct than the other.

I get the feeling that you think you've said a simple thing, but actually the thing you've said is very complicated and deserves to be unpacked in much greater detail. In short: more correct about what? And why does being correct about those things matter?

The scientific frame is not just a collection of methods for finding truths, it's also a collection of implicit values about what sorts of truths are worth findin... (read more)

Vanessa Kosoy: I am not conflating them, I am using the word "acceptable" to mean "the sort of argument that would move a rational agent" = "serves as evidence for truth". Of course our community standards are likely to fall far short of the hypothetical standards of ideal rational agents. However, it seems not useful to point it out without saying precisely which error is being made. When you start from something highly optimized (as I do believe our community standards to be, relatively speaking) and make a random change, you are far more likely to do harm than good.

I don't know why you feel that I think that I said a simple thing (we're breaking Yudkowsky's Law of Ultrafinite Recursion here). I do think I said something that is relatively uncontroversial in this community. But alright, let's start unpacking.

The opposite is also not clear. Religion is responsible for many, many atrocities, both on a grand scale (witch hunts, crusades, jihads, pogroms etc.) and on a moderate scale (e.g. persecution of sexual minorities, upholding patriarchal social orders and justifying authoritarian regimes). These atrocities were ameliorated in the modern age to a large extent thanks to the disillusionment with religion brought about by the advent of science.

Moreover, religion doesn't really consciously set out to find truths about making good communities. It seems to be more a side effect of religion, since a group of people with common beliefs and traditions is naturally more cohesive than a group of people without such. I think that if we do set out to find these truths, we would be well advised to use the methods of science (e.g. empiricism and mathematical models) rather than the methods of religion (i.e. dogma).
"the sort of argument that would move a rational agent" = "serves as evidence for truth".

I think these are not at all the same, and that using the word "acceptable" makes you more likely to make this particular bucket error. In short: serves as evidence for truth to whom?

In practice what assuming these are the same looks like, in the LW epistemic game, is 1) pretending that we're all rational agents and 2) therefore we should only individually accept arguments that make sense to all of us. But in fact the sorts of arguments that would move me are not the sorts of arguments that would move you (and whether or not we're rational agents we still need to decide which arguments move us), which is why I can feel very confident that a decision I'm making is a good idea even though I may not be able to phrase my internal arguments for doing so in a way that would be satisfying to you. Optimizing for satisfying all of LW is straight up Goodharting on social approval.

However, it seems not useful to point it out without saying precisely which error is being made.

There's at least one error I can point to easily because someone else already did... (read more)

Vanessa Kosoy: I agree that you can make arguments that appeal to people that have a particular intuition and don't appeal to people that don't have this intuition. Although it is also possible to point out explicitly that you are relying on this intuition and that convincing the rest would require digging deeper, so to speak. I'm not sure whether the essence of your claim is that people on LW take ill to that kind of arguments?

I admit that I haven't read the entire "hero licensing" essay but my impression was that it is hammering home the same thesis that already appears in "inadequate equilibria", namely that "epistemic modesty" as often practiced is a product of status games rather than a principle of rationality. But I don't really understand why you think it's "a default outcome of the LW epistemic game". Can you expand?

Yes, the "LW epistemic game" is not perfectly truth-seeking. Nothing that humans do is perfectly truth-seeking. Since (I think) virtually nobody thinks that it is perfectly truth-seeking, it's mostly worth pointing out only inasmuch as you also explain how it is not truth-seeking and in what direction it would have to change in order to become more so.

Quote from Richard Feynman explaining why there are no objects here.

I've begun a STEM-compatible attempt to explain a "no objectively-given objects" ontology in "Boundaries, objects, and connections." That's supposed to be the introduction to a book chapter that is extensively drafted but not yet polished enough to publish.

Really glad you are working on this also!

Vanessa Kosoy: I don't understand why it should happen. Of course we can use reductionist materialism to reason about processes that happen in our brain when we are doing this very reasoning. It does cause confusion, which is the reason for debates surrounding "free will" et cetera, but this confusion is solvable. I think that I no longer have this confusion, so why should Looking be an exception?

The reason the paperclip maximizer won't listen is because it doesn't care, not because it doesn't understand what you're saying. So this allegory would only make sense if some parts of our mind don't care about the benefits of Looking while other parts do care. It still shouldn't be an impediment to understand what Looking is.
Of course we can use reductionist materialism to reason about processes that happen in our brain when we are doing this very reasoning.

I'm not disagreeing with that. I'm saying that:

  • It's pretty normal to miss the confusion in this case.
  • Looking isn't reasoning.

The reason the paperclip maximizer won't listen is because it doesn't care, not because it doesn't understand what you're saying. So this allegory would only make sense if some parts of our mind don't care about the benefits of Looking while other parts do care. It still shouldn't be an impediment to understand what Looking is.

…unless it suspects that understanding what Looking is might make it less effective at maximizing paperclips.

Vanessa Kosoy: How can understanding something make you less effective at doing something? Are you modeling the mind as an adversarial system, where one subagent wants to prevent another from gaining some knowledge? Or is Looking some kind of infohazard that can damage a mind just via the knowledge itself? In either case it makes Looking sound like something very dangerous.
How can understanding something make you less effective at doing something? Are you modeling the mind as an adversarial system, where one subagent wants to prevent another from gaining some knowledge?

This is roughly the kind of thing that can happen. For example, suppose that it's an important feature of your identity / self-concept that you're a good, kind person, such that seeing strong evidence that you're not such a person would be psychologically devastating to you - you wouldn't be able to trust yourself to interact with other people, or something, and so you'd hole up in your room and just be depressed, or at least some part of you is afraid that something like this is possible. Then that part of you will be highly motivated to ignore evidence that you're not a good, kind person, and avoid situations or thoughts that might lead you to see such evidence.

My experience is that many or even most people have a thing like this (and don't know it). At CFAR we use the term "load-bearing bug" to refer to a bug you have that actively resists being solved, because some part of you is worried that solving it might be devastating in this way.... (read more)

It's certainly true that there are truths about which people are lying to themselves. However, I'm confused about this being an explanation for why Looking is so difficult to explain. My impression from the "phone" allegory etc. was that Looking is just supposed to be such a difficult concept that most people have almost no tools in their epistemic arsenal to understand it. This is very different from saying that people already know in their hearts what Looking is but don't want to acknowledge it because it would disrupt some self-deception.

My impression from the "phone" allegory etc. was that Looking is just supposed to be such a difficult concept that most people have almost no tools in their epistemic arsenal to understand it. This is very different from saying that people already know in their hearts what Looking is but don't want to acknowledge it because it would disrupt some self-deception.

People don't need to already know it in order for this dynamic to play out. All that's required is that the person have some kind of idea of what type of impact it'll have on their mental architecture — and that "some kind of idea" needn't be accurate.

This gets badly exacerbated if the concept is hard to understand. See e.g. "consciousness collapses quantum uncertainty" type beliefs. This does a reasonably good job of immunizing a mind against more materialist orientations to quantum phenomena.

But to illustrate in a little more detail how this might make Looking more difficult to understand, here's a slightly fictionalized exchange I've had with many, many people:

  • Them: "Give me an example of Looking."
  • Me: "Okay. If you Look at your hand, you can se
... (read more)

Alright, I think I now understand much better what you mean, thank you. It is true that there are things that set off epistemic immune responses despite being "innocuous" (e.g. X-risk vs. "doomsday prophecies" and rationalist community vs. "cult"). However, it is also the case that these immune responses are there for a reason. If you want to promote an idea which sets off such responses then, IMHO, you need to make it clear as early as possible how your idea is different from the "pathogens" against which the response is intended. Specifically in the case of Looking, what rings my alarm bells is not so much the "this-ness" etc. but the claim that Looking is beyond rational explanation (which Kaj seems to be challenging in this post).

Alright, I think I now understand much better what you mean, thank you.

Great. :-)

[…]these immune responses are there for a reason.

Of course. As with all other systems.

Specifically in the case of Looking, what rings my alarm bells is not so much the "this-ness" etc. but the claim that Looking is beyond rational explanation (which Kaj seems to be challenging in this post).

The following has been said many times already, but I'll go ahead and reiterate it here once more: I was not trying to claim that Looking is beyond rational explanation.

It still shouldn't be an impediment to understand what Looking is.

Except if the parts that listen to the explanation don't care about actually understanding it.

I don't know whether this is the thing that Val means, but I've certainly had times of finding things that were kind of like Looking, and getting all excited about them... but my mind was mostly interested in understanding them on an intellectual level and then showing off that "understanding", rather than actually doing the things that they were all about.

And I think that, on some level there was a small voice at the back of my mind, pointing out that I was missing the point... all while the paperclipper in charge ignored it and ran things the way it wanted to run them.

Vanessa Kosoy: Alright, but it seems completely reasonable to want to understand something on an intellectual level before actually doing it. If you don't understand it on an intellectual level then how can you know whether it's worth doing?

Sure, there's a sense in which you may want to get some intellectual understanding of what something is before you start doing it. But I wasn't just developing an intellectual understanding of the things in order to figure out whether they were worth doing: I was already convinced that they were worth doing. Rather, I was focusing on the intellectual understanding of the thing as a substitute for actually doing the thing.

Suppose I wanted to become a musician, so I spent all my time reading biographies of musicians, studying research on the psychological benefits of learning music, and following discussions on forums for musicians. But I spent no time actually practicing playing music, nor doing things like learning to read musical notation.

Yes, there may be some benefit to be had from the things I'm doing. Maybe they will be useful for helping me determine whether or not I really want to become a musician. But if I decide that I do want to become one, and then think that by spending all my time on these things I'm making major progress towards being a musician, then I'm just deluding myself.

Edited to add: And also, I might note... if...

If you don't understand it on an intellectual level then how can you know whether it's worth doing?

You can have intuitions and trust them. This is most of how I learned math: I had strong intuitions about what I wanted to learn at any given point, I didn't have an intellectual understanding of why I wanted to learn those things as opposed to other things, and I followed my intuitions and they led me to great places (which is why I kept trusting them).

Intuitions are great and they are a core part of human intelligence, or even the core part of human intelligence. However, the essence of the rationalist project is that our intuitions are biased, so often we need to re-examine them, question them, try to understand whether they are well-calibrated in this particular case, etc. Also, intuitions are good for an individual, but, since intuitions are (almost by definition) very hard to communicate, they are not very useful for social coordination.

However, the essence of the rationalist project is that our intuitions are biased

Why do you trust your explicit intellectual reasoning any more than your intuitions?

Also, intuitions are good for an individual, but, since intuitions are (almost by definition) very hard to communicate, they are not very useful for social coordination.

I don't really understand what point you're trying to argue for with this. Is the conclusion "...and therefore we shouldn't talk about them?" or "...and therefore we shouldn't use them?" or what?

I agree that if I go around making a lot of decisions based on my intuitions it will be harder to explain those decisions to other people. There are situations in which I want to optimize very hard for making decisions that are explicable in this way (e.g. if I'm a business manager), but there are situations where I don't, and if I behave as if I'm always in my-decisions-need-to-be-explicable mode then I am missing opportunities to grasp a lot of power.

Vanessa Kosoy: Firstly, there are cases where you can definitely trust your explicit reasoning more than your intuitions. For example, if I prove a mathematical theorem then I trust it more than just having an intuition that the theorem is true. Similarly, if I use physics to compute something about a physical phenomenon, I trust it more than just having an intuition about the physical phenomenon.

For most questions you can't really compute the answer. You need to use some combination of intuition and explicit reasoning. However, this combination is indeed more trustworthy than intuition alone, since it allows treating at least some aspects of the question with precision. Moreover, if you manage to analyze your intuition and understand its source, you know much better to what extent you should trust it for the question at hand. Finally, it is the explicit reasoning part which allows you to offset the biases that you know your reasoning to have, at least until you trained your intuition to offset these biases automatically (assuming this is possible at all).

The point I'm trying to argue is, if someone wants to promote Looking to the community as a useful concept and practice, then they should prepare to argue in favor of it using explicit intellectual reasoning.
For example, if I prove a mathematical theorem then I trust it more than just having an intuition that the theorem is true. Similarly, if I use physics to compute something about a physical phenomenon, I trust it more than just having an intuition about the physical phenomenon.

I think the situation is much more complicated than this, at least for experts. Cf. Terence Tao's description of the pre-rigorous, rigorous, and post-rigorous stages of mathematical development. Mathematical papers often have incorrect proofs of correct statements (and the proofs are often fixable): mathematicians' intuitions about mathematics are so well-developed that they lead them to correct conjectures even when attempts to write down proofs go awry, since in a long proof there are many opportunities to make mistakes. My experience has definitely been that the longer a proof / computation gets, the more I trust my intuitions if the two happen to disagree. (But of course I trained my intuitions on many previous proofs / computations.)

For most questions you can't really compute the answer. You need to use some combination of intuition and explicit reasoning. However, this combination is indeed more trustworthy than intuition alone, since it allows treating at least some aspects of the question with precision.
...

With explicit intellectual reasoning, there's a chance for error correction. If someone's initial reasoning is wrong, others can point it out or they can eventually realize it on their own with further reasoning, and it seems possible to make progress towards the truth over time this way. (See science, math, and philosophy.) I'm worried that if Looking is wrong on some question and makes me unjustifiably certain about it, as well as making me discount explicit reasoning about that question, I won't be able to back out of that epistemic state.

I'm also worried that LW as a whole will get into such a state and not be able to back out of it, which makes me want to also discourage other people from trying Looking without first having an explicit understanding of its epistemic nature. I want to have answers to questions like:

  1. How does Looking work (especially on questions that are not confined to the internals of one's own mind)?
  2. How confident should we be about the answers that Looking gives? (Do people tend to be overconfident about Looking and if so how should we correct for that both as individuals and as a community?)
  3. If Looking gives systematically wrong answers to certain questions (i.e.,
...
I'm worried that if Looking is wrong on some question and makes me unjustifiably certain about it, as well as making me discount explicit reasoning about that question, I won't be able to back out of that epistemic state.

There are lots of ways to find out that you're wrong about something. Instead of doing explicit reasoning you can make predictions and run experiments. Looking doesn't mean not being on board with beliefs having to pay rent.

Example: when I Look at people, I get a sense of what's going on with them to cause them to behave in certain ways, and I can test that sense by using it to make predictions and running experiments to check them (e.g. asking them a certain kind of question to see their response), in addition to doing explicit reasoning and seeing if the explicit reasoning comes to similar conclusions. Looking is not meant to displace explicit reasoning, but it is a different tool than explicit reasoning, and sometimes I want to use one or the other or both.

Subexample: I met a guy recently at a circling workshop, and after he had said about 10 words I was highly confident, based on how I was reading his tone of voice and body language (which mani...

What’s unsatisfying about Kaj’s original post above as an answer to this question?

I think it's a step in the right direction, but I'm not sure if his explanation is correct, or that different people are even talking about the same thing when they say "Looking".

The framing of this question feels off to me. Looking is a source of data, not answers. What you do with that data is up to you, and you can apply as much explicit reasoning as you want once you even have access to the additional data at all.

Take this example of Looking:

The point of being able to Look at the LW epistemic game, from within the point of view of the LW epistemic game, is precisely to see the ways in which playing it well is Goodharting on truth-seeking.

I had interpreted this to mean that you were getting the answer of "playing it well is Goodharting on truth-seeking" directly out of Looking. If that's not the case, can you explain what the data was, and how that led you to the conclusion of "playing it well is Goodharting on truth-seeking"? (I think Goodharting is almost certainly true and unavoidable to some extent in any social situation, and it wouldn't be too hard to find, via our normal intuitions and explicit reasoning, specific forms of Goodharting on LW. What additional data does Looking provide?)

I had interpreted this to mean that you were getting the answer of "playing it well is Goodharting on truth-seeking" directly out of Looking. If that's not the case, can you explain what the data was, and how that led you to the conclusion of "playing it well is Goodharting on truth-seeking"? (I think Goodharting is almost certainly true and unavoidable to some extent in any social situation, and it wouldn't be too hard to find, via our normal intuitions and explicit reasoning, specific forms of Goodharting on LW. What additional data does Looking provide?)

I don't have a cached answer to this; Looking is preverbal, so I have to do a separate cognitive task of introspection to give a verbal answer to this question. (I'm also somewhat more confident than I was that I'm doing the thing that Kaj and Val call Looking but certainly not 100% confident. Maybe 90%.)

Okay, so here's an analogy: when I was in 8th grade I read Atlas Shrugged, and it successfully invaded my mind and turned me into an objectivist for several months. I went around saying things like "gift-giving is immoral" (I also gave people gifts, and refused to noti...

I'm still not sure what exactly was the data that you got from Looking. You said previously "What you do with that data is up to you, and you can apply as much explicit reasoning as you want once you even have access to the additional data at all." In order to apply explicit reasoning to some data we have to verbally describe it or give it some sort of external encoding, right? If so, can you give a description or encoding of just the raw data (or the least processed data that you have access to) that you got from Looking?

Framing things in terms of data, hypotheses, and predictions is a strong concession to the LW epistemic game that I am explicitly choosing to make right now for the sake of ease of communication, and not everyone’s going to make that choice all the time.

What's the proposed alternative to framing things this way, and how does one correct epistemic errors in that frame? For example if Val says “one clear thing I noticed when I first intentionally Looked is that everyone has bodhicitta” and I want to ask him about data and predictions, but he doesn't want to use that frame, what should I do instead?

Val does describe a few things after your quote that can be interpreted as such predictions.

I'm not seeing anything that looks like testable predictions. Can you spell them out?

Qiaochu Yuan: You can do less direct things, like having other nonverbal parts of your mind process the data, introspecting / Focusing to get some words out of those parts, and then doing explicit reasoning on the words. I already tried to do that; the data gets processed into felt senses and I tried to give my Focusing labels [https://www.lesserwrong.com/posts/PXqQhYEdbdAYCp88m/focusing-for-skeptics] for the felt senses. I probably didn't do the best job but I don't feel up to putting in the level of effort that feels like it would be necessary to do substantially better.

Here's another analogy: if you're face-blind, you're getting the same raw sensory input from your eyes that everyone else is (up to variations between your eyes, whatever), but the part of most people's minds explicitly dedicated to processing and recognizing faces is not active or at least is weak, so you can see a face and process it as "this face with this kind of eyes and this nose and this hair" where someone else would see the same face and process it as "Bob's face." Looking is sort of like becoming less face-blind. (Only sort of, this is really not a great analogy.) And it's unclear how one would go about communicating what's different about your mind when this happens, other than "now it's immediately clear to me that that's Bob's face, whereas before I would have had to use explicit reasoning to figure that out."

Meet him in person and ask him to show you the way in which everyone has bodhicitta. (Of course you are fully justified in finding this too expensive / risky to try.) Edit: I misunderstood Wei Dai's question; see below.

I don't have a good verbal description of the alternative frame (nor do I have only one alternative frame), but the way you correct epistemic errors in it is to smash into the territory repeatedly. (There's an additional thing of just not worrying about epistemic errors as such very much. Tennis players don't spend a lot of time asking themselves "but what if all of my beliefs about tennis are wrong tho?" because they just play a bunch of tennis and notice what works and what doesn't instead, without ever explicitly thinking about their epistemics at all. This isn't to say it might not benefit them to think about epistemics every once in awhile, but it's not the mode they primarily operate in.)

Meet him in person and ask him to show you the way in which everyone has bodhicitta. (Of course you are fully justified in finding this too expensive /​ risky to try.)

In practice, doesn't that just translate to "shut up and don't question it"?

(There’s an additional thing of just not worrying about epistemic errors as such very much. Tennis players don’t spend a lot of time asking themselves “but what if all of my beliefs about tennis are wrong tho?” because they just play a bunch of tennis and notice what works and what doesn’t instead, without ever explicitly thinking about their epistemics at all. This isn’t to say it might not benefit them to think about epistemics every once in awhile, but it’s not the mode they primarily operate in.)

I guess it depends on what field you're working in, so perhaps part of the disagreement here is caused by us coming from different backgrounds. I think in fields with short, strong feedback cycles like tennis and math, where epistemic errors aren't very costly, you can afford to not worry about epistemic errors much and just depend on smashing into the territory for error correction. In other fields like computer security and philosophy...

Qiaochu Yuan: This seems really uncharitable, by far the least charitable you've been in this conversation so far (where I've generally been 100% happy with your behavior on the meta level). I have not asked you to shut up and I have not asked you not to question anything. You asked a question about what things look like in an alternative frame and I gave an honest answer from that frame; I don't like being punished for answering the question you asked in the way you requested I answer it. Edit: The above was based on a misunderstanding of Wei Dai's question; see below.

Some things are just hard to transmit except in person, and there are plenty of totally unobjectionable examples of this phenomenon. Feedback cycles in circling are very short, although pretty noisy unless the facilitator is quite skilled. Feedback cycles in ordinary social interaction can also be very short, although even noisier.
Wei Dai: To clarify, I wasn't saying that you were doing either of those things. My point was that you seemed to be proposing an epistemic norm whose practical effect would be similar to people being allowed to say "shut up and don't question it", namely that it would make it very hard to question certain conclusions and correct potential errors. (Again, I don't think you're doing this now, just proposing it as something that should be acceptable.)

Some examples please? I honestly can't think of anything I know that can only be transmitted in person.
My point was that you seemed to be proposing an epistemic norm whose practical effect would be similar to people being allowed to say "shut up and don't question it", namely that it would make it very hard to question certain conclusions and correct potential errors.

I don't know that I was proposing an epistemic norm. What I did was tell you what interaction with the territory you would need to have in order to be able to understand a thing, in the same way that if we lived in a village where nothing was colored red and you asked me "what would I have to do to understand the ineffable nature of redness?" I might say "go over to the next village and ask to see their red thing."

Some examples please? I honestly can't think of anything I know that can only be transmitted in person.

Playing basketball? Carpentry? Singing? Martial arts? There are plenty of physical skills you could try teaching online, you probably wouldn't get very far trying to teach them via text, probably somewhat farther via video, but in-person instruction, especially because it allows for substantial interaction and short feedback cycles, is really hard to replace...

I don’t know that I was proposing an epistemic norm.

In that case there was a misunderstanding somewhere. Here's my understanding/summary of our course of conversation: I said that explicit reasoning is useful for error correction. You said we can apply explicit reasoning to the data generated by Looking, and also check predictions for error correction. I said people who talk about Looking don't tend to talk in terms of data, hypothesis and prediction. You said they may not want to use that frame. I asked what I should ask about instead (meaning how else can I try to encourage error correction, since that was the reason for wanting to ask about data and prediction in the first place). You said "Meet him in person and ask him to show you the way in which everyone has bodhicitta." I interpreted that as a proposed alternative (or addition) to the norm of asking for data and predictions when someone proposes a new idea.

I guess the misunderstanding happened when I asked you "what should I do instead?" and you interpreted that as asking how can I understand Looking and bodhicitta, but what I actually meant was how can I encourage error correction in case Val was wrong about everyone having bodhicitta, and he doesn't want to use the frame of data, hypothesis and prediction.

I guess the misunderstanding happened when I asked you "what should I do instead?" and you interpreted that as asking how can I understand Looking and bodhicitta, but what I actually meant was how can I encourage error correction in case Val was wrong about everyone having bodhicitta, and he doesn't want to use the frame of data, hypothesis and prediction.

Oh. Yes, that's exactly what happened. Thanks for writing down that summary.

I don't really have a good answer to this question (if I did, it would be "try to encourage Val to use the frame of data, hypothesis and prediction, just don't expect him to do it all the time") so I'll just say some thoughts. In my version of the frame Val is using there's something a bit screwy about thinking of "everyone has bodhicitta" as a belief / hypothesis that makes testable predictions. That's not quite the data type of that assertion; it's a data type imported over from the LW epistemic frame and it's not entirely natural here.

Here's a related example that might be easier to think about: consider the assertion "everyone wants to be loved." Interpreted too...

Vanessa Kosoy: I agree that the situation is more complicated; I disagree that it is "much more complicated". Yes, mathematicians rely on intuition to fill in the gaps in proofs and to seek out the errors in proofs. And yet, it is uncontroversial that having a proof should make you much more confident in a mathematical statement than just having an intuition. In reality, there is a spectrum that goes roughly "intuition that T is correct" -> "informal argument for T" -> "idea for how to prove T" -> "sketch of a proof of T" -> "unvetted proof of T" -> "vetted, peer-reviewed proof of T" -> "machine-verifiable formal proof of T".

Have I actually tried what? As to why I believe this, I think I already gave an "explicit reasoning" argument, and, yes, my intuition and life experience confirm this, although this is not something that I can transmit to you directly.

This is a wrong way to look at this. Intuition and explicit reasoning are not two separate judges that give two separate verdicts. Combining intuition and explicit reasoning doesn't mean averaging the results. The way it works is, when your intuition and reasoning disagree, you should try to understand why. You should pit them against each other and let them fight it out, and in the end you have something that resembles a system of formal arguments with intuition answering some sub-queries, and your reasoning and intuition both endorse the result. This is what I mean by "understanding on an intellectual level". I don't insist that you only use explicit reasoning. Feel free to use metaphors, koans, poetry and whatnot. But you should also explain it with explicit reasoning.

Well, if you are saying "I don't want to convince everyone or even the most" that's your prerogative of course. I just feel that the point of this forum is trying to have discussions whose insights will percolate across the entire community. Also, I am personally interested in understanding what Looking is about, and I feel that the explanations...
For most questions you can't really compute the answer. You need to use some combination of intuition and explicit reasoning. However, this combination is indeed more trustworthy than intuition alone, since it allows treating at least some aspects of the question with precision.

I don't think this is true; intuition + explicit reasoning may have more of a certain kind of inside-view trust (if you model intuition as not having gears that can be trusted), but intuition alone can definitely develop more outside-view/reputational trust. Sometimes explicitly reasoning about the thing makes you clearly worse at it, and you can account for this over time.

Finally, it is the explicit reasoning part which allows you to offset the biases that you know your reasoning to have, at least until you trained your intuition to offset these biases automatically (assuming this is possible at all).

I also don't think this is as clear-cut as you're making it sound; explicit reasoning is also subject to biases, and intuitions can be the things which offset biases. As a quick and dirty example, even if your explicit reasoning takes the form of mathematical proofs which are verifiable...

Vanessa Kosoy: I don't see it this way. I think that both intuition and explicit reasoning are relevant to both the inside view and the outside view. It's just that the input of the inside view is the inner structure of the question, and the input of the outside view is the reference category inside which the question resides. People definitely use the outside view in debates by communicating it verbally, which is hard to do with pure intuition. I think that ideally you should combine intuition with explicit reasoning and also combine the inside view with the outside view. You can certainly have biases about these things, but these things can be regarded as coming from your intuition.

You can think of it as P vs. NP. Solving problems is hard but verifying solutions is easy. To solve a problem you have to use intuition, but to verify the solution you rely more on explicit reasoning. And since verifying is so much easier, there is much less room for bias.
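The P-vs-NP point can be made concrete with a toy sketch (my own illustration, not from the discussion above), using subset-sum as the example problem: finding a subset that hits a target takes exponential work in the worst case, while checking a proposed subset is a single cheap pass.

```python
from itertools import combinations

def verify(nums, subset, target):
    # Verifying a proposed solution is cheap: one linear pass.
    return all(x in nums for x in subset) and sum(subset) == target

def solve(nums, target):
    # Finding a solution is expensive: brute force over all 2^n subsets.
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
solution = solve(nums, 9)         # → [4, 5]
print(verify(nums, solution, 9))  # → True
```

The asymmetry is the thing being gestured at: whatever opaque process produces the candidate answer, the verification step is mechanical, so there is much less room for bias in it.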
If you don't understand it on an intellectual level then how can you know whether it's worth doing?

(Addressing this question in general, and then in terms of this discussion)

I think that "worth doing" is the sneaky part of that question. Any decision making process (by intuition or intellectual thought) is usefully thought of as a trade-off between time it takes to decide, and the delta in payout.

If it takes 5 min to pick the "most worth" meal at a restaurant, and it's only marginally better, maybe you should just order the first thing that comes to mind and spend more time talking with your friends.

Also, if it's super easy to get empirical data about how "worth doing" each action is (you're at a buffet and can sample a small bit of everything), maybe it's better to just do the experiment.

I would only make "have an intellectual understanding of the thing" a prerequisite if I thought it was super costly to try, or I thought it was potentially dangerous, or I only got one shot.

Which I think is a bigger crux for a lot of people on this topic. I don't see it as super costly to put bits of effort here and there into trying to get an experiential sense of Looking.

Though who knows, maybe I don't value my time enough. I certainly at least want to be a person who consistently makes time to test reasonable sounding ideas/actions/experiences like Looking.

Vanessa Kosoy: I completely agree that, when something is sufficiently cheap, it is often better to simply try it than spend a lot of effort on trying to analyze it. However, my impression was that Looking is far from cheap, that it is something that requires years of practicing meditation to achieve. I might be wrong? Moreover, it also seems potentially dangerous, at least for me. I know for a fact that my sanity is not impervious and I'm wary of trying anything that might harm it.

It definitely doesn't take years of practicing meditation. Though I'm hesitant to speculate on how long it would take on average, because how prepared for the idea people are varies a lot. The hardest step is the first one: realizing that people are talking about things you don't yet understand.

Hazard: If the question was, "Should I commit to spending years investigating Looking and related ideas?" I'd agree that most people could rightly conclude, "No, I shouldn't". So a better question becomes, "Is it useful to take a step in that direction?". Again, to me the answer seems to be yes. But besides "is it worth the time", there's still the prospect of danger you mention. I used to think that there was no way to be in danger as long as you took things one step at a time, but slippery slopes and murder Gandhis [https://www.lesserwrong.com/s/XsMTxdQ6fprAQMoKi/p/Kbm6QnJv9dgWsPHQP] make a lot of sense. Here's 5 min of thought on why it doesn't feel dangerous to me:

* I feel I've gotten much better at noticing confusion, and it seems like Looking would have to systematically undermine my ability to notice confusion before it could hurt me (and that registers on a gut level as very unlikely)
* I've previously been in a place of bad epistemics and bad functionality via going overboard with "naive rationality". Having come out the other end, I guess I feel a bit inoculated to "getting hijacked by an idea."
* I think this talk [https://www.youtube.com/watch?v=dup6xkvj1S0] by Dan Barker kicked off a strong sense of, "Oh, you can have a crazy 'supernatural' experience, and it doesn't have to mean anything."

What do you notice when you think on your feeling of it being dangerous?

I feel that Looking might be dangerous for roughly the same reason psychedelic drugs are dangerous. I know that my sanity is not very robust, and I also know that my mind is functioning quite well at the moment (so "if it ain't broke" applies). Experimenting with highly unusual states of consciousness seems like something that pushes your mind away from its normal operating conditions and can destabilize the system in difficult-to-predict ways.

Learning to Look can definitely be hazardous: some teachers advise people with any kind of mental health issues to be very cautious about trying meditation at all. In particular, learning the required kind of sensitivity for noticing subtle movements of mind means that you also become aware of any unpleasant stuff that you might so far have been suppressing in order to cope.

Ideally, that dark stuff bubbling up to the surface will be rough but beneficial, as the person meditating will process it and get over it; but that's under the assumption that they are relatively mentally healthy (on some relevant axis). For people who aren't, it may be too much to handle at once.

And, as you say, deeper Looking inherently involves moving the mind outside its standard parameters. If the techniques aren't used right, there is a very definite risk of breaking things.

Basically agree with this.

The most significant frustration with trying to speak effectively about this topic is that a significant fraction of what we are engaging with when we do is the mind's attempts to immunize itself from needing to get out of the car by recontextualizing the instructions as something referring to things within the car. This is especially egregious with extremely intelligent folks who can get very creative with it.

"But what about X? I found X very useful!"

Yes, perceived usefulness is one of the best ways it can convince you to watch TED tal-I mean understand contemplative practices.

When a person becomes capable of observing in sufficient detail the mental process by which this sense of an I is constructed, the delusion of its independent existence is broken. Afterwards, while the mind will continue to use the concept “I” as an organizing principle, it becomes correctly experienced as a theoretical fiction rather than something that could be harmed or helped by the experience of “bad” or “good” emotions. As a result, desire and aversion towards having specific states of mind (and thus suffering) cease. We cease to flinch away from pain, seeing that we do not need to avoid it in order to protect the “I”.

As a "third party", this explanation makes little sense to me. Suppose it's true that "our minds are composed of a large number of subagents, which share information by sending various percepts into consciousness" and I realize this through meditation, it seems that "I" remain something real (namely a group of subagents) and can still be potentially harmed or helped. Why would a group be less capable of being harmed than a monolithic agent? I'm not seeing the logic here.

Also, I'm surprised that you give no mention to transient ...

Suppose it's true that "our minds are composed of a large number of subagents, which share information by sending various percepts into consciousness" and I realize this through meditation, it seems that "I" remain something real (namely a group of subagents) and can still be potentially harmed or helped. Why would a group be less capable of being harmed than a monolithic agent? I'm not seeing the logic here.

Right, so I'm using a very specific sense of "harmed" here.

The claim is not that the subagents couldn't be harmed or helped in the sense of e.g. brain damage damaging their function, things in the external world happening more or less according to their values, etc. They obviously can, and there's nothing delusional about that.

The "harm" that I'm referring to is the alief that there's something intrinsically bad about experiencing emotions with negative valence. For instance, I might experience stress about going to the dentist, and this is not because I would expect the dentist to do the "objective" kind of damage which you're talking about - obviously I expect the dentist to benefit me,... (read more)

Did that make any sense?

Yes, that does make a lot more sense.

Similarly, it’s been a while since I last read it, but I believe that the Dietrich paper is explaining on a neurophysiological level basically the same process that this post gave a cognitive explanation of.

I think Dietrich's explanation is essentially non-cognitive, i.e., the denial of self is caused by something like a hardware glitch or switch that is triggered by meditation, rather than a sequence of cognitive steps. (Similar things can happen during endurance running, hypnosis, and drug-induced states, which are more obviously non-cognitive.) Here's the relevant part of the paper:

meditation entails sustained concentration and heightened awareness by focusing attention on a mantra, breathing rhythm, or a number of other internal or external events [...] Humans appear to have a great deal of control over what they attend to (Atkinson & Shiffrin, 1968), and in meditation, attentional resources are used to actively amplify a particular event such as a mantra until it becomes the exclusive content in the working memory buffer. This intentional, concentrated effort selectively disengages all other cognitive ca

... (read more)

Ah, right, now I think I understand what you were saying.

I think the thing here is that, like a lot of old research on the topic, Dietrich is not very precise about exactly what kind of meditation he's talking about: possibly because he doesn't (or at least didn't at the time of writing this) realize that meditation actually covers a wide variety of different practices.

In particular, the thing that he's talking about sounds kind of like he's describing something like high-level samatha jhanas: probably something like the seventh or eighth jhana (note: links within that wiki seem to be broken, people curious about the earlier jhanas may want to use the book's pdf instead).

These are indeed mental states that a meditator may end up in, if they manage to concentrate really, really intensely on just one thing, to the exclusion of anything else. And from those descriptions, it really does sound like you reach them by successively turning off brain functions until you get a really weird mental state.

However, a lot of traditions - including the author of the linked wiki/book - emphasize that getting into samatha jhana states is not enlightenment. Some ... (read more)

I think the thing here is that, like a lot of old research on the topic, Dietrich is not very precise about exactly what kind of meditation he’s talking about: possibly because he doesn’t (or at least didn’t at the time of writing this) realize that meditation actually covers a wide variety of different practices.

Good point, that would explain a lot. What do you think of the second paper that I link to here that tries to create a framework for classifying the various contemplative practices? If it seems like a useful framework, where does "Looking" fall into it?

Basically, Ingram’s saying the same thing that you were suggesting: that there’s no particular insight to be had from these states, as they’re just tripping on weird experiences that you get from turning normal brain functions off, but that people who get too attached to them may start rationalizing all kinds of excess significance to them.

Interestingly, it seems that there are deep disagreements between and even within Buddhist traditions about which mental states count as "enlightenment" or "awakening", and which ones are merely states of deep concentration. See the first paper linked to in the same post.

Those papers are a great find!

What do you think of the second paper that I link to here that tries to create a framework for classifying the various contemplative practices? If it seems like a useful framework, where does "Looking" fall into it?

I really like that framework. This description of the deconstructive family definitely sounds like it's talking about Looking:

Another approach would be to directly examine your experience, for example by dissecting the feeling of anxiety into its component parts and noticing how the thoughts, feelings, and physical sensations that comprise the emotion are constantly changing. In the context of Buddhist meditation, this process of inquiry is often applied to beliefs about the self, though it can similarly be applied to the nature and dynamics of perception, to the unfolding of thoughts and emotions, or to the nature of awareness.

Also, later in the same section, the paper makes a similar claim as what I was saying in my article: that establishing basic proficiency in meta-awareness / the attentional family is a prerequisite for achieving the basic skills for overcoming cognitive fusion, after which one can start developing ski... (read more)

Wow, thanks for this incredibly detailed comment. It clarified something for me, especially this:

It was something like... when I struggle against pain, there's an element of identifying (fusing together) with the subprocess that is fighting against the pain, rather than with the subprocess that's producing the pain.

It feels to me like I've acquired some of the skill of not fighting against pain, but I don't think I did it by doing anything to my sense of self. It's more like I just repeatedly noticed that experiencing pain kept not killing me.

Yeah, there are a lot of things that you can do - or which can happen to you - which will help with not fighting against pain. Just undergoing a lot of painful stuff and noticing that it doesn't actually kill you is definitely one as well. (There are lots of anecdotes about people who've gone through a lot of terrible stuff and then been totally unfazed by more mundane things, being all like, "is that the best you've got, reality? I've been shot at in a combat zone, I'm not going to get freaked out by a dentist". OTOH, some do get traumatized and even more freaked out by small stuff.)

Meta note: the fact that pasting text into the comment box results in it being bold is a bug.

Let's not start using bold as a convention for indicating that text is a quote. The actual quote syntax (with greater than sign) or italics look much better. Don't they?

Adding to my other comment...

I'm skeptical about the value of most neurophysiological explanations in general: I think that in many cases, they just create an illusion of understanding by throwing in neurological terms that give an appearance of detail without actually contributing conceptual gears. If I say "learning to navigate a city causes structural changes in the hippocampus", that doesn't really tell most people anything that they could use, but does give them a feeling that they now understand this better.

Similarly, I could have quoted from the Dietrich paper

the prefrontal cortex enables the top layers of consciousness by contributing the highest-order cognitive functions to the conscious experience ... evidence suggests that initial and much ensuing information processing on perception, attention, or memory occurs in other brain areas before further integration in the frontal lobes ... meditation results in transient hypofrontality with the notable exception of the attentional network in the prefrontal cortex

and added something like "and thus, Looking is about learning to selectively downregulate the activity of the prefrontal cortex - which carrie... (read more)

As far as I can tell, this post successfully communicates a cluster of claims relating to "Looking, insight meditation, and enlightenment". It's written in a quite readable style that uses a minimum of metaphorical language or Buddhist jargon. That being said, likely due to its focus as exposition and not persuasion, it contains and relies on several claims that are not supported in the text, such as:

  • Many forms of meditation successfully train cognitive defusion.
  • Meditation trains the ability to have true insights into the mental causes of mental processes.
  • "Usually, most of us are - on some implicit level - operating off a belief that we need to experience pleasant feelings and need to avoid experiencing unpleasant feelings."
  • Flinching away from thoughts of painful experiences is what causes suffering, not the thoughts of painful experiences themselves, nor the actual painful experiences.
  • Impermanence, unsatisfactoriness, and no-self are fundamental aspects of existence that "deep parts of our minds" are wrong about.

I think that all of these are worth doubting without further evidence, and I think that some of them are in fact wrong.

If this post were coupled with others that s

... (read more)
Raemon: I'd be interested in you going into the details of which claims seem wrong and why.
DanielFilan: Well, I'm significantly more confident that at least one is wrong than about any particular one being wrong. That being said:

  • It seems wrong to claim that meditation tells people the causes of mental processes. You can often learn causal models from observations, but it's tricky, and my guess is that people don't do it automatically.
  • I don't think that most people implicitly act like they need to avoid mental experiences.
  • I don't know if 'suffering' is the right word for what painful experiences cause, but it sure seems like they are bad and worth avoiding.
  • My guess is that unsatisfactoriness is not a fundamental aspect of existence.

That being said, there's enough wiggle room in these claims that the intended meanings would be things that I'd agree with, and I also think that there's a significant shot that I'm wrong about all of the above.

I'm still not 100% sure I understand Val's definition of Looking, so I'm not quite willing to commit to the claim that it's the same as Kaj's definition. But I do think it's not that hard to square Kaj's definition with those quotes, so I'll try to do that.

Kaj's definition is:

being able to develop the necessary mental sharpness to notice slightly lower-level processing stages in your cognitive processes, and study the raw concepts which then get turned into higher-level cognitive content, rather than only seeing the high-level cognitive content.

Everything you experience, no matter the object, is experienced via your own cognitive processes. When you're doing math, or talking to a friend, or examining the world, that is an experience you are having, which is being filtered by your cognitive processes, and therefore to which the structure of your mind is relevant.

As Kaj describes, the part of your thought processes you normally have conscious access to are a tiny fragment of what is actually happening. When you practice the skill of making more of it conscious and making finer and finer discriminations in mental experience, you find t... (read more)

Yeah, this is basically how I squared it with Val's version, too. A few other examples:

  • The koan thing: I haven't really done koans, so this might be wrong, but I'd guess that the intent is something like: as you are thinking about the koan, Look at the way that your mind represents the koan and how it struggles with trying to solve the paradox; see if those representations give you any hints about the nature of the answer. (One may note that the experience which triggered my "kensho" was by itself an attempt to answer a paradox, and maybe you could formulate it as a koan, something like "what do you do when you let go of doing", or whatever.) Certainly I've felt like meditation experience has given a slightly better intuition of what exactly it is that koans might be hinting at, though again, I haven't really tried working with them.
  • Val also talked about Looking as a way to see the intelligent social web, which also sounds like it's something directed at the outside not the inside. But after reading his post, I've spent some time paying attention to things like... what kinds of narratives and roles do people's words and po
... (read more)

Yep. I feel understood.

Cross-posting my comment from another thread here:

--

One way of [explaining what the Buddhist conception of no-self means], which I think should be mostly accurate, is that the state that is being booed is a belief in the homunculus fallacy.

Dennett, Kurzban, and others have pointed out that there are facts about the way in which the mind and consciousness function which feel deeply counter-intuitive, and that even neuroscientists and psychologists who in principle know that the brain is just a distributed system of separate modules, still often seem to operate under an intuition that there is a single "central" self (as seen from some of the theories that they propose).

I'm not sure whether that's the source of the intuition, but it also seems related that humans seem to have a core system for reasoning about agency which takes as an axiom the assumption that agents exhibit independent, goal-directed motion (as opposed to objects, which only act when acted upon). Which makes sense if you're just reasoning about e.g. social dynamics, but gets you into trouble if you try to understand the functioning of the brain and feel intuitively convinced that there has... (read more)

I wonder whether long-term thinking is a smaller/partial example of this "no self" concept.

I mean, overcoming short-term desires (in order to better satisfy long-term desires) requires realizing that these short-term desires are not you. That e.g. "having your desire to eat chocolate frustrated" doesn't mean that you are being harmed, because "the desire to eat chocolate" is not you; it is possible to frustrate the desire, while benefiting (the other aspects of) you.

It's just that the typical method of breaking this identification is to replace it by identification with something else. Because it is easier to redirect the desire to identify, than to abandon it. So instead of identifying with your short-term goals, you are encouraged to identify with your longer-term goals. The "real you" is no longer the desire to eat chocolate, but e.g. the desire to be healthy, fit, and attractive. Maybe better, but not fundamentally different.

(To use an analogy, it is like abandoning a religion, by converting to a different religion. You no longer believe in god X, because now you believe in god Y instead. Now contrast it with atheism, which mean... (read more)

This definitely sounds at least related to me. (obligatory link: Kegan stages presenting moral development as a process of learning to de-identify with more and more things in the manner that you describe)

I really liked the part where you pointed out that identifying with our long-term desires over our short-term desires is equally an act of identification, and no more arbitrary than identifying with the short-term ones. Here's something very similar that I wrote on the CFAR alumni mailing list three years ago:

(note that I wasn't really able to make the kind of thing I describe here into a long-term habit; my cognitive defusion skills weren't developed enough, so I kept getting sucked into fusing / identifying with different desires again, and didn't have enough things to remind me to keep this habit up. for that matter, my skills aren't developed enough to particularly consistently maintain this now, either. need to meditate more!)

--

Briefly: most of us have probably had the experience where we know that we "should" do something, but feel too tired or otherwise unmotivated to do it. For instance, just before typing this message, I was thinking t... (read more)

This reminds me of a thing I formulated a little over a year ago and adopted as a "thought-resolution" (goal of changing some thought patterns) in 2017. I will also paste a thing I wrote back then:

---

"Instead of thinking about tradeoffs between what I WANT to do and what I SHOULD do, try to think about choices as tradeoffs between things I want and other things I want.

Examples:

- “I should go to sleep but I want to read this blog post and ALL the comments” --> “I want to read this blog post and ALL the comments right now. I also want to wake up on time tomorrow and have some energy.”

- “I should get up but I want to stay in bed” --> “I want to stay in bed. I also want to both get a good amount of work done today and finish work at a reasonable time.”

- “I want to eat this brownie but I shouldn’t.” --> “I want to eat this brownie. I also want [various good health outcomes].”

Why? Several reasons:

- Making the things that underlie the “should” more explicit might help me actually consider those things in my decision and ultimately make better choices. “I want outcome X” is more motivating than a general sense of unwanted obligation.

- The “should” framing makes me f... (read more)

Qiaochu_Yuan: I think it's more complicated than this. In my experience many shoulds come from social pressure, so "I should do X" is often implicitly something like "if I don't do X then the tribe will disapprove of me," e.g. I should exercise, I should eat well, I should study, and so forth.

Looking became far less mysterious. Thanks.

I see the main contribution of this post as being a personal, phenomenological account of one of the fundamental skills of rationality - this post contains incredibly clear examples and explanations of a very subtle phenomenon. It also helped me (and I believe many others) understand a discussion I'd been confused about. For these reasons, I've curated it.

The main reason I wouldn't want to curate this post is due to its length, and the fact that I found the second half less clear than the first. But the post is surprisingly readable all-round, so this wasn't a big factor in my decision.

Newbies to meditation talking about enlightenment sounds about as dumb as science reporters talking about quantum mechanics. The whole discussion will improve if we taboo the word. Thinking that minor attainments are enlightenment has a several thousand year history at this point. The general advice given for attainments is give it 6 months before you go broadcasting it and preferably talk to a teacher who is farther along than you. For the rationality community I strongly encourage chatting with Michael Taft or Kenneth Folk, both of whom are available for online video conferencing. They are very good at separating out epistemic claims from perceptual/ontological claims and have decades of experience.

This sounds like a criticism of me speculating about the nature of enlightenment. I acknowledge that my hypothesis is based on very early-stage data and might be wrong / is the weakest part of my article (and I flagged it as such). But I felt like some speculation was necessary, in order to address the evidence brought up in the earlier threads which suggested that this whole enlightenment thing is just wireheading with no real benefit. It would have felt logically rude to simply write an article about the benefits of insight without making at least some attempt to square my current understanding of its usefulness with the evidence that had previously been offered for it being just useless wireheading.

If you think that my speculation is just blatantly wrong, as you seem to be implying, then I would appreciate a summary of a position that's more correct while also engaging with those criticisms.

Sorry that that sounded overly critical. I mostly wanted to alert the people reading these recent posts. I think this post is useful. The question of which aspects of contemplative practices/schools are just wireheading (at least some of them likely are) also is not benefited from the 'enlightenment' trope IMO.

PMd you more about the last part.

If someone wants to find out more about whether/which contemplative practices/schools are more than just wireheading, what's the best (i.e., lowest cost/risk) way of doing that? Are you aware of any good evidence or arguments about this, that haven't already been brought up here recently?

there are no summaries that I have encountered that I am truly happy with, and my guess based on past experience is that if I did, I would disagree with that assessment a year from now. Getting genre savvy in this way is apparently part of the reason teachers are mum on many aspects. My own motivation is based on an attempt to suss out upstream levers in a scope sensitive way, ie what are the modal properties of truth seeking processes. Attacking that one with an intent to dissolve misunderstandings eventually gets you out of the car. Or at least gets you a hand out the window.

Also, thanks for the useful thought: we have lots of thoughts about what counts as epistemic evidence. What counts as ontological evidence? Teleological evidence? Traditional answers are pretty low complexity: coherence, compressibility, reference class forecasting. Underspecified.

edit: I do recommend Michael Taft and Kenneth Folk's writings as well. As well as their teacher, Shinzen Young. Though he is more old school being from the previous generation and thus having fewer or incorrectly used shibboleths.

I'll also mention that the traditional answer is that people have to find teachers they reson... (read more)

ESRogs: What do you mean by these terms? Would ontological evidence be evidence about what is (in contrast to epistemic evidence being about which statements are true)? It's not clear to me that you'd want to evaluate answers to questions about what is differently from other kinds of claims.

Ontological: heuristics that result in your dividing up the world into categories in a certain way. Descriptive, prescriptive. What are your tacit heuristics, what is the result, do you endorse this result?

Teleological: same but for intentionality, goal directed behavior.

This is excellent, thank you for writing it.

I'm not as advanced as you, but I've gotten many of the earlier benefits you describe and think you've described them well. That said, I have some confusion about how stuff like this paragraph works:

And because those emotions no longer felt aversive, I didn’t have a reason to invest in not feeling those things - unless I had some other reason than the intrinsic aversiveness of an emotion to do so.

What does it mean to have another reason beyond the intrinsic aversiveness of an emotion? Who's the "I" who might have such a reason, and what form does such a reason take?

This is a specific question that comes out of a more general confusion, which is: why do descriptions of enlightenment and other advanced states so often seem to claim that enlightenment is almost epiphenomenal? If it were really the case that it didn't change anything, how would we know people had experienced it?

These are good questions... unfortunately, in my current state of mind, I don't feel confident in my ability to answer them accurately.

Several of the paragraphs describing my experience were written based on notes that I made while in that kind of state, as well as memories of the explanations that I thought up while in that state. But even while in that state, I recognized that there's probably a bit of a verbal overshadowing effect going on, with the verbal description mostly but not quite matching my actual experience of the state, with that not-quite-it version nevertheless becoming the main thing that I'd recall from the state when no longer in it.

So, while I remember enough of that state to say that my descriptions here are probably roughly right, the level of detail that your question is trying to tease out is too precise for me to produce a reliable answer, in my current mostly-normal-again state.

I'll see if I can give you a better answer the next time I end up in a state like that. :-)

G Gordon Worley III: For what it's worth I've addressed this issue a bit here [https://mapandterritory.org/is-feedback-suffering-cf18006deca8]. A relevant quote from it:

Although there may be different ideas about enlightenment within different lineages, what Kaj describes is pretty consistent with the way we think about enlightenment within Soto Zen. That is, enlightenment is just the state of always being awake to what's going on (Looking as Val put it), or as I would probably put it, to be able to hold as subject only intentionality and hold everything else as object. It doesn't give you special abilities or anything like that; it just means you notice what's happening.

That said, noticing what you're up to can have pretty profound effects on you over time because much of how we operate depends on hiding from ourselves, which is why Hanson sometimes talks about homo hypocritus, LW talks about heuristics and biases, and psychotherapy may talk about the shadow. I've certainly seen myself change a lot as a result of past subject-object shifts (a more technical term I would use for kensho, which also highlights that there is more than one level of awakening) as I've detailed to some extent here.

Here's something puzzling me: in terms of abstract description, enlightenment sounds a lot like dissociation. Yet I'm under the impression that those who experience the former tend to find it Very Good, while those who experience the latter tend to find it Very Bad.

That puzzles me too!

I've experienced both this stuff and mild dissociative states on occasion, and yeah, they're indeed different in the way that you describe. And I don't have a model which would explain the difference.

My best guess is that the pattern of defusion is different, and that in dissociation you're somehow defused from your normal sense of self while still remaining fused with the conceptual structures that say that having a sense of self is important.

Or something. :)

Yeah, it's certainly very easy to confuse defusion and dissociation. Dissociation is something like trying not to let something in at all (a sensory experience or an emotion or a memory), whereas defusion is something like letting it in fully but then - not sure how to describe this part - feeling whatever you feel about it, and feeling calm one meta level up about whatever that is?

I'd like not to suffer for two reasons:

1. I'm compelled to avoid suffering, which is maladaptive; if I didn't care about suffering, I'd get more of the other things I want.

2. Suffering is bad; I'm interested in suffering less whether or not it changes my behavior.

I'm not really clear on what you mean when you say "suffering isn't aversive." (ETA: I meant "pain isn't aversive" or "pain doesn't cause suffering.") Intuitively, I'd expect it to mean that you fix both #1 and #2. Someone for whom suffering isn't aversive could, for example, regularly experience extreme pain just to prove the efficacy of their techniques or earn a small stipend.

My understanding of brains suggests that this is probably not possible to maintain. So it would be very interesting to learn that it is possible. As far as I can tell no one has ever done the kind of stunt that would convincingly show they have this ability, which makes me a bit suspicious but could have other explanations.

Fixing #2 would be consistent with my understanding of brains. But in that case "pain is no longer aversive" would be inapt, since in fact pain is still causing avoidance. Moreover, in this case it seems hard to distinguish "stop suffering" from "become deluded about whether you are suffering," and I'm not sure how I'd tell the difference.

Fixing #1 without fixing #2 would seem quite bad.

I should note that I didn't say that suffering wouldn't be aversive, I said that pain isn't aversive. My model is basically that suffering is aversion (to e.g. pain), so it wouldn't make sense to say that suffering isn't aversive. So I would reword your #1 as "I'm compelled to avoid pain, which is maladaptive".

That said, based on what I've been able to observe so far, there are at least three things that happen:

1. Something like pain asymbolia, where pain that is currently being experienced ceases to be aversive in both a behavioral and an experiential sense: it neither feels subjectively unpleasant, nor do I do anything to disengage myself from the situation.

2. An effect where, if #1 happens often enough, my anticipations about the unpleasantness of painful experiences update: the anticipation of a painful event ceases to be aversive (in both the experiential and behavioral senses).

3. An effect where anticipations which have not updated become less aversive, in such a way that I no longer experience the anticipations themselves as being unpleasant. This seems to be a special case of #1, since the pain from anticipated pain is by its... (read more)

Those results look like they are in the placebo effect range, not the "qualitative change in the way that pain is processed" range.

Is your understanding that it should be possible to completely or almost completely eliminate the pain-suffering connection? If so, do you believe that any humans have actually achieved that?

(ETA: If the answers are "no" I don't think that's particularly damning. Mostly the relevant thing is that by default I'm not going to adjust me models of cognition based on this kind of report, worst case is that I miss out by failing to incorporate some evidence.)

The case of the Vietnamese monk who famously set himself on fire may meet your criteria. The Vietnamese government claimed that he had drugged himself, but it's hard to imagine a drug that would allow you to get out of a car under your own power and walk to a seated position, and then light a match to set yourself on fire but still have no reaction as your flesh burns off.

Is your understanding that it should be possible to completely or almost completely eliminate the pain-suffering connection? If so, do you believe that any humans have actually achieved that?

I don't have enough information to answer with any higher confidence than "maybe" to the first question. (For the second, I don't know what it would mean in practice for it to be a thing that's possible, but which no human has achieved - if nobody has managed to achieve it yet, including any of the monks who have the option of spending basically all their waking hours meditating, then it seems to be impossible for all practical intents and purposes.)

Prodigious improvement over other explanations I have seen! I have no inherent objection to identifying things as ineffable, but there are usually boundaries around the ineffability which can be identified. Some of them are very well understood.

For example, childbirth. There is no way to compress the experience of childbirth in such a way that someone who has not gone through it themselves can be said to understand the experience: but we have doctors and midwives who specialize in managing it; books and classes about how to approach it and deal with it safely; tools and techniques for making it safer and more comfortable. Yet the experience itself remains ineffable.

Another example, combat. There is also no way to compress that experience. But we have large institutions that train people how to recognize and prepare for such an experience, by the million. We have specialized areas of medical knowledge that arose specifically to deal with the aftermath of these experiences. Yet the experience itself remains ineffable.

I wonder about ineffably-ineffable experiences; it seems like once we have a good institutional buildup for producing the prerequisite experiences then maybe we could build ineffable institutions to approach the next level. It seems like the population of enlightened-veteran-mothers will be very small though.

I'm not sure if you've tried psychedelics. Psychedelics have very different effects on people, but I was very lucky; on me they produced exactly the effect you described - reducing my mental processes to far more granular levels. I did psychedelics enough that now this type of 'unfusing' process feels somewhere between 'default' and 'always present but sleeping' to me. I feel rendered mute when trying to talk about this, because this topic triggers a strong inability in myself to remain fused with the thoughts I am trying to handle. It also makes me cry, which makes discussions awkward.

I've spent a lot of time in rationalist communities trying (and failing) to talk about this topic (cause of the crying). Reading stuff like this makes me feel a lot of emotions and gives me a desire to be around you and Valentine and the others who are saying similar things.

Thank you for sharing. :) I feel touched from hearing that my article affected you in such a major way.

Would be happy to hang out with you in case we're ever in the same country or at least on the same continent!

[Note: mostly just me trying to order my thoughts, kind of hoping someone can see and tell me where my confusion comes from]

So the key insight regarding suffering seems to be that pain is not equal to suffering. Instead there is a mental motion (flinching away from pain) that produces (or is equal to?) suffering. And whereas most people see pain as intrinsically bad, Looking allows you to differentiate between the pain and the flinching away, realizing that pain in and of itself is not bad. It also allows you to get rid of the flinching away, thus eliminating the suffering, but without eliminating the pain. But is the flinching away intrinsically bad? Or is it also possible to defuse from the flinching in a way that makes it less unpleasant?

And then, is there also an equivalent for good experiences? Pain is to suffering as pleasure is to…? Is there a mental motion of turning towards, or welcoming an experience, which is ultimately responsible for seeing pleasurable experiences as good? And if the flinching away is in some way intrinsically bad, is this opposite motion intrinsically good?

Now, once you get that pain is not equal to suffering, and you've thus managed to eliminate... (read more)

But is the flinching away intrinsically bad? Or is it also possible to defuse from the flinching in a way that makes it less unpleasant?

I'm confused about what you mean by "intrinsically bad" here, and especially given the relationship of the second question to the first question, suspect that your concept of "intrinsically bad" conflates at least two things. Your second question is much easier to answer: yes, you can defuse from flinching, and yes, that makes it less unpleasant.

Is there a mental motion of turning towards, or welcoming an experience, which is ultimately responsible for seeing pleasurable experiences as good? And if the flinching away is in some way intrinsically bad, is this opposite motion intrinsically good?

Yes, there is a mental motion of welcoming an experience, and you can do it to any experience, not just pleasurable ones; you can even find joy in welcoming any experience, not just pleasurable ones. I am still confused about what you mean by "intrinsically good."

Now, once you get that pain is not equal to suffering, and you've thus managed to eliminate suffering for you personally, what reasons remain to try to change
... (read more)

In this post, Nate Soares outlines a thing he calls "Moving Towards the Goal", which feels incredibly relevant to this conversation.

This leads us to my second trick for avoiding akrasia: I am not Trying Really Hard. People who are Trying Really Hard give themselves rewards for progress or punishments for failure. They incentivize the behavior that they want to have. They keep on deciding to continue doing what they're doing, and they engage in valiant battle against akrasia. I don't do any of that.
Instead, I simply Move Towards the Goal.

I'd highly recommend Nate's Replacing Guilt sequence. In a very concrete, "traditional LW" way, he lays out how you can still do cool stuff, yet not think in terms of shoulds, guilt, or intrinsically good or bad.

Eisher Saroya: I wonder about this too. If there is pleasure and the mental experience of welcoming a pleasure, then what happens if you stop 'welcoming a pleasure'? Wouldn't you no longer be motivated to pursue pleasure? How would you ever feel happy? Would pleasure feel 'bland' or unsatisfying? I also wonder if it's possible to mistakenly decouple pleasure and 'welcoming a pleasure' without ever meditating?
Raemon: Seems like "Are Wireheads Happy?" [https://www.lesserwrong.com/posts/HmfxSWnqnK265GEFM/are-wireheads-happy] is relevant to this.

I still broadly agree with everything that I said in this post. I do feel that it is a little imprecise, in that I now have much more detailed and gears-y models for many of its claims. However, elaborating on those would require an entirely new post (one which I am currently working on) with a sequence's worth of prerequisites. So if I were to edit this post, I would probably mostly leave it as it is, but include a pointer to the new post once it's finished.

In terms of this post being included in a book, it is worth noting that the post situates itself in the context of Valentine's Kensho post, which has not been nominated for the review and thus wouldn't be included in the book. So if this post were to be included, I should probably edit this so as to not require reading Kensho.

In the section "On why enlightenment may not be very visible in one's behavior", which of the two things do you mean to argue?

  • Learning to Look at suffering is not likely to make a visible change in one's behaviour
  • Learning to Look is not likely to make a visible change in one's behaviour

Because I would buy the first claim but not the second. I expect that if someone is able in general to not flinch away from painful experiences, they're able to have better long-term relationships. I notice how people I talk to often flinch away from (a) silence (b) awkwardness. They will make whatever assumptions are required ("Oh, of course that was my fault, I apologise") to avoid having to deal with the fact that we have different norms and will have to explicitly navigate them. While this reduces immediate discomfort, it doesn't strengthen the long-term relationship as much.

It's a little complicated, but my current model is something like "learning to Look at suffering is going to make a visible change in your behavior, but the gains from some of the later-stage steps aren't necessarily as large as you'd expect from what a naive suggestion of 'overcoming suffering' might imply".

But to use your example of long-term relationships, I've definitely noticed improvements in my ability to e.g. just be okay with things that cause tension in my relationships with other people, in a way that lets me accept those tensions rather than react with an unhealthy need to "fix" the other person. (Because obviously if something about my relationship with someone else doesn't work the way I'd like, it's the other person that needs fixing... or at least, so some parts of my mind seem to think. But they've been a lot less vocal about this recently.)

I think this post basically succeeds at its goal (given the discussion in the comments), and feels like an important precursor to discussion of some of the directions the LW community has been moving in for the last several years. I think the connection to cognitive fusion was novel to me when I first read it, but immediately clicked in place.

Excellent and well worked out, suggesting many different interesting ideas and research avenues.

Based on how I experienced things when I had the experience that made enlightenment seem within reach, something like a lack of noticeable change is in fact exactly what I would expect from many people who become enlightened.

If this is the case, our experience becomes slightly surprising from an anthropics-ish point of view.

That is, if there are multiple ways to experience the world that are instrumentally the same (like suffering from pain or not), whichever one we happen to have is a random draw. It seems we could have evolved to have any of them with eq... (read more)

On a small point, maybe it would be helpful to use a more natural term than 'defusion', e.g. 'detachment' (if that expresses it clearly), or perhaps something like 'objectivity'.

Better to avoid the confusion of introducing a new technical term if something can be expressed just as well with a familiar one.

Kaj, where can I read more about the three marks of existence? Preferably something as detailed as possible while still being readable in no more than a full day.

Kaj_Sotala: Good question, I haven't really encountered anything that would provide a very good and comprehensive explanation in third-person terms. The sources that I've seen are more concerned with giving you pointers to things in your own experience that you can investigate and then come to experience directly, as that's the thing that actually causes your mind to update, whereas simply getting a conceptual description of them doesn't.

I think that a succinct statement of enlightenment would be: one flavor.

You notice the oneness, the sameness, of all subjective experience, and cease flinching from certain ones and grasping at others.

Any thoughts