The blogpost author (SquirrelInHell on LessWrong) died a while ago. I'm not sure who's currently paying for their website or how long it'll be up. I don't have the rights to this, but it seemed important enough to have on LessWrong that I decided to copy-paste this post and... I dunno, own whatever karmic debt I incur.

This is possibly my single-favorite rationality technique. The first day I tried this I immediately ended up teaching myself a valuable rationality-life-lesson due to the feedback loop it created. When I teach this technique at small workshops, typically ~25% of people go "oh wow that was immediately helpful." I haven't gotten as much value out of it as SquirrelInHell suggests (i.e. it's sometimes effortful to think, and they claim if you're doing it right it basically shouldn't be), but I also haven't really sat and trained it deliberately in-depth, and meanwhile I've gotten value from it each time I try it.


Text of original article:

Tuning Your Cognitive Strategies

What do you get out of it?

  • The good.
    • Better returns on thinking time.
      • Your cognition is much more powerful than just the part you have conscious access to, and it's crucial to make good use of it.
      • A small tweak to how your brain processes information in general is worth more than a big upgrade to your conscious repository of cognitive tricks.
    • Goal-oriented thinking.
      • When working on real-life problems, your peak performance matters less than the ability to simply think useful thoughts at all.
        • For example, if your current top priority is "start my own company", but you keep having insights about "what I'll say to my current boss when I finally quit"... that's maybe not the best way to make progress.
    • Improved ability to fix cognitive biases.
      • To the extent that other approaches work, it's because they manage to change your cognitive strategies. It's much easier when you know what you are doing.
    • More creativity and good ideas just "popping into your head".
      • There's no magic to it! Once you understand how the process works, it can be optimized for any purpose you choose.
    • Less anxiety about performing well in cognitive endeavors.
      • Once you realize exactly what is and what isn't under your conscious control, you stop beating yourself up about not doing the impossible.
  • The bad.
    • Uncanny valley.
      • Most people already have a thinking style built on top of excessive conscious cognitive effort.
        • This often involves relying on side-effects of verbal and conscious thoughts, while mistakenly assigning the full credit for results to those effortful thoughts.
        • When you already have some conscious/verbal thoughts, it is tempting to imagine they are the only result of your thinking, and then try to pick up from there. But this is limiting, because the most power is in whatever generated that output.
      • As you tune your cognitive strategies you're likely to lose that thinking style.
        • While rebuilding from better foundations is certainly a good idea long-term, you'll probably need to slow down and re-learn some old tricks in a new framework.
    • Control anxiety.
      • Having good quality thinking happen effortlessly and automatically is great... unless you are a control freak, in which case you should Tune Your Emotional Processing before even reading this page.

How to tell if you have it?

Note: everyone has cognitive strategies, and challenging yourself with intellectual activity tends to improve them (e.g. mathematicians tend to be very good at a certain specific class of strategies). However, it is very unlikely that you have reached your full potential by blind gradient descent.

  • You know how to think without "trying hard".
    • The cost you pay for high quality thinking is mostly time, which you know needs to be free from other concerns.
    • You definitely don't pay the cost in effort or willpower.
  • Your thoughts don't get "stuck" when you most need them.
    • You can recognize and deal with every situation in which your mind stops generating useful output, whether it's because of going blank, spinning in circles, or going off into fantasy lands.
  • There's a constant stream of good ideas occurring to you.
    • If your brain is well tuned, it is going to produce useful output whenever it is feeling fresh and has a spare minute or two.

How does it work?

  • Consider this metaphor:
    • Imagine your mind as a giant bubbling cauldron full of "thoughts", including "feelings", "ideas", "words", "concepts", "memories", etc.
      • Some of those "thoughts" rise to the top of the cauldron, and get picked up by your conscious attention.
      • If the conscious "you" is like a cook standing over the cauldron, then the cook has only a very small spoon at their disposal. They can only taste whatever has bubbled to the surface.
      • Your creativity and thinking power come from the full depth of the cauldron.
      • The rules of how thoughts interact and form new thoughts are the same, regardless of whether those thoughts are conscious or not.
    • When you don't like whatever has risen up to the top of the cauldron, the last thing you want is to try to "fix it".
      • You only have access to the topmost layer, so it would be hopelessly ineffective anyway.
      • But it's much worse than that - by attempting to "fix" your cognition, you stop being able to see how it works.
      • How well your cognition works is shown not by what thoughts you have at the moment, but rather by the pattern of how one or more thoughts combine into a new thought ("cognitive strategy").
    • Instead, you want to learn as much as possible about the differences ("deltas") between each thought and the next, as they occur to you.
  • Your brain already has the ability to update its cognitive strategies (this is called "meta-cognitive reinforcement learning"). However, the usual mechanism works with unnecessary levels of indirection, as in:
    • Cognitive strategy -> Thought -> Action -> Reward or punishment
      • You get rewarded or punished for what you do (as measured by your brain's chemical responses). Good thoughts are more likely to be followed by good actions. Good cognitive strategies are more likely to generate good thoughts. On average, your brain will slowly update its cognitive strategies in the right direction.
    • Cognitive strategy -> Thought -> Reward or punishment
      • You have learned to be happy or unhappy about having certain ideas, even when you don't yet know how they apply to the real world. Now your brain gets rewarded or punished for thoughts, and on average good thoughts are more likely to be generated by good cognitive strategies. Your brain can update cognitive strategies faster, according to heuristics about what makes ideas "good".
  • However, by carefully looking at the "deltas" between conscious thoughts, we can get rid of the last remaining level of indirection (this is the key insight of this whole page!):
    • Cognitive strategy -> Reward or punishment
      • You have learned to perceive your cognitive strategies as they happen, and developed some heuristics that tell you whether they are good or bad. Now your brain can update cognitive strategies immediately, and do it regardless of the topic of your thoughts.
      • Even when you generate a useless idea from another useless idea, you can still track whether the cognitive strategy behind it was sound, and learn from the experience.
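
The three feedback loops above can be caricatured in code. This is a toy model of my own, not from the original post: each level of indirection (thought, action) is modeled as extra zero-mean noise added before the reward signal arrives, and a simple learner has to work out which of two "cognitive strategies" is genuinely better. All names and numbers are illustrative.

```python
import random

def success_rate(noise_sigmas, trials=2000, steps=60, lr=0.1, seed=0):
    """Fraction of short training runs in which a simple learner ends up
    preferring the genuinely better of two 'cognitive strategies'.
    Each entry in noise_sigmas models one layer of indirection as
    zero-mean Gaussian noise added before the reward arrives."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        true_quality = [1.0, 0.0]   # strategy 0 is genuinely better
        value = [0.0, 0.0]          # the learner's running estimates
        for _ in range(steps):
            s = rng.randrange(2)    # sample a strategy to use
            reward = true_quality[s]
            for sigma in noise_sigmas:   # each layer blurs credit assignment
                reward += rng.gauss(0.0, sigma)
            value[s] += lr * (reward - value[s])
        wins += value[0] > value[1]
    return wins / trials

# The three feedback loops from the text, as 2, 1 and 0 noisy layers:
via_action  = success_rate([3.0, 3.0])  # strategy -> thought -> action -> reward
via_thought = success_rate([3.0])       # strategy -> thought -> reward
direct      = success_rate([])          # strategy -> reward
```

With fewer noisy layers between strategy and reward, the learner sorts the two strategies correctly more often in the same number of steps, which is the sense in which removing a level of indirection speeds up meta-cognitive reinforcement learning in this sketch.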

How to learn it?

  • Note: awareness is a muscle. Time spent trying to see your thoughts more clearly is time well spent, regardless of the degree to which you succeed at getting any specific results.
  • Step 1: basic sanity checks.
    • For practice, we'll start with improving some simple local efficiency heuristics. They definitely aren't the final goal, but will later be useful regardless of what goal you have.
    • Pick a small problem, question or thinking puzzle of any kind.
      • It's best to use something that you think you can solve in at most a few minutes, and which makes it easy to see how well you are doing.
      • Choose something outside of your area of expertise.
        • In areas where you have a lot of experience, your thought process will be faster and more automatic.
      • Beware of "school trauma": think about whatever you want to think about, not things someone else would like you to think about.
      • If you bend to external pressure, you'll just reinforce the pathological pattern that thinking tools are your enemies, because they limit your freedom.
      • If you don't have any ideas, you can always pick "picking a puzzle" as your puzzle.
    • Notice a thought chain.
      • Load the puzzle into your memory, and let go.
      • Instead of focusing on solving the puzzle, focus on the question "where do my thoughts go when this puzzle enters my attention?"
      • At minimum, try to notice a sequence of two thoughts (the shortest possible "chain"): the initial question you asked yourself, and the first thought that occurred to you afterwards.
        • It's very important to focus on what feels like very quick, atomic transitions. Do not wait until you have a full word or sentence formed in your mind!
        • Aim for sub-second timescales. In fact, you can easily have a chain of 5 or more conscious thoughts in one second. If you think you can't, you're just missing skill in noticing it.
      • Repeat as necessary to get a clear read - just trying to do this is already valuable cognitive training.
      • Definitely change the topic when it gets too boring, which is when you no longer expect to be surprised by what you notice about your thoughts.
      • Example: just now, my thoughts:

        looking at the typed word "Example:" -> wanting to know what to type next -> flash of dread at not having anything prepared -> noticing that flash of dread -> noticing that I noticed it -> looking at the whole thought chain so far -> noticing I executed the technique -> realizing I can use this as an example -> picking a grammatical form to describe it -> ...
    • Extract the pattern of "deltas".
      • After you become aware of at least one micro-scale thought chain, you can reflect on the principles that generated it.
        • This probably shouldn't be a very detailed or time-consuming analysis - your advantage here is that you have lots of raw data, so you don't need to be very parsimonious with it.
        • In fact, the act of reflecting on a thought chain will necessarily generate dozens of new thought chains. It's basically impossible to run out of data to reflect on and learn from.
      • Think which "deltas" are doing good work for you, and which aren't.
        • This will send a signal to your brain to learn and update the corresponding cognitive strategies.
        • Do not try to assume forceful control over what you think! This applies both to thoughts and "deltas".
          • All you ever need to do is notice useful deltas, and have that little "oh, nice!" reaction. That's it. Really.
        • The delta which moves you into noticing your deltas is very useful. Give it the reward it deserves!
      • Example 1:
        • After someone asked me to add examples here, my thought chain was roughly:

          feeling of not wanting to bother -> checking reasons to do it -> noticing a cached thought that it's good to give examples -> doubting if this makes sense -> what happens if I just stop doing it -> intuition that this would be bad for BWT clarity -> flash of reasons why I care about writing BWT in the first place -> wanting to make a quick decision -> deciding to add an example -> ...
        • The deltas "planning X -> question reasons to do X" (appeared twice) and "suspicious belief -> try to negate it" seem useful.
        • There was also a pair of deltas "reasons feel shaky -> investigate" and "reasons feel solid -> use cache" which made me go off on a tangent once, but not in the other cases.
        • This means I'm also tracking in the background what it means for reasons to feel "solid", and already have cognitive strategies in place which update this information. This is all very useful.
      • Example 2:
        • On the other hand, a large amount of low-hanging fruit can be extracted from noticing deltas which are obviously broken, like in this thought chain:

          blank mind -> noticing having a blank mind -> verbal thought "my mind is blank" -> feeling of despair -> blank mind -> ...
        • More examples of useful cognitive strategies, and common low-hanging fruit:
          • If you hit an impasse (no new useful thoughts), relax and let your mind wander to related but different topics.
          • If your mind wanders too much, check why you even care about the problem.
          • If you think the same thought again, change the topic.
          • If you know what you are going to think, think something else.
          • If you think with lots of effort, remember it's useless and just watch your thoughts happen.
          • If you don't know in which direction to think, pick whatever seems fun.
  • Step 2: make sure to win.
    • Notice thought chains you generate naturally as you go about your life.
      • While local efficiency (not getting stuck etc.) is useful, it hardly has the power to change how you play the game. The biggest challenge in an open environment is knowing what to focus on in the first place.
        • This means that more than anything, you need to learn cognitive strategies that connect you to your goals, and means of achieving them.
      • For example, you can notice thought chains when you:
        • choose the next task to do,
        • do better or worse than expected,
        • plan your day or week,
        • process emotions,
        • change the topic in conversations,
        • accept or reject offers.
      • It's recommended to do it without setting up external reminders.
        • A far better solution is to reinforce cognitive strategies which would make you naturally remember at the right times.
        • E.g. one or two straightforward deltas can take you from "feeling of mild dissatisfaction with decision" to "wanting to know how to think better", from where it's close to remembering to reflect on your thought chains.
    • Get the deltas.
      • Reconstruct as much as you can of how your mind went there. In real life, you are not restricted to the micro scale.
        • Try to identify both low-level and high-level patterns, such as key insights, emotions, changes of topic, and inspiration.
        • How does your emotional state influence your deltas?
          • You probably have a different cognitive style when excited, angry, happy, anxious, overwhelmed, content, scared, restless etc.
    • Keep your goals in mind.
      • Warning: this is definitely not about "policing" your thinking. You should never try to put restrictions on the content and style of your thoughts.
        • Do not use this under pressure (when someone or something tells you what goals you should have).
        • Also do not fall into the trap of rejecting vague, dreamy thoughts as worthless.
          • The best use of your brain when tired is probably to let it unwind and think relaxed, creative thoughts.
      • How well have these particular deltas performed in the past?
        • This amounts to maintaining a rough "track record" for all of them.
      • What are they optimized to do?
        • You'll often find goals which you don't necessarily feel proud of, e.g. feel better, impress someone (who?), prove something to yourself.
        • However, trying to attack those goals would be a terrible mistake - they are there as a result of your real preferences.
          • If you are surprised by this, it just means you didn't know enough about yourself.
        • You need to understand where the patterns come from, and what you really want to achieve in any given situation (see also Tune Your Emotional Processing).
      • How well do you expect to do if you continue the current trend?
        • What would it be like to do better than that?

Further Progress

  • Turn the skill on itself.
    • Reinforce cognitive strategies that will help you with reinforcing cognitive strategies, and finding better ways to reinforce cognitive strategies.
    • The skill will then quickly bootstrap itself into your most powerful and general thinking tool.


Comments (38)

Worth noting that the reason SquirrelInHell is dead is that they committed suicide after becoming mentally unstable, likely in part due to experimentation with exotic self-modification techniques. This one in particular seems fine AFAICT, but, ya know, caveat utilitor.

Yeah, I considered explicitly leaving that note at the beginning but felt like this was just sufficiently different from the thing that led to their suicide that adding "WARNING! BUT ALSO I'M NOT THAT WORRIED?" didn't seem overall worth it. 

Romeosteven's comment updates me a bit, though my current guess is this is still a fairly different reference class of problem (and the post comes with its own warnings about the thing romeo is pointing at, assuming I understand it properly).

This seems reasonable to note; at the same time, I think that a lot of people who end up badly after experimenting with exotic self-modification techniques do so despite rather than because of the techniques.

This technique seems best if your problem is that your thoughts tend to often go down loopy, unproductive, distressing paths, in a way that you can self-diagnose with confidence. Which is totally a real thing! I used to find my brain making up imaginary offenses people had committed against me, and I would feel angry or vindictive for a moment. Fortunately I developed a thought pattern that immediately just notes “… and that NEVER ACTUALLY HAPPENED,” and then I move on from the moment. That’s a situation where it’s really easy to notice a bad thought pattern and change it, cutting out any real world action. And once I’d done it a couple times, I started noticing this as an overall cognitive strategy.

Another example is from my work as an engineer. During my first year or so doing research, I noticed several bad patterns of thought and behavior: throwing things out prematurely when I’d make a mistake, doing overly complex mental math, and trying to emergency correct mistakes rather than going to my desk and working out an actual plan of solution.

But in these cases, while “noticing my thoughts” was key to the solution, because it interrupted a bad pattern of behavior, it was noting the bad outcome, then working backwards to a specific root cause that got me there. Continuously monitoring my stream of thoughts was not part of this process. It seems like a technique of continuous thought-monitoring would be more important if the problem you were having was with your thoughts themselves. If your problem manifests as behavior, then paying attention to the stream of behavior and figuring out the root cause seems best.

Man, it does make me sad that whenever I bring up this technique, there’s an obligatory version of this conversation.

That's understandable. But it does seem like the sort of thing I'd want to hear about before trying such a technique. Hopefully people can take it for what it's worth. (I.e. I don't think we should automatically discount such techniques or anything.)

I think that's somewhat reasonable in this case, but, want to flag that it should be possible at some point to reach an epistemic state where you can say "okay, yeah, it was mostly coincidence, or at least not relevant, that this happened to this person." Like, if someone invented a car, and then used the car to commit suicide by driving over a cliff, you might go "holy shit maybe I should be worried about cars and suicide?", and if you didn't know much about cars maybe this would be a reasonable thing to worry about at first. But, like, it shouldn't be the case that forever after, whenever someone sells a car, they warn you that the guy who invented cars used them to commit suicide. It's privileging a hypothesis.

I think in this case it's less crazy than in the car case to worry about that, but, I do want to push back against the impulse to always have a disclaimer here.

In the car case I think it's obvious that car usage is not causally upstream of suicidality. If the inventor of the car died in a car accident, I do think that would be a relevant data point about the safety of cars, albeit not one that needs to be brought up every time. And in the real world, we do pretty universally talk about car crashes and how to avoid them when we're teaching people to drive. From that perspective romeosteven's comment is probably better and mine just got more upvotes because of the lurid details. (although, tail risks are important. And I think there's a way in which the author's personality can get imprinted in a text which makes the anecdote slightly more relevant than in the car case)

In cases like this I strongly prefer to be given the facts (or at least pointed toward them) and allowed to make my own judgment as to how relevant they are. 

Whether you choose to join the conversation and present the argument for their irrelevance is up to you, but sharing all the facts that your audience might consider important, rather than deciding for them that some apparently-relevant ones are best left unsaid, is IMO more respectful and reduces the risk of doing preventable harm in cases where your judgment is mistaken.

Is your worry more about "maybe this technique is more dangerous than it looks?" or "maybe people will follow up on this by generally following SquirrelInHell's footsteps, and maybe not all those footsteps are safe?"

More the latter. Or more like, doing things like this technique too much/too hard could be dangerous.

I think that might be true, but, at that level, I think it kinda makes more sense to put the warning over, like, the entirety of rationality techniques, and singling out the ones that SquirrelInHell wrote up doesn't actually seem like the right abstraction.

Like, I do generally think there's a failure mode to fall into here. I don't think SquirrelInHell is the only person to have fallen into it.

This post does seem like it warrants some specific warnings (which the original post already included). But I think those warnings are mostly unrelated to what ultimately went wrong.

Source/evidence? I believe you but this seems worth checking.

[This comment is no longer endorsed by its author]

Another thing I notice after a few years of using this:

The OP says:

  • Your brain already has the ability to update its cognitive strategies (this is called "meta-cognitive reinforcement learning"). However, the usual mechanism works with unnecessary levels of indirection, as in:
    • Cognitive strategy -> Thought -> Action -> Reward or punishment
      • You get rewarded or punished for what you do (as measured by your brain's chemical responses). Good thoughts are more likely to be followed by good actions. Good cognitive strategies are more likely to generate good thoughts. On average, your brain will slowly update its cognitive strategies in the right direction.
    • Cognitive strategy -> Thought -> Reward or punishment
      • You have learned to be happy or unhappy about having certain ideas, even when you don't yet know how they apply to the real world. Now your brain gets rewarded or punished for thoughts, and on average good thoughts are more likely to be generated by good cognitive strategies. Your brain can update cognitive strategies faster, according to heuristics about what makes ideas "good".
  • However, by carefully looking at the "deltas" between conscious thoughts, we can get rid of the last remaining level of indirection (this is the key insight of this whole page!):
    • Cognitive strategy -> Reward or punishment
      • You have learned to perceive your cognitive strategies as they happen, and developed some heuristics that tell you whether they are good or bad. Now your brain can update cognitive strategies immediately, and do it regardless of the topic of your thoughts.
      • Even when you generate a useless idea from another useless idea, you can still track whether the cognitive strategy behind it was sound, and learn from the experience.

I think the author thinks of this as the primary insight here (i.e. getting to: "Cognitive strategy -> reward/punishment"). And... I'll be honest, I think this works and it makes sense to me, but it doesn't work so obviously that I'm like "yes this underlying theory definitely checked out."

But what I think is both more obvious, and still a useful stepping stone, is transitioning more from "Cognitive strategy -> Thought -> Action -> Reward or punishment" to "Cognitive strategy -> Thought -> Reward or punishment". A lot of my thoughts are obviously dumb (or useful) upon first glance. And shifting how much of my feedback loop happened within ~3 seconds vs longer timescales still seems very helpful.

Does anyone who knew SquirrelInHell know the subskills in the skill tree they never got around to writing?

EDIT: To clarify, are there any known skills which are equivalent to the Red subskills in BWT's skill tree? I am very impressed with the exposition on BWT, and would guess the remaining skills were just as high value. Perhaps more than I'd naively guess, if there's some synergy between them. If you think you know them, please speak out so we can get the complete BWT skillset.

I didn't know them and can only speak to how I did the tuning-ontology thing. For about 2 weeks, I noted any time I was chunking reasoning using concepts. Many of them were familiar LW concepts, with lots of others from philosophy, econ, law, common-sense sayings, and some of my own that I did or didn't have names for. This took a bit of practice but wasn't that hard to train a little 'noticer' for.

After a while, the pace of new concepts being added to the list started to slow down a lot. This was when I had around 250 concepts. I then played around with the ontology of this list, chunking it different ways (temporal, provenance, natural-seeming clusters of related concepts, domain of usefulness, etc.). After doing this for a bit, it felt like I was able to get some compressions I didn't have before, and overall my thinking felt cleaner than before.

Separately, I also spent some time explicitly trying to compress concepts into handles as pithy as possible, using visual metaphors and other creativity techniques to help. This also felt like it cleaned things up. Compression helps with memory, because chunking is how we use working memory for anything more complicated than atomic bits of info. Augmenting memory also relied on tracking very closely whether or not a given representation (such as notes, drawings, etc.) was actually making it easier to think, or was just hitting some other easily Goodharted metric, like making me feel more organized.

With regard to 'tracking reality with beliefs' the most important thing I ever noticed afaict is whether or not my beliefs 1. have fewer degrees of freedom than reality and thus have any explanatory power at all and avoid overfitting, 2. vary with reality in a way that is oriented towards causal models/intervention points that can easily be tested (vs abstraction towers).


This seems like a potentially quite helpful concept to me.

I'd be interested in more details of how you go about checking for degrees of freedom.

I think when I do this sort of sanity-checking for myself, things I sometimes do include "wait, why do I believe this in the first place?" and "consider the world where the opposite is true, how would I know?" but those seem like different mental motions.

Easiest is a fictional dialog between a pro- and an anti-position person. The anti person brings counter-evidence and then gets to see how the pro position responds. If they respond by remapping the moving parts of the model in a different way, that indicates extra degrees of freedom. Then you can have an easier time noticing when you are doing this same move, i.e. backpedaling and trying to 'save' a position when someone gives you pushback on it.

I think that list would be very helpful for me.

Can you form a representative sample of your "list"? Or send the whole thing, if you have it written down.

Worth noting that both this and the motor-cortex tuning skill they advocate are very closely related to traditional Buddhist insight practices, and that without supporting emotional integration (Tune Your Emotional Processing, with Focusing as the particular version that SquirrelInHell advocated, though a variety of self-therapy modalities can work) it can be destabilizing.

I'm interested in more details about the failure modes to watch out for here. i.e. what sort of things might you notice happening to you if you were en route to being destabilized?

The post does explicitly warn about this, but I happened to a) already have some flavor of focusing by the time I started, and b) never actually ran at it that hard, so, I might still be underestimating how worried to be about it despite the warnings.

One possible issue that comes to mind is that if you start paying more attention to the low-level movements of your thoughts, you might start noticing thoughts that parts of you get triggered by, e.g. if they feel like particular kinds of thoughts are shameful to have. One concrete failure mode that I think many rationalists would be susceptible to, would be to notice something like

blank mind -> noticing having a blank mind -> verbal thought "my mind is blank" -> feeling of despair -> blank mind -> ...

and then feeling additional despair and shame over your mind being stuck in an unproductive cycle and feeling that you should be able to do better. That may then create another layer of shame and despair on top of the original one. Although the original instructions say that you shouldn't use this to police your mind, getting triggered in this way may create a compulsion to do so anyway.

Another could be mysterious feelings of dread and feeling bad, if you started noticing various thoughts/emotions that parts of you had been trying to block. Though I would expect that the most natural consequence of that would be you just losing the motivation to use the technique pretty rapidly, with it becoming another of those "that felt really useful but for some reason I don't feel any interest in doing it anymore, shrug" things. 

I think the main risk there would be if you had used this technique extensively enough to build up an increased introspective awareness that was harmless at first but then started catching more of whatever blocked trauma you had and had by that point been built up sufficiently that just stopping the practice wasn't enough to bring it down anymore. That kind of a scenario would be similar to the cases where people start getting trauma symptoms from doing mindfulness practices; if one has already tried that kind of a thing before and hasn't felt bad, then it might be an indication (on top of the base rate, which I think is reasonably low) that it's low-risk. 

Compulsive deconstructors shouldn't be handed a full toolbox is one way I have thought of it.

I meant that emotional integration (like focusing) is helpful for avoiding destabilization.

I would say the signs are the normal sort you'd see in mental health breakdowns:

  • Depression, social withdrawal
  • Hostility or suspiciousness, extreme reaction to criticism
  • Deterioration of personal hygiene
  • Flat, expressionless affect
  • Inability to cry or express joy, or inappropriate laughter or crying
  • Oversleeping or insomnia; forgetfulness, inability to concentrate
  • Odd or irrational statements; seeming difficulty with communicating in a normal way

ITT: links to the original post on various archives.

Previous discussion: https://www.lesswrong.com/posts/hGtBH7SJy6Y2SmAj6/tune-your-cognitive-strategies

Longevity-wise, https://squirrelinhell.blogspot.com/ should be up indefinitely, since AFAIK Blogspot/Blogger has no nasty deletion policies. (I have not checked specifically, but they are one of the oldest blog hosts on the Internet, and apparently they are considered safe from Google axing because Google uses them so much internally for official posting.) http://bewelltuned.com/ seems to duplicate a lot of the content, the copyright date suggests most of it has been there for at least several years, and it looks easily crawled, so it should be well-archived.

I think it's worth sharing here some details about SquirrelInHell's suicide, specifically to point out to new people that Cognitive Tuning was not what killed SquirrelInHell.

This comment is from Slimepriestess, a friendly former Zizian (who should definitely not be treated as a pariah). I wouldn't necessarily trust 100% of everything said by a former Zizian, but it's pretty well known that SquirrelInHell was doing a ton of over-the-top shit at once (e.g. simultaneously attempting to use dolphin-like sleep deprivation to turn half of their brain into Lawful Evil and the other half into Transgender Good), was simultaneously hanging around a bunch of violent and dangerous people, and they were all doing hardcore Roko's Basilisk research.

imo, Maia was trans, and the components of her mind (the alter(s) they debucketed into "Shine") saw the body was physically male and decided that the decision-theoretically correct thing to do was to basically ignore being trans in favor of maximizing influence to save the world. Choosing to transition was pitted against being trans because of the cultural oppression against queers. I've run into this attitude among rationalist queers numerous times independently from Ziz, and "I can't transition, that will stop me from being a good EA" seems a troublingly common sentiment.

 Prior to getting involved with Ziz, the "Shine" half of her personality had basically been running her system on an adversarial 'we must act or else' fear response loop around saving the multiverse from evil using timeless decision theory in order to brute force the subjunctive evolution of the multiverse. 

So Ziz and Squirrel start interacting, and at that point the "Maia" parts of her had basically been, like, traumatized into submission and dissociation, and Ziz intentionally stirs up all those dissociated pieces and draws the realization that Maia is trans to the surface. This caused a spiraling optimization-priority conflict between two factions whose contradictory validity Ziz had empowered, by helping them reify themselves and define the terms of their conflict in her zero-sum, black-and-white, good-and-evil framework.

But Maia didn't kill them, Shine killed them. I have multiple references that corroborate that. The "beat Maia into submission and then save the world" protocol that they were using cooked up all this low-level suicidality and "I need to escape, please, where is the exit, how do I decision-theoretically justify quitting the game?" type feelings of hopelessness and entrapment. The only "exit" that could get them out of their sense of horrifying heroic responsibility was dying, so Shine found a "decision theoretic justification" to kill them and did. "Squirrel's doom" isn't just "interhemispheric conflict"; if anything it's much more specific. It's the specific interaction of:

"i must act or the world will burn. There is no room for anything less than full optimization pressure and utilitarian consequentialism"

vs

"i am a creature that exists in a body. I have needs and desires and want to be happy and feel safe"

This is a very common EA brainworm to have and I know lots of EAs who have folded themselves into pretzels around this sort of internal friction. Ziz didn't create Squirrel's internal conflict she just encouraged the "good" Shine half to adversarially bully the evil "Maia" half more and more, escalating the conflict to lethality. 

Generally, I think people should be deferring to Raemon on the question of "is Cognitive Tuning safe?" and should, at minimum, message him to get his side of the story. This situation is a really big deal; if Cognitive Tuning works, that's successful human intelligence augmentation, that is world-saving shit. Cognitive Tuning alone could become an entire field of intelligence augmentation, AND something that anyone with average intelligence can contribute heavily towards, since having a more typical mind will yield more insights that can be picked up and worked with by other people with more typical minds.

Gahhhh I've been waiting for the rest of BeWellTuned for a while now. I was hoping it was held up for a happy reason, like the author being busy with work they found important. :(

It seems to me like people here started focusing on the wrong things. People who knew SquirrelInHell know that the suicide was likely caused by SquirrelInHell simply starting out already over the edge, e.g. hardcore obsessive Roko's basilisk research.

The issue at hand with the matter of tuning cognitive strategies is not "does this drive people crazy", it is "does delta reinforcement actually work", because if delta reinforcement actually works, then that is a huge deal.

As in, like, comparable in value to the rest of LessWrong put together. If this works, even if it only works on 10-25% of people (which Raemon's testimony indicates), then this is basically the world-saving near-term human intelligence augmentation (which Yudkowsky wants to scale).

Everything we have so far, on alignment and macrostrategy, came from human minds that were not really tuning their cognitive strategies. High-output passive thinking, and fun downhill thinking, have immense potential to set the world up so that someone, somewhere, eventually thinks of a solution to the world's most pressing problems.

This is not something to sleep on.

The way I've personally used this technique/practice is to have a laptop screen with two pages side by side – one as a notebook where I can jot thoughts down, and one with whatever puzzle I'm trying to solve. (I found brilliant.org to be a good source of puzzles.)

I try to jot thoughts down as I have them (often with very rough notes that only make sense to me, since trying to write down too much would slow the process down too much).

The post emphasizes noticing thoughts at the sub-second level. Obviously, writing out a focus-handle for 5 different thoughts in the space of a second isn't practical. But what I do here is often let myself have a few thoughts/impulses in a row, then go back and try to notice/remember them all, and then write them down after the fact in an attempt to crystallize them and reinforce the noticing process.

Do you think having a well-defined puzzle (like a math problem) is a better way to make the usefulness of this technique clear?

A lot of what I work on are more open-ended questions, like trying to remember how techniques work or what concepts are about (e.g. ANOVA). In these cases, the process is more about recalling or reconstructing various insights, definitions, and equations, with no clear stopping point. I'm wondering if I've been trying to apply this cognitive tuning technique to a problem it's not well suited for?

I think the technique is relevant to basically all cognition, but working on well-defined problems is useful for "figuring out if it's actually helping" and "fine-tuning your approach to make sure you're using it usefully".

(When I use this technique for more open-ended problems, I think it's still useful to have two screen-pages open, one of which is still more for rough, unstructured notes and one of which is more for "here's my distillation of my current understanding of the problem.")
