All of RainbowSpacedancer's Comments + Replies

There is value in the techniques taught, and there are also serious concerns about the methodology, marketing, and psychological safety of the course. It's messy to talk about because it's simultaneously problematic and helpful, so participants tend to come down on one side or the other. I'd encourage anyone considering purchasing/supporting the course to read this review from an ex-participant, or to DIY the course with the techniques here.

TLDR of the review

While Finders Course advertises itself as a scientific research protocol on awakening/enlightenment, it

... (read more)

Kaj, are you familiar with the idea of a plurality of enlightenments? Both in the sense of a difference in degree (à la the Theravadan 4 path model) and a difference in kind (à la Jack Kornfield's Enlightenments). Wondering what your take on it is and how this series is going to navigate that. I suspect this is one reason I've noticed meditation practitioners trending toward the term awakening rather than enlightenment.

Yes. I currently think that there is no one enlightenment, but rather a wide variety of dimensions that one can explore, which lead to different kinds of outcomes depending on what you focus on and what you do with the things that you find. My intent is to cover enough of different things to give a taste of what's out there and what kinds of outcomes might be possible, while acknowledging that there's also a lot that I have no clue of yet.

Thanks for this podcast! I'm one of those people who primarily consumes audio. I wanted to offer some encouragement: the production quality definitely surpassed my expectations. A pleasure to listen to.

Apologies for the late reply. Thanks for your kind words and support! My Replacing Guilt output has been very low lately, but I'll have some more time flexibility in the near future and will start making progress again.
I'm not advocating trying for kenshō. You can't try for it in any useful way. That's not how it works. I honestly don't care whether I persuade anyone of its value, because it does not matter whether you try for it. Or rather, if it does matter, it does so by making you obsessed in a way that can actually block the seeing. So, there isn't really any good benefit to fighting with your analysis to try to persuade you of its value.

I understand where you are coming from. Efforting blocks realisation and kenshō doesn’t come from discurs... (read more)

You don't need to speak about Kenshō to talk about the value of meditation. You can advocate for taking up a meditation practice with arguments that are much simpler and that are about less time investment.
Is it easy for you to sketch what the map you're referring to is?

Not OP, but I can describe the map he's referring to. Jeffery Martin interviewed 1,200 enlightened individuals and found that while their reported experience was different, their descriptions of their new phenomenological experience fell into similar clusters or 'locations'. There are around 20-40 locations in all (he is vague about exactly how many there are), but Jeffery only talks about the first 4 because that's where the vast majority of people spend their time and he believes talking... (read more)

The awakened community definitely needs more rationality and the rationality community could probably benefit from some Insight, so thank you for starting this conversation. Hopefully it's just the first step. For anyone interested r/streamentry is a mostly woo-free, friendly community for discussing this sort of thing.

A particularly useful and traditional guideline is to wait a year and a day before claiming an attainment and completely making up your mind. This is slippery stuff sometimes, and many states and stages can easily fool someone into thi... (read more)

Great post! Some small formatting fixes that might help people searching this list.

'Exercise', the last section under 'Rest', isn't listed in the contents.

Two of the headings have non-obvious renames for anyone doing a really quick skim: 'Expect to actually make progress' becomes 'Expect work to be effective', and 'Actually care about the task you're doing' becomes 'Increase the value of your task'.

Adam Zerner (6y):
Thanks! Fixed. I like "Expect work to be effective" over "Expect to actually make progress". The latter does indeed seem unclear. It seems to beg the question, "progress on what?". So I have updated the article to use the former. However, I still like "Actually care about the task you're doing" over "Increase the value of your task". The latter seems unclear to me. It seems to beg the questions, "What does that mean? That I should do tasks that are more socially valuable? More likely to have me make progress towards my long term goals? Things I subjectively value?".

The thinking here seems muddled.

  1. Ordinarily, locus of control refers to events in the future; explanatory style refers to events in the past. Your last 3 examples refer to past events, but you switch back and forth between past and future when you break down each example. Anxiety is about uncertainty; you can't feel anxious about events in the past.

  2. Locus of control is about the degree of control you have over an outcome/event. Events are in themselves good or bad. How you feel about a future event is a consequence of whether or not that event is within your con

... (read more)

Can you talk a bit more on this? I'm curious to know how you imagine talking yourself into believing something you don't believe, like some kind of double-think. And it seems avoiding scary thoughts is not a habit a rationalist would want to encourage.

Ok, let's spend a minute to construct a rational theology. First, we need to prove that God exists. There are several independent ways to prove it:

1) Simulation argument. We are most likely living in a world created by some form of Superintelligence. It may create miracles, afterlife, whatever, and prevent us from proving that we live in the simulation. If we accept the simulation argument, we should also accept a multilevel simulation model, with the highest possible superintelligence on the highest level.

2) Mathematical Universe Platonia. If all possible math objects exist, then the most complex objects exist too; moreover, complex objects dominate by number among all possible math objects (the way large digits dominate smaller digits). Thus the most complex superintelligent computer programs must dominate as pure mathematical objects (programs are mathematical objects). However, this contradicts observation: we see a rather simple world. The solution could be that each superintelligence in Platonia creates a multilevel simulation, so most observers are downstream of simulations anyway.

3) All hell breaks loose if we accept Platonia, because not only must mathematical ideas exist, but also any linguistically presentable ideas. Thus in Platonia the idea of God is equal to God's existence.

4) Forget Platonia and Simulation. Either way, we are going to create a benevolent superintelligence during AI self-improvement in the next decades. It will be indistinguishable from God. However, it will exist only for the future half of infinity.

5) Forget Superintelligence. If some exotic interpretations of QM are true, and consciousness causes collapse, we need one and only one instance of consciousness to do so for all possible universes. Surprisingly, it is me: I am the only conscious being in the world; all others are p-zombies. (High danger of mania of grandiosity detected.)

6) In the same way, the anthropic principle in its worst form says that all the visible universe must exist onl

I'd expect mobiles to be under-represented in these results as you can only vote if you are logged in and I'd expect more people are logged in on their desktop rather than their mobile.

Help the lesswrong wiki.

Is any effort to improve the wiki now in danger of disappearing once LW 2.0 comes around?

I don't think we plan to remove the wiki, but I also don't think we plan to improve it. (As I understand it, different code is powering the two, and we can switch over one without interfering with the other.) If it becomes especially good, then we might send dev resources that way, but I'm currently pessimistic about that.

Mundanification is just another one of these variants that's about being able to peek into those dark "no, I must never look in here!" corners of your mind and trying to actually state the worst-case scenario (which is often black-boxed as a Terrible Thing that is Never Opened).

How does it work specifically? I can't see the technique posted anywhere.

During the workshop, it wasn't well fleshed out (it was a short "flash class"), so I'm afraid I don't have too many details. Here are some pieces of the thing, though, and hopefully it helps point at the general idea. The class of techniques is about:

1) Being able to notice when something in your head feels aversive / scary / painful.

2) Feeling okay with looking into these areas, unpacking them, asking yourself the question, "What is it, exactly, about this situation that's causing me distress?"

3) Being able to explicate worst-case scenarios, being okay with answering, in some detail, the question, "What's the worst thing that can really happen?"

Mundanification is about facing the truth, even when you flinch from it, via Litany of Gendlin-type things.

Can you talk a bit more about this? I'm only familiar with the Litany of Gendlin itself.

Sure! So there seems to be this conceptual cluster of rationality techniques that revolve around facing the truth, even when it's hard to face. This seems especially useful for those icky situations where your beliefs have some sort of incentive to not correspond to reality. Examples:

* You don't want to clean out your fridge because if you had to look in there, then part of you feels like it would make the rotting food at the back more 'real'. (But in reality, your awareness of the food is independent of its existence, and if you don't clean it out, it'll only get worse.)

* You don't want to get your homework done because it's boring/painful to think about, and if you don't do it, then you don't have to think about it, which basically means it's not really there. (But in reality, this only pushes it closer to the deadline.)

* You plan to finish your project in 30 minutes even though it took you 1 hr last time, because part of you thinks that if you write down '1 hr', it'll really take you that long. But you really need it to be done in 30 minutes, so you write that down instead. (But in reality, you need to decouple your estimates from wishes to get well-calibrated. Your prediction is largely independent of your performance.)

And on and on. These sorts of problems often comprise ugh fields, feel painful to think about, and are often sources of aversion. To debug these sorts of problems, there are several (in my opinion) conceptual variants of harnessing epistemological rationality. These techniques often focus on trying to get to the root of the aversion and also calibrate your gut-level senses with the idea that your belief about a matter doesn't actually control reality. Mundanification is just another one of these variants that's about being able to peek into those dark "no, I must never look in here!" corners of your mind and trying to actually state the worst-case scenario (w

Books on leadership. The psychology + social dynamics of leadership and the traits of successful leaders. There are so many books I don't know where to start.

Olivia Cabane's books are where I'd start. Then Kegan's Immunity to Change.

Let's define "stupidity" as "low IQ" where IQ is measured by some standard tests.

That already seems pretty different to what OP is talking about. See -

"Stupidity," like "depression," is a sloppy "common-sense" word that we apply to different conditions, which may be caused by genetics (for instance, mutations in the M1 or M3 pathways, or two copies of Thr92Ala), deep subconscious conditioning (e.g., religion), general health issues (like not getting enough sleep), environment (ignorance, lack of reward

... (read more)
Instrumental rationality is often hard to judge since you don't know what the person is optimizing for (not necessarily consciously). The classic Hansonianism ("X is not about X", e.g. "Politics is not about policy", etc.) is one way in which you can be wrong about someone's instrumental rationality.

I've read all of Daniel Ingram's stuff. He's a fantastic resource. If you like his stuff, MCTB v2 is scheduled to come out later this year. The draft is much improved over the original IMO.

Oh, I feel silly, I should have just googled the names, I'm familiar with them. I know Gunaratana by his book and John Yates by his alternate name Culadasa. Thanks anyway, lifelonglearner, they've proven to be an excellent help.

This is a great post, helldago. I've found a lot of these useful myself, and the others I'm excited to try out because I can relate a lot. A couple of other things I have found useful for resilience:

  1. A Mental Health section in my Anki deck. There are about 170 cards, which include things like cognitive reframes (a bad behaviour doesn't make you a bad person, failure is useful if you use the information gained to update your plan, etc.), common depression traps I might be caught in (comparison, labelling, all-or-nothing), stoic quotes, and the like. I've never b

... (read more)

I've chatted a little with Shinzen on one of his retreats but I haven't yet looked into the other two. Thanks lifelonglearner.

No problem! John Yates is better known as "Culadasa", by the way. He's the author of The Mind Illuminated.

Mindfulness is a part of it, I'm interested in the end goal. The lasting changes in perception that are meant to come about through mindfulness or other practices.

I know of famous people in the mindfulness world (Shinzen Young, John Yates, and Bhante Gunaratana), but I don't know them personally. Still, emailing them may be worth a shot?

I'm working on an overview of the science on spiritual enlightenment. I'm also looking into who has credible claims to it and whether it is something worth pursuing, and surveying the methods used to get there.

If anyone knows someone (or is someone) that thinks they might be there or part-way there and who would be willing to chat a bit, that'd be lovely. If you've just dabbled in some mystical practices and had a few strange experiences and want to bounce some ideas around, that could be fun too.

This blog [] doesn't appear to be active anymore, but it contains a lot of helpful ideas from an LWer who was an experienced meditator. The blog led me to buy the book The Mind Illuminated, which is a very clear, thorough, secular, and neurologically sound (where possible) manual on attaining classical enlightenment through vipassana + mindfulness. I'm currently trying to follow its program as well as I can.
Specifically for meditation: I think Romeo Stevens has worked with mindfulness recently, if that's close to what you're looking for? (You can probably ping him here).

if you claim you are, I am going to show that you are not.

when I am allowed to explain what Ideal Money is then we will all see this

I'd like to bet with you on one or both of those predictions if you are open to it.

I'm silent because I haven't read it. The reason being that without an overview I'm not sure if it's something worth investing time in. The non-default font causes some aversion too, albeit minor. n=1

Don't apologise, it's better that it exists without one than not at all. Looking forward to it.

Thanks. I am actually having difficulty evaluating whether this series is being well received or not. I can't tell whether silence is akin to "it's really great and no one has anything to add", or more like "it's not even wrong so I won't bother pointing out the flaws", or the infinite space in between. My hunch is that this series sits around the point of "I implicitly agree with the premise(s) raised but it hasn't impacted my life yet or brought me revelations worth sharing." If that's the case I need to dig deeper. My other option is to make more controversial statements so that people have content to dig their claws into when they want to talk about them. This is something I am considering.

What is the theme of this series Elo? What are you trying to achieve? I don't see an introductory post anywhere.

apologies - I don't have a summary yet. But I do have one more post before I stop and make them link better. I will write a summary of the series then.

It's called the CART (Comprehensive Assessment of Rational Thinking) and it's described in this book and (PDF Warning) this paper.

Thank you. Does anybody have any opinions about the quality of the test he developed? What's the process for a person who wants to take the test or run a study based on the test?

I've been using a backlog, though I've never seen Forster's system, and have found it useful. I'm glad to see it made explicit. I also think you are right on the money in trying to run a middle path between the rigidity of a set daily task list and the lack of priority in GTD's massive list of next actions. There's a lot of insight here, thank you for sharing. My one criticism would be to add some order to it to enhance readability; it's a wall of text right now.

Thanks. Although I am unlikely to change this particular post, you might be right about it being better to be more explicit in presenting the parts of the text. I'll think about it.

The reason I visit LW is it satisfies a need for community. I'm glad to see the recent efforts at revitalisation, as a large part of the value for me generated by a single conversational locus is the social support it provides. This site has been inactive for a long time - and yet to my puzzlement I still found myself checking it regularly, despite not learning anything. I discovered that it's because I just wanted to keep in touch with what's going on in rationalist circles, and hang out a bit. I see myself as an aspiring rationalist, and that's a hard th... (read more)

When pushed on why he is out interviewing people, Anthony Magnabosco responds with, "I like talking to people and finding out what they believe." True enough, but disingenuous. He presents himself as a seeker of the truth, yet his root goal is to change minds. If obtaining the truth were his primary motivation, street interviews would be an incredibly inefficient method. The interviews come off as incredibly patronising. Questions such as, "If I gave you evidence about a biblical contradiction, and I'm not saying I do, but if I did, would you change your mind?" Of course you have a contradiction up your sleeve.

Honesty and effectiveness appear to be conflicting goals in street epistemology.

I don't see any substantial evidence from the videos (at least the ones I bothered to watch) that he was changing anyone's mind. Once I had a discussion with a group of Mormons. I reduced them to saying repeatedly, "well, I don't know what to say about that." At the end I basically lectured them for 10 minutes about how bad it is to believe a false religion, and they were silent. But I have no reason to believe that any of them changed their minds to even the slightest degree. I would guess that these videos are the same thing.

To balance the criticism with some praise, in addition to some of the great things you have mentioned, there are two worthwhile things about the Jocko podcast that are not explicit in your post that I want to highlight.

1) Jocko embodies growth mindset.

2) Much of Jocko's discipline comes from having trained himself to become comfortable experiencing discomfort. Once you've done many painful things, your fear of them falls and your sense of self-efficacy rises, making it easier to complete future painful goals.


  1. Make the decision and strongly commit to being a mentally strong person

  2. Continuously monitor your actions to ensure they are the actions of a mentally strong person

  3. Maintain this (for weeks/months/years?) until a new self-identity is formed.

If I've misrepresented something, point it out, but this looks to me like a recipe for failure. It's missing fundamental parts of the human experience. People most often fail at their goals because of conflicting short- and long-term desires, forgetfulness, and existing habits. Jocko doesn't adequately take that ... (read more)

I definitely don't think Jocko's material on "how to get things done" is his strongest suit, and I don't think he intends it to be, really. I would say that temptations do disappear if you successfully implement a mindset of "it's really not an option", but again, the implementation of that mindset in the first place is tricky.

Honestly, I think one of the benefits of being in the military, at least for a certain type of person, is that the military provides a supporting framework and incentive structure for building good habits. You work out every day because it's part of your job, basically. You put yourself through all kinds of physical deprivation because you have to, it's required; you're not making yourself do it, you're being ordered to do it. For the same reason, professional athletes don't have to badger themselves to go to the gym -- going to the gym is aligned with their other goals. For people like me, going to the gym is a distraction from my other goals.

It is valuable to have an example of somebody who reliably executes on his philosophy of "Decide to do it, then do it." If you find that you didn't do it, then you didn't truly decide to do it. In any case, your own choice or lack thereof is the only factor. "Discipline is freedom." If you adopt this habit as your reality, it becomes true.

It's possible I'm getting too confused with the language here, but I've struggled to apply this advice in my own life. I'll decide that I'm not going to snack at work anymore and then find mysel... (read more)

Here is a method I use to good effect:

1) Take a detailed look at the pros and cons of what you want to change. This is sometimes sufficient by itself - more than once I have realized I simply get nothing out of what I'm doing, and the desire goes away by itself.

2) Find a substitution for those pros. Alternatively, think about an example of when you decided to do something and then actually did it, and try to port the methods over.

Personal example: I recently had a low-grade freakout over deciding to do a particular paperwork process that is famously slow and awful, and brings up many deeply negative feelings for me. Then I was cleaning my dutch oven, and reflected that getting a warranty replacement actually took about three months and several phone calls, which is frustrating but perfectly manageable. This gives me confidence that monitoring a slow administrative process is achievable, and I am more likely to complete it now.
I think this is the main weak point too. The distinction between 'discipline' and 'true discipline' is just semantics for 'willpower when it works and when it doesn't'.
"Discipline is freedom" summarizes the attitude that if you have trained yourself to wake up early, stay on task, exercise regularly, etc., then you now have the freedom to do a variety of things that you would not otherwise be able to do. By having the discipline to exercise, you now have the ability to freely use a more fit body; by waking up early, you have extra hours at your disposal; and so on.

To address your first question, I think Jocko would probably say: "If you form an intention to do something, and you don't do it, then you are mentally weak. The first thing to do is then to decide not to be mentally weak." In abstruse lesswrongspeak, this would look something like: "It is most important to form a self-governing narrative of the form 'a mentally strong person would execute on their intentions regardless of transient impulses or mental resistance, and I commit with utmost resolution to being a mentally strong person'. Then you must continuously monitor your daily activities for adherence to this commitment and to this narrative-mentality."

Ironically, the Less Wrong deconstructionist approach of breaking the self up into multiple agents and carefully finding a minimum-enthalpy path through wantspace is itself antithetical to forming such a "simplistic" self-governing narrative, even if possessing and maintaining such a narrative were more effective.

CGP Grey follows a cycle that repeats -> (40 min work - 7 min break - 40 min work - 20 min break). I think he mentions it in here somewhere but I don't know the exact time. It seems probable that the most appropriate length and cycle for an individual should be based on their attention span and recovery.
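For anyone who wants to experiment with it, the cycle above is easy to sketch as a schedule generator. This is only a toy illustration: the (40, 7, 40, 20) pattern comes from the comment, while the function name and the idea of fitting blocks into an available window are my own additions.

```python
from itertools import cycle

# CGP Grey's repeating pattern, in minutes, per the comment above.
PATTERN = [("work", 40), ("break", 7), ("work", 40), ("break", 20)]

def schedule(minutes_available):
    """Return the (phase, duration) blocks that fit into the time available."""
    plan, used = [], 0
    for phase, length in cycle(PATTERN):
        if used + length > minutes_available:
            break  # the next block doesn't fit; stop here
        plan.append((phase, length))
        used += length
    return plan

print(schedule(120))  # two 40-minute work blocks plus both breaks fit in 2 hours
```

Swapping in a different PATTERN is all it takes to test a cycle tuned to your own attention span and recovery.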

If you just wanted blogs (i.e. no twitter+tumblr) the following are blogs I personally like that post frequently in rough order of how useful/insightful I have found them,


There are a few that are very infrequent but very good when they do post,

Until LessWrong 2.0 comes out, this is how I've been staying in touch with the Rationalist Diaspora. It took about an hour to set up and I can now see almost everything in the one place.

I've been using an RSS reader (I use feedly) to collate RSS feeds from these lists,

Rationalist Blogs,

Effective Altruist Blogs,

Rationalist Tumblrs,

And using this twitter to RSS tool for ... (read more)
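For anyone who would rather roll the collation step themselves than use a reader like Feedly, merging feeds is straightforward. Here is a minimal sketch using only the Python standard library; the feed contents are made up for illustration, and a real version would first fetch each feed URL with urllib.

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def parse_feed(xml_text):
    """Yield (published, title) pairs from one RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    for item in root.iter("item"):
        when = parsedate_to_datetime(item.findtext("pubDate"))
        yield when, item.findtext("title")

def collate(feeds):
    """Merge items from many feeds into one list, newest first."""
    items = [pair for xml_text in feeds for pair in parse_feed(xml_text)]
    return [title for _, title in sorted(items, reverse=True)]

# Two hypothetical single-item feeds, inlined for illustration.
feed_a = """<rss><channel><item>
  <title>Post A1</title><pubDate>Mon, 02 Jan 2017 10:00:00 GMT</pubDate>
</item></channel></rss>"""
feed_b = """<rss><channel><item>
  <title>Post B1</title><pubDate>Tue, 03 Jan 2017 10:00:00 GMT</pubDate>
</item></channel></rss>"""

print(collate([feed_a, feed_b]))  # newest first: ['Post B1', 'Post A1']
```

This is essentially everything a hosted reader does for you, minus the polling, deduplication, and read/unread state.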

Trying to think of what's not on this list:

* The EA forum sometimes has insightful posts, mostly EA news
* GiveWell and Open Philanthropy Project blogs
* /r/slatestarcodex, /r/LessWrong, /r/HPMoR, /r/smartgiving, /r/effectivealtruism, /r/rational
* Thing of Things (Ozy's blog)
* Topher's blog URL is now []
* [] is a group blog by a few ex-LWers, I believe
* You could probably dig up more by looking through the blogrolls of the blogs you've already identified. For example, Scott Aaronson considers himself part of the rationalist blogosphere and is listed on the SlateStarCodex sidebar.
* Andrew Critch's blog is great
* PredictionBook, Omnilibrium
* Of course there are lots of Facebook groups (especially EA-related Facebook groups) and Facebook personalities, notably Eliezer
* Somewhere I got the impression that HBD Chick and Sarah Perry of Ribbonfarm were LWers at some point. There's also the "post-rationalist" community, which includes sites like Melting Asphalt.
* Scott's community map

Many of these update infrequently, making it bothersome to check all of them. I'll bet it wouldn't be very hard to create a single site that lets you see what's new across the entire diaspora (including LW) by combining all these RSS feeds into something like
This seems very useful. Thank you for posting it. Out of all of the blogs, which ones do you prioritize in reading first? It seems like there are far too many to always read all of them.

It's unlikely that someone is going to say something that will take away your pain. Death sucks. Losing someone you love sucks, and sadness is a normal reaction to that. There are emotionally healthy ways to deal with grief. Give yourself more self-care than you think you need throughout this process to counter the planning fallacy; better to err on the side of too much than too little.

If you do find yourself depressed, seeking professional help is not a sign of weakness and I would encourage you to seek it out. Summoning motivation can be an impossible... (read more)

The problem isn't simply clarity.

In this case it is. I believe I have been less than clear again.

The frame of mind of treating a conversation with your friends as PR is not useful for getting your friends to trust you and positively respond to what you are saying.

Agreed - but I've never done that. The conversations are ordinary in that I share rationality in the same way I would share a book or movie I've enjoyed. It is "I enjoy X, you should try it I bet you would enjoy it too" as opposed to, "I want to spread X and my friends are go... (read more)

We don't enjoy a topic as diverse as rationality in the same way we enjoy a book or movie. A book or movie is a much more concrete experience. You could speak about individual books like Kahneman's instead of using the label rationality.

What exactly are you doing that you have PR problems?

Something like,

A: I've been reading a lot about rationality in the last year or two. It's pretty great.

B: What's that?

A: Explanation of instrumental + epistemic OR Biases a la Kahneman

B: Sounds dumb. I do that already.

A: I've found it great because X, Y, Z.

B: I think emotion is much more important than rationality. I don't want to be a robot.

Are you simply relabeling normal conversations with friends as PR?

Yes. Sorry for the lack of clarity.

The problem isn't simply clarity. The frame of mind of treating a conversation with your friends as PR is not useful for getting your friends to trust you and positively respond to what you are saying. If you do that, it's no wonder that someone thinks you are a Straw Vulcan, because that mindset is communicating that vibe.

That said, let's focus on your message. You aren't telling people that you are using rationality to make your life better. You are telling people that you read about rationality. That doesn't show a person the value of rationality.

If I want to talk about the value of rationality, I could talk about how I'm making predictions in my daily life and the value that brings me. I can talk about how great it is to play double crux with other rationalists and actually have them change their mind. If I want to talk about the effect it has on friends, I can talk about how a fellow rationalist who thought he only cared about the people he's interacting with used rationality techniques to discover that he actually cares about rescuing children in the third world from dying of malaria.

If I want to talk about society, then I can talk about how the Good Judgment Project outperforms CIA analysts who have access to classified information by 30%. I can talk about how better predictions by the CIA before the Iraq war might have stopped the war, and therefore really matter a great deal. Superforecasting is a great book for having those war stories.
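On the making-predictions point: one concrete way to show someone the value is to score your own everyday predictions. A minimal sketch using the Brier score; the predictions listed are hypothetical examples, not from the comment.

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes (0 or 1).

    Lower is better; always guessing 50% scores exactly 0.25.
    """
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# (probability assigned, what actually happened) -- hypothetical examples
my_predictions = [
    (0.9, 1),  # "I'll finish the report today" - did
    (0.7, 0),  # "The meeting will run long" - didn't
    (0.5, 1),  # "It will rain" - did
]

print(round(brier_score(my_predictions), 3))  # 0.25
```

Tracking a running score like this over weeks makes "I'm getting better calibrated" a checkable claim rather than a feeling.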

Revisiting past conversations I think this is exactly what has been happening. When I mention rationality, reason, logic it becomes a logic v. emotion discussion. I'll taboo in future, thanks!

I have large PR problems when talking about rationality with others unfamiliar with it, with the Straw Vulcan being the most common trap conversation will fall into.

Are there any guides out there in the vein of the EA Pitch Wiki that could help someone avoid these traps and portray rationality in a more positive light? If not, would it be worth creating one?

So far I've found: how rationality can make your life more awesome, rationality for curiosity's sake, rationality as winning, PR problems, and the contrary view that rationality isn't all that great.

What exactly are you doing that you have PR problems? Are you simply relabeling normal conversations with friends as PR?
Not a guide, but I think the vocab you use matters a lot. Try tabooing 'rationality', the word itself mindkills some people straight to straw vulcan etc. Do the same with any other words that have the same effect.

Problems one and two (hard and imperfect) would suggest that people will get less value out of ScottL's post than a workshop. OK, fine. Don't let the perfect be the enemy of the good. Scale ScottL's post up through easy online access, and the many, many people each getting a smaller, somewhat unreliable benefit turn into something very significant. But problem 3,

Having seen crappy, distorted versions of the CFAR curriculum (or having attempted to absorb it from text, and failed), a typical human would then be much, much less receptive to other, better explanat

... (read more)
My view, as a CFAR alum and donor, is that the primary arguments against CFAR releasing their material are 1) better returns on time and 2) making it more difficult to change the material. I think online material complements instead of competes with in person classes; standard advice in consulting is "give away your best material for free." (I think CFAR was sensible to wait until now to decide that some of its material is 'best' enough to give away.) I don't think independent compilations of rationality material are net negative, in the same way that I think Starbucks complements instead of competes with independent coffee shops. I do think it's weird to call this the CFAR canon if it's not explicitly endorsed by CFAR. (ScottL, what do you think the word 'canon' means?)
[DEACTIVATED] Duncan Sabien · 7y
I'm not sure, re: whether your or ScottL's compilations provide negative value. I definitely fall shy of recommending that they be posted, but I think I ALSO fall shy of anything like requesting that they be taken down. I think there's probably a meaningful difference between CFAR publishing something, and friendly Less Wrongers being like, "Hey, here's this thing I pulled together, hope it helps." The risks I anticipate seem much stronger in the former case.

As for what "near" means, I predict with 70% confidence that we will publish more than 5000 words about actual CFAR content before the end of 2016. That's a pretty weak prediction, I know, but also I'm not in a position to be very confident (given my naïveté). I will say that in the universes where we publish 5000 words, we're also likely to publish a lot MORE.

Rather than deferring to the judgment of the Smart Altruists and assuming that within their secret backroom discussions they've determined with logic, rigor, and a plethora of academic citations that it's crucial to the mission of raising the sanity waterline to not release a comprehensive exposition of their body of rationality techniques, perhaps we need only consider your second point except in less reverential light.

Given the ease with which CFAR could publish all their material online it seems worth considering why they haven't done so. If spreadin... (read more)

I had a very similar thought to this post. So similar, in fact, that I went ahead and wrote a kind of user guide for each of CFAR's techniques (though it has changed a great deal even in the last 4 months since I finished writing). I also have never been to a CFAR workshop and drew on many of the same online sources that you have. It took about a month of working in my spare time to compile. My motivation for doing so was that the costs of attending a workshop (financial and time) were simply too high for someone in my position overseas.

I've printed it and on... (read more)

Having it publicly available definitely has huge costs and tradeoffs. This is particularly true when you're worried about the processes you want to encourage getting stuck as a fixed doctrine - this is essentially why John Boyd preferred presentations over manuals when running his reform movement in the US military.
My assumption was that they don't have this because of time and effort constraints as well as other priorities. The CFAR team are valuable because they are practitioners, experimenters, and pioneers, not because of their techniques. That is, they are not valuable because they are hoarding potentially valuable information, but because they are at the frontier and are able to teach their material extremely well. The important question is: does my material or yours help with improving the art of rationality and people's understanding of it? I still think it does, but in retrospect, I think that I should have made it clearer that trying to learn this material by yourself is probably a bad idea.
Rather than deferring to the judgment of the Smart Altruists and assuming that within their secret backroom discussions they've determined with logic, rigor, and a plethora of academic citations that it's crucial to the mission of raising the sanity waterline to not release a comprehensive exposition of their body of rationality techniques, perhaps we need only consider your second point except in less reverential light: So much for the Internet-era model of "free information to be disseminated to all". Without a deferential attitude toward the Great Rationalists of CFAR, Occam's Razor suggests that perhaps they're simply trying to keep the money flowing. Would it upset you if thousands of people without the resources or time to make it to a CFAR workshop had access to a self-study version of the CFAR curriculum?
2Ben Pace7y
I strongly endorse (1). I also expect them to change (1) before too long, or otherwise open up their activities much more, and because of these two points, I will not be linking people to the OP.

Faith, in the sense in which I am here using the word, is the art of holding on to things your reason has once accepted, in spite of your changing moods.

So: Bayes-update on intellectual arguments, but not on your emotions, when you consider them likely to change in the immediate future? That seems like a good virtue if one desires accurate beliefs.
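For concreteness, here is a toy sketch (in Python, with made-up numbers) of why updating on arguments but not on passing moods is consistent with Bayes' rule: a mood is roughly equally likely whether or not the belief is true, so its likelihood ratio is about 1 and the posterior doesn't move.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# An argument three times as likely to be encountered if the claim
# is true than if it is false moves a 50% credence up to 75%.
posterior = bayes_update(0.5, 0.6, 0.2)  # -> 0.75

# A gloomy mood that is equally likely either way (likelihood
# ratio 1) leaves the credence exactly where it was.
after_mood = bayes_update(posterior, 0.5, 0.5)  # -> 0.75
```

Updating on a mood isn't forbidden so much as vacuous: with no likelihood ratio, Bayes' rule returns the prior unchanged.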

It is, and I think "faith" in this sense is indeed an intellectual virtue. But it seems to me that

* many, many uses of the word "faith" by believers are describing something quite different and are in fact endorsing belief in the absence of evidence or in the teeth of contrary evidence;
* even when the meaning is broadly in line with CSL's here, most applications of it refer not to holding on to belief merely "in spite of your changing moods" but advocate much firmer persistence than that. A Christian who after lengthy consideration is beginning to think that the problem of evil is insoluble is likely to be enjoined to exercise "faith".

The second of those is not necessarily unreasonable. E.g., if you know you are about to talk to someone supernaturally persuasive, able to come up with extremely convincing arguments for any position true or false, you might do well to precommit to not being moved by whatever arguments they might offer you. Christians might suggest (indeed, in another place Lewis does suggest) that the influence of the devil is like that. But the possibilities for abuse are very obvious.

[EDITED to add: I see that at least two people have downvoted this. Rereading it, it still looks perfectly reasonable to me. I don't suppose anyone who dislikes it would do me the favour of saying why?]

I recently attended a 10 day intensive Vipassana meditation retreat. Would a write-up of the experience be something LWers are interested in as an article for discussion?

I had minimal to moderate experience in meditation before this but now feel much more comfortable with it. I can see potential rationality relevance through:

* Discipline
* Concentration
* Emotion and habit regulation
* Seeing reality as it is

If there is interest then I would appreciate it if someone is willing to look over a draft of the article for me as I haven't written for LW before.

can do.
I would absolutely be very interested. I think Vipassana meditation can be used as a very powerful rationality technique, and I'm always interested to read rationalists explain their experiences with it.

I just attended one too! I am composing a post on this, about halfway done. I'd be interested in a collaboration where we both talk about our experiences, though I would like to see what you think. My post is laden with my own interpretations. Send me a message if you want to discuss once you have your outline down

Are they good quality for listening to?

I listen to audio books regularly and they are at the upper end in terms of quality.

Moreover, is the material they cover comprehensible?

Yes. Articles that don't translate well into audio are not produced e.g. Intuitive Bayes Theorem is unavailable.

have you found the Less Wrong casts to be understandable sufficiently well for a first-time listener?


I'd like to know in which order I should provide those articles that are available on Castify.

I don't know what you mean here but you can contact Castify ... (read more)

I mean: for someone listening to Less Wrong for the first time, what order of listening would be suggested to make the understanding of each article most complete? I suppose that the various previous topics about reading order would work too, but that would require some amount of cross-referencing that I'd rather avoid if possible; though I do not intend to offload this work onto others if it is necessary. I was just looking to see if there was a more easily acquired answer than that. Regardless, thanks a bunch for the help!

Sumatra PDF 3.0 on Windows 8.1 x64. I believe the problem is the same one this user had with the AI to Zombies ebook.

I'll be reading the epub personally (which works fine in Sumatra) on my iPad, so it doesn't bother me, but I thought I would mention it, as Sumatra is a relatively popular reader, and if this ebook is produced by the same team as the rationality ebook then it seems to be a recurring problem.

For what it's worth, I used xelatex and some of Alex Vermeer's code, but I can't see why either would affect the links, and can't find any suggestions for why this would occur in Sumatra. I'll just sit on this for now, but if more people have a similar issue, I'll look further. Thanks.

The epub and mobi links both lead to "page not found".

Also having problems with the links within the PDF. They are blacked out though they still function as links. Same problem with the AI to Zombies ebook I think.

Otherwise very excited to read this!

Fixed the links to the epub and mobi! Blacked out pdf links are new to me - what's the reader?
Same here.

I don't experience this. Maybe a problem with your viewer?