CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we’ll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21).

Topics that may be interesting include (but are not limited to):

  • Why we think there should be a CFAR;
  • Whether we should change our name to be less general;
  • How running mainline CFAR workshops does/doesn't relate to running "AI Risk for Computer Scientists"-type workshops. Why we do a lot of recruiting/education for AI alignment research and yet wouldn't be happy doing only that.
  • How our curriculum has evolved. How it relates to and differs from the Less Wrong Sequences. Where we hope to go with our curriculum over the next year, and why.

Several CFAR staff members will be answering questions, including: me, Tim Telleen-Lawton, Adam Scholl, and probably various others who work at CFAR. However, we will try to answer with our own individual views (because individual speech is often more interesting than institutional speech, and certainly easier to do in a non-bureaucratic way on the fly), and we may give more than one answer to questions where our individual viewpoints differ from one another's!

(You might also want to check out our 2019 Progress Report and Future Plans. And we'll have some other posts out across the remainder of the fundraiser, from now til Jan 10.)

[Edit: We're out of time, and we've allocated most of the reply-energy we have for now, but some of us are likely to continue slowly dribbling out answers from now til Jan 2 or so (maybe especially to replies, but also to some of the q's that we didn't get to yet). Thanks to everyone who participated; I really appreciate it.]


I feel like one of the most valuable things we have on LessWrong is a broad, shared epistemic framework: ideas with which we can take steps through concept-space together and reach important conclusions more efficiently than other intellectual spheres (e.g., ideas about decision theory, ideas about overcoming coordination problems, etc.). I believe all of the founding staff of CFAR had read the Sequences and were versed in things like what it means to ask where you got your bits of evidence from, that correctly updating on the evidence has a formal meaning, and had absorbed a model of Eliezer's law-based approach to reasoning about your mind and the world.

In recent years, when I've been at CFAR events, I generally feel like at least 25% of attendees probably haven't read The Sequences, aren't part of this shared epistemic framework, and don't have an understanding of that law-based approach, and that they don't have a felt need to cache out their models of the world into explicit reasoning and communicable models that others can build on. I also have felt this way increasingly about CFAR staff over the years (e.g. it's not clear to me whether all current CFAR staff have read The Sequen

... (read more)

This is my favorite question of the AMA so far (I said something similar aloud when I first read it, which was before it got upvoted quite this highly, as did a couple of other staff members). The things I personally appreciate about your question are: (1) it points near a core direction that CFAR has already been intending to try moving toward this year (and probably across near-subsequent years; one year will not be sufficient); and (2) I think you asking it publicly in this way (and giving us an opportunity to make this intention memorable and clear to ourselves, and to parts of the community that may help us remember) will help at least some with our moving there.

Relatedly, I like the way you lay out the concepts.

Your essay (I mean, “question”) is rather long, and has a lot of things in it; and my desired response sure also has a lot of things in it. So I’m going to let myself reply via many separate discrete small comments because that’s easier.

(So: many discrete small comments upcoming.)

Ben Pace writes:

In recent years, when I've been at CFAR events, I generally feel like at least 25% of attendees probably haven't read The Sequences, aren't part of this shared epistemic framework, and don't have any understanding of that law-based approach, and that they don't have a felt need to cache out their models of the world into explicit reasoning and communicable models that others can build on.

The “many alumni haven't read the Sequences” part has actually been here since very near the beginning (not the initial 2012 minicamps, but the very first paid workshops of 2013 and later). (CFAR began in Jan 2012.) You can see it in our old end-of-2013 fundraiser post, where we wrote “Initial workshops worked only for those who had already read the LW Sequences. Today, workshop participants who are smart and analytical, but with no prior exposure to rationality -- such as a local politician, a police officer, a Spanish teacher, and others -- are by and large quite happy with the workshop and feel it is valuable.” We didn't name this explicitly in that post, but part of the hope was to get the workshops to work for a slightly larger/broader/more cognitively diverse set than t

... (read more)
that CFAR's natural antibodies weren't kicking against it hard.

Some of them were. This was a point of contention in internal culture discussions for quite a while.

(I am not currently a CFAR staff member, and cannot speak to any of the org's goals or development since roughly October 2018, but I can speak with authority about things that took place from October 2015 up until my departure at that time.)

Yeah, I predict that if one showed Val or Pete the line about fitting naturally into CFAR’s environment without triggering antibodies, they would laugh hard and despairingly. There was definitely friction.

What did this friction lead to (what changes in CFAR’s output, etc.)?

Ben Pace writes:

“... The Gwerns and the Wei Dais and the Scott Alexanders of the world won't have learned anything from CFAR's exploration.”

I’d like to distinguish two things:

  1. Whether the official work activities CFAR staff are paid for will directly produce explicit knowledge in the manner valued by the Gwerns etc.
  2. Whether that CFAR work will help educate people who later produce explicit knowledge themselves in the manner valued by Gwern etc., and who wouldn't have produced that knowledge otherwise.

#1 would be useful but isn’t our primary goal (though I think we’ve done more than none of it). #2 seems like a component of our primary goal to me (“scientists” or “producers of folks who can make knowledge in this sense” isn’t all we’re trying to produce, but it’s part of it), and is part of what I would like to see us strengthen over the coming years.

To briefly list our situation with respect to whether we are accomplishing #2 (according to me):

  • There are in fact a good number of AI safety scientists in particular who seem to me to produce knowledge of this type, and who give CFAR some degree of credit for their present tendency to do this.
  • On a milder level, while CFAR work
... (read more)

I think a crisp summary here is: CFAR is in the business of helping create scientists, more than the business of doing science. Some of the things it makes sense to do to help create scientists look vaguely science-ish, but others don't. And this sometimes causes people to worry (understandably, I think) that CFAR isn't enthused about science, or doesn't understand its value.

But if you're looking to improve a given culture, one natural move is to explore that culture's blindspots. And I think exploring those blindspots is often not going to look like an activity typical of that culture.

An example: there's a particular bug I encounter extremely often at AIRCS workshops, but rarely at other workshops. I don't yet feel like I have a great model of it, but it has something to do with not fully understanding how words have referents at different levels of abstraction. It's the sort of confusion that I think reading A Human's Guide to Words often resolves in people, and which results in people asking questions like:

  • "Should I replace [my core goal x] with [this list of "ethical" goals I recently heard about]?"
  • "Why is the fact that I have a goal a good reason to optimize for it?"
  • "Are p
... (read more)

I felt a "click" in my brain reading this comment, like an old "something feels off, but I'm not sure what" feeling about rationality techniques finally resolving itself.

If this comment were a post, and I were in the curating-posts business, I'd curate it. The demystified concrete examples of the mental motion "use a tool from an unsciencey field to help debug scientists" are super helpful.

Just want to second that I think this comment is particularly important. There's a particular bug where I can get inoculated to a whole class of useful rationality interventions that don't match my smell for "rationality intervention", but the whole reason they're a blindspot in the first place is because of that smell... or something.

I feel like this comment should perhaps be an AIRCS class -- not on meta-ethics, but on 'how to think about what doing debugging your brain is, if your usual ontology is "some activities are object-level engineering, some activities are object-level science, and everything else is bullshit or recreation"'. (With meta-ethics addressed in passing as a concrete example.)

(To be clear the above is an account of why I personally feel excited about CFAR having investigated circling. I think this also reasonably describes the motivations of many staff, and of CFAR's behavior as an institution. But CFAR struggles with communicating research intuitions, too; I think in this case these intuitions did not propagate fully among our staff, and as a result that we did employ a few people for a while whose primary interest in circling seemed to me to be more like "for its own sake," and who sometimes discussed it in ways which felt epistemically unhealthy to me. I think people correctly picked up on this as worrying, and I don't want to suggest that didn't happen; just that there is, I think, a sensible reason why CFAR as an institution tends to investigate local blindspots by searching for non-locals with a patch, thereby alarming locals about our epistemic allegiance).

Thanks, that was really helpful. I continue to have a sense of disagreement that this is the right way to do things, so I’ll try to point to some of that. Unfortunately my comment here is not super focused, though I am just trying to say a single thing.

I recently wrote down a bunch of my thoughts about evaluating MIRI, and I realised that I think MIRI has gone through alternating phases of internal concentration and external explanation, in a way that feels quite healthy to me.

Here is what I said:

In the last 2-5 years I have endorsed donating to MIRI (and still do), and my reasoning back then was always of the type "I don't understand their technical research, but I have read a substantial amount of the philosophy and worldview that was used to successfully pluck that problem out of the space of things to work on, and think it is deeply coherent and sensible and it's been surprisingly successful in figuring out AI is an x-risk, and I expect to find it is doing very sensible things in places I understand less well." Then, about a year ago, MIRI published the Embedded Agency sequence, and for the first time I thought "Oh, now I feel like I have an understanding of what the technical resear

... (read more)

Ben, to check before I respond—would a fair summary of your position be, "CFAR should write more in public, e.g. on LessWrong, so that A) it can have better feedback loops, and B) more people can benefit from its ideas?"

Philosophy strikes me as, on the whole, an unusually unproductive field full of people with highly questionable epistemics.

This is kind of tangential, but I wrote Some Thoughts on Metaphilosophy in part to explain why we shouldn't expect philosophy to be as productive as other fields. I do think it can probably be made more productive, by improving people's epistemics, their incentives for working on the most important problems, etc., but the same can be said for lots of other fields.

I certainly don’t want to turn the engineers into philosophers

Not sure if you're saying that you personally don't have an interest in doing this, or that it's a bad idea in general, but if the latter, see Counterintuitive Comparative Advantage.

I have an interest in making certain parts of philosophy more productive, and in helping some alignment engineers gain some specific philosophical skills. I just meant I'm not in general excited about making the average AIRCS participant's epistemics more like that of the average professional philosopher.

I was preparing to write a reply to the effect of “this is the most useful comment about what CFAR is doing and why that’s been posted on this thread yet” (it might still be, even)—but then I got to the part where your explanation takes a very odd sort of leap.

But we looked around, and noticed that lots of the promising people around us seemed particularly bad at extrospection—i.e., at simulating the felt senses of their conversational partners in their own minds.

It’s entirely unclear to me what this means, or why it is necessary / desirable. (Also, it seems like you’re using the term ‘extrospection’ in a quite unusual way; a quick search turns up no hits for anything like the definition you just gave. What’s up with that?)

This seemed worrying, among other reasons because early-stage research intuitions (e.g. about which lines of inquiry feel exciting to pursue) often seem to be stored sub-verbally.

There… seems to be quite a substantial line of reasoning hidden here, but I can’t guess what it is. Could you elaborate?

So we looked to specialists in extrospection for a patch.

Is there some reason to consider the folks who purvey (as you say) “woo-laden authentic relating ga

... (read more)
Is there some reason to consider the folks who purvey (as you say) “woo-laden authentic relating games” to be ‘specialists’ here? What are some examples of their output, that is relevant to … research intuitions? (Or anything related?)

I'm speaking for myself here, not any institutional view at CFAR.

When I'm looking at maybe-experts, woo-y or otherwise, one of the main things that I'm looking at is the nature and quality of their feedback loops.

When I think about how, in principle, one would train good intuitions about what other people are feeling at any given moment, I reason "well, I would need to be able to make predictions about that, and get immediate, reliable feedback about if my predictions are correct." This doesn't seem that far off from what Circling is. (For instance, "I have a story that you're feeling defensive" -> "I don't feel defensive, so much as righteous. And...There's a flowering of heat in my belly.")

Circling does not seem like a perfect training regime, to my naive sensors, but if I imagine a person engaging in Circling for 5000 hours, or more, it seems pretty plausible that they would get increasingly skilled along a particular axis.

This makes it se... (read more)

When I think about how, in principle, one would train good intuitions about what other people are feeling at any given moment, I reason “well, I would need to be able to make predictions about that, and get immediate, reliable feedback about if my predictions are correct.” This doesn’t seem that far off from what Circling is. (For instance, “I have a story that you’re feeling defensive” → “I don’t feel defensive, so much as righteous. And...There’s a flowering of heat in my belly.”)

Why would you expect this feedback to be reliable…? It seems to me that the opposite would be the case.

(This is aside from the fact that even if the feedback were reliable, the most you could expect to be training is your ability to determine what someone is feeling in the specific context of a Circling, or Circling-esque, exercise. I would not expect that this ability—even were it trainable in such a manner—would transfer to other situations.)

Finally, and speaking of feedback loops, note that my question had two parts—and the second part (asking for relevant examples of these purported experts’ output) is one which you did not address. Relatedly, you said:

This makes it seem worthwhile training with

... (read more)

I'm going to make a general point first, and then respond to some of your specific objections.

General point:

One of the things that I do, and that CFAR does, is trawl through the existing bodies of knowledge (or purported existing bodies of knowledge), that are relevant to problems that we care about.

But there's a lot of that in the world, and most of it is not very reliable. My response only points at a heuristic that I use in assessing those bodies of knowledge, and weighing which ones to prioritize and engage with further. I agree that this heuristic on its own is insufficient for certifying a tradition or a body of knowledge as correct, or reliable, or anything.

And yes, you need to do further evaluation work before adopting a procedure. In general, I would recommend against adopting a new procedure as a habit, unless it is concretely and obviously providing value. (There are obviously some exceptions to this general rule.)

Specific points:

Why would you expect this feedback to be reliable…? It seems to me that the opposite would be the case.

On the face of it, I wouldn't assume that it is reliable, but I don't have that strong a reason to assume that i... (read more)

Some sampling of things that I'm currently investigating / interested in (mostly not for CFAR), and sources that I'm using:

  • Power and propaganda
    • reading the Dictator's Handbook and some of the authors' other work.
    • reading Kissinger's books
    • rereading Samo's draft
    • some "evil literature" (an example of which is "things Brent wrote")
    • thinking and writing
  • Disagreement resolution and conversational mediation
    • I'm currently looking into some NVC materials
    • lots and lots of experimentation and iteration
  • Focusing, articulation, and aversion processing
    • Mostly iteration with lots of notes.
    • Things like PJ EBY's excellent ebook.
    • Reading other materials from the Focusing institute, etc.
  • Ego and what to do about it
    • Byron Katie's The Work (I'm familiar with this from years ago, it has an epistemic core (one key question is "Is this true?"), and PJ EBY mentioned using this process with clients.)
    • I might check out Eckhart Tolle's work again (which I read as a teenager)
  • Learning
    • Mostly iteration as I learn things on the object level, right now, but I've read a lot on deliberate practice, and study methodology, as well as learned g
... (read more)
Howie Lempel:
Thanks for writing this up. Added a few things to my reading list and generally just found it inspiring. FYI - this link goes to an empty shopping cart. Which of his books did you mean to refer to? The best links I could find quickly were:
  • You, Version 2.0
  • A Minute to Unlimit You
Eli Tyre:
A Minute to Unlimit You
I think I also damaged something psychologically, which took 6 months to repair.

I've been pretty curious about the extent to which circling has harmful side effects for some people. If you felt like sharing what this was, the mechanism that caused it, and/or how it could be avoided I'd be interested.

I expect, though, that this is too sensitive/personal so please feel free to ignore.

Eli Tyre:
It's not sensitive so much as context-heavy, and I don't think I can easily go into it in brief. I do think it would be good if we had a way to propagate different people's experiences of things like Circling better.
Eli Tyre:
Oh, and as a side note: I have twice in my life had a short introductory conversation with a person, noticed that something unusual or interesting was happening (without having any idea what), and then found out subsequently that the person I was talking with had done a lot of circling. The first person was Pete, who I had a conversation with shortly after EAG 2015, before he came to work for CFAR. The other was an HR person at a tech company that I was cajoled into interviewing at, despite not really having any relevant skills. I would be hard pressed to say exactly what was interesting about those conversations: something like "the way they were asking questions was...something. Probing? Intentional? Alive?" Those words really don't capture it, but whatever was happening, I had a detector that pinged "something about this situation is unusual."
Eli Tyre:
Coming back to this, I think I would describe it as "they seemed like they were actually paying attention", which was so unusual as to be noteworthy.

Said, I appreciate your point that I used the term "extrospection" in a non-standard way—I think you're right. The way I've heard it used, which is probably idiosyncratic local jargon, is to reference the theory of mind analog of introspection: "feeling, yourself, something of what the person you're talking with is feeling." You obviously can't do this perfectly, but I think many people find that e.g. it's easier to gain information about why someone is sad, and about how it feels for them to be currently experiencing this sadness, if you use empathy/theory of mind/the thing I think people are often gesturing at when they talk about "mirror neurons," to try to emulate their sadness in your own brain. To feel a bit of it, albeit an imperfect approximation of it, yourself.

Similarly, I think it's often easier for one to gain information about why e.g. someone feels excited about pursuing a particular line of inquiry, if one tries to emulate their excitement in one's own brain. Personally, I've found this empathy/emulation skill quite helpful for research collaboration, because it makes it easier to trade information about people's vague, sub-verbal curiosities and intuitions about e.g. "which questions are most worth asking."

Circlers don't generally use this skill for research. But it is the primary skill, I think, that circling is designed to train, and my impression is that many circlers have become relatively excellent at it as a result.

Said Achmiz:
Hmm. I see, thanks.

Now, you say “You obviously can’t do this perfectly”, but it seems to me a dubious proposition even to suggest that anyone (to a first approximation) can do this at all. Even introspection is famously unreliable; the impression I have is that many people think that they can do the thing that you call ‘extrospection’[1], but in fact they can do no such thing, and are deluding themselves. Perhaps there are exceptions—but however uncommon you might intuitively think such exceptions are, they are (it seems to me) probably a couple of orders of magnitude less common than that.

Do you have any data (other than personal impressions, etc.) that would show or even suggest that this has any practical effect? (Perhaps, examples / case studies?)

[1] By the way, it seems to me like coming up with a new term for this would be useful, on account of the aforementioned namespace collision.
Adam Scholl:
Thanks for spelling this out. My guess is that there are some semi-deep cruxes here, and that they would take more time to resolve than I have available to allocate at the moment. If Eli someday writes that post about the Nisbett and Wilson paper, that might be a good time to dive in further.
ChristianKl:
To do good UX you need to understand the mental models that your users have of your software. You can do that by doing a bunch of explicit A/B tests, or you can do that by doing skilled user interviews. A person who doesn't do skilled user interviews will project a lot of their own mental models of how the software is supposed to work onto users who might have other mental models.

There are a lot of things about how humans relate to the world around them that they normally don't share with other people. People with a decent amount of self-awareness know how they reason, but they don't know how other people reason at the same level. Circling is about creating an environment where things can be shared that normally aren't. While it would be theoretically possible that people lie, it feels good to share about one's intimate experience in a safe environment and be understood.

At one LWCW where I led two circles there was a person who was in both and who afterwards said "I thought I was the only person who does X in two cases where I now know that other people also do X".
Said Achmiz:
Do you claim that people who have experience with Circling, are better at UX design? I would like some evidence for this claim, if so.
ChristianKl:
My main claim is that the activity of doing user interviews is very similar to the experience of doing Circling.

As far as the claim of getting better at UX design goes: it applies to the UX of things where mental habits matter a lot. It's not as relevant to where you place your buttons, but it's very relevant to designing mental interventions in the style that CFAR does. Evidence is great, but we have few controlled studies of Circling.
Said Achmiz:
This is not an interesting claim. Ok, it’s ‘very similar’. And what of it? What follows from this similarity? What can we expect to be the case, given this? Does skill at Circling transfer to skill at conducting user interviews? How, precisely? What specific things do you expect we will observe?

So… we don’t have any evidence for any of these claims, in other words?

I don’t think I quite understand what you’re saying, here (perhaps due to a typo or two). What does the term ‘UX’ even mean, as you are using it? What does “designing mental intervention” have to do with UX?
Matt Goldenberg:
Not a CFAR staff member, but particularly interested in this comment.

One way to frame this would be getting really good at learning tacit knowledge. One way would be to interact with one of them, notice "hey, this person is really good at this" and then inquire as to how they got so good. This is my experience with seasoned authentic relaters. Another way would be to realize there's a hole in understanding related to intuitions, and then start searching around for "people who are claiming to be really good at understanding others' intuitions"; this might lead you to running into someone as described above and then seeing if they are indeed good at the thing.

Let's say that as a designer, you wanted to impart your intuition of what makes good design. Would you rather have:

  1. A newbie designer who has spent hundreds of hours of deliberate practice understanding and being able to transfer models of how someone is feeling/relating to different concepts, and being able to model them in their own mind.
  2. A newbie designer who hasn't done that.

To me, that's the obvious use case for circling. I think there's also a bunch of obvious benefits on a group level to being able to relate to people better as well.
Said Achmiz:
Is there some reason to believe that being good at “simulating the felt senses of their conversational partners in their own minds” (whatever this means—still unclear to me) leads to being “really good at learning tacit knowledge”? In fact, is there any reason to believe that being “really good at learning tacit knowledge” is a thing?

Hmm, so in your experience, “seasoned authentic relaters” are really good at “simulating the felt senses of their conversational partners in their own minds”—is that right? If so, then the followup question is: is there some way for me to come into possession of evidence of this claim’s truth, without personally interacting with many (or any) “seasoned authentic relaters”?

Can you say more about how you came to realize this?

Well, my first step would be to stop wanting that, because it is not a sensible (or, perhaps, even coherent) thing to want. However, supposing that I nevertheless persisted in wanting to “impart my intuition”, I would definitely rather have #2 than #1. I would expect that having done what you describe in #1 would hinder, rather than help, the accomplishment of this sort of goal.
Matt Goldenberg:
This requires some model of how intuitions work. One model I like to use is to think about "intuition" as being like a felt sense or aesthetic that relates to hundreds of little associations you're picking up from a particular situation. If I'm quickly able, in my mind, to get a sense for what it feels like for you (i.e., get that same felt sense or aesthetic feel when looking at what you're looking at), and use circling-like tools to tease out which parts of the environment most contribute to that aesthetic feel, I can quickly create similar associations in my own mind and thus develop similar intuitions.

Possibly you could update by hearing many other people who have interacted with seasoned authentic relaters stating they believe this to be the case.

I mean, to me this was just obvious, seeing for instance how little the rationalists I interact with emphasize things like deliberate practice relative to things like conversation and explicit thinking. I'm not sure how CFAR recognized it.

I think this is a coherent stance if you think the general "learning intuitions" skill is impossible. But imagine if it weren't; would you agree that training it would be useful?
Said Achmiz:
Hmm. It’s possible that I don’t understand what you mean by “felt sense”. Do you have a link to any discussion of this term / concept?

That aside, the model you have sketched seems implausible to me; but, more to the point, I wonder what rent it pays? Perhaps it might predict, for example, that certain people might be really good at learning tacit knowledge, etc.; but then the obvious question becomes: fair enough, and how do we test these predictions? In other words, “my model of intuitions predicts X” is not a sufficient reason to believe X, unless those predictions have been borne out somehow, or the model validated empirically, or both. As always, some examples would be useful.

It is not clear to me whether this would be evidence (in the strict Bayesian sense); is it more likely that the people from whom I have heard such things would make these claims if they were true than otherwise? I am genuinely unsure, but even if the answer is yes, the odds ratio is low; if evidence, it’s a very weak form thereof. Conversely, if this sort of thing is the only form of evidence put forth, then that itself is evidence against, as it were!

Hmm, I am inclined to agree with your observation re: deliberate practice. It does seem puzzling to me that the solution to the (reasonable) view “intuition is undervalued, and as a consequence deliberate practice is under-emphasized” would be “let’s try to understand intuition, via circling etc.” rather than “let’s develop intuitions, via deliberate practice, whereupon the results will speak for themselves, and this will also lead to improved understanding”. (Corollary question: have the efforts made toward understanding intuitions yielded an improved emphasis on deliberate practice, and have the results thereof been positive and obvious?)

Indeed, I would, but notice that what you’re asking is different than what you asked before. In your earlier comment, you asked whether I would find it useful (in the hypothetical “newbie designer
Matt Goldenberg:
The Wikipedia article for Gendlin's focusing has a section trying to describe felt sense. Taking out the specific part about "the body", the first part says: "Gendlin gave the name "felt sense" to the unclear, pre-verbal sense of "something"—the inner knowledge or awareness that has not been consciously thought or verbalized", which is fairly close to my use of it here.

One thing it might predict is that there are ways to train the transfer of intuition, from both the teaching and learning side of things, and that by teaching them people get better at picking up intuitions.

I do believe CFAR at one point was teaching deliberate practice and calling it "turbocharged training". However, if one is really interested in intuition and thinks it's useful, the next obvious step is to ask "ok, I have this blunt instrument for teaching intuition called deliberate practice, can we use an understanding of how intuitions work to improve upon it?"

Good catch, this assumes that my simplified model of how intuitions work is at least partly correct. If the felt sense you get from a particular situation doesn't relate to intuition, or if it's impossible for one human being to get better at feeling what another is feeling, then these are not equivalent. I happen to think both are true.
Said Achmiz:
I see, thanks. Well, my question stands. That is a prediction, sure (if a vague one), but now how do we test it? What concrete observations would we expect, and which are excluded, etc.? What has actually been observed? I’m talking specifics, now; data or case studies—but in any case very concrete evidence, not generalities!

Yes… perhaps this is true. Yet in this case, we would expect to continue to use the available instruments (however blunt they may be) until such time as sharper tools are (a) available, and (b) have been firmly established as being more effective than the blunt ones. But it seems to me like neither (a) (if I’m reading your “at one point” comment correctly), nor (b), is the case here?

Really, what I don’t think I’ve seen, in this discussion, is any of what I, in a previous comment, referred to as “the cake”. This continues to trouble me!
Matt Goldenberg:
I suspect the CFARians have more delicious cake for you, as I haven't put that much time into circling, and the related connection skills I worked on more than a decade ago have atrophied since. Things I remember:
  • Much quicker connection with people.
  • There were a few things, like exercise, that I wasn't passionate about but wanted to be. After talking with people who were passionate, I was able to become passionate myself about those things.
  • I was able to more quickly learn social cognitive strategies by interacting with others who had them.
philh:
To suggest something more concrete... would you predict that if an X-ist wanted to pass a Y-ist's ITT, they would have more success if the two of them sat down to circle beforehand? Relative to doing nothing, and/or relative to other possible interventions like discussing X vs Y? For values of X and Y like Democrat/Republican, yay-SJ/boo-SJ, cat person/dog person, MIRI's approach to AI/Paul Christiano's approach?

It seems to me that (roughly speaking) if circling was more successful than other interventions, or successful on a wider range of topics, that would validate its utility. Said, do you agree?
Matt Goldenberg:
Yes, although I expect the utility of circling over other methods to be dependent on the degree to which the ITT is based on intuitions.
Ben Pace:
I always think of 'felt sense' as, not just pre-verbal intuitions, but intuitions associated with physical sensations, be they in my head, shoulders, stomach, etc.
Matt Goldenberg:
I think that Gendlin thinks all pre-verbal intuitions are represented with physical sensations. I don't agree with him but still use the felt-sense language in these parts because rationalists seem to know what I'm talking about.
Adam Scholl:
Yeah, same; I think this term has experienced some semantic drift, which is confusing. I meant to refer to pre-verbal intuitions in general, not just ones accompanied by physical sensation.
Matt Goldenberg:
Also in particular - felt sense refers to the qualia related to intuitions, rather than the intuitions themselves.
Adam Scholl:
(Unsure, but I'm suspicious that the distinction between these two things might not be clear).
Matt Goldenberg:
Yes, I think there's a distinction between the semantic content of an intuition ("My intuition is that Design A is better than Design B", i.e., how the intuition "caches out" in terms of decisions) and the felt sense, which always seems to refer to what the intuition is like "from the inside": for example, a sense of unease when looking at Design A, and rightness when looking at Design B. I feel like using the word "intuition" can refer to both the latter and the former, whereas when I say "felt sense" it always refers to the latter.

"For example, we spent a bunch of time circling for a while"

Does this imply that CFAR now spends substantially less time circling? If so and there's anything interesting to say about why, I'd be curious.

CFAR does spend substantially less time circling now than it did a couple years ago, yeah. I think this is partly because Pete (who spent time learning about circling when he was younger, and hence found it especially easy to notice the lack of circling-type skill among rationalists, much as I spent time learning about philosophy when I was younger and hence found it especially easy to notice the lack of philosophy-type skill among AIRCS participants) left, and partly I think because many staff felt like their marginal skill returns from circling practice were decreasing, so they started focusing more on other things.

Whether CFAR staff (qua CFAR staff, as above) will help educate people who later themselves produce explicit knowledge in the manner valued by Gwern, Wei Dai, or Scott Alexander, and who wouldn’t have produced (as much of) that knowledge otherwise.

This seems like a good moment to publicly note that I probably would not have started writing my multi-agent sequence without having a) participated in CFAR's mentorship training and b) had conversations with/about Val and his posts.

With regard to whether our staff has read the sequences: five have, and have been deeply shaped by them; two have read about a third, and two have read little. I do think it’s important that our staff read them, and we decided to run this experiment with sabbatical months next year in part to ensure our staff had time to do this over the coming year.

I honestly think, in retrospect, that the linchpin of early CFAR's standard of good shared epistemics was probably Critch.

I, too, believe that Critch played a large and helpful role here.

Note that Val's confusion seems to have been because he misunderstood Oli's point.

https://www.lesswrong.com/posts/tMhEv28KJYWsu6Wdo/kensh?commentId=SPouGqiWNiJgMB3KW#SPouGqiWNiJgMB3KW

(apologies for this only sort-of being a question, and for perhaps being too impressed with the cleverness of my metaphor at the expense of clarity)

I have a vague model that's something like (in programming terms): 

  • the original LessWrong sequences were the master branch of a codebase (in terms of being a coherent framework for evaluating the world and making decisions)
  • CFAR forked that codebase into (at least one) private repo and did a bunch of development on it, kinda going off in a few divergent directions. My impression that was "the CFAR dev branch" is more introspection-focused, and "internal alignment" focused.
  • Many "serious rationalist" I know (including myself) have incorporated some of elements from "the CFAR dev branch" into their epistemogy (and overall worldview).
    • (Although, one person said they got more from Leverage than CFAR.)
  • In the past couple years, there's a bit of confusion on LessWrong (and adjacent spaces) about what exactly the standards are, with (some) longterm members offhandedly referring to concepts that haven't been written up in longform, and with unclear epistemic tagging.
  • Naively attempting to merge the latest dev branch back into "Sequences Era Le
... (read more)

Re: 1—“Forked codebases that have a lot in common but are somewhat tricky to merge” seems like a pretty good metaphor to me.

The question I'd like to answer that is near your questions is: "What is the minimal patch/bridge that will let us use all of both codebases without running into merge conflicts?"

We do have a candidate answer to this question, which we’ve been trying out at AIRCS to reasonable effect. Our candidate answer is something like: an explicit distinction between “tacit knowledge” (inarticulate hunches, early-stage research intuitions, the stuff people access and see in one another while circling, etc.) and the “explicit” (“knowledge” worthy of the name, as in the LW codebase—the thing I believe Ben Pace is mainly gesturing at in his comment above).

Here’s how we explain it at AIRCS:

  • By “explicit” knowledge, we mean visible-to-conscious-consideration denotative claims that are piecewise-checkable and can be passed explicitly between humans using language.
    • Example: the claim “Amy knows how to ride a bicycle” is explicit.
  • By “tacit” knowledge, we mean stuff that allows you to usefully navigate the world (and so contains implicit information about the world, and can b
... (read more)

People taking “I know it in my gut” as zero-value, and attempting to live via the explicit only. My sense is that some LessWrong users like Said_Achmiz tend to err in this direction.

This is not an accurate portrayal of my views.

Raemon:
I’d be particularly interested, in this context, if you are up for clarifying what your views are here.

I’d be happy to, except that I’m not sure quite what I need to clarify.

I mean, it’s just not true that I consider “tacit” knowledge (which may, or may not be, the same thing as procedural knowledge—but either way…) to be “zero-value”. That isn’t a thing that I believe, nor is it adjacent to some similar thing that I believe, nor is it a recognizable distortion of some different thing that I believe.

For instance, I’m a designer, and I am quite familiar with looking at a design, or design element, and declaring that it is just wrong, or that it looks right this way and not that way; or making something look a certain way because that’s what looks good and right; etc., etc. Could I explicitly explain the precise and specific reason for every detail of every design decision I make? Of course not; it’s absurd even to suggest it. There is such a thing as “good taste”, “design sense”, etc. You know quite well, I’m sure, what I am talking about.

So when someone says that I attempt to live via the explicit only, and take other sorts of knowledge as having zero value—what am I to say to that? It isn’t true, and obviously so. Perhaps Anna could say a bit about what led her to this conclusion about my views. I am happy to comment further; but as it stands, I am at a loss.

For what it's worth, I think that saying "Person X tends to err in Y direction" does not mean "Person X endorses or believes Y".

Said Achmiz:
If what Anna meant was “Said undervalues ‘gut’ knowledge, relative to explicit knowledge”… well, that is, of course, not an obviously false or absurd claim; but what she wrote is an odd way of saying it.

I have reread the relevant section of Anna’s comment several times, and it is difficult to read it as simply a note that certain people (such as, ostensibly, myself) are merely on somewhat the wrong point along a continuum of placing relative value on this vs. that form of knowledge; it is too banal and straightforward a point, to need to be phrased in such a way as Anna phrased it.

But then, this is getting too speculative to be useful. Perhaps Anna can clarify what she meant.
Matt Goldenberg:
If it helps for your own calibration of how you come across, there was a thread a while back where I expressed indignation at the phrase "Overcoming intuitions" and you emphatically agreed. I remember being surprised that you agreed, and having to update my model of your beliefs.
Said Achmiz:
Can you think of an example of something I said that led you to that previous, pre-update model?
Matt Goldenberg:
I can't, but here's an example from this same thread: https://www.lesswrong.com/posts/96N8BT9tJvybLbn5z/we-run-the-center-for-applied-rationality-ama#HgQCE8aHctKjYEWHP

In this comment, you explicitly understood and agreed with the material that was teaching explicit knowledge (philosophy), but objected to the material designed to teach intuitions (circling).
Said Achmiz:
Surely you can see how this does not at all imply that I object to intuition, yes? Logically, after all, there are at least three other possibilities:

  1. That I don’t believe that intuitions can be taught; or…
  2. That I don’t believe that this particular approach (circling) is good for teaching intuitions; or…
  3. That I object to circling for reasons unrelated to the (purported) fact that it teaches intuitions.

(There are other, subtler, possibilities; but these three are the obvious ones.)

The conclusion that I have something against intuitions, drawn from the observation that I am skeptical of circling in particular (or any similar thing), seems to me to be really quite unwarranted.
Matt Goldenberg:
Yes. If you're wondering, I basically updated more towards #1. I wouldn't call the conclusion unwarranted by the way, it's a perfectly valid interpretation of seeing this sort of stance from you, it was simply uninformed.
Said Achmiz:
How does your “tacit vs. explicit” dichotomy relate to the “procedural vs. declarative” dichotomy? Are they identical? (If so, why the novel terminology?) Are they totally orthogonal? Some other relationship?
Matt Goldenberg:
Explicit vs. tacit knowledge isn't a CFAR concept, and is pretty well established in the literature. Here's an example: https://www.basicknowledge101.com/pdf/km/KM_roles.pdf

Some notes, for my own edification and that of anyone else curious about all this terminology and the concepts behind it.

Some searching turns up an article by one Fred Nickols, titled “The Knowledge in Knowledge Management” [PDF]. (As far as I can tell, “knowledge management” seems to be a field or topic of study that originates in the world of business consulting; and Fred Nickols is a former executive at a consulting firm of some sort.)

Nickols offers the following definitions:

[Figure: flowchart from Nickols's paper defining explicit, implicit, and tacit knowledge]

Explicit knowledge, as the first word in the term implies, is knowledge that has been articulated and, more often than not, captured in the form of text, tables, diagrams, product specifications and so on. … An example of explicit knowledge with which we are all familiar is the formula for finding the area of a rectangle (i.e., length times width). Other examples of explicit knowledge include documented best practices, the formalized standards by which an insurance claim is adjudicated and the official expectations for performance set forth in written work objectives.

Tacit knowledge is knowledge that cannot be articulated. As Michael Polanyi (1997), the chemist-turned-philosopher who coined the term put

... (read more)
Raemon:
Thanks! (I'm assuming you made the diagrams?)
Said Achmiz:
Oh, no. The diagrams are taken from the paper; they’re in the PDF I linked.

EDIT: Which paper is, by the way, quite worth reading; it’s written in an exceptionally clear and straightforward way, and gets right to the heart of all relevant matters. I was very impressed, truth be told. I could’ve usefully quoted much more, but then I’d just be pasting the whole paper (which, in addition to its other virtues, is mercifully short).
Raemon:
Huh, I thought I skimmed the paper and didn’t see diagrams, but somehow missed them I guess.
Jan Kulveit:
I like the metaphor! Just wanted to note: in my view the original LW Sequences are not functional as a stand-alone upgrade for almost any human mind, and you can empirically observe it. You can think about any LW meet-up group around the world as an experiment, and I think to a first approximation it's fair to say that aspiring Rationalists running just on the Sequences do not win, and that the good stuff coming out of the rationalist community was critically dependent on the presence of minds like Eliezer's and others'. (This is not to say the Sequences are not useful in many ways.)
Matt Goldenberg:
I agree with your conclusion here, but think that this is an exceptionally harsh experiment. I conjecture that basically any meetup group, no matter what source they're using, won't empirically lead to most people who attend it "winning". Either it would drive most people away because it's too intense, or it would not be focused and intense enough to actually make a difference.

Also, the meetup groups are selected against for agency and initiative, because, for better or for worse, the most initiative taking people often pick up and move to the hubs in the Bay or in Oxford.

Kaj_Sotala:
Or are just otherwise too busy with their life to have the time for meetups.

What is CFAR's goal/purpose/vision/raison d'etre? Adam's post basically said "we're bad at explaining it", and an AMA sounds like a good place to at least attempt an explanation.

My closest current stab is that we’re the “Center for Bridging between Common Sense and Singularity Scenarios.” (This is obviously not our real name. But if I had to grab a handle that gestures at our raison d’etre, at the moment I’d pick this one. We’ve been internally joking about renaming ourselves this for some months now.)

To elaborate: thinking about singularity scenarios is profoundly disorienting (IMO, typically worse than losing a deeply held childhood religion or similar). Folks over and over again encounter similar failure modes as they attempt this. It can be useful to have an institution for assisting with this -- collecting concepts and tools that were useful for previous waves who’ve attempted thought/work about singularity scenarios, and attempting to pass them on to those who are currently beginning to think about such scenarios.

Relatedly, the pattern of thinking required for considering AI risk and related concepts at all is pretty different from the patterns of thinking that suffice in most other contexts, and it can be useful to have a group that attempts to collect these and pass them forward.

Further, it can be useful to figure out how the heck to do teams

... (read more)

Examples of some common ways that people sometimes find Singularity scenarios disorienting:

When a person loses their childhood religion, there’s often quite a bit of bucket error. A person updates on the true fact “Jehovah is not a good explanation of the fossil record” and accidentally confuses that true fact with any number of other things, such as “and so I’m not allowed to take my friends’ lives and choices as real and meaningful.”

I claimed above that “coming to take singularity scenarios seriously” seems in my experience to often cause even more disruption / bucket errors / confusions / false beliefs than does “losing a deeply held childhood religion.” I’d like to elaborate on that here by listing some examples of the kinds of confusions/errors I often encounter.

None of these are present in everyone who encounters Singularity scenarios, or even in most people who encounter it. Still, each confusion below is one where I’ve seen it or near-variants of it multiple times.

Also note that all of these things are “confusions”, IMO. People semi-frequently have them at the beginning and then get over them. These are not the POV I would recommend or consider correct -- more like the

... (read more)

and makes it somewhat plausible why I’m claiming that “coming to take singularity scenarios seriously can be pretty disruptive to common sense,” and such that it might be nice to try having a “bridge” that can help people lose less of the true parts of common sense as their world changes

Can you say a bit more about how CFAR helps people do this? Some of the "confusions" you mentioned are still confusing to me. Are they no longer confusing to you? If so, can you explain how that happened and what you ended up thinking on each of those topics? For example lately I'm puzzling over something related to this:

Given this, should I get lost in “what about simulations / anthropics” to the point of becoming confused about normal day-today events?

Howie Lempel:
[Possibly digging a bit too far into the specifics, so no worries if you'd rather bow out.]

Do you think these confusions[1] are fairly evenly dispersed throughout the community (besides what you already mentioned: "People semi-frequently have them at the beginning and then get over them.")?

Two casual observations: (A) the confusions seem less common among people working full-time at EA/Rationalist/x-risk/longtermist organisations than in other people who "take singularity scenarios seriously."[2] (B) I'm very uncertain, but they also seem less prevalent to me in the EA community than the rationalist community (to the extent the communities can be separated).[3][4]

Do A and B sound right to you? If so, do you have a take on why that is? If A or B *are* true, do you think this is in any part caused by the relative groups taking the singularity [/x-risk/the future/the stakes] less seriously? If so, are there important costs from this?

[1] Using your word while withholding my own judgment as to whether every one of these is actually a confusion.

[2] If you're right that a lot of people have them at the beginning and then get over them, a simple potential explanation would be that by the time you're working at one of these orgs, that's already happened. Other hypotheses: (a) selection effects; (b) working FT in the community gives you additional social supports and makes it more likely others will notice if you start spiraling; (c) the cognitive dissonance with the rest of society is a lot of what's doing the damage. It's easier to handle this stuff psychologically if the coworkers you see every day also take the singularity seriously.[i]

[3] For example perhaps less common at Open Phil, GPI, 80k, and CEA than CFAR and MIRI, but I also think this holds outside of professional organisations.

[4] One potential reason for this is that a lot of EA ideas are more "in the air" than rationalist/singularity ones. So a lot of EAs may have had their 'crisis of faith' befo
Howie Lempel:
This seemed really useful. I suspect you're planning to write up something like this at some point down the line but wanted to suggest posting this somewhere more prominent in the meantime (otoh, idea inoculation, etc.)
Artyom Kazak:
The state of confusion you're describing sounds a lot like Kegan's 4.5 nihilism (pretty much everything at meaningness.com is relevant). A person's values have been demolished by a persuasive argument, but they haven't yet internalized that people are "allowed" to create their own systems and values.

Alright.

  1. I assume that LW-adjacent people should actually be better at guiding people out of this stage, because a lot of people in the community have gone through the same process and there is an extensive body of work on the topic (Eliezer's sequences on human values, David Chapman's work, Scott Alexander's posts on effective altruism / axiology-vs-morality / etc).
  2. I also assume that in general we want people to go through this process – it is a necessary stage of adult development.

Given this, I'm leaning towards "guiding people towards nihilism is good as long as you don't leave them in the philosophical dark re/ how to get out of it". So, taking a random smart person, persuading them they should care about Singularity, and leaving – this isn't great. But introducing people to AI risk in the context of LW seems much more benign to me.
We’ve been internally joking about renaming ourselves this for some months now.

I'm not really joking about it. I wish the name better expressed what the organization does.

Though I admit that CfBCSSS leaves a lot to be desired in terms of acronyms.

I nominate "Society of Effective Epistemics For AI Risk" or SEE-FAR for short.

:) There's something good about "common sense" that isn't in "effective epistemics", though -- something about wanting not to lose the robustness of the ordinary vetted-by-experience functioning patterns. (Even though this is really hard, plausibly impossible, when we need to reach toward contexts far from those in which our experiences were based.)

Eli Tyre:
This is the best idea I've heard yet. It would be pretty confusing to people, and yet...
AnnaSalamon:
To clarify: we're not joking about the need to get "what we do" and "what people think we do" more in alignment, via both communicating better and changing our organizational name if necessary. We put that on our "goals for 2020" list (both internally, and in our writeup). We are joking that CfBCSSS is an acceptable name (due to its length making it not-really-that). (Eli works with us a lot but has been taking a leave of absence for the last few months and so didn't know that bit, but lots of us are not-joking about getting our name and mission clear.)
7Howie Lempel4y
[I realise there might not be precise answers to a lot of these but would still be interested in a quick take on any of them if anybody has one.] Within CFAR, how much consensus is there on this vision? How stable/likely to change do you think it is? How long has this been the vision for (alternatively, how long have you been playing with this vision for)? Is it possible to describe what the most recent previous vision was?
4habryka4y
Typo: My guess is that the N should be replaced with a number, and the sentence wasn't intended to trail off like that. 

Thus spake Eliezer: "Every Cause Wants to be a Cult".

An organization promising life-changing workshops/retreats seems especially high-risk for cultishness, or at least pattern matches on it pretty well. We know the price of retaining sanity is vigilance. What specific, concrete steps are you at CFAR taking to resist the cult attractor?

What are the LessWrong posts that you wish you had the time to write?

Here’s a very partial list of blog post ideas from my drafts/brainstorms folder. Outside view, though, if I took the time to try to turn these into blog posts, I’d end up changing my mind about more than half of the content in the process of writing it up (and then would eventually end up with blog posts with somewhat different theses).

I’m including brief descriptions with the awareness that my descriptions may not parse at this level of brevity, in the hopes that they’re at least interesting teasers.

Contra-Hodgel

  • (The Litany of Hodgell says “That which can be destroyed by the truth should be”. Its contrapositive therefore says: “That which can destroy [that which should not be destroyed] must not be the full truth.” It is interesting and sometimes-useful to attempt to use Contra-Hodgel as a practical heuristic: “if adopting belief X will meaningfully impair my ability to achieve good things, there must be some extra false belief or assumption somewhere in the system, since true beliefs and accurate maps should just help (e.g., if “there is no Judeo-Christian God” in practice impairs my ability to have good and compassionate friendships, perhaps there is some false belief somew
... (read more)
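(A minimal formalization of the Contra-Hodgel move described above, in case the contrapositive is easier to see written out; the predicate names are just my own illustrative shorthand, not anything CFAR uses:)

```latex
% Litany of Hodgell, schematically: for any belief or attachment x,
% "x can be destroyed by the truth" implies "x should be destroyed":
\[ \forall x \;\big( D_{\mathrm{truth}}(x) \rightarrow S(x) \big) \]
% Contrapositive ("Contra-Hodgel"): if x should not be destroyed, then the
% truth cannot destroy it -- so whatever does destroy it is not the full truth:
\[ \forall x \;\big( \lnot S(x) \rightarrow \lnot D_{\mathrm{truth}}(x) \big) \]
```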
The need to coordinate in this way holds just as much for consequentialists as for anyone else.

I have a strong heuristic that I should slow down and throw a major warning flag if I am doing (or recommending that someone else do) something I believe would be unethical if done by someone not aiming to contribute to a super high impact project. I (weakly) believe more people should use this heuristic.

Some off the top of my head.

  • A bunch of Double Crux posts that I keep promising but am very bad at actually finishing.
  • The Last Term Problem (or why saving the world is so much harder than it seems) - An abstract decision-theoretic problem that has confused me about taking actions at all for the past year.
  • A post on how the commonly cited "Introspection is Impossible" paper (Nisbett and Wilson) is misleading.
  • Two takes on confabulation - About how the Elephant in the Brain thesis doesn't imply that we can't tell what our motivations actually are, just that we aren't usually motivated to.
  • A lit review on mental energy and fatigue.
  • A lit review on how attention works.

Most of my writing is either private strategy documents, or spur of the moment thoughts / development-nuggets that I post here.

Can you too-tersely summarize your Nisbett and Wilson argument?

Or, like... write a teaser / movie trailer for it, if you're worried your summary would be incomplete or inoculating?

This doesn't capture everything, but one key piece is "People often confuse a lack of motivation to introspect with a lack of ability to introspect. The fact of confabulation does not demonstrate that people are unable to articulate what's actually happening in principle." Very related to the other post on confabulation I note above.

Also, if I remember correctly, some of the papers in that meta-analysis just have silly setups: testing whether people can introspect into information that they couldn't have access to. (Possible that I misunderstood or am misremembering.)

To give a short positive account:

  • All introspection depends on comparison between mental states at different points in time. You can't introspect on some causal factor that doesn't vary.
  • Also, the information has to be available at the time of introspection, ie still in short term memory.
  • But that gives a lot more degrees of freedom than people seem to predict, and in practice I am able to notice many subtle intentions (such as when my behavior is motivated by signalling) that others want to throw out as unknowable.

This isn’t a direct answer to, “What are the LessWrong posts that you wish you had the time to write?” It is a response to a near-by question, though, which is probably something along the lines of, “What problems are you particularly interested in right now?” which is the question that always drives my blogging. Here’s a sampling, in no particular order.

[edit: cross-posted to Ray's Open Problems post.]

There are things you’re subject to, and things you can take as object. For example, I used to do things like cry when an ambulance went by with its siren on, or say “ouch!” when I put a plate away and it went “clink”, yet I wasn’t aware that I was sensitive to sounds. If asked, “Are you sensitive to sounds?” I’d have said “No.” I did avoid certain sounds in local hill-climby ways, like making music playlists with lots of low strings but no trumpets, or not hanging out with people who speak loudly. But I didn’t “know” I was doing these things; I was *subject* to my sound sensitivity. I could not take it as *object*, so I couldn’t delib... (read more)

I have a Google Doc full of ideas. Probably I'll never write most of these, and if I do probably much of the content will change. But here are some titles, as they currently appear in my personal notes:

  • Mesa-Optimization in Humans
  • Primitivist Priors v. Pinker Priors
  • Local Deontology, Global Consequentialism
  • Fault-Tolerant Note-Scanning
  • Goal Convergence as Metaethical Crucial Consideration
  • Embodied Error Tracking
  • Abnormally Pleasurable Insights
  • Burnout Recovery
  • Against Goal "Legitimacy"
  • Computational Properties of Slime Mold
  • Steelmanning the Verificationist Criterion of Meaning
  • Manual Tribe Switching
  • Manual TAP Installation
  • Keep Your Hobbies

I don’t think that time is my main constraint, but here are some of my blog post shaped ideas:

  • Taste propagates through a medium
  • Morality: do-gooding and coordination
  • What to make of ego depletion research
  • Taboo "status"
  • What it means to become calibrated
  • The NFL Combine as a case study in optimizing for a proxy
  • The ability to paraphrase
  • 5 approaches to epistemics

Why did so many of the initial CFAR employees decide to leave the organization?

My guesses, in no particular order:

  • Being a first employee is pretty different from being in a middle-stage organization. In particular, the opportunity to shape what will come has an appeal that can, I think rightly, bring in folks whom you can't always get later. (Folks present base rates for various reference classes below; I don't know if anyone has one for "founding" vs "later" in small organizations?)

    • Relatedly, my initial guess back in ~2013 (a year in) was that many CFAR staff members would “level up” while they were here and then leave, partly because of that level-up (on my model, they’d acquire agency and then ask if being here as one more staff member was or wasn’t their maximum-goal-hitting thing). I was excited about what we were teaching and hoped it could be of long-term impact to those who worked here a year or two and left, as well as to longer-term people.
  • I and we intentionally hired for diversity of outlook. We asked ourselves: “does this person bring some component of sanity, culture, or psychological understanding -- but especially sanity -- that is not otherwise represented here yet?” And this… did make early CFAR fertile, and also made it an unusual

... (read more)
9ChristianKl4y
I'm curious about that. It seems like a new point for me. What concrete negative effects have you seen there?

(This is Dan, from CFAR since 2012)

Working at CFAR (especially in the early years) was a pretty intense experience, which involved a workflow that regularly threw you into these immersive workshops, and also regularly digging deeply into your thinking and how your mind works and what you could do better, and also trying to make this fledgling organization survive & function. I think the basic thing that happened is that, even for people who were initially really excited about taking this on, things looked different for them a few years later. Part of that is personal, with things like burnout, or feeling like they’d gotten their fill and had learned a large chunk of what they could from this experience, or wanting a life full of experiences which were hard to fit in to this (probably these 3 things overlap). And part of it was professional, where they got excited about other projects for doing good in the world while CFAR wanted to stay pretty narrowly focused on rationality workshops.

I’m tempted to try to go into more detail, but it feels like that would require starting to talk about particular individuals rather the set of people who were involved in early CFAR and I feel weird about that.

7Adam Scholl4y
So I’m imagining there might be both a question (for what types of reasons have CFAR staff left?) and a claim (CFAR’s rate of turnover is unusual) here. Anna should be able to better address the question, but with regard to the claim: I think it’s true, at least relative to average U.S. turnover. The median length Americans spend in jobs is 4.2 years, while the median length CFAR employees have stayed in their jobs is 2.2 years; 32% of our employees (7 people) left within their first year.

I am not fully sure about the correct reference class here, but employee turnover in Silicon Valley is generally very high, so that might also explain some part of the variance: https://www.inc.com/business-insider/tech-companies-employee-turnover-average-tenure-silicon-valley.html

6ChristianKl4y
I would guess that many startups in Silicon Valley make big promises about changing the world, and then when their employees spend a year at the company, they see that there's little meaning in the work they are doing. If people at CFAR don't keep working at CFAR, that seems like a value judgment on CFAR not being that important.
I would guess that many startups in Silicon Valley make big promises about changing the world, and then when their employees spend a year at the company, they see that there's little meaning in the work they are doing.

I've worked for 4-6 silicon valley startups now (depending on how we count it), and this has generally not been my experience. For me and most of the people I've worked with, staying in one job for a long time just seems weird. Moving around frequently is how you grow fastest and keep things interesting; people in startups see frequent job-hopping as normal, and it's the rest of the world that's strange.

That said, I have heard occasional stories about scammy startups who promise lots of equity and then suck. My impression is that they generally lure in people who haven't been in silicon valley before; people with skills, who've done this for a little while, generally won't even consider those kinds of offers.

This explanation seems unlikely to me. More likely, it seems to me, are the highly competitive labor market (with a lot of organizations trying to outbid each other), the long work hours, and the fact that a lot of people make enough money that leaving their job for a while is not a super big deal. It's not an implausible explanation, but I don't think it explains the variance very well.

5ChristianKl4y
If you primarily work to earn a paycheck, then you can easily switch around to another organization that pays more money. If you strongly believe in a certain organization having a mission that's very important, it's harder to change. The personal development workshops I usually attend are taught by people who have more than two decades of teaching experience and likely more than 20,000 hours of time refining their skills behind them. From what I hear from CFAR about the research they are doing, a lot of it is hard to transfer from one head to another. It seems that if you have a median tenure of 2 years, most of the research gets lost and nobody will develop 10,000 hours in the domain.
8Adam Scholl4y
Well, I think it can both be the case that a given staff member thinks the organization's mission is important, and also that, due to their particular distribution of comparative advantages, current amount of burnout, etc., it would be on net better for them to work elsewhere. And I think most of our turnover has resulted from considerations like this, rather than from e.g. people deciding CFAR's mission was doomed. I think the concern about short median tenure leading to research loss makes sense, and has in fact occurred some. But I'm not that worried about it, personally, for a few reasons:

  • This cost is reduced because we're in the teaching business. That is, relative to an organization that does pure research, we're somewhat better positioned to transfer institutional knowledge to new staff, since much of the relevant knowledge has already been heavily optimized for easy transferability.
  • There's significant benefit to turnover, too. I think the skills staff develop while working at CFAR are likely to be useful for work at a variety of orgs; I feel excited about the roles a number of former staff are playing elsewhere, and expect I'll be excited about future roles our current staff play elsewhere too.
  • Many of our staff already have substantial "work-related experience," in some sense, before they're hired. For example, I spent a bunch of time in college reading LessWrong, trying to figure out metaethics, etc., which I think helped me become a better CFAR instructor than I might have been otherwise. I expect many lesswrongers, for example, have already developed substantial skill relevant to working effectively at CFAR.
5denkenberger4y
Note that that statistic is how long people have been in their current job, not how long they will stay in their current job total. If everyone stayed in their jobs for 40 years, and you did a survey of how long people have been in their job, the median will come out to 20 years. I have not found hard data for the number we actually want, but this indicates that the median time that people stay in their jobs is about eight years, though it would be slightly shorter for younger people.
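(A minimal sketch of the arithmetic behind that point, assuming a steady-state workforce with a constant hiring rate; the symbols are illustrative only:)

```latex
% If every employee eventually stays exactly T years and hiring is spread
% evenly over time, then at a random survey moment the elapsed tenure of a
% randomly chosen employee is uniform on [0, T], so
\[ \text{median elapsed tenure at the survey} = \tfrac{T}{2} \]
% e.g. an eventual tenure of T = 40 years shows up as a 20-year median in the
% snapshot, which is why snapshot medians understate total time in the job.
```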

What are your thoughts on Duncan Sabien's Facebook post which predicts significant differences in CFAR's direction now that he is no longer working for CFAR?

My rough guess is “we survived; most of the differences I could imagine someone fearing didn’t come to pass”. My correction on that rough guess is: “Okay, but insofar as Duncan was the main holder of certain values, skills, and virtues, it seems pretty plausible that there are gaps now today that he would be able to see and that we haven’t seen”.

To be a bit more specific: some of the poles I noticed Duncan doing a lot to hold down while he was here were:

  • Institutional accountability and legibility;
  • Clear communication with staff; somebody caring about whether promises made were kept; somebody caring whether policies were fair and predictable, and whether the institution was creating a predictable context where staff, workshop participants, and others wouldn’t suddenly experience having the rug pulled out from under them;
  • Having the workshop classes start and end on time; (I’m a bit hesitant to name something this “small-seeming” here, but it is a concrete policy that supported the value above, and it is easier to track)
  • Revising the handbook into a polished state;
  • Having the workshop classes make sense to people, have clear diagrams and a clear point, etc.; having polish and visi
... (read more)

On reading Anna's above answer (which seems true to me, and also satisfies a lot of the curiosity I was experiencing, in a good way), I noted a feeling of something like "reading this, the median LWer will conclude that my contribution was primarily just ops-y and logistical, and the main thing that was at threat when I left was that the machine surrounding the intellectual work would get rusty."

It seems worth noting that my model of CFAR (subject to disagreement from actual CFAR) is viewing that stuff as a domain of study, in and of itself—how groups cooperate and function, what makes up things like legibility and integrity, what sorts of worldview clashes are behind e.g. people who think it's valuable to be on time and people who think punctuality is no big deal, etc.

But this is not necessarily something super salient in the median LWer's model of CFAR, and so I imagine the median LWer thinking that Anna's comment means my contributions weren't intellectual or philosophical or relevant to ongoing rationality development, even though I think Anna-and-CFAR did indeed view me as contributing there, too (and thus the above is also saying something like "it turned out Duncan's disappearance didn't scuttle those threads of investigation").

I agree very much with what Duncan says here. I forgot I need to point that kind of thing out explicitly. But a good bit of my soul-effort over the last year has gone into trying to inhabit the philosophical understanding of the world that can see as possibilities (and accomplish!) such things as integrity, legibility, accountability, and creating structures that work across time and across multiple people. IMO, Duncan had a lot to teach me and CFAR here; he is one of the core models I go to when I try to understand this, and my best guess is that it is in significant part his ability to understand and articulate this philosophical pole (as well as to do it himself) that enabled CFAR to move from the early-stage pile of un-transferrable "spaghetti code" that we were when he arrived, to an institution with organizational structure capable of e.g. hosting instructor trainings and taking in and making use of new staff.

2ChristianKl4y
Reading this, I'm curious what CFAR's actual position on punctuality was before, and what it is now. Was it something like the Landmark package under your tenure?

(This is Dan, from CFAR since June 2012)

These are more like “thoughts sparked by Duncan’s post” rather than “thoughts on Duncan’s post”. Thinking about the question of how well you can predict what a workshop experience will be like if you’ve been at a workshop under different circumstances, and looking back over the years...

In terms of what it’s like to be at a mainline CFAR workshop, as a first approximation I’d say that it has been broadly similar since 2013. Obviously there have been a bunch of changes since January 2013 in terms of our curriculum, our level of experience, our staff, and so on, but if you’ve been to a mainline workshop since 2013 (and to some extent even before then), and you’ve also had a lifetime full of other experiences, your experience at that mainline workshop seems like a pretty good guide to what a workshop is like these days. And if you haven’t been to a workshop and are wondering what it’s like, then talking to people who have been to workshops since 2013 seems like a good way to learn about it.

More recent workshops are more similar to the current workshop than ... (read more)

What I get from Duncan’s FB post is (1) an attempt to disentangle his reputation from CFAR’s after he leaves, (2) a prediction that things will change due to his departure, and (3) an expression of frustration that more of his knowledge than necessary will be lost.

  1. It's a totally reasonable choice.
  2. At the time I first saw Duncan’s post I was more worried about big changes to our workshops from losing Duncan than I have observed since then. A year later I think the change is actually less than one would expect from reading Duncan’s post alone. That doesn’t speak to the cost of not having Duncan—since filling in for his absence means we have less attention to spend on other things, and I believe some things Duncan brought have not been replaced.
  3. I am also sad about this, and believe that I was the person best positioned to have caused a better outcome (smaller loss of Duncan’s knowledge and values). In other words I think Duncan’s frustration is not only understandable, but also pointing at a true thing.
8[DEACTIVATED] Duncan Sabien4y
(I expect the answer to 2 will still be the same from your perspective, after reading this comment, but I just wanted to point out that not all influences of a CFAR staff member cash out in things-visible-in-the-workshop; the part of my FB post that you describe as 2 was about strategy and research and internal culture as much as workshop content and execution. I'm sort of sad that multiple answers have had a slant that implies "Duncan only mattered at workshops/Duncan leaving only threatened to negatively impact workshops.")
7[DEACTIVATED] Duncan Sabien4y
I'd be curious for an answer to this one too, actually.
8Adam Scholl4y
To be honest I haven't noticed much change, except obviously for the literal absence of Duncan (which is a very noticeable absence; among other things Duncan is an amazing teacher, imo better than anyone currently on staff).

At this point, you guys must have sat down with 100s of people for 1000s of hours of asking them how their mind works, prodding them with things, and seeing how they turn out like a year later. What are some things about how a person thinks that you tend to look out for as especially positive (or negative!) signs, in terms of how likely they are in the future to become more agentic? (I'd be interested in concrete things rather than attempts to give comprehensive-yet-vague answers.)

I've heard a lot of people say things along the lines of "CFAR no longer does original research into human rationality." Does that seem like an accurate characterization? If so, why is it the case that you've moved away from rationality research?

Hello, I am a CFAR contractor who considers nearly all of their job to be “original research into human rationality”. I don’t do the kind of research many people imagine when they hear the word “research” (RCT-style verifiable social science, and such). But I certainly do systematic inquiry and investigation into a subject in order to discover or revise beliefs, theories, applications, etc. Which is, you know, literally the dictionary.com definition of research.

I’m not very good at telling stories about myself, but I’ll attempt to describe what I do during my ordinary working hours anyway.

All of the time, I keep an eye out for things that seem to be missing or off in what I take to be the current art of rationality. Often I look to what I see in the people close to me, who are disproportionately members of rationality-and-EA-related organizations, watching how they solve problems and think through tricky stuff and live their lives. I also look to my colleagues at CFAR, who spend many many hours in dialogue with people who are studying rationality themselves, for the first time or on a continuing basis. But since my eyes are in my ow... (read more)

9Grue_Slinky4y
How do CFAR's research interests/priorities compare with LW's Open Problems in Human Rationality? Based on Brienne and Anna's replies here, I suspect the answer is "they're pretty different", but I'd like to hear what accounts for this divergence.

I quite like the open questions that Wei Dai wrote there, and I expect I'd find progress on those problems to be helpful for what I'm trying to do with CFAR. If I had to outline the problem we're solving from scratch, though, I might say:

  • Figure out how to:
    • use reason (and stay focused on the important problems, and remember “virtue of the void” and “lens that sees its own flaws”, and be quick where you can) without
    • going nutso, or losing humane values, and while:
    • being able to coordinate well in teams.

Wei Dai’s open problems feel pretty relevant to this!

I think in practice this goal leaves me with subproblems such as:

  • How do we un-bottleneck “original seeing” / hypothesis-generation;
  • What is the “it all adds up to normality” skill based in; how do we teach it;
  • Where does “mental energy” come from in practice, and how can people have good relationships to this;
  • What’s up with people sometimes seeming self-conscious/self-absorbed (in an unfortunate, slightly untethered way) and sometimes seeming connected to “something to protect” outside themselves?
    • It seems to me that “something to protect” makes people more robustly mentally healthy. Is that true? If so why? Also how d
... (read more)
1Howie Lempel4y
I'm not sure I understand what you mean by "something to protect." Can you give an example? [Answered by habryka]
4habryka4y
Presumable it's a reference to: https://www.lesswrong.com/posts/SGR4GxFK7KmW7ckCB/something-to-protect 
3Howie Lempel4y
Thanks! forgot about that post.
6Ben Pace4y
If Brienne wanted to give their own answer to that post, even if it was incomplete, I'd be very excited about that.
4Ben Pace4y
Hurrah! :D

My model is that CFAR is doing the same activity it was always doing, which one may or may not want to call “research”.

I’ll describe that activity here.  I think it is via this core activity (plus accidental drift, or accidental hill-climbing in response to local feedbacks) that we have generated both our explicit curriculum, and a lot of the culture around here.

Components of this core activity (in no particular order):

  1. We try to teach specific skills to specific people, when we think those skills can help them.  (E.g. goal-factoring; murphyjitsu; calibration training on occasion; etc.)
  2. We keep our eyes open while we do #1.  We try to notice whether the skill does/doesn’t match the student’s needs.  (E.g., is this so-called “skill” actually making them worse at something that we or they can see?  Is there a feeling of non-fit suggesting something like that? What’s actually happening as the “skill” gets “learned”?)
    • We call this noticing activity “seeking PCK” and spend a bunch of time developing it in our mentors and instructors.
  3. We try to stay in touch with some of our alumni after the workshop, and to notice what the long-term impacts seem to be (are t
... (read more)
4Adam Scholl4y
Also worth noting that there are a few different claims of the sort OP mentions that people make, I think. One thing people sometimes mean by this is “CFAR no longer does the sort of curriculum development which would be necessary to create an 'Elon Musk factory.'" CFAR never had the goal of hugely amplifying the general effectiveness of large numbers of people (which I’m happy about, since I’m not sure achieving that goal would be good). One should not donate to CFAR in order to increase the chances of an Elon Musk factory.

That is, we were always focused on high-intensity interventions for small numbers of people -- especially the people who are the very easiest to impact (have free time; smart and reflective; lucky in their educational background and starting position). We did not expect things to generalize to larger sets.

(Mostly. We did wonder about books and things for maybe impacting the epistemics (not effectiveness) of some larger number of people a small amount. And I do personally think that if there were ways to help with the general epistemics, wisdom, or sanity of larger sets of people, even if by a small amount, that would be worth meaningful tradeoffs to create. But we are not presently aiming for this (except in the broadest possible "keep our eyes open and see if we someday notice some avenue that is actually worth taking here" sense), and with the exception of helping to support Julia Galef's upcoming rationality book back when she was working here, we haven't ever attempted concrete actions aimed at figuring out how to impact larger sets of people).

I agree, though, that one should not donate to CFAR in order to increase the chances of an Elon Musk factory.

5johnswentworth4y
Do you have any advice on who to donate to in order to increase the chances of an Elon Musk factory?
4Matt Goldenberg4y
It seems like Paradigm Academy is trying to do something like create an Elon Musk Factory: http://paradigmacademy.co/ But then again, so is Y-combinator, and every other incubator, as well as pretty much every leadership retreat (ok maybe not the leadership retreats, because Elon Musk is a terrible leader, but they're trying to do something like create a factory for what people imagine Elon Musk to be like). It seems like a very competitive space to create an Elon Musk factory, because it's so economically valuable.
because Elon Musk is a terrible leader

This is a drive-by, but I don't believe this statement, based on the fact that Elon has successfully accomplished several hard things via the use of people organized in hierarchies (companies). I'm sure he has foibles, and it might not be fun to work for him, but he does get shit done.

2Matt Goldenberg4y
I claim that Elon has done this despite his leadership abilities. I think that it's possible to be a bad leader but an effective CEO.
5ChristianKl4y
It seems to me unclear what exactly you mean by the terms. What do you mean by leadership as compared to being a CEO?
2Matt Goldenberg4y
Leadership (as, for instance, leadership retreats are trying to teach it) is the intersection between management and strategy. Another way to put it: it's the discipline of getting people to do what's best for your organization.
4ChristianKl4y
Do you think that Elon doesn't get his employees to do what's best for his companies?
2Matt Goldenberg4y
I think he's bad at this. You can see this in some aspects of his companies. High micromanagement. High turnover. Disgruntled former employees.
4Adam Scholl4y
I'm not aware of existing organizations that seem likely to me to create such a factory.
3Ben Pace4y
I think that there are many rare and positive qualities of Musk that I try to emulate, and some rare qualities that are damaging and that I shouldn't emulate. Importantly, from many broad perspectives (like thinking that economic growth is a robust good) it's pretty weird to think that Elon Musk is bad. I presume you think Musk is pretty unilateralist and think that he probably did net damage with the building of OpenAI?
8Adam Scholl4y
I think Musk is impressive in many ways. I didn't really intend to express skepticism of him in particular, so much as of what might happen if one created loads more people as agenty as him. For example, I can easily imagine this accelerating capabilities progress relative to safety progress, which strikes me as bad.

What organisation, if it existed and ran independently of CFAR, would be the most useful to CFAR?

I wish someone would create good bay area community health. It isn't our core mission; it doesn't relate all that directly to our core mission; but it relates to the background environment in which CFAR and quite a few other organizations may or may not end up effective.

One daydream for a small institution that might help some with this health is as follows:

  1. Somebody creates the “Society for Maintaining a Very Basic Standard of Behavior”;
  2. It has certain very basic rules (e.g. “no physical violence”; “no doing things that are really about as over the line as physical violence according to a majority of our anonymously polled members”; etc.)
  3. It has an explicit membership list of folks who agree to both: (a) follow these rules; and (b) ostracize from “community events” (e.g. parties to which >4 other society members are invited) folks who are in bad standing with the society (whether or not they personally think those members are guilty).
  4. It has a simple, legible, explicitly declared procedure for determining who has/hasn’t entered bad standing (e.g.: a majority vote of the anonymously polled membership of the society; or an anonymous vote of a smaller “jury” randomly chosen from
... (read more)
6mako yass4y
Why aren't there Knowers of Character who Investigate all Incidents Thoroughly Enough for The Rest of The Community to Defer To, already? Isn't that a natural role that many people would like to play? Is it just that the community hasn't explicitly formed consensus that the people who're already very close to being in that role can be trusted, and forming that consensus takes a little bit of work?

No; this would somehow be near-impossible in our present context in the bay, IMO; although Berkeley's REACH center and REACH panel are helpful here and solve part of this, IMO.

5sapphire4y
I would have a lot of trust in a vote. I seriously doubt we as a community would agree on a set of knowers I would trust. Also, some similar ideas have been tried and have gone horribly in at least some cases (e.g. the alumni dispute resolution council system). It is much harder for bad actors to subvert a vote than to subvert a small number of people.
4drethelin4y
I believe the reason why is that knowing everyone in the community would literally be a full-time job and no one wants to pay for that.

No; that isn't the trouble; I could imagine us getting the money together for such a thing, since one doesn't need anything like a consensus to fund a position. The trouble is more that at this point the members of the bay area {formerly known as "rationalist"} "community" are divided into multiple political factions, or perhaps more-chaos-than-factions, which do not trust one another's judgment (even about pretty basic things, like "yes, this person's actions are outside of reasonable behavioral norms"). It is very hard to imagine an individual or a small committee that people would trust in the right way. Perhaps even more so after that individual or committee tried ruling against someone who really wanted to stay, and that person attempted to create "fear, doubt, and uncertainty" or whatever about the institution that attempted to ostracize them.

I think something in this space is really important, and I'd be interested in investing significantly in any attempt that had a decent shot at helping. Though I don't yet have a strong enough read myself on what the goal ought to be.

4habryka4y
For whatever it's worth, my sense is that it's actually reasonably doable to build an institution/process that does well here, and gets trust from a large fraction of the community, though it is by no means an easy task. I do think it would likely require more than one full-time person, and at least one person of pretty exceptional skill in designing processes and institutions (as well as general competence). 
4Raemon4y
I think Anna roughly agrees (hence her first comment); she was just answering the question of "why hasn't this already been done?" I do think adversarial pressure (i.e. if you rule against a person they will try to sow distrust against you, and it's very stressful and time-consuming) is a reason that "reasonably doable" isn't really a fair description. It's doable, but quite hard, and a big commitment that I think is qualitatively different from other hard jobs.

CFAR relies heavily on selection effects for finding workshop participants. In general we do very little marketing or direct outreach, although AIRCS and MSFP do some of the latter; mostly people hear about us via word of mouth. This system actually works surprisingly (to me) well at causing promising people to apply.

But I think many of the people we would be most happy to have at a workshop probably never hear about us, or at least never apply. One could try fixing this with marketing/outreach strategies, but I worry this would disrupt the selection effects which I think have been a necessary ingredient for nearly all of our impact.

So I fantasize sometimes about a new organization being created which draws loads of people together, via selection effects similar to those which have attracted people to LessWrong, which would make it easier for us to find more promising people.

(I also—and this isn’t a wish for an organization, exactly, but it gestures at the kind of problem I speculate some organization could potentially help solve—sometimes fantasize about developing something like “scouts” at existing places with such selection effects. For example, a bunch of safety researchers competed in IMO/IOI when they were younger; I think it would be plausibly valuable for us to make friends with some team coaches, and for them to occasionally put us in touch with promising people).

What kind of people do you think never hear about CFAR but that you want to have at your workshops?

3Adam Scholl4y
I expect there are a bunch which never hear about us due to language barrier, and/or because they're geographically distant from most of our alumni. But I would be surprised if there weren't also lots of geographically-near, epistemically-promising people who've just never happened to encounter someone recommending a workshop.
5ChristianKl4y
It seems to me like being more explicit about what kind of people should be there would make it easier for other people to send them your way.

My impression is that CFAR has moved towards a kind of instruction where the goal is personal growth and increasing one's ability to think clearly about very personal/intuition-based matters, and puts significantly less emphasis on things like explicit probabilistic forecasting, that are probably less important but have objective benchmarks for success.

  1. Do you think that this is a fair characterisation?
  2. How do you think these styles of rationality should interact?
  3. How do you expect CFAR's relative emphasis on these styles of rationality to evolve over time?

I think it’s true that CFAR mostly moved away from teaching things like explicit probabilistic forecasting, and toward something else, although I would describe that something else differently—more like, skills relevant for hypothesis generation, noticing confusion, communicating subtle intuitions, updating on evidence about crucial considerations, and in general (for lack of a better way to describe this) “not going insane when thinking about x-risk.”

I favor this shift, on the whole, because my guess is that skills of the former type are less important bottlenecks for the problems CFAR is trying to help solve. That is, all else equal, if I could press a button to either make alignment researchers and the people who surround them much better calibrated, or much better at any of those latter skills, I’d currently press the latter button.

But I do think it’s plausible CFAR should move somewhat backward on this axis, at the margin. Some skills from the former category would be pretty easy to teach, I think, and in general I have some kelly betting-ish inclination to diversify the goals of our curricular portfolio, in case our core assumptions are wrong.

To be clear, this is not to say that those skills are bad, or even that they’re not an important part of rationality. More than half of the CFAR staff (at least 5 of the 7 current core staff, not counting myself, as a contractor) have personally trained their calibration, for instance.

In general, just because something isn’t in the CFAR workshop doesn’t mean that it isn’t an important part of rationality. The workshop is only 4 days, and not everything is well-taught in a workshop context (as opposed to [x] minutes of practice every day, for a year, or something like an undergraduate degree).

On the level of individual life outcomes, do you think CFAR outperforms other self help seminars like Tony Robbins, Landmark, Alethia, etc?

I think it would depend a lot on which sort of individual life outcomes you wanted to compare. I have basically no idea where these programs stand, relative to CFAR, on things like increasing participant happiness, productivity, relationship quality, or financial success, since CFAR mostly isn't optimizing for producing effects in these domains.

I would be surprised if CFAR didn't come out ahead in terms of things like increasing participants' ability to notice confusion, communicate subtle intuitions, and navigate pre-paradigmatic technical research fields. But I'm not sure, since in general I model these orgs as having sufficiently different goals than us that I haven't spent much time learning about them.

I'm not sure, since in general I model these orgs as having sufficiently different goals than us that I haven't spent much time learning about them.

Note that as someone who has participated in many other workshops, and who is very well read in other self-help schools, I think this is a clear blind spot and misstep of CFAR.

I think you would have discovered many other powerful concepts for running effective workshops, and been significantly further along with rationality techniques, if you had taken these other organizations seriously as both competition and sources of knowledge, and had someone on staff who spent a significant amount of time simply stealing from existing schools of thought.

9Eli Tyre4y
Well, there are a lot of things out there. Why did you promote these ones? CFAR staff have done a decent amount of trawling through self-help space; in particular, people did investigation that turned up Focusing, Circling, and IFS. There have also been other things that people around here tried, and haven't gone much further. Granted, this is not a systematic investigation of the space of personal development stuff, but that seems less promising to me than people thinking about particular problems (often personal problems, or problems that they've observed in the rationality and EA communities) and investigating known solutions or attempted solutions that relate to those problems.
3Matt Goldenberg4y
I don't think these ones in particular; I listed these as some of the most popular ones. I personally have gotten a lot out of a hybrid approach, where I find a problem, investigate the best relevant self-helpy solutions, then go down the rabbit hole of finding all the other things created by that person, and all of their sources, influences, and collaborators. I suspect someone whose job it is to do this could have a similar function as the "living library" role at MIRI (I'm not sure how exactly that worked for them though).
9Eli Tyre4y
Oh. Yeah. I think this is pretty good. When someone does something particularly good, I do try to follow up on all their stuff. And, I do keep track of the histories of the various lineages and where people came from and what influenced them. It's pretty interesting how many different things are descended from the same nodes. But, you know, limited time. I don't follow up on everything.
9Adam Scholl4y
To be clear, others at CFAR have spent time looking into these things, I think; Anna might be able to chime in with details. I just meant that I haven't personally.

I haven't done any of the programs you mentioned. And I'm pretty young, so my selection is limited. But I've done lots of personal development workshops and trainings, both before and after my CFAR workshop, and my CFAR workshop was far and away the densest in terms of content, and the most transformative on both my day-to-day processing and my life trajectory.

The only thing that compares are some dedicated, years long relationships with skilled mentors.

YMMV. I think my experience was an outlier.

(This is Dan from CFAR)

Warning: this sampling method contains selection effects.

9Eli Tyre4y
Hahahahah. Strong agree.

Moderator note: I've deleted six comments on this thread by users Ziz and Gwen_, who appear to be the primary people responsible for barricading off last month's CFAR alumni reunion, and who were subsequently arrested on multiple charges, including false imprisonment.

I explicitly don't want to judge the content of their allegations against CFAR, but both Ziz and Gwen_ have a sufficient track record of being aggressive offline (and Ziz also online) that I don't really want them around on LessWrong or to provide a platform to them. So I've banned them for the next 3 months (until March 19th), during which I and the other moderators will come to a more long-term decision about what to do about all of this. 

How much interesting stuff do you think there is in your curriculum that hasn't percolated into the community? What's stopping said percolation?

Is there something you find yourselves explaining over and over again in person, and that you wish you could just write up in an AMA once and for all where lots of people will read it, and where you can point people to in future?

Does CFAR "eat its own dogfood"? Do the cognitive tools help in running the organization itself? Can you give concrete examples? Are you actually outperforming comparable organizations on any obvious metric due to your "applied rationality"? (Why ain'tcha rich? Or are you?)

A response to just the first three questions. I’ve been at CFAR for two years (since January 2018). I've noticed, especially during the past 2-3 months, that my mind is changing. Compared to a year, or even 6 months ago, it seems to me that my mind more quickly and effortlessly moves in some ways that are both very helpful and resemble some of the cognitive tools we offer. There’s obviously a lot of stuff CFAR is trying to do, and a lot of techniques/concepts/things we offer and teach, so any individual’s experience needs to be viewed as part of a larger whole. With that context in place, here are a few examples from my life and work:

  • Notice the person I'm talking to is describing a phenomenon but I can't picture anything —> Ask for an example (Not a technique we formally teach at the workshop, but seems to me like a basic application of being specific. In that same vein: while teaching or explaining a concept, I frequently follow a concrete-abstract-concrete structure.)
  • I'm making a plan —> Walk through it, inner sim / murphyjitsu style (I had a particularly vivid and exciting instance of this a few weeks ago: I was packi
... (read more)

(Just responding here to whether or not we dogfood.)

I always have a hard time answering this question, and nearby questions, personally.

Sometimes I ask myself whether I ever use goal factoring, or seeking PCK, or IDC, and my immediate answer is “no”. That’s my immediate answer because when I scan through my memories, almost nothing is labeled “IDC”. It’s just a continuous fluid mass of ongoing problem solving full of fuzzy inarticulate half-formed methods that I’m seldom fully aware of even in the moment.

A few months ago I spent some time paying attention to what’s going on here, and what I found is that I’m using either the mainline workshop techniques, or something clearly descended from them, many times a day. I almost never use them on purpose, in the sense of saying “now I shall execute the goal factoring algorithm” and then doing so. But if I snap my fingers every time I notice a feeling of resolution and clarification about possible action, I find that I snap my fingers quite often. And if, after snapping my fingers, I run through my recent memories, I tend to find that I’ve just done goal f... (read more)

So, is CFAR rich?

I don’t really know, because I’m not quite sure what CFAR’s values are as an organization, or what its extrapolated volition would count as satisfaction criteria.

My guess is “not much, not yet”. According to what I think it wants to do, it seems to me like its progress on that is small and slow. It seems pretty disorganized and flaily much of the time, not great at getting the people it most needs, and not great at inspiring or sustaining the best in the people it has.

I think it’s *impressively successful* given how hard I think the problem really is, but in absolute terms, I doubt it’s succeeding enough.

If it weren’t dogfooding, though, it seems to me that CFAR would be totally non-functional.

Why would it be totally non-functional? Well, that’s really hard for me to get at. It has something to do with what sort of thing a CFAR even is, and what it’s trying to do. I *do* think I’m right about this, but most of the information hasn’t made it into the crisp kinds of thoughts I can see clearly and make coherent words about. I figured I’d just go ahead and post this anyhow, and y'all can make or not-make what you want of my intuitions.

More about why CFAR would be non-functional if it weren’t dogfooding:

As I said, my thoughts aren’t really in such a state that I know how to communicate them coherently. But I’ve often found that going ahead and communicating incoherently can nevertheless be valuable; it lets people’s implicit models interact more rapidly (both between people and within individuals), which can lead to developing explicit models that would otherwise have remained silent.

So, when I find myself in this position, I often throw a creative prompt to the part of my brain that thinks it knows something, and don’t bother trying to be coherent, just to start to draw out the shape of a thing. For example, if CFAR were a boat, what sort of boat would it be?

If CFAR were a boat, it would be a collection of driftwood bound together with twine. Each piece of driftwood was yanked from the shore in passing when the boat managed to get close enough for someone to pull it in. The riders of the boat are constantly re-organizing the driftwood (while standing on it), discarding parts (both deliberately and accidentally), and trying out variations on rudders and oars and sails. All the w... (read more)

I think we eat our own dogfood a lot. It’s pretty obvious in meetings—e.g., people do Focusing-like moves to explain subtle intuitions, remind each other to set TAPs, do explicit double cruxing, etc.

As to whether this dogfood allows us to perform better—I strongly suspect so, but I’m not sure what legible evidence I can give about that. It seems to me that CFAR has managed to have a surprisingly large (and surprisingly good) effect on AI safety as a field, given our historical budget and staff size. And I think there are many attractors in org space (some fairly powerful) that would have made CFAR less impactful, had it fallen into them, that it’s avoided falling into in part because its staff developed unusual skill at noticing confusion and resolving internal conflict.

I'm reading the replies of current CFAR staff with great interest (I'm a former staff member who ended work in October 2018), as my own experience within the org was "not really; to some extent yes, in a fluid and informal way, but I rarely see us sitting down with pen and paper to do explicit goal factoring or formal double crux, and there's reasonable disagreement about whether that's good, bad, or neutral."

5Timothy Telleen-Lawton4y
All of these answers so far (Luke, Adam, Duncan) resonate for me. I want to make sure I’m hearing you right though, Duncan. Putting aside the ‘yes’ or ‘no’ of the original question, do the scenes/experiences that Luke and Adam describe match what you remember from when you were here?
5[DEACTIVATED] Duncan Sabien4y
They do. The distinction seems to me to be something like endorsement of a "counting up" strategy/perspective versus endorsement of a "counting down" one, or reasonable disagreement about which parts of the dog food are actually beneficial to eat at what times versus which ones are Goodharting or theater or low payoff or what have you.
4Eli Tyre10mo
I wrote the following comment during this AMA back in 2019, but didn't post it because of the reasons that I note in the body of the comment.  I still feel somewhat unsatisfied with what I wrote. I think something about the tone feels wrong, or gives the wrong impression, somehow. Or maybe this only presents part of the story. But it still seems better to say aloud than not. I feel more comfortable posting it now, since I'm currently early in the process of attempting to build an organization / team that does meet these standards. In retrospect, I think probably it would have been better if I had just posted this at the time, and hashed out some disagreements with others in the org in this thread. (In some sense this comment is useful mainly as a bit of a window into the kind of standards that I, personally, hold a rationality-development / training organization to.) My original comment is reproduced verbatim below (plus a few edits for clarity). 

What are the most important considerations for CFAR with regards to whether or not to publish the Handbook?

Historically, CFAR had the following concerns (I haven't worked there since Oct 2018, so their thinking may have changed since then; if a current staff member gets around to answering this question you should consider their answer to trump this one):

  • The handbook material doesn't actually "work" in the sense that it can change lives; the workshop experience is crucial to what limited success CFAR *is* able to have, and there's concern about falsely offering hope
  • There is such a thing as idea inoculation; the handbook isn't perfect and certainly can't adjust itself to every individual person's experience and cognitive style. If someone gets a weaker, broken, or uncanny-valley version of a rationality technique out of a book, not only may it fail to help them in any way, but it will also make subsequently learning [a real and useful skill that's nearby in concept space] correspondingly more difficult, both via conscious dismissiveness and unconscious rounding-off.
  • To the extent that certain ideas or techniques only work in concert or as a gestalt, putting the document out on the broader internet where it will be chopped up and rearranged and quoted in chunks and riffed off of and likely misinterpreted, etc., might be worse than not putting it out at all.

Back in April, Oliver Habryka wrote:

Anna Salamon has reduced her involvement in the last few years and seems significantly less involved with the broader strategic direction of CFAR (though she is still involved in some of the day-to-day operations, curriculum development, and more recent CFAR programmer workshops). [Note: After talking to Anna about this, I am now less certain of whether this actually applies and am currently confused on this point]

Could someone clarify the situation? (Possible sub-questions: Why did Oliver get this impression? Why was he confused even after talking talking to Anna? To what extent and in what ways has Anna reduced her involvement in CFAR in the last few years? If Anna has reduced her involvement in CFAR, what is she spending her time on instead?)

I’ve worked closely with CFAR since its founding in 2012, with varying degrees of closeness (ranging from ~25 hrs/week to ~60 hrs/week). My degree of involvement in CFAR’s high-level and mid-level strategic decisions has varied some, but at the moment is quite high, and is likely to continue to be quite high for at least the coming 12 months.

During work-type hours in which I’m not working for CFAR, my attention is mostly on MIRI’s technical research. I do a good bit of work with MIRI (though I am not employed by MIRI -- I just do a lot of work with them), much of which also qualifies as CFAR work (e.g., running the AIRCS workshops and assisting with the MIRI hiring process; or hanging out with MIRI researchers who feel “stuck” about some research/writing/etc. type thing and want a CFAR-esque person to help them un-stick). I also do a fair amount of work with MIRI that does not much overlap CFAR (e.g. I am a MIRI board member).

Oliver remained confused after talking with me in April because in April I was less certain how involved I was going to be in upcoming strategic decisions. However, it turns out the answer was “lots.” I have a lot of hopes and vision for CFAR over t

... (read more)

What important thing do you believe about rationality, that most others in the rationality community do not?

I'd be interested in both

  • An organizational thesis level, i.e., a belief that guides the strategic direction of the organization
  • An individual level, from people who are responding to the AMA.

What do you consider CFAR's biggest mistake? 

Do you have any non-fiction book recommendations? 

The two best books on Rationality:

  • The Sequences
  • Principles by Ray Dalio (I read the PDF that leaked from Bridgewater. I haven't even looked at the actual book.)

My starter kit for people who want to build the core skills of the mind / personal effectiveness stuff (I reread all of these, for reminders, every 2 years or so):

  • Getting Things Done: The Art of Stress-Free Productivity
  • Nonviolent Communication: A Language of Life
  • Focusing
  • Thinking, Fast and Slow
7gilch4y
I note that Principles and Getting things Done are not on CFAR's reading list, even though the rest are.
7mingyuan4y
But we've already read all of those!

Metaphors We Live By by George Lakoff — Totally changed the way I think about language and metaphor and frames when I read it in college. Helped me understand that there are important kinds of knowledge that aren't explicit.

I really like Language, Truth and Logic, by A.J. Ayer. It's an old book (1936) and it's silly in some ways. It's basically an early pro-empiricism manifesto, and I think many of its arguments are oversimplified, overconfident, or wrong. Even so, it does a great job of teaching some core mental motions of analytic philosophy. And its motivating intuitions feel familiar—I suspect that if 25-year-old Ayer got transported to the present, given internet access etc., we would see him on LessWrong pretty quick.

What mistakes have you made at CFAR that you have learned the most from? (Individually or as an organization?)

I feel like CFAR has learned a lot about how to design a space to bring about certain kinds of experiences in the people in that space (i.e. encouraging participants to re-examine their lives and how their minds work). What are some surprising things you've learned about this, that inform how you design e.g. CFAR's permanent venue?

Ambience and physical comfort are surprisingly important. In particular:

  • Lighting: Have lots of it! Ideally incandescent but at least ≥ 95 CRI (and mostly ≤ 3500k) LED, ideally coming from somewhere other than the center of the ceiling, ideally being filtered through a yellow-ish lampshade that has some variation in its color so the light that gets emitted has some variation too (sort of like the sun does when filtered through the atmosphere).

  • Food/drink: Have lots of it! Both in terms of quantity and variety. The cost to workshop quality of people not having their preferences met here so far outweighs the cost of buying too much food that in general it’s worth buying too much as a policy. It's particularly important to meet people's (often, for rationalists, amusingly specific) dietary needs, have a variety of caffeine options, and provide a changing supply of healthy, easily accessible snacks.

  • Furniture: As comfortable as possible, and arranged such that multiple small conversations are more likely to happen than one big one.

6Said Achmiz4y
What are the effects of following, and of not following, these guidelines? What tests have you run to determine these effects, and is the data from those tests available for download?

We have not conducted thorough scientific investigation of our lamps, food or furniture. Just as one might have reasonable confidence in a proposition like "tired people are sometimes grumpy" without running an RCT, one can I think be reasonably confident that e.g. vegetarians will be upset if there’s no vegetarian food, or that people will be more likely to clump in small groups if the chairs are arranged in small groups.

I agree the lighting recommendations are quite specific. I have done lots of testing (relative to e.g. the average American) of different types of lamps, with different types of bulbs in different rooms, and have informally gathered data about people’s preferences. I have not done this formally, since I don’t think that would be worth the time, but in my informal experience, the bulb preferences of the subset of people who report any strong lighting preferences at all tend to correlate strongly with that bulb’s CRI. Currently incandescents have the highest CRI of commonly-available bulbs, so I generally recommend those. My other suggestions were developed via a similar process.

7Ben Pace4y
Pretty sure effect sizes are obvious - I’ve been to events without enough snacks, and people leave early because they’re tired and out of energy. I think lighting also has obvious effect sizes when you try it, and also room layout just obviously changes the affordances of a space (classroom lecture vs sitting in circle vs kitchen etc). Added: I don't think I disagree much with the things Said and others say below, I just meant to say that I don't think that careful statistics is required to have robust beliefs about these topics.
5habryka4y
My guess is also that CFAR has seen many datapoints in this space, and could answer Said's question fine. I don't expect them to have run controlled experiments, but I do expect them to have observed a large variety of different lighting setups, food/drink availability and furniture arrangements, and would be able to give aggregate summaries of their experiences with that. 
1Said Achmiz4y
Surely we’re not taking seriously recommendations based on “it’s just obvious”…? (There’s at least some sort of journal of events that notes these parameters and records apparent effects, that can be perused for patterns, etc.… right?) Besides which, consider this: These are very specific recommendations! I assume this means that the CFAR folks tried a bunch of variations—presumably in some systematic, planned way—and determined that these particular parameters are optimal. So… how was this determination made? What was the experimentation like? Surely it wasn’t just… “we tried some stuff in an ad-hoc manner, and this particular very specific set of parameters ended up being ‘obviously’ good”…? EDIT: Let me put it another way: What will happen if, instead of incandescent lighting, I use halogen bulbs? What if the light is 90 CRI instead of 95+? If it’s 4500K instead of 3500K—or, conversely, if it’s 2700K? What if the light is in the center of the ceiling? What if the lampshade is greenish and not yellowish? Etc., etc.—what specifically ought I expect to observe, if I depart from the recommended lighting pattern in each of those ways (and others)?
8ESRogs4y
Why do you assume this? I would guess it was local hill climbing. (The base rate for local hill climbing is much higher than for systematic search, isn't it?)
1Said Achmiz4y
No doubt it is. But then, the base rate for many things is much higher than the base rate for the corresponding more “optimal” / “rational” / “correct” versions of those things. Should I assume in each case that CFAR does everything in the usual way, and not the rarer–but–better way? (Surely a depressing stance to take, if accurate…)

Yes, when the better way takes more resources.

On the meta level, I claim that doing things the usual way most of the time is the optimal / rational / correct way to do things. Resources are not infinite, trade-offs exist, etc.

EDIT: for related thoughts, see Vaniver's recent post on T-Shaped Organizations.

Strongly second this. Running a formal experiment is often much more costly from a decision theoretic perspective than other ways of reducing uncertainty.

6Said Achmiz4y
I think that you, and ESRogs, and possibly also habryka (though probably less so, if at all), have rather misunderstood the thrust of my comments. I was not, and am not, suggesting that CFAR run experiments in a systematic (not ‘formal’—that is a red herring) way, nor am I saying that they should have done this. Rather, what I was attempting to point out was that Adam Scholl’s comment, with its specific recommendations (especially the ones about lighting), would make sense if said recommendations were arrived at via a process of systematic experimentation (or, indeed, any even semi-systematic approach). On the other hand, suggestions such as “at least ≥ 95 CRI (and mostly ≤ 3500k) LED, ideally coming from somewhere other than the center of the ceiling, ideally being filtered through a yellow-ish lampshade” make no sense at all if arrived at via… what, exactly? Just trying different things and seeing which of them seemed like it was good? If you missed it before, I would like to draw your attention to the part of this comment of mine elsethread that comes after the “EDIT” note. Judging from the specificity of his recommendations, I must assume that Adam can answer the questions I ask there.
On the other hand, suggestions such as “at least ≥ 95 CRI (and mostly ≤ 3500k) LED, ideally coming from somewhere other than the center of the ceiling, ideally being filtered through a yellow-ish lampshade” make no sense at all if arrived at via… what, exactly? Just trying different things and seeing which of them seemed like it was good?

Why not?

If you're running many, many events, and one of your main goals is to get good conversations happening you'll begin to build up an intuition about which things help and hurt. For instance, look at a room, and be like "it's too dark in here." Then you go get your extra bright lamps, and put them in the middle of the room, and everyone is like "ah, that is much better, I hadn't even noticed."

It seems like if you do this enough, you'll end up with pretty specific recommendations like what Adam outlined.

Actually, I think this touches on something that is useful to understand about CFAR in general.

Most of our "knowledge" (about rationality, about running workshops, about how people can react to x-risk, etc.) is what I might call "trade knowledge", it comes from having lots of personal experience in the domain, and building up good procedures via mostly-trial and error (plus metacognition and theorizing about noticed problems might be, and how to fix them).

This is distinct from scientific knowledge, which is built up from robustly verified premises, tested by explicit attempts at falsification.

(I'm reminded of an old LW post, that I can't find, about Eliezer giving some young kid (who wants to be a writer) writing advice, while a bunch of bystanders signal that they don't regard Eliezer as trustworthy.)

For instance, I might lead someone through an IDC-like process at a CFAR workshop. This isn't because I've done rigorous tests (or I know of others who have done rigorous tests) of IDC, or because I've concluded from the neuroscience literature that IDC is the optimal process for arriving at true beliefs.

Rather, it's that I (and other ... (read more)

5Said Achmiz4y
I don’t think this works. A carpenter might say that his knowledge is trade knowledge and not scientific knowledge, and when challenged to provide some evidence that this supposed “trade knowledge” is real, and is worth something, may point to the chairs, tables, cabinets, etc., which he has made. The quality of these items may be easily examined, by someone with no knowledge of carpentry at all. “I am a trained and skilled carpenter, who can make various useful things for you out of wood” is a claim which is very, very easy to verify. But as I understand it, CFAR has considerable difficulty providing, for examination, any equivalent of a beautifully-made oak cabinet. This makes claims of “trade knowledge” rather more dubious.
4Vaniver4y
You're thinking of You're Calling *Who* A Cult Leader? An important clarification, at least from my experience of the metacognition, is that it's both getting good results and not triggering alarms (in the form of participant pushback or us feeling skeevy about doing it). Something that gets people to nod along (for the wrong reasons) or has some people really like it and other people really dislike it is often the sort of thing where we go "hmm, can we do better?"
3ESRogs4y
Thank you for this clarification.
6habryka4y
I think every debrief document I've interacted with (all of which predate CFAR getting a permanent venue) included a section on "thoughts on the venue and layout" as well as "thoughts on food and snacks" that usually discussed the effects of how the food and snacks were handled and how the venue seemed to affect the workshop (and whether CFAR should go back to that venue in the future). I am not sure whether that meets your threshold for systematicness, but it should at least allow a cross-verification of the listed patterns with observations at the time of the workshops in different conditions. 
2Said Achmiz4y
It’s a start, at least! If all the parameters (i.e., CRI / color temperature / etc. of the lighting, and of course furniture layout and so on) were recorded each time, and if notes on effects were taken consistently, then this should allow at least some rough spotting of patterns. Is this data available somewhere, in aggregated form? How comprehensive is it (i.e., how far back does it date, and how complete is the coverage of CFAR events)?
3habryka4y
My guess is someone could dig up the debriefs for probably almost all workshops for the past 4 years, though synthesizing that is probably multiple days of work. I don't expect specific things like CRI to have been recorded, but I do expect the sections to say stuff like "all the rooms were too dark, and this one room had an LED light in it that gave me headaches, and I've also heard from one attendee that they didn't like being in that room", which would allow you to derive a bunch of those parameters from context. 
1Said Achmiz4y
See this comment elsethread. To summarize a bit: it was not (and is not) my intention to ask or require anyone to do the sort of digging and synthesis that you describe[1]. Rather, I was wondering how the specific recommendations listed in Adam Scholl’s comment were arrived at (if not via a process even as systematic as synthesis from informal debriefs)—and, in consequence, how exactly those recommendations are to be understood (that is: “this is one point in the space of possibilities which we have stumbled upon and which seems good”? or, “this is the optimal point in the possibility space”? what are we to understand about the shape of the surrounding fitness landscape across the dimensions described? etc.).

[1] Though of course it would be interesting to do, regardless! If the debriefs can be made available for public download, en masse, I suspect a number of people would be interested in sifting through them for this sort of data, and much other interesting info as well.

I have seen/heard from at least two sources something to the effect that MIRI/CFAR leadership (and Anna in particular) has very short AI timelines and high probability of doom (and apparently having high confidence in these beliefs). Here is the only public example that I can recall seeing. (Of the two examples I can specifically recall, this is not the better one, but the other was not posted publicly.) Is there any truth to these claims?

Riceissa's question was brief, so I'll add a bunch of my thoughts on this topic.

I also remember there was something of a hush around the broader x-risk network on the topic of timelines, sometime around the time of FLI's second AI conference. Since then I've received weird mixed signals about what people think, with hushed tones of being very worried/scared. The explicit content is of a similar type to Sam Altman's line "if you believe what I believe about the timeline to AGI and the effect it will have on the world, it is hard to spend a lot of mental cycles thinking about anything else", but it's rarely accompanied by an explanation of the reasoning that led to that view.

I think that you can internalise models of science, progress, computation, ML, and geopolitics, and start to feel like "AGI being built" is part of your reality, your world-model, and then figure out what actions you want to take in the world.  I've personally thought about it a bit and come to some of my own conclusions, and I've generally focused on plans designed for making sure AGI goes well. This is the important and difficult work of incorporating abstract, far ideas into your models of near-mode reality.

... (read more)

FWIW, I don't feel this way about timelines anymore; I'm a lot more pessimistic, and suspect the estimates are mostly just noise.

For the record, parts of that ratanon post seem extremely inaccurate to me; for example, the claim that MIRI people are deferring to Dario Amodei on timelines is not even remotely reasonable. So I wouldn't take it that seriously.

Agreed I wouldn’t take the ratanon post too seriously. For another example, I know from living with Dario that his motives do not resemble those ascribed to him in that post.

I don't know Dario well, but I know enough to be able to tell that the anon here doesn't know what they're talking about re Dario.

7Ben Pace4y
Huh, thanks for the info, I'm surprised to hear that.  I myself had heard that rumour: that at the second FLI conference Dario had spoken a lot about short timelines and now everyone including MIRI was scared. IIRC I heard it from some people involved in ML who were in attendance at that conference, but I didn't hear it from anyone at MIRI. I never heard much disconfirmatory evidence, and it's certainly been a sort-of-belief that's bounced around my head for the past two or so years.

Certainly MIRI has written about this, for example see the relevant part of their 2018 update:

The latter scenario is relatively less important in worlds where AGI timelines are short. If current deep learning research is already on the brink of AGI, for example, then it becomes less plausible that the results of MIRI’s deconfusion work could become a relevant influence on AI capabilities research, and most of the potential impact of our work would come from its direct applicability to deep-learning-based systems. While many of us at MIRI believe that short timelines are at least plausible, there is significant uncertainty and disagreement about timelines inside MIRI, and I would not feel comfortable committing to a course of action that is safe only in worlds where timelines are short.

Also see Eliezer's top-notch piece on timelines, which includes the relevant quote:

Of course, the future is very hard to predict in detail. It's so hard that not only do I confess my own inability, I make the far stronger positive statement that nobody else can do it either.

Eliezer also updated after losing a bet that AlphaGo would not be able to beat humans so well, which he wrote about in AlphaGo Ze

... (read more)

More timeline statements, from Eliezer in March 2016:

That said, timelines are the hardest part of AGI issues to forecast, by which I mean that if you ask me for a specific year, I throw up my hands and say “Not only do I not know, I make the much stronger statement that nobody else has good knowledge either.” Fermi said that positive-net-energy from nuclear power wouldn’t be possible for 50 years, two years before he oversaw the construction of the first pile of uranium bricks to go critical. The way these things work is that they look fifty years off to the slightly skeptical, and ten years later, they still look fifty years off, and then suddenly there’s a breakthrough and they look five years off, at which point they’re actually 2 to 20 years off.

If you hold a gun to my head and say “Infer your probability distribution from your own actions, you self-proclaimed Bayesian” then I think I seem to be planning for a time horizon between 8 and 40 years, but some of that because there’s very little I think I can do in less than 8 years, and, you know, if it takes longer than 40 years there’ll probably be some replanning to do anyway over that time period.

And from me in April 2017:

Since

... (read more)

I had already seen all of those quotes/links, all of the quotes/links that Rob Bensinger posts in the sibling comment, as well as this tweet from Eliezer. I asked my question because those public quotes don't sound like the private information I referred to in my question, and I wanted insight into the discrepancy.

9Ben Pace4y
Okay. I was responding to "Is there any truth to these claims?" which sounded like it would be a big shock to discover MIRI/CFAR staff were considering short timelines a lot in their actions, when they'd actually stated it out loud in many places. While I agree that I'm confused about MIRI/CFAR's timelines and think that info-cascades around this have likely occurred, I want to mention that the thing you linked to is pretty hyperbolic. I want to say that I think that Dario is not obviously untrustworthy; I think well of him for being an early EA who put in the work to write up their reasoning about donations (see his extensive writeup on the GiveWell blog from 2009) which I always take as a good sign about someone's soul; the quote also says there's no reason or argument to believe in short timelines, but the analyses above in Eliezer's posts on AlphaGo Zero and Fire Alarm provide plenty of reasons for thinking AI could come within a decade. Don't forget that Shane Legg, one of the cofounders of DeepMind, has been consistently predicting AGI with 50% probability by 2028 (e.g. he said it here in 2011).
2philh4y
Just noting that since then, half the time to 2028 has elapsed. If he's still giving 50%, that's kind of surprising.

Why is that surprising? Doesn't it just mean that the pace of development in the last decade has been approximately equal to the average over Shane_{2011}'s distribution of development speeds?

4philh4y
I don't think it's that simple. The uncertainty isn't just about pace of development but about how much development needs to be done. But even if it does mean that, would that not be surprising? Perhaps not if he'd originally given a narrow confidence interval, but his 10% estimate was for 2018. For us to be hitting the average precisely enough to not move the 50% estimate much... I haven't done any arithmetic here, but I think that would be surprising, yeah. And my sense is that the additional complexity makes it more surprising, not less.
2Jalex Stark4y
Yes, I agree that the space of things to be uncertain about is multidimensional. We project the uncertainty onto a one-dimensional space parameterized by "probability of <event> by <time>". It would be surprising for a sophisticated person to show a market of 49 @ 51 on this event. (Unpacking jargon, showing this market means being willing to buy for 49 or sell at 51 a contract which is worth 100 if the hypothesis is true and 0 if it is false.) (It's somewhat similar to saying that your 2-sigma confidence interval around the "true probability" of the event is 49 to 51. The market language can be interpreted with just decision theory, while the confidence interval idea also requires some notion of statistics.) My interpretation of the second-hand evidence about Shane Legg's opinion suggests that Shane would quote a market like 40 @ 60. (The only thing I know about Shane is that they apparently summarized their belief as 50% a number of years ago and haven't publicly changed their opinion since.)
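A minimal sketch of that expected-value reading (the helper names here are invented for illustration; the only assumption is the contract described above, which pays 100 if the hypothesis is true and 0 otherwise):

```python
def ev_of_buying(p_true: float, price: float) -> float:
    """Expected value of buying one contract at `price`, given subjective P(true)."""
    return p_true * 100.0 - price

def ev_of_selling(p_true: float, price: float) -> float:
    """Expected value of selling one contract at `price`, given subjective P(true)."""
    return price - p_true * 100.0

# Quoting "49 @ 51" means standing ready to buy at 49 and to sell at 51. Both sides
# are non-losing in expectation only when 0.49 <= P(true) <= 0.51; a wider quote
# like "40 @ 60" only commits you to 0.40 <= P(true) <= 0.60.
for p in (0.40, 0.50, 0.60):
    print(p, ev_of_buying(p, 49), ev_of_selling(p, 51))
```

This is just the decision-theory reading described above; it doesn't settle which quote Shane would actually show.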
3philh4y
Perhaps I'm misinterpreting you, but I feel like this was intended as disagreement? If so, I'd appreciate clarification. It seems basically correct to me, and consistent with what I said previously. I still think that: if, in 2011, you gave 10% probability by 2018 and 50% by 2028; and if, in 2019, you still give 50% by 2028 (as an explicit estimate, i.e. you haven't just not-given an updated estimate); then this is surprising, even acknowledging that 50% is probably not very precise in either case.
4Ben Pace4y
I realised after writing that I didn't give a quote to show that he still believed it. I have the recollection that he still says 2028 (I think someone more connected to AI/ML probably told me), but I can't think of anywhere to quote him saying it.

Ok, I'll bite. Why should CFAR exist? Rationality training is not so obviously useful that an entire org needs to exist to support it; especially now that you've iterated so heavily on the curriculum, why not dissolve CFAR and merge back into (e.g.) MIRI and just reuse the work to train new MIRI staff?

Even more true if CFAR is effective recruitment for MIRI; merging back in would allow you to separately optimize for that.

It's at least as important for CFAR to train people who end up at OpenAI, Deepmind, FHI, etc.

4JohnBuridan4y
I'm sure the methods of CFAR have wider application than to Machine Learning...

What do you consider CFAR's biggest win? 

The CFAR branch of rationality is heavily inspired by General Semantics, with its focus on training your intuitive reactions, evaluation, the ways in which we're biased by language, etc. Eliezer Yudkowsky mentions that he was influenced by The World of Null-A, a science fiction novel about a world where General Semantics has taken over as the dominant philosophy of society.

Question: Considering the similarity between what Alfred Korzybski was trying to do with General Semantics and the workshop and consulting model of CFAR, are you aware of a good analysis of how General Semantics failed? If so, has this informed your strategic approach with CFAR at all?

I buy that General Semantics was in some sense a memetic precursor to some of the ideas described in the sequences/at CFAR, but I think this effect was mostly indirect, so it seems misleading to me to describe CFAR as being heavily influenced by it. Davis Kingsley, former CFAR employee and current occasional guest instructor, has read a bunch about General Semantics, I think, and mentions it frequently, but I'm not aware of direct influences aside from this.

9yagudin4y
I think Nuno's time-capped analysis is good.

Which of the rationalist virtues do you think you’ve practised the most in working at CFAR?

(This is Dan from CFAR)

I did a quick poll of 5 staff members and the average answer was 5.6.

6Ben Pace4y
Ah, the virtue of precision. "More can be said about the 5.6th virtue than of all the virtues in the world!"

Not precise at all. The confidence interval is HUGE.

stdev = 5.9 (without Bessel's correction)

std error = 2.6

95% CI = (0.5, 10.7)
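A minimal sketch reproducing those figures from the summary statistics quoted above (the five raw answers aren't in the thread, so this assumes only the reported mean and stdev, plus a normal approximation):

```python
import math

# Summary statistics quoted above; the five raw answers aren't given in the thread.
n, mean, stdev = 5, 5.6, 5.9          # stdev computed without Bessel's correction

std_error = stdev / math.sqrt(n)      # ≈ 2.64, quoted above as 2.6
low = mean - 1.96 * std_error
high = mean + 1.96 * std_error
print(f"std error ≈ {std_error:.1f}, 95% CI ≈ ({low:.1f}, {high:.1f})")
# Prints roughly (0.4, 10.8); the (0.5, 10.7) above presumably comes from rounding
# the standard error to 2.6 before building the interval.
```

The normal approximation is generous here; a t-based interval with four degrees of freedom would be even wider.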

The confidence interval should not need to go that low. Maybe there's a better way to do the statistics here.

To reduce sampling error you could ask everyone again.

I'll interpret this as "Which of the rationalist virtues do you think CFAR has gotten the most mileage from your practicing".

The virtue of the void. Hands down. Though I still haven't done it nearly as much as it would be useful to do it. Maybe this year?

If I instead interpret this as "Which of the rationalist virtues do you spend the most minutes practicing": curiosity. Which would be my runner-up for "CFAR got the most mileage from my practicing".

Does CFAR have a research agenda? If so, is it published anywhere?

I know you do follow-ups with most/all CFAR attendees. Do you have any aggregate data from the questionnaires? How much do they improve on the outcomes you measure and which ones?

What can the LessWrong community do (or the broader rationality-aligned movement do) to help with CFAR's mission?

9E. Garrett4y
At the risk of sounding trite: stay fun, stay interested, stay fresh, and stay sane! We want the people we bring on and the culture that surrounds us to be a good place to be for epistemics and also for people. We have a bunch of instructor candidates that I am very excited about. One of the ones I am most excited about strikes me as an intellectual offspring of the Sequences, and he’s rocking it. I would like to encounter more people like him, so I hope this community continues to make good, strong, “what odds would you give me”, rationalists that are interested in teaching and curriculum development. We would also like more thoughts on how different mental tech breaks and how to create reason-based, healthy communities with better immune systems that do not break. Partial clues most welcome, especially if you write them up as readable LW posts and email us so we can read the discussion :) If you have ideas of something particular you want to do or add, reach out to us. We are looking for someone to do metrics with Dan. We are looking for particularly skilled computer scientists to attend the AIRCS program. We are potentially looking for a couple summer interns. We are looking for someone with a lot of professional high level ops experience and a good dose of common sense. We also *just happen to be* running a fundraiser right now!

What does Dan actually do? What's his output and who decides what he looks into?

Good question! I also had it earlier this year, so I studied him, and here is what I learned of Dan:

Dan is workshop staff at most of the workshops we run, including AIRCS workshops, mainlines, and other programs like instructor training. So, for ~16 weeks of 2019, he was helping run, teaching at, and doing ops at workshops.

Dan is also in charge of all our spreadsheets and data and everything that happens after a workshop: synthesizing the feedback we get from attendees, putting attendees into follow-up groups, pairing people with mentors, and sending out exercises.

Dan is in charge of our impact reports. This involves figuring out what to measure and how to measure it, doing the actual measuring, and then writing about it in a way that hopefully people understand. Dan decides what he looks into based on his own judgements and the questions funders and our exec team have. He’s currently working on metrics data for the fundraiser that will be out soon.

Dan is a general CFAR staff member. This means he contributes to our weekly colloquia with thoughts about rationality, interviews people for workshops, does follow-ups with participants, and other random projects.

What’s a post from The Sequences that has really affected how you think about rationality?

I really loved this post on Occam's Razor. Before encountering it, I basically totally misunderstood the case for using the heuristic, and so (wrongly, I now think) considered it kind of dumb.

I also especially loved "The Second Law of Thermodynamics, and Engines of Cognition," which gave me a wonderful glimpse (for the first time) into how "laws of inference" ground in laws of physics.

In another post, Adam Scholl says:

Historically, I think CFAR has been really quite bad at explaining its goals, strategy, and mechanism of impact—not just to funders, and to EA at large, but even to each other. I regularly encounter people who, even after extensive interaction with CFAR, have seriously mistaken impressions about what CFAR is trying to achieve.

What are the common mistaken impressions?

For each, do you think they would be net good if done by someone else? Are you aware of other groups that are attempting to achieve those aims? And what do you think it would take to make one?

CFAR must have a lot of information about the efficacy of various rationality techniques and training methods (compared to any other org, at least). Is this information, or recommendations based on it, available somewhere? Say, as a list of techniques currently taught at CFAR - which are presumably the best ones in this sense. Or does one have to attend a workshop to find out?

3Eigil Rischel4y
If anyone comes across this comment in the future: the CFAR Participant Handbook is now online, which is more or less the answer to this question.

Do you have any fiction book recommendations? 

Thanks to your recommendation I recently read New Atlantis, by Francis Bacon, and it was so great! It's basically Bacon's list of things he wished society had, ranging from "clothes made of sea-water-green satin" and "many different types of beverages" to "research universities that employ full-time specialist scholars."

7habryka4y
I am very glad to hear that! 

Which of CFAR's cognitive tools has had the greatest impact on your life personally? Can you give concrete examples?

Do you think the AI risk for computer scientists workshops turn anyone off AI risk? How does this compare to the degree to which regular workshops turn people off rationality? Do these workshops target audiences of interest to the AI risk community in general (successful computer scientists in general), or audiences that are of special interest to MIRI (e.g. rationalist pure mathematicians)?

Hello,

Could you shed some light on this recent incident (mirror) involving CFAR? I am sure I am not the only one who is confused.

Best regards.

(This is Dan from CFAR)

In terms of what happened that day, the article covers it about as well as I could. There’s also a report from the sheriff’s office which goes into a bit more detail about some parts.

For context, all four of the main people involved live in the Bay Area and interact with the rationality community. Three of them had been to a CFAR workshop. Two of them are close to each other, and CFAR had banned them prior to the reunion based on a bunch of concerning things they’d done. I’m not sure how the other two got involved.

They have made a bunch of complaints about CFAR and other parts of the community (the bulk of which are false or hard to follow), and it seems like they were trying to create a big dramatic event to attract attention. I’m not sure quite how they expected it to go.

This doesn’t seem like the right venue to go into details to try to sort out the concerns about them or the complaints they’ve raised; there are some people looking into each of those things.

This is probably the least important question (the answer is that some people are nuts) but also the one that I most want to see answered for some reason.

6Eigil Rischel4y
Information about people behaving erratically/violently is better at grabbing your brain's "important" sensor? (Noting that I had exactly the same instinctual reaction). This seems to be roughly what you'd expect from naive evopsych (which doesn't mean it's a good explanation, of course)
4mako yass4y
I'd guess there weren't as many nutcases in the average ancestral climate, as there are in modern news/rumor mills. We underestimate how often it's going to turn out that there wasn't really a reason they did those things.

Who designed all the wall-decoration in the CFAR venue and do you have a folder with all the art-pieces you used? I might want to use some of them for future art/design projects.

4Adam Scholl4y
I did. I have some but not all of the images saved; happy to share what I have, feel free to pm me for links.
6johnswentworth4y
Follow-up question: how did you go about finding/picking all that stuff? I was particularly surprised to see a ctenophore picture on the wall - it's the sort of thing which makes sense in the collection, but only if you have (what I thought to be) some fairly esoteric background knowledge about evo-devo.
8Adam Scholl4y
I just googled around for pictures of things I think are neat. I think ctenophores are neat, since they look like alien spaceships and maybe evolved neurons independently; I think it's neat that wind sometimes makes clouds do the vortex thing that canoe paddles make water do, etc.

By looking in-depth at individual case studies, advances in cogsci research, and the data and insights from our thousand-plus workshop alumni, we’re slowly building a robust set of tools for truth-seeking, introspection, self-improvement, and navigating intellectual disagreement—and we’re turning that toolkit on itself with each iteration, to try to catch our own flawed assumptions and uncover our own blindspots and mistakes.

This is taken from the about page on your website (emphasis mine). I also took a look at this list of resources and notice I'm still curious:

Question: What literature (academic or otherwise) do you draw on most often for putting together CFAR's curriculum? For example, I remember being told that the concept of TAPs was taken from some psychology literature, but searching Google Scholar didn't yield anything interesting.

The name for TAPs in the psychology literature is "implementation intentions." CFAR renamed it.

How does CFAR plan to scale its impact?

Can CFAR help raise the sanity waterline? Has curriculum been developed that can teach any of the cognitive tools to more than a handful of people at a time at workshops? Perhaps a MOOC or a franchise?

6ChristianKl4y
This question seems to assume that CFAR sees the impact of their workshops as being about successfully teaching cognitive tools. That doesn't seem to be the case based on conversations I had in the past with CFAR folks.

What did that conversation cause you to think CFAR believes the impact of their workshops *is* about?

8ChristianKl4y
The definition I got was "Making people more agenty about changing their thinking". I'm not sure about the exact wording that was used. It might have been "feel agency" instead of being agenty, and it might have been thinking habits instead of thinking, but that's the gist I remember from a conversation at LWCW. Falk Lieder, who runs an academic research group on applied rationality, was asking what potential there is for cooperating with CFAR to study the effectiveness of the techniques, and the response was something along the lines of "CFAR doesn't really care that much about the individual techniques; the only thing that might be interesting is to measure whether the whole CFAR workshop as a unit produces those agency changes". If any org has the goal of creating a strict list of cognitive tools that are individually powerful for helping people, cooperating with Falk to get academic backing, both in terms of independent scientific authority and in terms of being clearer about the value of the individual tools, would be valuable.

The end of the Sequences, The Craft and the Community, concluded with "Go Forth and Create the Art!" Is that what CFAR is doing? Is anyone else working on this?

4[DEACTIVATED] Duncan Sabien4y
[Disclaimer: have not been at CFAR since October 2018; if someone currently from the org contradicts this, their statement will be more accurate about present-day CFAR] No (CFAR's mission has always been narrower/more targeted) and no (not in any systematic, competent fashion).

What have been the changes in how people respond to your classes and workshops as you've changed from the general public to having a substantial (50%?) focus on people who may work on AI alignment? I mean obviously I expect that they're more able to hold up a technical discussion, but I'm curious what else you've noticed.

What's a SlateStarCodex post you have thought a lot about while thinking about rationality / CFAR?

(This is Dan from CFAR)

Guided By The Beauty Of Our Weapons

Asymmetric vs. symmetric tools is now one of the main frameworks that I use to think about rationality (although I wish we had better terminology for it). A rationality technique (as opposed to a productivity hack or a motivation trick or whatever) helps you get more done on something in cases where getting more done is a good idea.

This wasn’t a completely new idea when I read Scott’s post about it, but the post seems to have helped a lot with getting the framework to sink in.

9[DEACTIVATED] Duncan Sabien4y
In case no one who currently works at CFAR gets around to answering this (I was there from Oct 2015 to Oct 2018 in a pretty influential role but that means I haven't been around for about fourteen months):

  • Meditations on Moloch is top of the list by a factor of perhaps four
  • Different Worlds as a runner up

Lots of social dynamic stuff/how groups work/how individuals move within groups:

  • Social Justice and Words, Words, Words
  • I Can Tolerate Anything Except The Outgroup
  • Guided By The Beauty Of Our Weapons
  • Yes, We Have Noticed The Skulls
  • Book Review: Surfing Uncertainty
2Tenoke4y
Is that post really that much more relevant than everything else for TEACHING rationality? How come?
5[DEACTIVATED] Duncan Sabien4y
That's not the question that was asked, so ... no. Edit: more helpfully, I found it valuable for thinking about rationality and thinking about CFAR from a strategic perspective—what it was, what it should be, what problems it was up against, how it interfaced with the rest of society.
-7Tenoke4y

Meta: Because I think a lot of users will be unusually interested in this Q&A, I'll pin this to the frontpage post-list while it's ongoing, and then afterwards move it back to Anna's personal blog.

As you've seen people grow as rationalists and become more agentic, what patterns have you noticed in how people change their relationships with their emotions?

What aspects of CFAR's strategy would you be most embarrassed by if they were generally known? :P

Who is the rightful caliph? 

All hail Logmoth, the rightful caliph!

An innovative choice

9s0ph1a4y
Gwern.

What cognitive tools can CFAR offer those who have already read Eliezer's Sequences/R:AZ?

The Sequences seemed more focused on epistemic rationality. I assume by "Applied Rationality", you mean the instrumental kind?

Anna previously wrote about the challenges of making your explicit reasoning trustworthy, and later about bucket errors and how they are often designed to help keep one's reasoning sane. Can Anna and/or other instructors talk about how much you've seen people's reasoning get more trustworthy over time, and what that looks like? I'm also interested if you still feel like you catch yourself with false beliefs regularly, or how you think about it for yourself.

4Ben Pace4y
As a follow-up, can you talk about times you've experienced participants having built explicit anti-epistemology to defend themselves from doing too much explicit reasoning? I'm talking about cases analogous to when people have made bucket errors and defend it anyway, so that they don't accidentally break things.

You teach classes like Goal Factoring, Internal Double Crux, and TAPs (Trigger-Action Plans), which are all about breaking parts of your mind down into smaller parts.

I expect you to endorse the statement "Breaking things down into smaller parts is good!" but I'm curious if you have any more detailed opinions about that. Can you share your sense of when it's the right next thing to do versus not the right next thing to do, when solving problems and understanding your own mind? I'd also be interested in stories of when you've seen people do it especially well or badly.

which are all about breaking parts of your mind down into smaller parts

Na, my mind's a bunch of super tiny stuff to begin with. When I do IDC, I just stop in my unified-person-fabrication a little earlier than the point at which I've erased all ability to perceive internal distinctions.

(Sorry, I know that's not an answer to your question. Maybe somebody, perhaps even a future me, will come by and give you a real answer.)

5Ben Pace4y
Hah, I did not expect that reply. Do you think this is a pretty Brienne-specific way of working internally, or d’you think if I practised IDC with you a couple of times I’d start to realise this was how I worked too?
6Ben Pace4y
I suppose I'm not sure why I think I'm a coherent agent generally.  I guess I have to talk as though I'm one a lot, using words like "I think" and "My perspective on this is" and "You're disregarding my preferences here".  I've found the CFAR classes called 'IDC' helpful when I myself am confused about what I want or what I think, especially when it's a social situation, where I'm e.g. feeling bad but not sure why, and I split the conflicting feelings into subagents that have beliefs and goals. If I'm able to actually name supposed subagents that would give rise to the current conflict I'm feeling (and I find this to be 90% of the battle), then I find that the confusion is quickly dissolved, and I am able to more clearly integrate them into a whole.  To give a real example (or something pretty close to a real time I used it), it sounds internally something like: At which point I no longer felt conflicted, and I was able to just think about "What will I do next?" without having to go a level lower into why two parts of me were pulling in different directions.

What's your operating definition of what rationality happens to be?

The capacity to develop true beliefs, so as to better achieve your goals.

What is your model wrt the link between intelligence and rationality?

1. What rough fraction of the benefits of CFAR do you think accrue through recruiting for MIRI?

2. What rough fraction of CFAR's revenues come from MIRI?

What have you learned about transfer in your experience at CFAR? Have you seen people gain the ability to transfer the methods of one domain into other domains? How do you make transfer more likely to occur?

Regarding the talking point "Why we think there should be a CFAR", what are the best things that could exist instead of a CFAR? Are there any variants that would maintain CFAR's benefits while attracting fewer controversies?

How much of the gain of participating in a mainline workshop is learning techniques?

Do you choose techniques to teach based on their effectiveness as the only criterion, or do you also teach some techniques not because they are the most effective but because you believe that it's useful to bring participants in contact with ideas that are foreign to them?

I've been developing a game. Systemically, it's about developing accurate theories. The experience of generating theories, probing specimens, firing off experiments, figuring out where the theories go wrong, and refining the theories into fully general laws of nature which are reliable enough to create perfect solutions to complex problem statements. This might make it sound complicated, but it does all of that with relatively few components. Here's a screenshot of the debug build of the game over a portion of the visual design scratchpad (i... (read more)

I think this project sounds cool. This might be (I don't know enough to know) an example of rationality training in something other than the CFAR paradigm of "1) present 2) techniques to 3) small groups 4) at workshops."

But I think your question is too high context to easily answer. Is there a way I can play the current version? If so, I would try it for a bit and then tell you what I think, personally.

2mako yass4y
If you have an android phone, sure. I'll DM you a link to the apk. I should note, it's pretty brutal right now and I have not yet found a way to introduce enough primitives to the player to make really strict tests, so it's possible to guess your way all the way to the end. Consider the objective to be figuring out the laws, rather than solving the puzzles.
5D_Malik4y
I don't understand that screenshot at all (maybe the resolution is too low?), but from your description it sounds in a similar vein to Zendo and Eleusis and Penultima, which you could get ideas from. Yours seems different though, and I'd be curious to know more details. I tried implementing some single-player variants of Zendo five years ago, though they're pretty terrible (boring, no graphics, probably not useful for training rationality). I do think there's some potential for rationality improvements from games, though insofar as they're optimized for training rationality, they won't be as fun as games optimized purely for being fun. I also think it'll be very difficult to achieve transfer to life-in-general, for the same reason that learning to ride a bike doesn't train you to move your feet in circles every time you sit in a chair. ("I pedal when I'm on a bike, to move forward; why would I pedal when I'm not on a bike, and my goal isn't to move forward? I reason this way when I'm playing this game, to get the right answer; why would I reason this way when I'm not playing the game, and my goal is to seem reasonable or to impress people or to justify what I've already decided?")

I've heard of Zendo and I've been looking for someone to play Eleusis with for a while heh (maybe I'll be able to get the local EA group to do it one of these days).

though insofar as they're optimized for training rationality, they won't be as fun as games optimized purely for being fun

Fun isn't a generic substance. Fun is subjective. A person's sense of fun is informed by something. If you've internalised the rationalist ethos, if your gut trusts your mind, if you know deeply that rationality is useful and that training it is important, a game that trains rationality is going to be a lot of fun for you.

This is something I see often during playtesting. The people who're quickest to give up on the game tend to be the people who don't think experimentation and hypothesising have any place in their lives.


I am worried about transfer failure. I guess I need to include discussion of the themes of the game and how they apply to real world situations. Stories about wrong theories, right theories, the power of theorising, the importance of looking closely at cases that break our theories.

I need to... make sure that people can find the symmetry between the game and parts of their lives.

Would it be possible and cost-effective to release video courses at a much lower cost?

[This comment is no longer endorsed by its author]