New post: Some things I think about Double Crux and related topics
I've spent a lot of my discretionary time working on the broad problem of developing tools for bridging deep disagreements and transferring tacit knowledge. I'm also probably the person who has spent the most time explicitly thinking about and working with CFAR's Double Crux framework. It seems good for at least some of my high level thoughts to be written up some place, even if I'm not going to go into detail about, defend, or substantiate, most of them.
The following are my own beliefs and do not necessarily represent CFAR, or anyone else.
I, of course, reserve the right to change my mind.
[Throughout I use "Double Crux" to refer to the Double Crux technique, the Double Crux class, or a Double Crux conversation, and I use "double crux" to refer to a proposition that is a shared crux for two people in a conversation.]
Here are some things I currently believe:
(General)
Double Crux is one (highly important) tool/framework among many. I want to distinguish between the overall art of untangling and resolving deep disagreements and the Double Crux tool in particular.
People rarely change their mind when they feel like you have trapped them in some inconsistency [...] In general (but not universally) it is more productive to adopt a collaborative attitude of sincerely trying to help a person articulate, clarify, and substantiate [bolding mine—ZMD]
"People" in general rarely change their mind when they feel like you have trapped them in some inconsistency, but people using the double-crux method in the first place are going to be aspiring rationalists, right? Trapping someone in an inconsistency (if it's a real inconsistency and not a false perception of one) is collaborative: the thing they were thinking was flawed, and you helped them see the flaw! That's a good thing! (As it is written of the fifth virtue, "Do not believe you do others a favor if you accept their arguments; the favor is to you.")
Obviously, I agree that people should try to understand their interlocutors. (If you performatively try to find fault in something you don't understand, then the apparent "faults" you find are likely to be your own misunderstandings rather than actual faults.) But if someone spots an actual inconsistency in my ideas, I want them to tell me right away.
1Slider1yI would think that inconsistencies are easier to appreciate when they are in the
central machinery. A rationalist might have more load-bearing beliefs, so most
beliefs are central to at least something, but I think a
centrality/point-of-communication check has more upside than downside to keep.
Also, cognitive time spent looking for inconsistencies could be better spent on
more constructive activities. Then there is the whole class of heuristics which
don't even claim to be consistent. So the ability to pass by an inconsistency
without hanging onto it will see use.
2ChristianKl6moHow about doing this a few times on video? Watching the video might not be as
effective as the one-on-one teaching, but I would expect that watching a few
1-on-1 explanations would be a good way to learn about the process.
From a learning perspective it also helps a lot for reflecting on the technique.
The early NLP folks spent a lot of time analysing tapes of people performing
techniques to better understand the techniques.
2elityre6moI in fact recorded a test session of attempting to teach this via Zoom last
weekend. This was the first time I tried a test session via Zoom, however, and
there were a lot of kinks to work out, so I probably won't publish that version
in particular.
But yeah, I'm interested in making video recordings of some of this stuff and
putting it up online.
2Chris_Leong8moThanks for mentioning conjunctive cruxes. That was always my biggest objection
to this technique. At least when I went through CFAR, the training completely
ignored this possibility. It was clear that the technique often worked anyway, but the
impression that I got was that it was the general frame
[https://www.lesswrong.com/posts/f886riNJcArmpFahm/noticing-frame-differences]
that was important, more than the precise methodology, which at that time still
seemed in need of refinement.
2DanielFilan1yFYI the numbering in the (General) section is pretty off.
3elityre1yWhat do you mean? All the numbers are in order. Are you objecting to the nested
numbers?
2DanielFilan1yTo me, it looks like the numbers in the General section go 1, 4, 5, 5, 6, 7, 8,
9, 3, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 2, 3, 3, 4, 2, 3, 4 (ignoring the nested
numbers).
2DanielFilan1y(this appears to be a problem where it displays differently on different
browser/OS pairs)
A few months ago, I wrote about how RAND and the “Defense Intellectuals” of the Cold War represent another precious datapoint of “very smart people, trying to prevent the destruction of the world, in a civilization that they acknowledge to be inadequate to dealing sanely with x-risk.”
Since then, I spent some time doing additional research into what cognitive errors and mistakes those consultants, military officials, and politicians made that endangered the world. The idea being that if we could diagnose which specific irrationalities they were subject to, this would suggest errors that might also be relevant to contemporary x-risk mitigators, and might point out some specific areas where development of rationality training is needed.
However, this proved somewhat less fruitful than I was hoping, and I’ve put it aside for the time being. I might come back to it in the coming months.
It does seem worth sharing at least one relevant anecdote from Daniel Ellsberg’s excellent book The Doomsday Machine, along with some analysis, given that I’ve already written it up.
3elityre1yCan you say more about what you got from it?
4billzito1yI can't speak for habryka, but I think your post did a great job of laying out
the need for "say oops" in detail. I read the Doomsday Machine and felt this
point very strongly while reading it, but this was a great reminder to me of its
importance. I think "say oops" is one of the most important skills for actually
working on the right thing, and that in my opinion, very few people have this
skill even within the rationality community.
4Adam Scholl1yThere feel to me like two relevant questions here, which seem conflated in this
analysis:
1) At what point did the USSR gain the ability to launch a
comprehensively-destructive, undetectable-in-advance nuclear strike on the US?
That is, at what point would a first strike have been achievable and effective?
2) At what point did the USSR gain the ability to launch such a first strike
using ICBMs in particular?
By 1960 the USSR had 1,605 nuclear warheads
[https://en.wikipedia.org/wiki/Historical_nuclear_weapons_stockpiles_and_nuclear_tests_by_country]
; there may have been few ICBMs among them, but there are other ways to deliver
warheads than shooting them across continents. Planes fail the "undetectable"
criterion, but ocean-adjacent cities can be blown up by small boats, and by 1960
the USSR had submarines [https://en.wikipedia.org/wiki/Soviet_submarine_K-19]
equipped with six "short"-range (650 km and 1,300 km) ballistic missiles. By
1967 they were producing subs like this
[https://en.wikipedia.org/wiki/Yankee-class_submarine], each of which was armed
with 16 missiles with ranges of 2,800-4,600 km.
All of which is to say that from what I understand, RAND's fears were only a few
years premature.
[Note: I’ve started a research side project on this question, and it is already obvious to me that this ontology is importantly wrong.]
There’s a common phenomenology of “mental energy”. For instance, if I spend a couple of hours thinking hard (maybe doing math), I find it harder to do more mental work afterwards. My thinking may be slower and less productive. And I feel tired, or drained (mentally, instead of physically).
Mental energy is one of the primary resources that one has to allocate, in doing productive work. In almost all cases, humans have less mental energy than they have time, and therefore effective productivity is a matter of energy management, more than time management. If we want to maximize personal effectiveness, mental energy seems like an extremely important domain to understand. So what is it?
The naive story is that mental energy is an actual energy resource that one expends and then needs to recoup. That is, when one is doing cognitive work, one is burning calories, depleting the body's energy stores. As one uses energy, one has less fuel to burn.
My current understanding is that this story is not physiologically realistic. T... (read more)
6gilch1yOn Hypothesis 3, the brain may build up waste as a byproduct of its metabolism
when it's working harder than normal, just as muscles do. Cleaning up this
buildup seems to be one of the functions of sleep. Even brainless animals like
jellyfish sleep. They do have neurons though.
5G Gordon Worley III1yI also think it's reasonable to think that multiple things may be going on that
result in a theory of mental energy. For example, hypotheses 1 and 2 could both
be true and result in different causes of similar behavior. I bring this up
because I think of those as two different things in my experience: being "full
up" and needing to allow time for memory consolidation, where I can still force
my attention, it just doesn't take in new information, vs. being unable to force
the direction of attention generally.
3elityre1yYeah. I think you're on to something here. My current read is that "mental
energy" is at least 3 things.
Can you elaborate on the what "knowledge saturation" feels like for you?
2G Gordon Worley III1ySure. It feels like my head is "full", although the felt sense is more like my
head has gone from being porous and sponge-like to hard and concrete-like. When
I try to read or listen to something I can feel it "bounce off" in that I can't
hold the thought in memory beyond forcing it to stay in short term memory.
3Matt Goldenberg1yIsn't it possible that there's some other biological sink that is time-delayed
from caloric energy? Like, say, a very specific part of your brain needs a very
specific protein, and only holds enough of that protein for 4 hours? And it can
take hours to build that protein back up. This seems to me to be at least
somewhat likely.
2Ruby1ySomeone smart once made a case like this to me in support of a specific
substance (can't remember which) as a nootropic, though I'm a bit skeptical.
2eigen1yI think about this a lot. I'm currently toying with the fourth hypothesis,
which seems more correct to me, and one where I can actually do something to
ameliorate the trade-off implied by it.
In this comment
[https://www.lesswrong.com/posts/9mXi6QNN7udsGcDYJ/eigen-s-shortform?commentId=g7dgEbryTMMhQ3Y6Y]
, I talk about what it means to me and how I can do something about it, which, in
summary, is to use Anki a lot and change subjects when working memory gets
overloaded. It's important to note that mathematics is somewhat different from
other subjects, since concepts build on each other and you need to keep up
with what all of them mean and entail, so we may be bound to reach an overload
faster in that sense.
A few notes about your other hypothesis:
Hypothesis 1c:
It's because we're not used to it. Some things come easier than others; some
things are more closely similar to what we have been doing for 60,000 years (math
is not one of them). So we flinch from that which we are not used to. Though
adaptation is easy, and the major hurdle is only at the beginning.
It may also mean that the reward system is different. It's difficult to see, as
we explore a piece of mathematics, how fulfilling it is when we know that we
may not be getting anywhere. So the inherent reward is missing or has to be
artificially created.
Hypothesis 1d:
This seems correct to me. Consider the following: “This statement is false”.
Thinking about it (or iterations of that statement) is quickly
bound to make us flinch away in just a few seconds. How many other things take
this form? I bet there are many.
Instead of working to trust System 2, is there a way to train System 1? It
seems more apt to me, like training tactics in chess or making rapid
calculations.
Thank you for the good post; I'd really like to know more about your
findings.
2Viliam1ySeems to me that mental energy is lost by frustration. If what you are doing is
fun, you can do it for a long time; if it frustrates you at every moment, you
will get "tired" soon.
The exact mechanism... I guess some part of the brain takes frustration
as evidence that this is not the right thing to do, and suggests doing
something else. (Would correspond to "1b" in your model?)
2AprilSR1yI’ve definitely experienced mental exhaustion from video games before -
particularly when trying to do an especially difficult task.
I recently read Prisoner’s Dilemma, which is half an introduction to very elementary game theory, and half a biography of John von Neumann, and watched this old PBS documentary about the man.
I’m glad I did. Von Neumann has legendary status in my circles, as the smartest person ever to live. [1] Many times I’ve written the words “Von Neumann Level Intelligence” in an AI strategy document, or speculated about how many coordinated Von Neumanns it would take to take over the world. (For reference, I now think that 10 is far too low, mostly because he didn’t seem to have the entrepreneurial or managerial dispositions.)
Learning a little bit more about him was humanizing. Yes, he was the smartest person ever to live, but he was also an actual human being, with actual human traits.
Watching this first clip, I noticed that I was surprised by a number of things.
That VN had an accent. I had known that he was Hungarian, but somehow it had never quite propagated that he would speak with a Hungarian accent.
That he was of middling height (somewhat shorter than the presenter he’s talking to).
3Viliam1yThank you, this is very interesting!
Seems to me the most important lesson here is "even if you are John von Neumann,
you can't take over the world alone."
First, because no matter how smart you are, you will have blind spots.
Second, because your time is still limited to 24 hours a day; even if you'd
decide to focus on things you have been neglecting until now, you would have to
start neglecting the things you have been focusing on until now. Being better at
poker (converting your smartness to money more directly), living healthier and
therefore on average longer, developing social skills, and being strategic in
gaining power... would perhaps come at a cost of not having invented half of the
stuff. When you are John von Neumann, your time has insane opportunity costs.
1Liam Donovan1yIs there any information on how von Neumann came to believe Catholicism was the
correct religion for Pascal's Wager purposes? "My wife is Catholic" doesn't seem
like very strong evidence...
3elityre1yI don't know why Catholicism.
I note that it does seem to be the religion of choice for former atheists, or at
least for rationalists. I know of several rationalists who converted to
Catholicism, but none who have converted to any other religion.
TL;DR: I’m offering to help people productively have difficult conversations and resolve disagreements, for free. Feel free to email me if and when that seems helpful. elitrye [at] gmail.com
Facilitation
Over the past 4-ish years, I’ve had a side project of learning, developing, and iterating on methods for resolving tricky disagreements and failures to communicate. A lot of this has been in the Double Crux frame, but I’ve also been exploring a number of other frameworks (including NVC, Convergent Facilitation, Circling-inspired stuff, intuition extraction, and some home-grown methods).
As part of that, I’ve had a standing offer to facilitate / mediate tricky conversations for folks in the CFAR and MIRI spheres (testimonials below). Facilitating “real disagreements” allows me to get feedback on my current conversational frameworks and techniques. When I encounter blockers that I don’t know how to deal with, I can go back to the drawing board to model those problems and the interventions that would solve them, and iterate from there, developing new methods.
I generally like doing this kind of conversational facilitation and am open to do... (read more)
8riceissa4moI am curious how good you think the conversation/facilitation was in the AI
takeoff double crux between Oliver Habryka and Buck Shlegeris
[https://www.lesswrong.com/posts/p5iuER9Ms5QrReFW2/sunday-august-23rd-12pm-pdt-double-crux-with-buck-shlegeris]
. I am looking for something like "the quality of facilitation at that event was
X percentile among all the conversation facilitation I have done".
[I wrote a much longer and more detailed comment, and then decided that I wanted to think more about it. In lieu of posting nothing, here's a short version.]
I mean I did very little facilitation one way or the other at that event, so I think my counterfactual impact was pretty minimal.
In terms of my value added, I think that one was in the bottom 5th percentile?
In terms of how useful that tiny amount of facilitation was, maybe 15th to 20th percentile? (This is a little weird, because quantity and quality are related. More active facilitation has a quality span: active (read: a lot of) facilitation can be much more helpful when it is good, and much more disruptive / annoying / harmful when it is bad, compared to less active backstop facilitation.)
Overall, the conversation served the goals of the participants and had a median outcome for that kind of conversation, which is maybe 30th percentile, but there is a long right tail of positive outcomes (and maybe I am messing up how to think about percentile scores with skewed distributions).
The outcome that occurred ("had an interesting conversation, and had some new thoughts / clarifications") is good, but also far below the sort of outcome that I'm usually aiming for (but often missing): substantive, permanent (epistemic!) change to the way that one or both of the people orient on this topic.
1m_arj4moCould you recommend the best book about this topic?
3elityre4moNope?
I've gotten very little out of books in this area.
It is a little afield, but I strongly recommend the basic NVC book, Nonviolent
Communication: A Language for Life. I recommend that, at minimum, everyone read
at least the first two chapters, which are something like 8 pages long and have
the most content in the book. (The rest of the book is good too, but it is
mostly examples.)
Also, people I trust have gotten value out of How to Have Impossible
Conversations. This is still on my reading stack though (for this month, I
hope), so I don't personally recommend it. My expectation, from not having read
it yet, is that it will cover the basics pretty well.
I spend a lot of time trying to build skills, because I want to be awesome. But there is something off about that.
I think I should just go after things that I want, and solve the problems that come up on the way. The idea of building skills sort of implies that if I don't have some foundation or some skill, I'll be blocked, and won't be able to solve some thing in the way of my goals.
But that doesn't actually sound right. Like it seems like the main important thing for people who do incredible things is their ability to do problem solving on the things that come up, and not the skills that they had previously built up in a "skill bank".
Raw problem solving is the real thing and skills are cruft. (Or maybe not cruft per se, but more like a side effect. The compiled residue of previous problem solving. Or like a code base from previous project that you might repurpose.)
Part of the problem with this is that I don't know what I want for my own sake, though. I want to be awesome, which in my conception, means being able to do things.
I note that wanting "to be able to do things" is a leaky sort of motivation: because the... (read more)
3Marcello5moYour seemingly target-less skill-building motive isn't necessarily irrational or
non-awesome. My steel-man is that you're in a hibernation period, in which
you're waiting for the best opportunity of some sort (romantic, or business, or
career, or other) to show up so you can execute on it. Picking a goal to focus
on really hard now might well be the wrong thing to do; you might miss a golden
opportunity if your nose is at the grindstone. In such a situation a good
strategy would, in fact, be to spend some time cultivating skills, and some time
in existential confusion (which is what I think not knowing which broad
opportunities you want to pursue feels like from the inside).
The other point I'd like to make is that I expect building specific skills
actually is a way to increase general problem solving ability; they're not at
odds. It's not that super specific skills are extremely likely to be useful
directly, but that the act of constructing a skill is itself trainable and a
significant part of general problem solving ability for sufficiently large
problems. Also, there's lots of cross-fertilization of analogies between skills;
skills aren't quite as discrete as you're thinking.
3Dagon5moSkills and problem-solving are deeply related. The basics of most skills are
mechanical and knowledge-based, with some generalization creeping in on your 3rd
or 4th skill in terms of how to learn and seeing non-obvious crossover.
Intermediate (say, after the first 500 to a few thousand hours) use of skills
requires application of problem-solving within the basic capabilities of that
skill. Again, you get good practice within a skill, and better across a few
skills. Advanced application in many skills is MOSTLY problem-solving. How to
apply your well-indexed-and-integrated knowledge to novel situations, and how to
combine that knowledge across domains.
I don't know of any shortcuts, though - it takes those thousands of hours to get
enough knowledge and basic techniques embedded in your brain that you can intuit
what avenues to more deeply explore in new applications.
There is a huge amount of human variance - some people pick up some domains
ludicrously easily. This is a blessing and a curse, as it causes great
frustration when they hit a domain that they have to really work at. Others have
to work at everything, and never get their Nobel, but still contribute a whole
lot of less-transformational "just work" within the domains they work at.
2Viliam5moSeems to me there is some risk either way. If you keep developing skills without
applying them to a specific goal, it can be a form of procrastination (an
insidious one, because it feels so virtuous). There are many skills you could
develop, and life is short. On the other hand, as you said, if you go right
after your goal, you may find an obstacle you can't overcome... or even worse,
an obstacle you can't even properly analyze, so the problem is not merely that
you don't have the necessary skill, but that you even have no idea which skill
you miss (so if you try to develop the skills as needed, you may waste time
developing the wrong skills, because you misunderstood the nature of the
problem).
It could be both. And perhaps you notice the problem-specific skills more,
because those are rare.
But I also kinda agree that the attitude is more important, and skills often can
be acquired when needed.
So... dunno, maybe there are two kinds of skills? Like, the skills with obvious
application, such as "learn to play a piano"; and the world-modelling skills,
such as "understand whether playing a piano would realistically help you
accomplish your goals"? You can acquire the former when needed, but you need the
latter in advance, to remove your blind spots?
Or perhaps some skills such as "understand math" are useful in many kinds of
situations and take a lot of time to learn, so you probably want to develop
these in advance? (Also, if you don't know yet what to do, it probably helps to
get power: learn math, develop social skills, make money... When you later make
up your mind, you will likely find some of this useful.)
And maybe you need the world-modelling skills before you make specific goals,
because how could your goal be to learn to play the piano, if you don't know the
piano exists? You could have a more general goal, such as "become famous at
something", but if you don't know that piano exists, maybe you wouldn't even
look in this direction.
Could this also be abo
2Matt Goldenberg5moI've gone through something very similar.
Based on your language here, it feels to me like you're in the contemplation
stage along the stages of change.
So the very first thing I'd say is to not feel the desire to jump ahead and "get
started on a goal right now." That's jumping ahead in the stages of change, and
will likely create a relapse. I will predict that there's a 50% chance that if
you continue thinking about this without "forcing it", you'll have started in on
a goal (action stage) within 3 months.
Secondly, unlike some of the other responses here, I think your analysis is
fairly accurate. I've certainly found that picking up gears when I need them for
my goals is better than learning them ahead of time.
[https://www.lesswrong.com/posts/A2TmYuhKJ5MbdDiwa/when-gears-go-wrong]
Now, in terms of "how to actually do it."
I'm pretty convinced that the key to getting yourself to do stuff is "Creative
Tension" - creating a clear internal tension between the end state that feels
good and the current state that doesn't feel as good. There are 4 ways I know to
go about generating internal tension:
1. Develop a strong sense of self, and create tension between the world where
you're fully expressing that self and the world where you're not.
2. Develop a strong sense of taste, and create tension between the beautiful
things that could exist and what exists now.
3. Develop a strong pain, and create tension between the world where you have
that pain and the world where you've solved it.
4. Develop a strong vision, and create tension between the world as it is now
and the world as it would be in your vision.
One especially useful trick that worked for me, coming from the "just develop
myself into someone awesome" place, was tying the vision of the awesome person I
could be to the vision of what I'd achieved - that is, in my vision of the
future, including a vision of the awesome person I had to become in order to
reach that future.
I then would d
[This is a draft, to be posted on LessWrong soon.]
I’ve spent a lot of time developing tools and frameworks for bridging "intractable" disagreements. I’m also the person affiliated with CFAR who has taught Double Crux the most, and done the most work on it.
People often express to me something to the effect, “The important thing about Double Crux is all the low level habits of mind: being curious, being open to changing your mind, paraphrasing to check that you’ve understood, operationalizing, etc. The ‘Double Crux’ framework, itself is not very important.”
I half agree with that sentiment. I do think that those low level cognitive and conversational patterns are the most important thing, and at Double Crux trainings that I have run, most of the time is spent focusing on specific exercises to instill those low level TAPs.
However, I don’t think that the only value of the Double Crux schema is in training those low level habits. Double cruxes are extremely powerful machines that allow one to identify, if not the most efficient conversational path, a very high efficiency conversationa... (read more)
[This is an essay that I’ve had bopping around in my head for a long time. I’m not sure if this says anything usefully new, but it might click with some folks. If you haven’t read Social Status: Down the Rabbit Hole on Kevin Simler’s excellent blog Melting Asphalt, read that first. I think this is pretty bad and needs to be rewritten and maybe expanded substantially, but this blog is called “musings and rough drafts.”]
In this post, I’m going to outline how I think about status. In particular, I want to give a mechanistic account of how status necessarily arises, given some set of axioms, in much the same way one can show that evolution by natural selection must necessarily occur given the axioms of 1) inheritance of traits, 2) variance in reproductive success based on variance in traits, and 3) mutation.
(I am not claiming any particular skill at navigating status relationships, any more than a student of sports-biology is necessarily a skilled basketball player.)
By “status” I mean prestige-status.
Axiom 1: People have goals.
That is, for any given human, there are some things that they want. This can include just about anything. You might wan... (read more)
4Kaj_Sotala1yRelated: The red paperclip theory of status
[https://www.lesswrong.com/posts/7ZkHyrBFaDwZ3XgLi/the-red-paperclip-theory-of-status]
describes status as a form of optimization power, specifically one that can be
used to influence a group.
4Raemon1y(it says "more stuff here" but links to your overall blog, not sure if that
meant to be a link to a specific post)
Something that I've been thinking about lately is the possibility of an agent's values being partially encoded by the constraints of that agent's natural environment, or arising from the interaction between the agent and environment.
That is, an agent's environment puts constraints on the agent. From one perspective, removing those constraints is always good, because it lets the agent get more of what it wants. But sometimes, from a different perspective, we might feel that with those constraints removed, the agent goodharts or wireheads, or otherwise fails to actualize its "true" values.
The Generator freed from the oppression of the Discriminator
As a metaphor: if I'm one half of a GAN, let's say the generator, then in one sense my "values" are fooling the discriminator, and if you make me relatively more powerful than my discriminator, and I dominate it...I'm loving it, and also no longer making good images.
But you might also say, "No, wait. That is a super-stimulus, and actually what you value is making good images, but half of that value was encoded in your partner."
This second perspective seems a little stupid to me. A little too Aristotelian. I mean if we're going to take that ... (read more)
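The dynamic in the GAN metaphor (a generator that overpowers its critic "wins" while no longer making good images) can be sketched with a toy numerical example. This is not a real GAN, just a hypothetical illustration: the "discriminators" below are fixed scoring functions, and the "generators" are fixed sample sets. The point is that a critic which only encodes part of what we care about has a degenerate optimum.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real images": samples from a target distribution, here N(5, 2).
real = rng.normal(5, 2, 10_000)

def weak_discriminator(samples):
    """A critic that only checks the mean -- easy to fool."""
    return abs(np.mean(samples) - np.mean(real))

def strong_discriminator(samples):
    """A critic that checks mean and spread -- harder to fool."""
    return (abs(np.mean(samples) - np.mean(real))
            + abs(np.std(samples) - np.std(real)))

# A generator that has "goodharted": it just emits the constant 5.0.
degenerate = np.full(10_000, 5.0)
# A generator that actually matches the target distribution.
faithful = rng.normal(5, 2, 10_000)

print(weak_discriminator(degenerate))    # near 0: the weak critic is fully fooled
print(strong_discriminator(degenerate))  # near 2: the strong critic catches the collapse
print(strong_discriminator(faithful))    # near 0: matching the data satisfies both
```

In this toy frame, part of the "value" of making good images was stored in the strong critic; remove it, and the generator's remaining values point at a degenerate target, which is the sense in which an agent's values can be partially encoded in its constraints.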
2elityre1moSide note, which is not my main point: I think this also has something to do
with what meditation and psychedelics do to people, which was recently up for
discussion on Duncan's Facebook. I bet that meditation is actually a way to
repair psych blocks and trauma and what-not. But if you do that enough, and you
remove all the psych constraints...a person might sort of become so relaxed that
they become less and less of an agent. I'm a lot less sure of this part.
Childhood lead exposure reduces one’s IQ, and also causes one to be more impulsive and aggressive.
I always assumed that the impulsiveness was due, basically, to your executive function machinery working less well. So you have less self control.
But maybe the reason for the IQ-impulsiveness connection is that if you have a lower IQ, all of your subagents / subprocesses are less smart. Because they’re worse at planning and modeling the world, the only ways they know how to get their needs met are very direct, very simple action-plans / strategies. It’s not so much that you’re better at controlling your anger, as that the part of you that would be angry is less so, because it has other ways of getting its needs met.
7jimrandomh1yA slightly different spin on this model: it's not about the types of strategies
people generate, but the number. If you think about something and only come up
with one strategy, you'll do it without hesitation; if you generate three
strategies, you'll pause to think about which is the right one. So people who
can't come up with as many strategies are impulsive.
elityre (1 point, 1y): This seems like it might be testable. If you force impulsive folk to wait and
think, do they generate more ideas for how to proceed?
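As an illustrative aside, the "number of strategies" model above is easy to turn into a toy Monte Carlo simulation. Everything here (the idea-rate parameter, the five candidate strategies, the rule that an agent with at most one option acts without pausing) is an invented modeling assumption, not something from the comment:

```python
import random

def strategies_generated(idea_rate: float, max_ideas: int = 5) -> int:
    # Each of `max_ideas` candidate strategies independently occurs
    # to the agent with probability `idea_rate`.
    return sum(random.random() < idea_rate for _ in range(max_ideas))

def fraction_impulsive(idea_rate: float, trials: int = 50_000) -> float:
    # An episode counts as "impulsive" if at most one strategy comes to
    # mind, so there is nothing to deliberate between.
    hits = sum(strategies_generated(idea_rate) <= 1 for _ in range(trials))
    return hits / trials

random.seed(0)
for rate in (0.1, 0.3, 0.6):
    print(f"idea rate {rate}: fraction acting without deliberation "
          f"~ {fraction_impulsive(rate):.2f}")
```

Under these assumptions, agents who generate candidate strategies at a lower rate act without deliberating far more often, which is the qualitative prediction of the model.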
capybaralet (1 point, 1y): This reminded me of the argument that superintelligent agents will be very good at coordinating and just divvy up the multiverse and be done with it.
It would be interesting to do an experimental study of how the intelligence
profile of a population influences the level of cooperation between them.
elityre (2 points, 1y): I think that's what the book referenced here
[https://www.overcomingbias.com/2015/11/statestupidity.html], is about.
[Part of my Psychological Principles of Personal Productivity, which I am writing mostly in my Roam, now.]
Metacognitive space is a term of art that refers to a particular first person state / experience. In particular it refers to my propensity to be reflective about my urges and deliberate about the use of my resources.
I think it might literally be having the broader context of my life, including my goals and values, and my personal resource constraints loaded up in peripheral awareness.
Metacognitive space allows me to notice aversions and flinches, and take them as object, so that I can respond to them with Focusing or dialogue, instead of being swept around by them. Similarly, it seems, in practice, to reduce my propensity to act on immediate urges and temptations.
[Having MCS is the opposite of being [[{Urge-y-ness | reactivity | compulsiveness}]]?]
It allows me to “absorb” and respond to happenings in my environment, including problems and opportunities, taking considered action instead of the semi-automatic first response that occurred to me. [That sentence there feels a little fake, or maybe about something else, or may... (read more)
steve2152 (4 points, 4d): I haven't seen such a document but I'd be interested to read it too. I made an
argument to that effect here:
https://www.lesswrong.com/posts/PTkd8nazvH9HQpwP8/building-brain-inspired-agi-is-infinitely-easier-than
(Well, a related argument anyway. WBE is about scanning and simulating the brain
rather than understanding it, but I would make a similar argument using
"hard-to-scan" and/or "hard-to-simulate" things the brain does, rather than
"hard-understand" things the brain does, which is what I was nominally blogging
about. There's a lot of overlap between those anyway; the examples I put in
mostly work for both.)
elityre (2 points, 3d): Great. This post is exactly the sort of thing that I was thinking about.
There’s a psychological variable that seems to be able to change on different timescales, in me, at least. I want to gesture at it, and see if anyone can give me pointers to related resources.
[Hopefully this is super basic.]
There's a set of states that I occasionally fall into that include what I call “reactive” (meaning that I respond compulsively to the things around me), and what I call “urgy” (meaning that I feel a sort of “graspy” desire for some kind of immediate gratification).
Matt Goldenberg (2 points, 7mo): I remembered there was a set of audios from Eben Pagan that really helped me
before I turned them into the 9 breaths technique. Just emailed them to you.
They go a bit more into depth and you may find them useful.
Matt Goldenberg (2 points, 7mo): I don't know if this is what you're looking for, but I've heard the variable
you're pointing at referred to as your level of groundedness, centeredness, and
stillness in the self-help space.
There are all sorts of meditations, visualizations, and exercises aimed to make
you more grounded/centered/still and a quick google search pulls up a bunch.
One I teach is called the 9 breaths technique.
[https://roamresearch.com/#/app/Matthew/page/IxOJ6P1ir]
Here's another.
[https://www.amandagilbertmeditation.com/insights/2018/4/16/arriving-home-meditations-for-grounding]
rk (1 point, 2y): This link (and the one for "Why do we fear the twinge of starting?") is broken
(I think it's an admin view?).
(Correct link
[https://musingsandroughdrafts.wordpress.com/2019/06/04/why-does-outlining-my-day-in-advance-help-so-much/]
)
Raemon (6 points, 2y): Thanks! I just read through a few of your most recent posts and found them all
real useful.
elityre (5 points, 2y): Cool! I'd be glad to hear more. I don't have much of a sense of which things I write are useful, or how.
Hazard (2 points, 1y): Relating to the "Perception of Progress" bit at the end. I can confirm for a
handful of physical skills I practice there can be a big disconnect between
Perception of Progress and Progress from a given session. Sometimes this looks
like working on a piece of sleight of hand, it feeling weird and awkward, and
the next day suddenly I'm a lot better at it, much more than I was at any point
in the previous day's practice.
I've got a hazy memory of a breakdancer blogging about how a particular shade of
"no progress fumbling" can be a signal that a certain about of "unlearning" is
happening, though I can't find the source to vet it.
I’ve decided that I want to make more of a point of writing down my macro-strategic thoughts, because writing things down often produces new insights and refinements, and so that other folks can engage with them.
This is one frame or lens that I tend to think with a lot. This might be more of a lens or a model-let than a full break-down.
There are two broad classes of problems that we need to solve: we have some pre-paradigmatic science to figure out, and we have the problem of civilizational sanity.
[Epistemic status: a quick thought that I had a minute ago.]
There are goals / desires (I want to have sex, I want to stop working, I want to eat ice cream) and there are reflexes (anger, “wasted motions”, complaining about a problem, etc.).
If you try and squash goals / desires, they will often (not always?) resurface around the side, or find some way to get met. (Why not always? What are the difference between those that do and those that don’t?) You need to bargain with them, or design outlet poli... (read more)
eigen (3 points, 1y): I'm interested in knowing more about the meditation aspect and how it relates to productivity!
Matt Goldenberg (2 points, 1y): I'm currently running a pilot program that takes a very similar psychological
slant on productivity and procrastination, and planning to write a sequence
starting in the next week or so. It covers a lot of the same subjects, including
habits, ambiguity or overwhelm aversion, coercion aversion, and creating good
relationships with parts. Maybe we should chat!
Totally an experiment, I'm trying out posting my raw notes from a personal review / theorizing session, in my short form. I'd be glad to hear people's thoughts.
This is written for me, straight out of my personal Roam repository. The formatting is a little messed up because LessWrong's bullets don't support indefinite levels of nesting.
This one is about Urge-y-ness / reactivity / compulsiveness
I don't know if I'm naming this right. I think I might be lumping categories together.
[Epistemic status: a half-thought, which I started on earlier today, and which might or might not be a full thought by the time I finish writing this post.]
I’ve long counted exercise as an important component of my overall productivity and functionality. But over the past months my exercise habit has slipped some, without apparent detriment to my focus or productivity. But this week, after coming back from a workshop, my focus and productivity haven’t really booted up.
Viliam (2 points, 1y): Alternative hypothesis: maybe what expands your time horizon is not exercise and
meditation per se, but the fact that you are doing several different things
(work, meditation, exercise), instead of doing the same thing over and over
again (work). It probably also helps that the different activities use different
muscles, so that they feel completely different.
This hypothesis predicts that a combination of e.g. work, walking, and painting,
could provide similar benefits compared to work only.
elityre (2 points, 1y): Well, my work is often pretty varied, while my "being distracted" is pretty monotonous (watching youtube clips), so I don't think it is this one.
It seems like it would be useful to have very fine-grained measures of how smart / capable a general reasoner is, because this would allow an AGI project to carefully avoid creating a system smart enough to pose an existential risk.
I’m imagining slowly feeding a system more training data (or, alternatively, iteratively training a system with slightly more compute), and regularly checking its capability. When the system reaches “chimpanzee level” (whatever that means), you... (read more)
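A cartoon of that protocol, with a deliberately trivial "learner" standing in for the system being trained. The threshold task, the capability cap of 0.95, and the increment size are all invented for illustration:

```python
import random

def make_point(rng):
    # Toy task: classify whether x is above the (unknown) boundary at 0.
    x = rng.uniform(-1, 1)
    return x, x > 0

def fit_threshold(data):
    # Trivial "learner": put the threshold midway between the class means.
    pos = [x for x, y in data if y]
    neg = [x for x, y in data if not y]
    if not pos or not neg:
        return 0.0
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, test_set):
    return sum((x > threshold) == y for x, y in test_set) / len(test_set)

rng = random.Random(0)
test_set = [make_point(rng) for _ in range(2000)]
data = []
capability_cap = 0.95  # halt before the system gets more capable than this

for step in range(1, 100):
    data += [make_point(rng) for _ in range(10)]  # feed a little more data
    acc = accuracy(fit_threshold(data), test_set)  # regular capability check
    if acc >= capability_cap:
        print(f"halting at step {step}: measured capability {acc:.3f}")
        break
```

The hard part the shortform is pointing at is that real capability measures are nowhere near this fine-grained or monotonic; the sketch only shows the shape of the control loop (train a little, measure, halt at a threshold).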
A fire alarm creates common knowledge, in the you-know-I-know sense, that there is a fire; after which it is socially safe to react. When the fire alarm goes off, you know that everyone else knows there is a fire, and you know you won’t lose face if you proceed to exit the building.
If you have a predetermined set of tests, this could serve as a fire alarm, but only if you've successfully built a consensus that it is one. This is hard, and the consensus would need to be quite strong. To avoid ambiguity, the test itself would need to be demonstrably resistant to being clever Hans'ed. Otherwise it would be just another milestone.
This is my current take about where we're at in the world:
Deep learning, scaled up, might be basically enough to get AGI. There might be some additional conceptual work necessary, but the main difference between 2020 and the year in which we have transformative AI is that in that year, the models are much bigger.
If this is the case, then the most urgent problem is strong AI alignment + wise deployment of strong AI.
We'll know if this is the case in the next 10 years or so, because either we'll continue to see incredible gains from increasingly bigger Deep L... (read more)
niplav (1 point, 20d): (This question is only related to a small point)
You write that one possible foundational strategy could be to "radically
detraumatize large fractions of the population". Do you believe that
1. A large part of the population is traumatized
2. That trauma is reversible
3. Removing/reversing that trauma would improve the development of humanity
drastically?
If yes, why? I'm happy to get a 1k page PDF thrown at me.
I know that this has been a relatively popular talking point on twitter, but one without a canonical resource, and I also haven't seen it discussed on LW.
elityre (6 points, 18d): I was wondering if I would get comment on that part in particular. ; )
I don't have a strong belief about your points one through three, currently. But
it is an important hypothesis in my hypothesis space, and I'm hoping that I can
get to the bottom of it in the next year or two.
I do confidently think that one of the "forces for badness" in the world is that people regularly feel triggered or threatened by all kinds of different proposals, and reflexively act to defend themselves. I think this is among the top
three problems in having good discourse and cooperative politics. Systematically
reducing that trigger response would be super high value, if it were feasible.
My best guess is that that propensity to be triggered is not mostly the result
of infant or childhood trauma. It seems more parsimonious to posit that it is
basic tribal stuff. But I could imagine it having its root in something like "trauma" (meaning it is the result of specific experiences, not just general dispositions, and it is practically feasible, if difficult, to clear or heal the underlying problem in a way that completely prevents the symptoms).
I think there is no canonical resource on trauma-stuff because 1) the people on twitter are less interested, on average, in that kind of theory building than we are on LessWrong, and 2) because mostly those people are (I think) extrapolating
from their own experience, in which some practices unlocked subjectively huge
breakthroughs in personal well-being / freedom of thought and action.
Does that help at all?
Hazard (2 points, 15d): I plan to blog more about how I understand some of these trigger states and how
it relates to trauma. I do think there's a decent amount of written work, not
sure how "canonical", but I've read some great stuff that from sources I'm
surprised I haven't heard more hype about. The most useful stuff I've read so
far is the first three chapters of this book. It has hugely sharpened my
thinking.
I agree that a lot of trauma discourse on our chunk of twitter is more focused on the personal experience/transformation side, and doesn't lend itself well to bigger Theory of Change type scheming.
http://www.traumaandnonviolence.com/chapter1.html
elityre (2 points, 15d): Thanks for the link! I'm going to take a look!
niplav (1 point, 15d): Yes, it definitely does – you just created the resource I will link people to. Thank you!
The third paragraph is especially cruxy. As far as I can tell, there are many
people who have (to some extent) defused this propensity to get triggered for
themselves. At least for me, LW was a resource to achieve that.
I was thinking lately about how there are some different classes of models of psychological change, and I thought I would outline them and see where that leads me.
It turns out it led me into a question about where and when Parts-based vs. Association-based models are applicable.
This is the frame that I make the most use of, in my personal practice. It assumes that all behavior is the result of some goal directed subproce... (read more)
Raemon (6 points, 6mo): I like this a lot, and think it’d make a good top level post.
elityre (2 points, 6mo): Really? I would prefer to have something much more developed, and/or to have solved my key puzzle here, before I put it up as a top level post.
Raemon (2 points, 6mo): I saw the post more as giving me a framework that was helpful for sorting various psych models, and the fact that you had one question about it didn't
actually feel too central for my own reading. (Separately, I think it's
basically fine for posts to be framed as questions rather than definitive
statements/arguments after you've finished your thinking)
Viliam (4 points, 6mo): I wonder how the ancient schools of psychotherapy would fit here. Psychoanalysis
is parts-based. Behaviorism is association-based. Rational therapy seems
narrative-based. What about Rogers or Maslow?
Seems to me that Rogers and the "think about it seriously for 5 minutes"
technique should be in the same category. In both cases, the goal is to let the
client actually think about the problem and find the solution for themselves.
Not sure if this is or isn't an example of narrative-based, except the client is
supposed to find the narrative themselves.
Maslow comes with a supposed universal model of human desires and lets you find
yourself in that system. Jung kinda does the same, but with a mythological
model. Sounds like an externally provided narrative. Dunno, maybe the
narrative-based should be split into more subgroups, depending on where the
narrative comes from (a universal model, an ad-hoc model provided by the
therapist, an ad-hoc model constructed by the client)?
ChristianKl (2 points, 6mo): The way I have been taught NLP, you usually don't use either anchors or an
ecological check but both.
Behavior changes that are created by changing around anchors are not long-term
stable when they violate ecology.
Changing around associations allows one to create new strategies in a more detailed way than you get by just doing parts work, and I have the impression that it's often faster in creating new strategies.
(A) Interventions that are about resolving traumas feel to me like a different
model.
(B) None of the three models you listed address the usefulness of connecting
with the felt sense of emotions.
(C) There's a model of change where you create a setting where people can have
new behavioral experiences and then hopefully learn from those experiences and
integrate what they learned in their lives.
CFAR's goal of wanting to give people more agency over the ways they think seems to work through C, where CFAR wants to expose people to a bunch of experiences in which people actually feel new ways to affect their thinking.
In the Danis Bois method both A and C are central.
romeostevensit (3 points, 1y): Time for a new instance of this?
https://www.lesswrong.com/posts/4sAsygakd4oCpbEKs/lesswrong-help-desk-free-paper-downloads-and-more-2014
Raemon (0 points, 1y): I edited the image into the comment box, predicting that the reason you didn't
was because you didn't know you could (using markdown). Apologies if you prefer
it not to be here (and can edit it back if so)
In this case it seems fine to add the image, but I feel disconcerted that mods have the ability to edit my posts.
I guess it makes sense that the LessWrong team would have the technical ability to do that. But editing a user's post, without their specifically asking, feels like a pretty big breach of... not exactly trust, but something like that. It means I don’t have fundamental control over what is written under my name.
That is to say, I personally request that you never edit my posts without asking (which you did, in this case) and waiting for my response. Furthermore, I think that should be a universal policy on LessWrong, though maybe this is just an idiosyncratic neurosis of mine.
Raemon (4 points, 1y): Understood, and apologies.
A fairly common mod practice has been to fix typos and stuff in a sort of "move
first and then ask if it was okay" thing. (I'm not confident this is the best
policy, but it saves time/friction, and meanwhile I don't think anyone had had
an issue with it). But, your preference definitely makes sense and if others
felt the same I'd reconsider the overall policy.
(It's also the case that adding an image is a bit of a larger change than the
usual typo fixing, and may have been more of an overstep of bounds)
In any case I definitely won't edit your stuff again without express permission.
elityre (1 point, 1y): Cool.
: )
Wei_Dai (4 points, 1y): If it's not just you, it's at least pretty rare. I've seen the mods "helpfully"
edit posts several times (without asking first) and this is the first time I've
seen anyone complain about it.
elityre (1 point, 1y): I knew that I could, and didn’t, because it didn’t seem worth it. (I was thinking that I still had to upload it to a third party photo repository and link to it. It’s easier than that now?)
Raemon (2 points, 1y): In this case your blog already counted as a third party repository.
Raemon (4 points, 1y): Some of these seem likely to generalize and some seem likely to be more
specific.
Curious about your thoughts on the "best experimental approaches to figuring out your own napping protocol."
Doing actual mini-RCTs can be pretty simple. You only need 3 things:
1. A spreadsheet
2. A digital coin for randomization
3. A way to measure the variable that you care about
I think one of the practically powerful "techniques" of rationality is doing simple empirical experiments like this. You want to get something? You don't know how to get it? Try out some ideas and check which ones work!
There are other applications of empiricism that are not as formal, and sometimes faster. Those are also awesome. But at the very least, I've found that doing ... (read more)
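A minimal sketch of what such a self-experiment could look like in code, in place of the spreadsheet. The nap example, the focus scores, and the effect size are all hypothetical placeholders; in real use you would log a genuine daily measurement instead of the simulated one:

```python
import random
import statistics

def flip_coin(rng):
    # The "digital coin": randomize today's condition.
    return "nap" if rng.random() < 0.5 else "no_nap"

def mean_difference(log):
    # The "spreadsheet" step: compare mean outcomes between conditions.
    nap = [score for cond, score in log if cond == "nap"]
    no_nap = [score for cond, score in log if cond == "no_nap"]
    return statistics.mean(nap) - statistics.mean(no_nap)

rng = random.Random(42)
log = []
for day in range(30):
    condition = flip_coin(rng)
    # Stand-in for the real measurement (e.g. an end-of-day focus rating);
    # here we just simulate a world where napping helps a bit.
    score = rng.gauss(7.0 if condition == "nap" else 6.0, 1.0)
    log.append((condition, score))

print(f"mean focus difference (nap - no_nap): {mean_difference(log):+.2f}")
```

With only 30 days the estimate is noisy, which is part of the point: randomizing the condition is what lets you attribute the difference to the nap rather than to whatever else varies day to day.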
My understanding is that there was a 10 year period starting around 1868, in which South Carolina's legislature was mostly black, and when the universities were integrated (causing most white students to leave), before the Dixiecrats regained power.
I would like to find a relatively non-partisan account of this period.
I remember reading a Zvi Mowshowitz post in which he says something like "if you have concluded that the most ethical thing to do is to destroy the world, you've made a mistake in your reasoning somewhere."
I spent some time searching around his blog for that post, but couldn't find it. Does anyone know what I'm talking about?
Raemon (2 points, 4d): Probably this one?
http://lesswrong.com/posts/XgGwQ9vhJQ2nat76o/book-trilogy-review-remembrance-of-earth-s-past-the-three
elityre (2 points, 3d): Thanks!
I thought that it was in the context of talking about EA, but maybe this is what
I am remembering?
It seems unlikely though, since I wouldn't have read the spoiler-part.
Anyone have a link to the sequence post where someone posits that AIs wouldn't do art and science from a drive to compress information, but rather would create and then reveal cryptographic strings (or something)?
niplav (1 point, 12d): I think you are thinking of “AI Alignment: Why It’s Hard, and Where to Start”
[https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/]
:
There's also a mention of that method in this post
[https://www.lesswrong.com/posts/sizjfDgCgAsuLJQmm/reply-to-holden-on-tool-ai#3___Why_we_haven_t_already_discussed_Holden_s_suggestion]
.
This post outlines a hierarchy of behavioral change methods. Each of these approaches is intended to be simpler, more light-weight, and faster to use (is that right?), than the one that comes after it. On the flip side, each of these approaches is intended to resolve a common major blocker of the approach before... (read more)
Can anyone get a copy of this paper for me? I'm looking to get clarity about how important cryopreserving non-brain tissue is for preserving personality.
New post: Some things I think about Double Crux and related topics
I've spent a lot of my discretionary time working on the broad problem of developing tools for bridging deep disagreements and transferring tacit knowledge. I'm also probably the person who has spent the most time explicitly thinking about and working with CFAR's Double Crux framework. It seems good for at least some of my high level thoughts to be written up some place, even if I'm not going to go into detail about, defend, or substantiate, most of them.
The following are my own beliefs and do not necessarily represent CFAR, or anyone else.
I, of course, reserve the right to change my mind.
[Throughout I use "Double Crux" to refer to the Double Crux technique, the Double Crux class, or a Double Crux conversation, and I use "double crux" to refer to a proposition that is a shared crux for two people in a conversation.]
Here are some things I currently believe:
(General)
- Double Crux is one (highly important) tool/ framework among many. I want to distinguish between the overall art of untangling and resolving deep disagreements and the Double Crux tool in particular. The
... (read more)"People" in general rarely change their mind when they feel like you have trapped them in some inconsistency, but people using the double-crux method in the first place are going to be aspiring rationalists, right? Trapping someone in an inconsistency (if it's a real inconsistency and not a false perception of one) is collaborative: the thing they were thinking was flawed, and you helped them see the flaw! That's a good thing! (As it is written of the fifth virtue, "Do not believe you do others a favor if you accept their arguments; the favor is to you.")
Obviously, I agree that people should try to understand their interlocutors. (If you performatively try to find fault in something you don't understand, then apparent "faults" you find are likely to be your own misunderstandings rather than actual faults.) But if someone spots an actual inconsistency in my ideas, I want them to tell me right away. Pe
... (read more)
Old post: RAND needed the "say oops" skill
[Epistemic status: a middling argument]
A few months ago, I wrote about how RAND, and the “Defense Intellectuals” of the cold war, represent another precious datapoint of “very smart people, trying to prevent the destruction of the world, in a civilization that they acknowledge to be inadequate to dealing sanely with x-risk.”
Since then I have spent some time doing additional research into what cognitive errors and mistakes those consultants, military officials, and politicians made that endangered the world. The idea being that if we could diagnose which specific irrationalities they were subject to, this would suggest errors that might also be relevant to contemporary x-risk mitigators, and might point out some specific areas where development of rationality training is needed.
However, this proved somewhat less fruitful than I was hoping, and I’ve put it aside for the time being. I might come back to it in the coming months.
It does seem worth sharing at least one relevant anecdote and analysis, from Daniel Ellsberg’s excellent book The Doomsday Machine, given that I’ve already written it up.
The missile gap
In the late nineteen-fi... (read more)
This was quite valuable to me, and I think I would be excited about seeing it as a top-level post.
New post: What is mental energy?
[Note: I’ve started a research side project on this question, and it is already obvious to me that this ontology is importantly wrong.]
There’s a common phenomenology of “mental energy”. For instance, if I spend a couple of hours thinking hard (maybe doing math), I find it harder to do more mental work afterwards. My thinking may be slower and less productive, and I feel tired, or drained (mentally, rather than physically).
Mental energy is one of the primary resources that one has to allocate, in doing productive work. In almost all cases, humans have less mental energy than they have time, and therefore effective productivity is a matter of energy management, more than time management. If we want to maximize personal effectiveness, mental energy seems like an extremely important domain to understand. So what is it?
The naive story is that mental energy is an actual energy resource that one expends and then needs to recoup. That is, when one is doing cognitive work, one is burning calories, depleting the body’s energy stores. As one uses energy, one has less fuel to burn.
My current understanding is that this story is not physiologically realistic. T... (read more)
New post: Some notes on Von Neumann, as a human being
I recently read Prisoner’s Dilemma, which is half an introduction to very elementary game theory and half a biography of John Von Neumann, and watched this old PBS documentary about the man.
I’m glad I did. Von Neumann has legendary status in my circles, as the smartest person ever to live. [1] Many times I’ve written the words “Von Neumann Level Intelligence” in an AI strategy document, or speculated about how many coordinated Von Neumanns it would take to take over the world. (For reference, I now think that 10 is far too low, mostly because he didn’t seem to have the entrepreneurial or managerial dispositions.)
Learning a little bit more about him was humanizing. Yes, he was the smartest person ever to live, but he was also an actual human being, with actual human traits.
Watching this first clip, I noticed that I was surprised by a number of things.
- That VN had an accent. I had known that he was Hungarian, but somehow it had never quite propagated that he would speak with a Hungarian accent.
- That he was of middling height (somewhat shorter than the presenter he’s talking to).
- The thin
... (read more)
TL;DR: I’m offering to help people productively have difficult conversations and resolve disagreements, for free. Feel free to email me if and when that seems helpful. elitrye [at] gmail.com
Facilitation
Over the past 4-ish years, I’ve had a side project of learning, developing, and iterating on methods for resolving tricky disagreements and failures to communicate. A lot of this has been in the Double Crux frame, but I’ve also been exploring a number of other frameworks (including NVC, Convergent Facilitation, Circling-inspired stuff, intuition extraction, and some home-grown methods).
As part of that, I’ve had a standing offer to facilitate / mediate tricky conversations for folks in the CFAR and MIRI spheres (testimonials below). Facilitating “real disagreements” allows me to get feedback on my current conversational frameworks and techniques. When I encounter blockers that I don’t know how to deal with, I can go back to the drawing board to model those problems and the interventions that would solve them, and iterate from there, developing new methods.
I generally like doing this kind of conversational facilitation and am open to do... (read more)
[I wrote a much longer and more detailed comment, and then decided that I wanted to think more about it. In lieu of posting nothing, here's a short version.]
I mean I did very little facilitation one way or the other at that event, so I think my counterfactual impact was pretty minimal.
In terms of my value added, I think that one was in the bottom 5th percentile?
In terms of how useful that tiny amount of facilitation was, maybe 15th to 20th percentile? (This is a little weird, because quantity and quality are related. More active facilitation has a wider quality span: active (read: a lot of) facilitation can be much more helpful when it is good, and much more disruptive / annoying / harmful when it is bad, compared to less active backstop facilitation.)
Overall, the conversation served the goals of the participants and had a median outcome for that kind of conversation, which is maybe 30th percentile, but there is a long right tail of positive outcomes (and maybe I am messing up how to think about percentile scores with skewed distributions).
The outcome that occurred ("had an interesting conversation, and had some new thoughts / clarifications") is good, but also far below the sort of outcome that I'm usually aiming for (but often missing): substantive, permanent (epistemic!) change to the way that one or both of the people orient on this topic.
(Reasonably personal)
I spend a lot of time trying to build skills, because I want to be awesome. But there is something off about that.
I think I should just go after things that I want, and solve the problems that come up on the way. The idea of building skills sort of implies that if I don't have some foundation or some skill, I'll be blocked, and won't be able to solve some thing in the way of my goals.
But that doesn't actually sound right. Like it seems like the main important thing for people who do incredible things is their ability to do problem solving on the things that come up, and not the skills that they had previously built up in a "skill bank".
Raw problem solving is the real thing and skills are cruft. (Or maybe not cruft per se, but more like a side effect. The compiled residue of previous problem solving. Or like a code base from previous project that you might repurpose.)
Part of the problem with this is that I don't know what I want for my own sake, though. I want to be awesome, which in my conception, means being able to do things.
I note that wanting "to be able to do things" is a leaky sort of motivation: because the... (read more)
New post: The Basic Double Crux Pattern
[This is a draft, to be posted on LessWrong soon.]
I’ve spent a lot of time developing tools and frameworks for bridging "intractable" disagreements. I’m also the person affiliated with CFAR who has taught Double Crux the most, and done the most work on it.
People often express to me something to the effect of, “The important thing about Double Crux is all the low level habits of mind: being curious, being open to changing your mind, paraphrasing to check that you’ve understood, operationalizing, etc. The ‘Double Crux’ framework itself is not very important.”
I half agree with that sentiment. I do think that those low level cognitive and conversational patterns are the most important thing, and at Double Crux trainings that I have run, most of the time is spent focusing on specific exercises to instill those low level TAPs.
However, I don’t think that the only value of the Double Crux schema is in training those low level habits. Double cruxes are extremely powerful machines that allow one to identify, if not the most efficient conversational path, a very high efficiency conversationa... (read more)
Old post: A mechanistic description of status
[This is an essay that I’ve had bopping around in my head for a long time. I’m not sure if this says anything usefully new, but it might click with some folks. If you haven’t read Social Status: Down the Rabbit Hole on Kevin Simler’s excellent blog, Melting Asphalt, read that first. I think this is pretty bad and needs to be rewritten and maybe expanded substantially, but this blog is called “musings and rough drafts.”]
In this post, I’m going to outline how I think about status. In particular, I want to give a mechanistic account of how status necessarily arises, given some set of axioms, in much the same way one can show that evolution by natural selection must necessarily occur given the axioms of 1) inheritance of traits 2) variance in reproductive success based on variance in traits and 3) mutation.
(I am not claiming any particular skill at navigating status relationships, any more than a student of sports-biology is necessarily a skilled basketball player.)
By “status” I mean prestige-status.
Axiom 1: People have goals.
That is, for any given human, there are some things that they want. This can include just about anything. You might wan... (read more)
Something that I've been thinking about lately is the possibility of an agent's values being partially encoded by the constraints of that agent's natural environment, or arising from the interaction between the agent and environment.
That is, an agent's environment puts constraints on the agent. From one perspective, removing those constraints is always good, because it lets the agent get more of what it wants. But sometimes, from a different perspective, we might feel that with those constraints removed, the agent Goodharts or wireheads, or otherwise fails to actualize its "true" values.
The Generator freed from the oppression of the Discriminator
As a metaphor: if I'm one half of a GAN, let's say the generator, then in one sense my "values" are fooling the discriminator, and if you make me relatively more powerful than my discriminator, and I dominate it...I'm loving it, and also no longer making good images.
But you might also say, "No, wait. That is a super-stimulus, and actually what you value is making good images, but half of that value was encoded in your partner."
This second perspective seems a little stupid to me. A little too Aristotelian. I mean if we're going to take that ... (read more)
[Real short post. Random. Complete speculation.]
Childhood lead exposure reduces one’s IQ, and also causes one to be more impulsive and aggressive.
I always assumed that the impulsiveness was due, basically, to your executive function machinery working less well. So you have less self control.
But maybe the reason for the IQ-impulsiveness connection is that if you have a lower IQ, all of your subagents/ subprocesses are less smart. Because they’re worse at planning and modeling the world, the only way they know how to get their needs met is very direct, very simple action-plans/ strategies. It’s not so much that you’re better at controlling your anger, as that the part of you that would be angry is less so, because it has other ways of getting its needs met.
new post: Metacognitive space
[Part of my Psychological Principles of Personal Productivity, which I am writing mostly in my Roam, now.]
Metacognitive space is a term of art that refers to a particular first person state / experience. In particular it refers to my propensity to be reflective about my urges and deliberate about the use of my resources.
I think it might literally be having the broader context of my life, including my goals and values, and my personal resource constraints loaded up in peripheral awareness.
Metacognitive space allows me to notice aversions and flinches, and take them as object, so that I can respond to them with Focusing or dialogue, instead of being swept around by them. Similarly, it seems, in practice, to reduce my propensity to act on immediate urges and temptations.
[Having MCS is the opposite of being [[{Urge-y-ness | reactivity | compulsiveness}]]?]
It allows me to “absorb” and respond to happenings in my environment, including problems and opportunities, taking considered action instead of the semi-automatic first response that occurs to me. [That sentence there feels a little fake, or maybe about something else, or may... (read more)
Does anyone know of a good technical overview of why it seems hard to get Whole Brain Emulations before we get neuromorphic AGI?
I think maybe I read a PDF that made this case years ago, but I don't know where.
There’s a psychological variable that seems to be able to change on different timescales, in me, at least. I want to gesture at it, and see if anyone can give me pointers to related resources.
[Hopefully this is super basic.]
There is a set of states that I occasionally fall into that includes what I call “reactive” (meaning that I respond compulsively to the things around me), and what I call “urgy” (meaning that I feel a sort of “graspy” desire for some kind of immediate gratification).
These states all have... (read more)
new (boring) post on controlled actions.
New post: Why does outlining my day in advance help so much?
New post: some musings on deliberate practice
I’ve decided that I want to make more of a point of writing down my macro-strategic thoughts, because writing things down often produces new insights and refinements, and so that other folks can engage with them.
This is one frame or lens that I tend to think with a lot. This might be more of a lens or a model-let than a full break-down.
There are two broad classes of problems that we need to solve: we have some pre-paradigmatic science to figure out, and we have the problem of civilizational sanity.
Preparadigmatic science
There are a number ... (read more)
New (short) post: Desires vs. Reflexes
[Epistemic status: a quick thought that I had a minute ago.]
There are goals / desires (I want to have sex, I want to stop working, I want to eat ice cream) and there are reflexes (anger, “wasted motions”, complaining about a problem, etc.).
If you try to squash goals / desires, they will often (not always?) resurface around the side, or find some way to get met. (Why not always? What is the difference between those that do and those that don’t?) You need to bargain with them, or design outlet poli... (read more)
new post: Intro to and outline of a sequence on a productivity system
Totally an experiment, I'm trying out posting my raw notes from a personal review / theorizing session, in my short form. I'd be glad to hear people's thoughts.
This is written for me, straight out of my personal Roam repository. The formatting is a little messed up because LessWrong's bullets don't support indefinite levels of nesting.
This one is about Urge-y-ness / reactivity / compulsiveness
- I don't know if I'm naming this right. I think I might be lumping categories together.
- Let's start with what I know:
- There are th
... (read more)
New post: Some musings about exercise and time discount rates
[Epistemic status: a half-thought, which I started on earlier today, and which might or might not be a full thought by the time I finish writing this post.]
I’ve long counted exercise as an important component of my overall productivity and functionality. But over the past months my exercise habit has slipped some, without apparent detriment to my focus or productivity. This week, though, after coming back from a workshop, my focus and productivity haven’t really booted up.
Her... (read more)
New post: Capability testing as a pseudo fire alarm
[epistemic status: a thought I had]
It seems like it would be useful to have very fine-grained measures of how smart / capable a general reasoner is, because this would allow an AGI project to carefully avoid creating a system smart enough to pose an existential risk.
I’m imagining slowly feeding a system more training data (or, alternatively, iteratively training a system with slightly more compute), and regularly checking its capability. When the system reaches “chimpanzee level” (whatever that means), you... (read more)
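The loop I'm imagining could be sketched like this. (A toy sketch only: all the names here, especially `measure_capability`, are hypothetical stand-ins, and the hard part of the proposal is of course building a capability measure that is actually fine-grained and trustworthy.)

```python
def capability_gated_training(train_step, measure_capability, threshold, max_steps):
    """Incrementally train a system, checking its measured capability after each
    increment, and halt *before* adopting a candidate that crosses the threshold.

    train_step(model, step) -> candidate : returns a slightly more capable model
    measure_capability(candidate) -> score : runs the capability test battery
    """
    model = None  # the last candidate known to be below the threshold
    for step in range(max_steps):
        candidate = train_step(model, step)
        score = measure_capability(candidate)
        if score >= threshold:
            # Stop here: keep the previous, sub-threshold model.
            return model, step
        model = candidate
    return model, max_steps
```

In a toy run where "capability" just grows with the number of training steps, the loop halts as soon as the threshold is reached and hands back the last sub-threshold model, which is the property the fire-alarm framing cares about.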
In There’s No Fire Alarm for Artificial General Intelligence Eliezer argues:
If you have a predetermined set of tests, this could serve as a fire alarm, but only if you've successfully built a consensus that it is one. This is hard, and the consensus would need to be quite strong. To avoid ambiguity, the test itself would need to be demonstrably resistant to being clever Hans'ed. Otherwise it would be just another milestone.
This is my current take about where we're at in the world:
Deep learning, scaled up, might be basically enough to get AGI. There might be some additional conceptual work necessary, but the main difference between 2020 and the year in which we have transformative AI is that in that year, the models are much bigger.
If this is the case, then the most urgent problem is strong AI alignment + wise deployment of strong AI.
We'll know if this is the case in the next 10 years or so, because either we'll continue to see incredible gains from increasingly bigger Deep L... (read more)
I was thinking lately about how there are some different classes of models of psychological change, and I thought I would outline them and see where that leads me.
It turns out it led me into a question about where and when Parts-based vs. Association-based models are applicable.
Google Doc version.
Parts-based / agent-based models
Some examples:
This is the frame that I make the most use of, in my personal practice. It assumes that all behavior is the result of some goal directed subproce... (read more)
Can someone affiliated with a university, etc., get me a PDF of this paper?
https://psycnet.apa.org/buy/1929-00104-001
It is on Sci-Hub, but that version is missing a few pages in which they describe the methodology.
[I hope this isn't an abuse of LessWrong.]
New (image) post: My strategic picture of the work that needs to be done
In this case it seems fine to add the image, but I feel disconcerted that mods have the ability to edit my posts.
I guess it makes sense that the LessWrong team would have the technical ability to do that. But editing a user's post, without their specifically asking, feels like a pretty big breach of... not exactly trust, but something like that. It means I don’t have fundamental control over what is written under my name.
That is to say, I personally request that you never edit my posts without asking (which you did, in this case) and waiting for my response. Furthermore, I think that should be a universal policy on LessWrong, though maybe this is just an idiosyncratic neurosis of mine.
New post: Napping Protocol
Doing actual mini-RCTs can be pretty simple. You only need 3 things:
1. A spreadsheet
2. A digital coin for randomization
3. A way to measure the variable that you care about
I think one of the most practically powerful "techniques" of rationality is doing simple empirical experiments like this. You want to get something? You don't know how to get it? Try out some ideas and check which ones work!
There are other applications of empiricism that are not as formal, and sometimes faster. Those are also awesome. But at the very least, I've found that doing ... (read more)
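To make the three ingredients concrete, here is a minimal sketch of what such a self-experiment might look like in code, assuming a daily intervention and a single numeric outcome per day (the function and file names are my own illustrative choices, not part of any established protocol):

```python
import csv
import random


def assign_condition():
    """Flip a digital coin to decide whether today is an intervention day."""
    return random.choice(["intervention", "control"])


def log_day(path, day, condition, outcome):
    """Append one day's condition and measured outcome to the spreadsheet (a CSV)."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([day, condition, outcome])


def compare(path):
    """Compute the mean outcome under each condition from the logged data."""
    totals, counts = {}, {}
    with open(path) as f:
        for day, condition, outcome in csv.reader(f):
            totals[condition] = totals.get(condition, 0.0) + float(outcome)
            counts[condition] = counts.get(condition, 0) + 1
    return {c: totals[c] / counts[c] for c in totals}
```

Each morning you call `assign_condition()` and obey the coin; each evening you measure the variable you care about and call `log_day(...)`; after a few weeks, `compare(...)` tells you whether the intervention days actually looked different.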
New (unedited) post: The bootstrapping attitude
New (unedited) post: Exercise and nap, then mope, if I still want to
New post: _Why_ do we fear the twinge of starting?
My understanding is that there was a 10-year period starting around 1868, in which South Carolina's legislature was mostly black, and when the universities were integrated (causing most white students to leave), before the Redeemers regained power.
I would like to find a relatively non-partisan account of this period.
Anyone have suggestions?
I remember reading a Zvi Mowshowitz post in which he says something like "if you have concluded that the most ethical thing to do is to destroy the world, you've made a mistake in your reasoning somewhere."
I spent some time searching around his blog for that post, but couldn't find it. Does anyone know what I'm talking about?
Anyone have a link to the sequence post where someone posits that AIs wouldn't do art and science from a drive to compress information, but would rather create and then reveal cryptographic strings (or something)?
A hierarchy of behavioral change methods
Follow up to, and a continuation of the line of thinking from: Some classes of models of psychology and psychological change
Related to: The universe of possible interventions on human behavior (from 2017)
This post outlines a hierarchy of behavioral change methods. Each of these approaches is intended to be simpler, more lightweight, and faster to use (is that right?) than the one that comes after it. On the flip side, each of these approaches is intended to resolve a common major blocker of the approach before... (read more)
Can anyone get a copy of this paper for me? I'm looking to get clarity about how important cryopreserving non-brain tissue is for preserving personality.
Older post: Initial Comparison between RAND and the Rationality Cluster
New post: my personal wellbeing support pillars
New post: The seed of a theory of triggeredness