In this post I lay out a model of beliefs and communication that identifies two types of things we might think of as ‘beliefs,’ how they are communicated between people, how they are communicated within people, and what this might imply about intellectual progress in some important fields. As background, Terence Tao has a blog post describing three stages of mathematics: pre-rigorous, rigorous, and post-rigorous. It’s only about two pages; the rest of this post will assume you’ve read it. Ben Pace has a blog post describing how to discuss the models generating an output, rather than the output itself, which is also short and related, but has important distinctions from the model outlined here.

[Note: the concept for this post comes from a talk given by Anna Salamon, and I sometimes instruct for CFAR, but the presentation in this post should be taken to only represent my views.]

If a man will begin with certainties, he shall end with doubts, but if he will be content to begin with doubts he shall end in certainties. -- Francis Bacon

FORMAL COMMUNICATION

Probably the dominant model of conversations among philosophers today is Robert Stalnaker’s. (Here’s an introduction.) A conversation has a defined set of interlocutors, and some shared context, and speech acts add statements to the context, typically by asserting a new fact.

I’m not an expert in contemporary philosophy, and so from here on out this is my extension of this view, which I’ll refer to as ‘formal.’ Perhaps this extension is entirely precedented, or perhaps it’s controversial. My view focuses on situations where logical omniscience is not assumed, and thus simply pointing out the conclusion that arises from combining facts can count as such an assertion. Proper speech considers this and takes inferential distance into account; my speech acts should be derivable from our shared context or be an unsurprising jump from it. Both new logical facts and environmental facts count as adding information to the shared context. That I am currently wearing brown socks while writing this part of the post is not something you could derive from our shared context, but is nevertheless ‘unsurprising.’

It’s easy to see how a mathematical proof might fit into this framework. We begin with some axioms and suppositions, and then we compute conclusions that follow from those premises, and eventually we end up at the theorem that was to be proved.

If I make a speech act that’s too far of a stretch--either because it disagrees with something in the context (or your personal experience), or is just not easily derivable from the common context--then the audience should ‘beep’ and I should back up and justify the speech act. A step in the proof that doesn’t obviously follow means I need to expand the proof to make it clear how I got from A to B, or how a pair of statements that appear contradictory is in fact not contradictory. (“Ah, by X I meant the restricted subset X’, such that this counterexample is excluded; my mistake.”)
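To make the formal style concrete, here is a toy sketch in Python (my own illustration, not Stalnaker’s formalism): the shared context is a set of accepted propositions, a speech act tries to extend it, and anything not backed by already-accepted premises earns a ‘beep.’

```python
class SharedContext:
    def __init__(self, propositions=()):
        self.propositions = set(propositions)

    def assert_claim(self, claim, derivable_from=()):
        # Stand-in for 'derivable or an unsurprising jump': the speaker
        # must cite premises the audience has already accepted.
        if derivable_from and all(p in self.propositions for p in derivable_from):
            self.propositions.add(claim)
            return "accepted"
        return "beep: back up and justify"

ctx = SharedContext({"All men are mortal", "Socrates is a man"})
print(ctx.assert_claim("Socrates is mortal",
                       derivable_from=("All men are mortal",
                                       "Socrates is a man")))  # accepted
print(ctx.assert_claim("Socrates is immortal"))  # beep: back up and justify
```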

This style of conversation seems to be minimizing surprise on the low level; from moment to moment, actions are being taken in a way that views justification and validation by independent sources as core constraints. What is this good for? Interestingly, the careful avoidance of surprises on the low level permits surprises on the high level, as a conclusion reached by airtight logic can be as trustworthy as the premises of that logic, regardless of how bizarre the conclusion seems. A plan fleshed out with enough detail that it can be independently reconstructed by many different people is a plan that can scale to a large organization. The body of scientific knowledge is communicated mostly this way; Nullius in verba requires this sort of careful communication because it bans the leaps one might otherwise make.

PUBLIC POSITIONS

One way to model communication is as a function that takes objects of a certain type and tries to recreate them in another place. A telephone takes sound waves and attempts to recreate them elsewhere, whereas an instant messenger takes text strings and attempts to recreate them elsewhere. So conjugate to the communication methodology is ‘the thing that can be communicated by this methodology’. For example, you can also play music over the telephone, which is much harder to do over instant messenger (though this example perhaps betrays my age).
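In programming terms (a gloss of mine, not anything from the conversation literature), each communication method is a function parameterized by the type of thing it can recreate elsewhere:

```python
# A toy sketch: the 'communicable thing' is the type parameter of the channel.
from typing import Generic, TypeVar

T = TypeVar("T")

class Channel(Generic[T]):
    """A communication method: takes a T here, recreates a T there."""
    def transmit(self, message: T) -> T:
        return message  # stand-in for 'recreate elsewhere'

telephone: Channel[bytes] = Channel()  # sound waves (speech, or music)
messenger: Channel[str] = Channel()    # text strings

messenger.transmit("hello")            # fine: str is the conjugate type
# messenger.transmit(b"waveform")      # a type checker would flag this
```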

I’m going to define ‘public positions’ as the sort of beliefs that are amenable to communication through ‘formal communication’ (this style where you construct conclusions out of a chain of simple additions to the pre-existing context). The ‘public’ bit emphasizes that they’re optimized for justification or presentation; many things I believe don’t count as public positions because I can’t reach them through this sort of formal communication. For example, I find the smell of oranges highly unpleasant; I can communicate that fact about my preferences through formal communication but can’t communicate the preference itself through formal communication. The ‘positions’ bit emphasizes that they are defensible and legible; you can ‘know where I stand’ on a particular topic.

PRIVATE GUTS

I’m going to call a different sort of belief one’s ‘private guts.’ By ‘guts,’ I’m pointing towards the historical causes of a belief (like the particular bit of my biochemistry that causes me to dislike the smell of oranges), or to the sense of a ‘gut feeling.’ By private, I’m pointing towards the fact that this is often opaque or not shaped like something that’s communicable, rather than something deliberately hidden. If you’re familiar with Gendlin’s Focusing, ‘felt senses’ are an example of private guts.

What are private guts good for? As far as I can tell, lizards probably don’t have public positions, but they probably do have private guts. That suggests those guts are good for predicting things about the world and achieving desirable world states, as well as being one of the channels by which the desirability of world states is communicated inside a mind. It seems related to many sorts of ‘embodied knowledge’, like how to walk, which is not understood from first principles or in an abstract way, or habits, like adjective order in English. A neural network that ‘knows’ how to classify images of cats, but doesn’t know how it knows (or is ‘uninterpretable’), seems like an example of this. “Why is this image a cat?” -> “Well, because when you do lots of multiplication and addition and nonlinear transforms on pixel intensities, it ends up having a higher cat-number than dog-number.” This seems similar to gut senses that are difficult to articulate; “why do you think the election will go this way instead of that way?” -> “Well, because when you do lots of multiplication and addition and nonlinear transforms on environmental facts, it ends up having a higher A-number than B-number.” Private guts also seem to capture a category of amorphous visions; a startup can rarely write a formal proof that their project will succeed (generally, if they could, the company would already exist). The post-rigorous mathematician’s hunch falls into this category, which I’ll elaborate on later.
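A minimal sketch of the opaque computation in that dialogue, with random weights standing in for a trained network (so the ‘knowledge’ here is fake, but the shape of the computation is real):

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.random(64)             # a tiny stand-in 'image'
W1 = rng.standard_normal((16, 64))  # a trained network's weights would go here
W2 = rng.standard_normal((2, 16))

hidden = np.tanh(W1 @ pixels)       # multiplication, addition, nonlinear transform
cat_number, dog_number = W2 @ hidden
print("cat" if cat_number > dog_number else "dog")
```

Nothing in those weights is shaped like an articulable reason; the answer is just the trace of the whole computation.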

There are now two sorts of interesting communication to talk about: the process that coheres public positions and private guts within a single individual, and the process that communicates private guts across individuals.

COHERENCE, FOCUSING, AND SCIENCE

Much of CFAR’s focus, and that of the rationality project in general, has involved taking people who are extremely sophisticated at formal communication and developing their public positions, and getting them to notice and listen to their private guts. An example, originally from Julia Galef, is the ‘agenty duck.’ Imagine a duck whose head points in one direction (“I want to get a PhD!”) and whose feet are pointed in another (mysteriously, this duck never wants to work on their dissertation). Many responses to this sort of intrapersonal conflict seem maladaptive; much better for the duck to have head and feet pointed in the same direction, regardless of which direction that is. An individual running a coherence process that integrates the knowledge of the ‘head’ and ‘feet’, or the public positions and the private guts, will end up more knowledgeable and functional than an individual that ignores one to focus on the other.

Discovering the right coherence process is an ongoing project, and even if I knew it as a public position it would be too long for this post. So I will merely leave some pointers and move on. First, the private guts seem highly trainable by experience, especially through carefully graduated exposure. Second, Focusing and related techniques (like Internal Double Crux) seem quite effective at searching through the space of articulable / understandable sentences or concepts in order to find those that resonate with the private guts, drawing forth articulation from the inarticulate.

It’s also worth emphasizing the way in which science depends on such a coherence process. The ‘scientific method’ can be viewed in this fashion: hypotheses can be wildly constructed through any method, because hypotheses are simply proposals rather than truth-statements; only hypotheses that survive the filter of contact with reality through experimentation graduate to full facts, at which point their origin is irrelevant, be it induction, a lucky guess, or the unconscious mind processing something in a dream.

Similarly for mathematicians, according to Tao. The transition from pre-rigorous mathematics to rigorous mathematics corresponds to being able to see formal communication and public positions as types, and learning to trust them over persuasion and opinions. The transition from rigorous mathematics to post-rigorous mathematics corresponds to having trained one’s private guts such that they line up with the underlying mathematical reality well enough that they generate fruitful hypotheses.

Consider automatic theorem provers. One variety begins with a set of axioms, including the negated conclusion, and then gradually expands outwards, seeking to find a contradiction (and thus prove that the desired conclusion follows from the other axioms). Every step of the way proceeds according to the formal communication style, and every proposition in the proof state can be justified through tracing the history of combinations of propositions that led from the initial axioms to that proposition. But the process is unguided, reliant on the swiftness of computer logic to handle the massive explosion of propositions, almost all of which will be irrelevant to the final proof. The human mathematician instead has some amorphous sense of what the proof will look like, sketching a leaky argument that is not correct in the details, but which is correctable. Something interesting is going on in the process that generates correctable arguments, perhaps even more interesting than what’s going on in the processes that trivially generate correct arguments by generating all possible arguments and then filtering.
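For concreteness, here is a minimal resolution-style prover of the variety described above (a sketch, not any particular system): literals are (name, sign) pairs, clauses are frozensets of literals, the negated conclusion is added to the axioms, and the search blindly combines clauses until the empty clause (a contradiction) appears.

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses: cancel one complementary literal pair."""
    out = []
    for name, sign in c1:
        if (name, not sign) in c2:
            out.append((c1 - {(name, sign)}) | (c2 - {(name, not sign)}))
    return out

def proves_contradiction(clauses):
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for resolvent in resolve(c1, c2):
                if not resolvent:        # empty clause: contradiction found
                    return True
                new.add(resolvent)
        if new <= clauses:               # nothing new: search exhausted
            return False
        clauses |= new

# Prove q from {p, p implies q} by refuting {p, not-p or q, not-q}.
axioms = [frozenset({("p", True)}),
          frozenset({("p", False), ("q", True)}),   # p implies q
          frozenset({("q", False)})]                # the negated conclusion
print(proves_contradiction(axioms))  # True
```

Every step here is individually justified, but nothing steers the search; real provers add heuristics precisely because this blind expansion explodes.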

STARTUPS, DOUBLE CRUX, AND CIRCLING

Somehow, people are sometimes able to link up their private guts with each other. This is considerably more fraught than linking up public positions; positions are of a type that is optimized for verifiability and reconstruction, whereas internal experiences, in general, are not. Even if we’re eating the same cake, how would we even check that our internal experience of eating the cake is similar? What about something simpler, like seeing the same color?

While the abstraction of formal conversation is fairly simple, it’s still obvious that there are many skills related to correct argumentation. Similarly, there seems to be a whole family of skills related to syncing up private guts, and rather than teaching those skills, this section will again be a pointer to where those skills could be learned or trained. Learning how to reproduce music is related to learning how to participate in jam sessions, but the latter is a much closer fit to this sort of communication.

The experience of startups is that small teams are best, primarily because of the costs of coordinative communication. Startups are often chasing an amorphous, rapidly changing target; a team that’s able to quickly orient in the same direction and move together, or trust in the guts of each other rather than requiring elaborate proofs, will often perform better.

While Double Crux can generate a crisp tree of logical deductions from factual disagreements, it often instead exposes conflicting intuitions or interpretations. While formal communication involves a speaker optimizing over speech acts to jointly minimize surprise and maximize progress towards their goal, double crux instead involves both parties in the optimization, and often causes them to seek surprises. A crux is something that would change my mind, and I expose my cruxes in case you disagree with them, seeking to change my mind as quickly as possible.

Cruxes also respect the historical causes of beliefs; when I say “my crux for X is Y,” I am not saying that Y should cause you to believe X, only that not-Y would cause me to believe not-X. This weaker filter means many more statements are permissible, and my specific epistemic state can be addressed, rather than playing a minimax game by which all possible interlocutors would be pinned down by the truth. In Stalnakerian language, rather than needing to emit statements that are understandable and justifiable by the common context, I only need the weaker restriction that those statements are understandable in the common context and justifiable in my private context.
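A toy rendering of that asymmetry (my own illustration): a crux is a dependency in my private belief network, so learning not-Y flips my X without saying anything about yours.

```python
beliefs = {"X": True, "Y": True}
cruxes = {"X": ["Y"]}   # Y is my crux for X: not-Y would move me to not-X

def update(belief, value):
    beliefs[belief] = value
    for downstream, supports in cruxes.items():
        if belief in supports and not value:
            beliefs[downstream] = False  # my inference, binding only on me

update("Y", False)
print(beliefs)  # {'X': False, 'Y': False}
```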

Circling is also beyond the scope of this post, except as a pointer. It seems relevant as a potential avenue for deliberate practice in understanding and connecting to the subjective experience of others in a way that perhaps facilitates this sort of conversation.

CONCLUSION

As mentioned, this post is seeking to set out a typology, and perhaps crystallize some concepts. But why think these concepts are useful?

Primarily, because this seems related to the way in which rationalists differ from other communities with similar interests, or from their prior selves before becoming rationalists, in a manner analogous to the difference between post-rigorous mathematicians and rigorous mathematicians. Secondarily, because many contemporary issues of great practical importance require correctly guessing matters that are not settled. Financial examples are easy (“If I buy bitcoin now, will it be worth more when I sell it by more than a normal rate of economic return?”), but longevity interventions have a similar problem (“knowing whether or not this works for humans will take a human lifetime to figure out, but by that point it might be too late for me. Should I do it now?”), and it seems nearly impossible to reason correctly about existential risks without reliance on private guts (and thus on methods to tune and communicate those guts).

COMMENTS

I enjoyed this tremendously.

Small feedback: this post is a mix of fundamentals and good introductions to key concepts, but also seems to assume a very high level of knowledge of the norms, recent terminology, and developments in the rationality community.

I'm probably among the top 5-10% of time spent in the community (read all the sequences in real time when Eliezer was originally writing them at Overcoming Bias), but I'm certainly not in the top 1-2%, and there were a number of references I didn't get and therefore points I couldn't quite follow. Of course, I could dig up and cross-reference everything to work through them towards max understanding, but then, I just dropped in for 30 minutes here at LW before I'm about to get on a phone call.

If that's intentional, no problem. Just pointing it out so you can do a quick check on target audience. What I did get seemed really marvelous, at least a half-dozen very interesting ideas here.

Small feedback: this post is a mix of fundamentals and good introductions to key concepts, but also seems to assume a very high level of knowledge of the norms, recent terminology, and developments in the rationality community.

I'd also be interested in a list, or maybe some commentary on how well the links work as references. One of the issues I'm struggling with here is that even if someone has written up a good introduction to, say, internal double crux, there really are several inferential gaps between someone who hasn't done it and someone who has, and probably it's better understood after the reader has tried it a time or two. When IDC is fundamental to the point, there's nothing to be done; they need to read up on that prereq first. When IDC is helpful to understanding the point, there's a splitting effect; the person who already knows IDC gets their understanding strengthened but the person who doesn't know IDC gets distracted.

I do try to be careful about having links to things, in part because it helps me notice when there isn't an online description of the thing (which happened with "metacognitive blind spot," which is referenced in a draft that's going live later today).

Yeah, that's it — IDC, circling, etc are things I'm peripherally aware of but which I haven't tried and which aren't really contextualized; it felt sort of like, "If you know this, here's how they connect; if not, well, you could go find out and come back." I also got the feeling that 'Agenty Duck' was more significant than the short description of it, but I hadn't come across that before and felt like I was probably missing something.

I think the biggest issue, actually, wasn't the specific technical terms that I was aware I wasn't fully up to speed on, but rather with words like "coherence" — I wasn't sure if there was a formal definition/exploration being alluded to that I haven't heard, or if it's the plain English meaning. So my trust in my own ability to be reading the piece correctly really started to decrease at the end of the "Private Guts" section — I wasn't sure which words/phrases were technical terms that I wasn't up to speed on, and which were just plain English meaning that I could read and use natural assumptions to keep going.

Even then, still got a lot of it — just wanted to point it out since I liked the piece a lot. Also, it does make sense much of the time to write for an audience that's maximally informed to push the field forwards; this community and the world at large certainly benefits from both technical pieces that assume context as well as more "spell it out for you" materials.

When IDC is helpful to understanding the point, there's a splitting effect; the person who already knows IDC gets their understanding strengthened but the person who doesn't know IDC gets distracted.

Hmm. This makes me think about something like "Arbital style 'click here to learn the math-heavy explanation, click here to learn a more standard explanation'" thingy, except for "have you practiced this particular introspective skill?"

(I don't actually think that'll turn out to be a good idea for various reasons, but seemed a bit interesting)

I’d be curious to see a list of the concepts that weren’t familiar - I have a hard time noting which concepts are newer, and have some interest in getting some general CFAR-style updates more formally merged into the main lesswrong discourse.

Good call. Replied above to Vaniver on this point.

Seconding Kaj. In addition to what he said, I find this post especially helpful when thinking about group epistemics.

Having separate terms for these two positions has been particularly useful to me, because I have often felt reluctant to really lay out positions which were motivated by my private guts and for which I did not have a good public position. Reminding myself of the difference, and that it's legitimate to take your time in working out the position implied by your private guts, has made it feel more legitimate to even try.

I view my work with my multi-agent minds sequence as an extended attempt to work out a model for which I initially had strong intuitions but no legible elaboration, and I'm glad that I had this post encouraging me to give it a shot anyway.

Promoted to curated: I personally felt like I gained some useful abstractions and handles from this post, and also feel like I now have a canonical place to link to for the concepts explained in the post. I also think it is covering a topic that is quite key to discussion on LessWrong these days, and have some hope it will bridge some important inferential gaps.

I mostly agree with lionhearted's comment and think the post could have been better if you had invested more effort into minimizing dependencies and making it more self-contained, but I also think that doing that for this post is uniquely challenging.

I think the right way to look at this is that Public Positions are the output of top-down processes, and Private Guts are the output of bottom-up processes. Private Guts are obtained by taking a heap of concrete data and abstracting something out of it, for example taking the sense data obtained from smelling an orange and abstracting out a quality you find unpleasant. The reason it's hard to communicate Private Guts, and the reason they are Private, is that it's unlikely two people will be running their bottom-up processes on the same set of data (sense data, in the case of smelling an orange). This is why art is so alluring: it presents us with a distilled set of (usually visual) data from which we can use bottom-up processes to abstract something that we couldn't easily communicate to another person verbally, let alone identify ourselves amid the noise of everyday experience, whatever it is that the artwork is telling us.

Top-down reasoning is more about having a goal (highly abstract) and working down towards concrete implementations of how to achieve that goal (e.g. prove a hypothesis). The reason the outputs of top-down processes become our Public Positions is partially that they don't take much data to communicate, but more so that they are not dependent on the contextual dataset from which the Public Position originated.

You don't need a generative understanding of something (e.g. what makes a tiger a tiger?) to be able to infer its traits from a data set like an image (I've seen other things like this and they were allegedly tigers, so this is a tiger). Knowing something is 'like' something else is useful enough for us, and is why we have bottom-up processes. But this does not lend itself well to communication, because the bandwidth of words is so low. Lizards don't have public positions because their brains are mostly purposed towards bottom-up processing. Top-down processing is mostly the domain of humans, and no doubt our language developed alongside that processing, because concepts originating in the abstract are easy to communicate.

The reason humans are currently better than algorithms at proving theorems is that we can switch between top-down and bottom-up reasoning as we please. We can start with a top-down expression of the theorem we're trying to prove, and take a few valid steps in the direction of proving it, and once we can't see how to go any further down, we switch to bottom-up processes where we look at the stuff we've written down so far and say 'This part here has a likeness to some other things I've seen, maybe the same logic that worked with those other things will work with this thing as well'. And so on.

I think this is the best model for viewing Public Positions vs Private Guts.

I was inspired to write my own blog post to better articulate what I'm getting at in this comment :)

https://medium.com/@couldbewrong/reconciling-left-and-right-from-the-bottom-up-3227eb1c0b47

I wonder how much of the effectiveness of small groups is due to being able to see information people don't intend to communicate. Under a lot of pressure, you get to see how the other person reacts, and whether they act differently afterward.

If their behavior changes, it would be reasonable to infer some kind of gut-level-crux was at work. This information is not otherwise available.

I wonder how much of the effectiveness of small groups is due to being able to see information people don't *intend* to communicate.

(emphasis added)

Or don't know how to communicate.