Tl;dr:

I've noticed a dichotomy between "thinking in toolboxes" and "thinking in laws".

The toolbox style of thinking says it's important to have a big bag of tools that you can adapt to context and circumstance; people who think very toolboxly tend to suspect that anyone who goes talking of a single optimal way is just ignorant of the uses of the other tools.

The lawful style of thinking, done correctly, distinguishes between descriptive truths, normative ideals, and prescriptive ideals. It may talk about certain paths being optimal, even if there's no executable-in-practice algorithm that yields the optimal path. It considers truths that are not tools.

Within nearly-Euclidean mazes, the triangle inequality - that the path AC is never spatially longer than the path ABC - is always true but only sometimes useful. The triangle inequality has the prescriptive implication that if you know that one path choice will travel ABC and one path will travel AC, and if the only pragmatic path-merit you care about is going the minimum spatial distance (rather than say avoiding stairs because somebody in the party is in a wheelchair), then you should pick the route AC. But the triangle inequality goes on governing Euclidean mazes whether or not you know which path is which, and whether or not you need to avoid stairs.
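
Concretely, the descriptive part is easy to check - a minimal sketch with made-up coordinates, which the argument doesn't depend on:

```python
import math

def dist(p, q):
    """Euclidean distance between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Hypothetical waypoints, chosen only for illustration.
A, B, C = (0, 0), (3, 4), (6, 0)

direct = dist(A, C)               # the path AC
detour = dist(A, B) + dist(B, C)  # the path ABC

# The triangle inequality guarantees direct <= detour for any choice of
# A, B, C - whether or not you know which path is which, and whether or
# not anyone in the party is in a wheelchair.
assert direct <= detour
print(direct, detour)  # 6.0 10.0
```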

Toolbox thinkers may be extremely suspicious of this claim of universal lawfulness if it is explained less than perfectly, because it sounds to them like "Throw away all the other tools in your toolbox! All you need to know is Euclidean geometry, and you can always find the shortest path through any maze, which in turn is always the best path."

If you think that's an unrealistic depiction of a misunderstanding that would never happen in reality, keep reading.


Here's a recent conversation from Twitter which I'd consider a nearly perfect illustration of the toolbox-vs.-laws dichotomy:

David Chapman: "By rationalism, I mean any claim that there is an ultimate criterion according to which thinking and acting could be judged to be correct or optimal... Under this definition, 'rationalism' must go beyond 'systematic methods are often useful, hooray!'... A rationalism claims there is one weird trick to correct thinking, which guarantees an optimal result. (Some rationalisms specify the trick; others insist there must be one, but that it is not currently knowable.) A rationalism makes strongly normative judgments: everyone ought to think that way."
Graham Rowe: "Is it fair to say that rationalists see the world entirely through rationality while meta-rationalists look at rationality as one of many tools (that they can use fluently and appropriately) to be used in service of a broader purpose?"
David Chapman: "More-or-less, I think! Although I don’t think rationalists do see the world entirely through rationality, they just say they think they ought to."
Julia Galef: "I don't think the 'one weird trick' description is accurate. It's more like: there's one correct normative model in theory, which cannot possibly be approximated by a single rule in practice, but we can look for collections of 'tricks' that seem like they bring us closer to the normative model. e.g., 'On the margin, taking more small risks is likely to increase your EV' is one example."
David Chapman: "The element that I’d call clearly meta-rational is understanding that rationality is not one well-defined thing but a bag of tricks that are more-or-less applicable in different situations."

Julia then quoted a paper mentioning "The best prescription for human reasoning is not necessarily to always use the normative model to govern one's thinking." To which Chapman replied:

"Baron’s distinction between 'normative' and 'prescriptive' is one I haven’t seen before. That seems useful and maybe key. OTOH, if we’re looking for a disagreement crux, it might be whether a normative theory that can’t be achieved, even in principle, is a good thing."

I'm now going to badly stereotype this conversation in the form I feel like I've seen it many times previously, including e.g. in the discussion of p-values and frequentist statistics. On this stereotypical depiction, there is a dichotomy between the thinking of Msr. Toolbox and Msr. Lawful that goes like this:

Msr. Toolbox: "It's important to know how to use a broad variety of statistical tools and adapt them to context. The many ways of calculating p-values form one broad family of tools; any particular tool in the set has good uses and bad uses, depending on context and what exactly you do. Using likelihood ratios is an interesting statistical technique, and I'm sure it has its good uses in the right contexts. But it would be very surprising if that one weird trick was the best calculation to do in every paper and every circumstance. If you claim it is the universal best way, then I suspect you of blind idealism, insensitivity to context and nuance, ignorance of all the other tools in the toolbox, the sheer folly of callow youth. You only have a hammer and no real-world experience using screwdrivers, so you claim everything is a nail."

Msr. Lawful: "On complex problems we may not be able to compute exact Bayesian updates, but the math still describes the optimal update, in the same way that a Carnot cycle describes a thermodynamically ideal engine even if you can't build one. You are unlikely to find a superior viewpoint that makes some other update even more optimal than the Bayesian update, not without doing a great deal of fundamental math research and maybe not at all. We didn't choose that formalism arbitrarily! We have a very broad variety of coherence theorems all spotlighting the same central structure of probability theory, saying variations of 'If your behavior cannot be viewed as coherent with probability theory in sense X, you must be executing a dominated strategy and shooting off your foot in sense Y'."

I currently suspect that when Msr. Law talks like this, Msr. Toolbox hears "I prescribe to you the following recipe for your behavior, the Bayesian Update, which you ought to execute in every kind of circumstance."

This also appears to me to frequently turn into one of those awful durable forms of misunderstanding: Msr. Toolbox doesn't see what you could possibly be telling somebody to do with a "good" or "ideal" algorithm besides executing that algorithm.

It would not surprise me if there's a symmetrical form of durable misunderstanding where a Lawist has trouble processing a Toolboxer's disclaimer: "No, you don't understand, I am not trying to describe the one true perfect optimal algorithm here, I'm trying to describe a context-sensitive tool that is sometimes useful in real life." Msr. Law may not see what you could possibly be doing with a supposedly "prudent" or "actionable" recipe besides saying that it's the correct answer, and may feel very suspicious of somebody trying to say everyone should use an answer while disclaiming that they don't really think it's true. Surely this is just the setup for some absurd motte-and-bailey where we claim something is the normative answer, and then as soon as we're challenged we walk back and claim it was 'just one tool in the toolbox'.

And it's not like those callow youths the Toolboxer is trying to lecture don't actually exist. The world is full of people who think they have the One True Recipe (without having a normative ideal by which to prove that this is indeed the optimal recipe given their preferences, knowledge, and available computing power).

The only way I see to resolve this confusion is by grasping a certain particular abstraction and distinction - as a more Lawfully inclined person might put it. Or by being able to deploy both kinds of thinking, depending on context - as a more Toolbox-inclined person might put it.

It may be that none of my readers need the lecture at this point, but I've learned to be cautious about that sort of thing, so I'll walk through the difference anyways.


Every traversable maze has a spatially shortest path; or if we are to be precise in our claims but not our measurements, a set of spatially shortest-ish paths that are all nearly the same distance.

We may perhaps call this spatially shortest path the "best" or "ideal" or "optimal" path through the maze, if we think our preference for walking shorter distances is the only pragmatically important merit of a path.

That there exists some shortest path, which may even be optimal according to our preferences, doesn't mean that you can come to an intersection in the maze and "just choose whichever branch is on the shortest path".

And the fact that you cannot, at an intersection, just choose the shorter path, doesn't mean that the concepts of distance and greater or lesser distance aren't useful.

It might even be that the maze-owner could truthfully tell you, "By the way, this right-hand turn here keeps you on the shortest path," and yet you'd still be wiser to take the left-hand turn... because you're following the left-hand rule. Where the left-hand rule is to keep your left hand on the wall and go on walking, which works for not getting lost inside a maze whose exit is connected to the start by walls. It's a good rule for agents with sharply bounded memories who can't always remember their paths exactly.
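
As a sketch of what the left-hand rule looks like as a procedure, here it is on a toy grid maze whose exit is an opening in the outer wall (the grid encoding and the function are only an illustrative sketch, not anything the argument depends on):

```python
# A toy grid maze: '#' is wall, '.' is open, 'S' is the start, 'E' the exit.
MAZE = [
    "#########",
    "#S..#...#",
    "##.##.#.#",
    "#.....#.#",
    "#.###.#.#",
    "#...#...E",
    "#########",
]

def left_hand_walk(maze):
    """Walk from 'S' to 'E' by the left-hand rule; return the steps taken."""
    rows = [list(r) for r in maze]
    start = next((r, c) for r, row in enumerate(rows)
                 for c, ch in enumerate(row) if ch == "S")
    exit_ = next((r, c) for r, row in enumerate(rows)
                 for c, ch in enumerate(row) if ch == "E")
    dirs = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # up, right, down, left (clockwise)
    pos, facing, steps = start, 1, 0           # start facing right
    while pos != exit_:
        # Prefer the leftmost open turn: left, then straight, then right,
        # then back - which is what keeping your left hand on the wall does.
        for turn in (-1, 0, 1, 2):
            d = (facing + turn) % 4
            nr, nc = pos[0] + dirs[d][0], pos[1] + dirs[d][1]
            if rows[nr][nc] != "#":
                pos, facing = (nr, nc), d
                break
        steps += 1
    return steps
```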

And if you're using the left-hand rule it is a terrible, terrible idea to jump walls and make a different turn just once, even if that looks like a great idea at the time, because that is an excellent way to get stuck traversing a disconnected island of connected walls inside the labyrinth.

So making the left-hand turn leads you to walk the shortest expected distance, relative to the other rules you're using. Making the right-hand turn instead, even if it seemed locally smart, might have you traversing an infinite distance instead.

But then you may not be on the shortest path, even though you are following the recommendations of the wisest and most prudent rule given your current resources. By contemplating the difference, you know that there is in principle room for improvement. Maybe that inspires you to write a maze-mapping, step-counting cellphone app that lets you get to the exit faster than the left-hand rule.
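
The hypothetical app is, in sketch form, just a breadth-first search over the same kind of grid; the distance it reports is a fact about the maze, not about whichever walking rule you happened to be using (again, a toy illustration reusing the MAZE grid above):

```python
from collections import deque

def shortest_path_length(maze):
    """Breadth-first search: length in grid steps of a spatially shortest
    path from 'S' to 'E', or None if no path exists."""
    rows = [list(r) for r in maze]
    start = next((r, c) for r, row in enumerate(rows)
                 for c, ch in enumerate(row) if ch == "S")
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (r, c), d = frontier.popleft()
        if rows[r][c] == "E":
            return d
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(rows) and 0 <= nc < len(rows[nr])
                    and rows[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), d + 1))
    return None
```

On the toy grid above, for instance, left_hand_walk(MAZE) takes 17 steps while shortest_path_length(MAZE) reports 11: the recipe was perfectly reasonable, and the shorter path existed anyway.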

And the reason that there's a better recipe isn't that "no recipe is perfect", it isn't that there exists an infinite sequence of ever-better roads. If the maze-owner gave you a map with the shortest path drawn in a line, you could walk the true shortest path and there wouldn't be any shorter path than that.

Shortness is a property of paths; a tendency to produce shorter paths is a property of recipes. What makes a phone app an improvement is not that the app is adhering more neatly to some ideal sequence of left and right turns, it's that the path is shorter in a way that can be defined independently of the app's algorithms.

Once you can admit a path can be "shorter" in a way that abstracts away from the walker - not better, which does depend on the walker, but shorter - it's hard not to admit the notion of there being a shortest path.

I mean, I suppose you could try very hard to never talk about a shortest path and only talk about alternative recipes that yield shorter paths. You could diligently make sure to never imagine this shorterness as a kind of decreased distance-in-performance-space from any 'shortest path'. You could make very sure that in your consideration of new recipes, you maintain your ideological purity as a toolboxer by only ever asking about laws that govern which of two paths are shorter, and never getting any inspiration from any kind of law that governs which path is shortest.

In which case you would have diligently eliminated a valuable conceptual tool from your toolbox. You would have carefully made sure that you always had to take longer roads to those mental destinations that can be reached the fastest by contemplating properties of ideal solutions, or distance from ideal solutions.

But why? Why would you?


I think at this point the Toolbox reply - though I'm not sure I could pass its Ideological Turing Test - might be that idealistic thinking has a great trap and rottenness at its heart.

It might say:

Somebody who doesn't wisely shut down all this thinking about "shortest paths" in favor of treating the left-hand rule as a good tool for some mazes - someone who begins to imagine some unreachable ideal of perfection, instead of a series of apps that find shorter paths most of the time - will surely, in practice, begin to confuse the notion of the left-hand rule, or their other current recipe, with the shortest path.

After all, nobody can see this "shortest path", and it's supposedly a virtuous thing. So isn't it an inevitable consequence of human nature that people will start to use that idea as praise for their current recipes?

And also in the real world, surely Msr. Law will inevitably forget the extra premise involved in the step from "spatially shortest path" to "best path" - the contextual requirement that our only important preference was for shorter spatial distances so defined. Msr. Law will insist that somebody in a wheelchair go down the "best path" of the maze, even though that path involves going up and down a flight of stairs.

And Msr. Law will be unable to mentally deal with a helicopter overflying the maze that violates their ontology relative to which "the shortest path" was defined.

And it will also never occur to Msr. Law to pedal around the maze in a bicycle, which is a much easier trip even if it's not the shortest spatial distance.

And Msr. Law will assume that the behavior of mortgage-backed securities is independently Gaussian-random because the math is neater that way, and then derive a definite theorem showing a top-level tranche of MBSs will almost never default, thus bringing down their trading firm -

To all of which I can only reply: "Well, yes, that happens some of the time, and there are contextual occasions where it is a useful tool to lecture Msr. Law on the importance of having a diverse toolbox. But it is not a universal truth that everyone works like that and needs to be prescribed the same lecture! You need to be sensitive to context here!"

There are definitely versions of Msr. Law who think the universal generalization they've been told about is a One Weird Trick That Is All You Need To Know; people who could in fact benefit from a lecture on the importance of diverse toolboxes.

There are also extreme toolbox thinkers who could benefit from a lecture on the importance of thinking that considers unreachable ideals, and how to get closer to them, and the obstacles that are moving us away from them.

Not to commit the fallacy of the golden mean or anything, but the two viewpoints are both metatools in the metatoolbox, as it were. You're better off if you can use both in ways that depend on context and circumstance, rather than insisting that only toolbox reasoning is the universally best context-insensitive metaway to think.

If that's not putting the point too sharply.

Thinking in terms of Law is often useful. You just have to be careful to understand the context and the caveats: when is the right time to think in Law, how to think in Law, and what type of problems call for Lawful thinking.

Which is not the same as saying that every Law has exceptions. Thermodynamics still holds even at times, like playing tennis, when it's not a good time to be thinking about thermodynamics. If you thought that every Law had exceptions because it wasn't always useful to think about that Law, you'd be rejecting the metatool of Law entirely, and thinking in toolbox terms at a time when it wasn't useful to do so.

Are there Laws of optimal thought governing the optimal way to contextualize and caveat, which might be helpful for finding good executable recipes? The naturally Lawful thinker will immediately suspect so, even if they don't know what those Laws are. Not knowing these Laws won't panic a healthy Lawful thinker. Instead they'll proceed to look around for useful yet chaotic-seeming prescriptions to use now instead of later - without mistaking those chaotic prescriptions for Laws, or treating the chaos of their current recipes as proof that there's no good normative ideals to be had.

Indeed, it can sometimes be useful to contemplate, in detail, that there are probably Laws you don't know. But that's a more advanced metatool in the metatoolbox, useful in narrower ways and in fewer contexts having to do with the invention of new Laws as well as new recipes, and I'd rather not strain Msr. Toolbox's credulity any further.


To close out, one recipe I'd prescribe to reduce confusion in the toolbox-inclined is to try to see the Laws as descriptive statements, rather than being any kind of normative ideal at all.

The idea that there's a shortest path through the maze isn't a "normative ideal" instead of a "prescriptive ideal", it's just true. Once you define distance there is in fact a shortest path through the maze.

The triangle inequality might sound very close to a prescriptive rule that you ought to walk along AC instead of ABC. But actually the prescriptive rule is only if you want to walk shorter distances ceteris paribus, only if you know which turn is which, only if you're not trying to avoid stairs, and only if you're not taking an even faster route by getting on a bicycle and riding outside the whole maze to the exit. The prescriptive rule "try walking along AC" isn't the same as the triangle inequality itself, which goes on being true of spatial distances in Euclidean or nearly-Euclidean geometries - whether or not you know, whether or not you care, whether or not it's useful to think about at any given moment, even if you own a bicycle.

The statement that you can't have a heat-pressure engine more efficient than a Carnot cycle isn't about gathering in a cultish circle to sing praises of the Carnot cycle as being the ideally best possible kind of engine. It's just a true fact of thermodynamics. This true fact might helpfully suggest that you think about obstacles to Carnot-ness as possible places to improve your engine - say, that you should try to prevent heat loss from the combustion chamber, since heat loss prevents an adiabatic cycle. But even at times when it's not in fact useful to think about Carnot cycles, it doesn't mean your heat engine is allowed on those occasions to perform better than a Carnot engine.
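
For concreteness, the bound itself is one line of arithmetic - a sketch with hypothetical reservoir temperatures:

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum fraction of input heat convertible to work between
    reservoirs at absolute temperatures t_hot and t_cold (kelvin)."""
    return 1.0 - t_cold / t_hot

# Hypothetical temperatures, for illustration only.
print(carnot_efficiency(t_hot=900.0, t_cold=300.0))
# 0.666...: no engine operating between these reservoirs can do better,
# whether or not its designer was thinking about Carnot cycles that day.
```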

You can't extract any more evidence from an observation than is given by its likelihood ratio. You could see this as being true because Bayesian updating is an often-unreachable normative ideal of reasoning, so therefore nobody can do better than it. But I'd call it a deeper level of understanding to see it as a law saying that you can't get a higher expected score by making any different update. This is a generalization that holds over both Bayes-inspired recipes and non-Bayes-inspired recipes. If you want to assign higher probability to the correct hypothesis, it's a short step from that preference to regarding Bayesian updates as a normative ideal; but the idea begins life as a descriptive assertion, not as a normative assertion.
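
In odds form, with made-up numbers, the update looks like this; the likelihood ratio is all that the observation contributes, whatever recipe produced it:

```python
def bayes_update(prior_prob, likelihood_ratio):
    """Posterior probability of a hypothesis, via
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Made-up numbers: a hypothesis at 20% prior, and an observation four times
# as likely if the hypothesis is true as if it is false.
print(bayes_update(0.20, 4.0))  # 0.5
```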

It's a relatively shallow understanding of the coherence theorems to say "Well, they show that if you don't use probabilities and expected utilities you'll be incoherent, which is bad, so you shouldn't do that." It's a deeper understanding to state, "If you do something that is incoherent in way X, it will correspond to a dominated strategy in fashion Y. This is a universal generalization that is true about every tool in the statistical toolbox, whether or not they are in fact coherent, whether or not you personally prefer to avoid dominated strategies, whether or not you have the computing power to do any better, even if you own a bicycle."
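
A toy instance of the "incoherent in way X, dominated in fashion Y" pattern, with invented numbers: if your betting prices for an event and its complement sum to more than 1, someone can sell you both bets and you lose no matter what happens.

```python
# Invented prices for tickets that each pay 1 if their outcome occurs.
price_rain = 0.7      # you'd pay 0.7 for "pays 1 if it rains"
price_no_rain = 0.6   # ...and 0.6 for "pays 1 if it doesn't"

cost = price_rain + price_no_rain  # 1.3 paid up front
payout = 1.0                       # exactly one ticket pays, whatever happens

for outcome in ("rain", "no rain"):
    print(outcome, round(payout - cost, 2))  # -0.3 either way: a sure loss
```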

I suppose that when it comes to the likes of Fun Theory, there isn't any deeper fact of nature underlying the "normative ideal" of a eudaimonic universe. But in simpler matters of math and science, a "normative ideal" like the Carnot cycle or Bayesian decision theory is almost always the manifestation of some simpler fact that is so closely related to something we want that we are tempted to take one step to the right and view it as a "normative ideal". If you're allergic to normative ideals, maybe a helpful course would be to discard the view of whatever-it-is as a normative ideal and try to understand it as a fact.

But that is a more advanced state of understanding than trying to understand what is better or best. If you're not allergic to ideals, then it's okay to try to understand why Bayesian updates are often-unreachable normative ideals, before you try to understand how they're just there.

Comments (49)

Given the discussion of the mazes, Feynman's speech about cargo-cult science seems relevant:

All experiments in psychology are not of this type, however. For example, there have been many experiments running rats through all kinds of mazes, and so on—with little clear result. But in 1937 a man named Young did a very interesting one. He had a long corridor with doors all along one side where the rats came in, and doors along the other side where the food was. He wanted to see if he could train the rats to go in at the third door down from wherever he started them off. No. The rats went immediately to the door where the food had been the time before.
The question was, how did the rats know, because the corridor was so beautifully built and so uniform, that this was the same door as before? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell. Then he realized the rats might be able to tell by seeing the lights and the arrangement in the laboratory like any commonsense person. So he covered the corridor, and, still the rats could tell.
He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.
Now, from a scientific standpoint, that is an A‑Number‑1 experiment. That is the experiment that makes rat‑running experiments sensible, because it uncovers the clues that the rat is really using—not what you think it’s using. And that is the experiment that tells exactly what conditions you have to use in order to be careful and control everything in an experiment with rat‑running.
I looked into the subsequent history of this research. The subsequent experiment, and the one after that, never referred to Mr. Young. They never used any of his criteria of putting the corridor on sand, or being very careful. They just went right on running rats in the same old way, and paid no attention to the great discoveries of Mr. Young, and his papers are not referred to, because he didn’t discover anything about the rats. In fact, he discovered all the things you have to do to discover something about rats. But not paying attention to experiments like that is a characteristic of Cargo Cult Science.

For Feynman, those rat psychologists, who believed in idealized mazes where they could just assume that "the laws" hold, were Cargo-Cultists.

This doesn't pass my ITT for anti-law-thinking. The step where law thinking goes wrong is when it assumes that there exists a map that is the territory, and thus systematically underestimates the discrepancies involved in (for instance) optimizing for min Euclidean distance.

I realize that this post addresses that directly, but then it spends a lot of energy on something else which isn't the real problem in my book.

I'm finding this comment hard to parse for some reason. In particular, I'm not sure I understand the phrase "map that is the territory." On my understanding of those terms (which I thought was the usual one, but may not be), it's a category error to think of the territory as just another map, even if a particularly special one; the territory is qualitatively distinct from any map, it's a different kind of thing. So "a map that is the territory" doesn't parse, because the territory isn't a map, it's the territory. Are you using these terms in a different sense, or intentionally/actively disagreeing with this framing [EDIT: (e.g., claiming that it's "just maps all the way down")], or something else? Also, usually a discrepancy is between two things A and B, so I'm having trouble understanding what you mean by "discrepancies involved in (for instance) optimizing for min Euclidean distance" without a specification of what the discrepancies are between.

I'm saying that law thinking can seem to forget that the map (model) will never be the territory. The real world has real invariants but these are not simply reproduced in reasonable utility functions.

Ah, okay, I think I understand now. That reminds me of Kant's noumena-phenomena distinction, where the territory is the noumena, and you're saying we will never have access to the territory/noumena directly, only various maps (phenomena), and none of those maps can ever perfectly correspond to the territory. And Law thinking sometimes forgets that we can never have access to the territory-as-it-is. Is that about right?

The words "universal best way" suggest that there's something that's true not just for particular maps but that it's true in a more general way.

Curated.

I like this post for being a clear, comprehensive reference post for whenever people-in-the-future end up making one of the major error modes depicted here (i.e. either assuming they've found the One True Law, or not understanding the "there's ultimately one reality that our maps/recipes are trying to match").

I also enjoyed its brevity-relative-to-other-Eliezer-posts. :)

I've used this analogy quite a few times, and also got a good amount of mileage out of categorizing my own mental processes according to this classification.

BTW, this is off-topic, but since you link to that Arbital page, and I don't know where else to comment on that -- the theorem you're looking for, that grounds both utility and probability simultaneously, in a non-circular fashion, and without any assumption baked in that R is the correct system of numbers to use[0], is not the complete class theorem. It is Savage's theorem.

[0]Yes, Savage's theorem includes an Archimedean assumption, which you could argue is the same thing as baking in R; but I'd say it's not, because this Archimedean assumption is a direct statement about the agent's preferences, whereas it's not immediately clear what picking R as your number system means about the agent's preferences (and I suspect that most people have used R more on the basis of convenience/familiarity rather than because they recognized the necessity of an Archimedean condition).

Savage's Theorem isn't going to convince anyone who doesn't start out believing that preference ought to be a total preorder. Coherence theorems are talking to anyone who starts out believing that they'd rather have more apples.

I can't make sense of this comment.

If one is talking about one's preferences over number of apples, then the statement that it is a total preorder, is a weaker statement than the statement that more is better. (Also, you know, real number assumptions all over the place.) If one is talking about preferences not just over number of apples but in general, then even so it seems to me that the complete class theorem seems to be making some very strong assumptions, much stronger than the assumption of a total preorder! (Again, look at all those real number assumptions.)

Moreover it's not even clear to me that the complete class theorem does what you claim it does, like, at all. Like it starts out assuming the notion of probability. How can it ground probability when it starts out assuming it? And perhaps I'm misunderstanding, but are the "risk functions" it discusses not in utility? It sure looks like expected values of them are being taken with the intent that smaller is better (this seems to be implicit in the definition of r(θ), that r(θ) is measured by expected value when T isn't a pure strategy). Is that mistaken?

(Possible source of error here: I can't seem to find a statement of the complete class theorem that fits neatly into Savage/VNM/Cox/etc-style formalism and I'm having some trouble translating it to such, so I may be misunderstanding. The most sense I'm making of it at the moment is that it's something like your examples for why probabilities must sum to one -- i.e., it's saying, if you already believe in utility, and something almost like probability, it must actually be probability. Is that accurate, or am I off?)

(Edit: Also if you're taking issue with the preorder assumption, does this mean that you no longer consider VNM to be a good grounding of the notion of utility for those who already accept the idea of probability?)

I'm pretty certain that I can pass the ITT for both law and tool thinking, but it's complicated, because underneath Tool Thinking is the belief that there are things about thinking that need to be concealed, which implies that it will generally oppose an accurate account of its actual beliefs. In Law Thinking, every truth reveals every other truth, and concealment is impossible in theory and impractical in the long term in practice. If things need to be concealed, it's important to oppose the possibility of inference.

underneath Tool Thinking is the belief that there are things about thinking that need to be concealed

Why do you think this? By "things about thinking", do you mean like, the criteria for deciding which tool to use? If so:

I think people do sometimes deploy toolbox-thinking in cases where they want to conceal why exactly they chose one tool over another. Lots of ethical debates go this way. But more often, it's about complexity rather than concealment: we choose our tools based on a myriad of small, nebulous, overlapping patterns, learned from diverse sources, some difficult to express in words, and the collective mass of them too large to communicate.

Thank you for addressing this! I also had a feeling that there was some fundamental misunderstanding here, but I couldn't express it clearly.

From reading David Chapman's blog, my impression is that he presents his philosophy of meaning as a solution to a dilemma between two opposing extremes. One extreme is nihilism, which refuses to debate meaning, because it insists there is simply no such thing. Another extreme is people who believe in a simplistic solution; two archetypes in this category are a religious fanatic and a Vulcan rationalist. (I am using my own words here.)

The proposed solution is something I would call "reflective compartmentalization", i.e. considering various aspects of your life separately and finding a local meaning in each of them; and being aware that there is no overarching story; and being okay with that. In other words, there is no global meaning, but there are local meanings; the true meaning of your life are the local meanings.

Then this somehow turns into an argument about epistemology -- as if finding separate meanings in multiple separate contexts requires using multiple maps, and insisting that there is one territory implies that there is one objective global meaning of life. I am not sure that I understand this part; this is simply how it sounds to me.

Then, Chapman notices an analogy between his model and Kegan's model of moral development. The Vulcan rationalist (or the religious fanatic) corresponds to level 4, which is about having a strong identity. And Chapman's philosophy of "many local meanings" and "many maps" corresponds to level 5, which Kegan describes as "interpenetration of systems"; also in both views this is the highest place in the hierarchy. -- This makes me a bit worried about affective spirals: the best way to find meaning in life happens to be the best epistemology and also happens to make you most moral and capable of genuine love. (On the other hand, one could easily make a similar accusation against LW.) Oh, and it's also somehow connected with the best religion, i.e. Buddhism.

Now, maybe I just missed something, but I don't remember reading David Chapman mentioning Less Wrong specifically. So I don't understand his opinions per se to be attacks against rationality as defined by LW. (I think it's more about those of his fans who also happen to be familiar with LW jumping to the conclusion: "Chapman totally pwned LW, rationality is debunked, all the cool kids are meta-rationalists now!") He seems to be familiar with Vulcan rationality, which is a popular trope in our culture, and let's admit honestly that the trope is based on real behavior of some real people. So I don't blame him for using Vulcan rationality as the prototype of "rationality". I imagine (perhaps incorrectly) that he would agree with some parts of LW common knowledge -- such as complexity of human value -- and consider them an improvement over the Vulcan rationality. He just seems to insist that the true meaning of "rationality" is the Vulcan rationality, and frankly most of the world would probably agree with him, and this is the opponent he really debates.

The crux of the disagreement seems to be whether the belief in a territory is incompatible with having multiple maps and finding them useful, and whether trying to be rational (in the LW sense) is just another irrational and limiting identity. (And the related dictionary debate about whether the true meaning of "rationality" is 'winning at life', or the true meaning of "rationality" is 'Vulcan rationality' and the true meaning of "meta-rationality" is 'winning at life'.)

My opinion is that when debating LW, Chapman's perspective is partially kicking at an open door ("being a Vulcan rationalist is stupid" "thanks, we know already"; "humans are complex" "indeed, I bet there is a lesson about it somewhere in the Sequences") and partially... what was addressed in this article.

EDIT: About Kegan... I didn't think about his model deeply, but I would also guess he was addressing the Vulcan rationality. (And the idea of there being one territory seems to be generally unwelcome in social sciences.)

Now, maybe I just missed something, but I don't remember reading David Chapman mentioning Less Wrong specifically. So I don't understand his opinions per se to be attacks against rationality as defined by LW.
[...]
I imagine (perhaps incorrectly) that he would agree with some parts of LW common knowledge
[...]
He just seems to insist that the true meaning of "rationality" is the Vulcan rationality

Your understanding seems to match what he says in these tweets:

Important: by “rationalists,” I do NOT primarily mean the LW-derived community. I’m pointing to a whole history going back to the Ancient Greeks, and whose most prototypical example is early-20th-century logical positivism.

https://twitter.com/Meaningness/status/993623171411529728

I think that much of the best work of the LW-derived community is “meta-rational” as I define that. The book is supposed to explain why that is a good thing.

https://twitter.com/Meaningness/status/993623388806496256

While David Chapman wasn't one of the main LessWrong contributors, he has 432 LessWrong karma. His first longer post engaging with the LessWrong philosophy is https://meaningness.com/metablog/bayesianism-updating which starts by referencing a video of Julia Galef.

If you read the comment thread of that post, you will find many familiar LessWrong names, and Scott wrote an article on his blog in response.

Later Chapman, who has an MIT AI PhD, wrote the more technical Probability theory does not extend logic, in which he shows that the core claim made in the Sequences - that probability theory is an extension of logic - is wrong.

If we step back it's worth noting that you find the term Bayesianism a lot less today on LessWrong than five years ago when Chapman wrote the above posts. CFAR dropped their class that teaches Bayes rule (against Eliezer's wishes) and instead teaches double crux which often doesn't contain any thinking about probabilities.

Valentine, who's at the head of CFAR curriculum development, was more influential on how the "LessWrong ideology" developed in the last five years than Eliezer.

I think there's a good chance that Julia Galef would cringe a bit when she today looks back on that Bayes rule video.

The crux of the disagreement seems to be whether the belief in a territory is incompatible with having multiple maps and finding them useful, and whether trying to be rational (in the LW sense) is just another irrational and limiting identity.

That doesn't sound to me like you pass the Ideological Turing Test. I'm not even sure whether Eliezer would argue that probability is an inherent feature of the territory.

Later Chapman, who has an MIT AI PhD, wrote the more technical Probability theory does not extend logic, in which he shows that the core claim made in the Sequences - that probability theory is an extension of logic - is wrong.

As far as I can tell, his piece is mistaken. I'm going to copypaste what I've written about it elsewhere:

So I looked at Chapman’s “Probability theory does not extend logic” and some things aren’t making sense. He claims that probability theory does extend propositional logic, but not predicate logic.

But if we assume a countable universe, probability will work just as well with universals and existentials as it will with conjunctions and disjunctions. Even without that assumption, well, a universal is essentially an infinite conjunction, and an existential statement is essentially an infinite disjunction. It would be strange that this case should fail.

His more specific example is: Say, for some x, we gain evidence for "There exist distinct y and y' with R(x,y) and R(x,y')", and update its probability accordingly; how should we update our probability for "For all x, there exists a unique y with R(x,y)"? Probability theory doesn't say, he says. But OK — let's take this to a finite universe with known elements. Now all those universals and existentials can be rewritten as finite conjunctions and disjunctions. And does probability theory handle this case?

I mean… I don’t think it does. If you have events A and B and you learn C, well, you update P(A) to P(A|C), and you update P(A∩B) to P(A∩B|C)… but the magnitude of the first update doesn't determine the magnitude of the second. Why should it when the conjunction becomes infinite? I think that Chapman’s claim about a way in which probability theory does not extend predicate logic is equally a claim about a way in which it does not extend propositional logic. As best I can tell, it extends both equally well.
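
(To make that concrete, here's a toy finite case, with numbers invented purely for illustration: two joint distributions under which learning C moves P(A) by the same amount, from 0.5 to 0.6, yet moves P(A∩B) from 0.25 to 0.3 under one and from 0.25 to 0.1 under the other.)

```python
# Each dict maps a truth assignment (a, b, c) to its probability.
# Both distributions give P(A) = 0.5, P(A and B) = 0.25, and P(A|C) = 0.6,
# but P(A and B | C) = 0.3 under dist1 and 0.1 under dist2.
dist1 = {(1, 1, 1): .15, (1, 0, 1): .15, (1, 1, 0): .10, (1, 0, 0): .10,
         (0, 1, 1): .10, (0, 0, 1): .10, (0, 1, 0): .15, (0, 0, 0): .15}
dist2 = {(1, 1, 1): .05, (1, 0, 1): .25, (1, 1, 0): .20, (1, 0, 0): .00,
         (0, 1, 1): .10, (0, 0, 1): .10, (0, 1, 0): .15, (0, 0, 0): .15}

def given_c(dist, event):
    """P(event | C) under the given joint distribution."""
    p_c = sum(p for (a, b, c), p in dist.items() if c)
    return sum(p for (a, b, c), p in dist.items() if c and event(a, b)) / p_c

for d in (dist1, dist2):
    print(round(given_c(d, lambda a, b: a), 3),        # P(A|C): 0.6 in both
          round(given_c(d, lambda a, b: a and b), 3))  # P(A and B|C): 0.3 vs 0.1
```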

(Also here is a link to a place where I posted this and got into an argument with Chapman about this that people might find helpful?)

TAG:

But if we assume a countable universe, probability will work just as well with universals and existentials as it will with conjunctions and disjunctions.

If you regard probability as a tool for thinking, which is pretty reasonable, it's not going to work, in the sense of being usable, if it contains countable infinities or very large finite numbers.

Also, it is not a good idea to build assumptions about how the world works into the tools you are using to figure out how the world works.

But the question wasn't about whether it's usable. The question was about whether there is some sense in which probability extends propositional logic but not predicate logic.

But OK — let’s take this to a finite universe with known elements.

If everything is known you don't need probability theory in the first place. You just know what happens. See Probability is in the Mind.

Most of the factors that we encounter are not known, and good decision making is about dealing with the unknown; part of the promise of Bayesianism is that it helps you deal with the unknown.

So, I must point out that a finite universe with known elements isn't actually one where everything is known, although it certainly is one where we know way more than we ever do in the real world. But this is irrelevant. I don't see how anything you're saying relates to the claim that probability theory extends propositional logic but not predicate logic.

Edit: oops, wrote "point" instead of "world"

But this is irrelevant.

Why is it irrelevant when you assume a world where the agent who has to make the decision knows more than they actually know?

Decision theory is about making decisions based on certain information that is known.

I don't see how anything you're saying relates to the claim that probability theory extends propositional logic but not predicate logic.

I haven't studied the surrounding math, but as far as I understand, according to Cox’s Theorem probability theory does extend propositional calculus without having to make additional assumptions about a finite universe or certain things being known.

Why is it irrelevant when you assume a world where the agent who has to make the decision knows more than they actually know? Decision theory is about making decisions based on certain information that is known.

I think you've lost the chain a bit here. We're just discussing to what extent probability theory does or does not extend various forms of logic. The actual conditions in the real world do not affect that. Now obviously if it only extends it in conditions that do not hold in the real world, then that is important to know; but if that were the case then "probability theory extends logic" would be a way too general statement anyhow and I hope nobody would be claiming that!

(And actually if you read the argument with Chapman that I linked, I agree that "probability theory extends logic" is a misleading claim, and that it indeed mostly does not extend logic. The question isn't whether it extends logic, the question is whether propositional and predicate logic behave differently here.)

But again all of this is irrelevant because nobody is claiming anything like that! I mentioned a finite universe, where predicate logic essentially becomes propositional logic, to illustrate a particular point -- that probability theory does not extend propositional logic in the sense Chapman claims it does. I didn't bring it up to say "Oho well in a finite universe it does extend predicate logic, therefore it's correct to say that probability theory extends predicate logic"; I did the opposite of that! At no point did I make any actual-rather-than-illustrative assumption to the effect that the real world is or is like a finite universe. So objecting that it isn't has no relevance.

I haven't studied the surrounding math, but as far as I understand, according to Cox’s Theorem probability theory does extend propositional calculus without having to make additional assumptions about a finite universe or certain things being known.

Cox's theorem actually requires a "big world" assumption, which IINM is incompatible with a finite universe!

I think this is getting off-track a little. To review: Chapman claimed that, in a certain sense, probability theory extends propositional but not predicate logic. I claimed that, in that particular sense, it actually extends both of them equally well. (Which is not to say that it truly does extend both of them, to be clear -- if you read the argument with Chapman that I linked, I actually agree that "probability theory extends logic" is a misleading claim, and that it mostly doesn't.)

So now the question here is, what are you arguing for? If you're arguing for Chapman's original claim, the relevance of your statement of Cox's theorem is unclear, as it's not clear that this relates to the particular sense he was talking about.

If you're arguing for a broader version of Chapman's claim -- broadening the scope to allow any sense rather than the particular one he claimed -- then you need to exhibit a sense in which probability theory extends propositional logic but not predicate logic. I can buy the claim that Cox's theorem provides a certain sense in which probability theory extends propositional logic. And, though you haven't argued for it, I can even buy the claim that this is a sense in which it does not extend predicate logic [edit: at least, in an uncountable universe]. But, well, the problem is that regardless if it's true, this broader claim -- or this particular version of it, anyway -- just doesn't seem to have much to do with his original one.

I'm not even sure whether Eliezer would argue that probability is an inherent feature of the territory.

Probability is in the Mind

TAG:

If you want to prove that there is no probability in the territory, you need to examine the territory.

Yeah I was referencing Eliezer's views on the topic rather than stating my own. Personally I think it does make sense to think of the Born probabilities as some sort of propensity, which it might be fair to describe as "probability in the territory". Other than that I am not sure what it would mean to talk about "probability in the territory".

David Chapman directly discusses his opinion about LessWrong here.

His description of LW there is: "LW suggests (sometimes, not always) that Bayesian probability is the main tool for effective, accurate thinking. I think it is only a small part of what you need."

This seems to reflect the toolbox vs. law misunderstanding that Eliezer describes in the OP. Chapman is using a toolbox frame and presuming that, when LWers go on about Bayes, they are using a similar frame and thinking that it's the "main tool" in the toolbox.

In the rest of the post it looks like Chapman thinks that what he's saying is contrary to the LW ethos, but it seems to me like his ideas would fit in fine here. For example, Scott has also discussed how a robot can use simple rules which outsource much of its cognition to the environment instead of constructing an internal representation and applying Bayes & expected utility maximization.

I think this is a good summary; see also my comment below.

So, my understanding of Chapman -- and this is based on other things he's written which I unfortunately can't find right now, he can of course correct me if I'm wrong here -- is that he's often just not saying what it sounds like he's saying, because he's implicitly prefixing everything with "human-". The article that I can't find at the moment that made this clear was one where he said, there's no system to do X, and then anticipated the counterargument, but the human brain does X, and replied, yes but I'm talking about systems a human could execute, so "the human brain does X" is not relevant to what I'm talking about. But this is the only place he explicitly said that! So I think when reading him you just have to do that everywhere -- prefix "human-" to everything (although the exact meaning of that prefix seems to vary). When he says "system", he actually means "system a human could execute". When he says "rationality", he actually means "how people usually construe rationality". When he seems to confuse systems of facts and systems of norms, that's not him getting mixed up, it's that he's actually talking about other people's maps -- and in other people's maps these are often conflated -- rather than talking about the territory. Now personally I think this sort of terminology obfuscates rather than clarifies -- you could just, you know, explicitly mark when you're talking about human-X rather than X, or when you're talking about people's maps rather than the territory directly -- but I think you have to understand it if you want to read Chapman's writing.

In a lot of his articles Chapman uses the word system with the meaning the term has in developmental psychology and particularly in Kegan's writing. It's what Kegan labels as level 4. I don't think "system that a human could execute" is a gloss that would allow someone without any background to distinguish things that are systems in the developmental psychology sense from things that aren't.

I think maybe you were thinking of this bit from the post "What they don’t teach you at STEM school":

By system, I mean, roughly, a collection of related concepts and rules that can be printed in a book of less than 10kg and followed consciously. A rational system is one that is “good” in some way. There are many different conceptions of what makes a system rational. Logical consistency is one; decision-theoretic criteria can form another. The details don’t matter here, because we are going to take rationality for granted.

I'm pretty sure that's not the particular one, but thank you all the same!

This one? From the CT-thesis section in A first lesson in meta-rationality.

the objection turns partly on the ambiguity of the terms “system” and “rationality.” These are necessarily vague, and I am not going to give precise definitions. However, by “system” I mean, roughly, a set of rules that can be printed in a book weighing less than ten kilograms, and which a person can consciously follow. If a person is an algorithm, it is probably an incomprehensibly vast one, which could not be written concisely. It is probably also an incomprehensibly weird one, which one could not consciously follow accurately. I say “probably” because we don’t know much about how minds work, so we can’t be certain.
What we can be certain of is that, because we don’t know how minds work, we can’t treat them as systems now. That is the case even if, when neuroscience progresses sufficiently, they might eventually be described that way. Even if God told us that “a human, reasoning meta-systematically, is just a system,” it would be useless in practice. Since we can’t now write out rules for meta-systematic reasoning in less than ten kilograms, we have to act, for now, as if meta-systematic reasoning is non-systematic.

That sounds like it might have been it?

TAG:

Or you could explicitly mark when you are talking about impractical-ideal-X. Chapman's default seems more reasonable to me.

I can't agree with that, for a number of reasons. Note that the thing that I'm claiming Chapman does is really a number of things which I've summed up as "you have to prepend 'human-' to everything", but the meaning of that prefix I'm summing things up with is actually context dependent. Here's a few examples of what it can mean (if I'm correct -- again, if Chapman himself wants to correct me, great!) and why it's not a good way of talking.

  1. Sometimes this means talking about... certain human patterns, that a particular notion tends to invoke. E.g. "rationality" above -- it does indeed frequently happen that those who go in for "rationality" or similar notions end up falling into the Straw Vulcan pattern. And it's important to be able to discuss these patterns. But it's a mistake to conflate the pattern itself with the idea that invokes it -- especially as there may be multiple of the latter, that are distinct from one another; this is a lossy operation. Better to say "rationality" when you mean rationality, and say "the pattern invoked by rationality" (or in this case, "Straw Vulcanism", since we have a name for it in this case) when you mean that. Because otherwise how will you tell apart the different ideas that can invoke the Straw Vulcan pattern?

Like, let's diagram this. The usual approach is that "rationality" (the word) points to rationality (the concept) which then itself has an arrow (via the "invokes in humans" operator) to Straw Vulcanism. If we take the initial arrow from "rationality" to rationality, and alter it instead to point to Straw Vulcanism, how do we refer to rationality? "Idealized Straw Vulcanism?" I don't think so! Especially because once again which idealization?

The alternative, I suppose, is that we don't reroute any arrows, but instead just take it as implicit that we're always supposed to apply "human-" afterward. And, like, use some sort of quotation thingy (e.g. the "idealized-" prefix) when we want to stop that application (like how we use quote marks to indicate that we are mentioning rather than using a word). But even though we're using "rationality" to talk about Straw Vulcanism, under this way of talking, we have to keep in mind that rationality doesn't actually mean Straw Vulcanism (even though that's what we're using it to mean!) so that when we say "idealized rationality" we know what that means. This... this does not sound like a good way of handling things. I would recommend having words directly point to the thing they refer to.

  2. Sometimes this means talking about the map rather than the territory. Taking "X" not to mean X but to mean "X", people's idea of X.

The problem is that, well, most of the time we want to talk about the territory, not people's maps. If I say "there were no Kuiper belt in 1700" you should say "that is false", not "that is true, because the idea of a Kuiper belt had not yet been hypothesized". If I want to say "there was no concept of a 'Kuiper belt' in 1700", I can say that explicitly. Basically this way of talking is in a sense saying, you can't actually use words, you can only mention them. But most of the time I do in fact want to use words, not mention them!

And again this ends up with similar problems to above, which I won't detail in full once again. In this case they seem a bit more handleable because there's not the lossiness issue -- the usual way of speaking is to say X in order to use the word "X" and to say "X" in order to mention the word "X", but one could notionally come up with some bizarre reverse convention here. (Which to be clear I haven't seen Chapman use -- what he says when he actually wants to use a word rather than mentioning it, I don't know. "The real, actual Kuiper belt?" IDK.) I still don't think this is a good idea.

  3. The most defensible one, I think, is where it effectively means "humanly realizable", like with the "system" example above. This one is substantially less bad than the others, because while it's still a bad idea, it's at least workable. It's usably bad rather than unusably bad. But I do still think it's a bad idea. Once again this is a lossy operation -- the distinction between "nondeterministic" and "chaotic", that can both get collapsed to "unpredictable in practice", is worth preserving. And once again to adopt this systematically would require similar contortions to above, even if not as bad; once again I'll skip the full argument. But yeah, I don't think this is a good way of talking.

In the post it was hinted at several times that there is another way of thinking:

Are there Laws of optimal thought governing the optimal way to contextualize and caveat, which might be helpful for finding good executable recipes? The naturally Lawful thinker will immediately suspect so, even if they don't know what those Laws are. Not knowing these Laws won't panic a healthy Lawful thinker. Instead they'll proceed to look around for useful yet chaotic-seeming prescriptions to use now instead of later - without mistaking those chaotic prescriptions for Laws, or treating the chaos of their current recipes as proof that there's no good normative ideals to be had.
Indeed, it can sometimes be useful to contemplate, in detail, that there are probably Laws you don't know.

[...]

The idea that there's a shortest path through the maze isn't a "normative ideal" instead of a "prescriptive ideal", it's just true. Once you define distance there is in fact a shortest path through the maze.

[...]

If you're allergic to normative ideals, maybe a helpful course would be to discard the view of whatever-it-is as a normative ideal and try to understand it as a fact.

Let's call this "Fact-thinking". I'm not sure if it's just me seeing this distinction, so feel free to comment.

  • Fact-thinking relies on the assumption that there is such a thing as 'true underlying reality'.
  • It doesn't really matter if the whole of this 'reality' can be expressed in a bunch of single true sentences (facts), or just parts of it. (Presuming that you are an omniscient being that knows everything there is to know about this 'reality'.) Either way, there is a (presumably) bounded number of distinct facts which describe as much of reality as is possible, given its nature. Whether you are aware of all these facts or not is irrelevant to their existence.
  • Assuming that there are processes which can distinguish truth from falsehood given enough time, and assuming that there are hypothesis-producing processes that generate enough hypotheses to cover the whole of 'reality that can be expressed as facts', then it is possible to identify the whole package of these facts after some finite amount of time, depending on the efficiency of the processes.
  • These facts can be used to describe literally anything that you can do in reality, because they describe the whole of reality, which you are a part of. Therefore, they can describe all laws that you can come up with to approximate or describe some process; they can describe any tool and any box of tools; etc.
  • Given some problem or situation to puzzle through, the Factual thinker doesn't dither over whether the knowledge they're using relates to some approach called 'Law-thinking' or 'Toolbox-thinking'. It doesn't really matter whether you use the left-hand method to find your way out of a maze, or your knowledge of there being a shortest path to build a cellphone app which finds a way for you (a sketch of such a computation follows this list), or whether you burrow underneath the maze, or teleport to the other side. What matters is that whichever approach you use depends on your knowledge of the situation at the time; and this knowledge is either approximately correct with respect to the facts that describe the situation in every possible way, at every possible time, or it is not. That is, you should be aware that your knowledge and ideas of how to conquer the maze are only a subset of all possible ways to conquer the maze; and, within this subset or outside of it, there may or may not be a better approach to getting to the other side, given the parameters you care about (wheelchair accessibility, fastest of all possible times, balancing a red triangle made of jelly on your head while singing to the tune of your favorite childhood song, ...).
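
To make the "shortest path" point concrete: here is a minimal sketch, in Python, of what the hypothetical cellphone app above might do once distance has been defined as "number of steps on a grid". The maze layout, coordinates, and step-count metric are all invented for the illustration; nothing here comes from the post itself.

```python
from collections import deque

def shortest_path_length(maze, start, goal):
    """Breadth-first search over a grid maze.

    maze: list of strings, '#' = wall, anything else = open cell.
    start, goal: (row, col) tuples.
    Returns the minimum number of steps, or None if no path exists.
    """
    rows, cols = len(maze), len(maze[0])
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (r, c), dist = frontier.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))
    return None

maze = ["S.#.",
        ".##.",
        "...G"]
print(shortest_path_length(maze, (0, 0), (2, 3)))  # -> 5
```

The point is not that you must run breadth-first search to get through a maze; it's that once the distance metric is fixed, a shortest path exists and is discoverable, whether or not any particular navigator (left-hand rule, tunneling, teleporting) ever finds it.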

Factual thinking seems to be already hinted at in the post. I just wanted to point to it, because I have seen examples of 'Law-thinking' and 'Toolbox-thinking' in people, but if I were pressed to categorize my way of thinking into something slightly idealised, I would favor what I'd call the 'Fact-thinking' approach, which, if I haven't misunderstood something greatly, is distinct from the other two.

I think there is a danger in toolbox thinking BECOMING law thinking without the due diligence required to justify the law outside its original context.

I.e., an obsolete mode of lawful thought is superseded using a tool, and that tool becomes the new soon-to-be-obsolete mode of lawful thought, until a new tool solves the contradiction... and becomes its own new lawful mode. Sort of a Hegelian dialectical process.

I.e., such progressions are stupid, but they work anyway.

I used this as a research source for my EA/rationality novel project, and found it interesting and useful.

Is there a meaningful difference between a Toolbox thinker, and a Law thinker who is "careful to understand the context and the caveats: when is the right time to think in Law, how to think in Law, and what type of problems call for Lawful thinking"?

Sure; any particular Toolbox Thinker might not have 'Law Thinking' in their toolbox for one.

Inveterate and unapologetic toolbox-thinker here. I really appreciate this write-up as a reminder that there _is_ (probably ;) ) a reality and it's not just maps all the way down. [edit much later: it can still be maps a LONG LONG way down, with reality being inaccessible to any perception of agents within that reality.]

I must admit that the advice to think of laws as descriptive statements makes no sense to me. Why use the term "law" rather than the simpler and more precise term "prediction" (or "measurement", or maybe even "truth" (but I can't tell if you're making that equivalence))?

One key difference between a claim of Law and a claim of Truth: truth-claims are abandoned (or refined, but that's really "abandoned and replaced by a slightly different truth-claim") after a single counter-example. Law-claims, from what I've seen, are fuzzier and tend to wiggle rather than breaking when shown not to be universal.

There's "Law of Physics" and there's "Human Law", which I think might be getting confused here.

Right. Also "Iron Law" is sometimes used for a strong tendency that is not provably universal. Mixing them up is an error that's easy to make when "Law" is used without qualifiers.

It may be that none of my readers need the lecture at this point, but I've learned to be cautious about that sort of thing, so I'll walk through the difference anyways.

One of my favorite literature professors used to tell me that one should always write under the assumption that each piece one writes is the first piece of one's work that the reader has encountered. Not only does this make one's writing more accessible (because odds are there will be someone for whom that is true!), it also helps us to be internally consistent, because we have to summarize our reasoning rather than take shortcuts on the assumption that our audience already knows it.

Not to commit the fallacy of the golden mean or anything, but the two viewpoints are both metatools in the metatoolbox, as it were. You're better off if you can use both in ways that depend on context and circumstance, rather than insisting that only toolbox reasoning is the universally best context-insensitive metaway to think.

I think you're committing the fallacy of the golden mean. "Metatools" are still tools, and "metatoolboxes" are still toolboxes. If I'm understanding you correctly, and your point is "Toolbox thinking and lawful thinking are metatools in metatoolboxes, and should be used accordingly", then you actually are arguing that toolbox reasoning is the universally best context-insensitive metaway to think.

Heck, right at the very beginning of this essay, you described the toolbox way of thinking as "[having] a big bag of tools that you can adapt to context and circumstance", and you used that same wording almost verbatim to state your main argument about metatools and metatoolboxes. So it would appear that you are ultimately arguing in favor of toolbox thinking, yet for some reason saying you're not. Have I misunderstood something somewhere?

If I'm understanding you correctly, and your point is "Toolbox thinking and lawful thinking are metatools in metatoolboxes, and should be used accordingly", then you actually are arguing that toolbox reasoning is the universally best context-insensitive metaway to think.

Eliezer's argument in this post is that "toolbox reasoning is the best way to think" is ambiguous between at least three interpretations:

  • (a) Humans shouldn't try to base all their daily decisions on a single simple explicit algorithm.
  • (b) Humans should never try to think in terms of simple, all-encompassing, unconditional, exceptionless rules and patterns, or should only do so when there's minimal risk of mistaking that rule for a simple-algorithm-you-can-base-every-decision-on.
  • (c) Humans should rarely try to think in terms of such rules. It's useful sometimes, but only in weird exceptional cases.

Your point is that (a) is true, and that toolbox thinking therefore "wins". But this depends on which interpretation we use for "toolbox thinking" — which is a question that doesn't matter and has no right answer anyway, because "toolbox thinking" is just a phrase Eliezer made up to gesture at a possible miscommunication/confusion, and doesn't have an established meaning.

Eliezer's claim, if I understand him right, is that (a) is clearly true, (b) is clearly false, and (c) is very probably false. (c) is the more interesting version of the claim, and the hardest to quickly resolve, since terms like "rarely" are themselves vague and need more operationalization. But a fair number of people do reject something like (a), and a fair number of people do endorse something like (b), so we need to address those views in some way, while being careful not to weak-man people who have more credible and nuanced positions.

If I search for the phrase "toolbox thinking" on LessWrong I find posts like Developmental Thinking Shout-out to CFAR that use it, which suggests to me that it's not something Yudkowsky just made up.

In the context of this post, David Chapman's How To Think Real Good doesn't use the word "toolbox", but it does speak about intellectual tools. When Yudkowsky uses the term here, it seems to me that he is gesturing towards the argument made in that article.

To me the disagreement seems to be:

Yudkowsky: Thinking of the maze as inherently being a Euclidean object by its essential nature is the correct way to think of the maze, even if you might actually use a different algorithm to navigate it.

Chapman: The maze doesn't have an essential nature that you can describe as a Euclidean object. It's a Euclidean object only after you apply a specific mental model to it.

Or to move to the more specific disagreement:

Yudkowsky: Reality is probabilistic in its essential nature even if we might not have the mental tools to calculate things out with Bayes' rule.

Chapman: Probability theory doesn't extend logic, and there are things in reality that logic describes well but probability theory doesn't, so reality is not probabilistic in its essential nature.
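
(To make "calculate things out with Bayes' rule" concrete, here is a minimal worked sketch of a single Bayesian update; the numbers are invented purely for illustration and aren't taken from either author.)

```python
# One Bayesian update: P(H | E) = P(E | H) * P(H) / P(E),
# where P(E) is expanded over the hypothesis and its complement.
prior_h = 0.01          # P(H): prior probability of the hypothesis
p_e_given_h = 0.95      # P(E | H): probability of the evidence if H holds
p_e_given_not_h = 0.05  # P(E | ~H): probability of the evidence otherwise

p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e
print(round(posterior_h, 3))  # -> 0.161
```

The dispute is not over whether this arithmetic is valid, but over whether it describes reality's essential nature or is just one more mental model being applied to it.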

This reads like an analogy to dual-process theory, and the tested nuances of System 1 vs. System 2 thinking map fairly cleanly onto the open questions and gaps in reasoning put forward here.

Edit: Possible bucket error or overcompression. In my frame of reference the likelihood ratio was sufficient to post; the external likelihood ratio is perhaps -7 bits or more.