Models vs beliefs

by Adam Zerner
2nd Sep 2025
2 min read

Comments (3)

Dagon:

This seems more like an underspecified question than a prediction difference. You and Ben (and Omega) have different criteria for your rankings. Or, I guess, different factual data about what happened - maybe you misread a stat or something.

The reason you feel a dissonance is that you’re not noticing the difference between “rank of peak using my subjective and unspecified weighting”, which is not objectively testable against any future experience, vs “my prediction of what someone else would say to a different question using the same words”, which is resolvable.

Adam Zerner:

I hear ya, but no, I don't think it's a criteria difference. Ben and I are both evaluating players by roughly the same criterion: how much a player helps you win a championship, or Championship Odds over Replacement Player (CORP). It's a "real" disagreement.
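
To gesture at what that criterion computes, here's a toy sketch with invented numbers. The function and the odds are mine, for illustration only, not Taylor's actual method:

```python
# Toy sketch of a CORP-style criterion: a player's value is how much they
# move your championship odds relative to a replacement-level player.
# All numbers here are invented for illustration.

def corp(odds_with_player: float, odds_with_replacement: float) -> float:
    """Championship Odds over Replacement Player, for one team-season."""
    return odds_with_player - odds_with_replacement

# E.g., if peak Draymond lifts a contender from 8% to 20% title odds:
print(round(corp(0.20, 0.08), 2))  # 0.12
```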

Oftentimes that isn't the case, though, with these sorts of top 25 lists. For example, some people incorporate "floor raising" -- making a bad team average -- and not just "ceiling raising".

Vladimir_Nesov:

The use of models/theories is in their legibility; you don't necessarily want to obey your models even when forming beliefs on your own. Developing and applying models is good exercise, and there is nothing wrong with working on multiple mutually contradictory models.

Framings take this further, towards an even more partial grasp on reality, and can occasionally insist on patently wrong claims for situations that are not central to how they view the world. Where models help with local validity and communication, framings help with prioritization of concepts/concerns, including prioritization of development of appropriate kinds of models.

Neither should replace the potentially illegible judgement that isn't necessarily possible to articulate or motivate well. That seems to be an important failure mode that leads to either rigid refusal to work with (and get better at) the situations that are noncentral for your favored theories, or to deference to such theories even where they have no business having a clue. If any situation is free to spin up new framings and models around itself, even when they are much worse than and contradictory to the nearby models and framings that don't quite fit, then there is potential to efficiently get better at understanding new things, without getting overly anchored to ways of thinking that are much more familiar or better understood.

Models vs beliefs

I think that there is an important difference between sharing your beliefs and sharing what your model predicts. Let me explain.

I'm a basketball fan. There's this guy named Ben Taylor who has a podcast called Thinking Basketball. He's currently doing a series of episodes on the top 25 peaks of the century. And he ranked a guy named Draymond Green as having the 22nd best peak.

I don't agree with this. I would probably have Draymond somewhere in, I don't know, the 40s? 50s? Maybe even the 60s?

And yet, if you held a gun to my head, I'd probably just adopt Taylor's belief and rank Draymond 22nd.

Suppose the all-knowing god of the courts, Omega, knows exactly where Draymond's peak ranks. And suppose that Omega allows me one guess and will shoot me if I'm wrong. Or, if you want to be less grim, will give me $1,000,000 if I'm right. Either way, my guess would be 22.

But despite that being my guess, I still wouldn't say that I agree with Taylor. There's this voice inside me that wants to utter "I think you're wrong, Ben".

What's going on here?

I think what's going on here is that my belief differs from what my model predicts. Let me explain. Dammit, I said that already. But still, let me explain.

My model of how good a basketball player is depends on various things: shot creation, spacing, finishing, perimeter defense, rim protection, etc., etc. It also incorporates numbers and statistics: box score stats like points per game, and on-off stats like how the team's defensive efficiency looks with you on the court vs off the court. It even incorporates things like award voting and general reputation.

Anyway, when I do my best to model Draymond and determine where his peak ranks amongst other players this century, this model has him at around 45.
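
To make that concrete, here's a toy sketch of the kind of model I mean. Every weight and rating in it is invented for illustration; the real model in my head is fuzzier than this:

```python
# A made-up sketch of the kind of model I mean. The features are the ones
# named above; every weight and rating here is invented for illustration.

FEATURE_WEIGHTS = {
    "shot_creation": 0.25,
    "spacing": 0.15,
    "finishing": 0.10,
    "perimeter_defense": 0.15,
    "rim_protection": 0.15,
    "on_off_impact": 0.15,  # e.g., team efficiency swing with you on vs off
    "reputation": 0.05,     # award voting, general reputation
}

def peak_score(ratings):
    """Combine 0-10 feature ratings into a single peak score."""
    return sum(w * ratings.get(f, 0.0) for f, w in FEATURE_WEIGHTS.items())

# Rate every candidate peak (hypothetical numbers), then sort descending.
players = {
    "2016 Draymond Green": {"shot_creation": 4, "spacing": 3, "finishing": 5,
                            "perimeter_defense": 9, "rim_protection": 9,
                            "on_off_impact": 9, "reputation": 6},
    # ...the other candidate peaks of the century...
}

ranking = sorted(players, key=lambda p: peak_score(players[p]), reverse=True)
print(ranking.index("2016 Draymond Green") + 1)  # with a full list, ~45 for me
```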

But I don't trust my model. Well, that's not true. I have some trust in my model. It's just that, with a gun to my head, I'd have more trust in Taylor's model than I would in my own.

Is there anything contradictory about this? Not at all! Or at least not from what I can tell. There are just two separate things at play here.

I feel like I often see people conflate these two separate things. Like if someone has a certain hot take about effective altruism that differs from the mainstream. I find myself wanting to ask them whether this hot take is their genuine gun-to-your-head belief or whether it is just what their best attempt at a model would predict.

I don't want to downplay the importance of forming models, or of discussing the predictions of your models. Imagine if, in discussing the top 25 peaks of the century, the only conversation was "Ben Taylor says X. I trust Taylor and adopt his belief." That doesn't sound like a recipe for intellectual progress. Similarly, if everyone simply deferred to Toby Ord on questions of effective giving -- or to Eliezer on AI timelines, Zvi on Covid numbers, NNGroup on hamburger menus, whatever -- I don't think that would be very productive either.

But we're in a "two things could be true" situation here. It is almost certainly true that sharing your models and their predictions is good for intellectual progress. It is also true that the experiences you actually anticipate are not necessarily the same experiences that your model anticipates. And that, with a gun to your head, you very well might ditch your model and adopt the beliefs of others.
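
If it helps, here's the distinction in a toy snippet. The trust numbers are invented; the point is just that the two quantities can come apart:

```python
# Toy illustration of the model/belief split. The trust numbers are invented.

my_model_rank = 45  # what my model predicts for Draymond's peak
taylors_rank = 22   # what Taylor's model predicts

# How much I trust each model to be right, all things considered.
trust = {"mine": 0.3, "taylors": 0.7}

# What I share when sharing my model's prediction:
model_prediction = my_model_rank

# My gun-to-the-head belief: defer to whichever model I trust more.
gun_to_head_belief = my_model_rank if trust["mine"] > trust["taylors"] else taylors_rank

print(model_prediction)    # 45 -- worth sharing and arguing about
print(gun_to_head_belief)  # 22 -- what I'd actually bet my life on
```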