I don't have great faith in the epistemics of postrats as they exist today. My somewhat limited experience of post-rattish meetups and TPOT is that it's a mix of people who are indistinguishable from rats (and indeed lots are just rats), people who are mostly normie-ish and don't think about epistemics, and totally woo people who are obviously wrong about lots of things (astrology, karma, UFOs) with no epistemic gain.
My guess is that what's happening is that the rationalist frame is 80% correct, and the best alternative is normie epistemics the remaining 20% of the time. The first type of "postrats" just use the rationalist frame. The second type swap in some amount of normie epistemology, but not in a way that correlates with the times they should actually be swapping it in. The third type swap a woo/religious frame into the rationalist frame, which seems mostly just worse than the rationalist frame on its own.
The second and third groups do have better interpersonal skills than rats, but I think this is mostly just regression to the mean.
I don't have great faith in the epistemics of postrats as they exist today.
Yeah, you and me both.
I've said this elsewhere before, but in hindsight it was a mistake for us to promote terms like "postrationality" and "metarationality" to the point of fixation. They're exactly the type of words that invite pre/post confusion and allow pre-rats to masquerade as post-rats if there's insufficient gatekeeping (and there usually is).
And yet, there's something to the desire of folks like myself to point at a thing and say "hey, I think rationalists are doing a lot of things right, but are screwing up in fundamental ways that run contrary to the vibe of rationality, and it's useful to give that thing a name so we can easily point to it".
In my ideal world, people would be trained in rationality-as-it-exists-today first, and then be trained in the limits of those methods so they know how to transcend them safely when they break down. Then post-rat would really mean something: one who fully trained as a rationalist, and then used that as the bedrock on which to learn how to handle the situations the methods of rationality are not good at dealing with.
Some people will argue that's just rationality, and sure, maybe it is some ideal version of rationality as proposed in The Sequences, but as I see it, actual rationalists screw up in predictable ways, those ways are related to the rationalist vibe, and thus the internal experience must be one of transcending that vibe, whatever we want to label it.
I had this dialogue with Claude Opus 4.5 on vibestemics and your vision of epistemics as a whole. As far as I understand it, vibestemics is supposed to stitch together the benefits of two approaches:
I suspect that you meant something like using rationality to reinforce the vibes-based parts of the world model by questioning hard-to-believe results (think of Yudkowsky's derision of empiricists who would end up believing that a Ponzi scheme really does produce revenue; here sceptics would point out that the revenue has to come from somewhere and that the Ponzi scheme offers no explanation for where), or outright heuristics that protect against adversarial attacks (e.g. a Russian sociologist claimed[1] that it's the rational mind which has to be reinforced by tradition; however, such reinforcement could be achieved by different heuristics).
Ironically, the sociologist also used a Ponzi-like scheme as an example.
A few months ago I coined the word “vibestemics”, mostly for myself, in a tweet. At that point, the word was more vibes than ‘stemics. I used it with some friends at a party. They loved it. Since then, nothing.
But I think the word has legs. I just have to figure out what it actually means!
On the surface, it’s obvious. It’s the combination of “vibes” and “epistemics”, so more or less naming the core idea of the post/meta-rationalist project. But again, what does it actually mean? It’s easy to point at a large body of work and say “I don’t know, whatever the thing going on over there is”, but much harder to say what the thing actually is.
So to start, let’s talk about epistemics. What is it? I see people using the word in two ways. One is to mean the way we know things in general. The other is to mean the way we know things via episteme, that is, knowledge that’s reasoned from evidence, as opposed to doxa and techne and many other ways of knowing (if those Greek words mean nothing to you, I highly recommend reading the post at the link before continuing). Unfortunately, some people equivocate between epistemics-as-knowing and epistemics-as-knowing-via-episteme to give the impression that episteme is the only good way to know anything. That, to me, is a problem.
I think it’s a problem because such equivocation discounts valuable sources of knowledge that aren’t easily made legible. Now, to be fair, there’s some reason to do this, because the pre-rationalist epistemic stance says legibility doesn’t matter and logic is just a means to justify one’s preferred ends. The rationalist stance is largely that everything that can be made legible should be, and that which cannot be made legible needs to be treated with great caution because that’s how we slip back into pre-rationality. So I understand the desire to equate epistemics with episteme (and, etymologically, the English language tries very hard to do this), but I also find it frustrating because it encourages excessive devaluing of other ways of knowing, especially metis, techne, and other forms of knowledge that are less legible.
That’s where the vibes come in. They can rescue us from an excessive focus on episteme and temper the excesses of legibility. But what are vibes and how can they help?
Vibes are the embodiment of what we care about. The stoner, for example, has stoner vibes because they care about chilling and feeling good. The Christian has Christian vibes because they want to do what Jesus would do. And the rationalist has rationalist vibes because they care about knowing the truth with high predictive accuracy. For any vibe, there is always something the person expressing it cares about deeply that causes them to have that vibe.
This matters in epistemics because knowing is contingent on care. I make this argument in detail in Fundamental Uncertainty (currently in revision ahead of publication), but the short version is this: we have a mental model of the world; truth is the degree to which that model is accurate; we want an accurate model because it’s useful; usefulness is a function of what we care about; thus truth is grounded by and contingent on care. And since vibes are the embodiment of care, vibes have an influence on the act of knowing, hence, vibestemics.
(If this argument seems handwavy to you, it is. You’ll have to read the book to get the full argument because it takes about 10k words in the middle of it to lay it all out. If you want to read the first draft of that argument, it’s in Chapters 5, 6, and 7, which start here. Alternatively, although I think “Something to Protect” does a poor job of emphasizing the epistemic relevance of care in favor of explaining a particular way of caring, I read it as ultimately claiming something similar.)
Okay, but that’s the theoretical argument for what vibestemics is. What does it mean in practice? Let’s dive into that question by first considering a few examples of different epistemic vibes.
Woo: The epistemic vibe of woo is that whatever’s intuitive is true. Woo is grounded in gnosis and largely eschews doxastic logic and careful epistemic reasoning. That said, it’s not completely devoid of epistemics. It’s definitionally true that whatever you experience is your experience. Unfortunately, that’s roughly where woo stops making sense. It interprets everything through a highly personal lens, so even when it leads to making accurate predictions, those predictions are hard to verify by anyone other than the person who made them, and woo-stemics easily falls prey to classic heuristics-and-biases mistakes. This severely restricts its usefulness unless you have reason to fully trust yourself (and you shouldn’t when it comes to making predictions).
Religion: The vibe of religion is that God or some other supernatural force knows what’s true. Knowledge of what God knows may require gnosis, or it may be revealed through mundane observations of miraculous events. Although not true of every religion, religious epistemics can be a friend of logic, and many religions demand internal logical consistency based on the assumptions they make. Sometimes these theological arguments manage to produce accurate world models, but often they have to be rationalized because the interpretation of the supernatural is fraught and we mere mortals may misunderstand God.
Science: Science as actually practiced by scientists involves empirically testing beliefs and updating them based on evidence. The vibe is pragmatic—build hypotheses, test them, see what happens, and revise accordingly. The only problem is that science requires the ability to replicate observations to determine if they’re true, and that’s where it hits its limits. When events can’t be observed or can’t be replicated, science is forced to say “don’t know”. Thus, science is fine as far as it goes, but its vibe forces it to leave large swaths of the world unmodeled.
Rationality: The vibe of rationality is to be obsessed with verifying that one really knows the truth. This has driven rationalists to adopt methods like Bayesian reasoning to make ever more accurate predictions. Alas, much as is the case for science, rationality struggles to deal with beliefs whose predictions are hard to check. It also tends to smuggle in positivist beliefs for historical reasons, and these frequently result in an excessive concern for belief consistency at the cost of belief completeness.
Post-rationality: The post-rationality vibe is that rationality is great but completeness matters more than consistency. Thus it attempts to integrate other ways of knowing when episteme reaches its limits. Unfortunately, how to do this well is more art than science, and there’s a real risk of getting things so wrong that a post-rationalist wraps back around into pre-rationality. Arguably this is what happened to the first post-rationalists (the postmodernists), and it continues to be a threat today.
What I hope you pick up from these examples is that different epistemic vibes are optimizing for different things and making different tradeoffs. Although it may seem strange, especially if you’re a rationalist, that someone could have a good reason to ignore predictive accuracy in favor of intuition or dogma, for those with woo and religious vibes that choice is locally adaptive. They similarly look back at you and think you are deeply confused about what matters, and this is a place where arguments about who’s right will fail, because they’re ultimately arguments about what each person values.
All that said, it’s clear that some vibes are more epistemically adaptive than others. Accurate world models confer real benefits, so adopting a vibe that leads you to develop better world models is usually a good move. This, incidentally, is what I would argue is the pragmatic case for post-rationality over rationality: it’s rationality plus you can break out of the rationalist ontology when it’s adaptive to do so (though admittedly at the risk of it becoming rationality minus the guardrails that were keeping you sane).
And this ability to shift between vibes is why I think having a word like “vibestemics” is valuable. When we can only speak of epistemics, we risk losing sight of the larger goal of living what we value. We can become narrowly focused on a single value like accurate model prediction, Goodhart on it, and forget to actually win. We can forget that knowledge and truth exist to serve us and our needs, not the other way around. Vibestemics invites us to know more and better than we can with episteme alone, if only we have the courage to let our grip on a single vibe go.