Gordon Seidoh Worley

I'm writing a book about epistemology. It's about The Problem of the Criterion, why it's important, and what it has to tell us about how we approach knowing the truth.

I've also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.

Sequences

Advice to My Younger Self
Fundamental Uncertainty: A Book
Zen and Rationality
Filk
Formal Alignment
Map and Territory Cross-Posts
Phenomenological AI Alignment

Comments

I guess this is fine, but I'm not convinced. This mostly just seems like you pushing your personal aesthetic preferences, and it feels like I could easily come up with arguments for following exactly the opposite advice.

This post reminds me of lots of writing advice: seems fine, so long as you have the same aesthetic sensibilities as the person giving the advice.

I'm confused by the title. Reading this, I don't really see a case that the benefits of the "poison" in ayahuasca recommend it over purer forms of DMT. My takeaway from your points is that ayahuasca is significantly worse than other forms of DMT and should be avoided.

It's unclear to me (and I am not a lawyer): does Musk have standing?

I don't think there's any other option. Wholesomeness is something you have to learn by doing. If you try to imitate what you think is wholesome after reading about it, you'll likely end up in some uncanny valley of weirdly unwholesome behavior even though it has all the trappings of wholesomeness.

Sure, feel free to use it, or riff on it to create something better.

This is a fully general problem with using words: the categories they point to are always a bit off, especially if the reader doesn't share a lot of our context. I find it best to state things as directly as I can, and let others sort out their own confusion.

I don't think it has to be hard to say what wholesomeness is. I don't know what you mean by the word, but to me it's simply acting in a way that has compassion and respect for everything, leaving nothing out. Very hard to do, but easy enough to state.

Coming back to add another thought:

There's a difference between the idea of dependent origination and the insight into or faith in dependent origination that comes from practice. That is, in the above, I think rationalists and most people understand the idea, but I'd also claim that most don't have insight into it, or don't trust its truth in the way you can only come to trust it by seeing for yourself the feature of the world the theory is trying to point at.

I don't want people to read the above and think "oh, well, guess I don't need to care about dependent origination", because that would be a mistake. But it'd also be a mistake to think that the average person doesn't have a reasonable understanding of the core idea; it's just that their understanding is abstract rather than embodied.

First, just want to clarify some terminology, which I know can be a bit confusing when new to the site. Less Wrong is about rationality rather than rationalism; the terms are linguistically close and not totally unrelated, but they're used as jargon to mean different things. The short version is that rationality is about having accurate beliefs, while rationalism is the philosophical stance that reason is the primary source of true knowledge, as opposed to observation, prior beliefs, etc.

Second, I wrote a series of posts about the intersection of rationality and zen, since I practice both. Maybe I'll get back to adding to it one day. You might find that interesting.

Third, let's address your question, on the assumption that you meant "rationality" by "rationalism" (sorry if this is a wrong assumption!). So, to the extent that the theory of dependent origination is correct, rational agents should place proportional credence in its truth. That said, dependent origination is a metaphysical claim, and thus hard to test, so most rational agents will be forced to remain relatively uncertain about it because we are limited in our ability to check it. For example, from inside the world, we can't distinguish between a world where dependent origination is true and one that exists for exactly the single moment when you bothered to check and just happened to be arranged in a way that made it look like there was cause and effect, though the former belief is also obviously more useful than the latter even if you can't tell which one is really the true nature of the world.

On the other side, rationality is deeply tied up with the theory of causation, because an understanding of causation is necessary to make sense of much of the evidence we observe and to update our beliefs as accurately as possible. I think most rationalists implicitly believe in something like dependent origination even if they don't have an explicit theory of it, because it's (mostly) part of the standard Western model of the world. Modern Westerners generally seem to understand that everything is causally connected within a given thing's Hubble volume, even if some of those connections are quite weak. So all this is to say that rationality's compatibility with dependent origination is not intentional; rather, dependent origination, under different names, is now just a background assumption of the type of people who become rationalists.

That said, two additional thoughts. First, just because it's the background assumption doesn't mean people understand it well if you press them for details, and you can easily get people to claim that things are not causally connected because they are unreflectively ignoring causes that don't fit within their ontology. Second, the teaching on dependent origination is best understood in context: it was given at a time when many people had a model of the world that suggested many things had independent causes, or maybe even no causes at all! So it's worth looking at the connection, but also don't be surprised if you dig into it and find everything adding up to normality.

Despite some problems with the dual process model, I think of this as an S1/S2 thing.

It's relatively easy to get an insight into S2. All it takes is a valid argument that convinces you. It's much harder to get an insight into S1, because that requires a bunch of beliefs to change such that the insight becomes an obvious facet of the world rather than a linguistically specified claim.

We might also think of this in terms of GOFAI. Tokens in a Lisp program aren't grounded to reality by default. A program can say bananas are yellow, but that doesn't really mean anything until all the terms are grounded. So, to extend the analogy, what's happening when an insight finally clicks is that the words are now grounded in experience and in some way made real, whereas before they were just words that you could understand abstractly but weren't part of your lived experience. You couldn't embody the insight yet.
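As a toy illustration of the ungrounded-symbols point (my own sketch, not taken from any particular GOFAI system; the fact tuples and function name are made up):

```python
# A toy GOFAI-style knowledge base: facts are just tuples of symbols.
facts = {("color", "banana", "yellow")}

def knows(predicate, subject, value):
    """Check whether the knowledge base contains a matching fact."""
    return (predicate, subject, value) in facts

print(knows("color", "banana", "yellow"))  # True
print(knows("color", "banana", "blue"))    # False

# The program can answer queries about its symbols, but "banana" only
# relates to other symbols; nothing ties it to actual bananas or to any
# experience of yellow. Grounding would mean connecting these tokens to
# perception and action, which is exactly the part the program lacks.
```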

For what it's worth, this is a big part of what drew me to Buddhist practice. I had plenty of great ideas and advice, but no great methods for making those things real. I needed some practices, like meditation, that would help me ground the things that were beyond my ability to embody just by reading and thinking about them.

Stepping back from the physical questions, we can also wonder why generalization works in general, without reference to a particular physical model of the world. And we find, much as you have, that any justification for generalization ends up contingent on the person doing the generalizing.

In philosophy we talk about this through the problem of induction, which arises because the three standard options for justifying its validity are unsatisfactory: assuming it is valid as a matter of dogma, proving it is a valid method of finding the truth (which bumps into the problem of the criterion), or proving its validity recursively (i.e. induction works because it's worked in the past).

One of the standard approaches is to start from what would be the recursive justification and ground out the recursion by making additional claims. A commonly needed claim is known as the uniformity principle, which says roughly that we should expect future evidence to resemble past evidence (in Bayesian terms we might phrase this as future and past evidence being drawn from the same distribution). But the challenge then becomes to justify the uniformity principle, and it leads down the same path you've explored here in your post, finding that ultimately we can't really justify it except if we privilege our personal experiences of finding that each new moment seems to resemble the past moments we can recall.

This ends up being the practical means by which we are able to justify induction (i.e. it seems to work when we've tried it), but also does nothing to guarantee it would work in another universe or even outside our Hubble volume.
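To put the Bayesian phrasing of the uniformity principle in symbols (a sketch under the usual exchangeability assumption, which is just the uniformity assumption restated, not something the above justifies):

$$P(x_{n+1} \mid x_1, \dots, x_n) = \int P(x_{n+1} \mid \theta)\, P(\theta \mid x_1, \dots, x_n)\, d\theta$$

The predictive distribution on the left only licenses generalization because the same $\theta$ is assumed to govern both the observed data and the next observation; the formula uses the uniformity assumption rather than justifying it.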
