The host has requested RSVPs for this event
16 Going, 3 Maybe, 1 Can't Go
Sean Aubin
Jenn
Mike White
Luke Pereira
Arth
Peter
Sheikh Abdur Raheem Ali
Harry
Thomas Broadley
Akhil
MISHA
Kimk
Viktor Riabtsev
anithite
Bobby
Sacha
Vaeda
Graeme
Naryan
peripetical@gmail.com

In a nutshell, Applied Rationality is figuring out good actions to take towards a goal. Going meta, by questioning whether the goal itself is good, is necessary and useful. But navigating the existential pitfalls that come with this questioning can feel like a waste of time.

How do you balance going meta with actually doing the work?

Location

Enter the Mars Atrium via University Avenue entrance. We'll meet in front of the CIBC Live Lounge (see picture), which is in the atrium on the ground floor. I'll be wearing a bright neon windbreaker. We'll loiter there until 14:30 and then head somewhere comfier depending on how many people show up.

Reading

An abridged post in which David Chapman frames the problem, its importance, and its common causes of frustration, but offers no solutions.

Please recommend/bring other readings.

11 comments

Would it be okay to start some discussion about the David Chapman reading in the comments here?

Here are some thoughts I had while reading.

When Einstein produced general relativity, the success criterion was "it produces Newton's laws of gravity as a special-case approximation", i.e. it had to reproduce the same models that had already been verified as accurate to a certain level of precision.

If more rationality knowledge produces depression and otherwise less stable equilibria within you, then that's not a problem with rationality. Quoting from a LessWrong post: We need the word 'rational' in order to talk about cognitive algorithms or mental processes with the property "systematically increases map-territory correspondence" (epistemic rationality) or "systematically finds a better path to goals" (instrumental rationality).

A happy, stable, productive you (or the previous stable version of you) is a necessary condition of using "more rationality". If it comes out otherwise, then it's not rationality; it's some other confused phenomenon, like a crisis of self-consistency. That crisis, if it happens, feels understandably painful, but should eventually produce a better you at the end. If it doesn't, then it wasn't worth starting on the entire adventure, or stressing much about it.

Just to make sure I am not miscommunicating, "a little rationality can actually be worse for you" is totally a real phenomenon. I wouldn't deny it.

A happy, stable, productive you (or the previous stable version of you) is a necessary condition of using "more rationality". If it comes out otherwise, then it's not rationality; it's some other confused phenomenon, like a crisis of self-consistency. That crisis, if it happens, feels understandably painful, but should eventually produce a better you at the end. If it doesn't, then it wasn't worth starting on the entire adventure, or stressing much about it.

I conjecture roughly the opposite: that sometimes, in the pursuit of winning or truth with rationality, there will be conclusions that are more likely to be right but that also cause bad mental health or instability.

In other words, there are truths that are both important but also likely to cause bad mental health.

I feel like there are local optima: getting to a different stable equilibrium involves having to "get worse" for a period of time, questioning existing paradigms and assumptions. That is, performing the update feels terrible, in that you get periodic glimpses of "oh, my current methodology is clearly inadequate", which feels understandably crushing.

The "bad mental health/instability" is an interim step where you are trying to integrate your previous emotive models of certain situations with newer models that appeal to you intellectually (i.e. that feel like they ought to be the correct models). There is conflict when you try to integrate the two, which is often meta-discouraging.

If you're curious about what could possibly be happening in the brain when that process occurs, I would recommend Mental Mountains by Scott A., or even better the whole Multiagent Models of Mind sequence.

I definitely agree that the goal should be to be emotionally healthy while accepting reality as it is, but my point really is that the two goals may not always come together.

I suspect that truths that could cause bad mental health/instability probably have the following properties:

  1. Non-local belief changes must be made. That is, you can't compartmentalize the changes to a specific area.

  2. Extreme implications. That is, the truth implies much more consequential conclusions than your previous beliefs did.

  3. Contradicts what you deeply believe or value.

These are the properties I expect to cause mental health problems for truths.

I think David's primary concern is choosing the goals in "systematically finds a better path to goals", which he wants to name "meta-rationality" for the sake of discussion, but which I think could be phrased as part of the rationality process?

So the premise is that there are goals you can aim for. Could you give an example of a goal you are currently aiming for?

I am irrationally/disproportionately insecure about discussing my mediocre/generic goals in a public forum, so I'd rather discuss this in-person at the meetup. :apologetic-emoji

No, that's fair.

I was mostly having trouble digesting that 3-4-5 stage paradigm. I'm afraid it's not a very practically useful map, i.e. it doesn't actually help you instrumentally navigate anywhere. But I realized halfway through composing that argument that it's very possible I'm just wrong, so I decided to ask for an example of someone using this framework to actually successfully orient somewhere.

I wonder if the post-rationalist/tpot community is a kind of meta-rationalism or if it's another stepping stone before it. This recent tweet is interesting, though I'd say that post-rats are more concerned with the meaning crisis than with effective accelerationism: https://twitter.com/nosilverv/status/1625951461673734169

I truly do not know. I have many friends (Hazard) and acquaintances (Malcolm Ocean) in those communities and only understand their blog posts and investigations/discussions 20% of the time, but maybe that's because I'm a stage 4.5 dweller.

Hilariously, this meetup is motivated by the sentiment/discomfort/uncertainty expressed in Ellie Hain's reply to the tweet you linked:

the problem is though, that the more meta you go, the less concrete solutions you get. A lot of talking but very little doing

Aside: I've met Ellie Hain in person and like the School for Social Design, but have yet to find a way to design an approachable meetup around their ideas.

Apologies, I have not made it before; this will be my first time. This is certainly an applied problem I am interested in discussing, especially from the point of view of pedagogy. Thank you for hosting. Cheers, Mike