[ Question ]

Is LW making progress?

by zulupineapple, 24th Aug 2019, 11 comments



Some people have an intuition that with free exchange of ideas, the best ones will eventually come out on top. I'm less optimistic, so I ask if that's really happening.

The alternative would be to have the same people talking about the same problems without accumulating anything. People would still make updates, but some would update in opposite directions, leaving the total distribution of ideas in the community largely unchanged. There would be occasional great ideas, but they would soon get buried in the archives, without leaving much of an impact. I have some hope we're not living in that world and I want to see evidence to justify it.

One way to look at the question is to consider the sequences. They are 10 years old now. Are they in some way outdated? Are there some new and crucial ideas that they are missing?


3 Answers

There's some debate about which things are "improvements" as opposed to mere changes. It's varied a bit which of these happened directly on LessWrong, but things that seem like improvements to me, and which I now think of as important parts of the LessWrong idea ecosystem, include:

  • Updates on decision theory
    • seem like the clearest example of intellectual progress on an idea that happened "mostly on LessWrong", as opposed to mostly happening in private and then periodically being written up on LessWrong afterwards.
  • The Sequences were written pre-replication crisis.
    • At least some elements were just wrong as a result. (More recent editing passes on Rationality A-Z have removed those, from what I recall. For example, it no longer focuses on the Robbers Cave Experiment.)
  • The AI landscape has evolved
    • During the Sequences days, a lot of talk about how AI was likely to develop was still in the "speculation" phase. By now we've seen a lot of concrete advances in the state of the art, which makes for more concrete, and different, discussions of how things are likely to play out and what good strategies for addressing them look like.

Shift from specific biases towards general mental integration/flexible skillsets

In the Sequences days, a lot of discussion focused on "how do we account for particular biases?" There has been some shift away from this overall mindset, because dealing with individual biases mostly isn't that useful.

There are particular biases, like confirmation bias and scope insensitivity, that still seem important to address directly, but it used to be more common to, say, read through Wikipedia's list of cognitive biases and try to address each one.

Instead there's a bit more of a focus on how to integrate your internal mental architecture in such a way that you can notice biases/motivated thinking/etc. and address them flexibly. In particular, having a dialogue with yourself about why you don't seem to be acting on information even after it's been pointed out to you.

  • Understanding of Trigger-Action-Plans and Noticing
    • Developing the general skill of noticing a situation, and then taking some kind of action on it, turns out to be a core rationality skill. (There are versions of this that focus on the action you take in particular situations, and versions that focus more on the "noticing" part, where you just practice noticing, in the moment, particular mental states, conversational patterns, or situations that warrant applying some kind of 'thinking on purpose'.)
  • Focusing (and other introspective techniques)
    • There's a class of information you might be operating on subconsciously that's hard to think concretely about; Focusing is a technique for getting at it.
  • Doublecrux
    • Originated at CFAR, but by now has extensive writeups by other people that build upon it.
  • Internal Doublecrux

I've been lurking on LW for many years, and overall, my impression is that there's been steady progress. At the end of a very relevant essay from way back in 2014, Scott states:

I find this really exciting. It suggests there’s this path to be progressed down, that intellectual change isn’t just a random walk. Some people are further down the path than I am, and report there are actual places to get to that sound very exciting. And other people are around the same place I am, and still other people are lagging behind me. But when I look back at where we were five years ago, it’s so far back that none of us can even see it anymore, so far back that it’s not until I trawl the archives that I realise how many things there used to be that we didn’t know.

Five years later, I think this still applies. It explains some of the rehashing of topics that were previously discussed. The things I point out below are some of the most notable insights I can remember.

When LW was relatively inactive, there were essays from the surrounding sphere that stuck with me. For instance, this essay by Paul Christiano, which was, for me, one of the first clear examples of how epistemically irrational things that humans do can actually be instrumentally rational in the right setting, something that wasn't really discussed much in the original sequences.

I think LW has also started focusing a fair bit on group rationality, along with norms and systems that foster it. That can be seen by looking at how the site has changed, along with all of the meta discussion that follows. I think that in pursuit of this, there's also been quite a bit of discussion about group dynamics. Most notable for me were Scott's Meditations on Moloch and The Toxoplasma of Rage. Group rationality looks like a very broad topic, and insightful discussions about it are still happening now, such as this discussion on simulacra levels.

On the AI safety side, I feel like there's been an enormous amount of progress. Most notable for me was Stuart Armstrong's post Humans can be assigned any values whatsoever, along with all the discussion about the pros and cons of different methods of achieving alignment, such as AI Safety via Debate, HCH, and Value Learning.

As for the sequences, I don't have any examples off the top of my head, but I think at least some of the quoted psychology results failed to replicate during the replication crisis. I can't remember much else about them, since it's been so long since I read them. Many of the core ideas feel like they've become background knowledge that I take for granted, even if I've forgotten their original source.

As someone who worked for CFAR full-time from early 2014 to mid-late 2016 and still teaches at CFAR workshops fairly regularly as a contractor, I can tell you that there has definitely been progress from the "Sequences era" style of rationality to what we are currently teaching. Earlier versions of the CFAR curriculum were closer to the Sequences and were also in my view worse (for instance, CFAR no longer teaches Bayes's Theorem).

Not all of this has been fully visible to the public, though at least some of it is -- for instance, here is Duncan's post explaining Double Crux, one of CFAR's "post-Sequences" innovations. I don't think there are posts for every new technique, but I do think there's progress being made, some of which is reflected on LW.