[ Question ]

If there were an interactive software teaching Yudkowskian rationality, what concepts would you want to see it teach?

by MikkW · 1 min read · 2nd Sep 2020 · 21 comments


Spaced Repetition · Skill Building · Software Tools · Rationality
Frontpage

I've been noticing some complaints (such as this post by Richard Ngo) lately about the quality of the modern LW community's contribution to the big picture of humanity's knowledge.

Ideally, if reading something automatically made you deeply internalize everything it said, then just by having a group of people who have read The Sequences, you'd have a superteam of intellectuals. And while I do think LW is a pretty cool group of smart thinkers, that isn't fully the case: just reading The Sequences isn't enough. To really internalize the lessons, one must apply the principles, push against problems, and see where one's understanding needs improvement and where it is already good enough.

The simplest form of this is having a high-quality Anki deck that tests users on the principles, both by testing recall of the stated principle itself, and even more importantly, giving them test cases where they can apply the principles (in the same vein as Ankifying medium-difficulty multiplication problems). I have seen some rationality-themed Anki decks, but many of the cards are poorly formatted (both esthetically and in terms of learnability), and are also poorly curated. Ideally, if there were to be an Anki deck, it would be well formatted, and the cards would be carefully chosen to maximize quality of information.
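To make the "well formatted, carefully curated" idea concrete, here's a minimal sketch of generating such a deck programmatically, assuming the genanki Python library; the IDs and the two example cards are just placeholders.

```python
# Minimal sketch of building a rationality Anki deck programmatically.
# Assumes the genanki library (pip install genanki); IDs and cards are placeholders.
import genanki

# One consistent card template, so every card shares the same clean formatting.
model = genanki.Model(
    1607392319,  # arbitrary fixed model ID
    'Rationality Principle',
    fields=[{'name': 'Prompt'}, {'name': 'Answer'}],
    templates=[{
        'name': 'Card 1',
        'qfmt': '{{Prompt}}',
        'afmt': '{{FrontSide}}<hr id="answer">{{Answer}}',
    }],
)

deck = genanki.Deck(2059400110, 'The Sequences: Core Concepts')

# Mix recall cards with small application exercises, as suggested above.
cards = [
    ('State the principle behind "making beliefs pay rent".',
     'A belief should constrain which experiences you anticipate.'),
    ('You notice you are confused by an observation. What should you do first?',
     'Treat the confusion as evidence that your model is wrong somewhere.'),
]
for prompt, answer in cards:
    deck.add_note(genanki.Note(model=model, fields=[prompt, answer]))

genanki.Package(deck).write_to_file('rationality.apkg')
```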

Another idea that I've been thinking about is making explorables, a la Nicky Case, that would introduce important rationality concepts. This would have the advantage of providing more flexibility in experience than Anki, but also would sacrifice the benefits of having already implemented SRS.

My question is: if there were to be either an Anki deck or an explorable teaching concepts from The Sequences, targeted primarily as an aid for current LW users, but also as an introduction aimed at the public at large, what concepts from The Sequences would you most want to see covered?



6 Answers

On seeing the title of this post again, I'm reminded of an obvious answer: teach people how to decide what to learn for themselves. Sort of like the "give a man a fish, feed him for a day" vs. "teach a man to fish" thing.

I don't think there's a more useful meta thing to learn since that's what you need to figure out everything else for yourself.

Training for effective discourse. If arguments for a claim are presented in philosophical standard form (as is done in academic spheres), it gets much easier to diagnose and identify incoherence in arguments. (And more importantly, in a way we could programmatically evaluate.) Using this structure gives you rationality testing in practical situations: there could be exercises where you're presented with a standard-form argument with flaws in one or two places, and you need to both identify them and label/name the issues; then perhaps ones where you're given a similar argument but in natural language. Belief-forming is at a higher abstraction layer than argument-comprehension and -making, so there's some implicit foundational content that such software should establish first.
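As a rough sketch of what a programmatically evaluable exercise could look like (the class names and the example argument here are all hypothetical): represent the argument as numbered premises plus a conclusion, tag the planted flaws, and check the user's labels against them.

```python
# Sketch of a standard-form argument exercise; names and example are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Premise:
    text: str
    flaw: Optional[str] = None  # e.g. "hasty generalization", or None if sound

@dataclass
class Argument:
    premises: list[Premise]
    conclusion: str

    def check_labels(self, user_labels: dict) -> dict:
        """Compare the user's flaw labels (premise index -> label) to the planted flaws."""
        return {i: user_labels.get(i) == p.flaw
                for i, p in enumerate(self.premises) if p.flaw is not None}

exercise = Argument(
    premises=[
        Premise('P1. Every swan observed so far has been white.'),
        Premise('P2. Therefore all swans are white.', flaw='hasty generalization'),
    ],
    conclusion='C. The next swan observed will be white.',
)

print(exercise.check_labels({1: 'hasty generalization'}))  # -> {1: True}
```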

And then, of course, you'd have Competitive Ranked Debate, where you form argumentative trees to tear each other's arguments apart, labelling particular nodes in standard form as points of conflict and branching out into fractal disagreements... though probably not as a minimum viable product.

Like you said, reading isn't enough. I think two of the key challenges for such software would be limiting inferential distance for any particular user, and giving practice examples/problems that they actually care about. That's much easier with a skilled mentor than with software, but I suspect it would be very helpful to have many different types of contexts and framings for whatever you try to have such software teach.

In my first-semester college physics class, the first homework set was all Fermi problems, just training us to make plausible assumptions and see where they lead. Things like "How many words are there in all the books in the main campus library?" or "How many feathers are there on all the birds in the world?" Even though this was years before The Sequences were even written, let alone when I read them, it definitely helped me learn to think more expansively about what kinds of things count as "evidence" and how to use them. It also encourages playfulness with ideas, and counters the sense of learned helplessness a lot of us develop about knowledge in the course of our formal schooling.
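As a sketch of how such an estimate bottoms out, the library question reduces to multiplying a few guessed quantities; every number below is a made-up but plausible assumption, which is the whole point of the exercise.

```python
# Fermi estimate: words in all the books in a campus library.
# Every number is a made-up but plausible assumption; only the order of magnitude matters.
books_in_library = 2_000_000   # a large university library
pages_per_book = 300
words_per_page = 400

total_words = books_in_library * pages_per_book * words_per_page
print(f"~{total_words:.0e} words")  # on the order of 10^11
```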

Actually - beyond specific skills, it might be helpful to think about trying to foster the 12 virtues. Not just exercises, but anecdotes to motivate and show what's possible in interesting and real contexts, games that are fun to experiment with, things like that.

Having an Anki deck is kind of useless in my view, as engaging with the ideas is not the path of least resistance. There's a tendency to just go "oh, that's useful" and do nothing with it, because Anki/SuperMemo are about memorisation. Using them for learning, or creating, is possible with the right mental habits; but for an irrational person, those habits are exactly what you want to instill in the first place. You need a system which fundamentally encourages those good habits.

Which is why I'm bearish about including cards that tell you to drill certain topics into Anki, since the act of drilling is itself a good mental habit that many lack. Something like a curated selection of problems that each require a certain aspect of rationality, spaced out to aid retention, would be a good start.
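The "spaced out to aid retention" part is the easy bit; here's a minimal sketch of an SM-2-style interval update (the algorithm family behind Anki and SuperMemo), loosely following the published SM-2 formula rather than any particular tool's exact implementation.

```python
# Minimal SM-2-style scheduler sketch: given a recall grade 0-5, update the
# easiness factor and the number of days until the problem is shown again.
# Loosely follows the SM-2 formula, not any particular app's tweaks.
def sm2_update(grade: int, reps: int, interval: int, ef: float):
    if grade >= 3:  # successful recall
        if reps == 0:
            interval = 1
        elif reps == 1:
            interval = 6
        else:
            interval = round(interval * ef)
        reps += 1
    else:  # failed recall: relearn from the start
        reps, interval = 0, 1
    ef = max(1.3, ef + (0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02)))
    return reps, interval, ef

# Example: a problem answered well three times in a row.
state = (0, 0, 2.5)  # reps, interval in days, easiness factor
for grade in (4, 5, 4):
    state = sm2_update(grade, *state)
    print(state)  # intervals grow: 1 day, 6 days, ~16 days
```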

Unfortunately, there's a trade-off between making the drills thorough and reducing overhead on the designer's part. If you're thinking about an empirically excellent, "no cut corners" implementation of teaching total newbs mental models, I'd suggest looking at DARPA's Digital Tutor. As to how you'd replicate such a thing, the field of research described here seems a good place to start.

Aside from memorizing declarative knowledge, the question of how to acquire tacit knowledge is very interesting.

I don't have any great ideas at the moment (other than adding Hammer Time-style practical tests into things), but I think commoncog's blog is very interesting, especially the stuff about naturalistic decision making: https://commoncog.com/blog/the-tacit-knowledge-series/ (can't link more specifically, on mobile).

An Anki deck is a bad idea because, as you said: a. formulation, and b. poor coherence (when you're stuffing things other people thought were cool into your brain, they won't connect with other things in your brain as well as if you'd made the deck yourself).

I think incremental reading with SuperMemo is a decent option. I've taught a few rat-adjacent people SuperMemo, and the ones that have spent time on the Sequences inside it have said it's useful. I'm not sure how to summarize it well, but basically: Anki lets you memorize stuff algorithmically, while incremental reading lets you learn (algorithmically) and then memorize.

I'd be surprised if, after a year of using IR on the Sequences, you weren't at least a fair bit more instrumentally rational.

(If you want to give it a try, I'll gladly teach you. I don't think there's any more efficient way to process declarative information.)