Epistemic status: serious, uncertain, moderate importance. Leaving comments is encouraged!

Recently Eliezer Yudkowsky's main writing output has been rationalist glowfic: role-play fiction written on an Internet forum like glowfic.com.[1] I think that LessWrongers, fans of rationalist fiction, and anyone interested in raising the sanity waterline should consider distilling lessons from Yudkowsky glowfic into LW posts.

Here's the basic case:

  1. The original Sequences were extremely good at building the community and raising the sanity waterline. If you want to make the impact case, I think they plausibly get multiple percent of the entire rationality community's impact points.
  2. The Sequences are incomplete. Despite most of his knowledge coming from his home planet, Eliezer has in fact learned things since 2009. Having more sequences would be great!
  3. Eliezer's thoughts are still relevant. Recent posts like conversations with AI researchers, calling attention to underrated ideas, and short fiction have all been good.
  4. Not everyone gets useful lessons from the Sequences, because Eliezer's writing style and tone can be annoying. Eliezer was deliberately discourteous towards "stupid ideas", and regrets this. Also, some people just learn better from other writing styles.
  5. Eliezer stopped writing Sequences and probably cannot write more. This is a combination of Eliezer's chronic fatigue syndrome and being tired of trolls / bad takes in comments. The only medium he can write in without being drained is glowfic. Thus, even though it's a non-serious format, glowfic is Eliezer's main intellectual output right now.
  6. Eliezer attempts to make his glowfic roughly as edifying as HPMOR, and among people who read glowfic, some find it really good at teaching rationality.
  7. But not everyone can read glowfic and gain useful lessons.
    1. Many people (including me) read fiction for maximum enjoyment rather than to extract maximum knowledge. I had the same problem with HPMOR, reading through it like any other novel, whereas many people I know who got more from HPMOR read it carefully, perhaps stopping after every chapter to think about the goals and motivations of each character and predict what happens next.
    2. It's really long (>>100 hours of reading time just for the existing material in the planecrash sequence) and most of the rationality lessons are contained in a small proportion of the words.
    3. It's in a weird format; there's no paper book or e-book version.
    4. Many of the stories have so much gratuitous sex (and often bad kink practices, torture, etc.) that they're inappropriate for children and offputting to some adults. (I started reading HPMOR at 14 and would not recommend most 14yo read glowfic.)

I expect that if good work is produced here, it's mostly by people who personally derived some important lesson from glowfic, and were thinking of writing it up already, whether or not it's on the idea list below. One such person could potentially be counterfactual for getting a lot more discussion of, and context for, Eliezer's current thoughts into the community, which I would see as a big win.

Q&A

What is glowfic and how do I read it?

There's a LW post explaining the format, and also a community guide written by members of the glowfic community. Eliezer also announced the planecrash sequence in particular and linked to a website containing just planecrash.

Surely glowfic doesn't actually contain useful information?

I'm pretty uncertain about the value of glowfic. I would update down if several people tried creating posts and none of them were good. But right now I think it's underexplored. Some evidence on the value of glowfic:

  • + HPMOR spawned discussion of core rationalist virtues, like heroic responsibility
  • ? HPMOR didn't have good sequences extracted from it (though maybe that's because most of the rationality material was already in the original Sequences)
  • ? reaction to this idea from glowfic fans I know has been mixed: some are pretty enthusiastic, while some think glowfic doesn't contain much practical rationality content
  • ? people have not written about many glowfic-derived insights yet (though maybe this is for no reason, which would make this project neglected)
  • - This post was less well received than I expected (though maybe that's due to concern about generalizing from fictional evidence, which wouldn't be a problem with all glowfic-derived sequences)

How should I start writing?

I don't necessarily recommend reading rationalist glowfic just to gain shards of Eliezer's thinking and write them up, if you don't find it fun in itself. (If you want to do this anyway, reading the first 2/3 of Mad Investor Chaos is a place to start.) But if you're already a glowfic fan, here's a list of topics from glowfic that could be turned into posts. (Thanks to Keller Scholl for some of these.) A large class of these is "dath ilani virtue": positive traits displayed by the civilization in Eliezer's utopia, or its citizens when placed in other worlds.

  • An introduction to rationalist glowfic: where glowfic lives, how to read it.
  • "Lawfulness" and its facets: Bayes, expected utility, the ability to coordinate and trade, etc.
  • How Keltham analyzes everything to try to understand it as an equilibrium between rational actors, whether this works in real life, and how to do it
  • The strengths and weaknesses of glowfic as an edification tool
  • "What would Otolmens say?"[2]
  • What civilizational competence looks like
  • A list of dath ilani virtues.
  • Decision theory. Some possible topics:
    • Someone who helps you should be rewarded, even if you were not in contact with them at the time
    • Rational actors don’t respond to threats
  • Applied rationality. Some possible topics:
    • Forming hypotheses is costly, because they distort future thinking in favor of themselves, and should be avoided as long as possible
    • Evidence accumulates: so long as you track hypotheses and evidence-shifts accurately, you will converge on the truth, and reality is full of information
    • How to "introspectively experience belief updates"

There are also points in glowfic where Eliezer gives a blog post as the narrator, or gives a blog post as a character giving a lecture; such content could be posted here with minor annotations/edits.

What not to write

If the goal is edification, I'm not particularly looking for the following artifacts (but I'd like to be proven wrong).

  • Plot summaries: I can't see anything in the plot of glowfic I've read so far that's more useful than the plot of any other fiction. (I also don't expect these to be very fun to read)
  • Book reviews: The reviews I've seen so far are amusing but don't really teach anything. Someone like Scott Alexander could write a book review that does teach things, but it doesn't seem substantially easier than writing other glowfic-related content. (edit: since writing this I'm more excited about book reviews than I was, although they do have to be done well)
  • Broad high-context discussions: HPMOR discussions were successful, but aren't what I'm looking for; ideally we make glowfic content accessible for people who don't want to read glowfic.

If Eliezer can't write nonfiction because of trolls and bad takes, won't turning glowfic into Sequences just make him stop writing glowfic?

No, I asked him.

Seems plausibly good, but this is a dumb plan. Are there better plans?

Maybe! Here are some alternate plans:

  • get Eliezer to write enlightening short fiction rather than glowfic
  • get Eliezer to write glowfic excerpts that can be posted on LW
  • create glowfic characters for top AI researchers, and have Eliezer critique their ideas by role-playing with them (mostly a joke)

Some plans sound much less dumb but maybe intractable:

  • cure Eliezer's chronic fatigue so he can actually attempt to save the world (or at least grant humanity a couple more bits of information-theoretic dignity)
    • There was a $100,000 bounty for this that went unclaimed. Also, 5 people worked pretty seriously on it part-time for 2 years before giving up.
  • have Eliezer do more consulting with AI alignment researchers instead
    • This is already happening. I have heard that consulting is much more tiring for Eliezer than writing glowfic, and that the glowfic is basically free: it's written in his free time and doesn't require nearly as much energy as consulting.
  1. ^

    Note that not all glowfic is rationalist fiction, and not all rationalist fiction is written as glowfic.

  2. ^

    In the planecrash series, Otolmens is the god of preventing existential risk.


17 comments

Is there any Eliezer glowfic besides "mad investor chaos and the woman of asmodeus"? That work is certainly gigantic enough, but because it's so gigantic I find myself unmotivated to read any more of it now that I've more or less got the framework of that world.

Also, is that work a collaboration between Eliezer and one or more others? While reading it, for some reason I took Eliezer to be writing Keltham's part and someone else GM-ing all the other characters, but I'm not sure I have any reason to think that.


Glowfic is generally written by multiple people. When you look at a post, you'll see on the left the character picture for that post (giving some mood info), the character's name, the character's short phrase-bio, and then below that the author's username.

Most of planecrash is written by Iarwain and lintamande, but the most recent thread has five authors (as more characters have joined the research project).

Each "post" on the glowfic also lists the author which I imagine is linked to the account that the post originated from. There are two main authors. They don't always strickly stick to writing particular characters. There is definetely parts where there is a clear agent-environment structure to the proceedings. "Keltham tries to open the door. Does it open? Yes, it does.". The structure does tickle my game literacy. While the end text is frozen in stone that its based on interaction counterfactuals become way more relevant (the participants would be prepared to tell the story even if there were slight twists).

Other roleplaying shows also have that structure: it's mostly quiet, and then exciting things happen in spikes, sporadically. A once-a-week 3-4 hour episode tends to get made into a 15-minute clip compilation, with 1-2 minute tidbits of tasty character expression (the bits that everybody remembers from watching the episode).

Re: no e-book version: here's a script for downloading glowfic posts and continuities into epub format: https://github.com/rocurley/glowfic-dl

Perhaps this is a stupid suggestion, but if trolls in the comments annoy him, can he post somewhere where no comments are allowed? You can turn off comments on wordpress, for example.

One fun thing about the stories is that they are nuanced and express positions as beliefs of the characters, and because there is such a variety, the authors can't personally be backing everything. For the same reason, it's hard to argue about what the correct takeaway is. Making everything super complicated keeps things interesting and is mentally stimulating, but doesn't provide the most clarity. I am pretty sure that "people should regard Evil as a supreme virtue" is not a correct takeaway, but there is something to the direction of "don't be Stupid Good".

Although the explicit lessons about cognition are very condensed, the context of them being practised immediately before or after is the kind of thing I suspect is pretty central, and harder to make shorter.

It did occur to me that I would totally read through "virtues and their layers" and a Tolkien-style specification of Baseline.

Well, I tried reading mad investor chaos, and even though I loved HPMOR, I couldn't make it through the first thread page of that story. It just feels extremely pedantic, though that's not exactly the right word. The density of terminology makes it all unpleasant; even though I understand what every term means, it just feels like a horribly stilted form of human communication. This might be appropriate in-universe, but it doesn't make it any less annoying to read.

Rational actors don’t respond to threats

I'm currently reading planecrash, and just today read a scene that could plausibly have prompted this bullet point:  Keltham is confused about teachers punishing students, and makes an argument about how if someone threatens to break your arm unless you give them your shoes, you should fight back, even though having your arm broken is worse than losing your shoes.

But my interpretation of this scene was "Keltham has lived all his life in dath ilan, where Very Serious people have done a lot of work specifically to engineer a societal equilibrium where this would be true, and has utterly failed to grasp how the game theory changes for the circumstances in this new world (partly because culture gap, partly because lies)."  I don't think it's actually true in general that it's irrational to respond to threats (though judging when it's rational is more complicated than just deciding whether a broken arm is worse than losing your shoes).

(The glowfic characters don't have cause to directly address this point, because "teachers punishing students" isn't actually about threats at all; it's reinforcement, which is a different thing, and they are arguably still doing it wrong but for totally different reasons, so Keltham's parable about shoes turns out to be irrelevant.)

I...guess I could probably turn my interpretation of the scene into a post, if that has noticeable expected value?  Which it probably does if this scene is commonly being interpreted as "Keltham correctly argues that it is never rational to cave to a threat", but I'm not actually sure if this is the scene you had in mind or if your interpretation of it is common.

If somebody has time to pour into this I'd suggest recording an audio version of Mad Investor Chaos.

HPMOR reached a lot more people thanks to Eneasz Brodski's podcast recordings. That effect could be much more pronounced here if the weird glowfic format is putting people off.

I'd certainly be more likely to get through it if I could play it in the background whilst doing chores, commuting or falling asleep at night.

That's how I first listened to HPMOR, and then once I'd realised how good it was I went back and reread it slowly, taking notes, making an effort to internalize the lessons.

Hmm, funny, I usually listen to audiobooks, but this was not the case with HPMOR, I realized "how good it is" literally from the first chapter, which is extremely rare with books.

I would be glad if stories from there were straight up crossposted to here (and perhaps formatted/edited a bit), because several times already I went to the site to read something when I saw a recommendation, and just couldn't navigate there and understand what I'm supposed to read.

"Create glowfic characters for top AI researchers, and have Eliezer critique their ideas by role-playing with them (mostly a joke)" It looks interesting

cure Eliezer's chronic fatigue so he can actually attempt to save the world (or at least grant humanity a couple more bits of information-theoretic dignity)

Possibly relevant: I know someone who had chronic fatigue syndrome which largely disappeared after she had her first child. I could possibly put her in contact with Eliezer or someone working on the problem.

Was the "glowfic excerpts" link supposed to be Self Integrity and the Drowning Child?

With so much apparently available energy/effort for eliezer-centered-improvement initiatives (like the $100,000 bounty mentioned in this post), I'd like to propose that we seriously consider cloning Eliezer. 

From a layman/outsider perspective, it seems the hardest thing would be keeping it a secret so as to avoid controversy and legal trouble, since from a technical perspective it seems possible and relatively cheap. EA folks seem well connected and capable of such coordination, even under the burden of secrecy and keeping as few people "in the know" as possible. 

Partially related: (in the category of comparatively off-the-wall - but nonviolent - AI alignment strategies): at some point there was a suggestion that MIRI pay $10mil (or some such figure) to Terence Tao (or some such prodigy) to help with alignment work. Eliezer replied thus

We'd absolutely pay him if he showed up and said he wanted to work on the problem.  Every time I've asked about trying anything like this, all the advisors claim that you cannot pay people at the Terry Tao level to work on problems that don't interest them.  We have already extensively verified that it doesn't particularly work for eg university professors.

I'd love to see more visibility into proposed strategies like these (i.e. strategies surrounding/above the object-level strategy of "everyone who can do alignment research puts their head down and works", and the related: "everyone else make money in their comparative specialization/advantage and donate to MIRI/FHI/etc"). Even visibility into why various strategies were shot down would be useful, and a potential catalyst for farming further ideas from the community. (even if - for game theoretic reasons - one may never be able to confirm that an idea has been tried, as in my cloning suggestion)

Meta level: Why on earth would you say "Here is my secret idea, internet"? That doesn't make any sense to me
