1.

An essay is made up of two layers: the ideas, and the language in which they are expressed. The story of Red Riding Hood, and the words of a particular retelling of that story. The idea of a fully programmable blockchain, and the language that Vitalik Buterin used to carry that idea in the Ethereum White Paper.

Thinking and communicating are two separate processes, even if they often happen at the same time. And both make great demands on your mental faculties: often I do not have enough capacity to cope with writing and thinking at the same time; the thoughts suffer when I wrestle with the language, and vice versa. If I am exploring a complex set of ideas, it is, therefore, often necessary to give structure to the thoughts first – in the form of notes and diagrams – before turning them into an essay.

I formulate the ideas. And then I communicate them. 

2.

GPT-3, announced in May 2020 by OpenAI, is a neural network that models natural language and performs natural-language tasks. It can, with the right prompt, produce sentences that stand up well when compared to sentences formulated by humans. When it comes to ideas, though, it is less convincing – it lacks, at the current scale, the ability to develop longer chains of ideas. Here, for example, is its output when Gwern prompted it to write Harry Potter in the voice of Hemingway:

It was a cold day on Privet Drive. A child cried. Harry felt nothing. He was dryer than dust. He had been silent too long. He had not felt love. He had scarcely felt hate. Yet the Dementor’s Kiss killed nothing. Death didn’t leave him less dead than he had been a second before. It wasn’t about living or dying really. It wasn’t about death. It was about the Dark Lord, his creator, his engenderer. He was not resurrected. He was created again. He was whole again in the courtyard of Malfoy Manor.

Line by line, it is more alive than many of the poets I had to read when I was asked to edit a poetry anthology a few years back. But as a whole – as a narrative – the piece fundamentally collapses.

In other words, GPT-3 performs well only when it comes to one level of writing: formulating sentences. It can convincingly expand a prompt with new sentences – sentences more pleasant to read than most human-generated prose. But GPT-3 does not hold up when it comes to the other level of writing: it cannot convincingly generate and structure ideas.

This is, as of yet, the realm of humans.

Is there an opportunity for complementarity here? Can we use GPT-3 (or its coming descendants) to relieve people of the burden of communicating their ideas, so that they can invest more energy in producing them?

This would greatly reduce the cost of communicating ideas. And a lowered cost has the potential to unleash large amounts of knowledge that are now locked in minds that cannot communicate it, or that are too occupied doing more important things to take the time. (It will also, naturally, unleash an endless flood of misinformation.)

3.

What I am doing right now, writing this essay, is, technically, a linear walk through the network of my ideas. That is what writing is: turning a net into a line. But it is also very concretely what I do, since I have externalized my ideas in a note-taking system where the thoughts are linked with hyperlinks. My notes are a knowledge graph, a net of notes. When I sit down to write, I simply choose a thought that strikes me as interesting and use that as my starting point. Then I click my way, linearly, from one note to the next until I have reached the logical endpoint of the thought-line I want to communicate. Along the way, I paste the thought paths I want to use in an outline. (I have also written everything in nested bullet points – so I can dial the level of detail up and down by folding the subpoints if I feel that they go into unnecessary details.)
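
The walk described above can be sketched in a few lines of code. The note structure and field names here are hypothetical – they are not the author's actual note-taking system, just a minimal model of "turning a net into a line":

```python
# A minimal sketch of the "net into a line" process: each note has text
# and outgoing hyperlinks; writing is a linear walk along those links.
# (Hypothetical data structure, for illustration only.)

notes = {
    "idea-layers": {
        "text": "An essay has two layers: ideas and language",
        "links": ["thinking-vs-communicating"],
    },
    "thinking-vs-communicating": {
        "text": "Thinking and communicating are separate processes",
        "links": ["structure-first"],
    },
    "structure-first": {
        "text": "Structure thoughts as notes before writing",
        "links": [],
    },
}

def walk(notes, start):
    """Follow links from a starting note until a note with no outgoing
    links is reached, collecting an outline along the way."""
    outline, current = [], start
    while current is not None:
        outline.append(notes[current]["text"])
        links = notes[current]["links"]
        current = links[0] if links else None  # follow the first link
    return outline

print(walk(notes, "idea-layers"))
```

The output is the outline: every idea on the chosen path through the graph, in the order a reader will encounter it.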

When this walk through the net is done, I have an outline. All the ideas are there, all the sources, all the arguments, and how they follow and support each other.

But everything is written in bullet points so dense that the average reader would have a stroke reading it. So I have to spend five to ten hours making the output from my knowledge graph human-readable. I happen to think there is something satisfying about this craft, much as Buddhist monks enjoy endlessly raking sand. But if your only goal is to communicate your ideas, this part of the process is simply a time sink.

These five to ten hours, when ideas are made human-readable, should be possible to outsource to GPT-3. What we need is a way to prompt it to convert this:

- always use nested bullet points when writing
    - it makes all the dependencies clear: which point is a subpoint of which, so that you do not risk separating them
        - if you accidentally separate subpoints from their parents, which is easily done when editing non-indented prose, you get a tangled hierarchy. for example, a supporting argument might accidentally get moved to the wrong context, because you've forgotten the relationship between your points.
    - it makes it easier to remember what level of your argument you're at.
        - without this visual aid, it is easy to get lost in subpoints, failing to return to the main line

Into this:

There are two main reasons why writing in nested bullet points is like a Bloody Mary compared to the normal Muggle prose people use.

Firstly, bullet points make all the dependencies clear – which point is a subpoint of which – so that you do not risk separating them. This is a common mistake people make when writing, and it turns everything into a mess. It's appalling. If you accidentally separate subpoints from their parents, you end up with what is known as a tangled hierarchy. A tangled hierarchy is a hierarchy where the levels are all mixed up. It is worse than a Dementor's Kiss.

Secondly, nested bullet points make it easier to remember what level of your argument you're on. You never have to end up like those fricken' Goblins who always get lost in minor points and legalese, forgetting the main point. You can be dead drunk and still get straight to the point.

This can probably be done by repeated prompting and reinforcement learning fine-tuning, as Gwern points out in the comments.

The AI assistant doesn't even have to be very good at it to be useful. If it could reach a level similar to Google Translate – which I use to translate the parts of my essays that I write in Swedish – it would reduce the time it takes me to write an essay by an order of magnitude. My essays, after being run through Google Translate, often require extensive editing to make sure I do not sound like a man from the Provinces. But extensive editing in this context means perhaps 20 minutes of striking things out and rewriting; whereas translating by hand… that is too time-consuming for me.

In the same way, an AI assistant that converts bullet point arguments into a half-hearted essay, or an average careless email, would drastically increase the number of ideas we can communicate.

Also, separating idea generation from writing in this way would have the advantage that people might become better at thinking. Most people could benefit from writing down their thoughts in nested bullet points, instead of in sprawling paragraphs, so they can graphically see the relationship between arguments and discover if they are stuck in a subpoint and have lost the main thread.

By creating specialization, where an AI assistant takes care of communication, we can focus on improving our ideas. I think that is a valuable complementarity that we should seek to develop, and it should be within reach with today's technology.

4.

But where it gets really interesting is when we get language models that can generate essays good enough to publish without edits.

This (which happens two weeks before the singularity) is when we get reader-generated essays.

A reader-generated essay is what you get when you can go into someone else's knowledge graph and make a linear journey through the network, while GPT-5 generates a just-in-time essay that is human-readable. It would be like going on a Wikipedia spree, except that the articles are written the moment you read them, based on facts encoded in a knowledge graph, and the user interface makes it look like you are reading a single, very long, and meandering essay.

Would this be useful?

Are you kidding me – a never-ending essay!

If you click on something that seems interesting, the essay meanders in that direction. If you feel the reading is becoming a bit of a slog, with too many irrelevant details, you zoom out with an Engelbart zoom, and get a summary of the content instead, at whatever level of abstraction suits you. What happens under the hood is that by zooming you change how many levels of subpoints in the knowledge graph you want to see. But the AI generates a new text for each zoom, so what you experience is rather that the text changes hallucinogenically before your eyes – or maybe rather meanders to a mountainside where you get a better view of the landscape. From there, you see something in the far distance that interests you, and you start zooming... into another note, and through that note into another, and yet another ... all the while generating an essay optimized by prompt engineering to fit your needs and learning profile. And in the voice of whatever long-dead author you prefer.
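
The mechanical half of that zoom – choosing how many levels of subpoints to expose before any text is generated – is simple to sketch. The tree format below is hypothetical, just an illustration of truncating nested bullets at a chosen depth:

```python
# A sketch of the zoom mechanism: render a tree of nested bullet
# points, omitting everything nested deeper than max_depth levels.
# (Hypothetical tree format: each node is a (text, children) pair.)

def zoom(node, max_depth, depth=0):
    """Return the bullet-point lines of `node`, truncated at max_depth."""
    if depth > max_depth:
        return []
    text, children = node
    lines = ["  " * depth + "- " + text]
    for child in children:
        lines.extend(zoom(child, max_depth, depth + 1))
    return lines

tree = ("main point", [
    ("supporting argument", [("detail", [])]),
    ("second argument", []),
])

print("\n".join(zoom(tree, 1)))  # zoomed out: the detail level is hidden
```

In the scenario above, the language model would then be prompted with only the surviving lines, so the generated text summarizes at exactly the level of abstraction the reader chose.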

You can also share the essay crafted by your trail. You simply generate a link that encapsulates the specific hike you made through the knowledge graph, and then whoever you send it to can see the connections you saw – or zoom in if they feel you missed some details, and get lost in an essay of their own.

If you have an exceptional ability to get lost on the Internet (I think you have, dear reader), you might have a career in the future creator economy: generating essays based on finding weird trajectories through other people's knowledge graphs. It is a conceivable career. It is also conceivable that the artificial intelligence grows tired of us at approximately this point and decides to discontinue humanity.

But until then, I would really appreciate it if GPT-3 could write my essays for me.

1 comment

I formulate the ideas. And then I communicate them. 

I see what you're saying. However, I also think that the act of writing often helps one to generate ideas, not just communicate ones that you already had. Paul Graham argues for this in The Age of the Essay and I agree with him.

Thinking and communicating are two separate processes, even if they often happen at the same time.

I think that in practice, content on LessWrong is basically all about communicating, not thinking out loud.

In theory, things like shortforms, open threads, personal blog posts, private messages and meetups all help with the thinking part, but I think that social norms aren't strong enough for it to pick up. Like, you could write a bunch of shortform posts where you're thinking out loud, but you don't see others do so, and thus you don't feel comfortable/compelled to do so yourself.

What I am doing right now, writing this essay, is, technically, a linear walk through the network of my ideas. That is what writing is: turning a net into a line. But it is also very concretely what I do, since I have externalized my ideas in a note-taking system where the thoughts are linked with hyperlinks. My notes are a knowledge graph, a net of notes. When I sit down to write, I simply choose a thought that strikes me as interesting and use that as my starting point. Then I click my way, linearly, from one note to the next until I have reached the logical endpoint of the thought-line I want to communicate.

Woah. That was really insightful.

This would greatly reduce the cost of communicating ideas. And a lowered cost has the potential to unleash large amounts of knowledge that are now locked in minds that cannot communicate it, or that are too occupied doing more important things to take the time. (It will also, naturally, unleash an endless flood of misinformation.)

I worry not only about misinformation, but also low quality content. There might be more high quality content, but if it's hard enough to find, the average quality of content that people are actually able to find might end up being lower. But maybe we'd also be able to improve content discovery enough to make this risk acceptable.

These five to ten hours, when ideas are made human-readable, should be possible to outsource to GPT-3.

I worry about this making us dumber. As Paul Graham notes, "Observation suggests that people are switching to using ChatGPT to write things for them with almost indecent haste. Most people hate to write as much as they hate math. Way more than admit it. Within a year the median piece of writing could be by AI. I warn you now, this is going to have unfortunate consequences, just as switching to living in suburbia and driving everywhere did. When you lose the ability to write, you also lose some of your ability to think."

If you click on something that seems interesting, the essay meanders in that direction. If you feel the reading is becoming a bit of a slog, with too many irrelevant details, you zoom out with an Engelbart zoom, and get a summary of the content instead, at whatever level of abstraction suits you.

Wow! What a powerful thought.