According to Tim Berners-Lee, explaining his ideas about the World Wide Web was at times quite challenging:

«So going back to 1989, I wrote a memo suggesting the global hypertext system. Nobody really did anything with it.»

«Once you've had an idea like that it kind of gets under your skin. And even if people don't read your memo - well, actually he [my boss] did. It [his copy] was found after he died. He had written "Vague, but exciting" in pencil in the corner.»

«In general it was difficult. It was really difficult to explain what the web would be like. It's difficult to explain to people now that it was difficult then. But then - well, there was no web, so things like "click" didn't have the same meaning. I can show somebody a piece of hypertext, a page which has got links, and we click on the link and bing - there'll be another hypertext page. Not impressive. You know, we've seen that - we've got things on hypertext on CD-ROMs.»

«Probably just bells and whistles.»

In terms of impact, it's unusual (but not unheard of) for ideas to rank more highly than the World Wide Web.

But, I suspect, it's not so unusual for ideas to be similarly difficult to grok (and sometimes much harder!).

And although it's not a perfect analogy, I think there is some relevance here to AI alignment, and ideas people propose.

We can't listen to everyone at length. There aren't enough hours in the day. So we need to form heuristics such as:

If people are unable to explain their ideas convincingly and succinctly, then this is often because they are internally confused. If they can't succinctly convey what the "core" of their idea is, we can disregard them.

This is a useful heuristic. But the more strongly we rely on it, the more we are at risk of false negatives.

2 comments

I think a core part of this is understanding that there are trade-offs between "sensitivity" and "specificity", and that different search spaces vary greatly in which trade-off is appropriate.

I distinguish two different reading modes: sometimes I read to judge whether it's safe to defer to the author about things I can't verify; other times I'm just fishing for patterns that are useful to my work.

The former mode is necessary when I read about medicine. I can't tell the difference between a brilliant insight and a lethal mistake, so it really matters to me to figure out whether the author is competent.

The latter mode is more appropriate when I'm trying to get a gears-level understanding of something, and the upside of novel ideas is much greater than the downside of bad ones. Even if a bad idea gets through my filter, it will be very useful data when I later learn why it was wrong. The heuristic here should be "rule thinkers in, not out", or "sensitivity over specificity".

Unfortunately, our research environment is set up in such a way that people are punished more for making mistakes than they are rewarded for novel contributions. Readers often have the mindset of declaring an entire person useless based on the first mistake they find. This makes researchers risk-averse, and I end up seeing fewer useful patterns.

But consider: if you're reading something purely to enhance your own repertoire of useful gears, you shouldn't necessarily even be trying to find out what the author believes. If you notice yourself internally agreeing or disagreeing, you're already missing the point. What they believe is tangential to how the patterns behave in your own models; all that matters is finding patterns that work. Steelmanning should be the default - not because it helps you understand what others think, but because it's obviously what you'd do to improve your models.


I think that how seriously one should take a person's half-baked idea depends very strongly on how well one knows that person.

To paraphrase your heuristic:

"If a stranger is unable to explain their idea convincingly and succinctly, the idea is probably either bad or unready for widespread dissemination".

I agree strongly there: no one can, nor should, listen to all ideas from strangers.

However, if a friend or colleague who has had good ideas before is unable to explain their idea convincingly and succinctly, I'm likely to invest more time in trying to understand the idea. By doing so, I'm also likely to help them find out what works and what doesn't when it comes to communicating about it.

I expect that, generally, trying an explanation on other people will not only improve the quality of the explanation but also stress-test the underlying concept against those listeners' questions and imaginations. So the convincingness and succinctness of an explanation is often (though not always) a proxy for whether it is ready for wider sharing.