There is no product here; I am just putting down some ideas about how such a thing might work.

You may have noticed all the annoying ads bouncing around for Grammarly, a browser-based spelling/grammar/syntax checker. I wondered how easily we could detect indications of known-bad thinking with the same kind of analysis. This website runs entirely on written communication, and the community is a solid proponent of note-taking; what if there were a tool that could catch obvious signs of bad or sloppy thinking as you wrote the thought down?

It does seem difficult to point to any general grammar pattern that indicates a bad argument. However, it would be pretty straightforward to collect a large pile of bad-argument examples and then try pattern matching against those. It should also be simple enough to flag claims that should require evidence or a citation; for example, if the tool detects numbers, it could search the rest of the text for something resembling a source or citation.
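To make the citation-flagging idea concrete, here is a minimal sketch in Python using only the standard library. The regexes and the notion of what counts as a "citation" are rough guesses on my part, not a tested ruleset:

```python
import re

# Illustrative guesses at what "a number" and "a citation" look like;
# a real ruleset would need much more care.
NUMBER_PATTERN = re.compile(r"\b\d+(?:[.,]\d+)*\s*%?")
CITATION_PATTERN = re.compile(
    r"(https?://\S+|\[\d+\]|\(\w+,?\s*\d{4}\)|according to|source:)",
    re.IGNORECASE,
)

def flag_unsupported_numbers(text: str) -> list[str]:
    """Return sentences that contain numbers when the text as a whole
    has nothing that looks like a source or citation."""
    if CITATION_PATTERN.search(text):
        return []  # something citation-like exists somewhere in the text
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if NUMBER_PATTERN.search(s)]

if __name__ == "__main__":
    note = "Roughly 80% of startups fail within five years. That seems high."
    for sentence in flag_unsupported_numbers(note):
        print("Needs a source?", sentence)
```

A real tool would want smarter sentence splitting and a much richer idea of what counts as a source, but the shape of the check is roughly this; the pattern-matching-against-bad-arguments idea would sit on top of a corpus rather than a handful of regexes.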

Speaking of which, is there some kind of bibliography or citation software we could mimic or hijack for this purpose?

While I mostly envision this as a tool to help maintain clear thinking by catching errors in my notes, the thought of applying it to a blog post or editorial from the browser tickles me immensely. Click the button and the page turns red; go ahead and browse elsewhere.


You may have noticed all the annoying ads bouncing around for Grammarly, a browser-based spelling/grammar/syntax checker. I wondered how easily we could detect indications of known-bad thinking with the same kind of analysis.

Before we go ahead and attempt to build something like Grammarly, but for logical reasoning, shouldn't we verify that Grammarly actually improves one's writing? I haven't used Grammarly itself, but I did use a similar tool, called Hemingway, and I wasn't terribly impressed with it.

I don't see how a grammar checker like Grammarly is similar to a tool that's about improving style like Hemingway.

Frequently, there's one correct answer for questions of grammar but there isn't for questions of style.

Just be careful, because it's better to make an abacus than to make a calculator. What do I mean by that? Well, an abacus helps you get the answer to your question, but it also teaches you how to get that answer: take the abacus away and you can still do math, because now you know how. This is not so with a calculator.

So, make something that teaches proper thinking rather than something that simply corrects it.

Imagine it this way: you've made your Thinkerly, and then an evil overlord hacks into it so that 'good thinking' is now defined as being a non-critical, manipulable lump of jelly. Do the people who have previously used Thinkerly start questioning the program and use what it has previously taught them to be skeptical of it? Or do they not notice at all, because they just know to click the button to make everything better, or in this case smarter?

I should have been more specific, but I had not considered using such a thing to produce writing for other people's consumption. What I wanted from Grammarly was this:

1) The latest grammar analysis.

2) The instant feedback.

With this, I envisioned two probable uses:

A) Writing your own thoughts down as notes. Thinkerly catches possible errors. This improves stream-of-consciousness writing as a tool for training better thinking, because the feedback loop is much tighter than with the draft-revision format to which we are usually constrained.

B) Looking critically at something from somewhere else. This seems like it would be more useful on the margins, because it is very easy even for skilled thinkers to accidentally rely on a few suspect thoughts.

I can't see any way for it to drop into the writing workflow the same way spellcheckers do now, because I don't see how it could make good suggestions for replacements the way spellcheckers do. Even if there are signatures of poor thinking, that doesn't mean there is a corresponding correct thought the way there is with spelling.

I suspect an AI like GPT-3 is good enough now to identify bad arguments quite well, and maybe also things like cognitive biases.
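As a rough sketch of how that might look, with `call_llm` as a hypothetical stand-in for whatever GPT-3-style completion API you have access to (the prompt wording is my assumption, not a tested setup):

```python
# Sketch only: call_llm is a hypothetical stand-in for whichever
# GPT-3-style completion API you actually have access to.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to a real model")

PROMPT_TEMPLATE = """You are checking a passage for reasoning problems.
List any logical fallacies or cognitive biases you can identify,
one per line, each with a short quote from the passage as evidence.
If you find none, reply with "none found".

Passage:
{passage}
"""

def audit_reasoning(passage: str) -> str:
    """Ask the model to point out bad arguments and biases in a passage."""
    return call_llm(PROMPT_TEMPLATE.format(passage=passage))
```

The interesting work would be in the prompt and in checking the model's answers against the passage, not in the plumbing.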

This might be too simple, but to start, a program that recognizes and tags or highlights statements of fact, or branches of an argument, might be useful.
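A toy version of the tagging idea might look like the following Python sketch; the hedge words and "fact marker" patterns are placeholder heuristics, branch detection is left out entirely, and anything real would need actual NLP behind it:

```python
import re

# Crude placeholder heuristics; a real tagger would need actual NLP.
HEDGES = re.compile(
    r"\b(I think|maybe|probably|seems|might|could|in my opinion)\b",
    re.IGNORECASE,
)
FACT_MARKERS = re.compile(r"\b(is|are|was|were|has|have|will)\b|\d", re.IGNORECASE)

def tag_sentences(text: str) -> list[tuple[str, str]]:
    """Label each sentence as 'opinion', 'factual claim', or 'other'."""
    tagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        if HEDGES.search(sentence):
            tag = "opinion"
        elif FACT_MARKERS.search(sentence):
            tag = "factual claim"
        else:
            tag = "other"
        tagged.append((sentence, tag))
    return tagged

if __name__ == "__main__":
    for sentence, tag in tag_sentences(
        "The unemployment rate fell by 2% last year. I think that's misleading."
    ):
        print(f"[{tag}] {sentence}")
```

Even something this crude could drive highlighting in an editor, with the "factual claim" sentences being the ones to check for sources.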