Every society is wrong about lots of things. Ours is no different. It's difficult to notice when your society is wrong because we learn most things by copying the people around us. By definition, you can't identify the things the people around you are wrong about by copying those same people. For this reason, the rules for discussing antimemes differ from the usual rules of rational philosophy.
Rational dialogue is based on mutually agreed-upon facts. The disputed territory is what we can conclude from these facts. But antimemes are definitionally those things that most people are wrong about. So two people are not likely to agree upon the facts. There are several ways around this challenge, all of them bad.
- You can state antimemes as you see them. This is bad because two random people will usually disagree about most of the facts. This provokes counterarguments instead of refutations. You end up debating individual antimemes instead of discussing antimemetics.
- You can eschew facts altogether. This is bad because you can't climb the ladder of abstraction down to the bottom rung. A variant of this approach is generalizing from fictional evidence, which is just a fancy way to cover up the eschewal of facts.
- You can use dead antimemes from other societies and time periods. This is bad because dead antimemes are no longer antimemetic and because it's hard to distinguish symbiotic wars from dead antimemes.
Convincing examples of antimemes are hard to come by even though antimemes are not themselves rare. It's simply unlikely that any two random people will recognize exactly the same antimemes. Paul Graham and I probably agree on the antimemetic properties of Lisp, but the majority of programmers do not recognize this antimeme for what it is.
When you're talking to an individual person, you can restrict the antimeme examples to the small intersection you both agree upon. This is impossible when writing publicly. There's no single antimeme every reader will agree with you about. If you identify several antimemes, then it's near-certain that every reader of your article will disagree with at least one of your examples.
I talk a lot about antimemes, so here's a new rule for comments on my articles.
If someone gives multiple examples to support an argument and you agree with any of the examples, then just ignore the examples you don't like. If you disagree with what remains, refute the central thesis instead.
I think it is possible to make good use of past antimemes in a way that draws attention to their antimemetic character.
I wonder which antimemes are an integral part of the LW egregore.
The Straussian reading of 'people who engage in philosophy'
Memeplexes that engage in blackmail by causing their adherents to suffer if you try to prevent the conditions that let the meme indoctrinate more people. If that's too abstract, imagine a cancer that causes pain if you don't eat sugar.
Suffering itself is an antimeme, and one that EA has (mostly, so far) failed to avoid the substitution effect on.
I didn't know what "Straussian" meant, so I looked it up.
But I can't tell exactly what 'people who engage in philosophy' means and why it's in quotes. It sounds like the title of an essay, but a web search doesn't find anything.
Do you feel comfortable giving an example of such a memeplex?
Suffering is indeed an antimeme—and a broad-ranging one too. This is a new addition to my collection. Thanks.
I didn't know what the substitution effect is either, so here's a definition.
Whoops, different substitution effect. I meant the tendency to replace a difficult question with an easier one without noticing that we did so. Related to bikeshedding and aether variables.
The correct name is attribute substitution.
"philosophers claim they are seeking truth but they are really seeking peace"
I'm not sure that rule works when discussing patterns and the relative applicability of models (which are integral to many discussions about rationality). How many of the given examples hold, and how fully they support the theory, is usually in scope for the debate.
If the examples are idiosyncratic to some audiences, or controversial in applicability, then perhaps the theory is also useful only for those subsets.
You can come up with a theory, grounded on principles that seem reasonable, rather than focusing on gathering evidence. In the end, the theory has to explain not "the facts" but "the observations".
That's a good idea. I hadn't thought about it like that.
Then you might appreciate (at least the first part of) this article: https://americanmind.org/essays/the-clear-pill-part-2-of-5-a-theory-of-pervasive-error/
I appreciate this article very much. I read the whole thing and was disappointed when I realized Curtis Yarvin hadn't finished the series yet. It already has many great insights and illuminating points. I'll be digesting the implications for a while.
Diversity of approaches is important in this game. My favorite thing about it is how Yarvin attacks a closely related problem from a different perspective. In particular:
I agree with almost everything he says. I disagree with his claim that it is "always" better to debug forward. Debugging forward is better when you have a small dataset, as is the case with the historic sweep of broad political ideologies (the subject of Yarvin's writing). I think when you're dealing with smaller problems, like niche technical decisions, there's a greater diversity of data and therefore a greater opportunity to figure things out inductively.
I'd say the difference is about something different (trust/belief/etc.). Maybe it's hard to find people who, upon seeing a proof whose conclusion they disagree with, actually examine it for the flaw. (Or people who want proofs.) Experimentation may enable finding ways to improve; working through everything logically may enable finding the optimal/closed-form solution; Fermi estimates enable finding the order of magnitude of an effect (though this isn't distinct from experimentation).
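To make the Fermi-estimate point concrete, here is a minimal sketch in Python of the classic piano-tuners toy problem. Every number below is a hypothetical round figure; the point is only that multiplying rough factors still pins down the order of magnitude of the answer.

```python
# Toy Fermi estimate: how many piano tuners work in a large city?
# All inputs are hypothetical round guesses, not data.

population = 3_000_000            # people in the city (guess)
people_per_household = 2          # (guess)
piano_ownership = 1 / 20          # fraction of households with a piano (guess)
tunings_per_piano_per_year = 1    # (guess)
tunings_per_tuner_per_year = 4 * 5 * 50  # 4/day, 5 days/week, 50 weeks (guess)

pianos = population / people_per_household * piano_ownership
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year

print(f"~{tuners:.0f} piano tuners")  # ~75, i.e. on the order of 10^2
```

Each individual guess could easily be off by a factor of two or three, but the errors tend to partially cancel, which is why the final figure is trustworthy to roughly an order of magnitude.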