Narrowly, deconfusion is a specific branch of AI alignment research, discussed in MIRI's 2018 research update. More broadly, the term applies to any domain. Quoting from the research update:
Asymmetric Weapons are weapons that are inherently more powerful in the hands of one side of a conflict than the other, unlike a symmetric weapon, which can be used equally effectively by both sides. The term was originally introduced by Scott Alexander in his essay Guided By The Beauty Of Our Weapons, where he argued that truth, facts, and logic are asymmetric weapons for "the good guys":
Futarchy is a proposed system of government in which decisions are made on the basis of betting markets. It was originally proposed by Robin Hanson, who summarized it with the motto "Vote Values, But Bet Beliefs".
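To make the mechanism concrete, here is a minimal, hypothetical sketch of futarchy's decision rule in Python. The prices and the futarchy_decision helper are invented for illustration, not part of any real market system:

```python
# A minimal sketch of the futarchy decision rule: run two conditional
# prediction markets on an agreed welfare measure, and adopt the policy
# iff the market expects higher welfare conditional on adoption.
# All prices below are hypothetical placeholders.

def futarchy_decision(welfare_if_adopted: float,
                      welfare_if_rejected: float) -> str:
    """Each argument is the market price (expected welfare) conditional
    on that decision; trades on the branch not taken are called off."""
    return "adopt" if welfare_if_adopted > welfare_if_rejected else "reject"

print(futarchy_decision(0.64, 0.58))  # adopt: markets expect more welfare
print(futarchy_decision(0.51, 0.55))  # reject
```

The motto maps onto this split: the electorate votes to define the welfare measure, while traders bet on which policies would maximize it.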
Functional Decision Theory is a decision theory developed by Eliezer Yudkowsky and Nate Soares which says that agents should treat their decisions as the output of a fixed mathematical function that answers the question, "Which output of this very function would yield the best outcome?". It is a successor to Timeless Decision Theory, and it outperforms other decision theories such as Causal Decision Theory (CDT) and Evidential Decision Theory (EDT). For example, it does better than CDT on Newcomb's Problem, better than EDT on the smoking lesion problem, and better than both in Parfit's hitchhiker problem.
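As a hedged illustration of the Newcomb's Problem comparison, here is a small expected-value calculation in Python; the 99% predictor accuracy is a conventional illustrative figure rather than anything specified above:

```python
# A worked example of Newcomb's Problem, illustrating why FDT
# one-boxes while CDT two-boxes. Payoffs are the standard ones;
# the predictor accuracy is an illustrative assumption.

accuracy = 0.99        # probability the predictor guesses your choice
big, small = 1_000_000, 1_000

# Expected payoff if your decision procedure outputs "one-box":
# the predictor almost surely foresaw this and filled the opaque box.
ev_one_box = accuracy * big + (1 - accuracy) * 0

# Expected payoff if it outputs "two-box":
# the predictor almost surely foresaw this and left the opaque box empty.
ev_two_box = accuracy * small + (1 - accuracy) * (big + small)

print(f"one-box: ${ev_one_box:,.0f}")  # $990,000
print(f"two-box: ${ev_two_box:,.0f}")  # $11,000

# FDT asks "which output of my decision function yields the best
# outcome?" and, since the predictor runs (a model of) that same
# function, chooses one-boxing. CDT, treating the box contents as
# causally fixed, two-boxes and predictably walks away with less.
```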
A Consensus is a general or full agreement between members of a group. A consensus can be useful in deciding what's true (e.g., a scientific consensus), or as a criterion in decision making. A False Consensus can occur when someone thinks a position enjoys consensus when it doesn't; one can also claim a consensus falsely to advance a position and make it difficult for others to oppose it. A False Controversy can occur when one mistakenly believes something is not settled by consensus when in fact it is. Claiming false controversies is a common way of creating uncertainty and doubt.
"Selection vs Control" is an attempt to further clarify the notion of "optimization process" which has become common on LessWrong, by splitting it into several analogous-but-distinct concepts.
Due to scope neglect, framing effects, and other cognitive biases, the result of an expected utility calculation may be intuitively unappealing, perhaps even horrifying. And yet, intuition is not the most reliable guide to which policies will actually produce the best results, particularly in cases where we can actually do calculations with the relevant quantities. The ability to shut up and multiply, to trust the math even when it feels wrong, is a key rationalist skill.
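A minimal sketch of what trusting the math can look like, using the classic scope-neglect example of saving birds; the probabilities and utilities are invented for illustration:

```python
# A minimal sketch of "shut up and multiply": make the expected-value
# arithmetic explicit instead of trusting scope-insensitive intuition.
# The numbers below are hypothetical, purely for illustration.

def expected_value(outcomes):
    """Sum of probability * utility over mutually exclusive outcomes."""
    return sum(p * u for p, u in outcomes)

# Scope neglect: intuition prices "save 2,000 birds" and "save 200,000
# birds" almost identically; multiplying exposes the 100x difference.
certain = [(1.0, 2_000)]               # guaranteed: 2,000 birds saved
gamble  = [(0.1, 200_000), (0.9, 0)]   # 10% chance: 200,000 birds saved

print(expected_value(certain))  # 2000.0
print(expected_value(gamble))   # 20000.0 -- ten times better, even
                                # though a 90% chance of failure feels bad
```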
The specific application of Shut Up and Multiply to the Torture versus Dust Specs case has proven quite contentious.
Cognitive science draws upon a variety of different disciplines to try to describe and explain the way humans think. It heavily involves neuroscience, psychology, and philosophy. It differs from neuroscience in that it focuses less on relating structure to function, and more on using many approaches to form higher-level models that predict behaviour.
Knowing and understanding the possible failure modes of what you are attempting to do is important in order to avoid them.
Security Mindset and Ordinary Paranoia discusses the difference between finding and fixing failure modes by trying your best to imagine all the ways your system could fail ("ordinary paranoia") vs having a tight argument that your system does not fail (under a small number of assumptions which are each individually quite probable).
Epistemic Modesty is the idea that we should, in general, be less sure about what we know than intuition suggests. It is closely related to epistemology.
(And, just to prevent life from becoming too easy, make sure not to become underconfident in the process of avoiding overconfidence.)
Modest Epistemology is the claim that average opinions are more accurate than individual opinions, and that individuals should take advantage of this by moving toward average opinions, even in cases where they have strong arguments for their own views and against more typical views. (Another name for this concept is "the wisdom of crowds" -- that name is much more popular outside of LessWrong.) In terms of inside view vs outside view, we can describe modest epistemology as the belief that inside views are quite fallible and outside views much more robust; therefore, we should weigh outside-view considerations much more heavily.
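As a hedged illustration of the wisdom-of-crowds claim, here is a small simulation (all numbers invented) showing that the average of many independent, unbiased estimates tends to be closer to the truth than a typical individual estimate:

```python
# A minimal illustration of the "wisdom of crowds" effect behind
# modest epistemology: averaging many noisy estimates tends to be
# more accurate than a randomly chosen individual estimate.
import random

random.seed(0)
true_value = 100.0
estimates = [true_value + random.gauss(0, 20) for _ in range(1000)]

individual_errors = [abs(e - true_value) for e in estimates]
crowd_error = abs(sum(estimates) / len(estimates) - true_value)

print(f"mean individual error: {sum(individual_errors)/len(individual_errors):.2f}")
print(f"error of the average:  {crowd_error:.2f}")

# With independent, unbiased errors, the average's error shrinks
# roughly as 1/sqrt(n). Correlated or biased errors break this,
# which is one standard argument against strong modesty.
```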
In LessWrong parlance, "modesty" and "humility" should not be confused. While Eliezer lists "humility" as a virtue, he provides many arguments against modesty (most extensively in the book Inadequate Equilibria, but also in many earlier sources). Humility is the general idea that you should expect to be fallible. Modest Epistemology is specifically the view that, due to your own fallibility, you should rely heavily on the outside view, trusting average opinions over your own even when you have strong arguments for your views and against more typical views.
Historically, Robin Hanson has argued in favor of epistemic modesty and the outside view, while Eliezer has argued against epistemic modesty and for strong inside views. For example, this disagreement played a role in The Foom Debate. Eliezer and Hanson both agree that Aumann's Agreement Theorem implies that rational agents should converge to agreement; however, they have very different opinions about whether and how this breaks down in the absence of perfect rationality. Eliezer sees little reason to move one's opinion toward that of an irrational person. Hanson thinks irrational agents still benefit from moving their opinions toward each other. One of Hanson's arguments involves pre-priors...
Radical Probabilism is a newer form of Bayesianism invented by Richard Jeffrey. Its primary point of departure from other forms of Bayesianism is its rejection of the strong connection between conditional probability and updating: an agent's updates need not proceed via Bayes' Rule.
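To make the departure concrete, here is a minimal sketch of Jeffrey conditioning, the kind of non-Bayes update Radical Probabilism permits; the joint distribution and the new evidence weight are invented for illustration:

```python
# A minimal sketch of Jeffrey conditioning (probability kinematics).
# All numbers here are hypothetical, for illustration only.

# Prior joint distribution over hypothesis H in {h, not_h} and
# evidence partition E in {e, not_e}.
prior = {
    ("h", "e"): 0.32, ("h", "not_e"): 0.08,
    ("not_h", "e"): 0.08, ("not_h", "not_e"): 0.52,
}

def marginal_E(p, e):
    return p[("h", e)] + p[("not_h", e)]

def cond_H_given_E(p, h, e):
    return p[(h, e)] / marginal_E(p, e)

# Strict Bayesian conditioning would set P(e) = 1. Jeffrey conditioning
# instead shifts P(e) to some q < 1 -- an uncertain experience, such as
# a glimpse in dim light -- while keeping P(H | E) fixed.
q = 0.7  # new credence in e (hypothetical)

new_h = (cond_H_given_E(prior, "h", "e") * q
         + cond_H_given_E(prior, "h", "not_e") * (1 - q))
print(f"P(h) before: {prior[('h','e')] + prior[('h','not_e')]:.3f}")  # 0.400
print(f"P(h) after Jeffrey update: {new_h:.3f}")                      # 0.600
```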
Values handshakes are a proposed form of trade between superintelligences. From The Hour I First Believed by Scott Alexander:
Suppose that humans make an AI which wants to convert the universe into paperclips. And suppose that aliens in the Andromeda Galaxy make an AI which wants to convert the universe into thumbtacks.
When they meet in the middle, they might be tempted to fight for the fate of the galaxy. But this has many disadvantages. First, there’s the usual risk of losing and being wiped out completely. Second, there’s the usual deadweight loss of war, devoting resources to military buildup instead of paperclip production or whatever. Third, there’s the risk of a Pyrrhic victory that leaves you weakened and easy prey for some third party. Fourth, nobody knows what kind of scorched-earth strategy a losing superintelligence might be able to use to thwart its conqueror, but it could potentially be really bad – eg initiating vacuum collapse and destroying the universe. Also, since both parties would have superintelligent prediction abilities, they might both know who would win the war and how before actually fighting. This would make the fighting redundant and kind of stupid.
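As a hedged illustration of the argument in the quoted passage, here is a toy expected-value comparison (all numbers invented) of fighting versus one simple version of the handshake, in which the two AIs split resources in proportion to their probabilities of winning:

```python
# A toy sketch of why a values handshake can beat war.
# All numbers are invented for illustration; the mechanism follows
# the quoted argument: merging avoids deadweight loss and variance.

p_win = 0.6          # paperclipper's probability of winning a war
resources = 1.0      # total resources in the contested region
war_cost = 0.3       # fraction of resources destroyed by fighting

# Expected resources devoted to paperclips if the AIs fight:
ev_war = p_win * (resources - war_cost)

# Values handshake: both agree to split the resources in proportion
# to their win probabilities, wasting nothing on conflict.
ev_handshake = p_win * resources

print(f"expected paperclip resources, war:       {ev_war:.2f}")        # 0.42
print(f"expected paperclip resources, handshake: {ev_handshake:.2f}")  # 0.60
# The thumbtack AI gains symmetrically (0.28 vs 0.40), so the
# handshake Pareto-dominates fighting for both sides.
```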