Wiki-Tags in Need of Work

Narrowly, deconfusion is a specific branch of AI alignment research, discussed in MIRI's 2018 research update. More broadly, the term applies to any domain.

A cognitive reduction is a form of reductive analysis in which, rather than reducing an idea to physical phenomena, we reduce it to the cognitive machinery that gives rise to it.

Asymmetric Weapons are weapons that are inherently more effective in the hands of one side of a conflict than the other, unlike a symmetric weapon, which can be used equally well by both sides. The term was originally introduced by Scott Alexander in his essay Guided By The Beauty Of Our Weapons, where he argued that truth, facts, and logic are asymmetric weapons for "the good guys".

Futarchy is a proposed system of government in which decisions are made based on betting markets. It was originally proposed by Robin Hanson, who summarized it with the motto "Vote Values, But Bet Beliefs".
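
As a toy sketch of the decision rule only (the policy names, prices, and welfare measure are invented for illustration, and the actual market machinery is omitted):

```python
# A toy futarchy decision rule: adopt whichever policy the conditional
# prediction markets forecast to yield the higher value of the voted-on
# welfare measure. Prices below are invented placeholder numbers.

market_prices = {
    # market estimate of a national welfare index, conditional on each choice
    "adopt": 104.2,
    "reject": 101.7,
}

decision = max(market_prices, key=market_prices.get)
print(decision)  # "adopt" -- the bets say welfare is higher under the policy
```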

Posts about deciding whether to have children, how many children to have, when to have children, etc. Also called parenthood decision-making.

DIY means "Do It Yourself".

Functional Decision Theory is a decision theory, developed by Eliezer Yudkowsky and Nate Soares, which says that agents should treat their decision as the output of a fixed mathematical function that answers the question, "Which output of this very function would yield the best outcome?". It is a successor to Timeless Decision Theory, and it outperforms other decision theories such as Causal Decision Theory (CDT) and Evidential Decision Theory (EDT). For example, it does better than CDT on Newcomb's Problem, better than EDT on the smoking lesion problem, and better than both on Parfit's hitchhiker problem.
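
As an illustration of the Newcomb's Problem case, here is a minimal sketch of how the two reasoning styles diverge; the payoffs are the standard ones, and the 0.99 predictor accuracy is an assumed number:

```python
# An illustrative sketch of Newcomb's Problem under CDT-style and
# FDT-style expected values. The 0.99 accuracy is an assumption.

BIG, SMALL = 1_000_000, 1_000
ACCURACY = 0.99  # assumed reliability of the predictor

def payoff(action, box_b_full):
    # Box B holds $1,000,000 iff the predictor filled it; box A always holds $1,000.
    base = BIG if box_b_full else 0
    return base + (SMALL if action == "two-box" else 0)

def cdt_value(action, prior_b_full=0.5):
    # CDT: the prediction is already causally fixed, so average over a fixed
    # prior; two-boxing then dominates regardless of the prior.
    return (prior_b_full * payoff(action, True)
            + (1 - prior_b_full) * payoff(action, False))

def fdt_value(action):
    # FDT: the predictor ran (a model of) this very decision function, so
    # choosing an output also settles the prediction, up to its accuracy.
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    return p_full * payoff(action, True) + (1 - p_full) * payoff(action, False)

for action in ("one-box", "two-box"):
    print(action, cdt_value(action), fdt_value(action))
# CDT prefers two-boxing; FDT prefers one-boxing (~$990,000 vs ~$11,000).
```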

A Consensus is a general or full agreement between members of a group. A consensus can be useful in deciding what's true (e.g., a scientific consensus), or as a criterion in decision making. A False Consensus can happen when someone thinks a position is in consensus when it isn't. One can also claim a consensus falsely to advance their position and make it difficult for others to oppose it. A False Controversy can happen when one mistakenly believes something is not in consensus when in fact it is. Claiming false controversies is a common way of creating uncertainty and doubt.

Unbundling is breaking a complex concept into its simpler constituents; also known as Conceptual Reductionism.

A Satisficer aims to reach a set level of utility rather than to maximize utility. It is one proposed answer to the open Other-izer problem.
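
A minimal sketch of the distinction (the options and utility numbers are invented for illustration):

```python
# Contrast a maximizer, which searches for the single best option, with a
# satisficer, which stops at the first option meeting a target utility.

options = [("walk", 3), ("bike", 5), ("drive", 7), ("teleport", 100)]

def maximize(opts):
    return max(opts, key=lambda o: o[1])

def satisfice(opts, target):
    for name, utility in opts:
        if utility >= target:
            return (name, utility)
    return maximize(opts)  # fall back if nothing clears the bar

print(maximize(options))      # ('teleport', 100)
print(satisfice(options, 5))  # ('bike', 5) -- stops once "good enough"
```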

Recent Tag & Wiki Activity

John Vervaeke
I don't want to have to pay attention to everything that's out there on Twitter or Facebook, and would like a short document that gets to the point and links out to other things if I feel curious.

I was pretty happy when Ben Pace turned Eliezer's Facebook AMA into a LW post; I might like to see more stuff like that. However, I feel like wiki pages ought to be durable and newcomer-friendly, and therefore must necessarily lag the cutting edge.

Due to scope neglect, framing effects, and other cognitive biases, an expected utility calculation executed correctly may produce an answer different from first intuition, making it "intuitively unappealing", perhaps even horrifying. If you can tell that it's probably the intuitions that went wrong and not the calculation, the skill of shut up and multiply is the ability to accept that, yes, sometimes the expected utility math is correct and we need to deal with that. Contrast with do the math, then go with your gut. If you're not sure which of these applies, use "do the math, then go with your gut" until you've built up more experience.
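
As a minimal worked example (all numbers invented) of what multiplying buys you when scope neglect is in play:

```python
# A toy expected-value calculation. Scope neglect makes a sure, small benefit
# feel more compelling than a small chance of a vast one; multiplying out the
# expectations says otherwise here.

p_a, n_a = 1.0, 100          # option A: certainly help 100 people
p_b, n_b = 0.01, 1_000_000   # option B: 1% chance of helping 1,000,000

print("A:", p_a * n_a)  # 100.0 people helped in expectation
print("B:", p_b * n_b)  # 10000.0 -- one hundred times option A
```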

The specific application of Shut Up and Multiply to the Torture versus Dust Specks case has proven quite contentious. One reason this case was cited as an exemplar of where "shut up and multiply" should apply was a claim that the usual reasoning behind answering "SPECKS" can be reduced to circular preferences.

Cognitive science draws upon a variety of different disciplines to try to describe and explain the way humans think. It heavily involves neuroscience, psychology, and philosophy. It differs from neuroscience in that it focuses less on relating structure to function, and more on using many approaches to form higher-level models to predict behaviour.

This tag is way too specific for LessWrong

Knowing and understanding the possible failure modes of what you are attempting to do is important in order to avoid them. Security Mindset and Ordinary Paranoia discusses the difference between finding and fixing failure modes by trying your best to imagine all the ways your system could fail ("ordinary paranoia") vs having a tight argument that your system does not fail (under a small number of assumptions which are each individually quite probable).

Other Examples: 


Bias
Planning Fallacy 
Status Quo Bias
Affect Heuristic
Aversion/Ugh Fields
Bucket Errors
Compartmentalization
Confirmation Bias
Fallacies
Goodhart's Law
Groupthink
Heuristics & Biases
Mind Projection Fallacy
Motivated Reasoning
Pica
Pitfalls of Rationality
Rationalization
Self-Deception
Sunk-Cost Fallacy
Paperclip Maximizer
Moral Mazes
Replication Crisis
Moloch
Tribalism
Simulacrum Levels
Information Hazards
Pascal's Mugging
Akrasia
Procrastination
Nonappeals

Humility should also not be confused with social modesty, or motivated skepticism (aka disconfirmation bias).

Related Sequences: Inadequate Equilibria 

Related Pages: Calibration, Chesterton's Fence, Underconfidence, Modest Epistemology, Modesty, Fallacy of Gray

Notable Posts

Willpower

There is an argument that the use of willpower is undesirable.

Would be good to add a source.

Humility

Fixed, sorta, but now this tag needs to be merged with "humility". (I've named it "epistemic humility" in the meantime, but I think it should just be called "humility" -- no one says "epistemic humility" I think.)

Humility describes the concept that we should, in general, be less sure about what we know than intuition implies. It is closely related to epistemology.

(And, just to prevent life becoming too easy, make sure not to become underconfident in the process of avoiding overconfidence!)

Contrasting Humility and Modesty

In LessWrong parlance, this should not be confused with "epistemic modesty" / "modest epistemology". While Eliezer lists "humility" as a virtue, he provides many arguments against modesty (most extensively in the book Inadequate Equilibria, but also in many earlier sources). Humility is the general idea that you should expect to be fallible. Modest Epistemology is specifically the view that, due to your own fallibility, you should rely heavily on the outside view. Modest epistemology says that you should trust average opinions more than your own opinion, even when you have strong arguments for your own views and against more typical views.

Related Tags: Calibration, Chesterton's Fence, Underconfidence, Modest Epistemology

Modest Epistemology

Modest Epistemology is the claim that average opinions are more accurate than individual opinions, and that individuals should take advantage of this by moving toward average opinions, even in cases where they have strong arguments for their own views and against more typical views. (Another name for this concept is "the wisdom of crowds" -- that name is much more popular outside of LessWrong.) In terms of inside view vs outside view, we can describe modest epistemology as the belief that inside views are quite fallible and outside views much more robust; therefore, we should weigh outside-view considerations much more heavily.
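
As a toy illustration of the statistical intuition behind "wisdom of crowds" (with invented numbers, and the strong assumption that individual errors are independent and unbiased):

```python
# A toy wisdom-of-crowds simulation. Assumes each estimate is the truth plus
# independent, unbiased Gaussian noise -- the case where averaging helps most.
import random

random.seed(0)
truth = 100.0
estimates = [truth + random.gauss(0, 20) for _ in range(1000)]

individual_error = sum(abs(e - truth) for e in estimates) / len(estimates)
crowd_error = abs(sum(estimates) / len(estimates) - truth)

print(individual_error)  # typical individual error, around 16 here
print(crowd_error)       # error of the average, far smaller
```

Note that the averaging advantage depends on the independence assumption; it shrinks when the estimators' errors are correlated.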

In LessWrong parlance, "modesty" and "humility" should not be confused. While Eliezer lists "humility" as a virtue, he provides many arguments against modesty (most extensively in the book Inadequate Equilibria, but also in many earlier sources). Humility is the general idea that you should expect to be fallible. Modest Epistemology is specifically the view that, due to your own fallibility, you should rely heavily on the outside view. Modest epistemology says that you should trust average opinions more than your own opinion, even when you have strong arguments for your own views and against more typical views.

Historically, Robin Hanson has argued in favor of epistemic modesty and the outside view, while Eliezer has argued against epistemic modesty and for strong inside views. For example, this disagreement played a role in The Foom Debate. Eliezer and Hanson both agree that Aumann's Agreement Theorem implies that rational agents should converge to agreement; however, they have very different opinions about whether and how this breaks down in the absence of perfect rationality. Eliezer sees little reason to move one's opinion toward that of an irrational person. Hanson thinks irrational agents still benefit from moving their opinions toward each other. One of Hanson's arguments involves pre-priors....

(Read More)

Radical Probabilism is a newer form of Bayesianism invented by Richard Jeffrey. The primary point of departure from other forms of Bayesianism is its rejection of the strong connection between conditional probability and updates. Radical Probabilism therefore rejects the strong connection between Bayes' Rule and updating.
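
To make the departure concrete, here is a small sketch of Jeffrey conditioning, the update rule associated with Radical Probabilism; all the probabilities are invented for illustration:

```python
# A sketch of Jeffrey conditioning. Experience shifts the probability of
# evidence E to some q < 1 rather than straight to certainty, so the update
# is not ordinary Bayesian conditionalization.

p_a_given_e = 0.9      # P(A | E)
p_a_given_not_e = 0.2  # P(A | not-E)
q = 0.8                # new probability of E after a dim, uncertain glimpse

# Jeffrey's rule: P_new(A) = P(A|E) * q + P(A|not-E) * (1 - q)
p_a_new = p_a_given_e * q + p_a_given_not_e * (1 - q)
print(p_a_new)  # 0.76; Bayes-style updating is the special case q = 1
```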

Other notable writers on Radical Probabilism include Jeffrey's student Richard Bradley, and the champion of naturalized philosophy, Brian Skyrms.

Values handshakes

Values handshakes are a proposed form of trade between superintelligences. From The Hour I First Believed by Scott Alexander:

Suppose that humans make an AI which wants to convert the universe into paperclips. And suppose that aliens in the Andromeda Galaxy make an AI which wants to convert the universe into thumbtacks.

When they meet in the middle, they might be tempted to fight for the fate of the galaxy. But this has many disadvantages. First, there’s the usual risk of losing and being wiped out completely. Second, there’s the usual deadweight loss of war, devoting resources to military buildup instead of paperclip production or whatever. Third, there’s the risk of a Pyrrhic victory that leaves you weakened and easy prey for some third party. Fourth, nobody knows what kind of scorched-earth strategy a losing superintelligence might be able to use to thwart its conqueror, but it could potentially be really bad – eg initiating vacuum collapse and destroying the universe. Also, since both parties would have superintelligent prediction abilities, they might both know who would win the war and how before actually fighting. This would make the fighting redundant and kind of stupid.

...

(Read More)

Internal Family Systems

Yeah, subagents is the general idea of modeling the mind in terms of independent agents, but IFS is a more specific theory of what kinds of subagents there are. E.g. my sequence has a post about understanding System 1 and System 2 in terms of subagents, while IFS doesn't really have anything to say about that.

Automation

I reckon it's fine, especially if you provide the source.

Technological Unemployment

Overlapping but they still feel like different concepts or something.