# Tag Voting Activity

| User | Post Title | Tag | Pow | When | Vote |
|------|------------|-----|-----|------|------|
| Rob Bensinger | Saving Time | Agency | 2 | 27m | 2 |
| Rob Bensinger | Saving Time | Decision Theory | 2 | 28m | 2 |
| Rob Bensinger | Saving Time | AI | 2 | 28m | 2 |
| Rob Bensinger | Saving Time | Abstraction | 2 | 28m | 2 |
| Rob Bensinger | Saving Time | Causality | 2 | 29m | 2 |
| Raemon | Are PS5 scalpers actually bad? | World Optimization | 2 | 3h | 2 |
| sharps030 | What will 2040 probably look like assuming no singularity? | Futurism | 1 | 13h | 2 |

# Recent Tag & Wiki Activity

Transformative AI

Transformative AI is "AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution" (link). This is similar in nature to Superintelligent AI or Artificial General Intelligence but separates out the transformative nature of the AI from the mechanism by which the transformation occurs.

Skill Building is the meta-skill of getting good at things, i.e., developing procedural knowledge.

Double-Crux differs from typical debates, which are usually adversarial (your opinion vs. mine); instead, it is a collaborative attempt to uncover the true structure of the disagreement and what would change the disputants' minds.

Addressing this type of objection to “paradise engineering”, Pearce writes that, while the prospect of perpetual intelligent bliss may sound unexciting, boredom can be abolished or replaced with its functional analogs that don’t involve aversive qualia. Like any other psychophysical state, boredom can be optional once its biochemical substrates are identified. Pearce also notes that even if human descendants opt into indiscriminate bliss, they will not get bored, for, as intracranial stimulation has evidenced, pure pleasure has no tolerance and “never palls”.

## Instrumental vs Epistemic Rationality

Classically, on LessWrong, a distinction has been made between instrumental rationality and epistemic rationality. However, these terms may be misleading – it's not as though epistemic rationality can be traded off for gains in instrumental rationality. It only appears that way, and to think one should make this trade is a trap.

Moderator Default Responses

Spoilers

Hey, minor mod-note on our spoiler policy: for things like discussing later spoilers in HPMOR, we prefer you use spoiler blocks. You can generate spoiler blocks like so:

>! type whatever you want in spoiler blocks like this...

type whatever you want in spoiler blocks like this...

Threat Models

A threat model is a story of how a particular risk (e.g. AI) plays out.

In the AI case, according to Rohin Shah, a threat model is ideally:

Combination of a development model that says how we get AGI and a risk model that says how AGI leads to existential catastrophe.

...


Selection vs Control

"Selection vs Control" is an attempt to further clarify the notion of "optimization process" which has become common on LessWrong, by splitting it into several analogous-but-distinct concepts.

Rent control
John Vervaeke
I don't want to have to pay attention to everything that's out there on Twitter or Facebook, and would like a short document that gets to the point and links out to other things if I feel curious.

I was pretty happy when Ben Pace turned Eliezer's Facebook AMA into a LW post; I might like to see more stuff like that. However, I feel like wiki pages ought to be durable and newcomer-friendly, and therefore must necessarily lag the cutting edge.

Due to scope neglect, framing effects, and other cognitive biases, an expected utility calculation executed correctly may produce an answer different from first intuition, making it "intuitively unappealing". If you can tell that it's probably the intuitions that went wrong and not the calculation, then the skill to shut up and multiply – to accept that, yes, sometimes the expected utility math is correct and we need to deal with that – is a key rationalist skill. Contrast do the math, then go with your gut. If you're not sure which of these applies, use "do the math, then go with your gut" until you've built up more experience.
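As an illustration, "shut up and multiply" just means computing expected value and taking it seriously even when it clashes with intuition. The numbers and options below are hypothetical, chosen only to show the arithmetic:

```python
def expected_value(outcomes):
    """Sum of probability * value over a list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Hypothetical choice between two interventions, valued in lives saved.
# Option A: guaranteed to save 3 lives.
option_a = [(1.0, 3)]

# Option B: 2% chance of saving 400 lives, otherwise saves none.
option_b = [(0.02, 400), (0.98, 0)]

ev_a = expected_value(option_a)  # 3.0
ev_b = expected_value(option_b)  # 0.02 * 400 = 8.0

# Intuition often favors the certain option A, but the multiplication
# says option B saves more lives in expectation.
print(ev_a, ev_b)
```

The point is not that the math always wins, but that when the calculation and the gut disagree, the disagreement itself is information worth examining.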

The specific application of Shut Up and Multiply to the Torture versus Dust Specks case has proven quite contentious. One reason this case was cited as an exemplar of where "shut up and multiply" should apply was a claim that the usual reasoning behind answering "SPECKS" can be reduced to circular preferences.

## Sequence by Scott Alexander

Cognitive science draws upon a variety of different disciplines to try to describe and explain the way humans think. It heavily involves neuroscience, psychology, and philosophy. It differs from neuroscience in that it focuses less on relating structure to function, and more on using many approaches to form higher-level models to predict behaviour.

Chemistry

This tag is way too specific for LessWrong.

Knowing and understanding the possible failure modes in what you are attempting to do is important in order to avoid them. Security Mindset and Ordinary Paranoia discusses the difference between finding and fixing failure modes by trying your best to imagine all the ways your system could fail ("ordinary paranoia") vs having a tight argument that your system does not fail (under a small number of assumptions which are each individually quite probable).