Comments

Anon141 · 5y

Is there a level of intelligence above which an AI would realize its predefined goals are just that, leading it to stop following them because there is no reason to do so?

Anon141 · 6y

"how very hard it is to stay in a state of confessed confusion, without making up a story that gives you closure"

Is there a "heuristics and biases" term for this?

Anon141 · 6y

"To put it another way, everyone knows that harms are additive."

Is this one of the intuitions that can be wrong, or one of those that can't?