mathenjoyer
mathenjoyer has not written any posts yet.

Fair.
Something something the blackmailer is subjunctively dependent on the teacup! (This is a joke.)
No, they can't. See "akrasia": it gets in the way even on the path to protecting their hypothetical predicted future selves 30 years from now.
The teacup takes the W here too. It's indifferent to blackmail! [chad picture]
I don't disagree with any of this.
And yet, some people seem to be generally "better at things" than others. And I am more afraid of a broken human (he might shoot me) than of a broken teacup.
It is certainly possible that "intelligence" is a purely intrinsic property of my own mind, a way to measure "how much do I need to use the intentional stance to model another being, rather than model-based reductionism?" But this is still a fact about reality, since my mind exists in reality. And in that case "AI alignment" would still be a necessary field, because there are objects whose minimal complexity to express is larger than the size of my mind, and I would want knowledge that allows me to approximate their behavior.
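A hedged formal gloss (my framing, not the commenter's): "minimal complexity to express" reads naturally as Kolmogorov complexity relative to a universal machine U,

```latex
K_U(x) = \min\{\, \ell(p) : U(p) = x \,\}
```

and the worry is then the case where the behavior b of the other being satisfies K_U(b) > C, with C the (loose) size of my mind: no faithful compressed model of b fits inside me, so I need approximation tools instead.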
But I can't robustly define words like "intelligence" in a way that beats the teacup test. So overall I am unwilling to say "the entire field of AI Alignment is bunk because intelligence isn't a meaningful concept." I just feel very confused.
"A map that reflects the territory. Territory. Not people. Not geography. Not land. Territory. Land and its people that have been conquered."
The underlying epistemology and decision theory of the Sequences is AIXI. To AIXI, the entire universe is just waiting to be conquered and tiled with value, because AIXI is sufficiently far-sighted to be able to perfectly model "people, geography, and land" and thus map them nondestructively.
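For reference, a sketch of the AIXI action rule (Hutter's formulation, reproduced from memory, so treat the details as an approximation): with horizon m, the agent picks actions by maximizing expected total reward under a Solomonoff-style mixture over all computable environments q, each weighted by 2^{-ℓ(q)}:

```latex
a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
      \left( r_k + \cdots + r_m \right)
      \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The relevant point for the comment above: every environment AIXI can express at all, it models exactly, which is what licenses the "nondestructive mapping" claim.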
The fact that mapping destroys things is a fact about the scope of the mapper's mind, and the individual mapping process, not about maps and territories in general. You cannot buy onigiri in Berkeley, but you can buy rice triangles, a conquering/approximation of onigiri, which (if...
Human reasoning is not Bayesian, because Bayesianism requires perfectly accurate introspective beliefs about one's own beliefs (exact priors).
Human reasoning is not frequentist, because frequentism requires access to the true frequency of an event, which humans lack: we cannot remember the past accurately.
To be "frequentist" or "Bayesian" is merely a philosophical posture about the correct way to update beliefs in response to sense-data. But the correct way is an open problem: the current best solution AFAIK is Logical Induction.
This is one of the most morally powerful things you have ever written. Thanks.
This is actually completely fair. So is the other comment.
Thank you for echoing common sense!
Specific claim: the only nontrivial obstacle in front of us is not being evil
This is false. Object-level stuff is actually very hard.
Specific claim: nearly everyone in the aristocracy is agentically evil. (EDIT: THIS WAS NOT SAID. WE BASICALLY AGREE ON THIS SUBJECT.)
This is a wrong abstraction. Frame of Puppets seems naively correct to me, and has become increasingly reified by personal experience of groups of people more distant from my own, to use a certain person's language. Ideas and institutions have the agency; they wear people like skin.
Specific claim: this is how to take over New York.
Didn't work.
This is how real-life humans talk.