Top posts
azsantosk
Co-founder and CEO of Futarchy Labs. Interested in mechanism design and neuroscience. Hopes to contribute to AI alignment.
Twitter: https://twitter.com/azsantosk
"Do you want to be rich, or do you want to be king? — The founder's dilemma. As we approach the technological singularity, the sometimes applicable trade-off between wealth (being rich) and control (being king) may extend into the realm of AI and governance. The prevailing discussions around governance often...
During the American Revolution, a federal army and government were needed to fight against the British. Many people were afraid that the powers granted to the government for that purpose would allow it to become tyrannical in the future. If the founding fathers had decided to ignore these fears, the...
Epistemic status: trying hard to explain, not persuade. Part of me wants to fight the good fight and protect Bayesian orthodoxy against a corrupting heresy. MIT Professor Kevin Dorst (LessWrong username: kevin-dorst) has a new argument that, contrary to standard Bayesianism, it can be rational to predict in which directions...
Optimization happens inside the mind (map), not in the world (territory). Reflecting on this made me noticeably less confused about how powerful AI agents capable of model-based planning and self-modification will act. My model of future AI: An AI that is a powerful optimizer will probably do some kind of...
The bet was arranged on Twitter between @MichaelVassar and me (link). Conditions are similar to this question on Metaculus, except for the open-source condition (I win even if the AI is closed-source, and in fact I would very much prefer it to be closed-source). @Zvi has agreed to adjudicate this...
In the recent counterarguments to the basic AI x-risk case, Katja Grace mentions a basic argument for existential risk from superhuman AI, consisting of three claims, and proceeds to explain the gaps she sees in each of them. Here are the claims: A. Superhuman AI systems will be "goal-directed" B....
Eliezer Yudkowsky on Math AIs: Here are some interesting quotes from the alignment debate with Richard Ngo. > If it were possible to perform some pivotal act that saved the world with an AI that just made progress on proving mathematical theorems, without, eg, needing to explain those theorems to...