I operate by Crocker's rules.
In a current caffeine self-experiment I'm tracking the following variables:
More here. Maybe you could try to self-blind yourself, e.g. have half of your injections be saline and the other half testosterone? (I don't know much about how testosterone is administered; maybe the benefits accrue slowly enough that randomization doesn't make sense.)
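The self-blinding idea above could be set up with a small script; this is just a hedged sketch (function name and the dose count are illustrative, and in practice a helper would need to prepare the concealed syringes):

```python
import random

def blinding_schedule(n_doses, seed=None):
    """Return a shuffled list of 'active'/'placebo' labels, half each.

    A helper prepares the numbered doses according to this schedule;
    the participant only opens the key after the experiment ends.
    """
    assert n_doses % 2 == 0, "need an even number of doses to split evenly"
    labels = ["active"] * (n_doses // 2) + ["placebo"] * (n_doses // 2)
    rng = random.Random(seed)  # fixed seed makes the schedule reproducible
    rng.shuffle(labels)
    return labels

# Example: 10 doses, sealed key mapping dose number -> label
schedule = blinding_schedule(10, seed=42)
key = dict(enumerate(schedule, start=1))
```

The sealed key is what lets you analyze the results honestly afterward without unblinding yourself mid-experiment.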
Happy to respond to more questions as well, but I haven't finished my experiment yet so I can't provide any code for analysis.
Historically, LessWrong was seeded by the writings of Eliezer Yudkowsky, an artificial intelligence researcher.
He usually describes himself as a decision theorist when asked what he does.
Man, I think I am providing value to the world by posting and commenting here. If it cost money I would simply stop posting here, and not post anywhere else.
The value flows in both directions. I'm fine not getting paid but paying is sending a signal of "what you do here isn't appreciated".
(Maybe I'd feel differently if the money were redistributed to particularly good posters? But then there's Goodhart's law.)
I encourage you to fix the mistake. (I can't guarantee that the fix will be incorporated, but for something this important it's worth a try).
I notice I am confused. How do you violate an axiom (completeness) without behaving in a way that violates completeness? I don't think you need an internal representation.
Elaborating more: I am not sure how you would even display behavior that violates completeness. If you're given a choice between only two universe-histories, and your preferences are incomplete over them, what do you do? As soon as you reliably act to choose one over the other, for any such pair, you have algorithmically revealed complete preferences.
If you don't reliably choose one over the other, what do you do then?
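The argument above can be put in toy code (all names and the tiebreak rule are illustrative, not anyone's actual proposal): even if the internal preference relation leaves some pairs incomparable, any deterministic rule the agent uses when forced to pick induces a complete relation over the pairs it faces.

```python
# Internal (incomplete) strict preferences: only one comparison is defined.
prefers = {("A", "B")}  # A preferred to B; A-vs-C and B-vs-C are undefined

def choose(x, y):
    """Forced binary choice: follow the stated preference if one exists,
    otherwise fall back on an arbitrary but reliable tiebreak."""
    if (x, y) in prefers:
        return x
    if (y, x) in prefers:
        return y
    return min(x, y)  # e.g. alphabetical tiebreak

# Every pair now gets a reliable winner, so the *revealed* preferences
# are complete even though the internal relation is not.
pairs = [("A", "B"), ("A", "C"), ("B", "C")]
revealed = {(x, y): choose(x, y) for x, y in pairs}
```

The point is the same as in the comment: the incompleteness is invisible from behavior as soon as the choices are reliable.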
To clarify the comment for @tjaffee, a superintelligence could do the following
Courtesy of the programming language checklist.
So you're proposing a new economic policy. Here's why your policy will not work.