I'm not convinced that "it's impossible for him to press the button, but it's better for him to press the button" is a meaningful concept.
Nothing suggests it's impossible for him to press the button, even if we grant that it's possible he can't reason. Maybe he can stumble into it.
(According to Eliezer, each tick denotes a "wasted motion" - something that Harry does or thinks that nevertheless does not influence his final course of action.)
If being a psychopath (or not being a psychopath) affects your answer because it affects your ability to reason, then, depending on your psychopath status, you may not even have the ability to reason correctly and choose an outcome. The problem is ill-defined, because it asks you to do something that you may be incapable, by stipulation, of doing.
Ah... but it's the meta-you (the reader), not the story-you (the arguable psychopath), who is tasked with saying whether the story-you should press the button. Maybe the story-you is incapable of reasoning. But given his values and the setup of the story, it should be better for him either to press or not to press, regardless of whether he can choose (and it's that answer we're tasked with giving).
To predict if a human ends up happy with something or not?
If you want to think about the outcomes of a counterfactual, it's just a conditional whose antecedent didn't happen.
Indeed.
But that's not the problem Rationalists have.
So what is the problem?
So you're neither saying it's not a counterfactual (despite it not involving either subjective or objective probability), nor are you saying there's a problem with nobody being motivated to think about them.
So what are you saying?
So you're saying that it is a counterfactual (despite not involving subjective or objective probability), but that there is a problem with nobody being motivated to think about said counterfactual?
Also, I found that the post really hit home with its good ideas.
So if I'm not a Yudkowskian rationalist and I want to say that if, in Game of Life, the configuration of cells had been different (so instead of configuration1, it had been configuration2), the outcome would've also been different (outcome2 instead of outcome1), that's not a counterfactual? (Since it's not defined in terms of subjective or objective probability.)
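For the Game-of-Life reading, here's a minimal sketch in Python. The specific patterns (a blinker for `configuration1`, a block for `configuration2`) and the `step`/`run` helpers are illustrative choices of mine, not anything from the discussion; the point is just that running the same deterministic rule from a different initial configuration yields a different outcome, which is the counterfactual being described.

```python
from itertools import product

def neighbours(cell):
    """The eight cells adjacent to `cell`."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(live):
    """One Game of Life step on a set of live (x, y) cells."""
    candidates = live | {n for c in live for n in neighbours(c)}
    return {c for c in candidates
            if len(neighbours(c) & live) == 3
            or (c in live and len(neighbours(c) & live) == 2)}

def run(config, steps=10):
    """Run the deterministic rule forward from an initial configuration."""
    for _ in range(steps):
        config = step(config)
    return config

# Two "initial worlds": a blinker (oscillates) vs. a block (stays fixed).
configuration1 = {(0, 1), (1, 1), (2, 1)}
configuration2 = {(0, 0), (0, 1), (1, 0), (1, 1)}

outcome1 = run(configuration1)
outcome2 = run(configuration2)
print(outcome1 == outcome2)  # False: different initial configuration, different outcome
```

Nothing here appeals to subjective or objective probability; the "would've been different" claim is just a statement about what the same rule does from a different starting state.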
In a deterministic universe (the jury is still out as to whether the indeterminism of our universe affects our decisions), free will is hidden in the other if-branches of the computation-which-is-you. It could've made another decision, but it didn't. You can picture that as another possible world in which that computation is slightly different (such that it makes another decision).
Counterfactuals don't have ontological existence. We talk about them in order to talk about other possible worlds that are similar to ours in some respects and different in others.
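A toy illustration of the "other if-branches" picture, again in Python; `decide` and its inputs are hypothetical names introduced for the example, not anything from the thread. Only one branch actually runs on the inputs the world supplies, but the untaken branch is still part of the same deterministic program; the nearby possible world is the run in which the inputs differ just enough to take it.

```python
def decide(craving: int, restraint: int) -> str:
    """A toy deterministic 'computation-which-is-you'."""
    if craving > restraint:
        return "press the button"  # the branch this world takes
    else:
        return "walk away"         # the other if-branch: a nearby possible world

# The actual world: with these inputs, only one branch executes.
print(decide(craving=7, restraint=3))  # -> "press the button"

# The counterfactual: a slightly different computation (different inputs)
# takes the other branch.
print(decide(craving=2, restraint=3))  # -> "walk away"
```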