Thanks for the comment. +1 to it. I also agree that this is an interesting concept: using Achilles Heels as containment measures. There is a discussion related to this on page 15 of the paper. In short, I think that this is possible and useful for some Achilles Heels, while for others it would be a cumbersome containment measure whose effect could be accomplished more simply via bribes of reward.
Thanks.
I disagree a bit. My point has been that it's easy for solipsism to explain consciousness and hard for materialism to, but easy for materialism to account for structure and hard for solipsism to. Don't interpret the post as my saying solipsism wins--just that it's underrated. I also don't say qualia must be irreducible, just that there's spookiness if they are.
Thanks! This is insightful.
What exactly would it mean to perform a Bayesian update on you not experiencing qualia?
Good point. In an anthropic sense, the sentence this is a reply to could be retracted. Experiencing qualia would not, by itself, be evidence to prefer one theory over another. Only experiencing certain types of observations would cause a meaningful update.
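To make this concrete, here is a minimal sketch of that update in odds form (the hypothesis labels T_1, T_2 and the evidence E are my own notation, not from the post):

```latex
% Posterior odds between two theories after observing evidence E:
\frac{P(T_1 \mid E)}{P(T_2 \mid E)}
  = \frac{P(E \mid T_1)}{P(E \mid T_2)} \cdot \frac{P(T_1)}{P(T_2)}
% If E is "I experience qualia" and both theories predict this with
% certainty, the likelihood ratio is 1 and the odds do not move.
% If E is "my observations are highly structured" and one theory
% assigns that a higher likelihood, the update favors that theory.
```

This is just the point above restated: the update only moves when the two theories assign different likelihoods to the observation.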
The primitives of materialism are described in equations. Does a solipsist seek an equation to tell them how angry they will be next Tuesday? If not, what is the substance of a solipsistic mode...
Great comment. Thanks.
In the case of idealism, we call the ontological primitive "mental", and we say that external phenomena don't actually exist; we merely model them as if they existed in order to predict experiences. I suppose this is a consistent view and isn't that different in complexity from regular materialism.
I can't disagree. This definitely shifts my thinking a bit. I think that solipsism + structured observations might be comparable in complexity to materialism + an ability for qualia to arise from material phenomena. ...
Thanks for the comment. I'm not 100% sold on the computers analogy. I think answering the hard problem of consciousness is significantly different from understanding how complex information processing systems like computers work. Any definition or framing of consciousness in terms of informational or computational theory may allow it to be studied in those terms, in the same way that computers can be understood through systems-level theoretical reasoning based on abstraction. However, I don't think this is what it means to solve the hard problem o...
I agree--thanks for the comment. When writing this post, my goal was to share a reflection on solipsism in a vacuum rather than in the context of decision theory. I acknowledge that solipsism doesn't really tend to drive someone toward caring much about others. In that sense, it's not very productive if someone is altruistically/externally motivated.
I don't want to give any impression that this is a particularly important decision theoretic question. :)
Thanks for the comment. I think it's exciting for this to make it into the newsletter. I am glad that you liked these principles.
I think that even lacking a concept of free will, FDT can be conveniently thought of as applying to humans through the installation of new habits or ways of thinking, without conflicting with the framework that I aim to give here. I agree that there are significant technical difficulties in thinking about when FDT applies to humans, but I wouldn't consider them philosophical difficulties.
Huge thanks. I appreciate it. Fixed.
I'm skeptical of this. Non-mere correlations are consequences of an agent's source code producing particular behaviors that the predictor can use to gain insight into the source code itself. If an agent adaptively and non-permanently modifies its source code, this (from the perspective of a predictor who suspects this to be true) de-correlates its current source code from the non-mere correlations of its past behavior -- essentially destroying the meaning of non-mere correlations to the extent that the predictor is suspicious. A toy illustration is sketched below.
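As a toy illustration (a minimal sketch of my own, with hypothetical names like `Predictor` and `self_modify`; nothing here comes from the paper or this thread): a predictor that extrapolates from an agent's past behavior will typically lose accuracy once the agent re-samples its policy parameter.

```python
import random

class Predictor:
    """Guesses the agent's next action from observed frequencies --
    i.e., exploits the non-mere correlation between the agent's past
    behavior and its (assumed stable) source code."""
    def __init__(self):
        self.history = []

    def predict(self):
        if not self.history:
            return random.choice(["cooperate", "defect"])
        # Guess the most frequent past action.
        return max(set(self.history), key=self.history.count)

    def observe(self, action):
        self.history.append(action)

class Agent:
    """Toy agent whose 'source code' is a single bias parameter."""
    def __init__(self, bias=0.9):
        self.bias = bias  # probability of cooperating

    def act(self):
        return "cooperate" if random.random() < self.bias else "defect"

    def self_modify(self):
        # Adaptive, non-permanent modification: re-sample the policy,
        # de-correlating current code from past behavior.
        self.bias = random.random()

def accuracy(agent, predictor, rounds):
    hits = 0
    for _ in range(rounds):
        guess = predictor.predict()
        action = agent.act()
        predictor.observe(action)
        hits += guess == action
    return hits / rounds

random.seed(0)
agent, predictor = Agent(), Predictor()
print("predictor accuracy before self-modification:",
      accuracy(agent, predictor, 1000))
agent.self_modify()
print("predictor accuracy after self-modification: ",
      accuracy(agent, predictor, 1000))
```

To the extent the predictor suspects this kind of modification, its best move is to discount the historical correlations entirely -- which is the sense in which their meaning is destroyed.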
Oh yes. I agree...
I wrote a LW post as a reply to this. I explain several points of disagreement with MacAskill and Y&S alike. See here.
I really like this analysis. Luckily, with the right framework, I think that these questions, though highly difficult, are technical but no longer philosophical. This seems like a hard question of priors but not a hard question of framework. I speculate that in practice, an agent could be designed to adaptively and non-permanently modify its actions and source code to slip past many situations, fooling predictors who exploit non-mere correlations when helpful.
But on the other hand, maybe a way to induce a great amount of uncertainty to mess up certain age...
Huge thanks. I fixed this.
It's really nice to hear that the paper seems clear! Thanks for the comment.
For 2-3, I can give some thoughts, but these aren't ne...