I think of myself as in camp 2: I believe there is a fundamental sense of experience which is metaphysically independent of the physical description; I just don't think it's very mysterious.
Regardless of which camp is right or what the right metaphysical property is, I claim that a superintelligence would be able to deduce that such aliens would have the camp 2 intuitions, and that they would postulate certain metaphysical properties which it could accurately describe in broad terms (it might believe it's all nonsense, but if it is true, then it would be able to see the local validity of it).
For a superintelligence, thinking about something is almost as good as actually observing and interacting with it, at least when it comes to the broad shape of things.
I thought about this a lot before publishing my findings, and concluded that:
1. The vulnerabilities it is exploiting are already clear to it, given the breadth of knowledge it has. There are all sorts of psychology studies, histories of cults and movements, exposés on hypnosis and Scientology techniques, accounts of con artists, and much, much more already out there. The AIs are already doing the things that they're doing; it's just not that hard to figure out or stumble upon.
2. The public needs to be aware of what is already happening. Trying to contain the information would mean fewer people end up hearing about it. Moving public opinion seems to be the best lever we have left for preventing or slowing AI capability gains.
I think it's not an impossible call. The fiasco with Roko's Basilisk (2010) seems like a warning that could have been heeded. It turns out that "freaking out" about something being dangerous and scary makes it salient and exciting, which in turn causes people to fixate on it in ways that are obviously counterproductive. It becomes a mark of pride to do the dangerous thing and come away unscathed (as with the Demon core), even though you warned them about this from the beginning, and in very clear terms.
And even if there was no one able to see this (it's not like I saw it), it remains a strategic error — reality doesn't grade on a curve.
but if an unconscious superintelligence a billion light years away was asked to guess whether any entities had the property of there being something it would be like to be them (whatever that even means to the unconscious intelligence) there's a 0% chance it would say yes,
I'm not sure if you mean this literally, but there's no way this is true. A superintelligence that had any interest in possible aliens would think a lot about what sorts of evolved minds are out there. It would see how and why this was a property an evolved mind might conceptualize and fixate on, and that such a mind would be likely to judge itself as having this property (and even that this would feel mysterious and important). This just isn't the sort of thing a recursively self-improved superintelligence would miss if it was actually trying!
"no Yudkowsky-LW-sphere"
It's not obvious to me that we're better off than that world, sadly. It seems like one of the main effects was to draw lots of young blood into the field of AI.
Oh cool!
We could call the non-nosy hypotheses "nice neighbors".
Seems like a bad name: "nice neighbors" don't care if everyone 'around' them is being tortured.
I've framed things in this post in terms of value uncertainty, but I believe everything can be re-framed in terms of uncertainty about what the correct prior is (which connects better with the motivation in my previous post on the subject).
Wait, do you think value uncertainty is equivalent/reducible to uncertainty about the correct prior? Would that mean the correct prior to use depends on your values?
One issue with Geometric UDT is that it doesn't do very well in the presence of some utility hypotheses which are exactly or approximately negative of others: even if there is a Pareto-improvement, the presence of such enemies prevents us from maximizing the product of gains-from-trade, so Geometric UDT is indifferent between such improvements and the BATNA. This can probably be improved upon.
So one conflicting pair spoils the whole thing, i.e. ignoring the pair would be a Pareto improvement?
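To check my understanding, here's a toy sketch with made-up numbers of my own (assuming the Geometric UDT objective is just the product, over utility hypotheses, of each hypothesis's gain relative to the BATNA, ignoring probability weights):

```python
from math import prod

# Toy numbers (mine, not from the post). h2 is exactly the negative of h1,
# so any policy that helps h1 hurts h2; a genuine Pareto improvement
# therefore has to leave both of them at exactly zero gain.
utilities = {            # (utility of BATNA, utility of candidate policy)
    "h1": (0.0, 0.0),
    "h2": (0.0, 0.0),    # exact negative of h1
    "h3": (0.0, 2.0),    # strictly better off under the candidate
}

def geometric_score(option):
    """Product of gains-from-trade relative to the BATNA (index 0)."""
    return prod(u[option] - u[0] for u in utilities.values())

print(geometric_score(0))  # BATNA vs itself: 0.0
print(geometric_score(1))  # Pareto improvement: 0.0 * 0.0 * 2.0 = 0.0 as well
# The candidate strictly helps h3 and hurts no one, but the conflicting pair
# pins both scores at 0, so the objective can't tell the two options apart.
```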
Potential mitigation, chickenpox/shingles vaccine: https://www.dovepress.com/article/download/10554
Anecdotal, but I know someone (and witnessed this myself) who got the chickenpox vaccine for this (even though they had already had chickenpox as a child) and reduced their incidence of cold sores by an order of magnitude.
If you labeled it as such, then of course that's fine. The issue is when you try to pass it off as your own writing; that's what I meant by "use them like this".
I feel pretty confident it's made of cells.
(just answering off the top of my head without looking it up)
For all of the non-cell bio-things you mentioned, there's a clear reason why they couldn't be cellular: structural integrity (or an extreme lack thereof). That's not the case with apples (though plausibly the skin works similarly to human skin).
Apples ripen in response to ethylene. It's hard to imagine how that could trigger a complicated 'ripening' response throughout the entire apple without cellular machinery. This also makes me believe that the cells throughout must still be alive (at least pre-ripening). When an apple first starts to go bad, it doesn't seem to be because of mold or bacteria; it just gets mushy in the sort of way I'd expect if the cells were simply dying.
The crispness is most easily explained (to my knowledge) by the stiffer plant cell walls throughout. It's not just applesauce inside; there's something giving it a uniform texture.
And it can't just be one cell, since that wouldn't have internal structural integrity to the degree that it does, and also since the seeds, flesh, and skin are clearly different tissues.
That sounds about right. I simply disagree with Chalmers' dilemma (at least as you describe it).
In my view, this metaphysical fact is necessary but not sufficient for explaining the Hard Problem. It applies to "zombies" in a fairly trivial way. A phenomenal experience is a type of experience (in my 1P sense) and must be understood in this frame, but not all such experiences are phenomenal. I don't claim to know what exactly makes an experience phenomenal, but I'm pretty sure it will be something with non-trivial structure, and that this structure will sync up in a predictable way with the 0P explanation of consciousness.