You're right. I'm updating towards illusionism being orthogonal to anthropics in terms of betting behavior, though the upshot is still obscure to me.
I agree realism is underrated. Or at least the term is underrated. It's the best way to frame ideas about sentientism (in the sense of hedonic utilitarianism). On the other hand, you seem to be talking more about the rhetorical benefits of normative realism about laws.
Most people seem to think phenomenal valence is subjective, but that conflates two senses of the word "subjective": it can mean either arbitrary or bound to a first-person subject. All observations (including valenced states like suffering) are subjective in the second sense, but not in the first.
it is easy to cooperate on the shared goal of not dying
Were you here for Petrov Day? /snark
But I'm confused about what you mean regarding a Pivotal Act being unnecessary. Although both you and a megacorp want to survive, you each have very different priors about what is risky. Even if the megacorp believes your alignment program will work as advertised, that only compels them to cooperate with you if they (1) are genuinely concerned about risk in the first place, (2) believe alignment is so hard that they will need your solution, and (3) actually possess the institutional coordination abilities needed.
And this is just for one org.
World B has a 1, maybe minus epsilon chance of solving alignment, since the solution is already there.
That is totum pro parte. It's not World B which has a solution at hand. It's you who have a solution at hand, and a world that you have to convince to come to a screeching halt. Meanwhile, people are raising millions of dollars to build AGI and don't believe it's a risk in the first place. The solution you have in hand has no significance for them. In fact, you are a threat to them, since there's very little chance that your utopian vision will match up with theirs.
Okay, let's operationalize this.
Button A: The state of alignment technology is unchanged, but all the world's governments develop a strong commitment to coordinate on AGI. Solving the alignment problem becomes the number one focus of human civilization, and everyone just groks how important it is and sets aside their differences to work together.
Button B: The minds and norms of humans are unchanged, but you are given a program by an alien that, if combined with an AGI, will align that AGI in some kind of way that you would ultimately find satisfying.
World ...
I agree that the political problem of globally coordinating non-abuse is more ominous than solving technical alignment. If I had the option to solve one magically, I would definitely choose the political problem.
What it looks like right now is that we're scrambling to build alignment tech that corporations will simply ignore, because it will conflict with optimizing for (short-term) profits. In a word: Moloch.
It's happened before, though. Despite being one of those two friends, I've already been forced to change my habits and regard video calls as a valid form of communication.
none of this requires separate privileged existence different from the environment around us; it is our access consciousness that makes us special, not our hard consciousness.
That sounds like a plausible theory. But if we reject that there is a separate 1st-person perspective, doesn't that entail that we should be Halfers in the Sleeping Beauty problem (SBP)? Not saying it's wrong. But it does seem to me like illusionism/eliminativism has anthropic consequences.
I can see how a computer could simulate any anthropic reasoner's thought process. But if you ran the Sleeping Beauty problem as a computer simulation (i.e., implemented the illusionist paradigm), aren't the Halfers going to be winning on average?
Imagine the problem as a genetic algorithm with one parameter, the credence. Wouldn't the whole population converge to 0.5?
Can you explain what you mean by "underdetermined" in this context? How is there any ambiguity in resolving the payouts if the game is run as a third person simulation?
If I program a simulation of the SBP and run it under illusionist principles, aren't the simulated Halfers going to inevitably win on average? After all, it's a fair coin.
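For concreteness, here is a minimal sketch (Python) of the kind of simulation I have in mind. The scoring rule is my own assumption, since the payout convention is exactly what's in question: each fixed credence is penalized with a squared-error (Brier) loss, tallied once per coin flip and, separately, once per awakening.

```python
import random

def brier_scores(credence_in_heads, n_trials=100_000):
    """Average Brier loss for a fixed credence in heads, under two tallies."""
    per_flip = 0.0       # one penalty per experiment (coin flip)
    per_awakening = 0.0  # one penalty per awakening (1 if heads, 2 if tails)
    awakenings = 0
    for _ in range(n_trials):
        heads = random.random() < 0.5                        # fair coin
        penalty = (credence_in_heads - (1.0 if heads else 0.0)) ** 2
        per_flip += penalty
        wakes = 1 if heads else 2                            # memory-erased awakenings
        per_awakening += penalty * wakes
        awakenings += wakes
    return per_flip / n_trials, per_awakening / awakenings

for c in (1/3, 1/2):
    flip_loss, wake_loss = brier_scores(c)
    print(f"credence {c:.3f}: per-flip loss {flip_loss:.4f}, per-awakening loss {wake_loss:.4f}")
```

In this sketch, 0.5 minimizes the per-flip loss (the sense in which I'd expect the Halfers to come out ahead), while 1/3 minimizes the per-awakening loss.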
I'm fine with everything on LW ultimately being tied to alignment. Hardcore materialism being used as a working assumption seems like a good pragmatic measure as well. But ideally there should also be room for foundational discussions like "how do we know our utility function?" and "what does it mean for something to be aligned?" Having trapped priors on foundational issues seems dangerous to me.
What would it be conscious of, though? Could it feel a headache when you gave it a difficult riddle? I don't think a look-up table can be conscious of anything except for matching bytes to bytes. Perhaps that corresponds to our experience of recognizing that two geometric forms are identical.
Does anyone know of work dealing with the interaction between anthropic reasoning and illusionism/eliminativism?
What about a large look-up table that mapped conversation so far -> what to say next and was able to pass the Turing test? This program would have all the external signs of consciousness, but would you really describe it as a conscious being in the same way that you are?
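To make the thought experiment concrete, here is a toy sketch (Python) of the structure I mean; the entries are placeholder strings I made up, and a real table covering every possible conversation would of course be astronomically large.

```python
# Toy look-up table chatbot: a mapping from "conversation so far" to
# "what to say next". The few entries here are illustrative placeholders;
# the hypothetical full table would have one entry per possible conversation.
LOOKUP_TABLE = {
    (): "Hello! What would you like to talk about?",
    ("Hello! What would you like to talk about?",
     "What's the capital of France?"): "Paris. Anything else?",
    # ... and so on, for every possible conversation prefix ...
}

def next_reply(conversation_so_far):
    # Pure retrieval: the program only matches stored keys against input,
    # with no internal state beyond the table itself.
    return LOOKUP_TABLE.get(tuple(conversation_so_far),
                            "I'm not sure what to say to that.")

print(next_reply([]))  # "Hello! What would you like to talk about?"
```

All of the apparent conversational competence lives in the table's contents; at runtime the program does nothing but key matching.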
Unless the conscious algorithm in question will experience states that are not valence-neutral, I see no issue with creating or destroying instances of it. The same applies to any other type of consciousness. It seems implausible to me that any of our known AI architectures could instantiate such non-neutral valences, even if they do seem plausibly able to instantiate other kinds of experiences (e.g. geometric impressions).
Quick note on the Ponzo illusion: In my view, seeing the top bar as longer is actually a more primitive, fundamental observation. The idea that the bars ought to appear as the same length is an additional interpretative layer thrown on top of this, justified by geometric principles and theories about human visual perception. The direct (or "raw") observation, however, is that the top bar appears longer.
Question: anyone know of some work on the connection between anthropic paradoxes and illusionism? (I couldn't figure out how to make a "Question" type post.)
Yes. Rogue AGI is scary, but I'm far more concerned about human misuse of AGI. Though in the end, there may not be that much of a distinction.
There's a big difference between teleology ... and teleonomy
I disagree. Any "purposes" are limited to the mind of a beholder. Otherwise, you'll be joining the camp of the child who thinks that a teddy bear falls to the ground because it wants to.
Work to offer the solutions and let them make their own, informed choice.
The problem is that the bureaucrats who decide whether gene drives are allowed aren't the same people as the ones who are dying from malaria. Every day that you postpone the eradication of malaria by trying to convince bureaucrats, over a thousand people will die from the disease. Most of them, many of whom are infants, have no ability to meaningfully affect their political situation.
I guess it is logically coherent that a bean sprout could have values of its own. But what would it mean for a bean sprout to value something?
You might say its evolutionary teleology is what it values. But that teleology exists only in your human mind; it's an idea your mind created to help it understand the world. By adopting such a non-sentientist view, your brain hasn't stepped down from its old tyranny, but only replaced one of its notions with a more egalitarian-sounding one. This pleases your brain, but the bean sprout had no say.
It's true that it could set a bad precedent. But it could also set a bad precedent to normalize letting millions of people die horribly just to avoid setting a bad precedent. It's not immediately clear to me which is worse in the very long run.
Something I didn't see mentioned: is there any concern that a sudden elimination of malaria could cause a population surge, with cascading food shortage effects? I have no idea how population dynamics work, so it's non-obvious to me whether there's a potential problem there. Even if so, though, that still wouldn't be an argument to not do the gene drive, but just to make the appropriate preparations beforehand.
How should this affect one's decision to specialize in UI design versus other areas of software engineering? Will there be fewer GUIs in the future, or will the "audience" simply cease to be humans?
Personhood is a separate concept. Animals that may lack a conception of personal identity can still have first-person experiences, like pain and fear. Boltzmann brains supposedly can instantiate brief moments of first-person experience, but they lack personhood.
The phrase "first person" is a metaphor borrowed from the grammatical "first person" in language.
“We ran the experiment of email being a truly open P2P protocol… That experiment failed” (@patio11)
I must be missing something here. How does this fit in with the rest of the tweets in that list?
I'm not a negative utilitarian, for the reason you mention. If a future version of myself were convinced that it didn't deserve to be happy, I'd prefer that its ("my") values be frustrated rather than satisfied in that case, too.
Are you an illusionist about first-person experience? Your concept of suffering doesn't seem to have any experiential qualities to it at all.
the information defining a self-preserving agent must not be lost into entropy, and any attempt to reduce suffering by ending a life when that life would have continued to try to survive is fundamentally a violation that any safe AI system would try to prevent.
Very strongly disagree. If a future version of myself were convinced that it deserved to be tortured forever, I would infinitely prefer that my future self be terminated rather than have its ("my") new values satisfied.
Can you elaborate on what such a process would be? Under illusionism, there is no first-person perspective in which values can be disclosed (the values hedonic utilitarianism is concerned with, in particular).
While it's true that AI alignment raises difficult ethical questions, there's still a lot of low-hanging fruit to keep us busy. Nobody wants an AI that tortures everyone to death.
Do you believe that the pleasure/pain balance is an invalid reason for violently intervening in an alien civilization's affairs? Is this true in principle, or is it simply the case that such interventions will make the world worse off in the long run?
Criticism of one of your links:
those can all be ruled out with a simple device: if any of these things were the case, could that causate onto whether such an intuition fires? for all of them, the answer is no: because they are immaterial claims, the fact of them being true or false cannot have causated my thoughts about them. therefore, these intuitions must be discarded when reasoning about them.
Causation, which cannot be observed, can never overrule data. The attempted comparison involves incompatible types. Causation is not evidence, but a type of inter...
Because, so the argument goes, if the AI is powerful enough to pose any threat at all, then it is surely powerful enough to improve itself (in the slowest case, by coercing or bribing human researchers, until eventually it is able to self-modify). Unlike humans, the AI has no skill ceiling, and so the recursive feedback loop of improvement will go FOOM in a relatively short amount of time, though exactly how long that takes is an open question.
The space of possible minds/algorithms is so vast, and that problem is so open-ended, that it would be a remarkable coincidence if such an AGI had a consciousness that was anything like ours. Most details of our experience are just accidents of evolution and history.
Does an airplane have a consciousness like a bird's? "Design an airplane" sounds like a more specific goal, but in the space of all possible minds/algorithms that goal's solutions are quite underdetermined, just as flight's are.
Utilitarianism seems to demand such a theory of qualitative experience, but this requires affirming the reality of first-person experience. Apparently, some people here would rather stick their hand on a hot stove than be accused of "dualism" (whatever that means) and will assure you that their sensation of burning is an illusion. Their solution is to change the evidence to fit the theory.
I'm not quite convinced that illusionism is decision-irrelevant in the way you propose. If it's true that there is no such thing as 1st-person experience, then such experience cannot disclose your own values to you. Instead, you must infer your values indirectly through some strictly 3rd-person process. But all external probing of this sort, because it is not 1st-person, will include some non-zero degree of uncertainty.
One paradox that this leads to is the willingness to endure vast amounts of (purportedly illusory) suffering in the hope of winning, in exc...
Creating or preventing conscious experiences from happening has a moral valence equivalent to how that conscious experience feels. I expect most "artificial" conscious experiences created by machines to be neutral with respect to the pain-pleasure axis, for the same reason that randomly generated bitmaps rarely depict anything.
Great work! I hope more people take your direction, with concrete experiments and monitoring real systems as they evolve. The concern that doing this will backfire somehow simply must be dismissed as untimely perfectionism. It's too late at this point to shun iteration. We simply don't have time left for a Long Reflection about AI alignment, even if we did have the coordination to pull that off.
Exactly. I wish the economic alignment issue was brought up more often.