I am a bit doubtful about the impact that this will have – it seems like the goal of fasting, as a form of activism, is to signal to others that you care deeply enough about an issue to suffer for it.
However, a lot of the key players in AI probably disagree with, or assign lower probability to, the arguments for x-risk (for whatever reason), which means that demonstrating how much you care is unlikely to convince them.
Not to say that there will be no impact, but I'm not sure the expected loss of productivity from fasting for a week straight would be worth it.
To refine this discussion, I'll only be responding to what I think are major points of disagreement.
> This scenario is just as incoherent as the other one!
No, it is not.
Imagining yourself as a "disembodied spirit," a mind with no properties, is indeed completely incoherent. Imagining being someone else, however, is something people (including you, I assume) do all the time. It may be difficult, and your imagination will never be perfectly accurate to their experience, but it is not incoherent.
I do see your point: you can't become someone else, because you would then no longer be you. What you are arguing is that under your view of personal identity (which, based on your comments, I presume is in line with Closed Individualism), the Veil of Ignorance is not an accurate description of reality. Sure, I don't disagree there.
I think it would be helpful to first specify your preferred theory of identity, rather than dismissing the VOI as nonsense altogether. That way, if you think Closed Individualism is obviously true, you could have a productive conversation with someone who disagrees with you on that.
> No, the goal of the thought experiment is to argue that you should want to do this. If you start out already wanting to do this, then the thought experiment is redundant and unmotivated.
Even if you already accept sentientism, I think it is still very useful to consider which changes to our current world would bring us closer to an ideal world under that framework. This is how I personally answer "what is the right thing for me to do?"
> So, yes, as a point of metaethics, we recognize that aliens won’t share our morality, etc. But this has zero effect on our ethics. It’s simply irrelevant to ethical questions—a non sequitur.
I don't see why a meta-ethical theory cannot inform your ethics. Believing that my specific moral preferences do not extend to everyone else has certainly helped me answer “what is the right thing for me to do?”
I believe that humans (and superintelligences) should treat all sentient beings with compassion. If the Veil of Ignorance encourages people to consider the perspective of other beings and reflect on the specific circumstances that have cultivated their personal moral beliefs, I consider it useful and think that endorsing it is the right thing for me to do.
Of course, this should have no direct bearing on your beliefs, and you are perfectly free to do whatever maximizes your internal moral reward function.
> Is there something incoherent about caring about only some people/things/entities and not others? Surely there isn’t.
I think we agree here.
What I meant was: caring about another sentient being's pain or pleasure, primarily because you can imagine what they are experiencing and how desirable or undesirable it is, indicates that you care about positive and negative subjective states as such, and so this care extends to all beings capable of experiencing such states.
I generally accept Eliezer's meta-ethical theory, so I don't assume this necessarily applies to anybody else.
The way I interpret it: in the thought experiment, you are not literally imagining yourself as a mind without properties, and then asking "what would I want?" You are imagining that you can become any of the sentient minds entering existence at any given moment, and that you will inherit their specific circumstances and subjective preferences.
The goal of the thought experiment, then, is to construct a desirable world that encompasses the specific desires and preferences of all sentient beings.[1] This seems obviously relevant to the alignment problem.
With no constraints, your answer might look like "give every sentient being a personal utopia." Our individual ability to change the real world is heavily constrained, though, so some realistic takeaways can be:
> I (taking your question seriously) answer that, obviously, the master should command the slave, and the slave should obey the master...
This would not be a faithful execution of the thought experiment, as you would obviously be ignoring the possibility of existing as the slave (who, presumably, does not prefer to be enslaved).
In a sense, the Veil of Ignorance is an exercise in rationality – you are recognizing that your mind (and every other mind) is shaped by its specific circumstances, so your personal conception of what's moral and good doesn't necessarily extend to all other sentient beings. If I'm not mistaken, this seems to be in line with Eliezer's position on meta-ethics.
I agree that the notion of "you" existing before you came into existence seems incoherent – I prefer to describe it as "your mind emerged from physical matter, and developed all of its properties based on that matter, so when you develop a moral/ethical framework that includes any minds beyond your own, it should logically extend to all minds that exist or will ever exist." In other words, caring about anyone else means you should probably care about all sentient beings.
The thought experiment is supposed to be somewhat practical, so I don't think you need to consider aliens, or the entire set of all possible minds that can ever exist.
I'm curious – what are the "very serious criticisms" you refer to? Your comment would be more helpful and productive if you pointed to specific disagreements you have with the post rather than abstractly asserting that they exist.
More importantly, what are these criticisms directed at? The stated thesis of this post is that the veil of ignorance is a "useful and mostly accurate description of reality when viewed through certain theories of personal identity."
This list of notable criticisms of the Veil of Ignorance doesn't seem to include any disagreement with this claim in particular. In fact, the last criticism mentioned seems to support it – albeit with different reasoning.
Likewise, the first LessWrong post that comes up when searching for "Veil of Ignorance" argues that the thought experiment doesn't necessarily support Rawls's ideal society – not that it fails to accurately describe reality.
Note that there have been many reports of persistent physiological changes caused by 5-AR inhibitors such as finasteride (see: Post-Finasteride Syndrome), some of which sound pretty horrifying, like permanent brain fog and anhedonia.
I've spent a lot of time reading through both the scientific literature and personal anecdotes, and such adverse effects seem exceedingly rare, but I have high confidence (>80%) that they are not completely made up or psychosomatic. My current best guess is that these permanent effects are caused by rare genetic variants of some sort, which is why I'm particularly interested in the genetic study being funded by the PFS network.
The whole situation is pretty complex and there's a lot of irrational argumentation on both sides. I'd recommend this Reddit post as a good introduction – I plan on posting my own detailed analysis on LW sometime in the future.