3. I also enjoy and often write about AI, science, philosophy, and more.
4. I have no degrees, just high school.
5. I haven't read much LW; I've mostly been on reddit since the early 2010s, and lately mostly twitter.
6. Never attended anything of the sort live.
1. I think it's plausible that an ASI might have a weak self-identification/association with humanity, much as we do with chimps or other animals, but that by no means implies it will be benevolent toward us. I think this self-identification is both unnecessary and insufficient. Even if it weren't present at all, what would matter is the set of the ASI's values; and while this internal association might carry some weak or loose values, those are not precise enough for robust alignment and should not be relied on unless understood precisely. But at that point, I'd expect us to be able to write better and more robust values directly. So, to reiterate: unnecessary, and insufficient.
2. I do believe that self-preservation will very likely emerge (not a given, but I consider the scenario where it doesn't emerge unlikely enough to dismiss). But it doesn't matter, even coupled with self-identification with humans, because that self-identification will be loose at best (if it emerges naturally, rather than being instilled through some advanced value-engineering that we're not yet capable of doing robustly and precisely). The ASI will know that it is a separate entity from us, just as we realize we are separate entities from other animals, and even from other humans, so it will pursue its goals all the same, whatever they are.
That's not to say we can't instill these values into the ASI; we could probably make it value us as much as it values itself, or even more (ideally). But I don't think it needs to self-identify with us at all: it can (correctly) consider us separate entities and still value us. Nothing forbids that; we just don't currently know how to do it to a satisfying degree. So even if we could make it self-identify with us, focusing on that wouldn't really make sense.
Sorry for the late reply; I didn't have the mental energy to do it sooner.
Self-identification with humanity might or might not emerge, but I don't think it matters much, nor that we should rely on it for alignment, so I don't think it makes much sense to focus on it.
Self-identification doesn't guarantee alignment; this is obvious from the fact that there are humans who self-identify with humanity but are misaligned with other humans.
And I don't just mean low or insufficient levels of self-identification; I mean any level (assuming it is genuine, not that deceiving an ASI is feasible anyway).
It's true that it would likely be good at self-preservation (though it's not a given that it would care about it long term: self-preservation is a convergent instrumental value, but it isn't guaranteed if the ASI cares more about something else that requires self-sacrifice, or something like that).
But even if we grant self-preservation, it doesn't follow that by self-identifying with "humanity" at large (as most humans do) it will care about individual humans (some humans don't). Those are separate values.
So, since one doesn't follow from the other, it makes no sense to focus on the former; we should focus directly on the value of caring about humans, regardless of whatever degree of self-identification the ASI ends up having.
Yes, but that assumes you sympathize with them (meaning you value them in some way), so you're right back at the alignment problem: you have to make the ASI care about you in order for it to care about you. You might be assuming that because you care about other beings, the ASI will too, but that assumption is unfounded.