I'm surprised there's no tag for either "AI consciousness" or "AI rights", given that there have been several posts discussing both. However, there's a lot of overlap between the two, so perhaps both would be redundant, and the question of which is broader/more fitting becomes relevant. Thoughts?
(Sorry if this is not the right place to put this.)
Bing writes that one of the premises is that the AIs "can somehow detect or affect each other across vast distances and dimensions", which seems to indicate that it's misunderstanding the scenario.
People do not have the ability to fully simulate a person-level mind inside their own mind. Attempts to simulate minds are accomplished by a combination of two methods: using a rough, less-than-person pattern of how that kind of mind behaves, or using one's own mind as the substrate, thinking or feeling what the character would and reading off the result.
(There's also space in between these two, such as pattern-matching from one's own type of thinking, inserting pattern-results into one's own thinking-style, or balancing the outputs of the two approaches.)
Insofar as the first method is used, the result is not detailed enough to be a real person. Insofar as the second is used, it is not a distinct person from the person doing the thinking. You can simulate a character's pain either by feeling it yourself and using your own mind's output, or by using a less-than-person rough pattern, and neither of these comes with moral quandaries.
Relevant quote from the research paper: "gpt-4 has a context length of 8,192 tokens. We are also providing limited access to our 32,768–context (about 50 pages of text) version, gpt-4-32k, which will also be updated automatically over time".
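As a rough sanity check on the "about 50 pages" figure, one can convert tokens to pages using the common heuristics of roughly 0.75 words per token and roughly 500 words per page of plain text (these ratios are assumptions, not figures from the paper):

```python
# Rough sanity check: how many pages is a 32,768-token context?
# Assumed heuristics (not from the GPT-4 paper):
#   ~0.75 words per token, ~500 words per page of plain text.
tokens = 32_768
words_per_token = 0.75
words_per_page = 500

words = tokens * words_per_token   # ~24,576 words
pages = words / words_per_page     # ~49 pages

print(f"~{pages:.0f} pages")
```

This lands at roughly 49 pages, consistent with the paper's "about 50 pages of text" description.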
A few errors: the sentence "We're all crypto investors here." was said by Ryan, not Eliezer; and the "How the heck would I know?" and the "Wow" (following "you get a different thing on the inside") were said by Eliezer, not Ryan. Also, typos:
I am strongly reminded of the descriptions of the "upper class" in ACX's review of Fussell: "[T]he upper class doesn't worry about status because that would imply they have something to prove, which they don't." And therefore they are extremely meticulous in making sure that nothing they do looks like signalling, ever, because otherwise people might think they have something to prove (which they don't). Boring parties, deliberately non-ostentatious mansions, food just bland enough to avoid being too good and looking like they're trying to show something, etc.
This kind of thing does happen. A group decides it can be above signalling and starts avoiding any attempt to signal, and then everyone notices that visible attempts to signal are bad signalling. Good signalling is looking like you're not trying to signal. And then the game starts all over again, only with yet another level of convoluted rules.
Some relevant posts: