When it comes to AGI alignment, or the future in general, many people seem to think in a way that revolves entirely around humans as we know them today. Take the recent Kurzgesagt video "The Last Human – A Glimpse Into The Far Future", for instance. While it very briefly mentions the possibility of genetic engineering, it doesn't go into detail, and it clearly assumes that human descendants will retain properties rather similar to today's humans, since it calculates a number of hypothetical human births. If it considered, for example, the possibility of human-like artificial minds, which could hypothetically be created more quickly and efficiently, the resulting numbers could dwarf the number of biological humans given the same resources.

So isn't this human-purist view rather myopic in the long run?

Of course, there are also humans who would eagerly accept deeply invasive modifications of their bodies, as long as those modifications appeared sufficiently beneficial to them. Over time, such biotechnological modifications could result in transhumans that barely resemble today's humans, effectively creating a very different, even "alien", lifeform.

Or what if some transhumans eventually wish to transfer, connect, or recreate their minds in some kind of virtual-world system, which may or may not also allow control of "bodies" in the outside world? Would you consider such a mind any less of a person because it can be easily copied, because it is not restricted to a specific human body, or for some other reason?

The modifications could eventually go so far as to create transhumans on par with a possibly already-existing aligned AGI.

Of course, transhumans and human purists could ostensibly coexist even in extreme modification scenarios. But perhaps the human purists would try to forcefully prevent further transhumanism out of fear, or perhaps the transhumans would fight the purists for resources; who knows.

What do you think?


3 Answers

Before it gains the ability to transform itself in the ways you speculate about, humanity will probably perish, killed by some AGI research project, so it is a little inefficient to worry about transhumans at this point.

I have a pretty simple, philosophically dumb answer to this: I don't want to die. I'm also pretty skeptical of any intervention that seems like a copy-and-delete operation (i.e. uploading), and it would take a lot to make me feel comfortable with that. Obviously I might have limited power to decide the outcome, but what power I do have I will apply to preserving myself selfishly. I consider this a way to avoid being "morally scammed".

I assume that many will agree with your response in the mind-"uploading" scenario. At the same time, I think we can safely say that at least some people would go through with it. Would you consider those "uploaded" minds to be persons, or would you object to that?

Besides the "uploading" scenario, what would your limit be for other plausible transhumanist modifications?

Conor Sullivan:
I would consider uploads to be persons, but I'm also much more willing to grant that status to AIs, even radically unnatural ones, than the average human today is. It's not their humanness that makes uploads persons; it's their ability to experience qualia. Qualia (and specifically the ability to suffer) are the basis of moral relevance. Unfortunately, we do not yet have a good answer to the question "What can experience qualia?", and we are already building things deep in the gray area. I don't know what to say about this.

Ditto, except I'd be delighted with a copy-and-delete option, if such an inconceivably complex technology were available.

1 comment

I will answer this from the perspective of extrapolative alignment proposals like Coherent Extrapolated Volition or MetaEthical AI, in which the value system, utility function, decision theory, etc. that is to govern the future is extrapolated in some way from present-day humans.

The assumption is that an appropriate transhuman notion of, e.g., good and evil can be extrapolated from the less contingent parts of human notions of good and evil. In some cases this will consist of identifying deeper principles that have something to say about all possible forms of life and mind, and not just about the situations with which humanity is historically familiar.

For example, principles like: maximize net pleasure and minimize net pain; allow each agent to do whatever it wants, except insofar as it interferes with the freedom of other agents; "from each according to its abilities, to each according to its needs". Each of these is the product of human ethical reflection and observation, yet none is inherently anthropocentric, and each could be the basis of a moral-political order encompassing a diversity of entities far beyond anything that exists today.
