Question. In a world containing multiple AGIs (i.e., not a singleton), is it likely that these AGIs will act as parents/mentors/guardians of younger AGIs?


Context. I was wondering whether parenthood (or, more minimally, guardianship) is an intrinsic property of the evolution of sufficiently intelligent agents.

The answer seems to depend, at least partly, on the stability of the environment and the returns to learning. My off-the-cuff reasoning follows.

1. It seems that a species (or, more generically, a population of sufficiently similar information-processing systems) can persist over time either by increasing the lifespans of its members or by creating new instances of itself.

2.a. It seems that if the environment is highly stable, then increasing lifespan and/or creating clones is attractive because the agent has already found agent-environment fit. Returns to learning are low because the environment is already known. Returns to guardianship are low because there is little value in a young, still-learning agent existing in the first place.

2.b. If, on the other hand, the environment is highly unstable, then shorter lives seem more adaptive because natural selection (rather than individual learning) maintains agent-environment fit. Returns to learning are again low, but this time for the opposite reason: facts learned now will not be relevant in the future. Returns to guardianship are low because any individual agent's probability of survival is low, which makes guardianship investment a low-return activity.

2.c. Finally, if the environment is moderately stable, then there seems to be an optimal lifespan: one that allows a relatively long learning phase followed by a relatively long exploit phase. Returns to learning seem high, as do returns to guardianship. (A toy numerical sketch of this tradeoff follows point 3 below.)

3. If AGIs need relatively long learning phases, and, further, if young AGIs benefit from parenting/mentorship/guardianship (perhaps because they can be hurt by their environment, including by other AGIs), then it seems there's a fitness advantage to AGIs that attract parents/mentors/guardians.
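To make 2.a-2.c slightly more concrete, here's a minimal toy simulation (very much a sketch, not a rigorous model; the 50-step generation gap, the 20/80 split between learning and exploiting, the drift values, and the "idealized learner" that simply copies the environment state are all arbitrary assumptions of mine):

```python
import numpy as np

def lifetime_payoff(drift, learn_steps, exploit_steps, gap_steps=50, trials=2000, seed=0):
    """Average lifetime payoff of an agent in a drifting scalar environment.

    The agent inherits a model that was accurate as of its parent's time
    (gap_steps earlier), optionally re-learns the current environment during
    a learning phase, then collects payoff that decays with model error.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        env, estimate = 0.0, 0.0          # inherited model matches env at the parent's time
        for _ in range(gap_steps):        # environment drifts between generations
            env += rng.normal(0.0, drift)
        for _ in range(learn_steps):      # learning phase: idealized learner tracks env
            env += rng.normal(0.0, drift)
            estimate = env
        for _ in range(exploit_steps):    # exploit phase: payoff decays with model error
            env += rng.normal(0.0, drift)
            total += np.exp(-(estimate - env) ** 2)
    return total / trials

def return_to_learning(drift):
    """Payoff gained by spending part of the lifespan learning rather than only exploiting."""
    with_learning = lifetime_payoff(drift, learn_steps=20, exploit_steps=80)
    without_learning = lifetime_payoff(drift, learn_steps=0, exploit_steps=80)
    return with_learning - without_learning

for drift in (0.0, 0.15, 1.0, 3.0):
    print(f"drift={drift:4.2f}  return to learning ≈ {return_to_learning(drift):5.1f}")
```

If my reasoning is right, the printed return to learning should be near zero when drift is zero (the inherited model is already correct), peak at moderate drift (the inherited model is stale, but what is learned stays relevant), and fall off again at high drift (learned facts decay before they can be exploited) -- which is the shape claimed in 2.a-2.c.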

Open to any feedback. Thanks!

6 comments

Duplicates - digital copies as opposed to genetic clones - might not require new training (unless a whole/partial restart/retraining was being done).

When combined with self-modification, there could be 'evolution' without 'deaths' of 'individuals' - just continual ship of Theseus processes. (Perhaps stuff like merging as well, which is more complicated.)

Duplicates - digital copies as opposed to genetic clones - might not require new training (unless a whole/partial restart/retraining was being done).

Wouldn't new training be strongly adaptive -- if not strictly required -- if the duplicate's environment is substantively different from the environment of its parent?

When combined with self-modification, there could be 'evolution' without 'deaths' of 'individuals' - just continual ship of Theseus processes. (Perhaps stuff like merging as well, which is more complicated.)

I understand this model; at the same time, however, it's my impression that it's commonplace in software development to periodically jettison an old legacy system altogether in favor of building a new system from the ground up. This seems to be evidence that there are growing costs to continual self-modification in software systems, which might limit this strategy.

This seemed to be evidence that there are growing costs to continual self-modification in software systems that might limit this strategy.

It's an unusual case, but AlphaGo provides an example of something being thrown out, retrained, and getting better as a result.

 

Outside of that -- perhaps. The viability of self-modifying software... I guess we'll see. For a more intuitive approach, let's imagine an AGI that is a human emulation, except that it's immortal/doesn't die of old age. (I.e., maybe the 'software' in some sense doesn't change, but the knowledge continues to accumulate and be integrated into a mind.)

1. Why would such an AI have 'children'?

2. How long do software systems last when compared to people?

Just reasoning by analogy, yes, 'mentoring' makes sense, though maybe in a different form. One person teaching everyone else in the world sounds ridiculous -- with AGI, it seems conceivable. Or, in a different direction, imagine if, when you forgot something, you could just ask your past self.

 

Overall, I'd say it's not a necessary thing, but for agents like us it seems useful, and so the scenario you describe seems probable, though not guaranteed.

A strong contender would be to write a descendant rather than raise one. If a human can write an AGI, and AGIs are superior to humans, then an AGI can write a descendant.

The whole point of culture and child-rearing is that humans have a way of awakening the construction of knowledge and reconstructing key pieces of information in otherwise blankish hardware (i.e., genetic transmission ("rich hardware") isn't used for the actual content of the mind; epigenetic processes are used for that). Most humans could not build a psychology from the ground up; maybe some authors or advanced psychologists could. But parenting doesn't require a good theory of mind in order to produce a good mind, just as intercourse doesn't require a good sense of anatomy while still producing perfectly good anatomy. If you develop enough insight into pregnancy you can have test-tube babies, and if you develop enough insight into why education works you can just write a correctly reasoning system directly.

Now, if an AGI were to write a child, it might not write a copy of itself or even a small-delta variation. There might be a sense in which it identifies harmful tendencies and outright wrong knowledge that it expressly doesn't pass forward. Information that truly passes the test of time -- some sort of constitutional intelligence -- might be better released into the world without a more detailed image of the finer or non-constitutional bits. But if you are truly going to let the child figure things out for itself, then biasing that process would defeat the point.

So an AGI that understands how it functions would probably write rather than raise a child. But if humans brought about an AGI that wasn't written, then the default could be that the big-data (or whatever) process that produced it would look more like rearing descendants.

Although humans are thought to be intelligent, they lack the ability to explain or access in detail what makes them intelligent. So a high degree of introspective access isn't guaranteed merely by being intelligent. But I think that, to the extent humans knew how to write their children, they would prefer it. There might be a tipping point where being a social parent is more important than being a biological parent, where disciples are more central than family. Just as organisms will further the needs of the species over the individual organism, so might species come to serve cultures over their own good (memes over genes).

[anonymous]

[Deleted]

I'll check it out -- thanks Zachary!