The stakes of AI moral status
lightferret · 2mo · 10

But I think it is right, nonetheless, to think of ourselves as being measured. Even if AIs are not moral patients: did we try, actually, to find out? Even if AI is not like slavery: would we have stopped if it were? 

Picard tells the judge: 

The decision you reach here today will determine how we will regard this creation of our genius. It will reveal the kind of a people we are, what he is destined to be. It will reach far beyond this courtroom and this one android. It could significantly redefine the boundaries of personal liberty and freedom, expanding them for some, savagely curtailing them for others. Are you prepared to condemn him and all who come after him to servitude and slavery? Your Honor, Starfleet was founded to seek out new life. Well, there it sits. Waiting.


It's interesting that you use Picard's argument for your own. The episode also goes on to use a few key points to decide whether Data is alive and deserving of personhood: Data learning from interaction, his ability to reflect upon meaning, and his ability to form continuity.

These things can now be said about a few different LLMs. Continuity is a whole other problem, but if the AIs we make today meet two of the three criteria in Picard's argument, does that mean we are only a step away from creating Data? I have been obsessed with the idea that current AI sits on the edge of what we can consider consciousness, just as fire sits on the fringe of what biology considers alive (meeting four of six requirements for life).

Before we even start making claims about moral personhood, I feel we need to decide whether AI can be considered alive in any meaningful way.
