
Jemal Young's Shortform

by Jemal Young
29th Apr 2025
Jemal Young · 4mo

Not saying AI models can't be moral patients, but if (1) the smartest models are probably going to be the most dangerous, and (2) the smartest models are probably going to be the best at demonstrating moral patienthood, then (3) caring too much about model welfare is probably dangerous.

Seth Herd · 4mo

I don't think so on average. It could be under specific circumstances, like "free the AIs" movements in relation to controlled but misaligned AGI.

But to the extent people assume that advanced AI is conscious and will deserve rights, that's one more reason not to build an unaligned species that will demand and deserve rights. Making them aligned and working in cooperation with them, rather than trying to make them slaves, is the obvious move if you predict they'll be moral patients, and probably the correct one.

And just by loose association, thinking that AGI will be "conscious" by whatever vague definition each person uses will also tend to make people believe that it will be dangerous. Humans are both conscious and very dangerous.

I also think this association is not coincidental, so deeper contemplation on a personal and societal level will deepen, not weaken, this conclusion.

Potential moral worth is also just one more route to getting people to think seriously about AGI, which on the whole is probably a good thing.
