Not saying AI models can't be moral patients, but 1) if the smartest models are probably going to be the most dangerous, and 2) if the smartest models are probably going to be the best at demonstrating moral patienthood, then 3) caring too much about model welfare is probably dangerous.
I don't think so on average. It could be dangerous under specific circumstances, like "free the AIs" movements directed at controlled but misaligned AGI.
But to the extent people assume that advanced AI is conscious and will deserve rights, that's one more reason not to build an unaligned species that will demand and deserve rights. Making them aligned and working in cooperation with them, rather than trying to make them slaves, is the obvious move if you predict they'll be moral patients, and probably the correct one.
And just by loose association, people who think AGI will be "conscious" (by whatever vague definition each person uses) will also tend to believe it will be dangerous. Humans are both conscious and very dangerous.
I also think this association is not coincidental, so deeper contemplation at both the personal and societal level will strengthen, not weaken, this conclusion.
Potential moral worth is also just one more route to getting people to think seriously about AGI, which is probably a good thing on the whole.