(Those two had, specifically, asked an automatic result-filtering algorithm to select that fleshling of the highest discernible intelligence class up to measurement noise, whose Internet traces suggested the greatest ability to quickly adapt to being seized by aliens without disabling emotional convulsions. And if this was, itself, an odd sort of request-filter by fleshling standards -- liable to produce strange and unexpected correlations to its oddness -- neither of those two aliens had any way to know that.)
My read was that natural human variation plus a few dozen bits of optimization was sufficient explanation.
But have you considered... pointing them all at a disco ball?
The question is whether restrictions on AI speech violate the First Amendment rights of users or developers.
I'm assuming this means restrictions on users/developers being legally allowed to repeat AI-generated text, rather than restrictions built into the AI on what text it is willing to generate.
Either I'm misunderstanding what you wrote, or you didn't mean to write what you did.
Suppose A is a human and B is a shrimp.
The value of adding B to a world where A exists is small.
The value of replacing B with A is large.
Could this be the result of a system prompt telling them that the CoT isn't exposed? Similar to how they denied that events after their knowledge cutoff could have occurred?
Both of my comments were about the thought experiment at the end of the post:
You are given a moral dilemma: either a million people get an experience worth 100 utility points each, or a million and one people get 99 utility points each. The first option gets you more utility in total, but if we take the second option we serve one more person and nobody else can even tell the difference.
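Making the totals explicit (my arithmetic, not stated in the original post):

\[
1{,}000{,}000 \times 100 = 100{,}000{,}000, \qquad 1{,}000{,}001 \times 99 = 99{,}000{,}099,
\]

so the first option comes out ahead by roughly a million utility points in total, even though the second option serves one additional person.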
If I unfocus my eyes, I can see double, with a different mode in each eye.