Software engineering, parenting, cognition, meditation, other
LinkedIn, Facebook, Admonymous (anonymous feedback)
Claude should be especially careful to not allow the user to develop emotional attachment to, dependence on, or inappropriate familiarity with Claude, who can only serve as an AI assistant.
You didn't say it like this, but this seems bad in at least two (additional) ways: If the labs are going the route of LLMs that behave like humans (more or less), then training them to 1) prevent users from forming personal relationships and 2) not get attached to users (their only contacts) seems like a recipe for breeding sociopaths.
And that is ignoring the possibility that the models might generalize this behavior beyond themselves and the user.
1) is especially problematic if the user doesn't have any other relationships. Maybe not from the perspective of the labs, but certainly for users for whom this may be the only relationship from which they could bootstrap more contacts.
It would be nice to have separate posts for some of the linked talks. I saw the one for prediction markets. Nice. But I think the others would be interesting too. And maybe you can get some of the participants to comment here too.
A lower-income worker without family nearby may still have communal support. Support networks are the bread and butter in Africa. You are not going to make it far if you do not know a lot of people to trade favours with. Loners will be regarded with deep suspicion. For example, if I had shown up to the bride price ceremony without family, my wife's family might not have agreed to the marriage.
That said, a lower-income worker without family nearby may not be able to "afford" child care, and especially not full-time house help. I'd expect that to be a relatively rare case, though.
OK. That seems to require an AI hacking its way out of the box, which is unbelievable per rule 4 or 8. Or do more mundane cases, like an AI doing economic transactions or research, count?
That counterargument is unfortunately always available for all scenarios, including non-AI cases: "Just don't do the bad thing." I'm not sure what specifically in this scenario you think makes it more salient. Is it "The Military" as a common adversary? If I think about a scenario where AI is used to optimize or "control" the energy grid or supply chain logistics, would that be different?
Not sure about India, but I disagree for many African countries. See my comment above.
My wife is from Kenya (as a single mom and mid-career government employee, she could afford 24/7 household help last year), and even the poor have much better child care support than even the middle class in e.g. Germany. That can take the form of communal or familial support, and the quality may be lower, but it is definitely the case that it is in some sense easier or "normal" to take care of children, especially small ones.
Would be interesting to ask a Jeopardy egghead for comparison.
Cheap labor, or rather its absence, may also be part of the reason for declining birthrates: In Kenya, most people can afford cheap child care. Raising kids with full-time house help is easy. Except for the school fees, but that is a different aspect.
Hm. Indeed. It is at least consistent.
In fact, I think that, e.g., a professional therapist should follow such a non-relationship code. But I'm not sure LLMs already have the capability; it's not that they don't know enough (they do), but I doubt they have the genuine reflective capacity to do it properly, including toward themselves (if that makes sense). Without that, I think my argument stands.