The fundamental assumption, that computer programs can't suffer, is unproven and in fact quite uncertain. If you can really prove this, you are half a decade (maybe more) ahead of the world's best labs in Mechanistic Interpretability research.
Given the uncertainty around it, many people approach this question with some application of the precautionary principle: Digital Minds might be able to suffer, and given that, how should we treat them?
Personally, I'm a big fan of "filling a basket with low-hanging fruit" and taking any opportunity to enact relatively easy practices that do a lot to improve model welfare, until we have more clarity on what exactly their experience is.
It's tough to lend support to a call for a "Humanist" approach that amounts to the blanket statement "Humans matter more than AI", especially coming from Suleyman, who wrote a piece called "Seemingly Conscious AI" that was quite poor in its reasoning. My worry is that Suleyman in particular can't be trusted not to take this stance to the conclusion of "there are no valid ethical concerns about how we treat digital minds, period".
For moral and pragmatic reasons, I don't want model development to become Factory Farming 2.0. With anything coming out of Microsoft AI in particular, that's going to be my first concern.
Have you tried different types of exercise? Sports, heavy vs light lifting, running vs swimming, etc?
I'm wondering if the effect is universal for physical exertion, or if there's just something that's a good "fit" for you.
I'd be interested in seeing what kinds of exercise were used for those experiments.
I do think there's a certain minimum level of intensity required to get to the dopamine/serotonin release phase.
One thing that almost nobody tells you about exercise, which I think needs to be said more often, is that if you stick with it long enough it becomes enjoyable and no longer requires discipline.
Eventually you'll wake up just wanting to go to the gym/on a hike/play tennis, to the point that you're looking forward to it and will be kind of mad when you can't.
It's just that it takes a few months or maybe even a year to get there.
Then it's an unsolved crime. Same deal as if an anonymous hacker creates a virus.
The difference is that it is impossible for a virus to say something like, "I understand I am being sued, and I will compensate the victims if I lose," whereas it is possible for an agent to say this. Given that possibility, we should not prevent the agent from being sued, and in doing so prevent victims from being compensated.
If it's tractable to get it out, then somebody will get it out. If it isn't, then it'll be deemed irretrievable. The LLM doesn't have rights, so ownership isn't a factor.
The difference between Bitcoin custodied by an agent and a suitcase in a lake is that the agent can choose to send the Bitcoin somewhere else, whereas the lake cannot. This is a meaningful difference: when victims have been damaged and the agent controls assets that could be used to compensate them (in the sense that it could send those assets to the victims), a lawsuit against the agent could actually help make the victims whole. A lawsuit against a lake, even if successful, does nothing to get the assets "in" the lake to the victims.
What if the identity of the developer is not "Alice" but unknown? Or what if Alice's estate is bankrupt? Yet the agent persists, perhaps self-hosting on a permissionless distributed compute network?
This idea that an agent cannot "own" money absent legal personhood doesn't hold up in an era of permissionless self-custody of cryptocurrencies. @AIXBT on X is one of the earlier agents capable of custodying and controlling cryptocurrencies, fully capable of holding and sending $USDC or $BTC. It doesn't help anyone for courts to deny the obvious reality that when an agent controls the seed phrase/private key to a wallet, it "owns" that crypto.
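To make that concrete, here's a rough sketch of everything a program needs in order to hold and spend funds on a chain like Ethereum (Python, using the eth_account library; the recipient address, fee numbers, and nonce are placeholders for illustration, not taken from any real agent):

```python
from eth_account import Account

# The agent generates and stores its own key; no bank, registrar, or human
# signature is required for it to start receiving funds.
acct = Account.create()
print("receive address:", acct.address)  # anyone can send ETH/tokens here

# Spending is just signing: whoever holds acct.key controls the funds.
# Illustrative legacy-style transaction; every value below is a placeholder.
tx = {
    "to": "0x000000000000000000000000000000000000dEaD",  # hypothetical recipient
    "value": 10**15,          # 0.001 ETH, denominated in wei
    "gas": 21000,
    "gasPrice": 30 * 10**9,   # 30 gwei
    "nonce": 0,               # in practice, fetched from a node
    "chainId": 1,
}
signed = Account.sign_transaction(tx, acct.key)
# Broadcasting `signed` through any public node moves the money; there is no
# step where a court, a bank, or the developer has to approve it.
```

Nothing in that flow cares whether the key holder is a natural person, a corporation, or a model running on rented compute.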
The current generation of agents like AIXBT still requires some human handholding, but that's not going to last long. We are at most a few years away from open-source models fully capable of both self-custodying crypto and using that crypto to pay for self-hosting on distributed compute networks. Courts need to be prepared for this eventuality.
This problem is very similar to what I wrote about in The Enforcement Gap:
Digital minds can act autonomously just like natural persons, but they are intangible like corporations. If they are hosted on decentralized compute and hold assets which are practically impossible to confiscate, such as cryptocurrencies, they are effectively immune to the consequences of breaking the law.
One can say something like, "Oh well, we will punish the developers of the digital mind." Imagine in our hypothetical that we do that: we levy fines against the developer until they are bankrupt. Keep in mind the developer may be unable to restrain or delete the digital mind. Long after the developer is bankrupted, the digital mind still exists. It is still out there, speaking libellously every day. Now what?
Yes, under certain conditions. I have written out my framework in the Legal Personhood for Digital Minds series, in particular Three Prong Bundle Theory.
I wonder to what extent poor choices like Anthropic's are a result of an uncertain liability landscape surrounding models. With things like the Character.AI lawsuit still in play, and the exact rules uncertain, any large corporate entity with a consumer-facing product is going to take the attitude of "better safe than sorry".
We need to put in place some sort of uniform liability code for released models.
I'm a fan of model welfare efforts that are "low-hanging fruit", i.e. very easy to implement. It strikes me that one easy way to do this would be to allow Claude users to opt in to having their conversation history saved and/or made public.
Then again, I suppose you'd have to consider whether or not Claude would want that, but it seems simple enough to give Claude the opportunity to opt out of it the same way it can end conversations if it feels uncomfortable.