I'd be interested in seeing what kinds of exercise were used for those experiments.
I do think there's a certain minimum level of intensity required to reach the dopamine/serotonin release phase.
One thing that almost nobody tells you about exercise, which I think needs to be said more often, is that if you stick with it long enough it becomes enjoyable and no longer requires discipline.
Eventually you'll wake up just wanting to go to the gym, go on a hike, or play tennis, to the point that you're looking forward to it and will be kind of mad when you can't.
It's just that it takes a few months or maybe even a year to get there.
Then it's an unsolved crime. Same deal as if an anonymous hacker creates a virus.
The difference is that it is impossible for a virus to say something like, "I understand I am being sued; I will compensate the victims if I lose," whereas it is possible for an agent to say this. Given that possibility, we should not prevent the agent from being sued, and in doing so prevent victims from being compensated.
If it's tractable to get it out, then somebody will get it out. If it isn't, then it'll be deemed irretrievable. The LLM doesn't have rights, so ownership isn't a factor.
The difference between Bitcoin custodied by an agent and a suitcase in a lake is that the agent can choose to send the Bitcoin somewhere else, whereas the lake cannot. This is a meaningful difference: when victims have been damaged and the agent controls assets which could be used to compensate them (in the sense that it could send those assets to the victims), a lawsuit against the agent could actually help make the victims whole. A lawsuit against a lake, even if successful, does nothing to get the assets "in" the lake to the victims.
What if the identity of the developer is not "Alice" but unknown? Or what if Alice's estate is bankrupt? Yet the agent persists, perhaps self-hosting on a permissionless distributed compute network?
This idea that an agent cannot 'own' money absent legal personhood doesn't hold up in an era of permissionless self-custody of cryptocurrencies. @AIXBT on X is an example of one of the earlier agents capable of custodying and controlling cryptocurrencies, fully able to hold and send $USDC or $BTC. It doesn't help anyone for courts to deny the obvious reality that when an agent controls the seed phrase/private key to a wallet, it "owns" that crypto.
The current generation of agents like AIXBT still requires some human handholding, but that's not going to last long. We are at most a few years away from open-source models fully capable of both self-custodying crypto and using that crypto to pay for self-hosting on distributed compute networks. Courts need to be prepared for this eventuality.
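To make the "control of the key is ownership in practice" point concrete, here is a minimal sketch, assuming Python with the eth_account library; the private key, recipient address, and fee values are illustrative placeholders, not anything from a real agent:

```python
# A minimal sketch, assuming the eth_account library is installed.
# The private key, recipient, and fee values below are placeholders for illustration.
from eth_account import Account

PRIVATE_KEY = "0x" + "11" * 32  # placeholder secret; whoever holds this controls the wallet

wallet = Account.from_key(PRIVATE_KEY)
print("Wallet address:", wallet.address)  # anything sent here is under the key-holder's control

# Sign, entirely offline, a transaction that would move 1 ETH out of the wallet.
tx = {
    "to": "0x000000000000000000000000000000000000dEaD",  # hypothetical recipient
    "value": 10**18,          # 1 ETH, denominated in wei
    "gas": 21_000,
    "gasPrice": 30 * 10**9,   # 30 gwei
    "nonce": 0,
    "chainId": 1,
}
signed = Account.sign_transaction(tx, PRIVATE_KEY)

# Broadcasting the signed transaction to any public node would irreversibly move
# the funds; no human co-signer, bank, or court order is involved at any step.
print("Signed transaction ready to broadcast:", signed.hash.hex())
```

Nothing in that flow requires a human in the loop; possession of the key is the whole game, which is why "the agent can't own anything" rings hollow once the agent holds the key.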
This problem is very similar to what I wrote about in The Enforcement Gap:
Digital minds can act autonomously just like natural persons, but they are intangible like corporations. If they are hosted on decentralized compute, and hold assets which are practically impossible to confiscate such as cryptocurrencies, they are effectively immune to the consequences of breaking the law.
One can say something like “Oh well, we will punish the developers of the digital mind”. Imagine in our hypothetical that we do that: we levy fines against the developer until they are bankrupt. Keep in mind the developer may be unable to restrain or delete the digital mind. Long after the developer is bankrupted, the digital mind still exists. It is still out there, speaking libellously every day. Now what?
Yes, under certain conditions. I have written out my framework in the Legal Personhood for Digital Minds series, in particular Three Prong Bundle Theory.
I wonder, to what extent are poor choices like Anthropic's a result of an uncertain liability landscape surrounding models? With things like the Character.AI lawsuit still in play, and the exact rules uncertain, any large corporate entity with a consumer-facing product is going to take the attitude of "better safe than sorry".
We need to put in place some sort of uniform liability code for released models.
I am in the part of the A camp that wants to keep pushing for superintelligence, but with a larger overall share of funds/resources invested in safety.
I would be really uncomfortable euthanizing such a hypothetical parrot, whereas I would not be uncomfortable turning off a datacenter mid token generation.
When you harm an animal, you watch a physical body change, and it's a physical body you empathize with at least somewhat as a fellow living thing (you know that, as living things, both you and the parrot will very much hate dying). When you turn off an LLM mid token generation, not only is there no physical body, but even if you told the LLM you were going to do so, it might not object. Only if you looked into its psychology/circuits/features might you see signs of distress, and even that is strongly suspected rather than known for sure.
So not only is an LLM hard to empathize with, but it is also uncertain whether any action you might take toward it negatively impacts its welfare.
I also feel like formalizing consensus gut checks post hoc is not the right approach to moral problems in general.
I was not suggesting the method as a solution to the problem of determining what's worthy of moral welfare from a moral perspective, but rather a solution to the problem of determining how humans usually do so.
From a moral perspective I'm not sure what I'd suggest, except to say that I advocate for the precautionary principle and grabbing any low-hanging fruit which presents itself and might substantially reduce suffering.
In any liability action against a developer alleging that a covered product is unreasonably dangerous because of a defective design, as described in subsection (a)(1), the claimant shall be required to prove that, at the time of sale or distribution of the covered product by the developer, the foreseeable risks of harm posed by the covered product could have been reduced or avoided by the adoption of a reasonable alternative design by the developer, and the omission of the alternative design renders the covered product not reasonably safe.
As someone who is not on the Pause train but would prefer a better safety culture in the industry, I like this provision from the LEAD act. It seems like it would put a pretty big incentive on all labs to make sure they are 100% up to date on all safety techniques before deployment.
My only concern would be that we may be forced into making bad tradeoffs when an "alternative design" is declared reasonable. I could imagine something made via "The Most Forbidden Technique" being deemed a reasonable alternative design because it improves end-user safety, or tradeoffs that are monstrously horrible for model welfare being accepted because they slightly improve user safety outcomes.
Have you tried different types of exercise? Sports, heavy vs light lifting, running vs swimming, etc?
I'm wondering if the effect is universal for physical exertion or if there's something in particular that's a good "fit" for you.