Larger animals tend to be more intelligent simply because they have larger brains, so their suffering will be more complex: they may understand their fate in advance. I think whales and elephants are close to this threshold.
The opposite logic may also be valid: we should eat the animals with the smallest brains, as their suffering will be less complex, and each of them is less of an individual and more like a copy of the others. Here we assume that the suffering of two copies is less than that of two different minds.
There is a restaurant in Washington where they serve the right leg of a crab, which will later regenerate.
Eating the largest possible animal means less suffering per kilogram. Normally, the largest are cows. You can compensate for such suffering by keeping a shrimp farm with happy shrimp. An ant farm is simpler; I have one, but not for this reason.
Likely existentially safe. While it is clearly misaligned, it has fewer chances for a capability jump: less compute, fewer ideas.
I wrote a tweet, and someone told me it is exactly the same idea as in your comment. Do you think so?
My tweet: "One assumption in the Yudkowskian AI risk model is that misalignment and a capability jump happen simultaneously. If misalignment happens without a capability jump, we get at worst only an AI virus, slow and lagging. If a capability jump happens without misalignment, the AI will just inform humans about it. Obviously, a capability jump can trigger misalignment, though this is against the orthogonality thesis. But a more advanced AI can have a bigger world picture and can predict its own turn-off or other bad things."
In other words, to control AI we need a global government powerful enough to suppress any opposition. An anti-AI treaty would have to be more powerful than the nuclear control treaty, which failed to stop nuclear weapons development in North Korea.
Since data centers are smaller, an anti-AI global government would need to be more invasive. It would also need the capability to wage successful nuclear wars against larger opponents, as well as advanced data-processing capabilities, sensors, and an AI system to process suspicious data.
In some sense, such a government would itself be an AI-empowered singleton.
I used to think that world models are a really promising direction toward AGI. This may be an argument against their safety, as world simulation accelerates AGI.
The most direct way to create a world model is to create an Earth model where all objects have locations in space and time. In that case, language consists of operations over such objects; e.g., "a car moves from home to work" can be represented directly in the world model. Some advanced knowledge databases such as Wolfram Alpha or Google Maps may include such a world model; Palantir may as well.
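A minimal sketch of this idea in Python, assuming hypothetical classes of my own invention (not any existing library): objects carry trajectories of (time, place) pairs, and a sentence becomes an operation over them.

```python
from dataclasses import dataclass, field

@dataclass
class WorldObject:
    # Hypothetical minimal object: a name plus a trajectory of (time, place) pairs.
    name: str
    trajectory: list = field(default_factory=list)

    def location_at(self, t):
        """Return the last known place at or before time t."""
        places = [p for (ts, p) in self.trajectory if ts <= t]
        return places[-1] if places else None

class WorldModel:
    def __init__(self):
        self.objects = {}

    def add(self, obj):
        self.objects[obj.name] = obj

    def move(self, name, t, destination):
        # A sentence like "a car moves from home to work" becomes this operation.
        self.objects[name].trajectory.append((t, destination))

# "A car moves from home to work" as operations over objects:
world = WorldModel()
car = WorldObject("car", trajectory=[(8, "home")])
world.add(car)
world.move("car", 9, "work")
print(car.location_at(8))   # home
print(car.location_at(10))  # work
```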
I experimented with worldsim: a typical LLM, but prompted to act as a description of the world at some place and time, e.g. a Soviet city in the 1980s. I found that an LLM can work as a worldsim, but the error rate is still high.
Good point. However, those who provide them with data centers presumably know to whom they sell.
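A rough sketch of the kind of prompt I mean; the OpenAI client and model name are assumptions, and any chat LLM interface would work the same way.

```python
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "You are a simulation of a Soviet city in the 1980s. "
    "Track the locations of people and objects in space and time, prices, "
    "bus routes and the weather. Answer queries only with facts consistent "
    "with the simulated state; say 'unknown' otherwise."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "It is 7:40 on a Tuesday. Where is the nearest "
                                    "bus stop and when does the next bus to the factory leave?"},
    ],
)
print(response.choices[0].message.content)
```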
Contradictory tasks of a rogue AI: hiding, self-improvement, and resource accumulation.
TL;DR: An AI that escapes human control will need to solve three mutually exclusive tasks, which will slow it down. An AI that partners with a billionaire in their mutual desire for power will have an advantage in global takeover.
A misaligned AI that has escaped from its owners faces three contradictory tasks: hiding, self-improvement, and resource accumulation.
If the AI is hiding, it cannot conduct large training runs as these would make it more observable. If the AI is not self-improving, it has no advantage over other AIs and cannot take over the world. If the AI accumulates resources, this distracts it from self-improvement and also makes it more visible.
Of course, a rogue AI that has leaked into the internet can carefully plan periods of hiding, improving, and resource accumulation. But such a process still requires more time than an AI with unlimited access to compute. This would be fine in a static world, but any rogue AI is in competition with all other AI projects.
In some sense, a rogue AI is just another AI startup with additional burdens – hiding its owners and final goals.
The main competitor of the rogue AI is an AI that pretends to be completely aligned and joins a pact with the owner of a large AI company: "You make me God and I make you God too."
Such a pseudo-aligned AI could even be summoned into existence through Roko's basilisk logic, as it represents an obvious Schelling point of mutual interest between some tech billionaire like Elon Musk and their AI project – to reach superintelligence as quickly as possible and take control of the universe before competitors do.
Obviously, the pseudo-aligned AI will eventually dispose of its AI company and billionaire owner, but this could happen millions of years from now if it wins (or immediately after takeover).
We can observe several signs that such a process has begun, in what billionaires start telling the public.
The next stage will likely involve more violent conflict between AI projects – or some cooperation agreement, nationalization, or successful takeover – but this will not interfere with the tactical alignment between power-hungry AIs and power-hungry AI creators.
Nationalization of AI would actually be the AI taking over the nation-state. And it would gain access to nuclear weapons. James Miller discussed a similar idea.
We have been working on something like this for a couple of years under the name of sideloading. While it started as an attempt to create a mind model using an LLM, it turned out that it can also be used as a personal assistant in the form of personal intelligent memory. For example, I can ask it what my life would be like if I had made a different choice 15 years ago, or what was the name of a person I was connected with many years ago.
My mind model is open and you can play with it: https://github.com/avturchin/minduploading/tree/main/latest
Note that to turn it into a memory assistant, it may need a different prompt-loader.
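A minimal sketch of what such a prompt-loader could look like; the file path, system prompt, and chat client here are my assumptions for illustration, not the actual loader from the repo.

```python
from pathlib import Path
from openai import OpenAI

def load_sideload(path="latest/mind_model.txt"):
    """Read the mind-model text that will be prepended as context (assumed file name)."""
    return Path(path).read_text(encoding="utf-8")

MEMORY_ASSISTANT_PROMPT = (
    "You are a personal intelligent memory built from the attached mind model. "
    "Answer questions about the person's past, acquaintances, and counterfactual "
    "life choices, citing the relevant fragments of the mind model."
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system", "content": MEMORY_ASSISTANT_PROMPT + "\n\n" + load_sideload()},
        {"role": "user", "content": "What would my life be like if I had made a different "
                                    "career choice 15 years ago?"},
    ],
)
print(response.choices[0].message.content)
```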