So a greater power is necessary to prevent bad actors from concentrating it?
No. Amish society is pretty successful at stopping concentrations of power, mostly via peer pressure.
If we’re being honest, the compensation would probably have to be capped at some maximum amount. If the AIs gave up an 80% chance at world takeover for our benefit, it would probably not be within an AI company’s power to give away 80% of all future resources in compensation (or anything close to that).
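A rough sketch of the arithmetic behind that claim (my framing, assuming for illustration a risk-neutral AI, and writing $V$ for the value of all future resources): fully compensating an AI for forgoing a takeover attempt with success probability $p = 0.8$ would require a payment of roughly

$$\text{compensation} \approx p \cdot V = 0.8V,$$

which is far beyond what any single company could credibly commit to paying.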
It seems quite hard to predict whether an AI company would have such power under conditions that unusual. After all, there would be a pretty powerful AI trying to enforce the agreement.
I don't see the benefit to setting a cap. Let's just inform the AI as best we can about the uncertainties involved, and promise to do the best we can to uphold agreements.
As a donor, I'm nervous about charities that pay fully competitive wages, although that concern gets only about a 2% weight in my decisions. If someone can clearly make more money somewhere else, that significantly reduces my concern that they'll mislead me about the value of their charity.
Remember, if the theories were correct and complete, then they could be turned into simulations able to do all the things that the real human cortex can do[5]—vision, language, motor control, reasoning, inventing new scientific paradigms from scratch, founding and running billion-dollar companies, and so on.
So here is a very different kind of learning algorithm waiting to be discovered.
There may be important differences in the details, but I've been surprised by how similar the behavior is between LLMs and humans. That surprise comes despite my having suspected for decades that artificial neural nets would play an important role in AI.
It seems far-fetched that a new paradigm is needed. Saying that current LLMs can't build billion-dollar companies seems a lot like saying that 5-year-old Elon Musk couldn't build a billion-dollar company. Musk didn't seem to need a paradigm shift to get from the abilities of a 5-year-old to those of a CEO. Accumulation of knowledge seems like the key factor.
But thanks for providing an argument for foom that is clear enough that I can be pretty sure why I disagree.
I've donated $30,000.
The budget is attempting to gut nuclear.
Yet the stock prices of nuclear-related companies that I'm following have done quite well this month (e.g. SMR). There doesn't seem to be a major threat to nuclear power.
I expect deals between AIs to make sense at the stage that AI 2027 describes because the AIs will be uncertain what will happen if they fight.
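A minimal sketch of the bargaining logic (my framing, not from AI 2027): suppose an AI assigns probability $p$ to winning a fight, and fighting destroys a fraction $d > 0$ of the total resources at stake. Then a deal giving it a share equal to its win probability beats fighting:

$$\underbrace{p\,(1 - d)}_{\text{expected payoff from fighting}} \;<\; \underbrace{p}_{\text{payoff from the deal}},$$

and the same inequality holds for the other side, so any enforceable split near the win probabilities leaves both AIs better off than fighting.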
If AI developers expected winner-take-all results, I'd expect them to be publishing less about their newest techniques, and complaining more about their competitors' inadequate safety practices.
Beyond that, I get a fairly clear vibe that's closer to "this is a fascinating engineering challenge" than to "this is a military conflict".
It is too decentralized to qualify as the kind of centralized power that WalterL was talking about, and probably too decentralized to fit the concerns that Gabriel expressed.