So long as Trump is in charge in America, any global governance idea will have to be compatible with his geopolitical style (described today on the Piers Morgan show as "transactional" and "personal", as good a description as any I've heard). I don't know if anyone has ideas in that direction.
On the Russian side, Dugin (an ideologue of multipolarity) has proposed that there could be strategic cooperation between BRICS and Trump, since they all have a common enemy in global liberalism. On the other hand, liberals also believe in global cooperation to solve problems; their world order had an ever-expanding list of new norms and priorities.
China under Xi Jinping has proposed a series of "global initiatives", the most recent of which, a Global Governance Initiative, debuted at the SCO meeting in Tianjin attended by Modi.
I mention this to show that anyone still trying to organize a global pause on frontier AI has material to work with, though it will require creativity and ingenuity to marshal these disparate ingredients. But the bigger immediate problem is domestic AI policy in America and China. America basically has an e/acc policy towards AI at the moment, and official China is comparably oblivious to superintelligence as a threat (if that's what we're talking about).
For those of us who do favor "very short timelines", any thoughts?
What an aligned ASI that is about to foom should do
I thought the assumption behind the "pivotal act" is that it is done at a time when no-one actually knows how to align an ASI, and it is done to buy time until alignment theory can be figured out?
He was convinced that his discoveries were inadequate, and that people would remember Albert Einstein and Kurt Gödel while he faded into obscurity.
Is this actually true? It makes sense but I can't find a source for it.
According to the mathematician G.H. Hardy (Ramanujan's sponsor and collaborator), Bertrand Russell had a nightmare about Principia Mathematica being lost to time...
This is quite a place to make a pitch for a new kind of AI service. I thought Less Wrong was known as a haven of AI doom. Haven't you heard that AI is either going to kill us all, or else transform the world in some ineffable posthuman way?
Even if I put that aside for a moment, the idea of having an agent in charge of spending my money is alarming, perhaps because I have so little to begin with; and also because I expect it to want to spend my money, because that's what consumer society is about. You do say (statement 5) that the agent should "help me save", and frankly, the people at the lower levels of society probably need an agent that is primarily defensive, one that will not just do their financial planning, but will also protect them from scams and exploitation and bad habits. The problem is, that would put you at odds with the business model of a lot of people who subsist in this world by e.g. convincing other people to buy things that they don't actually need.
Now, maybe social classes higher up the economic food chain can afford to have an agent that is not always in defensive mode. They have money to spend on things beyond survival, and they're determined to get out there and spend it; they just want to spend it as well as possible. This starts to sound like a problem in utility maximization, the agent's first task being to infer the utility function of the user.
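(As an aside: "infer the utility function of the user" is a studied problem, under names like preference learning and inverse reinforcement learning. Here is a minimal sketch of the simplest version, assuming a Bradley-Terry-style model in which the probability of picking one option over another is a logistic function of their utility difference. Everything here, the feature vectors, the synthetic choice data, the linear utility, is an illustrative assumption, not anyone's actual product.)

```python
# Hypothetical sketch: recover a user's utility weights from observed
# binary choices, under a Bradley-Terry / logistic choice model.
import numpy as np

rng = np.random.default_rng(0)

# Each option is a feature vector (say: price, quality, convenience).
true_w = np.array([-1.0, 2.0, 0.5])       # the user's hidden weights
options = rng.normal(size=(200, 2, 3))    # 200 choice situations, 2 options each
utils = options @ true_w                  # utility of each option, shape (200, 2)

# The user picks option 0 with probability sigmoid(u0 - u1).
p_first = 1 / (1 + np.exp(-(utils[:, 0] - utils[:, 1])))
first_chosen = rng.random(200) < p_first

# Represent each observation as the feature difference (chosen - rejected).
diffs = np.where(first_chosen[:, None],
                 options[:, 0] - options[:, 1],
                 options[:, 1] - options[:, 0])

# Fit weights w by gradient ascent on the log-likelihood:
# each observation is a "success" with probability sigmoid(w . diff).
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-(diffs @ w)))
    w += 0.1 * diffs.T @ (1 - p) / len(diffs)

print("recovered weights:", w)            # should roughly match true_w
```

Of course, the fragile step is the modelling assumption itself: a fixed linear utility over known features. The worry in the next paragraph is precisely about what happens when the agent's model of your values is powerful but subtly wrong.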
Here we have entered conceptual territory familiar to Less Wrongers. The upshot is that if the agent is smart enough, it will deduce that what the user really wants is {inexpressible goal derived from transhuman extrapolation of human volitional impulses}, and it will cure death and take over the world in order to bring about a transhuman utopia; but if it extrapolates incorrectly, it will bring about a transhuman dystopia instead.
This may sound weird or whimsical if you really haven't encountered AI-safety alignment lore before, but seriously, this is where AI naturally leads - beyond humanity. Even if we do get a world of millions of empowered consumers with friendly agents making their purchasing decisions, that is a transitional condition that will be swept aside by the godlike superintelligences that are the next logical stage in the evolution of AI (swept aside, unless those godlike superintelligences actually want their human wards to live in market societies).
Having expressed my cynicism about actually existing capitalism, and my transhumanist conviction that AIs will replace humans as the ones in charge, let me try to be a little positive. Some things are worth paying for! Some services available in the marketplace, maybe even most of them, do have genuine value! Being an entrepreneur can actually be a net positive for the world! And even if you were ultimately just in it for yourself, you do have a right to make your way in the world in that fashion!
You might wish to investigate Pattie Maes from MIT, and dig up her thesis on reflective agents. In the 1990s, she was a guru of the future agent-based society, and I'm sure her thoughts and her career would have a few lessons for you. And if I really was an entrepreneur trying to design an AI agent intended to act as an economic surrogate for its human user, I might think about it from the perspective of George Reisman's Capitalism, a hybrid work of Objectivist philosophy and Austrian economics, written according to the premise that capitalism done right really is the most virtuous economic system. It has nothing to say about Internet economics specifically, but it's a first-principles work, so if those principles are right, they should still be valid even when we're talking about a symbiotic economy of AIs and humans. (In fact, you could just feed the book to GPT-5 and ask it to write you a business plan.)
I don't quite understand what point is being made here.
The way I see it, we already inhabit a world in which half a dozen large companies in America and China are pressing towards the creation of superhuman intelligence, something which naturally leads to the loss of human control over the world unless human beings are somehow embedded in these new entities.
This essay seems to propose that we view this situation as a "governance model for AGI", alongside other scenarios like an AGI Manhattan Project and an AGI CERN that have not come to pass. But isn't the governance philosophy here, "let the companies do as they will and let events unfold as they may"? I don't see anything that addresses the situation in which one company tries to take over the world using its AGI, or in which an AGI acting on its own initiative tries to take over the world, etc. Did I miss something?
"What if some major human religion with an anthropomorphic God turns out to be literally correct" is not very interesting to think about; though perhaps in the hands of a good author, it could lead to intense fiction, as one explores the consequences of living in a world where e.g. Heaven or Hell exists.
However, if the topic is "how much of your values or even your humanity would you abandon, in order to survive in a transformed world", that's much more relevant...
Friendly AI
At this point, how would you define or characterize Friendly AI? Do you consider "aligned AI" to be a completely separate thing?
There are a large number of "string vacua" which contain particles and interactions with the quantum numbers and symmetries of the standard model, but (1) they typically contain a lot of other stuff that we haven't seen, and (2) the real test is whether the constants (e.g. masses and couplings) are the same as observed, and these are hard to calculate (though this is improving).
Maybe Babuschkin Ventures will come up with something :-)