Mitchell_Porter
Comments

Sorted by
Newest
No wikitag contributions to display.
xAI's new safety framework is dreadful
Mitchell_Porter · 16h · 20

Maybe Babuschkin Ventures will come up with something :-) 

Reply
koanchuk's Shortform
Mitchell_Porter · 2d · 60

So long as Trump is in charge in America, any global governance idea will have to be compatible with his geopolitical style (described today on the Piers Morgan show as "transactional" and "personal", as good a description as any I've heard). I don't know if anyone has ideas in that direction. 

On the Russian side, Dugin (an ideologue of multipolarity) has proposed that there could be strategic cooperation between BRICS and Trump, since they all have a common enemy in global liberalism. On the other hand, liberals also believe in global cooperation to solve problems; their world order had an ever-expanding list of new norms and priorities. 

China under Xi Jinping has proposed a series of "global initiatives", the most recent of which, a Global Governance Initiative, debuted at the SCO meeting in Tianjin attended by Modi. 

I mention this to show that anyone still trying to organize a global pause on frontier AI has material to work with, though it will require creativity and ingenuity to marshal these disparate ingredients. But the bigger immediate problem is domestic AI policy in America and China. America basically has an e/acc policy towards AI at the moment, and official China is comparably oblivious to superintelligence as a threat (if that's what we're talking about). 

Reply
ryan_greenblatt's Shortform
Mitchell_Porter · 3d · 31

For those of us who do favor "very short timelines", any thoughts? 

Reply
We should think about the pivotal act again. Here's a better version of it.
Mitchell_Porter · 5d · 20

What an aligned ASI that is about to foom should do

I thought the assumption behind the "pivotal act" is that it is done at a time when no one actually knows how to align an ASI, and it is done to buy time until alignment theory can be figured out? 

Reply
Von Neumann's Fallacy and You
Mitchell_Porter · 9d · 210

He was convinced he would fade into obscurity and that his discoveries were inadequate. He believed that people would remember Albert Einstein and Kurt Gödel and he would fade into obscurity.

Is this actually true? It makes sense but I can't find a source for it. 

According to the mathematician G.H. Hardy (Ramanujan's sponsor and collaborator), Bertrand Russell had a nightmare about Principia Mathematica being lost to time...

Reply
The Future of AI Agents
Mitchell_Porter · 9d · 71

This is quite a place to make a pitch for a new kind of AI service. I thought Less Wrong was known as a haven of AI doom. Haven't you heard that AI is either going to kill us all, or else transform the world in some ineffable posthuman way? 

Even if I put that aside for a moment, the idea of having an agent in charge of spending my money is alarming, perhaps because I have so little to begin with, and also because I expect it to want to spend my money, since that's what consumer society is about. You do say (statement 5) that the agent should "help me save", and frankly, the people at the lower levels of society probably need an agent that is primarily defensive, one that will not just do their financial planning but will also protect them from scams, exploitation, and bad habits. The problem is that this would put you at odds with the business model of a lot of people who subsist in this world by, e.g., convincing other people to buy things that they don't actually need. 

Now, maybe social classes higher up the economic food chain can afford an agent that is not always in defensive mode. They have money to spend on things beyond survival, and they're determined to get out there and spend it; they just want to spend it as well as possible. This starts to sound like a problem in utility maximization, the agent's first task being to infer the utility function of the user. 
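(As a toy illustration of that inference step, invented here rather than taken from the post: one standard way to infer a linear utility u(x) = w·x from a user's observed pairwise choices is maximum likelihood under a Bradley-Terry / logit choice model. All feature names, data, and parameters below are made up for the sketch.)

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def infer_utility(choices, dim, lr=0.1, epochs=500):
    """Fit weights w of a linear utility from pairwise choices.

    choices: list of (chosen_features, rejected_features) pairs.
    Model: P(chosen over rejected) = sigmoid(u(chosen) - u(rejected)).
    """
    w = [0.0] * dim
    for _ in range(epochs):
        for chosen, rejected in choices:
            p = sigmoid(dot(w, chosen) - dot(w, rejected))
            # stochastic gradient ascent on the log-likelihood
            for i in range(dim):
                w[i] += lr * (1.0 - p) * (chosen[i] - rejected[i])
    return w

# Toy data: the "true" user values quality (index 0) and dislikes hype (index 1).
true_w = [2.0, -1.0]
random.seed(0)
choices = []
for _ in range(200):
    a = [random.random(), random.random()]
    b = [random.random(), random.random()]
    p = sigmoid(dot(true_w, a) - dot(true_w, b))
    choices.append((a, b) if random.random() < p else (b, a))

w_hat = infer_utility(choices, dim=2)
print(w_hat)  # signs should match true_w: positive on quality, negative on hype
```

The point of the sketch is just that preference data alone pins down the utility only up to noise and the chosen model class; the extrapolation problems discussed next are about what happens outside that class.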

Here we have entered conceptual territory familiar to Less Wrongers. The upshot is that if the agent is smart enough, it will deduce that what the user really wants is {inexpressible goal derived from transhuman extrapolation of human volitional impulses}, and it will cure death and take over the world in order to bring about a transhuman utopia; but if it extrapolates incorrectly, it will bring about a transhuman dystopia instead. 

This may sound weird or whimsical if you really haven't encountered AI-safety alignment-lore before, but seriously, this is where AI naturally leads - beyond humanity. Even if we do get a world of millions of empowered consumers with friendly agents making their purchasing decisions, that is a transitional condition that will be swept aside by the godlike superintelligences that are the next logical stage in the evolution of AI (swept aside, unless those godlike superintelligences actually want their human wards to live in market societies). 

Having expressed my cynicism about actually existing capitalism, and my transhumanist conviction that AIs will replace humans as the ones in charge, let me try to be a little positive. Some things are worth paying for! Some services available in the marketplace, maybe even most of them, do have genuine value! Being an entrepreneur can actually be a net positive for the world! And even if you were ultimately just in it for yourself, you do have a right to make your way in the world in that fashion! 

You might wish to investigate Pattie Maes of MIT, and dig up her thesis on reflective agents. In the 1990s she was a guru of the future agent-based society; I'm sure her thoughts and her career would have a few lessons for you. And if I really were an entrepreneur trying to design an AI agent intended to act as an economic surrogate for its human user, I might think about it from the perspective of George Reisman's Capitalism, a hybrid work of Objectivist philosophy and Austrian economics, written on the premise that capitalism done right really is the most virtuous economic system. It has nothing to say about Internet economics specifically, but it's a first-principles work, so if those principles are right, they should still be valid even when we're talking about a symbiotic economy of AIs and humans. (In fact, you could just feed the book to GPT-5 and ask it to write you a business plan.) 

Reply
Open Global Investment as a Governance Model for AGI
Mitchell_Porter · 10d · 120

I don't quite understand what point is being made here. 

The way I see it, we already inhabit a world in which half a dozen large companies in America and China are pressing towards the creation of superhuman intelligence, something which naturally leads to the loss of human control over the world unless human beings are somehow embedded in these new entities. 

This essay seems to propose that we view this situation as a "governance model for AGI", alongside other scenarios like an AGI Manhattan Project and an AGI CERN that have not come to pass. But isn't the governance philosophy here, "let the companies do as they will and let events unfold as they may"? I don't see anything that addresses the situation in which one company tries to take over the world using its AGI, or in which an AGI acting on its own initiative tries to take over the world, etc. Did I miss something? 

Reply
Would you sell your soul to save it? ( I am NOT a Christian)
Mitchell_Porter · 10d · 30

"What if some major human religion with an anthropomorphic God turns out to be literally correct" is not very interesting to think about; though perhaps in the hands of a good author, it could lead to intense fiction, as one explores the consequences of living in a world where e.g. Heaven or Hell exists. 

However, if the topic is "how much of your values or even your humanity would you abandon, in order to survive in a transformed world", that's much more relevant...

Reply
state of the machine
Mitchell_Porter · 10d · 20

Friendly AI

At this point, how would you define or characterize Friendly AI? Do you consider "aligned AI" to be a completely separate thing? 

Reply
Agent foundations: not really math, not really science
Mitchell_Porter · 18d · 42

There are a large number of "string vacua" that contain particles and interactions with the quantum numbers and symmetries of the standard model, but (1) they typically contain a lot of other stuff that we haven't seen, and (2) the real test is whether the constants (e.g. masses and couplings) are the same as observed, and these are hard to calculate (though the calculations are improving). 

Reply
Posts
8 · Mitchell_Porter's Shortform · 2y · 24
2 · Value systems of the frontier AIs, reduced to slogans · 2mo · 0
72 · Requiem for the hopes of a pre-AI world · 3mo · 0
12 · Emergence of superintelligence from AI hiveminds: how to make it human-friendly? · 4mo · 0
21 · Towards an understanding of the Chinese AI scene · 5mo · 0
11 · The prospect of accelerated AI safety progress, including philosophical progress · 6mo · 0
23 · A model of the final phase: the current frontier AIs as de facto CEOs of their own companies · 6mo · 2
21 · Reflections on the state of the race to superintelligence, February 2025 · 6mo · 7
29 · The new ruling philosophy regarding AI · 10mo · 0
20 · First and Last Questions for GPT-5* [Question] · 2y · 5
3 · The national security dimension of OpenAI's leadership struggle · 2y · 3