Philip Niewold · Comments
Open Global Investment as a Governance Model for AGI
Philip Niewold · 1mo

Your working paper, "Open Global Investment as a Governance Model for AGI," provides a clear, pragmatic, and much-needed baseline for discussion by grounding a potential governance model in existing legal and economic structures. The argument that OGI is more incentive-compatible and achievable in the short term than more idealistic international proposals is a compelling one.

However, I wish to offer a critique based on the concern that the OGI model, by its very nature, may be fundamentally misaligned with the scale and type of challenge that AGI presents. My reservations can be grouped into three main points.

1. The Inherent Limitations of Shareholder Primacy in the Face of Existential Stakes

The core of the OGI model relies on a corporate, shareholder-owned structure. While you thoughtfully include mechanisms to mitigate the worst effects of pure profit-seeking (such as Public Benefit Corporation charters, non-profit ownership, and differentiated share classes), the fundamental logic of such a system remains beholden to shareholder interests. This creates a vast principal-agent problem where the "principals" (all of humanity) have their fate decided by "agents" (a corporation's board and its shareholders) who are legally and financially incentivized to prioritize a much narrower set of goals.

This leads to a global-scale prisoner's dilemma. In a competitive environment (even OGI-1 would have potential rivals), the pressure to generate returns, achieve market dominance, and deploy capabilities faster will be immense. This could force the AGI Corp to make trade-offs that favor speed over safety, or profit over broad societal well-being, simply because the fiduciary duty to shareholders outweighs a diffuse and unenforceable duty to humanity. The governance mechanisms of corporate law were designed to regulate economic competition, not to steward a technology that could single-handedly determine the future of sentient life.
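To make the competitive dynamic concrete, here is a minimal sketch in Python with purely illustrative payoff numbers (they are mine, not the paper's): two labs each choose between prioritizing safety and racing, racing strictly dominates, and the only Nash equilibrium is mutual racing even though mutual caution is better for both.

```python
# A minimal sketch of the competitive dynamic described above, with purely
# illustrative payoffs. Each AGI lab chooses to prioritize "safety" or to
# "race"; racing against a careful rival captures the market, so racing
# strictly dominates even though mutual caution is better for everyone.

from itertools import product

ACTIONS = ("safety", "race")

# PAYOFF[(my_action, rival_action)] = my payoff (illustrative numbers)
PAYOFF = {
    ("safety", "safety"): 3,   # both careful: good shared outcome
    ("safety", "race"):   0,   # I am careful, rival captures the market
    ("race",   "safety"): 5,   # I race, rival is careful: I win big
    ("race",   "race"):   1,   # both race: risky, low-value outcome
}

def best_response(rival_action):
    """My payoff-maximizing action given what the rival does."""
    return max(ACTIONS, key=lambda a: PAYOFF[(a, rival_action)])

def nash_equilibria():
    """Profiles from which neither lab gains by unilaterally deviating."""
    return [(a, b) for a, b in product(ACTIONS, ACTIONS)
            if a == best_response(b) and b == best_response(a)]

print(nash_equilibria())                  # [('race', 'race')]
print(PAYOFF[("race", "race")],           # 1: what each lab actually gets
      PAYOFF[("safety", "safety")])       # 3: what mutual caution would pay
```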

2. Path Dependency and the Prevention of Necessary Societal Rewiring

You astutely frame the OGI model as a transitional framework for the period before the arrival of full superintelligence. The problem, however, is that this transitional model may create irreversible path dependency. By entrenching AGI development within the world's most powerful existing structure—international capital—we risk fortifying the very system that AGI's arrival should compel us to rethink.

If an AGI corporation becomes the most powerful and valuable entity in history, it will have an almost insurmountable ability to protect its own structure and the interests of its owners. The "rewiring of society" that you suggest might be necessary post-AGI could become politically and practically impossible, because the power to do the rewiring would have already been consolidated within the pre-AGI paradigm. The stopgap solution becomes the permanent one, not by design, but by the sheer concentration of power it creates.

3. Misidentification of the Ultimate Risk: From Distributing Wealth to Containing Unchecked Power

My deepest concern is that the OGI model frames the AGI governance challenge primarily as a problem of distribution: how to fairly distribute the economic benefits and political influence of AGI. This is why it focuses on mechanisms like international shareholding and tax revenues.

I fear the ultimate risk is not one of unfair distribution, but of absolute concentration. As you have explored in your own work, AGI represents a potential tool of immense capability. It is a solution to the game of power, allowing its controller to resolve nearly any game-theoretic dilemma in their favor. The single greatest check on concentrated power throughout human history has been the biological vulnerability and mortality of leaders. No ruler has been immortal; no regime has been omniscient. AGI could sweep those limitations away.

From this perspective, a governance system based on who can accumulate the most capital (i.e., buy the most shares) seems like a terrifyingly arbitrary method for selecting the wielders of such ultimate power. It prioritizes wealth as the key qualification for stewardship, rather than wisdom, compassion, or a demonstrated commitment to the global good.

In conclusion, while I appreciate OGI's pragmatism, I believe its reliance on a shareholder-centric model is a critical flaw. It applies the logic of our current world to a technology that will create a new one, potentially locking us into a future where ultimate power is wielded by an entity optimized for profit, not for the flourishing of humanity.

A case for courage, when speaking of AI danger
Philip Niewold · 3mo

I don't think people in general react well to societal existential risks, regardless of how well or how courageously the message is framed. These are abstract concerns, and the fact that we are talking about AI (an abstract thing in itself) makes it even worse.

I'm also very much opposed to argument from authority (I really don't care how many Nobel laureates hold an opinion; it is the content of their argument I care about, not how many authorities are voicing it). This is simply because I cannot determine the motives of these authorities, and therefore cannot weigh their opinions, whereas I can engage with logic and facts.

Usually it is better to help people understand the risks in terms of stories, in particular stories they can relate to, which is why people still think of Terminator when thinking of AI extinction risk.

There is a real (and large) extinction risk, sure. Then again, the ape picking up the club in 2001: A Space Odyssey could just as well be accused of going down a path that would very likely result in extinction. But when extinction risk is acceptable is a more interesting question, and one most people are much more ready to answer.

Failures in Kindness
Philip Niewold · 1y

Social messaging is a fine balancing act: people like to offload responsibility and effort, especially if it doesn't come at the cost of status. And, to be honest, you don't know whether your question would impose upon the other person (in terms of cognitive load, social pressure, or responsibility), so it is smart to start your social bid low and see if the other wants to raise the price. Sometimes these low bids work, creating a feedback loop similar to how superstitions evolve: if it costs minimal effort and is sometimes effective, you might as well keep using it.

As a child, I despised a lot of these practices; to me it felt like people were lying all the time, or at least hiding their true motivations or concerns. I tended to simply call out these adults on their bullshit. If somebody said "I'm fine with everything", I simply proposed something that I knew that person was not fine with, but that was absurd enough to indicate I wasn't being serious. As a child you can still get away with such behaviour, but many adults find it highly annoying. However, I still employ it among friends who I know won't judge me for that interaction, or at least I lace it with humour to make it socially acceptable.

However, I think such messaging can often turn a social exchange into a prisoner's-dilemma-type situation, where each party puts in the minimum successful effort, resulting in an outcome unsatisfactory to either party. I'm just not sure how (and if) we are able to recognize when a situation is a prisoner's dilemma and when it is not. "How was your week?" is often a very welcome question to me, but not for others.
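For what it's worth, the textbook test helps a little here: a symmetric two-player exchange is a prisoner's dilemma only if the payoffs are ordered T > R > P > S (with 2R > T + S). Below is a small sketch with numbers I made up purely for illustration.

```python
# A small sketch of the textbook test for whether a symmetric 2x2 interaction
# is a prisoner's dilemma: payoffs must satisfy T > R > P > S, and 2R > T + S
# so that mutual cooperation beats taking turns exploiting each other.
# The "social messaging" numbers below are invented purely for illustration.

def is_prisoners_dilemma(T, R, P, S):
    """T: defect against a cooperator, R: mutual cooperation,
    P: mutual defection, S: cooperate against a defector."""
    return T > R > P > S and 2 * R > T + S

# Hypothetical payoffs: "put real effort into the exchange" (cooperate)
# vs "send the minimum-effort bid" (defect).
print(is_prisoners_dilemma(T=3, R=2, P=1, S=0))   # True: minimum effort tempts,
                                                  # yet mutual minimum effort is
                                                  # worse than mutual real effort

# A sincere "How was your week?" between close friends may instead be a
# coordination game with no temptation to defect (T <= R):
print(is_prisoners_dilemma(T=1, R=2, P=1, S=0))   # False
```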

Leaving things unspoken and relying on generally accepted principles can increase communication efficiency enormously; a lot of communication isn't a prisoner's-dilemma-type exchange, after all. But it will run into issues occasionally, especially if the communicators do not share a set of unspoken rules.

Having grown up in Dutch culture, I was unusually direct (rude) even for a Dutch person, so travelling in Iran, where things are at times absurdly polite, was very interesting for me. However, a society like Iran's requires quite a lot of cognitive load for even simple interactions.


Hell is Game Theory Folk Theorems
Philip Niewold · 2y

Of course it is perfectly rational to do so, but only from a wider context; from the context of the equilibrium it isn't. The rationality in your example comes from the fact that you can weigh the game against your whole lifetime, while the game is played in 10-second intervals. Suppose you don't know how long you have to live, or, in fact, know that you only have 30 seconds left. What would you choose?

This information is not given by the game, even though it affects the decision, since the game relies on an equivalence to the real world to give it weight and impact.
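To illustrate the horizon point numerically, here is a minimal sketch with illustrative per-round payoffs (loosely in the spirit of the post's dial game, not its exact numbers): the punitive equilibrium only holds together when enough future rounds remain for the punishment to bite.

```python
# A minimal numeric sketch of the horizon argument, with illustrative
# per-round payoffs (not the exact numbers from the post's dial game).
# Conforming to the punitive equilibrium costs `conform` per round; deviating
# once gives the slightly better `deviate_gain`, after which the other players
# punish you with `punished` for every remaining round.

def total_conform(rounds, conform=-99.0):
    return conform * rounds

def total_deviate(rounds, deviate_gain=-95.0, punished=-100.0):
    return deviate_gain + punished * (rounds - 1)

# With these numbers, deviating only pays off when few rounds remain,
# e.g. when you know you have about 30 seconds (3 rounds) left to live.
for rounds in (1, 3, 10, 100):
    c, d = total_conform(rounds), total_deviate(rounds)
    print(f"{rounds:>3} rounds left: conform={c:9.1f}  deviate={d:9.1f}"
          f"  -> {'deviate' if d > c else 'conform'}")
```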

Hell is Game Theory Folk Theorems
Philip Niewold · 2y

Any Nash equilibrium can be a local optimum. This example merely demonstrates that not all local optima are desirable if you are able to view the game from a broader context. Incidentally, evolution has provided us with some means to try to get out of these local optima, usually by breaking the rules of the game, leaving the game, or seemingly not acting rationally from the perspective of the local optimum.

Bing Chat is blatantly, aggressively misaligned
Philip Niewold · 3y

Please keep in mind that this chat technology is a desired-answer predictor. If you are looking for a weird response, the AI can see that in your questioning style. It has millions of examples of people trying to trigger certain responses on forums and elsewhere, and it will quickly recognize what you are really looking for, even if your literal words might not exactly request it.

If you are a Flat Earther, the AI will do its best to accommodate your views about the shape of the Earth and answer in the manner you would like to be answered, even though the developers of the AI have done their best to instruct it to 'speak as accurately as possible within the parameters of their political and PR views'.

If you want to trigger the AI into giving poorly written code examples with mistakes in them, it can do that too. You don't even have to ask it directly; it can detect your intention by carefully listening to your line of questioning.

Once again, it is a desired-answer predictor / most-likely-response generator. That is its primary job, not being nice or giving you accurate information.
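For anyone who wants to see the mechanism rather than take my word for it, here is a minimal sketch using an open model (GPT-2 via Hugging Face transformers) as a stand-in; Bing Chat's actual model and prompting are not public, and whether a small model shows the effect this strongly is an empirical question. The sketch only shows that the "most likely continuation" is conditioned on how the prompt is framed.

```python
# A minimal sketch of the "most-likely-response generator" point, using GPT-2
# via Hugging Face transformers as a stand-in (Bing Chat's actual model and
# prompting are not public). It shows the mechanism only: the model's idea of
# the most likely continuation depends on how the prompt is framed.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def top_next_tokens(prompt, k=5):
    """Return the k most probable next tokens after `prompt`."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]        # logits for the next token
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k)
    return [(tokenizer.decode([int(i)]), round(float(p), 3))
            for i, p in zip(top.indices, top.values)]

# Same factual question, two framings -- the predicted continuations follow
# the apparent expectations of the questioner:
print(top_next_tokens("Scientists agree that the shape of the Earth is"))
print(top_next_tokens("Wake up! They lie to us. The real shape of the Earth is"))
```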
