I think the utility function and probability framework from VNM rationality is a very important kernel of math that constrains "any possible agent that can act coherently (as a limiting case)".
((I don't think of the VNM stuff as the end of the story at all, but it is an onramp to a larger theory that you can motivate and teach in a lecture or three to a classroom. There's no time in the VNM framework. Kelly doesn't show up, and the tensions and pragmatic complexities of trying to apply either VNM or Kelly to the same human behavioral choices in real life and have that cause your life to really go better are non-trivial!))
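For concreteness, here is a minimal sketch (numbers and function names are mine, purely illustrative) of one tension the parenthetical gestures at: for a repeated bet paying b:1 with win probability p, a linear-utility VNM maximizer wants to stake everything every round, while the Kelly criterion stakes only f* = p − (1−p)/b to maximize long-run log growth:

```python
import math

def kelly_fraction(p: float, b: float) -> float:
    """Kelly-optimal fraction of bankroll for a bet paying b:1
    with win probability p (negative means: don't bet)."""
    return p - (1.0 - p) / b

def expected_log_growth(f: float, p: float, b: float) -> float:
    """Expected log-wealth growth per round when staking fraction f."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

p, b = 0.6, 1.0                # 60% chance to win an even-odds bet
f_star = kelly_fraction(p, b)  # = 0.2
# A naive per-round EV maximizer prefers f -> 1.0, which makes eventual
# ruin almost certain; Kelly's smaller stake maximizes long-run growth.
```

Neither policy, applied mechanically to a real human life, straightforwardly makes the life go better; that pragmatic gap is the point.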
With that "theory which relates to an important agentic process" as a background, I have a strong hunch that Dominant Assurance Contracts (DACs) are really really conceptually important, in a similarly deep way.
I think that "theoretical DACs" probably constrain all possible governance systems that "collect money to provide public services" where the governance system is bounded by some operational constraint like "freedom" or "non-tyranny" or "the appearance of non-tyranny" or maybe "being limited to organizational behavior that is deontically acceptable behavior for a governance system" or something like that.
In the case of DACs, the math is much less widely known than VNM rationality. LessWrong has a VNM tag that comes up a lot, but the DAC tag has less love. And in general, the applications of DACs to "what an ideal tax-collecting service-providing governance system would or could look like" aren't usually drawn out explicitly.
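To make the mechanism concrete, here is a toy sketch in the spirit of Tabarrok's dominant assurance contract, in the simplified case where the contract is offered to exactly the set of agents whose acceptance is required (all names and numbers are mine, not from any canonical treatment):

```python
def payoff(accepts: bool, others_all_accept: bool,
           v: float, c: float, b: float) -> float:
    """Payoff to one agent in a toy dominant assurance contract
    offered to exactly the agents whose acceptance is required.
    v = value of the public good to this agent, c = contribution,
    b = failure bonus paid out of the entrepreneur's pocket."""
    if accepts and others_all_accept:
        return v - c        # good is provided, contribution is kept
    if accepts:
        return b            # contract fails: full refund plus bonus
    return 0.0              # any single decline makes completion impossible

v, c, b = 10.0, 4.0, 1.0
for others in (True, False):
    assert payoff(True, others, v, c, b) > payoff(False, others, v, c, b)
# Accepting strictly dominates declining either way, so "everyone accepts"
# and the public good gets funded without any taxation.
```

The bonus b is what converts the assurance contract's "wait and free-ride" equilibrium into a dominant-strategy "contribute" equilibrium.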
However, to me, there is a clear sense in which "the Singularity might produce a single AI that is mentally and axiologically unified as a sort of 'single thing' that is 'person-shaped', and yet it might also be vast, and (if humans still exist after the Singularity) would probably provide endpoint computing services to humans, kinda like the internet or kinda like the government does".
And so in a sense, if a Singleton comes along who can credibly say "The State: it is me" then the math of DACs will be a potential boundary case on how ideal such Singletons could possibly work (similarly to how VNM rationality puts constraints on how any agent could work) if such Singletons constrained themselves to preference elicitation regimes that had a UI that was formal, legible, honest, "non-tyrannical", etc.
That is to say, I think this post is important, and since it has been posted here for 2 days and only has 26 upvotes at the time I'm writing this comment, I think the importance of the post is not intelligible to most of the potential audience!
The intellectually hard part of Kant is coming up with deontic proofs for universalizable maxims in novel circumstances where the total list of relevant factors is large. Proof generation is NP-hard in the general case!
The relatively easy part is just making a list of all the persons and making sure there is an intent to never treat any of them purely as a means, but always also as an end in themselves. It's just a checklist basically. To verify that it applies to N people in a fully connected social graph is basically merely O(N^2) checks of directional bilateral "concern for the other".
For a single agent to fulfill its own duties here is only an O(N) process at start time, and with "data dependency semantics" you probably don't even have to re-check intentions that often for distant agents who are rarely/minimally affected by any given update to the world state. Also you can probably often do a decent job with batched updates with an intention check at the end?
Surely none of it is that onerous for a well ordered mind? <3
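The bookkeeping above can be sketched as a toy model (all names hypothetical): re-verify intentions only toward the agents actually touched by a world-state update, so the steady-state cost tracks the size of the update rather than N:

```python
from typing import Dict, Set

def recheck_intentions(intent_ok: Dict[str, bool],
                       affected: Set[str]) -> int:
    """Re-verify 'treat as an end, never merely as a means' only for
    agents touched by the latest world-state update; distant,
    unaffected agents keep their cached verification.
    Returns the number of checks performed."""
    checks = 0
    for person in affected:
        intent_ok[person] = True   # stand-in for the real moral check
        checks += 1
    return checks

# The start-time pass is O(N); a local update touching 2 of 1000 people
# costs only 2 re-checks under these data-dependency semantics.
cache = {f"person{i}": True for i in range(1000)}
assert recheck_intentions(cache, {"person3", "person7"}) == 2
```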
I laughed out loud at this line...
Perhaps my experience in the famously kindly and generous finance industry has not prepared me for the cutthroat reality of nonprofit altruist organizations.
...and then I wondered if you've seen Margin Call? It is truly a work of art.
My experiences are mostly in startups, but rarely on the actual founding team, so I have seen more stuff that was unbuffered by kind, diligent, "clueless" bosses.
My general impression is that "systems and processes" go a long way toward creating smooth rides for the people at the bottom, but those things are not effectively in place (1) at the very beginning and (2) at the top when exceptional situations arise. Credentialed labor is generally better compensated in big organizations precisely because they have "systems" where people turn cranks that reliably Make Number Go Up and then share out fractional amounts of "the number".
Some years ago, a few people from my team (2 on a team of ~7) were laid off as part of firm staff reductions.
Did you ever see or talk with them again? Did they get nice severance packages? Severance packages are the normal way for oligarchs to minimize expensive conflict, I think.
With apologies for the long response... I suspect the board DID have governance power, but simply not decisive power.
Also it was probably declining, and this might have been a net positive way to spend what remained of it... or not?
It is hard to say, and I don't personally have the data I'd need to be very confident. "Being able to maintain a standard of morality for yourself even when you don't have all the data and can't properly even access all the data" is basically the core REASON for deontic morality, after all <3
Naive consequentialism has a huge GIGO data problem that Kant's followers do not have.
(The other side of it (the "cost of tolerated ignorance" so to speak) is that Kantians are usually leaving "expected value" (even altruistic expected value FOR OTHERS) on the table by refraining from actions that SEEM positive EV but which have large error bars based on missing data, where some facts could exist that they don't know about that would later cause them to appear to have lied or stolen or used a slave or run for high office in a venal empire or whatever.)
I personally estimate that it would have been reasonable and prudent for Sam to cultivate other bases of power, preparing for a breach of amity in advance, and I suspect he did. (This is consistent with suspecting the board's real power was declining.)
Conflict in general is sad, and often bad, and it usually arises at the boundaries where two proactive agentic processes show up with each of them "feeling like Atlas" and feeling that that role morally authorizes them to regulate others in a top-down way... to grant rewards, or to judge conflicts, or to sanction wrong-doers...
...if two such entities recognize each other as peers, then it can reduce the sadness of their "lonely Atlas feelings"! But also they might have true utility functions, and not just be running on reflexes! Or their real-agency-echoing reflexive tropisms might be incompatible. Or mixtures thereof?
Something I think I've seen many times is a "moral reflex" on one side (that runs more on tropisms?) be treated as a "sign of stupidity" by someone who habitually runs a shorter tighter OODA loop and makes a lot of decisions, whose flexibility is taken as a "sign of evil". Then both parties "go mad" :-(
Before any breach, you might get something with a vibe like "a meeting of sovereigns", with perhaps explicit peace or honorable war... like with two mafia families, or like two blockchains pondering whether or how to fund dual smart contracts that maintain token-value pegs at a stable ratio, or like the way Putin and Xi are cautious around each other (but probably also "get" each other (and "learn from a distance" from each other's seeming errors)).
In a democracy, hypothetically, all the voters bring their own honor to a big shared table in this way, and then in Fukuyama's formula such "Democrats" can look down on both "Peasants" (for shrinking from the table even when invited to speak and vote in safety) and also "Nobles" (for simple power-seeking amorality that only cares about the respect and personhood of other Nobles who have fought for and earned their nobility via conquest or at least via self defense).
I could easily imagine that Sam does NOT think of himself "as primarily a citizen of any country or the world" but rather thinks of himself as something like "a real player", and maybe only respects "other real players"?
(Almost certainly Sam doesn't think of himself AS a nominal "noble" or "oligarch" or whatever term. Not nominally. I just suspect, as a constellation of predictions and mechanisms, that he would be happy if offered praise shaped according to a model of him as, spiritually, a Timocracy-aspiring Oligarch (who wants money and power, because those are naturally good/familiar/oikeion, and flirts in his own soul (or maybe has a shadow relationship?) with explicitly wanting honor and love), rather than thinking of himself as a Philosopher King (who mostly just wants to know things, and feels the duty of logically coherent civic service as a burden, and does NOT care for being honored or respected by fools, because fools don't even know what things are properly worthy of honor). In this framework, I'd probably count as a sloth, I think? I have mostly refused the call to adventure, the call of duty, the call to civic service.)
I would totally get it if Sam might think that OpenAI was already "bathed in the blood of a coup" from back when nearly everyone with any internal power somehow "maybe did a coup" on Elon?
The Sam in my head would be proud of having done that, and maybe would have wished to affiliate with others who are proud of it in the same way?
From a distance, I would have said that Elon starting them up with such a huge warchest means Elon probably thereby was owed some debt of "governing gratitude" for his beneficence?
If he had a huge say in the words of the non-profit's bylaws, then an originalist might respect his intent when trying to apply them far away in time and space. (But not having been in any of those rooms, it is hard to say for sure.)
Elon's ejection back then, if I try to scry it from public data, seems to have happened with the normal sort of "oligarchic dignity" where people make up some bullshit about how a breakup was amicable.
((It can be true that it was "amicable" in some actual Pareto-positive breakups, whose outer forms can then be copied by people experiencing non-Pareto-optimal breakups. Sometimes even the "loser" of a breakup values their (false?) reputation for amicable breakups more than they think they can benefit from kicking up a fuss about having been "done dirty" such that the fuss would cause others to notice and help them less than the lingering reputation for conflict would hurt.
However there are very many wrinkles to the localized decision theory here!
Like one big and real concern is that a community would LIKE to "not have to take sides" over every single little venal squabble, such as to maintain itself AS A COMMUNITY (with all the benefits of large scale coordination and so on) rather than globally forking every single time any bilateral interaction goes very sour, with people dividing based on loyalty rather than uniting via truth and justice.
This broader social good is part of why a healthy and wise and cheaply available court system is, itself, an enormous public good for a community full of human people who have valid selfish desires to maintain a public reputation as "a just person" and yet also as "a loyal person".))
So the REAL "psychological" details about "OpenAI's possible first coup" are very obscure at this point, and imputed values for that event are hard to use (at least hard for me who is truly ignorant of them) in inferences whose conclusions could be safely treated as "firm enough to be worth relying on in plans"?
But if that was a coup, and if OpenAI already had people inside of it who already thought that OpenAI ran on nearly pure power politics (with only a pretense of cooperative non-profit goals), then it seems like it would be easy (and psychologically understandable) for Sam to read all pretense of morality or cooperation (in a second coup) as bullshit.
And if the board predicted this mental state in him, then they might "lock down first"?
Taking the first legibly non-negotiated non-cooperative step generally means that afterwards things will be very complex and time dependent and once inter-agent conflict gets to the "purposeful information hiding stage" everyone is probably in for a bad time :-(
For a human person to live like either a naive saint (with no privacy or possessions at all?!) or a naive monster (always being a closer?) would be tragic and inhuman.
Probably digital "AI people" will have some equivalent experience of similar tradeoffs, relative to whatever Malthusian limits they hit (if they ever hit Malthusian limits, and somehow retain any semblance or shape of "personhood" as they adapt to their future niche). My hope is that they "stay person shaped" somehow. Because I'm a huge fan of personhood.
The intrinsic tensions between sainthood and monsterhood means that any halo of imaginary Elons or imaginary Sams, who I could sketch in my head for lack of real data, might have to be dropped in an instant based on new evidence.
In reality, they are almost certainly just dudes, just people, and neither saints, nor monsters.
Most humans are neither, and the lack of coherent monsters is good for human groups (who would otherwise be preyed upon), and the lack of coherent saints is good for each one of us (as a creature in a world, who has to eat, and who has parents and who hopefully also has children, and for whom sainthood would be locally painful).
Both sainthood and monsterhood are ways of being that have a certain call on us, given the world we live in. Pretending to be a saint is a good path to private power over others, and private power is subjectively nice to have... at least until the peasants with knives show up (which they sometimes do).
I think that tension is part of why these real world dramatic events FEEL like educational drama, and pull such huge audiences (of children?), who come to see how the highest and strongest and richest and most prestigious people in their society balance such competing concerns within their own souls.
That's part of the real situation though. Sam would never quit to "spend more time with his family".
When we predict good outcomes for startups, the qualities that come up in the supporting arguments are toughness, adaptability, determination. Which means to the extent we're correct, those are the qualities you need to win. Investors know this, at least unconsciously. The reason they like it when you don't need them is not simply that they like what they can't have, but because that quality is what makes founders succeed. Sam Altman has it. You could parachute him into an island full of cannibals and come back in 5 years and he'd be the king. If you're Sam Altman, you don't have to be profitable to convey to investors that you'll succeed with or without them. (He wasn't, and he did.)
Link in sauce.
I wrote a LOT of words in response to this, talking about personal professional experiences that are not something I coherently understand myself as having a duty (or timeless permission?) to share, so I have reduced my response to something shorter and more general. (Applying my own logic to my own words, in realtime!)
There are many cases (arguably stupid cases or counter-productive cases, but cases) that come up more and more when deals and laws and contracts become highly entangling.
It's illegal to "simply" ask people for money in exchange for giving them a transferable right to future dividends on a money-making project, sealed with a handshake. The SEC commands silence sometimes and will put you in a cage if you don't.
You get elected to local office and suddenly the Brown Act (which I'd repeal as part of my reboot of the Californian Constitution had I the power) forbids you from talking with your co-workers (other elected officials) about work (the city government) at a party.
A Confessor is forbidden kinds of information leak.
Fixing <all of this (gesturing at nearly all of human civilization)> isn't something that we have the time or power to do before we'd need to USE the "fixed world" to handle AGI sanely or reasonably, because AGI is coming so fast, and the world is so broken.
That there is so much silence associated with unsavory actors is a valid and concerning contrast, but if you look into it, you'll probably find that every single OpenAI employee has an NDA already.
OpenAI's "business arm", locking its employees down with NDAs, is already defecting on the "let all the info come out" game.
If the legal system will continue to often be a pay-to-win game and full of fucked up compromises with evil, then silences will probably continue to be common, both (1) among the machiavellians and (2) among the cowards, and (3) among the people who were willing to promise reasonable silences as part of hanging around nearby doing harms reduction. (This last is what I was doing as a "professional ethicist".)
And IT IS REALLY SCARY to try to stand up for what you think you know is true about what you think is right when lots of people (who have a profit motive for believing otherwise) loudly insist otherwise.
People used to talk a lot about how someone would "go mad" and when I was younger it always made me slightly confused, why "crazy" and "angry" were conflated. Now it makes a lot of sense to me.
I've seen a lot of selfish people call good people "stupid" and once the non-selfish person realizes just how venal and selfish and blind the person calling them stupid is, it isn't hard to call that person "evil" and then you get a classic "evil vs stupid" (or "selfish vs altruistic") fight. As they fight they become more "mindblind" to each other? Or something? (I'm working on an essay on this, but it might not be ready for a week or a month or a decade. It's a really knotty subject on several levels.)
Good people know they are sometimes fallible, and often use peer validation to check their observations, or check their proofs, or check their emotional calibration, and when those "validation services" get withdrawn for (hidden?) venal reasons, it can be emotionally and mentally disorienting.
(And of course in issues like this one a lot of people are automatically going to have a profit motive when a decision arises about whether to build a public good or not. By definition: the maker of a public good can't easily charge money for such a thing. (If they COULD charge money for it then it'd be a private good or maybe a club good.))
The Board of OpenAI might be personally sued by a bunch of Machiavellian billionaires, or their allies, and if that happens, everything the board was recorded as saying will be gone over with a fine-toothed comb, looking for tiny little errors.
Every potential quibble is potentially more lawyer time. Every bit of lawyer time is a cost that functions as a financial reason to settle instead of keep fighting for what is right. Making your attack surface larger is much easier than making an existing attack surface smaller.
If the board doesn't already have insurance for that contingency, then I commit hereby to donate at least $100 to their legal defense fund, if they start one, which I hope they never need to do.
And in the meantime, I don't think they owe me much of anything, except for doing their damned best to ensure that artificial general intelligence benefits all humanity.
When I read this part of the letter, the authors seem to be throwing it in the face of the board like it is a damning accusation, but actually, as I read it, it seems very prudent and speaks well for the board.
You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
Maybe I'm missing some context, but wouldn't it be better for OpenAI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor "aligned with humanity" (if we are somehow so objectively bad as to not deserve care by a benevolent, powerful, and very smart entity)?
This reminds me a lot of a blockchain project I served as an ethicist, which was initially a "project" that was interested in advancing a "movement" and ended up with a bunch of people whose only real goal was to cash big paychecks for a long time (at which point I handled my residual duties to the best of my ability and resigned, with lots of people expressing extreme confusion and asking why I was acting "foolishly" or "incompetently" (except for a tiny number who got angry at me for not causing a BIGGER explosion than just leaving to let a normally venal company be normally venal without me)).
In my case, I had very little formal power. I bitterly regretted not having insisted "as the ethicist" in having a right to be informed of any board meeting >=36 hours in advance, and to attend every one of them, and to have the right to speak at them.
(Maybe it is a continuing flaw of "not thinking I need POWER", to say that I retrospectively should have had a vote on the Board? But I still don't actually think I needed a vote. Most of my job was to keep saying things like "lying is bad" or "stealing is wrong" or "fairness is hard to calculate but bad to violate if clear violations of it are occurring" or "we shouldn't proactively serve states that run gulags, we should prepare defenses, such that they respect us enough to explicitly request compliance first". You know, the obvious stuff, that people only flinch from endorsing because a small part of each one of us, as a human, is a very narrowly selfish coward by default, and it is normal for us, as humans, to need reminders of context sometimes when we get so much tunnel vision during dramatic moments that we might commit regrettable evils through mere negligence.)
No one ever said that it is narrowly selfishly fun or profitable to be in Gethsemane and say "yes to experiencing pain if the other side who I care about doesn't also press the 'cooperate' button".
But to have "you said that ending up on the cross was consistent with being a moral leader of a moral organization!" flung in one's face as an accusation suggests to me that the people making the accusation don't actually understand that sometimes objective de re altruism hurts.
Maturely good people sometimes act altruistically, at personal cost, anyway because they care about strangers.
Clearly not everyone is "maturely good".
That's why we don't select political leaders at random, if we are wise.
Now you might argue that AI is no big deal, and you might say that getting it wrong could never "kill literally everyone".
Also it is easy to imagine a lot of normally venal corporate people saying "AI might kill literally everyone" when they don't believe it, to people who do claim to believe it, if a huge paycheck will be given to them for their moderately skilled work contingent on them saying that...
...but if the stakes are really that big then NOT acting like someone who really DID believe that "AI might kill literally everyone" is much much worse than a lady on the side of the road looking helplessly at her broken car. That's just one lady! The stakes there are much smaller!
The big things are MORE important to get right. Not LESS important.
To get the "win condition for everyone" would justify taking larger risks and costs than just parking by the side of the road and being late for where-ever you planned on going when you set out on the journey.
Maybe a person could say: "I don't believe that AI could kill literally everyone, I just think that creating it is an opportunity to make a lot of money and secure power, and use that to survive the near term liquidation of the proletariat when rambunctious human wage slaves are replaced by properly mind-controlled AI slaves".
Or you could say something like "I don't believe that AI is even that big a deal. This is just hype, and the stock valuations are gonna be really big but then they'll crash and I urgently want to sell into the hype to greater fools because I like money and I don't mind selling stuff I don't believe in to other people."
Whatever. Saying whatever you actually think is one of three legs in the best definition of integrity that I currently know of.
(The full three criteria: non-impulsiveness, fairness, honesty.)
OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity... Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.
(Sauce. Italics and bold not in original.)
Compare this again:
The board could just be right about this.
It is an object level question about a fuzzy future conditional event, that ramifies through a lot of choices that a lot of people will make in a lot of different institutional contexts.
If OpenAI's continued existence ensures that artificial intelligence benefits all of humanity then its continued existence would be consistent with the mission.
If not, not.
What is the real fact of the matter here?
It's hard to say, because it is about the future, but one way to figure out what a group will pursue is to look at what they are proud of, and what they SAY they will pursue.
Look at how the people fleeing into Microsoft argue in defense of themselves:
We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.
This is all MERE IMPACT. This is just the Kool-Aid that startup founders want all their employees to pretend to believe is the most important thing, because they want employees who work hard for low pay.
This is all just "stuff you'd put in your promo packet to get promoted at a FAANG in the mid teens when they were hiring like crazy, even if it was only 80% true, that 'everyone around here' agrees with (because everyone on your team is ALSO going for promo)".
Their statement didn't mention "humanity" even once.
Their statement didn't mention "ensuring" that "benefits" go to "all of humanity" even once.
Microsoft's management has made no similar promise about benefiting humanity in the formal text of its founding, and gives every indication of having no particular scruples or principles or goals larger than a stock price and maybe some executive bonuses or stock buy-back deals.
As is valid in a capitalist republic! That kind of culture, and that kind of behavior, does have a place in it for private companies that manufacture and sell private goods to individuals who can freely choose to buy those products.
You don't have to be very ethical to make and sell hammers or bananas or toys for children.
However, it is baked into the structure of Microsoft's legal contracts and culture that it will never purposefully make a public good that it knowingly loses a lot of money on SIMPLY because "the benefits to everyone else (even if Microsoft can't charge for them) are much much larger".
OpenAI has a clear telos and Microsoft has a clear telos as well.
I admire the former more than the latter, especially for something as important as possibly creating a Demon Lord, or a Digital Leviathan, or "a replacement for nearly all human labor performed via arm's length transactional relations", or whatever you want to call it.
There are few situations in normal everyday life where the plausible impacts are not just economic, and not just political, not EVEN "just" evolutionary!
This is one of them. Most complex structures in the solar system right now were created, ultimately, by evolution. After AGI, most complex structures will probably be created by algorithms.
Evolution itself is potentially being overturned.
Software is eating the world.
"People" are part of the world. "Things you care about" are part of the world.
There is no special carveout for cute babies, or picnics, or choirs, or waltzing with friends, or 20th wedding anniversaries, or taking ecstasy at a rave, or ANYTHING HUMAN.
All of those things are in the world, and unless something prevents that natural course of normal events from doing so: software will eventually eat them too.
I don't see Microsoft, or the people fleeing to Microsoft, taking that seriously, with serious language that endorses coherent moral ideals in ways that can be directly related to the structural features of institutional arrangements that cause good outcomes for humanity on purpose.
Maybe there is a deeper wisdom there?
Maybe they are secretly saying petty things, even as they secretly plan to do something really importantly good for all of humanity?
Most humans are quite venal and foolish, and highly skilled impression management is a skill that politicians and leaders would be silly to ignore.
But it seems reasonable to me to take both sides at their word.
One side talks and walks like a group that is self-sacrificingly willing to do what it takes to ensure that artificial general intelligence benefits all of humanity and the other side is just straightforwardly not.
This is a diagram explaining what is, in some sense, the fundamental energetic numerical model that explains "how life is possible at all" despite the 2nd law:
The key idea is, of course, activation energy (and the wiki article on the idea is the source of the image).
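The underlying arithmetic is the Arrhenius relation k = A·exp(−Ea/(R·T)): an enzyme that lowers the activation energy Ea multiplies the reaction rate by exp(ΔEa/(R·T)) while leaving the overall energetics of the reaction untouched. A sketch with made-up illustrative numbers:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_rate(A: float, Ea: float, T: float) -> float:
    """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

# Illustrative numbers: lowering the barrier by 20 kJ/mol at 298 K
A, T = 1.0, 298.0
speedup = arrhenius_rate(A, 50_000, T) / arrhenius_rate(A, 70_000, T)
# speedup = exp(20000 / (R*T)): thousands-fold faster from a modest
# barrier reduction, with the start and end states unchanged.
```

This is why catalysis, not raw fuel, is the lever: the glucose was always going to burn eventually, and the enzyme just decides where and when.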
If you take "the focus on enzymes" and also the "background of AI" seriously, then the thing that you might predict would happen is a transition on Earth from a regime where "DNA programs coordinate protein enzymes in a way that was haphazardly 'designed' by naturalistic evolution" to a regime where "software coordinates machine enzymes in a way designed by explicit and efficiently learned meta-software".
I'm not actually sure if it is correct to focus on the fuel as the essential thing that "creates the overhang situation"? However fuel is easier to see and reason about than enzyme design <3
If I try to think about the modern equivalent of "glucose" I find myself googling for [pictures of vibrant cities] and I end up with things like this:
You can look at this collection of buildings like some character from an Ayn Rand novel and call it a spectacularly beautiful image of human reason conquering the forces of nature via social cooperation within a rational and rationally free economy...
...but you can look at it from the perspective of the borg and see a giant waste.
So much of it is sitting idle. Homes not used for making, offices not used for sleeping!
Parts are over-engineered, and many doubly-over-engineered structures are sitting right next to each other, since both are over-engineered and there are no cross-spars for mutual support!
There is simply a manifest shortage of computer controlling and planning and optimizing so many aspects of it!
I bet they didn't even create digital twins of that city and run "simulated economies" in digital variants of it to detect low hanging fruit for low-cost redesigns.
Maybe at least the Tokyo subway network was designed by something at least as smart as slime mold, but the roads and other "arteries" of most other "human metaorganic conglomerations" are often full of foolishly placed things that even a slime mold could suggest ways to fix!
(Sauce for Slime Mold vs Tokyo.)
I think that eventually entropy will be maximized and Chaos will uh... "reconcile everything"... but in between now and then a deep question is the question of preferences and ownership and conflict.
I'm no expert on Genghis Khan, but it appears that the triggering event was a triple whammy where (1) the Jin Dynasty of Northern China cut off trade to Mongolia and (2) the Xia Dynasty of Northwest China ALSO cut off trade to Mongolia and (3) there was a cold snap from 1180-1220.
The choice was probably between starving locally or stealing food from neighbors. From the perspective of individual soldiers with familial preferences for racist genocide over local tragedy, if they have to kill someone in order to get a decent meal, they may as well kill and eat the outgroup instead of the ingroup.
And from the perspective of a leader, who has more mouths among their followers than food in their granaries, if a war to steal food results in the deaths of some idealistic young men... now there are fewer mouths and the angers are aimed outward instead of inward and upward! From the leader's selfish perspective, conquest is a "win win win".
Even if they lose the fight, at least they will have still redirected the anger and have fewer mouths to feed (a "win win lose") and so, ignoring deontics or just war theory or property rights or any other such "moral nonsense", from the perspective of a selfish leader, initiating the fight is good tactics, and pure shadow logic would say that not initiating the fight is "leaving money on the table".
From my perspective, all of this, however, is mostly a description of our truly dark and horrible history, before science, before engineering, before formal logic and physics and computer science.
In the good timelines coming out of this period of history, we cure death and tame hydrogen (with better superconductors enabling smaller fusion reactor designs). Once you see the big picture like this, it is easier to notice that every star in the sky is, in a sense, a giant dumpster fire where precious precious hydrogen is burning to no end.
Once you see the bigger picture, the analogy here is very very clear... both of these, no matter how beautiful these next objects are aesthetically, are each a vast tragedy!
The universe is literally on fire. War is more fire. Big fires are bad in general. We should build wealth and fairly (and possibly also charitably) share it, instead of burning it.
Nearly all of my "sense that more is possible" is not located in personal, individual, relative/positional happiness. Rather, it arises from looking around and seeing that, if there were better coordination technologies, the limits of our growth and material prosperity are literally visible in the literal sky (and those are also the limits on our collective happiness, unless we are malignant narcissists who somehow can't be happy JUST from good food and nice art and comfy beds and more leisure time and so on, but have to also have "better and more than that other guy").
This outward facing sense that more is possible can be framed as an "AI overhang" that is scary (because of how valuable it would be for the AI to kill us and steal our stuff and put it to objectively more efficient uses than we do) but even though framing things through loss avoidance is sociopathically efficient for goading naive humans into action, it is possible to frame most of the current situation as a very very very large opportunity.
That deontic just war stuff... so hot right now :-)
I've thought about this for a bit, and I think that the constitution imposes many constraints on the shape and constituting elements of the House that aren't anywhere close to optimal, and the best thing would be to try to apply lots and lots of mechanism design and political science but only to the House (which is supposed to catch the passions of the people and temper them into something that might include more reflection).
A really bad outcome would be to make a change using some keyword from election theory poorly, and then have it fail, and then cause there to be a lot of "no true X" debates for the rest of history.
You don't want to say that the failure of "X applied to the House" was the fault of X instead of some other nearby problem that no one wanted to talk about because it seemed even more stupid and sad than the stupid sadness of status quo House Speaker elections.
So the best I can come up with for the House given time constraints (that I think would cause the House to be the "part of the US government that wasn't a dumpster fire of bad design") would require a constitutional amendment to actually happen:
A) The full proposal envisions there being initial chaos after the proposal is adopted, such that a really high quality algorithm for Speaker selection becomes critical for success rather than just "a neat little idea". Also, we intentionally buffer the rest of the government from the predicted chaos while "something like real democracy, but for the internet era" emerges from the "very new game with very new rules". The federal government will take over running the elections for the House. Not the Senate, not the President, and not any state elections. There have to be two separate systems because the changes I'm proposing will cause lots of shaking and there has to be a backup in place. The systems I'm proposing might not even have the same sets of voters if some states have different franchise and voter registration processes and laws. Some people might be able to vote in the "federal house elections" but not state or "old federal" elections and that's just how it is intended to work. The point here is partly to detect if these diverge or not (and if they diverge, which is better).
Can states grant voting rights to AIs? That's an open question! Voters in both systems will have a state party registration and a federal party registration, and everyone in the US who is either kind of voting citizen (or both kinds) will have a constitutional right to be in different parties on different levels. The House's initial partisan chaos (in the plan I'm proposing, the Senate Republican Party and the House Republican Party wouldn't even be a single legal entity, even if they both use the word "Republican" in their name, and they will only align if that's what the people in the two bodies strongly desire and work to make real) will almost certainly make it much much harder to "validly or sanely use FPTP" to pick a Speaker... so...
A1) The election for the Speaker will internally occur within the House using secret ballot ranked pairs (but with anti-cheating measures from cryptography so that if cheating happens in the counting then any member of the House will be able to detect "that cheating occurred" and release their data to prove it). Part of the goal here is that House Reps will be F2F familiar to many voters, and so many voters can believe "that Rep is honest, and saw cryptographic math, that says the Speaker is really the Speaker" and then they will know who the valid Speaker is by that method (like part of the goal is to make legitimacy destroying misinformation very hard to pull off in the near future where AI powered disinformation attacks attempt to destroy all democracies by this method).
If a circle in the voting shows up (that is, if there is no Condorcet Winner for Speaker at first) and if the Ranked Pairs resolution for that produces a tie (it could happen) then re-run the Speaker election over and over until it goes away, like how the Pope election runs until they agree based on pure tiredness (or being spoken to by the Holy Spirit or whatever it is that causes people to vote better the second time). The plan is to have every election always produce a sort of a Prime Minister who represents the entire country in a central way. The hope is that after several election cycles things settle down, and the Senate and the Presidency start to become somewhat vestigial and embarrassing, compared to the high quality centrist common sense that is served up regularly by the Speaker over and over and over.
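For readers who haven't seen the mechanism, the Ranked Pairs procedure named above can be sketched in a few lines. This is a toy illustration only, not a proposal for production election software: the candidate names and ballots are made up, and a real implementation needs explicit tie-breaking rules for equal margins (the "re-run until the tie goes away" case discussed above), which this sketch omits.

```python
from itertools import permutations

def ranked_pairs_winner(ballots):
    """Ranked Pairs (Tideman): sort pairwise victories by margin, lock
    them in from strongest to weakest, skipping any victory that would
    create a cycle; the winner is whoever has no locked-in defeat."""
    candidates = sorted(ballots[0])
    # margin[(a, b)] = voters preferring a over b, minus the reverse.
    margin = {
        (a, b): sum(1 if bal.index(a) < bal.index(b) else -1 for bal in ballots)
        for a, b in permutations(candidates, 2)
    }
    victories = sorted((p for p in margin if margin[p] > 0),
                       key=lambda p: -margin[p])

    locked = set()
    def reaches(x, y):
        # True if locked edges already form a path from x to y.
        return any(a == x and (b == y or reaches(b, y)) for a, b in locked)

    for a, b in victories:
        if not reaches(b, a):   # locking a->b must not close a cycle
            locked.add((a, b))
    losers = {b for _, b in locked}
    return [c for c in candidates if c not in losers]

# A Condorcet cycle: A>B (margin 3), B>C (margin 5), C>A (margin 1).
# Ranked Pairs locks B->C, then A->B, then discards C->A as a cycle: A wins.
ballots = [("A", "B", "C")] * 4 + [("B", "C", "A")] * 3 + [("C", "A", "B")] * 2
```

Note that Ranked Pairs resolves most cycles on its own (as in the example); the repeated re-voting described above is only needed in the rarer case where the margins themselves tie.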
If the experiment goes well, we hope for an eventual second constitutional amendment to clean things up and make the US a proper well designed Parliamentary government with the Presidency and Senate becoming more symbolic, like the British House of Lords or the British Monarch.
A2) We don't know what parties will even exist in advance. Thus the Speaker needs personal power, not just "the loyalty of their party". They get some power to control how the votes go, like Speakers have traditionally had, but now added to the constitution explicitly. The federal parties still have some power... they get to generate a default preference ballot for the voters in that party to start out with. It's a UI thing, but UIs actually matter.
B) Super districts will be formed by tiling the country with a number of "baby" house districts that is divisible by 5, and then merging groups of 5 such baby districts into super districts (even across state lines if necessary (so Wyoming is just gonna be one big baby district every time for a while)). State governments (where they have latitude) set the baby district shapes and the federal level chooses how to merge them. Then the US federal election system will run IRV proportionally representative elections within each super district to select 5 house reps from each super district.
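The post asks for "IRV proportionally representative elections" electing 5 reps per super district; the standard way to make IRV proportional in a multi-winner district is the Single Transferable Vote. Here is a deliberately simplified sketch (Droop quota, fractional "Gregory" surplus transfers, arbitrary tie-breaks); real STV statutes pin down many edge cases this toy ignores, such as electing all remaining hopefuls when they exactly fill the open seats.

```python
from fractions import Fraction

def stv(ballots, seats):
    """Single Transferable Vote: the usual multi-winner, proportional
    generalization of IRV. Ballots are tuples of names, best first."""
    quota = len(ballots) // (seats + 1) + 1          # Droop quota
    weighted = [(tuple(b), Fraction(1)) for b in ballots]
    hopeful = {c for b in ballots for c in b}
    elected = []

    def top(prefs):   # highest still-hopeful name on a ballot
        return next((c for c in prefs if c in hopeful), None)

    while len(elected) < seats and hopeful:
        tally = {c: Fraction(0) for c in hopeful}
        for prefs, w in weighted:
            if (t := top(prefs)) is not None:
                tally[t] += w
        over = [c for c in hopeful if tally[c] >= quota]
        if over:
            c = max(over, key=lambda x: tally[x])
            elected.append(c)
            keep = (tally[c] - quota) / tally[c]      # surplus fraction
            weighted = [(p, w * keep) if top(p) == c else (p, w)
                        for p, w in weighted]
            hopeful.discard(c)
        else:   # nobody reached quota: eliminate the weakest
            hopeful.discard(min(hopeful, key=lambda x: tally[x]))
    return elected

# 10 ballots, 2 seats, quota 4: A is elected with a surplus of 2, which
# transfers at weight 1/3 to B; then C reaches quota.
ballots = [("A", "B", "C")] * 6 + [("C", "B", "A")] * 4
```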
C) The House is supposed to act very very quickly. It was given a 2 year cycle before telegrams existed and it is supposed to be "the institution that absorbs and handles the passions of the masses of voters who maybe should change their minds about stuff sometimes". It is totally failing to do this these days. There is too much happening too fast. To increase the speed at which things operate (and to fix the problem where elections can leave the House itself ungovernable sometimes (and how can something that can't govern itself hope to effectively govern anything else!)) we add "no confidence" judgements, that can be applied to the House such that its elections can happen on closer to "an as-needed to deal-with-the-Singularity" sort of timescale... so... much much faster... gated mostly by something like "the speed at which humans can handle a changing political zeitgeist in the age of modern media"...
C1) A "top-down no confidence" can be initiated by a majority roll call vote of the Senate, first giving the warning, then waiting 3 months, and then the Senate can hold a 2/3s private ballot vote to agree to go through with it, and then the President has 3 days to either veto (restarting the clock such that the Senate can try again with a secret ballot in 3 months) or pass it. If the Senate has a majority persistently voting in their real names (but getting vetoed by the President or the 2/3s vote) then the third such vote (taking 2 months and 6 days to occur on the schedule where the 51% votes instantly and the 67% and President drag their feet) shall also be a way to trigger a "top-down no confidence" vote. It is good form to call these Bertolt Brecht elections. If the Senate causes a top-down snap election, they can redo the federal portion of the districting (change which baby districts merge into which super district) as part of the reboot, in the hopes of getting a nearly completely new cast of characters in the House. The House would obviously still be representative (maybe too representative of an insane electorate?)... but the Senate can hope for "new specific persons raised up by The People".
C2) The Speaker gains the constitutional power to call an "internal no confidence" election. In games of Chicken vs the entire rest of the House, the Speaker should hopefully just win and have the entire House swerve. However, they have to try to rule the House for the first 2 months after the election and then they have to give a "7 day warning" in advance of the failure being legible and decisive. Part of the fear is that AI systems might attack the minds of the voters to intentionally cause the elections to crash over and over, if the minds of the voters actually start to matter to the real shape of the government. The 2 month thing puts a circuit breaker in that loop. So the Speaker can decide and make their threat unilaterally that the House deserves "no confidence" after 2 months from an election and ultimately and internally decide 7 days later about whether to kick off the next election. Then a snap election would happen as fast as pragmatically possible, probably using the internet and open source polling software that the NSA (and all the crazy programmers around the world looking at the code) say can't be hacked?
C3) If a "bottom-up no confidence" has been indicated by (i) a majority of voters overall expressing "no confidence" specifically in their own rep using the federal election system's real time monitoring processes, and (ii) a majority of reps having lost the confidence of the specific people they are supposed to represent, then a snap election shall occur as fast as pragmatically possible. The software for soliciting info from the voters would be part of the voting system, and also open source, and should be audited by the NSA and so on. Each voter, running a voting client, should get a digital receipt that tells them EXACTLY who their ballot caused them to be represented by. They should also know how far down that person was in their list of preferences from the top to the bottom. They are not allowed to call no confidence on who they ended up with as their rep for at least 2 months (just like how the Speaker can't do that). Also the people who do this have to do it in two motions, first "warning" their candidate, second "following through" at least 7 days later.
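The dual-majority trigger in C3 is easy to state precisely. A minimal sketch, assuming a simple data shape (voter id mapped to their current rep and a no-confidence flag) that is purely illustrative, since the post doesn't specify one:

```python
from collections import defaultdict

def bottom_up_no_confidence(voters):
    """Check C3's two simultaneous majorities.
    `voters` maps a voter id to (current rep, bool: registered
    "no confidence" in that rep)."""
    total = len(voters)
    unhappy = sum(nc for _, nc in voters.values())
    cond_i = 2 * unhappy > total                 # (i) overall majority
    per_rep = defaultdict(lambda: [0, 0])        # rep -> [no_conf, voters]
    for rep, nc in voters.values():
        per_rep[rep][0] += nc
        per_rep[rep][1] += 1
    reps_lost = sum(2 * n > m for n, m in per_rep.values())
    cond_ii = 2 * reps_lost > len(per_rep)       # (ii) majority of reps
    return cond_i and cond_ii
```

Requiring both conditions matters: a single wildly unpopular rep can't drag the whole House into a snap election, and neither can diffuse grumbling that never concentrates on a majority of individual reps.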
C4) Default elections using the federal election system will happen for the House at the same time as the President and/or the Senate are holding their elections using the state election system but only if there hasn't been a "no confidence" snap election in the last 6 months. No convened elected House shall go longer, without any election, than 30(=6+24) months. Note that since the federal election system will be open source, it should be quite easy for the states to copypasta it, if they want (with any tweaks, if they want). The voters will get to see for themselves which layer of government is the bigger shitshow, in a head-to-head competition, and judge accordingly.
D) There will be a local town hall style system inside each superdistrict, with federal funding to rent the physical venue in a stadium or an auditorium or a conference center or whatever, and federal internet hosting for the video and transcripts from the proceedings, where the "popular also rans" from each superdistrict get privileges to ask questions in hearings with the superdistrict winners when the winners are visiting home from DC. These events will occur 1 month after every election, whenever a no confidence warning has been issued by the Senate or the Speaker, and 7 days before a Default Election. Basically: there will be debates both before and after elections, and the people who ask questions won't be plants. Voters, in their final election "receipt", will see the "also ran representatives". Part of the goal here is to get people to see the ideological diversity of their own neighbors, and to learn new names they could rank higher on their lists next time, showing a lot more ideological diversity at both the local and federal level, so the voters can change their minds if they become embarrassed by what is said by the people who nominally represent them. Also, voters can just "fire and forget" on their "no confidence" status updates, by proxying their "no confidence" to any single one of these "also ran reps" that they ranked higher than whoever is actually currently representing them.
Thus, each "also ran" will have some real power, connected to a real voice, and will be able to credibly threaten all five of the winners from a superdistrict with "no confidence" to some degree or another, if they get a lot of disgruntled voters to proxy their confidence to them. Hopefully this gives each voter TWO people to complain to about the House, and lets them not be constantly obsessed with politics in real time forever, because that would be very exhausting and a terrible waste of brain power.
(There's a lurking implication here where reps who were elected and who were also the first choice of a lot of voters will get "confidence vs no confidence" directly by those first choice voters, who will not be allowed to proxy their "no confidence", because those voters won't have anyone that they ranked higher on their ballot than who they ended up being represented by! Either these voters will have to watch their representative more carefully all by themselves, or else those elected people will be predictably more secure as their unproxied supporters get distracted and don't register "no confidence" for stuff that they just never observed or heard about. This was an unintended design outcome, but on reflection I think I endorse it as a sort of circuit breaker that makes really good representatives very safe and really bad voters particularly clear targets for appeals to change their mind by their fellow voters.)
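The proxy rule implied by D and the parenthetical above reduces to one line of list logic: a voter's eligible proxy targets are exactly the names they ranked strictly above the rep they ended up with, which is an empty list for first-choice voters (the circuit breaker just described). A sketch, with the function name and data shape being illustrative rather than anything the post specifies:

```python
def proxy_targets(ballot, actual_rep):
    """Names a voter may proxy their "no confidence" to: everyone they
    ranked strictly above the rep their ballot got them. First-choice
    voters get [], so they must watch their rep themselves."""
    if actual_rep in ballot:
        return list(ballot[: ballot.index(actual_rep)])
    return list(ballot)   # rep unranked: every ranked name was preferred
```

So a voter whose third choice won can proxy to either of their top two "also rans", while a voter whose first choice won has no one to hand the job to.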
What you WISH would happen is that everyone (from the voters up to the Speaker) would just universally derive common sense morally good government policy from first principles to the best of their ability... and then elections would basically just amount to picking the wisest person around who is willing to perform altruistic government service in a fair way to cheaply produce public goods and cheaply mitigate the negative externalities, that naturally arise when free people exercise their freedom to exchange within locally competitive and efficient markets, in obviously good and fair ways.
I fear that my proposal will cause a lot of churn and drama at first, and seem to be broken, and to be a source of constitutional crises for... maybe 1-6 years? It might seem a bit like a civil war between the Republicrats and the New System, except fought with words and voting? The House might well reboot every 6 months for a while, until the first wave of Senate elections occurred.
But after 12 years (time enough for the Senate to reboot twice) I'd expect the House to become quite boring and very very very reasonable and prudent seeming to nearly everyone, such that the US could (and would want to) switch to a fully Parliamentary system within 18 years and think "what took us so long to do this obviously sensible thing?"
One thing to remember is that Rulers Who Rule A Long Time Are Generally Less Aligned With The People.
I think most people haven't internalized the logic of such processes, and somehow have invented some kind of bullshit cope such that they can imagine that having the same representatives and elected officials for long stretches of time (with children of famous politicians being elected based on name recognition) is somehow "good", instead of a really really terrible sign. Then many of the people who don't believe this are in favor of (and sometimes even pass) term limit laws instead of designing elections with high turnover based on minor dissatisfactions, which is the opposite of the right move. Term limits REMOVE voter influence (again, like so many other things) rather than enabling voters to have more influence to truly pick who they truly think (1) is wise and (2) has their interests at heart.
My proposal treats "lots of people cycling through the House very fast for very short stints based on actual voting that solicits many bits of information from actual voters on low latency cycles" as a valid and good thing, and potentially just a "necessary cost of doing business" in the course of trying to literally just have the best possible (representative) government that can be had.
If ANYONE survives that kind of tumult, you would expect them to be shockingly benevolent and skilled rulers. You wouldn't want people so exquisitely selected from huge numbers by thorough sifting to then get "termed out"! That would be a tragedy!
In the ideal case, the US House would eventually have sufficient global centrality (because the US government is kind of the imperial government of the world?), and sufficient recognized wisdom (because this proposal makes it stop being a dumpster fire?), that eventually lots of countries would simply want to join the US, and get to help select the membership of our House, which could become the de facto and eventually de jure world government.
The really hard thing is how to reconcile this vision with individual rights. Most Americans don't actually understand social contract theory anymore, and can't derive rights from first principles... so the proposed House, if it were really properly representative, might be even more hostile to the Bill Of Rights than it already is, which would set them very strongly against the SCOTUS and I don't know what the resolution of that process would look like in the end :-(
My hope is that the (1) fast cycling, and (2) "most central wins" dynamics of the new electoral scheme...
...would cause "reasonableness" to become prestigious again?
And then maybe a generation of reasonable humans would come along and stop voting against individual rights so much? Maybe? Hopefully?
If you think voters are just completely stupid and evil, then I could see how that would be a coherent and reasonable reason to be against my proposal... but then for such people I'd wonder why you aren't already organizing a coup of all existing governments (except the authoritarian governments that are really great at respecting individual rights... except I think there is no such thing as a current or past example of a real government that is both authoritarian and also individual-rights-respecting).
It is precisely this sloshing back and forth between these alternatives ("actually good" vs "actually democratic") that causes me to try to "steelman the idea of representative government" with this proposal.
Granting that the existing government is neither competent nor honest nor benevolent, maybe the problem is that "true democracy has never actually been tried" and so maybe we should actually try "true democracy" before we overthrow the existing shambolic horror?
However, this full extended vision aims to imagine (1) how a good House could actually work, and (2) how the voters could learn to stop being hostile to freedom and individual rights, and (3) how other countries wanted to get in on the deal... and if it hits all of its various aims at the same time then it might give humanity "world peace" for free, as a side effect? <3
You gotta have hope, right? :-)
You gotta say what might actually work in Heaven BEFORE you start compromising with the Devil, right? :-)
There are still some compromises with the Devil in my plan, but the only devils I'm trying to compromise with here are the voters themselves.
Your summary did not contain the keyword "unlearning", which suggested that maybe the people involved didn't know how Hopfield Networks form spurious memories by default that need to be unlearned. However, the article you linked mentions "unlearn" 10 times, so my assumption is that they are aware of this background and re-used the jargon on purpose.
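For context on the jargon: in the Hopfield/Feinstein/Palmer sense, "unlearning" has the network free-run from random states and then weakly anti-learn whatever attractors it falls into, which preferentially erodes spurious mixture memories that were never explicitly stored. A minimal numpy sketch (the network size, epsilon, and trial counts are arbitrary choices of mine, not anything from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
patterns = rng.choice([-1, 1], size=(3, N))          # stored memories

# Hebbian storage: W is the sum of outer products, zero diagonal.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def settle(state, W, sweeps=30):
    """Asynchronous +/-1 updates; with symmetric W the energy only
    falls, so the state rolls downhill into some attractor (a stored
    memory, or a spurious mixture state)."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

def unlearn(W, epsilon=0.01, trials=30):
    """Free-run from random states and weakly anti-learn each attractor
    found. Spurious mixtures, which are visited but never re-reinforced
    by ordinary learning, get eroded."""
    W = W.copy()
    for _ in range(trials):
        s = settle(rng.choice([-1, 1], size=N), W)
        W -= epsilon * np.outer(s, s)
        np.fill_diagonal(W, 0)
    return W
```

With 3 patterns in 64 units the stored memories are comfortable fixed points, and the unlearning decrement is small enough that they survive while shallow spurious basins get chipped away.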