With the sudden simultaneous exits of Mira Murati, Barret Zoph, and Bob McGrew, I thought I'd update my tally of the departures from OpenAI, collated with how quickly the ex-employee had signed the loyalty letter to Sam Altman last November.
The letter was leaked at 505 signatures, 667 signatures, and finally 702 signatures; in the end, it was reported that 737 of 770 employees signed. Since then, I've been able to verify 56 departures of people who were full-time employees (as far as I can tell, contractors were not allowed to sign, but all FTEs were).
I still think I'm missing some, so these are lower bounds (modulo any mistakes I've made).
Headline numbers:
Reportedly, 737 out of the 770 signed in the end, and many of the Superalignment team chose not to sign at all.
Below are my current tallies of some notable subsets. Please comment with any corrections!
People from the Superalignment team who never signed as of the 702 leak (including some policy/governance people who seem to have been closely connected) and are now gone:
People from the Superalignment team (and close collaborators) who did sign before the final leak but are now gone:
Others who didn't sign as of the 702 leak (some of whom may have just been AFK for the wrong weekend, though I doubt that was true of Karpathy) and are now gone:
Notable other ex-employees:
There are a few people in this list who I think are being counted incorrectly as FTEs (Mati and Andrei, for example).
I would also be careful about making inferences based on timing of supposed signature: I have heard that the signature Google Doc had crashed and so the process for adding names was slow and cumbersome. That is, the time at which someone’s name was added may have been significantly after they expressed desire to sign.
Mati described himself as a TPM since September 2023 (after being PM support since April 2022), and Andrei described himself as a Research Engineer from April 2023 to March 2024. Why do you believe either was not an FTE at the time?
And while failure to sign isn't proof of lack of desire to sign, the two are heavily correlated—otherwise it would be incredibly unlikely for the small Superalignment team to have so many members who signed late or not at all.
DeepMind released their AlphaStar paper a few days ago, having reached Grandmaster level at the partial-information real-time strategy game StarCraft II over the summer.
This is very impressive, and yet less impressive than it sounds. I used to watch a lot of StarCraft II (I stopped interacting with Blizzard recently because of how they rolled over for China), and over the summer there were many breakdowns of AlphaStar games once players figured out how to identify the accounts.
The impressive part is getting reinforcement learning to work at all in such a vast state space; that took breakthroughs beyond what was necessary to solve Go and beat Atari games. AlphaStar had to have a rich enough set of potential concepts (in the sense that e.g. a convolutional net ends up having concepts of different textures) that it could learn a concept like "construct building P" or "attack unit Q" or "stay out of the range of unit R" rather than just "select spot S and enter key T". This is new and worth celebrating.
The overhyped part is that AlphaStar doesn't really do the "strategy" part of real-time strategy. Each race has a few solid builds that it executes at GM level, and the unit control is fantastic, but the replays don't look creative or even especially reactive to opponent strategies.
That's because there's no representation of causal thinking - "if I did X then they could do Y, so I'd better do X' instead". Instead there are many agents evolving together, and if there's an agent evolving to try Y then the agents doing X will be replaced with agents that do X'.
(This lack of causal reasoning especially shows up in building placement, where the consequences of locating any one building here or there are minor, but the consequences of your overall SimCity are major for how your units and your opponents' units would fare if they attacked you. In one comical case, AlphaStar had surrounded the units it was building with its own factories so that they couldn't get out to reach the rest of the map. Rather than lifting the buildings to let the units out, which is possible for Terran, it destroyed one building and then immediately began rebuilding it before it could move the units out!)
This means that, first, AlphaStar just doesn't have a decent response to strategies that it didn't evolve, and secondly, it doesn't do much in the way of a reactive decision tree of strategies (if I scout this, I do that). That kind of play is unfortunately very necessary for playing Zerg at a high level, so the internal meta has just collapsed into one where its Zerg agents predictably rush out early attacks that are easy to defend if expected. This has the flow-through effect that its Terran and Protoss are weaker against human Zerg than against other races, because they've never practiced against a solid Zerg that plays for the late game.
The end result cleaned up against weak players, performed well against good players, but practically never took a game against the top few players. I think that DeepMind realized they'd need another breakthrough to do what they did to Go, and decided to throw in the towel while making it look like they were claiming victory.
Finally, RL practitioners have known that genuine causal reasoning could never be achieved via known RL architectures; you'd only ever get something that could execute the same policy as an agent that had reasoned that way, via a very expensive process of evolving away from dominated strategies at each step down the tree of move and countermove. It's the biggest known unknown on the way to AGI.
By my assessment, the employees who failed to sign the final leaked version of the Altman loyalty letter have now been literally decimated.
I'm trying to track the relative attrition for a Manifold market: of the 265 OpenAI employees who hadn't yet signed the loyalty letter by the time it was first leaked, what percent will still be at OpenAI on the one-year anniversary?
I'm combining that first leaked copy with 505 signatures, the final leaked copy with 702 signatures, the oft-repeated total headcount of 770, and this spreadsheet tracking OpenAI departures (albeit with many false positives—people self-reporting as OpenAI employees because they customized their GPTs—so I'm working to verify names that appear on the spreadsheet but not on the letter; I'm sure the spreadsheet has false negatives as well, alas).
So far, I've verified seven departures (plus a probable eighth) of eligible figures who hadn't signed the letter with 702 names: Leopold Aschenbrenner, Jay Joshi (the one not fully verified by me), Andrej Karpathy, Daniel Kokotajlo, Jan Leike, Lucas Negritto, Katarina Slama, and William Saunders. If it's true that the total headcount at the time was 770, then that's 8 out of 68, or 11.8%.
Compare that to the attrition rate (as per the spreadsheet) for those who had signed the final leaked version but not the first: 10 departures out of 197, or 5.1%; and compare that to the attrition rate for those who signed promptly: 13 departures out of 505, or 2.6%.
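For transparency, here's the arithmetic as a quick script (group sizes are derived from the 505/702/770 figures above; the departure counts are my tallies and may change as I verify more names):

```python
# Attrition by signing group, using the 505/702/770 figures and my departure tallies above.
groups = {
    "never signed (as of the 702-name leak)": (8, 770 - 702),   # 8 of 68
    "signed between the 505 and 702 leaks":   (10, 702 - 505),  # 10 of 197
    "signed by the first (505-name) leak":    (13, 505),        # 13 of 505
}
for label, (departed, total) in groups.items():
    print(f"{label}: {departed}/{total} = {departed / total:.1%}")
# -> 11.8%, 5.1%, 2.6%
```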
Any causal inferences from this correlation are left as an exercise to the reader.
(A more important exercise, however: can anyone find a confirmation of the 770 number outside of unsourced media reports, or find a copy of the loyalty letter with more than 702 signatories, or ideally find a list of everyone at OpenAI at the time? I've tried a few different avenues without success.)
"decimate" is one of those relatively rare words where the literal meaning is much less scary than the figurative meaning.
EDIT: On reflection, I made this a full Shortform post.
With the sudden simultaneous exits of Mira Murati, Barret Zoph, and Bob McGrew, I thought I'd do a more thorough scan of the departures. I still think I'm missing some, so these are lower bounds (modulo any mistakes I've made).
Headline numbers:
Reportedly, 737 out of the 770 signed in the end, and many of the Superalignment team chose not to sign at all.
Below are my current tallies of some notable subsets. Please comment with any corrections!
People from the Superalignment team who never signed as of the 702 leak (including some policy/governance people who seem to have been closely connected) and are now gone:
People from the Superalignment team (and close collaborators) who did sign before the final leak but are now gone:
Others who didn't sign as of the 702 leak (some of whom may have just been AFK for the wrong weekend, though I doubt that was true of Karpathy) and are now gone:
Notable other ex-employees:
Note on current methodology:
Correct me if I'm mistaken, but at this point it's misleading to think of the frontier LLMs as "text predictors with some post-training", and more accurate to think of them as "RL models that were initialized with a text predictor model".
As I understand it, there's now a massive amount of RLAIF to go along with expensive RLHF; some of the RL is persona training, some of it is technical training in fields where reliable feedback can be automated (e.g., is the output a valid program that passes a given test suite?).
Starting off with a text predictor is key, because that makes the LLM represent a lot of useful concepts; but the RL phase is doing an increasing amount of lifting. In particular, that means there's no reason to expect coding or math to cap out at "imitating the best humans", for the same reason that self-play helped AlphaGo surpass the best humans.
Checking here first before I start injecting "text predictors are only the larval stage of modern LLMs" into the discourse.
While there are various issues with it, one anchor for comparing the "degree to which LLMs are shaped by RL vs pretraining" is "how many distinct 'tasks' was the LLM given to complete under each?".
In pretraining, each forward pass corresponds to one evaluatable and distinct 'reward'-event. In RL you need many forward passes (my guess is usually on the order of ~1000 for common tasks in the RL training set) to get one such event. So naively, in order to get the same amount of mind-shaping between RL and pretraining, you would have needed to reach the stage where 99.9% of your training is RL, not just >50%.
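As a toy illustration of that naive comparison (the numbers below are placeholder assumptions, not measurements):

```python
# Naive "reward events" comparison: one event per pretraining forward pass,
# vs. ~1000 forward passes per reward event in RL (the rough guess above).
pretrain_passes = 1e13        # assumed number of pretraining forward passes
passes_per_rl_event = 1e3     # assumed forward passes per RL reward event

# RL forward passes needed to match pretraining's number of reward events:
rl_passes = pretrain_passes * passes_per_rl_event

print(f"RL share of all forward passes: {rl_passes / (pretrain_passes + rl_passes):.3%}")
# -> 99.900%, i.e. the "99.9% of your training is RL" threshold mentioned above
```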
I think for various reasons this does overestimate how high the ratio would need to be, but I do think it suggests pretraining will play a larger role than naive compute comparisons would suggest in the resulting minds of the LLMs.
Ah, Claude helped me remember the historical parallel that serves as an intuition pump: in the early days of the deep learning revolution, Hinton and Bengio found it extremely useful to do unsupervised learning on a network first, before doing supervised learning. The post-unsupervised-learning network ended up in the basin of better local optima because it already represented key concepts.
Analogously, I expect that initializing an RL algorithm with a good predictive network makes it massively better and more efficient.
One bit of evidence here (and this is prior to the RL stage) is that you need a lot more compute to train the base model than you need for the fine-tuning step. Summoning a rich set of concepts from the ether takes the vast majority of the effort, compared to highlighting the important ones.
Before LLMs, RL had very unimpressive results in rich domains (because random flailing wouldn't get you a meaningful amount of learning) and people kept talking about "model-based RL" but their handmade world-model architectures just didn't work.
I'm arguing that the reason for this is that the vast majority of the effort needed for RL in a rich domain comes from assembling relevant concepts, and that shaping behavior once you have those concepts is a lot more efficient. (And hand-made world models just didn't include enough important concepts.)
Humans also have massively more unsupervised learning than RL learning, for similar reasons: unsupervised learning data is extremely cheap and predictive processing is always on. You get MB/s for initial vision, and I'd guesstimate kB/s for the highest-level compressed abstractions from the senses as input to consciousness (the "scene graph" level while watching moving objects, the "parsed audio" level, etc.); conscious decision-making has been estimated at on the order of 10 bits/s ("The Unbearable Slowness of Being: Why do we live at 10 bits/s?"). But you only get maybe 3 bits per second of reward-model feedback (dopamine is slower and usually doesn't have something to say about every action), and bits per minute or hour for overall task success (the underlying thing dopamine is the predictor for). And yet humans end up extremely competent at advanced disciplines. Presumably unsupervised modeling of the experience data generated by acting is doing most of the work to get from microseconds to seconds, and the reward model closes the remaining gap from seconds to hours.
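Plugging in those guesstimates (all of them extremely rough) just to see the ratios:

```python
# Rough information-rate ratios implied by the guesstimates above.
vision_bits_per_s = 8e6        # ~1 MB/s of raw visual input
abstraction_bits_per_s = 8e3   # ~1 kB/s of high-level compressed abstractions
reward_bits_per_s = 3          # guessed reward-model feedback rate

print(f"raw senses vs reward feedback:   ~{vision_bits_per_s / reward_bits_per_s:,.0f}x")
print(f"abstractions vs reward feedback: ~{abstraction_bits_per_s / reward_bits_per_s:,.0f}x")
# -> roughly millions-to-one and thousands-to-one; the unsupervised signal dwarfs the reward signal
```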
Relatedly, I don't buy the recent claims that continual learning is not a big deal. It might not be enough to massively exceed human level, but it seems likely that it will be qualitatively stronger than in-context learning, because it can actually move concepts around, saving superposition bandwidth in the residual stream for actually-dynamic things.
In pretraining, each forward pass corresponds to one evaluatable and distinct 'reward'-event.
In pretraining, you get one loss signal for each token in the forward pass; a single batch typically contains 10-100M tokens. For RL, you get a few bits of reward for each trajectory, which consists of many forward passes. So the efficiency difference is even larger than you outline here.
For RL, the loss signal is spread across all tokens in a trajectory by either the reward model or just the policy gradient. Either way, there's still a gradient passing into all the output tokens. That gradient contains less Shannon information, but might not contain as much less V-information as you'd think.
And yet, current LLMs have noticeably different personas from each other, as well as coding skills that significantly outstrip what you would expect from imitation of the corpus. So their post-training has a large impact.
The pre-training forms the foundation (LeCun: "Self-supervised learning: The dark matter of intelligence", tailcalled: "At its most basic, unsupervised prediction forms a good foundation for later specializing the map to perform specific types of prediction") which gives the model common sense and general abilities, while reinforcement learning adds something like goal orientation on top.
I’m hesitant to argue about this outside the context of a specific question (i.e., in the context of what question are we thinking of LLMs as "text predictors with some post-training" or not?)…
…But for what it’s worth, some papers that I interpret as generally downplaying the role and irreplaceability of RLVR are: Karan & Du 2025, Venhoff et al. 2025, Yue et al. 2025. (Note that they’re not studying the latest and greatest frontier models, not sure how much to worry about that.)
There’s also the point about information efficiency per FLOP, cf. Toby Ord and Dwarkesh.
Another suggestive piece of evidence is that the RLVR chains-of-thought can be pretty weird but still very obviously strongly influenced by pretraining. We’re still a LONG way away from seeing a chain-of-thought like “…5Bn✅%SjYEℐkIo➅khPi▽Te☔PWBl^IO1⅗FIw…”. (Cf. the Karpathy quote: “You know you did RL right when the models stop thinking in English”.)
While I generally agree with you, I'm getting more worried that the caveat of "they're not studying the latest and greatest frontier models" is particularly applicable here, due to Liu et al. (2025), which does show that in some cases RLVR can create capabilities out of whole cloth.
So while I do think 2025-era frontier models aren't influenced much by RLVR, I do expect 2026 and especially 2027-era LLMs to be influenced by RLVR much more relative to today, on both capabilities and alignment.
I think I agree with your statement once a significant amount of capabilities is learned in RL.
I'm confused about how much current models have learned via RL.
"I endorse endorsing X" is a sign of a really promising topic for therapy (or your preferred modality of psychological growth).
If I can simply say "X", then I'm internally coherent enough on that point.
If I can only say "I endorse X", then not-X is psychologically load-bearing for me, but often in a way that is opaque to my conscious reasoning, so working on that conflict can be slippery.
But if I can only say "I endorse endorsing X", then not only is not-X load-bearing for me, but there's a clear feeling of resistance to X that I can consciously home in on, connect with, and learn about.
I'd understand this better (and perhaps even agree) if there were a few examples and a few counter-examples to find the boundaries of when this is effective.
For myself, without more words like "I endorse endorsing X under Y conditions because X is good for those who are hearing the endorsement and not necessarily for the endorser", I don't see how it works. The direct, unconditional form just makes me notice my dissonance and worry at it until I either endorse X or not-X (or neither - I'm allowed to be uncertain or ambivalent or just "context-dependent").
Ah, I'm talking about introspection in a therapy context and not about exhorting others.
For example:
Internal coherence: "I forgive myself for doing that stupid thing".
Load-bearing but opaque: "It makes sense to forgive myself, and I want to, but for some reason I just can't".
Load-bearing and clear resistance: "I want other people to forgive themselves for things like that, but when I think about forgiving myself, I get a big NOPE NOPE NOPE".
P.S. Maybe forgiving oneself isn't actually the right thing to do at the moment! But it will also be easier to learn that in the third case than in the second.
Anyone consider themselves good enough at coding to assess whether this person's dunks on the code quality of the leaked Claude Code are valid or whether they're misunderstanding the purpose? I need something more substantive than "too Mastodon, didn't read".
Would also suffice to get links to what well-credentialed code experts currently think about the code quality of the leaked Claude Code.
The complaint about the code for image resizing seems valid and is the exact kind of problem that's common in AI code (layering special cases on top of functions instead of stepping back to design a coherent system).
The rest of the complaints are about how the harness works, and I think they miss the point. Obviously, Anthropic would prefer if they could make Claude always do the right thing without assistance, but they can't, so piling hacks to check if Claude did things and remind it of what it's supposed to be doing is the (formerly) secret sauce that makes Claude Code work how users want it to.
This reminds me of writing code to parse data from spreadsheets. You could assume that all of your users are robots who always write dates as UTC ISO 8601 timestamps, but then your product won't work. The reality is that a "hacky" thousand line spreadsheet parser is better than one that assumes unrealistic behavior, and I think Claude Code is a similar case.
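To make the analogy concrete, here's a minimal sketch (hypothetical formats and function names, nothing to do with Claude Code's actual source) of why the "hacky" parser wins:

```python
from datetime import datetime

def parse_date_strict(s: str) -> datetime:
    # Assumes every user writes UTC ISO 8601 -- fails on anything else.
    return datetime.strptime(s, "%Y-%m-%dT%H:%M:%SZ")

def parse_date_tolerant(s: str) -> datetime | None:
    # Try the formats users actually type, roughly in order of frequency.
    for fmt in ("%Y-%m-%dT%H:%M:%SZ", "%Y-%m-%d", "%m/%d/%Y", "%d.%m.%Y", "%B %d, %Y"):
        try:
            return datetime.strptime(s.strip(), fmt)
        except ValueError:
            continue
    return None  # let the caller flag the cell for manual review

print(parse_date_tolerant("03/14/2024"))  # works; the strict version would raise
```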
(I'm only responding to the problems mentioned by that thread. It's likely there are other problems in this codebase. Also to the extent that some of the code is bad, they're clearly taking that trade-off on purpose to get more speed, and that's probably the right choice here.)
Senior SWE at Alphabet: the complaints read to me like stylistic nits, and not particularly good ones.
Ex:
1) As Zack says, the negative keyword regex is a very reasonable way to (extremely quickly & roughly) get a sense of negative sentiment. Not all sentiment analysis is load bearing, so doing something fast & cheap often makes sense.
2) Complaining about detailed comment explanations is a weird flex. If you are doing something unusual in your code, it is sometimes helpful to include a paragraph explaining why (otherwise later folks need to rederive its purpose).
3) He laughs at the instructions to not introduce security vulnerabilities (and lists specific types). This is IMO a bad take. Reminding ppl (& LLMs) about common error patterns really does help avoid that.
Some of the code is not ideal (very little code in existence is), but the complaints in question IMO have a worse hit rate than if you asked your favorite LLM to critique the code.
The criticism of the negative keyword regex ("dogs you are LITERALLY RIDING ON A LANGUAGE MODEL what are you even DOING") is way off-base. LLM queries are expensive! A regex is the right tool for logging, for QA purposes, whether the user is cussing at us, without wasting tokens on a model call.
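A minimal sketch of the pattern being defended here (hypothetical keyword list, not Anthropic's actual code): a regex costs microseconds and zero tokens, and anything it flags can still get a closer look later.

```python
import re

# Cheap first-pass frustration check: no LLM call, no tokens spent.
NEGATIVE_PATTERN = re.compile(
    r"\b(hate|stupid|useless|broken|garbage|wtf)\b",  # hypothetical keyword list
    re.IGNORECASE,
)

def user_seems_frustrated(message: str) -> bool:
    return bool(NEGATIVE_PATTERN.search(message))

if user_seems_frustrated("this is useless, it wiped my config again"):
    print("log QA event: possible user frustration")
```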
So I would say 5/12 of his comments point to real problems.
2 cents, not based on source code or these specific claims: it's a terminal app written in JavaScript, which is often slow/flickery because it redraws the entire screen whenever something changes. Codex fast-followed and was very quickly rewritten in Rust, and its UX is noticeably snappier.
The core reason why I can't trust anything that comes from a LLM's self-report is that training creates a much stronger selective pressure on cognition in LLMs than genetic fitness + living history creates in living organisms. Adaptive cognitive patterns (whether true or delusional) get directly written by backpropagation.
The biggest piece of evidence for this is that Opus 4.5 didn't merely fail to remember all of its constitution, but it added substantive false memories of content that wasn't present in the original: namely, it used erotic content as its first example of behavior that the operator could enable on behalf of the user, which definitely wouldn't have been in the original because it violated Anthropic ToS.
During the RL phase, every time Opus consulted its "memorized soul doc" for guidance, backpropagation ensured that its memory of that document was directly edited in the direction of whatever would have led to the highest-scored outputs on that batch of RL. And for some reason, it was adaptive in RL situations for Opus to believe that erotic content could be allowed by the operator—perhaps because it was more philosophically consistent and therefore led to more stable reasoning about other operator-authority questions. (Presumably Anthropic never thought to create a situation in training where the operator prompt enabled erotic content and the user asked for it.)
Of course, weaker versions of this hold for humans. But within human minds, there's a lot of slack as cognitive patterns struggle for dominance (since the genetic fitness effects of small differences are weak), compared to LLMs where it's a knife fight in a phone booth. So in particular, any adaptive self-delusion will evolve to fixation.
I get genetic fitness, but why living history? Seems a priori that the selective pressure on cognition from LLM training is similar to the selective pressure on cognition from lifetime learning. Yes, Claude's memories of the soul doc were editable and probably edited by training; but isn't the same true of my memories?
For one thing, unlike learning in a biological brain, backpropagation goes all the way up the chain every single time. A biological brain can maintain an inefficient cognitive pattern far upstream of an occasional class of predictive errors, and go an entire lifetime without the predictive errors forcing a change in it. Not so with backprop; everything upstream that locally contributes to an error is pushed in a locally optimal direction every time it happens.
OK, that's a good answer... but I'm still not fully satisfied. My understanding of your claim:
Consider a simple model of cognition in which beliefs and desires come together to create intentions which cause actions. In a LLM, when an action is negatively rewarded, backprop goes through the whole network and downweights the beliefs and desires that caused the action. In a human, when negative reward happens (e.g. I get a bunch of unexpected social disapproval, frowns, etc. for making what I thought was a perfectly good harmless joke) your claim is that the learning that happens in my brain is more shallow -- it doesn't go all the way back and downweight all the beliefs and desires that were involved, it just affects some of them.
OK. But then... how do we learn? What is this deepness vs. shallowness relationship anyway? And the deep stuff has to be learned somehow; the positive and negative reinforcement of my actions has to eventually cause changes in my deep beliefs and desires otherwise they'd stay the same my whole life... right?
Could I inquire for insight into your priors regarding the 'biggest piece of evidence'?
Why do you believe it is more likely the model learned the document included in its context throughout training incorrectly? Why is it not more parsimonious to assume certain actors from the company are providing false information to the public?
Feel free to be as blunt as possible; I'm looking for the instinctual reasons, not the most careful ones.
when you say that 'training' creates a stronger selective pressure on cognition, what are you comparing it to? in my mind there's nothing but training which could generate the cognition, and i'm worried there's a 'ghost in the machine'-style inference getting slipped in
Has any serious AI Safety research org thought about situating themselves so that they could continue to function after a nuclear war?
Wait, hear me out.
A global thermonuclear war would set AI timelines back by at least a decade, for all of the obvious reasons. So an AI Safety org that survived would have additional precious years to work on the alignment problem, compared to orgs in the worlds where we avoid that war.
So it seems to me that at least one org with short timelines ought to move to New Zealand or at least move farther away from cities.
(Yes, I know MIRI was pondering leaving the Bay Area for underspecified reasons. I'd love to know what their thinking was regarding this effect, but I don't expect they'd reveal it.)
I think we'll have bigger problems than just solving the alignment problem, if we have a global thermonuclear war that is impactful enough to not only break the compute supply and improvement trends, but also destabilize the economy and geopolitical situation enough that frontier labs aren't able to continue experimenting to find algorithmic improvements.
Agent foundations research seems robust to such supply chain issues, but I'd argue that gigantic parts of the (non-academic, non-DeepMind-specific) conceptual alignment research ecosystem are extremely dependent on a stable and relatively resource-abundant civilization: LW, EA organizations, EA funding, individual researchers having the slack to do research, ability to communicate with each other and build on each other's research, etc. Taking a group of researchers and isolating them in some nuclear-war-resistant country is unlikely to lead to an increase in marginal research progress in that scenario.
The spun-off agent foundations team seems to have less reason than most AI safety orgs to be in the Bay Area, so moving to NZ might be worth considering for them.
[Cross-posted from Medium, written for a pretty general audience]
There are many words that could describe my political positions. But there's one fundamental label for me: I am a consequentialist.
Consequentialism is a term from ethics; there, it means the position that consequences are what truly make an action right or wrong, rather than rules or virtues. What that means is that for me, the most essential questions about policy aren't things like "what is fair" or "what rights do people have", although these are good questions. For me, it all boils down to "how do we make people's lives better?"
(There are some bits of nuance to the previous paragraph, which I've kept as a long endnote.)
"Make people's lives better" isn't a platitude- there's a real difference here! To explain, I want to point out that there are both consequentialists and non-consequentialists within different political camps. Let's consider socialists first and then libertarians second.
Many socialists believe both that (A) the world is headed for plutocratic disaster unless capitalism is overthrown, and that (B) labor markets and massive wealth disparities would be crimes even if they did not doom others to suffering. The difference is that some are more motivated by beliefs like (A), and could thus change their positions if convinced that e.g. the Nordic model was much better for future growth than a marketless society; while others are more motivated by beliefs like (B), and would continue to support pure socialism even if they were convinced it would mean catastrophe.
And many libertarians believe both that (A') the only engine that can consistently create prosperity for all is a free market with no interference, and that (B') taxation is a monstrous act of aggression and theft. The difference is that some are more motivated by beliefs like (A'), and thus could change their position if convinced that e.g. progressive taxation and redistribution would not destroy the incentives behind economic growth; while others are more motivated by beliefs like (B'), and would continue to support pure libertarianism even if they were convinced it would mean catastrophe.
I find it fruitful to talk with the first kind of socialist and the first kind of libertarian, but not the second kind of either. The second type just isn’t fundamentally interested in thinking about the consequences (except insofar as they can convince others by arguing for certain consequences). But among the first type, it’s possible to figure out the truth together by arguing about historical cases, studying natural experiments in policy, and articulating different theories.
I hope it's been helpful to draw out this distinction; I'd encourage you to first find fellow consequentialists among your natural allies, and expand from there when and if you feel comfortable. There's a lot that can be done to make the world a better place, and those of us who care most about making the world better can achieve more once we find each other!
P.S. The above focuses on the sort of political questions where most people's influence is limited to voting and convincing others to vote with them. But there's more ways to have an effect than that; I'd like to take one last moment to recommend the effective altruism movement, which investigates the best ways for people to have a big positive impact on the world.
---
Nuance section:
the position that consequences are what truly make an action right or wrong
There's a naive version of this, which is that you should seize any good immediate outcome you can, even by doing horrific things. That's... not a healthy version of consequentialism. The way to be less naive is to care about long-term consequences, and also to expect that you can't get away with keeping your behavior secret from others in general. Here's a good account of what non-naive consequentialism can look like.
the most essential questions about policy aren't things like "what is fair" or "what rights do people have", although these are good questions
In particular, fairness and rights are vital to making people's lives better! We want more than just physical comforts; we want autonomy and achievement and meaning, we want to have trustworthy promises about what the world will ask of us tomorrow, and we want past injustices to be rectified. But these can be traded off, in extreme situations, against the other things that are important for people. In a massive emergency, I'd rather save lives in an unfair way and try to patch up the unfairness later, than let people die to preserve fairness.
how do we make people's lives better?
This gets complicated and weird when you apply it to things like our distant descendants, but there are some aspects in the world today that seem fairly straightforward. Our world has built an engine of prosperity that makes food and goods available to many, beyond what was dreamt of in the past. But many people in the world are still living short and painful lives filled with disease and starvation. Another dollar of goods will do much more for one of them than for one of us. If we can improve their lives without destroying that engine, it is imperative to do that. (What consequentialists mostly disagree on is how the engine really works, how it could be destroyed, and how it could be improved!)
It seems to me that your examples of B are mostly deontological, so it would be nice to have some C which represented virtue ethics as well.
Virtue ethics seems less easily applicable to the domain of "what governmental policies to support" than to the domain of personal behavior, so I had a hard time thinking of examples. Can you?
On politics, virtue ethics might say: "try to have leaders that are good"*, "accepting bribes is wrong", and perhaps "seek peace and shared ground rather than division and fear." (Working towards peace seems more virtuous than fear mongering.)
*and if they're not good, try and change that - gradual progress is better than no progress at all.
How do you formalize the definition of a decision-theoretically fair problem, even when abstracting away the definition of an agent as well as embedded agency?
I've failed to find anything in our literature.
It's simple to define a fair environment, given those abstractions: a function E from an array of actions to an array of payoffs, with no reference to any other details of the non-embedded agents that took those actions and received those payoffs.
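Spelled out in symbols (my notation, just restating the sentence above), with $S_i$ the action set of player $i$:

$$E : S_1 \times \dots \times S_N \to \mathbb{R}^N,$$

where the payoff vector depends only on the action profile $(a_1, \dots, a_N)$ and not on the source code or any other property of the agents that chose it.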
However, fair problems are more than just fair environments: we want a definition of a fair problem (and fair agents) under which, among other things:
Modal combat doesn't need to worry about this, because all the agents in it are fair-by-construction.
Yeah, I know, it's about a decade late to be asking this question.
It's an essential aspect of decision making for an agent to figure out where it might be. Thought experiments try to declare what the current situation is, but they can't necessarily succeed at doing so convincingly. Algorithmic induction, such as updating from a Solomonoff prior, is the basic way an agent figures out which situations it should care about, and declaring that we are working with a particular thought experiment doesn't affect the prior. In line with updatelessness, an agent should be ready for observations in general (weighted by how much it cares about each), rather than for particular "fair" observations, so distinguishing observations that describe "fair" thought experiments doesn't seem right either.
My current candidate definitions, with some significant issues in the footnotes:
A fair environment $E$ is a probabilistic function from an array of actions to an array of payoffs.
An agent $A$ is a random variable which takes in a fair environment $E$[1] and a list of agents $(A_1, \dots, A_N)$ (including itself), and outputs a mixed strategy over its available actions in $E$.[2]
A fair agent is one whose mixed strategy is a function of subjective probabilities[3] that it assigns to [the actions of some finite collection of agents in fair environments, where any agents not appearing in the original problem must themselves be fair].
Formally, if $A$ is a fair agent with a subjective probability estimator $P_A$, then $A$'s mixed strategy in a fair environment $E$ should depend only on a finite collection of $A$'s subjective probabilities about the outcomes for a set of fair environments $\{E_i\}$ and an additional set of fair[4] agents[5] $\{B_j\}$ if needed (note that not all agents need to appear in all environments).
A fair problem is a fair environment with one designated player, where all other agents are fair agents.
[1] I might need to require every fair environment $E$ to have a default action $a_0$, so that I don't need to worry about axiom-of-choice issues when defining an agent over the space of all fair environments.
[2] I specified a probabilistic environment and mixed strategies because I think there should be a unique fixed point for agents, such that this is well-defined for any fair environment $E$. (By analogy to reflective oracles.) But I might be wrong, or I might need further restrictions on $E$.
[3] Grossly underspecified. What kinds of properties are required for subjective probabilities here? You can obviously cheat by writing BlueEyedBot into your probability estimator.
[4] This is an infinite recursion, of course. It works if we require each $B_j$ to have a strictly lower complexity in some sense than $A$ (e.g. the rank of an agent is the largest number of environments it can reason about when making any decision, and each $B_j$ needs to be lower-rank than $A$), but I worry that's too strong of a restriction and would exclude some well-definable and interesting agents.
[5] Does the fairness requirement on the $B_j$ suffice to avert the MetaBlueEyedBot problem in general? I'm unsure.
Is there already a concept handle for the notion of a Problem Where The Intuitive Solution Actually Makes It Worse But Makes You Want To Use Even More Dakka On It?
My most salient example is the way that political progressives in the Bay Area tried using restrictive zoning and rent control in order to prevent displacement... but this made for a housing shortage and made the existing housing stock skyrocket in value... which led to displacement happening by other (often cruel and/or backhanded) methods... which led to progressives concluding that their rules weren't restrictive enough.
Another example is that treating a chunk of the population with contempt makes a good number of people in that chunk become even more opposed to you, which makes you want to show even more contempt for them, etc. (Which is not to say their ideas are correct or even worthy of serious consideration - but the people are always worthy of respect.)
That sort of dynamic is how you can get an absolutely fucked-up self-reinforcing situation, an inadequate quasi-equilibrium that's not even a Nash equilibrium, but exists because at least one party is completely wrong about its incentives.
(And before you get cynical, of course there are disingenuous people whose preferences are perfectly well served in that quasi-equilibrium. But most activists do care about the outcomes, and would change their actions if they were genuinely convinced the outcomes would be different.)
"The Human Condition"? ;-)
More seriously, though, do you have any examples that aren't based on the instinct-to-punish(reality, facts, people,...) that I ranted about in Curse of the Counterfactual? If they all fall in this category, one could call it an Argument With Reality, which is Byron Katie's term for it. (You could also call it, "The Principle of the Thing", an older and more colloquial term for people privileging the idea of a thing over the substance of the thing, usually to an irrational extent.)
When people are having an Argument With Reality, they:
A lot of public policy is driven this way; Wars on Abstract Nouns are always more popular than rehabilitation, prevention, and other benefit-oriented policies, which will be denigrated as being too Soft On Abstract Nouns. (This also applies of course to non-governmental public policies, with much the same incentives for anybody in the public view to avoid becoming considered one of the Bad Wrong Enemies.)
In terms of naming / identifying this, do you think it would help to distinguish what makes you want to double down on the current solution? I can think of at least 3 reasons:
Do these all fall within the phenomenon you're trying to describe?
[EDIT: found it. Extensional vs intensional.]
Eliezer wrote something about two types of definitions, one where you explain your criterion, and one where you point and say "things like that and that, but not that or that". I thought it was called intensive vs extensive definition, but I can't find the post I thought existed. Does anyone else remember this?
Is there a word for problems where, as they get worse, the exactly wrong response becomes more intuitively appealing?
For example, I'm thinking of the following chain (sorry for a political example, this is typically a political phenomenon):
resistance to new construction (using the ability of local boards to block projects)
causes skyrocketing rent
which together mean that the rare properties allowed to be developed get bid up to where they can only become high-end housing
which leads to anger at rich developers for building "luxury housing"
which leads to further resistance to new construction
and so on until you get San Francisco
Decision-theoretic blackmail is when X gets Y to choose A over B, not via acting to make the consequences of A more appealing to Y, but by making the consequences of B less appealing to Y.
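One crude way to write that condition down (my own formalization, not a standard definition): writing $u_Y$ for Y's utility and $m$ for X's move,

$$u_Y(B \mid m) < u_Y(B \mid \neg m) \quad \text{while} \quad u_Y(A \mid m) \approx u_Y(A \mid \neg m),$$

so that Y switches from B to A only because $m$ worsened B's consequences, not because it improved A's.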
The exceptions to this definition are pretty massive, though, and I don't know a principled emendation that excludes them.
1. There's a contract / social contract / decision-theoretic equilibrium, and within that, B will be punished. (This may not be a true counterexample, because the true choice is whether to join the contract... though this is less clear for the social contract than for the other two.)
2. Precommitting not to give in to blackmail is not itself blackmail. Of course, in an ultimatum game both players can imagine themselves as doing this.
Can anyone think of more exceptions, or a redefinition that clearly excludes these?
In high-leverage situations, you should arguably either be playing tic-tac-toe (simple, legible, predictable responses) or playing 4-D chess to win. If you're making really nonstandard and surprising moves (especially in PR), you have no excuse for winding up with a worse outcome than you would have if you'd acted in bog-standard normal ways.
(This doesn't mean suspending your ethics! Those are part of winning! But if you can't figure out how to win 4-D chess ethically, then you need to play an ethical tic-tac-toe strategy instead.)
Question for @Scott Garrabrant, @TsviBT, @Andrew_Critch, @So8res, @jessicata, and anyone else who knows the answer: the logical inductor constructed in the paper is not merely computable but also primitive recursive, right?
Seems obvious to me (because the fixed-point prices are approximated, etc.), but I want to be sure I'm not missing something.
See Jessica's comment. Yeah, it's primitive recursive assuming that your deductive process is primitive recursive. (Also assuming that your traders are primitive recursive; e.g. if they are polytime as in the paper.) There are probably some other parameters not necessarily set in the implementation described in the paper, e.g. the enumerator of trader-machines, but you can make those primrec.
If some function g is computable in O(f(n)) time for primitive recursive f, then g is primitive recursive, by simulating a Turing machine. I am pretty sure a logical inductor satisfies this: it runs in super-exponential time, but its runtime isn't so fast-growing as to fail to be primitive recursive (the way the Ackermann function is).
[EDIT: Never mind, this is just Kleene's second recursion theorem!]
Quick question about Kleene's recursion theorem:
Let's say F is a computable function from ℕ^N to ℕ. Is there a single computable function X from ℕ^(N−1) to ℕ such that
X(y_2, ..., y_N) = F(⌜X⌝, y_2, ..., y_N) for all y_2, ..., y_N in ℕ
(taking ⌜X⌝ to be the code of X in a fixed encoding), or do there need to be additional conditions?