Claim 1: there is an AI system that (1) performs well ... (2) generalizes far outside of its training distribution.
Don't humans provide an existence proof of this? The point about there being a 'core' of general intelligence seems unnecessary.
I know that this is a common argument against amplification, but I've never found it super compelling. People often point to evil corporations to show that unaligned behavior can emerge from aligned humans, but I don't think this analogy is very strong. Humans in fact do not share the same goals and are generally competing with each other over resources and power, which seems like the main source of inadequate equilibria to me.
If everyone in the world was a copy of Eliezer, I don't think we would have a coordination problem around building AGI. They w... (read more)
That's a good point. I guess I don't expect this to be a big problem because:

1. I think 1,000,000 copies of myself could still get a heck of a lot done.
2. The first human-level AGI might be way more creative than your average human. It would probably be trained on data from billions of humans, so all of those different ways of thinking could be latent in the model.
3. The copies can potentially diverge. I'm expecting the first transformative model to be stateful and able to meta-learn. This could be as simple as giving a transformer read and write ... (read more)
Wait... I'm quite confused. In the decision rule, how is the set of environments 'E' determined? If it contains every possible environment, then this means I should behave as if I am in the worst possible world, which would cause me to do some crazy things.

Also, when you say that an infra-Bayesian agent models the world with a set of probability distributions, what does this mean? Does the set contain every distribution that would be consistent with the agent's observations? But isn't that almost all probability distributions? Some distributions match the d... (read more)
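For concreteness, here's a minimal sketch of the maximin decision rule I'm asking about (the action names and payoffs are made up): pick the action whose worst-case expected utility over the environment set E is highest. The larger E is, the more paranoid the recommendation gets, which is exactly the worry above.

```python
# Minimal sketch of a maximin decision rule over a set of environments E.
# utilities[action][environment] = expected utility of that action there.
# All names and numbers below are hypothetical.
utilities = {
    "cautious": {"benign": 5, "adversarial": 4},
    "bold":     {"benign": 9, "adversarial": 0},
}

def maximin_action(utilities):
    # Choose the action maximizing the worst-case expected utility.
    return max(utilities, key=lambda a: min(utilities[a].values()))

print(maximin_action(utilities))  # -> "cautious": bold's worst case is 0
```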
Interesting post! I'm not sure I understand the connection between infra-Bayesianism and Newcomb's paradox very well. The decision procedure you outlined in the first example seems equivalent to an evidential decision theorist placing 0 credence on worlds where Omega makes an incorrect prediction. What is the infra-Bayesian framework doing differently? It just looks like the credence distribution over worlds is disguised by the 'Nirvana trick.'
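To spell out the equivalence I have in mind, here's a toy calculation using the standard (hypothetical) Newcomb payoffs: an EDT agent with zero credence on incorrect-prediction worlds conditions on Omega's prediction matching its action.

```python
# Toy Newcomb calculation for the EDT-with-zero-credence reading above.
# Standard hypothetical payoffs: the opaque box holds $1M iff Omega
# predicted one-boxing; the transparent box always holds $1,000.
def payoff(pred_one_box, one_box):
    box_b = 1_000_000 if pred_one_box else 0
    return box_b if one_box else box_b + 1_000

def edt_value(one_box):
    # Zero credence on worlds where Omega predicts incorrectly, so
    # conditioning on the action fixes the prediction to match it.
    return payoff(pred_one_box=one_box, one_box=one_box)

print(edt_value(True), edt_value(False))  # one-boxing dominates
```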
It's great that you are trying to develop a more detailed understanding of inner alignment. I noticed that you didn't talk about deception much. In particular, the statement below is false:
Generalization <=> accurate priors + diverse data
You have to worry about what John Wentworth calls 'manipulation of imperfect search.' You can have accurate priors and diverse data, and (unless you have infinite data) the training process could still produce a deceptive agent that is able to maintain its misalignment.
I'm guessing that you are referring to this:
Another strategy is to use intermittent oversight – i.e. get an amplified version of the current aligned model to (somehow) determine whether the upgraded model has the same objective before proceeding.
The intermittent oversight strategy does depend on some level of transparency. This is only one of the ideas I mentioned though (and it is not original). The post in general does not assume anything about our transparency capabilities.
I'm not sure I understand. We might not be on the same page.

Here's the concern I'm addressing: let's say we build a fully aligned human-level AGI, but we want to scale it up to superintelligence. This seems much harder to do safely than training the human-level AGI, since you need a training signal that's better than human feedback/imitation.
Here's the point I am making about that concern: it might actually be quite easy to scale an already aligned AGI up to superintelligence -- even if you don't have a scalable outer-aligned training signal -- because the AGI will be motivated to crystallize its aligned objective.
Thanks for the thoughtful review! I think this is overall a good read of what I was saying. I agree now that redundancy would not work.
The mesaobjective that was aligned to our base objective in the original setting is no longer aligned in the new setting
When I said that the 'human-level' AGI is assumed to be aligned, I meant that it has an aligned mesa-objective (corrigibly or internally) -- not that it has an objective that was functionally aligned on the training distribution, but may not remain aligned under distribution shift. I thought that internally/corrigibly aligned mesa-objectives are intent-aligned on all (plausible) distributions by definition...
Adding some thoughts that came out of a conversation with Thomas Kwa:
Gradient hacking seems difficult. Humans have pretty weak introspective access to their goals. I have a hard time determining whether my goals have changed or if I have gained information about what they are. There isn't a good reason to believe that the AIs we build will be different.
'Safety' and 'value alignment' are currently somewhat toxic words. Safety is becoming more normalized due to its associations with uncertainty, adversarial robustness, and reliability, which are considered respectable. Discussions of superintelligence are often derided as "not serious," "not grounded," or "science fiction."
Here's a relevant question in the 2016 survey of AI researchers:
These numbers seem to conflict with what you said, but maybe I'm misinterpreting you. If there is a conflict here, do you think that if this survey were done again, the... (read more)
I have an objection to the point about how AI models will be more efficient because they don't need to do massive parallelization:
Massive parallelization is useful for AI models too and for somewhat similar reasons. Parallel computation allows the model to spit out a result more quickly. In the biological setting, this is great because it means you can move out of the way when a tiger jumps toward you. In the ML setting, this is great because it allows the gradient to be computed more quickly. The disadvantage of parallelization is that it means that more ... (read more)
Here's another milestone in AI development that I expect to happen in the next few years which could be worth noting:

I don't think any of the large language models that currently exist write anything to an external memory. You can get a chatbot to hold a conversation and 'remember' what was said by appending the dialogue to its next input, but I'd imagine this would get unwieldy if you want your language model to keep track of details over a large number of interactions. Fine-tuning a language model so that it makes use of a memory could lead to:

1. Mo... (read more)
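The context-appending approach I'm contrasting with can be sketched like this (the model here is a hypothetical stand-in; the point is just that the input grows every turn):

```python
# Toy illustration of the context-appending approach described above.
# `dummy_model` is a hypothetical stand-in for a real language model.
def dummy_model(prompt: str) -> str:
    return f"(reply after reading {len(prompt)} chars)"

class AppendingChatbot:
    """'Remembers' by re-feeding the whole transcript every turn."""
    def __init__(self):
        self.context = ""

    def respond(self, msg: str) -> str:
        self.context += f"User: {msg}\n"
        reply = dummy_model(self.context)  # input grows with history
        self.context += f"Bot: {reply}\n"
        return reply

bot = AppendingChatbot()
sizes = []
for msg in ["hi", "remember X", "what was X?"]:
    bot.respond(msg)
    sizes.append(len(bot.context))
print(sizes)  # strictly increasing: the unwieldiness mentioned above
```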
I'm pretty confused about the plan to use ELK to solve outer alignment. If Cakey is not actually trained, how are amplified humans accessing its world model?

"To avoid this fate, we hope to find some way to directly learn whatever skills and knowledge Cakey would have developed over the course of training without actually training a cake-optimizing AI..."
I don't think I agree that this undermines my argument. I showed that the utility function of person 1 is of the form h(x + y) where h is monotonic increasing. This respects the fact that the utility function is not unique: 2(x + y) + 1 would qualify, as would 3 log(x + y), etc.

Showing that the utility function must have this form is enough to prove total utilitarianism in this case, since when you compare h(x + y) to h(x' + y'), h becomes irrelevant. It is the same as comparing x + y to x' + y'.
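A quick numerical check of the "h becomes irrelevant" step (the particular h's below are just illustrative examples of monotonic increasing functions):

```python
import math

# For strictly increasing h: h(x + y) > h(x' + y')  iff  x + y > x' + y'.
hs = [
    lambda s: 2 * s + 1,
    lambda s: 3 * math.log(s),  # needs positive totals, as here
    lambda s: s ** 3,
]

cases = [
    ((4.0, 2.0), (1.0, 3.0)),  # totals 6 vs 4
    ((0.5, 0.5), (2.0, 2.0)),  # totals 1 vs 4
]

for (x, y), (xp, yp) in cases:
    for h in hs:
        assert (h(x + y) > h(xp + yp)) == (x + y > xp + yp)
print("ordering preserved for all h")
```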
This is a much more agreeable assumption. When I get a chance, I'll make sure it can replace the fairness one, add it to the proof, and give you credit.
I am defining it as you said. They are like movie frames that haven't been projected yet. I agree that the pre-arranged nature of the snapshots is irrelevant -- that was the point of the example (sorry that this wasn't clear).

The purpose of the example was to falsify the following hypothesis: "In order for a simulation to produce conscious experiences, it must compute the next state based on the previous state. It can't just 'play the simulation from memory.'"
Maybe what you are getting at is that this hypothesis doesn't do justice to the intuitions tha... (read more)
This looks great. I'll check it out, thanks!
This is great. I'd love to see more stuff like this.

Is anyone aware of articles like Chris Olah's views on AI Safety but for other prominent researchers?

Also, it would be great to see a breakdown of how the approaches of ARC, MIRI, Anthropic, Redwood, etc. differ. Does this exist somewhere?
I agree that the claims are doing all of the work and that this is not a convincing argument for utilitarianism. I often hear arguments for moral philosophies that make a ton of implicit assumptions. I think that once you make them explicit and actually try to be rigorous, the argument always seems less impressive and less convincing.
Good point, I overlooked this.