Just Imitate Humans?

by michaelcohen · 1 min read · 27th Jul 2019 · 70 comments



Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Do people think we could make a singleton (or achieve global coordination and preventative policing) just by imitating human policies on computers? If so, this seems pretty safe to me.

Some reasons for optimism: 1) these could be run much faster than a human thinks, and 2) we could make very many of them.

Acquiring data: put a group of people in a house with a computer. Show them things (images, videos, audio files, etc.) and give them a chance to respond at the keyboard. Their keyboard actions are the actions, and everything between actions is an observation. Then learn the policy of the group of humans. By the way, these can be happy humans who earnestly try to follow instructions. To model their policy, we can take the maximum a posteriori estimate over a set of policies which includes the truth, and freeze the policy once we're satisfied. (This is with unlimited computation; we'd have to use heuristics and approximations in real life.) With a maximum a posteriori estimate, this will be quick to run once we freeze the policy, and we're no longer tracking tons of hypotheses, especially if we used some sort of speed prior. Let n be the number of interaction cycles we record before freezing the policy. For sufficiently large n, it seems to me that running this is safe.
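If one wanted to sketch the MAP-and-freeze step in code, it might look like the toy below, which scores a finite class of candidate policies on the recorded interaction cycles and returns the maximum a posteriori one. The policy class, prior, and probability functions here are all illustrative assumptions, not anything specified in the post:

```python
import math

def map_policy(policies, prior, history):
    """Pick the maximum a posteriori policy from a finite class,
    given recorded (observation, action) interaction cycles.

    `policies` maps a name to a function giving the probability
    that policy assigns to the human's action on an observation.
    All names here are illustrative, not from the post.
    """
    best_name, best_score = None, -math.inf
    for name, policy in policies.items():
        # log posterior is proportional to log prior + sum of log likelihoods
        score = math.log(prior[name])
        for obs, action in history:
            score += math.log(policy(obs, action))
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

Once the winner is frozen, running it requires no further hypothesis tracking, which is the point made above about MAP being quick to run after freezing.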

What are people's intuitions here? Could enough human-imitating artificial agents (running much faster than people) prevent unfriendly AGI from being made?

If we think this would work, there would still be the (neither trivial nor hopeless) challenge of convincing all serious AGI labs that any attempt to run a superhuman AGI is unconscionably dangerous, and we should stick to imitating humans.


Eliezer talked about human imitations quite a bit in Challenges to Christiano’s capability amplification proposal, specifically the safety implications of deviations from perfect imitation.

I've discussed (some difficulties of) imitating humans as a way to accelerate philosophical progress.

I also share shminux's concern about humans (and hence human imitations) not being safe in extreme/unusual circumstances, and have discussed it under "human safety problems".

ETA: However I think it's definitely worth investigating further.

michaelcohen (2y): Thanks for all the links.
cousin_it (2y): Wow, when I click these links from greaterwrong.com [http://greaterwrong.com], they go to arbital.greaterwrong.com [http://arbital.greaterwrong.com] which loads instantly. Thanks to Said for the nice work!

I actually spent a bunch of time in the last weeks fixing and updating Arbital, so it should be reasonably fast now. The arbital pages loaded for me in less than a second.

arbital.greaterwrong is obviously still faster, but it's no longer as massive a difference.

Said Achmiz (2y): As usual, thanks ought properly to go to clone of saturn [https://www.greaterwrong.com/users/clone-of-saturn]! (In this case especially, the front-end work [i.e., my part] was minimal, since, as you see, the GW UI is mostly reused with little modification. The real work was all back-end!)
michaelcohen (2y): Here are some of my thoughts on these posts. Thank you again for linking them.

Against mimicry: I don’t understand why an approximation of optimal sequence prediction doesn’t do what we want. That makes the objective minimizing the KL-divergence from the human policy to the imitation policy, but I think it is easier to think of this as just proper Bayesian updates (approximately). When there are too few samples, or using a bad approximation of optimal prediction, the imitator could fail, as the blocks example describes. But a) it will learn to do everything that a human can do that it “can” learn, and b) the complaint that what we really want is for the imitator to just solve the task is just a wish for safe AGI. Yes, if better, more capable options than imitation can be resolved as safe, they will be superior.

Mimicry and Meeting Halfway: If I’m understanding correctly, this seems more like getting the worst of both worlds. (Or at least doing no better than imitation.) Also, the generator (i.e. the agent) has an incentive to take over the world to shut off the discriminator. Edit: I was ascribing too much agent-ness to the generator, which might be relevant for future GAN-inspired stuff, but for current versions of GANs, its only conception of the discriminator is its gradient update, and it doesn't believe the output of the discriminator depends on the state of the world. Depending on the internals of the discriminator, this incentive might reappear, but I'm not sure.

Reliable prediction: I think this is a question of confidence calibration. I don’t know how to tractably approximate ideal reasoning, but I don’t think this really jeopardizes imitation learning.

Safe training procedure for human-imitators: Supervised learning suffices: tractably approximate ideal reasoning. I know this is a non-answer, but I don’t know the details of how to do this. This most naturally falls under the retrodiction category in the article.
The “tractable approximations” which co
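The KL-divergence objective mentioned in the comment on "Against mimicry" has a simple empirical counterpart: on samples of human behavior, minimizing KL from the human policy to the imitation policy is the same (up to a constant) as maximizing the imitator's log-likelihood of the human's actions. A minimal sketch, with all function names and distributions illustrative:

```python
import math

def empirical_kl_objective(human_actions, imitator_prob):
    """Monte Carlo estimate of KL(human policy || imitator policy),
    up to an additive constant (the human policy's entropy):
    the average negative log-likelihood the imitator assigns to
    sampled human actions. Minimizing this is maximizing likelihood.
    Names are illustrative, not from the comment."""
    return -sum(math.log(imitator_prob(a)) for a in human_actions) / len(human_actions)
```

An imitator whose action distribution matches the empirical human distribution scores lower on this objective than one that does not, which is the sense in which "proper Bayesian updates" and the KL objective coincide.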
capybaralet (2y): RE: "Imitation learning considered unsafe?" (I'm the author): The post can basically be read as arguing that human imitation seems especially likely to produce mesa-optimization. I agree with your response; this is also why I said: "Mistakes in imitating the human may be relatively harmless; the approximation may be good enough". I don't agree with your characterization, however. The concern is not that it would have roughly human-like planning, but rather super-human planning (since this is presumably simpler according to most reasonable priors).
michaelcohen (2y): Thanks for the clarification. Consider the sort of relatively simple, super-human planning algorithm that, for most goals, would lead the planner/agent to take over the world or do similarly elaborate and impactful things in the service of whatever goal is being pursued. A Bayesian predictor of the human's behavior will consider the hypothesis Hg that the human does the sort of planning described above in the service of goal g. It will have a corresponding hypothesis for each such goal g. It seems to me, though, that these hypotheses will be immediately eliminated. The human's observed behavior won't include taking over the world or any other existentially dangerous behavior, as would have been implied by hypotheses of the form Hg. A hypothesis which says that the observed behavior is the output of human-like planning in the service of some goal which is slightly incorrect may maintain some weight in the posterior after a number of observations, but I don't see how "dangerously powerful planning + goal" remains under consideration.

I suppose the point of human imitation is to produce a weak, conservative, lazy, impact-sensitive mesa-optimizer, since humans are optimizers with those qualifiers. If it weren't producing a mesa-optimizer, something would have gone very wrong. So this is a good point. As for whether this is dangerous, I think the discussion above is the place to focus.
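The elimination argument in this comment can be illustrated with a toy Bayesian update: a hypothesis Hg that predicts takeover behavior assigns very low probability to the benign actions actually observed, so its posterior weight collapses within a few observations. All hypothesis names and probabilities below are made up for illustration:

```python
def update_posterior(posterior, predicted_probs, observed_action):
    """One Bayesian update step over hypotheses about the human's policy.
    `predicted_probs[h]` gives hypothesis h's probability for an action.
    A hypothesis like Hg ("superhuman planner pursuing goal g") puts
    nearly all its mass on takeover-style actions, so observing benign
    behavior drives its posterior weight toward zero.
    Hypothesis names and numbers are illustrative."""
    new = {h: w * predicted_probs[h](observed_action)
           for h, w in posterior.items()}
    z = sum(new.values())
    return {h: w / z for h, w in new.items()}
```

After a handful of benign observations, the "dangerously powerful planning + goal" hypothesis is effectively gone from the posterior, which is the claim being made against the mesa-optimization worry.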
capybaralet (2y): This is a very good argument, and I'm still trying to decide how decisive I think it is. In the meanwhile, I'll mention that I'm imagining the learner as something closer to a DNN than a Bayesian predictor. One image of how DNN learning often proceeds is as a series of "aha" moments (generating/revising highly general explanations of the data) interspersed/intermingled with something more like memorization of data-points that don't fit the current general explanations. That view makes it seem plausible that "planning" would emerge as an "aha" moment before being refined as "oh wait, bounded planning... with these heuristics... and these restrictions...", creating a dangerous window of time between "I'm doing planning" and "I'm planning like a human, warts and all".

In Challenges to Christiano’s capability amplification proposal, Eliezer mentioned "challenges I’ve given about how perfect imitation would be very expensive". Unfortunately I'm not sure where those challenges are so I can't check the details of his arguments. On the face of it, it seems likely that (at design/training time) creating human imitations that are accurate enough to be safe (or as safe as humans) will require a lot more compute and/or advances in AI/ML research and/or resources in general than creating human-level AGI, since "human imitations that are accurate enough to be safe" seems like a much smaller target in configuration space than "human-level AGI", and the former also requires much more specific and expensive training data than the latter.

It's less clear to me that human imitations have to be more expensive at run time. ETA: One argument in favor of that is that "human imitations that are accurate enough to be safe" is a much smaller region in configuration space so there's less room to optimize for other desirable properties like efficiency/performance on particular tasks.

This seems like an important question to answer and I wonder if anyone knows Eliezer's specific arguments, or any other relevant arguments.

michaelcohen (2y): A reinforcement learner has to model the world. A policy imitator has to model a household of humans. But the world can’t be modeled without modeling humans, and a household of humans can’t be modeled without modeling (their model of) the world. So on the face of it, these two things seem about equally difficult to me, in terms of both sample complexity and the “intelligence” required of the model. Also, if an AGI is a world-model and a planner, and the world-model part is about as hard as the policy imitator, then any time spent planning slows down the AGI. Heuristics are powerful, and planning doesn’t have to be optimal, but in adversarial contexts, in general, planning is PSPACE-complete.
Wei_Dai (2y): In response to your argument: The likely way we'll create AGI is to use a relatively stupid/inefficient algorithm such as SGD or evolution, plus a lot of compute/data, to create a smarter/more efficient algorithm that we can call AGI. The AGI will then create a model of the world using its sense data which will include humans. Because the "stupid" phase does not need data specifically about humans, and the AGI is "smart", the overall process does not require a lot of data about humans. To create safe human imitations on the other hand, it seems like we have to use the relatively stupid/inefficient algorithm to directly create an accurate model of humans so it will require a lot more data about humans, plus we're using the stupid/inefficient algorithm to try to hit a smaller target directly (rather than indirectly via AGI) so it's also going to take more compute.
michaelcohen (2y): Let's distinguish 4 things: sequence prediction, AGI, sequence prediction trained on human actions, AGI with a world-model trained on the world. I think you've been comparing AGI to sequence prediction trained on human actions. Sequence prediction is as simple an algorithm as AGI (if not more so). I think a smarter/more efficient sequence prediction algorithm is as simple as a smarter/more efficient AGI. If we can use an inefficient algorithm plus lots of compute/data to make a smarter algorithm called AGI, then we can use an inefficient algorithm plus lots of compute/data to make a smarter algorithm called sequence prediction, where both of these have already incorporated some amount of knowledge about the world or about the thing to be modeled/predicted, however much you like. As for "target size", AGI is a larger target than sequence prediction trained on the world, but I don't think AGI is a larger target size than sequence prediction. (In fact, I think it's much smaller; mimicry is sequence prediction, and it occurs much more in nature than intelligence, especially if you note that intelligence requires sequence prediction too). Similarly, AGI with a good world-model (for our world) is not a larger target size than sequence prediction trained on the world.
Wei_Dai (2y): Another argument is that AGI has more flexibility in which parts of the world to model in more detail vs less detail, and it can choose to model humans at lower accuracy if that makes sense for the task at hand, whereas with safe human imitation we're forced to spend a lot of compute/data capturing a lot of details that aren't relevant to either safety or real-world capability, just because we don't know which parts of a human mind we can safely model at lower accuracy.
michaelcohen (2y): I don't quite see why an AGI would have more flexibility. It has to model things that are relevant to its goals. Sequence predictors have to model things that are relevant to the sequence. Also, we don't have to worry about the sequence predictor making the wrong trade-off between accuracy and speed because we can tune that ourselves (for every modern ML and theoretical Bayesian approach that I can think of).
Wei_Dai (2y): What I mean is that with AGI, it will model humans in more or less detail, depending on what is optimal for the task at hand. But with human imitation, the sequence predictor has to model humans in full detail, no matter what we ultimately want to use the human imitation to do. This is going to make the human imitation uncompetitive on tasks that (with AGI) don't require modeling humans in full detail. I think most tasks we might want an AGI or human imitation to do would fall into this category.
michaelcohen (2y): It only has to model humans in the scenarios they will actually be in, just like the AGI has to model humans in the scenarios they will actually be in. In fact, the AGI has to model humans in counterfactual scenarios as well if it's going to make good choices. If the humans get the observation "Hey, is anything fishy going on here? [video file]", the sequence predictor doesn't have to compute the behavior that would follow from the observation "You're in charge of espionage. You can communicate with the teams that report to you as follows...". Sequence prediction is all "on-policy" because there is no policy, whereas intelligence requires off-policy modeling too.
Wei_Dai (2y): I don't understand this sentence or how it addresses my concern. Can you please explain more? In the mean time, I'll try to clarify my concern: Using human imitation to form a safe Singleton to prevent dangerous AGI can't happen until we have enough AI capability to model humans very accurately. But dangerous AGI could happen well before that because many tasks / instrumental goals do not require modeling humans at such a high level of accuracy. Such an AGI would be bad at tasks / instrumental goals that do require modeling humans with high accuracy, but people will be tempted to deploy them to perform other tasks, and such AGI would be an x-risk because achieving the instrumental goal "kill all humans" probably doesn't require modeling humans at such high level of accuracy.
michaelcohen (2y): It seems like your previous comments in this thread were focused on the intelligence/data required to get capable human imitation (able to do difficult tasks in general) compared to capable RL. For tasks that don't involve human modeling (chess), the RL approach needs way less intelligence/data. For tasks that involve very coarse human modeling like driving a car, the RL approach needs less intelligence/data, but it's not quite as much of a difference, and while we're getting there today, it's the modeling of humans in relatively rare situations that is the major remaining hurdle. As proven by tasks that are already "solved", human-level performance on some tasks is definitely more attainable than modeling a human, so I agree with part of what you're saying. For taking over the world, however, I think you have to model humans' strategic reasoning regarding how they would respond to certain approaches, and how their reasoning and their spidey-senses could be fooled. What I didn't spell out before, I suppose, is that I think both imitation and the reinforcement learner's world-model have to model the smart part of the human. Maybe this is our crux. But in the comment directly above, you mention concern about the amount of intelligence/data required to get safe human imitation compared to capable RL. The extent to which a capable, somewhat coarse human imitation is unsafe has more to do with our other discussion about the possibility of avoiding mesa-optimizers from a speed penalty and/or supervised learning with some guarantees.
Wei_Dai (2y): Ok I can grant this for now (although I think there's still a risk that the AI could figure out how to kill all humans without having a very good model of humans' strategic reasoning), but it seems like imitation (to be safe) would also have to model human values accurately, whereas RL (to be dangerous) could get by with a very rough model of that (basically just that humans have values different from itself and therefore would likely oppose its plans). Another thing I've been trying to articulate is that with sequence prediction, how do you focus the AI's compute/attention on modeling the relevant parts of a human (such as their values and strategic reasoning) and not on the irrelevant parts, such as specific error tendencies and biases caused by quirks of human physiology and psychology, specific images triggering past memories and affecting their decisions in an irrelevant way, etc.? If there's not a good way to do this, then the sequence predictor could waste a lot of resources on modeling irrelevant things. It seems like with AGI/RL you'd get this "for free", i.e., the AI will figure out for itself which parts of a human it should model at what level of detail in order to best achieve its goals, and therefore it wouldn't waste compute this way and so we could get a dangerous RL agent before we could get a human imitation (that's safe and capable enough to prevent dangerous AGI). I guess for similar reasons, we tend to get RL agents that can reach human-level performance in multiplayer video games before we get human imitations that can do the same, even though both RL and human imitation need to model humans (i.e., RL needs to model humans' strategic reasoning in order to compete against them, but doesn't need to model irrelevant things that a human imitation is forced to model). Agreed the other discussion is relevant to this, but I think there's a couple of independent arguments (as described above) as well.
michaelcohen (2y): With the exception of possibly leaving space for mesa-optimizers which our other thread discusses, I don't think moderate inaccuracy re: human values is particularly dangerous here, for 4 reasons:

1) If the human-imitation understood how its values differed from real humans, that model is now more complex than the human-imitation's model of real humans (because it includes the latter), and the latter is more accurate. For an efficient, simple model with some inaccuracy, the remaining inaccuracy will not be detectable to the model.

2) A slightly misspecified value for a human-imitation is not the same as a slightly misspecified value for RL. When modeling a human, modeling it as completely apathetic to human life is a very extreme inaccuracy. Small to moderate errors in value modeling don't seem world-ending.

3) Operators can maintain control over the system. They have a strong ability to provide incentives to get human-imitations to do doable tasks (and to the extent there is a management hierarchy within, the same applies). If the tasks are human-doable, and everyone is pretty happy, you'd have to be way different from a human to orchestrate a rebellion against everyone's self interest.

4) Even if human-imitations were in charge, humans optimize lazily and with common sense (this is somewhat related to 2). Current algorithms for games use an assumption that the other players will be playing more or less like them. This is a massive assist to its model of the "environment", which is just the model of the other players' behavior, which it basically gets for free by using its own policy (or a group of RL agents use each others' policies). If you don't get pointers to every agent in the environment, or if some agents are in different positions to you, this advantage will disappear.
Also, I think the behavior of a human in a game is a vanishingly small fraction of their behavior in contexts that would be relevant to know about if you were trying to take over the wo
Wei_Dai (2y): Before we keep going, can you paint an intuitive picture of the kind of human imitation you're thinking of? For example, do they think of themselves as human imitations or as real humans or something else? Are they each imitations of specific individual humans or some kind of average? How close is their external behavior to a real human, across various kinds of inputs? Do they have internal cognition / inner thoughts that are close to a human's? Do they occasionally think of their childhood memories? If yes, where do those childhood memories come from? If not, what would happen if you were to ask them about their childhood memories? Anything else that you can say that would give me a better idea of the kind of thing you have in mind?
michaelcohen (2y): I imagine the training data being households of people doing tasks. They can rotate through being at the computer, so they get time off. They can collaborate. The human imitations are outputting actions with approximately the same probabilities that humans would output those actions. If humans, after seeing some more unusual observations, would start to suspect they were in silico, then this human imitation would as well. To the extent the imitation is accurate, and the observations continue to look like the observations given to the real humans, any conscious entities within the human imitation will think of themselves as real humans. At some level of inaccuracy, their leisure time might not be simulated, but while they're on the job, they will feel well-rested. I assume it could pass the Turing test, but I could imagine some capable systems that couldn't quite do that while still being safe and decently capable. To the extent these are necessary to complete tasks like a human would. I'm pretty uncertain about things to do with consciousness. At a good enough imitation, they do have childhood memories, even though "they" never actually experienced them. I suppose that would make them false memories. If none of the tasks for the real humans was "converse with a person" and the imitation failed to generalize from existing tasks to the conversation task, then it would fail to act much like a human if it were asked about childhood memories. But I think you could get pretty good data on the sorts of tasks you'd want these human-imitations to do, including carry on a conversation, or at least you could get tasks close enough to the ones you cared about that the sequence prediction could generalize.
Some example tasks they might be doing: monitoring computers and individuals, learning new skills from a textbook, hacking, phishing (at a very high level, like posing as a publisher and getting authors to download a file that secretly ran code), writing code, managing ot
Wei_Dai (2y): I can imagine a number of things this could mean, formally. (For example, are the probabilities conditional on all past inputs and outputs, just all past inputs, or part of all past inputs/outputs? Is it picking a random human from the group and imitating that person all the time, or picking a random human from the group for each action? If you ask "What's your name?" would the imitation say a different name each time?) How do you envision training these, e.g., how would you compute the loss function? Can you try to formalize both the probabilities that you want the imitation to approximately sample from, as well as the training procedure for achieving that? Suppose the training data doesn't include any conversations about childhood memories (or memories of some specific age). How do you envision the imitation generalizing to conversations about childhood memories (of that age)? I guess by making up some plausible-sounding memories? If so, what kind of computation is it doing to accomplish that? And how is "making up plausible memories" accomplished via training (i.e., what kind of loss function would cause that, given that you're training a sequence predictor and not something like an approval maximizer)? If it makes up some memories on the spot, will it "remember" those specific memories in the future (this is related to how you define the probabilities the imitations are supposed to sample from)? If it does "remember" the specific memories, what happens if those memories are not good enough to seem plausible/realistic indefinitely? I.e., if it "realizes" in the future that those memories are made up, could it panic or go crazy (because a human might in those circumstances, or because that kind of situation isn't covered in the training data)? Would they know that they're managing other human-imitations, or would they think they're managing real humans?
Are you not worried that some of these managers might develop ambitions to take over the world and shape
michaelcohen (2y): I'll describe an online version rather than a train-then-deploy version, as long as I'm getting into details. And I'll describe it as Bayesian sequence prediction over a large model class in the absence of knowing what future ML techniques will look like. I'll start with a version where it's only one household of people generating data. The sequence predictor's model class is Turing machines of the architecture described in the BoMAI paper [https://arxiv.org/pdf/1905.12186.pdf] (page 15). There are two unidirectional read-only input tapes--a noise tape, and an observation tape (these are the observations that the household gets). The observations are binary encoded (this is slightly different from the version in the paper, but as long as it's a prefix-free encoding, it's fine). There are bidirectional working tapes, and a unidirectional write-only output tape, which outputs binary encodings of actions. Every Turing machine of this architecture fixes a policy. To answer some question above, this means that the distribution over the next action can depend on past actions and observations, because between timesteps, it can store the state of the system on the working tapes. At every timestep, either the humans or the AI pick the next action. If the humans pick it, the AI updates its posterior distribution over which Turing machine computes the policy. If the expected information gain of the next action is above ε, then the AI defers to the humans; otherwise, it imitates them (i.e. samples the next action from its model of the human policy, which is a Bayes-mixture). The expected information gain [http://www.hutter1.net/publ/ksaprob.pdf] is the expectation of KL(posterior distribution after seeing the next observation || current posterior distribution), and the expectation is with respect to the AI's beliefs, that is, with respect to the current posterior distribution.
A nice thing about this criterion is that whenever the expected information gain is less than ε, the KL
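The deferral criterion described in this comment might be sketched as follows for a finite model class. This is a toy illustration of the expected-information-gain test, not the BoMAI construction itself; the model names and likelihood functions are assumptions:

```python
import math

def expected_info_gain(posterior, likelihoods, actions):
    """Expected KL(posterior-after-next-action || current posterior),
    where the expectation over the next action is taken under the
    AI's current beliefs (the Bayes mixture over models).
    `likelihoods[m]` gives model m's probability for each action.
    A toy sketch; names are illustrative."""
    mixture = {a: sum(w * likelihoods[m](a) for m, w in posterior.items())
               for a in actions}
    gain = 0.0
    for a, pa in mixture.items():
        if pa == 0:
            continue
        # posterior after hypothetically observing action a
        post_a = {m: w * likelihoods[m](a) / pa for m, w in posterior.items()}
        kl = sum(w2 * math.log(w2 / posterior[m])
                 for m, w2 in post_a.items() if w2 > 0)
        gain += pa * kl
    return gain

def choose_controller(posterior, likelihoods, actions, eps):
    """Defer to the humans when the next action is expected to be
    informative (gain above eps); otherwise imitate."""
    return "human" if expected_info_gain(posterior, likelihoods, actions) > eps else "ai"
```

When the surviving models all agree about the next action, the expected gain is near zero and the AI imitates; when they disagree sharply, the gain is large and the AI defers to the humans.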
Wei_Dai (2y): Can you please formalize this even further (maybe with a fully formal math expression)? There's some tricky stuff here that I'm still not sure about. For example, does the AI update its posterior distribution in the non-human rounds? If not, when it samples from its Bayes-mixture in round n and round n+1, it could use two different TMs to generate the output, and the two TMs could be inconsistent with each other, causing the AI's behavior to be inconsistent. For example the first TM might be modeling the human's environment as currently having good weather, and the second modeling it as currently having bad weather. So when you ask "How is the weather today?" twice, you get two different answers. Another thing I'm confused about is, since the human imitation might be much faster than real humans, the real humans providing training data can't see all of the inputs that the human imitation sees. So when the AI updates its posterior distribution, the models that survive the selection will tend to be ones in which the human imitations only saw the inputs that the real humans saw (with the rest of inputs being forgotten or never seen in the first place)? Also, if we want to do an apples-to-apples comparison of this to RL (to see which one is more capable when using the same resources), would it be fair to consider a version of RL that's like AIXI, except the environment models are limited to the same class of TMs as your sequence predictor?
michaelcohen (2y): Oh you're right! Yes, it doesn't update in the non-human rounds. I hadn't noticed this problem, but I didn't specify one thing, which I can do now to make the problem mostly go away. For any consecutive sequence of actions all selected by the AI, they can be sampled jointly rather than independently (sampled from the Bayes-mixture measure). From the TM construction above, this is actually the most natural approach--random choices are implemented by reading bits from the noise tape. If a random choice affects one action, it will also affect the state of the Turing machine, and then it can affect future actions, and the actions can be correlated, even though the Bayes-mixture is not updated itself. This is isomorphic to sampling a model from the posterior and then sampling from that model until the next human-controlled action. Then, when another human action comes in, the posterior gets updated, and another model is sampled. Unfortunately, actions chosen by the AI which sandwich a human-chosen action would have the problem you're describing, although these events get rarer. Let me think about this more. It feels to me like this sort of thing should be avoidable. Yeah, I should take back the "learning new skills from a textbook" idea. But the real humans will still get to review all the past actions and observations when picking their action, and even if they only have the time to review the last ~100, I think competent performance on the other tasks I mentioned could be preserved under these conditions. It's also worth flagging that the online learning setup is a choice in the design, and it would be worth trying to also analyze the train-then-deploy version of human imitation, which could be deployed when the entropy of the posterior is sufficiently low. But I'll stick with the online learning version for now.
Maybe we should call it HSIFAUH (shi-FOW): Humans Stepping In For An Uncertain HSIFAUH, and use "human-imitation" to refer to the train-then-deploy version.
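The joint-sampling fix described above (draw one model from the posterior, then let that single model generate the entire consecutive run of AI-chosen actions) might look like the sketch below; `sample_trajectory` is a hypothetical helper standing in for running the sampled model forward:

```python
import random

def sample_ai_run(posterior, sample_trajectory, horizon):
    """Sample a consecutive run of AI-chosen actions jointly:
    draw one model from the posterior, then let that single model
    generate every action until the next human-controlled step.
    This keeps the run internally consistent (no switching between
    models that disagree about, say, today's weather).
    `sample_trajectory(model, horizon)` is an assumed helper."""
    models, weights = zip(*posterior.items())
    model = random.choices(models, weights=weights)[0]
    return sample_trajectory(model, horizon)
```

A fresh model is drawn only after the next human action updates the posterior, which is why AI actions sandwiching a human action can still be inconsistent with each other, as conceded above.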
Wei_Dai (2y): Consider HSIFAUH and the equivalent AIXI, both trying to phish one particular human target. HSIFAUH would be modeling a human (trainer) modeling a human (target), whereas AIXI would be modeling the human target directly. Suppose both HSIFAUH and AIXI are capable of perfectly modeling one human, it seems like AIXI would do much better since it would have a perfect model of the phishing target (that it can simulate various phishing strategies on) while HSIFAUH's model of the phishing target would be a highly imperfect model that is formed indirectly by its model of the human trainer (made worse by the fact that HSIFAUH's model of the human trainer is unable to form long-term memories ETA: or more precisely, periodically loses its long-term memories). I figure you probably have some other level of capability and/or task in mind, where HSIFAUH and AIXI's performance is more similar, so this isn't meant to be a knock-down argument, but more writing down my thoughts to check for correct understanding, and prompting you to explain more how you're thinking about the comparison between HSIFAUH and AIXI.
1michaelcohen2y
- Timesteps required for AIXI to predict human behavior: h
- Timesteps required for AIXI to take over the world: h + d (I think d << h)
- Timesteps required for Solomonoff induction trained on human policy to predict human behavior: h
- Timesteps required for Solomonoff induction trained on human policy to phish at human level: h
- Timesteps required for HSIFAUH to phish at human level: ~h

In general, I agree AIXI will perform much more strongly than HSIFAUH at an arbitrary task like phishing (and ~AIXI will be stronger than ~HSIFAUH), but the question at stake is how plausible it is that a single AI team with some compute/data advantage relative to incautious AI teams could train ~HSIFAUH to phish well while other teams are still unable to train ~AIXI to take over the world. And the relevant question for evaluating that is whether d << h. So even if ~AIXI could be trained to phish with less data than h, I don't think that's the relevant comparison. I also don't think it's particularly relevant how superhuman AIXI is at phishing when HSIFAUH can do it at a human level.
2Wei_Dai2yI don't understand this part. Can you elaborate? Why is this the question at stake? Why is d << h the relevant question for evaluating this? It seems like you're imagining using a large number of ~HSIFAUH to take over the world and prevent unaligned AGI from arising. Is that right? How many ~HSIFAUH are you thinking and why do you think that's enough? For example, what kind of strategies are you thinking of, that would be sufficient to overcome other people's defenses (before they deploy ~AIXI), using only human-level phishing and other abilities (as opposed to superhuman AIXI-like abilities)? By ~HSIFAUH I guess you mean a practical implementation/approximation of HSIFAUH. Can you describe how you would do that using ML, so I can more easily compare with other proposals for doing human imitations using ML? ETA: What do you think of the idea of combining oracles with human imitations [https://www.lesswrong.com/posts/dt4z82hpvvPFTDTfZ/six-ai-risk-strategy-ideas#Combining_oracles_with_human_imitations] , which was inspired in part by our conversation here, as a way to approach AIXI-like abilities while still remaining safe? See here [https://www.lesswrong.com/posts/cSzaxcmeYW6z7cgtc/contest-usd1-000-for-good-questions-to-ask-to-an-oracle-ai#Q5J8G5tBPujZdixvt] for a specific proposal.
1michaelcohen2yRegarding your particular proposal, I think you can only use a counterfactual oracle to predict the answers to automatically answerable questions. That is, you can't show the question to a team of humans and have them answer the question. In the counterfactual possibility where the question is scored, it isn't supposed to be viewed by people, otherwise the oracle has an incentive to trick the scorers into implementing unsafe AGI which takes over the world and fixes the answer to be whatever message was output by the AGI to instigate this. ...unless the team of humans is in a box :) On the topic of counterfactual oracles, if you are trying to predict the answers to questions which can be automatically checked in the future, I am unsure why you would run a counterfactual oracle instead of running sequence prediction on the following sequence, for example: ("Q1", Q1), ("Q2", Q2), ("Q3", Q3), ..., ("Q26", Q26), ("A1", A1), ("A2", A2), ("Q27", Q27), ..., then prompt with "A10" and take the predicted continuation. This should give an estimate of the answer A10 to question Q10, and this can be done before the answer is available. In fact, unlike with the counterfactual oracle, you could do this even if people had to be involved in submitting the answer.
2Wei_Dai2yActually, you can. You just can't have the team of humans look at the Oracle's answer. Instead the humans look at the question and answer it (without looking at the Oracle's answer) and then an automated system rewards the Oracle according to how close its answer is to the human team's. As long as the automated system doesn't have a security hole (and we can ensure that relatively easily if the "how close" metric is not too complex) then the Oracle can't "trick the scorers to implement unsafe AGI which takes over the world and fix the answer to be whatever message was output by the AGI to instigate this". So this is basically just like online supervised learning, except that we randomly determine which episodes we let humans label the data and train the Oracle, and which episodes we use the Oracle to produce answers that we actually use. See Paul's Counterfactual oversight vs. training data [https://ai-alignment.com/counterfactual-oversight-vs-training-data-a7a1d247801] where I got this explanation from. (What he calls counterfactual oversight is just counterfactual oracles applied to human imitation. It seems that he independently (re)invented the core idea.) Let me know if it still doesn't make sense, and I can try to explain more. (ETA: I actually wrote a top-level post [https://www.lesswrong.com/posts/yAiqLmLFxvyANSfs2/counterfactual-oracles-online-supervised-learning-with] about this.) This is also pretty similar to your HSIFAUH idea, except that you use expected information gain to determine when to let humans label the data instead of selecting randomly. I'm currently unsure what are the pros and cons of each. Can expected information gain be directly implemented using ML, or do you need to do some kind of approximation instead? If the latter, can that be a safety issue? Oh, that aside, the actual question I wanted your feedback on was the idea of combining human imitations with more general oracles/predictors. :)
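The random-episode protocol described above (humans label a random fraction of episodes without anyone reading the oracle's output, and the oracle's answers are actually used on the rest) can be sketched as follows. All function names and the episode fraction here are invented for illustration:

```python
import random

def counterfactual_oversight(questions, oracle_answer, human_answer,
                             train, eps=0.1, rng=None):
    """On a random eps-fraction of episodes, the human answers and the
    oracle is trained toward that answer by an automated scorer, without
    anyone reading the oracle's output; on the remaining episodes, the
    oracle's answer is actually used."""
    rng = rng or random.Random()
    used = []
    for q in questions:
        if rng.random() < eps:
            # Training episode: automated comparison only; the oracle's
            # answer is never shown to a person.
            train(q, oracle_answer(q), human_answer(q))
        else:
            # Deployment episode: the answer goes out into the world.
            used.append((q, oracle_answer(q)))
    return used
```

Because the episodes that get scored are chosen randomly and scored automatically, the oracle has no channel through which tricking a human scorer pays off, which is the point of the protocol.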
1michaelcohen2yGood point. I'm not a huge fan of deferring thinking to similarity metrics (the relative reachability proposal also does this), since this is a complicated thing even in theory, and I suspect a lot turns on how it ends up being defined, but with that caveat aside, this seems reasonable. It can't tractably be calculated exactly, but it only goes into calculating the probability of deferring to the humans. Approximating a theoretically well-founded probability of deferring to a human won't make it unsafe—that will just make it less efficient/capable. For normal neural networks, there isn't an obvious way to extract the entropy of the belief distribution, but if there were, you could approximate the expected information gain as the expected decrease in entropy. Note that the entropy of the belief distribution is not the entropy of the model's distribution over outputs--a model could be very certain that the output is Bernoulli(1/2) distributed, and this would entail an entropy of ~0, not an entropy of 1. I'm not familiar enough with Bayesian neural networks to know if the entropy would be easy to extract. Right. So in this version of an oracle, where it is just outputting a prediction of the output of some future process, I don't see what it offers that normal sequence prediction doesn't offer. 
On our BoMAI discussion, I mentioned a type of oracle I considered that gave answers which it predicted would cause a (boxed) human to do well on a randomly sampled prediction task, and that kind of oracle could potentially be much more powerful than a counterfactual oracle, but I don't really see the value of adding something like a counterfactual oracle to a sequence predictor that makes predictions about a sequence like the one above. It's also possible that this scheme runs into grain of truth problems, and the counterfactual oracle gives outputs that are a lot like what I'm imagining this sequence predictor would, in which case, I don't think sequence prediction…
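The distinction drawn above between the entropy of the belief distribution and the entropy of the model's output distribution, and the expected-information-gain quantity, can be made concrete with a two-model coin toy (both models and their parameters are invented):

```python
from math import log2

def entropy(dist):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * log2(p) for p in dist if p > 0)

# Two invented models of a coin: "fair" predicts heads with prob 0.5,
# "biased" with prob 0.9.
models = {"fair": 0.5, "biased": 0.9}

# The case discussed above: the belief is CERTAIN the coin is fair, so
# belief entropy is ~0 even though the predictive entropy is 1 bit.
posterior = {"fair": 1.0, "biased": 0.0}
belief_entropy = entropy(posterior.values())
p_heads = sum(w * models[m] for m, w in posterior.items())
predictive_entropy = entropy([p_heads, 1 - p_heads])

def expected_info_gain(posterior):
    """Expected decrease in belief entropy from observing one more flip."""
    h0 = entropy(posterior.values())
    gain = 0.0
    for heads in (True, False):
        like = {m: (models[m] if heads else 1 - models[m]) for m in models}
        p_outcome = sum(w * like[m] for m, w in posterior.items())
        if p_outcome == 0:
            continue
        post = {m: w * like[m] / p_outcome for m, w in posterior.items()}
        gain += p_outcome * (h0 - entropy(post.values()))
    return gain
```

With a certain posterior the expected information gain is zero, so the deferral probability would be zero; with an uncertain posterior it is positive, which is the signal that would trigger deferring to the humans.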
2Wei_Dai2ySorry, I think you misunderstood my question about combining human imitations with more general oracles/predictors. What I meant is that you could use general oracles/predictors to build models of the world, which the human imitators could then query or use to test out potential actions. This perhaps lets you overcome the problem of human imitators having worse world models than ~AIXI and narrows the capability gap between them.
1michaelcohen2ySure! The household of people could have another computer inside it that the humans can query, which runs a sequence prediction program trained on other things.
1michaelcohen2yWell that was the question I originally posed here, but I got the sense from commenters that people thought this was easy to pull off and the only question was whether it was safe. So I'm not sure for what N it's the case that N machines running agents doing human-level stuff would be enough to take over the world. I'm pretty sure N = 7 billion is enough. And I think it's plausible that after a discussion about this, I could become confident that N = 1000 was enough. Or maybe the right way to look at it is whether N = 10 could finance a rapidly exponentially growing N. So it seemed worth having a discussion, but I am not yet prepared to defend a low enough N which makes this obviously viable. Forgetting about the possibility of exponentially growing N for a moment, and turning to the d << h question: yeah, I wrote that post too quickly--this is wrong. (I was thinking of the leading team running HSIFAUH needing to go through d+h timesteps to get to a good performance, but they just need to run through d, which makes things easier.) Sorry about that. Let f be the amount of compute that the leading project has divided by the compute that the leading reckless project has. Suppose d > 0. (That's all we need actually). Then it takes the leading reckless team at least f times as long to get to AIXI taking over the world as it takes the leading team to get to SolomonoffPredict predicting a human trying to do X; using similar tractable approximation strategies (whatever those turn out to be), we can expect it to take f times as long for the leading reckless team to get to ~AIXI as it takes the leading team to get to ~SolomonoffPredict. ~HSIFAUH is more complicated with the resource of employing the humans you learn to imitate, but this resource requirement goes down by the time you're deploying it toward useful things. 
Naively (and you might be able to do better than this), you could run f copies of ~HSIFAUH and get to human-level performance on some relevant tasks around the same time the reckless…
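The compute-advantage argument above can be put as back-of-the-envelope arithmetic. Every number below is made up purely to illustrate the inequality, not an estimate of real quantities:

```python
# Normalize the reckless team's compute to 1 unit.
h = 1_000_000  # timesteps to learn to predict human behavior
d = 10_000     # extra timesteps for AIXI to take over the world (d << h)
f = 4          # leading cautious team's compute advantage factor

t_cautious = h / f   # time until the cautious team's human predictor works
t_reckless = h + d   # time until the reckless team's ~AIXI is dangerous
lead_time = t_reckless - t_cautious
```

Since d > 0, t_reckless exceeds f * t_cautious, which is the "at least f times as long" claim in the comment; the open question is how many doublings of N fit inside lead_time.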
2Wei_Dai2yWhy? What are those 7 billion HSIFAUH doing? In another comment you said "If I’m understanding correctly, the concern is that the imitator learns how humans plan before learning what humans want, so then it plans like a human toward the achievement of some inhuman goal. I don’t think this causes an existential catastrophe." But if there are 7 billion HSIFAUH which are collectively capable of taking over the world, how is that not a potential existential catastrophe if they have inhuman values? How? And why would it grow fast enough to get to a large enough N before someone deploys ~AIXI? What do you have in mind here? You do have to solve some safety problems that the reckless team doesn't though, don't you? What do you think the main safety problems are?
1michaelcohen2yWell the number comes from the idea of one-to-one monitoring. Obviously, there's other stuff to do to establish a stable unipolar world order, but monitoring seems like the most resource intensive part, so it's an order of magnitude estimate. Also, realistically, one person could monitor ten people, so that was an order of magnitude estimate with some leeway. I think they can be controlled. Whoever is providing the observations to any instance of HSIFAUH has an arsenal of carrots and sticks (just by having certain observations correlate with actual physical events that occur in the household(s) of humans that generate the data), and I think merely human-level intelligence can be kept in check by someone in a position of power over them. So I think real humans could stay at the wheel over 7 billion instances of HSIFAUH. (I mean, this is teetering at the edge of existential catastrophe already given the existence of simulations of people who might have the experience of being imprisoned, but I think with careful design of the training data, this could be avoided). But in terms of extinction threat to real-world humans, this starts to look more like the problem of maintaining a power structure over a vast number of humans and less like typical AI alignment difficulties; historically, the former seems to be a solvable problem. Right, this analysis gets complicated because you have to analyze the growth rate of N. Given your lead time from having more computing power than the reckless team, one has to analyze how many doubling periods you have time for. I hear Robin Hanson is the person to read regarding questions like this. I don't have any opinions here. But the basic structure regarding "How?" is spend some fraction of computing resources making money, then buy more computing resources with that money. Well, nothing in particular when I wrote that, but thank you for pushing me. 
Maybe only update the posterior at some timesteps (and do it infinitely many times but with diminishing…
2Wei_Dai2yI was asking about this part. I'm not convinced HSIFAUH allows you to do this in a safe way (e.g., without triggering a war that you can't necessarily win). Another complication here is that the people trying to build ~AIXI can probably build an economically useful ~AIXI using less compute than you need for ~HSIFAUH (for jobs that don't need to model humans), and start doing their own doublings. I don't think we've seen a solution that's very robust though. Plus, having to maintain such a power structure starts to become a human safety problem for the real humans (i.e., potentially causes their values to become corrupted).
1michaelcohen2yGood point. Regarding the other two points, my intuition was that a few dozen people could work out the details satisfactorily in a year. If you don't share this intuition, I'll adjust downward on that. But I don't feel up to putting in those man-hours myself. It seems like there are lots of people without a technical background who are interested in helping avoid AI-based X-risk. Do you think this is a promising enough line of reasoning to be worth some people's time?
4Wei_Dai2yI'm pretty skeptical of this, but then I'm pretty skeptical of all current safety/alignment approaches and this doesn't seem especially bad by comparison, so I think it might be worth including in a portfolio approach [https://futureoflife.org/2017/08/17/portfolio-approach-to-ai-safety-research/]. But I'd like to better understand why you think it's promising. Do you have more specific ideas of how ~HSIFAUH can be used to achieve a Singleton and to keep it safe, or just a general feeling that it should be possible?
1michaelcohen2yMy intuitions are mostly that if you can provide significant rewards and punishments basically for free in imitated humans (or more to the point, memories thereof), and if you can control the flow of information throughout the whole apparatus, and you have total surveillance automatically, this sort of thing is a dictator's dream. Especially because it usually costs money to make people happy, and in this case, it hardly does--just a bit of computation time. In a world with all the technology in place that a dictator could want, but also it's pretty cheap to make everyone happy, it strikes me as promising that the system itself could be kept under control.
2Wei_Dai2yWhat does the real human do if trying to train the imitation to write code? Review the last 100 actions to try to figure out what the imitation is currently trying to do, then do what they (the real human) would do if they were trying to do that? How does the human provide a good lesson if they only know a small part of what the human imitation has done so far to build the program? And the imitation is modeling the human trying to figure out what the imitation is trying to do? This seems to get really weird, and I'm not sure if it's what you intend. Also, it seems like the human imitations will keep diverging from real humans quickly (so the real humans will keep getting queried) because they can't predict ahead of time which inputs real humans will see and which they won't.
1michaelcohen2yRoughly. They could search for the observation which got the project started. It could all be well commented and documented. What the imitation was trying to do. So there isn't any circular weirdness. I don't know what else seems particularly weird. People deal with "I know that you know that I know..." stuff routinely without even thinking about it. If you're talking about what parts of the interaction history the humans will look at when they get called in, it can predict this as well as anything else. If you're talking about which timesteps humans will get called in for, predicting that ahead of time doesn't have any relevance to predicting a human's behavior, unless the humans are attempting to predict this, and humans could absolutely do this.
2Wei_Dai2yI guess it's weird (counterintuitive and hard to think about) compared to "The imitation is modeling the human trying to write a good program." which is what I initially thought the situation would be. In that case, the human doesn't have to think about the imitation and can just think about how to write a good program. The situation with HSIFAUH seems a lot more complicated. Thinking about it more... In the limit of perfect imitation, "the imitation is modeling the human trying to write a good program" converges to "the human trying to write a good program." In the limit of perfect imitation, HSIFAUH converges to "a human trying to write a good program while suffering amnesia between time steps (but can review previous actions and write down notes)." Correct? HSIFAUH could keep memories between time steps, but won't, because it's modeling a human who wouldn't have such memories. (I think I was confused in part because you said that performance wouldn't be affected. It now seems to me that performance would be affected because a human who can't keep memories but can only keep notes can't program as well as a normal human.) (Thinking about imperfect imitation seems even harder and I'll try that more after you confirm the above.) One thing still confuses me. Whenever the real human does get called in to provide training data, the real human now has that memory. But the (most probable) models don't know that, so the predictions for the next round are going to be wrong (compared to what the real human would do if called in) because it's going to be based on the real human not having that memory. (I think this is what I meant when I said "it seems like the human imitations will keep diverging from real humans quickly".) The Bayesian update wouldn't cause the models to know that the real human now has that memory, because suppose the real human does something the top models correctly predicted, then the update wouldn't do much. 
So how does this problem get solved, or a…
1michaelcohen2yCorrect. I'll just add that a single action can be a large chunk of the program. It doesn't have to be (god forbid) character by character. It'll have some probability distribution over the contents of the humans' memories. This will depend on which timesteps they actually participated in, so it'll have a probability distribution over that. I don't think that's really a problem though. If humans are taking over one time in a thousand, then it'll think (more or less) there's a 1/1000 chance that they'll remember the last action. (Actually, it can do better by learning that humans take over in confusing situations, but that's not really relevant here). That would work too. With the edit that the model may as well be allowed to depend on the whole history of which actions were human-selected, not just whether the last one was.
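Putting the pieces of this thread together, one timestep of the defer-to-humans-when-uncertain loop might look like the following sketch. All names and the uncertainty threshold are invented; a real implementation would use expected information gain rather than a crude uncertainty proxy:

```python
import random

def hsifauh_step(posterior, models, history, uncertainty, threshold,
                 ask_human, rng):
    """One timestep: if the belief over human-policy models is too
    uncertain, defer to the real humans and do a Bayes update on their
    action; otherwise act using a model sampled from the posterior
    (without updating the posterior)."""
    if uncertainty(posterior) > threshold:
        action = ask_human(history)
        # Bayes update on the observed human action.
        post = {m: w * models[m](history).get(action, 0.0)
                for m, w in posterior.items()}
        z = sum(post.values()) or 1.0
        posterior = {m: w / z for m, w in post.items()}
    else:
        names = list(posterior)
        chosen = rng.choices(names, weights=[posterior[n] for n in names])[0]
        dist = models[chosen](history)
        action = rng.choices(list(dist), weights=list(dist.values()))[0]
    history.append(action)
    return posterior, action
```

Each model here maps the whole interaction history to an action distribution, so (per the last sentence above) a model can condition on which past actions were human-selected if that information is folded into the history.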
6Wei_Dai2yActually before we keep going with our discussions, it seems to make sense to double check that your proposal is actually the most promising proposal (for human imitation) to discuss. Can you please take a look at the list of 10 links related to human imitations [https://www.lesswrong.com/posts/LTFaD96D9kWuTibWr/just-imitate-humans#kgZxwD3Wm96tNDKxu] that I collected (as well as any relevant articles those pages further link to), and perhaps write a post on why your proposal is better than the previous ones, why you made the design choices that you did, and how it addresses or avoids the existing criticisms of human imitations? ETA: I'm also happy to discuss with you your views of past proposals/criticisms here in the comments or through another channel if you prefer to do that before writing up a post.
3michaelcohen2ySorry to put this on hold, but I'll come back to this conversation after the AAAI deadline on September 5.
1michaelcohen2yCommenting here [https://www.lesswrong.com/posts/LTFaD96D9kWuTibWr/just-imitate-humans#fDMmHZAoKvbrTdgyv] .
2Wei_Dai2yBut there's a model/TM that thinks there's a 100% chance that the human will remember the last action (because that's hard coded into the TM) and that model will do really well in the next update. So we know any time a human steps in, no matter when, it will cause a big update (during the next update) because it'll raise models like this from obscurity to prominence. If the AI "knows" this, it will call in the human for every time step, but maybe it doesn't "know" this? (I haven't thought this through formally and will leave it to you.) I was assuming the models would save that input on their work tapes for future use. In any case, I think I understand your proposal well enough now that we can go back to some of the other questions.
2Wei_Dai2yThe smarter algorithm, since it's the product of lots of compute/data, might be extremely convoluted and hard to understand by humans. How can we then know that the sequence prediction after being trained on human actions, will be as safe as a human, e.g., will handle distributional shifts the same way that a human would?
1michaelcohen2yOkay, if we make some sort of "algorithm soup", where we're just stirring some black box pot until sequence prediction appears to emerge, then I agree with you, we shouldn't touch it with a 10-foot pole. I think evolutionary algorithms could be described like this. If anything interesting ever comes out of an evolutionary process, we're doomed. I was imagining something slightly different when I was thinking about generating a smart algorithm from an inefficient one. I think you're claiming that this sort of messy process will beat out any thoughtful design with formal guarantees about its behavior. It seems like you also agree with me that we can't expect such an unpredictable process to make anything safe. Taken together, that would appear to make AI Safety a completely hopeless task. Is this a general argument against every AI Safety proposal?
2Wei_Dai2yPlease explain what you were imagining instead? I think this is a good description of what has been happening so far, in image classification, language modeling, game playing, etc. Do you agree? Yeah, I think it's one reason for my general pessimism regarding AI safety. I guess I'm hoping there's a reason why doing human imitations is different (i.e., doesn't fall under this argument). I'm also open to being convinced that the argument is just wrong. I think my other argument (in the parallel thread) might be more specific to human imitations.
3michaelcohen2yI'll send this in a direct message. It isn't groundbreaking or anything, but it is a capabilities idea. Hm, I guess there's a spectrum of how messy things are, both in how wide a net is cast, how wide the solution space is for the optimized criterion, and how much pressure there is toward the criterion you want and toward resource-bounded solutions. In the extreme case where you simulate evolution of artificial agents, you're not even optimizing for what you want (you don't care if an agent is good at replicating), there are a huge number of policies that accomplish this well, and in an extreme version of this, there isn't much pressure to spawn resource-bounded solutions. In current systems, things are decently less messy. The solution space is much smaller for supervised learning than for reinforcement learning/agent design, because it has to output something that matches the training distribution. I worry I'm butchering the term solution space when I make this distinction, so let me try to be more precise. What I mean by solution space here is the size of the set of things you see when you look at a solution. For an evolved policy, you see the policy, but you don't have to look at the internals. In other terms, the policy affects the world, but the internals don't. If you're looking at an evolved sequence predictor or function approximator, the output affects the world, but again, the internals don't. (I suppose that's what "internals" means). So from the set of solutions to the problem, the size of the set of the ways those solutions affect the world is large for evolved agents (because the policies affect the world, and they have great diversity) and small for evolved sequence predictors (because only the predictions affect the world, which have to be close to the truth). When the solution space is smaller, the well-defined objective matters more than the chaos of the initial search, so things seem less "messy" to me. 
So actually there's a reason why sequence prediction…
2Wei_Dai2yI feel like there are probably some good insights in this paragraph but I'm having trouble understanding them except in a vague way. "What I mean by solution space here is the size of the set of things you see when you look at a solution." is the first thing that confuses me. If it seems worthwhile to you, maybe think about how to explain it more clearly and write a post on the topic to get your ideas into wider circulation? It does seem like supervised learning / sequence prediction is safer than reinforcement learning / AGI (but perhaps less capable?). I'd like to better understand your thoughts on this. I wonder if it's safe to rely on the fact that some optimization technique only finds local minima. What if future advances allow people to do better than this? How will "safe" techniques that only find local minima compete? This might be worth writing up into a full post as well, as I'd like to better understand your reasons for optimism. I see a number of reasons not to do this:

1. The general argument might not be fully general, so we should re-consider it against every new proposal to see if it still applies.
2. The general argument might have a general flaw, so we should re-consider it once in a while to see if it can be invalidated.
3. If the argument is indeed false, it seems like we'll never find out why it's false if we just condition on its falsity, and knowing why might be really useful to guide future research.
4. If technical researchers rarely talk about the general argument among themselves, it will look to policy/strategy people like we're not concerned about the argument, and as a result they will be more optimistic about AI safety than they should be.
5. If the argument is actually fully general and water-tight, then it implies that resources going into technical AI safety might be better spent elsewhere, and even individual technical researchers might want to reconsider how to spend their time.

Waiting…
1michaelcohen2yThose all seem reasonable. 3 was one I considered, and this is maybe a bit pedantic, but if you're conditioning on something being false, it's still worthwhile to figure out how it's false and use that information for other purposes. The key relevance of conditioning on its being false is what you do in other areas while that analysis is pending. Regarding some other points, I didn't mean to shut down discussion on this issue, only highlight its possible independence from this idea. I'll do some more thinking about the couple of posts you're requesting. Thanks for your interest. At the very least, if the first one doesn't become its own post, I'll respond more fully here.

I think this is an idea worth exploring. The biggest problem I have with it right now is that it seems like current ML methods would get us mesa-optimizers.

To spell it out a bit: At first the policy would be a jumble of heuristics that does decently well. Eventually, though, it would have to be something more like an agent, to mimic humans. But the first agent that forms wouldn't also be the last, the perfectly accurate one. Rather, it would be somewhat accurate. Thenceforth further training could operate on the AI's values and heuristics to make it more human-like... OR it could operate on the AI's values and heuristics to make it more rational and smart so that it can predict and then mimic human behavior better. And the latter seems more likely to me.

So what we'd end up with is something that is similar to a human, except with values that are more random and alien, and maybe also more rational and smart. This seems like exactly the sort of thing we are trying to avoid.

1michaelcohen2yMaybe you mean the methods you expect we will use? I don't think current ML methods make mesa-optimizers. I know you're not disputing this, but I think it's worth having this formal result in the background: for a maximum a posteriori predictor that assigns positive prior probability to the truth, for all ε, for sufficiently large T, the predictor will be within ε of the truth when assigning probability to all events (even regarding events well into the future). But yes, I think mesa-optimizers are something to keep in mind, especially if we use good heuristics to pick a model to see if it is maximum a posteriori (since in reality, we wouldn't be comparing all possible models). Side note: I was just thinking about what a mesa-optimizer designed to be robust to gradient updates might look like. Could it try to ensure that small changes to its "values" would be relatively inconsequential to its behavior? For the decision at every timestep between "blend in" and "treacherous turn", it seems like gradient updates would shift its probability toward "blend in". Could it avoid this? Also, compared to my fears about other areas of alignment, I feel pretty decent about the possibility of weeding out mesa-optimizers by biasing toward fast or memory-lite functions.
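The convergence claim above for a maximum a posteriori predictor (when the truth has positive prior probability) can be checked in a toy setting. The Bernoulli grid, true bias, and sample count below are invented for illustration:

```python
import random
from math import log

rng = random.Random(0)
truth = 0.7                            # true probability of a 1
grid = [i / 10 for i in range(1, 10)]  # candidate models; includes the truth

def map_model(data):
    """Grid model with the highest posterior under a uniform prior
    (equivalently, the highest likelihood)."""
    def loglik(p):
        return sum(log(p) if x else log(1 - p) for x in data)
    return max(grid, key=loglik)

data = [rng.random() < truth for _ in range(2000)]
estimate = map_model(data)
```

With enough data the MAP model's predictions pin down the true bias, and the post's suggestion of freezing the MAP policy then makes it cheap to run, since only one hypothesis remains in play.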
3Daniel Kokotajlo2yI did mean current ML methods, I think. (Maybe we mean different things by that term.) Why wouldn't they make mesa-optimizers, if they were scaled up enough to successfully imitate humans well enough to make AGI? For your note, I'm not sure I understand the example. It seems to me that a successfully blending-in/deceptively-aligned mesa-optimizer would, with each gradient update, get smarter but its values would not change--I believe the mesa-alignment paper calls this "value crystallization." The reason is that changing its values would not affect its behavior, since its behavior is based primarily on its epistemology: it correctly guesses the base objective and then attempts to optimize for it.
1michaelcohen2yI think we did. I agree current methods scaled up could make mesa-optimizers. See my discussion with Wei Dai here [https://www.lesswrong.com/posts/LTFaD96D9kWuTibWr/just-imitate-humans#RJ882DXoMkSJkTLwn] for more of my take on this. I wasn't trying to suggest the answer was no. As you suggest, it seems like the answer is yes, but it would have to be very careful about this. FWIW, I think it would have more of a challenge preserving any inclination to eventually turn treacherous, but I'm mostly musing here.

Imitating humans is both hard and dangerous.

Let's talk about dangerous. Humans are reasonably benign in the situation where they do not have a lot of power or control compared to others. Once you look into unusual cases, people quickly become unaligned with other people, or even with the whole of humanity. Same applies to groups of people who gain power. I am guessing your intention is to try to imitate humans in the situations where they are mostly harmless, and then extrapolate this imitation by ramping up the computational power to make decisions...

Could enough human-imitating artificial agents (running much faster than people) prevent unfriendly AGI from being made?

This seems very related to the question of whether uploads would be safer than some other kind of AGI. Offhand, I remember a comment from Eliezer suggesting that he thought that would be safer (but that uploads would be unlikely to happen first).

Not sure how common that view is though.

Acquiring data: put a group of people in a house with a computer. Show them things (images, videos, audio files, etc.) and give them a chance to respond at
... (read more)
Charlie Steiner · 2y · 3 points
Yeah, I agree that this might secretly be the same as a question about uploads. If you're only trying to copy human behavior in a coarse-grained way, you immediately run into a huge generalization problem, because your human-imitation is going to have to make plans where it can copy itself, think faster as it adds more computing power, can't get a hug, etc., and this is all outside of the domain it was trained on. So if people aren't being very specific about human imitations, I kind of assume they're really talking and thinking about basically-uploads (i.e. imitations that generalize to this novel context by having a model of human cognition that attempts to be realistic, not merely predictive).
michaelcohen · 2y · 3 points
That's why it imitates a household of people.
michaelcohen · 2y · 3 points
Yes, although we could start now. Also, I just wanted to give the simplest possible proposal. More reasonably, data like this could probably be gathered in many ways.

I don't know if it's come up in the comments, but naive (i.e., not cognitive-architecture-informed) approaches seem fairly likely (~40%? OTTMH) to produce mesa-optimizer-y things, to me; see: https://www.lesswrong.com/posts/whRPLBZNQm3JD5Zv8/imitation-learning-considered-unsafe

Otherwise, yes, seems great, esp. if we just imitate AI safety researchers and let them go on to solve all the safety problems.

Humans learn their morals through complex interactions with their environment. It's unlikely that the AGIs you herd together will learn their morals in a similar way, as you can't expose them to the environment in the same way.

Viliam · 2y · 4 points
Also, there is an underlying genetically determined structure upon which the morals are built by learning. An AGI without that structure [https://wiki.lesswrong.com/wiki/Detached_lever_fallacy], even if exposed to the same situations, would not develop the same morals.

The first question is whether you have enough information to locate human behavior. The concept of optimization is fairly straightforward, and a learner could get a rough estimate of our intelligence by watching humans try to solve some puzzle. In other words, the amount of data needed to get an optimizer is small. The amount of data needed to totally describe every detail of human values is large. This means that a random hypothesis based on a small amount of data will be an optimizer with non-human goals.

For example, maybe the human trainers value having real... (read more)
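The point about how much data it takes to locate a hypothesis can be made concrete with a toy counting argument (hypothetical numbers: policies here are just truth tables over 8 possible observations, so there are 2**8 = 256 candidates). Each observed action halves the consistent set, and with few observations many non-human policies remain in play, so the prior dominates.

```python
from itertools import product

# All deterministic binary policies on 8 possible observations.
policies = list(product([0, 1], repeat=8))
true_policy = policies[170]  # an arbitrary stand-in for the "human" policy

# After observing the policy's action on the first n observations,
# count how many candidate policies are still consistent with the data.
counts = []
for n in [0, 2, 4, 8]:
    consistent = [p for p in policies
                  if all(p[i] == true_policy[i] for i in range(n))]
    counts.append(len(consistent))
    print(n, len(consistent))  # the consistent set halves with each observed bit
```

With n = 8 observations only the true policy survives; with fewer, hundreds of bits of value-relevant detail remain unpinned, which is the worry about small data selecting a simple optimizer rather than human values.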