Vika

Research scientist at DeepMind working on AI safety, and cofounder of the Future of Life Institute. Website and blog: vkrakovna.wordpress.com

Vika's Comments

Specification gaming examples in AI

I've been pleasantly surprised by how much this resource has caught on in terms of people using it and referring to it (definitely more than I expected when I made it). There were 30 examples on the list when it was posted in April 2018, and 20 new examples have been contributed through the form since then. I think the list has several properties that contributed to its wide adoption: it's fun, standardized, up-to-date, comprehensive, and collaborative.

Some of the appeal is that it's fun to read about AI cheating at tasks in unexpected ways (I've seen a lot of people post on Twitter about their favorite examples from the list). The standardized spreadsheet format seems easier to refer to as well. I think the crowdsourcing aspect is also helpful - this helps keep it current and comprehensive, and people can feel some ownership of the list since they can personally contribute to it. My overall takeaway from this is that safety outreach tools are more likely to be impactful if they are fun and easy for people to engage with.

This list had a surprising amount of impact relative to how little work it took me to put it together and maintain it. The hard work of finding and summarizing the examples was done by the people putting together the lists that the master list draws on (Gwern, Lehman, Olsson, Irpan, and others), as well as the people who submit examples through the form. What I do is put them together in a common format and clarify and/or shorten some of the summaries. I also curate the examples to determine whether they fit the definition of specification gaming (as opposed to simply a surprising behavior or solution). Overall, I've probably spent around 10 hours so far on creating and maintaining the list, which is not very much. This makes me wonder if there is other low hanging fruit in the safety resources space that we haven't picked yet.

I have been using it both as an outreach and research tool. On the outreach side, the resource has been helpful for making the argument that safety problems are hard and need general solutions, by making salient just how many ways things could go wrong. When presented with an individual example of specification gaming, people often have a default reaction of "well, you can just close the loophole like this". It's easier to see that this approach does not scale when presented with 50 examples of gaming behaviors. Any given loophole can seem obvious in hindsight, but 50 loopholes are much less so. I've found this useful for communicating a sense of the difficulty and importance of Goodhart's Law.

On the research side, the examples have been helpful for trying to clarify the distinction between reward gaming and tampering problems. Reward gaming happens when the reward function is designed incorrectly (so the agent is gaming the design specification), while reward tampering happens when the reward function is implemented incorrectly or embedded in the environment (and so can be thought of as gaming the implementation specification). The boat race example is reward gaming, since the score function was defined incorrectly, while the Qbert agent finding a bug that makes the platforms blink and gives the agent millions of points is reward tampering. We don't currently have any real examples of the agent gaining control of the reward channel (probably because the action spaces of present-day agents are too limited), which seems qualitatively different from the numerous examples of agents exploiting implementation bugs.
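
As a rough illustration of this distinction, here is a toy sketch (the environment, state fields, and reward values are made up for illustration, not drawn from the list): the difference is where the error lives, in the reward function as designed or in the code that implements the environment and reward channel.

```python
# Toy sketch: design-specification error vs implementation-specification error.

def designed_reward(state):
    # Reward gaming: the *design* is wrong. The designer rewarded a proxy
    # (e.g. checkpoint score in the boat race) instead of the intended goal
    # (finishing the race), so the reward as written is already misspecified.
    return state["checkpoint_score"]  # proxy for "finished_race"

def implemented_reward(state):
    # Reward tampering: the design is fine, but the implementation (the
    # environment / reward channel) has a bug the agent can trigger, like
    # the Qbert platform glitch that pays out millions of points.
    if state.get("triggers_platform_bug"):  # exploitable implementation bug
        return 1_000_000
    return 25 if state.get("landed_on_platform") else 0
```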

I'm curious what people find the list useful for - as a safety outreach tool, a research tool or intuition pump, or something else? I'd also be interested in suggestions for improving the list (formatting, categorizing, etc). Thanks everyone who has contributed to the resource so far!

Specification gaming examples in AI

Thanks Ben! I'm happy that the list has been a useful resource. A lot of credit goes to Gwern, who collected many examples that went into the specification gaming list: https://www.gwern.net/Tanks#alternative-examples.

Thoughts on "Human-Compatible"

Yes, decoupling seems to address a broad class of incentive problems in safety, which includes the shutdown problem and various forms of tampering / wireheading. Other examples of decoupling include causal counterfactual agents and counterfactual reward modeling.

Classifying specification problems as variants of Goodhart's Law

Thanks Evan, glad you found this useful! The connection with the inner/outer alignment distinction seems interesting. I agree that the inner alignment problem falls in the design-emergent gap. Not sure about the outer alignment problem matching the ideal-design gap though, since I would classify tampering problems as outer alignment problems, caused by flaws in the implementation of the base objective.

Reversible changes: consider a bucket of water

I think the discussion of reversibility and molecules is a distraction from the core of Stuart's objection. I think he is saying that a value-agnostic impact measure cannot distinguish between the cases where the water in the bucket is or isn't valuable (e.g. whether it has sentimental value to someone).

If AUP is not value-agnostic, it is using human preference information to fill in the "what we want" part of your definition of impact, i.e. to define the auxiliary utility functions. In this case I would expect you and Stuart to be in agreement.

If AUP is value-agnostic, it is not using human preference information. Then I don't see how the state representation/ontology invariance property helps to distinguish between the two cases. As discussed in this comment, state representation invariance holds over all representations that are consistent with the true human reward function. Thus, you can distinguish the two cases as long as you are using one of these reward-consistent representations. However, since a value-agnostic impact measure does not have access to the true reward function, you cannot guarantee that the state representation you are using to compute AUP is in the reward-consistent set. Then, you could fail to distinguish between the two cases, giving the same penalty for kicking a more or less valuable bucket.
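
To make this concrete, here is a minimal sketch (the features, the random auxiliary utilities, and the simplified penalty are all invented for illustration; AUP's actual penalty is based on differences in attainable Q-values): if the representation used to compute the penalty drops the feature that carries the value, the penalty for kicking either bucket comes out identical.

```python
import numpy as np

# Two buckets that differ only in a feature the chosen representation drops.
valuable_bucket = {"water_spilled": 1.0, "sentimental_value": 1.0}
ordinary_bucket = {"water_spilled": 1.0, "sentimental_value": 0.0}

def value_agnostic_features(state):
    # A representation chosen without access to the true reward; it omits
    # sentimental_value, so it may lie outside the reward-consistent set.
    return np.array([state["water_spilled"]])

rng = np.random.default_rng(0)
aux_utilities = rng.normal(size=(10, 1))  # random auxiliary utilities over features

def impact_penalty(state_after_kick):
    # Simplified penalty: average magnitude of auxiliary utility changes.
    return float(np.abs(aux_utilities @ value_agnostic_features(state_after_kick)).mean())

# Identical penalties: the representation cannot tell the two buckets apart.
print(impact_penalty(valuable_bucket), impact_penalty(ordinary_bucket))
```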

Reversible changes: consider a bucket of water

Thanks Stuart for the example. There are two ways to distinguish the cases where the agent should and shouldn't kick the bucket:

  • Relative value of the bucket contents compared to the goal is represented by the weight on the impact penalty relative to the reward. For example, if the agent's goal is to put out a fire on the other end of the pool, you would set a low weight on the impact penalty, which enables the agent to take irreversible actions in order to achieve the goal. This is why impact measures use a reward-penalty tradeoff rather than a constraint on irreversible actions (see the sketch after this list).
  • Absolute value of the bucket contents can be represented by adding weights on the reachable states or attainable utility functions. This doesn't necessarily require defining human preferences or providing human input, since human preferences can be inferred from the starting state. I generally think that impact measures don't have to be value-agnostic, as long as they require less input about human preferences than the general value learning problem.
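
Here is a minimal sketch of how these two knobs fit together, as referenced in the first point above (my notation; AUP's actual penalty is based on Q-value differences against a no-op baseline, which I gloss over here).

```python
# Hedged sketch of the reward-penalty tradeoff and the two kinds of weights above.

def shaped_reward(task_reward, aux_value_changes, impact_weight, aux_weights=None):
    """Combine the task reward with a weighted impact penalty.

    task_reward       -- reward for the actual goal (e.g. putting out the fire)
    aux_value_changes -- change in each auxiliary utility caused by the action,
                         roughly |Q_aux(s, a) - Q_aux(s, noop)|
    impact_weight     -- the relative-value knob: a low weight lets the agent
                         take irreversible actions when the goal matters enough
    aux_weights       -- the absolute-value knob: weights on auxiliary utilities
                         (or reachable states), possibly inferred from the starting state
    """
    if aux_weights is None:
        aux_weights = [1.0] * len(aux_value_changes)
    penalty = sum(w * c for w, c in zip(aux_weights, aux_value_changes))
    return task_reward - impact_weight * penalty
```

For example, shaped_reward(10.0, [0.3, 0.5], impact_weight=0.1) barely dents the task reward, while impact_weight=50.0 makes the penalty dominate and the irreversible action unattractive.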

Stable Pointers to Value: An Agent Embedded in Its Own Utility Function

Thanks Abram for this sequence - for some reason I wasn't aware of it until someone linked to it recently.

Would you consider the observation tampering (delusion box) problem as part of the easy problem, the hard problem, or a different problem altogether? I think it must be a different problem, since it is not addressed by observation-utility or approval-direction.

The AI Timelines Scam

Definitely agree that the AI community is not biased towards short timelines. Long timelines are the dominant view, while the short timelines view is associated with hype. Many researchers are concerned about the field losing credibility (and funding) if the hype bubble bursts, and this is especially true for those who experienced the AI winters. They see the long timelines view as appropriately skeptical and more scientifically respectable.

Some examples of statements from high-profile AI researchers that AGI is far away:

Geoffrey Hinton: https://venturebeat.com/2018/12/17/geoffrey-hinton-and-demis-hassabis-agi-is-nowhere-close-to-being-a-reality/

Yann LeCun: https://www.facebook.com/yann.lecun/posts/10153426023477143 https://futurism.com/conscious-ai-decades-away https://www.facebook.com/yann.lecun/posts/10153368458167143

Yoshua Bengio: https://www.lesswrong.com/posts/4qPy8jwRxLg9qWLiG/yoshua-bengio-on-ai-progress-hype-and-risks

Rodney Brooks: https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/ https://rodneybrooks.com/agi-has-been-delayed/

TAISU - Technical AI Safety Unconference

Janos and I are coming for the weekend part of the unconference.

Risks from Learned Optimization: Introduction

I'm confused about the difference between a mesa-optimizer and an emergent subagent. A "particular type of algorithm that the base optimizer might find to solve its task" or a "neural network that is implementing some optimization process" inside the base optimizer seem like emergent subagents to me. What is your definition of an emergent subagent?
