Vika

Victoria Krakovna. Research scientist at DeepMind working on AI safety, and cofounder of the Future of Life Institute. Website and blog: vkrakovna.wordpress.com

Sequences

DeepMind Alignment Team on Threat Models and Plans

Comments (sorted by newest)

Safety researchers should take a public stance
Vika · 1mo

Like Leo, I think racing to AGI is bad and that it would be good to coordinate not to do that. I support proposals for AI regulation that would make this easier, and I have signed various open letters to this effect, on AI red lines, the AI Treaty, SB1047, and others.

I'm pretty uncertain whether pushing for an AI pause now is an effective way to achieve this, and I think it's quite plausibly better to pause later rather than now. In the next few years we will have more solid evidence of misalignment, and we will be able to make better use of a pause period (which is likely to be finite), e.g. with automated alignment researchers. I don't think calling for a pause/ban now is a costless action - early calls for a pause risk crying wolf and using up political will that could support a pause later. I signed the FLI pause letter in 2023, but looking back it seems a bit premature. A conditional pause in the future seems much easier to get adopted than a hard pause now.

I agree with everything Neel said in his top-level comment, and I'm puzzled by the number of disagreement votes on it. 

An epistemic advantage of working as a moderate
Vika · 2mo

This is a significant effect in general, but I'm not sure how much epistemic cost it creates in this situation. Moderates working with AI companies mostly interact with safety researchers, who are not generally doing bad things. There may be a weaker second-order effect where the safety researchers at labs have some epistemic distortion from cooperating with capabilities efforts, which can in turn influence the external people collaborating with them.

Neel Nanda's Shortform
Vika · 3mo

Thanks for this helpful framework - it's also useful for people who aren't submitting rebuttals for the first time :). Sadly, NeurIPS and ICML no longer allow a top-level comment (for silly technical reasons).

Moving on from community living
Vika · 2y

Thanks Gunnar, those sound like reasonable guidelines!

  • The common space was still usable by other housemates, but it felt a bit cramped, and I felt more internal pressure to keep it tidy for others to use (while in my own space I feel more comfortable leaving it messy for longer). Our housemates were very tolerant of having kid stuff everywhere, but it still seemed suboptimal. 
  • The fridge, laundry area and outdoor garbage bins were the most overloaded in our case, while the shed and attic were spacious enough and in low enough demand that they weren't an issue. Gathering everyone for a decluttering spree is a noble effort but a bit hard to coordinate. I found it easier to declutter by putting away all of one type of object (e.g. shoes) and having people put theirs back (to identify things that didn't belong to anyone). The fridge was often overfull despite regular decluttering - I think it was just too small for the number of people we had, and getting a second fridge would have taken up extra space.
  • I would add general disruption of child routines in addition to sleep (though sleep is the most important routine). Surprisingly, it was not as much of an issue the other way around, e.g. the baby was quiet enough not to bother the housemate next door at night. The 3 year old running around the living room in the morning was a bit noisy for the people downstairs though.
Moving on from community living
Vika · 2y

Yeah, living in a group house was important for our mental well-being as well, especially during the pandemic and parental leaves. I think the benefits of the social environment decreased somewhat because we were often occupied with the kids and had less time to socialize. It was still pretty good though - if Deep End had been close enough to schools we like, we would probably have stayed and tried to make it work (though this would likely have involved taking over more of the house over time). Our new place contributes to mental well-being by being much closer to nature (while still a reasonable bike commute from the office).

Moving on from community living
Vika · 2y

I would potentially be interested, if we knew the other people well. I find that, as a parent, I'm less willing to take risks by moving in with people I don't know that well, because the stress and uncertainty associated with things not working out are more costly.

Space requirements would likely be the biggest difficulty though, as you pointed out. A family with 2 kids probably needs at least 3 rooms, so two such families together would need a 6 bedroom house. This is hard to find, especially combined with other constraints like proximity to schools, commute distances, etc. It's a lot easier to live near other families than to share a living space.

More Is Different for AI
Vika · 2y · Review for 2022 Review

I really enjoyed this sequence - it provides useful guidance on how to combine different sources of knowledge and intuitions to reason about future AI systems, and it's a great resource on how to think about alignment for an ML audience.

Counterarguments to the basic AI x-risk case
Vika · 2y · Review for 2022 Review

I think this is still one of the most comprehensive and clear resources on counterpoints to x-risk arguments. I have referred back to this post and pointed people to it a number of times. The most useful parts of the post for me were the outline of the basic x-risk case and section A on counterarguments to goal-directedness (this was particularly helpful for my thinking about threat models and understanding agency).

Refining the Sharp Left Turn threat model, part 1: claims and mechanisms
Vika · 2y · Review for 2022 Review

I still endorse the breakdown of "sharp left turn" claims in this post. Writing this helped me understand the threat model better (or at all) and make it a bit more concrete.

This post could be improved by explicitly relating these claims to the "consensus" threat model summarized in Clarifying AI X-risk. Overall, SLT seems like a special case of that threat model, since the consensus model makes only a subset of the SLT claims:

  • Claim 1 (capabilities generalize far) and Claim 3 (humans fail to intervene), but not Claims 1a/b (simultaneous / discontinuous generalization) or Claim 2 (alignment techniques stop working). 
  • It probably relies on some weaker version of Claim 2 (alignment techniques failing to apply to more powerful systems in some way). This seems necessary for deceptive alignment to arise, e.g. if our interpretability techniques fail to detect deceptive reasoning. However, I expect that most ways this could happen would not be due to the alignment techniques being fundamentally inadequate for the capability transition to more powerful systems (the strong version of Claim 2 used in SLT).
Clarifying AI X-risk
Vika · 2y · Review for 2022 Review

I continue to endorse this categorization of threat models and the consensus threat model. I often refer people to this post and use the "SG + GMG → MAPS" framing in my alignment overview talks. I remain uncertain about the likelihood of the deceptive alignment part of the threat model (in particular the requisite level of goal-directedness) arising in the LLM paradigm, relative to other mechanisms for AI risk. 

In terms of adding new threat models to the categorization, the main one that comes to mind is Deep Deceptiveness (let's call it Soares2), which I would summarize as "non-deceptiveness is anti-natural / hard to disentangle from general capabilities". I would probably put this under "SG → MAPS", assuming an irreducible kind of specification gaming where it's very difficult (or impossible) to distinguish deceptiveness from non-deceptiveness (including through feedback on the model's reasoning process). Though it could also be GMG, where the "non-deceptiveness" concept is incoherent and thus very difficult to generalize well. 

Wikitag Contributions

Impact Regularization (3 years ago, +41/-31)
Posts

  • Access to agent CoT makes monitors vulnerable to persuasion (18 karma, 3mo, 0 comments)
  • Evaluating and monitoring for AI scheming (57 karma, 4mo, 9 comments)
  • A short course on AGI safety from the GDM Alignment team (104 karma, 8mo, 2 comments)
  • Moving on from community living (64 karma, 2y, 7 comments)
  • When discussing AI risks, talk about capabilities, not intelligence (124 karma, 2y, 7 comments)
  • [Linkpost] Some high-level thoughts on the DeepMind alignment team's strategy (128 karma, 3y, 13 comments)
  • Power-seeking can be probable and predictive for trained agents (56 karma, 3y, 22 comments)
  • Refining the Sharp Left Turn threat model, part 2: applying alignment techniques (39 karma, 3y, 9 comments)
  • Threat Model Literature Review (79 karma, 3y, 4 comments)
  • Clarifying AI X-risk (127 karma, 3y, 24 comments)