This is a linkpost for https://www.latacora.com/

Am I affiliated with Latacora?

No, not in any way.

What is Latacora?

From their own webpage:

Latacora does just one kind of engagement: we join your engineering team virtually, create a security practice, and run it. As your company matures, we'll make sure that practice also matures. When it makes sense for you to bring some (or all!) of our security capabilities in-house, we'll help you make those hires.

Why might some AI Safety organizations be interested in Latacora?

Security is hard. Only amateurs roll their own security. But identifying good security hires and practices is hard if you don't know what they are or what they look like. This is similar to how it's difficult to hire for taste if you don't have taste. In particular, I'd expect top AI Safety hires to be great at AI Safety, but not necessarily great at security practices. And even if they are great at security practices, it's probably not their comparative advantage.

Recently, Yudkowsky has been talking about third countries stealing dangerous AI models and running them as a consideration feeding into his own pessimism. This doesn't seem like a hypothetical concern. Latacora seems like it would allow one to solve or mitigate this problem by throwing money at it. 

How did you hear about Latacora? Why do you think they're good?

I don't remember how I heard about them. They were probably mentioned by someone grungier than me in some Linux forum, and I've been following their blog for a while. I think they're good for the same reasons I think Zvi is good: I read their stuff and they seem to say sensible things according to my judgment.

Are you suggesting that AI Safety organizations literally hire these people?

Yes. 

Comments

Some more thoughts:

1. It might be the case that these organizations already have security procedures, but I'd expect those procedures to be somewhat ad hoc, particularly for the more recently formed organizations. If they're not, I'll just be pleasantly surprised. I could also imagine Latacora having more optimization power along the security dimension than, say, MIRI.

2. I imagine that explaining the security profile to them might be fun.

3. I can imagine that as Latacora has grown larger, their proportion of junior to senior people might have changed. It seems to me that AI Safety orgs would want to bid for the more senior people, rather than for the more recent hires.

4. I imagine that Latacora's job might be greatly facilitated by:

  • AI Safety orgs not needing to abide by bureaucratic requirements (such as security certifications)
  • AI Safety orgs not literally expecting AGI this year, thus leaving time to prepare (unlike when working against bureaucratic requirements or business deadlines)

5. I also imagine that asking for a security team such as Latacora to be integrated into, e.g., DeepMind, is a nice specific ask which people with short timelines might want to push for.

It seems like MIRI already had a very strict security policy that significantly inhibited their ability to do their job. By hiring professionals like Latacora, MIRI might not only become more secure but also get helpful advice about which practices are creating an unnecessary burden.

I had similar thoughts.

DeepMind specifically has Google's security people on call, which is to say the best that money can buy. For others, well, AI Safety Needs Great Engineers and Anthropic is hiring, including for security.

(opinions my own, you know the drill)

I can imagine situations where having people "on call" and having people "on site" provide different levels of security, but you probably have more insight. I.e., DeepMind's ability to call on a Google security team after the fact, once a breach has already happened, doesn't provide that much security.

I can imagine setups where Google's security people are already integrated into DeepMind, but I can also imagine setups where DeepMind has a few really top security people and that still doesn't provide a paranoid enough level of security.

Given that DeepMind is part of Google, I would expect DeepMind to already have good access to security expertise. If you think that an external service like Latacora can do things that internal Google services can't, can you expand on why you think so?

I don't think that Latacora can do things that an internal Google service literally can't.

Recently, Yudkowsky has been recently talking about third countries stealing

Should probably be

Recently, Yudkowsky has been talking about third countries stealing

[I see that I posted a draft version of this rather than the final version; I have updated the post.]