Zach Stein-Perlman

AI strategy & governance. ailabwatch.org. Looking for new projects.

Sequences

Slowing AI


Comments

Yep, lots of people independently complain about "lab." Some of those people want me to use scary words in other places too, like replacing "diffusion" with "proliferation." I wouldn't do that, and don't replace "lab" with "mega-corp" or "juggernaut," because it seems [incorrect / misleading / low-integrity].

I'm sympathetic to the complaint that "lab" is misleading. (And I do occasionally use "company" rather than "lab," e.g. in the header.) My friends usually talk about "the labs," not "the companies," but to most audiences "company" is more accurate.

I currently think "company" is about as good as "lab." I may change the term throughout the site at some point.

This kind of feedback is very helpful to me; thank you! Strong-upvoted and weak-agreevoted.

(I have some factual disagreements. I may edit them into this comment later.)

(If Dan's comment makes you suspect this project is full of issues/mistakes, react 💬 and I'll consider writing a detailed soldier-ish reply.)

Thanks. Briefly:

I'm not sure what the theory of change for listing such questions is.

In the context of policy advocacy, I think it's sometimes fine/good for labs to say somewhat different things publicly vs. privately. Like, if I were in charge of a lab and believed (1) the EU AI Act will almost certainly pass and (2) it has some major bugs that make my life harder without safety benefits, I might publicly say "I support (the goals of) the EU AI Act" and privately put some effort into removing those bugs, which is technically lobbying to weaken the Act.

(^I'm not claiming that particular labs did ~this rather than actually lobby against the Act. I just think it's messy and regulation isn't a one-dimensional thing that you're for or against.)

Edit: this comment was misleading and partially replied to a strawman. I agree it would be good for the labs and their leaders to publicly say some things about recommended regulation (beyond what they already do) and their lobbying. I'm nervous about trying to litigate rumors for reasons I haven't explained.

Edit 2: based on https://corporateeurope.org/en/2023/11/byte-byte, https://time.com/6288245/openai-eu-lobbying-ai-act/, and background information, I believe that OpenAI, Microsoft, Google, and Meta privately lobbied to make the EU AI Act worse—especially by lobbying against rules for foundation models—and that this is inconsistent with OpenAI's and Altman's public statements.

This post is not trying to shame labs for failing to answer before; I didn't try hard to get them to answer. (The window was one week, but I wasn't expecting answers to my email and wouldn't have expected a reply even if I had waited longer.)

(Separately, I kinda hope the answers to basic questions like this are already written down somewhere...)

Google sheet.

Some overall scores are one point higher, probably because my site rounds down. My site should probably round to the nearest integer instead...
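A minimal sketch of the difference, assuming (hypothetically) that overall scores are non-integer averages:

```python
import math

raw_scores = [3.9, 4.2, 2.6]  # hypothetical raw overall scores

floored = [math.floor(s) for s in raw_scores]  # rounding down: [3, 4, 2]
nearest = [round(s) for s in raw_scores]       # nearest integer: [4, 4, 3]

# Wherever the fractional part is >= 0.5, the two methods differ by one,
# which would explain the one-point discrepancies.
# (Note: Python's round() rounds exact halves to the nearest even integer.)
```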

Thanks for the feedback. I'll add "let people download all the data" to my todo list but likely won't get to it. I'll make a simple Google sheet now.

This is too strong. For example, releasing the product would be correct if someone else would release something similar soon anyway, you're safer than they are, and releasing first lets you capture more of the free energy. (That's not the case here, but it's not as straightforward as you suggest, especially given your "Regardless of how good their alignment plans are" and your claim "There's just no good reason to do that, except short-term greed.")

Constellation (which I think has some important FHI-like virtues, although it makes different tradeoffs and misses on others)

What is Constellation missing or what should it do? (Especially if you haven't already told the Constellation team this.)

Harry let himself be pulled, but as Hermione dragged him away, he said, raising his voice even louder, "It is entirely possible that in a thousand years, the fact that FHI was at Oxford will be the only reason anyone remembers Oxford!"

Yes, but possibly the lab has private scaffolding that works better with its model than any other existing scaffolding, perhaps because it trained the model to use that specific scaffolding, and it can initially decline to let users use it.

(Maybe it’s impossible to give API access to scaffolding and keep the scaffolding private? Idk.)
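As a gut check that it's at least architecturally possible, here's a minimal sketch (all names hypothetical, using Flask for illustration) of serving scaffolded outputs through an API while the scaffolding itself stays server-side:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def base_model(prompt: str) -> str:
    """Stand-in for the lab's private model call."""
    return f"model output for: {prompt}"

def private_scaffold(task: str) -> str:
    """Stand-in for proprietary scaffolding: plan first, then answer.
    Only the final answer ever leaves the server."""
    plan = base_model(f"Plan how to solve: {task}")
    return base_model(f"Using this plan:\n{plan}\nSolve: {task}")

@app.route("/complete", methods=["POST"])
def complete():
    task = request.get_json()["task"]
    # The caller sees one request and one response; the intermediate
    # scaffolding steps (planning, tool calls, retries) stay private.
    return jsonify({"answer": private_scaffold(task)})

if __name__ == "__main__":
    app.run()
```

From the caller's side this looks like a plain completion API, so the scaffolding itself need not be exposed.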

Edit: Plus what David says.
