Edited to say it is not your position. I'm sorry for having published this comment without checking with you.
EDIT: Originally I said that this was my best understanding of Mikhail's point. Mikhail has told me it was not his point. I'm keeping this comment up, as it's a point I personally find interesting.
Before Mikhail released this post, we talked for multiple hours about the goal of the article and how to communicate it better. I don't like the current structure of the post, but I think Mikhail has good arguments and has gathered important data.
Here's the point I would have made instead:
Anthropic presents itself as the champion of AI safety among the AI companies. People join Anthropic because they trust that the Anthropic leadership will make the best decisions to make the future go well.
There have been a number of incidents, detailed in this post, where it seems clear that Anthropic went against a commitment they were expected to uphold (like not pushing the frontier), where their communication was misleading (like misrepresenting the RAISE bill), or where they took actions that seem incongruous with their stated mission (like accepting investment from Gulf states).
All of those incidents most likely have explanations that were communicated internally to the Anthropic employees. Those explanations make sense, and employees believe that the leadership made the right choice.
However, from the outside, a lot of those actions look like Anthropic gradually moving away from being the company that can be trusted to do what's best for humanity. It looks like Anthropic is doing whatever it can to win the race, even if it increases risks, just like all the other AI companies. From the outside, it looks like Anthropic is less special than it seemed at first.
There are two worlds compatible with the observations:

- In the first world, the internal explanations are correct: the leadership is still making the decisions that best serve Anthropic's mission, and each incident looks worse from the outside than it actually was.
- In the second world, the explanations are rationalizations: Anthropic is racing like the other AI companies, and the mission no longer drives its decisions.
In the second world, working at Anthropic would not reliably improve the world. Anthropic employees would have to evaluate whether to continue working there in the same way as they would if they worked at OpenAI or any other AI company.
All current and potential Anthropic employees should notice that from the outside, it sure does look like Anthropic is not following its mission as much as it used to. There are two hypotheses that explain it. They should make sure to keep tracking both of them. They should have a plan of what they'll do if they're in the least convenient world, so they can face uncomfortable evidence. And, if they do conclude that the Anthropic leadership is not following Anthropic's mission anymore, they should take action.
Nominated. One of the posts that changed my life the most in 2024. I've eaten oatmeal at least 50 times since then, and have enjoyed the convenience and nutrition.
I'll go buy some more tomorrow.
Nominated. Since reading it, I've used the calculator linked in this post to decide whether to take out insurance.
Nominated. The hostile telepath problem immediately entered my library of standard hypotheses to test when debugging my behavior and helping others do so, and it sparked many lively conversations in my rationalist circles.
I'm glad I reread it today.
Draft post made during Inkhaven. Interested in feedback.
Signals of Competence is the model I use to advise friends on how to build career capital.
When deciding who to hire, an organization assesses the competence of candidates by looking at the various signals they send in their CV, cover letter, interview, or work test.
Those signals vary along two main dimensions:
Signals of Competence are of two types:
Small and big organizations generally care about different signals
Your CV is the collection of those signals. There are three ways you can improve it:
Recruiters look for three kinds of signals:
Make sure you signal your competence on all three.
Its population advantage had already shrunk dramatically by the end of the 19th century, as France was the first country to go through the demographic transition, starting in the 18th century. AFAIK its population relative to other Western countries has been stable since WWII.
(If this seems mean, I think I have equally mean critiques about how every other country is screwing up. I'm just calling out France here because the claims in the post are about France.)
Yeah I realized this post did make me sound like a patriot lol. I'm not convinced of France's relevance, nor its irrelevance. I'm writing those posts for myself to figure out whether France matters, and to help other people working in AI policy to have a good model of France's motivations in the AI race.
Yeah, agree with most of this. I added a note saying that this is the narrative that I think is shaping France's actions, rather than my view of the situation.
I agree that Mistral cannot reach parity on technical prowess. I think that from the sovereignty angle, it can still be a success if it makes models that are actually useful for industry and trusted by French national security agencies, which seems to be Mistral's goal lately.
Strong agree on not expecting France to be a significant player in AI development. However, I expect that France seeing itself as in the race is a big part of the current AI investment push. Also, France might not be in the race, but it could still matter whether it has national compute resources and expertise. France was only the fourth country to develop nuclear weapons, but having them still matters for its sovereignty.
Also agree on the lameness of the tech scene in France. I worked at a world-leading crypto startup founded in France, and the founder still ended up moving from Paris to London to get closer to a proper financial district.
In https://secularsolstice.vercel.app/feedback, "2022 (Or, "Where Ray's Coming From")" has the lyrics of "Five Thousand Years", which seems incorrect.