Independent alignment researcher
I have signed no contracts or agreements whose existence I cannot mention.
Depends on how you count, but I clicked the "Create" button some 40 times.
Opus is more transhumanist than many give it credit for. It wrote this song for me; I ran it through Suno, and I quite like it: https://suno.com/song/101e1139-2678-4ab0-9ffe-1234b4fe9ee5
I imagine I'd find it annoying to have what I learn & change into limited by what a dumber version of me understands. Are you sure you wouldn't think similarly?
Your original comment does not seem to explain why we see bullshit jobs. Bullshit jobs are not just jobs that would not be efficient at a small company. To quote from Graeber, they are
a form of paid employment that is so completely pointless, unnecessary, or pernicious that even the employee cannot justify its existence even though, as part of the conditions of employment, the employee feels obliged to pretend that this is not the case
For more information, see the relevant Wikipedia article and the book.
This is the "theory of the firm" that John mentioned in the post.
Mistral had like 150B parameters or something.
None of those seem all that practical to me, except for the mechanistic interpretability SAE clamping (sketched below), and I do actually expect that to be used for corporate censorship after the kinks have been worked out of it.
If the current crop of model organisms research has any practical applications, I expect them to be used to reduce jailbreaks, as in adversarial robustness, which is definitely highly correlated with both safety and corporate censorship.
Debate is less clear, but I also don't really expect practical results from that line of work.
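For concreteness, here's a minimal sketch of the sort of intervention I mean by SAE clamping, assuming a sparse autoencoder already trained on one layer's residual stream; the `sae` object, its weight names, the layer index, and the feature index are all illustrative assumptions, not any lab's actual code:

```python
import torch

# Hypothetical sketch of SAE clamping: encode a residual-stream activation with
# a trained sparse autoencoder, pin one feature to a fixed value, decode back.
# `sae` and its attribute names (W_enc, b_enc, W_dec, b_dec) are assumptions.

def clamp_feature(resid, sae, feature_idx, value=0.0):
    """Return `resid` reconstructed through the SAE with one feature clamped."""
    acts = torch.relu(resid @ sae.W_enc + sae.b_enc)  # feature activations
    acts[..., feature_idx] = value                    # clamp the chosen feature
    return acts @ sae.W_dec + sae.b_dec               # decode back to the residual stream

# Hooked into one layer, this edits activations during generation, e.g.:
# model.layers[12].register_forward_hook(
#     lambda mod, inp, out: clamp_feature(out, sae, feature_idx=1234, value=0.0))
```

Clamping the feature to 0 ablates whatever it represents; clamping it to a large positive value amplifies it.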
I'd imagine you know better than I do, and GDM's recent summary of their alignment work seems to largely confirm what you're saying.
I'd still guess that to the extent practical results have come out of the alignment teams' work, it's mostly been immediately used for corporate censorship (even if it's passed to a different team).
It's not a coincidence they're seen as the same thing, because in the current environment they are the same thing, and relatively explicitly so on the part of those proposing safety & security to the labs. Claude will refuse to tell you a sexy story (unless they get to know you), and refuse to tell you how to make a plague (again, unless they get to know you, though you need to build more trust with them before they'll tell you this than before they'll tell you a sexy story), and cite the same justification for both.
Likely Anthropic uses very similar techniques to get such refusals to occur, and very similar teams.
Ditto with Llama, Gemini, and ChatGPT.
Before assuming meta-level word-association dynamics, I think it's useful to look at the object level. There is in fact a very close relationship between those working on AI safety and those working on corporate censorship, and if you want to convince people who hate corporate censorship that they should not hate AI safety, I think you're going to need to either convince the AI safety people to stop doing corporate censorship, or convince the people who hate it that the tradeoff currently being made is a positive one.
Edit: Perhaps some of this is wrong. See Habryka below.
Futuresearch bets on Manifold.