William_S

I worked at OpenAI for three years, from 2021 to 2024, on the Alignment team, which eventually became the Superalignment team. I worked on scalable oversight, as part of the team developing critiques as a technique for using language models to spot mistakes in other language models. I then worked to refine an idea from Nick Cammarata into a method for using language models to generate explanations for features in language models. I was then promoted to managing a team of four people that worked on trying to understand language model features in context, leading to the release of an open source "transformer debugger" tool.
I resigned from OpenAI on February 15, 2024.


Comments

William_S

Language in the emails included:

"If you executed the Agreement, we write to notify you that OpenAI does not intend to enforce the Agreement"

I assume this also communicates that OpenAI doesn't intend to enforce the clause requiring the agreement itself to be kept confidential.

William_S

Evidence could look like: 1. someone was in a position of trust where they had to make a judgement about OpenAI; 2. they said something bland and inoffensive about OpenAI; 3. later, you independently find that they likely knew about something bad that they weren't saying because of the nondisparagement agreement (rather than because of ordinary confidentiality agreements).

This requires some model of "this specific statement was influenced by the agreement" instead of just "you never said anything bad about OpenAI because you never gave opinions on OpenAI".

I think one should require this kind of positive evidence before calling it a "serious breach of trust", but people can make their own judgement about where that bar should be.

William_S

I imagine many of the people going into leadership positions were prepared to ignore the contract, or maybe even forgot about the nondisparagement clause altogether. The clause is also open to more avenues of legal attack if it's enforced against someone who takes another position that requires disparagement (e.g. if it's argued to be a restriction on engaging in business). And if any individual involved divested themselves of equity before taking up another position, there would be fewer ways for the company to retaliate against them. I don't think it's fair to view this as a serious breach of trust on the part of any individual without clear evidence that it impacted their decisions or communication.

But it is fair to find it concerning that the overall situation could arise with nobody noticing, and to try to design defenses to prevent this or similar things from happening in the future, e.g. a clear statement that people going into independent leadership positions should have no conflicts of interest, including legal obligations, as well as a consistent divestment policy (though that creates its own weird incentives).

There is no Golden Gate Bridge division

Would make a great SCP

William_S

I worked at OpenAI for three years, from 2021 to 2024, on the Alignment team, which eventually became the Superalignment team. I worked on scalable oversight, as part of the team developing critiques as a technique for using language models to spot mistakes in other language models. I then worked to refine an idea from Nick Cammarata into a method for using language models to generate explanations for features in language models. I was then promoted to managing a team of four people that worked on trying to understand language model features in context, leading to the release of an open source "transformer debugger" tool.
I resigned from OpenAI on February 15, 2024.

William_S

Re: hidden messages in neuron explanations, yes, it seems like a possible problem. One way to try to avoid it is to train the simulator model to imitate what a human would say given the explanation. A human would ignore the coded message, and so the trained simulator model should also ignore the coded message. (This maybe doesn't account for adversarial attacks on the trained simulator model, so it might also need ordinary adversarial robustness methods.)

Does seem like if you ever catch your interpretability assistant trying to hide messages, you should stop and try to figure out what is going on, and that might be sufficient evidence of deception.
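
To make the imitation idea above concrete, here is a minimal sketch (my own illustration, not the actual pipeline): a toy simulator is trained purely to reproduce human-provided activation labels given an explanation and a token sequence, so a coded message in the explanation that humans ignore provides no training signal. The toy bag-of-words model, the dataset shapes, and all hyperparameters are placeholder assumptions.

```python
# Sketch only. Assumptions: a dataset of human-simulated activations (what a human
# predicted the neuron's activations to be after reading the explanation and the
# tokens); a toy bag-of-words model stands in for a language-model-based simulator.
import torch
import torch.nn as nn

VOCAB, EMBED = 5000, 64  # placeholder vocabulary and embedding sizes

class ToySimulator(nn.Module):
    """Predicts per-token activations from (explanation, token sequence)."""
    def __init__(self):
        super().__init__()
        self.expl_embed = nn.EmbeddingBag(VOCAB, EMBED)  # bag-of-words over the explanation
        self.tok_embed = nn.Embedding(VOCAB, EMBED)
        self.head = nn.Sequential(nn.Linear(2 * EMBED, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, expl_ids, tok_ids):
        e = self.expl_embed(expl_ids)                          # (batch, EMBED)
        t = self.tok_embed(tok_ids)                            # (batch, seq, EMBED)
        e = e.unsqueeze(1).expand(-1, t.shape[1], -1)          # pair explanation with each token
        return self.head(torch.cat([e, t], dim=-1)).squeeze(-1)  # (batch, seq)

def imitation_step(model, opt, expl_ids, tok_ids, human_activations):
    # The training target is what the human said, not anything derived from the
    # explainer model, so a hidden message the human ignored carries no useful gradient.
    loss = nn.functional.mse_loss(model(expl_ids, tok_ids), human_activations)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random stand-in data.
model = ToySimulator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
expl_ids = torch.randint(0, VOCAB, (8, 32))   # tokenized explanations
tok_ids = torch.randint(0, VOCAB, (8, 64))    # tokenized text being simulated
human_acts = torch.rand(8, 64) * 10           # human-simulated activations
print(imitation_step(model, opt, expl_ids, tok_ids, human_acts))
```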

William_S

From discussion with Logan Riggs (Eleuther), who worked on the tuned lens: the tuned lens suggests that the residual stream at different layers goes through some linear transformations, so activations at different layers aren't directly comparable. This would interfere with a couple of methods for trying to understand neurons based on weights: 1) the embedding space view, and 2) calculating virtual weights between neurons in different layers.

However, we could try correcting for this using the transformations learned by the tuned lens to translate between the residual stream at different layers, and maybe that would make these methods more effective. By default I think the tuned lens learns only the transformation needed to predict the output token, but the method could be adapted to retrodict the input token from each layer as well; we'd need both. Code for the tuned lens is at https://github.com/alignmentresearch/tuned-lens
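
A rough illustration of the correction, as I understand it (my own sketch, with random matrices standing in for the tuned lens's learned per-layer translators, where h_final ≈ A_l h_l + b_l): a neuron's write direction at layer l can be mapped into the final-layer basis with A_l and then pulled back into layer m's basis with A_m^{-1} before taking the inner product with a layer-m read direction, giving a corrected virtual weight r^T A_m^{-1} A_l v. Dimensions and scales below are placeholders.

```python
# Sketch only: random matrices stand in for the linear parts of the tuned-lens
# translators (the real ones would come from
# https://github.com/alignmentresearch/tuned-lens). Biases drop out because
# virtual weights only involve directions, not offsets.
import numpy as np

d_model = 768
rng = np.random.default_rng(0)

# Placeholder translators: h_final ≈ A_l @ h_l + b_l for layers l and m (m > l).
A_l = np.eye(d_model) + rng.normal(scale=0.05, size=(d_model, d_model))
A_m = np.eye(d_model) + rng.normal(scale=0.05, size=(d_model, d_model))

# "Write" direction: what a layer-l neuron adds to the residual stream when it
# fires (a column of that MLP's output projection).
write_vec = rng.normal(size=d_model)
# "Read" direction: what a layer-m neuron takes an inner product with
# (a row of that MLP's input projection).
read_vec = rng.normal(size=d_model)

# Naive virtual weight: pretends layers l and m share a basis.
naive_vw = read_vec @ write_vec

# Corrected virtual weight: push the write through A_l into the final basis,
# then back into layer m's basis via A_m^{-1}, i.e. r^T A_m^{-1} A_l v.
corrected_vw = read_vec @ np.linalg.solve(A_m, A_l @ write_vec)

print(f"naive: {naive_vw:.3f}, corrected: {corrected_vw:.3f}")
```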

(I work at OpenAI.) Is the main thing you think has the effect of safetywashing here the claim that the misconceptions are common? Like, if the post were titled "some misconceptions I've encountered about OpenAI", would it mostly not have that effect? (Point 2 was edited to clarify that it wasn't a full account of the Anthropic split.)
