I see the main contribution/idea of this post as being: whenever you make a choice of basis/sorting-algorithm/etc, you incur no "true complexity" cost if any such choice would do.
I would guess that this is not already in the water supply, but I haven't had the required exposure to the field to know one way or the other. Is this more specific point also unoriginal in your view?
For one thing, this wouldn't be very kind to the investors.
For another, maybe there were some machinations involving the round like forcing the board to install another member or two, which would allow Sam to push out Helen + others?
I also wonder if the board signed some kind of NDA in connection with this fundraising that is responsible in part for their silence. If so, this was very well schemed...
This is all to say that I think the timing of the fundraising is probably very relevant to why they fired Sam "abruptly".
OpenAI spokesperson Lindsey Held Bolton refuted it:
"refuted that notion in a statement shared with The Verge: “Mira told employees what the media reports were about but she did not comment on the accuracy of the information.”"
The reporters describe this as a refutation, but this does not read to me like a refutation!
Has this one been confirmed yet? (Or is there more evidence than this reporting that something like this happened?)
Your graphs are labelled with "test accuracy", do you also have some training graphs you could share?
I'm specifically wondering if your train accuracy was high for both the original and encoded activations, or if e.g. the regression done over the encoded features saturated at a lower training loss.
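To illustrate the kind of comparison I have in mind — purely a synthetic sketch, with made-up Gaussian data standing in for the activations and a random low-rank projection standing in for the encoder — here's a least-squares linear probe fit on "original" features versus a lossy "encoded" view of them, reporting train and test accuracy separately:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the real experiment: features X with a linearly
# decodable label y, plus a lossy low-rank "encoding" of those features.
n, d, d_enc = 400, 32, 4
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true)

X_enc = X @ rng.normal(size=(d, d_enc))  # illustrative lossy encoder

def probe_accuracies(feats, labels, n_train=300):
    """Least-squares linear probe; returns (train_acc, test_acc)."""
    w, *_ = np.linalg.lstsq(feats[:n_train], labels[:n_train], rcond=None)
    acc = lambda F, t: float(np.mean(np.sign(F @ w) == t))
    return acc(feats[:n_train], labels[:n_train]), acc(feats[n_train:], labels[n_train:])

tr_orig, te_orig = probe_accuracies(X, y)
tr_enc, te_enc = probe_accuracies(X_enc, y)
print("original:", tr_orig, te_orig)  # high train accuracy here
print("encoded: ", tr_enc, te_enc)    # saturates lower when the encoding loses label-relevant info
```

If the probe over the encoded features saturates at a noticeably lower training accuracy (as the lossy projection does here), that would suggest the encoding is discarding label-relevant information, rather than the probe merely failing to generalize.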
See also: LLMs Sometimes Generate Purely Negatively-Reinforced Text
With respect to AGI-grade stuff happening inside the text-prediction model (which might be what you want to "RLHF" out), I think the important points are:

- We have no reason to believe that these post-training methods (be it finetuning, RLHF, RLAIF, etc.) modify "deep cognition" present in the network, rather than updating shallower things like "higher prior on this text being friendly" or whatnot.
- Evidence in favor of this is the difficulty of eliminating "jailbreaking" with these methods: each jailbreak demonstrates that a lot of the necessary algorithms/content are still in there, accessible by the network whenever it deems it useful to think that way.
Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve those propositions that we reject upon consideration.
Some distinctions that might be relevant:
If you ask me for what in my experience corresponds to a feeling of "passively accepting a proposition" when someone tells me, I think I'm doing a bunch of (1) and (2). This does feel like "accepting" or "taking in" the proposition, and can change how I see things if it works.
Awesome, thanks for writing this up!
I very much like how you are giving a clear account for a mechanism like "negative reinforcement suppresses text by adding contextual information to the model, and this has more consequences than just suppressing text".
(In particular, the model isn't learning "just don't say that", it's learning "these are the things to avoid saying", which can make it easier to point at the whole cluster?)
I tried to formalize this, using A→B as a "poor man's counterfactual", standing in for "if Alice cooperates then so does Bob". This has the odd behaviour of becoming "true" when Alice defects! You can see this as the counterfactual collapsing and becoming inconsistent, because its premise is violated. But this does mean we need to be careful about using these.
For technical reasons we upgrade to □A→B, which says "if Alice cooperates in a legible way, then Bob cooperates back". Alice tries to prove this, and legibly cooperates if so.
This setup gives us "Alice legibly cooperates if she can prove that, if she legibly cooperates, Bob would cooperate back". In symbols, □(□A→B)→A.
Now, is this okay? What about proving □A→⊥?
Well, actually you can't ever prove that! Because of Löb's theorem: since ⊥ implies everything, ⊢ □A→⊥ would give ⊢ □A→A, Löb then yields ⊢ A, hence ⊢ □A, hence ⊢ ⊥, contradicting consistency.
Outside the system we can definitely see cases where A is unprovable, e.g. because Bob always defects. But you can't prove this inside the system. You can only prove things like "□ₖA→⊥" for finite proof lengths k.
I think this is best seen as a consequence of "with finite proof strength you can only deny proofs up to a limited size".
So this construction works out, perhaps just because two different weirdnesses are canceling each other out. But in any case I think the underlying idea, "cooperate if choosing to do so leads to a good outcome", is pretty trustworthy. It perhaps deserves to be cashed out in better provability math.
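One way to cash out the underlying math: if a single sentence x satisfies ⊢ □(□x→x)→x (read x as, roughly, "everyone legibly cooperates", playing the role of □(□A→B)→A above), then ⊢ x follows using only necessitation and distribution — notably, Löb's theorem itself isn't needed for this direction. A sketch of the derivation:

```latex
% Assumption: \vdash \Box(\Box x \to x) \to x.   Claim: \vdash x.
\begin{align*}
1.&\ \vdash x \to (\Box x \to x)           && \text{propositional tautology}\\
2.&\ \vdash \Box x \to \Box(\Box x \to x)  && \text{necessitation + distribution on 1}\\
3.&\ \vdash \Box(\Box x \to x) \to x       && \text{assumption}\\
4.&\ \vdash \Box x \to x                   && \text{from 2 and 3}\\
5.&\ \vdash \Box(\Box x \to x)             && \text{necessitation on 4}\\
6.&\ \vdash x                              && \text{from 3 and 5}
\end{align*}
```

(The reading of x as the joint cooperation statement is my gloss here; the two-agent bookkeeping takes a bit more care than this single-sentence version shows.)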