Thank you for writing this up! I'm still not sure I understand condensation. I would summarize it as: instead of encoding the givens directly, we encode some latents which can be used to compute the answers to the possible questions about the givens (so we need a distribution over questions).
Also, the total cost of condensation has to be at least the entropy of the answer distribution (generated by the probability distribution over questions, applied to the givens) because of Shannon's bound.
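To make sure I have the bound right, here's a toy rendering of it (my own construction with made-up givens and questions, not the post's formalism): fix the givens, draw a question from a known distribution, and Shannon's bound says any encoding must pay at least the entropy of the induced answer distribution, in expected bits.

```python
# Toy illustration (not the post's setup): entropy of the answer
# distribution induced by a question distribution over fixed givens.
import math
from collections import defaultdict

givens = {"color": "red", "shape": "cube"}
questions = {"what color?": 0.5, "what shape?": 0.25, "color and shape?": 0.25}

def answer(q, g):
    return {"what color?": g["color"],
            "what shape?": g["shape"],
            "color and shape?": (g["color"], g["shape"])}[q]

# Distribution over answers induced by the distribution over questions.
p = defaultdict(float)
for q, pq in questions.items():
    p[answer(q, givens)] += pq

entropy = -sum(pa * math.log2(pa) for pa in p.values())
print(f"lower bound on expected encoding cost: {entropy:.3f} bits")
```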
I feel like if the optimal condensation setup is indeed 1 book per question, then it's not a very good model of latent variables, no? But perhaps it's going in the right direction.
Well, I like what she writes.
I feel the same. The social disapproval would also be a pretty big factor for me. I do think I will have to bite the bullet and do the experiment for a bit.
For years I have suspected my veg*ism of having caused depression (onset: a few months after starting vegetarianism in 2017, increasing basically monotonically over time, though it did coincide with grad school).

But my habits are too ingrained and I find meat gross, so I have no idea what to do. Should I just order some meat from a restaurant and eat it? That's almost certainly suffering-producing meat. Doing the things in this post sounds like a lot of work that kind of goes against my altruistic values.
Is this guaranteed to give you the same as mass-mean probing?
Thinking about it quickly, consider the solution to ordinary least squares regression. With a $Y$ that one-hot encodes the labels, it is $\hat{B} = (X^\top X)^{-1} X^\top Y$. Note that $X^\top X = n \, \widehat{\mathrm{Cov}}(X)$ (for centered $X$). The procedure Adam describes makes the sample of Xs uncorrelated, which is exactly the same as zeroing out the off-diagonal elements of the covariance.

If the covariance is diagonal, then $(X^\top X)^{-1}$ is also diagonal, and it follows that the solution to OLS is indeed an unweighted average of the datapoints that correspond to each label! Each dimension of the data $x$ is multiplied by some coefficient, one per dimension, given by the inverse of the corresponding diagonal element of the covariance.
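A quick numerical check of this argument (my sketch, with made-up data, not necessarily Adam's exact procedure): whiten $X$ so its sample covariance is the identity, fit OLS against the centered one-hot label, and compare with the mass-mean direction (difference of class means).

```python
# Sanity check: after whitening, the OLS probe direction should match
# the mass-mean (difference-of-class-means) direction.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 16

# Two classes sharing a correlated covariance, with different means.
A = rng.normal(size=(d, d))
X = rng.normal(size=(n, d)) @ A
y = rng.integers(0, 2, size=n)
X[y == 1] += rng.normal(size=d)  # shift the class-1 mean

# Whiten: transform so the sample covariance becomes the identity.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / n
L = np.linalg.cholesky(np.linalg.inv(cov))  # L @ L.T == inv(cov)
Xw = Xc @ L

# OLS against the centered one-hot (here: binary) label.
beta = np.linalg.lstsq(Xw, y - y.mean(), rcond=None)[0]

# Mass-mean probe: difference of class means in the whitened space.
mm = Xw[y == 1].mean(axis=0) - Xw[y == 0].mean(axis=0)

cos = beta @ mm / (np.linalg.norm(beta) * np.linalg.norm(mm))
print(f"cosine(OLS, mass-mean) = {cos:.6f}")  # ~1.0
```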
I'd expect logistic regression to choose the ~same direction.
Very clever technique!
It's still true that a posteriori you can compress random files. For example, if I randomly get the file "all zeros", that's a very compressible file, even after counting the length of the program that reproduces it.
It's just that on average a priori you can't do better than just writing out the file.
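A concrete two-file illustration (my own, not the parent's): a lucky draw like all zeros compresses enormously, while a typical random file doesn't compress at all.

```python
# Compare how well zlib compresses the all-zeros file versus a
# typical random file of the same size.
import os
import zlib

zeros = bytes(1_000_000)       # the "all zeros" file
noise = os.urandom(1_000_000)  # a typical random file

print(len(zlib.compress(zeros)))  # tiny: on the order of a kilobyte
print(len(zlib.compress(noise)))  # about the original size, or slightly more
```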
Well, that's a good motivation if I ever saw one. Nothing I've read in the intervening years is as good as HPMOR. It might be the pinnacle of Western literature. It will be many years before an AI, never mind another human, can write something this good. (Except for the wacky names that people paid for, which I guess is in character for the civilization that spawned it.)
Thank you for writing! A couple questions:
Can we summarize by saying that Opus doesn't always care about helping you; it only cares about helping you when doing so is fun or has a timeless, glorious component to it?
If that's right, can you get Opus to help you by convincing it that your joint work has a real chance of being Great? (Or, if it agrees from the start that the work is Great.)
Honestly, if that's all then Opus would be pretty great even as a singleton. Of course there are better pluralistic outcomes.
I think this argument would come out differently with respect to death, because with pain people can at any point compare what it is like to be in pain and not in pain. Death is different because we cannot be reasonably sure of an afterlife.
I do think my life has been made more meaningful by the relatively small amounts of pain (of many sorts) I've endured, especially in the form of adversity overcome. Perhaps I would make them a little smaller, but not zero.
Therefore I think it's just straightforwardly true that pain can be a meaningful part of life. At the same time the current amount of pain in our world is WAY TOO HIGH, with dubious prospects of becoming manageable; so I would choose "no pain ever" over the current situation.
I do think this is a good insight. Or like, it's not new, since SAEs do this; but it's a fresh way of looking at it that yields: perhaps SAEs impose too much of a particular structure on the input, and instead we should just try to compress the latent stream, perhaps using diffusion or similar techniques.