I guess figuring out whether we’re “in a bubble” just hasn’t seemed very important to me, relative to how hard it seems to determine? What effects on the strategic calculus do you think it has?
E.g. my current best guess is that I personally should just do what I can to help build the science of interpretability and learning as fast as I can, so we can get to a point where we can start doing proper alignment research and reason more legibly about why alignment might be very hard and what could go wrong. Whether we’re in a bubble or not mostly matters for that only insofar as it’s one factor influencing how much time we have left to do that research.
But I’m already going about as fast as I can anyway, so having a better estimate of timelines isn’t very action-relevant for me. And “bubble vs. no bubble” doesn’t even seem like a leading-order term in timeline uncertainty anyway.
Yeah, the observation that the universe seems maybe well-predicted by a program running on some UTM is a subset of the observation that the universe seems amenable to mathematical description and compression. So the former isn't really an explanation for the latter, just a restatement of it. We'd need an argument for why a prior over random programs running on a UTM should be preferred over a prior over random strings. Why does the universe have structure? The Universal Prior isn't an answer to that question. It's just an attempt to write down a sensible prior that takes into account the observation that the universe is structured and apparently predictable.
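To make the contrast concrete (my notation, just for illustration): the universal prior weights a string by the programs that produce it, while a "random strings" prior just counts the bits of the string itself,

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)} \qquad \text{vs.} \qquad \lambda(x) \;=\; 2^{-\ell(x)},$$

where $U$ is a universal monotone Turing machine, $\ell(\cdot)$ is length in bits, and $U(p) = x*$ means program $p$ outputs something beginning with $x$. The second assigns no extra mass to compressible strings; the first does, and that extra mass is exactly the structure assumption in question.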
See footnote. Since this permutation freedom always exists no matter what the learned algorithm is, it can't tell us anything about the learned algorithm.
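As a toy illustration of the kind of symmetry I mean (my own example, not from the footnote): permute the hidden units of an MLP layer, together with the matching rows and columns of the adjacent weight matrices, and the computed function is unchanged, whatever the network happens to have learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer MLP: x -> relu(x @ W1 + b1) @ W2 + b2
W1, b1 = rng.normal(size=(4, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 3)), rng.normal(size=3)

def mlp(x, W1, b1, W2, b2):
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

# Permute the 8 hidden units: W1's columns, b1's entries, W2's rows.
perm = rng.permutation(8)
W1p, b1p, W2p = W1[:, perm], b1[perm], W2[perm, :]

x = rng.normal(size=(5, 4))
# The permuted network computes the same function as the original.
assert np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2))
```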
... Wait, are you saying we're not propagating updates into to change the mass it puts on inputs vs. ?
My viewpoint is that the prior distribution giving weight to each of the three hypotheses is different from the one giving weight to each of and , even if their mixture distributions are exactly the same.
That's pretty unintuitive to me. What does it matter whether we happen to write out our belief state one way or the other? So long as the predictions come out the same, what we do and don't choose to call our 'hypotheses' doesn't seem particularly relevant for anything?
We made our choice when we settled on as the prior. Everything past that point just seems like different choices of notation to me? If our induction procedure turned out to be wrong or suboptimal, it'd be because was a bad prior to pick, not because we happened to write down in a weird way, right?
If they have the same prior on sequences/histories, then in what relevant sense are they not the same prior on hypotheses? If they both sum to , how can their predictions come to differ?
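Here's the kind of toy case I have in mind (my own construction, just to pin down the question): three i.i.d.-coin hypotheses grouped in two different ways, with prior weights chosen so the sequence mixtures agree. Every posterior predictive then comes out identical, which is why I don't see what further work the choice of "hypotheses" is doing.

```python
import numpy as np

# Three base hypotheses: i.i.d. coins with P(next bit = 1) = theta_i.
theta = np.array([0.1, 0.5, 0.9])

def seq_prob(thetas, weights, history):
    """Probability a weighted mixture of i.i.d. coins assigns to a bit string."""
    ones = sum(history)
    zeros = len(history) - ones
    return float(weights @ (thetas**ones * (1 - thetas)**zeros))

def predictive(p_of, history):
    """P(next bit = 1 | history) for any distribution over sequences."""
    return p_of(history + [1]) / p_of(history)

# Decomposition A: three hypotheses, prior 1/3 each.
p_A = lambda h: seq_prob(theta, np.array([1/3, 1/3, 1/3]), h)

# Decomposition B: two "hypotheses" -- theta = 0.1 with prior 1/3, and the
# equal-weight mixture of theta = 0.5 and theta = 0.9 treated as a single
# hypothesis, with prior 2/3. The sequence mixture is identical to A's.
p_B = lambda h: (1/3 * seq_prob(theta[:1], np.array([1.0]), h)
                 + 2/3 * seq_prob(theta[1:], np.array([0.5, 0.5]), h))

history = [1, 1, 0, 1]
print(predictive(p_A, history))  # decomposition A
print(predictive(p_B, history))  # decomposition B -- same number
```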
I'm confused. Isn't one of the standard justifications for the Solomonoff prior that you can get it without talking about K-complexity, just by assuming a uniform prior over programs of some fixed length on a universal monotone Turing machine and letting that length tend to infinity?[1] How is that different from your ? It's got to be different, right, since you say that is not equivalent to the Solomonoff prior.
See e.g. An Introduction to Universal Artificial Intelligence, pages 145 and 146.
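If I'm reconstructing the standard counting argument correctly: among the $2^n$ programs of length $n$, those that begin with a given minimal program $p$ (whose remaining $n - \ell(p)$ bits the monotone machine never reads) number $2^{n - \ell(p)}$, so under the uniform prior each such $p$ carries mass

$$\frac{2^{\,n - \ell(p)}}{2^{\,n}} \;=\; 2^{-\ell(p)},$$

independent of $n$, which is just the usual $2^{-\ell(p)}$ weighting in the Solomonoff mixture.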
Obviously SLT comes to mind, and some people have tried to claim that SLT suggests neural network training is actually more like the Solomonoff prior than the speed prior (e.g. bushnaq), although I think that work is pretty shaky and may well not hold up.
That post is superseded by this one. It was just a sketch I wrote up mostly to clarify my own thinking; the newer post is the finished product.
It doesn't exactly say that neural networks have Solomonoff-style priors. It depends on the NN architecture. E.g., if your architecture is polynomials, or MLPs that only get one forward pass, I do not expect them to have a prior anything like that of a compute-bounded Universal Turing Machine.
And NN training adds additional complications. All the results I talk about are for Bayesian learning, not things like gradient descent. I agree that this changes the picture, and questions about the learnability of solutions become important. You no longer just care how much volume the solution takes up in the prior; you care how much volume each incremental building block of the solution takes up within the practically accessible search space of the update algorithm at that point in training.
I think just minimising the norm of the weights is worth a try. There's a picture of neural network computation under which this mostly matches their native ontology. It doesn't match their native ontology under my current picture, which is why I personally didn't try doing this. But the empirical results here seem maybe[1] better than I predicted they were going to be last February.
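For concreteness, a minimal sketch of the kind of regulariser I mean, assuming a plain PyTorch setup; the choice of $L^1$ and of the coefficient are placeholder assumptions of mine, not something from the post:

```python
import torch

def weight_norm_penalty(model: torch.nn.Module, p: float = 1.0) -> torch.Tensor:
    """p-norm of all weight parameters concatenated into one vector
    (biases excluded; p = 1 here is an arbitrary placeholder choice)."""
    weights = [param.flatten() for name, param in model.named_parameters()
               if "weight" in name]
    return torch.linalg.vector_norm(torch.cat(weights), ord=p)

# In an otherwise ordinary training step (model, x, y, task_loss assumed given):
#   loss = task_loss(model(x), y) + lam * weight_norm_penalty(model, p=1.0)
#   loss.backward(); optimizer.step()
```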
I'd also add that we just have way more compute and way better standard tools for high-dimensional nonlinear optimisation than we used to. It's somewhat plausible to me that some AI techniques people never got to work at all in the old days could now be made to kind of work a little bit with sufficient effort and sheer brute force, maybe enough to get something on the level of an AlphaGo or GPT-2. Which is all we'd really need to unlock the most crucial advances in interp at the moment.
I haven't finished digesting the paper yet, so I'm not sure.
I just meant that if an oracle told me ASI was coming in two years, I probably couldn't spend down energy reserves to get more done within that timeframe compared to being told it'll take ten years. I might feel a greater sense of urgency than I already do and perhaps end up working longer hours as a result, but if so, that'd probably be an unendorsed emotional response I couldn't help rather than a considered plan. I kind of doubt I'd actually get more done that way. Some slack for curiosity and play is required for me to do my job well.
The stakes are already so high and time so short that varying either within an order of magnitude up or down really doesn't change things all that much.