I'm a final-year PhD student at the University of Amsterdam working on AI Safety and Alignment, specifically on safety risks of Reinforcement Learning from Human Feedback (RLHF). Previously, I also worked on abstract multivariate information theory and equivariant deep learning. https://langleon.github.io/
I suspect there’s a basic reason why futility claims are often successful in therapy/coaching: by claiming (and succeeding in convincing the client) that something can’t be changed, you reduce the client’s shame about not having changed that thing. The client is then without shame, which is a state of mind that makes change a priori easier. Additionally, redirecting the change effort toward aspects the client didn’t fail on in the past increases the chance of success, since there’s no evidence of failure on those aspects.
However, I also really care about truth, and so I really dislike such futility claims.
I feel like Cunningham's law got confirmed here. I'm really glad about all the things I learned from people who disagreed with me.
Thanks a lot for this very insightful comment!
I think we may not disagree about any truth-claims about the world. I'm just satisfied that the north star of Solomonoff induction exists at all, and that it is as computable (albeit only semicomputable), well-predicting, science-compatible and precise as it is. I expected less from a theory that seems so unpopular.
> It predicts well: It's provenly a really good predictor
So can you point to any example of anyone ever predicting anything using it?
No, but crucially, I've also never seen anyone predict as well as someone using Solomonoff induction with any other method :)
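More seriously: the kind of guarantee I have in mind is Solomonoff's classical error bound. One standard formulation (as I understand it, and up to the exact constant; see e.g. Hutter's work on universal induction): for any computable measure $\mu$ generating the binary sequence, the universal predictor $M$ satisfies

$$\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[\big(M(x_t{=}1 \mid x_{<t}) - \mu(x_t{=}1 \mid x_{<t})\big)^2\right] \;\le\; \frac{\ln 2}{2}\, K(\mu).$$

Note that the right-hand side doesn't grow with $t$: the total expected squared error over the entire infinite sequence is bounded by the complexity of the environment, so $M$'s predictions converge to the true conditional probabilities.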
I'm only now really learning about Solomonoff induction. I think I didn't look into it earlier since I often heard things along the lines of "It's not computable, so it's not relevant".
But...
What more do you want?
The fact that my master's degree in AI at the UvA didn't teach this to us seems like a huge failure.
Thanks for doing this AMA!
1. In what sense is enlightenment permanent? E.g., will a truly enlightened person never suffer again? Or is it a weaker claim of the form "once one learns the motion of enlightenment, it can be repeated to eliminate suffering at will"? Or something else entirely?
2. I've read Shinzen Young's "The Science of Enlightenment". He describes how, at some intermediate stage, he started hallucinating giant insects in his daily life. Have you had similar experiences? How do you interpret them? Do you consider such experiences dangerous? Are such experiences distinct from schizophrenia in a relevant way?
3. As far as I understand, enlightened people no longer cling to a specific reality, but they may still have strong desires. Is this your typical experience? How do you relate to your desires? Is it possible to simultaneously (a) be enlightened, (b) have a desire that leads to immoral actions when acted upon, and (c) act on those desires?
4. Is any amount of pain/anxiety/sadness/anger/etc. compatible with being in a state of zero suffering? Is zero suffering harder to maintain in practice at higher intensities of these unpleasant experiences, or will a once-enlightened person not find any of them difficult to combine with a state of non-suffering?
5. Are there degrees of enlightenment, or is it more of a discrete change?
6. How much control can an enlightened person have over their experience? Is it possible to decide not to hear/see/smell/feel/taste something? Is it possible to stop thinking at will? Is it possible to feel pleasure or bliss at will? Is it possible to change one's experience at will / to hallucinate arbitrary experiences at will?
7. What is it like to be enlightened?
Thanks for adding!
In case you weren't aware of the glossary functionality: the glossary at the top-right of the post also contains an explanation (albeit shorter) of "prefix-free code".
Several commenters have remarked that the alt-complexity and the K-complexity differ only by an additive constant. I have now written a post where I write down a detailed proof of this classical result, which is known as the "coding theorem".
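For reference, the statement (roughly, and modulo the precise formulation in the post) is: with $U$ a universal prefix machine and $|p|$ the length of program $p$,

$$K(x) \;=\; -\log_2 m(x) + O(1), \qquad \text{where} \quad m(x) \;=\; \sum_{p \,:\, U(p)=x} 2^{-|p|},$$

so the alt-complexity $-\log_2 m(x)$ and the K-complexity $K(x)$ indeed differ by at most an additive constant independent of $x$.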
A NeurIPS paper on scaling laws from 1993, shared by someone on Twitter.