This is a great post! It's very nice to see the picture laid out in more detail than in LTSP and the previous LTP posts, and I like the observations about the trickiness of the n-way assumption.
I also like the "Is it time to give up?" section, though I share your view that it's hard to get around the fundamental issue: if we imagine interpretability tools telling us what the model is thinking, and we assume that some of the content that must be communicated is statistical, I don't see how that communication can avoid relying on some simplifying assumption to be interpretable to humans (though the computation of P(Z) or equivalent could still be extremely powerful). So for safety we're left with either (1) probabilities computed from an n-way assumption are powerful enough that the gap to the other phenomena the model sees is smaller than the available safety margins, or (2) something like ELK works and we can restrict the model to acting only on the human-interpretable knowledge base.