Peter Thiel pointed out that the common folk wisdom in business - that you learn more from failure than from success - is actually wrong: failure is overdetermined and thus uninteresting.
I think you can make an analogous observation about some prosaic alignment research: a lot of it is the study of (intellectually) interesting failures, which makes it a good nerdsnipe, but not necessarily that informative or useful if you're actually trying to succeed at (or model) doing something truly hard and transformative.
Glitch tokens, the hot mess work, and various things related to jailbreaking, simulators, and hallucinations come to mind as examples of lines of research and discussion that the analogy to business failure predicts won't end up being centrally relevant to real alignment difficulties. This is not to say that the authors of these works claim they will be, nor that this kind of work can't make for effective demonstrations and lessons. But I do think this kind of thing is unlikely to be on the critical path to actually solving or understanding some deeper problems.
Another way of framing the observation above is as an implication of instrumental convergence: without knowing anything about its internals, we can confidently say that an actually-transformative AI system (aligned or not) will be doing something that is at least roughly coherently consequentialist. There might be some intellectually interesting or even useful lessons to be learned from studying the non-consequentialist / incoherent / weird parts of such a system or its predecessors, but in my frame, these parts (whatever they end up being) are analogous to the failures and missteps of a business venture, which are overdetermined if the business ultimately fails, and irrelevant if it succeeds.