Just to make another note, "solving the problem in theory" is also equivalent to the [forward training algorithm](https://www.cs.cmu.edu/~sross1/publications/Ross-AIStats10-paper.pdf), which preceded DAgger and is by the same authors.
I do think there are some interesting ideas to consider in the alignment setting. For example, the chunk size k is equivalent to the number of roll-out steps in IL. "Chunking" the roll-out to a fixed window is a common optimization if the task has a long time horizon and the expert is expensive to query. On the other hand, longer rol...
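To make the analogy concrete, here's a toy sketch (all names and dynamics hypothetical, not from any particular paper) of a "chunked" roll-out: the learner's policy acts for k steps, the expert relabels only those k visited states, and the labeled chunk is returned for aggregation into the training set.

```python
def rollout_chunk(policy, expert, state, k):
    """Roll the learner's policy forward k steps; expert relabels each visited state."""
    data = []
    for _ in range(k):
        action = policy(state)               # learner chooses the action
        data.append((state, expert(state)))  # expert labels the state the learner visited
        state = state + action               # toy deterministic 1-D dynamics
    return state, data

# Toy 1-D example: the expert always moves toward 0, the learner drifts upward,
# so the expert's labels disagree with the learner on the states it drifts into.
expert = lambda s: -1 if s > 0 else 1
policy = lambda s: 1

state, dataset = rollout_chunk(policy, expert, state=0, k=4)
```

A smaller k here means fewer expensive expert queries per roll-out, at the cost of the learner seeing less of its own long-horizon state distribution per iteration.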
It's not clear to me that you do get stronger guarantees, because the setting and method are so similar to those of classical imitation learning. In both cases, we seek to learn a policy that is aligned with the expert (human). Supervised fine-tuning (behavioral cloning) is problematic because of distribution shift, i.e. the learned policy accumulates error (at a quadratic rate!) and visits states it did not see in training.
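The quadratic rate can be illustrated with a toy calculation (a back-of-the-envelope model, not a result from the thread): suppose the cloned policy errs with probability eps at each step, and any single error takes it off the expert's distribution for the rest of the horizon, where it incurs cost 1 per step.

```python
def expected_cost(eps, T):
    """Expected number of off-distribution steps over a horizon of T steps.

    At each step t, with probability eps the policy makes its first error,
    after which all remaining T - t steps are off-distribution and cost 1 each.
    """
    cost = 0.0
    stay_on_dist = 1.0  # probability no error has occurred yet
    for t in range(T):
        cost += stay_on_dist * eps * (T - t)  # err now, pay for the rest of the horizon
        stay_on_dist *= (1 - eps)
    return cost
```

For small eps this is roughly eps * T^2 / 2, i.e. quadratic in the horizon, whereas an interactive scheme like DAgger, by querying the expert on the learner's own visited states, recovers a bound linear in T.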
You say this failure mode is dangerous because of scheming AI, and I say it's dangerous because the policy is OOD, but it appears you agre...
How does this differ from DAgger (https://arxiv.org/abs/1011.0686)?
The cost of output tokens certainly does not scale linearly, even with a KV cache. The KV cache means you don't need to recompute the k/q/v vectors for each of the previous tokens, but you still need to compute n query-key dot products for the (n+1)-st token, so the total attention cost over a generation is quadratic.
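Here's a back-of-the-envelope accounting of that (a toy count of dot products per attention layer, not a real profiler): the cache saves recomputing past keys and values, but each new token's query still attends over every cached position.

```python
def attention_dot_products(prompt_len, new_tokens):
    """Count query-key dot products during autoregressive decoding with a KV cache."""
    total = 0
    for i in range(new_tokens):
        # The i-th generated token attends over the prompt, all previously
        # generated tokens, and itself: prompt_len + i + 1 dot products.
        total += prompt_len + i + 1
    return total
```

Doubling the number of generated tokens roughly quadruples this count once the generation dominates the prompt, which is the quadratic scaling in question.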