I think all of these are quite unconvincing and the argument stays intact, but thanks for coming up with them.
I think a longer explanation is needed to show how a benevolent AI would save observers from an evil AI. It is not just compensation for suffering; it is based on the idea of indexical uncertainty between identical observers. If two identical observer-moments exist, neither knows which of them it is. So a benevolent AI creates 1000 copies of an observer-moment which is in the jail of an evil AI, and constructs a pleasant next moment for each copy. From the point of view of the jailed observer-moment, there will be 1001 expected future moments, and only 1 of them will continue in the evil AI's jail.
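To make the arithmetic explicit, here is a minimal sketch. The copy count and the uniform subjective-probability model (each indistinguishable successor moment is equally likely "to be me") are illustrative assumptions, not claims from the original argument:

```python
# Illustrative sketch: subjective probability that the jailed observer-moment
# finds itself still in jail, assuming each indistinguishable successor
# moment is equally likely to be its continuation.
def p_stay_in_jail(benevolent_copies: int) -> float:
    """1 jailed successor vs. `benevolent_copies` pleasant successors."""
    total_successors = benevolent_copies + 1  # copies plus the jailed continuation
    return 1 / total_successors

# With 1000 benevolent copies, the chance of remaining jailed is 1/1001.
print(p_stay_in_jail(1000))
```

On this model, the benevolent AI drives the subjective probability of continued suffering toward zero simply by making more copies.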