For derendering latex in Emacs, see https://github.com/io12/org-fragtog.
For drawing images in line, you could try https://github.com/misohena/el-easydraw.
I like this idea and think it is worth exploring. It is not even just about training new models; an AGI has to worry about misalignment with every self-modification and every interaction with the environment that changes it.
Perhaps there are even ways to deter an AGI from self-improvement by making misalignment more likely.
Some caveats are:
Escape. Invest in space travel and escape the solar system before they arrive. If your AI timelines are long, this may be a viable strategy for preserving the human species in the event of unaligned AGI. If your AI timelines are short, a budget solution is to just send human brains into space and hope they will be found and revived by another powerful species (hopefully at least one of them is "benevolent").