I endorse the "overly galaxy-brained strategy." If you actually understand why an idea isn't useful even as a step towards some other alignment scheme that works for superintelligence, you should just drop it and think about other things.
However, usually things aren't so cut and dried. In the course of arriving at the epistemic state hypothesized above, it's probably a good idea to talk to some other safety researchers.
Generally, if you think of something that's super useful for present-day systems, it's related to ideas that are useful for future systems. If so, I endorse studying your idea for its safety properties for a while and then eventually publishing (preferably just in time to scoop people in industry who are thinking about similar things :P ).