"Dear Oracle, consider the following two scenarios. In scenario A, an infallible oracle tells me I am making no major mistakes right now (in literally those words). In scenario B, an infallible oracle tells me I am making a major mistake right now (but doesn't tell me what it is). Having received this information, I adjust my decisions accordingly. Is the outcome of scenario A better, in terms of my subjective preferences?"
We can also do hard mode, where "better in terms of my subjective preferences" is considered ill-defined. In this case I finish the question with "...for the purpose of the thought experiment, imagine I have access to another infallible oracle that can answer any number of questions. After talking to this oracle, will I conclude that scenario A would have been better?"