Imagine that some superpowerful Omega, after reading this article, decides to run an experiment. It puts you in a simulation, which seems similar to this universe, except that the resources are magically unlimited there -- new oil keeps appearing underground (until you develop technology that makes it obsolete), the Sun will shine literally forever, and you are given eternal youth. You get a computer containing all current knowledge of humankind: everything that exists online, with paywalls and ciphers removed, plus a scan of every book that was ever written.
Your task is to become smart enough to get out of the simulation. The only information you get from Omega is that this is possible, and that for someone like Omega it would actually be a piece of cake.
The way out is not obfuscated on purpose. Like, if it were a physical exit placed somewhere in the universe, it would not be hidden in the middle of a planet or a star; it would be something like a planet-sized shining box with the letters "EXIT" on it, clearly visible as soon as you enter the right solar system. Omega says to take the previous sentence as an analogy; it is not necessarily a physical place. Maybe it is a law of physics that you can discover, designed so that once you know the equation, it suggests an obvious way to use it. Maybe the simulation has a bug you can exploit to crash the simulation; that would count as solving the test. Or perhaps, once you understand the true nature of reality as clearly as Omega does, you will be able to use the resources available in the simulation to somehow acausally get yourself out of it; maybe by simulating the entire Tegmark multiverse inside the simulation, or creating an infinite chain of simulations within simulations... something like that. Again, Omega says that these are all merely analogies, serving to illustrate that the task is fair (for a superintelligence); it is not necessarily any of the above. A superintelligence in the same situation would quickly notice what needs to be done, by exploring the few thousand options most obvious (to a superintelligence).
To avoid losing your mind from loneliness, you are allowed to summon other people into the simulation, on the condition that they are not smarter than you. (Omega decides.) This restriction exists to prevent you from passing the task fully to someone else, as in: "I would summon John von Neumann and tell him to solve the problem; he surely would know how, even if I don't." You are not allowed to cheat by simply summoning the people you love and living happily forever, ignoring the fact that you are in Omega's simulation. Omega is reading your thoughts, and will punish you if you stop sincerely working to get out of the simulation. (But as long as you are sincerely trying, there is no time pressure, the summoned people also get eternal youth, etc.) Omega will also stop the simulation and punish you if it sees that you have made yourself incapable of solving the task; for example, if you wirehead yourself in a way that keeps you (falsely) sincerely believing that you are still successfully working on it. The punishment comes even if you wirehead yourself accidentally.
Do you feel ready for the task? Or can you imagine some way you could fail?