It's rather obvious that truly ideal decision-making is undecidable. A simpler argument than the one you put forth: God presents you with a random Turing machine and asks whether it halts. He sends you to hell if you answer wrong, to heaven if you answer right.
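The contradiction behind that scenario can be sketched in a few lines of Python. The function names (`halts`, `contrary`) are illustrative, and `halts` is of course hypothetical; the whole point is that no total, correct version of it can exist.

```python
def halts(program, program_input):
    """Hypothetical halting oracle: True iff program(program_input) terminates.
    No correct implementation exists -- this stub stands in for the assumption."""
    raise NotImplementedError("no such total, correct function exists")

def contrary(program):
    # Diagonal construction: loop forever exactly when the oracle
    # predicts we would halt.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Feeding `contrary` to itself is the contradiction: it halts iff
# halts(contrary, contrary) says it doesn't, so `halts` cannot be correct.
```

So even God's quiz question has no general answering strategy, which is the halting problem restated.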
People have already put a fair bit of thought into this, though. You should look up logical inductors.
A core tenet of rationalism is that, for any given set of known information, space of possible decisions, and utility function, there exists a decision-making algorithm that returns the optimal outcome. The short argument below demonstrates, under relatively minimal assumptions, that finding any such algorithm is an undecidable problem.
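In the easy case, where the decision space is finite and the utility function is computable and cheap, such an algorithm is just an argmax. A minimal sketch (all names here are illustrative, not from the argument itself; the undecidability claim concerns the general case, where these assumptions fail):

```python
def decide(known_info, decisions, utility):
    """Return the decision that maximizes utility given what is known.
    Works only for finite, enumerable decision spaces."""
    return max(decisions, key=lambda d: utility(known_info, d))

# Toy example: choose a production quantity given a price, a unit cost,
# and a demand cap.
info = {"price": 3.0, "unit_cost": 1.0, "max_demand": 10}

def profit(info, quantity):
    sold = min(quantity, info["max_demand"])
    return sold * info["price"] - quantity * info["unit_cost"]

best = decide(info, range(0, 21), profit)  # best == 10
```

The argument that follows is about why this picture breaks down once the search itself has a cost and the space of relevant considerations is unbounded.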
We will take as given three axioms:
Therefore, any proposed optimal decision-making algorithm, henceforth D, must also encode an algorithm H that determines the ideal halting point[3].
Thus, by the same logic that underlies the halting problem (and axiom 3), for every D there exists a possible state of affairs D', containing an ideal halting point H', such that when D is run on D', H fails to halt D at that ideal point[4]. This is precisely the definition of undecidability, which concludes the argument.
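The adversarial step can be given a loose concrete shape (this is a simplification, not the full self-referential diagonalization: it assumes D exposes its halting component as a step-indexed predicate, and every name here is hypothetical):

```python
def make_adversarial_situation(should_halt):
    """Given any candidate halting rule H (`should_halt`), return a
    'state of affairs' whose ideal halting point disagrees with H at
    every step -- the D' of the argument."""
    def ideal_halting_point(step):
        # Defined to be exactly where H says not to halt.
        return not should_halt(step)
    return ideal_halting_point

# Any concrete proposal for H is defeated by its own adversarial situation:
def candidate_h(step):
    return step >= 5  # some arbitrary proposed halting rule

adversarial = make_adversarial_situation(candidate_h)
assert all(adversarial(s) != candidate_h(s) for s in range(100))
```

The assertion holds for any `candidate_h` you substitute in, which is the point: no fixed H is ideal across all possible states of affairs.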
[1] Or a nondeterministic Turing machine; according to Wikipedia, the two differ only in time complexity.
[2] If there were not, one would be locally omniscient, trivializing decision-making.
[3] If there were no halting point, the algorithm would never terminate, guaranteeing unbounded search cost (a strictly suboptimal outcome).
[4] In common parlance, the possibility that H is not the optimal halting point is referred to as 'doubt', and the factors that potentially defy the optimality of D are referred to as 'unknowns'.