You pick a Turing machine that you believe halts eventually but takes a very long time to do so, and then you expend the resource when that TM halts.
After all, a computer with an N-bit program can't delay longer than BB(N). In practice, if most of the bits aren't logically correlated with long-running Turing machines, it can't run nearly that long. This inevitably becomes a game of logical uncertainty about which Turing machines halt.
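A toy sketch of this in code (my own illustration, with `ackermann` standing in for the long-running Turing machine and `expend_resource` a hypothetical placeholder for the delayed action):

```python
import sys

sys.setrecursionlimit(10**6)  # the naive recursion below gets deep

def ackermann(m, n):
    # Total (provably halts on all inputs) but grows explosively;
    # it plays the role of the long-running Turing machine here.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

def expend_resource():
    # hypothetical stand-in for whatever action is being delayed
    print("resource expended")

# Commit to acting only once the computation halts.
ackermann(3, 3)  # halts quickly; in principle, larger arguments delay for
                 # astronomically many steps (though naive recursion would
                 # exhaust the stack long before then)
expend_resource()
```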
I don't understand the significance of using a TM -- is this any different from just applying some probability distribution over the set of actions?
In economics, if utility is strictly increasing in t (the quantity of the good consumed), then we would call t a "good", and utility functions are often unbounded. What makes the ultimate choice of t finite is that utility from t is typically modeled as concave, while costs are convex. You might be able to find some literature on convex utility functions, but my impression is that there isn't much to study here:
If utility is strictly increasing in t (even accounting for cost of its inputs/opportunity cost etc.), then you will consume all of it that you can access. So if the stock of available t is finite, then you have reduced the problem to a simpler subproblem, where you allocate your resources (minus what you spent on t) to everything else. If t is infinite, then you have attained infinite utility and need not make any other decisions, so the problem is again uninteresting.
Of course, the assumption that utility is strictly increasing in t, regardless of circumstance, is a strong one, but I think it is what you're asking.
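To make the concave-utility/convex-cost point concrete, here's a minimal numerical check; the particular functions U(t) = sqrt(t) and C(t) = t^2 are my own illustrative choices, not anything from the literature:

```python
# U(t) = sqrt(t) is unbounded and strictly increasing, but net utility
# U(t) - C(t) with convex cost C(t) = t**2 peaks at a finite t.

def net_utility(t):
    return t ** 0.5 - t ** 2

# Coarse grid search over t > 0.
best_t = max((i / 1000 for i in range(1, 5000)), key=net_utility)
print(best_t)  # ~0.397, matching the first-order condition t* = (1/4)**(2/3)
```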
Infinite t does not necessarily deliver infinite utility.
Perhaps it would be simpler if I instead let t be in (0, 1], and U(t) = {t if t < 1; 0 if t = 1}.
It's the same problem, with 1 replacing infinity. I have edited the question with this example instead.
(It's not a particularly weird utility function. Consider, e.g., an agent that must expend a resource, where expending it at time t yields utility f(t) for some fast-growing increasing function f, while never expending it yields zero utility. In any case, an adversarial agent can always create this situation.)
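A tiny sketch of why this U admits no optimal action (the code is mine, but the function is the one from the edited question): for any candidate t < 1, moving halfway toward 1 strictly improves utility, so the supremum of 1 is never attained.

```python
def U(t):
    return t if t < 1 else 0.0

def improve(t):
    # Given any candidate t < 1, return a strictly better action.
    return (t + 1) / 2

t = 0.5
for _ in range(5):
    better = improve(t)
    assert U(better) > U(t)
    t = better
print(t, U(t))  # creeps toward 1, but U(1) itself would drop to 0
```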
In economics, if utility is strictly increasing in t (the quantity of the good consumed), then we would call t a "good", and utility functions are often unbounded
I don't think that's true. Modern economists say utility is increasing in t _at the relevant margins_. There is no claim that it's unbounded, or that it doesn't turn negative at some future time or quantity.
Basic answer: there are no infinities in the real world. Whatever resource t you're talking about is actually finite: find the limit and your problem goes away (that problem, at least; many others remain when you do math on utility, rather than doing the math on resources/universe-states and then simply transforming to utility).
It does not require infinities. E.g., you can just reparameterize the problem to the interval (0, 1); see the edited question. You only require an infinite set.
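One way to see the "no infinities needed" point (my formulation, not taken from the question): the map s = t/(1 + t) sends t in [0, ∞) to s in [0, 1), so an unbounded-time problem with fast-growing f(t) becomes a bounded-domain problem with the same missing maximum:

```python
def reparam(t):
    # Maps t in [0, infinity) to s in [0, 1).
    return t / (1 + t)

for t in [0, 1, 10, 1000, 10**6]:
    print(t, reparam(t))  # 0.0, 0.5, 0.909..., 0.999...
```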
An elementary question that has probably been discussed for 300 years, but I don't know the right keyword to google for it.
How, theoretically, do you deal (in decision theory/AI alignment) with a "noncompact" utility function? E.g., suppose your set of actions is parameterized by t in (0, 1], and U(t) = {t for t < 1; 0 for t = 1}. Which t should the agent choose?
E.g., consider: the agent gains utility f(t) from expending a resource at time t, where f is a sufficiently fast-growing increasing function. When does the agent expend the resource?
I guess the obvious answer is "such a utility function cannot exist, because the agent obviously does something, and that demonstrates what the agent's true utility function is", but it seems like it would be difficult to hard-code compactness into the utility function in a way that doesn't make the agent stupid.
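To illustrate that worry (a hedged sketch, not a proposal from the question): one crude way to force compactness is to restrict the agent to a finite menu of actions. An optimum then exists again, but the menu's resolution caps how well the agent can do, which is exactly the sense in which hard-coding makes it "stupid":

```python
def U(t):
    return t if t < 1 else 0.0

menu = [i / 100 for i in range(1, 101)]  # t = 0.01, 0.02, ..., 1.00
best = max(menu, key=U)
print(best, U(best))  # 0.99: optimal on this menu, but any finer menu does better
```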