A few months ago I disagreed with Sniffnoy about whether the theorem that utility must be bounded, which follows from Savage's axioms for probability and utility, is a good reason for believing that utility must be bounded. Sniffnoy said yes, because it follows from the axioms; I said no, therefore there is a flaw in the axioms. (That the theorem does follow from the axioms is not in question.) I concluded that conversation by saying I'd have to think about it.

This I have done. I have followed Jaynes' dictum that when infinities lead to problems, one must examine the limiting process by which those infinities were arrived at, which almost invariably dissolves the problem. The flaw in Savage's system is easy to find, easy to describe, and easy to rectify. I have devised a new set of axioms such that:

- Probability and utility are constructed from the preference relation by the same method as Savage.
- Every model of Savage's axioms is a model of these axioms and constructs the same probability measure and utility function.
- The new axioms also have models with acts and outcomes of unbounded utility.
- Acts of infinite utility (such as the St. Petersburg game) are admitted as second-class citizens, in much the same way that measurable functions with infinite integral are in measure theory.
- More pathological infinite games (such as St. Petersburg with every other payout in the series reversed in sign) are excluded from the start, but without having to exclude them by any criterion involving utility. (Utility is constructed from the axioms, so cannot be mentioned within them.) Like measurable functions that have no integral, that's just what they are; there's no point in demanding that they all should have one.
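The sign-reversed variant has no well-defined expected value, and its close relative the Pasadena game (which comes up later in this thread) makes the pathology vivid: its expectation series is conditionally convergent, so the "total" depends on the order in which the outcomes are tallied. A minimal Python sketch, assuming the standard Pasadena payoffs (outcome n has probability 2^-n and pays (-1)^(n+1) * 2^n / n; the helper names are mine):

```python
import math

# Expected-value contributions of the Pasadena game: outcome n has
# probability 2**-n and pays (-1)**(n + 1) * 2**n / n, so outcome n
# contributes (-1)**(n + 1) / n to the expectation.
def contribution(n):
    return (-1) ** (n + 1) / n

N = 200_000

# Tally the outcomes in their natural order 1, 2, 3, ...
natural = sum(contribution(n) for n in range(1, N + 1))

# Tally the very same contributions in a different order:
# two positive terms, then one negative, repeated.
pos = (1 / k for k in range(1, 10 * N, 2))    # 1, 1/3, 1/5, ...
neg = (-1 / k for k in range(2, 10 * N, 2))   # -1/2, -1/4, ...
rearranged = 0.0
for _ in range(N // 3):
    rearranged += next(pos) + next(pos) + next(neg)

print(natural)     # ≈ 0.6931 (ln 2)
print(rearranged)  # ≈ 1.0397 (1.5 * ln 2): same terms, different "total"
```

Since no ordering of the outcomes is privileged, there is no fact of the matter about the game's expected value; this is the analogue of a function that has no integral.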

This removes all force from the argument that because Savage's axioms imply bounded utility, utility must be bounded. (There are other axiom systems that have that consequence, but I believe that my construction would apply equally to them all.) If one prefers Savage's axioms *because* they have that consequence, one must have some other reason for believing that utility must be bounded, or the argument is circular.

There are a few details of proofs still to be filled in, but I don't think there will be any problems there. Any expert on measure theory could probably dispose of them with a theorem off the shelf. Because of this I don't want to stick it on arXiv yet, but I would welcome interested readers. Anyone interested, ask me for a copy and give me a way of sending you a PDF.

Despite the title of this post, the only argument for bounded utility I am addressing here is the argument that it follows from various axiom systems. As for the other, more informal reasons people have for believing in bounded utility, Eliezer (an unbounded fun theorist) has had plenty to say about those in the past, so I'll just refer people to the Fun Theory Sequence. Because they are informal, you can chew over them forever, which I find an un-fun activity.

This is great!

> I have followed Jaynes' dictum that when infinities lead to problems, one must examine the limiting process by which those infinities were arrived at, which almost invariably dissolves the problem.

love this!

I just remembered that, having had the paper rejected by a couple of journals, a few months ago I decided to just put it online here. Those who saw the 25-page version may be relieved to know that this greatly improved version is only 4 pages. The paper includes a link to code on GitHub for its simulations.

(I also posted this to the Open Thread—I'm not sure which is more likely to be seen.)

Since posting the OP, I've revised my paper, now called "Unbounded utility and axiomatic foundations", and eliminated all the placeholders marking work still to be done. I believe it's now ready to send off to a journal. If anyone wants to read it, and especially if anyone wants to study it and give feedback, just drop me a message. As a taster, here's the introduction.

To give an idea of the motivation, consider the following analogy (which is actually quite close to what happens in my paper). Take the set of all functions from the reals to the reals. (In Savage's system, these are the "acts" when the set of "world states" and the set of "consequences" are both copies of the reals.)

Suppose you want some measure of the "size" of a function, intuitively visualising this as the area under the function's curve. So any probability density function would have a "size" of 1, its negative would have a "size" of -1, the constant function f(x) = 1 would have infinite "size", and so on.

So you write down a set of axioms that this "size" seems like it ought to obey, and end up defining the "size" of a function to be its integral over the whole real line.

But then you have a problem. For some functions, there is simply no such thing. The sine function, for example, can be integrated over every finite interval, but not over the whole real line. f(x) = exp(x) could be assigned infinite value, but that doesn't work for exp(x)sin(x). You might want f(x) = 1/x to have an integral of zero by symmetry, but you can get any value you like depending on how you approach the singularity at 0. And there are far wilder functions than these (the non-measurable functions, but I don't want to start explaining measure theory).
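Both failures are easy to see numerically. A quick Python sketch (standard library only; the cutoff parameterisation for 1/x is one conventional choice, and the helper name is mine):

```python
import math

# "Size" of sin over [0, R] is 1 - cos(R): as R grows, the value
# oscillates forever between 0 and 2, so the improper integral over
# [0, infinity) has no limit.
for R in (10.0, 100.0, 1000.0):
    print(R, 1 - math.cos(R))

# For f(x) = 1/x on [-1, 1], the answer depends on how the cutoffs
# approach the singularity at 0.  Cutting at -eps on the left and at
# k * eps on the right gives
#   integral_{-1}^{-eps} dx/x + integral_{k*eps}^{1} dx/x = -ln(k),
# so any value at all can be produced by choosing k.
def cutoff_integral(eps, k):
    left = math.log(eps)         # integral_{-1}^{-eps} dx/x = ln(eps)
    right = -math.log(k * eps)   # integral_{k*eps}^{1} dx/x = -ln(k * eps)
    return left + right

for k in (1.0, 2.0, 10.0):
    print(k, cutoff_integral(1e-9, k))   # ≈ -ln(k): 0.0, -0.69..., -2.30...
```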

It would be silly to solve this problem by insisting that the real numbers must be bounded. The real numbers are not up for grabs. They simply are what they are, and if that is not consistent with your intuitions about the "size" of a function, so much the worse for your intuitions. (I'm aware of ultrafinitists who try to do exactly that. As a staunch Platonist, I regard their efforts as interesting, and worthwhile in the spirit of "let 10,000 views contend", but I read them as describing what the Platonic reals look like from within certain more limited axiom systems.)

What one actually does when defining integration is to start with some simple class of functions that intuitively do have obvious values for their integrals, extend this by limiting constructions as far as possible, and then accept that not every function has an integral. I do something similar in revising Savage's axioms, and as a result I can consistently admit unbounded utilities and the more well-behaved sort of infinite games.

I don't see the connection: why do the integrals need to take values in the real numbers?

I do wonder whether surreal non-standard analysis would be a better fit. It seems that people automatically associate integrals with real numbers but there might be cases where real numbers are inadequate.

It might be that when there are infinitely many options, and you try to turn them into a number, finite numbers are just not enough. But real numbers are already "singly infinite", so I don't have good intuition about which operations eat that up and which do not.

Is there such a thing as surreal non-standard analysis? It's not a subject I follow, but I understood that there still wasn't a good concept of integration for the surreal numbers.

My general attitude to other systems of non-standard numbers is the same as my attitude to ultrafinitism. Non-standard real numbers are really a way of thinking about the Platonic reals, in which what look like infinite real numbers from within the system are finite real numbers so enormous that in a limited system of reasoning they cannot be reached by any of the available constructions. But I don't have a formalisation of this idea.

Transfinite induction does feel a bit icky in that in finitely many proof lines you outline a process that has infinitely many steps. But as limits have a similar kind of thing going on, I don't know whether it is any ickier.

Part of my motivation in looking into them is that the foundations for surreals are way more elegant than those for reals. From that perspective it seems very strange that reals would be the "actually existing numbers".

In particular, reals have infinite precision, while any human can only determine a scalar to finite precision. If constructibility were a concern, reals should be concerning; but if reals are not concerning, there shouldn't be any additional twists.

Particularly, in surreal thinking, regarding "You might want f(x) = 1/x to have an integral of zero by symmetry, but you can get any value you like depending on how you approach the singularity at 0": the multiplicative inverse of the first infinity is epsilon rather than 0, so that opens up another alternative for the integral's value. In real precision any positive number must be finite, but among the surreals there are positive infinitesimals. Most applications are on finite domains, so the infinities can be "rounded" to the nearest finite value. But it wouldn't be surprising if, in a properly infinite application, such rounding introduced non-negligible errors.

In a case where you want to compare a gamble with finite outcomes against a gamble with infinite outcomes, one might sometimes favour one or the other based on whether the odds are good or not. In surreals, an infinitesimal times a first-order infinity yields a finite value, so you could write down a formula like a·x + w·(e·b·y) > c·z + w·(e·d·v), with w being the first infinite number and e = 1/w, and have it make sense and be true or not based on the weights. With reals you can easily compare finite vs finite and infinite vs infinite, but doing a comparison across multiple archimedean fields gets tricky.

Did you mean non-archimedean fields? I regard those as not the real real numbers. For practical purposes in the present context, I don't think you can beat the Dedekind-complete ordered field (i.e. the real numbers), with nominal ∞ and –∞ symbols added as a shorthand for more verbose statements about infinite integrals and sums.

If you add +1 up from 0, and do -1 down from w, you never cross, because those are parts of separate archimedean fields. I think there is also a result that if you take the surreals and add the limitation that they need to be archimedean, you get the real numbers.

Real numbers have the completeness axiom/property. While for lesser systems the limit fails to exist, and thus a "completing piece" is missing, for surreals the limit is not unique. Between the ascending numbers and what would be the real limit there is guaranteed to be a surreal number, so in that sense they are "overcomplete" for following the standard limit constructions. That is, {series | } = U is always going to exist, but so will {series | U} = V, whereas for reals V would be required to either equal U or equal one of the series.

Surreals are richer than hyperreals. One could imagine a dart board where some areas score 10 points, some lines score 15 points, and intersections of lines score 20 points. Treating anything that isn't an area as having literally zero measure, the distinction between lines being more probable than line intersections gets lost (0 = 0). With surreals one could try to say that lines are infinitesimal with respect to areas and intersection points are infinitesimal with respect to lines. And you could still hold that twice the area is twice as likely, twice the line length is twice as likely, and twice the number of points is twice as likely. But areas, lines and points would seem to live in three disconnected lands: an abundance of points would not be able to make them reach line probability (unless you provide an actually infinite amount).

However, if intersections scored points that were infinite with respect to lines, then it could make expected-value sense to aim for intersection points. But if we know that no point is infinitesimal with respect to any other, then we know that any intersection-point-aiming strategy is inferior.

I guess the connection here is that if you try to divide the dart board into small areas, they are still going to be areas, and some mechanism might misvalue an area containing an intersection point, as if the whole area were worth what the point is worth.

Ah, you are using "field" in a different sense (than "something with addition and multiplication obeying the usual laws").

I intended not to, or at least the sense is meant to be compatible with the mathematical word. Looking at the Wikipedia article for "Field", subsection "Ordered fields", there is the mention that the reals are the unique complete ordered field up to isomorphism, and there are multiple subsets of the surreals that could contend to be isomorphic to it. Real multiples of w also have additive and multiplicative inverses, making them "another copy" of the reals. I think I may have thought in error that such a subset would fulfill the field axioms in its own right, when rather it is just a subset that can be mapped to the reals.

> Transfinite induction does feel a bit icky in that in finitely many proof lines you outline a process that has infinitely many steps. But as limits have a similar kind of thing going on, I don't know whether it is any ickier.

Well, transfinite induction / recursion is reduced (at least in ZF set theory) to the existence of an infinite set and the Replacement axioms (the image of a set under a class function is a set). I suspect you don't trust the latter.

The primary first need for transfinite recursion is to go from the successor construction to the natural numbers existing. Going with an approach that assumes an infinite set rather than proves it seems handy but weaker. Although I guess in reading surreal papers I take set theory as given, and while it doesn't feel like any super advanced features are used, there might be a lot of assumption baggage.

It also feels like a dirty trick that we don't need to postulate the existence of zero, and that we get surreals from not knowing any surreals. The surreal number definition references sets of surreal numbers. Don't know any? Worry not: there is the empty set, which is a set of every type. And now that you have read the definition with that knowledge, you know a new surreal number, which enables you to read the definition again. So we get a lot of finite numbers without positing the existence of a single number, and we don't even need to explicitly define a successor relation.

The base number construction only uses set formation and order, and doesn't touch arithmetic operations, so at that level "the birthday" of mappings has yet to come, and it is of limited use. I have seen formulations of surreal theory written in a more axiomatic fashion, but a "process" style gives a lot of ground for realising connections between structures.

The way it's used in the set theory textbooks I've read is usually this:

- Define the *successor* on a set S: S → S ∪ {S}.
- *Assume* the existence of an *inductive* set that contains a set and all its successors. This is a weak and very limited form of infinite induction.
- From there, derive the *general* form of transfinite recursion.

So, there is indeed the assumption of a kind of infinite process before the assumption of the existence of an infinite set, but it's not (necessarily) the ordinal ω. You also can't use it to deduce anything else; you still need Replacement. The same can be said for the existence and uniqueness of the empty set, which can be deduced from the axioms of Separation.

This approach is neither equivalent to nor weaker than having transfinite recursion by fiat; it's the only correct way if you want to make the fewest new assumptions.

Anyway, as far as I can tell, having a well defined theory of sets is crucial to the definitions of surreals, since they are based on set operations and ontology, and use infinite sets of every kind.

On the other hand, I don't understand your problem with the impredicativity of the definitions of the surreals. These are often resolved into recursive definitions and since ZF-sets are well-founded, you never run into any problem.

I am pretty sure there is no obstacle to applying the successor function to the infinite set. And then there is a construction mirroring ω + ω. If you have the infinite set and it has many successors, what stops one from doing the inductive-set trick again in this situation?

I kind of know that if you assume a special inductive set, that is only one "permitted application" of it, and a "second application" would need a separate ad hoc axiom.

Then if we have "full blown" transfinite recursion we just allow that second-level application.

New assumptions presuppose old assumptions. If we just have a non-proof "I have a feeling it should be that way", we have a pre-axiomatic system on our hands. If we don't aim to get the same theorems, then "minimal change to keep things intact" doesn't make sense. The connection here is whether some numbers "fakely exist", where a fake existence could be that some axiom says the thing exists but there is no proof/construction that produces it. A similar kind of stance could be that real numbers are just a fake way to talk about natural numbers and their relations. One could for example note that the reals are uncountable but proofs are discrete, so almost all reals are undefinable. If most reals are undefinable, then unconstructibility by itself doesn't make transfinites any less real. But if the real field can establish some kind of properness, then the same avenues of properness open up to make transfinites "legit".

I am not that familiar with how limits connect to the fundamentals, but if that route-map checks out, then transfinites should not be any ickier than limits.

Interesting! Can you explain more about what this part means? I'm unfamiliar with the math of measurable functions and with the analogy to second-class citizenship.

Suppose I tell you that I am God and if you send me $1000, you'll get to play a pathological St. Petersburg game of the sort you just described, with the payoffs being in money in your Divine Bank Account. (Did you know you have one? You do!). Do you assign 0 credence to this hypothesis, and to the set of all hypotheses in the vicinity? If not, ... well, nothing really, since presumably your utility is not linear in money. But what if it was? Or do you agree that utility can't be linear in money?

I think everyone agrees that utility is not linear in money, although there are different ideas about what the relationship is or should be. But utility is linear in itself, so one can consider all bets to be denominated in utilons or utiles. I haven't seen an agreed currency symbol for utilons. Maybe one could use the symbol ウ (katakana for the sound "oo").

I basically assign 0 credence to the supposed offer of this game, although that is not quite the way I would put it. Rather, games of this sort are excluded (at least, by me) from the purview of utility theory. They are outside the scope of the preference relation and are not assigned a utility.

I think it reasonable to do this, and the argument "yes, but what if?" an empty one, because, one can always say, "yes, but what if?" Yes, but what if God promised you $BIGNUM utiles for sawing your head off with a chainsaw? Yes, but what if mathematics is inconsistent, all the way down to propositional calculus? Yes, but what if all your arguments are wrong in a way you can't see because some demon afflicts you? Yes, but what if you're wrong? Then you'd be wrong! So you could be wrong!

So, despite the maxim that "0 and 1 are not probabilities", at the meta-level, where the theory of probability and utility is constructed, I do as everyone does, and think in terms of ordinary logic, where everything has probability 0 or 1, and nothing in (0,1) is a truth value.

Thanks for the explanation!

I think this is where we disagree. If you are going to exclude some possibilities, well, then the problem gets loads easier, doesn't it? Imagine if I said "I've come up with a voting system which satisfies all of Arrow's axioms, thus getting around his famous theorem" and then you qualified with "To make this work, I had to exclude certain scenarios from the purview of preference aggregation theory, namely, the ones that would make my system violate one of the axioms..."

Another way of putting it: Look, some people assign non-zero credence to these pathological scenarios. (I do, for example. As does anyone who takes "0 and 1 are not probabilities" seriously, I think.) These people also have preferences over these scenarios; they choose between them, you can ask them what they will prefer and they'll answer, etc. So your system for taking someone's beliefs and preferences and then spitting out a (possibly unbounded) utility function... either just says these people don't have utility functions at all, or gives them utility functions constructed by ignoring some of their beliefs and preferences in favor of others. This seems bad to me.

I am actually excluding less than Savage does, not more: models of my axioms include all models of his, and more. And since Savage at first did not know that his axioms implied bounded utility, that cannot have been a consideration in his design of them.

People may give preferences involving pathological scenarios, but clearly those preferences cannot satisfy Savage's axioms (since his axioms rule them out, and even more strongly than mine do).

There is no free lunch here. You can have preferences about everything in the Tegmark level 7 universe (or however high the hierarchy goes -- somewhere I saw it extended several levels beyond Tegmark himself), but at the cost of them failing to obey reasonable sounding properties of rational preference.

I think I agree with you that "Savage axioms imply bounded utility, so there" isn't a strong argument. And the fact that you've found a set of axioms that don't imply bounded utility makes it even weaker. My disagreement is with the claim that utility can/should be unbounded. I'm saying that making sense of various important kinds of scenarios/preferences requires (or at least, is best done via) bounded utility. You are saying those scenarios/preferences are unimportant to make sense of and we should ignore them. (And you are saying Savage agrees with you on this point). Right?

Also, I deny that bounded utility functions disobey reasonable-sounding properties of rational preference. For one thing, there are other axiom sets besides yours and Savage's, ones which I like better anyway (e.g. Jeffrey-Bolker). For another... are you sure Savage's axioms rule out the sorts of preferences I'm talking about? They don't rule out bounded utility functions, after all. And so why would they rule out someone listening to the proposal, saying "Eh, it basically cancels out IMO; large amounts of money/debt don't matter to me much" and refusing to pay up? (I am not super familiar with the Savage axioms, to be honest; maybe they do rule out this person's preferences. If so, so much the worse for them, I say.)

Re Jeffrey-Bolker, the only system I studied in detail was Savage's, but my impression is that the fix I applied to that system can be applied to all the others that paint themselves into the corner of bounded utility, and with the same effect of removing that restriction. Do the Jeffrey-Bolker axioms either assume or imply bounded utility?

Having now read some expositions of the Jeffrey-Bolker theory, I can answer my own question.

The Jeffrey-Bolker axioms imply the finite utility of every prospect (to be technical, the Averaging axiom fails when there are infinite utilities), but the utility can be unbounded above and below. It cannot be infinite. In this it differs from Savage's system.

For Savage's axioms, unbounded utility implies the existence of gambles like St. Petersburg, of infinite utility, and all the rest of the menagerie of infinite games listed in this SEP article. From these a contradiction with Savage's axioms can be derived. Hence all models of Savage's axioms have bounded utility.

In the Jeffrey-Bolker system, gambles cannot be constructed at will. The set of available gambles is built into the world that the agent faces. The agent is an observer: it cannot act upon the world, only have preferences about how the world is. None of the paradoxical games exist in a model of the Jeffrey-Bolker axioms. They do allow the existence of non-paradoxical infinite games, games such as Convergent St. Petersburg, which is St. Petersburg modified to have arithmetically instead of geometrically growing payouts. However, I note that one of Jeffrey's verbal arguments against St. Petersburg — that no-one can offer the game because it requires them to be able to cover arbitrarily large payouts — applies equally to Convergent St. Petersburg.

Savage's axioms *imply* that utility is bounded. This is what Savage did not know when he formulated them, but Peter Fishburn proved it, and Savage included the result in the second edition of his book. So Savage accidentally brute-forced the pathological games out of existence. All acts, in Savage's system, have a defined, finite expected value, and the St. Petersburg game and its variants do not exist. God himself cannot offer you these games. The utilities of the successive St. Petersburg payoffs are bounded, and cannot even increase linearly, although intuitively that version should have a well-defined, finite expected value.

In my approach, I proceed more cautiously by only considering "finite" acts at the outset: acts with only finitely many different consequences. Then I introduce acts with infinitely many consequences as limits of these, some of which have finite expected values and some infinite.

The first clause does not imply the second. The St. Petersburg game variant in which the payoffs are utility does not exist, but the St. Petersburg game variant in which the payoffs are dollars does exist. (Or does something else in Savage's framework rule it out?)

The St. Petersburg game variant in which the payoffs are dollars can only exist in Savage's system if there is a limit on the number of utilons that any amount of dollars could buy. No more utility than that exists. But that game is not paradoxical. It has a finite expected value in utilons, and that is an upper bound on the fee it is worth paying to play the game.

In other words, the St. Petersburg game (dollars variant) can exist just fine in Savage's system, it's only the utility variant that can't. Good. What about in your system? Can the dollars variant exist?

If the dollars variant can exist, what happens in your system when someone decides that their utility function is linear in dollars? Does your system (like Savage's) say they can't do that, that utility must be bounded in dollars at least?

With my axioms, utility can be unbounded, and the St. Petersburg game is admitted and has infinite utility. I don't regard this as paradoxical. The game cannot be offered, because the offerer must have infinite resources to be able to cover every possible outcome, and on average loses an infinite amount.

St. Petersburg-like games with finite expected utility also exist, such as one where the successive payouts grow linearly instead of exponentially. These also cannot be offered in practice, for the same reason. But their successive approximations converge to the infinite game, so it is reasonable to include them as limits.
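The contrast between the two games is elementary to check numerically. A small Python sketch, assuming the usual coin-flip setup in which outcome n occurs with probability 2^-n:

```python
# Classic St. Petersburg: payoff 2**n with probability 2**-n.
# Every outcome contributes 2**n * 2**-n = 1 to the expectation,
# so the partial expectations grow without bound.
classic = sum(2 ** n * 2 ** -n for n in range(1, 51))

# Linear variant: payoff n with probability 2**-n.
# The series sum(n / 2**n) converges, so this game has a finite
# expected value of 2.
linear = sum(n * 2 ** -n for n in range(1, 51))

print(classic)  # 50.0 -- one unit per term, diverging as terms are added
print(linear)   # ≈ 2.0
```

The finite approximations of the linear variant converge, which is why it is reasonable to admit it as a limit even though neither game can be offered in practice.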

Both types are excluded by Savage's axioms, because those axioms require bounded utility. This exclusion appears to me unnatural and resulting from an improper handling of infinities, hence my proposal of a revised set of axioms.

Oops, right, I meant the Pasadena Game (i.e. a variant of Petersburg where the infinite sum is undefined). Sorry.

I think maybe our disagreement has to do with what is unnatural. I don't think it's unnatural to exclude variants in which the payoffs are specified as utilities, since utility is what we are trying to construct. The agent doesn't have preferences over such games, after all; they have preferences over games with payoffs specified in some other way (such as dollars), and then we construct their utility function based on their preferences. However, it seemed to me that your version was excluding variants in which the payoffs were specified in dollars -- which *did* seem unnatural. But maybe I've been misinterpreting you.

Your argument for why these things cannot be offered in practice seems misplaced, or at least irrelevant. What matters is whether the agent has some nonzero credence that they are being offered such a game. I for one am such an agent, and you would be too if you bought into the "0 is not a probability" thing, or if you bought into Solomonoff induction or something like that. The fact that presumably most agents should have very small credence in such things, which is what you seem to be saying, is irrelevant.

Overall I'm losing interest in this conversation, I'm afraid. I think we are talking past each other; I don't think you get what I am trying to say, and probably I'm not getting what you are trying to say either. I think I understand (some of) your mathematical points (you have some axioms, they lack certain implications the Savage axioms had, etc.) but don't understand how you get from them to the philosophical conclusion. (And this is genuine not-understanding, not a polite way of saying I think you are wrong.) If you are still interested, great, that would motivate me to continue, and perhaps to start over but more carefully, but I'm saying this now in case you want to just call it a day. ;)

In fact I don't buy into those things. One has to distinguish probability at the object level from probability at the metalevel. At the metalevel it does not exist, only true and false exist, 0 and 1. So when I propose a set of axioms whereby measures of probability and utility are constructed, the probability exists within that framework. The question of whether the framework is a good one matters, but it cannot be discussed in terms of the probability that it is right. I have set out the construction, which I think improves on Savage's, but people can study it themselves and agree or not. It rules out the Pasadena game. To ask what the probability is of being faced with the Pasadena game is outside the scope of my axioms, Savage's, and every set of axioms that imply bounded utility. Everyone excludes the Pasadena game.

No, actually they don't. I've just come across a few more papers dealing with Pasadena, Altadena, and St. Petersburg games, beginning with Terrence Fine's "Evaluating the Pasadena, Altadena, and St Petersburg Gambles", and tracing back the references from there. From a brief flick through, all of these papers are attempting what seems to me to be a futile activity: assigning utilities to these pathological games. Always, something has to be given up, and here, what is given up is any systematic way of assigning these games utilities; nevertheless they go ahead and do so, even while noticing the non-uniqueness of the assignments.

So there is the situation. Savage's axioms, and all systems that begin with a total preference relation on arbitrary games, require utility to be bounded in order to exclude not only these games, but also infinite games that converge perfectly well to intuitively natural limits. I start from finite games and then extend to well-behaved limits. Others try to assign utility to pathological games, but fail to do so uniquely.

I'm happy to end the conversation here, because at this point there is probably little for us to say that would not be repetition of what has already been said.

Yeah, it seems like we are talking past each other. Thanks for engaging with me anyway.

Neat! Sent you a PM with my email.

I wonder what second-class citizenship means, and here is what I guess it might mean.

You have to choose between two games A and B. In A we throw 2 coins, and if we get 2 heads we play St. Petersburg, otherwise we play Petrograd. In B we throw 2 coins, and if we get 2 heads we play Petrograd, otherwise we play St. Petersburg.

I expect that the second-class status means that A and B both come out as "unbounded+" and thus have equal utility, with no reason to prefer either of them.

A proper theory with first-class non-finite utilities could try to say that A pays out more and should be preferred over B. But then there are distinctions among non-finite utilities.

Yes, St. Petersburg and Petrograd (= St. Petersburg with all payouts increased by one) are given the same infinite utility. Neither is preferable to the other, despite the intuition saying that Petrograd is better. While intuition can be a guide, it is an untrustworthy one, a castle in the air that requires a foundation to be built underneath it.
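One way to see what that intuition is tracking: couple the two games on the same coin flips. Then Petrograd beats St. Petersburg on every single play, by exactly 1, even though both have infinite expected utility. A small Python simulation sketch (Petrograd's payout rule 2^n + 1 versus St. Petersburg's 2^n follows the definition above; the helper name is mine):

```python
import random

random.seed(0)

def flips_until_heads():
    """Tosses of a fair coin up to and including the first head."""
    n = 1
    while random.random() < 0.5:
        n += 1
    return n

# Run both games on the same coin sequence: St. Petersburg pays 2**n,
# Petrograd pays 2**n + 1 (every payout increased by one).
diffs = set()
for _ in range(100_000):
    n = flips_until_heads()
    diffs.add((2 ** n + 1) - 2 ** n)

print(diffs)  # {1}: Petrograd wins every coupled play by exactly 1
```

The coupled comparison is exactly the intuition that Petrograd is better; the point here is that an assignment of a single infinite utility to each game cannot honour it.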

The problem with comparing infinities is that if you impose conditions on the preference relation that seem reasonable for finite games, then before you know it — literally so in Savage's case — you end up excluding all the infinities, and neither St. Petersburg nor Petrograd exists. To avoid doing that, you have to give up some of those conditions. Savage's P2, for example, sounds perfectly reasonable if you don't think about infinite games, but as soon as you do, you can see that it must fail. Not that there's anything special about P2; it's really the basic ontology of the system that is at fault.

I have to wonder how strong a mathematical background some of the people who have published on the subject had. Attempting to construct a total ordering on all functions from a probability space to the real numbers, or even just on the measurable functions, seems doomed to failure.

To my perspective, that infinities are equal is an unfounded intuition. Reading a surreal proof of how w + 1 is strictly greater than w, it is partly amazing how you can make claims about infinities without relying on intuition (i.e. can actually prove stuff). Then a law like "infinity + infinity = infinity" starts to feel like "positive + positive = positive": "positive" is not a number but a quality. There is additional structure in "2 + 2 = 4" beyond positivity being preserved.

In the same way that, if one option has negative utility and one option has positive utility, you can safely choose the positive one without regard to the actual magnitudes, it is also safe to choose a transfinite positive over a finite positive.

If the theory doesn't treat finites in a special way (is finiteness-ambivalent), then the core material should transfer, where applicable, to the transfinite domain.