This is a linkpost for https://ninarimsky.substack.com/p/when-betting-consider-non-ergodicity

When betting, consider non-ergodicity and absorbing states

This post was inspired by listening to this episode of the EconTalk podcast, where Russ Roberts interviews Luca Dellanna. Understanding non-ergodicity and absorbing states provides vital insight into how to approach betting and decision-making when there are risks of catastrophic outcomes.

Ergodic and Non-Ergodic Processes

A process is ergodic if it behaves the same when observed over a long period as it does, on average, over many independent realizations at a single point in time - in an ergodic process, time averages equal ensemble averages.

Consider a simple coin toss, a classic example of an ergodic process. The probability of getting heads on any given toss is 50%, irrespective of past events. Hence, the average outcome over a long series of tosses will match the individual toss probability.
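As a quick illustration, here is a minimal Python sketch comparing the two kinds of average for a fair coin (the simulation setup and sample sizes are mine, chosen for illustration):

```python
import random

random.seed(0)

# Time average: ONE coin followed over many tosses.
n_tosses = 100_000
time_avg = sum(random.random() < 0.5 for _ in range(n_tosses)) / n_tosses

# Ensemble average: MANY coins, each tossed once.
n_coins = 100_000
ensemble_avg = sum(random.random() < 0.5 for _ in range(n_coins)) / n_coins

print(f"time average:     {time_avg:.3f}")
print(f"ensemble average: {ensemble_avg:.3f}")
# Both converge to 0.5 - the hallmark of an ergodic process.
```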

In contrast, a process is non-ergodic when time averages and ensemble averages differ. This means that the historical path of the process influences future outcomes. For example, in a game of Russian roulette, the outcome is eventually fatal if the game is played for enough rounds. It's a clear example of a non-ergodic process, as the player cannot repeat the game indefinitely.

Absorbing States

An absorbing state in a stochastic process is a state that, once entered, cannot be left. It's called "absorbing" because the system gets trapped in this state. The game of Russian roulette mentioned earlier has a clear absorbing state: death.

These states are critical in betting because they represent points of no return. If you're betting in a system with an absorbing state, your strategy needs to account for the reality that specific outcomes will permanently impact your capacity to continue.

A process with an absorbing state will be non-ergodic if there's a non-zero probability of transitioning from a recurrent state (a state that the process is guaranteed to return to eventually, given enough time) to the absorbing state. This is because once the absorbing state is entered, it is irreversible and prevents the process from revisiting all other states infinitely often, a necessary condition for ergodicity. Thus, the non-zero possibility of ending up in an irreversible state disrupts the ergodic property.
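To see the divergence concretely, here is a small sketch that assumes the standard 1-in-6 Russian-roulette odds as the transition probability into the absorbing state (the sample sizes are illustrative):

```python
import random

random.seed(1)
P_DEATH = 1 / 6  # assumed chance of hitting the absorbing state per round

# Ensemble view: many players each play ONE round.
players = 100_000
survivors = sum(random.random() > P_DEATH for _ in range(players))
print(f"one-round survival rate: {survivors / players:.3f}")  # ~0.833

# Time view: ONE player plays round after round.
# Death is absorbing: once reached, the trajectory is over for good.
def rounds_until_death() -> int:
    rounds = 0
    while random.random() > P_DEATH:
        rounds += 1
    return rounds

lifetimes = [rounds_until_death() for _ in range(100_000)]
print(f"mean rounds survived: {sum(lifetimes) / len(lifetimes):.1f}")  # ~5
# Every single trajectory ends in the absorbing state, so the long-run
# time average of "alive" is 0, not 5/6 - the averages disagree.
```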

Betting in Ergodic vs. Non-Ergodic Processes

Betting strategies in ergodic processes can differ significantly from those in non-ergodic processes. In ergodic processes, where past events do not influence future ones, you can use methods like the Kelly Criterion, a well-known formula for determining the optimal size of a series of bets, to maximize your expected growth rate over time. The process's memoryless nature means you can base your betting strategy on the known odds without considering the outcomes of previous bets.

In contrast, non-ergodic processes require a more careful approach. Here, it's crucial to consider path dependence and the existence of absorbing states. Strategies like the Kelly Criterion may not be suitable in these cases because they don't account for the possibility of a catastrophic loss.
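For reference, the Kelly Criterion for a bet that wins with probability p and pays b-to-1 is f* = (bp - q)/b, where q = 1 - p. A minimal sketch (the example numbers are illustrative, not from the post):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly Criterion: optimal fraction of bankroll to wager on a bet
    that wins with probability p and pays b-to-1 on a win.
    A negative result means the bet has no edge: wager nothing."""
    q = 1 - p
    return (b * p - q) / b

# A 60% coin paying even money: bet 20% of your bankroll each round.
print(kelly_fraction(p=0.6, b=1.0))   # ~0.2
# A 40% coin paying even money has negative edge: sit it out.
print(kelly_fraction(p=0.4, b=1.0))   # negative
```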

Go for Long Term Survival

Maximizing growth isn't always optimal in non-ergodic processes with absorbing states. Sometimes, avoiding the worst outcomes is the winning bet - long-term survival outweighs short-term gains. Therefore, betting strategies in these scenarios should heavily emphasize risk management.

Here are a few general suggestions for such a betting strategy:

Diversification

Diversification is an effective way to reduce risk exposure by spreading bets across various non-correlated outcomes. This way, if one bet leads to an absorbing state, it won't necessarily result in a total loss, as the other bets may balance out the loss.

Dynamic Betting Sizes

Adapt your bet size based on your overall wealth and the risk associated with each bet. This approach is akin to the Kelly Criterion but with a cautious modification: you bet a fraction of the optimal Kelly size to buffer against the amplified consequences of losing bets in systems with absorbing states. This strategy is often referred to as "Fractional Kelly Betting."
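A rough simulation of this idea, assuming an illustrative 55% win probability at even odds (the edge, round count, and seed are my assumptions, not figures from the post):

```python
import random

random.seed(2)

def simulate(fraction: float, p: float = 0.55, b: float = 1.0,
             rounds: int = 1_000) -> float:
    """Final bankroll after repeatedly betting `fraction` of current
    wealth on a p-probability, b-to-1 bet, starting from 1.0."""
    wealth = 1.0
    for _ in range(rounds):
        stake = wealth * fraction
        wealth += stake * b if random.random() < p else -stake
    return wealth

full_kelly = (1.0 * 0.55 - 0.45) / 1.0   # f* = 0.10 for p=0.55, even odds
half_kelly = full_kelly / 2              # fractional Kelly: half the stake

# Half Kelly gives up some expected growth for a much smoother ride -
# a cushion that matters when a deep drawdown is effectively absorbing.
print(f"full Kelly final wealth: {simulate(full_kelly):.2f}")
print(f"half Kelly final wealth: {simulate(half_kelly):.2f}")
```

Because the stake is always a fraction of current wealth, neither strategy can hit zero in this toy model; the point of fractional Kelly is to shrink the drawdowns that, in real systems, trigger absorbing states (margin calls, forced liquidation, loss of nerve).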

Limit Exposure to Absorbing States

If you can identify the outcomes that would lead to an undesired absorbing state, you should limit your exposure to those outcomes as much as possible. This might involve avoiding specific bets altogether, or stopping betting after a predetermined number of losses.

When to Be Less Risk Averse

In the example of Russian roulette, the absorbing state in question is clearly bad. However, what if the absorbing state is a desirable one? For example, consider the process of applying for jobs. Each individual application may have a low chance of success and require significant effort (hence negative expected value in an ensemble-average scenario), but if you do land a job that you enjoy and find fulfilling, you reach a positive absorbing state: you stop applying for jobs and enjoy a significant increase in life satisfaction. Therefore, you should be willing to take on more risk than you would by default, as this process is non-ergodic in the positive sense.

The notion of pursuing a positive absorbing state also applies to the realm of entrepreneurship. If an entrepreneur continually seeks promising business opportunities, even if each venture has a low probability of success due to various factors such as market competition and technological challenges, their cumulative chance of reaching a prosperous terminal state rises as they persistently attempt new ventures. Of course, this is only a good strategy if each attempt does not risk bankruptcy or some other kind of bad absorbing state, such as ending up on the wrong side of the law.

Practical Decision Making given Absorbing States

The typical method of cost-benefit analysis involves examining a specific situation or act and weighing the likelihood of a positive outcome against the cost of action. However, if the decision can be made repeatedly and there are potential absorbing states, it's essential to consider the average outcome over time. Specifically, consider the following:

1) On average, how many repetitions of the decision would it take to reach an absorbing state?

If the response to 1) is a practically attainable number, your decision-making process should be predominantly influenced by the characteristics of the absorbing states rather than the initial probabilities of success or failure.

For example, some enjoy the exhilaration of riding a motorcycle, and it's likely that nothing bad will happen on any given ride. However, every ride represents a repeated decision and a chance to reach a terminal state of a severe accident or even death. On average, you can expect to experience severe injury or death every 200,000 miles traveled, based on data from 2020 (there are 468 injuries + 31.64 fatalities per 100 million vehicle miles traveled, so 100,000,000 miles / 500 ≈ 200,000 miles). If each ride is 50 miles on average, you would expect an injury or death after 4,000 rides. This is why I don't want my friends to get motorcycles.
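A quick check of this arithmetic in Python, treating rides as independent (an assumption):

```python
# Per-mile risk from the 2020 figures cited above:
# (468 injuries + 31.64 fatalities) per 100 million vehicle miles.
events_per_mile = (468 + 31.64) / 100_000_000   # ~1 event per 200,000 miles
miles_per_ride = 50
p_per_ride = events_per_mile * miles_per_ride   # ~0.00025 per ride

def risk_over(n_rides: int) -> float:
    """Probability of at least one severe injury or death over
    n_rides, assuming rides are independent."""
    return 1 - (1 - p_per_ride) ** n_rides

# 4,000 rides is the EXPECTED time to the first event; the chance of
# having hit the absorbing state by then is roughly 1 - 1/e, not 100%.
print(f"{risk_over(4_000):.0%}")   # roughly 63%
```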