Misappropriation of probability reviewed under various complexities of thought
The Blind Men and the Elephant... or was it a Bear?
Random meandering thought on my walk out today sometime in [REDACTED]:
There exists a non-zero probability that people in the street harbour hostile intent. Even in the case where I rationally write it off as bogus, I still respect that non-zero probability and brace myself. In my case the resulting action appears more rational than that of a physically weaker woman, who may hide her wariness less effectively out of genuine fear; my confidence in dealing with the scenario presents a more "rational" attitude, even though similar conclusions may be reached. In this context the probability is non-trivial and we are working with an existing model, so the behaviour cannot actually be dismissed as irrational; it is instead a strategic preference regarding risk. The comparative severity of p() occurring, between the woman and me, influences external attitudes but not the internal reasoning.

When I observe women's awareness of this situation, the responses range. Some understand that only the severity is altered, not the probability itself. More commonly, though, severity is conflated with probability: the likelihood is believed to increase with the severity.
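A minimal worked illustration of the split being drawn here, with invented numbers: write the expected cost of walking past as

$$\mathbb{E}[\text{loss}] = p \times s,$$

where $p$ is the probability of an encounter turning hostile and $s$ the severity if it does. With the same $p = 0.001$ but severities of, say, $s = 10$ for me and $s = 100$ for her, the expected losses are $0.01$ and $0.1$. The probability term is identical in both cases; only the severity term, and hence the degree of precaution it is rational to take, differs.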
The Religious Parallel
To some extent (though it doesn't really) this mirrors chain messages or religious wagers: someone may understand the infinitesimally small chance of their religion being correct among infinitely many competing claims, yet, given its greater experiential weight and their desire to avoid such a "maximally detrimental" outcome, their internal rationale is taken hostage and they cross the "street". An argument can be made over whether they truly, intuitively understand the concept of the infinitesimally small, since it does not mirror natural analogies in the way that p(human hostility), which is not infinitesimally small, does. Hence this intuition, born from natural expectations, is not correct and should not be relied on. For infinitesimally small probabilities (excluding probability densities, since infinite categorical claims do not work with ranges), by any mathematical definition of the number there does not exist any non-zero probability. Or rather, there is no literal difference between 0 and this 0.000… recurring.
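To spell out the limit claim: if a probability is required to be smaller than $10^{-n}$ for every $n$, then, since probabilities are non-negative and $\lim_{n \to \infty} 10^{-n} = 0$, it can only equal 0 exactly. In the standard real numbers there is no positive quantity smaller than every $1/n$ (the Archimedean property), so "0.000… recurring" is not a distinct small number but literally 0.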
The thought experiments of Pascal's Mugging and Pascal's Wager seem more like intuition pumps that abuse the conflation of probability with uncertainty. Uncertainty encapsulates any lack of information, including lack of information about the distribution of outcomes or even about the potential set of outcomes, whereas probability is a subset of this, and can include outcomes whose effective probability is 0. From the perspective of an individual in the scenario it might all seem the same, but it is treating uncertainty as probability that leads to the bad decisions outlined by these thought experiments, rather than their being a true critique of probability or of utility functions that use expected-value calculations. They serve as a local minimum in intuition space for people predisposed to applying naive quantification to situations where it is inappropriate.
Restraining one's natural tendency to misappropriate probability requires an imaginative consideration of a sample space of infinite competing claims, each of which may be true regardless of the god's or the mugger's framing; ulterior, unexpected or unreferenced alternative situations may also occur. Over-focus on the scripted premise is then a result of a paucity of imagination, and relates to the lying God's consideration in other chapters. To distinguish between stated and non-stated outcomes, one then needs a model or priors.
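A toy sketch of how admitting even one symmetric, equally unsupported counter-claim neutralises the scripted premise; the numbers and the framing of the counter-claim are invented purely for illustration:

```python
# Toy illustration: naive expected value under the mugger's framing alone,
# versus after admitting one mirror claim with no better or worse support.
# All numbers are invented for the example.

tiny_p = 1e-12          # credence naively granted to the mugger's story
huge_utility = 1e15     # utility the mugger promises if paid
cost_of_paying = 10     # certain cost of handing over the wallet

# Considering only the scripted premise, paying looks "rational":
ev_scripted_only = tiny_p * huge_utility - cost_of_paying
print(ev_scripted_only)        # 990.0 -> pay

# Mirror claim: paying brings about the very harm the mugger threatens,
# and there is no evidential basis to weight it differently.
ev_with_counterclaim = (tiny_p * huge_utility
                        - tiny_p * huge_utility
                        - cost_of_paying)
print(ev_with_counterclaim)    # -10.0 -> don't pay
```

The particular numbers do not matter; the point is that the dominance of the scripted outcome depends entirely on excluding the unstated ones, which is exactly the work a model or priors would need to do.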
Teasing Uncertainty from Probability
Probability is a framework that describes our relation to a set of outcomes in the presence of uncertainty and of abstraction or lack of focus, e.g. not focusing on mechanisms, or not including stochastic factors in our model; it simply indicates a lack of specific knowledge of, or modelling of, the mechanistic properties.
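A small sketch of that abstraction step, with an invented deterministic rule standing in for the real mechanism: a process fully determined by its initial conditions gets modelled as a fair coin only because the observer does not (or cannot) model the mechanism.

```python
# A deterministic "coin": the outcome is fully fixed by the initial angular
# velocity, but an observer who does not model the mechanism treats it as
# random. The threshold rule below is invented purely for illustration.

def deterministic_coin(initial_angular_velocity: float) -> str:
    # Outcome is a deterministic function of the initial condition.
    return "heads" if int(initial_angular_velocity * 100) % 2 == 0 else "tails"

# Observer's abstracted model: ignore the mechanism, assign P(heads) = 0.5.
# The probability encodes the observer's lack of mechanistic modelling,
# not any randomness in the coin itself.
p_heads_under_abstraction = 0.5

print(deterministic_coin(3.14159))   # fixed by the input, not random
print(p_heads_under_abstraction)
```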
One might argue that a particular form of uncertainty, namely epistemic uncertainty, centres on this lack of knowledge rather than on empirical distributions.
Within the kinds of uncertainty to which probability is applied, there are epistemic uncertainty and aleatoric uncertainty:
Within epistemic uncertainty, which relates to Bayes, with priors and posteriors and a focus on subjective uncertainty, probability serves a different role than in the aleatoric case. Here it asks: given the limited input stimuli and my prior conceptualisations, what does the objective world likely contain? This can be placed in a probabilistic framework analogous to a poor or simple visual network that takes in perceived stimuli from a ventral stream, constructs a representation of those stimuli in its architecture, and produces potential solutions to this forward-modelling problem. The problem is then that individual solutions have numerous correlated inverse-modelling solutions when one considers which stimuli would cause the observed network representation. The lack of awareness of precise neural differences between these states, combined with uncertainty about the world in general, leads to the conceptualisation of epistemic uncertainty in humans.
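A toy numerical sketch of the inverse problem being described; the world-states, likelihoods and prior are all invented for illustration:

```python
# Toy Bayesian inverse problem: several distinct world-states ("stimuli")
# can produce the same internal representation, so the posterior over
# world-states stays spread out -- that residual spread is the epistemic
# uncertainty. All numbers are invented for illustration.

prior = {"world_A": 1 / 3, "world_B": 1 / 3, "world_C": 1 / 3}

# Forward model: likelihood of producing the observed internal
# representation under each candidate world-state.
likelihood = {"world_A": 0.6, "world_B": 0.6, "world_C": 0.1}

# Bayes: posterior proportional to prior * likelihood, then normalised.
unnormalised = {w: prior[w] * likelihood[w] for w in prior}
total = sum(unnormalised.values())
posterior = {w: round(v / total, 3) for w, v in unnormalised.items()}

print(posterior)
# world_A and world_B end up with equal posteriors: the observation cannot
# tell them apart, so the inverse problem stays under-determined.
```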
The misconceptions that come from conflating uncertainty with inappropriate probability stem primarily from three intuition errors. The first is assuming, given a set of outcomes, a uniform distribution over those outcomes. The second is failing to see that applying a probability system to something can still leave one of its outcomes with a probability of 0: merely presenting something as a possibility does not guarantee it is a realisable outcome. The third is attributing probability to a lack of knowledge without any abstracted model, as a stand-in for otherwise saying "I don't know". This last seems to be an overreach, likely stemming from familiarity with applying probability as a framework given uncertainty and, crucially, a model; the overreach is actualised by applying probability to uncertainty that does not include a model, insight, or any kind of information to go on, making probability inappropriate in these scenarios.
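A concrete instance of the first error, with invented numbers: for a lottery of $N = 10^{6}$ tickets of which I hold one, the outcome set $\{\text{win}, \text{lose}\}$ has two elements, yet $P(\text{win}) = 10^{-6}$, not $1/2$. The cardinality of the listed outcome set carries no information about the measure assigned to each element.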
Limits of Parsimony
Aleatoric uncertainty mostly exists as a result of abstraction away from deterministic processes and underlying fundamentals, with the exception of quantum mechanics, which is argued both ways: either as being inherently random, or as being influenced deterministically by non-local hidden variables. I don't tend to see why randomness has a place in the universe; I view the quantum-mechanical dilemma as a hard limit on our ability to interact with the mechanisms of this world rather than as inherent randomness, but neither stance seems falsifiable.
The rule of parsimony works as a virtually universal rule for discerning between two explanatory or instrumental frameworks that support the same observations or implement the same desired outcomes. It works extremely well as a rule that emphasises the reflection and expression of all modelled variables: between two frameworks, the one carrying surplus explanatory variables is the more questionable. An appeal to parsimony usually occurs to justify randomness as an explanation in accordance with this rule. Generally, the world works such that we are always able to interact with the underlying mechanism to express these modelled variables, and parsimony works in light of accreting evidence, either falsifying frameworks or selecting frameworks that do not inadvertently make complex implications which newer evidence can falsify. In the case where the mechanism is non-interactable, an appeal to parsimony does not serve any greater effect.
Additionally, I find it hard to accept treating inherent randomness as anything other than a black box that hand-waves away an equally possible non-interactable mechanism. My perspective is that an appeal to an explicit black box like randomness should not even be labelled as more parsimonious.