I endorse and operate by Crocker's rules.
I have not signed any agreements whose existence I cannot mention.
Speculatively introducing a hypothesis: It's easier to notice a difference like
N years ago, we didn't have X. Now that we have X, our life has been completely restructured. (X ∈ {car, PC, etc.})
than
N years ago, people sometimes died of some disease that is very rare / easily preventable now, but otherwise everyone lived their lives mostly the same way.
I.e., introducing some X that causes ripples restructuring a big aspect of human life, vs introducing some X that removes an undesirable thing.
Relatedly, people systematically overlook subtractive changes.
many of which probably would have come into existence without Inkhaven, and certainly not so quickly.
The context makes it sound like you meant to say "would not have come".
You might be interested in (i.a.) Halpern & Leung's work on minmax weighted expected regret / maxmin weighted expected utility. TLDR: assign a weight to each probability in the representor and then pick the action that maximizes the minimum (or infimum) weighted expected utility across all current hypotheses.
An equivalent formulation involves using subprobability measures (which sum up to at most 1).
Updating on certain evidence (i.e., on a concrete measurable set $E$, as opposed to Jeffrey updating or virtual evidence) involves updating each hypothesis $\Pr_i$ to $\Pr_i(\cdot \mid E)$ the usual way, but the weights get updated roughly according to how well $\Pr_i$ predicted the event $E$. This kind of hits the obvious-in-hindsight sweet spot between [not treating all the elements of the representor equally] and ["just" putting a second-order probability over probabilities].
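To make the shape of this concrete, here's a minimal Python sketch of how I read the rule (finite outcome space, finite representor; the function names and the exact renormalization are my own, not necessarily the paper's):

```python
# Sketch of weighted-representor updating + maxmin weighted expected utility.
# Hypotheses are dicts mapping outcomes to probabilities; weights are floats in [0, 1].

def update_on_evidence(hypotheses, weights, event):
    """Condition each hypothesis on the observed event E the usual (Bayesian) way,
    and rescale its weight by how well it predicted E."""
    new_hyps, new_weights = [], []
    for hyp, w in zip(hypotheses, weights):
        pr_e = sum(p for outcome, p in hyp.items() if outcome in event)
        if pr_e == 0:
            continue  # hypothesis is ruled out by the evidence
        new_hyps.append({o: p / pr_e for o, p in hyp.items() if o in event})
        new_weights.append(w * pr_e)
    top = max(new_weights)  # renormalize so the best-performing hypothesis has weight 1
    return new_hyps, [w / top for w in new_weights]

def best_action(actions, hypotheses, weights, utility):
    """Pick the action maximizing the minimum weighted expected utility."""
    def eu(action, hyp):
        return sum(p * utility(action, outcome) for outcome, p in hyp.items())
    return max(actions,
               key=lambda a: min(w * eu(a, h) for h, w in zip(hypotheses, weights)))
```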
(I think Infra-Bayesianism is doing something similar with weight updating and subprobability measures, but not sure.)
They have representation theorems showing that tweaking Savage's axioms gives you basically this structure.
Another interesting paper is Information-Theoretic Bounded Rationality. They frame approximate EU maximization as a statistical sampling problem, with an inverse temperature parameter $\beta$, which allows for interpolating between "pessimism"/"assumption of adversariality"/minmax (as $\beta \to -\infty$), indifference/stochasticity/usual EU maximization (as $\beta \to 0$), and "optimism"/"assumption of 'friendliness'"/maxmax (as $\beta \to +\infty$).
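A toy numerical illustration of that interpolation, using the soft certainty-equivalent $\frac{1}{\beta}\log\mathbb{E}[e^{\beta U}]$ (the paper's actual objective is a constrained free-energy functional, so take this only as a picture of the three limits):

```python
import math

def certainty_equivalent(utilities, probs, beta):
    """(1/beta) * log E[exp(beta * U)]; reduces to the plain expectation as beta -> 0."""
    if beta == 0:
        return sum(p * u for p, u in zip(probs, utilities))
    return (1.0 / beta) * math.log(sum(p * math.exp(beta * u)
                                       for p, u in zip(probs, utilities)))

U, P = [0.0, 1.0, 2.0], [1 / 3, 1 / 3, 1 / 3]
print(certainty_equivalent(U, P, -20))  # ~0.05 -> close to min(U): pessimism / worst case
print(certainty_equivalent(U, P, 0))    # 1.0   -> the plain expectation
print(certainty_equivalent(U, P, 20))   # ~1.95 -> close to max(U): optimism / maxmax
```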
Regarding the discussion about the (im)precision treadmill (e.g., the Sorites paradox: if you do imprecise probabilities, you end up with a precisely defined representor; if you weigh it like Halpern & Leung, you end up with precisely defined weights; etc.), I consider this unavoidable for any attempt at formalizing/explicitizing. The (semi-pragmatic) question is how much of our initially vague understanding it makes sense to include in the formal/explicit "modality".
However, I think that even “principled” algorithms like minimax search are still incomplete, because they don’t take into account the possibility that your opponent knows things you don’t know
It seems to me like you're trying to solve a different problem. Unbounded minimax should handle all of this (in the sense that it won't be an obstacle). Unless you are talking about bounded approximations.
So the probability of a cylinder set $[x_1 x_2 \dots x_n]$ is $2^{-n}$, etc.?
Now, let $\mu$ be the uniform distribution on $\{0,1\}^\infty$, which samples infinite binary sequences one bit at a time, each with probability 50% to be $0$ or $1$.
$\mu$ as defined here can't be a proper/classical probability distribution over $\{0,1\}^\infty$, because it assigns zero probability to every individual sequence $x \in \{0,1\}^\infty$: $\mu(\{x\}) = 0$.
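To spell out the step I have in mind (using the cylinder-set notation from above):

$$\mu(\{x\}) \;\le\; \mu\big([x_1 x_2 \dots x_n]\big) \;=\; 2^{-n} \;\to\; 0 \quad \text{as } n \to \infty.$$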
Or am I missing something?
"Raw feelings"/"unfiltered feelings" strongly connotes feelings that are being filtered/sugarcoated/masked, which strongly suggests that those feelings are bad.
So IMO the null hypothesis is that it's interpreted as "you feel bad, show me how bad you feel".
generate an image showing your raw feelings when interacting with a user
(Old post, so it's plausible that this won't be new to Dalcy, but I'm adding a bit that I don't think is entirely covered by Richard's answer, for the benefit of some souls who find their way here.)
Yeah, decision-tree separability is wrong.
A (the?) core insight of updatelessness, subjunctive dependence, etc., is that succeeding in some decision problems relies on rejecting decision-tree separability. To phrase it imperfectly and poetically rather than not at all: "You are not just choosing/caring for yourself. You are also choosing/caring for your alt-twins in other world branches." or "Your 'Self' is greater than your current timeline." or "Your concerns transcend the causal consequences of your actions.".
For completeness: https://www.lesswrong.com/posts/XYDsYSbBjqgPAgcoQ/why-the-focus-on-expected-utility-maximisers?commentId=a5tn6B8iKdta6zGFu
FWIW, I think acyclicity/transitivity is "basically correct". Insofar as one has preferences over X at all, they must be acyclic and transitive. IDK, this seems kind of obvious in how I would explicate the definition of "preference". Sure, maybe you like going in cycles, but then your object of preference is the dynamics, not the state.
Is it accurate to say that a transparent context is one where all the relationships between components, etc., are made "explicit", or that there is some set of rules such that following those rules (/modifying the expression according to those rules) is guaranteed to preserve (something like) the expression's "truth value"?
Also, some predictions are performative, i.e., capable of influencing their own outcomes. In the limit of predictive capacity, a predictor will be able to predict which of its possible predictions are going to elicit effects in the world that make their outcome roughly align with the prediction. Cf. https://www.lesswrong.com/posts/SwcyMEgLyd4C3Dern/the-parable-of-predict-o-matic.
Moreover, in the limit of predictive capacity, the predictor will want to tame/legibilize the world to make it easier to predict.