Adding other hypotheses doesn't fix the problem. For every hypothesis you can think of, there's a version of it with "but I survive for sure" tacked on. That version can never lose evidence relative to the base hypothesis, but it can gain evidence anthropically. Eventually, these will get you. Yes, there are all sorts of considerations that are more relevant in a realistic scenario; that's not the point.
The problem, as I understand it, is that there seem to be magical hypotheses you can't update against from ordinary observation, because by construction the only time they make a difference is in your odds of survival. So observation can't update against them, anthropics can only update in their favour, and eventually you end up believing one, and then you die.
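A minimal numerical sketch of that dynamic, assuming a toy game with a 1/6 per-round death chance under the base hypothesis (the numbers are mine, purely for illustration):

```python
import random

P_DEATH = 1 / 6     # per-round death chance under the base hypothesis (assumed)
odds = 1.0          # odds of "base + 'but I survive for sure'" : "base"

rng = random.Random(0)
survived = 0
while rng.random() >= P_DEATH:      # loop over the rounds this copy survives
    survived += 1
    # Ordinary observations: both hypotheses predict them identically by
    # construction, so their likelihood ratio is always exactly 1.
    # Anthropic update on surviving the round: P=1 vs P=5/6.
    odds *= 1 / (1 - P_DEATH)
    print(survived, round(odds, 3))
# The tacked-on hypothesis can only ever gain: the odds grow by a factor
# of 1.2 per survival, and will eventually swamp any finite prior penalty.
```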
Maybe the disagreement is about what we take the alternative hypothesis to be? I'm not imagining a broken gun - you could examine your gun and notice it isn't broken, or just shoot into the air a few times and see it fire. But even after you eliminate all of those, there's still the hypothesis "I'm special for no discernible reason" (or is there?) that can only be tested anthropically, if at all. And this seems worrying.
Maybe here's a stronger way to formulate it: Consider all the copies of yourself across the multiverse. They will sometimes face situations where... (read more)
To clarify, do you think I was wrong to say UDT would play the game? I've read the two posts you linked. I think I understand Wei's, and I think the UDT described there would play. I don't quite understand yours.
Another problem with this is that it isn't clear how to form the hypothesis "I have control over X".
You don't. I'm using talk about control sometimes to describe what the agent is doing from the outside, but the hypotheses it believes all have a form like "The variables such-and-such will be as if they were set by BDT given such-and-such inputs".
One problem with this is that it doesn't actually rank hypotheses by which is best (in expected utility terms), just how much control is implied.
For the first setup, where it's trying to learn what it has control ov... (read more)
From my perspective, Radical Probabilism is a gateway drug.
This post seemed to be praising the virtue of returning to the lower-assumption state. So I argued that in the example given, it took more than knocking out assumptions to get the benefit.
So, while I agree, I really don't think it's cruxy.
It wasn't meant to be. I agree that logical inductors seem to de facto implement a Virtuous Epistemic Process, with attendant properties, whether or not they understand that. I just tend to bring up any interesting-seeming thoughts that are triggered during ... (read more)
Either way, we've made assumptions which tell us which Dutch Books are valid. We can then check what follows.
Ok. I suppose my point could then be made as "#2-type approaches aren't very useful, because they assume something that's no easier than what they provide".
I think this understates the importance of the Dutch-book idea to the actual construction of the logical induction algorithm.
Well, you certainly know more about that than me. Where did the criterion come from in your view?
This part seems entirely addressed by logical induction, to me.
Quite p... (read more)
I wanted to separate what work is done by radicalizing probabilism in general, vs logical induction specifically.
From my perspective, Radical Probabilism is a gateway drug. Explaining logical induction intuitively is hard. Radical Probabilism is easier to explain and motivate. It gives reason to believe that there's something interesting in the direction. But, as I've stated before, I have trouble comprehending how Jeffrey correctly predicted that there's something interesting here, without logical uncertainty as a motivation. In hindsight, I feel hi... (read more)
One of the most important things I learned, being very into nutrition-research, is that most people can't recognize malnutrition when they see it, and so there's a widespread narrative that it doesn't exist. But if you actually know what you're looking for, and you walk down an urban downtown and look at the beggars, you will see the damage it has wrought... and it is extensive.
Can someone recommend a way of learning to recognize this without having to spend effort on nutrition-in-general?
I think giving reasons made this post less effective. Reasons make naive!rationalist more likely to yield on this particular topic, but that's no longer a live concern, and it probably inhibits learning the general lesson.
What is actually left of Bayesianism after Radical Probabilism? Your original post on it was partially explaining logical induction, and introduced assumptions from that in much the same way as you describe here. But without that, there doesn't seem to be a whole lot there. The idea is that all that matters is resistance to Dutch books, and for a Dutch book to be fair the bookie must not have an epistemic advantage over the agent. Said that way, it depends on some notion of "what the agent could have known at the time", and giving a coherent account of thi... (read more)
Definition (?). A non-anthropic update is one based on an observation E that has no (or a negligible) bearing on how many observers in your reference class there are.
Not what I meant. I would say anthropic information tells you where in the world you are, and normal information tells you what the world is like. An anthropic update, then, reasons about where you would be, if the world were a certain way, to update on world-level probabilities from anthropic information. So Sleeping Beauty with N outsiders is a purely anthropic update by my count. Big worlds ... (read more)
I have thought about this before posting, and I'm not sure I really believe in the infinite multiverse. I'm not even sure if I believe in the possibility of being an individual exception, or some other sort of possibility. But I don't think just asserting that without some deeper explanation is really a solution either. We can't just assign zero probability willy-nilly.
That link also provides a relatively simple illustration of such an update, which we can use as an example:
I didn't consider that illustrative of my question, because "I'm in the Sleeping Beauty problem" shouldn't lead to a "normal" update anyway. That said, I haven't read Anthropic Bias, so if you say it really is supposed to be the anthropic update only, then I guess it is. The definition in terms of "all else equal" wasn't very informative for me here.
To fix this issue we would need to include in your reference class whoever has the same background knowledge as... (read more)
In most of the discussion from the above link, those fractions are 100% on either A or B, resulting, according to SSA, in your posterior credences being the same as your priors.
For the anthropic update, yes, but isn't there still a normal update? Where you just update on the gun not firing, as an event, rather than on your existence? Your link doesn't have examples where that would be relevant either way. But if we didn't do this normal updating, then it seems like you could only learn from an observation if some people in your reference class make the opposit... (read more)
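Concretely, the normal update I mean (with made-up numbers: under A the gun fires with probability 1/6 each round, under B never) is just Bayes on the event of non-firing:

\[
\frac{P(B \mid k \text{ non-firings})}{P(A \mid k \text{ non-firings})} = \frac{P(B)}{P(A)} \cdot \frac{1}{(5/6)^k},
\]

which grows without bound as k increases.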
Hm. I think your reason here is more or less "because our current formalisms say so". Which is fair enough, but I don't think it gives me an additional reason - I already have my intuition despite knowing it contradicts them.
What if the game didn't kill you, it just made you sick? Would your reasoning still hold?
No. The relevant gradual version here is forgetting rather than sickness. But yes, I agree there is an embedding question here.
In that case, after every game, 1 in 6 of you die in the A scenario, and 0 in the B scenario, but in either scenario there are still plenty of "you"s left, and so SSA would say you shouldn't increase your credence in B (provided you remove your corpses from your reference class, which is perfectly fine a la Bostrom).
Can you spell that out more formally? It seems to me that so long as I'm removing the corpses from my reference class, 100% of people in my reference class remember surviving every time so far, just like I do, so SSA just does normal Bayesian up... (read more)
Isn't the prior probability of B the sum over all specific hypotheses that imply B?
I would say there is also a hypothesis that just says your probability of survival is different, for no apparent reason, or only for similarly stupid reasons like "this electron over there in my pinky works differently from other electrons" that are untestable for the same anthropic reasons.
You're going to have some prior on "this is safer for me, but not totally safe; it actually has a 1/1000 chance of killing me." This seems no less reasonable than the no-chance-of-killing-you prior.
If you've survived often enough, this can go arbitrarily close to 0.
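Concretely: each survival multiplies the odds of the 1/1000-risk hypothesis against the zero-risk hypothesis by the likelihood ratio 999/1000, so after n survivals

\[
\frac{P(\text{risk } 1/1000 \mid n \text{ survivals})}{P(\text{risk } 0 \mid n \text{ survivals})} = \frac{P(\text{risk } 1/1000)}{P(\text{risk } 0)} \cdot \left(\tfrac{999}{1000}\right)^{n} \longrightarrow 0.
\]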
I think that playing this game is the right move
Why? It seems to me like I have to pick between the theories "I am an exception to natural law, but only in ways that could also be produced by the anthropic effect" and "it's just the anthropic effect". The latter seems obviously more reasonable to me, and it implies I'll die if I play.
Sure, but with current theories, even after you've gotten an infinite amount of evidence against every possible alternative consideration, you'll still believe that you're certain to survive. This seems wrong.
Hence, I (standing outside of PA) assert that (since I think PA is probably consistent) agents who use PA don't know whether PA is consistent, but believe the world is consistent.
There are two ways to express "PA is consistent". The first is the schema ¬(A∧¬A). The other is a complicated construct about Gödel-encodings. Each has a corresponding version of "the world is consistent" (indeed, this "world" is inside PA, so they are basically equivalent). The agent using PA will believe only the former. The Troll expresses the consistency of PA using provabilit... (read more)
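To spell out the contrast (standard facts, assuming PA is in fact consistent):

\[
\text{PA} \vdash \neg(A \wedge \neg A) \ \text{for every sentence } A,
\qquad\text{but}\qquad
\text{PA} \nvdash \neg\mathrm{Prov}_{\text{PA}}(\ulcorner 0{=}1 \urcorner).
\]

The left side is the tautology schema the agent believes; the right side is the Gödel-encoded consistency statement, unprovable by the second incompleteness theorem.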
If you're reasoning using PA, you'll hold open the possibility that PA is inconsistent, but you won't hold open the possibility that A∧¬A. You believe the world is consistent. You're just not so sure about PA.
Do you? This sounds like PA is not actually the logic you're using. Which is realistic for a human. But if PA is indeed inconsistent, and you don't have some further-out system to think in, then what is the difference to you between "PA is inconsistent" and "the world is inconsistent"? In both cases you just believe everything and its negatio... (read more)
If I'm using PA, I can prove that ¬(A∧¬A).
Sure, that's always true. But sometimes it's also true that A∧¬A. So unless you believe PA is consistent, you need to hold open the possibility that the ball will both (stop and continue) and (do at most one of those). But of course you can also prove that it will do at most one of those. And so on. I'm not very confident what's right; ordinary imagination is probably just misleading here.
It seems particularly absurd that, in some sense, the reason you think that is just because you think that.
The fa... (read more)
Here's what I imagine the agent saying in its defense:
Yes, of course I can control the consistency of PA, just like everything else can. For example, imagine that you're using PA and you see a ball rolling. And then in the next moment, you see the ball stopping and you also see the ball continuing to roll. Then obviously PA is inconsistent.
Now you might think this is dumb, because it's impossible to see that. But why do you think it's impossible? Only because it's inconsistent. But if you're using PA, you must believe PA really might be inconsistent, so you ca... (read more)
A different perspective, perhaps not motivating quite the same things as yours:
A theory needs to be able to talk about itself and its position in and effect on the world. So in particular it will have beliefs about how the application of just this theory in just this position will influence whatever it is that we want the theory to do. Then reflective consistency demands that the theory rates itself well on its objective: If I have a belief, and also a belief that the first belief is most likely the result of deception, then ... (read more)
Yeah, you're right. I just realized that what I had in mind originally already implicitly had superrationality.
I'm not sure exactly what setup you're imagining.
Defecting one round earlier dominates pure tit-for-tat, but defecting five rounds earlier doesn't dominate pure tit-for-tat. Pure tit-for-tat does better against pure tit-for-tat. So there might be a Nash equilibrium containing only strategies that play tit-for-tat until the last few rounds.
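A quick payoff check of "defect k rounds early" against pure tit-for-tat, assuming the standard prisoner's dilemma payoffs T=5, R=3, P=1, S=0 and a 20-round game (all numbers and names are mine, for illustration):

```python
# payoffs for one prisoner's dilemma round, (my move, their move) -> my score
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
N = 20  # number of rounds

def defect_early(k):
    """Play tit-for-tat, but defect unconditionally in the last k rounds."""
    def strat(my_hist, their_hist):
        if len(my_hist) >= N - k:
            return "D"
        return their_hist[-1] if their_hist else "C"
    return strat

def total_payoff(strat_a, strat_b):
    hist_a, hist_b, score = [], [], 0
    for _ in range(N):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        score += PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
    return score

tft = defect_early(0)
for k in range(6):
    print(k, total_payoff(defect_early(k), tft))
# k=0: 60, k=1: 62 (defecting one round early beats pure TFT),
# then 60, 58, 56, 54: defecting five rounds early loses to pure TFT.
```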
Ah, I see, so this approach differs a lot from Ole Peters'
I looked at his paper on the Petersburg paradox and I think he gets the correct result for the iterated game. He doesn't do fractional betting, but he has a variable... (read more)
For me this rewording takes the teeth out of the statement
And that's why you should be careful about it. With "barely worth living", you have a clear image of what you're thinking about. With "barely meeting the criteria for me valuing them" you probably don't; you just have an inferential assurance that you are happy applying the repugnant conclusion to them. The argument for total value is not actually any stronger or weaker than it was before - you've just decided that the intuition against it ought to be rationally overridden, because the opposite is "trivially true".
but the only really arbitrary-seeming part is the choice about how to order the limits.
Deciding that the probability of overtaking needs to be >0 might also count here. The likely alternative would be >1/2. I've picked 0 because I expect that normally this limit will be 0 or 1 or 0.5, and if we get other values then that might lead to intransitive comparisons - but I might rethink this if I discover a case where the limit isn't one of those three.
The tit-for-tat strategy unravels all the way back to the beginning, and we're b... (read more)
When we don't know how to solve a problem even given infinite computing power, the very work we are trying to do is in some sense murky to us.
I wonder where this goes with questions about infinite domains. It seems to me that I understand what it means to argmax a generic bounded function on a generic domain, but I don't know an algorithm for it, and as far as I know there can't be one. So it seems taking this very seriously would lead us to some form of constructivism.
But you're saying we should change that game from f:Ω->[0,inf] to g:Ω×R->[0,inf]
No. The key change I'm making is from assigning every strategy an expected value (normally real, but including infinity as you do should be possible) to having the essential math thing be a comparison between two strategies. With your version, all we can say is that all-in has EV 0, don't bet has EV 1, and everything else has EV infinity - but by doing the comparison inside the limits, we get some more differentiation there.
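A Monte Carlo sketch of what comparing inside the limits buys (the even-odds bet, the win probability 0.6, and the comparison fraction 0.9 are assumptions of mine):

```python
import random

p = 0.6                    # assumed win probability of an even-odds bet
kelly = 2 * p - 1          # the Kelly fraction for that bet, here 0.2

def prob_overtakes(f_a, f_b, n, trials=2000, seed=0):
    """Estimate P(betting fraction f_a ends ahead of f_b after n rounds)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        wa = wb = 1.0
        for _ in range(n):
            hit = rng.random() < p      # both strategies face the same flip
            wa *= (1 + f_a) if hit else (1 - f_a)
            wb *= (1 + f_b) if hit else (1 - f_b)
        wins += wa > wb
    return wins / trials

for n in (10, 100, 1000):
    print(n, prob_overtakes(kelly, 0.9, n))
# Both fractions have expected wealth growing without bound, so the EV
# ordering can't separate them; but the probability that Kelly is ahead
# rises toward 1, so the pairwise comparison inside the limit does.
```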
R isn't distinct from Ω. The EV function "bind... (read more)
If I understand that context correctly, that's not what I'm doing. The unconventional writing doesn't pull a lim outside an EV, it replaces an EV with a lim construction. In fact, that comment seems somewhat in support of my point: he's saying that the plain expectation doesn't properly represent an infinite game. And if the replacing of E with the n-lim that I'm doing works out, then that's saying that the order of limits that results in Kelly is the right one. It's similar to what I said (less formally) about expected utility maximization not re... (read more)
I've worked out our earlier discussion of the order of limits into a response to this here.
I now think you were right about it not solving anthropics. I interpreted afoundationalism insufficiently ambitiously; now that I have a proof-of-concept for normative semantics, I indeed can't find it to presuppose an anthropics.
It seems like there's some mystery process that connects observations to hypotheses about what some mysterious other party "really means"
The hypotheses do that. I said:
We start out with a prior over hypotheses about meaning. Such a hypothesis generates a probability distribution over all propositions of the form "[Observation] means [proposition]." for each observation (including the possibility that the observation means nothing).
Why do you think this doesn't answer your question?
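As a toy illustration of the quoted setup (everything beyond the quote - the utterance, the candidate meanings, and the mostly-honest-speaker likelihood - is an assumption of mine):

```python
# Toy prior over meaning-hypotheses. Each hypothesis maps an utterance to the
# proposition it asserts (an index into the world state), or to nothing.
hypotheses = {
    "'brr' means rain":    {"brr": 0},
    "'brr' means wind":    {"brr": 1},
    "'brr' means nothing": {},
}
prior = {h: 1 / len(hypotheses) for h in hypotheses}

def likelihood(hyp, utterance, world, honesty=0.9):
    """P(speaker says utterance | hypothesis, world), mostly-honest speakers assumed."""
    meaning = hypotheses[hyp].get(utterance)
    if meaning is None:
        return 0.5                      # meaningless noise, uninformative
    return honesty if world[meaning] else 1 - honesty

# Hear "brr" three times, each in a world state (rain=True, wind=False):
posterior = dict(prior)
for world in [(True, False)] * 3:
    posterior = {h: pr * likelihood(h, "brr", world) for h, pr in posterior.items()}
norm = sum(posterior.values())
posterior = {h: pr / norm for h, pr in posterior.items()}
print(posterior)   # the "rain" hypothesis dominates (~0.85)
```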
but if this process always (sic) connects the observations to propositio... (read more)
I'm not sure what you don't understand, so I'll explain a few things in that area and hope I hit the right one:
I give sentences their English names in the example to make it understandable. Here are two ways you could give more detail on the example scenario, each of which is consistent:
I've thought about applying normativity to language learning some more; it's written up here.
This is simply saying that, given we've randomly selected a truth table, the probability that every snark is a boojum is 0.4.
Maybe I misunderstand your quantifiers, but I don't think it says that. It says that for every monster, if we randomly pick a truth table on which it's a snark, the probability that it's also a boojum on that table is 0.4. I think the formalism is right here and your description of it wrong, because that's just what I would expect P(boojum(x) | snark(x)) = 0.4 to mean.
I thi... (read more)
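To make the quantifier point concrete, a brute-force check over a toy distribution on truth tables (three monsters, independence between them, and the probabilities are all assumptions of mine):

```python
from itertools import product

P_SNARK, P_BOOJUM = 0.5, 0.4   # P(snark); P(boojum) set to 0.4 either way

def table_prob(table):
    """Probability of a truth table ((is_snark, is_boojum) per monster)."""
    p = 1.0
    for s, b in table:
        p *= (P_SNARK if s else 1 - P_SNARK) * (P_BOOJUM if b else 1 - P_BOOJUM)
    return p

# all 4^3 = 64 truth tables for three monsters
tables = list(product(product([False, True], repeat=2), repeat=3))

# Reading 1: for a fixed monster, P(it's a boojum | it's a snark) over tables
p_snark = sum(table_prob(t) for t in tables if t[0][0])
p_both = sum(table_prob(t) for t in tables if t[0][0] and t[0][1])
print(p_both / p_snark)   # 0.4

# Reading 2: P(every snark is a boojum) over tables
p_all = sum(table_prob(t) for t in tables if all(b or not s for s, b in t))
print(p_all)              # 0.343 = 0.7^3 -- a genuinely different claim
```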
The first sentence of your first paragraph appears to appeal to experiment, while the first sentence of your second paragraph seems to boil down to "Classically, X causes Y if there is a significant statistical connection twixt X and Y."
No. "Dependence" in that second sentence does not mean causation. It just means statistical dependence. The definition of dependence is important because an intervention must be statistically independent from things "before" the intervention.
None of these appear to involve intervention.
These are methods of causal inf... (read more)
Pearl's answer, from IIRC Chapter 7 of Causality, which I find 80% satisfying, is about using external knowledge about repeatability to consider a system in isolation. The same principle gets applied whenever a researcher tries to shield an experiment from outside interference.
This is actually a good illustration of what I mean. You can't shield an experiment from outside influence entirely, not even in principle, because it's you doing the shielding, and your activity is caused by the rest of the world. If you decide to only look at a part of the world, on... (read more)
What I had in mind was increasing precision of Y.
X and Y are variables for events. By complexity class I mean computational complexity, not sure what scaling parameter is supposed to be there?
I'm not sure if "translation" is a good word for what you're talking about. For example, it's not clear what a Shor-to-Constance translation would look like. You can transmit the results of statistical analysis to non-technical people, but the sharing of results wasn't the problem here. The Constance-to-Shor translator described Constance's reasons in such a way that Shor can process them, and what could an inverse of this be? Constance's beliefs are based on practical experience, and Shor simply hasn't had that, whereas Constance did get "data" in a broad sen... (read more)
We don't have to justify using "similarity", "resemblance"
I think you still do. In terms of induction, you still have the problem of grue and bleen. In terms of Occam's Razor, it's the problem of which language a description needs to be simple in.
It's sort of like the difference between a programmable computer vs an arbitrary blob of matter.
This is close to what I meant: my neurons keep doing something like reinforcement learning, whether or not I theoretically believe that's valid. "I in fact can not think outside this" does address the worry about a merely rational constraint.
On the other hand, we do want AI to eventually consider other hardware, and that might even be necessary for normal embedded agency, since we don't fully trust our hardware even when we don't want to normal-sense-change it... (read more)
we have an updating process which can change its mind about any particular thing; and that updating process itself is not the ground truth, but rather has beliefs (which can change) about what makes an updating process legitimate.
This should still be a strong formal theory, but one which requires weaker assumptions than usual
There seems to be a bit of a tension here. What you're outlining for most of the post still requires a formal system with assumptions within which to take the fixed point, but then that would mean that it can't change its mind about an... (read more)
Where do these crisp ontologies come from, if (under the signalling theory of meaning) symbols only have probabilistic meanings?
There are two things here which are at least potentially distinct: the meaning of symbols in thinking, and their meaning in communication. I'd expect these mechanisms to have a fair bit in common, but specifically the problem of alignment of the speakers which is addressed here would not seem to apply to the former. So I don't think we need to wonder here where those crisp ontologies came from.
This is the type of thinking that can't... (read more)
However, how I assign value to divergent sums is subjective -- it cannot be determined precisely from how I assign value to each of the elements of the sum, because I'm not trying to assume anything like countable additivity.
This implies that you believe in the existence of countably infinite bets but not countably infinite Dutch booking processes. That seems like a strange/unphysical position to be in - if that were the best treatment of infinity possible, I think infinity is better abandoned. I'm not even sure the framework in your linked post can really... (read more)
I'm generally OK with dropping continuity-type axioms, though, in which case you can have hyperreal/surreal utility to deal with expectations which would otherwise be problematic (the divergent sums which unbounded utility allows).
Have you worked this out somewhere? I'd be interested to see it, but I think there are some divergences it can't address. For one, there is the Pasadena paradox, which is also a divergent sum but one which doesn't stably lead anywhere, not even to infinity. The second is an apparently circular dominance relation: imagine you are lin... (read more)
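For reference, the Pasadena game (Nover & Hájek): flip a fair coin until it lands heads; if that takes n flips, the payoff is (−1)^(n−1)·2^n/n. The expectation series

\[
\sum_{n=1}^{\infty} 2^{-n} \cdot (-1)^{n-1}\,\frac{2^n}{n} \;=\; \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n}
\]

is only conditionally convergent: it sums to ln 2 in the given order, but rearranging the terms can make it converge to any value, or diverge.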
But I still think it's important to point out that the behavioral recommendations of Kelly do not violate the VNM axioms in any way, so the incompatibility is not as great as it may seem.
I think the interesting question is what to do when you expect many more, but only finitely many rounds. It seems like Kelly should somehow gradually transition, until it recommends normal utility maximization in the case of only a single round happening ever. Log utility doesn't do this. I'm not sure I have anything that does though, so maybe it's unfair to ask it from yo... (read more)
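To make the missing transition concrete, a small check (double-or-nothing bet with assumed win probability 0.6; the setup is mine): expected-wealth maximization and log utility each recommend the same fraction at every horizon, with no interpolation as the number of remaining rounds shrinks.

```python
import math

P_WIN = 0.6   # assumed win probability of a double-or-nothing bet

def expected_utility(f, n, u):
    """E[u(final wealth)] after n independent rounds, betting fraction f each round."""
    return sum(
        math.comb(n, w) * P_WIN**w * (1 - P_WIN)**(n - w)
        * u((1 + f)**w * (1 - f)**(n - w))
        for w in range(n + 1)
    )

def best_fraction(n, u):
    grid = [i / 100 for i in range(100)]          # f = 0.00 .. 0.99
    return max(grid, key=lambda f: expected_utility(f, n, u))

for n in (1, 10, 100):
    print(n, best_fraction(n, lambda x: x), best_fraction(n, math.log))
# Linear utility picks the grid edge (all-in) at every horizon; log utility
# picks the Kelly fraction 0.2 at every horizon. Neither interpolates as the
# number of remaining rounds shrinks, which is the gap noted above.
```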
I'm not sure why you think there would be a decision theory in that as well. Obviously when BDT decides its output, it will have some theory about how its output nodes propagate. But the hypothesis as a whole doesn't think about influence. It's just a total probability distribution, and it includes that some things inside it are distributed according to BDT. It doesn't have beliefs about "if the output of ... (read more)