All of Bunthut's Comments + Replies

Phylactery Decision Theory

Right, but then, are all other variables unchanged? Or are they influenced somehow? The obvious proposal is EDT -- assume influence goes with correlation.

I'm not sure why you think there would be a decision theory in that as well. Obviously when BDT decides its output, it will have some theory about how its output nodes propagate. But the hypothesis as a whole doesn't think about influence. It's just a total probability distribution, and it includes that some things inside it are distributed according to BDT. It doesn't have beliefs about "if the output of ... (read more)

Learning Russian Roulette

Adding other hypotheses doesn't fix the problem. For every hypothesis you can think of, there's a version of it that says "but I survive for sure" tacked on. This hypothesis can never lose evidence relative to the base version, but it can gain evidence anthropically. Eventually, these will get you. Yes, there's all sorts of considerations that are more relevant in a realistic scenario; that's not the point.

2 ChristianKl 2d: You don't need to add other hypotheses to know that there might be unknown additional hypotheses.
Learning Russian Roulette

The problem, as I understand it, is that there seem to be magical hypotheses you can't update against from ordinary observation, because by construction the only time they make a difference is in your odds of survival. So you can't update against them from observation, and anthropics can only update in their favour, so eventually you end up believing one and then you die.

2 Charlie Steiner 2d: The amount that I care about this problem is proportional to the chance that I'll survive to have it.
Learning Russian Roulette

Maybe the disagreement is in what we take the alternative hypothesis to be? I'm not imagining a broken gun - you could examine your gun and notice it isn't broken, or just shoot into the air a few times and see it fire. But even after you eliminate all of those, there's still the hypothesis "I'm special for no discernible reason" (or is there?) that can only be tested anthropically, if at all. And this seems worrying.

Maybe here's a stronger way to formulate it: Consider all the copies of yourself across the multiverse. They will sometimes face situations where... (read more)

2 Charlie Steiner 2d: I think in the real world, I am actually accumulating evidence against magic faster than I am trying to commit elaborate suicide.
Learning Russian Roulette

To clarify, do you think I was wrong to say UDT would play the game? I've read the two posts you linked. I think I understand Wei's, and I think the UDT described there would play. I don't quite understand yours.

2 Charlie Steiner 3d: I agree with faul sname, ADifferentAnonymous, shminux, etc. If every single person in the world had to play russian roulette (1 bullet and 5 empty chambers), and the firing pin was broken on exactly one gun in the whole world, everyone except the person with the broken gun would be dead after about 125 trigger pulls. So if I remember being forced to pull the trigger 1000 times, and I'm still alive, it's vastly more likely that I'm the one human with the broken gun, or that I'm hallucinating, or something else, rather than me just getting lucky.

Note that if you think you might be hallucinating, and you happen to be holding a gun, I recommend putting it down and going for a nap, not pulling the trigger in any way. But for the sake of argument we might suppose the only allowed hypotheses are "working gun" and "broken gun."

Sure, if there are miraculous survivors, then they will erroneously think that they have the broken gun, in much the same way that if you flipped a coin 1000 times and just so happened to get all heads, you might start to think you had an unfair coin. We should not expect to be able to save this person. They are just doomed.

It's like poker. I don't know if you've played poker, but you probably know that the basic idea is to make bets that you have the best hand. If you have 4 of a kind, that's an amazing hand, and you should be happy to make big bets. But it's still possible for your opponent to have a royal flush. If that's the case, you're doomed, and in fact when the opponent has a royal flush, 4 of a kind is almost the worst hand possible! It makes you think you can bet all your money when in fact you're about to lose it all. It's precisely the fact that four of a kind is a good hand almost all the time that makes it especially bad that remaining tiny amount of the time. The person who plays russian roulette and wins 1000 times with a working gun is just that poor sap who has four of a kind into a royal flush. (P.S.: My post is half explan
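A quick check of the arithmetic in the reply above. The 5/6 per-pull survival chance comes from the setup; the ~8 billion player count and the uniform prior of one broken firing pin in the whole world are assumptions used only for illustration:

```python
from math import log

p_survive = 5 / 6            # chance of surviving one pull with a working gun
population = 8e9             # assumed number of players, one of whom has a broken gun
prior_broken = 1 / population

# How many pulls until surviving by luck is about as rare as one-in-the-world?
print(log(1 / population) / log(p_survive))   # ~125, matching the reply

# Posterior odds of "my gun is broken" vs "I just got lucky" after k survived pulls
def odds_broken_vs_lucky(k):
    return prior_broken / ((1 - prior_broken) * p_survive ** k)

print(odds_broken_vs_lucky(125))    # ~1: roughly even odds at the 125-pull mark
print(odds_broken_vs_lucky(1000))   # astronomically in favour of the broken gun
```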
Phylactery Decision Theory

Another problem with this is that it isn't clear how to form the hypothesis "I have control over X".

You don't. I'm using talk about control sometimes to describe what the agent is doing from the outside, but the hypotheses it believes all have a form like "The variables such and such will be as if they were set by BDT given such and such inputs".

One problem with this is that it doesn't actually rank hypotheses by which is best (in expected utility terms), just how much control is implied.

For the first setup, where it's trying to learn what it has control ov... (read more)

2 abramdemski 2d: Right, but then, are all other variables unchanged? Or are they influenced somehow? The obvious proposal is EDT -- assume influence goes with correlation. Another possible answer is "try all hypotheses about how things are influenced."
Reflective Bayesianism

From my perspective, Radical Probabilism is a gateway drug.

This post seemed to be praising the virtue of returning to the lower-assumption state. So I argued that in the example given, it took more than knocking out assumptions to get the benefit.

So, while I agree, I really don't think it's cruxy. 

It wasn't meant to be. I agree that logical inductors seem to de facto implement a Virtuous Epistemic Process, with attendant properties, whether or not they understand that. I just tend to bring up any interesting-seeming thoughts that are triggered during ... (read more)

2 abramdemski 2d: Agreed. Simple Bayes is the hero of the story in this post, but that's more because the simple bayesian can recognize that there's something beyond.
Reflective Bayesianism

Either way, we've made assumptions which tell us which Dutch Books are valid. We can then check what follows.

Ok. I suppose my point could then be made as "#2 type approaches aren't very useful, because they assume something that's no easier than what they provide".

I think this understates the importance of the Dutch-book idea to the actual construction of the logical induction algorithm. 

Well, you certainly know more about that than me. Where did the criterion come from in your view?

This part seems entirely addressed by logical induction, to me.

Quite p... (read more)

I wanted to separate what work is done by radicalizing probabilism in general, vs logical induction specifically. 

From my perspective, Radical Probabilism is a gateway drug. Explaining logical induction intuitively is hard. Radical Probabilism is easier to explain and motivate. It gives reason to believe that there's something interesting in the direction. But, as I've stated before, I have trouble comprehending how Jeffrey correctly predicted that there's something interesting here, without logical uncertainty as a motivation. In hindsight, I feel hi... (read more)

I'm from a parallel Earth with much higher coordination: AMA

One of the most important things I learned, being very into nutrition-research, is that most people can't recognize malnutrition when they see it, and so there's a widespread narrative that it doesn't exist. But if you actually know what you're looking for, and you walk down an urban downtown and look at the beggars, you will see the damage it has wrought... and it is extensive.

Can someone recommend a way of learning to recognize this without having to spend effort on nutrition-in-general?

Don't Sell Your Soul

I think giving reasons made this post less effective. Reasons make naive!rationalists more likely to yield on this particular topic, but that's no longer a live concern, and it probably inhibits learning the general lesson.

Reflective Bayesianism

What is actually left of Bayesianism after Radical Probabilism? Your original post on it was partially explaining logical induction, and introduced assumptions from that in much the same way as you describe here. But without that, there doesn't seem to be a whole lot there. The idea is that all that matters is resistance to Dutch books, and for a Dutch book to be fair the bookie must not have an epistemic advantage over the agent. Said that way, it depends on some notion of "what the agent could have known at the time", and giving a coherent account of thi... (read more)

2 abramdemski 4d: Part of the problem is that I avoided getting too technical in Radical Probabilism, so I bounced back and forth between different possible versions of Radical Probabilism without too much signposting. I can distinguish at least three versions:

1. Jeffrey's version. I don't have a good source for his full picture. I get the sense that the answer to "what is left?" is "very little!" -- EG, he didn't think agents have to be able to articulate probabilities. But I am not sure of the details.
2. The simplification of Jeffrey's version, where I keep the Kolmogorov axioms (or the Jeffrey-Bolker axioms) but reject Bayesian updates.
3. Skyrms' deliberation dynamics. This is a pretty cool framework and I recommend checking it out (perhaps via his book The Dynamics of Rational Deliberation). The basic idea of its non-bayesian updates is, it's fine so long as you're "improving" (moving towards something good).
4. The version represented by logical induction.
5. The Shafer & Vovk version. I'm not really familiar with this version, but I hear it's pretty good.

(I can think of more, but I cut myself off.)

Making a broad generalization, I'm going to stick things into camp #2 above or camp #4. Theories in camp #2 have the feature that they simply assume a solid notion of "what the agent could have known at the time". This allows for a nice simple picture in which we can check Dutch Book arguments. However, it does lend itself more easily to logical omniscience, since it doesn't allow a nuanced picture of how much logical information the agent can generate. Camp #4 means we do give such a nuanced picture, such as the poly-time assumption.

Either way, we've made assumptions which tell us which Dutch Books are valid. We can then check what follows.

I think this understates the importance of the Dutch-book idea to the actual construction of the logical induction algorithm. The criterion came first, and the construction was finished soon after.
Learning Russian Roulette

Definition (?). A non-anthropic update is one based on an observation E that has no (or a negligible) bearing on how many observers in your reference class there are.

Not what I meant. I would say anthropic information tells you where in the world you are, and normal information tells you what the world is like. An anthropic update, then, reasons about where you would be, if the world were a certain way, to update on world-level probabilities from anthropic information. So sleeping beauty with N outsiders is a purely anthropic update by my count. Big worlds ... (read more)

Learning Russian Roulette

I have thought about this before posting, and I'm not sure I really believe in the infinite multiverse. I'm not even sure if I believe in the possibility of being an individual exception for some other sort of possibility. But I don't think just asserting that without some deeper explanation is really a solution either. We can't just assign zero probability willy-nilly.

Learning Russian Roulette

That link also provides a relatively simple illustration of such an update, which we can use as an example:

I didn't consider that illustrative of my question, because "I'm in the sleeping beauty problem" shouldn't lead to a "normal" update anyway. That said, I haven't read Anthropic Bias, so if you say it really is supposed to be only the anthropic update, then I guess it is. The definition in terms of "all else equal" wasn't very informative for me here.

To fix this issue we would need to include in your reference class whoever has the same background knowledge as

... (read more)
1 Dmitriy Vasilyuk 6d: Learning that "I am in the sleeping beauty problem" (call that E) when there are N people who aren't is admittedly not the best scenario to illustrate how a normal update is factored into the SSA update, because E sounds "anthropicy". But ultimately there is not really much difference between this kind of E and the more normal sounding E* = "I measured the CMB temperature to be 2.7K". In both cases we have:

1. Some initial information about the possibilities for what the world could be: (a) sleeping beauty experiment happening, N + 1 or N + 2 observers in total; (b) temperature of CMB is either 2.7K or 3.1K (I am pretending that physics ruled out other values already).
2. The observation: (a) I see a sign by my bed saying "Good morning, you in the sleeping beauty room"; (b) I see a print-out from my CMB apparatus saying "Good evening, you are in the part of spacetime where the CMB photons hit the detector with energies corresponding to 3.1K".

In either case you can view the observation as anthropic or normal. The SSA procedure doesn't care how we classify it, and I am not sure there is a standard classification. I tried to think of a possible way to draw the distinction, and the best I could come up with is:

Definition (?). A non-anthropic update is one based on an observation E that has no (or a negligible) bearing on how many observers in your reference class there are.

I wonder if that's the definition you had in mind when you were asking about a normal update, or something like it. In that case, the observations in 2a and 2b above would both be non-anthropic, provided N is big and we don't think that the temperature being 2.7K or 3.1K would affect how many observers there would be. If, on the other hand, N = 0 like in the original sleeping beauty problem, then 2a is anthropic. Finally, the observation that you survived the Russian roulette game would, on this definition, similarly be anthropic or not depending on who you put in th
Learning Russian Roulette

In most of the discussion from the above link, those fractions are 100% on either A or B, resulting, according to SSA, in your posterior credences being the same as your priors.

For the anthropic update, yes, but isn't there still a normal update? Where you just update on the gun not firing, as an event, rather than your existence? Your link doesn't have examples where that would be relevant either way. But if we didn't do this normal updating, then it seems like you could only learn from an observation if some people in your reference class make the opposit... (read more)

2 Dmitriy Vasilyuk 8d: You have described some bizarre issues with SSA, and I agree that they are bizarre, but that's what defenders of SSA have to live with. The crucial question is: The normal updates are factored into the SSA update. A formal reference would be the formula for P(H|E) on p.173 of Anthropic Bias, which is the crux of the whole book. I won't reproduce it here because it needs a page of terminology and notation, but instead will give an equivalent procedure, which will hopefully be more transparently connected with the normal verbal statement of SSA, such as one given in https://www.lesswrong.com/tag/self-sampling-assumption:

That link also provides a relatively simple illustration of such an update, which we can use as an example:

In this case, the reference class is not trivial, it includes N + 1 or N + 2 observers (observer-moments, to be more precise; and N = trillion), of which only 1 or 2 learn that they are in the sleeping beauty problem. The effect of learning new information (that you are in the sleeping beauty problem or, in our case, that the gun didn't fire for the umpteenth time) is part of the SSA calculation as follows:

* Call the information our observer learns E (in the example above E = you are in the sleeping beauty problem).
* You go through each possibility for what the world might be according to your prior. For each such possibility i (with prior probability P_i) you calculate the chance Q_i of having your observations E assuming that you were randomly selected out of all observers in your reference class (set Q_i = 0 if there are no such observers).
* In our example we have two possibilities: i = A, B, with P_i = 0.5. On A, we have N + 1 observers in the reference class, with only 1 having the information E that they are in the sleeping beauty problem. Therefore, Q_A = 1 / (N + 1) and similarly Q_B = 2 / (N + 2).
* We update the priors P_i based on these probabilities, the lower the chance Q_i of you having E in some possibilit
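A minimal sketch of the SSA procedure described in this reply, using its own numbers (N = one trillion outsiders, priors of 0.5 on A and B):

```python
# SSA update sketch: posterior_i is proportional to P_i * Q_i, where Q_i is the
# chance of having observation E if you were randomly sampled from the
# reference class under possibility i.
N = 10**12
prior = {"A": 0.5, "B": 0.5}
Q = {"A": 1 / (N + 1), "B": 2 / (N + 2)}   # numbers from the reply above

unnormalized = {i: prior[i] * Q[i] for i in prior}
total = sum(unnormalized.values())
posterior = {i: round(unnormalized[i] / total, 4) for i in prior}
print(posterior)   # roughly {'A': 0.3333, 'B': 0.6667} for large N
```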
Learning Russian Roulette

Hm. I think your reason here is more or less "because our current formalisms say so". Which is fair enough, but I don't think it gives me an additional reason - I already have my intuition despite knowing it contradicts them.

What if the game didn't kill you, it just made you sick? Would your reasoning still hold?

No. The relevant gradual version here is forgetting rather than sickness. But yes, I agree there is an embedding question here.

Learning Russian Roulette

In that case, after every game, 1 in 6 of you die in the A scenario, and 0 in the B scenario, but in either scenario there are still plenty of "you"s left, and so SSA would say you shouldn't increase your credence in B (provided you remove your corpses from your reference class, which is perfectly fine a la Bostrom).

Can you spell that out more formally? It seems to me that so long as I'm removing the corpses from my reference class, 100% of people in my reference class remember surviving every time so far just like I do, so SSA just does normal bayesian up... (read more)

1 Dmitriy Vasilyuk 8d: Sure, as discussed for example here: https://www.lesswrong.com/tag/self-sampling-assumption, if there are two theories, A and B, that predict different (non-zero) numbers of observers in your reference class, then on SSA that doesn't matter. Instead, what matters is what fraction of observers in your reference class have the observations/evidence you do. In most of the discussion from the above link, those fractions are 100% on either A or B, resulting, according to SSA, in your posterior credences being the same as your priors.

This is precisely the situation we are in for the case at hand, namely when we make the assumptions that:

* The reference class consists of all survivors like you (no corpses allowed!)
* The world is big (so there are non-zero survivors on both A and B).

So the posteriors are again equal to the priors and you should not believe B (since your prior for it is low).

I completely agree, it seems very strange to me too, but that's what SSA tells us. For me, this is just one illustration of serious problems with SSA, and an argument for SIA. If your intuition says to not believe B even if you know the world is small then SSA doesn't reproduce it either. But note that if you don't know how big the world is you can, using SSA, conclude that you now disbelieve the combination small world + A, while keeping the odds of the other three possibilities the same - relative to one another - as the prior odds. So basically you could now say: I still don't believe B but I now believe the world is big.

Finally, as I mentioned, I don't share your intuition, I believe B over A if these are the only options. If we are granting that my observations and memories are correct, and the only two possibilities are: I just keep getting incredibly lucky OR "magic", then with every shot I'm becoming more and more convinced in magic.
Learning Russian Roulette

Isn't the prior probability of B the sum over all specific hypotheses that imply B?

I would say there is also a hypothesis that just says that your probability of survival is different, for no apparent reason, or only for similarly stupid reasons like "this electron over there in my pinky works differently from other electrons" that are untestable for the same anthropic reasons.

7 ADifferentAnonymous 6d: Okay. So, we agree that your prior says that there's a 1/N chance that you are unkillable by Russian Roulette for stupid reasons, and you never get any evidence against this. And let's say this is independent of how much Russian Roulette one plays, except insofar as you have to stop if you die.

Let's take a second to sincerely hold this prior. We aren't just writing down some small number because we aren't allowed to write zero; we actually think that in the infinite multiverse, for every N agents (disregarding those unkillable for non-stupid reasons), there's one who will always survive Russian Roulette for stupid reasons. We really think these people are walking around the multiverse.

So now let K be the base-5/6 log of 1/N. If N people each attempt to play K games of Russian Roulette (i.e. keep playing until they've played K games or are dead), one will survive by luck, one will survive because they're unkillable, and the rest will die (rounding away the off-by-one error). If N^2 people across the multiverse attempt to play 2K games of Russian Roulette, N of them will survive for stupid reasons, one of them will survive by luck, and the rest will die.

Picture that set of N immortals and one lucky mortal, and remember how colossal a number N must be. Are the people in that set wrong to think they're probably immortals? I don't think they are.
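A small sketch of the counting argument above. The 1/N prior on being unkillable "for stupid reasons" is the comment's own setup; the particular N below is an arbitrary stand-in for a colossal number:

```python
from math import log

N = 1e30                      # assumed prior odds against being unkillable
K = log(1 / N) / log(5 / 6)   # the base-5/6 log of 1/N from the comment

def posterior_immortal(games_survived):
    lucky = (5 / 6) ** games_survived          # P(survive | ordinary mortal)
    return (1 / N) / (1 / N + (1 - 1 / N) * lucky)

print(round(K))                    # number of games at which the explanations tie
print(posterior_immortal(K))       # ~0.5: one lucky survivor per immortal
print(posterior_immortal(2 * K))   # ~1.0: N immortals per lucky survivor
```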
Learning Russian Roulette

You're going to have some prior on "this is safer for me, but not totally safe, it actually has a 1/1000 chance of killing me." This seems no less reasonable than the no-chance-of-killing-you prior.

If you've survived often enough, this can go arbitrarily close to 0.

I think that playing this game is the right move

Why? It seems to me like I have to pick between the theories "I am an exception to natural law, but only in ways that could also be produced by the anthropic effect" and "It's just the anthropic effect". The latter seems obviously more reasonable to me, and it implies I'll die if I play.

2 Donald Hobson 9d: Work out your prior on being an exception to natural law in that way. Pick a number of rounds such that the chance of you winning by luck is even smaller. You currently think that the most likely way for you to be in that situation is if you were an exception.

What if the game didn't kill you, it just made you sick? Would your reasoning still hold? There is no hard and sharp boundary between life and death.
Learning Russian Roulette

Sure, but with current theories, even after you've gotten an infinite amount of evidence against every possible alternative consideration, you'll still believe that you're certain to survive. This seems wrong.

1 ADifferentAnonymous 9d: Isn't the prior probability of B the sum over all specific hypotheses that imply B? So if you've gotten an arbitrarily large amount of evidence against all of those hypotheses, and you've won at Russian Roulette an arbitrarily high number of times... well, you'll just have to get more specific about those arbitrarily large quantities to say what your posterior is, right?
Troll Bridge

Hence, I (standing outside of PA) assert that (since I think PA is probably consistent) agents who use PA don't know whether PA is consistent, but, believe the world is consistent.

There are two ways to express "PA is consistent". The first is ¬(A∧¬A). The other is a complicated construct about Gödel-encodings. Each has a corresponding version of "the world is consistent" (indeed, this "world" is inside PA, so they are basically equivalent). The agent using PA will believe only the former. The Troll expresses the consistency of PA using provabilit... (read more)

Troll Bridge

If you're reasoning using PA, you'll hold open the possibility that PA is inconsistent, but you won't hold open the possibility that A&¬A. You believe the world is consistent. You're just not so sure about PA.

Do you? This sounds like PA is not actually the logic you're using. Which is realistic for a human. But if PA is indeed inconsistent, and you don't have some further-out system to think in, then what is the difference to you between "PA is inconsistent" and "the world is inconsistent"? In both cases you just believe everything and its negatio... (read more)

4 abramdemski 12d: Maybe this is the confusion. I'm not using PA. I'm assuming (well, provisionally assuming) PA is consistent.

If PA is consistent, then an agent using PA believes the world is consistent -- in the sense of assigning probability 1 to tautologies, and also assigning probability 0 to contradictions. (At least, 1 to tautologies it can recognize, and 0 to contradictions it can recognize.) Hence, I (standing outside of PA) assert that (since I think PA is probably consistent) agents who use PA don't know whether PA is consistent, but, believe the world is consistent.

If PA were inconsistent, then we need more assumptions to tell us how probabilities are assigned. EG, maybe the agent "respects logic" in the sense of assigning 0 to refutable things. Then it assigns 0 to everything. Maybe it "respects logic" in the sense of assigning 1 to provable things. Then it assigns 1 to everything. (But we can't have both. The two notions of "respect logic" are equivalent if the underlying logic is consistent, but not otherwise.) But such an agent doesn't have much to say for itself anyway, so it's more interesting to focus on what the consistent agent has to say for itself. And I think the consistent agent very much does not "hold open the possibility" that the world is inconsistent. It actively denies this.
Troll Bridge

If I'm using PA, I can prove that ¬(A&¬A).

Sure, that's always true. But sometimes it's also true that □(A&¬A). So unless you believe PA is consistent, you need to hold open the possibility that the ball will both (stop and continue) and (do at most one of those). But of course you can also prove that it will do at most one of those. And so on. I'm not very confident what's right, ordinary imagination is probably just misleading here.

It seems particularly absurd that, in some sense, the reason you think that is just because you think that.

The fa... (read more)

5 abramdemski 17d: I think you're still just confusing levels here. If you're reasoning using PA, you'll hold open the possibility that PA is inconsistent, but you won't hold open the possibility that A&¬A. You believe the world is consistent. You're just not so sure about PA.

I'm wondering what you mean by "hold open the possibility".

* If you mean "keep some probability mass on this possibility", then I think most reasonable definitions of "keep your probabilities consistent with your logical beliefs" will forbid this.
* If you mean "hold off on fully believing things which contradict the possibility", then obviously the agent would hold off on fully believing PA itself.
* Etc for other reasonable definitions of holding open the possibility (I claim).
Troll Bridge

Here's what I imagine the agent saying in its defense:

Yes, of course I can control the consistency of PA, just like everything else can. For example, imagine that you're using PA and you see a ball rolling. And then in the next moment, you see the ball stopping and you also see the ball continuing to roll. Then obviously PA is inconsistent.

Now you might think this is dumb, because it's impossible to see that. But why do you think it's impossible? Only because it's inconsistent. But if you're using PA, you must believe PA really might be inconsistent, so you ca... (read more)

2 abramdemski 19d: This part, at least, I disagree with. If I'm using PA, I can prove that ¬(A&¬A). So I don't need to believe PA is consistent to believe that the ball won't stop rolling and also continue rolling.

On the other hand, I have no direct objection to believing you can control the consistency of PA by doing something else than PA says you will do. It's not a priori absurd to me. I have two objections to the line of thinking, but both are indirect.

1. It seems absurd to think that if you cross the bridge, it will definitely collapse. It seems particularly absurd that, in some sense, the reason you think that is just because you think that.
2. From a pragmatic/consequentialist perspective, thinking in this way seems to result in poor outcomes.
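A tiny illustration of the claim that ¬(A&¬A) is provable without any consistency assumption, sketched in Lean rather than PA (the choice of Lean is this sketch's assumption, not something from the thread):

```lean
-- ¬(A ∧ ¬A) is a theorem of the logic itself; no appeal to Con(PA) is needed.
theorem no_contradiction (A : Prop) : ¬(A ∧ ¬A) :=
  fun h => h.2 h.1
```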
Four Motivations for Learning Normativity

A different perspective, perhaps not motivating quite the same things as yours:

Embedded Reflective Consistency

A theory needs to be able to talk about itself and its position in and effect on the world. So in particular it will have beliefs about how the application of just this theory in just this position will influence whatever it is that we want the theory to do. Then reflective consistency demands that the theory rates itself well on its objective: If I have a belief, and also a belief that the first belief is most likely the result of deception, then ... (read more)

A non-logarithmic argument for Kelly

Yeah, you're right. I just realized that what I had in mind originally already implicitly had superrationality.

A non-logarithmic argument for Kelly

I'm not sure exactly what setup you're imagining.

Defecting one round earlier dominates pure tit-for-tat, but defecting five rounds earlier doesn't dominate pure tit-for-tat. Pure tit-for-tat is better against pure tit-for-tat. So there might be a Nash equilibrium containing only strategies that play tit-for-tat until the last few rounds.

Ah, I see, so this approach differs a lot from Ole Peters'

I looked at his paper on the St Petersburg paradox and I think he gets the correct result for the iterated game. He doesn't do fractional betting, but he has a variable... (read more)

3 abramdemski 1mo: Defecting in the last x rounds is dominated by defecting in the last x+1, so there is no pure-strategy equilibrium which involves cooperating in any rounds. But perhaps you mean there could be a mixed strategy equilibrium which involves switching to defection some time near the end, with some randomization. Clearly such a strategy must involve defecting in the final round, since there is no incentive to cooperate. But then, similarly, it must involve defecting on the second-to-last round, etc. So it should not have any probability of cooperating -- at least, not in the game-states which have positive probability. Right?

I think my argument is pretty clear if we assume subgame-perfect equilibria (and so can apply backwards induction). Otherwise, it's a bit fuzzy, but it still seems to me like the equilibrium can't have a positive probability of cooperating on any turn, even if players would hypothetically play tit-for-tat according to their strategies. (For example, one equilibrium is for players to play tit-for-tat, but with both players' first moves being to defect.)
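A small payoff check of the dominance claim in this thread, under assumed specifics: standard prisoner's dilemma payoffs (T=5, R=3, P=1, S=0), a 20-round game, and "tit-for-tat except defect in the last x rounds" strategies. The helper names are made up for this sketch:

```python
# Payoffs: (my points, opponent's points) given (my move, opponent's move).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tft_defect_last(x):
    """Tit-for-tat, except defect unconditionally in the last x rounds."""
    def strat(my_hist, opp_hist, n):
        if len(my_hist) >= n - x:
            return "D"
        return "C" if not opp_hist else opp_hist[-1]
    return strat

def play(s1, s2, n=20):
    h1, h2, p1, p2 = [], [], 0, 0
    for _ in range(n):
        a, b = s1(h1, h2, n), s2(h2, h1, n)
        da, db = PAYOFF[(a, b)]
        h1.append(a); h2.append(b); p1 += da; p2 += db
    return p1, p2

tft, d1, d5 = tft_defect_last(0), tft_defect_last(1), tft_defect_last(5)
print(play(tft, tft))  # (60, 60): pure tit-for-tat against itself
print(play(d1, tft))   # (62, 57): defecting one round early beats 60 against TFT
print(play(d5, tft))   # (54, 49): defecting five rounds early falls below 60
```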
Thoughts on the Repugnant Conclusion

For me this rewording takes the teeth out of the statement

And that's why you should be careful about it. With "barely worth living", you have a clear image of what you're thinking about. With "barely meeting the criteria for me valuing them" you probably don't; you just have an inferential assurance that you are happy applying the repugnant conclusion to them. The argument for total value is not actually any stronger or weaker than it was before - you've just decided that the intuition against it ought to be rationally overridden, because the opposite is "trivially true".

A non-logarithmic argument for Kelly

but the only really arbitrary seeming part is the choice about how to order the limits.

Deciding that the probability of overtaking needs to be 0 might also count here. The likely alternative would be 0.5. I've picked 0 because I expect that normally this limit will be 0 or 1 or 0.5, and if we get other values then 0.5 might lead to intransitive comparisons - but I might rethink this if I discover a case where the limit isn't one of those three.

The tit-for-tat strategy unravels all the way back to the beginning, and we're b

... (read more)
2 abramdemski 1mo: Ah, right, good point. I missed that. 0.5 does seem like a more reasonable choice, so that we know the ordering is as fine-grained as possible.

I'm not sure exactly what setup you're imagining. If we're just thinking about Nash equilibria or correlated equilibria, there's no "initial population". If we're doing something like evolving strategies for the n-round iterated game, then things will still unravel, but it can unravel pretty slowly. A genetic algorithm would have to develop the ability to count rounds, and develop a special exception for defecting in the final round, and then later develop a special exception to defect in the second to last round, etc. So it's possible that a genetic algorithm would get stuck and effectively never unravel. If so, though, that's due to limitations in its search. (And of course this is very dependent on initial population; if it starts out with a lot of defection, it might never develop tit-for-tat in the first place.)

Ah, I see, so this approach differs a lot from Ole Peters' (which suggests logarithmic utility for St Petersburg, just like for Kelly). He studies iterated St Petersburg, though (but w/o fractional betting -- just an option to participate in the lotto, at the set price, or not). OTOH, if we use a cutoff of 1/2 rather than 0, the story might be different; there might be a finite price after which it's not worth it. Which would be interesting. But probably not, I think.
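A Monte Carlo sketch of the overtaking comparison being discussed, under assumed specifics: a repeated double-or-nothing bet with win probability 0.6, the Kelly fraction 0.2 against a fixed fraction 0.5, with both strategies evaluated on the same coin flips:

```python
import random

def wealth(fraction, flips):
    w = 1.0
    for win in flips:
        w *= 1 + fraction if win else 1 - fraction
    return w

def prob_a_ahead(frac_a, frac_b, n_rounds, trials=20000, p=0.6):
    ahead = 0
    for _ in range(trials):
        flips = [random.random() < p for _ in range(n_rounds)]
        ahead += wealth(frac_a, flips) > wealth(frac_b, flips)
    return ahead / trials

for n in (10, 100, 1000):
    # Probability that the Kelly fraction is ahead after n rounds; it drifts
    # toward 1 as n grows, so the 0-cutoff and 0.5-cutoff criteria agree here.
    print(n, prob_a_ahead(0.2, 0.5, n))
```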
A Semitechnical Introductory Dialogue on Solomonoff Induction

When we don't know how to solve a problem even given infinite computing power, the very work we are trying to do is in some sense murky to us.

I wonder where this goes with questions about infinite domains. It seems to me that I understand what it means to argmax a generic bounded function on a generic domain, but I don't know an algorithm for it and as far as I know there can't be one. So it seems taking this very seriously would lead us to some form of constructivism.

1 Optimization Process 1mo: Hmm. If we're trying to argmax some function f over the real numbers, then the simplest algorithm would be something like "iterate over all mathematical expressions e; for each one, check whether the program 'iterate over all provable theorems, halting when you find one that says e = argmax f' halts; if it does, return e."

...but I guess that's not guaranteed to ever halt, since there could conceivably be an infinite procession of ever-more-complex expressions, eking out ever-smaller gains on f. It seems possible that no matter what (reasonably powerful) mathematical language you choose, there are function-expressions with finite maxima at values not expressible in your language. Which is maybe what you meant by "as far as I know there can't be [an algorithm for it]."

(I'm assuming our mathematical language doesn't have the word argmax, since in that case we'd pretty quickly stumble on the expression argmax f, verify that argmax f = argmax f, and return it, which is obviously a cop-out.)
A non-logarithmic argument for Kelly

But you're saying we should change that game from f:Ω->[0,inf] to g:Ω,?R?->[0,inf]

No. The key change I'm making is from assigning every strategy an expected value (normally real, but including infinity as you do should be possible) to having the essential math thing be a comparison between two strategies. With your version, all we can say is that all-in has EV 0, don't bet has EV 1, and everything else has EV infinity - but by doing the comparison inside the limits, we get some more differentiation there.

R isn't distinct from Ω. The EV function "bind... (read more)

A non-logarithmic argument for Kelly

If I understand that context correctly, that's not what I'm doing. The unconventional writing doesn't pull a lim outside an EV, it replaces an EV with a lim construction. In fact, that comment seems somewhat in support of my point: he's saying that E doesn't properly represent an infinite game. And if the replacing of E with the n-lim that I'm doing works out, then that's saying that the order of limits that results in Kelly is the right one. It's similar to what I said (less formally) about expected utility maximization not re... (read more)

3 GuySrinivasan 1mo: So in the triple-your-even-odds-bet situation, the normal setup is to take the expectation of f = {(1,1,1,...): inf, otherwise: 0}, and EV(f) = 0. But you're saying we should change that game from f:Ω->[0,inf] to g:Ω,?R?->[0,inf] where ?R? is a domain I don't really understand, a "source of randomness", and then we can try many times, averaging, and take the limit?

I'm suspicious that I don't understand how the "source of randomness" actually operates with infinities and limits, and it seems like it's important to make it formal to make sure nothing's being swept under the rug. Do you have a link to something that shows how "source of randomness" is generally formalized, or if not, how you're thinking it works more explicitly?
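A small numeric illustration of the order-of-limits point in this thread, assuming the triple-your-bet, even-odds game with the all-in strategy: the n-round expectation blows up while the chance of holding anything at all vanishes.

```python
# All-in on a triple-or-nothing, 50/50 bet: after n rounds you hold 3**n with
# probability 2**-n and nothing otherwise.
for n in (1, 10, 50):
    ev_n = (3 / 2) ** n     # E[wealth after n rounds] -> infinity
    p_alive = 0.5 ** n      # P(wealth after n rounds > 0) -> 0
    print(n, ev_n, p_alive)
# lim_n E[W_n] = infinity, while E[lim_n W_n] = 0: the "all-in has EV 0" above
# corresponds to taking the limit inside the expectation.
```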
Kelly *is* (just) about logarithmic utility

I've worked out our earlier discussion of the order of limits into a response to this here.

Normativity

I now think you were right about it not solving anthropics. I interpreted afoundationalism insufficiently ambitiously; now that I have a proof-of-concept for normative semantics I can indeed not find it to presuppose an anthropics.

Learning Normativity: Language

It seems like there's some mystery process that connects observations to hypotheses about what some mysterious other party "really means"

The hypotheses do that. I said:

We start out with a prior over hypotheses about meaning. Such a hypothesis generates a probability distribution over all propositions of the form "[Observation] means [proposition]." for each observation (including the possibility that the observation means nothing).

Why do you think this doesn't answer your question?

but if this process always (sic) connects the observations to propositio

... (read more)
Learning Normativity: Language

I'm not sure what you don't understand, so I'll explain a few things in that area and hope I hit the right one:

I give sentences their English names in the example to make it understandable. Here are two ways you could give more detail on the example scenario, each of which is consistent:

  1. "It's raining" is just the english name for a complicated construct in a database query language, used to be understandable. It's connected to the epistemology module because the machine stores its knowledge in that database.
  2. Actually, you are the interpreter and I'm the spea
... (read more)
2 Charlie Steiner 2mo: I don't mean the internal language of the interpreter, I mean the external language, the human literally saying "it's raining." It seems like there's some mystery process that connects observations to hypotheses about what some mysterious other party "really means" - but if this process ever connects the observations to propositions that are always true, it seems like that gets most favored by the update rule, and so "it's raining" (spoken aloud) meaning 2+2=4 (in internal representation) seems like an attractor.
Signalling & Simulacra Level 3

I've thought about applying normativity to language learning some more; it's written up here.

Bayesian inference on 1st order logic

This is simply saying that, given we've randomly selected a truth table, the probability that every snark is a boojum is 0.4.

Maybe I misunderstand your quantifiers, but I don't think it says that. It says that for every monster, if we randomly pick a truth table on which it's a snark, the probability that it's also a boojum on that table is 0.4. I think the formalism is right here and your description of it wrong, because that's just what I would expect ∀x: P(boojum(x)|snark(x)) = 0.4 to mean.

I thi... (read more)

1 Daniel Abolafia 2mo: You are right. Thank you for the correction, and I like your description which I hope you don't mind me using (with credit) when I edit this post. My error was not realizing that P(boojum(x)|snark(x)) is the marginal probability for one particular row in the table. Even though the syntax is (hopefully) valid, this stuff is still confusing to think about!

I'm not quite sure how Chapman is interpreting these things, but what you are describing does sound like a reasonable objection for someone who interprets these probabilities to be physically "real" (whatever that means). Though Chapman is the one who chose to assert that all conditional probabilities are 0.4 in this example. I think he wants to conclude that such a "strong" logical statement as a "for-all" is nonsensical in the way you are describing, whereas something like "for 90% of x, P(boojum(x)|snark(x)) is between 0.3 and 0.5" would be more realistic.

Or you can just interpret this as being a statement about your model, i.e. without knowing anything about particular cats, you decided to model the probability that each cat is (independently) black as 40%. You can choose to make these probabilities different if you like.
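A small sketch of the quantifier distinction in this thread, assuming five monsters modeled independently with P(snark) = 0.5, P(boojum | snark) = 0.4, and (an arbitrary extra assumption) P(boojum | not snark) = 0.5: the per-monster conditional comes out to exactly 0.4, while "every snark is a boojum" is much less probable and shrinks as monsters are added.

```python
from itertools import product

n_monsters = 5
p_snark, p_b_given_s, p_b_given_not = 0.5, 0.4, 0.5

def world_prob(world):
    # A "world" assigns each monster a (snark?, boojum?) pair.
    p = 1.0
    for snark, boojum in world:
        p *= p_snark if snark else 1 - p_snark
        pb = p_b_given_s if snark else p_b_given_not
        p *= pb if boojum else 1 - pb
    return p

worlds = list(product(product([True, False], repeat=2), repeat=n_monsters))

# Per-monster reading: P(boojum(0) | snark(0)) is exactly 0.4.
num = sum(world_prob(w) for w in worlds if w[0][0] and w[0][1])
den = sum(world_prob(w) for w in worlds if w[0][0])
print(num / den)

# Universal reading: P(every snark is a boojum) = 0.7**5, well below 0.4.
print(sum(world_prob(w) for w in worlds if all(b or not s for s, b in w)))
```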
Limiting Causality by Complexity Class

The first sentence of your first paragraph appears to appeal to experiment, while the first sentence of your second paragraph seems to boil down to "Classically, X causes Y if there is a significant statistical connection twixt X and Y."  

No. "Dependence" in that second sentence does not mean causation. It just means statistical dependence. The definition of dependence is important because an intervention must be statistically independent from things "before" the intervention.

None of these appear to involve intervention.

These are methods of causal inf... (read more)

Limiting Causality by Complexity Class

Pearl's answer, from IIRC Chapter 7 of Causality, which I find 80% satisfying, is about using external knowledge about repeatability to consider a system in isolation. The same principle gets applied whenever a researcher tries to shield an experiment from outside interference.

This is actually a good illustration of what I mean. You can't shield an experiment from outside influence entirely, not even in principle, because it's you doing the shielding, and your activity is caused by the rest of the world. If you decide to only look at a part of the world, on... (read more)

1 Darmani 2mo: Causal inference has long been about how to take small assumptions about causality and turn them into big inferences about causality. It's very bad at getting causal knowledge from nothing. This has long been known.

For the first: Well, yep, that's why I said I was only 80% satisfied.

For the second: I think you'll need to give a concrete example, with edges, probabilities, and functions. I'm not seeing how to apply thinking about complexity to a type causality setting, where it's assumed you have actual probabilities on co-occurrences.
Limiting Causality by Complexity Class

What I had in mind was increasing precision of Y.

1 Measure 2mo: I guess that makes sense. Thanks for clarifying!
Limiting Causality by Complexity Class

X and Y are variables for events. By complexity class I mean computational complexity; I'm not sure what scaling parameter is supposed to be there.

2 Measure 2mo: Computational complexity only makes sense in terms of varying sizes of inputs. Are some Y events "bigger" than others in some way so that you can look at how the program runtime depends on that "size"?
Great minds might not think alike

I'm not sure if "translation" is a good word for what youre talking about. For example it's not clear what a Shor-to-Constance translation would look like. You can transmit the results of statistical analysis to non-technical people, but the sharing of results wasn't the problem here. The Constance-to-Shor translator described Constances reasons in such a way that Shor can process them, and what could an inverse of this be? Constances beliefs are based on practical experience, and Shor simply hasn't had that, whereas Constance did get "data" in a broad sen... (read more)

2 toonalfrink 3mo: A Shor-to-Constance translation would be lossy because the latter language is not as expressive or precise as the former.
Dissolving the Problem of Induction

We don't have to justify using "similarity", "resemblance"

I think you still do. In terms of induction, you still have the problem of grue and bleen. In terms of Occam's Razor, it's the problem of which language a description needs to be simple in.

2 Liron 3mo: Justifying that blue is an a-priori more likely concept than grue is part of the remaining problem of justifying Occam's Razor. What we don't have to justify is the wrong claim that science operates based on generalized observations of similarity.
Normativity

It's sort of like the difference between a programmable computer vs an arbitrary blob of matter. 

This is close to what I meant: My neurons keep doing something like reinforcement learning, whether or not I theoretically believe that's valid. "I in fact cannot think outside this" does address the worry about a merely rational constraint.

On the other hand, we do want AI to eventually consider other hardware, and that might even be necessary for normal embedded agency, since we don't fully trust our hardware even when we don't want to normal-sense-change it... (read more)

Normativity

we have an updating process which can change its mind about any particular thing; and that updating process itself is not the ground truth, but rather has beliefs (which can change) about what makes an updating process legitimate.

This should still be a strong formal theory, but one which requires weaker assumptions than usual

There seems to be a bit of a tension here. What you're outlining for most of the post still requires a formal system with assumptions within which to take the fixed point, but then that would mean that it can't change its mind about an... (read more)

7 abramdemski 5mo: It's sort of like the difference between a programmable computer vs an arbitrary blob of matter. A programmable computer provides a rigid structure which can't be changed, but the set of assumptions imposed really is quite light. When programming language designers aim for "totally self-revising systems" (languages with more flexibility in their assumptions, such as Lisp), they don't generally attack the assumption that the hardware should be fixed. (Although occasionally they do go as far as asking for FPGAs.)

(a finite approximation of) Solomonoff Induction can be said to make "very few assumptions", because it can learn a wide variety of programs. Certainly it makes less assumptions than more special-case machine learning systems. But it also makes a lot more assumptions than the raw computer. In particular, it has no allowance for updating against the use of Bayes' Rule for evaluating which program is best.

I'm aiming for something between the Solomonoff induction and the programmable computer. It can still have a rigid learning system underlying it, but in some sense it can learn any particular way of selecting hypotheses, rather than being stuck with one.

This seems like a rather excellent question which demonstrates a high degree of understanding of the proposal. I think the answer from my not-necessarily-foundationalist but not-quite-pluralist perspective (a pluralist being someone who points to the alternative foundations proposed by different people and says "these are all tools in a well-equipped toolbox") is:

The meaning of a confused concept such as "the real word for X" is not ultimately given by any rigid formula, but rather, established by long deliberation on what it can be understood to mean. However, we can understand a lot of meaning through use. Pragmatically, what "the real word for X" seems to express is that there is a correct thing to call something, usually uniquely determined, which can be discovered through investigation (EG by askin
Signalling & Simulacra Level 3

Where do these crisp ontologies come from, if (under the signalling theory of meaning) symbols only have probabilistic meanings?

There are two things here which are at least potentially distinct: The meaning of symbols in thinking, and their meaning in communication. I'd expect these mechanisms to have a fair bit in common, but specifically the problem of alignment of the speakers which is addressed here would not seem to apply to the former. So I don't think we need to wonder here where those crisp ontologies came from.

This is the type of thinking that can't

... (read more)
2 abramdemski 5mo: Good points! I'll have to think on this.
Weird Things About Money

However, how I assign value to divergent sums is subjective -- it cannot be determined precisely from how I assign value to each of the elements of the sum, because I'm not trying to assume anything like countable additivity.

This implies that you believe in the existence of countably infinite bets but not countably infinite Dutch booking processes. That seems like a strange/unphysical position to be in - if that were the best treatment of infinity possible, I think infinity is better abandoned. I'm not even sure the framework in your linked post can really... (read more)

Weird Things About Money

I'm generally OK with dropping continuity-type axioms, though, in which case you can have hyperreal/surreal utility to deal with expectations which would otherwise be problematic (the divergent sums which unbounded utility allows).

Have you worked this out somewhere? I'd be interested to see it, but I think there are some divergences it can't address. There is, for one, the Pasadena paradox, which is also a divergent sum but one which doesn't stably lead anywhere, not even to infinity. The second is an apparently circular dominance relation: Imagine you are lin... (read more)

2 abramdemski 5mo: It's a bit of a mess due to some formatting changes porting to LW 2.0, but here it is [https://www.lesswrong.com/posts/5bd75cc58225bf067037539a/generalizing-foundations-of-decision-theory-ii]. I've gotten the impression over the years that there are a lot of different ways to arrive at the same conclusion, although I unfortunately don't have all my sources lined up in one place.

* I think if you just drop continuity from VNM you get this kind of picture, because the VNM continuity assumption corresponds to the Archimedean assumption for the reals.
* I think there's a variant of Cox's theorem which similarly yields hyperreal/surreal probabilities (infinitesimals, not infinities, in that case).
* If you want to condition on probability zero events, you might do so by rejecting the ratio formula for conditional probabilities, and instead giving a basic axiomatization of conditional probability in its own right. It turns out that, at least under one such axiom system, this is equivalent to allowing infinitesimal probability and keeping the ratio definition of conditional probability. (Sorry for not having the sources at the ready.)

Here's how it works. I have to assign expectations to gambles. I have some consistency requirements in how I do this; for example, if you modify a gamble g by making a probability p outcome have v less value, then I must think the new gamble g′ is worth p⋅v less. However, how I assign value to divergent sums is subjective -- it cannot be determined precisely from how I assign value to each of the elements of the sum, because I'm not trying to assume anything like countable additivity.

In a case like the St Petersburg Lottery, I believe I'm required to have some infinite expectation. But it's up to me what it is, since there's no one way to assign expectations in infinite hyperreal/surreal sums. In a case like the Pasadena paradox, though, I'm thinking I'll be subjectively allowed to assign any expectation whatsoeve
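A small illustration of why the Pasadena game mentioned above has no stable expectation, assuming the standard Nover & Hájek setup (flip a fair coin until heads; first heads on toss n pays (-1)**(n-1) * 2**n / n, so the expectation terms are (-1)**(n-1) / n):

```python
def term(n):
    return (-1) ** (n - 1) / n

natural = sum(term(n) for n in range(1, 100001))   # partial sums -> ln 2, ~0.69

# Rearrange the same terms: two positive terms for every negative one.
pos = (term(n) for n in range(1, 10**7, 2))        # +1, +1/3, +1/5, ...
neg = (term(n) for n in range(2, 10**7, 2))        # -1/2, -1/4, ...
rearranged = 0.0
for _ in range(100000):
    rearranged += next(pos) + next(pos) + next(neg)

print(natural, rearranged)   # ~0.69 vs ~1.04: no order-independent total exists
```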
Weird Things About Money

But I still think it's important to point out that the behavioral recommendations of Kelly do not violate the VNM axioms in any way, so the incompatibility is not as great as it may seem.

I think the interesting question is what to do when you expect many more, but only finitely many rounds. It seems like Kelly should somehow gradually transition, until it recommends normal utility maximization in the case of only a single round happening ever. Log utility doesn't do this. I'm not sure I have anything that does though, so maybe it's unfair to ask it from yo... (read more)

2 abramdemski 5mo: Ah, I see, interesting. Yeah, I agree with this.

Yeah. I'm generally OK with dropping continuity-type axioms, though, in which case you can have hyperreal/surreal utility to deal with expectations which would otherwise be problematic (the divergent sums which unbounded utility allows). So while I agree that boundedness should be thought of as part of the classical notion of real-valued utility, this doesn't seem like a huge deal to me.

OTOH, logical uncertainty / radical probabilism introduce new reasons to require boundedness for expectations. What is the expectation of the self-referential quantity "one greater than your expectation for this value"? This seems problematic even with hyperreals/surreals. And we could embed such a quantity into a decision problem.