kvas


I took the survey. It was long but fun. Thanks for the work you've put into designing it and processing the results.

What can I say, your prior does make sense in the real world. Mine was based on the other problems featuring Omega (Newcomb's problem and counterfactual mugging), where, apart from messing with your intuitions, Omega was not playing any dirty tricks.

There's no good reason for assigning 50% probability to game A, but neither is there a good reason to assign any other probability. I guess I can say that I'm using something like a "fair Omega" prior that assumes Omega is not trying to trick me.

You and Gurkenglas seem to assume that Omega would try to minimize your reward. What is the reason for that?

You could also make a version where you don't know what X is. In this case the always-reject strategy doesn't work, since you would reject k*X in real life after the simulation rejected X. It seems like if you must precommit to one choice, you would have to accept (and get (X + X/k)/2 on average), but if you have a source of randomness, you could try to reject your cake and eat it too. If you accept with probability p and reject with probability 1 - p, your expected utility would be (p*X + (1-p)*p*k*X + p*p*X/k)/2. If you know the value of k, you can calculate the best p and see whether the random strategy beats always-accept. I'm still not sure where this is going, though.
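As a sanity check, here's a minimal Python sketch of that calculation. The payoff formula is taken verbatim from the paragraph above; the values X = 1 and k = 10 are purely illustrative assumptions, not from the original post.

```python
# Expected utility of accepting with probability p, using the formula above:
# (p*X + (1-p)*p*k*X + p*p*X/k) / 2. X and k are illustrative placeholders.
def expected_utility(p, X=1.0, k=10.0):
    return (p * X + (1 - p) * p * k * X + p * p * X / k) / 2

# Scan p in small steps to find the best mixed strategy and compare it
# with always-accept (p = 1).
best_p = max((i / 1000 for i in range(1001)), key=expected_utility)
print(f"best p ~ {best_p:.3f}, EU ~ {expected_utility(best_p):.3f}")
print(f"always accept: EU = {expected_utility(1.0):.3f}")
```

With those illustrative numbers the best p comes out around 0.56, with an expected utility of roughly 1.53*X versus 0.55*X for always-accept, which is what makes the randomized strategy look interesting.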

I also agree with Dagon's first paragraph. Then, since I don't know which game Omega is playing, except that either is possible, I will assign 0.5 probability to each game, calculate the expected utilities (reject -> $5000, accept -> $550), and reject.

In the general form, I will reject if k > 1/k + 1, which is the same as k*k - k - 1 > 0, or k > (1+sqrt(5))/2. Otherwise I will accept.
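For what it's worth, here's a small Python sketch of that comparison under a 50/50 prior. The payoffs X = $1000 and k = 10 are my guess at the original post's values (they reproduce the $5000 and $550 figures above), so treat them as assumptions.

```python
from math import sqrt

X, k = 1000, 10  # assumed payoffs; they reproduce the $5000 / $550 figures

# 50/50 prior over the two games: rejecting pays k*X in one game and
# nothing in the other; accepting pays X in one game and X/k in the other.
eu_reject = 0.5 * k * X           # -> 5000.0
eu_accept = 0.5 * (X + X / k)     # -> 550.0

# Reject beats accept iff k*X > X + X/k, i.e. k > 1 + 1/k,
# i.e. k > (1 + sqrt(5)) / 2 (the golden ratio).
threshold = (1 + sqrt(5)) / 2
print(eu_reject, eu_accept, round(threshold, 3))  # 5000.0 550.0 1.618
```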

It seems like I'm missing something, though, because it's not clear why you chose these payoffs and not the ones that give some kind of nice answer.

Thank you, this is awesome! I've just convinced my wife to pay more attention to LW discussion forum.

And then they judge what some high-status members of their group would say about the particular quantum mechanics conundrum. Then they side with them on it. Almost nobody actually ponders what the hell is really going on with Schrödinger's poor cat. Almost nobody.

I find it harder to reason about the question "what would high-status people in group X say about Schrödinger's cat?" than about the question "based on what I understand about QM, what would happen to Schrödinger's cat?". I admit that I suck at modelling other people, but how many people are actually good at it?

Not to say that belief signalling doesn't happen. After all, in many cases you just know what the high-status people say, since they, well, said it.

Thank you for more bits of information that answer my original question in this thread. You have my virtual upvote :)

After reading a bit more about meta-rationality and observing how my perspective changes when I try to think this way, I've come to the opinion that the "disagreement on priorities", as I originally called it, is more significant than I originally acknowledged.

To give an example, if one adopts the science-based map (SBM) as the foundation of their thinking for most practical purposes and only checks the other maps when the SBM doesn't work (or when modelling other people), they will see the world differently from a person who routinely tries to adopt multiple different perspectives when exploring every problem they face. Even though technically their world views are the same, the different priorities (given that both have bounded computational resources) will lead them to explore different parts of the solution space and potentially find different insights. The differences can accumulate through updating in different directions, so, at least in theory, their world views can drift apart to a significant degree.

... the genesis of the meta-rationalist epistemology is that the map is part of the territory, and thus the map is constrained by the territory and not by an external desire for correspondence or anything else.

Again, even though I see this idea as being part (or a trivial consequence) of LW-rationality, focusing your attention on how your map is influenced by where you are in the territory gives new insights.

So my current takeaways are: as rationalists who agree with meta-rationalists on (meta-)epistemological foundations, we should consider updating our epistemological priorities in the direction they are advocating; if we can figure out ways to formulate meta-rationalist ideas in a less inscrutable way, with less nebulosity, we should do so -- it will benefit everyone; and we should look into what meta-rationalists have to say about creativity / hypothesis generation -- perhaps it will help with formulating a general high-level theory of creative thinking (and if we can do that in a way that's precise enough to be programmed into computers, it would be pretty significant).

You are steelmanning the rationalist position

That could very well be. I had the impression that meta-rationalists were arguing against a strawman, but that would just mean we disagree about the definition of "rationalist position".

I agree that one-true-map rationalism is rather naive and that there are many people who hold this position, but I haven't seen much of it on LW. Actually, LW contains the clearest description of the map/territory relationship that I've seen, with no nebulosity or any of that stuff.

Ok, I think I get it. So basically, pissing contests aside, meta-rationalists should probably just concede that LW-style rationalists are also meta-rational and have a constructive discussion about better ways of thinking (I've actually seen a bit of this, for example in the comments to this post).

Judging from the tone of your comment, I gather that that's the opposite of what many of them are doing. Well, that doesn't really surprise me, but it's kind of sad.
