I think value claims are more likely to be parasitic (mostly concerned with copying themselves or participating in a memetic ensemble that's mostly copying itself) than e.g. physics claims, but I don't think you have good evidence to say "mostly parasitic".
My model is that parasitic memes that get a quick and forceful pushback from reality would face an obstacle to propagation compared to parasitic memes for which the pushback from reality is delayed and/or weak. Value claims and claims about longevity (as in your example, although I don't think those are value claims) are good examples of a long feedback cycle, so we should expect more parasites.
I took the survey. It was long but fun. Thanks for the work you've put into designing it and processing the results.
What can I say, your prior does make sense in the real world. Mine was based on the other problems featuring Omega (Newcomb's problem and counterfactual mugging), where, apart from messing with your intuitions, Omega was not playing any dirty tricks.
There's no good reason for assigning 50% probability to game A, but neither is there a good reason to assign any other probability. I guess I can say that I'm using something like a "fair Omega prior" that assumes Omega is not trying to trick me.
You and Gurkenglas seem to assume that Omega would try to minimize your reward. What is the reason for that?
You could also make a version where you don't know what X is. In this case the always-reject strategy doesn't work, since you would reject k*X in real life after the simulation rejected X. It seems like if you must precommit to one choice, you would have to accept (and get (X + X/k)/2 on average), but if you have a source of randomness, you could try to reject your cake and eat it too. If you accept with probability p and reject with probability 1 - p, your expected utility would be (p*X + (1-p)*p*k*X + p*p*X/k)/2. If you know the value of k, you can calculate the best p and see whether the random strategy is better than always-accept. I'm still not sure where this is going, though.
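A minimal numerical sketch of that calculation (my own, not from the thread), taking the expected-utility expression above as given and factoring out X so everything is per unit of X:

```python
# Randomized strategy above, per unit of X:
#   randomized:    (p + (1-p)*p*k + p*p/k) / 2
#   always-accept: (1 + 1/k) / 2
import numpy as np

def expected_utility(p, k):
    return (p + (1 - p) * p * k + p * p / k) / 2

for k in (1.5, 2.0, 10.0):
    ps = np.linspace(0.0, 1.0, 10001)
    utils = expected_utility(ps, k)
    best_p = ps[np.argmax(utils)]
    print(f"k={k}: best p={best_p:.3f}, "
          f"randomized={utils.max():.3f}, always-accept={(1 + 1 / k) / 2:.3f}")
```

Under this formula, for small k the optimum sits at p = 1 (plain always-accept), while for larger k (e.g. k = 10) a genuinely mixed strategy does noticeably better than always-accept.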
I also agree with Dagon's first paragraph. Then, since I don't know which game Omega is playing, except that either is possible, I will assign 0.5 probability to each game, calculate the expected utilities (reject -> $5000, accept -> $550), and reject.
In the general form, I will reject if k > 1/k + 1, which is the same as k*k - k - 1 > 0, or k > (1+sqrt(5))/2. Otherwise I will accept.
It seems like I'm missing something, though, because it's not clear why you chose these payoffs and not the ones that give some kind of nice answer.
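For what it's worth, a quick sanity check of that threshold (my own sketch; I'm reading the payoffs as: with X = $1000 and a 0.5/0.5 prior, rejecting gets k*X in one game and nothing in the other, while accepting gets X in one game and X/k in the other):

```python
# Decision rule above: reject iff k > 1/k + 1, i.e. k above the golden ratio.
from math import sqrt

def decide(k, X=1000):
    reject = 0.5 * k * X           # expected value of rejecting
    accept = 0.5 * (X + X / k)     # expected value of accepting
    return ("reject" if reject > accept else "accept", reject, accept)

print(decide(10))                  # ('reject', 5000.0, 550.0), as in the comment
print((1 + sqrt(5)) / 2)           # threshold on k: ~1.618
print(decide(1.5))                 # below the threshold -> accept
print(decide(1.7))                 # above the threshold -> reject
```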
Thank you, this is awesome! I've just convinced my wife to pay more attention to the LW discussion forum.
And then they judge what some high-status members of their group would say about the particular quantum mechanics conundrum. Then, they side with them about that. Almost nobody actually ponders what the Hell is really going on with Schrodinger's poor cat. Almost nobody.
I find it harder to reason about the question "what would high status people in group X say about Schrodinger's cat?" than about the question "based on what I understand about QM, what would happen to Schrodinger's cat?". I admit that I suck at modelling other people, but how many people are actually good at it?
Not to say that belief signalling doesn't happen. After all, in many cases you just know what the high-status people say, since they, well, said it.
Thank you for more bits of information that answer my original question in this thread. You have my virtual upvote :)
After reading a bit more about meta-rationality and observing how my perspective changes when I try to think this way, I've come to the opinion that the "disagreement on priorities", as I originally called it, is more significant than I first acknowledged.
To give an example, if one adopts the science-based map (SBM) as the foundation of their thinking for most practical purposes and only checks the other maps when the SBM doesn't work (or when modelling other people), they will see the world differently from a person who routinely tries to adopt multiple different perspectives when exploring every problem they face. Even though technically their world views are the same, the different priorities (given that both have bounded computational resources) will lead them to exploring different parts of the solution space and potentially finding different insights. The differences can accumulate through updating in different directions, so, at least in theory, their world views can drift apart to a significant degree.
... the genesis of the meta-rationalist epistemology is that the map is part of the territory, and thus the map is constrained by the territory and not by an external desire for correspondence or anything else.
Again, even though I see this idea as being part (or a trivial consequence) of LW-rationality, focusing your attention on how your map is influenced by where you are in the territory gives new insights.
So my current takeaways are: as rationalists who agree with meta-rationalists on (meta-)epistemological foundations, we should consider updating our epistemological priorities in the direction they are advocating; if we can figure out ways to formulate meta-rationalist ideas in a less inscrutable way, with less nebulosity, we should do so -- it will benefit everyone; and we should look into what meta-rationalists have to say about creativity / hypothesis generation -- perhaps it will help with formulating a general high-level theory of creative thinking (and if we do that in a way that's precise enough to be programmed into computers, that would be pretty significant).
In many parts of Europe nobody has to work 60-hour weeks just to send their kids to a school with a low level of violence. A bunch of people don't work at all, and still their kids seem to have all their teeth in place and get some schooling. I'm not sure what we did here that the US is failing to do, but I notice that the described problem of school violence is a cultural problem -- it's related to poverty, but is not directly caused by it.