Lazy evopsych explanation: maybe people's sense of the obviousness of signals is calibrated for a small social environment where everyone knows everyone else really well?
Can you elaborate on what you see as the main features determining whether a future goes extremely well vs. just okay? And what interventions are tractable?
I think in terms of wealth, it's just because there's a lot more of them to start with
Ah yes, but why is that the case in the first place? Surely it's due to the evolutionary processes that make some cognitive styles more widespread than others. But yeah, I think it's also plausible that there is net selection pressure for this and there just hasn't been enough time (probably the selection processes are changing a lot due to technological progress as well...)
I think the results (both intellectual and financial) speak for themselves?
I mean, it still seems to be the case that people with a less philosophical style control vastly more resources/influence, and are currently using them to take what are, from your perspective, insanely reckless gambles on AGI, no? I'm saying that from an ecological perspective this is due to those cognitive styles being more useful/selected for [well, or maybe they're just "easier" to come up with and not strongly selected against] on more common "mundane" problems where less philosophical reflection is needed (abstractly, because those problems have more relevant "training data" available)
Another thing, which I wasn't sure how to fit in with the above: I framed the neglect of your "philosophizing" cognitive style as an error on the world's part, but in some cases I think this style might ultimately be worse at getting things done, even on its own terms.
Like with UDT or metaphilosophy, my reaction is "yes, we have now reached a logical terminus of the philosophizing process; it's not clear how to make further progress, so we should go back and engage with the details of the world in the hope that some of them illuminate our philosophical questions". As a historical example, consider that probability theory and computability theory arose from practical engagement with games of chance and calculations, but they seem to be pretty relevant to philosophical questions (well, to certain schools of thought anyway). More progress was perhaps made this way than could've been made by people just trying to do philosophy on its own.
Just spitballing here, but one thing that strikes me about a lot of your ideas is that they seem correct but impractical. So, for example: yes, it seems to be the case that a rational civilization would implement a long pause on AI, in a sense that's even "obvious", but in practice it's going to be very hard to convince people to do that. Or yes, in theory it might be optimal to calculate the effect of all your decisions on all possible Turing machines according to your mathematical intuition modules, but in practice that's going to be very difficult to implement. Or yes, in theory we can see that money/the state are merely an arbitrary fixed point in the things people have agreed to consider valuable, but it's gonna be hard to get people to adopt a new such fixed point.
So the question arises: why are there so few people with a similar bent towards such topics? Well, because such speculations are not in practice rewarded, because they are impractical! Of course, you can sometimes get large rewards from being right about one of these, e.g. bitcoin. But it seems like you captured a lot less of the value from that than you could've, such that the amount of resources controlled by people with your cognitive style remains small. Perhaps that's because getting the rewards from one of those large sparse payoffs still depends on a lot of practical details and luck.
Yet another way of formulating this idea might be that the theoretically optimal inference algorithm is a simplicity prior, but in practice that's impossible to implement, so people instead use approximations. In reality, most problems we encounter have a lot of hard-to-compress detail, but there is a correspondingly large amount of "data" available (learned through other people/culture, perhaps), so the optimal approximation ends up being something like interpolation from a large database of examples. But that ends up performing poorly on problems where the amount of data is relatively sparse (but for which there may be large payoffs).
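To make the contrast concrete, here's a toy sketch (entirely my own construction: the polynomial hypothesis class, the 2^-degree prior, and the noise scale are made-up stand-ins for a genuinely universal prior, which is uncomputable). Complexity-penalized inference recovers the simple hidden rule, while the "large database" strategy just interpolates stored examples:

```python
import numpy as np

# Toy contrast: simplicity-prior-style inference vs. interpolation from examples.
# Assumptions (mine, for illustration): hypotheses are polynomials of degree d,
# "complexity" is just d, and the prior weight is 2^-d -- a crude stand-in for
# a universal prior, which would enumerate all programs and is uncomputable.

rng = np.random.default_rng(0)
xs = np.linspace(0, 1, 20)
ys = 3 * xs - 1 + rng.normal(0, 0.1, xs.shape)  # hidden rule: linear + noise

def fit_mse(d):
    """Mean squared residual of the best-fit degree-d polynomial."""
    coeffs = np.polyfit(xs, ys, d)
    return np.mean((ys - np.polyval(coeffs, xs)) ** 2)

# Score ~ simplicity prior * fit likelihood; the simple (linear) hypothesis wins.
scores = {d: 2.0 ** -d * np.exp(-fit_mse(d) / 0.02) for d in range(6)}
print("simplicity-weighted pick: degree", max(scores, key=scores.get))

# The "large database" approximation: predict from the nearest stored example.
# Works fine where data is dense, degrades where it's sparse.
def nearest_neighbor(x):
    return ys[np.argmin(np.abs(xs - x))]

print("nearest-neighbor prediction at x=0.5:", nearest_neighbor(0.5))
```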
So this then raises the question of how cognitive styles that depend on large but sparse rewards can defend/justify themselves to styles that benefit from many small consistent rewards.
The "UDASSA/UDT-like solution" is basically to assign some sort of bounded utility function to the output of various Turing machines weighted by a universal prior, like here. Although Wei Dai doesn't specify that the preference function has to be bounded in that post, and he allows preferences over entire trajectories(but I think you should be able to do away with that by having another Turing machine running the first and evaluating any particular property of its trajectory)
"Bounded utility function over Turing machine outputs weighted by simplicity prior" should recover your thing as a special case, actually, at least in the sense of having identical expected values. You could have a program which outputs 1 utility with probability 2^-[(log output of your utility turing machine) - (discount factor of your utility turing machine)]. That this is apparently also the same as Eliezer's solution suggests there might be convergence on a unique sensible way to do EU maximization in a Turing-machine-theoretic mathematical multiverse.
Project Lawful exists as a story in-universe, so maybe her parents intentionally named her after the character.
but deep down, do we all really want to pay $80,000 for a 15% chance at a dream life?
Yes.
That is not what the linked graph shows. It shows that, of the women who set height filters, over 50% set a filter of greater than or equal to 6 feet.