interstice

Comments (sorted by newest)
Tomás B.'s Shortform
interstice · 2d · 52

suggest over 50% of American women filter out men below 6 feet in dating apps/sites

That is not what the linked graph shows. It shows that, of the women who set height filters, over 50% set a filter of 6 feet or greater.

Reply
People Seem Funny In The Head About Subtle Signals
interstice · 2d · 220

Lazy ecopsych explanation: maybe people's sense of the obviousness of signals is calibrated for a small social environment where everyone knows everyone else really well?

Reply
Eric Neyman's Shortform
interstice · 4d · 40

Could you elaborate on what you see as the main features determining whether a future goes extremely well vs. merely okay? And which interventions do you see as tractable?

Reply
Trying to understand my own cognitive edge
interstice · 4d · 20

I think in terms of wealth, it's just because there's a lot more of them to start with

Ah yes, but why is that the case in the first place? Surely it's due to the evolutionary processes that make some cognitive styles more widespread than others. But yeah, I think it's also plausible that there is net selection pressure for this and there just hasn't been enough time (probably the selection processes are changing a lot due to technological progress as well...).

Reply
Trying to understand my own cognitive edge
interstice · 4d · 51

I think the results (both intellectual and financial) speak for themselves?

I mean, it still seems to be the case that people with a less philosophical style control vastly more resources/influence, and are currently using them to take what are, from your perspective, insanely reckless gambles on AGI, no? I'm saying that from an ecological perspective this is due to those cognitive styles being more useful/selected-for [well, or maybe they're just "easier" to come up with and not strongly selected against] on more common "mundane" problems where less philosophical reflection is needed (abstractly, because those problems have more relevant "training data" available).

Reply
Trying to understand my own cognitive edge
interstice · 4d · 31

Another thing which I wasn't sure how to fit in with the above. I framed the neglect of your "philosophizing" cognitive style as being an error on the world's part, but in some cases I think this style might ultimately be worse at getting things done, even on its own terms.

Like with UDT or metaphilosophy, my reaction is "yes, we have now reached a logical terminus of the philosophizing process; it's not clear how to make further progress, so we should go back and engage with the details of the world in the hope that some of them illuminate our philosophical questions". As a historical example, consider that probability theory and computability theory arose from practical engagement with games of chance and calculations, but they seem to be pretty relevant to philosophical questions (well, to certain schools of thought anyway). More progress was perhaps made in this way than could've been made by people just trying to do philosophy on its own.

Reply
Trying to understand my own cognitive edge
interstice · 4d · 40

Just spitballing here, but one thing that strikes me about a lot of your ideas is that they seem correct but impractical. For example, yes, it seems to be the case that a rational civilization would implement a long pause on AI; in a sense that's even "obvious", but in practice it's going to be very hard to convince people to do that. Or yes, in theory it might be optimal to calculate the effect of all your decisions on all possible Turing machines according to your mathematical intuition modules, but in practice that's going to be very difficult to implement. Or yes, in theory we can see that money/the state are merely an arbitrary fixed-point in what things people have agreed to consider valuable, but it's going to be hard to get people to adopt a new such fixed-point.

So the question arises: why are there so few people with a similar bent towards such topics? Well, because such speculations are not in practice rewarded, because they are impractical! Of course, you can sometimes get large rewards from being right about one of these, e.g. Bitcoin. But it seems like you captured a lot less of the value from that than you could've, such that the amount of resources controlled by people with your cognitive style remains small. Perhaps that's because getting the rewards from one of those large sparse payoffs still depends on a lot of practical details and luck.

Yet another way of formulating this idea might be that the theoretically optimal inference algorithm is a simplicity prior, but in practice that's impossible to implement, so people instead use approximations. In reality most problems we encounter have a lot of hard-to-compress detail, but there is a correspondingly large amount of "data" available (learned through other people/culture, perhaps), so the optimal approximation ends up being something like interpolation from a large database of examples. But that ends up performing poorly on problems where the amount of data is relatively sparse (but for which there may be large payoffs).
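
To make the "interpolation from a large database of examples" picture concrete, here is a toy sketch (the function name, data layout, and regimes described are hypothetical, purely for illustration): a nearest-neighbour predictor that does well exactly when the stored examples densely cover the query, and badly when the relevant examples are sparse.

import numpy as np

def knn_predict(query, examples, targets, k=5):
    """Predict by averaging the targets of the k stored examples nearest to `query`.

    Works well in the dense "mundane" regime where the database covers the
    neighbourhood of the query; degrades when the nearest stored examples are
    far away (sparse-data problems), even if those are the problems with the
    largest payoffs.
    """
    dists = np.linalg.norm(examples - query, axis=1)   # distance to each stored example
    nearest = np.argsort(dists)[:k]                    # indices of the k closest examples
    return targets[nearest].mean()                     # interpolate their targets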

So this then raises the question of how cognitive styles that depend on large but sparse rewards can defend/justify themselves to styles that benefit from many small consistent rewards.

Reply
shortplav
interstice · 8d · 40

The "UDASSA/UDT-like solution" is basically to assign some sort of bounded utility function to the output of various Turing machines weighted by a universal prior, like here. Although Wei Dai doesn't specify that the preference function has to be bounded in that post, and he allows preferences over entire trajectories(but I think you should be able to do away with that by having another Turing machine running the first and evaluating any particular property of its trajectory)

"Bounded utility function over Turing machine outputs weighted by simplicity prior" should recover your thing as a special case, actually, at least in the sense of having identical expected values. You could have a program which outputs 1 utility with probability 2^-[(log output of your utility turing machine) - (discount factor of your utility turing machine)]. That this is apparently also the same as Eliezer's solution suggests there might be convergence on a unique sensible way to do EU maximization in a Turing-machine-theoretic mathematical multiverse.

Reply
On Fleshling Safety: A Debate by Klurl and Trapaucius.
interstice · 11d · 51

Project Lawful exists as a story in-universe, so maybe her parents intentionally named her after the character.

Reply
I will not sign up for cryonics
interstice · 14d · 33

but deep down, do we all really want to pay $80,000 for a 15% chance at a dream life?

Yes.

Reply
Posts

Future Fund Worldview Prize · 3 years ago
49 · Alignment Might Never Be Solved, By Humans or AI · 3y · 6 comments
15 · Will Values and Competition Decouple? · 3y · 11 comments
9 · Kolmogorov's AI Forecast (Question) · 3y · 1 comment
41 · Tao, Kontsevich & others on HLAI in Math · 3y · 5 comments
36 · What's the Relationship Between "Human Values" and the Brain's Reward System? (Question) · 4y · 17 comments
18 · Consciousness: A Compression-Based Approach · 4y · 14 comments
15 · Algorithmic Measure of Emergence v2.0 · 4y · 2 comments
5 · Advancing Mathematics By Guiding Human Intuition With AI · 4y · 0 comments
33 · NTK/GP Models of Neural Nets Can't Learn Features (Ω) · 5y · 33 comments