Viliam

Comments

Viliam's Shortform

Moving a comment away from the article it was written under, because frankly it is mostly irrelevant, but I put too much work into it to just delete it.

But occasionally I hear: who are you to give life advice, your own life is so perfect! This sounds strange at first. If you think I’ve got life figured out, wouldn’t you want my advice?

How much of your life is determined by your actions, and how much by forces beyond your control, is an empirical question. You seem to believe it's mostly your actions. I am not trying to disagree here (I honestly don't know), just saying that people may legitimately hold either model, or a mix of both.

If your model is "your life is mostly determined by your actions", then of course it makes sense to take advice from people who seem to have it best, because those are the ones who probably made the best choices, and can teach you how to make them, too.

If your model is "your life is mostly determined by forces beyond your control", then the people who have it best are simply the lottery winners. They can teach you that you should buy a ticket (which you already know has 99+% probability of not winning), plus a few irrelevant things they did which didn't have any actual impact on winning.

The mixed model "your life is partially determined by your actions, and partially by forces beyond your control" is trickier. On one hand, it makes sense to focus on the part that you can change, because that's where your effort will actually improve things. On the other hand, it is hard to say whether people who have better outcomes than you achieved them by superior strategy or by superior luck.

Naively, a combination of superior strategy and superior luck should bring the best outcomes, and you should still learn the superior strategy from the winners, but you should not expect to get the same returns. Like, if someone wins a lottery, and then lives frugally and puts all their savings in index funds, they will end up pretty rich. (Richer than people who won the lottery and then wasted the money.) It makes sense to live frugally and put your savings in index funds, even if you didn't win the lottery. You should expect to end up rich, although not as rich as the person who won the lottery first. So, on one hand, follow the advice of the "winners at life", but on the other hand, don't blame yourself (or others) for not getting the same results; with average luck you should expect some reversion to the mean.

But sometimes the strategy and luck are not independent. The person with superior luck wins the lottery, but the person with superior strategy who optimizes for the expected return would never buy the ticket! Generally, the person with superior luck can win at life because of doing risky actions (and getting lucky) that the person with superior strategy would avoid in favor of doing something more conservative.

So the steelman of the objection in the mixed model would be something like: "Your specific outcome seems to involve a lot of luck, which makes it difficult to predict what would be the outcome of someone using the same strategy with average luck. I would rather learn strategy from successful people who had average luck."

A toy model to illustrate my intuition about the relationship between strategy and luck:

Imagine that there are four switches called A, B, C, D, and you can put each of them into position "on" or "off". After you are done, switches A, B, C, D in the "on" position give you +1 point with probability 20%, 40%, 60%, 80% respectively, and -1 point with probability 80%, 60%, 40%, 20% respectively. A switch in the "off" position always gives you 0 points. (The points are proportional to utility.)

Also, let's assume that most people in this universe are risk-averse, and only set D to "on" and the remaining three switches to "off".

What happens in this universe?

The entire genre of "let's find the most successful people and analyze their strategy" will insist that the right strategy is to turn all four switches to "on". Indeed, there is no other way to score +4 points.

The self-help genre is right about turning on switch C, but wrong about switches A and B. Neither the conservative people nor the contrarians get the answer right.

The optimal strategy -- setting A and B to "off", C and D to "on" -- provides an expected result of +0.8 points. The traditional D-only strategy provides an expected result of +0.6 points, which is not too different. On the other hand, the optimal strategy makes it impossible to get the best outcome; with the best luck you score +2 points, which is quite different from the +4 points advertised by the self-help genre. This means the optimal strategy will probably fail to impress the conservative people, and the contrarians will just laugh at it.
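
A minimal Python sketch of this toy model, just to make the arithmetic explicit (it assumes the four switches resolve independently, as described above; the variable names are mine):

from math import prod

# Probability that each switch, when set to "on", pays +1 (otherwise it pays -1).
# A switch set to "off" always pays 0.
p_plus = {"A": 0.2, "B": 0.4, "C": 0.6, "D": 0.8}

def expected_points(switches_on):
    # Each "on" switch contributes p*(+1) + (1-p)*(-1) = 2p - 1 to the expected score.
    return sum(2 * p_plus[s] - 1 for s in switches_on)

for label, switches in [("all four on", "ABCD"), ("D only", "D"), ("C and D on", "CD")]:
    print(f"{label}: expected {expected_points(switches):+.1f} points, best case +{len(switches)}")
# expected values: all four on ~0.0, D only +0.6, C and D +0.8

print(f"chance of the +4 jackpot with all four on: {prod(p_plus.values()):.1%}")  # about 3.8%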

It will probably be quite difficult to distinguish between switches B and C. If most people you know personally set both of them to "off", and the people you know from self-help literature set both of them to "on" and got lucky at both, you have few data points to compare; the difference between 40% and 60% may not be large enough to empirically determine that one of them is a net harm and the other is a net benefit.
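
A small sketch of how weak that evidence is (the sample sizes here are made up purely for illustration): it computes the chance that switch B, with its 40% payoff rate, produces at least as many +1 results as switch C, with its 60% rate, when you only get to observe each of them n times.

from math import comb

def binom_pmf(n, k, p):
    # Probability of exactly k successes in n independent trials.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def chance_B_looks_no_worse(n, p_b=0.4, p_c=0.6):
    # Probability that B shows at least as many +1 results as C in n trials each.
    return sum(binom_pmf(n, kb, p_b) * binom_pmf(n, kc, p_c)
               for kb in range(n + 1) for kc in range(kb + 1))

for n in (3, 5, 10, 30):
    print(n, round(chance_B_looks_no_worse(n), 2))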

(Of course, whatever your beliefs are, it is possible to build a model where acting on them is optimal, so this doesn't prove much. It just illustrates why I believe that it is possible to achieve better-than-usual outcomes, and also why it is a bad idea to follow the people with extremely good outcomes, even if they are right about some of the things most people are wrong about. I believe that in reality the impact of your actions is much greater than in this toy model, but the same caveats still apply.)

Against Victimhood

Yep, I was just nitpicking about literally two lines from the entire article. Guess they triggered me somehow.

Humbled by how nicely you pointed this out, I moved the comment away. Thank you!

otto.barten's Shortform

Technically, tiling the entire universe with paperclips or tiny smiling faces would probably count as modern art...

Against Victimhood

EDIT: Moved this comment to my shortform, because it was nitpicking mostly irrelevant to the article. Sorry about that.

Against Victimhood

Quite interesting, how all these different worldviews converge on that one :)

Maybe a religion that wants to appeal to people with a modern sense of justice (i.e. those not satisfied with "the ingroup goes to heaven, the outgroup goes to hell, exactly as you would wish, right?") has no better option than to take the just-world hypothesis and dress it up in religious terms.

What are examples of simpler universes that have been described in order to explain a concept from our more complex universe?

Relevant comment here:

I think Wolfram's "theory" is complete gibberish. Reading through "some relativistic and gravitational properties of the Wolfram model" I haven't encountered a single claim that was simultaneously novel, correct and non-trivial...

What Does "Signalling" Mean?

For example, a bird performing an impressive mating display signals that it is healthy and has good genes.
But we already have a term for signalling desirable properties about yourself: virtue signalling!

I don't understand the objection.

Virtue signalling is a subset of signalling. Specifically, it is signalling of moral virtues.

Therefore, a bird signalling health and good genes is not virtue signalling (but it is signalling in general), because health and good genes are usually not considered to be moral virtues.

In some of these cases, mere assertion goes a long way. [...] In other cases, mere assertion doesn't work.
I'll charitably assume that he meant both cases to be types of signalling.

I think Scott got this right, but you misunderstood it.

X is a signal of Y if seeing X makes Y more likely. In some cases, mere assertions do that; in other cases, they don't.

For example, saying "I read Less Wrong" is a signal of reading Less Wrong, because people who read Less Wrong are more likely to say that they read Less Wrong. However, saying "I am not a criminal" is not a signal of not being a criminal, because criminals also say it a lot.

It's not about what the words mean, it's about what they correlate with. Sometimes the act of speaking the words correlates with their literal meaning (not lying, or lying rarely). Sometimes the act of speaking the words has almost zero correlation with their literal meaning (lying almost always).
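
A quick sketch of that idea in probability terms (the specific numbers here are made up purely for illustration): an assertion is informative only to the extent that the people for whom it is true say it more often than the people for whom it is false.

def posterior(prior, p_say_if_true, p_say_if_false):
    # P(Y | the person asserts X), by Bayes' rule.
    p_say = prior * p_say_if_true + (1 - prior) * p_say_if_false
    return prior * p_say_if_true / p_say

# "I read Less Wrong": readers say it fairly often, non-readers almost never,
# so hearing it moves the probability a lot (from 0.1% to roughly a third here).
print(posterior(prior=0.001, p_say_if_true=0.5, p_say_if_false=0.001))

# "I am not a criminal": criminals say it about as often as everyone else,
# so hearing it leaves the probability essentially unchanged.
print(posterior(prior=0.95, p_say_if_true=0.99, p_say_if_false=0.99))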

.

I agree with your third objection, that Less Wrong uses signalling in the narrower sense (about the agent), because that is how Robin Hanson typically uses it, and most of us were probably introduced to the concept by him.

(I am not sure whether Robin never used signalling in the wider sense, or whether he did and we just didn't notice.)

Low hanging fruits (LWCW 2020)

As an alternative to OneNote, I suggest trying Cherrytree. It is open-source and works on Windows, Linux, and Mac.

4thWayWastrel's Shortform

Given the power of mindkilling, the result could easily be an army of ex-altruistic ex-rationalists in politics. (Which wouldn't necessarily be worse than the current state of politics, it just wouldn't be the expected improvement.)

It's not like I have a better plan, though. I was thinking along the lines of "suppose that a certain fraction of politicians will be responsible, and will seek advice among the experts... look at what algorithm they use to pick their advisors... and position yourself so that they pick you".

But I suspect the algorithm would be something like "choose the most visible people already working in the domain you want to improve". In which case my advice reduces to "the 'life hack' to improve domain X is to spend your life working in domain X and become successful and famous", which sounds like doing things the hard way and being sufficiently lucky. (Maybe that is the optimal answer, dunno.)

The only point of intervention I see here is that we could notice the people who are doing the right thing, and try making them more visible, e.g. by writing articles about how they are doing the right thing. Which might slightly increase their chances of being picked as an advisor, compared to a person who is doing the wrong thing but is good at climbing the hierarchy, and so from the outside seems like an equally qualified expert. In other words, instead of trying to place rationalists into domain X, just find people already in domain X who are relatively more rational than average, and try giving them more visibility.

Another potentially interesting project would be to create and publish a compilation of "rational policies on everything", and allow politicians to steal the ideas from the book. Let your memes travel farther than you can. The question is whether we could even compile such a book. Because it's not just about technical answers, but also about choosing your values. Often the choice is not between a policy that is "good" or "bad", but between "better for X and worse for Y" and "better for Y and worse for X". Even the obviously bad choices usually have someone who derives some small benefit from the status quo.
