I implemented this two days ago, and I’m already seeing incredible results in some areas! Not much progress in productivity, since I had already tried most of these ideas before, but I did experiment with buying different products for my lunch, found something better, and also made progress on something that takes a lot of context to explain.
(This comment doesn’t share any new important information about the technique, but it’s still important to write comments like this to support the authors. It’s hard to keep creating when all you hear is criticism.
I think of comments like this as a kind of reward for good behavior, in the behaviorist sense.)
I'm curious about useful topics and uncurious about useless ones. I find that better than your proposition.
So, once a community reaches 240, it will have an incentive not to grow any further, to avoid drama and the decrease in efficiency that comes with scale? How would you prevent that?
Excellent point!
I don’t mean that the probability is always 50/50. But it’s not 100% either.
In Europe, the smartest people for centuries believed in god, and they saw endless confirmations of that belief. And then—bam! It turned out they were simply all wrong.
Or take any case of ancient medicine. European doctors believed for centuries that bloodletting cured everything, while Chinese doctors believed that eating lead prolonged life.
There are also other examples where all the experts were wrong: geocentrism, the ether theory, the idea that mice spontaneously generate in dirty laundry, the miasma theory of disease…
In all these cases, the cause was either cognitive biases (God, medicine) or a lack of information and broken public discourse (geocentrism).
Today we fight biases much better than a thousand years ago, but we’re still far from perfect.
And we still sometimes operate under very limited information.
I think one should have fundamental rational habits that protect against being so sure about God or bloodletting. That’s why, from any conclusion I reach, I subtract a few percentage points of confidence. The more complex the conclusion, and the more speculative or bias-prone my reasoning, the more I subtract.
If you claim that my way of fighting this overconfidence shouldn’t be used, I’d want you to suggest something else instead. Because you can’t just leave it as it is—otherwise one might assign 99% confidence to some nonsense.
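Roughly, the habit looks like this (a toy sketch in code; the function and the exact discount sizes are made up for illustration, not a claim about the right amounts):

```python
# Toy sketch of the confidence-discount habit described above.
# The per-step and speculation penalties are invented for illustration.

def discounted_confidence(raw_confidence, inference_steps, speculative=False):
    """Shave off a few points per reasoning step, more if the reasoning is shaky."""
    penalty = 0.02 * inference_steps      # assumed ~2 points per step
    if speculative:
        penalty += 0.05                   # assumed extra haircut for speculative reasoning
    return max(0.0, raw_confidence - penalty)

# A simple, direct conclusion keeps most of its confidence:
print(round(discounted_confidence(0.99, inference_steps=1), 2))                    # 0.97
# A long, speculative chain of reasoning never gets to claim 99%:
print(round(discounted_confidence(0.99, inference_steps=5, speculative=True), 2))  # 0.84
```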
Interesting model. You’re probably right, and I hadn’t considered this because my friends and I are not idiots.
No. You won’t see yourself winning the lottery when you wake up.
There is the real you. You may create copies of yourself, but they are still just copies.
Let’s suppose Eliezer starts this experiment: the universe splits into 10,000,000 copies, and in one of them he wins and creates a trillion copies of himself.
So, there are 10,000,000 actual Eliezers — most of whom are in universes where he didn’t win — but there are also a huge number of copies of the winning Eliezer.
Or, if Eliezer’s consciousness continues only into one future universe and doesn’t somehow split 10,000,000 ways, then in most of the universes where his consciousness could go, he didn’t win the lottery.
Since your clones are not you, and you don’t feel what your clones feel, I don’t think the number of clones created really matters.
You might think that there could be many versions of you sharing your consciousness — that they are all “you” — but consciousness is a result of physical processes. So I don’t think it can teleport from one dimension to another. Therefore, since most of the real/original Eliezers exist in universes where he lost, he would wake up to find that he lost.
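To make the counting explicit, here is a toy tally of the argument (the numbers come from the thought experiment above; which of the two answers you accept depends entirely on whether you count copies as “you”):

```python
# Toy tally of the lottery thought experiment described above.
originals = 10_000_000             # one "real" Eliezer per branch after the split
winners_among_originals = 1        # he wins the lottery in exactly one branch
copies_of_winner = 10**12          # the trillion copies the winner then creates

# If every copy counts as equally "you", almost all observers are winners:
winner_observers = winners_among_originals + copies_of_winner
all_observers = originals + copies_of_winner
print(winner_observers / all_observers)        # ~0.99999

# If only the original, non-copied Eliezers count as "you", almost all lost:
print(winners_among_originals / originals)     # 1e-07
```

On my view, only the second number describes what he should expect to see when he wakes up.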