He means that in the counterfactual world where he didn't find this book, he would have become normal.
In that case, he would have wished that his parents had not let him read this book (which is precisely what would indeed have happened).
That's interesting... Did you actually count sheep and rocks when writing this article?
Did the character you give voice to count sheep and rocks?
Usually, when I make this kind of argument, what I really mean is "If I counted 2 sheep and then 3 sheep, I would find 5 sheep", which is indeed what I expect; but that's not evidence if my cognitive process is itself in question.
Yet, I don't think it is necessary to actually count sheep and rocks when making this argument... But if I were discussing with someone who thought that 2 + 3 = 6 (or someone who thinks that either answer is meaningless), then it would be necessary to run the experiment, because we would expect different results.
I think you mean lightspeed travel?
That doesn't rule out infinite computation, though, since in an infinite universe we have a perpetually increasing amount of resources (as we explore further and further at lightspeed).
I think it's noteworthy that absolute laws are easier to respect even when what you actually want are laws with exceptions.
The correct law may be "don't kill unless it's right", but just saying "don't kill" will actually make people think twice before killing.
This reminds me a lot of existentialcomics.
Funnily enough, it seems the less meta your beliefs, the less distorted they come out of the chronophone. If you believe in God not because society said so, or because you were taught it as an infant, or because it's proper, but truly believe it for itself as an uncaused truth (not because it's your most fundamental belief), then it might just come out exactly the same. Ask deluded patients in psychiatric hospitals to talk into the chronophone, and Archimedes might learn about Jesus and Napoleon.
I don't think the two closed answers to "Have you stopped beating your wife?" have such a well-defined meaning.
Since this is natural language, I understand a "no" as meaning "I'm still beating her", and I expect most people to interpret a "no" the same way, so it's not at all obvious why this interpretation is incorrect (if we ignore that the sentence is typically used as an example of a question with no good answer; for the sake of the argument, use the less standard "Will you stop smoking soon?").
Update: nowadays, top chess engines (AlphaZero, Stockfish 13) rely on neural networks, which are basically black boxes.
It doesn't undermine your point though. NL's objection is indeed invalid.
Consider RYY: your best probabilistic guess of your own next move.
Assuming you know yourself perfectly (or at least well enough to predict your own moves reliably), RYY will turn out to be very similar to you (RYY is not deterministic if you are not, but it is still an opponent as skilled as you for all chess-related purposes).
Then, since you have shown you win against RYK, I can guess that RYY would reliably win against RYK, which I find very surprising.
Why is emulating a stronger player less efficient than emulating yourself? (It sounds more surprising to me formulated like that than the way you said it.)
The explanation I see is that you don't know Kasparov well enough to emulate him correctly (which you already pointed out), whereas you know yourself very well.
Then, the question that comes to my mind is: how can you use this knowledge to improve your play?
I have received the advice "ask what X would do in your stead" from a number of people in a number of circumstances, including from rationalists here.
How can it be useful ?
If it is helpful, then it means that your cognitive algorithm can be optimized, and that you know a specific way to improve it.
If you frequently find that wondering what someone else would do is helpful, then there is additional computation to be saved if you knew how you model that person's behavior, since you could then apply it directly instead of imitating.
So it's purely a matter of knowing yourself: the advice I had received was no better than "think about yourself".
The other question I wonder about is how that applies to artificial intelligence.
I don't know much more about it. Is the "know yourself" part important in that case? How does an AI see its own source code?
I guess the first step toward making this question meaningful would be to specify precisely how the emulator works. A naive approach (which doesn't account for style or psychology) would be to keep statistics for every position, and play at random if an unprecedented position occurs.
Then it becomes clear that there is no "default player" to emulate. There is no such thing as the emulator emulating itself, because the emulator is not a player. Bummer.
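To make that naive approach concrete, here is a minimal sketch in Python. The class and method names, and the use of plain strings for positions and moves, are my own illustrative assumptions (a real implementation would use a chess library for legality and position hashing):

```python
import random
from collections import defaultdict, Counter

class NaiveEmulator:
    """Emulates a player purely from their observed move frequencies.

    Positions and moves are plain strings here (e.g. FEN strings and
    SAN moves); no chess rules are encoded in this sketch.
    """

    def __init__(self):
        # Maps each position to a Counter of moves the player made there.
        self.stats = defaultdict(Counter)

    def observe(self, position, move):
        """Record one (position, move) pair from the player's games."""
        self.stats[position][move] += 1

    def pick_move(self, position, legal_moves, rng=random):
        """Sample a move from the observed distribution for this position,
        or play uniformly at random if the position was never seen."""
        seen = self.stats.get(position)
        if not seen:
            return rng.choice(legal_moves)
        moves, weights = zip(*seen.items())
        return rng.choices(moves, weights=weights)[0]
```

The point above then falls out of the design: the emulator holds a table of someone else's moves, but has no policy of its own to feed into that table, so "the emulator emulating itself" is not defined.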
Remember how in another post you argued a rationalist should be able to regenerate his knowledge if it were taken away?
I believe this is a similar approach to the one taken by these hypothetical Jesuits.
In fact, I see two possible ways to explain such behavior: one could ask a physics student whether Newtonian physics is truly the absolute best, expecting the student to discover relativity by themselves.
Likewise, I guess the hypothetical Jesuits could want two separate benefits out of this:
- Ensuring the student is savant/fanatic enough to join the tribe.
- Teaching the student to rediscover core beliefs of their faith by themselves, both reinforcing these beliefs and verifying their correctness.