The opener in John Psmith's review of Reentry by Eric Berger: "My favorite ever piece of business advice comes from a review by Charles Haywood of a book by Daymond John..."
I found this nesting very funny. Bravo if it was intentional.
In the "all positions" page, why is the second sentence of most summaries referring to a "detail" or "full description"? I see no way to access anything like that
It's worth highlighting that the two expectations do not condition on the same event. This explains why we can have E[A | all even] < E[B | all even] even though A ≥ B almost surely: the two "all even"s actually refer to different events.
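To make this concrete, here is a minimal Monte Carlo sketch under an assumed setup; the puzzle, variable names, and conditioning events are my guess at the context (the classic "roll a fair die until the first 6" paradox), not taken from the thread. Let A be the number of rolls until the first 6 and B the number of rolls until the first even number, so A ≥ B on every run. For A, "all even" is the rare event that every roll up to the 6 was even, which biases toward short runs; for B, the stopping roll is even by definition, so the analogous conditioning is nearly vacuous:

```python
import random

# Sketch under an assumed setup: roll a fair die until the first 6.
# A = number of rolls until the first 6
# B = number of rolls until the first even number (6 is even, so A >= B always)

def run_trial(rng):
    rolls = []
    while True:
        rolls.append(rng.randint(1, 6))
        if rolls[-1] == 6:
            break
    a = len(rolls)
    b = next(i for i, r in enumerate(rolls, 1) if r % 2 == 0)
    all_even = all(r % 2 == 0 for r in rolls)  # event: every roll up to the 6 was even
    return a, b, all_even

rng = random.Random(0)
trials = [run_trial(rng) for _ in range(200_000)]
assert all(a >= b for a, b, _ in trials)  # A >= B holds on every single run

conditioned = [a for a, _, ok in trials if ok]
print(sum(conditioned) / len(conditioned))         # E[A | all even] ~ 1.5, not 3
print(sum(b for _, b, _ in trials) / len(trials))  # E[B] ~ 2.0
```

On this run the conditional mean of A comes out near 1.5 while the mean of B is near 2.0, so the inequality flips even though A ≥ B pointwise: the conditioning events, both loosely described as "all even", are simply different events.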
He says he only cares about the learning aspect, and that AI cannot help because he isn't bottlenecked by typing speed, i.e., it would take him as much time to write the code as to read it. But isn't it easier to learn from a textbook than to figure things out yourself? Perhaps he meant that he only cares about the "figuring out" aspect.
Just to be sure, are you on the "Latest" feed? (as opposed to "Enriched" or "Recommended")
Would you say that you knew they wouldn't like the new thing, but you didn't care because it wasn't against the rules?
I would like to practice your form of relaxation. Do you have any channel suggestions?
You can try to make a prediction about what future you will think. For example, "in 2 years, I will think that working on project X was a good idea". If other people don't want to bet on those terms (since you can technically say whatever you want at the end), you can just write down predictions and later check whether your past predictions were correct.
You might object that now you don't have skin in the game, but I think you do, if you care about trying to win the game of writing down good predictions.
You need to have beliefs to test how well-calibrated your beliefs are. Studying is one way to form new beliefs. You could avoid that effort by just testing your existing beliefs.
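A minimal sketch of what checking those written-down predictions could look like once they resolve; the log structure and example entries here are hypothetical illustrations, not anything from the thread. The Brier score penalizes overconfidence, and the per-bucket hit rates show calibration directly:

```python
from collections import defaultdict

# Hypothetical log of resolved predictions: (statement, stated probability, outcome).
# The entries below are illustrative placeholders, not real predictions.
predictions = [
    ("project X was a good idea in hindsight", 0.80, True),
    ("I will still use tool Y daily",          0.60, False),
    ("paper Z gets accepted",                  0.70, True),
]

# Brier score: mean squared error between stated probability and outcome (lower is better).
brier = sum((p - float(outcome)) ** 2 for _, p, outcome in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")

# Calibration check: within each confidence bucket, the hit rate
# should roughly match the stated probability.
buckets = defaultdict(list)
for _, p, outcome in predictions:
    buckets[round(p, 1)].append(outcome)
for p, outcomes in sorted(buckets.items()):
    print(f"stated {p:.0%}: {sum(outcomes)}/{len(outcomes)} came true")
```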
Motivated by getting real-world results ≠ motivated by the status and power that often accrue from those results. The interestingness of problems does not exist in a vacuum outside of their relevance. Even in theoretical research, I think problems that lead toward resolving a major conjecture are more interesting, which could be construed as a payoff-based motivation.