If it's all about prediction, why do poor teams still have fans?
2 years later, I'd still be interested in your model if you're willing to share it.
I can't shake the feeling that throughout the book Sowell tries to make a case for a more right-wing/free-market point of view without admitting it, albeit in the most eloquent manner.
Did you find any of his political claims to be dubious?
FYI this link doesn't go anywhere
Here's a link to the book's Goodreads page
I really like the idea of doing a pre-mortem here.
Suppose you and I have two different models, and my model is less wrong than yours. Suppose my model assigns a 40% probability to event X and your model assigns 60%; we disagree and bet, and event X happens. If I had an oracle over the true distribution of X, my write-up would consist of saying "this falls into the 40% of cases, as predicted by my model", which doesn't seem very useful. In the absence of an oracle, I would end up writing up praise for, and updating towards, your more wrong model, which is obviously not what we want.
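To make that concrete with a small worked calculation (my own numbers for the update, not from any actual bet): a single observation of X moves the odds toward whichever model assigned X more probability, regardless of which model is actually better calibrated. Starting from even prior odds,

\[
\frac{P(\text{your model}\mid X)}{P(\text{my model}\mid X)}
= \frac{P(X\mid \text{yours})}{P(X\mid \text{mine})}\cdot\frac{P(\text{yours})}{P(\text{mine})}
= \frac{0.6}{0.4}\cdot 1 = 1.5,
\]

so after the bet resolves I "should" favor your model 3:2. Only over many repeated predictions does the better-calibrated model win on average, which is why a single-case write-up is so hard to make useful.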
T...
Interesting about ultralearning; I will need to look into that in more detail at some point. Without spaced repetition/incremental reading, that looks like the best method of learning to me.
His book touches on spaced repetition (he's a big proponent of the testing effect) and other things. It's really about how to put together effective learning projects, from the research phase, through execution.
Regarding SuperMemo, yes, I use the software and incremental reading extensively (if you have an interest in learning it, I would happily teach you).
I am int...
1. you know what you don't know, so if you need some prerequisite information you can find it for yourself (in large part thanks to the internet)
2. teaching is centered around the idea that a teacher knows better than you do what you should know. In many cases, I don't think this makes much sense. If I want to learn how to make some specific thing x, getting a general education in the field x falls into (field y) doesn't make sense. Learning a bunch of useless things in field y is a waste of my time. If I'm deciding what to learn by myself, I can make...
I think that's a bit of a shame because I personally have found LW-style thinking useful for programming. My debugging process has especially benefited from applying some combination of informal probabilistic reasoning and "making beliefs pay rent", which enabled me to make more principled decisions about which hypotheses to falsify first when finding root causes.
As someone who landed on your comment specifically by searching for what LW has said about software engineering in particular, I'd love to read more about your methods, experiences, and thoughts on the subject. Have you written about this anywhere?
Thanks for this, it's a good unifying summary of systemization that I felt was valuable in addition to the Systemization chapter in the CFAR Handbook.
Another thing that falls into the 'spend your money to conserve attention' category is hiring a personal assistant. A fellow CFAR alum convinced me to try it out, and it's definitely effective. I fell out of using my PA, but that's something I want to revisit, possibly when I have more money.
Automatically donate money.
This might be bad because Giving Tuesday exists.
Is this out o...
Yes, I used Anki in college for a range of different courses. It made memorization-based courses (art history) an absolute breeze, and helped me build my conceptual tower for advanced math courses. Spaced repetition is quite useful for remembering things. I recommend reading this article by Michael Nielsen, alongside the comprehensive reference from Gwern.
I'm skeptical of the value of Readwise, because it is so passive. I think part of the value of using SRS programs like Anki comes from formulating good questions and structuring your knowledge into a...
Murray has a new book out, Human Diversity, so that may be a good place to start.
Thank you for writing such a clear article on the issue. It cleared up my confusion around the EMH, especially how it differs from the random walk hypothesis. I'll definitely reference this article when people bring up the EMH.
specifically focused on doing planks, an exercise that's far more intellectually challenging than physically challenging.
How are planks intellectually challenging? They certainly present a great physical challenge, so this is an interesting claim.
If, however, you've developed more stoic thinking patterns and tell yourself "I made a mistake, but that's already happened, so instead of regretting it I'm going to focus on what I can do to avoid that mistake in the future", you'll also likely have body language and speech that don't communicate regret in the same way. Sometimes people will recognize that you are still aware of your mistake but are approaching it from a different angle, especially if they already know you, but don't count on it.
T...
As a student, did you experience any particular frustrations with this approach?
Retweet Trump with comment.
What is the error that you're implying here?
A simple example is debugging code: a gears-level approach is to try to understand what the code is doing and why it doesn't do what you want; a black-box approach is to try changing things somewhat randomly.
To drill in further: a great way to build a model of why a defect arises is to apply the scientific method. You generate a hypothesis about the behavior of your program (if X is true, then Y) and then test it. If the test invalidates the hypothesis, you've learned something about your code and about where not to look. If the hypothesis is confirmed, you may be able to resolve the issue, or at least refine the hypothesis in the right direction.
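As a hedged illustration (the function, numbers, and bug scenario below are invented, not from the original post), that loop of "if X is true, then Y" hypotheses can look roughly like this in code, with each hypothesis tested by the cheapest check that could falsify it:

```python
# Hypothetical sketch of hypothesis-driven debugging; the scenario (a wrong
# order total), the function, and the numbers are invented for illustration.

def compute_total(prices, discount):
    subtotal = sum(prices)
    return subtotal * (1 - discount)

prices = [19.99, 5.00, 5.00]
discount = 0.1

# Hypothesis 1: "If the summation is broken, the subtotal won't be 29.99."
subtotal = sum(prices)
print("summation correct:", abs(subtotal - 29.99) < 1e-9)
# True falsifies hypothesis 1: the summation is exonerated, so look elsewhere.
# False would localize the defect to the summation.

# Hypothesis 2: "If the discount step is broken, the total won't be 26.991."
total = compute_total(prices, discount)
print("discount step correct:", abs(total - 26.991) < 1e-9)
# Again, the result either rules this region out or narrows the search to it.
```

Each check that comes back clean rules a region of the code out; each check that fails localizes the defect, which is the gears-level payoff described above.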
There is some irony in the author's insistence that Musk is excellent because of his exceptional software, not his hardware. How could the author possibly know this, or separate out the effect of Musk's raw intellectual horsepower from that of his critical reasoning skills?
I did find this post quite inspirational, although I do wonder how the author came up with the Want box / Reality box / Strategy box model. It doesn't seem like Musk explicitly gave this model to the author.
What have you found in your experiments, in terms of what helps or hurts in developing DDO culture?