Side point: this whole idea is arguably somewhat opposed to what Cal Newport in Deep Work calls the "any-benefit mindset", i.e. people's tendency to use tools whenever they can see any benefit in them (Facebook being one example, as it certainly does come with the benefit of keeping you in touch with people you would otherwise have no connection to), while ignoring those tools' hidden costs (such as the time and attention they demand). I think both ideas are worth keeping in mind when evaluating a tool's usefulness: ask yourself both whether the tool's usefulness can be deliberately increased, and whether its benefits are ultimately worth its costs.
I think it does relate to examples 2 and 3, although I would still differentiate between perfectionism in the sense of actually working on something for a long time to reach perfection on the one hand, and doing nothing because a hypothetical alternative deters you from taking any immediate action on the other. The latter is more what I was going for here.
Good point, agreed. If "pay for a gym membership" turns out to be "do nothing and pay $50 a month for it", then it's certainly worse than "do nothing at home".
I would think that code generation has a much greater appeal to people / is more likely to go viral than code review tools. The latter are certainly useful, and I'm sure they will be added to github/gitlab/bitbucket etc. relatively soon, but if OpenAI wanted to start out by building hype about their product, then generating code makes more sense (similar to how art-generating AIs are everywhere now, while very few people would care about art-critique AIs).
Can you elaborate? Were there any new findings about the validity of the contents of Predictably Irrational?
This is definitely an interesting topic, and I too would like to see continued discussion as well as more research in the area. I also think that Jeff Nobbs' articles are not a great source, as he seems to twist the facts quite a bit to support his theory. This is particularly true of part 2 of his series: looking into practically any of the linked studies, I found issues with how he summarized them. Some examples:
(note: I wrote this up from memory, so it's possible I've mixed something up in the examples above - it might be worth writing a post about it with properly linked sources)
I still think he's probably right about many things, and it's almost certainly correct that oils high in omega-6 in particular aren't healthy (which might indeed include canola oil, which I was not aware of before reading his articles). Still, he seems to be pushing an agenda to an extent that prevents him from summarizing studies accurately, which is not great. It doesn't mean he's wrong, but it does mean I won't trust anything he says without checking the sources.
I could well imagine that there are strong selection effects at play (more health-conscious people being more likely to give veganism a shot), with the positive effects of the diet simply outweighing a possible slight increase in plant oil usage. And I wouldn't even be so sure whether vegans on average consume more plant oil than non-vegans - e.g. vegans probably consume much less processed food in general, which is a major source of vegetable oil.
In The Rationalist's Guide to the Galaxy the author discusses the case of a chess game, particularly one where a strong player faces a much weaker one. In that case it's very easy to predict that the strong player will win with near certainty, even if you have no way to predict the intermediate steps. So there certainly are domains where (some) predictions are easy despite the world's complexity.
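To make the chess example concrete (my own addition, not from the book): the standard Elo expected-score formula quantifies just how lopsided such a matchup is, using only the rating difference. A minimal sketch:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score of player A against player B under the Elo model:
    E_A = 1 / (1 + 10 ** ((R_B - R_A) / 400))."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# A top grandmaster (~2800) vs. a club player (~1500): the outcome is
# nearly certain even though no individual move can be predicted.
print(round(expected_score(2800, 1500), 4))  # very close to 1.0

# Evenly matched players, by contrast, sit at 0.5.
print(expected_score(1500, 1500))  # 0.5
```

The point carries over to the prediction discussion: confident endpoint predictions don't require a move-by-move model, only a large enough capability gap.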
My personal, rather uninformed take on the AI discussion is that many of the arguments are indeed comparable to the chess example, so the predictions seem convincing despite the complexity involved. But even then they rest on certain assumptions about how AGI will work (e.g. that it will be some kind of optimization process with a value function), and I find these assumptions pretty opaque. When hearing confident claims about AGI killing humanity, then even if the arguments make sense, "model uncertainty" comes to mind. But it's hard to argue about that, since it is unclear (to me) what the "model" actually is and how things could turn out differently.
Assuming slower and more gradual timelines, isn't it likely that we run into some smaller, more manageable AI catastrophes before "everybody falls over dead" due to the first ASI going rogue? Maybe we'll be at a stage of sub-human-level AGIs for a while, during which some of the AIs clearly demonstrate misaligned behavior leading to casualties (and to general insights into what is going wrong), in turn leading to a shift in public perception. Of course it might still be unlikely that the whole globe at that point stops improving AIs and/or solves alignment in time, but it would at least push awareness and incentives somewhat in the right direction.
One could certainly argue that improving an existing system while keeping its goals fixed may be an easier (or at least different) problem than creating a system from scratch and instilling a particular set of values into it. Part of the latter problem is even finding a way to formalize the values, or knowing what the values are to begin with - both of which would already be solved for an existing system that tries to improve itself.
I would be very surprised if an AGI found no way at all to improve its capabilities without affecting its future goals.