A flip side of this analysis is that the detrimental effects of the aforementioned cognitive distortions might be much higher than is usually supposed or realized, perhaps sometimes causing multi-year/decade delays in important approaches and conclusions, and can't be overcome by others even with significant IQ advantages over me. This may be a crucial strategic consideration, e.g., implying that the effort to reduce x-risks by genetically enhancing human intelligence may be insufficient without other concomitant efforts to reduce such distortions.
Since I have been working on germline engineering, I have been thinking about the same thing. My intuition is that if I could magically increase everyone's IQ by 5 points, that would result in a marginally saner world. But creating a few babies with ~160+ IQs doesn't seem obviously beneficial. Even if three out of four run into coordination problems etc., what if the fourth decides to work on capabilities at OpenAI or Anthropic, because working on capabilities is just so much more exciting? If GWAS for personality were working better, I'd be more optimistic about selecting for something like the capacity to gather wisdom. With editing, you could also go for cognitively enhanced clones of people we consider wise (the main bottleneck with this option would be PR). The problem is that people are going to disagree about whom we consider wise. Although perhaps we can all agree we would not like to clone the CEOs of the AGI labs. Perhaps really good education would also help. Not an expert on education.
How does your purely causal framing escape backward induction? Pure CDT agents defect in the iterated version of the prisoners' dilemma too, since at the last time step you wouldn't care about your reputation.
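A quick sketch of that backward-induction argument (payoff numbers and function names are mine, for illustration only): in a finitely repeated prisoner's dilemma with a commonly known horizon, defection strictly dominates in the final round, and given that the future is already pinned down, in every earlier round as well.

```python
# Row player's payoffs: (my_action, their_action) -> my payoff.
# Standard illustrative prisoner's-dilemma values.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I am exploited
    ("D", "C"): 5,  # I exploit
    ("D", "D"): 1,  # mutual defection
}

def cdt_plan(horizon):
    """Work backwards from the last round. In the final round reputation
    is worthless, so defection strictly dominates; given that, the same
    holds one round earlier, and so on back to round one."""
    plan = []
    for _ in range(horizon):
        # Future play is already fixed by the induction step, so only the
        # immediate payoff matters; check strict dominance directly.
        defect_dominates = all(
            PAYOFF[("D", b)] > PAYOFF[("C", b)] for b in ("C", "D")
        )
        plan.append("D" if defect_dominates else "C")
    return plan

print(cdt_plan(5))  # ['D', 'D', 'D', 'D', 'D']
```

The point is that nothing in the loop depends on the round number: once the last round unravels, every round unravels.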
In conclusion, if you find yourself freely choosing between options, it’s rational to take a dominant strategy, like two-boxing in Newcomb’s problem, or defecting in the one-shot prisoner’s dilemma. However, given the opportunity to actually pre-commit to decisions that get you better outcomes conditional on your pre-commitment, you should do so.
How do you tell whether you are in a “pre-commitment” situation or a defecting situation?
This is not the correct level for thinking about decision theory—we don’t think about any of our decisions that way. Decision theory is about determining the output of the specific choice-making procedure “consider all available options and pick the best one in the moment”.
The categorical imperative has been popular for a long while:
Act only according to that maxim whereby you can at the same time will that it should become a universal law.
I don't think this is incompatible with making the best decision in the moment. You just decide in the moment to go with the more sophisticated version of the categorical imperative, because that seems best? If I didn't reason like this, I would not vote, and I would have a harder time sticking to commitments. I agree that thinking about decisions in a way that is not purely greedy is complicated.
A more general claim is: if something can predict your action with better than random accuracy no matter how hard you try to prevent them, you don’t have free will over that action. (I’m not addressing the question of whether free will exists in general, only whether a particular action is chosen freely.)
The whole framing here strikes me as confused (although maybe I am confused). The way you are phrasing it already assumes that you are in conflict with someone ("no matter how hard you try to prevent them"). Your setup already assumes away your free will. Both Cooperate Bot and Defect Bot in fact do not have free will. The whole point of game theory is that you have agents that can simulate each other to some extent. A more useful game-theory bot is a function that takes the function of another agent as input. When you assume that you exist in just one place and time, you are already assuming a setup where game theory is not useful. If you are predicted by someone else, then you (your computation) are being simulated: you don't exist in just one place and time, you exist in different places and times (although not all of these versions are the full you). You don't get to choose where you exist, but in this framing you do get to choose your output.
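A minimal sketch of what "a function that takes the function of another agent as input" could look like (all names are mine; the probe-against-CooperateBot trick is just one simple way to dodge the infinite regress of two bots simulating each other directly):

```python
def cooperate_bot(opponent):
    # Cooperates unconditionally; ignores who it is playing against.
    return "C"

def defect_bot(opponent):
    # Defects unconditionally.
    return "D"

def mirror_bot(opponent):
    # Receives the opponent itself (here: its function) and simulates it.
    # Simulating the opponent against cooperate_bot is a cheap probe that
    # sidesteps the infinite regress two copies of mirror_bot would hit
    # if each tried to simulate the other simulating it, and so on.
    return "C" if opponent(cooperate_bot) == "C" else "D"

print(mirror_bot(cooperate_bot))  # C
print(mirror_bot(defect_bot))     # D
print(mirror_bot(mirror_bot))     # C -- two mirror bots cooperate
```

Note that when `mirror_bot` plays itself, the simulation runs its own code in a different context: the same computation exists in more than one place, which is exactly the point above.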
Embarrassingly enough, I don't think past me ever tried to defrost it in the oven. I tried putting it in the fridge the day before and then microwaving, which was not very effective. I was just too excited to use the microwave, even for things I know it doesn't handle well (my parents didn't use to have one), to ever consider the oven for this. Currently I am sharing a tiny freezer shelf with 7 people, but I might try this in the future.
Wow. I already had high expectations for this discussion, but reading this definitely made my day. I love how both plex and Audrey Tang were taking each other seriously. Gives me glimpses of hope that we as humanity will actually be able to coordinate.
I agree that cheap labour isn't great for the majority. Another thing I kept thinking to myself is how labour is too cheap in India, for example when I saw people farming with a cow instead of a tractor. Why revolt instead of asking for a higher salary or quitting? Conditions in India seem to be improving pretty rapidly.
Fine. I accept that this was overdue. I need to do this today. Thanks for writing this :). My last full digital declutter was in ~April of 2022.