This is not the correct level for thinking about decision theory—we don’t think about any of our decisions that way. Decision theory is about determining the output of the specific choice-making procedure “consider all available options and pick the best one in the moment”.
The categorical imperative has been popular for a long while:
Act only according to that maxim whereby you can at the same time will that it should become a universal law.
I don't think this is incompatible with making the best decision in the moment. You just decide in the moment to go with the more sophisticated version of the categorical imperative, because that seems best? If I didn't reason like this, I would not vote and I would have a harder time sticking to commitments. I agree that thinking about decisions in a way that is not purely greedy is complicated.
A more general claim is: if something can predict your action with better than random accuracy no matter how hard you try to prevent them, you don’t have free will over that action. (I’m not addressing the question of whether free will exists in general, only whether a particular action is chosen freely.)
The whole framing here strikes me as confused (although maybe I am confused). The way you are phrasing it already assumes that you are in conflict with someone ("no matter how hard you try to prevent them"). Your setup assumes away your free will from the start. Neither Cooperate Bot nor Defect Bot has free will.

The whole point of game theory is that you have agents that can simulate each other to some extent. A more useful game-theory bot is a function that takes the function of another agent as input. When you assume that you exist in only one place and time, you are already assuming a setup where game theory is not useful. If you are predicted by someone else, then you (your computation) are being simulated: you don't exist in just one place and time, you exist in different places and times (although not all of these versions are the full You). You don't get to choose where you exist, but in this framing you do get to choose your output.
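To make the agents-as-functions idea concrete, here is a minimal sketch. The names and the crude simulation strategy are my own illustration (real program-equilibrium bots also have to avoid infinite mutual simulation, which I sidestep here by probing the opponent with a fixed test agent):

```python
# Bots as functions that receive the other agent's function as input.
# CooperateBot and DefectBot ignore their opponent entirely; their
# output is fixed, which is the sense in which they have no free will.

def cooperate_bot(opponent):
    return "C"  # always cooperates, regardless of opponent

def defect_bot(opponent):
    return "D"  # always defects, regardless of opponent

def simulating_bot(opponent):
    # Predict the opponent by running it against a known simple agent.
    # Passing cooperate_bot (rather than simulating_bot itself) avoids
    # infinite recursion when two simulating bots meet.
    predicted = opponent(cooperate_bot)
    return "C" if predicted == "C" else "D"

print(simulating_bot(cooperate_bot))   # prints C
print(simulating_bot(defect_bot))      # prints D
print(simulating_bot(simulating_bot))  # prints C
```

Note that `simulating_bot` cooperates with a copy of itself: when it simulates its opponent, the copy in turn simulates `cooperate_bot` and cooperates, which is the "you exist in multiple places, but you choose your output" point in miniature.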
Embarrassingly enough, I don't think past me ever tried to defrost it in the oven. I tried putting it in the fridge the day before and then microwaving it, which was not very effective. I was just too excited to use the microwave, even for things where I know it doesn't work well (my parents never used to have one), so I didn't consider the oven for this. Currently I am sharing a tiny freezer shelf with 7 people, but I might try this in the future.
Wow. I already had high expectations for this discussion, but reading this definitely made my day. I love how both plex and Audrey Tang were taking each other seriously. Gives me glimpses of hope that we as humanity will actually be able to coordinate.
I agree that cheap labour isn't great for the majority. Another thing I kept thinking to myself is how labour is too cheap in India, for example when I saw people farming with a cow instead of a tractor. Why revolt instead of asking for a higher salary or quitting? Conditions in India seem to be improving pretty rapidly.
Yes, I vaguely remember using AutoHotKey on Windows as well. If you have a system that is working well for you, I would not recommend switching, because of switching costs. I think I mostly used AutoHotKey to type special characters more easily on my keyboard. That problem was solved by switching to neoqwertz, which was designed by nerds like me who had already found better solutions to all the problems I had. If there is functionality I use frequently enough that I want a keybind for it, I can usually implement it as a shell one-liner or get Claude to do it for me, and then assign a keybind to it in i3 (it allows different modes, like in vim, which allows for a lot of different keybinds). I do not know a really good tool for text expansion on Linux, though. Espanso works, but its completions are not reliable enough for me to make much use of them (I don't know if AutoHotKey did a really reliable job here either). I don't know Everything, but at first glance it looks similar to what you might get out of fzf?
Yep, I thought about this. More in-person/video interaction with people who had already learned this lesson would have helped. Especially watching them work or study on things they consider hard, while thinking out loud, would have helped.
How does your purely causal framing escape backward induction? Pure CDT agents defect in the iterated version of the prisoners' dilemma too, since at the last time step you wouldn't care about your reputation.
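The unraveling argument can be sketched in a few lines. The payoff numbers below are my own standard choice (T=5, R=3, P=1, S=0), not from the post; the sketch assumes both players only value payoffs inside the game, so by induction the continuation payoff cannot depend on the current move, and defection is a dominant stage-game action in every round:

```python
# Backward induction in a finitely repeated prisoners' dilemma.
# PAYOFF maps (my_action, opp_action) -> (my_payoff, opp_payoff).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def best_response(opp_action, continuation):
    # By the inductive hypothesis, play in later rounds is already
    # fixed, so the continuation payoff is the same whatever I do now;
    # only the stage payoff matters.
    return max("CD", key=lambda a: PAYOFF[(a, opp_action)][0] + continuation)

def solve(n_rounds):
    # Work backwards from the last round: D is a best response to both
    # C and D (it's dominant), so every round of the plan is D.
    continuation = 0
    plan = []
    for _ in range(n_rounds):
        assert best_response("C", continuation) == best_response("D", continuation)
        action = best_response("D", continuation)
        plan.append(action)
        continuation += PAYOFF[(action, action)][0]
    return plan

print(solve(3))  # prints ['D', 'D', 'D']
```

In the last round there is no reputation left to protect, so defecting gains T-R (or P-S) for free; given that, the same reasoning applies one round earlier, and cooperation never gets off the ground for a pure CDT agent.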
How do you tell if you are in a “pre-commitment” or in a defecting situation?