In a conversation about decision theory, a friend gave the following rebuttal to FDT:
Suppose there is a gene with two alleles, A and B. A great majority of agents with allele A use FDT. A great majority of agents with allele B use EDT. Omega presents the following variant of Newcomb's problem: instead of examining the structure of your mind, Omega simply sequences your genome, and determines which allele you have. The opaque box contains a million dollars if you have allele B, and is empty if you have allele A. Here, FDT agents would two-box, and more often than not end up with $1,000, as nothing depends on...
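To make the payoff structure concrete, here is a minimal sketch of the gene-based variant (the function names and the 95/5 allele split are my own illustrative assumptions, not part of the original scenario). Because the box contents depend only on the sequenced allele and never on the choice, two-boxing adds $1,000 no matter which allele the agent carries:

```python
import random

OPAQUE_PRIZE = 1_000_000   # in the opaque box iff the agent carries allele B
TRANSPARENT_PRIZE = 1_000  # always in the transparent box

def payoff(allele: str, two_box: bool) -> int:
    """Box contents depend only on the sequenced allele, never on the agent's choice."""
    opaque = OPAQUE_PRIZE if allele == "B" else 0
    return opaque + (TRANSPARENT_PRIZE if two_box else 0)

def average_payoffs(p_allele_b: float, trials: int = 100_000) -> dict[str, float]:
    """Average payoff of each strategy for an agent drawn from this population."""
    totals = {"one_box": 0, "two_box": 0}
    for _ in range(trials):
        allele = "B" if random.random() < p_allele_b else "A"
        totals["one_box"] += payoff(allele, two_box=False)
        totals["two_box"] += payoff(allele, two_box=True)
    return {strategy: total / trials for strategy, total in totals.items()}

# An FDT user almost certainly carries allele A, so the opaque box is almost
# certainly empty; two-boxing still nets the extra $1,000 on top of either outcome.
print(average_payoffs(p_allele_b=0.05))
```

The same comparison holds for any value of p_allele_b, which is the sense in which nothing here depends on the agent's choice.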
TAPs (trigger-action plans) are a nice tool if you can actually notice the trigger when it happens. If you are, of necessity, in the middle of doing something (or, more precisely, thinking about something) when the trigger happens, you may forget to implement the action, because you're too busy thinking about other things. If something happens on a predictable, regular basis, then an alarm is sufficient to get you to notice it, but this is not always the case.
So, how do you actually notice things when you're in the middle of something and focusing on that?
Option 1: Alter the "something" you are in the middle of. This may be done via forced, otherwise unnecessary...
Rot13'd because I might have misformatted
V guvax V zvtug or fcbvyrerq sebz ernqvat gur bevtvany cncre, ohg zl thrff vf "Gur vzcnpg gb fbzrbar bs na rirag vf ubj zhpu vg punatrf gurve novyvgl gb trg jung jr jnag". Uhznaf pner nobhg Vaveba rkvfgvat orpnhfr vg znxrf vg uneqre gb erqhpr fhssrevat naq vapernfr unccvarff. (Abg fher ubj gb fdhner guvf qrsvavgvba bs vzcnpg jvgu svaqvat bhg arj vasb gung jnf nyernql gurer, nf va gur pnfr bs Vaveba, vg unq nyernql rkvfgrq, jr whfg sbhaq bhg nobhg vg.) Crooyvgrf pner nobhg nyy gurve crooyrf orpbzvat bofvqvna orpnhfr vg punatrf gurve novyvgl gb fgnpx crooyrf. Obgu uhznaf naq crooyvgrf pner nobhg orvat uvg ol na nfgrebvq orpnhfr vg'f uneqre gb chefhr bar'f inyhrf vs bar vf xvyyrq ol na nfgrebvq.
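(For anyone who would rather not decode the block above by hand: Python's standard library includes a ROT13 text codec, so a minimal decoding sketch looks like this, with the placeholder string standing in for the paragraph above.)

```python
import codecs

spoiler = "V guvax V zvtug or fcbvyrerq..."  # paste the full rot13'd paragraph here
print(codecs.decode(spoiler, "rot13"))  # ROT13 is its own inverse
```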
Most people in the rationality community are more likely to generate correct conclusions than I am, and are in general better at making decisions. Why is that the case?
Because they have more training data, and are in general more competent than I am. They actually understand the substrate on which they make decisions, and what is likely to happen, and therefore have reason to trust themselves based on their past track record, while I do not. Is the solution therefore just "git gud"?
This sounds unsatisfactory: it compresses competence into a single nebulous attribute rather than recommending concrete steps. It is possible that there are in fact generalizable decision-making algorithms/heuristics...
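One concrete step toward the kind of track record mentioned above, offered as my own illustration rather than anything proposed in the post, is to log probabilistic predictions and score them once they resolve. A minimal Brier-score sketch:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    claim: str
    credence: float              # probability assigned to the claim being true, in [0, 1]
    outcome: bool | None = None  # filled in once the claim resolves

def brier_score(log: list[Prediction]) -> float:
    """Mean squared error of credences against outcomes; lower is better (0.25 = always saying 50%)."""
    resolved = [p for p in log if p.outcome is not None]
    return sum((p.credence - p.outcome) ** 2 for p in resolved) / len(resolved)

log = [
    Prediction("I will finish the draft by Friday", credence=0.8, outcome=False),
    Prediction("The meeting will run over", credence=0.6, outcome=True),
]
print(brier_score(log))  # 0.4 here, worse than chance: a concrete reason not to trust these credences yet
```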
Results: it is hard, harder than meditation. I got lost in thought easily, though I snapped back often enough to successfully execute the action. I note that any change in the thing I was in the middle of prompted a snapback: I was no longer lost in thought, because I suddenly needed to think about external reality. I expect I will get better at this with time, especially since in this case the action needs to be executed daily.