Sorry if someone has covered this before, but I had an interesting thought. Sometimes I'll make a deal with myself: I'll goof off for X minutes, but then I have to work for Y minutes afterwards. Oftentimes, when the time is up, I won't follow through on the deal. What's interesting is that I feel like a causal agent being asked to just leave the money that's lying right there. I'm only going to give myself chances to goof off if I trust myself to get back to work, but by the time the work period comes, I'm in some sense a different person, no longer bound or endangered by old agreements. Omega (old me) is gone and never coming back. All of this ignores long-term goals, moral satisfactions, etc., of course.

Who do you think you're stealing from?

Imagine three people, A, B, and C, sitting in different rooms of an experimenter's lab. Person A has a choice: to accept 10 units of fun (consumed on the spot, e.g. playing the first level of Halo 4), or to accept 1 unit of fun (also consumed on the spot, e.g. listening to the first song on Halo 4's soundtrack) plus a wooden token. Person A then leaves the lab forever; if they received a wooden token, on their way out they toss it to B.

Next, Person B acts. If they received a wooden token, they can't do anything with it. They're faced with the same choice: 10 units of fun, or 1 unit of fun plus a wooden token. Then they leave the lab forever, and toss their wooden tokens (zero, one, or two) to Person C.

Finally, Person C acts. If C has zero wooden tokens, they're given 10 units of fun. If C has one wooden token, they're given 100 units of fun (e.g. playing Mass Effect 4 in its entirety). And if C has two wooden tokens, they're given 1000 units of fun (e.g. an advance screening of the movie Deus Ex: A Deepness In The Methods Of Rationality). C then leaves the lab forever, possibly on a stretcher due to awesome overload.

What do you think will happen here? A and B might grab their 10 fun units each, leaving C with only 10 fun units. (This would be much more likely to happen if A and B weren't told the meaning of the wooden tokens. It would be virtually guaranteed to happen if the wooden tokens were abstracted away entirely, leaving the experimenter to "compute" behind the scenes what C gets.) A and/or B might also decide, altruistically, to give up 9 units of fun so C gets 90 or 900 more.
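
To make that arithmetic concrete, here is a minimal Python sketch of the payoffs, assuming the 1/10/100/1000 schedule above; the function and variable names are mine, for illustration only:

```python
# Payoff arithmetic for the A/B/C token game described above.
# A and B each either grab 10 units of fun now, or take 1 unit plus a wooden token.
# C's reward depends only on how many tokens arrive: 0 -> 10, 1 -> 100, 2 -> 1000.

C_REWARD = {0: 10, 1: 100, 2: 1000}

def total_fun(a_passes_token: bool, b_passes_token: bool) -> int:
    """Sum of fun across A, B, and C for one run of the experiment."""
    a_fun = 1 if a_passes_token else 10
    b_fun = 1 if b_passes_token else 10
    tokens = int(a_passes_token) + int(b_passes_token)
    return a_fun + b_fun + C_REWARD[tokens]

for a in (False, True):
    for b in (False, True):
        print(f"A passes token: {a!s:5}  B passes token: {b!s:5}  total fun: {total_fun(a, b)}")
```

On those numbers, A and B both grabbing their 10 units leaves 30 units of total fun, one token passed gives 111, and both tokens passed gives 1002.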

Now run the experiment with Person D, except that D just walks out of the first room, puts on a hat labeled "Person E", runs the second step, walks out of the second room, puts on a hat labeled "Person F", and runs the final step.

Now replace the rooms and hats with a simple timer, counting chunks of minutes/days/years/etc. Adjust the reward numbers (1/10/100/1000 fun units) as you see fit.

The point of this thought experiment, phrased in a silly manner, is that at any point in time you can imagine yourself as being A, B, or C in the temporal version of the experiment. Suppose you're Person B (Current-You). Person A (Past-You) is all-powerful, having already acted and left the scene forever, taking their fun rewards and leaving Current-You wooden tokens (or not!). Current-You has some level of power: you can choose your own level of fun reward, and whether to accumulate more wooden tokens to pass to Person C. That's Future-You, who is completely at the mercy of Past-You and Current-You combined; but you'll become Future-You, so you probably want to keep their interests in mind.

This is actually how I view my own life.

"Deus Ex: A Deepness In The Methods Of Rationality"

Sounds fun...

A slight modification of this used to work for me.

I visualized a clock striking a certain time, or reaching a certain point in a game, or reading to the end of a chapter, then quitting the game or closing the book and starting whatever pertinent activity was waiting. When done visualizing, I'd continue with the current activity. When the previously visualized (and anticipated) event happened, I'd notice myself executing this pre-programmed command, without any conscious input on my part. It felt as if someone had predicted this happening and the prediction was coming true.

In a sense, I Omega'ed myself into one-boxing :)

One nice thing about this approach, if it works for you, is that the implementation threshold is quite low relative to the other suggestions here, since visualizing a process is easier than actually doing it.

The traditional saying is "business before pleasure".

I agree that there are parallels between akrasia and Newcomb's problem. I attempted to generalize this and show how a large class of problems fits into this extracausal-reasoning template in my post Real World Newcomblike Problems.

I find the better deal to make with myself is to work for Y minutes now, after which I have to take a break and do non-work things for X minutes. Typically Y = 45 and X varies widely. I set a timer to interrupt my work. Often I find myself wanting to keep working after the Y minutes are up, but I try not to dwell on this too much, and I really do try to stop: when I set the timer, I want the promised reward to be credible.
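
Purely as an illustrative sketch (the comment only mentions setting a timer; the script and its default durations are my own assumptions), the work-Y-minutes-then-break pattern could look like:

```python
# Minimal work/break timer sketch: work for Y minutes, then take a credible break.
# The durations and the print-based notifications are illustrative assumptions.
import time

def work_then_break(work_minutes: int = 45, break_minutes: int = 15) -> None:
    """Block for the work period, announce the break, then block for the break."""
    print(f"Work for {work_minutes} minutes...")
    time.sleep(work_minutes * 60)
    print(f"Time's up. Break for {break_minutes} minutes (the reward has to be credible).")
    time.sleep(break_minutes * 60)
    print("Break over.")

if __name__ == "__main__":
    work_then_break()
```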

You may be interested in this: http://lesswrong.com/lw/4sh/how_i_lost_100_pounds_using_tdt/