This is interestingly parallel to the first part of NVC, which could be easily paraphrased as "What am I feeling, and why am I feeling it?" As with yours, the second part is meant to be broad and long-term, not situational: in that case, not a goal, but a need. In both cases I think it is indeed a useful way to think about goals and actions.
I'm a bit put off by your example, though: you seemed to stop as soon as you thought of one better thing, and both candidates take the same form, "I will learn X, because X might be useful towards FAI dev." What if learning a new, directly-FAI-relevant skill is not actually the optimal course? For example, as you touch on briefly and then seem to bypass, the best way you can help FAI development might be to earn money and then donate to groups actively working on FAI. In that case, a good thing to do would be doing work which you can efficiently convert into money; this may have nothing to do with FAI, but instead call on skills you've already developed.
> This is interestingly parallel to the first part of NVC, which could be easily paraphrased as "What am I feeling, and why am I feeling it?" As with yours, the second part is meant to be broad and long-term, not situational: in that case, not a goal, but a need. In both cases I think it is indeed a useful way to think about goals and actions.
Not mine! The Fundamental Questions were pioneered by Eliezer and Mike Blume respectively, I think. "What do you think you know, and why do you think you know it?" (and permutations thereof) is the First Question; "What are you doing, and why are you doing it?" is the Second. I sometimes forget that not everyone has absorbed all of Less Wrong canon! My bad.
> For example, as you touch on briefly and then seem to bypass, the best way you can help FAI development might be to earn money and then donate to groups actively working on FAI. In that case, a good thing to do would be doing work which you can efficiently convert into money; this may have nothing to do with FAI, but instead call on skills you've already developed.
It might be, and it's worth thinking about more, but intelligence amplification research really needs to be done. I'm currently a volunteer for SIAI, so currently I donate time instead of money. But the main things I do for SIAI aren't IA-related (IA = intelligence amplification, like researching nootropics), they're "helping the Visiting Fellows program run"-related. If I plan on continuing to help in that role, then I should seek to improve at it, which I do do, but didn't feel like putting it in my example.
> In that case, a good thing to do would be doing work which you can efficiently convert into money; this may have nothing to do with FAI, but instead call on skills you've already developed.
There is no obvious path here, as I have no skills already developed. I would have to develop new skills, in which case programming indeed would be the most obvious choice (though maybe not the best one). But currently IA work seems more important. (I'd rather not debate the relative merits of IA at this time; sorry.)
"The Fundamental Questions were pioneered by Eliezer and Mike Blume respectively, I think. "What do you think you know, and why do you think you know it?" (and permutations thereof) is the First Question; "What are you doing, and why are you doing it?" is the Second. I sometimes forget that not everyone has absorbed all of Less Wrong canon! My bad."
The order of these questions should be switched.
PS: How do you 'quote'?
There seem overwhelmingly many more ways to earn a useful surplus of money than there are ways to usefully research IA, so any given person (i.e. you) is more likely to be able to learn a skill from the large set of {skills which could earn surplus money} than to learn a skill from the much smaller set {skills useful for advancing the state of IA research}.
Particularly since your claim of "no skills already developed" ignores your ability and willingness to write, and your having a blog and associated skills, both of which are things some people earn plenty from.
> There seem overwhelmingly many more ways to earn a useful surplus of money than there are ways to usefully research IA, so any given person (i.e. you) is more likely to be able to learn a skill from the large set of {skills which could earn surplus money} than to learn a skill from the much smaller set {skills useful for advancing the state of IA research}.
I already have a path laid out for IA research. Nobody's actually doing focused, systematized IA work; it's a really high value target right now, and it's something I'm already specialized for. In general I believe your advice holds, but I'm narcissistic enough and confident enough in my narcissism to think I'm a special case.
> Particularly since your claim of "no skills already developed" ignores your ability and willingness to write, and your having a blog and associated skills, both of which are things some people earn plenty from.
Such people are almost invariably better writers than me. I could learn how to write better, but it's a skill I'd have to develop. I actually have a very good opportunity to make good money, but there are other things I'd like to do within the framework of SIAI first, I think. All my plans are in flux. I fly into Berkeley Tuesday; we'll see what happens after that.
One method I thought up for increasing high-utility productivity is to choose a specific, well-defined answer for the second half ("Why am I doing it?") and consistently check whether the answer to the first half aligns satisfyingly with it. For example, if I'd checked myself an hour ago, it'd be "I'm learning to program because I want to maximize the probability of FAI development." Ideally the second half would be related to a 'something to protect' or 'definite major purpose' that stays constant over time and that you want to be consistently moving towards.

If you're already good at noticing rationalization, this technique might work to induce cognitive dissonance when you engage in suboptimal courses of action. (Whether inducing cognitive dissonance in order to make yourself more productive is likely to work is open to debate; I suspect P.J. Eby would thoroughly disagree.)

I'm going to try this over the next few days and see if the results are any better than how I've been doing recently. I'm at a relative productivity high point right now, though, so the data might not be too meaningful. I encourage others to see if this method works for them.