I definitely agree.
For (3), now is the time to get this moving. Right now, machine ethics (especially regarding military robotics) and medical ethics (especially bio-engineering) are hot topics. Connecting AI Risk to either of these trends would allow you to extend the field and, hopefully, bud it off as a separate focus.
Unfortunately, academics are pack animals, so if you want to communicate with them, you can't just stake out your own territory and expect them to do the work of coming to you. You have to pick some existing field as a starting point. Then, knowing the assumptions of that field, you point out the differences in what you're proposing and slowly push out toward what you want to talk about (the pseudopod approach). This fits well with (1), since choosing which journals you're aiming at will determine the field of researchers you'll be able to recruit from.
One note: if you hold a separate conference, you are dependent on whatever academic credibility SIAI brings to the table (none, at present; besides, you already have the Singularity Summit to work with). But if you are able to get a track started at an existing conference, you can suddenly define this as the spot where the cool researchers are hanging out. Convince DARPA to put a little money towards it, and suddenly you have yourselves a research area. The DOD already funds things like risk analyses of climate change and other 30-100 year forward threats, so it's not even a stretch.
I try to avoid pure cheerleading comments, but this post was extremely helpful. Thank you!
I'm curious as to why you chose to target this paper at academic philosophers. Decision theory isn't my focus, but it seems that while the other groups of researchers in this area (mathematicians, computer scientists, economists, etc.) talk to one another (at least a little), the philosophers are mostly isolated. The generation of philosophers trained while philosophy was still the center of research in logic and fundamental mathematics is rapidly dying off, and with them goes the remaining credibility of such work in philosophy.
Of course, philosophers are the only group that pays any attention to things like Newcomb's problem, so if you were writing for another group, you'd probably have to devote a paper to justifying the importance of the problem. Also, given some of the discussions on here, perhaps the goal is precisely to write this in an area isolated from actual implementation, to avoid the risk of misuse (I can't find the link, but I recall seeing several comment threads discussing the risks of publishing this at all).
If your goal is to found an IT startup, I'd recommend learning basic web development. I formerly used Rails, and at the time I picked it up, the learning curve was about a month (just pick a highly rated book and work through it). If not web, consider app development; if you know a bit of Java, Android would probably be the way to go. With either of these, you'll have a skill that allows you to single-handedly create a product.
At the same time, start keeping a list of ideas you have for startups. Some will be big, others small. But start looking for opportunities. Particularly focus on those that fit with the skills you're learning (web or app).
Potentially, that leaves you two months to start your first startup. Doesn't have to be great. Doesn't even have to be good. But knowing that you can take something from idea to product is extremely powerful. Because now, as you're learning, when you see an opportunity, you'll know how to take it.
Better yet, it will let you fit your studies into your ideas. In your algorithms class, you'll see techniques and realize how they could solve problems you've had with your existing ideas, or spark entirely new ones. And if you don't walk out of your first AI class with a long list of new possibilities, something went seriously wrong :). Everything you're learning will have a context, which will be extremely powerful.
All this time, keep creating. Any good entrepreneur goes through a training process of learning how to see opportunities and take them. You have four years of access to excellent technical resources, free labor (your peers), and no cost to failure (and learning how to handle failures will be another step in your growth). If you go in with an ability to create (even a very basic one), you will not only be able to make use of those opportunities, you'll get far more out of the process than you otherwise would.
[also: I'd like to second the recommendations to establish an exercise habit]
A little more information (if you have it) would help with some of this. Computer Science is a huge field, so getting a sense of what you're interested in, why you're doing it, and what background you already have would probably help with recommendations.
Rather than thinking of it as spending 30 minutes a day on rationality when you should be doing other things, it might be more accurate to think of it as 30 minutes a day spent optimizing the other 23.5 hours. At least in my experience, taking that time yields far greater total productivity than when I claim to be too busy.
And, if the research is fundamentally new, you may have to wait another few years (at least) before the good scholarly criticism comes out.
It works for me, but only after changing my preferences to view articles with lower scores (my cutoff had been set at -2).
Some are specifically engineered to prompt addiction. Zynga, for example, has put a lot of work into optimizing the rate of rewards for just this effect.
Strongly seconded. While getting good people is essential (the original point about rationality standards), checks and balances are a critical element of a project like this.
The level of checks needed probably depends on the scope of the project. For the feasibility analysis, perhaps you need nothing more than splitting your research group into two teams, one assigned to prove and the other to disprove the feasibility of a given design (possibly switching roles at some point in the process).