Do you mean that he actively seeks to encourage young people to try to slow Moore's Law, or that this is an unintentional consequence of his writings on AI risk topics?

(Wherein I seek advice on what may be a fairly important decision.)

Within the next week, I'll most likely be offered a summer job whose primary project is porting a space weather modeling group's simulation code to GPUs. (This would let them start doing predictive modeling of solar storms, which are causing growing economic damage through disruptions to power grids and communications systems.) If I don't take the job, the group's move to GPU computing will likely be delayed by another year or two. The job would also be a valuable educational opportunity: I'd learn about scientific computing and build general programming and design skill, and since I hope to start contributing to FAI research within 5-10 years, that has potentially large instrumental value.
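For concreteness, here is a minimal sketch of the kind of port involved, assuming the simulation reduces to stencil updates over a regular grid. The toy diffusion physics, the array names, and the use of CuPy (as a stand-in for hand-written GPU kernels) are all my own illustration, not the group's actual code:

```python
# Hypothetical illustration of a CPU-to-GPU port: a toy 2D diffusion
# stencil. Swapping NumPy for CuPy moves the array math onto the GPU.
import numpy as np

try:
    import cupy as xp   # GPU path, if CuPy and a CUDA device are available
except ImportError:
    xp = np             # CPU fallback so the sketch still runs

def step(u, dt=0.1, dx=1.0):
    """One explicit finite-difference update of a toy scalar field u."""
    lap = (xp.roll(u, 1, 0) + xp.roll(u, -1, 0) +
           xp.roll(u, 1, 1) + xp.roll(u, -1, 1) - 4.0 * u) / dx**2
    return u + dt * lap

u = xp.zeros((512, 512), dtype=xp.float32)
u[256, 256] = 1.0                  # point disturbance
for _ in range(100):
    u = step(u)
```

A real space weather code would of course be far more involved (MHD equations, irregular grids, multi-node parallelism), but isolating the hot array arithmetic and moving it wholesale onto the device is the core of such a port.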

In "Why We Need Friendly AI", Eliezer discussed Moore's Law as a source of existential risk:

Moore’s Law does make it easier to develop AI without understanding what you’re doing, but that’s not a good thing. Moore’s Law gradually lowers the difficulty of building AI, but it doesn’t make Friendly AI any easier. Friendly AI has nothing to do with hardware; it is a question of understanding. Once you have just enough computing power that someone can build AI if they know exactly what they’re doing, Moore’s Law is no longer your friend. Moore’s Law is slowly weakening the shield that prevents us from messing around with AI before we really understand intelligence. Eventually that barrier will go down, and if we haven’t mastered the art of Friendly AI by that time, we’re in very serious trouble. Moore’s Law is the countdown and it is ticking away. Moore’s Law is the enemy.

Given the quality of the models used by the aforementioned research group and the prevailing level of interest in more accurate space weather forecasts, successful completion of this summer project would probably produce a nontrivial increase in demand for GPUs; on the margin, that feeds exactly the hardware progress Eliezer identifies as the enemy. The next best use of my time this summer looks like working full-time on the expression-simplification abilities of a computer algebra system (sketched below).
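To give a flavor of that alternative, here is a toy rewrite-rule simplifier of the sort a CAS's simplification pass is built from; the expression representation and the rules are my own minimal stand-ins, not the internals of any real system:

```python
# Toy expression simplifier: an expression is a number, a symbol string,
# or an ('+' | '*', left, right) tuple. Real CAS passes apply many more
# rules, but the bottom-up rewriting structure is the same.

def simplify(e):
    """Recursively fold constants and apply algebraic identities."""
    if not isinstance(e, tuple):
        return e                                  # leaf: number or symbol
    op, l, r = e[0], simplify(e[1]), simplify(e[2])
    num = lambda v: isinstance(v, (int, float))
    if op == '+':
        if num(l) and num(r): return l + r        # constant folding
        if l == 0: return r                       # 0 + x -> x
        if r == 0: return l                       # x + 0 -> x
    if op == '*':
        if num(l) and num(r): return l * r
        if l == 1: return r                       # 1 * x -> x
        if r == 1: return l                       # x * 1 -> x
        if l == 0 or r == 0: return 0             # 0 * x -> 0
    return (op, l, r)

# (x * 1) + (0 * y)  ->  x
print(simplify(('+', ('*', 'x', 1), ('*', 0, 'y'))))
```

Improving a real system's simplifier largely means adding and carefully ordering rules like these so they terminate and don't undo each other's work.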

Given all this, and the goal of reducing existential risk from unFriendly AI, should I take the job with the space weather research group or not? (So as not to anchor anyone, I'll hold off on stating the tentative conclusion I've reached until at least a couple of LW readers have given their input.)

ETA: I finally got an e-mail response from the research group's point of contact, and she said all of their student slots for this summer have already been filled, so that settles this instance of the decision problem. But I may face a similar choice next summer, so I'd still like to hear people's thoughts.

Although I don't have much cash to spare, I've cut back in some personal budget areas for the next few months and donated $500 to the 'Hard Takeoff Paper' project. I have two hopes: that the donation (and matching funds) will make a non-negligible difference in the number of AI researchers taking the possibility of hard takeoff seriously, and that publicly posting this will nudge at least a few people into re-evaluating their willingness to donate to SIAI.