Selective opinions and answers (for longer discussions, respond to specific points and I'll furnish more details):
Which kinds of differential technological development should we encourage, and how?
I recommend pushing for whole brain emulations, with a scanning-first approach and an emphasis on fully uploading actual humans. Also, military development of AI should be prioritised over commercial and academic development, if possible.
Which open problems are safe to discuss, and which are potentially dangerous?
Seeing what has already been published, I see little adva...
I suggest adding some more meta questions to the list.
"In what direction should we nudge the future, to maximize the chances and impact of a positive Singularity?"
a friendly AI that is not quite friendly could create a living hell for the rest of time, increasing negative utility dramatically
"Ladies and gentlemen, I believe this machine could create a living hell for the rest of time..."
(audience yawns, people look at their watches)
"...increasing negative utility dramatically!"
(shocked gasps, audience riots)
I was just amused by the anticlimacticness of the quoted sentence (or maybe by how it would be anticlimactic anywhere else but here), the way it explains why a living hell for the rest of time is a bad thing by associating it with something so abstract as a dramatic increase in negative utility. That's all I meant by that.
We should seriously consider creating a true paperclip maximizer that transforms the universe into an inanimate state devoid of suffering.
Have you considered the many ways something like that could go wrong?
From your perspective, wouldn't it be better to just build a really big bomb and blow up Earth? Or alternatively, if you want to minimize suffering throughout the universe and maybe throughout the multiverse (e.g., by acausal negotiation with superintelligences in other universes), instead of just our corner of the world, you'd have to solve a lot of the same problems as FAI.
Currently you suspect that there are people, such as yourself, who have some chance of correctly judging whether arguments such as yours are correct, and of attempting to implement the implications if those arguments are correct, and of not implementing the implications if those arguments are not correct.
Do you think it would be possible to design an intelligence which could do this more reliably?
That said, I think his fear of culpability (for being potentially passively involved in an existential catastrophe) is very real. I suspect he is continually driven, at a level beneath what anyone's remonstrations could easily affect, to try anything that might somehow succeed in removing all the culpability from him. This would be a double negative form of "something to protect": "something to not be culpable for failure to protect".
If this is true, then if you try to make him feel culpability for his communication acts as usual, this will only make his fear stronger and make him more desperate to find a way out, and make him even more willing to break normal conversational rules.
I don't think he has full introspective access to his decision calculus for how he should let his drive affect his communication practices or the resulting level of discourse. So his above explanations for why he argues the way he does are probably partly confabulated, to match an underlying constraining intuition of "whatever I did, it was less indefensible than the alternative".
(I feel like there has to be some kind of third alternative I'm missing here, that would derail t...
Link: In this ongoing thread, Wei Dai and I discuss the merits of pre-WBE vs. post-WBE decision theory/FAI research.
What could an FAI project look like? Louie points out that it might look like Princeton's Institute for Advanced Study:
...Created as a haven for thinking, the Institute [for Advanced Study] remains for many the Shangri-la of academe: a playground for the scholarly superstars who become the Institute's permanent faculty. These positions carry no teaching duties, few administrative responsibilities, and high salaries, and so represent a pinnacle of academic advancement. The expectation is that given this freedom, the professors at the Institute will think the...
But did the IAS actually succeed? Off-hand, the only things I can think of it for are hosting Einstein in his crankish years, Kurt Gödel before he went crazy, and von Neumann's work on a real computer (which they disliked and wanted to get rid of). Richard Hamming, who might know, said:
When you are famous it is hard to work on small problems. This is what did Shannon in. After information theory, what do you do for an encore? The great scientists often make this error. They fail to continue to plant the little acorns from which the mighty oak trees grow. They try to get the big thing right off. And that isn't the way things go. So that is another reason why you find that when you get early recognition it seems to sterilize you. In fact I will give you my favorite quotation of many years. The Institute for Advanced Study in Princeton, in my opinion, has ruined more good scientists than any institution has created, judged by what they did before they came and judged by what they did after. Not that they weren't good afterwards, but they were superb before they got there and were only good afterwards.
(My own thought is to wonder if this is kind of a regression to the mean, or perhaps regression due to aging.)
Ok, I thought when you said "FAI project" you meant a project to build FAI. But I've noticed two problems with trying to do some of the relatively safe FAI-related problems in public:
Which open problems are safe to discuss, and which are potentially dangerous?
This one seems particularly interesting, especially as it seems to apply to itself. Due to the attention hazard problem, coming up with a "list of things you're not allowed to discuss" sounds like a bad idea. But what's the alternative? Yeuugh.
"In what direction should we nudge the future, to maximize the chances and impact of a positive Singularity?"
Friendly AI is incredibly hard to get right, and a friendly AI that is not quite friendly could create a living hell for the rest of time, increasing negative utility dramatically.
I vote for antinatalism. We should seriously consider creating a true paperclip maximizer that transforms the universe into an inanimate state devoid of suffering. Friendly AI is simply too risky.
I think that humans are not psychologically equal. Not only are there many outliers, but most humans would turn into abhorrent creatures given their own pocket universe, unlimited power, and a genie. And even given our current world, if we were to remove the huge memeplex of Western civilization, most people would act like Stone Age hunter-gatherers. And that would be bad enough; after all, violence is the major cause of death within Stone Age societies.
Even proposals like CEV (Coherent Extrapolated Volition) could turn out to be a living hell for a percentage of all beings. I don't expect any amount of knowledge or intelligence to cause humans to abandon their horrible preferences.
Eliezer Yudkowsky says that intelligence does not imply benevolence; that an artificial general intelligence won't turn out to be friendly; that we have to make it friendly. Yet his best proposal is that humanity would do what is right if we only knew more, thought faster, were more the people we wished we were, and had grown up farther together. The idea is that knowledge and intelligence imply benevolence for people. I don't think so.
The problem is that when you extrapolate chaotic systems, e.g. human preferences under real-world influences, small differences in initial conditions yield widely diverging outcomes. That our extrapolated volition converges rather than diverges seems to be a bold prediction.
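To make the sensitivity claim concrete, here is a minimal sketch (my own illustration, not from the original comment) using the logistic map as a stand-in for a chaotic system: two trajectories whose starting points differ by one part in a billion become completely decorrelated within a few dozen iterations.

    # Minimal illustration (assumed stand-in): the logistic map at r = 3.9 is
    # chaotic, so two trajectories starting 1e-9 apart diverge to order-1
    # differences within roughly 40 iterations.
    def logistic(x, r=3.9):
        return r * x * (1 - x)

    a, b = 0.400000000, 0.400000001  # initial conditions differing by 1e-9
    for step in range(1, 61):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print(f"step {step:2d}: |a - b| = {abs(a - b):.9f}")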
I just don't see that a paperclip maximizer burning the cosmic commons is as bad as it is currently portrayed. Sure, it is "bad". But everything else might be much worse.
Here is a question for those who think that antinatalism is just stupid. Would you be willing to rerun the history of the universe to obtain the current state? Would you be willing to create another Genghis Khan, a new holocaust, allowing intelligent life to evolve?
As Greg Egan wrote: "To get from micro-organisms to intelligent life this way would involve an immense amount of suffering, with billions of sentient creatures living, struggling and dying along the way."
If you are not willing to do that, then why are you willing to do the same now, just for much longer, by trying to colonize the universe? Are you so sure that the time to come will be much better? How sure are you?
ETA
I expect any friendly AI outcome that fails to be friendly in some respect to increase negative utility, and only a perfectly "friendly" AI (whatever that means; it is still questionable whether the whole idea makes sense) to yield a positive-utility outcome.
That is because the closer any given AGI design is to friendliness, the more likely it is that humans will be kept alive but might suffer, whereas an unfriendly AI in complete ignorance of human values will more likely just see humans as a material resource, without any particular incentive to keep them around.
Just imagine a friendly AI which fails to "understand" or care about human boredom.
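As a purely illustrative toy model (mine, with made-up numbers, not anything from the comment), the claim is that outcome utility is non-monotonic in how close an AGI's values come to full friendliness: a near-miss that keeps humans around to suffer is rated far worse than a distant miss that simply discards them.

    # Toy, made-up utility function over "closeness to friendliness" in [0, 1].
    # It encodes the commenter's claim: only a perfect hit is positive, a
    # near-miss is the worst case (humans kept alive but suffering), and a far
    # miss is bad but less so (humans treated as inert raw material).
    def toy_outcome_utility(closeness: float) -> float:
        if closeness >= 1.0:   # perfectly friendly
            return 1.0
        if closeness >= 0.8:   # near-miss: alive but possibly in a "living hell"
            return -100.0
        return -10.0           # far miss: humans discarded, no ongoing suffering

    for c in (1.0, 0.9, 0.3):
        print(f"closeness {c:.1f} -> toy utility {toy_outcome_utility(c):+.1f}")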
There are several ways in which SIAI could actually cause a direct increase in negative utility.
1) Friendly AI is incredibly hard and complex. Complex systems can fail in complex ways. Agents that are an effect of evolution have complex values. To satisfy complex values you need to meet complex circumstances. Therefore any attempt at friendly AI, which is incredibly complex, is likely to fail in unforeseeable ways. A half-baked, not quite friendly AI might create a living hell for the rest of time, increasing negative utility dramatically.
2) Humans are not provably friendly. Given the power to shape the universe, the SIAI might fail to act altruistically and deliberately implement an AI with selfish motives or horrible strategies.

Suppose you buy the argument that humanity faces both the risk of AI-caused extinction and the opportunity to shape an AI-built utopia. What should we do about that? As Wei Dai asks, "In what direction should we nudge the future, to maximize the chances and impact of a positive intelligence explosion?"
This post serves as a table of contents and an introduction for an ongoing strategic analysis of AI risk and opportunity.
The main reason to discuss AI safety strategy is, of course, to draw on a wide spectrum of human expertise and processing power to clarify our understanding of the factors at play and the expected value of particular interventions we could invest in: raising awareness of safety concerns, forming a Friendly AI team, differential technological development, investigating AGI confinement methods, and others.
Discussing AI safety strategy is also a challenging exercise in applied rationality. The relevant issues are complex and uncertain, but we need to take advantage of the fact that rationality is faster than science: we can't "try" a bunch of intelligence explosions and see which one works best. We'll have to predict in advance how the future will develop and what we can do about it.
Before engaging with this series, I recommend you read at least the following articles:
Which strategic questions would we like to answer? Muehlhauser (2011) elaborates on the following questions:
Salamon & Muehlhauser (2013) list several other questions gathered from the participants of a workshop following Singularity Summit 2011, including:
These are the kinds of questions we will be tackling in this series of posts for Less Wrong Discussion, in order to improve our predictions about which direction we can nudge the future to maximize the chances of a positive intelligence explosion.