When I collected some SIAI volunteers' opinions on this, most thought that there was a very significant chance that super-smart AI will arrive sooner than that, though.
Biased sample!
A small dose of outside view shows that it's all nonsense. The idea of the evil terrorist or criminal mastermind is based on nothing - such people don't exist. Virtually all terrorists and criminals are idiots, and neither group is interested in maximizing destruction.
See everything Schneier has ever written about it if you need data confirming what I just said.
So, we could decompile humans and do FAI to them. Or we could just do FAI. Isn't the latter strictly simpler?
X-risk-alleviating AGI just has to be days late to the party for a supervirus created by a terrorist cell to have crashed it. I guess I'd judge against putting all our eggs in the AI basket.
I am very skeptical of any human gene-engineering proposal (for anything other than targeted medical treatment purposes).
Even if we disregard superhuman artificial intelligences, there are many more direct and therefore much quicker prospective technologies in sight: electronic/chemical brain enhancement and control, digital supervision technologies, memetic engineering, etc.
IMO, the prohibitively long turnaround time of large-scale genetic engineering and its inherently inexact (indirect) nature make it inferior to almost any conceivable alternative.
Much the same technology that is used to make intelligent machines can augment human intelligence - by preprocessing our sensory inputs and post-processing our motor outputs.
In general, it's much quicker and easier to change human culture and the human environment than it is to genetically modify human nature.
We have terrorism because we lack a moral consensus that labels killing people as bad. The US does a lot to convince Arabs that killing people is just when there's a good motive.
Switching to a value-based foreign policy, in which the West doesn't violate its own moral norms in the minds of Arabs, could help us build a moral consensus against terrorism, but unfortunately that doesn't seem politically viable at the moment.
Doomsday predictions have never come true in the past, no matter how much confidence the futurist had. Why should we believe this particular futurist?
"we don't take seriously the possibility that science can illuminate, and change, basic parts of human behavior" is interesting, at 18:11 in the second video.
The video of the talk has two parts, only the first of which was included in the post. Links to both parts:
The key question isn't "Should we do genetic engineering when we know its complete effects?" but "Should we attempt genetic engineering even when we don't know what result we will get?"
Should we gather centralized databases of every human being's DNA sequence and mine them for gene data? Are the potential side effects worth the risk of starting genetic engineering now? Do we accept the increased inequality that could result from genetic engineering? How do we measure what constitutes a good gene - low incarceration rates, IQ, EQ?
In this video, Julian Savulescu of the Uehiro Centre for Practical Ethics argues that human beings are "Unfit for the future" - that radical technological advance, liberal democracy, and human nature will combine to make the 21st century the century of global catastrophes, perpetrated by terrorists and psychopaths with tools such as engineered viruses. He goes on to argue that enhanced intelligence and a reduced urge toward violence and defection in large commons problems could be achieved using science, and may be a way out for humanity.
Skip to 1:30 to avoid the tedious introduction.
Genetically enhance humanity or face extinction - PART 1 from Ethics of the New Biosciences on Vimeo.
Genetically enhance humanity or face extinction - PART 2 from Ethics of the New Biosciences on Vimeo.
Well, I have already said something rather like this. Perhaps this really is a good idea, more important, even, than coding a friendly AI? AI timelines where super-smart AI doesn't get invented until 2060+ would leave enough room for human intelligence enhancement to happen and have an effect. When I collected some SIAI volunteers' opinions on this, most thought that there was a very significant chance that super-smart AI will arrive sooner than that, though.
A large portion of the video is devoted to the very strong scientific case that our behavior is a result of the way our brains are structured, and that, consequently, changes in our behavior must come from changes in the way our brains are wired.