That sounds naive and gives the impression that you haven't taken the time to
understand the AI risk concerns. You provide no arguments beyond the fact that
you don't see the problem of AI risk.
The prevailing wisdom in this community is that most AGI designs will be
unsafe, and that much of that unsafety isn't obvious beforehand. There is also
the belief that if the value alignment problem isn't solved before human-level
AGI arrives, that means the end of humanity.
turchin (5y)
If you prove that HLAI is safer than narrow AI turning into a paper-clip
maximiser, that would make a good EA case.
If you prove that the risks of synthetic biology are extremely high unless we
create HLAI in time, that would also support your point of view.
dogiv (5y)
The idea that friendly superintelligence would be massively useful is implicit
(and often explicit) in nearly every argument in favor of AI safety efforts,
certainly including EY and Bostrom. But you seem to be making the much stronger
claim that we should therefore altruistically expend effort to accelerate its
development. I am not convinced.
Your argument rests on the proposition that current AI research is so
specific that its contribution toward human-level AI is very small--so small
that the modest efforts of EAs (compared to all the massive corporations working
on narrow AI) would speed things up significantly. In support of that, you
mainly discuss vision--and I will agree with you that vision is not necessary
for general AI, though some form of sensory input might be. However, another
major focus of corporate AI research is natural language processing, which is
much more closely tied to general intelligence. It is not clear whether we could
call any system generally intelligent without it.
If you accept that mainstream AI research is making some progress toward
human-level AI, even though that is not its main intention, then it quickly
becomes clear that EA efforts would have greater marginal benefit in working on
AI safety--something that mainstream research largely rejects outright.