[ Question ]

Looking for non-AI people to work on AGI risks

by otto.barten · 1 min read · 30th Dec 2019 · 5 comments


I'm worried about AGI safety, and I'm looking for non-AI people to worry with. Let me explain.

A lecture by futurist Anders Sandberg, online reading, and real-life discussions with my local Effective Altruism group have given me, a non-AI person (33-year-old physicist, engineer, climate activist, and startup founder), the following convictions:

- AGI (Artificial General Intelligence, Superintelligence, or the Singularity) is a realistic possibility in the coming decades, say between 2030 and 2050
- AGI could well become orders of magnitude smarter than humans, fast
- If unaligned, AGI could well lead to human extinction
- If aligned ('safe'), AGI could still lead to human extinction, for example because someone's goals turn out to be faulty, or because someone removes the safety from the code

I'm active in two climate NGOs, where a lot of people worry about human extinction due to the climate crisis. I worry about this too, but at the same time I think the chance of human extinction due to AGI is much larger. Still, I don't believe that chance to be 100%: we could, for example, still stop AGI development (which I think makes more sense than fleeing to Mars or working on a human-machine interface). Stopping development is a novel angle for many AI safety researchers, futurists, startup founders, and the like. However, many non-AI people think it is a very sensible solution, at least if all else fails. I agree with them. It will not be an easy goal to achieve, and I see the costs, but I think it makes the most sense of the options we have.

Therefore, I'm looking for non-AI people who are interested in working with me on common-sense solutions to the existential risks posed by AGI.

Does anyone know where to find them?



2 Answers

I'm someone who is mainly moving in the opposite direction (from AI to climate change). I see AGI as a lot harder to achieve than most people do, mainly because I expect the potential political ramifications to slow development, and because I think it will need experiments with novel hardware, which makes it more visible than just coding. So I see it as relatively easy to stop, at least inside a country. Multi-nationally it would be trickier.

Some advice: I would try to frame your effort as "Understanding AGI risk". While you currently think there is risk, having an open mind about the status of that risk is important. If AGI turns out to be free of existential risk, it could help with climate adaptation, even if it is not in time for climate mitigation.

Edit: You could frame it simply as understanding AI, and put together independent briefs on each project so that policymakers can understand the state of play and the likely impacts, both positive and negative. Getting a good reputation and maintaining independence might be hard, though.

The first and selfish answer (probably shared by countless others) would be: "I'm interested in working on that."

Am I qualified? Maybe; maybe not. I suspect I won't know what makes an effective AI safety planner until somebody actually starts to do it.

I'll make this observation: the potential emergence of AGI has two fronts. The first is raw scientific development: programmers, engineers, and cognitive scientists just "doing their thing," understanding our world by replicating and modifying parts of it. The second is the one the vast majority of people can already see: specific-task AI devices getting stronger, faster, and better connected. If it cannot be done today, then within months a person will be able to talk to the air around them and order a cheeseburger that is cooked, assembled, delivered, and paid for entirely by automated, unconscious agents. Who am I to say that, with enough forward development and integration of such automated systems, we would not see emergent automated behavior just as fantastic or dangerous as anything a "thinking" machine might display?

Such a watchdog group could already be useful if it brought some economic expertise to bear on current technology issues (e.g. workplace automation and the unavoidable employment changes it causes).

This is a long-winded "I agree." We should not wait for someone else to organize our protective stance toward the agents we build specifically to be better at tasks than ourselves, be they specific or general. Multiple experienced folk should always be asking: "What is the driving goal of this AGI? What are its success/failure conditions? What information does it have access to? Where are the means to interrupt it, if it finds an unfriendly solution to its hurdles?"