That's about 1500 man-years.
1.5e3 is not large compared to the total number of man-years spent on AI, which is probably more like 1.5e5. There are probably 1e4 researchers in AI-related fields, so we're producing at least 1e4 man-years of effort per year. It may be that private sector projects are more promising/threatening than academic projects, but it seems implausible that this would be a 100x effect.
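To make the ratio explicit, here is the back-of-envelope comparison as a quick script (every input is a rough estimate from the discussion above, not a measured figure):

```python
# Back-of-envelope check of the estimates above; all inputs are rough guesses.
funded_man_years = 1.5e3        # man-years in the hypothetical funded wave
total_field_man_years = 1.5e5   # rough cumulative man-years spent on AI
researchers = 1e4               # rough headcount in AI-related fields
man_years_per_year = researchers  # ~1e4 man-years of effort produced per year

# For the funded projects to matter as much as the rest of the field,
# each of their man-years would have to count for about 100x:
print(total_field_man_years / funded_man_years)  # -> 100.0

# The funded wave is about 15% of a single year's field-wide output:
print(funded_man_years / man_years_per_year)     # -> 0.15
```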
By "heading off" I think we should be clear that we are referring to go stones, not some form of sabotage. How can we ensure there will be better safety incentives over the next few decades? That sort of thing.
It's possible that, if the feasibility just isn't there yet no matter the funding, it'll turn out like nanotechnology - funding for molecule-sized robots that gets spent on chemistry instead. (I wonder what the "instead" would be in this case.)
I'm curious at what likelihood of imminent AGI SI or LessWrong readers would think it a good idea to switch over to an ecoterrorist strategy. The day before the badly vetted machine is turned on is probably a good day to set the charges to blow during the night shift; the funding of this project is probably too early a stage.
Do people think SI should be devoting more of its time and resources to corporate espionage and/or sabotage if unfriendly AI is the most pressing existential threat?
With the difference that many people think it may have been a mistake to make those things illegal to begin with. People considering industrial sabotage to stop UFAI probably don't think that industrial sabotage should be legal in general.
I'd argue the latter. It's hard to imagine how you could know in advance that a uFAI has a high chance of working, rather than being one of thousands of ambitious AGI projects that simply fail.
(Douglas Lenat comes to you, saying that he's finished a powerful fully general self-modifying AI program called Eurisko, which has done very impressive things in its early trials, so he's about to run it on some real-world problems on a supercomputer with Internet access; and by the way, he'll be alone all tomorrow fiddling with it, would you like to come over...)
I think any sequence of events that leads to anyone at all in any way associated with either LessWrong or SI doing anything to hinder any research would be a catastrophe for this community. At best you will get a crank label (more than now, that is); at worst the FBI will get involved.
Convince programmers to refuse to work on risky AGI projects:
Please provide constructive criticism.
We're in an era where the people required to make AGI happen are in such demand that, if they refused to work on an AGI that wasn't safe, they'd still have plenty of jobs to choose from. You could convince programmers to adopt a policy of refusing to work on unsafe AGI. These specifics would be required:
Make sure that programmers at all levels have a good way to determine whether the AGI they're working on has proper safety mechanisms in place.
The company that gets all of this right will be the first two-trillion-dollar company.
Is there any way to reverse this trend in public perception?
You don't supply a counter-argument. Do you disagree - or are you looking for a way to create a mass delusion?
Help the good guys win the race:
Please provide constructive criticism.
An open source project might prevent this problem, not because an open source AGI is safe, but because (1) open source projects are open, so anybody can influence them, including people who are knowledgeable about the risks, and (2) the people involved in open source projects probably tend to have a pretty strong philanthropic streak, so they're more likely to listen to the dangers than a risk-taking capitalist. The reason it may stop them is this: if an open source project gets there first...
Create public relations nightmare for anyone producing risky AGI:
Please provide constructive criticism.
One powerful way to get people thinking about safety is to invent clever ways to shout from the rooftops that this could be dangerous, presenting the message in a way that most people will grok. If everybody is familiar enough with how dangerous it could be, then funding an AGI project without a safety plan in place would be a PR disaster for the companies doing it. That would put a lot of pressure on them to put safeties into place. This wouldn'...
Even Faster Solution:
Survey a bunch of open source people, asking whether they'd switch to working on friendly AGI in the event that an AGI project started without enough safety, or collect their signatures. Surely the thousands of programmers now working on projects like Firefox and Open Office, who clearly have an altruistic bent as they are working for free, will see that it is more important to prevent unfriendly AGI than to make sure the next version of these smaller projects is released on time.
If we can honestly say to these companies, "If you try to start an AGI project without thorough safety precautions, 100,000 programmers have said they'll rise up against you and make a FREE AGI to compete with yours that's safer," what they will hear is "We'll be put out of business!" Assuming they believe the survey results are accurate, and that the plan for the project is feasible, they will be forced to take safety precautions in order to protect their investments.
Just that ONE piece of information, if communicated right, could transform a risky AGI arms race into a much safer one.
Here's a multiplier effect: If you're asking a bunch of programmers anyway, you may as well ask them if they'd be willing to make a monetary contribution toward the friendly AGI project for x, y, or z strategies/prerequisites. Programmers tend to make a lot of money.
How this could postpone an arms race:
If the bar is set high enough (which can be done by asking the programmers for all the conditions an AGI would have to meet, without which they'd deem it "risky" and get involved), you may postpone the arms race for quite some time while companies regroup and try to figure out a strategy to compete with these guys. This also assumes that it becomes common knowledge, among the people who would start a risky AGI project, that this pact among open source programmers exists.
Other pieces that would be required to make this idea work:
The open source programmers would have to be given a message about the company that has started an AGI project, one that gets them to understand the gravity of the problem. They're probably more likely to grok it just because they're programmers and they're the right sort to have already thought about this kind of thing, but we'd want to make sure the message is really clear. This could be a little tricky due to laws about libel.
Companies may not believe the open source programmers are serious about switching. This is easily resolved by creating a wall where the programmers can post remarks about why they think competing with unsafe AGI is an important project. Surely they will post convincing things like "I love using Linux, but if a risky AGI destroys enough, that won't matter anymore."
Have a way to contact the companies who are starting risky AGI projects in order to send them the message that they're risking the loss of their investment. One way would be to ask the volunteer programmers to email them a threat to compete with them, the way a lot of activist organizations ask their members to tell companies they won't put up with them destroying the environment. This requires having collected the programmers' email addresses beforehand so that they can be asked to email the company. It also requires getting the email addresses of important people at the company, but that's not hard if you know how to look up who owns a website (see the sketch after this list).
Ensure the open source programmers in question are knowledgeable enough about the dangers of AGI to want high standards for safety. They may need to be educated about this in order to make informed decisions. Providing compelling examples and a clearly written list of safety standards are both important; otherwise not everyone will be on the same page, and there won't be anything solid causing them to consciously confront their biases and doubts.
Have a way to ask all (or a significant number of) the people interested in doing open source programming whether they'd switch. This does two things: 1. You get your survey results / signatures. 2. You get them thinking about it as a cause. Getting them thinking about it and discussing it, if they aren't already, would catalyze more of them to decide to work on it, assuming they use rational thought processes. After all, what's more important? Failing to upgrade Firefox, or having to live with unfriendly AGI? Just asking enough people the question would start a snowball effect that would attract people to the cause.
Retain some contact method that allows you to inform them when a risky AGI project starts. Note: Sending out mass-mailings is really tricky because spam filters are set to "paranoid" - it might take a person experienced with this to get the email campaign to go through.
When it is time for them to switch to AGI, they'll need to be convinced that it ACTUALLY is, in fact, that time. There will be inertia to overcome, so you'd need to present compelling reasons to believe that they should change over immediately.
The idea for the open source AGI must sound feasible in order for it to be convincing to companies that are starting unsafe AGI projects.
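Regarding the domain-ownership lookup mentioned above, here is a minimal sketch of a raw WHOIS query (the standard WHOIS protocol, RFC 3912, over TCP port 43; the server shown handles .com registrations, and parsing contact details out of the reply is left to the reader):

```python
# Hypothetical sketch of the "look up who owns a website" step: a raw
# WHOIS query (RFC 3912) over TCP port 43.
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com") -> str:
    """Send a WHOIS query for `domain` and return the server's text reply."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

print(whois_query("example.com"))
```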
Please critique. I'd be happy to get more involved in problem-solving.
Other Ideas: Three possible solutions (which also explain why I think open source might have a competitive advantage).
I know people have talked about this in the past, but now seems like an important time for some practical brainstorming here. Hypothetical: the recent $15mm Series A funding of Vicarious by Good Ventures and Founders Fund sets off a wave of $450mm in funded AGI projects of approximately the same scope over the next ten years. Let's estimate that a third of that, $150mm, goes to paying for man-years of actual, low-level, basic AGI capabilities research; at roughly $100k per man-year, that's about 1500 man-years. Anything which can show something resembling progress can easily secure another few hundred man-years to continue making progress.
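For what it's worth, here is that arithmetic as a quick script (the ~$100k per man-year is an assumed round figure, chosen to be consistent with the numbers above rather than given anywhere):

```python
# The arithmetic behind the ~1500 man-year figure, made explicit.
total_funding = 450e6       # hypothesized wave of AGI funding over ten years, USD
research_fraction = 1 / 3   # share assumed to pay for basic capabilities research
cost_per_man_year = 100e3   # assumed fully-loaded cost of one man-year, USD

man_years = total_funding * research_fraction / cost_per_man_year
print(man_years)  # -> 1500.0
```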
Now, if this scenario comes to pass, it seems like one of the worst-case scenarios -- if AGI is possible today, that's a lot of highly incentivized, funded research to make it happen, without strong safety incentives. It seems to depend on VCs realizing the high potential impact of an AGI project, and on the companies having access to good researchers.
The Hacker News thread suggests that some people (VCs included) probably already realize the high potential impact, without much consideration for safety:
Is there any way to reverse this trend in public perception? Is there any way to reduce the number of capable researchers? Are there any other angles of attack for this problem?
I'll admit to being very scared.