I'm often surprised more people haven't read OpenAI CEO Sam Altman's 2015 blog posts Machine Intelligence Part 1 & Part 2. In my opinion, they contain some of the strongest, most direct, and clearest articulations of why AGI is dangerous from a person at an AGI company.

(Note that the posts were published before OpenAI was founded. There's a helpful wiki of OpenAI history here.) 

Hence: a linkpost. I've copied both posts directly below for convenience. I've also bolded a few of the lines I found especially noteworthy.  


Machine intelligence, part 1

This is going to be a two-part post—one on why machine intelligence is something we should be afraid of, and one on what we should do about it.  If you’re already afraid of machine intelligence, you can skip this one and read the second post tomorrow—I was planning to only write part 2, but when I asked a few people to read drafts it became clear I needed part 1.
 

WHY YOU SHOULD FEAR MACHINE INTELLIGENCE

Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity.  There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could.  Also, most of these other big threats are already widely feared.

It is extremely hard to put a timeframe on when this will happen (more on this later), and it certainly feels to most people working in the field that it’s still many, many years away.  But it’s also extremely hard to believe that it isn’t very likely that it will happen at some point.

SMI does not have to be the inherently evil sci-fi version to kill us all.  A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out.  Certain goals, like self-preservation, could clearly benefit from no humans.  We wash our hands not because we actively wish ill towards the bacteria and viruses on them, but because we don’t want them to get in the way of our plans.

(Incidentally, Nick Bostrom’s excellent book “Superintelligence” is the best thing I’ve seen on this topic.  It is well worth a read.)

Most machine intelligence development involves a “fitness function”—something the program tries to optimize.  At some point, someone will probably try to give a program the fitness function of “survive and reproduce”.  Even if not, it will likely be a useful subgoal of many other fitness functions.  It worked well for biological life.  Unfortunately for us, one thing I learned when I was a student in the Stanford AI lab is that programs often achieve their fitness function in unpredicted ways.

Evolution will continue forward, and if humans are no longer the most-fit species, we may go away.  In some sense, this is the system working as designed.  But as a human programmed to survive and reproduce, I feel we should fight it.

How can we survive the development of SMI?  It may not be possible.  One of my top 4 favorite explanations for the Fermi paradox is that biological intelligence always eventually creates machine intelligence, which wipes out biological life and then for some reason decides to make itself undetectable.

It’s very hard to know how close we are to machine intelligence surpassing human intelligence.  Progression of machine intelligence is a double exponential function; human-written programs and computing power are getting better at an exponential rate, and self-learning/self-improving software will improve itself at an exponential rate.  Development progress may look relatively slow and then all of a sudden go vertical—things could get out of control very quickly (it also may be more gradual and we may barely perceive it happening).
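One loose way to read the "double exponential" claim, as an illustration only: the constants a and b below are arbitrary and are not numbers from the post. Exponential base progress in hardware and human-written software, compounded by exponential self-improvement, gives a curve that is nearly flat for a long time and then turns close to vertical.

```latex
% Illustrative sketch only: a, b > 0 are arbitrary constants, not estimates.
% Exponential base progress, compounded again by exponential self-improvement:
\[
  \text{capability}(t) \;\approx\; \exp\!\bigl(a\, e^{\,b t}\bigr)
\]
% Nearly flat for small t, then extremely steep, matching the
% "slow, then suddenly vertical" picture described above.
```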

As mentioned earlier, it is probably still somewhat far away, especially in its ability to build killer robots with no help at all from humans.  But recursive self-improvement is a powerful force, and so it’s difficult to have strong opinions about machine intelligence being ten or one hundred years away.

We also have a bad habit of changing the definition of machine intelligence when a program gets really good to claim that the problem wasn’t really that hard in the first place (chess, Jeopardy, self-driving cars, etc.).  This makes it seem like we aren’t making any progress towards it.  Admittedly, narrow machine intelligence is very different than general-purpose machine intelligence, but I still think this is a potential blindspot.

It’s hard to look at the rate of improvement in the last 40 years and think that 40 years from now we’re not going to be somewhere crazy.  40 years ago we had Pong.  Today we have virtual reality so advanced that it’s difficult to be sure if it’s virtual or real, and computers that can beat humans in most games.

Though, to be fair, in the last 40 years we have made little progress on the parts of machine intelligence that seem really hard—learning, creativity, etc.  Basic search with a lot of compute power has just worked better than expected. 

One additional reason that progress towards SMI is difficult to quantify is that emergent behavior is always a challenge for intuition.  The above common criticism of current machine intelligence—that no one has produced anything close to human creativity, and that this is somehow inextricably linked with any sort of real intelligence—causes a lot of smart people to think that SMI must be very far away.

But it’s very possible that creativity and what we think of as human intelligence are just an emergent property of a small number of algorithms operating with a lot of compute power.  (In fact, many respected neocortex researchers believe there is effectively one algorithm for all intelligence.  I distinctly remember my undergrad advisor saying the reason he was excited about machine intelligence again was that brain research made it seem possible there was only one algorithm computer scientists had to figure out.)

Because we don’t understand how human intelligence works in any meaningful way, it’s difficult to make strong statements about how close or far away from emulating it we really are.  We could be completely off track, or we could be one algorithm away. 

Human brains don’t look all that different from chimp brains, and yet somehow produce wildly different capabilities.  We decry current machine intelligence as cheap tricks, but perhaps our own intelligence is just the emergent combination of a bunch of cheap tricks. 

Many people seem to believe that SMI would be very dangerous if it were developed, but think that it’s either never going to happen or definitely very far off.   This is sloppy, dangerous thinking. 

[1] I prefer calling it "machine intelligence" and not "artificial intelligence" because artificial seems to imply it's not real or not very good.  When it gets developed, there will be nothing artificial about it.


(Reminder: the bolding is my own emphasis on lines I found noteworthy; nothing was bolded in Sam Altman's original posts.)


 

Machine intelligence, part 2

This is part two of a two-part post—the first part is here.

 

THE NEED FOR REGULATION

Although there has been a lot of discussion about the dangers of machine intelligence recently, there hasn’t been much discussion about what we should try to do to mitigate the threat. 

Part of the reason is that many people are almost proud of how strongly they believe that the algorithms in their neurons will never be replicated in silicon, and so they don’t believe it’s a potential threat.  Another part of it is that figuring out what to do about it is just very hard, and the more one thinks about it the less possible it seems.  And another part is that superhuman machine intelligence (SMI) is probably still decades away [1], and we have very pressing problems now.

But we will face this threat at some point, and we have a lot of work to do before it gets here.  So here is a suggestion. 

The US government, and all other governments, should regulate the development of SMI.  In an ideal world, regulation would slow down the bad guys and speed up the good guys—it seems like what happens with the first SMI to be developed will be very important.

Although my general belief is that technology is often over-regulated, I think some regulation is a good thing, and I’d hate to live in a world with no regulation at all.  And I think it’s definitely a good thing when the survival of humanity is in question.  (Incidentally, there is precedent for classification of privately-developed knowledge when it carries mass risk to human life.  SILEX is perhaps the best-known example.) 

To state the obvious, one of the biggest challenges is that the US has broken all trust with the tech community over the past couple of years.  We’d need a new agency to do this.

I am sure that Internet commentators will say that everything I’m about to propose is not nearly specific enough, which is definitely true.  I mean for this to be the beginning of a conversation, not the end of one.

The first serious dangers from SMI are likely to involve humans and SMI working together.  Regulation should address both the case of malevolent humans intentionally misusing machine intelligence to, for example, wreak havoc on worldwide financial markets or air traffic control systems, and the “accident” case of SMI being developed and then acting unpredictably.

Specifically, regulation should: 

1)   Provide a framework to observe progress.  This should happen in two ways.  The first is looking for places in the world where it seems like a group is either being aided by significant machine intelligence or training such an intelligence in some way. 

The second is observing companies working on SMI development.  The companies shouldn’t have to disclose how they’re doing what they’re doing (though when governments get serious about SMI they are likely to out-resource any private company), but periodically showing regulators their current capabilities seems like a smart idea.

2)   Given how disastrous a bug could be, require development safeguards to reduce the risk of the accident case.  For example, beyond a certain checkpoint, we could require development happen only on airgapped computers, require that self-improving software require human intervention to move forward on each iteration, require that certain parts of the software be subject to third-party code reviews, etc.  I’m not very optimistic that any of this will work for anything except accidental errors—humans will always be the weak link in the strategy (see the AI-in-a-box thought experiments).  But it at least feels worth trying.

Being able to do this—if it is possible at all—will require a huge amount of technical research and development that we should start intensive work on now.  This work is almost entirely separate from the work that’s happening today to get piecemeal machine intelligence to work.

To state the obvious but important point, it’s important to write the regulations in such a way that they provide protection while producing minimal drag on innovation (though there will be some unavoidable cost).

3)   Require that the first SMI developed have as part of its operating rules that a) it can’t cause any direct or indirect harm to humanity (i.e. Asimov’s zeroth law), b) it should detect other SMI being developed but take no action beyond detection, c) other than as required for part b, have no effect on the world.

We currently don’t know how to implement any of this, so here too, we need significant technical research and development that we should start now. 

4)   Provide lots of funding for R&D for groups that comply with all of this, especially for groups doing safety research.

5)   Provide a longer-term framework for how we figure out a safe and happy future for coexisting with SMI—the most optimistic version seems like some version of “the human/machine merge”.  We don’t have to figure this out today.

Regulation would have an effect on SMI development via financing—most venture firms and large technology companies don’t want to break major laws.  Most venture-backed startups and large companies would presumably comply with the regulations.

Although it’s possible that a lone wolf in a garage will be the one to figure SMI out, it seems more likely that it will be a group of very smart people with a lot of resources.  It also seems likely, at least given the current work I’m aware of, that it will involve US companies in some way (though, as I said above, I think every government in the world should enact similar regulations).

Some people worry that regulation will slow down progress in the US and ensure that SMI gets developed somewhere else first.  I don’t think a little bit of regulation is likely to overcome the huge head start and density of talent that US companies currently have. 

There is an obvious upside case to SMI—it could solve a lot of the serious problems facing humanity—but in my opinion it is not the default case.  The other big upside case is that machine intelligence could help us figure out how to upload ourselves, and we could live forever in computers.  Or maybe in some way, we can make SMI be a descendant of humanity.

Generally, the arc of technology has been about reducing randomness and increasing our control over the world.  At some point in the next century, we are going to have the most randomness ever injected into the system. 

In politics, we usually fight over small differences.  These differences pale in comparison to the difference between humans and aliens, which is what SMI will effectively be like.  We should be able to come together and figure out a regulatory strategy quickly.

 

 

Thanks to Dario Amodei (especially Dario), Paul Buchheit, Matt Bush, Patrick Collison, Holden Karnofsky, Luke Muehlhauser, and Geoff Ralston for reading drafts of this and the previous post. 

[1] If you want to try to guess when, the two things I’d think about are computational power and algorithmic development.  For the former, assume there are about 100 billion neurons and 100 trillion synapses in a human brain, and the average neuron fires 5 times per second, and then think about how long it will take on the current computing trajectory to get a machine with enough memory and flops to simulate that.
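To make the footnote's arithmetic concrete, here is a rough back-of-envelope sketch. Only the neuron/synapse counts and the 5 Hz firing rate come from the text above; the ops per synaptic event, bytes per synapse, starting machine throughput, and doubling time are illustrative assumptions added here, so the printed numbers are an order-of-magnitude toy rather than a forecast.

```python
import math

# Back-of-envelope sketch of the footnote's brain-simulation estimate.
# Neuron/synapse counts and the 5 Hz firing rate are taken from the post;
# every other parameter below is an illustrative assumption.

NEURONS = 1e11                 # ~100 billion neurons (listed for context)
SYNAPSES = 1e14                # ~100 trillion synapses (drives the estimate)
FIRING_RATE_HZ = 5.0           # average firing rate assumed in the post

OPS_PER_SYNAPTIC_EVENT = 1.0   # assumption: one arithmetic op per synapse per spike
BYTES_PER_SYNAPSE = 4.0        # assumption: one 32-bit weight per synapse

required_ops_per_sec = SYNAPSES * FIRING_RATE_HZ * OPS_PER_SYNAPTIC_EVENT
required_memory_tb = SYNAPSES * BYTES_PER_SYNAPSE / 1e12

# Assumption: a hypothetical machine sustaining 1e13 ops/s today, with
# throughput doubling every 1.5 years ("the current computing trajectory").
current_ops_per_sec = 1e13
doubling_time_years = 1.5

years_to_reach = max(
    0.0,
    math.log2(required_ops_per_sec / current_ops_per_sec) * doubling_time_years,
)

print(f"compute needed : ~{required_ops_per_sec:.1e} ops/s")
print(f"memory needed  : ~{required_memory_tb:.0f} TB of synaptic state")
print(f"years to reach : ~{years_to_reach:.1f} (under the assumptions above)")
```

Under these assumptions the compute figure comes out around 5×10^14 ops/s and the synaptic state around 400 TB; changing any of the assumed parameters shifts the answer by similar factors.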

For the algorithms, neural networks and reinforcement learning have both performed better than I’ve expected for input and output respectively (e.g. captioning photos depicting complex scenes, beating humans at video games the software has never seen before with just the ability to look at the screen and access to the controls).  I am always surprised how unimpressed most people seem with these results.  Unsupervised learning has been a weaker point, and this is probably a critical part of replicating human intelligence.   But many researchers I’ve spoken to are optimistic about current work, and I have no reason to believe this is outside the scope of a Turing machine.

4 comments

This is nice to read, because lately Sam more often seems to be on the defensive in public and comes across sounding more "accel" than I'm comfortable with. In this video from 6 years ago, various figures like Hassabis and Bostrom (Sam is not there) propose on several occasions exactly what's happening now - a period of rapid development, perhaps to provoke people into action / regulation while the stakes are somewhat lower - which makes me think this may have been in part what Sam was thinking all along too.

https://www.youtube.com/watch?v=h0962biiZa4

Very interesting, thanks for sharing.

Accurate and honest representation of Altman's views as of 2015, particularly as this was before he had a personal financial or reputational stake in the development of AGI.

Since the regulation he called for has not come about, I'd think he's following a different strategy now: probably to develop it first so someone else doesn't. I actually take his statements on the risk seriously. I think he probably believes, as stated, that alignment isn't solved.

I think OpenAI has now hit the level of financial success where the profit motive is probably less than the reputational motive. I think that Altman probably thinks more about becoming a hero in the public eye and avoiding being a villain than he does about becoming the head of the largest company in history. But both will be a factor. In any case I think his concerns about alignment are sincere. That's not enough of a guarantee that he'll get it right as we close in on X-risk AI (XRAI seems a more precise term than AGI at this point), but it is something.

He said that they weren't training a GPT-5 and that they prefer to focus on adapting smaller AIs to society (I suppose they might still be researching AGI regardless, just not training new models).

I thought that it might have been to slow down the race.

You said:
"In my opinion, they contain some of the strongest, most direct, and clearest articulations of why AGI is dangerous from a person at an AGI company."

With respect, this is simply Sam's stance at this point in time. It would be most interesting if you would care to offer your own opinion / analysis of the "dangerous" aspect, set in the context of the current state of the art and the emerging pros and cons.