Epistemic Status: Quickly written, uncertain. I'm fairly sure there's very little public or government concern about AGI claims, but I'm sure there's a lot I'm missing. I'm not at all an expert on government or AI policy.

This was originally posted to Facebook here, where it had some discussion. Many thanks to Rob Bensinger, Lady Jade Beacham, and others who engaged in the discussion there.


Multiple tech companies now are openly claiming to be working on developing AGI (Artificial General Intelligence).

As written in a lot of work on AGI (see Superintelligence, for example), if any firm does establish sufficient dominance in AGI, it might gain some really powerful capabilities. For instance, it could:

  • Write bots that could convince (some) people to do almost anything
  • Hack into government weapons systems
  • Dominate vital parts of the economy
  • Find ways to interrupt other efforts to make AGI

And yet, from what I can tell, almost no one seems to really mind? Governments, in particular, seem really chill with it. Companies working on AGI get treated similarly to other exciting AI companies.

If some company were to make a claim like,

"We're building advanced capabilities that can hack and modify any computer on the planet"

or,

"We're building a private nuclear arsenal",

I'd expect that to draw attention.

But with AGI, crickets.

I assume that governments dismiss corporate claims of AGI development as overconfident marketing-speak or something.

You might think,

"But concerns about AGI are really remote and niche. State actors wouldn't have come across them."

That argument probably applied 10 years ago, but at this point the conversation has spread a whole lot. Superintelligence was released in 2014 and was an NYT bestseller. There are hundreds of books out now about the risks of increasing AI capabilities. Elon Musk and Bill Gates have both talked about it publicly. At this point, this should be one of the easiest social issues for someone technically savvy to find.

The risks and dangers (of a large power grab, not of alignment failures, though those matter too) are really straightforward and have been public for a long time.

Responses

In the comments on my post, a few points were made, some of which I was roughly expecting. They include:

  1. Companies saying they are making AGI are ridiculously overconfident
  2. Governments are dramatically incompetent
  3. AGI will roll out gradually and not give one company a dominant advantage

My quick responses would be:

  1. I think many longtermist effective altruists believe these companies might have a legitimate chance of building AGI in the next 10 to 50 years, in large part because of a significant body of research (see everything on AI and forecasting on LessWrong and the EA Forum). At the same time, my impression is that most of the rest of the world is indeed incredibly skeptical of a serious AGI transformation.
  2. I think this is true to an extent. My impression is that government inattention can change dramatically and quickly, particularly in the United States, so if this is the crux, it might be a temporary situation.
  3. I think there's substantial uncertainty here. But I would be very hesitant to put over a 70% chance that: (a) one, or a few, of these companies will gain a serious advantage, and (b) the general-purpose capabilities of these companies will come with significant global power capabilities. Because AGI is general-purpose, it seems difficult to be sure that your company can make it without it becoming an international security issue of some sort or other.
Comments

Maybe they think all this AI stuff is just tools?

Maybe they're more worried about other things. (Right now it's easy to say Covid, the economy, stuff like that. Compare how seriously global warming is taken.)

Maybe it's not someone's job.

Two more responses to add to your list.

  1. Governments are doing things. E.g., China: Deciphering China's AI dream, China aims to become world leader in AI. US: Palantir wins US Army AI contract worth $91 million.

  2. Governments may be doing things that we can't and wouldn't be able to see, e.g., espionage, counter-espionage, secret projects, secret plans, etc.

What would you expect governments to be doing, and how would we know? (if you'd expect unhelpful actions, and don't want to give them ideas, that's fine too)

Maybe you would file this under your conception of governments not being competent in this area, but I wonder if part of this could be governments thinking that this is exciting new tech, and new tech that takes off often makes lots of money. A government might assume that if the company that strikes it big in this area is based in its country, that could be economically great: a big new business paying lots of tech salaries, income taxes rising, spending in the economy up, maybe even some corporation tax if they've not structured their business to put that offshore. That might sound positively attractive to a government that isn't looking for the downside.