My current model of the post-AGI economy is not explosive: there is some acceleration and some risk, but no huge multiplier on existential risk. I thought it would be interesting to map out my current thoughts based on these assumptions. I'm exploring the scenario where AI alignment looks possible but might still lead to bad outcomes.


Threats are normally caused by asymmetry. Attackers can upgrade their attack infrastructure more easily because it is concentrated; defensive infrastructure takes longer to upgrade because it is spread out geographically and across many different security levels. The current mishmash of humans and computers is moderately insecure as it is. If bad actors gain an advantage in AGI, this insecurity could lead to a number of possible threats.

National Security Threats

If rogue states or terrorists gain an advantage, they can potentially exploit the current technological and human makeup of the military services. One example of what might be possible with an AGI asymmetry is sending fake orders to military units. This could have devastating consequences.

National security agencies will try to prevent such outcomes if at all possible.

Political Threats

Manipulating the populace into electing bad leaders or supporting bad policies (such as excessive military reduction) could have a variety of political consequences.

Takeover from national security

Another possible threat is a national security apparatus gaining the lead in AGI and subverting its own country with it. This kind of silent coup would be very hard to detect, as the logic of the situation suggests that national security agencies would take very similar actions, whether malign or benevolent, until AGI is created. The more covert an operation and the less oversight it has, the greater the potential for a national-security-based project to go rogue.

Likely response from security agencies

If the security agencies believe they are in this kind of world, they will likely do what they can to come out ahead in the asymmetry, so that they can build up defensive capabilities ahead of the offensive ramp-up.

This might involve trying to slow others down, or limiting public knowledge about AGI in order to maintain their edge.

You would also hope they would invest massively in defensive non-AGI technologies and spread them throughout the world. If malicious actors could exploit India's or Pakistan's military or political institutions, it would still be disastrous.


Comments

I cannot imagine a well-functioning AGI without subversion of government. If you look at the quality of the law-writing currently in existence, there are many reasons to let an AGI write laws: the quality will be better if you put the equivalent of 10,000 smart humans, as AGIs, on the task of writing a law than when a bunch of congressional staffers write it.

I agree that our political institutions are likely to change post-AGI, and that there might even be a period of martial law if things go really haywire.

On having an AGI write laws by itself (rather than as part of an augmented politician): I consider that scenario to need a lot more fleshing out. AGI is not sufficiently magic that you can just wave it at things. You need to consider questions like how the AGI is aligned with the populace (fairly!). If the AGI persists over time, you would want to make sure its design avoids any calcification of its ideas. In humans, it is possible that successful ideas monopolize the idea space so that new ideas don't have room to flourish, independent of physical aging. You want to avoid that kind of problem with large-scale AGI in positions of power.

Issues of trust in the system also become important (how can you be sure it is aligned with you?), much as they do with voting machines.
