cquenelle

Q5 Scenarios

I have different thoughts about different doomsday scenarios.  I can think of two general categories, but maybe there are more.

A) "Build us a better bomb." - The AGI is locked in service to a human organization who uses it's superpowers to dominate and control the rest of the world.  In this scenario the AGI is essentially a munitions that may appear in the piucture without warning (which takes us back to the time scale concern).  This doesn't require the AGI to become self-sufficient.  Presumably lesser AIs would also be capable of building better bombs.

B) "Evil Overlord" - An AGI comes into being fast enough that nobody can stop it, and somehow it gains control of power and mechanical resources needed to preserve it's own existence.  I'm having a hard time visualizing how this happens with nobody noticing until it's too late.  Individual humans and even groups of humans have a hard enough time "preserving their existence" in the face to world troubles.  If a physically centralized AGI threatens the world, it will get bombed out of existence.  If it's distributed it will be tracable and security measures will be put in place to prevent it from invading unauthorized computers.

Q4 Time scale

In order to claim that we need to worry about AGI Alignment today, you need to prove that the time scale of development will be short.  Common sense tells us that humans will be able to deal with whatever software we can create: 1) We create some software (e.g. self-driving cars, nuclear power plant software). 2) People accidentally die (or have other "bad outcomes"). 3) Humans, governments, people in general "course correct".

So you have to prove (or convince me) that an AGI will develop, gain control of its own resources, and then be able to act on the world in a very short period of time.  I haven't seen a convincing argument for that.
 

Q3 Technology scale

I would love to read more about how software could emulate a human brain.  The human brain is an analog system down to the molecular level.  The brain is a giant soup with a delicate balance of neurotransmitters and neuropeptides.  There are thousands of different kinds of neurons in the brain, and each one acts a little differently.  As a programmer, I cannot imagine how to faithfully model something like that directly.  Digital computers seem completely inadequate.  I would guess you'd have more luck wiring together 1000 monkey brains.
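Just to make the contrast concrete, here is roughly the level of abstraction I could actually imagine coding: a toy leaky integrate-and-fire point neuron (a standard textbook simplification with made-up constants, not anyone's real emulation method).  The whole chemical soup I just described collapses into one differential equation and a threshold.

```python
# A minimal sketch of a leaky integrate-and-fire point neuron.
# Textbook toy model with illustrative (not biologically fitted) constants;
# everything chemical about a real neuron is abstracted away.

def simulate_lif(input_drive, dt=0.001, tau=0.02, v_rest=-0.065,
                 v_threshold=-0.050, v_reset=-0.065):
    """Integrate a drive signal (volts) and return spike times in seconds."""
    v = v_rest
    spikes = []
    for step, drive in enumerate(input_drive):
        # One equation stands in for the entire cell:
        #   dv/dt = (v_rest - v + drive) / tau
        v += dt * (v_rest - v + drive) / tau
        if v >= v_threshold:           # crude all-or-nothing spike
            spikes.append(step * dt)
            v = v_reset                # instant reset, no refractory chemistry
    return spikes

# 200 ms of a constant 20 mV-equivalent drive
print(simulate_lif([0.020] * 200))
```

That is more or less the state of the art for a cheap digital stand-in, and it throws away the neurotransmitter balance, the peptides, and the thousands of distinct cell types entirely.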
 

Q2 Agency

I also have a question about agency.  Let's say Bob invents an AGI in his garage one day.  It even gets smarter the more it runs.  When Bob goes to sleep at night he turns the computer off and his AI stops getting smarter.  It doesn't control its own power switch, it's not managing Bob's subnet for him, and it doesn't have internet access.  I guess in a doomsday scenario Bob would have to have programmed in "root access" for his ever more intelligent software?  Then it can eventually modify the operating system that it's running on?  How does such a thing get to be in charge of anything?  It would have to be people who put it in charge of stuff, and people who vet its decisions.

So here's a question: if we write software that can do simple things (make coffee in the morning, do my laundry), how many years will it be before I let it make stock trades for me?  Some people would do that right away.  Then the software gets confused and loses all their money for them.  They call the broker (who says: "You did WHAT?  Hahaha").

So how do these scary kinds of AIs actually get control of their own power cord, much less their internet connection?

When you say "an AGI", are you implying a program that has control of enough resources to guarantee its own survival?
 

Q1 Definitions

Who decides what kind of software gets called AI?  Forget about AGI; I'm just talking about the term AI.  What about code in a game that decides where the monsters should move and attack?  We call that AI.  What about a program that plays Go well enough to beat a master?  What about a program that plays checkers?  What about a chicken that's trained so that it can't lose at tic-tac-toe?  Which of those is AI?  The only answer I can think of is that AI is when a program acts in ways that it seems like only a person should be able to.  Any more specific definitions are welcome.  You can't really have a rational conversation about how AI will develop into AGI unless you are specific about what AI is.
 

Thanks for your essay, it was encouraging and inspiring!

What you have observed seems to accurately reflect the world and the way people function (not just on the internet).  When I did a Google search for "the need to believe" I found links that seemed interesting and relevant.  I have a working theory about the human brain which seems to fit the evidence that I see in my life, and what I have read.

The human brain is a giant pattern-matching machine.  It operates most of the time on incomplete data.  But the brain doesn't express patterns as abstract theories; it expresses those observed patterns as "belief".  We observe evidence, and we form a belief about the world in a very unscientific way.

There is no genetic, neurological process for "possible pattern, but not enough data".

Science itself (and by extension rationality) seems to be something humans invented to overcome the normal operating mode of the human brain, which naturally operates more as a social instrument governed by sophistry and cognitive bias.

Another thing that ties in more specifically to the internet is the need to grab people's interest.  Claiming that an accepted pattern is not true or not useful is unlikely to attract attention or support.  Claiming that a different pattern (moral, ethical, social, etc.) fits reality better will be more engaging to readers because of the nature of human brains that I described above.