Q4 Time scale
To claim that we need to worry about AGI alignment today, you need to prove that the time scale of development will be short. Common sense tells us that humans will be able to deal with whatever software we can create:
1) We create some software (e.g. self-driving cars, nuclear power plant software).
2) People accidentally die (or suffer other "bad outcomes").
3) Humans, governments, people in general "course correct".
So you have to prove (or convincingly argue) that an AGI will develop, gain control of its own resources, and then act on the world, all within a very short period of time. I haven't seen a convincing argument for that.
Q3 Technology scale
I would love to read more about how software could emulate a human brain. The human brain is an analog system down to the molecular level: a giant soup with a delicate balance of neurotransmitters and neuropeptides. There are thousands of different kinds of neurons in the brain, and each one acts a little differently. As a programmer, I cannot imagine how to faithfully model something like that directly. Digital computers seem completely inadequate. I would guess you'd have more luck wiring together 1000 monkey brains.
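For contrast, here is roughly what a digital neuron model usually looks like: a minimal sketch of a standard leaky integrate-and-fire neuron (the parameter values are arbitrary, for illustration only). Everything I just described, the neurotransmitter soup, the neuropeptides, the thousands of cell types, is simply absent from it:

```python
# Minimal leaky integrate-and-fire neuron: the usual digital abstraction.
# A sketch with arbitrary parameters; real neurons are analog chemical
# systems this model doesn't even try to capture (no neurotransmitter
# soup, no neuropeptides, only one "kind" of neuron).

def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-0.065,
                 v_threshold=-0.050, v_reset=-0.065, resistance=1e7):
    """Return spike times (seconds) for one leaky integrate-and-fire neuron."""
    v = v_rest
    spikes = []
    for step, current in enumerate(input_current):
        # Membrane voltage decays toward rest and is pushed by input current.
        dv = (-(v - v_rest) + resistance * current) / tau
        v += dv * dt
        if v >= v_threshold:          # crude all-or-nothing "spike"
            spikes.append(step * dt)
            v = v_reset
    return spikes

# Constant 2 nA input for one simulated second.
print(simulate_lif([2e-9] * 1000))
```

Scaling up the number of units like this doesn't obviously get you any closer to the chemistry.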
Q2 Agency
I also have a question about agency. Let's say Bob invents an AGI in his garage one day. It even gets smarter the more it runs. When Bob goes to sleep at night, he turns the computer off and his AI stops getting smarter. It doesn't control its own power switch, it isn't managing Bob's subnet for him, and it doesn't have internet access. I guess in a doomsday scenario Bob would have to have programmed in "root access" for his ever more intelligent software? Then it could eventually modify the operating system it's running on? How does such a thing get to be in charge of anything? It would have...
Q1 Definitions
Who decides what kind of software gets called AI? Forget about AGI; just consider the term AI. What about code in a game that decides where the monsters should move and attack? We call that AI. What about a program that plays Go well enough to beat a master? What about a program that plays checkers? What about a chicken trained so that it can't lose at tic-tac-toe? Which of those is AI? The only answer I can think of is that AI is when a program acts in ways that seem like only a person should be able to act. Any more specific definitions are welcome. You can't really have a rational conversation about how AI will develop into AGI unless you are specific about what AI is.
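To make the first example concrete, here is the kind of monster-movement logic that routinely ships in games under the label "AI" (a hypothetical minimal sketch, not taken from any real engine):

```python
# The kind of code that routinely gets called "AI" in a game: a few lines
# of greedy chase logic. (Hypothetical minimal example.)

def monster_ai(monster_pos, player_pos, attack_range=1):
    """Decide the monster's next action: attack if close, otherwise step closer."""
    mx, my = monster_pos
    px, py = player_pos
    if abs(mx - px) + abs(my - py) <= attack_range:   # Manhattan distance
        return "attack"
    # Step one tile along whichever axis is farther from the player.
    if abs(mx - px) >= abs(my - py):
        return ("move", (mx + (1 if px > mx else -1), my))
    return ("move", (mx, my + (1 if py > my else -1)))

print(monster_ai((0, 0), (3, 1)))   # ('move', (1, 0))
```

A dozen lines of greedy chase logic, and the box still says "AI".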
The beginning of chapter 1 introduces a definition of the word "belief" which is very specific and not close to the common everyday meaning. The concept you're talking about has no specific intuitive word that would fit better, so "belief" is as good as any (IMHO), but I think you need to spend the first page or two examining what you mean by "belief" and relating your concept to existing concepts in neuroscience and psychology.
In the real world, understanding of the universe changes incrementally, so starting out by telling the reader that everyone else is wrong and you have the "true" answers is bound to make people stop reading. In...
Thanks for your essay, it was encouraging and inspiring!
What you have observed seems to accurately reflect the world and the way people function (not just on the internet). When I did a Google search for "the need to believe", I found links that seemed interesting and relevant. I have a working theory about the human brain that seems to fit the evidence I see in my life and in what I have read.
The human brain is a giant pattern-matching machine. It operates most of the time on incomplete data. But the brain doesn't express patterns as abstract theories; it expresses those observed patterns as "belief". We observe evidence, and we form a...
Q5 Scenarios
I have different thoughts about different doomsday scenarios. I can think of two general categories, but maybe there are more.
A) "Build us a better bomb." - The AGI is locked in service to a human organization who uses it's superpowers to dominate and control the rest of the world. In this scenario the AGI is essentially a munitions that may appear in the piucture without warning (which takes us back to the time scale concern). This doesn't require the AGI to become self-sufficient. Presumably lesser AIs would also be capable of building better bombs.
B) "Evil Overlord" - An AGI comes into being fast enough that nobody can stop it, and somehow... (read more)