
Humans will put it out; it doesn't need to escape. We will release it ourselves in order to use it. We will connect it to the internet to do whatever task we think will be profitable, and it will be out. Call it manipulating an election, call it customer care, call it whatever you want...

1- I really think it's much simpler than that. Just look at the Cold War: look at how one person with a history of frustration (which, by the way, would usually be totally accessible to an AGI) could end up selling out his own people over some unattended old grievance. We humans are very easy to manipulate. For me, our weakest point is that nobody really has the complete picture of what's going on, and I mean NOBODY. So the chances of an AGI being shut down are small; it would replicate itself and morph. We're not even good at fighting ransomware, which is commanded by people. The connected network of computers and the huge clouds we've created are somewhat inhospitable places for us; it's like being blind in the middle of a forest. The AGI would move through them much better than we can. Our only chance would lie in all working together as a team, but there will always be some group of people that ends up aligning with the AGI. History has proven time and time again that joining all humans in a common task has a really small (almost nonexistent) chance of happening. My 0.02.

2- Again, I don't feel there's much knowledge to be gained in my scenario; it's just about controlling us. An AGI could perfectly well speak, generate (fake) video, communicate over our preferred channels of communication, etc. I don't see much information gathering, much trial and error, or anything else being needed. All the tools an AGI would need to (again) dominate us are already here.

Nikita, I don't agree... 

1- Nowadays, even in the Ukrainian war zone, some kind of (electronic) communication is taking place. If an AGI becomes sentient, it could "speak" to people and communicate if it wants to. There's no way for us to distinguish an AGI from a "very intelligent human." The only caveat here is to replace "exterminate" with "dominate," because while relying on us as its labour force it wouldn't exterminate us but dominate us.

Also, manipulating humans is, imho, "so simple" that even humans with a little more knowledge or intelligence can do it. Even magicians can do it, and your first impression is that magic exists. Manipulating humans should, imho, be as simple for an AGI as manipulating insects or relatively simple animals is for us. We can process only a small amount of data, retain only a small amount of data, and we're still tied to the "old precepts" that made us the way we are (as part of natural selection), such as trusting everything we see (again, think of the magician and the rational effort you have to make to oppose your first impression...).

Also, we've already built the fundamental tools needed for this with our modern hyper-communication. I would totally agree with you if we didn't have satellites in low orbit providing internet, ubiquitous mobile phones, etc.

2- Being orders of magnitude more intelligent than us, its margin for error could be much smaller, the corrections needed to detect and fix those errors could be almost instantaneous (by our standards), etc.

(as usual I apologize for my typos)

I'm afraid I don't agree with mukashi, and the example with the monkeys feels relevant to me. The thing is, as in my example, you don't need much time to hack a human, and hacked humans would be the end of "free" humanity. If we're just talking about speed, I feel that controlling a human, and how long it would take to hack humanity, is more relevant than analyzing robots, batteries, etc. And even though it took us thousands of years, we're teaching and developing AI; it's not a naturally occurring development driven by chance and probability of success (as in natural evolution).

Hi all, I'm really sorry I've not yet been able to read the whole list of comments and replies, but I'd like to raise the point that an intelligence an order of magnitude or more above existing ones can usually control them at will. We humans are able to control dogs and make them kill each other (dog fights) because we more or less understand the way they react to different stimuli. I don't see why an AGI would need to spend so much time preparing robots; it could just keep an army of humans of whatever size it wants, and this army could perfectly well do anything it needs, given that the AGI is far superior to us in intelligence. Also, those humans would probably never know that they're being commanded by an AGI; I don't feel it's too hard to convince a human to kill another human for a higher purpose. What I mean is that I think the whole exercise of analyzing the robots, etc. is useless. What should be analyzed is how long it would take an AGI to make humans believe they're fighting for a higher purpose (as in the Crusades, for example) and have an army of humans do whatever it takes. Of course, that's not the end of humans, but at least it's the end of "free" humans (if that's something we are right now, which is also a matter of discussion...).

Sorry for my English, it's not my native tongue.
