Comments

I agree with this post: as Eliezer says, a battle of AI vs. humanity is unlikely to take the form of humanoid robots fighting humans, Terminator-style; it would more likely be far more boring and subtle. I also think one of the key attack vectors for an AI is the psychological fallibility of humans. An AI that is really good at pattern recognition (i.e. most AIs) would probably have little trouble finding your vulnerabilities just by observing your behavior, or even your social media posts. You could probably tell whether someone is highly empathetic (vulnerable to emotional blackmail) or low-IQ (vulnerable to trickery) just by reading their writing. There are already examples of programmers who fell in love with an AI and were ready to do its bidding. From there, if you can manipulate a rich person, or someone otherwise in a position of power, you can do a lot to covertly set up a losing position for humanity.

This makes an interesting point about scarcity. On one hand, it sucks to be limited in how much stuff you have. On the other hand, struggling through adversity, or having scarce skills, can give people a sense of meaning. We know that people whose skills get automated away can suffer a lot from it.

I think that even once AI can replace all of humans' cognitive skills, we will still be useful to one another. We will still relate to each other on account of our shared biological nature. For the most part, I don't think people will have the same emotional relationship with silicon-based intelligence that we have with other Homo sapiens, because it will not share our emotions and flaws. A big part of why we like interacting with other people is that we can see ourselves in them, and we will not be able to do that with an AI. This is why I support developing AI capabilities, as long as we can keep control.