Is a system that optimizes for destruction an optimizing system?

Similarly appreciate the response!

I would say (3). Societal resilience is mandatory as threat systems proliferate and grow in power. You would need positive systems to counter them.

Regarding your points on writing in a dystopian tone, I don't disagree. But it's easier to highlight an idea via narrative than bullet points. I personally like Mr. Smiles; he's my new mascot for when I inevitably give up trying to solve AI alignment and turn to villainy.

A few comparisons and contrasts on allowing vs. not allowing the creation of bad systems:

  • The major point, as above, is that disallowing the creation of out-of-control systems requires significant power in surveillance and control. Allowing their creation and preventing the worst effects requires significantly less. I can protect my system from viruses, but I can't stop a script kiddie from releasing one from their personal PC.
  • I think non-optimal agents are key to the diversity of any ecosystem. Further, I think it's important that the human genome allows for antisocial, even evil humans. In my mind, minimizing a trait, rather than disallowing it, is of fundamental importance to the long-term survival of any adaptive collective. It just becomes especially important that the ecosystem/culture/society/justice system is robust to the negative externalities of that diversity. 
  • We humans have a justice system based on actions committed, rather than an individual's characteristics. It's illegal to murder, not to be on the ASPD spectrum. I think there's a lot more merit to that than first glance would suggest. I also think it will be similarly difficult to decide whether a system is inherently "out-of-control," just as it is difficult to determine whether a given person with ASPD will commit a crime in the future.
Epimetheus · 2mo · Ω240

I agree with Paul Christiano here. Let's call this rogue-AI preventing superintelligence Mr. Smiles. Let's assume that Mr. Smiles cannot find a "good" solution within a decade, and instead must temporarily spend much of his efforts preventing the creation of "bad" AGI.

How does Mr. Smiles ensure that no rogue actors, nation-states, corporations, or other organizations create AGI? Well, Mr. Smiles needs two things: sensors and actuators. The "mind" of Mr. Smiles isn't a huge problem, but the sensors and actuators are extremely problematic:

  1. Sensors: The world will have to be covered in sensors. A global surveillance state would need to be instantiated. Hidden or governmental networks? Remote areas of the world? Those are massive risks. Yet, for every system Mr. Smiles infiltrates, he has made an enemy of its owners. I don't think the PLA will appreciate a likely-Western AGI infiltrating and spying on its network.
  2. Actuators: This is the real danger. Mr. Smiles has assessed a threat. What does Mr. Smiles do? How does he stop a human or an organization from working on AGI? Does he destroy humanity's basis of knowledge? Prevent humans from accessing computing technology? If a human gets too close, what means will he resort to? How many humans are worth ending to prevent the creation of a malign AGI?

I love the way we unknowingly instantiate religion into conversations on AGI. Mr. Smiles is God. Mr. Smiles is omnipotent, omniscient, and omnibenevolent. I do not need to relay the main arguments against the logical possibility of such an entity. Mr. Smiles cannot, and will not, work. And if Mr. Smiles "works," our world begins to look awfully dystopian.


Instead, I agree with Christiano that we should look at the defense-industrial complex. We just have to continue to build defenses against offenses, in an ever-lasting battle that's raged since the beginning of life on this planet. Conflict is part of having agents in the world. 

The real issue is asymmetric warfare, where one AGI has outsized power, either due to its size or due to the asymmetry of offensive weaponry. I will steal an excerpt from "Sapiens" and instead call it the defense-industrial-scientific complex. Our defense complex is not new to the asymmetric effects of technology, nor to the fact that destruction is easy to wage while healing is difficult to promote. Yet it has adapted countless times to new capabilities, threats, and societal orders. I do not think our current system is robust to the threats of AGI, but I imagine it can adapt into one that is. Further, while humans cannot solve the game-theoretic problems posed by superintelligent agents, superintelligent societies may just be able to.

I think there's only one reasonable path towards a "good" future. We need to solve internal alignment. "Control" is nice, but its main use is for a "first mover." Once multiple actors acquire AGI technology, control is no longer a sufficient approach. If we solve alignment, we need to propagate a multitude of aligned agents across the globe and help shape their new, burgeoning society.

If we are serious about creating "AGI," we need to understand that we are not creating a tool, but a new form of life. Will the world improve? Hopefully. Will the world become more complex? Dramatically so. I hate to advocate for this position, but our defense-industrial-scientific complex likely won't be discarded; it will be improved upon a thousandfold.