Epistemic status: ¯\_(ツ)_/¯
An unaligned artificial general intelligence ("AGI"), in or out of a box, is necessarily an existential risk to humanity. In order to safeguard humanity from an unaligned AGI, it's critical that we develop and deploy an aligned AGI with the intent of performing a "pivotal act": a decisive action that makes it impossible for subsequent groups to create AGI. Outside of our rationalist community, the simple facts of this calculus go unnoticed.
That is why, when I found myself in the control room, slightly tipsy on account of a few Toki highballs, I took great care in summarizing the facts to Multivac, the self-adjusting and self-correcting computer.
...
I see that reading comprehension was an issue for you, since it seems you stopped reading my post halfway through. Funny how a similar thing occurred on my last post too. It's almost like you think the rules don't apply to you: everyone else is required to read every single word of your posts with meticulous care, whereas you're free to pick and choose at your whim.