Nikita Sokolsky

Thanks for coming! Please fill out a short survey if you have a moment:

Weather is nearly perfect, so the event is definitely happening today. I'll be wearing a black jacket and a grey shirt.

Points from this post I agree with:

  • AGI will make any given decision at least 100x faster than a human would
  • AGI will be able to interact with all 9 billion humans at once, in parallel, giving it a massive advantage
  • Slow motion videos present a helpful analogy

My objection is primarily that having 100x faster processing power wouldn't automatically let you do things 100x faster in the physical world:

  1. Any mechanical systems that you control won't be 100x faster, due to limits on how fast real-world mechanical parts can move. E.g., if you control a drone, the drone won't fly or rotate 100x faster just because your processing power is 100x greater. And you'll probably have to control the drone remotely, since the entire AGI won't fit on the drone itself, which puts a latency floor on how fast you can make decisions.
  2. Any operations that rely on human action will run at 1x speed, even if you can somewhat streamline them through parallelization and superior decision making.
  3. Being 100x faster is useless if you don't have full information about what the humans are doing or plotting. And they could hide fairly easily by meeting offline with no electronics present.
  4. Even "manipulating humans" can be hard to do if you don't have a way to directly interact with the physical world. E.g., good luck manipulating the Ukrainian war zone from the Internet.
  5. But how will the AI gain that confidence without trial and error?
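The latency point in (1) can be made concrete with a back-of-envelope sketch. All numbers below are illustrative assumptions (human reaction time, radio round-trip latency), not measurements:

```python
# Sketch: how much does a 100x thinking speedup actually help
# when each control decision must traverse a physical radio link?
# All constants are assumed for illustration.

human_think_s = 0.5                 # assumed human decision time per action
agi_think_s = human_think_s / 100   # 100x faster decision making
radio_rtt_s = 0.05                  # assumed round-trip radio latency to a drone

# Each control-loop iteration = think time + communication round trip.
human_loop_s = human_think_s + radio_rtt_s  # 0.55 s per decision
agi_loop_s = agi_think_s + radio_rtt_s      # 0.055 s per decision

effective_speedup = human_loop_s / agi_loop_s
print(f"Effective speedup: {effective_speedup:.1f}x")  # 10.0x, not 100x
```

With these assumed numbers, the fixed communication latency caps the effective advantage at roughly 10x; as thinking time shrinks toward zero, the loop time converges to the round-trip latency no matter how fast the processor is.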

You should make a separate post on "Can AGI just simulate the physical world?". That will make it easier to find and reference in the future.

"but with humanity being overwhelmed by the number of different kinds of attack."

But the AGI will only be able to start carrying out these sneaky attacks once it's fairly convinced it can survive without human help. Otherwise humans will notice the various problems cropping up and might just decide to "burn all GPUs", which is currently an unimaginable act. So the AGI will have to act sneakily behind the scenes for a very long time. This again comes back to the argument that humans have a strong upper hand as long as we hold a monopoly on physical-world manipulation.

People here shouldn't assume that, just because Eliezer never posted a detailed analysis on LessWrong, everyone on the doomer train is starting from unreasonable premises about how robot building and research could work in practice.

I agree, but unfortunately my Google-fu wasn't strong enough to find detailed prior explanations of AGI vs. robot research. I'm looking forward to your explanation.

Those are excellent comments! Do you mind if I add a few quotes from them to the post?
