Comments

We’re in the north-west corner of Stoup.

Suggested discussion questions / ice breakers for today's meetup, assembled from ACX posts in the past 6 months. See you all in one hour :-)

  1. What was the most interesting question for you from the recent ACX survey?
  2. What do you think of the Coffeepocalypse argument in relation to AI risk?
  3. Do you agree with the Robin Hanson idea that (more) medicine doesn’t work?
  4. Do you like the “Ye Olde Bay Area House Party” series of posts?
  5. What do you think about the Lumina Probiotic? Are you planning to order it in the future?
  6. What’s your position on the COVID lab leak debate?
  7. Do you like prediction markets? What was a prediction you’ve made in the past year that you’re proud of?
  8. What book would you review for the ACX book review contest if you were to write one? 
  9. Do you believe that capitalism is more effective than charity in solving world poverty?
  10. Which dictator did you find the most interesting from the “Dictator Book Club” series? 

Looks like we’ll have a small reimbursement budget for this meetup, so there will be free Chipotle (chicken or vegan) available to the first ~20 attendees.

Thanks for coming! Please fill out a short survey if you have a moment: https://forms.gle/WDHS9Z2dvgoM7iCg9

Weather is nearly perfect, so the event is definitely happening today. I'll be wearing a black jacket and a grey shirt.

Points from this post I agree with:

  • AGI will make any given decision at least 100x faster than a human
  • AGI will be able to interact with all 9 billion humans in parallel, giving it a massive advantage
  • Slow-motion videos provide a helpful analogy

My main objection is that 100x faster processing wouldn't automatically let you do things 100x faster in the physical world:

  1. Any mechanical systems you control won't run 100x faster, because real-world mechanical parts have hard limits on how fast they can move. E.g. if you control a drone, the drone won't fly or rotate 100x faster just because your processing power is 100x greater. And you'll probably have to control the drone remotely, because the entire AGI won't fit on the drone itself, which puts a further limit on how fast you can make decisions.
  2. Any operations that rely on human action will still run at 1x speed, even if you streamline them somewhat through parallelization and superior decision making.
  3. Being 100x faster is useless if you don't have full information on what the humans are doing/plotting. And they could hide pretty easily by meeting up offline with no electronics present.
  1. Even "manipulating humans" is something that can be hard to do if you don't have a way to directly interact with the physical world. I.e. good luck manipulating the Ukrainian war zone from the Internet.
  2. But how will the AI get that confidence without trial & error?

You should make a separate post on "Can AGI just simulate the physical world?". That would make it easier to find and reference in the future.
