Thanks for coming! Please fill out a short survey if you have a moment: https://forms.gle/WDHS9Z2dvgoM7iCg9
Weather is nearly perfect, so the event is definitely happening today. I'll be wearing a black jacket and a grey shirt.
Follow-up summer event is now live: https://www.lesswrong.com/events/zPw5WLaJ9f4QEfpyR/lw-acx-seattle-summer-meetup.
Points from this post I agree with:
My objection is primarily that having 100x faster processing power wouldn't automatically allow you to do things 100x faster in the physical world:
You should make a separate post on "Can AGI just simulate the physical world?". That will make it easier to find and reference in the future.
"but with humanity being overwhelmed by the number of different kinds of attack."
But AGI will only be able to start carrying out these sneaky attacks once it's fairly convinced it can survive without human help, right? Otherwise humans will notice the various problems cropping up and might just decide to "burn all GPUs," which is currently an unimaginable act. So AGI will have to act sneakily behind the scenes for a very long time. This comes back to the argument that humans have a strong upper hand as long as we've got a monopoly on physical-world manipulation.
People here shouldn't assume that, because Eliezer never posted a detailed analysis on LessWrong, everyone on the doomer train is starting from unreasonable premises regarding how robot building and research could function in practice.
I agree, but unfortunately my Google-fu wasn't strong enough to find detailed prior explanations of AGI vs. robot research. I'm looking forward to your explanation.
Those are excellent comments! Do you mind if I add a few quotes from them to the post?
If you have a bit of free time, we'd appreciate it if you filled out the post-event survey: https://docs.google.com/forms/d/e/1FAIpQLSe8Lpqt-In6aIAtSlA9pWEgRUlwW2CbLzYogJhJ3KC7mkycVg/viewform