I'm curious about an interaction I had a few weeks ago with someone in the rationality community, and I was wondering if someone here could look at the conversation and evaluate what 'went wrong', so to speak.
It began with some comments I made on a blog post, where I disagreed with the author's view that 'metoo' was good. Rather than dispute the entire thesis, I only wanted to offer some counterexamples to the author's claim that metoo had never gone too far. The post is here: http://benjaminrosshoffman.com/metoo-is-good/
After a bit of back and forth, it seemed like I should try to take it to private chat before it turned into a demon thread...
I agree... it seems to me that the most likely course of events is for some researchers to develop a human+ / fast-capability-gain AI dedicated to the problem of solving AI alignment, and it's that AI which develops the solution that can then be implemented to align the inevitable AGI.