LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
What sort of things do you solve with this? I feel like when I have a problem that's not fairly easy for an AI to solve straightforwardly, if I set it running in a loop it'd just do a bunch of random crazy shit that was clearly not the right solution.
I can imagine a bunch of scaffolding that helps, but it seems like most of the work is in the problem specification, and I'm not sure if I don't have the sort of problems that benefit from this or if it's a skill issue.
I think 10k ones will do approximately nothing.
The thing I saw the sentence as doing is mostly clarifying "We're not naive; obviously just doing the naive thing here would not work, and that's why we're not asking for it." (I think I agree that a US ban would be some-kind-of-useful, but it feels way less politically viable to me, since to most people it feels more like throwing away the lead for no reason. I realize it may sound weird to think "banning in one country is less viable than banning worldwide", but I think the worldwide ban clearly makes sense in a way that banning locally only maybe makes sense if you tune the parameters just right.)
"ban AI development temporarily near but not after max-controllable AI"
I'm not sure I'm parsing the grammar here, wondering if you flipped the sign or I'm misreading. (It sounds like "AIs that are almost uncontrollable are banned, uncontrollably powerful AIs are allowed.")
Yeah, when I was doing the graphics I considered a version where everyone was waving stop signs. It looked a bit weird as an illustration, but I suspect it would work in real life.
Yeah, it feels more like a paramilitary army, which is not the point.
(By contrast I do think "Stop Sign" is pretty good)
I think these look kinda scary, in particular the black/red one. White/red does feel more reasonable, although the original March page was basically constrained by the aesthetic of the book cover, and by the time there's serious effort invested in this I'd expect the aesthetic to get an overhaul that isn't as tied to the book.
I think the effect there would be pretty minimal; there aren't more than a few thousand people in any given city who are likely to show up. It'd be weird to ask 90k people to travel to San Francisco, since that doesn't send a particular message about international treaties. (You might run a different protest that's the "please, AI companies, unilaterally stop" one, but I don't actually think that protest makes much sense.)
I gave this a +4 because it feels pretty important for the "how to develop a good intellectual" question. I'd give it a +9 if it were better argued.
I think it's generally the case that patterns need to survive to be good, but it's also fairly normal for patterns to die and for this to be kinda fine (i.e. it's fine if feudalism lasts a few hundred years and then is outcompeted by other stuff).
The application to superintelligence does seem, like, true and special, but probably most patterns that evolved naturally don't navigate superintelligence well, and I'm not sure it's the right standard for them.
This comment led me to realize there really needed to be a whole separate post just focused on "fluent cruxfinding", since that's a necessary step in Fluent Cruxy Predictions, and it's the step that's more likely to immediately pay off.
Here's that post:
https://www.lesswrong.com/posts/wkDdQrBxoGLqPWh2P/finding-cruxes-help-reality-punch-you-in-the-face