LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
Yep agree with all that. (I stand by my comment as mostly arguing directionally against Richard's summary but seems fine to also argue directionally against mine)
I lean towards this, despite being a guy currently heavily invested in AI tools.
Cluster of things that all seem true to me:
But:
Focusing on helping the worst-off instead of empowering the best
I feel like this is getting at some directionally correct stuff, but it feels off.
EA was the union of a few disparate groups, roughly encapsulated by:
There are other specific subgroups. But my guess is that early GiveWell was the most load-bearing in there ending up being an "EA" identity, in that there were concrete recommendations of what to do with the money that stood up to some scrutiny. Otherwise it'd have just been more "A", or "weird transhumanists with weird goals."
GiveWell started out with a wider variety of cause areas, including education in America. It just turned out that it seemed way more obviously cost-effective to do specific global health interventions than to try to fix education in America. (I realize education in America isn't particularly "empowering the best", but the flow towards "helping the worst off" seems to me like it wasn't actually the initial focus.)
I agree some memeplex accreted around that, which had some of the properties you describe.
But meanwhile:
EA started off with global health and ended up pivoting hard to AI safety, AI governance, etc.
It seems off to say "started off in global health, and pivoted to AI" when all the AI stuff was there from the beginning at the very first pre-EA-Global events, and it just eventually became clear that it was real and important. The worldview that generated the AI focus was not (exactly) the same one generating global health; they were just two clusters of worldview that were in conversation with each other from the beginning.
This post inspired a pretty long-running train of thought for me that I am still chewing on. I have considered pivoting my life to pursue the sort of vision this post articulates. I haven't actually done it because other things so far have seemed more urgent/tractable, but I still think it's pretty important.
Okay, I think I don't stand by my previous statement. More like, I expect that overall process to be a lot more expensive than just going for a big protest off the bat. Obviously, yeah, there's a more common pattern of escalating groundwork and smaller protests.
But I think this is dramatically more expensive, to the point where it doesn't seem worth my time, whereas just going straight for the big protest does.
I don't really have that much confidence that it's possible to get a big protest off the bat. But I think there is a discrete step-change between "you got the AI safety folk to all show up once" and "you got a substantial fraction of mainstream support." Once you're trying to do the latter, the SF benefit just seems very low to me.
The mechanism by which I'd try to hit 10k numbers involves starting from scratch and recruiting a lot of people, at which point I might as well just start in DC. A crux is that I expect a 100k protest to involve similar amounts of work as a 10k protest, and to require calling in favors from famous people that are very expensive and that I don't want to have to call in twice.
(I also note your Russia example starts with a 50k-100k protest, which is already a different league.)
Some reasons I'm more bullish on "just go for a big protest right off the bat."
Gotcha. Was the game one real for you? (I guess I'm looking for things that will show up in my day job, and trying to get a sense of whether people have different day jobs than me, or are doing random side projects, or what.)
The test-coverage one is interesting.
Yeah I get the principle, but, like, what in practice do you do where this is useful? Like concrete (even if slightly abstracted) examples of things you did with it.
This comment led me to realize there really needed to be a whole separate post just focused on "fluent cruxfinding", since that's a necessary step in Fluent Cruxy Predictions, and it's the step that's more likely to immediately pay off.
Here's that post:
https://www.lesswrong.com/posts/wkDdQrBxoGLqPWh2P/finding-cruxes-help-reality-punch-you-in-the-face
I'm not sure how much this matters, and I'm not 100% sure this effect is real (it's the sort of thing I could have just psy-op'd myself into believing). But, as an artist: the difference between live models and pictures is that the live models force a subtle skill of... like, converting the 3D shape into a 2D one.
This is sort of like the difference between lifting free weights vs lifting weights at a machine. There are a bunch of little subtle micromovements you have to make when free-weight lifting, whereas the machine isolates a given muscle. When you are looking at a real person, your head rocks back and forth, and choosing exactly which 2D plane you are copying is subtly more difficult than when copying from a picture.
If you never intend to draw from life, then I wouldn't argue this matters that much. A lot of it might be a particular culture of art. But I'm like 75% sure there's a difference.