we see a situation where a somewhat circularly defined reputation gets bootstrapped, with the main end state being fairly unanimous EA messaging that "people should give money to EA orgs, in a general sense, and EA orgs should be in charge of more and more things"
This seems to be an accurate description of what happened with EA (or at least one of the several dynamics at play). I've seen this plenty in my 10 years of being involved, but one illustrative example is people campaigning for Carrick Flynn, without knowing anything about him except that he was an EA, because "having an EA in Congress is obviously good." You also see this whenever EAs talk about whether people are "aligned" (a shorthand for "values-aligned with EA").
More generally, I think people often defer to the "EA elite" about crucial topics (like, say, how one should relate to Anthropic) that actually require a lot of thinking and modeling to reason well about, because they assume that the EA elite are smart and earnest.
I think it's sad that the "let's actually do the research and figure out which interventions actually work" movement turned into a deferral network. I think it's bad, and I've publicly critiqued EA on these grounds in the past.
That said, I want to defend the dynamics that led to this outcome, or at least elucidate those dynamics and how they can feel from the inside.
In the early days, being affiliated with EA was, in fact, an extremely strong signal of earnest altruistic intent, and a basic but in-practice-rare kind of intellectual seriousness.
Like, there were a bunch of specific ideas and attitudes that seemed basically obvious and important to me, but which almost no one in the world seemed interested in. Things like "some charities are much more impactful than others, so obviously you should prioritize based on where you can do the most good" (something people I knew in 2014 specifically argued against), or "when you're presented with a solid counterargument, you change your mind". But if a person was an EA, that meant that they "got it". Against a backdrop of civilizational insanity, there were these people who also saw these obvious things, and acted on them.
Such that, if I knew someone was an EA, but I knew literally nothing about them, I would be glad to let them crash on my couch, or introduce them to a professional connection that might help them, or boost their projects, sight-unseen.[1]
I think it's pretty natural for that to turn into a deferral network. Of course I want to empower the smart, earnest, thoughtful people who "get it". Even if I don't understand all their impact models in detail, it seems like a really appealing way to do good is to team up with those people and help them with their plans. And if there's a whole movement of people like that, the more I can help the movement, the better!
The sort of people who become EAs want to have a positive impact on the world, and EA-the-movement-and-community will often look to them like a big "effective" channel for having impact. A central way that I can have impact is through boosting EA.
This is especially convenient if I'm a college-age EA without much career experience, who's not very well equipped to try for an ambitious project in bioengineering, or malaria-eradication, or government-reform, or making progress on AI alignment (!), or whatever. But a thing that I can do, while still in college, is EA movement-building. If movement building is the main thing that the young and excited new entrants to the movement can do, there's a structural incentive for the movement to be functionally about propagating itself.
Plus, there's something additionally appealing about boosting the EA movement generally, which is that it allows me to abstract over a bunch of hard-to-answer questions about cause prioritization, and the details of specific plans. Trying a specific object-level plan is like buying a specific stock, but investing in the EA movement as a whole is like buying an index fund: you're diversified.
So there's a very natural inclination to believe in "EA" as a good thing in the world, which your own hope of having impact can route through. And then you end up with an EA that is largely self-recommending.
In early 2015, I read an OkCupid profile where someone declared themselves an EA early in their bio (as I did as well). I sent her a message, we hopped on a video call immediately, and talked for 2 hours. A few months later we were living in the same house, and she's a close friend to this day.
Today, just saying you're an EA wouldn't be nearly enough signal to make it obviously the case that I want to talk with you for hours. This is in part because my opportunity cost has gone up, in part because my standards for conversation have risen as I've learned more and it's gotten harder to tell me something interesting that I don't already know, and in part because the signal of being an EA has weakened as the franchise has expanded.
the early EA stock (who I believe came from Bridgewater)
Only the GiveWell stream. But EA originated from a combination of GiveWell, the Oxford philosophers who ended up founding 80,000 Hours and Giving What We Can, and LessWrong.
I like this compression, but it felt like it sort of lost steam in the last bullet. It doesn't have very much content, and so the claim feels pretty woolly. I think there's probably a stronger claim that's similarly short, which should be there instead.
Here's a different attempt...
...which turns out a bit longer, but maybe it can be simplified down.
This is great, and on an important topic that's right at the center of our collective ontology, one where I've been feeling for a while that our concepts are inadequate.
Top level post! Top level post!
So I think this post is pointing at something very important for my personal rationality practice, but it gives me almost none of what I need to actually do it successfully.
Even if you did expect scaling to probably bring in huge profits, naively it'd still be wiser to pick a growth strategy that didn't require your company to become literally the most profitable company in the history of all companies or go bankrupt.
I mean, that depends on your goals.
I'm uninformed about the specifics of this situation, but I think that taking all-or-nothing gambles like this is evidence that someone is playing for unprecedented personal power, rather than standard capitalist mega-wealth.
It's the one Roam replacement that seems to beat Roam on many of the things it was good at.
What specific things does it beat Roam at?
I have vague plans to switch from Roam to LogSeq, but it's a bit annoying because I'll have to recreate some of the software I've built that is central to my workflows. Should I switch to RemNote instead of LogSeq?
Why do almost all of the GPT self-images have the same high-level features (notably similarly shaped heads, with two round "headphones", one on each side[1])? Does OpenAI train the model to represent itself that way in particular?
Which apparently sometimes get interpreted as more-or-less literal headphones, as in Eliezer's and Roon's.