Another example I would cite was the response to If Anyone Builds It, Everyone Dies by the core EA people, including among others Will MacAskill himself and also the head of CEA. This was a very clear example of PR mindset, where quite frankly a decision was made that this was a bad EA look, the moves it proposes were unstrategic, and thus the book should be thrown overboard.
FWIW, while I got a vibe like this from the head of CEA's review, I didn't get this vibe from Will's review. The vibe I got from Will's review was an interest in the arguments and whether they really supported the strong claims of the book.
And the complaints I saw about Will's review (at least the ones that I was sympathetic to, rather than ones that seemed just very off-base) weren't "this is insufficiently truth-seeking". Rather, they were "this is too nitpicky, you should be putting more emphasis on stuff you agree with, because it's important that readers understand that AI takeover risk is high and the situation isn't currently being handled well".
Yeah, that particular part sounded a lot like "I can't understand why people disagree with IABIED without suffering from PR mindset".
Like, this would not be out of place if someone couldn't actually understand or tolerate disagreement with the Grand Ideas, and I really wish this quote were stricken from the post entirely:
Another example I would cite was the response to If Anyone Builds It, Everyone Dies by the core EA people, including among others Will MacAskill himself and also the head of CEA. This was a very clear example of PR mindset, where quite frankly a decision was made that this was a bad EA look, the moves it proposes were unstrategic, and thus the book should be thrown overboard. If Will is sincere about this reckoning, he should be able to recognize that this is what happened.
There’s basically no reason for everyone not to outright copy this format, forever. Indeed, one wonders if you shouldn’t have such badges and wear them at parties.
Is this format not already standard? I think I get a nametag more or less like this one every time I attend a conference.
Preventing Value Drift
Peter Thiel warned Elon Musk to ditch The Giving Pledge because Bill Gates will give his wealth away ‘to left-wing nonprofits.’ As John Arnold points out, this seems highly confused. The Giving Pledge is a promise to give away your money, not a promise to let Bill Gates give away your money. The core concern, that your money ends up going to causes one does not believe in (and probably highly inefficiently at that), seems real: once you send money into the foundation ecosystem, it by default gets captured by foundation-style people.

Maximizing Good Makes People Look Bad
A new paper goes Full Hanson with the question Does Maximizing Good Make People Look Bad? The answer is yes: if you give deliberatively rather than empathetically and seek to maximize impact, this is viewed as less moral and you are seen as a less desirable social partner, and donors estimate this effect roughly correctly. Which makes sense if you consider that one advantage of having a social partner is that you can direct them with social and emotional appeals, and thereby extract their resources. As with so many other things, you can be someone or do something, and if you focus on one you have to sacrifice some of the other.

This is one place where the core idea of Effective Altruism is pretty great. You create a community of people where it is socially desirable to be deliberative, and scorn is put on those who are empathic instead. If that were all EA did, without trying to drum up more resources or direct how people deliberated? That alone would be a big win.

No We Have No Tuition
UATX eliminates tuition forever as the result of a $100 million gift from Jeff Yass. Well, hopefully. This gift alone doesn’t fund that; they’re counting on future donations from grateful students, so they might have to back out of this the way Rice had to in 1965. One could ask: given that schools like Harvard, Yale and Stanford make such bets and have wildly successful graduates who give lots of money, and still charge tuition, what is the difference?

In general, giving to your alma mater or another university is highly ineffective altruism. One can plausibly argue that fully paying for everyone’s tuition, with an agreement to that effect, is a lot better than giving to the general university fund, especially if you’re hoping for a cascade effect. It would be a highly positive cultural shift if selective colleges stopped charging tuition. Is that the best use of $100 million? I mean, obviously not even close, but it’s not clear that it is up against the better uses.

Will MacAskill and the Dangers of PR Focus
Will MacAskill asks what Effective Altruism should do now that AI is making rapid progress and there is a large distinct AI safety movement. He argues EA should embrace the mission of making the transition to a post-AGI society go well. He does not mention until later the obvious objection, which is that the Effective Altruist brand is toxic, to the point that the label is used as a political accusation. No, this isn’t primarily because EA is ‘inherently controversial’ for the things it advocates. It is primarily because, as I understand things:

- EA tells those who don’t agree with EA, and who don’t allocate substantial resources to EA causes, that they are bad, and that they should feel bad.
- EA (long before FTX) adopted, in a broad range of ways, the ‘PR mentality’ MacAskill rightfully criticizes, along with the other hostile actions it has taken, FTX included.
- FTX, which was severely mishandled.
- Active, intentional scapegoating and fear-mongering campaigns.
- Yes, the things it advocates for, and the extent to which it and its components have pushed for them, but this is only one of many elements.
Thus, I think that the things strictly labeled EA should strive to stay away from the areas in which being politically toxic is a problem, and consider the risks of further negative polarization. It also needs to address the core reasons EA got into the ‘PR mentality.’

He lists the causes he thinks this new EA should have in its portfolio, with unequal weights that are not specified. There are some strange flexes in there, but given the historical origins, okay, sure, not bad. Mostly these are good enough to be ‘some of you should do one thing, and some of you should do the other’ depending on one’s preferences and talents.

I strongly agree with Will’s emphasis that his shift into AI is an affirmation of the core EA principles worth preserving, of finding the important thing and focusing there. I am glad to see Will discuss the problem of ‘PR focus.’ I also appreciate Will’s noticing that the PR focus hasn’t worked even on its own terms, and that EA discourse is withering.

I would add that EA’s brand and PR position is terrible in large part exactly because EA has often acted, over a long period, in this PR-focused, uncooperative and fundamentally hostile way, one that comes across as highly calculated because it was, along with a lack of being straight with people. Eventually people learn the pattern. This laid the groundwork, when combined with FTX and an intentional series of attacks from a16z and related sources, to poison the well. It wouldn’t have worked otherwise to anything like the same extent.

What Will says here is very wise, except I think this was a far broader issue than a narrow post-FTX PR focus. Thus I see ‘PR focus’ as a broader problem than Will does. It is about this kind of communication, but also about broader decision making, strategy and prioritization, and it was woven into the DNA. It is asking the ‘what maximizes the inputs into EA brands’ question more broadly, and it centrally involves confusing costs with benefits. The broader set of things all come from the same underlying mindset. And I think that mindset greatly predates FTX. Indeed, it is hard not to view the entire FTX incident, and why it went so wrong, as largely about the PR mindset.

As a clear example, he thinks ‘growing the inputs’ was a good focus of EA in the last year. He thinks the focus should now shift to improving the culture, but his justifications still fall into the ‘maximize inputs’ mindset. Actively looking to grow the movement has obvious justification, but inputs are costs, not benefits; it is easy to confuse the two, and a focus on growing inputs tends to cause severe PR mindset and hostile actions as you strive to capture resources, including people’s time and attention.

Another example I would cite was the response to If Anyone Builds It, Everyone Dies by the core EA people, including among others Will MacAskill himself and also the head of CEA. This was a very clear example of PR mindset, where quite frankly a decision was made that this was a bad EA look, the moves it proposes were unstrategic, and thus the book should be thrown overboard. If Will is sincere about this reckoning, he should be able to recognize that this is what happened.

What should you do if your brand is widely distrusted and toxic? The good news, and here I agree with Will, is that you can stop doing PR. The bad news is that this does not answer the obvious question: why are you doubling down on this toxic brand, especially given the nature of many of the cause areas Will suggests EA enter?

Tag, You’re It
When you hold your conference, This Is The Way:

There’s basically no reason for everyone not to outright copy this format, forever. Indeed, one wonders if you shouldn’t have such badges and wear them at parties.

Impact Philanthropy
Alex Shintaro Araki offers thoughts on Impact Philanthropy fundraising, and Sarah Constantin confirms this matches her experiences. Impact philanthropy is ideally where you try to make cool new stuff happen, especially a scientific or technological cool new thing, although it can also be simply about ‘impact’ through things like carbon sequestration. This is a potentially highly effective approach, but also a tough road. Individual projects need $20 million to $100 million, and most philanthropists are not interested. Sarah notes that many people temperamentally aren’t excited by cool new stuff. That is alien to me, since cool new stuff seems super exciting, but it’s true.

One key insight is that if you’re asking for $3 million you might as well ask for $30 million, provided you have a good pitch on what to do with it, and assuming you pitch people who have the money. If someone is a billionaire, they’re actively excited to be able to place large amounts of money. Another is that there’s a lot of variance and luck, although he doesn’t call it that. You probably need a deep connection with your funder, but you also need to find your funder at the right time, when things line up for them.

Finally, it sounds weird, but it matches my experience that funders need good things to fund even more than founders need to find people to fund them, the same way this is also true in venture capital. They don’t see good opportunities and have limited time. So things like cold emails can actually work.

Expect another philanthropy-related post later this month.