Some notes from the transcript:
I believe there are ways to recruit college students responsibly. I don't believe the way EA is doing it really has a chance to be responsible. I would say, the way EA is doing it can't filter and inform the way healthy recruiting needs to. And they're funneling people into something that naivete hurts you in. I think aggressive recruiting is bad for both the students and for EA itself.
Enjoyed this point -- I would guess that the feedback loop from EA college recruiting is super long and is weakly aligned. Those in charge of setting recruiting strategy (eg CEA Groups team, and then university organizers) don't see the downstream impacts of their choices, unlike in a startup where you work directly with your hires, and quickly see whether your choices were good or bad.
Might be worth examining how other recruiting-driven companies (like Google) or movements (...early Christianity?) maintain their values, or degrade over time.
...Seattle EA watched a couple of the animal farming suffering documentaries. And everyone was of course horrified. But not everyone was ready to just jump on, let's give this up entirely forever. So we started doing
don't see the downstream impacts of their choices,
This could be part of it... but I think a hypothesis that does have to be kept in mind is that some people don't care. They aren't trying to follow action-policies that lead to good outcomes, they're doing something else. Primarily, acting on an addiction to Steam. If a recruitment strategy works, that's a justification in and of itself, full stop. EA is good because it has power, more people in EA means more power to EA, therefore more people in EA is good. Given a choice between recruiting 2 agents and turning them both into zombies, vs recruiting 1 agent and keeping them an agent, you of course choose the first one--2 is more than 1.
Mm, I'm extremely skeptical that the inner experience of an EA college organizer or a member of the CEA Groups team is usefully modeled as "I want recruits at all costs". I predict that if you talked to one and asked them about it, you'd find the same.
I do think that it's easy to accidentally goodhart or be unreflective about the outcomes of pursuing a particular policy -- but I'd encourage y'all to extend somewhat more charity to these folks, who I generally find to be very kind and well-intentioned.
I haven't grokked the notion of "an addiction to steam" yet, so I'm not sure whether I agree with that account, but I have a feeling that when you write "I'd encourage y'all to extend somewhat more charity to these folks, who I generally find to be very kind and well-intentioned" you are papering over real values differences.
Tons of EAs will tell you that honesty and integrity and truth-seeking are of course 'important', but if you observe their behavior they'll trade them off pretty harshly with PR concerns or QALYs bought or plan-changes. I think there's a difference in the culture and values between (on one hand) people around rationalist circles who worry a lot about how to give honest answers to things like 'How are you doing today?', who hold themselves to the standards of intent to inform rather than simply whether they out and out lied, who will show up and have long arguments with people who have moral critiques of them, and (on the other hand) most of the people in the EA culture and positions of power who don't do this, and so the latter can much more easily deceive and take advantage of people by funneling them into career paths which basically boil down to 'd...
Basically, insofar as EA is screwed up, it's mostly caused by bad systems, not bad people, as far as I can tell.
Insofar as you're thinking I said bad people, please don't let yourself make that mistake: I said bad values.
There are occasional bad people like SBF but that's not what I'm talking about here. I'm talking about a lot of perfectly kind people who don't hold the values of integrity and truth-seeking as part of who they are, and who couldn't give a good account for why many rationalists value those things so much (and might well call rationalists weird and autistic if you asked them to try).
I don't think differences in values explain much of the differences in results - sure, truthseeking vs impact can hypothetically lead one in different directions, but in practice I think most EAs and rationalists are extremely value-aligned.
This is a crux. I acknowledge I probably share more values with a random EA than a random university student, but I don't think that's actually saying that much, and I believe there's a lot of massively impactful difference in culture and values.
...I'm pushing back against Tsvi's claims that "some people don't care" or "EA recruiters would consciously
Was there ever a time where CEA was focusing on truth-alignment?
It doesn't seem to me that "they used to be truth-aligned and then they did recruiting in a way that caused a value shift" is a good explanation of what happened. They always optimized for PR instead of optimizing for truth-alignment.
It's been quite a while since they edited Leverage Research out of the photos they published on their website, but the kind of organization where people consider it reasonable to edit photos that way is far from truth-aligned.
Edit:
Julia Wise messaged me and made me aware that I confused CEA with the other CEA. The photo incident happened on the 80,000 Hours website, and the page talks about promoting CEA events like EA Global and the local EA groups that CEA supports (at the time, 80,000 Hours was part of the CEA that's now called EV). I don't think this makes CEA completely innocent here, because they should see to it that people who promote their events under the banner of their organization name behave ethically. But it does give a valid explanation for why this wouldn't be central to CEA's mistakes page, since they want to focus that page on mistakes made by direct employees of the entity that's now called CEA.
I think not enforcing an "in or out" boundary is a big contributor to this degradation -- like, majorly successful religions required all kinds of sacrifice.
I feel ambivalent about this. On one hand, yes, you need to have standards, and I think EA's move towards big-tentism degraded it significantly. On the other hand, I think having sharp inclusion functions is bad for people in a movement[1], cuts the movement off from useful work done outside itself, selects for people searching for validation and belonging, and selects against thoughtful people with other options.
I think I'm reasonably Catholic, even though I don't know anything about the living Catholic leaders.
I think being a Catholic with no connection to living leaders makes more sense than being an EA who doesn't have a leader they trust and respect, because Catholicism has a longer tradition, and you can work within that. On the other hand... I wouldn't say this to most people, but my model is you'd prefer I be this blunt... my understanding is Catholicism is about submission to the hierarchy, and if you're not doing that or don't actively believe they are worthy of that, you're LARPing. I don't think this is tru...
I think AIS might have been what poisoned EA? The global development people seem much more grounded (to this day), and AFAIK the Ponzi-scheme recruiting is all aimed at AIS and meta.
I agree, am fairly worried about AI safety taking over too much of EA. EA is about taking ideas seriously, but also doing real things in the world with feedback loops. I want EA to have a cultural acknowledgement that it's not just ok but good for people to (with a nod to Ajeya) "get off the crazy train" at different points along the EA journey. We currently have too many people taking it all the way into AI town. I again don't know what to do to fix it.
I think it's good to want to have moderating impulses on people doing extreme things to fit in. But insofar as you're saying that believing 'AI is an existential threat to our civilization' is 'crazy town', I don't really know what to say. I don't believe it's crazy town, and I don't think that thinking it's crazy town is a reasonable position. Civilization is investing billions of dollars into growing AI systems that we don't understand, and they're getting more capable by the month. They talk and beat us at Go and speed up our code significantly. This is just the start; companies are raising massive amounts of money to scale these systems.
I worry you're caught up worrying what people might've thought about you thinking that ten years ago. Not only is this idea now well within the overton window, my sense is that people saying it's 'crazy town' either haven't engaged with the arguments (e.g.) or are somehow throwing their own ability to do basic reasoning out of the window.
Added: I recognize it's rude to suggest any psychologizing here but I read the thing you wrote as saying that the thing I expect to kill me and everyone I love doesn't exist and I'm crazy for thinking it, and so I'm naturally a bit scared by you asserting it as though it's the default and correct position.
We currently have too many people taking it all the way into AI town.
I reject the implication that AI town is the last stop on the crazy train.
This is a good point. In my ideal movement it makes perfect sense to disagree with every leader and yet still be a central member of the group. LessWrong has basically pulled that off. EA somehow managed to be bad at having leaders (both in the sense that the closest things to leaders don't want to be closer, and that I don't respect them), while being the sort of thing that requires leaders.
(Commenting as myself, not representing any org)
Thanks Elizabeth and Timothy for doing this! Lots of valuable ideas in this transcript.
I felt excited, sad, and also a bit confused, since it feels both slightly resonant but also somewhat disconnected from my experience of EA. Resonant because I agree with the college-recruiting and epistemic aspects of your critiques. Disconnected, because while collectively the community doesn't seem to be going in the direction that I would hope, I do see many individuals in EA leadership positions who I deeply respect and trust to have good individual views and good process, and I'm sad you don't see them (maybe they are people who aren't at their best online, and mostly aren't in the Bay).
I am pretty worried about the Forum and social media more broadly. We need better forms of engagement online - like this article + your other critiques. In the last few years, it's become clearer and clearer to me that EA's online strategy is not really serving the community well. If I knew what the right strategy was, I would try to nudge it. Regardless I still see lots of good in EA's work and overall trajectory.
...[my critiques] dropped like a stone through water...
Maybe you just don't see the effects yet? It takes a long time for things to take effect, even internally in places you wouldn't have access to, and even longer for them to be externally visible. Personally, I read approximately everything you (Elizabeth) write on the Forum and LW, and occasionally cite it to others in EA leadership world. That's why I'm pretty sure your work has had nontrivial impact. I am not too surprised that its impact hasn't become apparent to you though.
I've repeatedly had interactions with ~leadership EA that asks me to assume there's a shadow EA cabal (positive valence) that is both skilled and aligned with my values. Or puts the burden on me to prove it doesn't exist, which of course I can't do. And what you're saying here is close enough to trigger the rant.
I would love for the aligned shadow cabal to be real. I would especially love if the reason I didn't know how wonderful it was was that it was so hypercompetent I wasn't worth including, despite the value match. But I'm not going to assume it exists just because I can't definitively prove otherwise.
If shadow EA wants my approval, it can show me the evidence. If it decides my approval isn't worth the work, it can accept my disapproval while continuing its more important work. I am being 100% sincere here; I treasure the right to take action without having to reach consensus, but this doesn't spare you from the consequences of hidden action or reasoning.
I liked Zach's recent talk/Forum post about EA's commitment to principles first. I hope this is at least a bit hope-inspiring, since I get the sense that a big part of your critique is that EA has lost its principles.
The problem is that Zach does not mention being truth-aligned as one of the core principles that he wants to uphold.
He writes "CEA focuses on scope sensitivity, scout mindset, impartiality, and the recognition of tradeoffs".
If we take an act like deleting inconvenient information, such as the phrase "Leverage Research", from a photo on the CEA website, it does violate the principle of being truth-aligned but not any of the ones that Zach mentioned.
If I were to ask Zach whether he would release the people that CEA binds with nondisclosure agreements about that one episode with Leverage (about which we unfortunately don't know more than that there are nondisclosure agreements), I don't think he would release them. A sign of being truth-aligned would be to release the information, but none of the principles Zach names point in the direction of releasing people from the nondisclosure agreements.
Saying that your principle is "impartiality" instead of saying that it is "un...
I want to register high appreciation of Elizabeth for her efforts and intentions described here. <3
The remainder of this post is speculations about solutions. "If one were to try to fix the problem", or perhaps "If one were to try to preempt this problem in a fresh community". I'm agnostic about whether one should try.
Notes on the general problem:
Issues in transcript labeling (I'm curious how much of it was done by machine):
There’s a lot here and if my existing writing didn’t answer your questions, I’m not optimistic another comment will help[1]. Instead, how about we find something to bet on? It’s difficult to identify something both cruxy and measurable, but here are two ideas:
I see a pattern of:
1. CEA takes some action with the best of intentions
2. It takes a few years for the toll to come out, but eventually there’s a negative consensus on it.
3. A representative of CEA agrees the negative consensus is deserved, but since it occurred under old leadership, doesn’t think anyone should draw conclusions about new leadership from it.
4. CEA announces new program with the best of intentions.
So I would bet that within 3 years, a CEA representative will repudiate a major project occurring under Zach’s watch.
I would also bet on more posts similar to Bad Omens in Current Community Building or University Groups Need Fixing coming out in a few years, talking about 2024 recruiting.
Although you might like "Change my mind: Veganism entails trade-offs, and health is one of the axes" (the predecessor to "EA Vegan Advocacy is not Truthseeking") and "Truthseeking when your disagreements lie in moral philosophy".
fwiw, I think it'd be helpful if this post had the transcript posted as part of the main post body.
[00:31:25] Timothy:... This is going to be like, they didn't talk about any content, like there's no specific evidence,
[00:31:48] Elizabeth: I wrote down my evidence ahead of time.
[00:31:49] Timothy: Yeah, you already wrote down your evidence
I feel pretty uncertain to what extent I agree with your views on EA. But this podcast didn't really help me decide because there wasn't much discussion of specific evidence. Where is all of it written down? I'm aware of your post on vegan advocacy but unclear if there are lots more examples. I also hea...
I still consider myself to be EA, but I do feel like a lot of people calling themselves that and interacting with the EA forum aren't what I would consider EA. Amusingly, my attempts to engage with people on the EA forum recently resulted in someone telling me that my views weren't EA. So they also see a divide. What to do about two different groups wanting to claim the same movement? I don't yet feel ready to abandon EA. I feel like I'm a grumpy old man saying "I was here first, and you young'uns don't understand what the true EA is!"
A link to a comment I...
What I think is more likely than EA pivoting is that a handful of people launch a lifeboat and recreate a high-integrity version of EA.
Thoughts on how this might be done:
Interview a bunch of people who became disillusioned. Try to identify common complaints.
For each common complaint, research organizational psychology, history of high-performing organizations, etc. and brainstorm institutional solutions to address that complaint. By "institutional solutions", I mean approaches which claim to e.g. fix an underlying bad incentive structure, so it won't
That was an interesting conversation.
I do have some worries about the EA community.
At the same time, I'm excited to see that Zach Robinson has taken the reins at CEA, and I'm looking forward to seeing how things develop under his leadership. The early signs have been promising.
The post basically says that taking actions like running EA Global is the "principles-first" approach, as it is not "cause-first". None of the actions he advocates as principles-first are about rewarding people for upholding principles or holding people accountable for violating principles.
How can a strategy for "principles-first" that does not deal with the question of how to set incentives for people to uphold principles be a good strategy?
If you read the discussion on this page with regard to university groups not upholding principles, there are issues. Zach's proposed strategy sees funding them in the way they currently operate as a good example of what he sees as principles-first, because:
Our Groups program supports EA groups that engage with members who prioritize a variety of causes.
Our current training for facilitators for the intro program emphasizes framing EA as a question and not acting as if there is a clear answer.
This suggests that Zach sees the current training for facilitators already as working well and not as something that should be changed. Suggesting that just because EA groups prioritize a variety of causes they are principles-first seems to me lik...
Elizabeth: So I got them nutritional testing. It showed roughly what I thought. And this was like a whole thing. I applied for a grant. I had to test a lot of people. It's a logistical nightmare. I found exactly what I thought I would: that there were serious nutritional issues, not in everyone, but enough that people should have been concerned.
How many people in total were tested? From the Interim report, it looks like only six people got tested, so I assume you're referencing something else.
I work at CEA, and I recently became the Interim EA Forum Project Lead. I’m writing this in a personal capacity. This does not necessarily represent the views of anyone else at CEA.
I’m responding partly because my new title implies some non-zero amount of “EA leadership”. I don’t think I’m the person anyone would think of when they think “EA leadership”, but I do in fact have a large amount of say wrt what happens on the EA Forum, so if you are seriously interested in making change I’m happy to engage with you. You’re welcome to send me a doc and ask me to comment, and/or if you want to have a video call with me, you can DM me and I’ll send you a link.
Hi Elizabeth. I wanted to start by saying that I’m sorry you feel betrayed by EA. I’m guessing I have not felt any betrayal that painful in my own life, and I completely understand if you never want to interact with EA again. I don’t think EA is right for everyone, and I have no desire to pressure anyone into doing something they would regret.
I have some thoughts and reactions to the things you (and Timothy) said. On a meta level, I want to say that you are very welcome not to engage with me at all. I will not judge you for this, nor should any readers judge you. I am not trying to burden you to prove yourself to me or to the public.
I have three main goals in writing this:
I listened to the video and read the transcript, so I’ll structure much of this as responding to quotes from the transcript.
RE: “recruiting heavily and dogmatically among college students”:
I’m certainly no expert, but my understanding is that, while this is a relatively accurate description of how things worked when there was FTX money available, things have been significantly different since then. For example, Jessica McCurdy is Head of Groups at CEA (and I believe she took this role after FTX) and wrote this piece about potential pitfalls in uni group organizing, which includes points about creating truth-seeking discussions and finding the right balance of openness to ideas. I would say that this is some evidence that currently, recruiting is more careful than you describe, because, as Head of Groups at CEA, her views are likely a significant influence on uni group culture.
I wasn’t involved with EA in college, but my own relevant experience is in Virtual Programs. I’ve participated in both the Intro and Advanced courses, plus facilitated the Intro course once myself. In my opinion, both myself and the other facilitators were very thoughtful about not being dogmatic, and not pressuring participants into thinking or acting in specific ways. I also talk with a fair number of younger people at conferences who are looking for advice, and something I have repeated many times is that young people should be really careful with how involved with EA they get, because it’s easy to accidentally get too involved (ex. all your friends are EAs). I’ve encouraged multiple young people not to take jobs at EA orgs. As I alluded to above, I really do not want to pressure anyone into doing something that they would regret.
RE: “the way EA is doing it can't filter and inform the way healthy recruiting needs to”
I’d be really curious to hear more about what you mean by this, especially if it is unrelated to Jessica’s piece above.
RE: “if I believe that EA's true values, whatever that means, are not like in high integrity or not aligned with the values I want it to have, then I'm not going to want to lend my name to the movement”
I agree with this. I certainly internally struggled with this following FTX. However, in my experience of meeting people in a variety of EA contexts, from different places around the world, I would say that they are far more aligned with my values than like, people on average are. This is particularly clear when I compare the norms of my previous work places with the norms of CEA. I’ll quote myself from this recent comment:
“When I compare my time working in for-profit companies to my time working at CEA, it’s pretty stark how much more the people at CEA care about communicating honestly. For example, in a previous for-profit company, I was asked to obfuscate payment-related changes to prevent customers from unsubscribing, and no one around me had any objection to this.”
Perhaps more importantly, in my opinion, speaking for no one else, I think Zach in particular shares these values. In a different recent comment, I was responding to a critical article about the Forum and tried to clarify how much staff time put towards the Forum costs. I thought my original comment was open/clear about these costs, but Zach felt that it was misleading, because it did not talk about the indirect overheads that CEA pays per employee, and this could lead readers to think that our team is more cost-effective than it actually is. You can read more in my EDIT section, which I added based on his suggestion. I personally think this is an example of Zach having high integrity and being truth-seeking, and after this exchange I personally updated towards being more optimistic about his leadership of CEA. Of course you can't judge any person on a single action, so just like in any other context, you should only think of this as one data point.
RE: “I had had vague concerns about EA for years, but had never written them up because I couldn't get a good enough handle on it. It wouldn't have been crisp, and I had seen too many people go insane with their why I left EA backstories. I knew there were problems but couldn't articulate them and was in, I think, a pretty similar state to where you are now. Then I found a crisp encapsulation where I could gather data and prove my point and then explain it clearly so everyone could see it.”
I would be very interested to read a crisp encapsulation. Apologies if I missed it, but I didn’t see any specific concerns about EA overall that rise to the level of like, preventing EA from reaching its potential for improving the world, either in your transcript or in your two linked articles in the video description. (Perhaps this is a misunderstanding on my part — perhaps you are highlighting problems that you don’t see as very severe for EA overall, but you left EA because of the lack of response to your writing rather than the severity of the problems?)
The two linked articles are:
RE: “I would be delighted if that happened. But I think it gets harder to do that every year. As EA dilutes itself doubling with college students every two years.”
It depends on how you define who is “EA”, but based on recent data I have seen, the major growth was in 2022. Therefore, I think it’s broadly accurate to say that EA doubled between 2020 and 2022, but I don’t think it’s accurate to say that about any time after 2022. In particular, after FTX, growth has slowed pretty dramatically across basically anything you’d consider to be EA.
I think the claim that “the doubling is due to college students” needs evidence to support it. My understanding is that many EA groups got more resources and grew around 2022, not just university groups. And the marketing that drove a lot of the growth in 2022 was not aimed at college students, so I don’t see why, for example, the increase in EA Forum users would all be from college students.
RE: “but EA makes a lot of noise about arguing within itself and yet there's a reason so much criticism on EA forum is anonymous And the criticism on LessWrong is all done under people's stable pseudonym or real name.”
There are probably many reasons for this, and I think one is that there are fewer power structures within the rationalist community (I don't think there is the equivalent of OP; maybe Eliezer?). The EA community has more major funders. I think people in the EA community also tend to be more risk-averse than LWers, so they are more likely to default to writing anonymously. And I believe that culturally, LW socially rewards blunt criticism more than the EA Forum does, so there is extra incentive to do that since it's more likely to be good for your social status.

On the other hand, my impression is that the EA community is more welcoming of criticizing people/orgs in power (in the community, not in the world) than LW is, so it's possible that criticism on the EA Forum has more teeth. For example, there is a fair amount of criticism of OP and CEA that is well-received on the Forum, and I don't know of a similar situation for Eliezer and Lightcone on LW (I'm less familiar with LW so definitely feel free to correct me if I'm mistaken here). So in some sense, I think more anonymity is a result of the fact that holding power in EA is more consequential, and it is more socially acceptable to criticize those with actual power in the community.

To be clear, I'm not trying to encourage people to be scared of posting criticism with their real name, and I don't know of any specific cases where someone was harmed by doing that, but I just think that it's reasonable for a person who doesn't know the consequences of posting criticism to default to doing so anonymously.
In my opinion, it’s not clear that, in an ideal world, all criticism would be written using someone’s real name. I feel pretty uncertain about this, but it seems to me that we should be supportive of people sharing criticism even if they have personal reasons for staying anonymous. Personally, I really appreciate receiving feedback in general, and I would prefer someone give it to me anonymously than not at all.
RE: “E: Which EA leaders do you most resonate with? T: …it's a list of friends…most of my friends don't actually want to be talked about in public. I think it speaks against the health of the EA movement right now”
I agree with this, which is why I feel optimistic about CEA putting significant resources towards EA communications work. My guess is that this will be some combination of, being more open and publicly communicating about CEA itself, and putting more effort into proactively writing about what EA is and what people in the community are doing, for a broader audience.
RE: “I would suggest that if you don't care about the movement leaders who have any steering power. You're, you're not in that movement.“
I will say that I identified as an (aspiring) effective altruist for many years before I could name or identify any EA leaders (I only started learning names when I was hired at CEA). I simply found the core principles very compelling, and occasionally read some relevant articles, and donated money via GiveWell and some other places. You could argue that I wasn’t “in the movement”, but I do keep my identity small and the principles resonated with me enough to add that identity.
RE: “ea would be better served by the by having leadership that actually was willing to own their power more”
I would say that under Zach, CEA as an institution is taking more of a leadership role than it previously had been (which was basically none), and people within CEA are more empowered to “own our power” (for example, the Forum team will likely do more steering than before, which again was minimal). EDIT: Based on responses, I'm a bit worried that this is misleading. So I will disconnect this point from the actual quote, and add some clarifications.
RE: “if you're going to have a big enough movement, you actually want leaders who are. Not just leading in their shadow but are like hey, I feel called to lead which means like i'm gonna like Be the scapegoat and i'm gonna be the one like making big level observations and if you come to me with a problem I'm gonna try to align the integrity of our entire community around The truth.”
I do expect this means that CEA will be the scapegoat more often. I also expect CEA will put more resources towards high level observations about the EA community (for example, I referenced data about EA growth earlier, which came from a CEA project). I’m less sure about that last point, because EA is ultimately a framework (like the scientific method is), and we can’t be sure how many people are out there inspired by EA principles but do not, like, read the EA Forum. I guess you could just define “the entire community” as “people who read the EA Forum” to circumvent that issue. In which case, I expect CEA to do more of that going forward.
Concluding thoughts
I understand that this is an emotional topic for you; however, I was surprised at the amount of anti-truth-seeking I saw in your responses. (My bar for you is perhaps unreasonably high because you write about truth-seeking a lot, and I assume you hold yourself to the standards that you preach.) For example, I think the state of EA university group recruiting is a key factor in your beliefs, but it also seems like you do not have up-to-date information (nor did you attempt to find that information before stating your internal view as if it was fact). You often use exaggerated language (for example, “EA had failed in noticing these obvious lies that everyone knew”, which is certainly not literally true), which I think is actually quite harmful for truth-seeking. Outside of the “EA Vegan Advocacy” post, I see surprisingly few instances of you publicly thinking through ways in which you might be wrong, or even gesturing towards the possibility that you might be wrong, or at least hinting at what your confidence levels are. I genuinely want to engage with your concerns about EA, but I feel like this post (even together with your two posts linked above) is not epistemically legible[2] enough for me to do that. I can’t find a clear core claim to grasp onto.
“Integrity” is a concept that comes up a lot in the interview. I haven’t really addressed it in my comment so I figured I should do so here. Personally I have some complicated/unresolved feelings about what integrity actually means[3], so I don’t know to what extent I have it. I’m happy to dive into that if anyone is interested. If you want to test me and tell me objectively how much integrity I have, I’m open to that — that sounds like it would be helpful for me to know as well. :)
To close, I’ll just caveat that I spent a long time writing this comment, but because I wrote about so many things, I wouldn’t be surprised if I said something that’s wrong, or if I misunderstood something that was said, or if I change my mind about something upon further reflection. I’m generally happy to receive feedback, clarify anything I said that was unclear, and discuss these issues further. Specifically, I have significant influence over the EA Forum, so I would be particularly interested to discuss issues and improvements focused on that project.
Meaning, based on rationalist writings, I had higher expectations, so I was disappointed to find they did not meet those expectations.
I had forgotten that you were the person who coined this term — thank you for that, I find it very helpful!
For example, I think people sometimes mix up the concept of “having integrity” with the concept of “acting in the same way that I would” or even “acting in a way that I would find reasonable”, but my understanding is that they are distinct. I’m quite unsure about this though so I could certainly be wrong!
[I have only read Elizabeth’s comment that I’m responding to here (so far); apologies if it would have been less confusing for me to read the entire thread before responding.]
I have always capitalized both EA and Rationality, and have never thought about it before. The first justification for capitalizing R that comes to mind is all the intentionality/intelligence that I perceive was invested into the proto-“AI Safety” community under EY’s (and others’) leadership. Isn’t it fair to describe the “Rationalist/Rationality” community as the branch of AI Safety/X-risk that is downstream of MIRI, LW, the Sequences, 🪄HPMOR, etc?
~5 months ago I formally quit EA (formally here means "I made an announcement on Facebook"). My friend Timothy was very curious as to why; I felt my reasons applied to him as well. This disagreement eventually led to a podcast episode, where he and I try to convince each other to change sides on Effective Altruism: he tries to convince me to rejoin, and I try to convince him to quit.
Some highlights:
Spoilers: Timothy agrees leaving EA was right for me, but he wants to invest more in fixing it.
Thanks to my Patreon patrons for supporting my part of this work.