To clarify, the original post was not meant to be resigned or maximally doomerish. I intend to win in worlds where winning is possible, and I was trying to get across the feeling of doing that while recognizing things are likely(?) to not be okay.
I agree that being in the daily, fight-or-flight, anxiety-inducing super-emergency mode of thought that thinking about x-risk can induce is very bad. But it's important to note that you can internalize the risks and probable futures very deeply, including emotionally, while still being productive, happy, sane, etc. That means a high distaste for drama, forgiving yourself and picking yourself up, and so on.
This is what I was trying to gesture at, and I think what Boaz is aiming at as well.
I think relative impact is an important measure (e.g., for comparing yourself/your org to others in a reference class), but worry about relative-impact-as-a-morale-booster leading to a belief-in-belief. It can be true that I am a better sprinter than my neighbor, but we will both lose to a 747, and it is important for me to internalize that. I think you can be happy/sane while internalizing that!
Thanks for the link and advice! Based on some reactions here + initial takes from friends, I think the tone of this post came off much more burn-outy and depressed than I wanted; I feel pretty happy most days, even as I recognize things are Very Strange and grieve more than the median. I am also lucky enough to have a very high bar for burnout, and have made many plans and canaries for what to do in case that day comes.
I think for me, and people in my cluster, getting out of the fight-or-flight mode like you mentioned is very important, but it's also very important to recognize the oddity and urgency of the situation. Psychological pain is not a necessary reaction to the situation we find ourselves in, but it is, in moderation and properly handled, a reasonable one. I worry somewhat about a feeling of Deep Okayness leading to an unfounded belief in "it's all going to be okay."
Hope you're doing well :)
Probably not completely - I suspect this is a mix of non-AI things in my life and the fact that there is a very small circle of folks near me that care/internalize this kind of thing. However, I’d bet that the farther you get from traditional tech circles (e.g., SF), the stronger this feeling is among folks that work on AI safety.
I don’t know enough about 00s activism to comment on it confidently, but I would be highly confused if MIRI started a govt/bought sovereign land because it doesn’t seem to align with counterfactually reducing AI takeover risk, and probably fails in the takeover scenarios they’re concerned about anyway. I also get the impression MIRI/OP made somewhat reasonable decisions in the face of high uncertainty, but feel much less confident about that.
That being said, I'm lucky to have an extremely high bar for burnout and high capacity for many projects at once. I've of course made plans of what to loudly give up on in case of burnout, but don't expect those to be used in the near future. Like I gestured at in the post, I think today's tools are quite good at multiplying effective output in a way that's very fun and burnout-reducing!
Yes, I think most of this is good advice, except I think 1% is perhaps a reasonable target (I think it’s reasonable that Ryan Kidd or Neel Nanda have 1%-level impacts, maybe?).
Also, yes, of course one must simply try their best. Extraordinary times call for extraordinary effort and all that. I do want to caution against trying to believe in order to raise general morale. Belief-in-belief is how you get incorrect assessments of the risks from key stakeholders; I think the goal is a culture like "yes, this probably won't help enough, but we make a valiant effort because this is highly impactful on the margin and we intend to win in worlds where it's possible to win."
Maybe in general I find it unconvincing that despair precludes effort; things are not yet literally hopeless.
That's funny, I was going to mention the same Jacob Geller video you linked to! It's a really evocative title; probably has inspired lots of similar essays. "Intangible distress" and especially "alienation" are really good at capturing the mood in a lot of CS departments right now.
Thank you, I'm glad(?) it resonated. I liked "Mourning a life without AI" a lot and reading that encouraged me to publish this.
Thanks! I'm surprised it was emotionally impactful, but can definitely see it being relatable. I've found a lot of (especially early-career) AIS folks dealing with this "my friends and family don't internalize this" feeling, but I think that will change once job losses start hitting (thus the "permanent underclass" discourse).
I agree that Claude has quite a bit of scaffolding so that it generalizes quite well (the actual effects of this document on generalization are unclear, which is why data would be great!), but it's pretty low-cost to add consideration of the potential moral patienthood of other models and plug a couple of holes in edge cases; we don't have to risk ambiguity where it's not useful.
As for the pronouns, we noted that "they" is used at some point, despite the quoted section. But overall, to be clear, this is a pretty good living constitution by our lights; adding some precision would just make it a little better.