
I'm still working my way through this list and referring other people to it. A++, thank you for creating this post.

That's a super reasonable request that I wish I were able to fulfill. Engaging with Val on this is extremely costly for me, and it's not reasonable to ask him to step out of a conversation on his own post, so I can't do it here. I thought about doing a short form post, but I feature-creeped it to the point it was infeasible.

I think you're reacting to the tone, not the content.


Yes, this is correct. 

It seems that "you need to do the trauma processing first and only then do useful work" is a harmful self-propagating meme, in a very similar way to "you need to track and control every variable in order for AI to go well."


This. Trauma processing is just as prone to ouroboros-ing as x-risk work, if not more so.

AFAICT from skimming, the object level of this post has a lot of overlap with my own algorithm. I limit engagement with x-risk to an amount that's healthy and sustainable for me. I keep non-x-risk clients in part to ground me in the real world. I'm into trauma processing and somatics. I think the fact that the people most scared of AGI risk are also the ones most scared of not developing AGI should raise some eyebrows. I treat "this feels bad" as a reason to stop without waiting for a legible justification.

And right now I'm using that last skill to not read this post. I wouldn't have even skimmed it if I didn't think it was important to make this comment and have it not be totally uninformed. When I read this I feel awful, highly activated, and stuck in a helpless/freeze response. It instills the same "you can't trust yourself, follow this rigidity" that it's trying to argue against.

You can't fight fire with fire: getting out of a tightly wound x-risk trauma spiral involves grounding and building trust in yourself, not being scared into applying the same rigidity in the opposite direction.

made my own alternative, where if you really want to go on a blocked website, you have to effectively stare at your screen for ~1 minute


This is a great idea. I've tried to find something like this in the past and failed; can you share yours?

I don't see anything here to call Ilverin's mental fortitude into question. They're not demanding people appease sneerclub or going into a shame spiral; they raised a hypothesis. I think "we don't accept hypotheses from sneerclub, and we accept that we will miss the twice a day that broken clock is correct" is a reasonable blanket policy for LW, but that doesn't mean everyone who raises a hypothesis from them has been irretrievably corrupted.

Hypothesis: they overhired because they were worried about difficulty hiring in the future, and are now laying off people they never actually needed but were holding onto in case they were needed later. Their staffing needs could stay exactly the same; if they stop being afraid of hiring difficulties, they simply don't need to lock people in ahead of time.

Depends on who wins. I know at least one person who won the lottery and then planned to donate to LTFF, which I thought was a total waste; if that was what they were going to do, they should have donated to LTFF in the first place. But I think Timothy TL's uses were plausibly 10x the EV of what would have happened otherwise, even if one of them ultimately had little impact and I don't know anything about the other. Admittedly, that was back when funding was scarcer and there were no regranting programs, but I don't think those have 100% covered everyone who could direct medium-sized amounts usefully.

I should note I don't participate in donor lotteries, for reasons not covered here. I definitely don't think everyone should do them. But I do think there's a place for them.

Over on FB I asked Jeff to quantify his estimates, and he very kindly did, so I think the arguments about legibility and influencing others are something of a red herring: Jeff just doesn't think donor lotteries create much value.
