All of _will_'s Comments + Replies

So basically I don't think it's possible to do robustly positive actions in longtermism with high (>70%? >60%?) probability of being net positive for the long-term future

This seems like an important point, and it's one I've not heard before. (At least, not outside of cluelessness or specific concerns around AI safety speeding up capabilities; I'm pretty sure that most EAs I know have ~100% confidence that what they're doing is net positive for the long-term future.)

I'm super interested in how you might have arrived at this belief: would you be able t...

8 · Howie Lempel · 6mo
"I'm pretty sure that most EAs I know have ~100% confidence that what they're doing is net positive for the long-term future" Fwiw, I think this is probably true for very few if any of the EAs I've worked with, though that's a biased sample. I wonder if the thing giving you this vibe might be they they actually think something like "I'm not that confident that my work is net positive for the LTF but my best guess is that it's net positive in expectation. If what I'm doing is not positive, there's no cheap way for me to figure it out, so I am confident (though not ~100%) that my work will keep seeming positive EV to me for the near future." One informal way to describe this is that they are confident that their work is net positive in expectation/ex ante but not that it will be net positive ex post I think this can look a lot like somebody being ~sure that what they're doing is net positive even if in fact they are pretty uncertain.
1 · Daniel_Eth · 6mo
One way I think about this is there are just so many weird (positive and negative) feedback loops and indirect effects, so it's really hard to know if any particular action is good or bad. Let's say you fund a promising-seeming area of alignment research – just off the top of my head, here are several ways that grant could backfire:

• the research appears promising but turns out not to be, but in the meantime it wastes the time of other alignment researchers who otherwise would've gone into other areas
• the research area is promising in general, but the particular framing used by the researcher you funded is confusing, and that leads to slower progress than counterfactually
• the researcher you funded (unbeknownst to you) turns out to be toxic or otherwise have bad judgment, and by funding him, you counterfactually poison the well on this line of research
• the area you fund sees progress and grows, which counterfactually sucks up lots of longtermist money that otherwise would have been invested and had greater effect (say, during crunch time)
• the research is somewhat safety-enhancing, to the point that labs (facing safety-capabilities tradeoffs) decide to push capabilities further than they otherwise would, and safety is hurt on net
• the research is somewhat safety-enhancing, to the point that it prevents a warning shot, and that warning shot would have been the spark that inspired humanity to get its game together regarding combatting AI X-risk
• the research advances capabilities, either directly or indirectly
• the research is exciting and draws the attention of other researchers into the field, but one of those researchers happens to have a huge, tail-negative effect on the field, outweighing all the other benefits (say, that particular researcher has a very extreme version of one of the above bullet points)
• Etcetera – I feel like I could do this all day.

Some of the above are more likely than others, but there are just so many differen...

I'm pretty sure that most EAs I know have ~100% confidence that what they're doing is net positive for the long-term future.

Really? Without giving away names, can you tell me roughly what cluster they are in? Geographical area, age range, roughly what vocation (technical AI safety/AI policy/biosecurity/community building/earning-to-give)? 

I'm super interested in how you might have arrived at this belief: would you be able to elaborate a little? For instance, is there a theoretical argument going on here, like a weak form of cluelessness? Or is it mor

...

"GeneSmith"... the pun just landed with me. nice.

Very nitpicky (sorry): it'd be nice if the capitalization of the epistemic status reactions were consistent. Currently, some are in title case, for example "Too Harsh" and "Hits the Mark", while others are in sentence case, like "Key insight" and "Missed the point". The autistic part of me finds this upsetting.

Thanks for this comment. I don't have much to add, other than: have you considered fleshing out and writing up this scenario in a style similar to "What 2026 looks like"?

Thanks for this question.

Firstly, I agree with you that firmware-based monitoring and compute capacity restrictions would require similar amounts of political will to happen. Then, in terms of technical challenges, I remember one of the forecasters saying they believe that "usage-tracking firmware updates being rolled out to 95% of all chips covered by the 2022 US export controls before 2028" is 90% likely to be physically possible, and 70% likely to be logistically possible. (I was surprised at how high these stated percentages were, but I didn't have tim...

There is a vibe that I often get from suffering focused people, which is a combo of

a) seeming to be actively stuck in some kind of anxiety loop, preoccupied with hell in a way that seems more pathological to me than well-reasoned. 

b) something about their writing and vibe feels generally off,

...

I agree that this seems to be the case with LessWrong users who engage in suffering-related topics like quantum immortality and Roko's basilisk. However, I don't think any(?) of these users are/have been professional s-risk researchers; the few (three, iirc) s-risk researchers I've talked to in real life did not give off this kind of vibe at all.

4 · CronoDAS · 1y
"There is no afterlife and there are no supernatural miracles" is true, important, and not believed by most humans. The people who post here, though, have a greater proportion of people who believe this than the world population does.