C.S.W.
Comments
Open Thread - Summer 2025
C.S.W. · 2mo

I'm looking for resources or existing work on forecasting overall progress in AI safety — for example, estimates of how much X-risk from AI can be expected to be reduced over the medium term (i.e., the time range people generally expect before those risks become genuinely feasible). Ideally, I'd like resources that try to quantify the risk reduction, and/or that examine technical and governance work separately (or, better still, both).

Failing that, the next best alternative would be resources that estimate the reduction in AI risk from work done so far (again, ideally quantified, even if only as an overview of progress on alignment benchmarks). And failing that, any pointers for someone trying to do this kind of work themselves. I expect any such estimates to be extremely uncertain, but I still believe they would be valuable for my interests in the field.

(Side note: I'm new to LW, so let me know if this post would belong better elsewhere.)

Posts

Resources on quantifiably forecasting future progress or reviewing past progress in AI safety? [Question] · 2mo