Some calls to action not bottlenecked on admissions:
https://apartresearch.com/#get-started
https://coda.io/@alignmentdev/ai-safety-info
https://researcher2.eleuther.ai/get-involved/
https://aisafety.quest/#volunteer
https://aisafetyawarenessproject.org/next-steps/
https://www.theomachialabs.com/
https://www.horizonomega.org/#get-involved
https://www.taraprogram.org/
The RoastMyPost review is much better; I made one edit as a result (Anthropic settled rather than letting a precedent be set). It takes a while to load!
Try right-click > Inspect > drag the console to cover half the screen
Yeah, we're thinking of making it real-time rather than annual; will chat once we've recovered.
"Big if true" is biased (by the chosen scale) towards applications rather than pure intellectual significance. But I wanted to try and cover maths anyway, since it is always ignored.
Please consider adding corrections anonymously here!
Yeah, this is strictly invalid but was intentional (see Methods). See the last column for the true EV, which produces a less useful ordering. I think this is fine because the ordering was the objective, rather than using the EV as a decision input.
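To illustrate the distinction (a minimal sketch with made-up claims, probabilities, and scales, not the actual Methods): a score built from a bounded importance rating isn't a real expected value, but it can still give a more discriminating ordering than raw EV, which a single huge-impact long shot can dominate.

```python
# Hypothetical illustration only: invented claims, probabilities, and scales.
# "Score" multiplies P(true) by a bounded importance rating (here 1-10), which is
# not a valid expected value, but it is only used to *order* claims.
claims = [
    # (name, p_true, importance_1_to_10, impact_in_real_units)
    ("claim A", 0.60, 6, 1e3),
    ("claim B", 0.05, 10, 1e9),   # long shot with enormous raw impact
    ("claim C", 0.40, 8, 1e5),
]

rank_by_score = sorted(claims, key=lambda c: c[1] * c[2], reverse=True)
rank_by_ev    = sorted(claims, key=lambda c: c[1] * c[3], reverse=True)

print("score order:", [c[0] for c in rank_by_score])  # A, C, B on these numbers
print("EV order:   ", [c[0] for c in rank_by_ev])     # B first, purely via raw impact
```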
Fair. I don't doubt there is some bias, but I think most of the rest is blameless correlation (ribose and fast LUCA are the same event) and hiding behind the conditional (IF true, and they won't all be).
Nice points. I would add "backtracking" as one very plausible general trick gained purely from RLVR.
I will own up to being unclear in the OP: the point I was trying to make is that last year there was a lot of excitement about way bigger off-target generalisation than cleaner CoTs, basic work skills, uncertainty expression, and backtracking. But I should do the work of finding those animal spirits/predictions, quantifying them, and quantifying the current situation.
Yep, thanks, just tried. Just say @synthid in any Gemini session.
I vote that you work on horizon modelling and automated risk model updates with your pal technicalities.