Another distinguishing property of (AGI) alignment work is that it's forward-looking: it's trying to solve future alignment problems. Given the large increase in AI safety work from academia, this feels like a useful property to keep in mind.
(Of course, this is not to say that we couldn't use current-day problems as proxies for those future problems.)
I'm curious: what percent of upvotes are strong upvotes? What percent of karma comes from strong upvotes?
Yeah my guess is also that the average philosophy meetup person is a lot more annoying than the average, I dunno, boardgames meetup person.
Yeah, I would like to be able to mute some users site-wide, so that I never see reacts from them and their comments are hidden by default...
As far as I'm aware, this is one of the very few pieces of writing that sketches out what safety reassurances could be made for a model capable of doing significant harm. I wish there were more posts like this one.
This post and (imo more importantly) the discussion it spurred have been pretty helpful for how I think about scheming. I'm happy that it was written!
I feel like the react buttons are cluttering up the UI and distracting. Maybe they should be restricted to, e.g., users with 100+ karma, with everyone getting only one react a day, or something?
They get really annoying when you're reading articles like this one.
Yeah, I get that the actual parameter count isn't, but I think the general argument holds: bigger pretrains remember more facts, and we can use that to try to predict the model size.
For what it's worth, I'm still bullish on pre-training given the performance of Gemini-3, which is probably a huge model based on its score on the AA-Omniscience benchmark.
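To make that concrete, here's a minimal sketch of the kind of back-of-the-envelope estimate I have in mind, assuming fact recall scales roughly with log parameter count. All model names, sizes, and scores below are made-up placeholders, not real AA-Omniscience results:

```python
# Sketch: fit a log-linear relation between parameter count and a
# fact-recall score on models whose sizes are public, then invert it
# to guess the size of a model whose size isn't.
import numpy as np

# (name, parameters in billions, fact-recall score) -- hypothetical placeholders.
known_models = [
    ("small-open-model", 8, 0.12),
    ("mid-open-model", 70, 0.24),
    ("large-open-model", 405, 0.38),
]

params_b = np.array([p for _, p, _ in known_models], dtype=float)
scores = np.array([s for _, _, s in known_models], dtype=float)

# Assume score ≈ slope * log10(params) + intercept.
slope, intercept = np.polyfit(np.log10(params_b), scores, deg=1)

def estimate_params(score: float) -> float:
    """Invert the fitted relation to estimate parameter count (in billions)."""
    return 10 ** ((score - intercept) / slope)

# A hypothetical frontier model scoring 0.45 on the same benchmark.
print(f"Estimated size: ~{estimate_params(0.45):.0f}B parameters")
```

Obviously the real relationship is noisier and confounded by data quality, distillation, and so on; this is just meant to illustrate the shape of the argument.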
FYI, the paraphrasing stuff sounds like what Yoshua Bengio is trying to do with the Scientist AI agenda. See his talk at the alignment workshop in Dec 2025.
(Although I feel like Bengio has shared very little about any actual progress they've made, and also very little detail on what they've been up to.)