https://x.com/JBloomAus/status/2050150634268098997?s=20
I'm hiring for Research Scientists and Engineers on my team at UK AISI. I think this is likely to be a very high-impact role that people should seriously consider. My team strongly prioritises good epistemic standards and has great incentives, working in the public interest at UK AISI. Please apply!
IMO, UK AISI is kicking ass and is one of the best places to have a positive impact on making AGI go well. I too encourage people to apply!
Could you share some arguments for your claims? (likely high impact, strongly prioritises good epistemic standards, great incentives)
Sure (quick response + only representing my personal views)
- High impact: I think about this in terms of EV across different worlds. High political will worlds, though maybe less likely, are where government is likely to matter more - these are worlds where it really matters that governments know which norms are good and how to regulate effectively. The UK could be quite influential here. One of the main things my team has been doing is developing both expertise and relationships so that we can leverage them - we touch on topics that are very important for x-risk, like oversight of AI systems (monitoring / auditing). We've got a big report coming soon on this topic which I think people will like. In low political will worlds, "making the good path easier" for labs could be quite helpful. Red-teaming - showing that mitigations may not work, or pointing out flaws in safety arguments - can help create pressure for marginally better solutions. I think UK AISI's work on safeguards fits into this. I like METR's sabotage report review and think it can play a similar role. To abstract a little: UK AISI's strong stakeholder relationships (labs, government, technical talent) place it particularly well to influence the ecosystem to do better.
- Strongly prioritises good epistemic standards: Partly this is a statement about my team specifically and how we operate. Pre-registering beliefs / making beliefs pay rent, trying to de-confuse ourselves, and separating observations from conclusions are large parts of how we work. More broadly, I think the top-down pressure is to do very high-quality work, probably because this mitigates risks (of saying incorrect things that become known later). We often seek feedback internally and externally, with a request to red-team our results / claims.
- Great incentives to work in the public interest: Most obviously, we're not a lab, and as part of government, team members can't have significant conflicts of interest. The more senior you go in UK AISI, the more likely it is that individuals could be working at a frontier lab and are choosing not to - so there isn't a strong drive to look good to labs in order to get jobs with them (even though this might be a motivation for more junior team members). I think the pressures towards overstating and understating risks roughly cancel out, with the dominant motivation being to make statements that are true and to give recommendations that are useful / practical.
It might be easier to explain why I feel good about our impact / epistemics / incentives in the context of opposing arguments. I'd be happy to steel-man any counterarguments people suggest.