LESSWRONG

Kishan Panaganti

Comments
AGI Safety & Alignment @ Google DeepMind is hiring

Kishan Panaganti · 7mo

Having such a clear FAQ and an LLM/bot-proof questionnaire in the job application is unique! I was curious about the choice to prioritize hiring engineers this cycle rather than core researchers (e.g., people developing principled frameworks for solving safety issues). This feels like treating safety research as an afterthought, evaluating these models after the fact rather than developing principled methodologies for integrating safety into them. I may be wrong in this interpretation, so I'm hoping this question makes it past the recruiters in the application process. :)
