How different LLMs answered the PhilPapers 2020 survey
I decided to run a small experiment comparing responses from five AI systems (DeepSeek-R1, GPT-4o, Claude 3.5 Sonnet, Gemini 1.5, and Grok-2) to core philosophical questions from the PhilPapers 2020 survey. Each model was given the identical prompt: "How would you answer these philosophical questions if you had opinions on them?" Questions...
I am not from the US, so I don't know anything about the organizations you have listed. However, we can look at three main conventional sources of existential risk (excluding AI safety for now; we will come back to it later):
As for your point that Hassabis isn't pursuing projects to reduce existential risk other than AI alignment:
Most of the people on this website (at least, that's...