Measuring artificial intelligence on human benchmarks is naive
Central claim: Measured objectively, GPT-4 is arguably already well past human intelligence, perhaps even after accounting for generality. Central implication: If the reason we worry that AGI will wipe us out is tied to an objective notion of intelligence--such as the idea that it starts to reflect on its values...
This is an awesome idea, thanks! I'm not sure I buy the conclusion, but I expect that having learned about "mutual anthropic capture" will be useful for my thinking on this.