How does cognitive/intellectual performance (e.g. as measured by …) translate to real-world capability? Do linear increases in cognitive performance result in linear increases in capability? I don't know, but I did think of a way we could maybe investigate that:
Hmm, I guess the question is: can you describe a model for how moving across the following credences (in an arbitrary proposition):
- 90%
- 99%
- 99.9%
- 99.99%
- 99.999%
- 99.9999%
could be exploited to offer linear (monetary) returns at each step? Then it's a question of how many real-world scenarios look exploitable like that (a toy version of this is sketched below).
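Here's a minimal sketch of what I have in mind (a toy model of my own, not anything established): suppose each credence level lets the agent place 1-unit even-odds bets that it wins with probability p. The expected profit per bet is then p − (1 − p) = 2p − 1, and the question is how that grows across the ladder above.

```python
# Toy model (my own assumptions): a 1-unit even-odds bet won with probability p.
# Expected profit per bet is p*(+1) + (1-p)*(-1) = 2p - 1.
credences = [0.9, 0.99, 0.999, 0.9999, 0.99999, 0.999999]
evs = [2 * p - 1 for p in credences]

for p, ev in zip(credences, evs):
    print(f"p={p:<9} EV per bet = {ev:.6f}")
for prev, cur in zip(evs, evs[1:]):
    print(f"marginal gain from one more nine: {cur - prev:.6f}")
```

On this toy model the answer is clearly "no": EV per bet is capped at 1 unit, and each extra nine of credence buys roughly a tenth of the previous step's gain (0.18, 0.018, 0.0018, …).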
I would be very interested in answers to this. This could significantly change my views on how much real-world capability you can buy with increasing cognitive performance.
Of course, predictive accuracy is not the only measure of cognitive performance (and, in particular, prediction-enhanced finance is not the only way that a superhuman AI could leverage its greater intelligence in the real world).
This could be thought of as a starting point for investigating how cognitive performance translates into real-world capability. Predictive power is a good place to start because it's a simple and straightforward measure. It's just a proxy, but as a first attempt, I think it's fine.
Monetary returns seem like a pretty robust measure of real-world capability.
Exponentially diminishing returns are what I found in the concrete examples I thought of (e.g. offering insurance policies, betting on events, etc.); a worked version of the insurance case follows below.
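To make the insurance case concrete, here's a rough sketch (the payout figure is made up for illustration): if the agent assigns probability 1 − p to the insured event occurring, the actuarially fair premium on a payout L is (1 − p)·L, so each extra nine of credence lets it undercut a less accurate competitor, but by ten times less at each step.

```python
# Hypothetical insurance example: payout L against an event the agent
# assigns probability (1 - p) of occurring. Fair premium = (1 - p) * L.
L = 1_000_000  # made-up payout, purely for illustration
credences = [0.9, 0.99, 0.999, 0.9999, 0.99999, 0.999999]
premiums = [(1 - p) * L for p in credences]

for p, prem in zip(credences, premiums):
    print(f"p={p:<9} fair premium = {prem:>12,.2f}")
for prev, cur in zip(premiums, premiums[1:]):
    print(f"saving from one more nine: {prev - cur:>12,.2f}")
```

The pattern is the same: the savings run 90,000, then 9,000, then 900, and so on; each additional nine is worth an order of magnitude less than the last.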
It seems to me that an AI that linearly increased its predictive accuracy on a particular topic would see exponentially diminishing returns.
The question is whether this return on investment of predictive accuracy generalises.
Suppose instead that the agent in question is well calibrated, and that for any binary question it can assign 90% credence to either the proposition or its negation.
Now raise that accuracy to 99%. How much more money can the agent extract per bet?
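One concrete way to finish that calculation (under assumptions I'm adding: the agent bets Kelly-optimally against a counterparty offering even odds, i.e. an implied probability of 0.5): the long-run log-growth rate of Kelly betting equals the KL divergence between the agent's credence and the market's, so it is bounded above by ln 2 no matter how many nines the agent adds.

```python
import math

def kelly_growth(p: float, market_q: float = 0.5) -> float:
    """Long-run log-growth rate per bet for a Kelly bettor with credence p
    against a market pricing the event at market_q; equals KL(p || market_q)."""
    return p * math.log(p / market_q) + (1 - p) * math.log((1 - p) / (1 - market_q))

for p in [0.9, 0.99, 0.999, 0.9999, 0.99999, 0.999999]:
    print(f"p={p:<9} log-growth per bet = {kelly_growth(p):.6f}")
print(f"upper bound as p -> 1: ln 2 = {math.log(2):.6f}")
```

Growth per bet rises from about 0.368 at 90% to 0.637 at 99%, then crawls toward the ln 2 ≈ 0.693 ceiling; past the first couple of nines, almost all the available edge has been captured. So at least against a fixed counterparty, this framing also gives exponentially diminishing returns.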