Are there known "rational paradoxes", akin to logical paradoxes? A basic example is the following:
In the optimal search problem, the cost of searching at position i is C_i, and the a priori probability of finding the target at i is P_i.
Optimality requires sorting the search locations by non-increasing P_i/C_i: search first where the likelihood of finding the target divided by the cost of searching is highest.
But since sorting costs O(n log(n)), the C_i must grow faster than O(log(i)); otherwise the sorting step asymptotically dominates the total search cost and is wasteful.
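To make the ordering rule concrete, here is a minimal sketch (the numbers are made up, and it assumes the standard model where the target sits in exactly one location and searching that location always finds it):

```python
# Minimal sketch of the ordering rule, under the usual assumptions: the target
# sits in exactly one location, and searching that location always finds it.
# Expected cost of an ordering = sum over steps of C[i] * P(search reaches step i).

def expected_cost(order, P, C):
    remaining = 1.0                  # probability the target has not been found yet
    total = 0.0
    for i in order:
        total += C[i] * remaining    # we only pay C[i] if the search got this far
        remaining -= P[i]
    return total

P = [0.1, 0.6, 0.3]                  # made-up a priori probabilities (sum to 1)
C = [1.0, 4.0, 1.5]                  # made-up search costs

by_ratio = sorted(range(len(P)), key=lambda i: P[i] / C[i], reverse=True)
print(expected_cost(by_ratio, P, C))     # ~4.4  (decreasing P_i/C_i ordering)
print(expected_cost([0, 1, 2], P, C))    # ~5.05 (an arbitrary ordering costs more)
```

Swapping two adjacent locations in an ordering and comparing the two expected costs is the classic exchange argument that yields the P_i/C_i rule.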
Do you know any others?
You don't need O(n log(n)) sorting, but the real issue is that this is a problem in bounded rationality, where the cost of rational reasoning itself is considered to come from a limited resource that needs to be allocated.
What do you mean, I don't "need" O(n log(n)) sorting?
It's just the asymptotic cost of comparison-based sorting...
I'll have a look into bounded rationality. I was missing the keyword.
EDIT: had a look; the concept is too imprecise to yield clear-cut paradoxes.
There are O(n) sorting methods for bounded data like this, using generalized extensions of radix sort. The data is bounded because C_i is bounded below by the minimum cost of evaluating C_i (e.g. 1 FLOP), and P_i is bounded above by 1.
Though yes, bounded rationality is a broad class of concepts to which this problem belongs and there are very few known results that apply across the whole class.
So P_i/C_i is in [0,1], its precision is unbounded, and yet a radix sort can somehow do the job in linear time?
There could be pathological cases where all P_i/C_i are the same up to epsilon.
I guess I'm searching for situations where doing something costs c, computing c costs c', and so on... Branch prediction comes to mind.
> There could be pathological cases where all P_i/C_i are the same up to epsilon.
We could dismiss that by saying that if the ratios are the same up to epsilon, then it does not truly matter which one of them we choose.
(Mathematically speaking, we could redefine the problem from "choosing the best option" to "making sure that our regret is not greater than X".)
OK, so an approximate sorting algorithm in O(n) would do the trick.
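For concreteness, a minimal sketch of what such an approximate linear-time ordering could look like (the bucket count B is an arbitrary precision choice, and the ratios are assumed to lie in [0,1] as discussed above):

```python
# Minimal sketch of an approximate O(n) ordering by decreasing P_i/C_i.
# Assumptions: the ratios lie in [0, 1] (P_i <= 1, C_i >= 1), and the bucket
# count B is an arbitrary precision choice.

def bucket_order(ratios, B=1024):
    """Indices roughly sorted by decreasing ratio, in O(n + B) time."""
    buckets = [[] for _ in range(B)]
    for i, r in enumerate(ratios):
        k = min(int(r * B), B - 1)       # bucket index for this ratio
        buckets[k].append(i)
    order = []
    for bucket in reversed(buckets):     # highest ratios first
        order.extend(bucket)
    return order

P = [0.1, 0.6, 0.3]
C = [1.0, 4.0, 1.5]
print(bucket_order([p / c for p, c in zip(P, C)]))   # -> [2, 1, 0]
```

Items that land in the same bucket keep an arbitrary relative order, which is exactly the "same up to epsilon" case discussed above; the bucket width is effectively the epsilon.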
The problem then boils down to whether computing the cost of (computing the expected cost) is worth the expected gain.
Which goes back to my initial question: is there a rationality paradox? Maybe simply the fact that 1) computing a cost might boil down to the halting problem, and 2) the regress of the cost of the cost of the cost... is possibly infinite?
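On point 2, here is a toy sketch of that regress, where every number and the geometric decay of the gain are assumptions made only for illustration: each extra level of "computing the cost of computing the cost" is worth adding only while its expected gain exceeds its own cost, and without some such assumed decay nothing forces the regress to terminate.

```python
# Toy sketch of the regress in point 2: deciding whether to estimate a cost is
# itself a decision with its own cost. Each meta-level pays `meta_cost`, and
# (purely by assumption) the expected gain from the extra information shrinks
# by `decay` at every level. All numbers here are made up.

def net_value_of_deliberation(expected_gain, meta_cost, decay=0.5, max_depth=50):
    """Add meta-levels only while the next one still pays for itself."""
    total = 0.0
    for _ in range(max_depth):
        if expected_gain <= meta_cost:   # the next level no longer pays: stop
            break
        total += expected_gain - meta_cost
        expected_gain *= decay           # assumed: deeper levels help less and less
    return total

print(net_value_of_deliberation(8.0, 1.0))             # regress stops on its own
print(net_value_of_deliberation(8.0, 1.0, decay=1.0))  # stops only at the arbitrary cap
```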
We aren't necessarily all alive anymore!
We weren't necessarily all human (me and my dog), but now "we" can include machines.
That's obvious, I know. We have changed dramatically in just a few years.
I've just asked various AIs "Who are we?". Old models got it wrong (they automatically assume "we" means humanity...). Recent models get it. I wonder if it's due to the training data now including chats between AIs and humans, or if they figured it out logically.