Hello everyone. I’m an AI/BCI engineer exploring formal ways to define rationality in multi-agent competitive settings (e.g., multi-agent hide-and-seek) within simulated environments. I currently hypothesize that rationality can be viewed as a product of two core components:
Observation Capability: This term quantifies the extent to which the agent can perceive its environment. More formally, it captures how much of the environment’s state is reflected in the agent’s internal model and how recently that information has been updated. For instance, in a simple grid-based environment, this might be measured by the recency with which each grid cell has been observed.
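To make the grid example concrete, here is one way the recency idea could be sketched. Everything below is an assumption on my part: the `last_seen` grid of timestamps, the exponential decay kernel, and the `decay` rate are all hypothetical modeling choices, not a fixed definition.

```python
import numpy as np

def observation_capability(last_seen, t, decay=0.1):
    """Score in [0, 1]: how much of the grid is reflected in the
    agent's internal model, discounted by how stale each cell is.

    last_seen[i, j] is the timestep cell (i, j) was last observed,
    or -inf if never observed. `decay` (hypothetical) controls how
    quickly old observations lose value.
    """
    age = t - last_seen              # staleness of each cell
    recency = np.exp(-decay * age)   # 1.0 if just seen, -> 0 as it ages
    return float(recency.mean())

# Toy 3x3 grid at timestep t=10; the bottom-right cell was never seen.
last_seen = np.array([[10., 9., 8.],
                      [7., 6., 5.],
                      [4., 3., -np.inf]])
score = observation_capability(last_seen, t=10)
```

A fully observed, fully up-to-date grid scores 1.0 under this sketch, and the score decays toward 0 as observations age, which matches the "how recently has each cell been observed" intuition above.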
Predictive Potential: This term measures the agent’s ability to anticipate or forecast future states. Essentially, it gauges the agent’s stored knowledge about the environment’s dynamics, without distinguishing between actively intervening in the environment and merely predicting it. In a grid scenario, one might calculate this by comparing the agent’s predicted state of a newly observed cell to the actual observed state.
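The predictive term could be sketched in a similarly simple way: on each step, score the agent's predictions only on the cells it just (re-)observed. The function name, the discrete cell states, and the accuracy-style scoring are my assumptions for illustration.

```python
import numpy as np

def predictive_potential(predicted, actual, newly_observed):
    """Fraction of newly observed cells whose predicted state matches
    the actual state (a hypothetical accuracy-style metric).

    predicted, actual: integer state grids of the same shape.
    newly_observed: boolean mask of cells observed this step.
    """
    if not newly_observed.any():
        return 0.0  # nothing to score against this step
    hits = (predicted == actual) & newly_observed
    return float(hits.sum() / newly_observed.sum())

# Toy example: the agent re-observes 3 cells and got 2 of them right.
predicted = np.array([[0, 1], [1, 0]])
actual    = np.array([[0, 1], [0, 0]])
mask      = np.array([[True, True], [True, False]])
pp = predictive_potential(predicted, actual, mask)  # 2/3
```

Restricting the comparison to `newly_observed` cells is what makes this a test of stored knowledge of the dynamics rather than of perception: the agent is scored on cells it had to extrapolate, not cells it was currently watching. For continuous states, the equality check would presumably be replaced by a distance-based score.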
I’m interested in whether this formulation—focusing on observation capability and predictive potential—could serve as a robust definition of rationality in more complex environments. Do you see potential limitations or extensions that could strengthen this definition, particularly in higher-dimensional or more realistic multi-agent simulations?
Any thoughts or critiques, especially links to relevant previous discussions, would be greatly appreciated. Thank you.