Preferences and goals
It might be interesting to build on top of this theory something that deals more with utilities or preferences. Since this theory is basically a calculus of what agents could do, it seems likely that we could say interesting things by adding on top of it an analysis of what agents should do.
I don't think that's what you had in mind, but one reason I am interested in learning more about Cartesian frames is that I think they might prove useful for formalizing the locality of goals. Basically, the idea is to capture whether the goal a system pursues is really about its inputs, or about the state of the world.
One way to understand this distinction is through wireheading. For example, I consider my own goals to be about the world, because I wouldn't want to wirehead into believing that I accomplished them. Whereas having the goal of always being happy means being completely okay with wireheading, and so having a goal about my inputs instead of about what truly happens in the world.
Intuitively, this distinction seems to depend on how the boundaries are drawn between the system/agent and the environment, as well as on the interface between them. This is where I see a possible connection with Cartesian frames, but I'm not sure whether they can be used for that purpose.
Category-theory-first approaches
I am in general not especially proficient in category theory, and I think that the whole framework could be rewritten from the ground up by someone who is more proficient in category theory than me, and be made much better in the process.
Logical uncertainty
There is a sense in which Cartesian frames is a very updateless ontology, and thus I am concerned about how to make it play nicely with logical uncertainty. Indeed, Cartesian frames basically assume that we have a set of possible worlds, which is assuming that we have objects representing the possible worlds that are not realized. Logical uncertainty does not do well with this assumption. Extending Cartesian frames to connect up with logical uncertainty is a major open problem.
Formalizing time
I think that much of the meat of what I want Cartesian frames to do is connected to time, and I have only really touched the surface of that. I think that there is a lot more to say about time, and I think there are options we have about how to think about time in Cartesian frames. The one I presented is my favorite at the moment, but I am uncertain.
For example, one might want to think about an agent, and the collection of pairs of partitions V and P of W, such that the agent has a (multiplicative?) subagent that could choose V, while observing P. This collection of pairs is closed under coarsening in both arguments, and so one could talk about a sort of Pareto frontier of how refined you can make V given P, or vice versa. I think this Pareto frontier looks a lot like time.
"subagent that could choose V" -- do you mean that it can ensure each element of V, or control each element of V, or neither of these? Since the controllables are not closed under unions, I don't think the controllables version of "could choose" is closed under coarsening the partition. (I can prove that the ensurables version is closed; but it would have been nice if the controllables version worked.)
ETA: Actually controllables do work out if I ignore the degenerate case of a singleton partition of the world. This is because, when considering partitions of the world, ensurables and controllables are almost the same thing.
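The closure claims above can be checked concretely. Below is a minimal sketch in Python, on a toy frame of my own construction (not one from the post): ensurables are closed under supersets, so "can choose each cell" survives coarsening a partition (a coarsened cell is a union of original cells); controllables are not closed under unions, because the complement of the union can become unreachable.

```python
# Toy Cartesian frame (A, E, .) over W, as a dict (agent option, env option) -> world.
# This frame is an illustrative assumption of mine, not an example from the post.
A = ["a0", "a1"]
E = ["e"]
dot = {("a0", "e"): "w0", ("a1", "e"): "w1"}
W = ["w0", "w1", "w2"]  # w2 is a possible world the agent can never reach

def ensurable(S):
    """S is ensurable iff some agent option forces the world into S."""
    return any(all(dot[(a, e)] in S for e in E) for a in A)

def controllable(S):
    """S is controllable iff both S and its complement are ensurable."""
    return ensurable(S) and ensurable(set(W) - set(S))

# Ensurables are closed under supersets: the same option still works.
assert ensurable({"w0"}) and ensurable({"w0", "w2"})

# Controllables are NOT closed under unions: {w0} and {w1} are each
# controllable, but their union is not, since its complement {w2} is unreachable.
assert controllable({"w0"}) and controllable({"w1"})
assert not controllable({"w0", "w1"})
```

Note that this counterexample uses a set ({"w0", "w1"}) whose complement is not a union of the chosen cells, matching the ETA: for cells of a partition of the whole world (other than the singleton partition), the complement of a cell is a union of the other cells, so ensurability of every cell gives controllability for free.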
Time and coarse world models
I feel like the partial observability I get from taking a coarsening of the world and saying an agent has observations in that coarsening is similar to the partial observability I get when saying an agent learns something at a specific time. In particular, these two things seem similar enough to me that one might be able to unify the two definitions, and in the process reveal new things about them.
I have something suggestive of a negative result in this direction:
Let D be the frame from the prime-detector situation in Section 2.1 of the coarse worlds post, and let f : W -> W be the (non-surjective) function that "heats" the outcome (changes any "C" to an "H"). The frame f∘D is clearly in some sense equivalent to the one from the example (which deletes the temperature from the outcome) -- I am using my version just to stay within the same category when comparing frames. As a reminder, primality is not observable in D but is observable in f∘D.
Claim: No frame obtained from D by externalising part of the agent's decision is biextensionally equivalent to f∘D.
Proof Idea:
The kind of additional observability we get from coarsening the world seems in this case to be very different from the kind that comes from externalising part of the agent's decision.
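The effect of the heating functor can be reproduced on a small scale. The sketch below is my own toy frame, not the exact prime-detector frame from the coarse worlds post, but it has the same shape: the agent can condition on primality only by looking, and looking heats the room, so the conditional policy produces a different (hotter) world than the unconditional options. That blocks observability until the coarsening f forgets the temperature.

```python
# Worlds are strings "<guess><p|c><C|H>": the agent's guess, whether the
# number is prime, and the temperature. All names here are my own toy setup.
E = ["p", "c"]              # environment: the number is prime or composite
A = ["a0", "a1", "s", "t"]
dot = {
    ("a0", "p"): "0pC", ("a0", "c"): "0cC",  # guess 0 without looking
    ("a1", "p"): "1pC", ("a1", "c"): "1cC",  # guess 1 without looking
    ("s",  "p"): "1pH", ("s",  "c"): "0cH",  # look, guess 1 iff prime (heats room)
    ("t",  "p"): "0pH", ("t",  "c"): "1cH",  # look, guess 0 iff prime (heats room)
}

def observes(dot, A, E, S):
    """S is observable iff for every pair (a0, a1), some option a implements
    the conditional policy 'behave like a0 on S, like a1 off S'."""
    return all(
        any(
            all((dot[(a, e)] == dot[(a0, e)] and dot[(a0, e)] in S)
                or (dot[(a, e)] == dot[(a1, e)] and dot[(a1, e)] not in S)
                for e in E)
            for a in A)
        for a0 in A for a1 in A)

def heat(w):
    """The non-surjective coarsening f : W -> W: every 'C' becomes 'H'."""
    return w.replace("C", "H")

heated = {k: heat(w) for k, w in dot.items()}  # the frame f∘C
S0 = {w for w in dot.values() if "p" in w}     # "the number is prime" in C
S1 = {w for w in heated.values() if "p" in w}  # the same event in f∘C

assert not observes(dot, A, E, S0)  # conditioning leaks into the temperature
assert observes(heated, A, E, S1)   # ...until the world forgets the temperature
```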
Computational complexity
A random open question I am curious about, but that doesn't seem that important: Is deciding the existence of a morphism between two finite Cartesian frames NP-complete?
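For small frames the question can at least be decided by brute force. The sketch below enumerates all candidate pairs of maps; the frames are toy examples of my own, and the exponential search is of course consistent with, but says nothing about, the NP question.

```python
from itertools import product

def morphism_exists(C, D):
    """Brute-force check for a morphism (g, h) : C -> D between Cartesian
    frames over the same set of worlds. Here C = (A, E, dotC) and
    D = (B, F, dotD); a morphism is a pair of maps g : A -> B and
    h : F -> E satisfying g(a) * f = a . h(f) for all a in A, f in F."""
    (A, E, dotC), (B, F, dotD) = C, D
    for g in product(B, repeat=len(A)):       # every function A -> B
        for h in product(E, repeat=len(F)):   # every function F -> E
            if all(dotD[(g[i], f)] == dotC[(a, h[j])]
                   for i, a in enumerate(A)
                   for j, f in enumerate(F)):
                return True
    return False

# C can force either of two worlds; D is a one-option frame stuck at "w0".
C = (["a0", "a1"], ["e"], {("a0", "e"): "w0", ("a1", "e"): "w1"})
D = (["b"], ["f"], {("b", "f"): "w0"})

assert not morphism_exists(C, D)  # a1's world "w1" can't be matched in D
assert morphism_exists(D, C)      # send b to a0, and e back to f
```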
Logical time
In the Agent Simulates Predictor problem, I am given a proof that I output a certain action, and then I must make a choice. In making this choice, I am determining whether or not I am given that proof in the first place. Further, the proof must in some sense compress my deliberation, or I would not be able to comprehend it. Thus, I feel that there are some details of the proof that are not "true inputs" for me.
I want to say that my deciding what I would do if I saw a proof is "earlier" than the proof according to some generalized notion of causality, or earlier in "logical time." I want to say that the only way to make the agent-simulates-predictor set-up make sense is to have the full proof itself not be a true input for me. I think that Cartesian frames is a step towards making continuous the notion of inputs and outputs, and so could help our thinking around this problem.
Subagents
I think that our current ability to talk about agents contained within other agents is pretty limited, and Cartesian frames is a significant step forward on that. It would not surprise me if this could help with fixing our ontology around subsystem alignment. It could also help with our ontology around reasoning about committees, subcommittees, and members.
Generalizing observability
Observables can clearly be extended to infinite partitions, and maybe further to a sigma algebra or something similar. One might want to also think of A and E as sigma algebras.
Observables can also be extended to talk about separating two subsets of W, rather than separating a subset of W from its complement. One could also talk about observables that don't allow for arbitrary functions from the observed set to A, but instead allow for some restricted class, such as continuous or Kakutani functions.
Such restricted classes might make more sense when using this more general notion of observables, or it might be possible to entirely construct these classes from this notion of observables.
This could allow the theory to encompass game theory, since you could have two agents that each choose a probabilistic strategy while knowing the probabilistic strategy chosen by the other player.
Frames that are partitions into rectangles
I think that there might be significantly more that can be said about Cartesian frames that are a "partition into rectangles" than can be said about Cartesian frames in general.
By a "partition into rectangles," I mean a Cartesian frame (A, E, ⋅) such that whenever a0⋅e0 = a1⋅e1, we also have a0⋅e1 = a0⋅e0. In particular, this assumption is saying something to the effect of "the level of description of this world is refined enough to play nicely with the factorization into A and E."
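The rectangle condition is easy to check mechanically. A minimal sketch, on two toy frames of my own: one where the world records the full (a, e) pair, so every fiber is trivially a rectangle, and an "XOR"-style world where the fiber of one outcome is a diagonal rather than a rectangle.

```python
from itertools import product

def is_partition_into_rectangles(A, E, dot):
    """Check that the preimage of each world under (a, e) -> a.e is a
    rectangle: whenever a0.e0 == a1.e1, a0.e1 is that same world."""
    pairs = list(product(A, E))
    return all(dot[(a0, e1)] == dot[(a0, e0)]
               for (a0, e0) in pairs
               for (a1, e1) in pairs
               if dot[(a0, e0)] == dot[(a1, e1)])

A, E = ["a0", "a1"], ["e0", "e1"]

# Fully refined world: the outcome records the whole (a, e) pair, so each
# fiber is a single point -- trivially a rectangle.
refined = {(a, e): (a, e) for a, e in product(A, E)}

# Coarse "XOR" world: the outcome only records whether the indices match.
# The fiber of "same" is {(a0,e0), (a1,e1)}, a diagonal, not a rectangle.
xor = {(a, e): ("same" if a[1] == e[1] else "diff") for a, e in product(A, E)}

assert is_partition_into_rectangles(A, E, refined)
assert not is_partition_into_rectangles(A, E, xor)
```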
Yosef can observe the second digit for the purpose of deciding the first digit, but can't observe the second digit for the purpose of deciding the third digit.
Am I missing something, or should this be the other way around? Intuitively, I'd think that it makes sense that Yosef can observe the second digit when choosing the third, but not when choosing the first.