I am in general not especially proficient in category theory, and I think that the whole framework could be rewritten from the ground up by someone more proficient in category theory than I am, and made much better in the process.
Time and coarse world models
I feel like the partial observability I get from taking a coarsening of the world and saying an agent has observations in that coarsening is similar to the partial observability I get when saying an agent learns something at a specific time. In particular, these two things seem similar enough to me that one might be able to unify the two definitions, and in the process reveal new things about them.
A random open question I am curious about, though it doesn't seem that important: is deciding whether a morphism exists between two given finite Cartesian frames NP-complete?
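The question is at least well-posed: a candidate morphism (g, h), with g mapping the first frame's agent options forward and h mapping the second frame's environment options backward, can be checked in polynomial time, so the problem is in NP. The sketch below, using the same hypothetical dict encoding of finite frames as above, just brute-forces over all candidate pairs, which is exponential in the frame sizes.

```python
from itertools import product

def morphism_exists(C, D):
    """Brute-force: is there a morphism (g, h): C -> D?
    C and D are dicts {(a, e): w}. A morphism is a pair of maps
    g: A_C -> A_D and h: E_D -> E_C satisfying
        D[g(a), e] == C[a, h(e)]  for all a in A_C, e in E_D.
    Verifying one candidate (g, h) is polynomial (so the problem is
    in NP); enumerating all candidates, as here, is exponential."""
    A_C = sorted({a for a, _ in C})
    E_C = sorted({e for _, e in C})
    A_D = sorted({a for a, _ in D})
    E_D = sorted({e for _, e in D})
    for g_vals in product(A_D, repeat=len(A_C)):
        g = dict(zip(A_C, g_vals))
        for h_vals in product(E_C, repeat=len(E_D)):
            h = dict(zip(E_D, h_vals))
            if all(D[g[a], e] == C[a, h[e]] for a in A_C for e in E_D):
                return True
    return False
```

As a sanity check, the identity maps always give a morphism from a frame to itself, so `morphism_exists(C, C)` is true for any frame C; the open question is whether anything fundamentally better than this kind of search is possible in general.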
I think that much of the meat of what I want Cartesian frames to do is connected to time, and I have only really scratched the surface of that. I think that there is a lot more to say about time, and that we have options for how to think about time in Cartesian frames. The one I presented is my favorite at the moment, but I am uncertain.
For example, one might want to think about an agent, and the collection of pairs of partitions U and V of W, such that the agent has a (multiplicative?) subagent that could choose U, while observing V. This collection of pairs is closed under coarsening in both arguments, and so one could talk about a sort of Pareto frontier of how refined you can make U given V or vice versa. I think this Pareto frontier looks a lot like time.
There is a sense in which Cartesian frames is a very updateless ontology, and thus I am concerned about how to make it play nicely with logical uncertainty. Indeed, Cartesian frames basically assume that we have a set of possible worlds, which means positing, as objects, possible worlds that are not realized. Logical uncertainty does not do well with this assumption. Extending Cartesian frames to connect up with logical uncertainty is a major open problem.
In agent simulates predictor, I am given a proof that I output a certain action, and then I must make a choice. In making this choice, I am determining whether or not I am given that proof in the first place. Further, the proof must in some sense compress my deliberation, or I would not be able to comprehend it. Thus, I feel that there are some details of the proof that are not "true inputs" for me.
I want to say that my deciding what I would do if I saw a proof is "earlier" than the proof according to some generalized notion of causality, or earlier in "logical time." I want to say that the only way to make the agent-simulates-predictor setup make sense is to have the full proof itself not be a true input for me. I think that Cartesian frames is a step toward making the notion of inputs and outputs continuous, and so could help our thinking around this problem.
I think that our current ability to talk about agents contained within other agents is pretty limited, and Cartesian frames is a significant step forward on that. It would not surprise me if this could help with fixing our ontology around subsystem alignment. It could also help with our ontology around reasoning about committees, subcommittees, and members.
Preferences and goals
It might be interesting to put on top of this theory something that deals more with utilities, or something similar. Since this theory is basically a calculus of what agents could do, it seems likely that we could say interesting things by putting on top of it an analysis of what agents should do.
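One of the simplest things one could put on top is a utility function u on worlds, and then ask what utility the agent can guarantee regardless of the environment's choice. A minimal sketch, reusing the hypothetical dict encoding of a finite frame; the maximin criterion here is just one illustrative choice of "should", not something from the sequence.

```python
def maximin_value(frame, u):
    """The utility the agent can guarantee: pick the agent option
    whose worst-case world (over environment options) is best."""
    agent_opts = {a for a, _ in frame}
    env_opts = {e for _, e in frame}
    return max(min(u[frame[a, e]] for e in env_opts) for a in agent_opts)

frame = {
    ("a0", "e0"): "w0", ("a0", "e1"): "w1",
    ("a1", "e0"): "w2", ("a1", "e1"): "w3",
}
u = {"w0": 0, "w1": 3, "w2": 2, "w3": 1}
# a0 guarantees min(0, 3) = 0; a1 guarantees min(2, 1) = 1,
# so the agent can guarantee a utility of 1 by choosing a1.
```

Since morphisms in the framework track what agents can do, one natural follow-up question is how such guaranteed values behave under morphisms and under coarsening of the world.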