It seems really hard to deceive a Bayesian agent who thinks you may be deceiving them, especially in a repeated game. I would guess there are interesting theorems to be proven about Bayesian agents attempting to deceive one another: in many cases, each agent's ability to deceive the other would be highly bounded or zero, especially in a flexible setting with pre-commitment devices available.
To give a simple example, agent A may tell agent B that they believe ω=0.8, even though they internally believe ω=0.5. However, if this kind of deception happened repeatedly, or if B already considered it relatively likely, then agent B would update negligibly, if at all, on that message. The toy calculation below sketches why.
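Here's a minimal sketch of that update (my own toy framing, not anything proven in the comment): B hears A claim ω=0.8 and conditions on the possibility that A is lying. The assumption that a deceptive A always reports 0.8, and the `p_honest` parameter, are illustrative choices I'm introducing, not part of the original example.

```python
# Toy model: agent B hears agent A claim "omega = 0.8" and updates
# while accounting for possible deception.
# Assumption (mine): A knows omega; an honest A reports it truthfully,
# while a deceptive A reports 0.8 regardless of the truth.

def posterior_omega_high(p_honest: float, prior_high: float = 0.5) -> float:
    """B's posterior that omega = 0.8 after hearing A claim omega = 0.8.

    p_honest: B's credence that A reports honestly.
    prior_high: B's prior that omega = 0.8 (vs. omega = 0.5).
    """
    # Likelihood of the claim under each hypothesis:
    # if omega = 0.8, both honest and deceptive A say "0.8" -> likelihood 1
    # if omega = 0.5, only a deceptive A says "0.8" -> likelihood 1 - p_honest
    like_high = 1.0
    like_low = 1.0 - p_honest
    evidence = prior_high * like_high + (1 - prior_high) * like_low
    return prior_high * like_high / evidence

for p_honest in (0.99, 0.5, 0.1, 0.01):
    print(f"P(honest)={p_honest:>5}: posterior P(omega=0.8) = "
          f"{posterior_omega_high(p_honest):.3f}")
```

As B's credence that A is honest drops toward zero, the posterior collapses back to the 0.5 prior: the message becomes nearly uninformative, which is the "negligible update" in the example above.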
Hm... I like the idea of an agent deceiving another due to its bounds on computational time, but I could imagine many stable (though smaller) setups in which that deception wouldn't work. I'm curious whether a good Bayesian agent could do "almost perfect" on many questions given limited computation. For instance, a good Bayesian would use Bayesian reasoning to allocate whatever computation it has semi-optimally (assuming it has some sort of intuition to start from, which I assume is necessary?). A rough illustration of this follows.
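As a rough illustration (my example, under the assumption that "limited computation" means something like a fixed grid size), here is an agent approximating a coin-bias posterior on a coarse grid and comparing against the exact conjugate answer:

```python
# A minimal sketch of "almost perfect under limited computation":
# approximate a coin-bias posterior on a coarse grid and compare
# to the exact Beta-distribution answer.
import numpy as np

def grid_posterior_mean(heads: int, tails: int, n_points: int) -> float:
    """Posterior mean of the coin's bias under a uniform prior, using an
    n_points-point grid -- a stand-in for a fixed computation budget."""
    grid = np.linspace(0.01, 0.99, n_points)
    # Unnormalized posterior: uniform prior times binomial likelihood.
    post = grid**heads * (1 - grid)**tails
    post /= post.sum()
    return float((grid * post).sum())

heads, tails = 7, 3
exact = (1 + heads) / (2 + heads + tails)  # mean of Beta(1+h, 1+t)
for n in (5, 20, 500):
    approx = grid_posterior_mean(heads, tails, n)
    print(f"{n:>3} grid points: mean={approx:.4f} (exact {exact:.4f})")
```

Even a handful of grid points lands within a few thousandths of the exact posterior mean here, which gestures at how a compute-limited Bayesian could still be "almost perfect" on many questions.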
On being underspecified: it seems to me like, in general, our models of agent cognition have always been pretty underspecified.