Hm... I like the idea of an agent deceiving another due to its bounds on computational time, but I can imagine many stable (though smaller) solutions where that wouldn't happen. I'm curious whether a good Bayesian agent could do "almost perfectly" on many questions given limited computation. For instance, a good Bayesian would use Bayesian reasoning to semi-optimally allocate whatever computation it has (assuming it has some sort of intuition, which I assume is necessary?).

On being underspecified: it seems to me that our models of agent cognition have always been pretty underspecified, so I'd definitely agree here. "Ideal" Bayesian agents are somewhat ridiculously overpowered and unrealistic.

I found the simulations in ProbMods interesting for modeling similar things; I'd like to see a lot more simulations for this kind of work.
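To gesture at the kind of simulation I mean: here's a minimal sketch (in Python, rather than the Church/WebPPL that ProbMods uses) of an agent doing approximate Bayesian inference under a fixed compute budget. The setup and function name are my own toy example, not anything from ProbMods — it just shows that likelihood-weighted sampling with a modest sample budget lands close to the exact posterior, which is one concrete sense in which a bounded Bayesian can do "almost perfectly".

```python
import random

def approximate_posterior_mean(heads, tails, n_samples):
    """Approximate the posterior mean of a coin's bias under a uniform
    prior, via likelihood-weighted sampling with a fixed compute budget."""
    total_weight = 0.0
    weighted_sum = 0.0
    for _ in range(n_samples):
        p = random.random()  # draw a candidate bias from the uniform prior
        weight = (p ** heads) * ((1 - p) ** tails)  # likelihood of the data
        total_weight += weight
        weighted_sum += weight * p
    return weighted_sum / total_weight

random.seed(0)
# The exact posterior mean under a uniform prior is
# (heads + 1) / (heads + tails + 2) = 9/12 = 0.75 here.
estimate = approximate_posterior_mean(heads=8, tails=2, n_samples=5000)
```

With only 5,000 samples the estimate typically lands within a percent or two of 0.75; the interesting knob is how gracefully accuracy degrades as you shrink `n_samples`.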

ozziegooen's Shortform

by ozziegooen · 31st Aug 2019 · 127 comments