I don't think that your typical prison inmate is a perfect Bayesian.
I rather think that, ideally, it should be adjusted so that overall utility is maximized (weighing the utility of prisoners equally with the utility of everyone else), which will differ vastly both from reality and from your model, assuming the above proposition.
I also find it funny when mathematicians pejoratively speak of "recreational mathematics" (problem solving) as opposed to theory building: "If I build a lego hat, that's just for fun, but if I build a lego Empire State Building, that's serious business!"
I don't want to conclude that playing the lottery might be rational, but I don't think it is self-evident that the right way to decide between different probability distributions of utility is to compare their expectation values. We are not living a large number of times; we are living once (and even if we did, the bare summed value would neglect justice).
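To make the point concrete, here is a toy sketch (all numbers made up) of two payoff distributions where ranking by expectation and ranking by "how often you come out ahead" disagree:

```python
# Two hypothetical gambles, expressed as {payoff: probability}.
sure_thing = {1.0: 1.0}                    # always pays 1
lottery = {0.0: 0.999, 2000.0: 0.001}      # almost always pays nothing

def expectation(dist):
    """Expected value: sum of payoff * probability."""
    return sum(payoff * prob for payoff, prob in dist.items())

def prob_at_least(dist, threshold):
    """Probability of receiving at least `threshold`."""
    return sum(prob for payoff, prob in dist.items() if payoff >= threshold)

# The lottery has the higher expectation...
print(expectation(sure_thing))     # 1.0
print(expectation(lottery))        # 2.0

# ...but if you only live once, you almost surely end up with nothing.
print(prob_at_least(sure_thing, 1.0))   # 1.0
print(prob_at_least(lottery, 1.0))      # 0.001
```

A decision rule that maximizes expectation prefers the lottery; a rule that maximizes the chance of doing at least as well as the sure thing prefers the certainty, which is exactly the tension when you only play once.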
(Not meant as a rhetorical question:) Does "mathematical analysis" really mean that someone with an IQ of 170 has, on average, a real advantage over someone with an IQ of 160 in solving really hard mathematical problems (if you don't count effects on information-processing ability and reaction time), or is it rather a combination of clicking fast, knowing how the monsters will react, and calculating through what will happen if you do X?
At least the first part could be said word for word about modern-day astrophysics, except that this is socially accepted and the people doing it are (in most cases) paid for it (and even those who see fundamental knowledge about the universe as a goal in itself will agree that there are far more important things to divert the workforce to).
Not "almost all are completely convinced": according to this poll, 61 supposed experts "thought P != NP" (which does not imply that they would bet their house on it), 9 thought the opposite, and 22 offered no opinion, so only about two-thirds of the 92 respondents took the majority view. The author writes that he asked "theorists", partly people he knew, but also partly by posting to mailing lists; I'm pretty sure he filtered out the crackpots and that enough of the rest really are people working in the area.
Even that case wouldn't increase the likelihood of P != NP to 1 - epsilon, as experts have been wrong in the past, and their greater confidence could stem from more reinforcement through groupthink, or from greater exposure to things they simply misunderstand, rather than from a better overview. Somewhere in Eliezer's posts, a study is referenced in which something happens in only 70% of the cases where an expert says he is 99% sure; in another referenced study, people raised their subjective confidence in something vastly more than they actually changed their minds as they got greater exposure to an issue, which means that an expert's confidence doesn't prove much more than the confidence of a non-expert with only light exposure to the issue.