I don't know of any sources beyond an allusion or two in my comment history, and I don't recommend digging for them. One point I think I've made before is that if you view statistics as a method of modeling, and thus approximating, our uncertainty, then Gelman's posterior predictive checks have limits, though they remain useful. If a posterior predictive check tells you some part of your model is wrong, but you otherwise have good reason to believe that part accurately represents your true uncertainty, it may still be best to leave that part alone.
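For concreteness, here is a minimal sketch of the mechanics of a posterior predictive check. Everything in it is hypothetical and chosen only for illustration: a normal model with known standard deviation and a flat prior on the mean, simulated "observed" data, and the sample maximum as the test statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data, just for illustration.
y = rng.normal(loc=1.0, scale=1.0, size=50)

# Toy conjugate model: known sigma, flat prior on mu, so the
# posterior for mu is Normal(mean(y), sigma^2 / n).
n, sigma = len(y), 1.0
post_mu = rng.normal(y.mean(), sigma / np.sqrt(n), size=4000)

# Posterior predictive replicates: one simulated dataset per draw.
y_rep = rng.normal(post_mu[:, None], sigma, size=(4000, n))

# Test statistic T: the sample maximum, which is sensitive to
# tail misfit that checking the mean alone would miss.
T_obs = y.max()
T_rep = y_rep.max(axis=1)

# Posterior predictive p-value: fraction of replicated statistics
# at least as extreme as the observed one. Values near 0 or 1
# flag an aspect of the data the model fails to reproduce.
p_value = (T_rep >= T_obs).mean()
print(f"posterior predictive p-value for max(y): {p_value:.3f}")
```

The point above is about interpreting the output: an extreme p-value says the model fails to reproduce that statistic, not that you must revise the model. If the offending part still reflects your actual uncertainty, the check's verdict can reasonably be overridden.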
Andrew Gelman recently linked a new article entitled "Induction and Deduction in Bayesian Data Analysis." At his blog, he also described some of the reviewers' comments and his rebuttal to them. It is interesting that he departs significantly from the common induction-based view of Bayesian approaches. As a practitioner myself, I am happiest about the discussion of model checking -- something one can certainly do in the Bayesian framework but which almost no one does. Model checking is to Bayesian data analysis as unit testing is to software engineering.
Added 03/11/12
Gelman has a new blog post today discussing another reaction to his paper and giving some additional details. Notably: