Funny enough, as a direct result of reading the sequences, I got super obsessed with Bayesian stats and that eventually resulted in writing PyMC3 (which is the software used in the book).

Can someone give me an example problem where this particular approach to AI and reasoning hits the ball out of the park? In my mind, it's difficult to justify a big investment in learning a new subfield without a clear use case where the approach is dramatically superior to other methods.

To be clear, I'm not looking for an example of where the Bayesian approach in general works, I'm looking for an example that justifies the particular strategy of scaling up Bayesian computation, past the point where most analysts would give up, by using MCMC-style inference.

(As an example, deep learning advocates can point to the success of DL on the ImageNet challenge to motivate interest in their approach).

There aren't many that I know of. I do think it's much more intuitive and lets you build more nuanced models, which are useful for the social sciences. You can fit the exact model you want instead of needing to fit your case into a preexisting box. However, I don't know of many examples where this is hugely important in practice.

The lack of obviously valuable use cases is part of why I stopped being that interested in MCMC, even though I invested a lot in it.

There is one important industrial application of MCMC: hyperparameter sampling in Bayesian optimization (Gaussian processes with priors on their hyperparameters). Sampling the hyperparameters, rather than point-estimating them, does substantially improve results.
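To make that application concrete, here is a minimal sketch (not from the book, just an illustration) of what "hyperparameter sampling" means: a hand-rolled random-walk Metropolis sampler drawing from the posterior over a Gaussian process lengthscale, using the GP marginal likelihood on toy data. In practice a library sampler would be used; the toy data, prior, and step size here are all arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data
X = np.linspace(0, 5, 20)
y = np.sin(X) + 0.1 * rng.standard_normal(20)

def log_marginal_likelihood(lengthscale, noise=0.1):
    """GP log marginal likelihood with an RBF kernel."""
    d = X[:, None] - X[None, :]
    K = np.exp(-0.5 * (d / lengthscale) ** 2) + noise**2 * np.eye(len(X))
    _, logdet = np.linalg.slogdet(K)
    alpha = np.linalg.solve(K, y)
    return -0.5 * y @ alpha - 0.5 * logdet - 0.5 * len(X) * np.log(2 * np.pi)

def log_posterior(lengthscale):
    if lengthscale <= 0:
        return -np.inf
    # Log-normal prior on the lengthscale (up to a constant)
    log_prior = -0.5 * np.log(lengthscale) ** 2 - np.log(lengthscale)
    return log_marginal_likelihood(lengthscale) + log_prior

# Random-walk Metropolis over the lengthscale
samples, current = [], 1.0
current_lp = log_posterior(current)
for _ in range(2000):
    proposal = current + 0.2 * rng.standard_normal()
    lp = log_posterior(proposal)
    if np.log(rng.uniform()) < lp - current_lp:
        current, current_lp = proposal, lp
    samples.append(current)

# Posterior mean of the lengthscale, after burn-in
print(np.mean(samples[500:]))
```

Downstream, the acquisition function is then averaged over these posterior samples instead of being evaluated at a single point estimate of the lengthscale, which is where the improvement comes from.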

The Bayesian method is the natural approach to inference, yet it is hidden from readers behind chapters of slow mathematical analysis. The typical text on Bayesian inference spends two to three chapters on probability theory before turning to what Bayesian inference actually is. Unfortunately, because most Bayesian models are mathematically intractable, the reader is only shown simple, artificial examples. This can leave the reader with a so-what feeling about Bayesian inference. In fact, this was the author's own prior opinion.

Bayesian Methods for Hackers is designed as an introduction to Bayesian inference from a computational, understanding-first, mathematics-second point of view. Of course, as an introductory book, we can only leave it at that: an introductory book. The mathematically trained can satisfy the curiosity this text generates with other texts designed with mathematical analysis in mind. For the enthusiast with less mathematical background, or one interested not in the mathematics but simply in the practice of Bayesian methods, this text should be sufficient and entertaining.
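In the spirit of that computation-first approach (this sketch is mine, not an excerpt from the book), here is about the smallest possible example: inferring a coin's bias with a hand-rolled Metropolis sampler, then checking the answer against the analytic conjugate posterior that a math-first text would derive instead.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed coin flips: 9 heads out of 12
heads, n = 9, 12

def log_posterior(p):
    # Uniform prior on p; Bernoulli likelihood (up to a constant)
    if not 0 < p < 1:
        return -np.inf
    return heads * np.log(p) + (n - heads) * np.log(1 - p)

# Random-walk Metropolis
samples, current = [], 0.5
current_lp = log_posterior(current)
for _ in range(20000):
    proposal = current + 0.1 * rng.standard_normal()
    lp = log_posterior(proposal)
    if np.log(rng.uniform()) < lp - current_lp:
        current, current_lp = proposal, lp
    samples.append(current)

est = np.mean(samples[2000:])          # MCMC posterior mean, after burn-in
exact = (heads + 1) / (n + 2)          # mean of the conjugate Beta(10, 4) posterior
print(est, exact)
```

The point of the computational approach is that the sampler above keeps working when you change the model to something with no conjugate closed form, whereas the `exact` line does not.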

Just started reading this text, and I currently find it very instructive for someone trying to get a handle on Bayesianism from a CS perspective.
