James Nightingale 01:35 PM
@Filippo What do you think about using nested sampling to investigate / diagnose curved distributions (or difficult-to-fit distributions in general)?

Filippo Pagani 01:37 PM
@James Nightingale: In my understanding, Nested Sampling estimates the Bayes Factor (right?), that is, the normalisation constant. I don't know exactly how Nested Sampling does that, but it's a very similar problem to sampling from the distribution with MCMC, and you can use the Hybrid Rosenbrock (or any other benchmark model) to make sure that Nested Sampling is not giving you garbage. In fact, it would be interesting to test which shapes break it, and how extreme they have to be (if you're interested in that sort of thing). I would need to play around with Nested Sampling to know more about how to use it for diagnosing "non-convergence".

Sarah Casura 01:39 PM
Also a question for Filippo: what are the drawbacks of the "smarter" MCMC algorithms you mentioned (i.e. why isn't everybody using them)? Are they slower / more difficult to use (in terms of coding, parameter choices, stopping criteria, etc.)?

Filippo Pagani 01:43 PM
First of all, the computational cost is higher, i.e. it takes longer to get the same number of samples. If the posterior is simple, you're better off using a simpler algorithm. Then yes, sometimes they are harder to tune. However, people are trying more and more to find automatic ways to tune them, so this is less and less of a problem. Good question!

Filippo Pagani 01:45 PM
What you generally want is to know (more or less) the shape of your posterior, so that you can pick the right algorithm for the job.

Filippo Pagani 03:46 PM
Quick question to Everyone: of those who use Nested Sampling, do you use the vanilla version, or do you use some variant of it? If so, which one? I’ve used PolyChord, which uses slice nested sampling.
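
For reference, a minimal sketch of the kind of benchmark check Filippo describes above: run a nested sampler on a known curved target and see whether the evidence estimate (and the samples) come out sensible. This assumes the dynesty package and uses a simple 2D Rosenbrock-style density as a stand-in for the full Hybrid Rosenbrock, not the exact benchmark model.

```python
# Sketch: testing a nested sampler on a Rosenbrock-type (curved) target.
# Assumes dynesty is installed; the 2D density below is a simple stand-in
# for the Hybrid Rosenbrock benchmark, not the exact model.
import numpy as np
import dynesty


def loglike(theta):
    """Unnormalised log-density of a 2D Rosenbrock-style 'banana'."""
    x, y = theta
    return -(x - 1.0) ** 2 - 100.0 * (y - x ** 2) ** 2


def prior_transform(u):
    """Map the unit cube to a flat prior on [-5, 5] x [-5, 5]."""
    return 10.0 * u - 5.0


sampler = dynesty.NestedSampler(loglike, prior_transform, ndim=2, nlive=500)
sampler.run_nested()
results = sampler.results

# logz is the running log-evidence (normalisation constant) estimate.
# Comparing the final value against an accurate numerical integral of the
# same density shows whether the sampler struggles with this curved shape.
print("log-evidence:", results.logz[-1], "+/-", results.logzerr[-1])
```

Making the curvature more extreme (e.g. increasing the 100.0 coefficient) and watching when the evidence estimate drifts from the reference value is one way to probe "what shapes break it".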