Thank you for the response! I’ll keep up the blockquote trend!
> I’d do that, especially because, with your current parametrization, sigma is transformed and then fed to the random walk, which is then retransformed and fed to the StudentT, so it’s really hard to see what Exponential(50) implies on the outcome scale and whether it’s justified – my guess is 50 is really gigantic, especially if your data go from -2.5 to 2.5.
That is very true! I tried decreasing it to 10 and then to 1 (still without much justification, just to see what happens). With 10, sampling is still considerably slow; with 1, progress reached around 10% in 15 minutes. However, in all these cases sampling slows down significantly over time: at around 10% it is considerably slower than at the start, where an increment of one percentage point took only a couple of seconds. Sampling is run with 4 chains.
I have yet to check whether the values are justified, though. I figured I would try varying the lambda parameter a bit before doing that. (I’m fairly new to the model, so I will have to do some research to see which values are reasonable.)
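One cheap way to see what a given lambda implies on the outcome scale is a prior predictive simulation in plain numpy. This is only a sketch under my reading of the tutorial's structure (an Exponential rate `lam` on sigma, a Gaussian random walk `s` driven by sigma, and `exp(s)` setting the StudentT scale) — the exact transforms in the real model may differ, so adjust accordingly:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 50.0      # PyMC's Exponential takes a rate; numpy wants scale = 1 / rate
n_steps = 400   # length of one simulated latent volatility path
n_draws = 1000  # number of prior draws

# Draw sigma ~ Exponential(rate=lam)
sigma = rng.exponential(scale=1.0 / lam, size=n_draws)

max_scales = np.empty(n_draws)
for i, sd in enumerate(sigma):
    # Latent log-volatility as a Gaussian random walk with step sd
    s = np.cumsum(rng.normal(0.0, sd, size=n_steps))
    # Largest implied return scale along the path
    max_scales[i] = np.exp(s).max()

# If these percentiles dwarf the observed return range (roughly -2.5 to 2.5),
# the prior puts substantial mass on implausibly volatile series.
print(np.percentile(max_scales, [50, 95, 99]))
```

Comparing the printed percentiles against the spread of the actual returns gives a rough sense of whether a given lambda is justified, without running MCMC at all.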
> Not necessarily: maybe you’re using the same model, but to model a different data generating process – are you using the same data and are interested in the same phenomenon as in the tutorial?
That is a good point! The tutorial investigates the same phenomenon. I did use slightly more data than the tutorial (AMZN from 2006-01-01 to recent), but even after changing it to the exact same data as the tutorial (AMZN from 2006-01-01 to 2015-12-31) it is pretty much equally slow. The data are also plotted before the MCMC algorithm starts, and when I compare that plot to the one in the book, they look identical.
I guess my biggest concern at this point is that my code is not using the C++ compiler correctly. If it is my computer specifications that are not up to par, then I will accept that fate, but it would really bother me if my code were using the Python implementation instead of the C++ compiler, making everything a lot slower without me even knowing. Is there any way of making sure that the C++ compiler is used, other than noticing that the warning has disappeared?
Just for reference, I will post the full code below along with my computer specifications (lambda happens to be 10.0 in this version), in case I have forgotten to include something important. The code is almost identical to that of the tutorial, apart from minor adjustments. The pandas library is giving me a warning that I understand and will fix, but I figured that at this stage it won’t have any impact as far as debugging is concerned.
Specifications: Windows 64-bit, Intel Core i7-3770K (3.50 GHz base, 3.90 GHz turbo), 16 GB RAM