GaussianRandomWalk for PyMC v5.6

I read the Gaussian Process smoothing example on the PyMC project website and found that it is based on PyMC version 3, so the GaussianRandomWalk API used in the example is not compatible with version 5. The code in the example is:

# Gaussian Process smoothing example
import pymc as pm
from pymc import GaussianRandomWalk
from pytensor import shared

LARGE_NUMBER = 1e5
model = pm.Model()
with model:
    smoothing_param = shared(0.9)
    mu = pm.Normal("mu", sigma=LARGE_NUMBER)
    tau = pm.Exponential("tau", 1.0 / LARGE_NUMBER)
    # y is the observed series to be smoothed
    z = GaussianRandomWalk("z", mu=mu, tau=tau / (1.0 - smoothing_param), shape=y.shape)
    obs = pm.Normal("obs", mu=z, tau=tau / smoothing_param, observed=y)

I got the following error message in PyMC v5.6:

RandomWalk.rv_op() got an unexpected keyword argument 'tau'

Then I modified the code for ‘z’ in the model like this:

z = GaussianRandomWalk("z", mu=mu, sigma=1.0/np.sqrt(tau / (1.0 - smoothing_param)), shape=y.shape)

Then the error message disappeared and I got the expected smoothing result.
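
For what it's worth, here is a small sanity check I wrote (my own snippet, not part of the gallery example) to convince myself that the two parameterizations agree, since tau is the precision and sigma = 1 / sqrt(tau):

# Sanity check (not from the gallery example): a Normal parameterized by
# precision tau should have the same log-density as one parameterized by
# sigma = 1 / sqrt(tau).
import numpy as np
import pymc as pm

tau_val = 4.0
value = 0.3
lp_tau = pm.logp(pm.Normal.dist(mu=0.0, tau=tau_val), value).eval()
lp_sigma = pm.logp(pm.Normal.dist(mu=0.0, sigma=1.0 / np.sqrt(tau_val)), value).eval()
assert np.isclose(lp_tau, lp_sigma)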

But I want to ask: is there an example gallery that is compatible with PyMC 5, or is it still encouraged to use PyMC3 for now?

Welcome!

You should definitely be using the most recent version of PyMC (5.8.0 at the time of this post). Unfortunately, some examples haven’t been updated yet. PRs updating examples are welcome!

Although you already solved your own problem, if you prefer the precision parameterization you can also build a random walk from any variable you like by applying cumsum() to a sequence of random variables. For example:

with pm.Model() as model:
    smoothing_param = shared(0.9)
    mu = pm.Normal("mu", sigma=LARGE_NUMBER)
    tau = pm.Exponential("tau", 1.0 / LARGE_NUMBER)
    # cumsum of the Normal innovations gives the random walk
    z = pm.Normal('z', mu=mu, tau=tau / (1 - smoothing_param), shape=y.shape).cumsum()
    obs = pm.Normal("obs", mu=z, tau=tau / smoothing_param, observed=y)

PyMC is able to automatically infer the logp of the distribution that results from applying the cumsum operation.

Thanks for your explanation. I will learn more about PyMC version 5 and try out the new features.

Technically speaking, there’s no logp inference there, just a deterministic transformation.
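
To make that concrete, here is a sketch of the same model (the z_innov name is my own, and it reuses the same imports, LARGE_NUMBER, and y as above) with the cumsum wrapped in pm.Deterministic so the random-walk values are also stored in the trace:

# Same setup as the earlier snippets (pm, shared, LARGE_NUMBER, y assumed defined).
with pm.Model() as model:
    smoothing_param = shared(0.9)
    mu = pm.Normal("mu", sigma=LARGE_NUMBER)
    tau = pm.Exponential("tau", 1.0 / LARGE_NUMBER)
    # The innovations are the only random variables that contribute a logp.
    z_innov = pm.Normal("z_innov", mu=mu, tau=tau / (1 - smoothing_param), shape=y.shape)
    # cumsum is an ordinary graph operation; Deterministic just records its value.
    z = pm.Deterministic("z", z_innov.cumsum())
    obs = pm.Normal("obs", mu=z, tau=tau / smoothing_param, observed=y)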

Oh ok. I thought I had seen it somewhere given as an example of a case where the new logp inference could be used. I guess I played myself.