Hello everyone,
I’ve been studying shared random streams so that I can include a variable in my model that is random but not updated by the sampler. This discussion makes sense to me, but I can’t seem to implement something similar in PyMC v5.
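For context, this is roughly the behaviour I’m after, a minimal sketch in plain PyTensor (outside any PyMC model): the stream is backed by a shared RNG variable, so every call to the compiled function advances the state and returns a fresh draw.

import pytensor
import pytensor.tensor as pt

srng = pt.random.utils.RandomStream(seed=123)
x = srng.normal(4.9, 0.1)  # symbolic draw backed by the stream's shared RNG

# RandomStream registers a default update on the shared RNG, so each call
# to the compiled function returns a new draw
f = pytensor.function([], x)
print(f(), f())

Here is my attempt at the same idea inside a PyMC model: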
import pymc as pm
import numpy as np
import pytensor.tensor as pt

srng = pt.random.utils.RandomStream(seed=123)

true_mu, true_sigma = 5.0, 2.0
y_obs = np.random.randn(50) * true_sigma + true_mu

with pm.Model() as m:
    # mu is a draw from the shared stream, wrapped in a Deterministic
    # so that it is random but not a sampled free parameter
    mu = pm.Deterministic('mu', srng.normal(4.9, 0.1))
    sigma = pm.HalfCauchy('sigma', 5.0)
    y = pm.Normal('y', mu, sigma, observed=y_obs)
    trace = pm.sample()
This gives me:
ValueError: Random variables detected in the logp graph: {normal_rv{0, (0, 0), floatX, False}.out}.
This can happen when DensityDist logp or Interval transform functions reference nonlocal variables,
or when not all rvs have a corresponding value variable.
Any ideas? Thank you very much!