Fixed variance of some parameters

I have a vector of model parameters that I am currently using deterministically within my pymc3 model's likelihood function (just a 1d numpy array, not a pymc3 RV): a single point estimate of the parameters. These parameters come from a separate model (in a published paper), and that model has a known variance. Is there a way to incorporate the variance associated with these parameters without making them RVs subject to inference by my observed data? If I make them ordinary pymc3 random variables, their variance is inferred from my observed data and overfits the model, instead of using the known variance of the parameters.

I’m sure I could manually scrape the source data from the paper and infer the parameter model in pymc3, but that seems like overkill. Alternatively, I could add a dimension to the numpy parameter array with samples drawn from normal distributions with the known variance, but that also seems like overkill, and a pain to track all of the shape changes.
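
For concreteness, that second workaround might look roughly like the sketch below (point_est_of_params, var_of_params, and n_draws are placeholder names and values); every downstream calculation would then have to carry the extra draw dimension:

import numpy as np

# placeholder point estimates and known variances from the paper
point_est_of_params = np.array([0.5, 1.2, -0.3])
var_of_params = np.array([0.01, 0.04, 0.02])

# stack of sampled parameter vectors: shape (n_draws, n_params)
n_draws = 500
param_draws = np.random.normal(
    loc=point_est_of_params,
    scale=np.sqrt(var_of_params),
    size=(n_draws, len(point_est_of_params)),
)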

Is there a better way to accomplish this?

Are you saying that

# n is the number of params
# point_est_of_params is some n-length array
# var_of_params is some n-length array
x = pm.Normal("x", mu=point_est_of_params, sigma=tt.sqrt(var_of_params), shape=n)

won’t do what you want?

Correct. That gives a prior on x, which is then subject to inference based on the observed data.

I do have an n-length array of params, x, with a shared variance, var_of_x. However, I’d like that variance to remain fixed, as opposed to being an inferred value. Something like:

# priors
theta = pm.Normal("theta", mu=0, sigma=100, shape=data_array.shape)
model_error = pm.ChiSquared("model_error", nu=5)

# likelihood
y_mu = pm.math.dot(theta, data_array) + x
y = pm.Normal("y", mu=y_mu, sigma=model_error)

except with some more complicated transformations going on in the likelihood function.
I’d like x in this situation to be a random variable. Because x is currently a point estimate, my model_error term ends up accounting for all of the variance, when in reality I know that x should have variance var_of_x, which would reduce model_error.

But var_of_x is not inferred in your model; it’s a fixed hyperparameter.

Take this model:

theta = pm.Normal("theta", mu=0, sigma=100, shape=data_array.shape)
model_error = pm.ChiSquared("model_error", nu=5)

x = pm.Normal("x", mu=point_est_of_x, sigma=tt.sqrt(var_of_x), shape=len(point_est_of_x))

y_mu = pm.math.dot(theta, data_array) + x
y = pm.Normal("y", mu=y_mu, sigma=model_error)

You seem to be saying that var_of_x is estimated in this model, but it isn’t. Honestly, I think this model does what you want.
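
For completeness, here is a self-contained toy version of that model (the shapes, the placeholder data, and y_obs are made up so the dot product works out); printing model.free_RVs shows that var_of_x is never sampled:

import numpy as np
import pymc3 as pm
import theano.tensor as tt

# fixed quantities (placeholder values standing in for the real ones)
point_est_of_x = np.array([0.5, 1.2, -0.3])   # point estimates from the paper
var_of_x = np.array([0.01, 0.04, 0.02])       # known variances from the paper
data_array = np.random.randn(4, 3)            # toy design: 4 predictors, 3 outputs
y_obs = np.random.randn(3)                    # placeholder observed data

with pm.Model() as model:
    # priors
    theta = pm.Normal("theta", mu=0, sigma=100, shape=data_array.shape[0])
    model_error = pm.ChiSquared("model_error", nu=5)

    # x is a random variable, but its spread is pinned to the known
    # variance; var_of_x is a constant, not something to be estimated
    x = pm.Normal("x", mu=point_est_of_x, sigma=tt.sqrt(var_of_x),
                  shape=len(point_est_of_x))

    # likelihood
    y_mu = pm.math.dot(theta, data_array) + x
    y = pm.Normal("y", mu=y_mu, sigma=model_error, observed=y_obs)

print(model.free_RVs)  # theta, model_error (log-transformed), x; var_of_x does not appear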