Passing a pymc3 distribution variable to a theano function

Hi,

I’m trying to use a pymc variable as an input to a theano function, so I can fit it as a free parameter in my model. For some context, the idea of the model is that subjects are trying to learn about the variance of a normal distribution and use that to make decisions about stimuli, and I’m trying to model the length of time using a multilevel regression model. The term I want to fit, the “attention scalar”, is (for now) a population-level parameter that determines how much subjects are biased to think there is more or less variance than there actually is. This is the code:
```python
import numpy as np
import pymc3 as pm
import theano
import theano.tensor as T

with pm.Model() as full_model:
    theano.config.compute_test_value = 'off'

    ## function for an update of the gamma distribution on a single observation
    def update_variance(observed_value, alpha, beta, bias_scalar):
        alpha += .5
        beta += ((observed_value**2) * (1/bias_scalar))/2
        #observed_variance = beta/alpha
        return alpha, beta

    ## initialize beta and alpha to 1
    beta_m = T.ones((292, 17))
    alpha_m = T.ones((292, 17))
    bias_scalar_m = pm.Normal('bias_scalar_m', 1, 1)

    observations = T.matrix('observations')

    # Symbolic description of the result
    result, updates = theano.scan(fn=update_variance,
                                  outputs_info=[alpha_m, beta_m],
                                  sequences=observations,
                                  non_sequences=bias_scalar_m)

    run_learner = theano.function(inputs=[observations, bias_scalar_m],
                                  outputs=result, updates=updates)

    alphas, betas = run_learner(lv_ct_array, bias_scalar_m)

    learned_errors = np.sqrt(betas[:, 1, :].transpose().flatten()
                             / alphas[:, 1, :].transpose().flatten())

    ## random effects
    int_mu = pm.Normal('int_mu', 1, 1)
    int_sd = pm.HalfCauchy('int_sd', 5)

    lvCT_mu = pm.Normal('lvCT_mu', 1, 1)
    lvCT_sd = pm.HalfCauchy('lvCT_sd', 5)

    intercept = pm.Normal('intercept', mu=int_mu, sd=int_sd, shape=subjs)

    lvCT = pm.Normal('lvCT', mu=lvCT_mu, sd=lvCT_sd, shape=subjs)

    # Model error
    epsilon = pm.HalfCauchy('epsilon', 1e2)

    # Expected value
    y_hat = (intercept[rt_regression_data.subjs_idx]
             + lvCT[rt_regression_data.subjs_idx]
             * (rt_regression_data.lvCT / learned_errors))

    alpha_est = y_hat**2 / epsilon
    beta_est = y_hat / epsilon

    y_like = pm.Gamma('y_like', alpha=alpha_est, beta=beta_est,
                      observed=rt_regression_data.rt.values)

    full_model_trace = pm.sample(500, chains=2, cores=1, n_init=1000,
                                 tune=1000, target_accept=.95)
```

When I pass the variable to the compiled function I get the following error:

```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
     31 run_learner = theano.function(inputs=[observations, attention_scalar_m], outputs=result, updates=updates)
     32
---> 33 alphas, betas = run_learner(lv_ct_array, attention_scalar_m)
     34
     35 learned_errors = np.sqrt(betas[:,1,:].transpose().flatten()/alphas[:,1,:].transpose().flatten())

C:\Python\lib\site-packages\theano\compile\function_module.py in __call__(self, *args, **kwargs)
    811                 s.storage[0] = s.type.filter(
    812                     arg, strict=s.strict,
--> 813                     allow_downcast=s.allow_downcast)
    814
    815             except Exception as e:

C:\Python\lib\site-packages\theano\tensor\type.py in filter(self, data, strict, allow_downcast)
     85         if isinstance(data, Variable):
     86             raise TypeError(
---> 87                 'Expected an array-like object, but found a Variable: '
     88                 'maybe you are trying to call a function on a (possibly '
     89                 'shared) variable instead of a numeric array?')

TypeError: Bad input argument with name "attention_scalar_m" to theano function with name ":31" at index 1 (0-based). Expected an array-like object, but found a Variable: maybe you are trying to call a function on a (possibly shared) variable instead of a numeric array?
```

Any suggestions on how to get this working, or how to restructure it so that it works, would be greatly appreciated! I’ve already tried initializing the `bias_scalar_m` variable as a matrix and as a vector, but to no avail.

Thank you in advance.

To fix the error, you can remove the theano.function: you don’t need to compile a function within a pymc3 model, since everything is represented as a tensor within the model context.
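
A minimal, untested sketch of what that looks like; the shapes and data objects (`lv_ct_array`, etc.) are taken from your code above, and the rest of the regression model would be built on top of `learned_errors` exactly as before:

```python
import numpy as np
import pymc3 as pm
import theano
import theano.tensor as tt

def update_variance(observed_value, alpha, beta, bias_scalar):
    # one update of the gamma parameters for a single observation
    alpha = alpha + 0.5
    beta = beta + (observed_value ** 2) * (1.0 / bias_scalar) / 2.0
    return alpha, beta

# wrap the observed data as a shared (constant) tensor
observations = theano.shared(np.asarray(lv_ct_array))

with pm.Model() as full_model:
    bias_scalar_m = pm.Normal('bias_scalar_m', 1, 1)

    alpha0 = tt.ones((292, 17))
    beta0 = tt.ones((292, 17))

    # the scan output stays symbolic; PyMC3 re-evaluates this graph at every
    # draw of bias_scalar_m, so no theano.function call is needed
    (alphas, betas), _ = theano.scan(fn=update_variance,
                                     sequences=observations,
                                     outputs_info=[alpha0, beta0],
                                     non_sequences=bias_scalar_m)

    # same transformation as in the original code, kept symbolic
    learned_errors = tt.sqrt(betas[:, 1, :].transpose().flatten()
                             / alphas[:, 1, :].transpose().flatten())

    # ...build intercept, lvCT, y_hat, and the Gamma likelihood on
    # learned_errors as in the original model...
```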

Now to the problem you are trying to solve: I’m not sure your current setup would work; it sounds like you want to do something like EM, or optimization within sampling. The following might be helpful:


When I take out the theano.function syntax, beta and alpha remain an array of ones and never get updated. Is there any way to fix this? I’m using this post as a model, and they don’t compile a function either, but it’s not obvious to me how to get these variables to update.

The problem here is that, unless a tensor is wrapped as a free RV in the pm.Model(), it will only be run once, at model setup, because we clone the nodes that are free_RVs and call them repeatedly. This means that anything you want updated at every sample/function call needs to be wrapped in a PyMC3 variable. One solution is to define beta and alpha as pm.Flat with an array of ones as the test value, and then wrap update_variance as an ArrayStep, as in the two links I shared above (a rough sketch of the pm.Flat part is below). It is a pretty advanced use case, so please take a look at the links :grin:
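
Not tested, but the pm.Flat part of that suggestion would look roughly like this (shapes taken from the code above; the ArrayStep itself is only indicated in the comments, since its implementation depends on the model):

```python
import numpy as np
import pymc3 as pm

with pm.Model() as full_model:
    bias_scalar_m = pm.Normal('bias_scalar_m', 1, 1)

    # declaring alpha and beta as Flat free RVs puts them in model.free_RVs,
    # so a custom step method can overwrite them at every iteration
    alpha_m = pm.Flat('alpha_m', shape=(292, 17), testval=np.ones((292, 17)))
    beta_m = pm.Flat('beta_m', shape=(292, 17), testval=np.ones((292, 17)))

    # a custom ArrayStep subclass (see pymc3.step_methods.arraystep) would
    # implement the deterministic update_variance recursion for alpha_m and
    # beta_m, and be passed to pm.sample(step=[...]) alongside the default
    # sampler for the remaining free parameters
```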