Stacking priors problem

Hi,

I have been struggling with this for a considerable amount of time now and haven’t had much success. In short, I’m calibrating the parameters of a bunch of surrogate models (linear and/or RSM).

I often use Uniform distributions for the priors, but there are cases where I can be more “precise” with the prior definition. I’ve tried various approaches, and the code below is the closest to what I need, except it only calibrates the last parameter (clearly the for loop doesn’t work as intended). Can you please advise on the best way to implement this? I’ve tried np.append, tt.stack and tt.concatenate but just can’t get it right.

with pm.Model():
    for i, pri in enumerate(pri_inputs):
        if pri[0] == 1:
            priors = pm.Uniform("priors_{}".format(i), lower=pri[1], upper=pri[2])
        elif pri[0] == 2:
            priors = pm.TruncatedNormal("priors_{}".format(i), mu=pri[1], sigma=pri[2], lower=pri[3], upper=pri[4])

    # Define the surrogate models for the outputs using the priors defined above
    linear = sm_linear * priors
    rsm = tt.sum(sm_rsm * priors * priors.T, axis=2)
    out_mu = sm_intercept + tt.sum(linear, axis=0) + tt.sum(rsm, axis=1)

    # Define loglikelihood as Multivariate Normal with defined covariance matrix and observations
    loglikelihood = pm.MvNormal(
        "loglikelihood", mu=out_mu, cov=true_cov, observed=measured
    )

    # Inference
    step = pm.NUTS()  # using NUTS sampling
    trace = pm.sample(3500, step=step, cores=1, progressbar=True)
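(To make the symptom concrete: what happens in the loop above is plain Python name rebinding, not a PyMC quirk. A minimal, PyMC-free sketch of the difference between rebinding one name and accumulating into a list:)

```python
# Rebinding a single name in a loop: only the last value survives,
# which is why only the last parameter ends up in the model expressions.
last_only = None
for v in [10, 20, 30]:
    last_only = v        # overwrites the previous binding every iteration

# Accumulating into a list keeps every value for later stacking.
collected = []
for v in [10, 20, 30]:
    collected.append(v)

print(last_only)   # 30
print(collected)   # [10, 20, 30]
```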

If I assume that all the priors are Uniform, I can replace the for loop with the line below and the model works perfectly fine. Any suggestions would be much appreciated.

    priors = pm.Uniform("priors", pri_inputs[:, 1], pri_inputs[:, 2], shape=len(pri_inputs))[:, None]
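(Side note on the trailing `[:, None]`: it turns the length-n vector into an (n, 1) column so that `priors * priors.T` forms the outer product the RSM term needs. A quick numpy illustration with hypothetical values:)

```python
import numpy as np

v = np.array([0.1, 0.2, 0.3])  # length-3 vector, like n sampled priors
col = v[:, None]               # shape (3, 1): a column vector
outer = col * col.T            # shape (3, 3): entry [i, j] = v[i] * v[j]
print(col.shape, outer.shape)  # (3, 1) (3, 3)
```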

Regards,
Pawel


I don’t like responding to my own messages, but I think I solved the problem described above. For those who may come across a similar issue, it is worth taking a look at @junpenglao’s post here. I also struggled a bit with theano and the shape parameter; there is a fantastic description by @lucianopaz here. Finally, the code that works for me is as follows:

with pm.Model():
    priors = []
    for i, pri in enumerate(pri_inputs):
        if pri[0] == 1:
            priors.append(pm.Uniform("priors_{}".format(i), lower=pri[1], upper=pri[2]))
        elif pri[0] == 2:
            priors.append(pm.TruncatedNormal("priors_{}".format(i), mu=pri[1], sigma=pri[2], lower=pri[3], upper=pri[4]))

    priors = tt.stack(priors)[:, None]

    # Define the surrogate models for the outputs using the priors defined above
    linear = sm_linear * priors
    rsm = tt.sum(sm_rsm * priors * priors.T, axis=2)
    out_mu = sm_intercept + tt.sum(linear, axis=0) + tt.sum(rsm, axis=1)

    # Define loglikelihood as Multivariate Normal with defined covariance matrix and observations
    loglikelihood = pm.MvNormal("loglikelihood", mu=out_mu, cov=true_cov, observed=measured)

    # Inference
    step = pm.NUTS()  # using NUTS sampling
    trace = pm.sample(draws=samples, step=step, cores=2, progressbar=True)
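As a sanity check on the broadcasting, here is a PyMC-free numpy version of the same mean computation, using hypothetical shapes inferred from the axis arguments (`sm_linear` as (n, m), `sm_rsm` as (m, n, n) for n parameters and m outputs), compared against an explicit loop:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4                            # hypothetical sizes: 3 parameters, 4 outputs
x = rng.normal(size=(n, 1))            # stand-in for the stacked priors, shape (n, 1)
sm_intercept = rng.normal(size=m)
sm_linear = rng.normal(size=(n, m))    # assumed (n, m) so sum(axis=0) gives one value per output
sm_rsm = rng.normal(size=(m, n, n))    # assumed (m, n, n) to match sum(axis=2) then sum(axis=1)

linear = sm_linear * x                 # (n, m)
rsm = np.sum(sm_rsm * x * x.T, axis=2) # (m, n): sum over j of Q[k, i, j] * x[i] * x[j]
out_mu = sm_intercept + np.sum(linear, axis=0) + np.sum(rsm, axis=1)  # (m,)

# Explicit loop: intercept + linear term + full quadratic form, per output k
check = np.array([
    sm_intercept[k]
    + sum(sm_linear[i, k] * x[i, 0] for i in range(n))
    + sum(sm_rsm[k, i, j] * x[i, 0] * x[j, 0] for i in range(n) for j in range(n))
    for k in range(m)
])
print(np.allclose(out_mu, check))  # True
```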