Error in weighted inference model

I have been reading this topic and the posts linked within, and I want to use the Potential function to scale the logp according to the uncertainty in the observed data.
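The idea I am after (my own toy illustration, not taken from the linked posts) is inverse-variance weighting: multiply each point's Gaussian log-density by w_i = 1/sigma_i**2 before summing, so noisier points contribute less to the total logp:

```python
import numpy as np

def normal_logp(value, mu, sd):
    """Elementwise log-density of Normal(mu, sd) at `value`."""
    return -0.5 * ((value - mu) / sd) ** 2 - np.log(sd) - 0.5 * np.log(2 * np.pi)

# toy data: three points with different measurement errors
f = np.array([0.21, 0.18, 0.25])        # observed values
mu = 0.2                                # model prediction at those points
f_error = np.array([0.01, 0.05, 0.02])  # per-point uncertainties

w = 1.0 / f_error**2                    # inverse-variance weights
weighted_logp = np.sum(w * normal_logp(f, mu, sd=1.0))
```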

I am trying to adapt this example:

import numpy as np
import pymc3 as pm
import matplotlib.pyplot as plt

# set random seed for reproducibility
np.random.seed(12345)

x = np.arange(5,400,10)*1e3

# Parameters for gaussian
amp_true    = 0.2
size_true   = 1.8
ps_true     = 0.1

#Gaussian function
gauss = lambda x,amp,size,ps: amp*np.exp(-1*(np.pi**2/(3600.*180.)*size*x)**2/(4.*np.log(2.)))+ps
f_true = gauss(x=x,amp=amp_true, size=size_true, ps=ps_true )

# add noise to the data points
noise = np.random.normal(size=len(x)) * .02
f = f_true + noise
f_error = np.ones_like(f_true) * 0.05 * f.max()

with pm.Model() as model3:
    amp = pm.Uniform('amp', 0.05, 0.4, testval=0.15)
    size = pm.Uniform('size', 0.5, 2.5, testval=1.0)
    ps = pm.Normal('ps', 0.13, 40, testval=0.15)

    gauss = pm.Deterministic('gauss',amp*np.exp(-1*(np.pi**2*size*x/(3600.*180.))**2/(4.*np.log(2.)))+ps)

    y = pm.Normal('y', mu=gauss, sd=f_error, observed=f)

    trace = pm.sample(2000)

# extract and plot results
y_min = np.percentile(trace.gauss,2.5,axis=0)
y_max = np.percentile(trace.gauss,97.5,axis=0)
y_fit = np.percentile(trace.gauss,50,axis=0)
plt.plot(x,f_true,'b', marker='None', ls='-', lw=1, label='True')
plt.errorbar(x,f,yerr=f_error, color='r', marker='.', ls='None', label='Observed')
plt.plot(x,y_fit,'k', marker='+', ls='None', ms=5, mew=1, label='Fit')
plt.fill_between(x, y_min, y_max, color='0.5', alpha=0.5)
plt.legend()
plt.show()

which results in

[Figure 1: true curve, observed points with error bars, median fit, and shaded 95% interval]

However, when I try to define a model using

# add noise to the data points
noise = np.random.normal(size=len(x)) * .02
f = f_true + noise
f_error = np.ones_like(f_true) * 0.05 * f.max()
w = 1.0 / f_error**2
y_sd = np.ones_like(f_true)

with pm.Model() as model3:
    amp = pm.Uniform('amp', 0.05, 0.4, testval=0.15)
    size = pm.Uniform('size', 0.5, 2.5, testval=1.0)
    ps = pm.Normal('ps', 0.13, 40, testval=0.15)

    gauss = pm.Deterministic('gauss',amp*np.exp(-1*(np.pi**2*size*x/(3600.*180.))**2/(4.*np.log(2.)))+ps)

    y_ = pm.Potential('Y_obs', w * pm.Normal('y', mu=gauss, sd=y_sd).logp(f))

    trace = pm.sample(2000)

I get this error:

Traceback (most recent call last):
File "/home/user/PycharmProjects/weighted_inference.py", line 46, in
y_ = Potential('Y_obs', w * Normal('y', mu=gauss, sd=y_sd).logp(f))

TypeError: For compute_test_value, one input test value does not have the requested type.
The error when converting the test value to that variable type:
Wrong number of dimensions: expected 0, got 1 with shape (40,).

I wonder if anyone could please tell me the right way to do it.

Every distribution you call in pymc3 (pm.Normal, or whichever pm.Distribution you are using) has a parameter called shape. You have to look for any variable that should have a shape of 40 and set it explicitly; in general, you have to be careful with shapes.

By the way, please use

import pymc3 as pm
# or
import pymc3

instead of

from pymc3 import *

Also, I am getting errors with the first piece of code you shared.

Thank you very much. I have corrected the code.

I fail to understand why the expected dimension is zero. In the reply by @junpenglao in this topic, the Potential function does not include a shape value. (Actually, if you combine the input question with the provided answer, it seems to me that none of the functions includes a shape attribute.)

The dimension 40 comes from the number of data points (and their weights):

# add noise to the data points
noise = np.random.normal(size=len(x)) * .02
f = f_true + noise
f_error = np.ones_like(f_true) * 0.05 * f.max()
w = 1.0 / f_error**2
y_sd = np.ones_like(f_true)

with pm.Model() as model3:
    amp = pm.Uniform('amp', 0.05, 0.4, testval=0.15)
    size = pm.Uniform('size', 0.5, 2.5, testval=1.0)
    ps = pm.Normal('ps', 0.13, 40, testval=0.15)

    gauss2 = pm.Deterministic('gauss',amp*np.exp(-1*(np.pi**2*size*x/(3600.*180.))**2/(4.*np.log(2.)))+ps)

    y_ = pm.Potential('Y_obs', w * pm.Normal.dist(mu=gauss2, sd=y_sd, shape=(40)).logp(f))

    trace3 = pm.sample(2000)

Compare the `y_` line I wrote with the one you had:

y_ = pm.Potential('Y_obs', w * pm.Normal('y', mu=gauss, sd=y_sd).logp(f))
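A quick numpy sanity check of what this weighting does (my own sketch, not pymc3 code): with `w = 1.0 / f_error**2` and `y_sd = 1`, the weighted logp contains the same mu-dependent chi-square term as a plain Normal likelihood with `sd=f_error`; the two differ only by terms that are constant in mu:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.RandomState(0)
f = rng.normal(0.2, 0.02, size=40)   # stand-in observations
mu = 0.2                             # one candidate model prediction
f_error = np.full(40, 0.05)
w = 1.0 / f_error**2

# the Potential term: weights times unit-sd Normal logp
potential = np.sum(w * norm.logpdf(f, loc=mu, scale=1.0))
# the usual heteroscedastic likelihood: Normal with sd = f_error
direct = np.sum(norm.logpdf(f, loc=mu, scale=f_error))

# both contain the same mu-dependent chi-square term
chi2 = np.sum(-0.5 * ((f - mu) / f_error) ** 2)
```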


Thank you very much! That works perfectly.

It is my fault; I missed the .dist method. What is the difference between `pm.Normal` and `pm.Normal.dist`?

https://docs.pymc.io/prob_dists.html
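In short, as I understand those docs: `pm.Normal('y', ...)` creates a named random variable and registers it with the current model (which is why its declared shape mattered here), while `pm.Normal.dist(...)` returns a stand-alone distribution object whose `logp` you can evaluate directly. Numerically, that logp is just the Gaussian log-density (the sketch below uses scipy rather than pymc3):

```python
import numpy as np
from scipy.stats import norm

f = np.array([0.1, 0.2, 0.3])
mu, sd = 0.2, 1.0

# pm.Normal.dist(mu=mu, sd=sd) builds a free-standing distribution object
# (no name, no model registration); .logp(f) evaluates its log-density at f.
# Numerically that is the ordinary Gaussian log-pdf:
logp = norm.logpdf(f, loc=mu, scale=sd)
```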


I get it. Thank you very much again!