Sample posterior predictive error

I tried to generate new datasets from the inferred parameters using pm.sample_posterior_predictive(), but it does not work.

My code is listed below:

#Load libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pymc3 as pm
import theano.tensor as tt

#generate a dataset for parameter estimation (used as observations in model set up)

mu = 1/3.4
x1 = np.random.exponential(size=100, scale = mu)

p0 = 0.4
x2 = np.random.binomial(size=100, n=1, p= p0)

X = pd.DataFrame({'x1':x1,'x2':x2})

#define a customized likelihood
N = X.shape[1]  #number of dimension of the random vector

W = [1/N] * N    #weight vector for each feature (each component in X)
				#this can be adjusted by the user; it will be set as an input of the
				#simulation program

discrete_status = np.array([False,True,]) #tell sampler which components are discrete
def my_density(theta, W):
	def logp(X):
		def log_expo(lam, x):
			return tt.sum(tt.log(lam) - lam * x)

		def log_bernoulli(p, x):
			return tt.sum(tt.switch(x, tt.log(p), tt.log(1 - p)))

		#weighted sum of the per-component log likelihoods
		LL = tt.stack([
			log_expo(theta[0], X[:, 0]),      #exponential component
			log_bernoulli(theta[1], X[:, 1]), #Bernoulli component
		])
		return tt.sum(np.asarray(W) * LL)
	return logp

#set up model

#hyperparameters for exponential distribution
hyper_expo_lower = 0.000001
hyper_expo_upper = 10
#hyperparameters for prior of bernoulli distribution
hyper_bern_lower = 0
hyper_bern_upper = 1

with pm.Model() as model:
	##set prior for parameters
	#for X0 
	r_0 =  pm.Uniform('r_0',lower = hyper_expo_lower, upper = hyper_expo_upper)
	#for X1
	p_1 =  pm.Uniform('p_1',lower = hyper_bern_lower, upper = hyper_bern_upper)
	##wrap parameters into the vector theta, which is used as one argument of the weighted likelihood
	theta = np.array([
				r_0, #for X0
				p_1, #for X1
	])
	##set likelihood
	joint_obs = pm.DensityDist('joint_obs',
							my_density(theta, W),
							observed={'X' : X}) #observations
#sampling parameters from the posterior
with model:    
	step = pm.Metropolis()
	step.discrete = discrete_status
	step.any_discrete = True
	step.all_discrete = False
	samples = pm.sample(1000, step=step,cores=1,tune=500)

#look at results of parameter estimation

#posterior predictive sampling. Got Error here!!
ppc = pm.sample_posterior_predictive(samples, model=model, samples=100, size=10)

The last step gives error message:

ValueError: Distribution was not passed any random method Define a custom random method and pass it as kwarg random

Many thanks for your answer.

The issue is that you are using DensityDist with a custom logp but no random method. Every built-in probability distribution in PyMC3 has both a logp method and a random method, and posterior predictive sampling needs the latter. Since only a logp was given, your custom distribution doesn't know how to generate samples. You'll have to supply DensityDist with a random kwarg: a callable that draws samples from your distribution given its parameter values.
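For this particular model, the random callable can simply mirror the logp: given one posterior draw of (r_0, p_1), generate exponential and Bernoulli samples. Here is a minimal numpy-only sketch; the function name joint_random and the exact keys in point are assumptions, following the random(point=None, size=None) convention PyMC3 uses for custom random methods:

```python
import numpy as np

def joint_random(point=None, size=None):
    """Draw a new (x1, x2) dataset from one posterior draw of the parameters.

    `point` is assumed to be a dict like {'r_0': ..., 'p_1': ...},
    mirroring how PyMC3 passes a posterior draw to a custom random method.
    """
    lam = point['r_0']   # rate of the exponential component
    p = point['p_1']     # success probability of the Bernoulli component
    n = size if size is not None else 100
    x1 = np.random.exponential(scale=1.0 / lam, size=n)  # exponential draws
    x2 = np.random.binomial(n=1, p=p, size=n)            # Bernoulli draws
    return np.column_stack([x1, x2])

# example: one hypothetical "posterior draw"
draw = {'r_0': 3.4, 'p_1': 0.4}
new_data = joint_random(point=draw, size=10)  # shape (10, 2)
```

A function of this shape would then be passed to DensityDist via its random keyword argument alongside the custom logp.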

Thank you very much. Could you please also tell me where I can find some good examples of how to implement random() for a customized likelihood, especially for the weighted likelihood I use here? This likelihood is a weighted sum of the log likelihoods of an Exponential and a Bernoulli distribution.

Sorry for the slow response. Honestly, I think the best place to look is the PyMC3 source code, for example here. Take a look at the draw_values and generate_samples functions; the actual RV sampling is done with scipy here.
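As a stopgap, posterior predictive datasets for this model can also be generated manually from the trace, without wiring up DensityDist.random at all: loop over the posterior draws of r_0 and p_1 and simulate a dataset for each. A sketch using scipy.stats, with the posterior draws faked as fixed arrays (in practice they would come from trace['r_0'] and trace['p_1']):

```python
import numpy as np
from scipy import stats

# stand-ins for trace['r_0'] and trace['p_1'] (hypothetical draws)
r_0_draws = np.array([3.0, 3.4, 4.0])
p_1_draws = np.array([0.35, 0.40, 0.45])

def manual_ppc(r_0_draws, p_1_draws, n_obs=100):
    """Simulate one (n_obs, 2) dataset per posterior draw of (r_0, p_1)."""
    datasets = []
    for lam, p in zip(r_0_draws, p_1_draws):
        x1 = stats.expon.rvs(scale=1.0 / lam, size=n_obs)  # exponential component
        x2 = stats.bernoulli.rvs(p, size=n_obs)            # Bernoulli component
        datasets.append(np.column_stack([x1, x2]))
    return np.stack(datasets)

ppc = manual_ppc(r_0_draws, p_1_draws, n_obs=10)  # shape (3, 10, 2)
```

This bypasses sample_posterior_predictive entirely, which is sometimes the simplest option when the custom distribution is easy to simulate from directly.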
