pm.traceplot() error

I built a model using a custom likelihood, including both continuous and discrete RVs, and sampled from the posterior successfully with the following code:

with model:
    step = pm.Metropolis()
    step.discrete = discrete_status
    step.any_discrete = True
    step.all_discrete = False
    samples = pm.sample(1000, step=step, cores=1, tune=500)

I inspected the results using pm.summary(), and they look normal.

However, when I run the following code to plot the parameters’ posterior distributions:

pm.traceplot(samples)

it returns the following error message:

AttributeError: 'MultiObservedRV' object has no attribute 'observations'

Please help! Many thanks

Hi, this is probably a bug in the ArviZ library. Can you create an issue on GitHub so we remember to fix it?

P.S. If needed, we can create the issue for you.

Thank you for the response. Sure, please feel free to create the issue. Let me know if you need any further information from me.

A reproducible example would be useful – I don’t know an easy way of getting a MultiObservedRV off the top of my head! I guess you are feeding observations into more than one spot?

Please see my model definition:

import pandas as pd               # to work with the dataset
import numpy as np                # math library
import seaborn as sns             # graph library that uses matplotlib in the background
import matplotlib.pyplot as plt   # to plot some parameters
import pymc3 as pm

import theano.tensor as tt

# load data
X = pd.read_csv('/my/data/file/path')
X_values = X.to_numpy()


def my_density(theta, W):

    def logp(X):
        # X is the data containing the first 23 features

        def log_expo(lam, x):
            return tt.sum(tt.log(lam) - lam * x)

        def log_bernoulli(p, x):
            return tt.sum(tt.switch(x, tt.log(p), tt.log(1 - p)))

        # stack the per-column log-likelihood terms symbolically
        # (tt.stack/tt.dot instead of np.array/np.dot, since these are tensors)
        LL = tt.stack([
            log_expo(theta[0], X[:, 0]),
            log_bernoulli(theta[1], X[:, 1]),
        ])

        return tt.dot(W, LL)
    return logp


# hyperparameters for exponential distribution
hyper_expo_lower = 0.000001
hyper_expo_upper = 10

# hyperparameters for prior of bernoulli distribution: X3 - X23
hyper_bern_lower = 0
hyper_bern_upper = 1


with pm.Model() as model:
    ## set priors for parameters
    # for X0
    r_0 = pm.Uniform('r_0', lower=hyper_expo_lower, upper=hyper_expo_upper)
    # for X1
    p_1 = pm.Uniform('p_1', lower=hyper_bern_lower, upper=hyper_bern_upper)

    theta = np.array([
        r_0,  # for X0
        p_1,  # for X1
    ])

    ## set likelihood (this must stay inside the model context)
    joint_obs = pm.DensityDist('joint_obs',
                               my_density(theta,  # parameters
                                          W),     # weight vector (values known)
                               observed={'X': X_values})  # samples for X0 - X1

The above model contains 2 random variables (X has 2 columns). X is the data matrix: each column is a random variable and each row is a sample.

my_density() is the custom likelihood. It accepts 2 arguments: the first, theta, contains the parameters for the two dimensions.

The second argument, W, is a weight vector used in the likelihood definition; its values are known.

The data is input as: ‘X’ : X
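To make the construction concrete, here is a plain NumPy sketch of the same weighted log-likelihood (theta, W, and the simulated X below are all made-up values for illustration; the model itself uses theano.tensor so the expression stays symbolic):

```python
import numpy as np

# Made-up stand-ins: theta = (Exponential rate for X0, Bernoulli p for X1),
# W weights the two per-column log-likelihood terms, X is the data matrix.
theta = np.array([2.0, 0.3])
W = np.array([1.0, 1.0])
rng = np.random.default_rng(0)
X = np.column_stack([rng.exponential(1 / theta[0], size=50),
                     rng.binomial(1, theta[1], size=50)])

def log_expo(lam, x):
    # sum_i log( lam * exp(-lam * x_i) ) = sum_i ( log(lam) - lam * x_i )
    return np.sum(np.log(lam) - lam * x)

def log_bernoulli(p, x):
    # sum_i [ x_i * log(p) + (1 - x_i) * log(1 - p) ]
    return np.sum(np.where(x.astype(bool), np.log(p), np.log(1 - p)))

# one log-likelihood term per column, then a weighted sum
LL = np.array([log_expo(theta[0], X[:, 0]),
               log_bernoulli(theta[1], X[:, 1])])

total = np.dot(W, LL)  # the scalar value logp(X) returns
```

The Bernoulli term is always non-positive; the Exponential term can take either sign depending on lam, so only the weighted sum is meaningful as a log-likelihood.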

Let me know if you find any problem.

I think any DensityDist would give this error. It is a tricky one, because you can feed non-observed values to DensityDist.

Is there any way to solve the problem?
Can you tell me how I can get these samples as a dataframe (each column a parameter, each row a sample from the joint posterior)? If I can get them, I can plot them with other Python visualization tools.

You can use pm.trace_to_dataframe for that.
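A minimal sketch of the layout pm.trace_to_dataframe(samples) produces (the draw arrays below are made-up stand-ins for the real trace values, since we don't have the trace here):

```python
import numpy as np
import pandas as pd

# Made-up draws standing in for samples['r_0'] and samples['p_1'].
rng = np.random.default_rng(0)
df = pd.DataFrame({
    'r_0': rng.uniform(1e-6, 10, size=1000),  # one column per parameter
    'p_1': rng.uniform(0, 1, size=1000),      # one row per posterior draw
})

# From here any plotting tool works, e.g. df.hist() or seaborn.pairplot(df).
```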

I made a PR in ArviZ. What is still missing is a minimal reproducible example that we could test against.

That said, there are some other corner cases for DensityDist that will probably fail in the future.