What is the suggested way to use a LogStudentT variable, a la LogNormal? Is there a more straightforward way than implementing a new Distribution?
The use-case is the following. I’d like to do model comparison between a StudentT and a LogStudentT using the “loo” criterion. The data I’m working with has some outliers, hence the robust distribution.
Thanks for any help or suggestions.
PS. I really like pymc. Thanks for maintaining and developing it.
I'm curious @ricardoV94, in the docs you linked for CustomDist it says:
In some cases, if a user provides a random function that returns a PyTensor graph, PyMC will be able to automatically derive the appropriate logp graph when performing MCMC sampling.
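To check my reading of that sentence, here is a minimal sketch of what I think it describes, with the graph-returning callable passed through the dist keyword instead of random (the keyword name and which versions support it are my assumption, not something confirmed above):

import numpy as np
import pymc as pm

def log_studentt_dist(nu, mu, sigma, size):
    # Build a PyTensor graph from an existing distribution; PyMC should then
    # be able to derive the logp of the exp-transformed variable automatically.
    return pm.math.exp(pm.StudentT.dist(nu, mu=mu, sigma=sigma, size=size))

with pm.Model():
    mu = pm.Normal("mu", mu=0, sigma=10)
    # hypothetical positive data, since a log-StudentT likelihood needs y > 0
    y = np.exp(np.random.standard_t(10, size=15))
    pm.CustomDist("log_studentt", 10, mu, 1.0, dist=log_studentt_dist, observed=y)
    idata = pm.sample()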
Thanks, I really appreciate the proposed solution and comments!
When I run the following code:
import arviz as az
import numpy as np
import pymc as pm

def log_studentt(nu, mu, sigma, size):
    # Exponentiate a StudentT graph to get a log-StudentT random variable
    return pm.math.exp(pm.StudentT.dist(nu, mu=mu, sigma=sigma, size=size))

data = np.random.standard_t(10, size=15)
data = data + 1.0

with pm.Model() as m:
    nu = 10
    mu = pm.Normal("mu", mu=0, sigma=10)
    sigma = 1
    pm.CustomDist("log_studentt", nu, mu, sigma, random=log_studentt, observed=data)
    idata = pm.sample(4000, tune=2000)

az.plot_trace(idata)
I get the error:
NotImplementedError: Attempted to run logp on the CustomDist 'log_studentt', but this method had not been provided when the distribution was constructed. Please re-build your model and provide a callable to 'log_studentt's logp keyword argument.
I’m running PyMC v5.0.2.
Does that mean it didn’t infer a logp graph, and/or did I make a mistake?
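In case the automatic derivation is not available in my version, here is a minimal sketch of the explicit-logp fallback the error message points at; the change-of-variables term and the logp(value, *params) calling convention are my own assumptions, not something stated in the thread:

import numpy as np
import pymc as pm
import pytensor.tensor as pt

def log_studentt_logp(value, nu, mu, sigma):
    # If Y = exp(X) with X ~ StudentT(nu, mu, sigma), then by change of variables
    # log p_Y(y) = log p_X(log y) - log y
    return pm.logp(pm.StudentT.dist(nu, mu=mu, sigma=sigma), pt.log(value)) - pt.log(value)

# hypothetical positive data; a log-StudentT likelihood needs y > 0
y = np.exp(np.random.standard_t(10, size=15) + 1.0)

with pm.Model():
    mu = pm.Normal("mu", mu=0, sigma=10)
    pm.CustomDist("log_studentt", 10, mu, 1.0, logp=log_studentt_logp, observed=y)
    idata = pm.sample()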