More information (what you have tried, etc.) would help. In general, as a start you want to wrap the logp in a `pm.Potential`.
Thank you for your reply. I have implemented the logp function, but since this distribution is defined on the unit sphere rather than in Cartesian coordinates, I don't know how to handle that constraint when estimating the latent variables with MCMC. Here is my code.
```python
import numpy as np
import pymc3 as pm
import theano.tensor as tt
from scipy.special import iv


def bessel(v, kappa):
    # Modified Bessel function of the first kind, I_v(kappa).
    return iv(v, kappa)


def _get_vmf_log_likelihood_term(value, mu, kappa):
    # Work in log space directly: log exp(kappa * mu . value) = kappa * mu . value,
    # which avoids overflow in exp for large kappa.
    return kappa * tt.dot(mu, value)


def _get_vmf_normalization_numerator(p, kappa):
    return kappa ** (0.5 * p - 1)


def _get_vmf_normalization_denom(p, kappa):
    return (2 * np.pi) ** (0.5 * p) * bessel(0.5 * p - 1, kappa)


def vmf_log_pdf(value, mu, kappa, C):
    # log f(value) = kappa * mu . value + log C_p(kappa), with C the dimension p.
    log_likelihood = _get_vmf_log_likelihood_term(value, mu, kappa)
    normalization_numerator = _get_vmf_normalization_numerator(C, kappa)
    normalization_denominator = _get_vmf_normalization_denom(C, kappa)
    return log_likelihood + tt.log(normalization_numerator) - tt.log(normalization_denominator)


class vMF(pm.Continuous):
    def __init__(self, mu, kappa, C, *args, **kwargs):
        shape = np.atleast_1d(mu.shape)[-1]
        kwargs.setdefault("shape", shape)
        super(vMF, self).__init__(*args, **kwargs)
        self.mu_arr = mu
        self.mu = mu = tt.as_tensor_variable(mu)
        self.kappa = kappa
        self.C = C

    def logp(self, value):
        value = tt.as_tensor_variable(value)
        mu = self.mu
        kappa = self.kappa
        C = self.C
        return vmf_log_pdf(value, mu, kappa, C)
```
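As a sanity check on the normalizing constant (independent of PyMC3, using SciPy only): for p = 3 the vMF normalizer kappa^(p/2 - 1) / ((2*pi)^(p/2) * I_{p/2-1}(kappa)) reduces to the closed form kappa / (4*pi*sinh(kappa)), via I_{1/2}(kappa) = sqrt(2 / (pi*kappa)) * sinh(kappa). That can be verified numerically:

```python
import numpy as np
from scipy.special import iv


def vmf_normalizer(p, kappa):
    # C_p(kappa) = kappa^(p/2 - 1) / ((2*pi)^(p/2) * I_{p/2 - 1}(kappa))
    return kappa ** (0.5 * p - 1) / ((2 * np.pi) ** (0.5 * p) * iv(0.5 * p - 1, kappa))


kappa = 2.5
c3 = vmf_normalizer(3, kappa)
closed_form = kappa / (4 * np.pi * np.sinh(kappa))
print(c3, closed_form)  # the two values should agree
```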
You can try passing a circular transformation to the class `__init__`, similar to how the VonMises distribution does it.