Yes, in your example above, you could just rewrite the model with a Theano logp function:
import pymc3 as pm
import theano.tensor as tt

def loglike(alpha, N, M_min, M_max):
    def logfunc(value):
        # Sum of log-masses: tt.mean(...) * N equals the sum over the N observations
        D = tt.mean(tt.log(value)) * N
        # Normalization constant of the truncated power law on [M_min, M_max]
        c = (1.0 - alpha) / (tt.pow(M_max, 1.0 - alpha)
                             - tt.pow(M_min, 1.0 - alpha))
        return N * tt.log(c) - alpha * D
    return logfunc

with pm.Model() as basic_model:
    # Priors for unknown model parameters
    alpha2 = pm.Normal('alpha2', mu=3, sd=10)
    N2 = tt.constant(1000000)
    M_min2 = tt.constant(1.0)
    M_max2 = tt.constant(100.0)

    # Likelihood (sampling distribution) of observations
    # (Masses is the observed array of masses from your original example)
    Y_obs = pm.DensityDist('Y_obs', loglike(alpha2, N2, M_min2, M_max2),
                           observed=Masses)

    trace = pm.sample(1000)
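For reference, the DensityDist above implements the log-likelihood of N independent draws from a truncated power law p(M) = c M^{-\alpha} on [M_min, M_max]:

\[
\log L = N \log c - \alpha \sum_{i=1}^{N} \log M_i,
\qquad
c = \frac{1 - \alpha}{M_{\max}^{1-\alpha} - M_{\min}^{1-\alpha}},
\]

where the sum of log-masses is computed in the code as tt.mean(tt.log(value)) * N.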
As for cases where the likelihood function is not available, there are not yet good solutions. You can modify the SMC sampler, as discussed in this GitHub issue.
In general, these kinds of problems are better handled with a likelihood-free inference approach such as approximate Bayesian computation (ABC). There are some libraries that allow you to do that (e.g., ELFI, the Engine for Likelihood-Free Inference: https://github.com/elfi-dev/elfi), and we will likely implement something similar in the near future.
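To make the ABC idea concrete, here is a minimal sketch of rejection ABC in plain NumPy (this is not ELFI's API); the simulator, summary statistic, and tolerance eps are all illustrative assumptions for the power-law example above:

import numpy as np

def simulate(alpha, size=1000, m_min=1.0, m_max=100.0, rng=None):
    # Hypothetical simulator: inverse-CDF sampling from the
    # truncated power law p(M) proportional to M**(-alpha)
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=size)
    a = 1.0 - alpha
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

def summary(data):
    # Illustrative summary statistic: mean log-mass
    return np.mean(np.log(data))

def rejection_abc(observed, n_samples=1000, eps=0.01, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    s_obs = summary(observed)
    accepted = []
    while len(accepted) < n_samples:
        alpha = rng.normal(3.0, 10.0)   # draw from the prior
        if alpha <= 1.0:                # avoid the alpha == 1 singularity,
            continue                    # keeping alpha > 1 as in the example
        s_sim = summary(simulate(alpha, rng=rng))
        if abs(s_sim - s_obs) < eps:    # keep draws whose simulated summary
            accepted.append(alpha)      # is close to the observed one
    return np.array(accepted)

Note that only the simulator is called; the likelihood is never evaluated, which is exactly what makes ABC applicable when no logp is available.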
I would say no, as you cannot evaluate the likelihood, which is essential for the MH acceptance step.
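To see why, recall the Metropolis-Hastings acceptance probability for a proposed move from \theta to \theta':

\[
a(\theta \to \theta') = \min\left(1,\;
\frac{p(y \mid \theta')\, p(\theta')\, q(\theta \mid \theta')}
     {p(y \mid \theta)\, p(\theta)\, q(\theta' \mid \theta)}\right).
\]

The ratio contains the likelihood p(y | \theta) explicitly, so without a way to evaluate it (or at least estimate it unbiasedly, as in pseudo-marginal methods), standard MH cannot proceed.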