Meaning of model.logp, a beginner question

I simply want to see the likelihood of an observed binomial variable of some length, where the binomial probability p_i for the different elements of the multi-element binomial is given by some function.
(I want to infer lambda1, given a certain “counts_total” vector.)

import numpy as np
import pymc3 as pm

counts_total=np.array([110,  75,  77,  63,  62])
print('shape counts_total: {}'.format(counts_total.shape))

with pm.Model() as model:
    lambda1 = pm.Normal('lambda1',10,10)
    fitted_ = np.arange(0, 5)
    # p is meant to be some function of lambda1 (that function is not shown here)
    b = pm.Binomial('b', n=1000, p=p, shape=len(fitted_), observed=counts_total[:5])

This gives me the error:
Missing required input: lambda1

Looks as if lambda1 was not initialized?

Not sure what you want to do here, but in short: you cannot call b.logp() directly, because b is not correctly conditioned on the variables it depends on (here, lambda1 has no value to condition on). If you want to check the model logp you can try model.logp(model.test_point).

thank you for the answer. The model logp then is the likelihood of the observed variable at the test values, right?

nope, it’s the log of the joint probability of all your free parameters together with the observed values, evaluated at the point you pass in.
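In symbols (θ for the free parameters, y for the observed data; this is a standard decomposition, not anything PyMC3-specific), that joint log-probability is the log prior plus the log likelihood, i.e. the unnormalized log posterior:

```latex
\log p(\theta, y) \;=\; \log p(\theta) + \log p(y \mid \theta)
                  \;=\; \log p(\theta \mid y) + \log p(y)
```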

Hm… so, this would be the whole numerator of the RHS of Bayes’ theorem, right? Essentially, the number that is compared between steps to decide whether to keep or drop a proposal… (sorry, beginner, and I only have this basic idea).

EDIT: In case some other newbie ever wonders: I tested a bit (on pymc2.x.x) with a small example and found that the model logp seems to be the sum of the logp of the prior variables (though without possible intermediate/latent stochastic variables) and of the observed variable, all evaluated at their current values.
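That “sum of logp’s” claim can be checked by hand with scipy on a hypothetical small model (not the exact one from this thread): a Normal prior plus a Binomial observed variable, with p fixed to a made-up value for the check:

```python
import numpy as np
from scipy import stats

# Hypothetical small model: lambda1 ~ Normal(10, 10); b ~ Binomial(1000, p), observed.
counts = np.array([110, 75, 77, 63, 62])
lambda1_val = 10.0   # current value of the free variable (analogous to model.test_point)
p = 0.08             # some fixed success probability for this check

log_prior = stats.norm(10, 10).logpdf(lambda1_val)        # logp of the prior variable
log_observed = stats.binom(1000, p).logpmf(counts).sum()  # logp of the observed variable

# The model logp at this point should be the sum of the two:
model_logp = log_prior + log_observed
print(model_logp)
```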

However, I cannot yet get my head around how latent variables are treated formally when arguing why/how e.g. Metropolis converges to the right posterior (i.e. why the latents’ logp’s are not included…). I understand that’s a purely theoretical question, but if anyone could point me to an explanation I would appreciate it.