Sampling posterior predictive from shifted Lognormal

Dear all,

I have implemented a shifted Lognormal:

class ShiftedLognormal(pymc3.Lognormal):
    def __init__(self, mu, sigma, shift, tau=None, sd=None, *args, **kwargs):
        transform = pymc3.distributions.transforms.lowerbound(shift)
        super().__init__(mu=mu, sd=sigma, tau=tau, *args, **kwargs,
                         transform=transform)
        self.shift = shift
        self.mean += shift
        self.mode += shift

    def random(self):
        return super().random() + self.shift

    def logp(self, x):
        return super().logp(x - self.shift)

This follows a suggestion found here: https://github.com/pymc-devs/pymc3/issues/864

It seems to work well. However, when I call sample_posterior_predictive, I get the error:

random() got an unexpected keyword argument 'point'

I also tried using a non-shifted Lognormal and adding a shift variable to the data. This works, too, but gives another error when sampling the PP.

Are there perhaps other methods I need to override in the ShiftedLognormal shown above? (And how?)

Any help or ideas would be very welcome.

Many thanks

Hmm, when I add point and size to random, it does sample from the PP. But instead of values it returns an array of unevaluated Theano tensors, see below:

class ShiftedLognormal(pymc3.Lognormal):
    def __init__(self, mu, sigma, shift, tau=None, sd=None, *args, **kwargs):
        transform = pymc3.distributions.transforms.lowerbound(shift)
        super().__init__(mu=mu, sd=sigma, tau=tau, *args, **kwargs,
                         transform=transform)
        self.shift = shift
        self.mean += shift
        self.mode += shift

    def random(self, point, size):
        return super().random(point=point, size=size) + self.shift

    def logp(self, x):
        return super().logp(x - self.shift)

Sampling the PP leads to:

{'y_obs': array([Elemwise{add,no_inplace}.0, Elemwise{add,no_inplace}.0,
                 Elemwise{add,no_inplace}.0, Elemwise{add,no_inplace}.0,
                 Elemwise{add,no_inplace}.0, Elemwise{add,no_inplace}.0,
                 ...

What am I doing wrong?

You need draw_values to fetch concrete values instead of tensors from the graph: https://github.com/pymc-devs/pymc3/blob/master/pymc3/distributions/distribution.py#L524
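Roughly like this (a sketch, not tested; I am assuming the PyMC3 3.x signature draw_values(params, point=None, size=None), older releases may only accept point):

from pymc3.distributions.distribution import draw_values

class ShiftedLognormal(pymc3.Lognormal):
    # __init__ and logp as before

    def random(self, point=None, size=None):
        # draw concrete values for the shift (it may itself be a model
        # variable) so that we add numbers, not a symbolic tensor
        shift = draw_values([self.shift], point=point, size=size)[0]
        return super().random(point=point, size=size) + shift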

Thanks for the quick reply!

I don’t see the draw_values method. Was it added recently? (currently running PyMC3 3.6)

Edit: OK, found it. I just saw that the function is not a member of the Distribution class as I thought …

This works (almost)

It works, but the PP has shape (N_SAMPLE, 1, N_OBS) … Where is the additional dimension in the middle coming from?
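For now I work around it by dropping the singleton axis with numpy (ppc here is the dict returned by sample_posterior_predictive and 'y_obs' my observed variable):

import numpy as np

# (N_SAMPLE, 1, N_OBS) -> (N_SAMPLE, N_OBS)
pp = np.squeeze(ppc['y_obs'], axis=1)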

Thanks & regards

I am not convinced that a new class is necessary for your use case: shifting does not change the logp, so you might consider doing the following in your model instead:

with pm.Model() as m:
    ...
    x = pm.Lognormal(...)
    x_shift = x + shift
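If you also want the shifted quantity recorded in the trace, you can wrap it in a Deterministic. A minimal sketch (the prior and parameters are just placeholders):

import pymc3 as pm

with pm.Model() as m:
    shift = pm.HalfNormal('shift', sd=1.0)            # placeholder prior
    x = pm.Lognormal('x', mu=0.0, sd=1.0)             # placeholder parameters
    x_shift = pm.Deterministic('x_shift', x + shift)  # stored in the trace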

I need it as a likelihood distribution. I did try something like what you suggest:

with pm.Model() as m:
    ...
    shift = pm......
    x = pm.Lognormal('y_obs', ..., observed=data + shift)

But this leads to the following error when sampling the PP:

Cannot resolve inputs for ['y_obs']

I see; in this case I think you need to do data - shift (which should work).
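i.e. something along these lines (a sketch):

x = pm.Lognormal('y_obs', ..., observed=data - shift)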

Otherwise, try using self.random and self.logp instead of super().{method} in your implementation.

Yes, I meant to write data minus shift. But this gives the "Cannot resolve inputs" error. Everything else works (i.e. I can fit the model and get meaningful results); only sampling from the PP fails. Not sure if that's a bug or expected behavior.

Thanks for your further suggestions! I'll try that.