Ordered transformation that allows increments to be 0

I’m playing around with transformations because I want to create a model where the values of a random variable are non-decreasing. The ordered transformation in pymc3 works, but it seems to put very little weight on very small differences between consecutive elements.

Here is a little code and a figure to show what I mean:

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm


def ordered_bad(y):
    """The `backward` method of the `Ordered` transform in pymc3."""
    x = np.zeros_like(y)
    x[..., 0] = y[..., 0]
    x[..., 1:] = np.exp(y[..., 1:])
    return np.cumsum(x, axis=-1)


def ordered_good(y):
    """What I want instead: `exp` replaced with `abs`."""
    x = np.zeros_like(y)
    x[..., 0] = y[..., 0]
    x[..., 1:] = np.abs(y[..., 1:])
    return np.cumsum(x, axis=-1)


def diffs(x):
    """Differences between consecutive levels."""
    return x[..., 1:] - x[..., :-1]


if __name__ == '__main__':
    y = norm.rvs(size=(int(1e6), 4))
    x1 = diffs(ordered_bad(y))
    x2 = diffs(ordered_good(y))
    plt.hist(x2[..., 0], density=True, bins=200, label="includes 0", alpha=0.5)
    plt.hist(x1[..., 0], density=True, bins=2000, label="doesn't include 0", alpha=0.5)
    plt.legend()
    plt.xlim(0, 4)
    plt.savefig("tmp.png")

[Figure (tmp.png): overlaid histograms of the first differences produced by the two transforms]

The orange distribution is the one created by the ordered transform applied to a normal random variable. It makes sense that it resembles a log-normal distribution, since the transform includes exp in its calculation. However, I would really like something like the second transformation (where exp is replaced with abs). But will this cause problems for NUTS, given that the transformation can’t be inverted?
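
To make that worry concrete, here is a tiny numpy sketch (reusing the abs-based transform from above): flipping the sign of any free variable after the first gives the same ordered output, so the map is many-to-one and has no inverse.

import numpy as np

def ordered_abs(y):
    """Same as `ordered_good` above."""
    x = np.zeros_like(y)
    x[..., 0] = y[..., 0]
    x[..., 1:] = np.abs(y[..., 1:])
    return np.cumsum(x, axis=-1)

y1 = np.array([0.5, 1.2, -0.3])
y2 = np.array([0.5, 1.2, 0.3])  # identical except the sign of the last entry
print(ordered_abs(y1))  # [0.5 1.7 2. ]
print(ordered_abs(y2))  # [0.5 1.7 2. ]  -- same output, two preimages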

Hmm, if it is too close to zero, the model is actually unidentifiable, right? The inferred values of the two adjacent free variables would be exchangeable.

Well this is my current solution, and it does sample nicely:

import pymc3 as pm
import theano.tensor as tt

n = 4  # number of elements in the RV array
with pm.Model() as model:
    x0 = pm.Normal(name="_x_0", shape=1)  # first element, unconstrained
    x1 = pm.HalfNormal(name="_x_1", shape=n - 1)  # remaining elements: non-negative increments
    x = pm.Deterministic("x", tt.cumsum(tt.concatenate([x0, x1]), axis=-1))  # array of ordered RVs
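
As a quick sanity check, here is a sketch (assuming the `model` context above): every prior draw of x is already non-decreasing, since the increments are half-normal.

import numpy as np

with model:
    prior = pm.sample_prior_predictive(samples=500)

# every draw of x is non-decreasing because its increments are non-negative
assert np.all(np.diff(prior["x"], axis=-1) >= 0)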

I think I see your point about identifiability; however, in my case I don’t think it’s a problem. Suppose the first and second elements of the observed y are almost the same. The first RV, x0, will go to the value of the first element of y, and the first element of x1 will go to 0. Exchanging these would give different predictions.

I suppose exchangeability comes in if any of the other elements of y are almost the same though, right?

I see. I think what you are doing is fine as well (as long as you don’t do additional normalization on x, you should have no identifiability problem).
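
To illustrate that caveat with a quick sketch (my own toy numbers, not from the model above): if you normalized x, say to sum to 1, the overall scale of the free variables would drop out, making them unidentifiable.

import numpy as np

# two settings of the free variables that differ only by an overall scale
x_a = np.cumsum([0.5, 1.2, 0.3])
x_b = np.cumsum([1.0, 2.4, 0.6])  # everything doubled

# after normalization the two are indistinguishable, so the scale is unidentified
print(x_a / x_a.sum())  # [0.11904762 0.4047619  0.47619048]
print(x_b / x_b.sum())  # identical output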