How to generate an increasing sequence?

To be clear, @bob-carpenter's suggestion is related to the logp computation for a hypothetical “ordered bounded uniform” prior. You could implement that in PyMC as a custom prior, but it's not typically the way we suggest you work. In PyMC you declare a generative forward model (as opposed to the logp/density model that is declared in Stan). The implied logp is then worked out for you automatically by inspecting the forward graph (which generates the random draws). So the following is perfectly valid and works out to something similar (but not identical! Mine starts with an unconstrained uniform latent quantity and transforms it into a non-uniform constrained prior, whereas Bob's suggestion starts with a constrained non-uniform quantity and transforms it into a constrained uniform prior):

    depth_points_unsorted = pm.Uniform("depth_points_unsorted", lower=0, upper=max_depth, shape=num_layers)
    depth_points = pm.Deterministic('depth_points', pt.sort(depth_points_unsorted), dims=['layer'])
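
To see concretely why this gives a non-uniform prior, note that the k-th smallest of n i.i.d. Uniform(0, max) draws follows a scaled Beta(k, n - k + 1) distribution, with mean max * k / (n + 1). A quick check in plain NumPy (just for illustration, outside of PyMC):

    import numpy as np

    rng = np.random.default_rng(0)
    max_depth, num_layers = 10, 5

    # Sort i.i.d. uniform draws; each column is one ordered depth point
    draws = np.sort(rng.uniform(0, max_depth, size=(100_000, num_layers)), axis=1)

    # The empirical means match max_depth * k / (num_layers + 1),
    # so the marginals are pushed apart rather than flat on (0, max_depth)
    print(draws.mean(axis=0))
    print(max_depth * np.arange(1, num_layers + 1) / (num_layers + 1))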

You should not, in general, ever be handling Jacobian adjustments yourself or working out log probabilities in PyMC. That said, if you're really interested in that approach, you can use the built-in transformations to help you. For example, we have the ordered transformation, which automatically handles the transformation (and Jacobian adjustment) that Bob is talking about. These transformations can be chained together, so you could also apply an interval transformation to force the values into a certain range (this is actually always quietly applied for you in the background when you use pm.Uniform). The disadvantage of doing this is that you lose access to prior and posterior predictive sampling, because the forward model is not aware of the transformations (which act only on the logp graph). You also have to be more careful about the initial values of your priors, as we will see below.
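
If you did want to chain them, it would look something like the following sketch. Chain, Interval, and Ordered live in pymc.distributions.transforms in recent PyMC versions, but check against your installed version; the ordered_interval name is just mine:

    import numpy as np
    import pymc as pm
    from pymc.distributions.transforms import Chain, Interval, Ordered

    max_depth, num_layers = 10, 5
    coords = {'layer': range(num_layers)}

    # Interval maps onto (0, max_depth), Ordered enforces the ordering;
    # both act only on the logp graph, not on forward sampling
    ordered_interval = Chain([Interval(lower=0, upper=max_depth), Ordered()])

    with pm.Model(coords=coords) as m:
        depth_points = pm.Uniform('depth_points', lower=0, upper=max_depth, dims=['layer'],
                                  transform=ordered_interval,
                                  # initial values must already satisfy the ordering constraint
                                  initval=np.linspace(0.5, max_depth - 0.5, num_layers))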

For comparison, here are the resulting priors under both approaches. First mine:

    import pymc as pm
    import arviz as az
    import matplotlib.pyplot as plt
    import pytensor.tensor as pt
    from pymc.distributions.transforms import Ordered
    import numpy as np

    def plot_layers(idata, group='prior'):
        # Overlay the marginal distribution of each layer's depth on one axis
        fig, ax = plt.subplots(figsize=(14, 4), dpi=144)
        for layer in coords['layer']:
            az.plot_posterior(idata[group],
                              var_names=['depth_points'],
                              ax=ax,
                              coords={'layer': layer},
                              hdi_prob='hide',
                              point_estimate=None,
                              c=plt.color_sequences['tab10'][layer],
                              label=f'Layer {layer}')
        ax.legend()
        plt.show()

    max_depth = 10
    num_layers = 5
    coords = {'layer': range(num_layers)}
    with pm.Model(coords=coords) as m:
        depth_points_unsorted = pm.Uniform("depth_points_unsorted", lower=0, upper=max_depth, dims=['layer'])
        depth_points = pm.Deterministic('depth_points', pt.sort(depth_points_unsorted), dims=['layer'])
        idata = pm.sample_prior_predictive()
    plot_layers(idata)

And using a transform (with automatic Jacobian adjustment):

    with pm.Model(coords=coords) as m:
        depth_points = pm.Uniform("depth_points", lower=0, upper=max_depth, dims=['layer'],
                                  transform=Ordered(),
                                  # You have to manually specify initial values that satisfy the ordering
                                  # constraint, otherwise you will get a "-inf logp at the initial value" error
                                  initval=np.linspace(1e-4, max_depth - 1e-4, num_layers))
        # Cannot use sample_prior_predictive here; you would just get back untransformed uniform draws
        idata = pm.sample()
    plot_layers(idata, group='posterior')

As you can see, the results are very similar.
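
If you want to check this programmatically rather than by eye, you can verify that every draw respects the ordering (using the idata from the second model; for the first one, look at idata.prior instead):

    import numpy as np

    # (chain, draw, layer) array of sampled depths; the diffs along the
    # layer axis must all be non-negative for an increasing sequence
    draws = idata.posterior['depth_points'].values
    assert np.all(np.diff(draws, axis=-1) >= 0)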
