Best practice for parallel model selection, especially avoidance of recompilation

Hey @lucidfrontier45,

There is a way to speed up compilation, but in most situations you want to avoid it because it trades compilation time for sampling time.

EDIT

That said, at least on my machine, the total time is faster when I switch the compilation mode, probably because the model is so simple.

Never mind, I spoke too soon. The faster compilation was probably due to caching.

You can DISREGARD the following.

You can try changing the PyTensor compilation mode like this:

```python
import pymc as pm
import pytensor
import pytensor.tensor as pt

with pm.Model() as model:
    with pytensor.config.change_flags(mode="FAST_COMPILE"):
        # data
        X_ = pm.Data("X", X)
        y_ = pm.Data("y", y)

        # Prior: P(w|α)
        w_ = pm.Normal("w", mu=0.0, sigma=5.0, shape=(len(w),))
        b_ = pm.Normal("b", mu=0.0, sigma=5.0)

        alpha = pm.HalfNormal("alpha", sigma=5.0)

        # Predictor: z = f(x, w)
        z = pt.dot(X_, w_) + b_

        # Likelihood: P(y|z)
        pm.NegativeBinomial("y_obs", mu=pt.exp(z), alpha=alpha, observed=y_)

        # Note: the flag only applies while this context is active, so
        # pm.sample() should also be called inside it for the mode to
        # affect the functions compiled at sampling time.
```

What this does is reduce the number of graph rewrites (optimizations) PyTensor applies when compiling the model's functions.