Hi @jessegrabowski, thanks for the detailed answer. At first glance it seems a little far-fetched indeed, because the situation I am facing already occurs without all that complexity.
Here is the simplest version of a full model for which I already encounter the problem I mentioned earlier:
```python
import pymc as pm
import pytensor.tensor as pt

T = ...  # temperature time-series starting in 1900, around 0 in the 2000s, heading to around 2 in 2100

with pm.Model() as model:
    q = pm.HalfCauchy("q", 0.15)  # rather large (ignorant) prior
    a = pm.Normal("a", 0, 0.5)  # rather large (ignorant) prior
    c = pm.Normal("rate_1993_2018", 0.25, 0.06)  # prior for the 1993-2018 linear trend in mm/yr
    rate = q * T**2 + a * T + c  # a quadratic model on the rate of sea level
    slr = pt.cumsum(rate)
    pm.Normal("slr_1900_to_1990", slr[90] - slr[0], 0.3, observed=0.4)  # obs constraint on cumulative SLR in mm
    pm.Potential("slr_2005_2100", pm.logp(...., slr[200] - slr[105]))  # the log-normal potential discussed previously
    idata = pm.sample()
```
Now, this model is still only a small part of the full model, which includes other contributions to sea level as well as local observations (tide gauges, satellite, GPS…), spatial patterns, spatially-correlated ocean variability, and other effects. But that does not really matter here. The point is: how do I include the log-normal knowledge in the model when I cannot use it directly as a prior?

Well, in this case I could resolve it analytically by integrating over the known T: use slr_2005_2100 as a true log-normal prior, and make q a deterministic parameter that depends on slr_2005_2100, a and c. That way I could get rid of the potential. As far as I understand it, that should be equivalent…though I'm a bit wary of doing that kind of analytical inversion, and it won't be possible if I update my quadratic model to something else. Anyway, I feel we are talking about two different situations.
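For what it's worth, here is a minimal numpy sketch of that analytical inversion, just to check the algebra. The temperature series and the log-normal parameters below are made-up stand-ins (assumptions, not the real values): the point is only that once T is known, `slr[200] - slr[105]` is a linear combination of q, a and c, so q can be solved for in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical stand-in for the known temperature series (assumption)
T = np.linspace(-0.5, 2.0, 201)

sl = slice(106, 201)  # the rate terms entering slr[200] - slr[105]
S2, S1, n = (T[sl] ** 2).sum(), T[sl].sum(), len(T[sl])

# draws from (hypothetical) priors: log-normal on the 2005-2100 rise,
# normals on a and c as in the model above
delta = rng.lognormal(mean=np.log(50.0), sigma=0.3)  # assumed parameters, mm
a = rng.normal(0.0, 0.5)
c = rng.normal(0.25, 0.06)

# analytical inversion: solve  delta = q*S2 + a*S1 + c*n  for q
q = (delta - a * S1 - c * n) / S2

# the implied cumulative rise over 2005-2100 recovers delta exactly
rate = q * T ** 2 + a * T + c
slr = np.cumsum(rate)
print(np.isclose(slr[200] - slr[105], delta))  # True
```

In the PyMC model this would amount to replacing the `HalfCauchy` prior on q and the `Potential` with a log-normal prior on the 2005–2100 rise plus a `Deterministic` for q, which is exactly why it breaks as soon as the rate model is no longer linear in its parameters.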