How to get a non-trainable random variable?

I want to add some noise to the variable $\alpha$ like this:

$$\alpha = \alpha \cdot e^{\varepsilon}$$

where $\varepsilon \sim \text{Normal}(0, \sigma)$ is a separate, independent noise term (non-trainable)
and $\sigma \sim \text{HalfNormal}(\sigma=1)$ (trainable).

By default, PyMC infers the posterior of every random variable from the data,
but I want $\varepsilon$ to always be a fresh draw from $\text{Normal}(0, \sigma)$ during sampling.
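
For concreteness, this is roughly the model I have in mind (a sketch; the variable names and the $\alpha$ prior are just for illustration). Written this way, $\varepsilon$ is an ordinary latent variable whose posterior gets updated against the data:

import pymc3 as pm

with pm.Model() as model:
    sigma = pm.HalfNormal("sigma", sigma=1)            # trainable noise scale
    epsilon = pm.Normal("epsilon", mu=0, sigma=sigma)  # should stay pure noise
    alpha = pm.Normal("alpha", mu=0, sigma=1)
    # noisy version of alpha: alpha * exp(epsilon)
    alpha_noisy = pm.Deterministic("alpha_noisy", alpha * pm.math.exp(epsilon))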

How can I do this?
Any help is really appreciated, thanks!


You need to implement a custom step sampler that always updates the value of the variable with a new draw from the prior, regardless of the model logp.

This thread may be useful: Custom sampling step - #32 by ricardoV94
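
In outline, something like this (a rough, untested sketch with a fixed noise scale for simplicity; the class name is hypothetical):

import pymc3 as pm
from pymc3.step_methods.arraystep import BlockedStep

class DrawFromPriorStep(BlockedStep):
    def __init__(self, var):
        self.vars = [var]  # the variable this step is responsible for
        self.var = var

    def step(self, point: dict):
        # ignore the model logp entirely: overwrite the current value
        # with a fresh draw from the prior
        point[self.var.name] = pm.Normal.dist(mu=0, sigma=1).random()
        return point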

Thank you so much! It works.

Hi @wmylxmj!
I’m trying to implement something similar. Could you share some part of your code that solved the problem?

# custom.py
import numpy as np
import pymc3 as pm
from pymc3.step_methods.arraystep import BlockedStep


class UpdateIndependentNoiseTermStep(BlockedStep):
    """Redraw epsilon ~ Normal(0, sigma) at every iteration, ignoring the
    model logp, so that epsilon is never fit to the data."""

    def __init__(self, epsilon, sigma, model=None):
        pm.modelcontext(model)  # just checks that we are inside a model context
        self.vars = [epsilon]   # the variable this step is responsible for
        self.epsilon = epsilon
        self.sigma = sigma

    def step(self, point: dict):
        sigma = self.sigma
        if not isinstance(1.0 * self.sigma, float):
            # sigma is itself a random variable; PyMC3 samples it on the log
            # scale, so recover its current value from the transformed entry
            sigma = np.exp(point[self.sigma.transformed.name])
        # overwrite epsilon with a fresh draw from Normal(0, sigma)
        point[self.epsilon.name] = pm.Normal.dist(mu=0, sigma=sigma).random()
        return point

# main script
import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
import arviz as az

from custom import UpdateIndependentNoiseTermStep

# simulate 2000 observations: 100 * exp(eps), eps ~ Normal(0, 0.2)
data = 100 * np.exp(pm.Normal.dist(mu=0, sigma=0.2).random(size=2000))
plt.plot(data)

(figure: plot of the simulated data)
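
That is, each observation is $y_i = 100 \, e^{\varepsilon_i}$ with $\varepsilon_i \sim \text{Normal}(0, 0.2)$: the true $\alpha$ is 100 and the true noise scale is 0.2.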

with pm.Model() as model:
    alpha = pm.Normal(r"$\alpha$", mu=100, sigma=10)
    sigma = pm.HalfStudentT(r"$\sigma$", nu=3, sigma=0.15)
    epsilon = pm.Normal(r"$\varepsilon$", mu=0.0, sigma=sigma)
    # custom step: redraws epsilon from Normal(0, sigma) at every iteration
    update_fake_step = UpdateIndependentNoiseTermStep(epsilon=epsilon, sigma=sigma)
    alpha_eps = pm.Deterministic(r"$\alpha_{\varepsilon}$", alpha * pm.math.exp(epsilon))
    obs = pm.Normal("obs", mu=alpha_eps, sigma=1, observed=data)
    # Metropolis handles alpha and sigma; the custom step handles epsilon
    step = pm.Metropolis(vars=[alpha, sigma])
    trace = pm.sample(4000, cores=4, chains=4, tune=10000,
                      step=[step, update_fake_step], return_inferencedata=True)
az.plot_trace(trace)

(figure: trace plots)

az.summary(trace)
         mean     sd       hdi_3%   hdi_97%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
α        102.131  5.782    90.983   113.800  0.244      0.174    607.0     756.0     1.01
ε        0.002    0.273    -0.422   0.449    0.002      0.004    15976.0   4481.0    1.00
σ        0.171    0.218    0.000    0.445    0.006      0.004    1251.0    1517.0    1.00
α_ε      108.864  142.697  61.872   152.625  1.751      1.238    9790.0    4724.0    1.00

Roughly like this.
