This is something I am a bit curious about too. I initially asked myself whether supplying

```
c = pm.Normal("c",15, 3)
```

would achieve a similar effect. Probably not fully, and it is highly dependent on how the model evolves. The difference from "regular noise" is that this affects your likelihood: the joint density gets multiplied by the probability of the sampled c under this prior. Moreover, depending on your model, the posterior for c could turn out to be quite different from Normal(15, 3), which means the sampler will eventually be drawing c from that posterior rather than from something that looks like this normal.
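To see concretely why a prior is not the same as fixed noise, here is a small self-contained sketch (plain NumPy, numbers made up for the demo) using the conjugate normal-normal update: once c enters the likelihood, its posterior can concentrate far away from the Normal(15, 3) you wrote down.

```python
import numpy as np

# Illustrative only: conjugate normal-normal update showing why
# c = pm.Normal("c", 15, 3) is not a fixed noise source. The sampler
# targets the posterior of c, which the data can pull far from the prior.

mu0, sd0 = 15.0, 3.0      # prior on c, as in pm.Normal("c", 15, 3)
sigma = 1.0               # assumed known observation noise (made up)
y = np.full(100, 5.0)     # data whose mean sits far from the prior mean

n = len(y)
prec_post = 1 / sd0**2 + n / sigma**2
mu_post = (mu0 / sd0**2 + y.sum() / sigma**2) / prec_post
sd_post = prec_post**-0.5

print(mu_post, sd_post)   # posterior concentrates near 5, not 15
```

So if you wanted c to stay distributed as Normal(15, 3) throughout sampling, a plain prior will not guarantee that.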

I think one way to achieve this would be the suggestion here by @ricardoV94, where you define the step function for your pm.Normal manually (in the link below this is shown for a Dirichlet):

What I am interested in trying out, but never did, is whether one can use pytensor's random number generator for this purpose:

```
from pytensor.tensor.random.utils import RandomStream
rng = RandomStream()
sample = rng.normal(0, 1, size=(2, 2))
```

It seems this used to work when PyMC was based on Theano, via

```
srng = tt.shared_randomstreams.RandomStreams(seed=234)
mu = pm.Deterministic('mu', srng.normal(avg=4.9, std=.1))
```

but I never had the chance to try this. See also, again with Theano,

One also has to ask: if you use such a non-conventional addition to your model, how do you evaluate things like convergence statistics, r-hat values, etc.? They might come out with unexpected values, and then would that mean your model is not converging correctly, or is it just because you are stepping outside the known territory of Bayesian modelling? Anyone?