Hi, I’m relatively new to defining custom distributions so any basic pointers are welcome. For context, I’m working on building a multivariate stochastic volatility model (an extension to the example in the PyMC3 docs). Let’s say that I have n dimensions; I am trying to implement the model:

\log(r_i) \sim t(\nu, \mu, \Sigma_i)

s_i \sim \mathrm{MvNormal}(s_{i-1}, \Sigma_{\sigma})

v_i = \exp(-2 s_i)

\Sigma_i = \mathrm{diag}(v_i) \cdot \Pi \cdot \mathrm{diag}(v_i)

where \nu is the degrees-of-freedom parameter, \mu is a vector of constant returns, and \Pi is a constant correlation matrix. I’m essentially defining a multivariate (correlated) random walk for the latent volatility process s_i, and then constructing the covariance matrix \Sigma_i for the observed returns at each time t_i by scaling the constant correlation matrix \Pi by a positive function v_i of the volatility process (representing the volatility in each dimension).
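For intuition, the covariance construction can be sanity-checked in plain NumPy (the `ndim` and the equicorrelated `Pi` below are made-up stand-ins for the model's quantities):

```python
import numpy as np

ndim = 3
rng = np.random.default_rng(0)

# A fixed correlation matrix Pi (here: equicorrelated, off-diagonals 0.3).
Pi = np.full((ndim, ndim), 0.3)
np.fill_diagonal(Pi, 1.0)

# One draw of the latent volatility state s_i and the implied scales v_i.
s = rng.normal(size=ndim)
v = np.exp(-2 * s)

# Sigma_i = diag(v_i) @ Pi @ diag(v_i)
Sigma = np.diag(v) @ Pi @ np.diag(v)

# Sigma_i is symmetric positive definite as long as Pi is and all v_i > 0.
assert np.allclose(Sigma, Sigma.T)
assert np.all(np.linalg.eigvalsh(Sigma) > 0)
```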

**My trouble is this:** from what I can tell, the standard MvStudentT RV in PyMC3 (docs, source) cannot handle the case in which each of my observations has a different covariance matrix. So to complete my model, I gather that I need to define a custom distribution to handle this case by either

- Defining the logp function and using pm.DensityDist (described here), or
- Defining the logp function and using pm.Potential.
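Whichever route is taken, the hand-written logp has to implement the multivariate Student-t density itself. For reference, here is a plain-NumPy version of the standard formula (not PyMC3 code; useful mainly as something to unit-test a Theano implementation against):

```python
import math
import numpy as np

def mv_t_logp(x, nu, mu, cov):
    """Log density of a multivariate Student-t at x (standard formula)."""
    d = len(mu)
    diff = np.asarray(x) - np.asarray(mu)
    _, logdet = np.linalg.slogdet(cov)
    # Mahalanobis term (x - mu)' Sigma^{-1} (x - mu) via a solve, not an inverse.
    maha = diff @ np.linalg.solve(cov, diff)
    return (
        math.lgamma((nu + d) / 2.0)
        - math.lgamma(nu / 2.0)
        - 0.5 * d * math.log(nu * math.pi)
        - 0.5 * logdet
        - 0.5 * (nu + d) * math.log1p(maha / nu)
    )

# For d = 1, cov = [[1]] this reduces to the univariate Student-t density.
print(mv_t_logp([0.0], nu=4.0, mu=[0.0], cov=np.eye(1)))  # ≈ -0.9808
```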

I have tried the second approach, but ran into some trouble. My current code is below:

```
import numpy as np
import pymc3 as pm
import theano.tensor as tt

def batch_mult_t_logp(mu, nu, vols, corr, nobs, obs):
    '''
    Log probability for a batch of multivariate Student T's,
    one covariance matrix per observation.
    '''
    logp = 0.0
    for j in range(nobs):
        vol_array = tt.nlinalg.diag(vols[j, :])
        cov_array = tt.nlinalg.matrix_dot(vol_array, corr, vol_array)
        logp += pm.MvStudentT.dist(mu=mu, nu=nu, cov=cov_array).logp(obs[j, :])
    return logp

# Index matrix that expands the flat LKJCorr output into a full
# symmetric correlation matrix.
n_elem = ndim * (ndim - 1) // 2
tri_index = np.zeros((ndim, ndim), dtype=np.int64)
tri_index[np.triu_indices(ndim, k=1)] = np.arange(n_elem)
tri_index[np.triu_indices(ndim, k=1)[::-1]] = np.arange(n_elem)

with pm.Model() as model:
    # Latent volatility: correlated Gaussian random walk.
    vol_chol, vol_corr, vol_stds = pm.LKJCholeskyCov(
        "vol_chol", n=ndim, eta=2.0, sd_dist=pm.Exponential.dist(1.0), compute_corr=True
    )
    vol_cov = pm.Deterministic("vol_cov", vol_chol.dot(vol_chol.T))
    vols = pm.MvGaussianRandomWalk("vols", mu=0.0, shape=(nobs, ndim), chol=vol_chol)
    vol_process = pm.Deterministic("vol_process", tt.exp(-2 * vols))

    # Prior over the correlation matrix of the observed returns.
    corr_nu = pm.Uniform("corr_nu", 0, 5)
    C_triu = pm.LKJCorr("C_triu", eta=corr_nu, n=ndim)
    C = pm.Deterministic("C", tt.fill_diagonal(C_triu[tri_index], 1.0))

    # Prior for the mean of the observed returns.
    mu = pm.Normal("mu", 0.0, 1.0, shape=ndim)

    # Tail parameter, shifted so that nu > 2.
    nu1 = pm.HalfNormal("nu_minus_2", sigma=1.0)
    nu = pm.Deterministic("nu", 2.0 + nu1)

    # Likelihood added via a Potential term.
    batch_mult_t = pm.Potential(
        "likelihood", batch_mult_t_logp(mu, nu, vol_process, C, nobs, data)
    )
```

I know that I should probably be using Theano’s scan() instead of a for loop. Any pointers/examples would be much appreciated!
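In plain NumPy, at least, the loop can be removed entirely with broadcasting: scaling by diag(v_j) is just an elementwise division of the residuals, and log|Σ_j| = log|Π| + 2 Σ log v_j, so only the one shared matrix Π ever needs factorising. A sketch of this idea (the loop version is kept as a reference to test against; the same reshaping should carry over to Theano tensor ops):

```python
import math
import numpy as np

def batch_mv_t_logp_loop(obs, nu, mu, vols, corr):
    """Reference: per-row loop over the standard mv Student-t formula."""
    total = 0.0
    nobs, d = obs.shape
    for j in range(nobs):
        cov = np.diag(vols[j]) @ corr @ np.diag(vols[j])
        diff = obs[j] - mu
        _, logdet = np.linalg.slogdet(cov)
        maha = diff @ np.linalg.solve(cov, diff)
        total += (math.lgamma((nu + d) / 2) - math.lgamma(nu / 2)
                  - 0.5 * d * math.log(nu * math.pi) - 0.5 * logdet
                  - 0.5 * (nu + d) * math.log1p(maha / nu))
    return total

def batch_mv_t_logp_vec(obs, nu, mu, vols, corr):
    """Same quantity without the loop, via broadcasting."""
    nobs, d = obs.shape
    z = (obs - mu) / vols                       # diag(v_j)^-1 (x_j - mu), row-wise
    # Row-wise Mahalanobis terms z_j' corr^-1 z_j with a single solve.
    maha = np.einsum('ij,ij->i', z, np.linalg.solve(corr, z.T).T)
    _, logdet_corr = np.linalg.slogdet(corr)
    logdet = logdet_corr + 2.0 * np.log(vols).sum(axis=1)   # log|Sigma_j|
    const = (math.lgamma((nu + d) / 2) - math.lgamma(nu / 2)
             - 0.5 * d * math.log(nu * math.pi))
    return np.sum(const - 0.5 * logdet - 0.5 * (nu + d) * np.log1p(maha / nu))

# Check the two versions agree on made-up data.
rng = np.random.default_rng(1)
nobs, ndim = 5, 3
corr = np.full((ndim, ndim), 0.2)
np.fill_diagonal(corr, 1.0)
vols = np.exp(rng.normal(size=(nobs, ndim)))
obs = rng.normal(size=(nobs, ndim))
mu = np.zeros(ndim)
assert abs(batch_mv_t_logp_loop(obs, 5.0, mu, vols, corr)
           - batch_mv_t_logp_vec(obs, 5.0, mu, vols, corr)) < 1e-8
```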