Hi, I’m more than a year late, but in case you’re still interested in this topic I’ll leave a comment here. If you’re interested in the difference between conditions, the best way to conduct inference without losing information (i.e. while keeping the acknowledged uncertainty) is to include both conditions in your model and then take the difference between the posteriors of each condition ex-post, rather than taking the difference between groups from the observed signal first. In that case you could do something like:
```python
import pymc as pm

# data: observed signal of shape (n_channels, n_samples, n_conditions)
with pm.Model() as model:
    mu = pm.Normal('mu', mu=0.0, sigma=1.0, shape=(n_channels, n_samples, n_conditions))
    std = pm.HalfNormal('std', 1.0)
    # shifted Exponential prior on the degrees of freedom,
    # e.g. nu_min = 1 and nu_mean = 30 give the usual Exponential(1/29) + 1
    nu = pm.Exponential('nu', 1 / (nu_mean - nu_min)) + nu_min
    y = pm.StudentT('y', mu=mu, nu=nu, sigma=std, observed=data)
    trace = pm.sample(1000)
```
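The ex-post difference between conditions can then be computed directly from the posterior draws of `mu`. A minimal sketch, assuming `trace` is an ArviZ `InferenceData` (the default return type in recent PyMC; in PyMC3 you'd pass `return_inferencedata=True` to `pm.sample`) and that condition is the last axis of `mu`:

```python
import numpy as np

# posterior draws of mu: (chain, draw, n_channels, n_samples, n_conditions)
mu_post = trace.posterior['mu'].values

# contrast between two conditions, keeping every posterior draw
diff = mu_post[..., 1] - mu_post[..., 0]  # (chain, draw, n_channels, n_samples)

# summarize over chains and draws, e.g. posterior mean and 94% credible interval
diff_mean = diff.mean(axis=(0, 1))                     # (n_channels, n_samples)
diff_ci = np.quantile(diff, [0.03, 0.97], axis=(0, 1))
```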
Ideally, however, you would want the `mu` parameter to be a linear regression formula (i.e. with an intercept and a slope), so it can express the change induced as a function of condition; that would also allow you to add varying intercepts and/or slopes by stimulus and participant. I have taken an approach like that here: https://onlinelibrary.wiley.com/doi/full/10.1111/ejn.15470 (it still needs much improvement, though).

If you’re interested in accounting for the time domain across the epoch, then you could take a look at this: Bayesian models for event-related potentials time-series and electrode correlations estimation | bioRxiv. It implements a method for obtaining covariance/correlation matrices via LKJ priors, which could also be used in more conventional linear regression models. On that topic I would also recommend a lecture by McElreath (in a different context): Statistical Rethinking 2022 Lecture 14 - Correlated Varying Effects - YouTube. A rough sketch of what such a model could look like is below.
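This is just a minimal sketch combining the two ideas (regression formulation of `mu` plus LKJ-correlated varying intercepts and slopes by participant), not the exact model from the linked paper; `amplitude`, `condition`, `subj_idx`, and `n_subjects` are hypothetical inputs you would build from your epochs:

```python
import pymc as pm

# Hypothetical inputs (assumptions, not from the original post):
# amplitude: 1-D array of single-trial amplitudes
# condition: 0/1 code per trial
# subj_idx:  participant index per trial, for n_subjects participants
with pm.Model() as reg_model:
    # population-level intercept and condition effect (slope)
    alpha = pm.Normal('alpha', 0.0, 1.0)
    beta = pm.Normal('beta', 0.0, 1.0)

    # correlated varying intercepts and slopes by participant via an LKJ prior
    chol, corr, stds = pm.LKJCholeskyCov(
        'chol', n=2, eta=2.0, sd_dist=pm.Exponential.dist(1.0), compute_corr=True
    )
    z = pm.Normal('z', 0.0, 1.0, shape=(n_subjects, 2))
    u = pm.Deterministic('u', z @ chol.T)  # rows ~ MvNormal(0, chol @ chol.T)

    # regression formula: intercept and slope vary by participant
    mu = alpha + u[subj_idx, 0] + (beta + u[subj_idx, 1]) * condition
    std = pm.HalfNormal('std', 1.0)
    nu = pm.Exponential('nu', 1 / 29) + 1  # same shifted-Exponential prior as above
    y = pm.StudentT('y', mu=mu, nu=nu, sigma=std, observed=amplitude)
    trace_reg = pm.sample(1000)
```

Here `beta` (plus its varying counterpart `u[:, 1]`) directly tells you the change induced by the condition, and the LKJ prior lets the model learn how participants’ intercepts and slopes correlate.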
Sorry if I’m too late and this is no longer relevant, but maybe someone else will find it useful as well.