Hi everyone,

I am trying to model a mixture of 2 normal distributions with changing weights:

X: explanatory variables

B: parameters

y: observations

For a simple regression we would have

Y ~ N(XB, sigma)

I want a Normal mixture:

Mixture( N(XB, sigma), N(mu, sigma_2) )

The issue is p (the mixture weight): p depends on XB, so the mixture gives more weight to N(XB, sigma) when XB is high and more weight to N(mu, sigma_2) when XB is low.

My idea on how to achieve this:

p = 1/(1+exp(-XB)) -> shape = n_samples
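(Note the minus sign in the exponent: p = 1/(1+exp(-XB)) is the logistic sigmoid, which is needed so that p rises with XB. A quick NumPy check, values illustrative:)

```python
import numpy as np

xb = np.array([-3.0, 0.0, 3.0])
p = 1.0 / (1.0 + np.exp(-xb))  # logistic sigmoid of XB

# p is small for low XB and large for high XB, so the
# N(XB, sigma) component gets more weight exactly when XB is high
```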

assign = pm.Categorical("assign", p=tt.stack([p, 1-p]).T, shape=n_samples)

(tt.stack([p, 1-p]) has shape (2, n_samples); Categorical expects the category axis last, hence the transpose, and there is one draw per sample, hence shape=n_samples.)

(the probability of the categorical distribution depends on XB, so these probabilities will be different for each sample)

centers = tt.stack([pm.Normal("center_0", mu=XB, sd=1, shape=n_samples),
                    pm.Normal("center_1", mu=0, sd=1, shape=n_samples)])

pm.Normal("observations", mu=centers[assign, tt.arange(n_samples)], sd=1, observed=y)

(centers has shape (2, n_samples), so plain centers[assign] broadcasts wrongly; the per-sample component mean needs the paired index centers[assign, tt.arange(n_samples)].)
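To make sure the model matches what I have in mind, here is the generative process simulated in plain NumPy (names like n_samples, XB, mu, sigma, sigma_2 mirror the notation above; the sizes and values are just placeholders):

```python
import numpy as np

rng = np.random.default_rng(42)

n_samples = 1000
XB = rng.normal(0.0, 2.0, size=n_samples)   # stand-in for X @ B
mu, sigma, sigma_2 = 0.0, 1.0, 1.0

p = 1.0 / (1.0 + np.exp(-XB))               # weight on the N(XB, sigma) component
assign = rng.random(n_samples) < p          # True -> regression component
y = np.where(assign,
             rng.normal(XB, sigma),         # component 0: mean follows the regression
             rng.normal(mu, sigma_2))       # component 1: fixed mean mu

# Sanity check: observations with high XB should mostly come from the
# regression component, and those with low XB from the fixed component
frac_high = assign[XB > 2].mean()
frac_low = assign[XB < -2].mean()
```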

The above, however, crashes my kernel every single time.

Would you guys have a working example of a Categorical weight driven by some regression/factor model?