# Zero One Inflated Beta Regression for an array of parameters

Hello everyone,

I am pretty new to Bayesian inference, so apologies if this is a misunderstanding of the concepts. I am essentially trying to sample a variable “y” from a zero-one-inflated Beta regression model. Since the model is zero-one inflated, there are essentially three distributions to account for. I have modeled it as follows:

```python
y_zero = pm.Uniform("y_zero", 0, epsilon)
y_one = pm.Uniform("y_one", 1 - epsilon, 1)
y_beta = pm.Beta("y_beta", alpha=precision * m, beta=(1 - m) * precision)
```

Now, as per my requirements, I cannot simply put a single value of m into this equation; I have to use an array of m’s derived from the input data instead. PyMC3 promptly complains that this is not the right approach. How could one tackle this issue?

I have the BUGS script here in case it provides more context:

```
model {
  # priors
  zero0 ~ dnorm(priors.zero_mu0, 1 / (priors.zero_sd0 ^ 2))
  zero1 ~ dnorm(priors.zero_mu1, 1 / (priors.zero_sd1 ^ 2))
  one0 ~ dnorm(priors.one_mu0, 1 / (priors.one_sd0 ^ 2))
  one1 ~ dnorm(priors.one_mu1, 1 / (priors.one_sd1 ^ 2))
  b0 ~ dnorm(priors.b0_mu, 1 / (priors.b0_sigma ^ 2))
  b1 ~ dnorm(priors.b1_mu, 1 / (priors.b1_sigma ^ 2))
  phi ~ dunif(0, 100)

  # eqs 6, 7
  for (i in 1:n) {
    logit(alpha_zero[i]) <- zero0 + zero1 * x[i]
    y.iszero[i] ~ dbern(alpha_zero[i])
    logit(alpha_one[i]) <- one0 + one1 * x[i]
    y.isone[i] ~ dbern(alpha_one[i])
  }

  for (i in 1:n.zero) {
    y.zero[i] ~ dbern(0)
  }
  for (i in 1:n.one) {
    y.one[i] ~ dbern(1)
  }

  # likelihood for mu and phi (precision) - probit link function
  for (i in 1:n.cont) {
    y.c[i] ~ dbeta(p[i], q[i])
    p[i] <- mu2[i] * phi
    q[i] <- (1 - mu2[i]) * phi
    probit(mu2[i]) <- (1 / b0) * log(x.c[i] / b1)
  }
}
```

The x’s here are the input data I am referring to.

Are you trying to sample the variable y, or is it a likelihood (i.e. the values are observed)? In the JAGS snippet it seems the observed data is simply split into `y.zero`, `y.one`, and `y.c`, which you could do similarly in PyMC by adding three observed variables, two Bernoulli and one Beta, with the respective observed values.

Yes, I am trying to sample the variable “y” and as per my understanding, I would have to use the equations representing the likelihood to do the sampling.

I thought of splitting the data into multiple parts as you mentioned, but wouldn’t that mean I am just supplying an array of zeros, an array of ones, and another array with everything in between to the three distributions defined? If so, that does not seem like the right thing to do.

Another thing that I am confused about is this part of the BUGS model:

```
for (i in 1:n.cont) {
  y.c[i] ~ dbeta(p[i], q[i])
  p[i] <- mu2[i] * phi
  q[i] <- (1 - mu2[i]) * phi
  probit(mu2[i]) <- (1 / b0) * log(x.c[i] / b1)
}
```

If I directly translate this to PyMC3, I would be creating a separate variable for every single observation, which is a computational nightmare. How does one normally implement this with the library?

One would use NumPy-style vectorization instead of looping. You can look at the linear regression examples in pymc-examples.
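For example, the elementwise `alpha_zero` loop in the BUGS script collapses to a single array expression. A minimal NumPy sketch of the idea (the values of `x`, `zero0`, and `zero1` are made up):

```python
import numpy as np

# Hypothetical stand-ins for the BUGS quantities
x = np.array([0.2, 0.5, 0.9])   # covariate vector
zero0, zero1 = -1.0, 2.0        # intercept and slope

# Instead of `for (i in 1:n) { logit(alpha_zero[i]) <- zero0 + zero1 * x[i] }`,
# compute the whole vector at once; broadcasting does the elementwise work.
eta = zero0 + zero1 * x                   # linear predictor, shape (n,)
alpha_zero = 1.0 / (1.0 + np.exp(-eta))   # inverse logit, elementwise

print(alpha_zero.shape)  # (3,)
```

In PyMC the same pattern applies: `pm.math.invlogit(zero0 + zero1 * x)` accepts the full array `x` and yields a vector of probabilities, so no Python loop over observations is needed.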


> Yes, I am trying to sample the variable “y” and as per my understanding, I would have to use the equations representing the likelihood to do the sampling.

Are you sampling y or is y your observed data?

I am sampling y_zero, y_one, and y_beta based on the observations from y. Sorry for the confusion.

Just to be clear, if you have something like the following:

```python
with pm.Model() as m:
    mu = ...
    sigma = ...
    y = pm.Normal("y", mu, sigma, observed=data)
```

You wouldn’t be sampling y; it’s just a likelihood term used to infer the other unobserved parameters that go into mu and sigma. y itself is observed and fixed.

If that’s your case, you can indeed partition your data into three likelihood terms (observed variables). The zeros and ones go into Bernoulli variables parameterized by `p` or `logit_p`, and the rest into a suitably parametrized `Beta`.

This might be relevant: Zero inflated Normal - #5 by ricardoV94
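As a sketch of that partitioning (with made-up data for `y`), the split mirrors the `y.iszero` / `y.isone` / `y.c` quantities in the BUGS script:

```python
import numpy as np

# Toy observed data on [0, 1], standing in for y (hypothetical values)
y = np.array([0.0, 0.3, 1.0, 0.7, 0.0, 1.0, 0.55])

# Partition into the three pieces the model needs
is_zero = (y == 0).astype(int)   # Bernoulli indicator: "is the value exactly 0?"
is_one = (y == 1).astype(int)    # Bernoulli indicator: "is the value exactly 1?"
y_cont = y[(y > 0) & (y < 1)]    # strictly interior values for the Beta part

print(is_zero)  # 1 where y == 0, else 0
print(is_one)   # 1 where y == 1, else 0
print(y_cont)   # the values strictly between 0 and 1
```

Each piece would then be passed as `observed=` to its own likelihood term: `is_zero` and `is_one` to the two `Bernoulli` variables, and `y_cont` to the `Beta` (along with the matching subset of the covariate for each).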

Thanks for explaining this! I believe I misunderstood what was actually happening with the code.

The samples should be drawn for the parameters, not for the y’s, which are used in the likelihood.

I have updated my code to the following:

```python
with pm.Model():
    # Priors for unknown model parameters
    theta1 = pm.Normal("theta1", mu=theta1_mu, sigma=theta1_sd)
    theta2 = pm.Normal("theta2", mu=theta2_mu, sigma=theta2_sd)
    theta3 = pm.Normal("theta3", mu=theta3_mu, sigma=theta3_sd)
    theta4 = pm.Normal("theta4", mu=theta4_mu, sigma=theta4_sd)
    theta5 = pm.Normal("theta5", mu=theta5_mu, sigma=theta5_sd)
    theta6 = pm.Normal("theta6", mu=theta6_mu, sigma=theta6_sd)
    precision = pm.Uniform("precision", 0, 100)

    PI_0 = pm.Deterministic("PI_0", pm.math.invlogit(theta3 + theta4 * v))
    PI_1 = pm.Deterministic("PI_1", pm.math.invlogit(theta5 + theta6 * v))
    mu = pm.Deterministic("mu", pm.math.invprobit((1 / theta2) * pm.math.log(v / theta1)))

    # Likelihood (sampling distribution) of observations
    y_zero = pm.Bernoulli("y_zero", p=PI_0, observed=y_zero_obs)
    y_one = pm.Bernoulli("y_one", p=PI_1, observed=y_one_obs)
    y_beta = pm.Beta("y_beta", alpha=mu * precision, beta=(1 - mu) * precision, observed=y_beta_obs)

    trace = pm.sample(3000, tune=1000, chains=3, cores=1)
```

I am using data generated with NumPy as the observations:

```python
y_zero_obs = np.random.binomial(size, pi0, size=size)
y_one_obs = np.random.binomial(size, (1 - pi0) * pi1, size=size)
y_beta_obs = np.random.beta(mu_obs + epsilon, (1 - mu_obs) + epsilon, size=size)
```

But this gives me the following error:

```
pymc3.exceptions.SamplingError: Initial evaluation of model at starting point failed!
Starting values:
{'theta1': array(320.), 'theta2': array(0.15), 'theta3': array(89.9811098), 'theta4': array(-111.96347195), 'theta5': array(-89.9811098), 'theta6': array(61.71286695), 'precision_interval__': array(0.)}

Initial evaluation results:
theta1                 -4.14
theta2                  2.59
theta3                 -3.12
theta4                 -3.33
theta5                 -3.12
theta6                 -2.74
precision_interval__   -1.39
y_zero                  -inf
y_one                  -0.00
y_beta                  -inf
Name: Log-probability of test_point, dtype: float64
```

Thanks again for the help so far!

That could be a numerical precision issue. Try parametrizing the probabilities on the logit scale and passing `logit_p` directly to `Bernoulli`. You may also consider upgrading from PyMC3 to a newer version of PyMC (>= 5.0), as there have been many improvements in stability and model debugging.
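To see why the logit scale helps: squashing an extreme linear predictor through the sigmoid and then taking the log hits `-inf`, while the same log-probability computed directly on the logit scale stays finite. A small NumPy illustration (the value of `eta` is hypothetical; with wide Normal priors on the thetas, the linear predictor can easily get this extreme):

```python
import numpy as np

eta = -800.0  # extreme linear predictor, e.g. theta3 + theta4 * v

with np.errstate(over="ignore", divide="ignore"):
    p = 1.0 / (1.0 + np.exp(-eta))  # inverse logit underflows to exactly 0.0
    logp_naive = np.log(p)          # log(0) -> -inf, and the model blows up

# Computed on the logit scale, the same quantity stays finite:
# log sigmoid(eta) = -log(1 + exp(-eta)) = -logaddexp(0, -eta)
logp_stable = -np.logaddexp(0.0, -eta)

print(p, logp_naive)  # 0.0 -inf
print(logp_stable)    # -800.0
```

Passing `logit_p=eta` to `pm.Bernoulli` lets the library do this kind of numerically stable computation internally instead of evaluating `p` explicitly.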


Thanks for the help again! I tinkered with the generated data and changed the Bernoulli distributions to the `logit_p` parameterization. The sampling works, I believe, but when I attempt to plot the trace using ArviZ, it complains that the sampling has only generated a single value or no finite values.

The code:

```python
PI_0 = pm.Deterministic("PI_0", theta3 + theta4 * v)
PI_1 = pm.Deterministic("PI_1", theta5 + theta6 * v)

y_zero = pm.Bernoulli("y_zero", logit_p=PI_0, observed=y_zero_obs)
y_one = pm.Bernoulli("y_one", logit_p=PI_1, observed=y_one_obs)
y_beta = pm.Beta("y_beta", alpha=mu * precision, beta=(1 - mu) * precision, observed=y_beta_obs)

trace = pm.sample(1500, tune=1000, chains=3, cores=1)
```

The plot:

```python
az.plot_trace(data=trace)
az.summary(trace)
```

The warnings:

```
Got error No model on context stack. trying to find log_likelihood in translation.
warnings.warn("Your data appears to have a single value or no finite values")
```

Since this is related to ArviZ, I could open a different thread if that’s more suitable. Thanks again for the help so far!