Using estimates from zero-inflated negative binomial regression in an optimization problem

Assume that I wish to optimize an investment budget across two different investment opportunities. To begin with, we assume that we have data on previous investments and the resulting sales for each opportunity. We also assume that the responses (let's say sales) are not correlated between the two investments, and that the investments do not interact in any way. The response variable for both investments can be assumed to follow a zero-inflated negative binomial distribution. We assume that sales for investment 1 can be modeled as y_{1, t} = e^{b_{1, t}} \cdot x_{1, t}^{\text{exponent}_{1, t}}, where y_{1, t} is the sales attributed to investment 1 at timestep t and x_{1, t} is the corresponding investment. Investment 2 can be modeled in the same fashion.
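For intuition: with 0 < exponent < 1 this response curve is concave in spend (diminishing returns), which is what makes the budget split non-trivial. A minimal sketch, assuming made-up parameter values (b_1 = 2, exponent_1 = 0.5 are illustrative, not fitted):

import numpy as np

# Illustrative (made-up) parameters
b_1, exponent_1 = 2.0, 0.5

x = np.linspace(0.0, 10.0, 5)          # candidate spend levels for investment 1
expected_sales = np.exp(b_1) * x ** exponent_1

# Diminishing returns: each extra unit of spend adds less than the previous one
print(np.round(expected_sales, 1))     # [ 0.  11.7 16.5 20.2 23.4]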

My question then is whether it is sufficient to simply multiply our posterior over 1 - \psi (where \psi denotes the zero-inflation probability) with our posterior over e^{b_{1, t}}, and wrap the product in a deterministic variable, to get the appropriately scaled coefficient e^{b_{1, t, \text{zero-inflation-scaled}}} that we can later feed into our optimization problem, where we try to maximize the sales from our investments:

\max_{x_1, x_2} \sum_{c \in \{1, 2\}} \sum_{s} (1 - \psi_{c, s}) \cdot e^{b_{c, s}} \cdot x_c^{\text{exponent}_{c, s}}

subject to some constraints such as:

\sum_{c \in \{1, 2\}} x_c \leq B \quad \text{and} \quad x_c \geq 0 \quad \forall c,

where s indexes the draws from the joint posterior (one sampled set of parameters per draw).
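To make the optimization step concrete, here is a minimal sketch using scipy, assuming we already have flattened arrays of posterior draws; the scaled_coef and exponent arrays below are hypothetical placeholders for the sampled (1 - \psi_{c, s}) e^{b_{c, s}} and \text{exponent}_{c, s}:

import numpy as np
from scipy.optimize import minimize

# Hypothetical posterior draws, shape (n_samples, n_channels); in practice
# these would come from the fitted model's trace
rng = np.random.default_rng(0)
scaled_coef = rng.lognormal(mean=1.0, sigma=0.1, size=(500, 2))  # (1 - psi_c) * exp(b_c)
exponent = rng.beta(5, 5, size=(500, 2))                         # exponent_c
B = 100.0                                                        # total budget

def negative_expected_sales(x):
    # Sum over channels and posterior samples, matching the objective above;
    # negated because scipy minimizes
    return -np.sum(scaled_coef * x ** exponent)

result = minimize(
    negative_expected_sales,
    x0=np.array([B / 2, B / 2]),
    method="SLSQP",
    bounds=[(0.0, B), (0.0, B)],
    constraints=[{"type": "ineq", "fun": lambda x: B - np.sum(x)}],
)
print(result.x)  # optimal split of the budget across the two investments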

Consider the code below:

import numpy as np
import arviz as az
import pymc as pm
import pymc.sampling_jax as sampling_jax


def zero_inflated_negative_binomial_test():
    # Generate covariates and a response following the assumed model
    # mu = exp(b_1) * x_1 ** exponent, with true values b_1 = 2, exponent = 0.5
    x_1 = np.random.lognormal(sigma=0.5, size=1000)
    response_1 = np.array(
        [max(0.0, round(x)) for x in np.exp(2) * x_1 ** 0.5 + np.random.normal(scale=1,
                                                                               size=1000)])

    # Structural zeros: nb_component is 1 with probability 0.4, so roughly
    # 60% of the observations are forced to zero
    nb_component = np.random.binomial(n=1, p=0.4, size=1000)
    response_1[nb_component == 0] = 0

    with pm.Model() as model:
        b_1 = pm.HalfNormal('b_1', sigma=10)
        exponent = pm.Beta('exponent', alpha=1, beta=1)

        mu = pm.math.exp(b_1) * x_1 ** exponent

        alpha = pm.Exponential("alpha", 10)
        psi = pm.Beta("psi", alpha=1, beta=1)

        # Note: in PyMC's parameterization, psi is the expected proportion of
        # negative binomial (non-structural-zero) variates, i.e. E[y] = psi * mu,
        # so psi here plays the role of (1 - \psi) in the formulas above
        truecoefficient = pm.Deterministic(name="truecoefficient", var=psi * pm.math.exp(b_1))

        _ = pm.ZeroInflatedNegativeBinomial(
            name="outcome",
            psi=psi,
            mu=mu,
            alpha=alpha,
            observed=response_1,
        )

        trace = sampling_jax.sample_blackjax_nuts(
            1000,
            tune=1000,
            chains=2,
            target_accept=0.97,
            random_seed=0,
            idata_kwargs={"log_likelihood": True},
        )

        var_names = ["truecoefficient", "b_1", "exponent", "psi"]
        pm.sample_posterior_predictive(
            trace, var_names=var_names, random_seed=0, extend_inferencedata=True)

        print(az.summary(trace, var_names=var_names))


if __name__ == "__main__":
    zero_inflated_negative_binomial_test()

where truecoefficient represents the zero-inflation-scaled e^{b_{1, t}}; its posterior samples would then replace (1 - \psi) e^{b_1} in the optimization problem. If this is true, we could, for example, easily feed our optimizer with both negative binomial models and zero-inflated negative binomial models.
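Assuming the trace from the code above, the relevant draws could then be pulled out along these lines (variable names are illustrative; this continues from the fitted model):

# Flatten the chain and draw dimensions of the posterior draws
scaled_coef_1 = trace.posterior["truecoefficient"].values.flatten()
exponent_1 = trace.posterior["exponent"].values.flatten()

# Posterior-averaged expected sales for a candidate spend level x,
# ready to be plugged into the objective above
def expected_sales_1(x):
    return np.mean(scaled_coef_1 * x ** exponent_1)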

Similarly, considering a model such as y_t = \text{intercept} + b_{1, t} \cdot x_{1, t} + b_{2, t} \cdot x_{2, t}, could we simply multiply the right-hand side's posteriors and future investment proposals by (1 - \psi) and feed that into the optimization problem? Point predictions would obviously not be exact, since we neither account for the full likelihood function nor discretize the output, but shouldn't this approach be sufficient to get some sort of mean prediction?
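As a sketch of that linear case, with hypothetical posterior draws standing in for fitted values:

import numpy as np

rng = np.random.default_rng(1)
n_samples = 500

# Hypothetical posterior draws for the linear model's parameters
intercept = rng.normal(1.0, 0.1, size=n_samples)
b_1 = rng.normal(0.5, 0.05, size=n_samples)
b_2 = rng.normal(0.3, 0.05, size=n_samples)
psi = rng.beta(2, 8, size=n_samples)  # zero-inflation probability

def mean_prediction(x_1, x_2):
    # E[y] under the zero-inflated model: the linear predictor scaled by the
    # probability of not being a structural zero, averaged over the posterior
    return np.mean((1 - psi) * (intercept + b_1 * x_1 + b_2 * x_2))

print(mean_prediction(10.0, 5.0))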