Combining two distributions into a new one & a scale parameter modifying probability depending on category

Hi,
thank you for the answer and the links; I had already looked at many of them.
Currently I only have it implemented in Julia with Turing.jl.

@model function full_model(x, y, bias_x, bias_y, levels,
    n_suc, n_trials, variance, intercept, slope)

    σ²_ff ~ truncated(Normal(0, 100), 0, Inf)
    pred_err_scale ~ Normal(0, 2)
    log_level_scale ~ Normal(0, 3)

    for i in eachindex(x)
        pred_bias = intercept + slope * bias_x[i]
        pred_stim = intercept + slope * x[i]
        prediction_error = bias_y[i] - pred_bias
        biased_prediction = pred_stim + pred_err_scale * prediction_error
        bias_distr = Normal(pred_bias, sqrt(variance))
        stim_distr = Normal(y[i], sqrt(σ²_ff))
        combined = combine_distributions(bias_distr, stim_distr)
        p_o = cdf(combined, exp(log_level_scale) * levels[i] + y[i])
        n_suc[i] ~ Binomial(n_trials[i], p_o)
    end
end
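For context, what I mean by combining: assuming the two distributions are independent Normals, their sum is again Normal, with the means added and the variances added. A quick scipy sketch (`combine_two_normals` is just an illustrative stand-in for my Julia helper, and the parameters are toy values):

```python
import numpy as np
from scipy import stats

def combine_two_normals(d1, d2):
    """Distribution of the sum of two independent Normals (illustrative stand-in)."""
    mu = d1.mean() + d2.mean()
    sigma = np.sqrt(d1.var() + d2.var())
    return stats.norm(mu, sigma)

bias_distr = stats.norm(1.0, 2.0)   # stands in for Normal(pred_bias, sqrt(variance))
stim_distr = stats.norm(3.0, 1.5)   # stands in for Normal(y[i], sqrt(σ²_ff))
combined = combine_two_normals(bias_distr, stim_distr)

# The CDF of the combined Normal is a probability, so it stays in [0, 1]
p_o = combined.cdf(4.0)
```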

In PyMC I have something along the lines of:

with pm.Model() as full_model:
    
    x_bias = pm.Data('x_bias', df_.x_bias)
    y_bias = pm.Data('y_bias', df_.y_bias)
    y = pm.Data('y', df_.y)
    x = pm.Data('x', df_.x)
    level = pm.Data('level', df_.level)
    nr_suc = pm.Data('nr_suc', df_.nr_suc)
    nr = pm.Data('nr', df_.nr)
    
    intercept = pm.Data('intercept', hb_intercept)
    slope = pm.Data('slope', hb_slope)
    var = pm.Data('var', hb_var)
    
    sigma = pm.TruncatedNormal('sigma', mu=0, sigma=100, lower=0)
    pred_err_scale = pm.Normal('pred_err_scale', mu=0, sigma=2)
    log_level_scale = pm.Normal('log_level_scale', mu=0, sigma=3)
    
    pred_bias  = intercept + slope * x_bias
    pred_stim = intercept + slope * x
    
    prediction_error = y_bias - pred_bias
    
    biased_prediction  = pred_stim + pred_err_scale * prediction_error
    
    bias_distr  = pm.Normal('bias_distr', mu=pred_bias, sigma=np.sqrt(var))
    stim_distr = pm.Normal('stim_distr', mu=y, sigma=sigma)
    combined = pm.Normal('combined', mu=bias_distr + stim_distr)
    
    value = np.exp(log_level_scale) * level + y
    p = pm.logcdf(combined, value)
    
    n_os = pm.Binomial('respone', n=nr, p=p, observed=nr_suc)

    trace = pm.sample(draws=4000, chains=4, tune=1000, return_inferencedata=True)

but the model does not run: it fails with an “Initial evaluation of model at starting point failed!” error because the log-probability of the responses evaluates to -inf. My assumption was that the new distribution is not implemented correctly, since the responses should not evaluate to -inf when p takes a value between 0 and 1 (which it should be limited to due to the CDF, right?).

Initial evaluation results:
{'sigma': -4.42, 'pred_err_scale': -1.71, 'log_level_scale': -2.06, 'bias_distr': -59.35, 'stim_distr': -77.95, 'combined': -61.15, 'respone': -inf}
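To double-check my 0-to-1 reasoning outside of PyMC: a CDF value is a probability in [0, 1], but a log-CDF value is always ≤ 0, so the two live on different scales. A small scipy check (toy distribution and value):

```python
import numpy as np
from scipy import stats

dist = stats.norm(0.0, 1.0)
value = 0.5

p = dist.cdf(value)         # a probability, always in [0, 1]
log_p = dist.logcdf(value)  # log of that probability, always <= 0

# exponentiating the log-CDF recovers the probability
p_recovered = np.exp(log_p)
```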

I have also tried pm.NormalMixture, and wrapping a loop around it as in the initial example, but that gives me errors saying either that an elementwise calculation of logcdf is not possible or that the parameters already exist.
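As a cross-check on the mixture route outside PyMC: the CDF of a mixture is just the weighted sum of the component CDFs, which can be evaluated elementwise with plain numpy/scipy (weights and parameters below are toy values):

```python
import numpy as np
from scipy import stats

# Two-component Normal mixture (toy weights and parameters)
w = np.array([0.4, 0.6])
mus = np.array([0.0, 2.0])
sigmas = np.array([1.0, 0.5])

x = np.linspace(-3.0, 5.0, 9)
# Mixture CDF = sum_k w_k * F_k(x), evaluated elementwise over x
mix_cdf = sum(wk * stats.norm(mk, sk).cdf(x) for wk, mk, sk in zip(w, mus, sigmas))
```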