I don’t quite understand why you applied censoring as above. Conventionally you have two limits, say lower = 0 and upper = 10; any data below the lower limit is recorded as 0 (left censoring) and any data above 10 is recorded as 10 (right censoring). In your case, yerr and y (the data you are fitting) are given as
yerr = [0.175, 0.059, 0.011, 0.417, 0.01, 0.024, 0.021, 0.013, 0.044, 0.029, 0.083, 0.049, 0.115, 0.011, 0.023, 0.078, 0.028, 0.098, 0.026, 0.034, 0.038, -1, -1, -1, -999, -999, -999, -999]
y = [14.392 , 13.393 , 14.493 , 14.898 , 13.187 ,
13.512 , 14.396 , 13.489 , 12.652 , 13.838 ,
13.506 , 12.796 , 12.794 , 14.586 , 14.204 ,
13.107 , 13.702 , 12.483 , 13.125 , 13.392 ,
14.314 , 14.458 , 14.944 , 14.777 , 11.76328143,
12.30545265, 12.02984059, 11.66659327]
and you are censoring via yerr with the following:
right_censored = (y_err < -10)               # picks out the points flagged with yerr == -999
left_censored = (y_err > -10) & (y_err < 0)  # picks out the points flagged with yerr == -1
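For contrast, conventional censoring would be defined from y itself against known detection limits, something like the sketch below (the lower and upper values here are made up purely to illustrate the convention, they are not taken from your data):

import numpy as np

y = np.asarray(y)                 # your y values from above
lower, upper = 12.0, 15.0         # hypothetical limits, just for illustration

left_censored = (y <= lower)      # true value only known to be <= lower
right_censored = (y >= upper)     # true value only known to be >= upper
uncensored = ~(left_censored | right_censored)

# censored observations are recorded at the limit itself
y_recorded = np.clip(y, lower, upper)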
Are you really trying to censor, or do you just want to model the data points with unexpected errors separately?
If the latter, I don’t think you should be using a censoring-style approach with CDFs; you could instead use three separate likelihoods, all with a power-law mean and possibly different parameters, along the lines of the sketch below.
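This is only a rough sketch of what I mean: giving each group its own sigma (and, if you really believe it, its own A and B) is my assumption about what you might want; the priors are placeholders, and x_un/x_lc/x_rc, y_un/y_lc/y_rc are the splits from your own code.

import aesara.tensor as at
import pymc as pm

with pm.Model() as separate_model:
    A = pm.HalfNormal("A", 10)                  # placeholder priors -- use your own
    B = pm.Normal("B", 0, 1)
    sigma = pm.HalfNormal("sigma", 1, shape=3)  # one sigma per data group

    # same power-law mean, but each group gets its own noise level
    pm.Normal("obs_normal", mu=A * at.power(x_un, B), sigma=sigma[0], observed=y_un)
    pm.Normal("obs_flag1", mu=A * at.power(x_lc, B), sigma=sigma[1], observed=y_lc)
    pm.Normal("obs_flag2", mu=A * at.power(x_rc, B), sigma=sigma[2], observed=y_rc)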
Or is there a reason you went with a censoring-style approach here that I am not seeing? In any case, if you are confident that censoring is what you need, one thing I would not do is use separate sigmas as in
y_likelihood = pm.Normal("y_likelihood", mu= A * at.power(x_un, B), sigma=np.abs(y_err_un), observed=y_un)
left_censored = pm.Potential("left_censored", normal_lcdf( A * at.power(x_lc, B), σ, y_lc))
right_censored = pm.Potential("right_censored", normal_lccdf( A * at.power(x_rc, B), σ, y_rc))
The whole point of censoring is to assume that censored and uncensored data come from the same distribution; you just cannot pinpoint the exact location of the censored data, so you use CDFs instead of PDFs for those points. So if you want to model sigma, I would go with
y_likelihood = pm.Normal("y_likelihood", mu= A * at.power(x_un, B), sigma=σ, observed=y_un)
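Put together, a minimal sketch of the whole model with one shared sigma could look like this (assuming PyMC 4.x, where normal_lcdf/normal_lccdf live in pymc.distributions.dist_math; the priors on A, B and sigma are placeholders I made up, and x_un/x_lc/x_rc, y_un/y_lc/y_rc are the splits you already have):

import aesara.tensor as at
import pymc as pm
from pymc.distributions.dist_math import normal_lcdf, normal_lccdf

with pm.Model() as censored_model:
    A = pm.HalfNormal("A", 10)          # placeholder priors -- use your own
    B = pm.Normal("B", 0, 1)
    sigma = pm.HalfNormal("sigma", 1)   # one shared sigma for every data point

    # uncensored points: ordinary Normal density
    pm.Normal("y_likelihood", mu=A * at.power(x_un, B), sigma=sigma, observed=y_un)

    # censored points: log-CDF / log-complementary-CDF terms with the SAME sigma
    pm.Potential("left_censored", normal_lcdf(A * at.power(x_lc, B), sigma, y_lc))
    pm.Potential("right_censored", normal_lccdf(A * at.power(x_rc, B), sigma, y_rc))

    idata = pm.sample()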
With this change I get something that looks like

[plot]

whereas your code gives me

[plot]

which I assume comes from the fact that your censored and uncensored data are being modelled as coming from different distributions.
Again, this depends on your expectations, but normally uncensored and censored data are assumed to come from the same distribution, and left- and right-censored values are usually recorded at constant low and high limits. Changing these assumptions means you are moving away from the territory of censored data modelling, so you would really need to make sure your model is reasonable. I suggest you read the link I posted above, which clearly lays out the assumptions behind censored data modelling.
PS:
I would also urge you to update PyMC (you are using aesara, which suggests your PyMC version might be old) and to use pm.Censored, to minimize the risk of making a mistake when setting this up. I have not checked carefully whether you have used the CDFs correctly, though it looks like it; when I do it manually I use CDF(x) and 1 - CDF(x), which is less confusing.
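For reference, here is a minimal pm.Censored sketch, under the assumption that you actually have fixed lower/upper limits and record censored points at those limits. The limits and priors are made up, x stands for your full predictor array (the concatenation of your x_un/x_lc/x_rc), and recent PyMC versions use pytensor rather than aesara:

import numpy as np
import pymc as pm
import pytensor.tensor as pt   # `aesara.tensor as at` in older PyMC versions

lower, upper = 12.0, 15.0                      # hypothetical limits -- replace with yours
y_obs = np.clip(np.asarray(y), lower, upper)   # censored points recorded at the limits

with pm.Model() as model:
    A = pm.HalfNormal("A", 10)                 # placeholder priors -- use your own
    B = pm.Normal("B", 0, 1)
    sigma = pm.HalfNormal("sigma", 1)

    # latent (uncensored) distribution, then let pm.Censored add the CDF terms
    latent = pm.Normal.dist(mu=A * pt.power(x, B), sigma=sigma)
    pm.Censored("y_likelihood", latent, lower=lower, upper=upper, observed=y_obs)

    idata = pm.sample()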



