Observations depending on random variables of the model

Dear pymc team,

I am a pymc beginner, so I apologize if the question I am asking is too obvious. I introduce the problem below in case you can give me a hint on how to proceed:

I am trying to implement a model of the following type:

(A - log(X)) * (B - log(Y)) ~ Weibull(Alpha, Beta, Gamma),

where A, B, Alpha, Beta, and Gamma are input random variables of the Bayesian model, and Weibull is the shifted Weibull minimum distribution, which is not among the standard pymc distributions but which I have been able to implement using the CustomDist class. Finally, X is a deterministic independent variable of the experiment and Y is the observation of the experiment.

In the examples I have been looking at in the manual, the observations are assigned directly to a given probability distribution. But in my case, the observation looks like this:

log(Y) = B - Weibull(Alpha, Beta, Gamma) / (A - log(X))

and therefore my observation, in addition to following a probability distribution, also depends on other random variables. The only thing I can think of is to redefine my observation as (A - log(X)) * (B - log(Y)), but in that case my observed quantity is not independent data: it depends on random variables. Unless you tell me otherwise, I think this is not correct.

Any hint on how to convert my model into a form implementable in pymc?

In advance, thank you very much for your support.

Hodei

You can use CustomDist to transform the distribution instead of transforming the observed values.

def dist(A, B, Alpha, Beta, Gamma, X, size):
    # Weibull.dist stands in for your shifted-Weibull helper; pt is pytensor.tensor
    return B - Weibull.dist(Alpha, Beta, Gamma, size=size) / (A - pt.log(X))

pm.CustomDist("log(Y)", A, B, Alpha, Beta, Gamma, X, dist=dist, observed=np.log(y))

Thank you very much Ricardo. I would not have thought of that. Your proposal allows me to include my model in pymc. Issue solved.

Let me know if it works! This is somewhat recent functionality.