Hi,
I am fairly new to PyMC and have previously worked with Stan and Turing. My goal is to implement a model that depends on many stimulus properties, in which two distributions are combined into a new one that in turn serves as the probability estimate for the number of responses. Depending on the category membership X (3 levels: -1, 0, 1), the response is ‘biased’ to be more or less probable. I have tried to implement this, so far unsuccessfully.
My first question: how can I access the mu and sd of a distribution to create a combined distribution from them, along the lines of

```python
combined = pm.Normal('combined_distr', mu=distr1.mu + distr2.mu, sigma=distr1.sd + distr2.sd)
```

I tried different variations, and also defining the distributions to be combined with `pm.Normal.dist(mu, sd)`. Is there any possibility, or a built-in math function, to manipulate two distributions (addition, multiplication, …)?
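For reference, here is a minimal sketch of what I was trying to express, on the assumption that the parameters are themselves random variables that can simply be added; all names and priors below are placeholders, not my real model:

```python
import pymc as pm

# Placeholder sketch: combine the *parameters* of two normals directly
# (mu1, mu2, sd1, sd2 and their priors are made up for illustration).
with pm.Model() as model:
    mu1 = pm.Normal("mu1", mu=0, sigma=1)
    mu2 = pm.Normal("mu2", mu=0, sigma=1)
    sd1 = pm.HalfNormal("sd1", sigma=1)
    sd2 = pm.HalfNormal("sd2", sigma=1)

    # the combined distribution built from the summed parameters
    combined = pm.Normal("combined_distr", mu=mu1 + mu2, sigma=sd1 + sd2)
```

Is working with the parameter variables like this the idiomatic way, or is there a way to manipulate the distribution objects themselves?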
My second question: on this combined distribution I need a free level-scale parameter that shifts the point at which the log-probability is evaluated on the CDF, depending on category membership. In pseudocode:

```python
pred = cdf(combined, level_scale * df.level + df.stimulus, observed=df.response)
```
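Spelled out, the likelihood I have in mind looks roughly like the following sketch, assuming `df` has columns `level` (-1/0/1), `stimulus`, and a 0/1 `response`, and a Bernoulli per trial (it could just as well be a Binomial on counts); the toy data and priors are placeholders:

```python
import numpy as np
import pandas as pd
import pymc as pm

# toy data just so the sketch runs; my real df has these columns
df = pd.DataFrame({
    "level": np.random.choice([-1, 0, 1], size=100),
    "stimulus": np.random.normal(size=100),
    "response": np.random.binomial(1, 0.5, size=100),
})

with pm.Model() as sketch_model:
    mu_c = pm.Normal("mu_c", mu=0, sigma=1)    # stands in for mu1 + mu2
    sd_c = pm.HalfNormal("sd_c", sigma=1)      # stands in for sd1 + sd2
    level_scale = pm.Normal("level_scale", mu=0, sigma=1)

    # evaluation point on the combined distribution, shifted per category level
    x = level_scale * df["level"].values + df["stimulus"].values

    # response probability = CDF of the combined normal at x
    p = pm.math.exp(pm.logcdf(pm.Normal.dist(mu=mu_c, sigma=sd_c), x))

    pm.Bernoulli("obs", p=p, observed=df["response"].values)
```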
I tried to write a custom distribution function, but since I also want to sample the scale parameter, would I need to implement some kind of gradient search inside that log-likelihood function as well? Note that I want to estimate only one scale parameter for the levels, not one per category.
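My current understanding (please correct me if this is wrong) is that no manual gradient code should be needed as long as the log-likelihood is written with PyTensor/pm.math operations, e.g. wrapped in a pm.Potential; the following variant reuses the placeholder setup from the sketch above:

```python
import numpy as np
import pandas as pd
import pymc as pm

# same toy data as above, only so this variant runs on its own
df = pd.DataFrame({
    "level": np.random.choice([-1, 0, 1], size=100),
    "stimulus": np.random.normal(size=100),
    "response": np.random.binomial(1, 0.5, size=100),
})

with pm.Model() as potential_model:
    mu_c = pm.Normal("mu_c", mu=0, sigma=1)
    sd_c = pm.HalfNormal("sd_c", sigma=1)
    level_scale = pm.Normal("level_scale", mu=0, sigma=1)

    x = level_scale * df["level"].values + df["stimulus"].values
    p = pm.math.exp(pm.logcdf(pm.Normal.dist(mu=mu_c, sigma=sd_c), x))

    # custom log-likelihood wrapped in a Potential; gradients come from
    # PyTensor's autodiff, so no hand-written gradient search (my assumption)
    pm.Potential(
        "resp_loglike",
        pm.logp(pm.Bernoulli.dist(p=p), df["response"].values).sum(),
    )
```

Is that understanding correct, or is a custom distribution with its own gradient really required for the single `level_scale` parameter here?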
I would appreciate some ideas.