> There were divergences after tuning. Increase `target_accept` or reparameterize.
>
> The number of effective samples is smaller than 10% for some parameters.

Alpha and beta do need to be positive, so you probably do want the priors for these parameters to have positive support (i.e., to suggest that only positive values are credible). But there is a difference between a prior that naturally has positive support (e.g., a gamma distribution) and a prior that is simply chopped off (i.e., truncated) at values <= 0.
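For concreteness, here is a minimal sketch of that swap in PyMC. The variable names `alpha` and `beta` just mirror your parameters, and the Gamma(2, 1) values are purely illustrative, not a recommendation for your data:

```python
import pymc as pm

with pm.Model() as model:
    # A hard-edged prior: mathematically valid, but its flat density
    # gives the sampler no warning as it approaches the boundary at 0.
    # alpha = pm.Uniform("alpha", lower=0, upper=10)

    # A naturally positive prior: the density itself tapers toward 0,
    # steering the sampler away from the boundary before it gets there.
    alpha = pm.Gamma("alpha", alpha=2.0, beta=1.0)
    beta = pm.Gamma("beta", alpha=2.0, beta=1.0)
```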

Consider the gamma distribution below. Note how the shape of this distribution between x=0 and x=4 strongly implies that negative values are not credible (even though those are positive values of x). So as the sampler wanders semi-blindly around the parameter space, this distribution should discourage it from moving farther and farther left long before it attempts to explore negative values.

*[Figure: density of a gamma distribution plotted from x = 0 to x = 4, tapering to zero near the origin]*
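If you want to reproduce a plot like this yourself, here is a short sketch; the Gamma(2, 2) parameters are an illustrative choice, not the ones from the figure above:

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

x = np.linspace(0, 4, 400)
# Gamma(shape=2, rate=2) corresponds to scale = 1/2; its density is 0 at x = 0
plt.plot(x, stats.gamma.pdf(x, a=2.0, scale=0.5))
plt.xlabel("x")
plt.ylabel("density")
plt.title("Gamma prior density tapers toward zero near x = 0")
plt.show()
```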

So, to repeat: the uniform prior you were using is totally fine from a mathematical point of view. But the sampler (i.e., MCMC/NUTS) is wandering around the parameter space, trying out parameter values one by one, and distributions with “sharp corners” (e.g., truncated distributions) don’t provide much guidance about invalid regions of the parameter space until the sampler is already in such a region (at which point it’s too late: the divergence has already occurred).
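Independent of the prior choice, you can also follow the warning’s other suggestion and raise `target_accept`. A sketch continuing the model above, assuming PyMC v4+ (which returns an ArviZ `InferenceData`):

```python
# Continuing the sketch above
with model:
    # target_accept defaults to 0.8; raising it makes NUTS take smaller
    # steps, which often removes residual divergences
    idata = pm.sample(target_accept=0.95)

# Count any divergences recorded during sampling
print(int(idata.sample_stats["diverging"].sum()))
```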