There are multiple models that we’re hoping to analyze, but right now we’re working on the simplest one, which involves a single parameter \Theta and no independent variables x_i. In this model, the likelihood function for a given observation y_i is
\ell = \begin{cases} \frac{ \Theta^2 \sqrt{\Theta^2 - T_i^2} + (1 - \Theta^2 - T_i^2)\tan^{-1}\sqrt{\Theta^2 - T_i^2}}{(\Theta^2 + 1 - T_i^2)} & \Theta \geq T_i \\ 0 & \Theta < T_i \end{cases}
where T_i^2 = 1 - y_i^2. For \Theta \gtrsim T_i, this gives \ell \approx \sqrt{2 T_i}\,(1 - T_i^2) \sqrt{\Theta - T_i} + \mathcal{O}\big((\Theta - T_i)^{3/2}\big), so \ln \ell \approx \tfrac{1}{2} \ln(\Theta - T_i) + \text{const.}, which has a logarithmic divergence as \Theta \to T_i from above.
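For concreteness, here's a quick NumPy sketch of the likelihood as written above, plus a numerical check of that leading-order behavior near \Theta = T_i (the function name and the test value of y_i are mine, purely for illustration):

```python
import numpy as np

def likelihood(theta, y):
    """Per-observation likelihood ell(theta; y), with T^2 = 1 - y^2.

    Returns 0 for theta < T, where the observation is impossible.
    """
    T2 = 1.0 - y**2
    if theta < np.sqrt(T2):
        return 0.0
    s = np.sqrt(theta**2 - T2)               # sqrt(Theta^2 - T_i^2)
    num = theta**2 * s + (1.0 - theta**2 - T2) * np.arctan(s)
    return num / (theta**2 + 1.0 - T2)

# Check the leading-order behavior ell ~ sqrt(2 T)(1 - T^2) sqrt(theta - T):
y = 0.6                                      # illustrative observation
T = np.sqrt(1.0 - y**2)
for eps in (1e-2, 1e-4, 1e-6):
    exact = likelihood(T + eps, y)
    leading = np.sqrt(2.0 * T) * (1.0 - T**2) * np.sqrt(eps)
    print(f"eps={eps:.0e}  exact={exact:.6e}  leading-order={leading:.6e}")
```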
In the simple one-parameter case, you can actually impose a truncated prior to cure this: just find the largest value of T_i among all the observations y_i and truncate the prior for \Theta there. But we'll eventually want to expand our analysis to cases where there are more parameters and the observations depend on independent variables x_i. In those cases, \Theta is replaced in the above likelihood function by a quantity f(\vec{\Theta}, x_i) that depends on both the parameters and the independent variables. This means that the constraints on parameter space will be much more complicated, which is why I'm loath to just truncate the prior and call it a day.
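For the record, in the one-parameter case the truncation is a one-liner on top of the likelihood sketched above; something like the following, where the upper bound theta_max is an assumption I've added only so the flat prior is proper:

```python
import numpy as np

def log_posterior(theta, y_obs, theta_max=2.0):
    """Log-posterior with a flat prior truncated below at max_i T_i.

    Builds on likelihood() from the sketch above; theta_max is an
    assumed upper bound, there only to make the flat prior proper.
    """
    theta_min = np.max(np.sqrt(1.0 - np.asarray(y_obs)**2))  # largest T_i in the data
    if not (theta_min < theta <= theta_max):
        return -np.inf                       # zero prior mass outside the truncation
    # A flat prior contributes only an additive constant, so just sum the log-likelihoods.
    return sum(np.log(likelihood(theta, y)) for y in y_obs)
```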
(I’m also having one of my students try adding artificial “observational noise” to the model, which would probably cure the problem by making any “impossible” observations y_i for a given value of \Theta merely “very very unlikely”. He’s making progress on it, but if we stall out on that idea I might post another thread here about how to implement it.)
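(If it helps frame that idea: as I understand it, it amounts to numerically convolving the likelihood with a Gaussian in y, roughly as in the sketch below. The noise scale sigma, the grid size, and the assumed (0, 1) support for y are all my own illustrative choices, not necessarily how he's actually implementing it.)

```python
import numpy as np

def noisy_likelihood(theta, y, sigma=0.05, n_grid=400):
    """Likelihood after adding Gaussian observational noise of scale sigma.

    Numerically marginalizes over the "true" observation y':
        ell_noisy(theta; y) = integral of ell(theta; y') * N(y - y'; sigma) dy'
    so a y that is impossible for this theta picks up mass from nearby
    possible y' values. Builds on likelihood() from the first sketch;
    sigma, n_grid, and the (0, 1) support for y' are all assumptions.
    """
    yp = np.linspace(1e-6, 1.0 - 1e-6, n_grid)           # grid of candidate true y'
    ell = np.array([likelihood(theta, v) for v in yp])   # noiseless likelihood on grid
    kernel = np.exp(-0.5 * ((y - yp) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return float(np.sum(ell * kernel) * (yp[1] - yp[0]))  # Riemann-sum convolution
```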