I have a problem which can be simplified down to determining the two diameters of an ellipse given a measurement of the width of the ellipse at some arbitrary point. The attached figure shows my derivation of the measured diameter (d) as a function of the two radii of the ellipse (r1 and r2), the ellipse parameter (theta), and the angle (alpha) at which the diameter is measured.
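For reference, the relation in the figure (matching the code below) is

$$\theta(\alpha) = \arctan\!\left(-\frac{r_2}{r_1}\tan\alpha\right), \qquad d(\alpha) = 2\left(r_1\cos\theta\cos\alpha - r_2\sin\theta\sin\alpha\right),$$

which, unless I've made an algebra mistake, collapses to the usual projection-width formula

$$d(\alpha) = 2\sqrt{r_1^2\cos^2\alpha + r_2^2\sin^2\alpha},$$

so d ranges from 2 r1 (at alpha = 0) to 2 r2 (as alpha approaches pi/2).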
If I sample values of alpha uniformly distributed from 0 to pi/2 with a = 1 and b = 2, I get the following distribution:
```python
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(123)

a = 1.0
b = 2.0
nsamples = 10000

def compute_shadow(a, b, alpha):
    theta = np.arctan(-b / a * np.tan(alpha))
    return 2 * (a * np.cos(theta) * np.cos(alpha)
                - b * np.sin(theta) * np.sin(alpha))

alphas = np.random.rand(nsamples) * np.pi / 2
ds = np.array([compute_shadow(a, b, alpha) for alpha in alphas])

fig = plt.figure(figsize=(9, 9))
ax = fig.add_subplot(111)
ax.hist(ds, density=True)
ax.plot(ds, np.zeros(ds.size), color='k', marker='x')
ax.set_ylim([0, None])
```
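As a sanity check on my algebra (my own addition, not part of the original derivation), the parametric expression above should agree with the standard projection-width formula for an ellipse:

```python
import numpy as np

def compute_shadow(a, b, alpha):
    # Parametric form: theta chosen so the tangent at (a cos(theta), b sin(theta))
    # is parallel to the measurement direction.
    theta = np.arctan(-b / a * np.tan(alpha))
    return 2 * (a * np.cos(theta) * np.cos(alpha)
                - b * np.sin(theta) * np.sin(alpha))

def width_closed_form(a, b, alpha):
    # Closed-form projection width of the ellipse at angle alpha.
    return 2 * np.sqrt((a * np.cos(alpha))**2 + (b * np.sin(alpha))**2)

# Compare the two over alpha in [0, pi/2), avoiding the tan singularity.
alphas = np.linspace(0, np.pi / 2 - 1e-6, 100)
assert np.allclose(compute_shadow(1.0, 2.0, alphas),
                   width_closed_form(1.0, 2.0, alphas))
```

So the simulated diameters are confined to [2a, 2b], consistent with the histogram.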
which implies that one should be able to infer the dimensions of the ellipse from snapshots of the diameter. In practice I have far fewer samples than this (say 10), and I want to get an idea of the uncertainty in the ellipse dimensions.
I parameterize in terms of r1 (which I assume is the smaller radius) and r2 - r1, defining priors based on the observed data:

```
r1_mu = min(observed diameters)
r1_sd = 0.1 * r1_mu
dr_mu = max(observed diameters) - min(observed diameters)
dr_sd = 0.1 * dr_mu
```
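Concretely, with some hypothetical observed diameters (made-up numbers, just to illustrate the prior construction):

```python
import numpy as np

# Hypothetical data: 10 measured diameters of an ellipse with 2a = 2, 2b = 4.
observed_diameters = np.array(
    [2.1, 2.4, 2.9, 3.3, 3.5, 3.6, 3.8, 3.9, 3.95, 3.98])

# Empirical priors as described above.
r1_mu = observed_diameters.min()
r1_sd = 0.1 * r1_mu
dr_mu = observed_diameters.max() - observed_diameters.min()
dr_sd = 0.1 * dr_mu
```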
and then form my pymc3 model:

```python
import numpy as np
import pymc3
import theano.tensor as tt

with pymc3.Model() as model:
    _r1 = pymc3.TruncatedNormal('r1', lower=0, mu=r1_mu, sd=r1_sd)
    _dr = pymc3.TruncatedNormal('dr', lower=0, mu=dr_mu, sd=dr_sd)
    _eps = pymc3.HalfNormal('eps', sd=1)
    _alphas = pymc3.Uniform('alphas', lower=0, upper=np.pi / 2,
                            shape=(observed_diameters.size,))
    _r2 = pymc3.Deterministic('r2', _r1 + _dr)
    _thetas = pymc3.Deterministic(
        'thetas', tt.arctan(-_r2 / _r1 * tt.tan(_alphas)))
    _ds = pymc3.Deterministic(
        'ds', 2 * (_r1 * tt.cos(_thetas) * tt.cos(_alphas)
                   - _r2 * tt.sin(_thetas) * tt.sin(_alphas)))
    _likelihood = pymc3.Normal('likelihood', mu=_ds, sd=_eps,
                               observed=observed_diameters)
```
This model, however, gives me a lot of divergences, especially as the number of measured diameters increases.
Is there a better way to cast this problem? It occurs to me that I'm effectively trying to fit a custom distribution given my observations, so maybe there is some approach to reparameterize along those lines?