I have a model **M** that estimates **S**, an array of 100 elements in [0.0, 1.0]. I also have a synthetically generated dataset **D** used to test the model; **D** includes the true (ground-truth) values of **S**. I would like to measure the accuracy of **M** by comparing **S_M** to **S_D**, i.e. by evaluating a function **accuracy_model()** that measures the distance from **S_M** to **S_D**. Then, when I consider a change to **M**, I can use **accuracy_model()** to see whether the change makes **M** more or less accurate.

**S_D** is an array of floats, but **S_M** is an array of distributions: each element of **S_M** is a distribution over floats, approximated by its posterior samples. I think the best way to measure the distance from an individual element of **S_D** (e.g. **S_D[1]**) to the corresponding element of **S_M** (e.g. **S_M[1]**) is the log probability density of **S_D[1]** under the distribution **S_M[1]**. The best measure of the total distance from **S_D** to **S_M** is then the sum of those per-element log densities (equivalently, the joint log density, if the elements are treated as independent). (If my accuracy measure is flawed, please tell me.)
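In case it helps clarify what I mean, here is a rough sketch of the measure I have in mind. It does not use pymc3; the function name `accuracy_model` and the choice of a Gaussian KDE to turn posterior samples into a density are my own illustrative assumptions, not an established API.

```python
import numpy as np
from scipy.stats import gaussian_kde

def accuracy_model(posterior_samples, s_true):
    """Sum over elements of log p(s_true[i]) under a KDE fitted to the
    posterior samples of element i. Higher (less negative) = more accurate.

    posterior_samples : array of shape (n_draws, n_elements)
    s_true            : array of shape (n_elements,)
    """
    total = 0.0
    for i in range(len(s_true)):
        # Smooth the sampled approximation of S_M[i] into a density estimate.
        kde = gaussian_kde(posterior_samples[:, i])
        total += np.log(kde(s_true[i])[0])
    return total

# Toy check: posteriors concentrated near the truth should score higher
# than posteriors that ignore the truth entirely.
rng = np.random.default_rng(0)
s_true = rng.uniform(0.0, 1.0, size=100)
good = np.clip(s_true + rng.normal(0, 0.05, size=(500, 100)), 0.0, 1.0)
bad = rng.uniform(0.0, 1.0, size=(500, 100))
assert accuracy_model(good, s_true) > accuracy_model(bad, s_true)
```

The summation is equivalent to the log of a product of per-element densities, i.e. it treats the elements as independent; that is the assumption I am least sure about.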

Does PyMC3 have built-in support for evaluating the log probability (density) of a known quantity (i.e. **S_D**) against the posterior trace (i.e. **S_M**)?