Implementing the DirichletMultinomial distribution in PyMC3

Let’s talk about how to implement and test the DM distribution.

Relevant PRs:
https://github.com/pymc-devs/pymc3/pull/4373 (active)
https://github.com/pymc-devs/pymc3/pull/3639 (closed)

2 Likes

@bsmith89 Thanks for opening this topic. Here are some ideas:

  1. As mentioned in the PR, it probably makes more sense to handle the init arguments the way the Multinomial does: you can pass either a scalar or a 1d array for n, and either a 1d or a 2d array for alpha. I think that would be the most intuitive for users. I assume we can convert them all to the same structure inside the init, as you were already doing with the padding for n (see the first sketch after this list).

  2. Looking at the Dirichlet and the Multinomial, I would think the random method should probably go through generate_samples(). That seems to take care of some basic shape checks that the standard unit tests were complaining about (such as expecting an error from random(size=-2)).

  3. For testing the logp, we should probably have a simplified pmf function, or precompute some more logps with which to test the distribution. For the case of an alpha of size 2, we can also compare directly with the BetaBinomial distribution (see the second sketch after this list).
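
For item 1, a minimal numpy sketch of the kind of normalization I have in mind (the helper name is hypothetical, and the real code would of course operate on theano tensors):

import numpy as np

# Hypothetical: coerce n and alpha into one common structure after init.
# n may be a scalar or a 1d array; alpha may be a 1d or a 2d array.
def normalize_inputs(n, alpha):
    alpha = np.atleast_2d(alpha)              # always (batch, k)
    n = np.broadcast_to(n, alpha.shape[:-1])  # pad n out to the batch shape
    return n, alpha

normalize_inputs(5, [0.3, 0.3, 0.4])               # n -> array([5]), alpha -> (1, 3)
normalize_inputs([5, 10], [[1, 1, 1], [2, 2, 2]])  # n -> (2,), alpha -> (2, 3)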
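
And for item 3, the alpha-of-size-2 check could look something like this (a sketch using scipy as the reference, not the final test code):

import numpy as np
from scipy.special import gammaln
from scipy.stats import betabinom

# Simplified reference log-pmf for the Dirichlet-multinomial,
# straight from the closed-form expression.
def dm_logpmf(x, n, alpha):
    x, alpha = np.asarray(x), np.asarray(alpha)
    a_sum = alpha.sum(-1)
    return (gammaln(n + 1) - gammaln(x + 1).sum(-1)
            + gammaln(a_sum) - gammaln(n + a_sum)
            + (gammaln(x + alpha) - gammaln(alpha)).sum(-1))

# With alpha = [a, b], the DM on [x, n - x] must match BetaBinomial(n, a, b) on x.
a, b, n = 0.7, 1.3, 10
for x1 in range(n + 1):
    np.testing.assert_allclose(
        dm_logpmf([x1, n - x1], n, [a, b]),
        betabinom(n, a, b).logpmf(x1),
    )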

What do you think? Anything I am missing?

1 Like

Sounds reasonable to me. If I find the energy in the next week or so, I’ll try and hook up __init__ to use those shapes correctly, at which point I don’t expect .random to be too much trouble. I’ve had trouble wrapping my head around the testing framework in the past, though, so you might have to take the lead on that.

1 Like

That’s great. I think I am getting the hang of the unit tests, so I don’t mind focusing on those.

I think the random method can be refactored to work pretty much like the Multinomial one, except that we also need to generate a fresh p vector for each Multinomial draw (see the sketch below). I will wait for you to refactor the __init__ (also, feel free to go ahead and remove the shape-inference part I copied from the Dirichlet, which I think is unnecessary?)
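
In numpy terms, I imagine the core of a draw looking roughly like this (a sketch of the idea only; the real method would go through generate_samples() as discussed above):

import numpy as np

# Dirichlet-multinomial draw: sample a fresh p vector for every
# multinomial draw, then draw the counts conditional on that p.
def dm_draw(n, alpha, size=None):
    p = np.random.dirichlet(alpha, size=size)  # one p per draw
    n = np.broadcast_to(n, p.shape[:-1])
    # np.random.multinomial is not vectorized over p, hence the loop
    flat = [np.random.multinomial(n_i, p_i)
            for n_i, p_i in zip(n.ravel(), p.reshape(-1, p.shape[-1]))]
    return np.reshape(flat, p.shape)

dm_draw(10, [0.5, 0.5, 1.0], size=5)  # 5 draws, each summing to 10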

One small thing concerns the name of the alpha argument. iirc The Dirichlet and BetaBinomial are calling it a and not alpha. Should we adopt the same name to standardize it?

Sounds like a good idea. Will do.

Actually, I think I was wrong. The Beta and BetaBinomial call it alpha, while the Dirichlet calls it ‘a’. In any case, I think it still makes more sense to use ‘a’, since that is in line with the Dirichlet.

The best would be for all to use alpha, but that’s probably beyond the scope of this PR.

2 Likes

About to hack on this for a bit. I haven’t really collaborated on a shared PR before; what’s a good workflow?

Should I merge changes from your PR branch into my own branch and vice-versa? Or should one of us give the other push access to our PyMC3 fork?

This documentation makes it sound like maybe you need to give me push access to your PR branch…?

Or, maybe I submit PRs to your PR branch… Is that too much?

That would certainly be the easiest for you guys, if @ricardoV94 is ok with that of course

1 Like

I added you as a collaborator to my fork of the pymc3 repo. Feel free to push changes to the dirichlet_multinomial_fork. Keep in mind that these will automatically appear in the open PR as well :slight_smile:.

Great! Should I force-push commits rebased onto the pymc3 master, or would you prefer to hold off on that until the last step?

For now, you can see the tiny bit of progress I’ve made on my own feature branch. Specifically, I aligned DirichletMultinomial.__init__ with Multinomial.__init__. 9 of 10 tests are passing, the only exception being the test of DM.random() in pymc3.tests.test_distributions_random.

However, in the process I’m also seeing how deficient our tests are, apparently reflecting the fact that I basically copied them over from the Multinomial, whose tests are also lacking. In particular, I’ve noticed a bug in calculating the mode, and some non-obvious API issues.

For example,

with pm.Model() as model:
    pm.DirichletMultinomial('x', n=[1], alpha=[[0.3, 0.3, 0.4]], shape=[1, 3])
model.test_point

This prints {'x': array([[1, 0, 0]])}, but the mode should be [[0, 0, 1]], since the last category has the largest probability.
The exact same thing is true for pm.Multinomial('x', n=[1], p=[[0.3, 0.3, 0.4]], shape=[1, 3]). It seems like I should submit an issue for the Multinomial implementation, but the fix for that will presumably also work for DirichletMultinomial, so it’s not obvious what the best order of operations is there…

The other problem is that you have to explicitly pass the correct shape. While this isn’t wrong per se, it deviates from most of the other distributions (except for the Multinomial, which has the same non-obvious behavior). We should be able to infer the correct shape directly from n and alpha, and that seems like the right thing to do for user quality-of-life.

On the other hand, I’m not at all sure how to correctly deal with polymorphic n and alpha, or even how to easily construct a distribution’s shape from the symbolic shapes of two input tensors, so this seems challenging. It’s not clear to me how other distributions handle that; I’ll get around to reading the Dirichlet implementation next.
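
For what it’s worth, in plain numpy the inference I have in mind would be something like this (hypothetical helper; the hard part is doing the equivalent with symbolic theano shapes):

import numpy as np

# Hypothetical: infer the distribution shape by broadcasting n against
# the batch dimensions of alpha; the last axis of alpha is the event size.
def infer_dm_shape(n, alpha):
    n = np.atleast_1d(n)
    alpha = np.atleast_2d(alpha)
    return np.broadcast(n[..., None], alpha).shape

infer_dm_shape(5, [0.3, 0.3, 0.4])               # -> (1, 3)
infer_dm_shape([5, 10], [[1, 1, 1], [2, 2, 2]])  # -> (2, 3)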

Seems like there’s some value in writing (failing) tests for both of these issues, so that we at least know what we’re aiming for. Presumably those tests can also be cloned for Multinomial if someone wants to tackle the analogous issues there.

On a separate topic, do we want to prettify our patch? Right now pymc3/distributions/multivariate.py has not been formatted, but I could keep all the patches I’m writing consistent with Black.

Finally: Along with some unit-test-driven development, it seems like there could be value in putting together an example notebook for this patch. I could submit a parallel PR to the pymc-examples repo, presumably with a worked-out DM example implemented both with the explicit latent variable and with the marginalized distribution (see the sketch below). It would be great for integration testing.
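
Concretely, I’m picturing the notebook showing the two parameterizations side by side, along these lines (a sketch with toy data, assuming the alpha argument name we currently have):

import numpy as np
import pymc3 as pm

counts = np.array([[2, 3, 5], [4, 4, 2]])  # toy data: two draws with n = 10
alpha = np.ones(3)

# Explicit latent variable: a Dirichlet p feeding a Multinomial.
with pm.Model() as latent_model:
    p = pm.Dirichlet('p', a=alpha)
    pm.Multinomial('x', n=10, p=p, observed=counts)

# Marginalized: p integrated out analytically.
with pm.Model() as marginal_model:
    pm.DirichletMultinomial('x', n=10, alpha=alpha,
                            shape=counts.shape, observed=counts)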

There’s a lot in the above, so TL;DR:

  • You can find my own feature branch here: GitHub - bsmith89/pymc3 at dirichlet-multinomial-2. As of now, all I’ve done is align the DM __init__ with Multinomial’s implementation.
  • I rebased my branch onto the upstream master. Force-pushing that to your PR branch seems problematic if we’re collaborating on it… should we simply hold off on rebasing it until the very end?
  • The mode is wrong for both Multinomial and DirichletMultinomial. Someone should submit an issue. Whatever fixes the Multinomial’s bug will probably also work for the DM.
  • Requiring an explicit shape to be passed to the DM (and the Multinomial) is unexpected, but I’m not sure how to do better.
  • Fleshing out the tests before or in parallel with the implementation would actually be very helpful for me, if you’re up to it.
  • We should also plan for an example notebook, which I’d be happy to put together.
  • How do we want to deal with code-formatting (i.e. Black)?

Hi :slight_smile:

Feel free to work on your feature branch, you can also invite me to collaborate. Then once we are ready we can either open a new PR or merge into my current one. No point in spamming the PyMC3 repo.

Good catch! I think the best is to find the solution first and then open a PR to specifically fix it for the Multinomial. The DirichletMultinomial will already be fixed from the “start”…

Does the current Multinomial not handle all cases correctly? If it does, in theory we could “copy” its initialization / logp code, no? If you have found cases that fail, that would be very important!

I agree with you that having some comprehensive tests can help with developing around the shape issue. I will check if there is anything already in the codebase so that we don’t reinvent the wheel.

I think we can do that when we are done, before updating the PR.

Agree completely! Worked examples are always good to have around.

Do you have any examples that may be interesting? In my field (Cognitive Science) there is this interesting paper that tries to model Look Away responses based on a latent DM model: https://onlinelibrary.wiley.com/doi/pdf/10.1111/desc.12083?casa_token=lvAhEoXdrtEAAAAA:M8OJF7QxjFHoX0hEKskyP7D9-oCtpqkLUV4947YTPMBPws8gcicxQ-lzlkR--RG_J2vQIWzsE0WqmR8

There is also a previous paper with the same idea, but applied to group level data (also with much less explanation for their methods): The Goldilocks Effect: Human Infants Allocate Attention to Visual Sequences That Are Neither Too Simple Nor Too Complex

Obviously if you already have worked examples, that is even better!


I will check the tests for the shape issue. I can also write some tests for the mode, if you haven’t started on those already.

Let me know if there is anything else I can help you with. Great work by the way!

Correct, unless I am confused, the current Multinomial also requires that shape be explicitly passed. The shape (and mode) code at my feature branch HEAD is basically the same for both the DM and the Multinomial, as it should be, I think, but it’s also not correct for either.

I put together a super simple example. Nice to see that our current implementation works cleanly under NUTS.

Ultimately, I agree that a more real-world problem would be nice. Topic modeling would be the obvious use-case (and would probably get the best search engine coverage); my own work is in community ecology. Implementing a mixture-of-DMs model could be interesting, e.g. from Holmes2012. Your “look away” example seems cool, too. :slight_smile:

Happy New Year!

I went ahead and cloned all of the Multinomial tests over to the DirichletMultinomial on my fork.
See 5cd3b62.

They all seem to be working correctly, except perhaps tests.test_distributions_random.test_dirichlet_multinomial, where the ref_rand implementation appears to be producing surprising results. @ricardoV94, I’d suggest you at least cherry-pick that one commit onto your PR branch, unless you’ve made independent progress on testing.

I also made progress on the DM random method, which I’ve likewise implemented based on the way the Multinomial does it. Basically, the full DirichletMultinomial implementation (and tests) is turning into approximately a copy of the Multinomial, which seems like the right thing to do.

It feels like the next order of business is figuring out why five of the tests are failing. Specifically, three of the parameterizations of tests.test_distributions.test_dirichlet_multinomial_random are failing, as well as test_batch_dirichlet_multinomial. There’s obviously some issue with the shape logic…

The last failing test is tests.test_distributions_random.test_dirichlet_multinomial, where, as I mentioned above, I think my reference RNG is doing something weird. I haven’t been able to figure that out. :confused:

I’d love to get another set of eyes on those failing tests. As of this moment everything’s up-to-date on my personal feature branch, and the example notebook still runs fine.

Hope everyone’s having a good 2021 so far!

Thanks @bsmith89. I will check it tomorrow and pull your changes + investigate the failing tests. Great work :slight_smile:

@bsmith89

Happy new year!

I only managed to start checking your branch today. I investigated the mode issue, and it seems that there is no simple algorithm to compute the mode of the Multinomial (and, by extension, the DirichletMultinomial). An iterative algorithm is needed, as exemplified in reference1 and reference2.

The current Multinomial approach to computing the mode is just a two-step approximation, and as you found, it can often fail, especially for small n and large dimensions (see the sketch below). To be honest, I don’t know what the mode is needed for (perhaps for the init step of some samplers), but it is probably fine to have the kind of approximation errors we see here. I would raise this question in the PR to make sure…
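
For reference, my reading of the current approximation in numpy terms (not a verbatim copy of the Multinomial code):

import numpy as np

# Two-step approximation: round n * p, then dump any leftover count into
# an entry so the total matches n. With n = 1 and p = [0.3, 0.3, 0.4],
# rounding gives [0, 0, 0] and the correction lands on the wrong entry,
# yielding [1, 0, 0] instead of the true mode [0, 0, 1].
def approx_mode(n, p):
    mode = np.round(n * np.asarray(p, dtype=float)).astype(int)
    mode[0] += n - mode.sum()  # crude correction so the counts sum to n
    return mode

approx_mode(1, [0.3, 0.3, 0.4])  # -> array([1, 0, 0])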

I did not have time to look at the need for the shape argument, but it seems to me that I can create Multinomial dists without specifying the shape explicitly. When you say it fails, do you mean the logp is incorrect, the random does not work, or something else?

I will look at the current random failing tests, which seem much more important than the mode issue. I hope to make faster progress in the next couple of days.

Oh! Thanks for looking into that. I wasn’t aware that the modes for those two distributions are not easy to calculate, but I guess it makes some sense now that I think about it. I have to assume that the same approximation that’s been working for the Multinomial will be okay for the DM.

As far as I’m aware, the mode is used for model.test_point, and apparently this approximation has generally been sufficient in that capacity. The bigger issue in my mind is that the naming of that attribute suggests to a user that it should be the actual, correct mode. Perhaps it’s used indirectly to populate testval somewhere up the class hierarchy and we could do that explicitly rather than setting self.mode.

I think the only issue of correctness was with the mode. I recall finding some challenges in setting up the Multinomial without an explicit shape argument, but I’m not remembering exactly what those were. I’ll mess around with shapes and report back if I find those issues again.

Great! Yes, I agree that the remaining test failures are the bigger issue. Unless you’ve got any changes that I should pull into my branch, I may be able to work on troubleshooting those in parallel with you. I’ll post about it here if I figure anything out.

Sounds good to me. No changes yet on my side so let’s keep the same strategy.

Figured this one out! After copying over the Multinomial’s random implementation, I was still renormalizing alpha to sum to one, as though it were the probs parameter of the Multinomial. I got rid of that silliness, and .random now matches the reference implementation defined in tests.test_distributions_random.test_dirichlet_multinomial (see the sketch below). Someone should definitely still review that code for correctness, though.
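
For the record, the bug in numpy terms: dividing alpha by its sum before handing it to np.random.dirichlet changes the concentration of the draws, even though each sampled p is normalized either way.

import numpy as np

alpha = np.array([3.0, 3.0, 4.0])
wrong = np.random.dirichlet(alpha / alpha.sum(), size=100000)  # what I had
right = np.random.dirichlet(alpha, size=100000)
# Same mean, very different spread: the renormalized alpha sums to 1,
# so the p draws are far more dispersed than intended.
print(wrong.std(axis=0))  # roughly [0.32, 0.32, 0.35]
print(right.std(axis=0))  # roughly [0.14, 0.14, 0.15]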

1 Like

Solved this one now, too. As with everything else, the fix was to conform to the Multinomial code as closely as possible. There’s a lot of complexity there that I wasn’t understanding before, and all the failed tests were either because I changed something I shouldn’t have or because I forgot to change something to make it work for the DM. :roll_eyes:

Mostly fixed now, though! Last failing test is tests.test_distributions.test_batch_dirichlet_multinomial.

1 Like