What are you using PyMC3 for?

I would love to hear more user stories and what people use PyMC3 for.

I use PyMC3 at Quantopian, where we build Bayesian models to track algorithm performance. For example, we assume that the daily returns of a strategy are Normally distributed, with a random-walk prior on volatility (the standard deviation of the returns distribution). This allows volatility to change over time. We also care about the probability that the mean return is positive, so we track the uncertainty over time and know when we have accumulated enough data to be confident about a strategy.
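For readers wondering what that kind of model looks like in code, here is a minimal sketch of the general idea; the returns array, prior choices and sampler settings are placeholders, not the actual Quantopian model:

```python
import numpy as np
import pymc3 as pm

# Placeholder daily returns for one strategy (stand-in for a real track record)
returns = np.random.normal(0.0005, 0.01, size=250)

with pm.Model() as vol_model:
    # Random walk on log-volatility lets the scale of the returns drift over time
    step = pm.Exponential("step", 50.0)
    log_vol = pm.GaussianRandomWalk("log_vol", sigma=step, shape=len(returns))

    # Mean daily return; its posterior tells us how likely the strategy is profitable
    mu = pm.Normal("mu", 0.0, 0.001)
    pm.Normal("obs", mu=mu, sigma=pm.math.exp(log_vol), observed=returns)

    trace = pm.sample(1000, tune=1000, cores=1)

# Probability that the mean daily return is positive
print("P(mu > 0) =", (trace["mu"] > 0).mean())
```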

What are you using PyMC3 for? Any pain points?

I do all my data analysis of behavioural experiments in PyMC3; my next step is applying it to other kinds of experiments, like EEG and fMRI.

The main pain point for me is explaining my models to non-Bayesian colleagues :upside_down:

1 Like

I use PyMC to apply hierarchical cognitive models to data from brief neurocognitive tests completed by people with psychosis and controls. Typically, such models are used in basic psychology for experiments with many hundreds of trials per subject, but relatively few subjects. In psychiatric research, we often don’t have the luxury of collecting lots of data from individuals with mental illness. Hierarchical models are potentially very useful here because they recover parameters more efficiently than non-hierarchical models.
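To illustrate the general idea (this is not the working-memory model from the paper linked below), here is a minimal sketch of a hierarchical model in PyMC3 where each subject contributes only a handful of binary trials and the per-subject parameters are partially pooled toward a group-level distribution; the data and priors are made up:

```python
import numpy as np
import pymc3 as pm

# Made-up data: binary accuracy for many subjects, few trials each
n_subjects, n_trials = 30, 20
subj_idx = np.repeat(np.arange(n_subjects), n_trials)
correct = np.random.binomial(1, 0.7, size=n_subjects * n_trials)

with pm.Model() as hier_model:
    # Group-level distribution that per-subject abilities are drawn from
    mu_group = pm.Normal("mu_group", 0.0, 1.5)
    sd_group = pm.HalfNormal("sd_group", 1.0)

    # Per-subject ability on the logit scale, partially pooled toward the group
    theta = pm.Normal("theta", mu_group, sd_group, shape=n_subjects)

    # Each trial's probability of a correct response comes from that subject's ability
    p = pm.math.invlogit(theta[subj_idx])
    pm.Bernoulli("obs", p=p, observed=correct)

    trace = pm.sample(1000, tune=1000, cores=1)
```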

Here is a paper using PyMC3 to implement a model of working-memory capacity, unfortunately paywalled for now:

http://www.sciencedirect.com/science/article/pii/S0920996417304905?via%3Dihub

3 Likes

I use PyMC3 for analyzing X-ray diffraction data to determine better crystal structures of proteins. In the traditional framework, intensity data from images are reduced step-wise, and each step involves a successive loss of information. These reduced data are the starting point for crystal-structure modelling efforts, for which the uncertainty is poorly defined; for example, there are only very approximate methods for estimating the uncertainty in atomic coordinates. My ultimate dream is to connect the modelling more closely to the non-reduced (highly correlated) data and make model comparisons more quantitative within a Bayesian framework. The main pain point is the sheer amount of data and the number of atoms in our crystal structures, so I guess we are going to keep solving limited problems for the foreseeable future.

2 Likes

I’m using PyMC3 to compare the results of different pricing strategies at a travel company. Specifically, I use it to update the share of traffic going to each strategy based on our beliefs about which strategy is optimal, as well as to report performance with uncertainties.
In short: a Bayesian multi-armed bandit on multivariate pricing tests.
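For a flavour of the traffic-allocation step, here is a minimal Beta-Binomial sketch in PyMC3; the conversion counts are made up, and the split rule (give each strategy traffic in proportion to its posterior probability of being the best) is one common choice, not necessarily the exact scheme described above:

```python
import numpy as np
import pymc3 as pm

# Made-up conversion counts per pricing strategy (successes out of trials so far)
successes = np.array([42, 51, 38])
trials = np.array([1000, 980, 1015])

with pm.Model() as bandit_model:
    # Conversion rate of each strategy
    rate = pm.Beta("rate", 1, 1, shape=3)
    pm.Binomial("obs", n=trials, p=rate, observed=successes)
    trace = pm.sample(2000, tune=1000, cores=1)

# Posterior probability that each strategy is the best one,
# which can then be used directly as the traffic split
best = (trace["rate"].argmax(axis=1)[:, None] == np.arange(3)).mean(axis=0)
print("traffic share per strategy:", best)
```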

The next step will be trying to automate setting price levels using Bayesian reinforcement learning, though I think I’m a long way from figuring this one out…

Pain points: I found the documentation a bit “gappy” and lacking a clear guide to approaching modelling at an abstract level: there are many examples, but I often can’t take generalisable lessons from them. It’s dawning on me that this may be the case simply because of the extensive flexibility of Bayesian modelling rather than because the examples are bad. I realise that most of my pain points with PyMC3 stemmed from initially trying to fit its usage into that of standard machine learning APIs (i.e. from my own lack of understanding!). Realising that a model is in effect stateless was a big “aha”, as was realising that fitting (pm.fit()) is not even close to sampling (pm.sample()) in reality! I suppose most people come to PyMC3 having already used MCMC methods, so all of that is clear to them.
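To make that distinction concrete, here is a minimal sketch with made-up data: the model block only declares random variables, and inference, whether MCMC via pm.sample() or ADVI via pm.fit(), is a separate step whose results are returned explicitly rather than stored on the model:

```python
import numpy as np
import pymc3 as pm

data = np.random.normal(1.0, 2.0, size=100)

# The model block only declares random variables; it holds no fitted state itself
with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 10.0)
    sigma = pm.HalfNormal("sigma", 5.0)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=data)

# Inference is a separate step whose results are returned explicitly
with model:
    trace = pm.sample(1000, tune=1000, cores=1)  # MCMC: draws from the posterior
    approx = pm.fit(10000)                       # ADVI: a fitted approximation
    vi_trace = approx.sample(1000)               # draws from that approximation
```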

Awesome points: I always found frequentist/standard statistical inference to be totally brittle and lacking in fundamentals; PyMC3 is a dream to use and just “feels right”. Great core community, specifically this Discourse! It’s super great to have such active, helpful and friendly core members :smile:

3 Likes

Hi there!

Bumping this a little. At Sounds we use PyMC3 to study user behavior and perform A/B testing. Bayesian analysis is our tool of choice for several reasons:

  • Generative models often succeed where other, more ad hoc, approaches fail. For instance, to compute the lifetime value (LTV) of users I’ve tried fancy regressions and whatnot, but nothing beats a beta-geometric model.
  • Sometimes you DO need the uncertainties. In particular, knowing the uncertainty on the LTV allows us to make safer cash projections.
  • It is often the only method that answers the question you care about. In A/B testing it’s obvious: frequentist hypothesis testing asks “what is the probability of observing this effect if the null hypothesis is true?”. This controls the type I error rate (false positives), but it’s not really the question a product manager cares about. Their questions sound more like: “How sure are we that the new version is ‘better’ than the current one?”, “How much better is it?”, “Can we make a decision now, or do we need to wait?”. Bayesian methods answer exactly those questions (see the sketch after this list). That was the epiphany for me.
  • Not emphasized enough, but Bayesian decision theory! This is often overlooked by newcomers to the field, but the more I explore, the more I think it is a serious argument in favor of Bayesian methods. I invite you to read about it if you make any real-life decisions based on the results of your modeling.
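As promised above, here is a minimal sketch of the kind of Bayesian A/B test I mean, with made-up conversion counts and arbitrary priors:

```python
import numpy as np
import pymc3 as pm

# Made-up A/B conversion counts
conversions = np.array([210, 245])
visitors = np.array([5000, 5100])

with pm.Model() as ab_model:
    # Conversion rate of variant A and variant B
    rate = pm.Beta("rate", 1, 1, shape=2)
    # Relative uplift of B over A
    uplift = pm.Deterministic("uplift", rate[1] / rate[0] - 1)
    pm.Binomial("obs", n=visitors, p=rate, observed=conversions)
    trace = pm.sample(2000, tune=1000, cores=1)

# The questions the product manager actually asks
print("P(B is better than A):", (trace["uplift"] > 0).mean())
print("Expected uplift of B over A:", trace["uplift"].mean())
```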

PyMC3 has been so useful to us, I can’t even begin to express how grateful we are for all the work that the team has put in. We’ve often found the documentation lacking and had to go and read the code directly. We promise we’ll try to help fix this when we find the time :slight_smile:

6 Likes

I also tried to use PyMC3 for Bayesian reinforcement learning. However, I couldn’t make the sampling work. So essentially I wrote a function using NumPy which takes in the model parameters (learning rate, softmax temperature, etc.) and the participant’s learning data, and outputs the log-likelihood of the data. Is this also how you do your Bayesian RL? Or do you write your RL model in PyMC3 in the first place? Would you be willing to share your code so I can see how you did it?

1 Like

Somehow I didn’t see this post before! Judging from Fitting a simple Reinforcement Learning model to behavioral data with PyMC3 (Jupyter NB), I gather the answer would be: for Bayesian RL, yes, I’d have to rewrite my model using Theano for PyMC3 to work.
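For anyone landing here with the same question, here is a minimal sketch of that kind of rewrite, assuming a simple Rescorla-Wagner/Q-learning model of a two-armed bandit task; the simulated actions and rewards stand in for a real participant’s data, and the priors are arbitrary. The trick is expressing the trial-by-trial Q-value updates with theano.scan so the log-likelihood stays differentiable for NUTS:

```python
import numpy as np
import pymc3 as pm
import theano
import theano.tensor as tt

# Simulated two-armed bandit data (placeholders for a participant's learning data)
n_trials = 100
actions = np.random.randint(0, 2, size=n_trials)      # chosen arm on each trial
rewards = np.random.binomial(1, 0.6, size=n_trials)   # reward received on each trial

def update_q(action, reward, q, alpha):
    # Rescorla-Wagner update of the chosen arm's Q-value
    return tt.set_subtensor(q[action], q[action] + alpha * (reward - q[action]))

with pm.Model() as rl_model:
    alpha = pm.Beta("alpha", 1, 1)       # learning rate
    beta = pm.HalfNormal("beta", 5.0)    # softmax inverse temperature

    q0 = tt.zeros(2)
    qs, _ = theano.scan(
        fn=update_q,
        sequences=[tt.as_tensor_variable(actions), tt.as_tensor_variable(rewards)],
        outputs_info=[q0],
        non_sequences=[alpha],
    )
    # Q-values that informed each choice (i.e. before that trial's update)
    qs = tt.concatenate([q0[None, :], qs[:-1]])

    # Softmax choice probabilities -> summed log-likelihood of the observed actions
    logp = beta * qs - pm.math.logsumexp(beta * qs, axis=1)
    pm.Potential("choice_ll", logp[tt.arange(n_trials), actions].sum())

    trace = pm.sample(1000, tune=1000, cores=1)
```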