Using multivariate distributions for price optimization

Hi,

I’m pretty new to PyMC3 and glad to see such a vibrant community. I’m currently working on a hierarchical model for price optimization, but as a newcomer to more complex Bayesian modeling, I’ve run into a lot of theoretical and technical difficulties around using multivariate distributions. I know how to implement such a model for a single product, but generalizing it to multiple related products with correlated demand functions is where I’m stuck. I’d really appreciate it if you could point me in the right direction, either by suggesting online resources you found helpful or by sharing your experience.

Thanks.

Can you show what you are doing with the single-product model, and what kind of data you are trying to generalize it to? Without more info it is hard to help.

Thanks for your reply. For a single product I basically follow a procedure similar to the one outlined in this blog post:

But when dealing with multiple products, I’m not sure how to incorporate the prices and demands of similar products into the model, how to form the covariance matrix, or how to choose a well-informed prior for each item. I understand the broad definition of the problem, but I know that similar dynamic pricing algorithms have been deployed in the past, and I’m looking for a more detailed description.

What does your data look like? It’s hard to give you good hints if I don’t know exactly what you are modeling.
I don’t think you have to deal with covariance matrices, at least not in a first version of a model like this. You might use one to deal with things like: I’m selling two products that most people use and buy together, so the price of product 1 influences whether people buy product 2. Sounds fun, but don’t start with something like that. :slight_smile:
You could start with a model for several products, something like this:

I’ll assume your data is in tidy form with three columns: price (dtype float), quantity (dtype int), and product (dtype category).

import numpy as np
import pymc3 as pm

n_products = data.product.nunique()
idx = data.product.cat.codes.values  # integer product code per row

with pm.Model() as m2:
    # per-product intercept and (own-) price elasticity on the log scale
    loga = pm.Cauchy('loga', 0, 5, shape=n_products)
    c = pm.Cauchy('c', 0, 5, shape=n_products)
    logμ0 = loga[idx] + c[idx] * np.log(data.price.values)
    μ0 = pm.Deterministic('μ0', pm.math.exp(logμ0))
    q = pm.Poisson('q', μ0, observed=data.quantity.values)
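
You’d fit this with trace = pm.sample() inside the model context; the posterior for c then gives you a per-product price elasticity to sanity-check.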

Hi,

Thanks for your thorough response. You got the gist of my data. Basically, I have records of all transactions from the past two years, which I can aggregate (weekly, for example) to get the quantity sold and the unit price for each product in each period. In a separate table, each product ID is mapped to the ID of the product class it belongs to.

The problem with your suggested model is that for each product it captures only a single value for elasticity (the parameter c), which in the literature is called the own elasticity. It doesn’t capture the so-called cross elasticity, i.e. the impact of all other products’ prices on the demand for a particular product. So in general the parameter c should be an n×n matrix (n = number of products belonging to the same class). That’s why I was looking into setting up some kind of covariance matrix, along the lines of what is described in this blog post:
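
To make that concrete, the log-log demand model I have in mind would be something like

\log q_i = \log a_i + \sum_{j=1}^{n} c_{ij} \log p_j

where c_{ii} is the own elasticity of product i and the off-diagonal terms c_{ij} are the cross elasticities.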

Thanks again for your time and consideration.

Hi akarimi-

You might consider, at least as a first pass, using a conditional model for these relationships such as

n_i^{(t)} \mid p_i^{(t)}, n_{s(i)}^{(t)}, z_{p_i^{(t)}}, n_i^{(t-1)}, \dots \sim N\left(\alpha + \beta_1 g(p_i^{(t)}) + \beta_2 g(n_{s(i)}^{(t)}) + \dots,\ \sigma_r^2\right)

where example features include the number of units sold within the same sector as item i (sector demand), the price of item i relative to similar items (as a ratio or z-score), the number of units of i sold in the previous time period, and so forth.
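
For concreteness, here is a minimal PyMC3 sketch of such a conditional model. The data frame and its columns (units, price, sector_units, lag_units) are hypothetical stand-ins for the features above, and the priors are just placeholders:

import numpy as np
import pymc3 as pm

# Hypothetical tidy frame: one row per (product, week), with columns
# units (sold this week), price, sector_units (units sold in the same
# sector this week) and lag_units (this product's sales last week).
X = np.column_stack([
    np.log(data.price),
    np.log1p(data.sector_units),
    np.log1p(data.lag_units),
])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # z-score the features

with pm.Model() as conditional_model:
    α = pm.Normal('α', 0, 10)
    β = pm.Normal('β', 0, 1, shape=X.shape[1])
    σ = pm.HalfNormal('σ', 5)
    μ = α + pm.math.dot(X, β)
    pm.Normal('units', μ, σ, observed=data.units)
    trace = pm.sample()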

I find that the tractability and throughput of a solid Bayesian linear model (BLM) or Bayesian GAM (BGAM) outweigh the drawbacks of misspecification and the extra feature-engineering work. If you have >200 items, this model will likely have fewer parameters than a full N(0, \Sigma) approach even with interaction terms and lags (a covariance matrix over 200 items alone has 200 \cdot 201 / 2 = 20{,}100 free parameters).

Hi Chris,

Thanks for your time. Being able to model this complex problem with Bayesian linear models sounds very interesting. Could you elaborate a little more on the specifics of such a model, or point me to a resource outlining a similar one?

I think this probably falls under dynamic pricing models or demand forecasting. This overview (estimating own and cross-price effects) delves into the economic-model approach, while Dynamic Pricing Algorithms - Overview provides a general overview – note especially the use of pymc3. Notably, in the setting where the (units sold, price) data is only available for your firm – so you have no good proxies for economic variables such as aggregate sector demand etc. – the suggestion there appears to be to draw baseline self-effects from a multivariate distribution, and include cannibalization effects marginally (via a promotional indicator, or some kind of relative price metric).
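
A rough sketch of that last suggestion, assuming hypothetical quantity and class_mean_price columns, and with the per-product effects partially pooled via hierarchical Normals rather than a literal multivariate prior, to keep it short:

import numpy as np
import pymc3 as pm

n_products = data.product.nunique()
idx = data.product.cat.codes.values

# Hypothetical cannibalization proxy: this product's price relative to
# the average price of its class in the same week.
rel_price = np.log(data.price / data.class_mean_price).values

with pm.Model() as marginal_model:
    # partially pooled baseline and own elasticity per product
    μ_a = pm.Normal('μ_a', 0, 5)
    σ_a = pm.HalfNormal('σ_a', 2)
    loga = pm.Normal('loga', μ_a, σ_a, shape=n_products)

    μ_c = pm.Normal('μ_c', -1, 1)  # own elasticities are usually negative
    σ_c = pm.HalfNormal('σ_c', 1)
    c = pm.Normal('c', μ_c, σ_c, shape=n_products)

    γ = pm.Normal('γ', 0, 1)  # single marginal cross effect

    logμ = loga[idx] + c[idx] * np.log(data.price.values) + γ * rel_price
    pm.Poisson('q', pm.math.exp(logμ), observed=data.quantity.values)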

My experience in this area is largely second-hand, and I definitely do not have insight into what models a major retailer, airline, or hospitality brand might be using. That said, I can say that estimating a covariance matrix is almost always not what you want to do, and is for the most part intractable anyway.

I’d concur with what the others have said so far.

Without more details on the scale etc. of your problem it’s hard to advise. The exponential decay model you linked to might be applicable, but again it’s very hard to say without more info. It’s unlikely to be much good for pricing in a hyper-competitive market (i.e. online), nor does it (at least in the linked version) account for variability over time/season etc. From my experience doing this sort of work for medium-to-large online businesses, I’d quickly move towards non-parametric models. Build out an automated validation framework before anything else, and then loop over experiment<->investigate until you start measuring gains : )
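
As a sketch of what I mean by a validation framework – a simple rolling-origin backtest, where fit_demand_model and score are hypothetical placeholders for whatever model and error metric you settle on:

# Refit on an expanding window of weeks, score on the next week,
# and track the error through time.
weeks = sorted(data.week.unique())
scores = []
for t in range(52, len(weeks)):        # start after one year of history
    train = data[data.week.isin(weeks[:t])]
    test = data[data.week == weeks[t]]
    model = fit_demand_model(train)    # hypothetical fitting routine
    scores.append(score(model, test))  # hypothetical error metric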

Thanks Chris and Daniel for your replies. I think you saved me a great deal of time by pointing me towards more fruitful directions. My project is for a large online retailer, so, as you said, a very competitive market.

Building on both of your replies, I think a good approach would be to first build a comprehensive demand model using conventional ML techniques (RNNs might be a good option), and then use its outputs in a sort of Bayesian additive model (?) to quantify the uncertainty in the elasticity estimates.

Hello Akarimi,

This thread is a few years old and I just stumbled upon it. I’m curious: how did you end up building your model? Did you make any progress on this?

Best