Too complicated model? Theano cache: refreshing lock and slow sampling

Hi,
I am currently working on code that aims to infer information about gamma-rays. For this I need many bins so that the signal shows up. The model I have is:

import numpy as np
import pymc3 as mc
from theano import tensor as T
import theano; theano.gof.cc.get_module_cache().clear()

trace_out = []
resmat = resp  # response matrix of the detector
truthdim = len(resmat)

prior_upper = 5550*np.ones(len(raw_py))  # 5550 is the highest count in the raw data
prior_lower = np.zeros(len(raw_py))        
    
with mc.Model():
    priors = { } 
    
    
    def wrapper(priorname='',low=[],up=[],other_args={},optimized=False):

        
        if (priorname in priors):
           
            priormethod = priors[priorname] 
            
        elif (hasattr(mc,priorname)):
             
            priormethod = getattr(mc,priorname)
        else:
            print( 'WARNING: prior name not found! Falling back to DiscreteUniform...' )
            priormethod = mc.DiscreteUniform

        truthprior = []

        for bin,(l,u) in enumerate(zip(low,up)):

            name = 'truth%d'%bin
            default_args = dict(name=name, lower=l, upper=u)  # per-bin bounds for the prior
            args = dict(list(default_args.items())+list(other_args.items()))
            #print(args)
            prior = priormethod(**args)
            truthprior.append(prior)

        return mc.math.stack(truthprior)

    truth = wrapper('DiscreteUniform', prior_lower, prior_upper) 
       
    def unfold():
        # forward model: smear the truth spectrum with the detector response
        tresmat = np.array(resmat)
        reco = theano.dot(truth, tresmat)
        return reco


    unfolded = mc.Poisson('unfolded', mu=unfold(),
                                  observed=np.array(raw_py))  


    
    trace = mc.sample(2000, tune=1000, cores=1, 
                              chains=2, nuts_kwargs=None,
                              discard_tuned_samples=True,
                              progressbar=True)            

The problems I am having are:
1: Theano constantly gives me this cache warning:
INFO (theano.gof.compilelock): Refreshing lock /home/user/.theano/compiledir_Linux-4.15–generic-x86_64-with-debian-buster-sid-x86_64-3.7.4-64/lock_dir/lock

2: I need a posterior for each bin, and in some cases I need up to 4000 bins to be able to infer the information I need. I think this is what makes the sampling slow. Yesterday I was getting 1.3 samples per second and the whole process took 5 hours for 20 000 samples, and that was only for 512 bins. Am I doing anything wrong?

3: If I have over 800 bins and a lot of samples it crashes after a long time. I will add a picture of the error:

More info: the data is counts of gamma-rays, so there can be many hundred thousand events in one spectrum. In many cases the data is skewed, with many bins at or just above zero and a few bins reaching very high counts.

All help is much appreciated!

Hi Hrima89

Your code is a bit unusual and tricky to follow. Have you looked at some of the examples in the PyMC3 docs? e.g. this one

The unfold function just looks to be doing a dot product; there is no need to write a function for that. You can just call tt.dot(truth, tresmat).

Does the wrapper function generate len(raw_py) independent DiscreteUniforms? You should be able to do that in one line of code, something like this

with pm.Model() as model:
    truth = pm.DiscreteUniform('truth', lower=0, upper=100, shape=64)

Is it possible to normalize your data to [0, 1] or to zero mean and unit standard deviation? The upper limit of 5550 in the discrete uniform may cause problems. Even better, it is preferable to work with continuous distributions rather than discrete ones. You can keep the observed variable as a Poisson.
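
Putting the structural suggestions together (vectorised prior, direct dot product, continuous prior with a Poisson likelihood), a rough sketch could look like this. It still uses your original bounds and assumes raw_py and resmat are defined as in your post:

import numpy as np
import pymc3 as pm
import theano.tensor as tt

with pm.Model() as model:
    # one prior per bin, in a single vectorised variable
    truth = pm.Uniform('truth', lower=0, upper=5550, shape=len(raw_py))
    # forward model: smear the truth spectrum with the detector response
    reco = tt.dot(truth, np.array(resmat))
    unfolded = pm.Poisson('unfolded', mu=reco, observed=np.array(raw_py))
    trace = pm.sample(2000, tune=1000, chains=2)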


Hi,

Thanks for the reply!

  • I will check out this example, thank you!

  • That is true, the unfold function might not be necessary. Maybe a stupid question: can including this function make the program slower?

  • The wrapper function makes a DiscreteUniform for each bin in raw_py. So if I have 800 bins and 2000 samples, it will make 800 ‘truth’ variables and draw 2000 samples for each of them. I think this might be the main cause of my problems, since it is obviously tedious. Then again, I need a posterior for the value in each bin…I have tried:

with pm.Model() as model:
    truth = pm.DiscreteUniform('truth', lower=0, upper=100, shape=len(raw_py))

but if I remember correctly this makes one truth-posterior distribution over the whole length of the data set?
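
As a sanity check on that, I guess a toy example like this (made-up data, not my real spectrum) would show whether each bin gets its own posterior:

import numpy as np
import pymc3 as pm

raw_py = np.array([3, 0, 7, 1])  # made-up 4-bin spectrum, just for the shape check
with pm.Model() as model:
    truth = pm.DiscreteUniform('truth', lower=0, upper=100, shape=len(raw_py))
    obs = pm.Poisson('obs', mu=truth, observed=raw_py)
    trace = pm.sample(500, tune=500, chains=1, cores=1)

print(trace['truth'].shape)  # (500, 4): one column, i.e. one marginal posterior, per bin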

  • The upper limit is the second reason for my problems. I have not tried normalization yet because I have been unsure how to treat the response matrix (called resmat in the code). It contains the probabilities for the different gamma-ray reactions in each bin. But writing this now, I do not see why the response matrix should be in the way of normalizing the data: it contains probabilities, i.e. it is already normalized between 0 and 1, so it should not matter whether I normalize the data or not. So in short: I will try this!

  • I will try continuous distributions as well. I have tried them before, but will include them once I have normalized the data.

Thanks again for the help!

Hi, I don't quite follow what you mean regarding the bins and samples. Is it possible to post a simplified version of your model?

It’s quite rare to use DiscreteUniform in Bayesian modeling.

Hi,

Yes, I will try to explain better.

I have a gamma-ray spectrum, called raw_py in the figure (black).
Raw_py has 30 bins (in the figure the x-axis shows the bin number).
So I make a prior_upper and prior_lower (shown as dotted lines in the figure) for each bin.
Bin number 0 has the uniform prior f = 1/(200 - 0), and so on. This is done with the wrapper function in the code I posted above.
After sampling I get 30 posterior distributions, one for each bin (x-axis in the figure).
I obtain the unfolded spectrum (called mean+std in the figure, blue dotted line) by taking the mean and std of the posterior distribution (i.e. the truth variables in the code) for each bin.
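
After sampling, what I do is roughly this (a sketch, using the 'truth%d' variable names from the code in my first post):

import numpy as np

n_bins = len(raw_py)
# per-bin posterior summaries: one mean and std per 'truth%d' variable
post_mean = np.array([trace['truth%d' % i].mean() for i in range(n_bins)])
post_std  = np.array([trace['truth%d' % i].std()  for i in range(n_bins)])
# post_mean and post_std give the blue dotted mean+std curve in the figure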

  • When I have 30 bins the code works okay, but most of the time I need over 300 bins to be able to see what is happening in the spectrum. Then the code takes many hours to run.

A little side info on what the figure shows:
As you can see, the unfolded spectrum (mean+std) looks different from raw_py (the raw data); this is because the unfolding has redistributed the counts into the true gamma-ray peaks. So what this code does is correct for the response of the detector. The aim is to estimate the truth spectrum without the response/smearing of the detector. If you are familiar with convolution, that is essentially what is happening here.
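
To make the forward model concrete, here is a toy example with a made-up 3-bin response matrix (not my real resmat):

import numpy as np

# each row gives the probabilities that a gamma-ray from that true-energy bin
# is recorded in each measured bin (rows sum to 1)
R = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
truth = np.array([100, 0, 50])   # hypothetical true spectrum
reco = truth @ R                 # what the detector would measure: [70., 30., 50.]
# the sampling goes the other way: infer truth from reco-like observed data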

I hope I was able to answer your question so that you understand my troubles better. If not, just ask. This is a complex case, so I understand that it is a mouthful to jump into on the fly. Thank you so much for trying!