GaussianRandomWalk from the example notebook errors out with ValueError: Input dimension mis-match

I am experimenting with the code from https://github.com/twiecki/WhileMyMCMCGentlySamples/blob/master/content/downloads/notebooks/random_walk_deep_net.ipynb, but the GaussianRandomWalk line errors out with 'ValueError: Input dimension mis-match. (input[0].shape[1] = 1, input[1].shape[1] = 2)'. Can someone let me know what I'm missing, or whether there is an updated notebook? I installed PyMC3 via conda-forge and there are no import issues.

Detailed log:

ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
8
9 # This is the central trick, PyMC3 already comes with this distribution
--> 10 w = pm.GaussianRandomWalk('w', sd=step_size,
11 shape=(interval, 2))
12

D:\Softwares\Anaconda3\lib\site-packages\pymc3\distributions\distribution.py in __new__(cls, name, *args, **kwargs)
45 total_size = kwargs.pop('total_size', None)
46 dist = cls.dist(*args, **kwargs)
--> 47 return model.Var(name, dist, data, total_size)
48 else:
49 raise TypeError("Name needs to be a string but got: {}".format(name))

D:\Softwares\Anaconda3\lib\site-packages\pymc3\model.py in Var(self, name, dist, data, total_size)
919 if getattr(dist, "transform", None) is None:
920 with self:
--> 921 var = FreeRV(name=name, distribution=dist,
922 total_size=total_size, model=self)
923 self.free_RVs.append(var)

D:\Softwares\Anaconda3\lib\site-packages\pymc3\model.py in __init__(self, type, owner, index, name, distribution, total_size, model)
1368 self.tag.test_value = np.ones(
1369 distribution.shape, distribution.dtype) * distribution.default()
--> 1370 self.logp_elemwiset = distribution.logp(self)
1371 # The logp might need scaling in minibatches.
1372 # This is done in Factor.

D:\Softwares\Anaconda3\lib\site-packages\pymc3\distributions\timeseries.py in logp(self, x)
230 x_i = x[1:]
231 mu, sigma = self._mu_and_sigma(self.mu, self.sigma)
--> 232 innov_like = Normal.dist(mu=x_im1 + mu, sigma=sigma).logp(x_i)
233 return self.init.logp(x[0]) + tt.sum(innov_like)
234 return self.init.logp(x)

D:\Softwares\Anaconda3\lib\site-packages\pymc3\distributions\continuous.py in logp(self, value)
516 mu = self.mu
517
--> 518 return bound((-tau * (value - mu)**2 + tt.log(tau / np.pi / 2.)) / 2.,
519 sigma > 0)
520

D:\Softwares\Anaconda3\lib\site-packages\theano\tensor\var.py in __mul__(self, other)
153 # and the return value in that case
154 try:
--> 155 return theano.tensor.mul(self, other)
156 except (NotImplementedError, AsTensorError):
157 return NotImplemented

D:\Softwares\Anaconda3\lib\site-packages\theano\gof\op.py in __call__(self, *inputs, **kwargs)
672 thunk.outputs = [storage_map[v] for v in node.outputs]
673
--> 674 required = thunk()
675 assert not required # We provided all inputs
676

D:\Softwares\Anaconda3\lib\site-packages\theano\gof\op.py in rval()
860
861 def rval():
--> 862 thunk()
863 for o in node.outputs:
864 compute_map[o][0] = True

D:\Softwares\Anaconda3\lib\site-packages\theano\gof\cc.py in __call__(self)
1737 print(self.error_storage, file=sys.stderr)
1738 raise
--> 1739 reraise(exc_type, exc_value, exc_trace)
1740
1741

D:\Softwares\Anaconda3\lib\site-packages\six.py in reraise(tp, value, tb)
701 if value.__traceback__ is not tb:
702 raise value.with_traceback(tb)
--> 703 raise value
704 finally:
705 value = None

ValueError: Input dimension mis-match. (input[0].shape[1] = 1, input[1].shape[1] = 2)

Any help is appreciated. Thanks.

Hmmm, the problem seems to be the shape of step_size. I would just make it a matrix of shape (1, n_dim), but it is not broadcasting and I don't know why.

Nevertheless, I just reshaped things a bit to ensure that the shapes match, and it works:

step_size = pm.HalfNormal('step_size',
                          sd=np.ones(n_dim),
                          shape=(1, n_dim))
# Repeat the (1, n_dim) row `interval` times so the sd has the
# same shape as w, avoiding any reliance on broadcasting
step_size_ = tt.tile(step_size, (interval, 1))

# This is the central trick, PyMC3 already comes with this distribution
w = pm.GaussianRandomWalk('w', sd=step_size_,
                          shape=(interval, n_dim))

Hope it helps!
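In case the tiling step looks mysterious: the point is that after tt.tile the sd array has exactly the same shape as w, so the elementwise ops inside logp need no broadcasting at all. Here is a minimal NumPy sketch of just the shapes (the interval and n_dim values are illustrative; np.tile stands in for tt.tile, which behaves the same way):

```python
import numpy as np

interval, n_dim = 5, 2  # illustrative sizes; the notebook defines these itself

# A (1, n_dim) row of step sizes, like the HalfNormal above
step_size = np.ones((1, n_dim))

# np.tile plays the role of tt.tile: repeat the row `interval` times
step_size_ = np.tile(step_size, (interval, 1))
print(step_size_.shape)  # (5, 2), i.e. (interval, n_dim)

# An elementwise op against an (interval, n_dim) array now needs
# no broadcasting -- the shapes match exactly
w = np.zeros((interval, n_dim))
print((w * step_size_).shape)  # (5, 2)
```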


Apologies for the late response. This one works. Thank you very much!