Concatenating PyTensor tensors

I have one prior that is a joint normal distribution and another that is a single scalar normal distribution. Together they make up the parameters of my model, which uses a black-box likelihood function. The problem is that PyTensor complains that the vector and the scalar can't be broadcast because they have different lengths. What I would like to do is combine them into one single PyTensor tensor, but I can't figure out how. Please advise.

I'm sure this is super easy for some of you, but for us novices, the following seems to work.

import pymc as pm
import pytensor.tensor as pt

with pm.Model():
    m = pm.Normal("m", mu=[0, 1], sigma=[1, 1])
    c = pm.Uniform("c", lower=0, upper=1)
    param = pt.as_tensor([0, 0, 0])
    param = pt.set_subtensor(param[0:2], m[0:2])
    param = pt.set_subtensor(param[2], c)

Then you can pass param to a custom likelihood without the broadcast problem.
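For anyone following along, here is a hypothetical sketch of how param might be wired into a black-box likelihood. my_loglike is just a placeholder for your actual function, and pm.Potential adds its output to the model's log-probability (pt.zeros(3) gives a float container, which matters for the dtype issue discussed below):

import pymc as pm
import pytensor.tensor as pt

def my_loglike(theta):
    # placeholder: swap in your actual black-box log-likelihood here
    return -0.5 * pt.sum(theta ** 2)

with pm.Model():
    m = pm.Normal("m", mu=[0, 1], sigma=[1, 1])
    c = pm.Uniform("c", lower=0, upper=1)
    param = pt.set_subtensor(pt.zeros(3)[0:2], m)  # first two entries from m
    param = pt.set_subtensor(param[2], c)          # last entry from c
    pm.Potential("loglike", my_loglike(param))     # contributes the term to the model logp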

You can also use pt.stack, which works like np.stack. It should be more efficient than building the tensor with set_subtensor as well.
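For example, a minimal sketch (reusing the names from the post above) that stacks the individual scalars into a single length-3 vector; because pt.stack infers the dtype from its inputs, it also sidesteps the integer-tensor issue mentioned below:

import pymc as pm
import pytensor.tensor as pt

with pm.Model():
    m = pm.Normal("m", mu=[0, 1], sigma=[1, 1])
    c = pm.Uniform("c", lower=0, upper=1)
    param = pt.stack([m[0], m[1], c])  # length-3 float vector

pm.draw(param)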

It depends a little bit on the desired shape. If you just want to concatenate a vector with a scalar to generate a slightly longer vector:

with pm.Model():
    a = pm.Normal('a', mu=[0, 0], sigma=[1, 1])
    b = pm.Uniform('b', lower=0, upper=1, shape=1)  # shape=1 makes b a length-1 vector, so it can be concatenated
    c = pt.concatenate([a, b])  # length-3 vector: [a[0], a[1], b[0]]

c.eval()

It also sounded like you might want to broadcast them instead? If you are looking for a 2 x 2 matrix, you could also try:

with pm.Model():
    a = pm.Normal('a', mu=[0, 0], sigma=[1, 1])
    b = pm.Uniform('b', lower=0, upper=1, shape=1)
    b = pt.broadcast_to(b, a.shape)  # repeat b so it matches a's shape
    c = pt.stack([a, b], axis=1)     # 2 x 2 matrix with columns a and b

c.eval()

Lastly, I was getting some very weird behaviour out of your solution. It looks like when you build param with pt.as_tensor([0, 0, 0]), the integer literals make it infer an integer tensor, so the values of m and c effectively get cast to integers. Try this instead:

with pm.Model():
    m = pm.Normal("m", mu=[0, 0], sigma=[1, 1])
    c = pm.Uniform("c", lower=0, upper=1)
    param = pt.as_tensor([0., 0., 0.])  # float literals, so param gets a float dtype
    param = pt.set_subtensor(param[:2], m)
    param = pt.set_subtensor(param[2], c)

pm.draw(param)
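You can check the inferred dtypes directly; with PyTensor's defaults, integer literals produce an int64 tensor while float literals produce float64:

import pytensor.tensor as pt

print(pt.as_tensor([0, 0, 0]).dtype)     # int64
print(pt.as_tensor([0., 0., 0.]).dtype)  # float64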

Nice catch on the zeros. Thanks.