What op is used for the Hadamard product of tensors?

What is the PyTensor operation for the Hadamard product of two tensors? I want to elementwise-multiply two tensors of shape (X, Y, Z), and * does not work for it.

* should work. Do you have an example where it seems to be failing?
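For reference, here is a minimal sketch (with made-up shapes) showing that * performs elementwise (Hadamard) multiplication on 3-d tensors; pt.mul is the equivalent explicit op:

import numpy as np
import pytensor.tensor as pt

a = pt.tensor3("a")
b = pt.tensor3("b")
hadamard = a * b  # elementwise product; pt.mul(a, b) is equivalent

av = np.random.rand(4, 3, 2).astype(a.dtype)
bv = np.random.rand(4, 3, 2).astype(b.dtype)
np.testing.assert_allclose(hadamard.eval({a: av, b: bv}), av * bv)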

import jax
jax.config.update('jax_platform_name', 'cpu')

import numpy as np
import pymc as pm
import pytensor
import pytensor.tensor as pt
from pymc.pytensorf import collect_default_updates

u = np.zeros((400, 86, 10))
y = np.ones((86, 400))
T = 86
latent_variables_ar = 10

with pm.Model() as mod:
    def step(x, A, Q):
        # One AR step: x_{t+1} = x_t @ A + innovation
        innov = pm.MvNormal.dist(mu=0, tau=Q)
        next_x = pt.nlinalg.matrix_dot(x, A) + innov
        return next_x, collect_default_updates([x, A, Q], [next_x])

    x0_ar = pt.zeros(latent_variables_ar)
    mu2_ar = np.zeros(latent_variables_ar)
    sd_dist_ar = pm.Exponential.dist(1.0, shape=latent_variables_ar)
    chol2_ar, corr_ar, stds_ar = pm.LKJCholeskyCov(
        'chol_cov_ar', n=latent_variables_ar, eta=2,
        sd_dist=sd_dist_ar, compute_corr=True,
    )
    A_ar = pm.MvNormal('A_ar', mu=mu2_ar, chol=chol2_ar,
                       shape=(latent_variables_ar, latent_variables_ar))

    sigmas_Q_ar = pm.HalfNormal('sigmas_Q_ar', sigma=1, shape=latent_variables_ar)
    Q_ar = pt.diag(sigmas_Q_ar)

    ar_states_pt, ar_updates = pytensor.scan(
        step,
        outputs_info=[x0_ar],
        non_sequences=[A_ar, Q_ar],
        n_steps=T,
        strict=True,
    )
    mod.register_rv(ar_states_pt, name='ar_states_pt',
                    initval=pt.zeros((T, latent_variables_ar)))

    # Tile the (T, latent) states out to (400, T, latent), multiply
    # elementwise with u, then average over one axis
    lambdas = pm.Deterministic(
        "lambdas",
        pt.math.mean(
            u * pt.transpose(
                pt.tile(
                    pt.transpose(ar_states_pt).reshape((latent_variables_ar, T, 1)),
                    400,
                ),
                axes=(2, 1, 0),
            ),
            axis=1,
        ),
    )
    obs = pm.Poisson('obs', lambdas, observed=y)

with mod:
    inference = pm.ADVI()
    tracker = pm.callbacks.Tracker(
        mean=inference.approx.mean.eval,  # callable that returns the mean
        std=inference.approx.std.eval,    # callable that returns the std
    )
    approx = pm.fit(n=20000, method=inference, callbacks=[tracker],
                    obj_optimizer=pm.adam(learning_rate=0.25), obj_n_mc=10)

idata = approx.sample(2000)

In the code above, u*(pt.transpose(pt.tile(pt.transpose(ar_states_pt).reshape((latent_variables_ar,T,1)),400),axes=(2,1,0))) does not work, even though u and the other term have exactly the same shape, (400, 86, 10). When I run the code, I get the error below:
ValueError: Incompatible Elemwise input shapes [(86, 400), (400, 10)]

The multiplication in the line you point to isn't where the error is raised. The error comes from the line obs = pm.Poisson('obs', lambdas, observed=y): lambdas has shape (400, 10), because pt.math.mean(..., axis=1) reduces the (400, 86, 10) product over its middle axis, while y has shape (86, 400).
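You can confirm the shape arithmetic with plain numpy (a sketch with zeros standing in for the actual values):

import numpy as np

prod = np.zeros((400, 86, 10))  # shape of u * tiled_states
print(prod.mean(axis=1).shape)  # (400, 10): axis 1 (length 86) is averaged away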

pm.model_to_graphviz is a nice tool for diagnosing these kinds of shape errors.
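For example, with a toy model just to illustrate the call (this requires the graphviz package to be installed):

import numpy as np
import pymc as pm

with pm.Model() as toy:
    mu = pm.Normal("mu", 0, 1, shape=3)
    lam = pm.Deterministic("lam", pm.math.exp(mu))
    pm.Poisson("obs", lam, observed=np.ones(3, dtype=int))

pm.model_to_graphviz(toy)  # returns a graphviz.Digraph; renders inline in notebooks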

Thanks a lot! I realized that I was using pt.math.mean incorrectly, which resulted in a dimension mismatch with the observed data. The interesting thing is that I was already using pm.model_to_graphviz(mod), but it raised no error and it does not show the dimensions of the Deterministic variable.
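In case it helps others: one way to check the shapes directly (a sketch; pm.draw evaluates a model variable by taking a random draw from it) is:

with mod:
    print(pm.draw(lambdas).shape)  # (400, 10), which disagrees with...
    print(y.shape)                 # ...y.shape == (86, 400)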