Using pm.CustomDist and getting error: TypeError: rv_op() got an unexpected keyword argument 'mu'

Hi all, I am trying to write a custom shifted lognormal distribution: as you can see in the code below, I just need to add a shift to a lognormal distribution. I went through some posts and found this: https://www.pymc.io/projects/docs/en/latest/api/distributions/generated/pymc.CustomDist.html
So I modified my code based on that page. Here is my code:

Function part:

import pymc as pm
from pytensor.tensor import TensorVariable

def dist(
    mu: TensorVariable,
    sigma: TensorVariable,
    shift: TensorVariable,
    size: TensorVariable,
) -> TensorVariable:
    # shifted lognormal: a LogNormal(mu, sigma) draw plus a shift
    return pm.LogNormal.dist(mu, sigma, size=size) + shift
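
For illustration, drawing from this generator directly (with made-up parameter values, outside the model) is a quick way to check that it behaves like a lognormal offset by the shift:

# made-up values, just to sanity-check the generator
samples = pm.draw(dist(mu=0.0, sigma=0.5, shift=0.3, size=(5,)), random_seed=1)
print(samples)  # every draw should be greater than the shift of 0.3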

Model part: since the model definition is a bit complex, I'll just post some key lines here.

coords = {"cond": df_flanker['Condition'].unique(),
          "time": df_flanker['Time'].unique(),
          "subj": df_flanker['subj_num'].unique(),
          "trial": df_flanker['trial_idx']}

with pm.Model(coords=coords) as model_3:
  ...
    ndt_i = pm.Deterministic('ndt_i', phi*rt_min, dims=["subj","time"])
  ...
    mu = pm.Deterministic("mu",pm.math.stack([mu_i_base,mu_i_incon]), dims=["cond","subj","time"])
  ...
    sigma = pm.Deterministic("sigma", pm.math.exp(sigma_stack), dims=["cond","subj","time"])

    time_idx = pm.MutableData("time_idx",df_flanker.Time, dims="trial")
    subj_idx = pm.MutableData("subj_idx",df_flanker.subj_num, dims="trial")
    cond_idx = pm.MutableData("cond_idx",df_flanker.Condition, dims="trial")

    pm.CustomDist("likelihood",
                    mu = mu[cond_idx,subj_idx,time_idx], 
                    sigma=sigma[cond_idx,subj_idx,time_idx],
                    shift=ndt_i[subj_idx, time_idx],
                    dist=dist,
                    observed=df_flanker.RT - ndt_i[subj_idx, time_idx].eval(),
                    dims="trial")

and I got this error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[28], line 93
     90 subj_idx = pm.MutableData("subj_idx",df_flanker.subj_num, dims="trial")
     91 cond_idx = pm.MutableData("cond_idx",df_flanker.Condition, dims="trial")
---> 93 pm.CustomDist("likelihood",
     94                 mu = mu[cond_idx,subj_idx,time_idx], 
     95                 sigma=sigma[cond_idx,subj_idx,time_idx],
     96                 shift=ndt_i[subj_idx, time_idx],
     97                 dist=dist,
     98                 observed=df_flanker.RT - ndt_i[subj_idx, time_idx].eval(),
     99                 dims="trial")

File /opt/conda/lib/python3.9/site-packages/pymc/distributions/distribution.py:958, in CustomDist.__new__(cls, name, dist, random, logp, logcdf, moment, ndim_supp, ndims_params, dtype, *dist_params, **kwargs)
    956 if dist is not None:
    957     kwargs.setdefault("class_name", f"CustomDist_{name}")
--> 958     return _CustomSymbolicDist(
    959         name,
    960         *dist_params,
    961         dist=dist,
    962         logp=logp,
    963         logcdf=logcdf,
    964         moment=moment,
    965         ndim_supp=ndim_supp,
    966         **kwargs,
    967     )
    968 else:
    969     kwargs.setdefault("class_name", f"CustomDist_{name}")

File /opt/conda/lib/python3.9/site-packages/pymc/distributions/distribution.py:308, in Distribution.__new__(cls, name, rng, dims, initval, observed, total_size, transform, *args, **kwargs)
    305     elif observed is not None:
    306         kwargs["shape"] = tuple(observed.shape)
--> 308 rv_out = cls.dist(*args, **kwargs)
    310 rv_out = model.register_rv(
    311     rv_out,
    312     name,
   (...)
    317     initval=initval,
    318 )
    320 # add in pretty-printing support

File /opt/conda/lib/python3.9/site-packages/pymc/distributions/distribution.py:622, in _CustomSymbolicDist.dist(cls, dist, logp, logcdf, moment, ndim_supp, dtype, class_name, *dist_params, **kwargs)
    614 if moment is None:
    615     moment = functools.partial(
    616         default_moment,
    617         rv_name=class_name,
    618         has_fallback=True,
    619         ndim_supp=ndim_supp,
    620     )
--> 622 return super().dist(
    623     dist_params,
    624     class_name=class_name,
    625     logp=logp,
    626     logcdf=logcdf,
    627     dist=dist,
    628     moment=moment,
    629     ndim_supp=ndim_supp,
    630     **kwargs,
    631 )

File /opt/conda/lib/python3.9/site-packages/pymc/distributions/distribution.py:385, in Distribution.dist(cls, dist_params, shape, **kwargs)
    383 ndim_supp = getattr(cls.rv_op, "ndim_supp", None)
    384 if ndim_supp is None:
--> 385     ndim_supp = cls.rv_op(*dist_params, **kwargs).owner.op.ndim_supp
    386 create_size = find_size(shape=shape, size=size, ndim_supp=ndim_supp)
    387 rv_out = cls.rv_op(*dist_params, size=create_size, **kwargs)

TypeError: rv_op() got an unexpected keyword argument 'mu'

Maybe it is due to the definition of dims or something? I am new to PyMC, so it is still difficult for me to understand the error, and I don't know how to modify the code above. Maybe some of you can help me out, thanks a lot!

The CustomDist parameters must be passed as positional arguments, not by keyword.

    pm.CustomDist("likelihood",
                    mu[cond_idx,subj_idx,time_idx], 
                    sigma[cond_idx,subj_idx,time_idx],
                    ndt_i[subj_idx, time_idx],
                    dist=dist,
                    observed=df_flanker.RT - ndt_i[subj_idx, time_idx].eval(),
                    dims="trial")

Thanks! Does passing positional arguments mean that in pm.CustomDist I should pass the parameters the same way as in the dist function, i.e. that I should pass mu instead of mu[cond_idx,subj_idx,time_idx]?

If so, mu would then change according to the dims, so maybe I should construct the CustomDist in some alternative way? Do you have any advice?

You shouldn't have to change anything about the dist function just because you're passing positional arguments. It's just like a Python function call: the variable you pass in is bound to the name of the corresponding function parameter.

You should index outside, just like you would with a vanilla LogNormal distribution. So what I showed above should be correct.
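
To make the binding concrete, here is a minimal self-contained sketch (toy priors and made-up data, not your flanker model) where the three positional arguments are matched, in order, to the mu, sigma and shift parameters of the dist function:

import numpy as np
import pymc as pm
from pytensor.tensor import TensorVariable


def shifted_lognormal(
    mu: TensorVariable,
    sigma: TensorVariable,
    shift: TensorVariable,
    size: TensorVariable,
) -> TensorVariable:
    return pm.LogNormal.dist(mu, sigma, size=size) + shift


rt = np.array([0.45, 0.52, 0.61, 0.70])  # made-up reaction times

with pm.Model() as toy_model:
    mu = pm.Normal("mu", 0.0, 1.0)
    sigma = pm.HalfNormal("sigma", 1.0)
    shift = pm.HalfNormal("shift", 0.5)
    # positional: 1st -> mu, 2nd -> sigma, 3rd -> shift in shifted_lognormal
    pm.CustomDist("rt_obs", mu, sigma, shift, dist=shifted_lognormal, observed=rt)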

Oh! Now I see, thank you so much, it worked.

Here is one more thing: I tried to do the sampling, but it failed and says:
Some of the observed values of variable likelihood are associated with a non-finite logp. Here is the full error; it is a bit long :sweat_smile:


point={'L_R_mu_interval__': array([0.]), 'L_R_sigma_interval__': array([0.]), 'mu_mean_base': array([0., 0.]), 'mu_sd_base_log__': array([0., 0.]), 'sigma_mean_base': array([0., 0.]), 'sigma_sd_base_log__': array([0., 0.]), 'mu_mean_delta': array([0., 0.]), 'mu_sd_delta_log__': array([0., 0.]), 'sigma_mean_delta': array([0., 0.]), 'sigma_sd_delta_log__': array([0., 0.]), 'ndt_mean': array([0., 0.]), 'ndt_sigma': array([0., 0.]), 'mu_i_base_pr': <zeros, shape (47, 2)>, 'sigma_i_base_pr': <zeros, shape (47, 2)>, 'mu_i_delta_pr': <zeros, shape (2, 47)>, 'sigma_i_delta_pr': <zeros, shape (2, 47)>, 'ndt_i_pr': <zeros, shape (47, 2)>}

The variable likelihood has the following parameters:
0: [45120] [id A] <Vector(int64, shape=(1,))>
1: AdvancedSubtensor [id B] <Vector(float64, shape=(?,))>
 ├─ Join [id C] <Tensor3(float64, shape=(2, 47, 2))> 'mu'
 │  ├─ 0 [id D] <Scalar(int8, shape=())>
 │  ├─ Add [id E] <Tensor3(float64, shape=(1, 47, 2))>
 │  │  ├─ ExpandDims{axes=[0, 1]} [id F] <Tensor3(float64, shape=(1, 1, 2))>
 │  │  │  └─ mu_mean_base [id G] <Vector(float64, shape=(2,))>
 │  │  └─ ExpandDims{axis=0} [id H] <Tensor3(float64, shape=(1, 47, 2))>
 │  │     └─ Mul [id I] <Matrix(float64, shape=(47, 2))>
 │  │        ├─ Exp [id J] <Matrix(float64, shape=(1, 2))>
 │  │        │  └─ ExpandDims{axis=0} [id K] <Matrix(float64, shape=(1, 2))>
 │  │        │     └─ mu_sd_base_log__ [id L] <Vector(float64, shape=(2,))>
 │  │        └─ mu_i_base_pr [id M] <Matrix(float64, shape=(47, 2))>
 │  └─ Add [id N] <Tensor3(float64, shape=(1, 47, 2))>
 │     ├─ ExpandDims{axes=[0, 1]} [id O] <Tensor3(float64, shape=(1, 1, 2))>
 │     │  └─ mu_mean_base [id G] <Vector(float64, shape=(2,))>
 │     ├─ ExpandDims{axis=0} [id P] <Tensor3(float64, shape=(1, 47, 2))>
 │     │  └─ Mul [id I] <Matrix(float64, shape=(47, 2))>
 │     │     └─ ···
 │     ├─ ExpandDims{axes=[0, 1]} [id Q] <Tensor3(float64, shape=(1, 1, 2))>
 │     │  └─ mu_mean_delta [id R] <Vector(float64, shape=(2,))>
 │     └─ DimShuffle{order=[x,1,0]} [id S] <Tensor3(float64, shape=(1, 47, ?))>
 │        └─ Dot22 [id T] <Matrix(float64, shape=(?, 47))> 'mu_i_delta_tilde'
 │           ├─ Dot22 [id U] <Matrix(float64, shape=(?, ?))> 'L_S_mu'
 │           │  ├─ AllocDiag{offset=0, axis1=0, axis2=1} [id V] <Matrix(float64, shape=(?, ?))>
 │           │  │  └─ Exp [id W] <Vector(float64, shape=(2,))> 'mu_sd_delta'
 │           │  │     └─ mu_sd_delta_log__ [id X] <Vector(float64, shape=(2,))>
 │           │  └─ Cholesky{lower=True, destructive=False, on_error='raise'} [id Y] <Matrix(float64, shape=(?, ?))>
 │           │     └─ Add [id Z] <Matrix(float64, shape=(?, ?))>
 │           │        ├─ [[1. 0.]
 [0. 1.]] [id BA] <Matrix(float64, shape=(2, 2))>
 │           │        ├─ AdvancedSetSubtensor [id BB] <Matrix(float64, shape=(?, ?))>
 │           │        │  ├─ Alloc [id BC] <Matrix(float64, shape=(2, 2))>
 │           │        │  │  ├─ 0.0 [id BD] <Scalar(float64, shape=())>
 │           │        │  │  ├─ 2 [id BE] <Scalar(int8, shape=())>
 │           │        │  │  └─ 2 [id BE] <Scalar(int8, shape=())>
 │           │        │  ├─ Sub [id BF] <Vector(float64, shape=(1,))> 'L_R_mu'
 │           │        │  │  ├─ Sigmoid [id BG] <Vector(float64, shape=(1,))>
 │           │        │  │  │  └─ L_R_mu_interval__ [id BH] <Vector(float64, shape=(1,))>
 │           │        │  │  └─ Sub [id BI] <Vector(float64, shape=(1,))>
 │           │        │  │     ├─ [1.] [id BJ] <Vector(float64, shape=(1,))>
 │           │        │  │     └─ Sigmoid [id BG] <Vector(float64, shape=(1,))>
 │           │        │  │        └─ ···
 │           │        │  ├─ [0] [id BK] <Vector(uint8, shape=(1,))>
 │           │        │  └─ [1] [id BL] <Vector(uint8, shape=(1,))>
 │           │        └─ Transpose{axes=[1, 0]} [id BM] <Matrix(float64, shape=(?, ?))>
 │           │           └─ AdvancedSetSubtensor [id BB] <Matrix(float64, shape=(?, ?))>
 │           │              └─ ···
 │           └─ mu_i_delta_pr [id BN] <Matrix(float64, shape=(2, 47))>
 ├─ cond_idx [id BO] <Vector(int32, shape=(?,))>
 ├─ subj_idx [id BP] <Vector(int32, shape=(?,))>
 └─ time_idx [id BQ] <Vector(int32, shape=(?,))>
2: AdvancedSubtensor [id BR] <Vector(float64, shape=(?,))>
 ├─ Exp [id BS] <Tensor3(float64, shape=(2, 47, 2))> 'sigma'
 │  └─ Join [id BT] <Tensor3(float64, shape=(2, 47, 2))> 'sigma_stack'
 │     ├─ 0 [id D] <Scalar(int8, shape=())>
 │     ├─ Add [id BU] <Tensor3(float64, shape=(1, 47, 2))>
 │     │  ├─ ExpandDims{axes=[0, 1]} [id BV] <Tensor3(float64, shape=(1, 1, 2))>
 │     │  │  └─ sigma_mean_base [id BW] <Vector(float64, shape=(2,))>
 │     │  └─ ExpandDims{axis=0} [id BX] <Tensor3(float64, shape=(1, 47, 2))>
 │     │     └─ Mul [id BY] <Matrix(float64, shape=(47, 2))>
 │     │        ├─ Exp [id BZ] <Matrix(float64, shape=(1, 2))>
 │     │        │  └─ ExpandDims{axis=0} [id CA] <Matrix(float64, shape=(1, 2))>
 │     │        │     └─ sigma_sd_base_log__ [id CB] <Vector(float64, shape=(2,))>
 │     │        └─ sigma_i_base_pr [id CC] <Matrix(float64, shape=(47, 2))>
 │     └─ Add [id CD] <Tensor3(float64, shape=(1, 47, 2))>
 │        ├─ ExpandDims{axes=[0, 1]} [id CE] <Tensor3(float64, shape=(1, 1, 2))>
 │        │  └─ sigma_mean_base [id BW] <Vector(float64, shape=(2,))>
 │        ├─ ExpandDims{axis=0} [id CF] <Tensor3(float64, shape=(1, 47, 2))>
 │        │  └─ Mul [id BY] <Matrix(float64, shape=(47, 2))>
 │        │     └─ ···
 │        ├─ ExpandDims{axes=[0, 1]} [id CG] <Tensor3(float64, shape=(1, 1, 2))>
 │        │  └─ sigma_mean_delta [id CH] <Vector(float64, shape=(2,))>
 │        └─ DimShuffle{order=[x,1,0]} [id CI] <Tensor3(float64, shape=(1, 47, ?))>
 │           └─ Dot22 [id CJ] <Matrix(float64, shape=(?, 47))> 'sigma_i_delta_tilde'
 │              ├─ Dot22 [id CK] <Matrix(float64, shape=(?, ?))> 'L_S_sigma'
 │              │  ├─ AllocDiag{offset=0, axis1=0, axis2=1} [id CL] <Matrix(float64, shape=(?, ?))>
 │              │  │  └─ Exp [id CM] <Vector(float64, shape=(2,))> 'sigma_sd_delta'
 │              │  │     └─ sigma_sd_delta_log__ [id CN] <Vector(float64, shape=(2,))>
 │              │  └─ Cholesky{lower=True, destructive=False, on_error='raise'} [id CO] <Matrix(float64, shape=(?, ?))>
 │              │     └─ Add [id CP] <Matrix(float64, shape=(?, ?))>
 │              │        ├─ [[1. 0.]
 [0. 1.]] [id BA] <Matrix(float64, shape=(2, 2))>
 │              │        ├─ AdvancedSetSubtensor [id CQ] <Matrix(float64, shape=(?, ?))>
 │              │        │  ├─ Alloc [id BC] <Matrix(float64, shape=(2, 2))>
 │              │        │  │  └─ ···
 │              │        │  ├─ Sub [id CR] <Vector(float64, shape=(1,))> 'L_R_sigma'
 │              │        │  │  ├─ Sigmoid [id CS] <Vector(float64, shape=(1,))>
 │              │        │  │  │  └─ L_R_sigma_interval__ [id CT] <Vector(float64, shape=(1,))>
 │              │        │  │  └─ Sub [id CU] <Vector(float64, shape=(1,))>
 │              │        │  │     ├─ [1.] [id BJ] <Vector(float64, shape=(1,))>
 │              │        │  │     └─ Sigmoid [id CS] <Vector(float64, shape=(1,))>
 │              │        │  │        └─ ···
 │              │        │  ├─ [0] [id BK] <Vector(uint8, shape=(1,))>
 │              │        │  └─ [1] [id BL] <Vector(uint8, shape=(1,))>
 │              │        └─ Transpose{axes=[1, 0]} [id CV] <Matrix(float64, shape=(?, ?))>
 │              │           └─ AdvancedSetSubtensor [id CQ] <Matrix(float64, shape=(?, ?))>
 │              │              └─ ···
 │              └─ sigma_i_delta_pr [id CW] <Matrix(float64, shape=(2, 47))>
 ├─ cond_idx [id BO] <Vector(int32, shape=(?,))>
 ├─ subj_idx [id BP] <Vector(int32, shape=(?,))>
 └─ time_idx [id BQ] <Vector(int32, shape=(?,))>
3: AdvancedSubtensor [id CX] <Vector(float64, shape=(?,))>
 ├─ Mul [id CY] <Matrix(float64, shape=(47, 2))> 'ndt_i'
 │  ├─ Exp [id CZ] <Matrix(float64, shape=(47, 2))>
 │  │  └─ Switch [id DA] <Matrix(float64, shape=(47, 2))>
 │  │     ├─ Lt [id DB] <Matrix(bool, shape=(47, 2))>
 │  │     │  ├─ Add [id DC] <Matrix(float64, shape=(47, 2))>
 │  │     │  │  ├─ ExpandDims{axis=0} [id DD] <Matrix(float64, shape=(1, 2))>
 │  │     │  │  │  └─ ndt_mean [id DE] <Vector(float64, shape=(2,))>
 │  │     │  │  └─ Mul [id DF] <Matrix(float64, shape=(47, 2))>
 │  │     │  │     ├─ ExpandDims{axis=0} [id DG] <Matrix(float64, shape=(1, 2))>
 │  │     │  │     │  └─ ndt_sigma [id DH] <Vector(float64, shape=(2,))>
 │  │     │  │     └─ ndt_i_pr [id DI] <Matrix(float64, shape=(47, 2))>
 │  │     │  └─ [[-1.]] [id DJ] <Matrix(float32, shape=(1, 1))>
 │  │     ├─ Sub [id DK] <Matrix(float64, shape=(47, 2))>
 │  │     │  ├─ Log [id DL] <Matrix(float64, shape=(47, 2))>
 │  │     │  │  └─ Mul [id DM] <Matrix(float64, shape=(47, 2))>
 │  │     │  │     ├─ [[0.5]] [id DN] <Matrix(float64, shape=(1, 1))>
 │  │     │  │     └─ Erfcx [id DO] <Matrix(float64, shape=(47, 2))>
 │  │     │  │        └─ Mul [id DP] <Matrix(float64, shape=(47, 2))>
 │  │     │  │           ├─ [[-0.70710679]] [id DQ] <Matrix(float64, shape=(1, 1))>
 │  │     │  │           └─ Add [id DC] <Matrix(float64, shape=(47, 2))>
 │  │     │  │              └─ ···
 │  │     │  └─ Mul [id DR] <Matrix(float64, shape=(47, 2))>
 │  │     │     ├─ [[0.5]] [id DN] <Matrix(float64, shape=(1, 1))>
 │  │     │     └─ Sqr [id DS] <Matrix(float64, shape=(47, 2))>
 │  │     │        └─ Add [id DC] <Matrix(float64, shape=(47, 2))>
 │  │     │           └─ ···
 │  │     └─ Log1p [id DT] <Matrix(float64, shape=(47, 2))>
 │  │        └─ Mul [id DU] <Matrix(float64, shape=(47, 2))>
 │  │           ├─ [[-0.5]] [id DV] <Matrix(float64, shape=(1, 1))>
 │  │           └─ Erfc [id DW] <Matrix(float64, shape=(47, 2))>
 │  │              └─ Mul [id DX] <Matrix(float64, shape=(47, 2))>
 │  │                 ├─ [[0.70710679]] [id DY] <Matrix(float64, shape=(1, 1))>
 │  │                 └─ Add [id DC] <Matrix(float64, shape=(47, 2))>
 │  │                    └─ ···
 │  └─ [[0.22698 ... .24872  ]] [id DZ] <Matrix(float64, shape=(47, 2))>
 ├─ subj_idx [id BP] <Vector(int32, shape=(?,))>
 └─ time_idx [id BQ] <Vector(int32, shape=(?,))>
The parameters evaluate to:
0: [45120]
1: [0. 0. 0. ... 0. 0. 0.]
2: [1. 1. 1. ... 1. 1. 1.]
3: [0.11349 0.11349 0.11349 ... 0.12436 0.12436 0.12436]
Some of the observed values of variable likelihood are associated with a non-finite logp:
 value = 0.013057221998227633 -> logp = -inf
 value = 0.12245559233225234 -> logp = -inf
 value = 0.12279559233225235 -> logp = -inf
 value = 0.1324055923322523 -> logp = -inf
 value = 0.13028559233225234 -> logp = -inf
 value = 0.13515559233225233 -> logp = -inf
 value = 0.09796559233225233 -> logp = -inf
 value = 0.12287559233225231 -> logp = -inf
 value = 0.10151559233225232 -> logp = -inf
 value = 0.0845055923322523 -> logp = -inf
 value = 0.10644559233225231 -> logp = -inf
 value = 0.1179155923322523 -> logp = -inf
 value = 0.1217555923322523 -> logp = -inf
 value = 0.1133155923322523 -> logp = -inf
 value = 0.10489559233225232 -> logp = -inf
 value = 0.1310855923322523 -> logp = -inf

I see this post has the same error, but after browsing it I still have no clue: https://discourse.pymc.io/t/some-of-the-observed-values-are-associated-with-a-non-finite-logp/12909

That means you have some observed values that are not possible under your distribution, like values smaller than the shift. Or you have invalid parameters, like a negative sigma.

Or even if the values are valid, they may underflow/overflow because they are too extreme (like very small values for a large mu and small sigma, or very large values for a small mu and sigma).

Calling model.debug() may help pinpoint the problem.
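
For instance, here is a rough sketch with made-up numbers: an observation below the shift leaves a non-positive value for the underlying LogNormal, whose support is strictly positive, so its logp is -inf.

import pymc as pm

value, mu, sigma, shift = 0.10, -1.0, 0.5, 0.12  # made-up numbers
# the shifted value passed to the LogNormal is 0.10 - 0.12 < 0, so the logp is -inf
print(pm.logp(pm.LogNormal.dist(mu, sigma), value - shift).eval())

# inside your model, the debug helper prints which variables/values are problematic:
model_3.debug()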

Got it, I will check the model definition again. Thanks a lot!