@cluhmann is right. Have a look at the following
import bambi as bmb
import pandas as pd
data = {
    "x": [738886, 738926, 738950, 739014, 739065, 739123, 739145, 739192, 739218, 739239],
    "y": [48, 47, 46, 45, 43, 42, 42, 42, 41, 41],
    "phs1": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
    "phs2": [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
}
df = pd.DataFrame(data)
model = bmb.Model("y ~ x + x:phs1 + x:phs2", df)
model
Formula: y ~ x + x:phs1 + x:phs2
Family: gaussian
Link: mu = identity
Observations: 10
Priors:
    target = mu
        Common-level effects
            Intercept ~ Normal(mu: 43.7, sigma: 37539.2986)
            x ~ Normal(mu: 0.0, sigma: 0.0508)
            x:phs1 ~ Normal(mu: 0.0, sigma: 0.0)
            x:phs2 ~ Normal(mu: 0.0, sigma: 0.0)
        Auxiliary parameters
            sigma ~ HalfStudentT(nu: 4.0, sigma: 2.4515)
Notice first the huge standard deviation on the intercept and the “zero” sigma for the slope deflections.
If you then inspect the internal priors (to circumvent the rounding in the summary)
print(model.components["mu"].common_terms["x"].prior.args)
print(model.components["mu"].common_terms["x:phs1"].prior.args)
print(model.components["mu"].common_terms["x:phs2"].prior.args)
{'mu': array(0.), 'sigma': array(0.05079222)}
{'mu': array(0.), 'sigma': array(1.65875211e-05)}
{'mu': array(0.), 'sigma': array(1.6582692e-05)}
You can see that the sigmas for the deflection terms are much smaller than the one for the main slope.
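The reason is that Bambi scales these automatic priors by the spread of each column in the design matrix, and the columns for x:phs1 and x:phs2 (x with half of its entries zeroed out) are far more spread out than x itself. You can check the scales directly with pandas; the 2.5 * sd(y) / sd(column) scaling below is just my reconstruction from the numbers printed above, so take it as an approximation of what Bambi does rather than the exact rule.

# Spread of each design-matrix column (population sd)
print(df["x"].std(ddof=0))                 # ~121
print((df["x"] * df["phs1"]).std(ddof=0))  # ~3.7e5
print((df["x"] * df["phs2"]).std(ddof=0))  # ~3.7e5

# These reproduce the sigmas shown above up to rounding
print(2.5 * df["y"].std(ddof=0) / df["x"].std(ddof=0))                 # ~0.051
print(2.5 * df["y"].std(ddof=0) / (df["x"] * df["phs1"]).std(ddof=0))  # ~1.66e-05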
This problem still happens when you do y ~ 1 + x:phs1 + x:phs2
(which, unlike the previous formula, gives identified slopes for your data).
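You can confirm that with the same kind of check (model_alt is just a name I picked):

model_alt = bmb.Model("y ~ 1 + x:phs1 + x:phs2", df)
print(model_alt)  # the sigmas for x:phs1 and x:phs2 are still tiny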
With numerical values of this magnitude it makes sense to apply some transformation to x. Subtracting the minimum works well in this case, and since it is only a shift you don’t need to transform the slopes back afterwards. You could also just apply z-score standardization.
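For example, something along these lines (the new column names are just placeholders I made up):

# Shift x so it starts at zero; the slopes keep their original units
df["x0"] = df["x"] - df["x"].min()
model_shifted = bmb.Model("y ~ x0 + x0:phs1 + x0:phs2", df)
print(model_shifted)  # the prior sigmas for the interactions are no longer effectively zero

# Or standardize instead
df["xz"] = (df["x"] - df["x"].mean()) / df["x"].std()
model_z = bmb.Model("y ~ xz + xz:phs1 + xz:phs2", df)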
Out of curiosity, what kind of measurement is y? I’m asking to double-check whether the normal likelihood makes sense.