Sampling errors when running a hierarchical model with hyperpriors

I have the following hierarchical model, where y is 101x13, X_1 is 101x13x26, and X_2 is 101x41.
That is, there are 101 groups, 13 observations per group, 26 level-1 features (K_1), and 41 level-2 features (K_2).

import pymc3 as pm  # assuming PyMC3; on PyMC v4+ this would be: import pymc as pm

# X_1, X_2, y and SEED come from my data; n_vars_1 = 26, n_vars_2 = 41
with pm.Model() as model:
    # Hyperpriors
    mu_v_2 = pm.Normal(name="mu_v_2", mu=0.0, sigma=1.0, shape=())
    mu_K_2 = pm.Normal(name="mu_K_2", mu=0.0, sigma=1.0, shape=(n_vars_2, 1))
    mu_K_1 = pm.Normal(name="mu_K_1", mu=0.0, sigma=1.0, shape=n_vars_1)
    sigma_v_2 = pm.HalfNormal(name="sigma_v_2", sigma=1.0, shape=())
    sigma_K_2 = pm.HalfNormal(name="sigma_K_2", sigma=1.0, shape=(n_vars_2, 1))
    sigma_K_1 = pm.HalfNormal(name="sigma_K_1", sigma=1.0, shape=n_vars_1)

    # Level 2 intercept
    v_2 = pm.Normal(name="v_2", mu=mu_v_2, sigma=sigma_v_2, shape=())

    # Level 2 variables
    K_2 = pm.Normal(name="K_2", mu=mu_K_2, sigma=sigma_K_2, shape=(n_vars_2, 1))

    # Level 1 intercept
    v_1 = pm.Deterministic(name="v_1", var=pm.math.dot(X_2, K_2) + v_2)

    # Level 1 variables
    K_1 = pm.Normal(name="K_1", mu=mu_K_1, sigma=sigma_K_1, shape=n_vars_1)

    # Model error
    eps_1 = pm.Gamma(name="eps_1", alpha=9.0, beta=4.0, shape=())

    # Model mean
    y_hat = pm.Deterministic(name="y_hat", var=pm.math.dot(X_1, K_1) + v_1)

    # Likelihood
    y_like = pm.Normal(name="y_like", mu=y_hat, sigma=eps_1, observed=y)

with model:
    trace = pm.sample(draws=5000, chains=2, tune=5000, init="auto", return_inferencedata=True,
                      random_seed=SEED, cores=1)

However, when I run the model, I get the following sampler warnings:

There were 2982 divergences after tuning. Increase target_accept or reparameterize.
The acceptance probability does not match the target. It is 0.6530916769639405, but should be close to 0.8. Try to increase the number of tuning steps.
There were 6779 divergences after tuning. Increase target_accept or reparameterize.
The acceptance probability does not match the target. It is 0.40813842195230965, but should be close to 0.8. Try to increase the number of tuning steps.
The rhat statistic is larger than 1.2 for some parameters.
The estimated number of effective samples is smaller than 200 for some parameters.
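
For reference, my understanding is that increasing target_accept, as the warning suggests, just means passing it to pm.sample, for example (a sketch; 0.95 is an arbitrary choice):

with model:
    # Same sampling call as above, but with a higher acceptance target for NUTS
    trace = pm.sample(draws=5000, chains=2, tune=5000, target_accept=0.95,
                      init="auto", return_inferencedata=True,
                      random_seed=SEED, cores=1)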

If I run the model without the hyperpriors, it samples fine and there are no warnings. I have already tried different parameter values for the hyperpriors and also increased draws and tune to 10,000, without success.
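
As for "reparameterize", I assume this refers to a non-centered parameterization of the coefficients, roughly along these lines (a sketch for K_1 only, keeping the variable names from above; K_2 and v_2 would be handled analogously):

    # Inside the model block, replacing the direct draw of K_1:
    # draw standardized offsets and scale/shift them by the hyperpriors,
    # instead of drawing K_1 directly from Normal(mu_K_1, sigma_K_1)
    K_1_offset = pm.Normal(name="K_1_offset", mu=0.0, sigma=1.0, shape=n_vars_1)
    K_1 = pm.Deterministic(name="K_1", var=mu_K_1 + sigma_K_1 * K_1_offset)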

Moreover, when I run the model without hyperpriors, all of the level-2 features' parameters appear to be insignificant, which is quite unreasonable given the data at hand.
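
For concreteness, the kind of check behind that observation would be something like the following (a sketch, assuming ArviZ is imported as az):

import arviz as az

# Posterior summary for the level-2 coefficients; a coefficient whose HDI
# spans zero is what I am reading as "insignificant"
print(az.summary(trace, var_names=["K_2"]))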