ADVI fit with forbidden prior parameter values runs without any error!

I have a simple Bayesian regression model in which the prior on the regression coefficient is a Beta distribution with mean mu=0.1 and standard deviation sigma=0.9. When I fit it with PyMC variational inference using ADVI, the inferred regression coefficient looks perfectly reasonable (close to 0.5, the value I used to generate the input data). Everything looks as expected.

Except that a Beta distribution cannot have a standard deviation of 0.9! Its maximum possible standard deviation is sqrt(mu*(1-mu)), which is 0.3 in this case. A prior predictive check does raise an error, as expected. But the variational inference does not seem to mind at all and gives perfectly sensible results. How is this possible?
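
To spell out why I think these parameter values are impossible: converting (mu, sigma) to the usual (alpha, beta) shape parameters by matching moments gives negative values for both. This is just my own quick calculation, not PyMC's internal conversion code:

mu, sigma = 0.1, 0.9
kappa = mu * (1 - mu) / sigma**2 - 1   # equals alpha + beta under moment matching
alpha = mu * kappa                     # about -0.089, but a Beta needs alpha > 0
beta = (1 - mu) * kappa                # about -0.800, but a Beta needs beta > 0
print(alpha, beta)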

import numpy as np
import pymc as pm
import arviz as az

# Generate data
np.random.seed(100)
x = np.linspace(0, 1, 100)
y = 0.5 * x + np.random.normal(loc=0, scale=0.2, size=100)

# Define PyMC model
with pm.Model() as beta_model:
    # note: sigma=0.9 exceeds the maximum possible sd, sqrt(mu*(1-mu)) = 0.3
    w = pm.Beta("w", mu=0.1, sigma=0.9)
    y_obs = pm.Normal("y",
                      mu=w * x,
                      sigma=1,
                      observed=y)
    
# Fit the ADVI approximation, then draw samples from it
with beta_model:
    posterior_draws = pm.fit(n=20000, method="advi",
                             random_seed=100, progressbar=True)
    inferred_w = az.summary(posterior_draws.sample(50000, random_seed=100))
     
# View inferred parameter    
print(inferred_w) 

This returns the following summary:

    mean     sd  hdi_3%  hdi_97%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
w  0.508  0.255   0.087     0.94      0.001    0.001   49383.0   48409.0    NaN
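
For reference, this is roughly the prior predictive check I mentioned above; it does fail with an error about the Beta's parameters (I have not copied the exact traceback here):

# prior predictive check -- this one does complain about the invalid prior
with beta_model:
    prior_draws = pm.sample_prior_predictive(random_seed=100)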

I could be wrong, but I think all ADVI is doing here is finding the best normal approximation to the posterior. Is that why the invalid prior parameters never trip an error?
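
In case it is useful, this is how I have been trying to check that guess, by looking at the fitted parameters of the approximation object. The mean/std attributes are my reading of the MeanField docs and might be named differently in other PyMC versions; as far as I understand they live on the unconstrained (log-odds) scale of w:

# inspect the fitted mean-field (normal) parameters of the ADVI approximation
print(posterior_draws.mean.eval())   # mean of the Gaussian on the transformed scale
print(posterior_draws.std.eval())    # std of the Gaussian on the transformed scale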