ArviZ Bayes factor for a directional hypothesis

This question concerns the use of `az.plot_bf()` (arviz.plot_bf — ArviZ 0.17.0 documentation). It appears to be a neat way to obtain the Bayes factor and is illustrated with a nice plot. It uses the Savage–Dickey method of computing the Bayes factor: the ratio of the prior y-ordinate at x=0 to the posterior y-ordinate at x=0.
My problem is that it appears to fail badly when estimating a DIRECTIONAL HYPOTHESIS, e.g. H1: diff > 0.
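As a minimal sketch of the Savage–Dickey ratio itself (not ArviZ internals), assuming a Normal(0, 6) prior whose ordinate at zero is known in closed form, and with synthetic normal draws standing in for real MCMC output:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in "posterior" draws; in practice these would come from pm.sample().
posterior_samples = rng.normal(10.0, 4.0, 20_000)

prior_at_0 = stats.norm(0, 6).pdf(0)                     # exact prior ordinate
post_at_0 = stats.gaussian_kde(posterior_samples)(0)[0]  # KDE estimate at x=0
bf_01 = post_at_0 / prior_at_0  # Savage-Dickey BF in favour of the point null
print(f"BF01 = {bf_01:.3g}")
```

The point of the closed-form prior ordinate is that any KDE-based estimate of it (which is what concerns me below) can be checked exactly.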
The prior in the example given is `diff_mu = pm.HalfNormal('diff', sigma=6)`, which has a correct height at x=0 of y_ordinate = 0.133, given by:

import numpy as np
import pymc as pm

rv_prior = pm.HalfNormal.dist(sigma=6)
# Get the y-ordinate at x=0
y_ordinate = np.exp(pm.logp(rv_prior, 0)).eval()
print('y_ordinate =', f'{y_ordinate:.3f}')

and is also verified by a histogram of prior samples and by az.plot_kde(prior_samples).
In comparison, az.plot_bf() shows the maximum of the half-normal distribution at the wrong value, and a y-ordinate at x=0 that is too low (~0.075). I have included some test code below to illustrate the problem.
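For reference, the half-normal ordinate at zero has a simple closed form (twice the corresponding normal density), so the quoted figure can be checked without PyMC at all:

```python
import math

sigma = 6
# HalfNormal(sigma) density at x = 0 is twice the Normal(0, sigma) density:
# f(0) = 2 / (sigma * sqrt(2 * pi))
y_ordinate = 2 / (sigma * math.sqrt(2 * math.pi))
print(f"y_ordinate = {y_ordinate:.3f}")  # y_ordinate = 0.133
```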

The documentation page (arviz.plot_bf — ArviZ 0.17.0 documentation) does not specify that this function is only suitable for two-sided, non-directional hypotheses.

import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pymc as pm
import xarray as xr
import seaborn as sns
import pandas as pd
import scipy.stats as stats
print(f"Running on PyMC v{pm.__version__}")
print(f"Running on ArviZ v{az.__version__}")
az.style.use('arviz-darkgrid')


df=pd.DataFrame({'difference':[-3.5,7.2,21.4,20.1,26.1,5.5,9.7,14.6]})
df.describe().T


with pm.Model() as model_diff:
   
    # Prior for the mean difference.
    # Directional hypothesis (used here):
    diff_mu = pm.HalfNormal('diff', sigma=6)  # adjust the scale as per expectation

    # Non-directional hypothesis (uncomment instead of the HalfNormal above;
    # defining both would raise a duplicate-variable-name error):
    # diff_mu = pm.Normal('diff', mu=0, sigma=6)
    
    # Prior for the standard deviation
    diff_std = pm.Uniform('u_std', lower=0, upper=50)

    # Likelihood for the observed data
    likelihood = pm.Normal('likelihood', mu=diff_mu, sigma=diff_std, observed=df['difference'])

    # Sampling
    idata = pm.sample(20000, tune=2000, random_seed=600)
    idata.extend(pm.sample_prior_predictive())

# Summary statistics
az.summary(idata)

# Bayes Factor for directional hypothesis

# samples
posterior_samples=idata.posterior['diff'].values.flatten()
prior_samples=idata.prior['diff'].values.flatten()

idata_bf = az.from_dict(posterior={"effect_size":posterior_samples},
                     prior={"effect_size":prior_samples})

az.plot_bf(idata_bf, var_name="effect_size", ref_val=0.01, textsize=8, xlim=(0, 22))

# Plot the histogram of prior samples.
plt.figure(figsize=(6, 4))
plt.hist(prior_samples, bins=10, density=True, alpha=0.3, color='blue', label='prior_samples')
plt.xlabel('Value')
plt.ylabel('Density')
plt.title('Histogram of prior_samples \n(for ordinate at x=0)')
plt.ylim(0,0.06)

# Plot the actual half normal given in the Model


rv_prior = pm.HalfNormal.dist(sigma=6)
# Get the y-ordinate at x=0
y_ordinate = np.exp(pm.logp(rv_prior, 0)).eval()
print('y_ordinate =', f'{y_ordinate:.3f}')



# Plot the kde
az.plot_kde(pm.draw(rv_prior,100000),label=f'\nArviz_kde of prior samples\ny_ordinate = {y_ordinate:.3f}' )
plt.legend(fontsize=10)
plt.show()



Could you look over the code and the functionality of az.plot_bf() to see whether it is presently limited to non-directional hypotheses? It would appear that, if it is currently incorrect, it would simply take an adjustment of its internal KDE fitting to correct the problem.
Thank you for your time and expertise. DD
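To illustrate what such an adjustment might look like, here is a hedged sketch (my own, not ArviZ's internals): a plain Gaussian KDE roughly halves the density at the boundary of a half-normal, while reflecting the samples about zero before fitting recovers the correct ordinate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
samples = np.abs(rng.normal(0.0, 6.0, 100_000))  # HalfNormal(sigma=6) draws

true_at_0 = 2 * stats.norm(0, 6).pdf(0)          # closed form, ~0.133
plain_at_0 = stats.gaussian_kde(samples)(0)[0]   # boundary bias: ~half the truth
# Reflection trick: mirror the draws about the boundary, fit a KDE to the
# symmetric sample, then double its value at 0 to get the density on the
# original half-line support.
reflected = stats.gaussian_kde(np.concatenate([samples, -samples]))
corrected_at_0 = 2 * reflected(0)[0]
print(f"true {true_at_0:.3f}, plain KDE {plain_at_0:.3f}, "
      f"reflected KDE {corrected_at_0:.3f}")
```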