This has little to no relation to frequentist p-values. The only thing they share is that both are probabilities, hence the "p-value" name.
As I said above, this is returning the integral of the orange part of the histogram/probability density. You compute the maximum (or any other summary statistic; the summary is generally called T, hence the name t_stat, again not very meaningful) of the observed data and plot the result. This is the point you see (the line where the color changes in the video). Then you compute the maximum of each posterior predictive sample and plot the histogram/KDE of all those values. This is the curve (the histogram in the video you shared).
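If it helps, here is roughly what that looks like by hand. It is only a toy sketch: `y_obs` and `y_rep` are made-up arrays standing in for your observed data and your posterior predictive samples.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
y_obs = rng.normal(size=100)             # stand-in for the observed data
y_rep = rng.normal(size=(1000, 100))     # stand-in for 1000 posterior predictive samples

t_obs = y_obs.max()                      # T of the observed data (here T = maximum)
t_rep = y_rep.max(axis=1)                # T of each posterior predictive sample

plt.hist(t_rep, bins=50, density=True)   # the curve/histogram in the video
plt.axvline(t_obs, color="C1")           # the point/line where the color changes
plt.xlabel("T = max")
plt.show()
```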
The Bayesian p-value is the integral of the density below the dot, that is, the probability that the T of the posterior predictive is smaller than the T of the observed data (in your case, T = maximum). The expectation is for that value to be around 0.5 because, ideally, we can treat the 1000 samples from the posterior predictive as samples from the true data generating process, in which case the observed T is just one more draw from the same distribution of T values. But all values are possible and plausible, even if not as probable as 0.5.
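Continuing the toy sketch above, the p-value is then just the fraction of T values that fall below the dot:

```python
# t_obs and t_rep come from the sketch above
bpv = (t_rep <= t_obs).mean()            # proportion of predictive T values below the observed T
print(f"Bayesian p-value: {bpv:.3f}")    # ideally around 0.5
```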
I generally prefer checking with raw samples, and use a T statistic later on only if I am especially interested in that summary. I wrote LOO-PIT tutorial — Oriol unraveled, which should be useful both for plot_bpv (with kind="u_value") and for plot_loo_pit (used in the blog post).
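For reference, something along these lines should work. This uses the "radon" example dataset shipped with ArviZ (downloaded on first use), whose observed variable happens to be called "y"; swap in your own InferenceData and observed variable name.

```python
import arviz as az
import matplotlib.pyplot as plt

idata = az.load_arviz_data("radon")

az.plot_bpv(idata, kind="u_value")   # marginal u-values, one per observation
az.plot_loo_pit(idata, y="y")        # LOO-PIT check, as in the blog post
plt.show()
```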