You can indeed just set w=0.25 and re-sample the model, and since this model is simple and sampling is cheap, that's probably the way to go here. I just wanted to add that if the model were expensive to sample, you could also get conditional distributions directly from the joint posterior using boolean masks. The only wrinkle is that, since w is a continuous variable, you won't have any samples at exactly w=0.25 from which to construct the conditional. To get around this, I binned the values of w into bins of width 0.1, so what I actually construct is p(\theta \mid w \in [0.2, 0.3), D):
import arviz as az
import numpy as np

# Bin edges of width 0.1 spanning the relevant range of w
bins = np.linspace(0.2, 0.8, 7)

# Overlay the bin edges on the joint plot of (w, theta)
axes = az.plot_pair(trace, var_names=['w', 'theta'], marginals=True)
main_plot = axes[1][0]  # the joint scatter panel
ymin, ymax = main_plot.get_ylim()
main_plot.vlines(bins, ymin=ymin, ymax=ymax, ls='--', color='k', zorder=3)
The conditional distribution you're interested in corresponds to the cluster of points in the first bin, between 0.2 and 0.3. It's pretty far out in the tail, so there isn't much data to go on (hence the relatively wide bins).
You can grab the values of theta that fall in that bin and then resample from those points as your conditional posterior:
# `post` here is the flattened posterior, e.g. post = az.extract(trace)
w_bins = np.digitize(post['w'], bins=bins)    # bin index for each draw of w
theta_given_w25 = post['theta'][w_bins == 1]  # draws with w in [0.2, 0.3)
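Those retained draws can be summarized directly, or resampled with replacement if you need the conditional posterior as a sampler downstream. A minimal sketch, assuming theta_given_w25 from above:

# Conditional posterior summary, p(theta | w in [0.2, 0.3), D)
print(theta_given_w25.mean(), theta_given_w25.std())

# Draw from the conditional posterior by resampling the retained points
rng = np.random.default_rng()
conditional_draws = rng.choice(np.asarray(theta_given_w25), size=1_000, replace=True)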
For comparison, this "tabular" method gives a conditional posterior mean for theta of 0.614 and a std of 0.110, while re-sampling the model with w=0.25 fixed gives a mean of 0.611 and a std of 0.113, so the two approaches agree closely.
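For completeness, the re-sampling route looks something like the sketch below. The original model isn't shown in this thread, so the Beta prior, Binomial likelihood, the placeholder data, and the way w enters are all invented stand-ins; the only point is the pattern of replacing the free variable w with the constant 0.25 and sampling again:

import pymc as pm

# Placeholder data standing in for the real dataset D
n_trials, y_obs = 50, 10

with pm.Model() as conditioned_model:
    w = 0.25                                   # fixed constant: conditions on w = 0.25
    theta = pm.Beta('theta', alpha=1, beta=1)  # hypothetical prior
    # hypothetical likelihood; in the real model, keep its actual structure
    pm.Binomial('y', n=n_trials, p=w * theta, observed=y_obs)
    cond_trace = pm.sample()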