How about leaving sm in T = f(x1, sm) as a parameter, but then adding a likelihood to the system of the form
pm.Normal("sm_obs", mu=sm, sigma=sd, observed=sm_obs)
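To make that concrete, here is a minimal, self-contained sketch of that setup. The toy data, the priors, and the stand-in linear form of f(x1, sm) are all placeholders I made up here; only the structure (sm as a free parameter plus a calibration likelihood) is the point:

```python
import numpy as np
import pymc as pm

# Toy stand-in data; x1, T_obs, sm_obs and the numbers are placeholders.
x1 = np.linspace(0.0, 1.0, 50)
T_obs = 2.0 + 3.0 * x1          # pretend measurements of T
sm_obs = 0.8                    # calibration measurement of sm
sm_calib_sd = 0.05              # calibration uncertainty (assumed known)

with pm.Model() as model:
    A_ = pm.Normal("A_", mu=0.0, sigma=10.0)
    B_ = pm.Normal("B_", mu=0.0, sigma=10.0)

    # sm stays a free parameter ...
    sm = pm.Normal("sm", mu=0.8, sigma=1.0)

    # ... but the calibration measurement enters as an extra likelihood term,
    # which is what the pm.Normal("sm_obs", ...) line above is doing.
    pm.Normal("sm_obs", mu=sm, sigma=sm_calib_sd, observed=sm_obs)

    # Placeholder for T = f(x1, sm); substitute your actual f here.
    mu_T = A_ + B_ * x1 * sm
    pm.Normal("T_obs", mu=mu_T, sigma=0.5, observed=T_obs)

    idata = pm.sample()
```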
Is there any reason why you want to estimate sm as a parameter after you have constructed the posteriors for A_ and B_? In the “experimental setup”, do the calibrations for sm give you likely values for sm, or is it fixed (like how x is assumed to be fixed in a linear regression)?
That being said, there is also something called an error-in-variables model, where you create latent variables for something that would normally be fixed but might have error in its measurement (again, like the x variable in a linear regression). That might also be useful to you; see for instance:
https://mc-stan.org/docs/2_21/stan-users-guide/bayesian-measurement-error-model.html
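A PyMC sketch of the same measurement-error idea as in that Stan example might look something like this (the synthetic data, priors, and the known noise level tau are all illustrative):

```python
import numpy as np
import pymc as pm

# Synthetic example data; all names and values are illustrative only.
rng = np.random.default_rng(0)
N = 100
x_true_gen = rng.normal(0.0, 2.0, N)
x_meas = x_true_gen + rng.normal(0.0, 0.5, N)   # noisy measurements of x
y = 1.0 + 2.0 * x_true_gen + rng.normal(0.0, 1.0, N)
tau = 0.5                                        # known measurement noise on x

with pm.Model() as eiv_model:
    alpha = pm.Normal("alpha", mu=0.0, sigma=10.0)
    beta = pm.Normal("beta", mu=0.0, sigma=10.0)
    sigma = pm.HalfNormal("sigma", sigma=5.0)

    # Latent "true" x, with a prior on its distribution
    mu_x = pm.Normal("mu_x", mu=0.0, sigma=10.0)
    sigma_x = pm.HalfNormal("sigma_x", sigma=5.0)
    x_true = pm.Normal("x_true", mu=mu_x, sigma=sigma_x, shape=N)

    # Measurement model: observed x is the true x plus known noise
    pm.Normal("x_meas", mu=x_true, sigma=tau, observed=x_meas)

    # Regression uses the latent x, not the noisy measurement
    pm.Normal("y", mu=alpha + beta * x_true, sigma=sigma, observed=y)

    idata = pm.sample()
```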