Yep.
Taking the example from the statsmodels documentation, it goes something like this:
```python
import numpy as np
import statsmodels.api as sm

Y = np.asarray([1, 3, 4, 5, 2, 3, 4]) + np.arange(1, 8) - 5
x = np.arange(1, 8)
weights = np.arange(1, 8)

X = sm.add_constant(x)
wls_model = sm.WLS(Y, X, weights)
results = wls_model.fit()
results.params
# ==> array([-2.08333333, 1.0952381 ])
```
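For reference, WLS has a closed-form solution, `(X'WX)^{-1} X'WY` with `W = diag(weights)`, so the statsmodels result can be sanity-checked with plain numpy (the snippet rebuilds the same data so it stands alone):

```python
import numpy as np

# Same data as in the statsmodels example
Y = np.asarray([1, 3, 4, 5, 2, 3, 4]) + np.arange(1, 8) - 5
x = np.arange(1, 8)
weights = np.arange(1, 8)

# Design matrix with an intercept column, as sm.add_constant builds it
X = np.column_stack([np.ones_like(x), x])
W = np.diag(weights)

# Closed-form WLS estimate: (X'WX)^{-1} X'WY
params = np.linalg.solve(X.T @ W @ X, X.T @ W @ Y)
print(params)  # matches results.params above
```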
```python
import pymc3 as pm

with pm.Model() as m:
    intercept = pm.Normal('c', 0., 10.)
    beta = pm.Normal('b', 0., 5.)
    # Total variance
    sigma = pm.HalfNormal('s', 2.)
    # Scale by weights. The weights are presumed to be
    # (proportional to) the inverse of the variance of
    # the observations.
    tau = 1 / sigma**2 * (weights / weights.sum())
    pm.Normal('y', beta * x + intercept, tau=tau, observed=Y)
    trace = pm.sample(return_inferencedata=True)

pm.summary(trace)['mean']
# c   -1.971
# b    1.076
# s    0.618
# Name: mean, dtype: float64
```
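To see why the tau scaling above recovers the WLS fit: with `tau_i` proportional to `weights[i]`, the Normal log-likelihood is, up to constants, `-0.5 * sum(tau_i * (y_i - beta*x_i - c)**2)`, so maximizing it over `(c, beta)` is exactly minimizing the weighted sum of squares (the posterior means differ slightly from the WLS point estimate because of the priors). A minimal numeric check with scipy, not part of the original example:

```python
import numpy as np
from scipy.optimize import minimize

# Same data as above
Y = np.asarray([1, 3, 4, 5, 2, 3, 4]) + np.arange(1, 8) - 5
x = np.arange(1, 8)
weights = np.arange(1, 8)

def weighted_sse(params):
    """Weighted sum of squared residuals for a line c + b*x."""
    c, b = params
    return np.sum(weights * (Y - (b * x + c)) ** 2)

res = minimize(weighted_sse, x0=[0.0, 0.0])
print(res.x)  # close to the WLS params [-2.0833, 1.0952]
```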
There are a few divergences during sampling, so the priors need some more care, but that's the general idea.