(Inverse?) Linear Regression - predicting the independent variable via two applications of Bayes' rule

Hello!

I am quite new to Bayesian analysis and come with minimal statistical experience but a strong mathematical background. Currently, I am working on a simple Bayesian linear regression problem. However, there is a complication: I wish to predict values of the independent variable from observations of the dependent variable. From what I understand, this is essentially an inverse linear regression (calibration) problem.
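In symbols, with a linear forward model $y = mx + b + \varepsilon$, $\varepsilon \sim \mathcal{N}(0, \sigma^2)$, I believe my goal is the posterior of an unknown $\tilde{x}$ given a new observation $\tilde{y}$:

$$p(\tilde{x} \mid \tilde{y}) \propto p(\tilde{y} \mid \tilde{x})\, p(\tilde{x}), \qquad p(\tilde{y} \mid \tilde{x}) = \mathcal{N}(\tilde{y} \mid m\tilde{x} + b, \sigma^2).$$

(This is my own framing of the problem, so please correct me if it is off.)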

Prior to this project, I would simply have run an OLS regression in the opposite direction, as is traditionally done in my field. However, I was pointed to the work of an author who applies Bayes' rule twice: first to perform a linear regression (x → y) and obtain the regression parameters (slope, intercept, and residual uncertainty), and then a second time to estimate x from y using those parameters. The author wrote their own code (GitHub - jesstierney/BayMBT: Bayesian calibration for the MBT proxy in soils, peats, and lakes), but I am unable to follow the logic behind the second application of Bayes' rule. I would greatly appreciate it if anyone could share insights into this problem, either from their own experience or just off the top of their head. More generally, is this even possible with PyMC3 or PyMC 4? Thanks!
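In case it helps frame the question, here is a minimal sketch of how I imagine the two steps might look in PyMC (v4 API, `import pymc as pm`). The data, priors, and variable names below are placeholders of my own invention, not the author's actual method, and for simplicity step 2 conditions on posterior point estimates from step 1 rather than the full posterior:

```python
import numpy as np
import pymc as pm

# Synthetic calibration data (placeholder for the real x -> y data)
rng = np.random.default_rng(42)
x_cal = np.linspace(0.0, 10.0, 50)
y_cal = 2.0 * x_cal + 1.0 + rng.normal(0.0, 0.5, size=x_cal.size)

# Step 1: forward regression x -> y to learn slope, intercept, noise
with pm.Model() as calibration:
    m = pm.Normal("m", mu=0.0, sigma=10.0)        # slope
    b = pm.Normal("b", mu=0.0, sigma=10.0)        # intercept
    sigma = pm.HalfNormal("sigma", sigma=5.0)     # residual scale
    pm.Normal("y", mu=m * x_cal + b, sigma=sigma, observed=y_cal)
    trace_cal = pm.sample(1000, tune=1000, random_seed=42)

# Reduce the calibration posterior to point estimates (a simplification;
# a fuller treatment would propagate the whole posterior into step 2)
post = trace_cal.posterior
m_hat = float(post["m"].mean())
b_hat = float(post["b"].mean())
s_hat = float(post["sigma"].mean())

# Step 2: invert. Place a prior on the unknown x and let the same
# likelihood run "backwards" via Bayes' rule, conditioning on a new y.
y_new = 11.3  # hypothetical new observation of the dependent variable
with pm.Model() as inversion:
    x_new = pm.Normal("x_new", mu=5.0, sigma=10.0)  # prior on the unknown x
    pm.Normal("y_obs", mu=m_hat * x_new + b_hat, sigma=s_hat, observed=y_new)
    trace_inv = pm.sample(1000, tune=1000, random_seed=42)

print(trace_inv.posterior["x_new"].mean())
```

If this is roughly the right idea, my follow-up concern is that fixing m_hat, b_hat, and s_hat discards the parameter uncertainty from step 1; is there a standard way to carry the full calibration posterior through to the inversion?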