Thank you for your response. I apologize for my previous laziness in not exploring prior predictive distributions and running simulations; I blame it on the lack of a functioning computer at the moment. I was primarily thinking about regularization and *my lack of intuition*. My assumption was that the multivariate normal and GRW priors would regularize the model more than independent normal priors would, even if they induced strong correlations in the posterior. Similarly, I believed the weighting scheme would introduce some regularization, but on reflection I may have been conceptually mistaken, and the only real source of regularization in this scenario would be setting more informative priors. Please correct me if my intuition is wrong.
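To check my own intuition, something like the following prior simulation is what I had in mind (a pure-NumPy sketch, all scales are made up): the GRW mainly buys correlation/smoothness between adjacent knot coefficients, while any actual shrinkage still has to come from how tight the prior/innovation scales are.

```python
import numpy as np

rng = np.random.default_rng(42)
n_knots, n_draws = 20, 500

# Independent normal priors: each knot coefficient is drawn on its own.
beta_indep = rng.normal(loc=0.0, scale=1.0, size=(n_draws, n_knots))

# Gaussian random walk prior: innovations are accumulated, so adjacent
# knot coefficients are strongly correlated (smooth paths), but the
# marginal variance actually grows with the knot index.
innovations = rng.normal(loc=0.0, scale=0.3, size=(n_draws, n_knots))
beta_grw = np.cumsum(innovations, axis=1)

# Compare the marginal spread at each knot: the GRW does not automatically
# shrink coefficients toward zero; it mainly encodes smoothness over knots.
print("indep prior sd per knot:", beta_indep.std(axis=0).round(2))
print("GRW   prior sd per knot:", beta_grw.std(axis=0).round(2))
```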
In the case of the ice-cream example, my interest lies in understanding how my independent variables affect the dependent variable in the presence of time-varying factors. Let's assume I can invest in two different ice-cream vendors, and my independent variables are the amounts of money I invest in each vendor. The dependent variable is the return on my investments. Suppose one vendor specializes in selling hot ice creams (using various chemical and physical techniques), while the other focuses on cold ice creams. My goal is to determine how my investments affect the return given these time-varying factors. To achieve this, I introduce parameter knots that are linked to time or climate variables, such as temperature, enabling me to plan optimally.
If I were to exclude the interaction between time or climate and my investments, treating them solely as control variables, I would fail to capture the fact that I prefer to invest more in the cold-ice-cream vendor during the summer and more in the hot-ice-cream vendor during the winter. This is because there would be no interaction between the time-varying parameters and my investment inputs.
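Roughly, the mean function I have in mind looks something like this (an untested toy sketch; `expected_return`, the knot values, and all numbers are made up for illustration):

```python
import numpy as np

def expected_return(invest_cold, invest_hot, temperature,
                    knots, beta_cold, beta_hot, gamma_temp):
    """Toy mean function for the ice-cream example (names are illustrative).

    beta_cold / beta_hot hold one coefficient per temperature knot, so the
    effect of each investment is allowed to change with the climate.
    """
    # Pick the nearest knot for the current temperature (a step function;
    # the weighting scheme discussed further down would smooth this out).
    k = np.argmin(np.abs(knots - temperature))

    with_interaction = (beta_cold[k] * invest_cold
                        + beta_hot[k] * invest_hot)

    # "Control only" version: temperature shifts the baseline return, but the
    # investment effects stay constant, so it cannot express "invest in the
    # cold-ice-cream vendor in summer and the hot-ice-cream vendor in winter".
    control_only = (beta_cold.mean() * invest_cold
                    + beta_hot.mean() * invest_hot
                    + gamma_temp * temperature)

    return with_interaction, control_only

knots = np.array([0.0, 10.0, 20.0, 30.0])      # temperature knots (°C)
beta_cold = np.array([-0.5, 0.0, 0.8, 1.5])    # cold vendor pays off in the heat
beta_hot = np.array([1.5, 0.8, 0.0, -0.5])     # hot vendor pays off in the cold
print(expected_return(100.0, 100.0, 28.0, knots, beta_cold, beta_hot, 0.05))
```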
Since I did not give any context, it was obviously impossible to see this; sorry about that.
I might have misinterpreted the term "latent components" in this context, but I assume you mean inserting them as control variables without interactions with our independent variables. The reasoning behind the weighting scheme is to introduce continuity (much like kernel smoothing), although the result remains non-differentiable at the knots; one could also extend this to arbitrary kernels that weigh in more than just the "nearest" coefficient, as in the sketch below.
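As a rough illustration of the arbitrary-kernel extension (again an untested sketch; `kernel_weighted_coef`, the Gaussian kernel, and the bandwidth are my own made-up choices, and a Gaussian kernel would in fact give a smooth rather than merely continuous effect curve):

```python
import numpy as np

def kernel_weighted_coef(temperature, knots, knot_coefs, bandwidth=5.0):
    """Blend knot-level coefficients with a Gaussian kernel over temperature.

    Instead of snapping to the nearest knot, every knot contributes with a
    weight that decays smoothly with distance, giving an effect that varies
    continuously with the climate variable.
    """
    # Gaussian kernel weights, normalised to sum to one.
    w = np.exp(-0.5 * ((temperature - knots) / bandwidth) ** 2)
    w /= w.sum()
    return np.dot(w, knot_coefs)

knots = np.array([0.0, 10.0, 20.0, 30.0])
beta_cold = np.array([-0.5, 0.0, 0.8, 1.5])

# The effect now changes gradually with temperature instead of jumping
# at knot boundaries.
for t in (5.0, 12.5, 25.0):
    print(t, round(kernel_weighted_coef(t, knots, beta_cold), 3))
```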