Basic Bayesian inference formulation

The former would be less computationally intensive. That’s unlikely to be a problem with a small model like this, but if you have a model where the number of parameters increases exponentially with the data (e.g. for some types of time series models), then it becomes a problem. The latter is a more powerful model because it can express the uncertainty in the parameters, and avoiding overfitting helps when applying your model to new data. I’d say both models are correct (insofar as models can ever be correct…); which you choose depends on the use case, but personally I’d lean towards the latter unless there’s a reason not to. Others with more experience than me may have more insight, though.
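To make the contrast concrete, here’s a minimal sketch, assuming the two formulations boil down to a single point estimate of a parameter versus a full posterior distribution over it. It uses a made-up Beta-Binomial example (conjugate, so no sampler is needed) purely for illustration; the data and prior are hypothetical, not from your model.

```python
import numpy as np
from scipy import stats

# Hypothetical data: 20 trials, 14 successes (purely illustrative)
n_trials, n_successes = 20, 14

# "Former"-style: a single point estimate of the success probability
p_mle = n_successes / n_trials

# "Latter"-style: a full posterior over the parameter.
# With a Beta(1, 1) prior the posterior is Beta(1 + successes, 1 + failures),
# so the parameter uncertainty is available in closed form.
posterior = stats.beta(1 + n_successes, 1 + (n_trials - n_successes))

print(f"Point estimate:         {p_mle:.3f}")
print(f"Posterior mean:         {posterior.mean():.3f}")
print(f"95% credible interval:  {posterior.interval(0.95)}")

# Predictive probability for a new trial: for this conjugate model it is the
# posterior mean, which integrates over parameter uncertainty rather than
# plugging in a single fitted value.
print(f"P(next trial succeeds): {posterior.mean():.3f}")
```

The point being that the second formulation gives you a credible interval (and a predictive distribution) essentially for free, which is where the overfitting protection comes from.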