The best way to understand marginal effects of a given model, in my opinion, is to work out the partial derivatives.
First you need to know your link function, because it determines the functional form of the estimated mean. Schlepping through the source code, it looks like the default link for beta regression is the logit, so the estimated mean is the inverse logit of the linear predictor. Per the documentation you linked, Bambi uses the mean-sigma parameterization of the beta distribution, so you’re directly modeling the mean and you don’t have to do anything crazy to recover parameters. Thus, to understand the effect of a change in the value of a covariate on the estimated mean, we just need to work out the following partial derivative:
\hat \mu_i = \frac{1}{1 + \exp(-X_i\beta)}, \qquad \frac{\partial \hat \mu_i}{\partial x_j} = \beta_j\frac{\exp (-X_i\beta)}{(1 + \exp(-X_i\beta))^2} = \beta_j \hat \mu_i^2 \exp(-X_i\beta)
Where X_i is a 1 \times k row vector of covariate values (perhaps the i-th row of the design matrix, hence my choice of nomenclature) and \beta is a k \times 1 column vector of parameters, with \beta_j, j \leq k as the coefficient associated with the j-th covariate.
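To make this concrete, here is a minimal NumPy sketch that plugs made-up coefficients and covariate values into the formula above; treat the numbers as placeholders, not anything estimated from a real model:

```python
import numpy as np

# Hypothetical fitted coefficients (intercept, beta_1, beta_2) -- made-up values
beta = np.array([-0.5, 0.8, 0.3])

# Covariate values for one unit, including the leading 1 for the intercept
x_i = np.array([1.0, 2.0, -1.0])

eta = x_i @ beta                      # linear predictor X_i * beta
mu_hat = 1.0 / (1.0 + np.exp(-eta))   # estimated mean via the inverse logit

# Marginal effect of the j-th covariate: beta_j * mu_hat^2 * exp(-X_i beta)
j = 1
marginal_effect = beta[j] * mu_hat**2 * np.exp(-eta)

print(f"mu_hat = {mu_hat:.4f}, d mu / d x_{j} = {marginal_effect:.4f}")
```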
Some observations:
- The marginal effect is a function of the covariates themselves, both through the estimated mean \hat \mu_i and through the exponent term. To get a single number, you have to plug in values for X_i to evaluate this expression. These could be the values for a particular unit of interest in your study, you could plug in the sample averages (the “marginal effect at the means”), or you could average the unit-level effects across the sample to get the so-called “average marginal effect” (the latter is what e.g. Stata gives you with the margins, dydx post-estimation command (ok boomer)); see the first sketch after this list.
- Because we are working with logits, all the interpretation baked into logistic regression is available to us, if we want it. For example, one could interpret the above equation as a marginal effect on the probability scale: given values for X_i, a unit change in x_j is associated with a such-and-such percentage-point change in the probability of observing an outcome of y_i = 1.
- Likewise, you could use the log-odds interpretation of the parameters themselves, by supposing that \hat \mu_i = \frac{1}{1 + \exp (-X_i \beta)} = \frac{\exp(X_i \beta)}{1 + \exp (X_i \beta)} = p_i, so that \ln \left ( \frac{p_i}{1 - p_i} \right ) = X_i \beta (p_i is the probability that y_i = 1). In this interpretation, you can read the coefficients directly off the log-odds scale, then go straight to Wikipedia and read about log-odds for half an hour because you once again forgot how to interpret them (perhaps I’m projecting). The second sketch after this list checks this identity numerically.
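Here is the sketch promised in the first bullet: an average marginal effect computed by evaluating the derivative at every row of a made-up design matrix and averaging, with the “at the means” variant alongside for comparison. Everything here is illustrative, not tied to any particular fitted model:

```python
import numpy as np

# Hypothetical design matrix (first column = intercept) and fitted coefficients;
# both are made up -- substitute your own data and posterior means.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200), rng.normal(size=200)])
beta = np.array([-0.5, 0.8, 0.3])
j = 1  # covariate whose effect we want

eta = X @ beta                                     # linear predictor for every unit
mu_hat = 1.0 / (1.0 + np.exp(-eta))                # estimated means
me_per_unit = beta[j] * mu_hat**2 * np.exp(-eta)   # unit-level marginal effects

ame = me_per_unit.mean()                           # average marginal effect (AME)

# "At the means" variant: evaluate the derivative at the column averages of X
x_bar = X.mean(axis=0)
eta_bar = x_bar @ beta
mem = beta[j] * (1.0 / (1.0 + np.exp(-eta_bar)))**2 * np.exp(-eta_bar)

print(f"AME = {ame:.4f}, MEM = {mem:.4f}")
```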
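And a quick numerical check of the log-odds identity from the last bullet, again with placeholder numbers:

```python
import numpy as np

beta = np.array([-0.5, 0.8, 0.3])   # made-up coefficients
x_i = np.array([1.0, 2.0, -1.0])    # made-up covariate row (with intercept)

eta = x_i @ beta                          # X_i * beta, the log-odds
p_i = np.exp(eta) / (1.0 + np.exp(eta))   # equivalently 1 / (1 + exp(-eta))

# ln(p / (1 - p)) recovers the linear predictor, so each beta_j is the change
# in log-odds associated with a one-unit change in x_j.
print(np.log(p_i / (1 - p_i)), eta)       # both print the same number
```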
If your regression models a probability, the interpretations borrowed from logistic regression are quite appealing. If you are modeling something else, you will have to be more careful in your reasoning. In either case, staring at the partial derivative is a great place to start. I suggest making a lot of plots and experimenting with changes to the parameters. Desmos is a nice tool for doing this interactively.
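If you would rather experiment in Python than Desmos, a rough matplotlib sketch like the following (again with made-up coefficients) shows how the estimated mean and the marginal effect move as one covariate sweeps over a range while the others are held fixed:

```python
import numpy as np
import matplotlib.pyplot as plt

intercept, b1, b2 = -0.5, 0.8, 0.3   # made-up coefficients
x2 = 0.0                             # second covariate held fixed
x1 = np.linspace(-4, 4, 200)         # sweep the covariate of interest

eta = intercept + b1 * x1 + b2 * x2              # linear predictor
mu_hat = 1.0 / (1.0 + np.exp(-eta))              # estimated mean (inverse logit)
marginal_effect = b1 * mu_hat**2 * np.exp(-eta)  # derivative with respect to x1

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.plot(x1, mu_hat)
ax1.set(xlabel="x_1", ylabel="estimated mean", title="inverse-logit mean")
ax2.plot(x1, marginal_effect)
ax2.set(xlabel="x_1", ylabel="d mu / d x_1", title="marginal effect")
plt.tight_layout()
plt.show()
```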