Prior choice for discrete features - logistic regression

I’m not having any issues with sampling, and the model works fine. But so far I’m just setting every prior to the same Normal distribution and leaving it at that. I would like to “know my model better”: set different distributions for different features, do some prior/posterior predictive checks, etc.
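To make the setup concrete, here is roughly the kind of prior predictive check I have in mind, done by hand with NumPy/SciPy rather than with any particular library’s sampler (the feature count, prior scale, and example commandlines are placeholders, not my real data):

```python
import numpy as np
from scipy.special import expit  # logistic sigmoid

rng = np.random.default_rng(0)
n_features = 5
n_draws = 1000

# Draw coefficients from the current prior: the same Normal(0, 1) on every feature.
beta = rng.normal(0.0, 1.0, size=(n_draws, n_features))

# A few hypothetical binary feature vectors (presence/absence of tokens).
X = np.array([
    [0, 0, 0, 0, 0],  # empty commandline
    [1, 0, 0, 0, 0],  # one token present
    [1, 1, 1, 1, 1],  # all tokens present
])

# Implied prior probabilities of "malicious" for each row (no intercept here).
p = expit(X @ beta.T)  # shape (3, n_draws)
print(p.mean(axis=1))
```

Looking at the spread of `p` per row is what I mean by “knowing the model better”: it shows what the priors claim about the data before seeing any labels.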

Ah, I see. So it’s similar to what I did in my model, where I did indeed use a Bernoulli.

Exactly. And I know some features are more strongly linked to malicious commandlines than others. Similar to a traditional approach, where I would assign larger weights to those features, I would like to know how to “give them more weight” through the priors.

For example, in this application, say I have the features invoke and ls. I know from prior experience that ls is not linked to anything malicious, whereas invoke likely is. Considering that, how should I set their prior distributions (or how should I think about them)?
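To illustrate what I mean, here is a sketch of the kind of asymmetric priors I’m imagining, again checked by hand with SciPy; the specific means and scales below are placeholders I made up, not values I’m committed to:

```python
import numpy as np
from scipy.special import expit  # logistic sigmoid

rng = np.random.default_rng(1)
n_draws = 10_000

# Hypothetical informative priors (placeholder values):
# "invoke" is believed to push toward malicious -> prior mass on positive coefficients.
beta_invoke = rng.normal(2.0, 1.0, size=n_draws)
# "ls" is believed to be benign/neutral -> tight prior around zero.
beta_ls = rng.normal(0.0, 0.5, size=n_draws)

# Implied prior probability that a commandline containing only that token
# is malicious (no intercept, all other features zero).
p_invoke = expit(beta_invoke)
p_ls = expit(beta_ls)

print(round(p_invoke.mean(), 2), round(p_ls.mean(), 2))
```

Is shifting the prior mean like this (rather than, say, changing the prior family) the right way to encode “invoke is more suspicious than ls”?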

Hmm… as I mentioned, I’m very new to the statistical world, but my reasoning was: afaik the intercept represents my target variable when all features are equal to 0. In my case, if all features are set to 0, then there’s no commandline (it’s an empty string), which, by definition, cannot be malicious. Therefore I set the constant to 0. Is that reasoning wrong?
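To be concrete about the arithmetic (assuming a logistic link, which is what I believe my model uses), this is the baseline an intercept fixed at 0 implies for the all-zero feature vector, and it’s partly why I’m unsure about my reasoning:

```python
from scipy.special import expit  # logistic sigmoid

# Empty commandline: all features are 0, intercept fixed at 0,
# so the linear predictor is 0 and the implied probability is expit(0).
baseline = expit(0.0)
print(baseline)  # -> 0.5
```

So “constant = 0” doesn’t seem to give probability 0 for the empty string, unless I’m misreading how the link function works.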

Thanks in advance!