# A function that maps a linear function onto the interval between zero and one and is conjugate to the beta distribution

#1

I have a question about computing the posterior of the parameters of a Bernoulli process. Assume there is a linear function of the elements of a matrix $\mathcal{H}$, given as $z_{i:}\mathcal{H}^{K\times K}z_{j:}^T$, where $\mathbf{Z}$ is a binary matrix with $N$ rows and $K$ columns. The graphical model is given in the attached figure. Ignoring the prior for $\mathbf{Z}$: if the elements of $\mathcal{H}$ are sampled from a $\mathrm{Beta}(\pi_1,\pi_2)$ distribution and $A_{ij}\sim\mathrm{Bernoulli}(\rho_{ij})$, what is the best way to map $z_{i:}\mathcal{H}^{K\times K}z_{j:}^T$ to a value between zero and one while keeping this integral tractable?

$$\int_{\mathcal{H}_k} P(\rho_{ij}\mid\mathcal{H}_k, z_i, z_j)\, P(\mathcal{H}_k\mid\pi_1,\pi_2)\,\mathrm{d}\mathcal{H}_k$$

meaning that there would be some conjugacy that lets us integrate out $\mathcal{H}$. I was thinking of $\rho_{ij} = 1-\mathrm{e}^{-\mathbf{z}_i^T \mathcal{H}\mathbf{z}_j}$, but computing the above integral then becomes very difficult. Any suggestions?
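As a sanity check on the proposed link function, here is a minimal NumPy sketch (all dimensions and hyperparameter values are made up for illustration): it samples the entries of $\mathcal{H}$ from a Beta prior, pushes the bilinear form through $1-\mathrm{e}^{-x}$, confirms the result is a valid Bernoulli parameter, and approximates the integral above by plain Monte Carlo for one pair $(i,j)$.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 5, 3
pi1, pi2 = 2.0, 2.0  # Beta hyperparameters (illustrative values)

Z = rng.integers(0, 2, size=(N, K))  # binary feature matrix Z
H = rng.beta(pi1, pi2, size=(K, K))  # entries of H ~ Beta(pi1, pi2)

# Proposed link: rho_ij = 1 - exp(-z_i H z_j^T). Since Z is binary and
# H's entries lie in (0, 1), the bilinear form is >= 0, so rho is in [0, 1).
M = Z @ H @ Z.T
rho = 1.0 - np.exp(-M)
A = rng.binomial(1, rho)             # A_ij ~ Bernoulli(rho_ij)

# Monte Carlo estimate of the (intractable) integral for the pair (0, 1):
# P(A_01 = 1 | z_0, z_1) ~= mean over S prior draws of H of rho_01(H).
S = 2000
H_samples = rng.beta(pi1, pi2, size=(S, K, K))
rho_s = 1.0 - np.exp(-np.einsum('k,skl,l->s', Z[0], H_samples, Z[1]))
p_A1 = rho_s.mean()
```

This only demonstrates that the link keeps $\rho_{ij}$ in $[0,1)$; it does not give the closed-form conjugate update the question asks for.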

#2

I think these kinds of models can usually be approximated with mean field; see e.g. https://jaan.io/how-does-physics-connect-machine-learning/

#3

@junpenglao Do you mean that the best way to do inference for such a model is variational inference, and that a Gibbs sampler won't work?

#4

No, what I meant is that instead of computing an intractable integral you can try a mean-field approximation.
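To spell out the standard mean-field recipe (a textbook result, not specific to this model): posit a factorized variational posterior $q(\mathcal{H})=\prod_k q(\mathcal{H}_k)$ and update each factor in turn via

$$\log q(\mathcal{H}_k) = \mathbb{E}_{q(\mathcal{H}_{-k})}\!\left[\log p(\mathbf{A}, \mathcal{H} \mid \mathbf{Z}, \pi_1, \pi_2)\right] + \mathrm{const},$$

where the expectation is over all the other factors. How tractable each update is still depends on the chosen link function, so a non-conjugate link like $1-\mathrm{e}^{-x}$ may require a further approximation inside the expectation.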