"Autologistic" is not a term I've come across except in the context of autoregressive neural models; I think the most relevant framework here is Markov random fields. I tend to think of this as a binary segmentation problem where only some examples are observed. I have implemented a version using a binomial GLM without conditioning on neighbours, which worked fairly well even though no real spatial information was encoded, e.g.
import pymc3 as pm

with pm.Model() as model:
    # Shared hyperpriors for the regression coefficients
    sd = pm.Exponential('sd', lam=3.0)
    mean = pm.Normal('mu', mu=0., sd=10.)
    data = {
        'x': x.flatten(),
        'y': observations.flatten()
    }
    priors = {
        'Intercept': pm.Normal.dist(mu=mean, sd=sd),
        'x': pm.Normal.dist(mu=mean, sd=sd)
    }
    pm.GLM.from_formula('y ~ x', data, family=pm.glm.families.Binomial(), priors=priors)
    trace = pm.sample(1000, init='adapt_diag')
This results in a very stark classification, spatially. I was trying to implement something that also includes spatial information, such as neighbour potentials, so that a pixel's classification depends on its neighbours and pixels near a class boundary end up a bit more uncertain.
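To make the neighbour-potential idea concrete, here is a minimal numpy sketch of the autologistic-style conditional I have in mind: the logit for each pixel gets an extra term proportional to the sum of its 4-connected neighbours' labels. The function names (`neighbour_sum`, `conditional_prob`) and the coefficient `eta` are my own placeholders, not anything from PyMC3; this is just to illustrate the structure, not a full MRF sampler.

```python
import numpy as np

def neighbour_sum(y):
    """Sum of 4-connected neighbour labels for each pixel (zero padding at edges)."""
    s = np.zeros(y.shape, dtype=float)
    s[1:, :] += y[:-1, :]   # neighbour above
    s[:-1, :] += y[1:, :]   # neighbour below
    s[:, 1:] += y[:, :-1]   # neighbour to the left
    s[:, :-1] += y[:, 1:]   # neighbour to the right
    return s

def conditional_prob(y, x, beta0, beta1, eta):
    """Autologistic conditional P(y_i = 1 | neighbours):
    the usual GLM logit plus eta * (sum of neighbour labels)."""
    logit = beta0 + beta1 * x + eta * neighbour_sum(y)
    return 1.0 / (1.0 + np.exp(-logit))
```

With `eta = 0` this reduces exactly to the pixel-wise GLM above; a positive `eta` pulls each pixel towards its neighbours' labels, which is what softens the classification around boundaries. In practice one would update pixels iteratively (e.g. Gibbs sweeps) since each conditional depends on the current labels of its neighbours.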