Inference from binary data & proper way to use GaussianRandomWalk

Hi everyone! I’m very new to both pymc3 and Bayesian inference in general, so please be gentle with me…

I am trying to model the following time series:

phi_i ~ Normal(phi_(i-1), rw_step)
x_i ~ Bernoulli(p_i)
where p_i = sin(period * tau * i + phi_i)^2

I observe the values of x and want to obtain an estimate and a posterior distribution for period. For simplicity, assume that tau and rw_step are known, and that rw_step is small.
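For concreteness, this is roughly how data could be simulated from the process above (a toy sketch; the values of n, tau, rw_step, and the true period below are made up for illustration, not my real settings):

import numpy as np

n = 500
tau = 0.01           # known spacing between experiments
rw_step = 0.05       # known (small) random-walk step
true_period = 110.0  # the quantity I want to recover

rng = np.random.RandomState(0)
# phi_i ~ Normal(phi_(i-1), rw_step): cumulative sum of Gaussian increments
phi = np.cumsum(rng.normal(0.0, rw_step, size=n))
# p_i = sin(period * tau * i + phi_i)^2, then x_i ~ Bernoulli(p_i)
p = np.sin(true_period * tau * np.arange(n) + phi) ** 2
data = rng.binomial(1, p)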

My code looks like this:

import numpy as np
import pymc3 as pm

n = len(data)          # data: the observed 0/1 outcomes
i = np.arange(n)

basic_model = pm.Model()

with basic_model:
    # prior on the quantity I want to infer
    period = pm.Normal('period', mu=110., sd=500.0)
    # random walk on the phase; rw_step is assumed known
    phase = pm.GaussianRandomWalk('phase', mu=0, sd=rw_step, shape=n)

    probs = pm.math.sin(tau * i + phase) ** 2

    x = pm.Bernoulli('x', p=probs, shape=n, observed=data)

I am running into trouble when I try to use find_MAP:

Warning: Desired error not necessarily achieved due to precision loss.
         Current function value: 10000000000000000159028911097599180468360808563945281389781327557747838772170381060813469985856815104.000000
         Iterations: 0
         Function evaluations: 113
         Gradient evaluations: 101
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-70-f85b0c7dfa75> in <module>()
----> 1 map_estimate = pm.find_MAP(model=basic_model )
      2 
      3 print(map_estimate)

C:\ProgramData\Anaconda3\lib\site-packages\pymc3\tuning\starting.py in find_MAP(start, vars, fmin, return_raw, model, *args, **kwargs)
    151                          "density. 2) your distribution logp's are " +
    152                          "properly specified. Specific issues: \n" +
--> 153                          specific_errors)
    154     mx = {v.name: mx[v.name].astype(v.dtype) for v in model.vars}
    155 

ValueError: Optimization error: max, logp or dlogp at max have non-finite values. Some values may be outside of distribution support. max: {'period': array(110.0), 'phase': array([ 0.,  0.,  0., ...,  0.,  0.,  0.])} logp: array(-inf) dlogp: array([-0.        ,  0.        , -0.12582933, ...,  0.3815204 ,
        0.25265876,  0.12582933])Check that 1) you don't have hierarchical parameters, these will lead to points with infinite density. 2) your distribution logp's are properly specified. Specific issues: 

Sampling the model did not produce useful results either.

What am I doing wrong?
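Is there a recommended way to see which part of the model is responsible for the -inf logp? I was thinking of something along these lines (assuming check_test_point does what I think it does, i.e. report the log probability of each variable at the starting point):

# log probability of each variable at the model's test point
print(basic_model.check_test_point())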

Note: general tips about this kind of model would be very welcome.

I’m a bit confused about what you are trying to achieve with the GaussianRandomWalk here. Maybe you mean something like this?

with pm.Model() as model:
    intercept = pm.Normal('intercept', mu=0, sd=5)
    amplitude = pm.HalfStudentT('amplitude', sd=2.5, nu=3)
    phase = pm.Uniform('phase', lower=0, upper=2*np.pi)
    period = pm.HalfNormal('period', sd=10)

    pred = (intercept
            + amplitude * pm.math.sin(period * i + phase))
    # map the real-valued predictor through the inverse logit to get a valid probability
    pm.Bernoulli('y', p=pm.math.invlogit(pred), observed=data)

I don't think that's quite what I meant. Maybe some more context will help:

I am conducting a series of Bernoulli experiments where the probability of success depends on time (i*tau here), like so:

p_i = sin(period * tau * i + phase)^2

The period and the phase (phi) are not directly observed, only the results of many successive experiments (data in the code). I want to model perturbations to the initial phase with a random walk.
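In PyMC3 terms, what I have in mind is roughly the following (just a sketch; period now actually enters the argument of the sine, and tau, rw_step, n, i, and data are assumed to be defined as before):

with pm.Model() as intended_model:
    period = pm.Normal('period', mu=110., sd=500.0)
    # perturbations to the initial phase as a Gaussian random walk
    phase = pm.GaussianRandomWalk('phase', mu=0, sd=rw_step, shape=n)

    # squared sine keeps p_i inside [0, 1]
    p = pm.math.sin(period * tau * i + phase) ** 2

    pm.Bernoulli('x', p=p, observed=data)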