Stick-breaking indian buffet process for an infinite latent feature model

Thanks @junpenglao for the reply and explanation. Regarding your first point, the main difference between the stick-breaking construction and other implementations is that you do not force the sampler to use any fixed truncation level for the stick sizes; the sampler is supposed to learn the proper lower limit of the stick size during sampling. What surprises me is that equation 25 in the paper is not log-concave for small values close to zero, while log-concavity is a prerequisite for using adaptive rejection sampling (ARS). For small x values, the adaptive rejection sampler returns this error message when I run it:
Trap: non-logcocavity detected by ARS update function.

Here is the plot I made to show that this function is not log-concave:

import numpy as np
import matplotlib.pyplot as plt

def g(x, N, alpha):
    # log-density of eq. 25: alpha * sum_{i=1}^{N-1} (1-x)^i / i
    #   + (alpha - 1) * log(x) + N * log(1 - x)
    s = 0.
    for i in range(1, N):
        s += 1. / i * (1 - x) ** i
    return alpha * s + (alpha - 1) * np.log(x) + N * np.log(1. - x)

def h(x, N, alpha):
    # simplified form from eq. (A.11) of the thesis
    return N * np.log(1 - x) - np.log(x)

# start slightly above zero to avoid log(0)
x = np.linspace(1e-6, 0.005, 500)

plt.plot(x, h(x, 300, 0.9), 'k', label='h')
plt.plot(x, g(x, 300, 0.9), 'b', label='g')
plt.legend()
plt.show()

[Plot of g and h for N = 300, alpha = 0.9 over small x, showing the lack of log-concavity near zero]
The h function is based on the simplification given in eq. (A.11) of the thesis.
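To see why this happens, note that the (alpha - 1) * log(x) term in eq. 25 has second derivative (1 - alpha) / x^2, which blows up to +infinity as x -> 0 whenever alpha < 1, so the target cannot be log-concave near zero. A minimal finite-difference check confirms this (a sketch, independent of any particular ARS implementation; the helper names here are mine):

```python
import numpy as np

def g(x, N, alpha):
    # log-density of eq. 25: alpha * sum_{i=1}^{N-1} (1-x)^i / i
    #   + (alpha - 1) * log(x) + N * log(1 - x)
    i = np.arange(1, N)
    return (alpha * np.sum((1.0 - x) ** i / i)
            + (alpha - 1.0) * np.log(x)
            + N * np.log(1.0 - x))

def curvature(f, x, h, **kw):
    # central finite-difference estimate of the second derivative
    return (f(x + h, **kw) - 2.0 * f(x, **kw) + f(x - h, **kw)) / h ** 2

# positive curvature (log-convex) near zero, negative (log-concave) further in
print(curvature(g, 1e-3, h=1e-4, N=300, alpha=0.9) > 0)  # True
print(curvature(g, 0.5, h=1e-3, N=300, alpha=0.9) < 0)   # True
```

With alpha >= 1 the problematic term disappears, which is why the failure only shows up for alpha < 1 at small x.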
Do you have any suggestions for avoiding the problem of the sampler getting stuck in the log-convex region?

Thanks.