The model estimates Weibull and Gompertz survival functions. A variable P1 is set to 0 for Weibull and 1 for Gompertz.
I think the model I coded is set up correctly. When P1=0 it works and matches a published result. However, when I set P1=1, it crashes.
The problem is probably this statement:
h=pm.math.exp(Alpha[s,a,0]+Alpha[s,a,1]*timen1)
(h=pm.math.exp(Alpha[s,a,0]+Alpha[s,a,1]*timen1/12) does not crash the model)
I think that when sampling starts, the coefficients Alpha[s,a,1] are too big and overflow the exp function. When I scale timen1, the model works.
I could divide timen1 by its maximum value (72), or by 12 to convert the times from months to years. Both work in my case.
However, rather than modify the data, I would rather have smaller coefficients that don't crash the model. I'm looking for a way to get the model to work without rescaling the data.
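To see why the unscaled model can fail at initialization, here is a minimal numpy sketch. The coefficient and time values below are made up for illustration, not taken from the model: once exp(-h*dt) underflows to zero, p becomes exactly 1, and the Binomial log-likelihood for any arm with r < z is -inf, which matches the -inf reported for y in the error message below.

```python
import numpy as np

# Made-up but plausible magnitudes: a jittered starting value for the
# slope coefficient, and the maximum follow-up time in months.
alpha1 = 0.4     # hypothetical starting value for Alpha[s, a, 1]
timen1 = 72.0    # max time in months (with P1 = 1, timen1 is just t)
dt = 3.0         # interval length

# Unscaled time: the linear predictor is 0.4 * 72 = 28.8, so h ~ 3e12.
h = np.exp(alpha1 * timen1)
p = 1 - np.exp(-h * dt)   # exp(-h*dt) underflows to 0, so p == 1.0 exactly
# The Binomial logp contains (z - r) * log(1 - p), which is -inf at p == 1.

# Scaled time (months -> years): same coefficient, but the exponent is only 2.4.
p_scaled = 1 - np.exp(-np.exp(alpha1 * timen1 / 12) * dt)
# p_scaled is strictly below 1, so log(1 - p_scaled) stays finite.
```

This is why dividing timen1 by 12 (or by its maximum) avoids the crash: it keeps the exponent small enough at the jittered starting point that p never rounds to exactly 1.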
Model with data.
First order Fractional Polynomial.txt (9.2 KB)
Code without the data import, plus the error message:
P1=1.0
N=443
NT=4
NS=8
maxt=48.0
MEAN=np.array([0.0, 0.0])
PREC=np.array([[1.00000E-04, 0],[0.0, 1.00000E-04]])
S=data2["s"].astype(dtype=np.int32).to_numpy().flatten()-1
R=data2["r"].astype(dtype=np.int32).to_numpy().flatten()
Z=data2["z"].astype(dtype=np.int32).to_numpy().flatten()
A=data2["a"].astype(dtype=np.int32).to_numpy().flatten()-1
TIMEN=data2["time"].astype(dtype=np.float64).to_numpy()
# First-order fractional polynomial: ln(t) when P1 == 0, t^P1 otherwise
TIMEN1=np.where(np.equal(P1,0.),np.log(TIMEN),np.power(TIMEN,P1))
DT=data2["dt"].astype(dtype=np.float64).to_numpy()
T1=data1[["t1"]].astype(dtype=np.int32).to_numpy().flatten()-1
T2=data1[["t2"]].astype(dtype=np.int32).to_numpy().flatten()-1
STUDYIDS=data1[["studyid"]].astype(dtype=np.int32).to_numpy()
STUDY=data1["study"].to_list()
TREATMENTS=["MP","MPT","CTDa","MPV"]
coords = {"treatments":TREATMENTS,"study":STUDY}
with pm.Model(coords=coords) as femodel:
    nt=pm.ConstantData("nt",np.array([NT],dtype=np.int32))
    prec=pm.ConstantData("prec",PREC)
    t1=pm.ConstantData("t1",T1)
    t2=pm.ConstantData("t2",T2)
    s=pm.ConstantData("s",S)
    r=pm.ConstantData("r",R)
    z=pm.ConstantData("z",Z)
    a=pm.ConstantData("a",A)
    timen=pm.ConstantData("timen",TIMEN)
    timen1=pm.ConstantData("timen1",TIMEN1)
    dt=pm.ConstantData("dt",DT)
    mean=pm.ConstantData("mean",MEAN)
    d_=pm.MvNormal("d_",mu=mean,tau=prec,shape=(nt[0]-1,2))
    d=pm.Deterministic("d",pm.math.concatenate([pm.math.zeros(shape=(1,2)),d_]),dims="treatments")
    mu=pm.MvNormal("mu",mu=mean,tau=prec,shape=(8,2))
    # Arm 1
    Alpha1=mu
    # Arm 2
    Alpha2=pm.math.stack([mu[:,0]+d[t2,0]-d[t1,0],mu[:,1]+d[t2,1]-d[t1,1]],axis=1)
    Alpha=pm.math.stack([Alpha1,Alpha2],axis=1)
    h=pm.math.exp(Alpha[s,a,0]+Alpha[s,a,1]*timen1)
    p=1-pm.math.exp(-h*dt)
    y=pm.Binomial("y",n=z,p=p,observed=r)
    fedata=pm.sample(10000,tune=5000)
### ERROR MESSAGE ###
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
---------------------------------------------------------------------------
SamplingError Traceback (most recent call last)
Cell In[140], line 30
27 p=1-pm.math.exp(-h*dt)
29 y=pm.Binomial("y",n=z,p=p,observed=r)
---> 30 fedata=pm.sample(10000,tune=5000)
File ~\anaconda3\Lib\site-packages\pymc\sampling\mcmc.py:722, in sample(draws, tune, chains, cores, random_seed, progressbar, step, nuts_sampler, initvals, init, jitter_max_retries, n_init, trace, discard_tuned_samples, compute_convergence_checks, keep_warning_stat, return_inferencedata, idata_kwargs, nuts_sampler_kwargs, callback, mp_ctx, model, **kwargs)
720 ip: dict[str, np.ndarray]
721 for ip in initial_points:
--> 722 model.check_start_vals(ip)
723 _check_start_shape(model, ip)
725 # Create trace backends for each chain
File ~\anaconda3\Lib\site-packages\pymc\model\core.py:1681, in Model.check_start_vals(self, start)
1678 initial_eval = self.point_logps(point=elem)
1680 if not all(np.isfinite(v) for v in initial_eval.values()):
-> 1681 raise SamplingError(
1682 "Initial evaluation of model at starting point failed!\n"
1683 f"Starting values:\n{elem}\n\n"
1684 f"Logp initial evaluation results:\n{initial_eval}\n"
1685 "You can call `model.debug()` for more details."
1686 )
SamplingError: Initial evaluation of model at starting point failed!
Starting values:
{'d_': array([[-6.00971201e-01, 2.92427765e-01],
[-7.05074601e-01, -7.98760624e-01],
[-7.13572854e-05, 7.06728906e-01]]), 'mu': array([[-1.19734297e-01, -5.04778866e-01],
[-7.94497852e-01, -8.42447101e-01],
[ 6.72822358e-01, -6.63009855e-01],
[ 6.14768849e-02, 5.03936542e-01],
[-5.47026089e-01, -5.17259998e-02],
[-1.88918133e-02, 6.32461311e-01],
[-3.68636218e-02, 8.74631714e-01],
[ 7.26899759e-01, 4.72501935e-04]])}
Logp initial evaluation results:
{'d_': -33.14, 'mu': -88.39, 'y': -inf}
You can call `model.debug()` for more details.