Likelihoods are fixed during sampling: if you have a normal likelihood, you will have a normal posterior predictive.
Your distribution should be parametrized so that its random and logp methods agree with each other.
It sounds like you need to define some extra parameters that are learned during sampling and which influence the magnitude of certain points (not just the magnitude of the innovations), so that both the logp and random methods reflect that behavior of the data, as sketched below.
Otherwise it seems like you are asking PyMC to model some aspect of your data out of thin air.
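Here is a minimal sketch of what I mean, using pm.CustomDist (PyMC >= 5). The per-point scale multipliers `s` and the placeholder data `y_obs` are hypothetical, just to illustrate how the same parametrization is shared between logp and random:

```python
import numpy as np
import pymc as pm

# Placeholder data standing in for your observations (hypothetical)
y_obs = np.random.default_rng(0).normal(size=50)

def logp(value, mu, sigma, s):
    # Normal log-density with a point-specific scale sigma * s
    return pm.logp(pm.Normal.dist(mu=mu, sigma=sigma * s), value)

def random(mu, sigma, s, rng=None, size=None):
    # Draws must use exactly the same parametrization as logp
    return rng.normal(loc=mu, scale=sigma * s, size=size)

with pm.Model() as model:
    mu = pm.Normal("mu", 0, 1)
    sigma = pm.HalfNormal("sigma", 1)
    # Learned per-observation multipliers controlling the magnitude of certain points (hypothetical)
    s = pm.HalfNormal("s", 1, shape=len(y_obs))
    y = pm.CustomDist("y", mu, sigma, s, logp=logp, random=random, observed=y_obs)
```

If you can express the distribution with a `dist=` function that returns a PyMC distribution (e.g. `pm.Normal.dist(mu, sigma * s)`), CustomDist will derive both logp and random for you, which guarantees they stay consistent.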