I am not aware of any reparametrization, other than adding a fudge factor (a minimal sketch of that idea is below). But why is this important? If your model concludes that the average is 0.9999 for the largest predictor, isn't that good enough? I would imagine you have many more sources of noise in your data-generating/collecting process that are larger than the 0.0001 you are focusing on.
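For concreteness, here is one way such a fudge factor is often applied: clip the values so they lie strictly inside the interval. This is a sketch under my assumptions about your setup (fitted means in [0, 1]); the function name and the `eps` value are made up for illustration, not something from your question:

```python
import numpy as np

def apply_fudge_factor(y, eps=1e-4):
    """Pull values strictly inside (eps, 1 - eps) so that boundary
    values like 0 or 1 no longer cause problems (e.g., in a
    likelihood that is undefined at the endpoints)."""
    return np.clip(y, eps, 1.0 - eps)

# Means of exactly 0 or 1 become 0.0001 and 0.9999, respectively.
y = np.array([0.0, 0.3, 0.9999, 1.0])
print(apply_fudge_factor(y))  # approx. [0.0001 0.3 0.9999 0.9999]
```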