Yes! A design goal of PyMC3 is to let users focus on statistical modelling rather than on inference, and tuning is how it automatically sets some of the dozens of knobs available in modern MCMC methods.
As a basic, concrete example, Metropolis-Hastings MCMC starts at a point x, then draws x' from Normal(x, sd), and does some math to accept or reject x': if it rejects x', you add x to your samples again.
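Just to make that logic concrete, here is a toy sketch of the accept/reject step with a symmetric Normal proposal, written in plain NumPy (this is an illustration, not PyMC3's implementation):

```python
import numpy as np

def metropolis_step(x, log_prob, sd, rng):
    """One Metropolis step with a symmetric Normal(x, sd) proposal."""
    x_prop = rng.normal(x, sd)                  # draw x' ~ Normal(x, sd)
    log_ratio = log_prob(x_prop) - log_prob(x)  # the "some math": log acceptance ratio
    if np.log(rng.uniform()) < log_ratio:       # accept with probability min(1, ratio)
        return x_prop, True
    return x, False                             # rejected: keep (re-add) x

# Example: sample from a standard normal target
rng = np.random.default_rng(0)
log_prob = lambda x: -0.5 * x**2
samples, x = [], 0.0
for _ in range(1000):
    x, _accepted = metropolis_step(x, log_prob, sd=1.0, rng=rng)
    samples.append(x)
```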
So how do we choose sd for the proposal distribution? Theoretical work on random-walk Metropolis (the classic result is Roberts, Gelman & Gilks, 1997) suggests it is most efficient when about 23.4% of proposed samples are accepted, and it turns out that lowering the step size increases the probability of accepting a proposal. PyMC3 will spend the first 500 steps increasing and decreasing the step size to try to find the value of sd that gives you an acceptance rate of 23.4% (you can even request a different target acceptance rate).
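As an illustration only, here is one crude way such tuning could work, reusing metropolis_step and log_prob from the sketch above. The batch size of 50 and the 10% multiplicative adjustments are arbitrary choices for this example, not PyMC3's actual heuristics:

```python
def tune_sd(log_prob, n_tune=500, target_accept=0.234, sd=1.0, rng=None):
    """Crudely adapt the proposal sd toward a target acceptance rate."""
    rng = rng or np.random.default_rng()
    x, accepted = 0.0, 0
    for i in range(1, n_tune + 1):
        x, ok = metropolis_step(x, log_prob, sd, rng)
        accepted += ok
        if i % 50 == 0:                  # adjust every 50 steps
            rate = accepted / 50
            if rate > target_accept:
                sd *= 1.1                # accepting too often: take bigger steps
            else:
                sd *= 0.9                # accepting too rarely: take smaller steps
            accepted = 0
    return sd

tuned_sd = tune_sd(log_prob)
```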
The problem is that if you keep changing the step size while sampling, you lose the guarantee that your samples (asymptotically) come from the target distribution, so you should typically discard the tuning samples. Also, there is often a lot more adaptation going on during those first steps than just the step size.
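In PyMC3 you control this through pm.sample: the tune argument sets how many adaptation steps to run, and (as far as I recall the API) discard_tuned_samples=True drops them from the returned trace by default. A minimal example:

```python
import pymc3 as pm

with pm.Model():
    mu = pm.Normal("mu", 0.0, 1.0)
    pm.Normal("obs", mu, 1.0, observed=[0.1, -0.3, 0.2])
    # Run 500 tuning steps first (adapted and then thrown away),
    # then keep 1000 draws for inference.
    trace = pm.sample(draws=1000, tune=500, discard_tuned_samples=True)
```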
tl;dr: The first tune steps give PyMC3 a chance to adjust the sampler's parameters for you, based on best practices and current research.