At least one success in a sum of Bernoulli trials (Chernoff bounds?)

The actual statement I am trying to understand comes from the simple proof of the Johnson-Lindenstrauss lemma by Sanjoy Dasgupta and Anupam Gupta, which can be found here: https://cseweb.ucsd.edu/~dasgupta/papers/jl.pdf. On page 62 they state: “Repeating this projection O(n) times can boost the success probability to the desired constant, giving us the claimed randomized polynomial time algorithm.”

As I see it, this can be summarized as follows:

Each Bernoulli trial is governed by a constant $N$, where $P(X = 1) = 1/N$ and hence $P(X = 0) = 1 - 1/N$.

I am trying to show that we can run $\mathcal{O}(N)$ such trials and get at least one success with probability at least some desired constant, say 50 percent.
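For reference, here is the direct calculation I believe lies behind the claim (my own sketch; $c$ is a constant I introduce for the number of repetitions per unit of $N$):

$$P(\text{no success in } cN \text{ trials}) = \left(1 - \frac{1}{N}\right)^{cN} \le e^{-c}, \qquad \text{using } 1 - x \le e^{-x},$$

so it seems choosing $c \ge \ln 2$ already pushes the failure probability below $1/2$. Still, I would like to see how the same conclusion follows from Chernoff bounds.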

How can this be done using Chernoff bounds? (I have heard that they are the tool to use here.)

I assume the first step is to define a random variable $Z = \sum_{n} X_n$ over the $\mathcal{O}(N)$ trials and then bound it from below in some way using a Chernoff bound.
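Here is how far I get, assuming the standard multiplicative lower-tail Chernoff bound (my notation: $m = cN$ trials, $\mu = E[Z]$):

$$\mu = E[Z] = m \cdot \frac{1}{N} = c, \qquad P\big(Z \le (1 - \delta)\mu\big) \le e^{-\mu \delta^2 / 2} \quad \text{for } 0 < \delta \le 1.$$

If this is right, then taking $\delta = 1$ gives $P(Z = 0) \le e^{-c/2}$, so $c \ge 2\ln 2$ would suffice for the 50 percent target, but I am not sure I am applying the bound correctly.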

Can someone help me out?
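For what it is worth, here is a quick Monte Carlo sanity check I wrote (my own sketch; $N$ and the number of runs are arbitrary choices) that seems consistent with the $c = \ln 2$ calculation above:

```python
import math
import random

N = 1000                         # P(success) = 1/N for each trial
m = math.ceil(N * math.log(2))   # c = ln 2, so m = O(N) trials
runs = 20_000                    # Monte Carlo repetitions

# Count how often at least one of the m trials succeeds.
hits = sum(
    any(random.random() < 1 / N for _ in range(m))
    for _ in range(runs)
)
print(f"P(at least one success) ~ {hits / runs:.3f} (target: >= 0.5)")
```

This should print an estimate close to 0.5, consistent with the bound above.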