There is a Jupyter-notebook-driven tutorial on GitHub (full disclosure, it is my GitHub).
(Solutions available here)
The noise, or rather the latent random variable, can be generated pretty much however you like, for example:
import numpy as np

# Generate the latent random variable fed to the generator by drawing from a uniform distribution
z = np.random.uniform(-1., 1., size=[batch_size, noise_dim])
Still, it makes sense to think about the activation function in the input layer of your generator and to match the noise to its sensitive range.
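For instance, noise drawn uniformly from [-1, 1] already sits in the roughly linear, sensitive region of a tanh-style unit, while a standard normal is another common choice whose mass mostly falls within [-3, 3]. A minimal sketch of both options (the values of batch_size and noise_dim below are illustrative placeholders, not taken from the snippet above):

import numpy as np

batch_size, noise_dim = 64, 100  # illustrative values

# Uniform noise in [-1, 1]: matches the sensitive range of tanh-like inputs
z_uniform = np.random.uniform(-1., 1., size=[batch_size, noise_dim])

# Standard normal noise: another common choice for the latent variable
z_normal = np.random.normal(0., 1., size=[batch_size, noise_dim])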
The generator takes this input as a seed and decodes that latent variable into the domain of the source dataset. Consequently, the same random variable will always produce exactly the same generated sample.
So you should keep drawing fresh samples throughout training rather than holding the noise constant, as in the sketch below.
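Put together, a training loop would draw a new latent batch on every iteration. This is only a sketch: sample_real_data, train_discriminator, and train_generator are hypothetical helpers standing in for whatever data pipeline and update steps your GAN implementation uses.

import numpy as np

for step in range(num_steps):
    # Draw a fresh latent batch each iteration so the generator sees new seeds
    z = np.random.uniform(-1., 1., size=[batch_size, noise_dim])

    real_batch = sample_real_data(batch_size)    # hypothetical helper
    d_loss = train_discriminator(real_batch, z)  # hypothetical helper
    g_loss = train_generator(z)                  # hypothetical helper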