Wasserstein GAN implementation in TensorFlow and PyTorch
GANs are a very popular research topic in machine learning right now. There are two broad lines of GAN research: one applies GANs to interesting problems, and one attempts to stabilize training.
Indeed, stabilizing GAN training is a very big deal in the field. The original GAN suffers from several difficulties, e.g. mode collapse, where the generator collapses into a very narrow distribution that covers only a single mode of the data distribution. The implication of mode collapse is that the generator can only produce very similar samples (e.g. a single digit in MNIST), i.e. the generated samples are not diverse. This problem, of course, violates the spirit of GANs.
Another problem in GANs is that there is no metric that tells us about convergence. The generator and discriminator losses do not tell us anything about it. Of course, we could monitor training progress by looking at the data generated by the generator every now and then. However, that is a strictly manual process. So it would be great to have an interpretable metric that tells us about training progress.
Note: the code can be found here: https://github.com/wiseodd/generative-models
Wasserstein GAN (WGAN) is a newly proposed GAN algorithm that promises to remedy the two problems above.
For the intuition and theoretical background behind WGAN, refer to this excellent summary (credits to the author).
The overall algorithm is shown below:
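(The algorithm figure is not reproduced here; paraphrasing Algorithm 1 of the WGAN paper, with its default hyperparameters \( \alpha = 0.00005 \), \( c = 0.01 \), \( n_{critic} = 5 \):)

```text
while θ has not converged:
    for t = 1, ..., n_critic:
        sample a minibatch {x_i} from the data and {z_i} from the prior
        g_w ← ∇_w [ (1/m) Σ_i D_w(x_i) − (1/m) Σ_i D_w(G_θ(z_i)) ]
        w ← w + α · RMSProp(w, g_w)     # gradient ascent on the critic
        w ← clip(w, −c, c)              # weight clipping
    sample a minibatch {z_i} from the prior
    g_θ ← −∇_θ (1/m) Σ_i D_w(G_θ(z_i))
    θ ← θ − α · RMSProp(θ, g_θ)         # one generator step
```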
We can see that the algorithm is quite similar to the original GAN's. However, to implement WGAN, we need to note a few things from the above:
- No \( \log \) in the loss. The output of \( D \) is no longer a probability, hence we do not apply a sigmoid at the output of \( D \)
- Clip the weights of \( D \)
- Train \( D \) more than \( G \)
- Use RMSProp instead of Adam
- Use a lower learning rate; the paper uses \( \alpha = 0.00005 \)
WGAN TensorFlow implementation
The base implementation of GAN can be found in a past post. We only need to modify the traditional GAN with respect to the items above. So first, let's update our \( D \):
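A minimal sketch of the updated critic, written in TF2 style (the original post predates TF2; the names and layer sizes here are illustrative, not the post's exact code). The only WGAN-specific change is at the output: the critic returns a raw score, with no sigmoid.

```python
import tensorflow as tf

# Hypothetical two-layer critic. The WGAN change is the last line of D:
# the output is an unbounded score, so there is no tf.nn.sigmoid.
def make_critic(x_dim=784, h_dim=128):
    D_W1 = tf.Variable(tf.random.normal([x_dim, h_dim], stddev=0.1))
    D_b1 = tf.Variable(tf.zeros([h_dim]))
    D_W2 = tf.Variable(tf.random.normal([h_dim, 1], stddev=0.1))
    D_b2 = tf.Variable(tf.zeros([1]))
    theta_D = [D_W1, D_b1, D_W2, D_b2]  # needed later for weight clipping

    def D(X):
        h = tf.nn.relu(tf.matmul(X, D_W1) + D_b1)
        return tf.matmul(h, D_W2) + D_b2  # no sigmoid: raw critic score
    return D, theta_D
```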
Next, we modify our loss by simply removing the \( \log \):
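A sketch of the modified losses, assuming `D_real = D(X)` and `D_fake = D(G(z))` are batches of raw critic scores. Compared to the original GAN, the \( \log \) simply disappears: the critic minimizes \( \mathbb{E}[D(\text{fake})] - \mathbb{E}[D(\text{real})] \) (i.e. maximizes the Wasserstein estimate), and the generator minimizes \( -\mathbb{E}[D(\text{fake})] \).

```python
import tensorflow as tf

# WGAN losses: plain means of critic scores, no log and no cross-entropy.
def wgan_losses(D_real, D_fake):
    D_loss = tf.reduce_mean(D_fake) - tf.reduce_mean(D_real)
    G_loss = -tf.reduce_mean(D_fake)
    return D_loss, G_loss
```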
We then clip the weight of \( D \) after each gradient descent update:
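A sketch of the clipping step, assuming `theta_D` is the list of the critic's variables: after every critic update, each variable is clamped to \( [-c, c] \), with \( c = 0.01 \) as in the paper.

```python
import tensorflow as tf

# Clamp every critic parameter to [-c, c] after each gradient update.
def clip_critic_weights(theta_D, c=0.01):
    for p in theta_D:
        p.assign(tf.clip_by_value(p, -c, c))
```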
Lastly, we train \( D \) more:
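A self-contained sketch of the full schedule in TF2 eager style, with toy 1-D data and linear networks standing in for the real \( G \) and \( D \) (everything here is illustrative, not the original post's code): \( n_{critic} = 5 \) critic updates per generator update, RMSProp with \( \alpha = 0.00005 \), and clipping after every critic step.

```python
import tensorflow as tf

# Illustrative toy setup: linear critic and generator on 1-D data.
tf.random.set_seed(0)
z_dim, mb_size, c, n_critic = 4, 32, 0.01, 5

D_W = tf.Variable(tf.random.normal([1, 1], stddev=0.1))
D_b = tf.Variable(tf.zeros([1]))
G_W = tf.Variable(tf.random.normal([z_dim, 1], stddev=0.1))
G_b = tf.Variable(tf.zeros([1]))
theta_D, theta_G = [D_W, D_b], [G_W, G_b]

D = lambda x: tf.matmul(x, D_W) + D_b  # linear critic, no sigmoid
G = lambda z: tf.matmul(z, G_W) + G_b  # linear generator

d_opt = tf.keras.optimizers.RMSprop(learning_rate=5e-5)
g_opt = tf.keras.optimizers.RMSprop(learning_rate=5e-5)

for it in range(20):
    for _ in range(n_critic):          # train D more than G
        X = tf.random.normal([mb_size, 1], mean=3.0)
        z = tf.random.normal([mb_size, z_dim])
        with tf.GradientTape() as tape:
            D_loss = tf.reduce_mean(D(G(z))) - tf.reduce_mean(D(X))
        d_opt.apply_gradients(zip(tape.gradient(D_loss, theta_D), theta_D))
        for p in theta_D:              # clip after each critic step
            p.assign(tf.clip_by_value(p, -c, c))
    z = tf.random.normal([mb_size, z_dim])
    with tf.GradientTape() as tape:
        G_loss = -tf.reduce_mean(D(G(z)))
    g_opt.apply_gradients(zip(tape.gradient(G_loss, theta_G), theta_G))
```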
And that is it.
WGAN PyTorch implementation
The base implementation of the original GAN can be found in a past post. As in the TensorFlow version, the modifications are quite straightforward. Note that the code below goes inside each training iteration.
First, update \( D \):
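A concrete sketch of one critic update, with toy linear modules standing in for the real networks (all names here are illustrative): `D` has no sigmoid at the output, `D_solver` is RMSprop over `D.parameters()` with `lr=5e-5`, and the weights are clamped right after the step.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the real networks; D's output is a raw score (no sigmoid).
torch.manual_seed(0)
z_dim, mb_size = 4, 32
D = nn.Linear(1, 1)
G = nn.Linear(z_dim, 1)
D_solver = torch.optim.RMSprop(D.parameters(), lr=5e-5)

X = torch.randn(mb_size, 1) + 3.0      # stand-in real batch
z = torch.randn(mb_size, z_dim)

# Critic forward-loss-backward-update; no log in the loss.
D_solver.zero_grad()
D_real, D_fake = D(X), D(G(z).detach())
D_loss = -(torch.mean(D_real) - torch.mean(D_fake))
D_loss.backward()
D_solver.step()

# Weight clipping after the update
for p in D.parameters():
    p.data.clamp_(-0.01, 0.01)
```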
Train \( D \) more:
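Putting the schedule together, a self-contained sketch of one training phase (again with illustrative toy networks, not the original post's code): five critic steps with clipping, then a single generator step, both using RMSprop at `lr=5e-5`.

```python
import torch
import torch.nn as nn

# Toy setup: linear critic (no sigmoid) and linear generator on 1-D data.
torch.manual_seed(0)
z_dim, mb_size, n_critic = 4, 32, 5
D, G = nn.Linear(1, 1), nn.Linear(z_dim, 1)
D_solver = torch.optim.RMSprop(D.parameters(), lr=5e-5)
G_solver = torch.optim.RMSprop(G.parameters(), lr=5e-5)

for it in range(10):
    for _ in range(n_critic):              # train D more than G
        X = torch.randn(mb_size, 1) + 3.0
        z = torch.randn(mb_size, z_dim)
        D_solver.zero_grad()
        D_loss = -(D(X).mean() - D(G(z).detach()).mean())
        D_loss.backward()
        D_solver.step()
        for p in D.parameters():           # clip after each critic step
            p.data.clamp_(-0.01, 0.01)
    z = torch.randn(mb_size, z_dim)
    G_solver.zero_grad()
    G_loss = -D(G(z)).mean()
    G_loss.backward()
    G_solver.step()
```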