This post should be quick as it is just a port of the previous Keras code. For the intuition and derivation of the Variational Autoencoder (VAE), plus the Keras implementation, check this post. The full code is available in my GitHub repo: https://github.com/wiseodd/generative-models.
Let’s begin by importing the required packages.
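A minimal sketch of what the setup could look like; the hyperparameter names and sizes (`mb_size`, `X_dim`, and so on) are illustrative assumptions, not the post’s exact values:

```python
import torch
import torch.nn.functional as F
from torch import optim

# Illustrative hyperparameters (assumed, not from the original post)
mb_size = 64   # minibatch size
X_dim = 784    # input dimension, e.g. flattened 28x28 MNIST digits
h_dim = 128    # hidden layer size
Z_dim = 100    # latent dimension
lr = 1e-3      # learning rate for the optimizer
```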
Now, recall that in a VAE there are two networks: the encoder \( Q(z \vert X) \) and the decoder \( P(X \vert z) \). So, let’s build our \( Q(z \vert X) \) first:
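A sketch of such an encoder with explicitly managed weight tensors (the dimensions, initialization scheme, and parameter names here are illustrative assumptions):

```python
import torch

# Illustrative dimensions; the real sizes depend on your data and model
X_dim, h_dim, Z_dim = 784, 128, 100

def init_weight(size):
    # Small random initialization (a simple Xavier-style scheme, assumed here)
    return (torch.randn(*size) / size[0] ** 0.5).requires_grad_()

# Encoder parameters: X -> h -> (mu, log-variance)
Wxh = init_weight([X_dim, h_dim])
bxh = torch.zeros(h_dim, requires_grad=True)
Whz_mu = init_weight([h_dim, Z_dim])
bhz_mu = torch.zeros(Z_dim, requires_grad=True)
Whz_var = init_weight([h_dim, Z_dim])
bhz_var = torch.zeros(Z_dim, requires_grad=True)

def Q(X):
    # X: (batch, X_dim) -> mean and log-variance of q(z|X), each (batch, Z_dim)
    h = torch.relu(X @ Wxh + bxh.repeat(X.size(0), 1))
    z_mu = h @ Whz_mu + bhz_mu.repeat(h.size(0), 1)
    z_logvar = h @ Whz_var + bhz_var.repeat(h.size(0), 1)
    return z_mu, z_logvar
```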
Our \( Q(z \vert X) \) is a two-layer net that outputs \( \mu \) and \( \Sigma \), the parameters of the encoded distribution. So, let’s create a function to sample from it:
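This is the reparameterization trick; a sketch, assuming the encoder outputs the log-variance rather than \( \Sigma \) directly (a common parameterization):

```python
import torch

def sample_z(mu, log_var):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so that gradients can flow through mu and log_var
    eps = torch.randn_like(mu)
    return mu + torch.exp(log_var / 2) * eps
```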
Let’s construct the decoder \( P(X \vert z) \), which is also a two-layer net:
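A sketch of such a decoder in the same explicit-parameter style (dimensions and names are again illustrative assumptions):

```python
import torch

Z_dim, h_dim, X_dim = 100, 128, 784  # illustrative sizes

def init_weight(size):
    # Small random initialization (a simple Xavier-style scheme, assumed here)
    return (torch.randn(*size) / size[0] ** 0.5).requires_grad_()

# Decoder parameters: z -> h -> X
Wzh = init_weight([Z_dim, h_dim])
bzh = torch.zeros(h_dim, requires_grad=True)
Whx = init_weight([h_dim, X_dim])
bhx = torch.zeros(X_dim, requires_grad=True)

def P(z):
    # z: (batch, Z_dim) -> Bernoulli parameters for X, (batch, X_dim)
    h = torch.relu(z @ Wzh + bzh.repeat(z.size(0), 1))
    X = torch.sigmoid(h @ Whx + bhx.repeat(h.size(0), 1))
    return X
```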
Note: the use of `b.repeat(X.size(0), 1)` is because of this PyTorch issue.
Now, the interesting stuff: training the VAE model. First, as always, at each training step we do four things: the forward pass, the loss computation, the backward pass, and the parameter update.
Now, the forward step:
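Put together, the forward pass is just three calls. Sketched here as a function that takes the encoder, sampler, and decoder as arguments so the snippet stands alone (this argument-passing style is for illustration only):

```python
import torch

def forward(X, Q, sample_z, P):
    # Encode X, sample z with the reparameterization trick, then decode
    z_mu, z_logvar = Q(X)
    z = sample_z(z_mu, z_logvar)
    X_sample = P(z)
    return X_sample, z_mu, z_logvar
```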
That is it. We just call the functions we defined before. Let’s continue with the loss, which consists of two parts: reconstruction loss and KL-divergence of the encoded distribution:
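A sketch of the two loss terms, assuming a Bernoulli decoder and a diagonal-Gaussian encoder parameterized by mean and log-variance:

```python
import torch
import torch.nn.functional as F

def vae_loss(X, X_sample, z_mu, z_logvar):
    # Reconstruction term: binary cross-entropy between input and
    # reconstruction, summed over pixels and averaged over the minibatch
    recon = F.binary_cross_entropy(X_sample, X, reduction='sum') / X.size(0)
    # KL(q(z|X) || N(0, I)) in closed form for a diagonal Gaussian
    kl = 0.5 * torch.sum(torch.exp(z_logvar) + z_mu ** 2 - 1.0 - z_logvar) / X.size(0)
    return recon + kl
```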
The backward and update steps are as easy as calling a function each, thanks to PyTorch’s autograd feature:
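A minimal self-contained illustration of the backward/update pattern with Adam; the single tensor and dummy loss here stand in for the VAE parameters and the VAE loss:

```python
import torch
from torch import optim

# Stand-in parameter; in the VAE this would be the list of all
# encoder and decoder tensors
W = torch.randn(784, 100, requires_grad=True)
solver = optim.Adam([W], lr=1e-3)

X = torch.rand(64, 784)
loss = (X @ W).pow(2).mean()   # dummy loss in place of the VAE loss

loss.backward()      # autograd computes gradients for all parameters
solver.step()        # Adam update
solver.zero_grad()   # reset accumulated gradients for the next iteration
```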
After that, we could inspect the loss, or visualize samples from \( P(X \vert z) \) to check the progression of the training every now and then.
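For example, sampling from the prior and decoding might look like this sketch (the sizes and the 28x28 reshape assume MNIST-like data):

```python
import torch

Z_dim, n_samples = 100, 16  # illustrative sizes

def draw_samples(P):
    # Sample z ~ N(0, I) from the prior and decode; for MNIST-like data the
    # output can be reshaped to 28x28 images for plotting
    z = torch.randn(n_samples, Z_dim)
    with torch.no_grad():
        samples = P(z)
    return samples.view(n_samples, 28, 28)
```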