Gibbs Sampler for LDA

Implementation of a Gibbs sampler for inference in Latent Dirichlet Allocation (LDA)
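
For reference, the key update in the common collapsed variant of the sampler (notation below is mine, not necessarily the post's) resamples each token's topic assignment from

$$
p(z_i = k \mid \mathbf{z}_{-i}, \mathbf{w}) \;\propto\; \big(n_{d,k}^{-i} + \alpha\big)\,\frac{n_{k,w_i}^{-i} + \beta}{n_{k,\cdot}^{-i} + V\beta},
$$

where $n_{d,k}$ counts tokens in document $d$ assigned to topic $k$, $n_{k,w}$ counts occurrences of word $w$ assigned to topic $k$, $V$ is the vocabulary size, the superscript $-i$ excludes the current token, and $\alpha, \beta$ are the Dirichlet hyperparameters.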

Deriving normal equation

Deriving the normal equation for solving least squares from a calculus point of view
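
As a one-line preview of the result, setting the gradient of the squared error to zero (and assuming $X^\top X$ is invertible) gives

$$
L(\mathbf{w}) = \lVert X\mathbf{w} - \mathbf{y} \rVert^2, \qquad
\nabla_{\mathbf{w}} L = 2X^\top (X\mathbf{w} - \mathbf{y}) = 0
\;\Longrightarrow\; \mathbf{w} = (X^\top X)^{-1} X^\top \mathbf{y}.
$$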

Boundary Seeking GAN

Training a GAN by moving the generated samples toward the decision boundary.
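
One way to read the idea (a sketch of the continuous-case objective, not a definitive statement of the post's code): the generator is trained to minimize

$$
\frac{1}{2}\,\mathbb{E}_{z}\Big[\big(\log D(G(z)) - \log\big(1 - D(G(z))\big)\big)^2\Big],
$$

which is zero exactly when $D(G(z)) = \tfrac{1}{2}$, i.e. when generated samples sit on the discriminator's decision boundary.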

Least Squares GAN

2017 is the year GAN lost its logarithm. First it was Wasserstein GAN, and now it's LSGAN's turn.
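
Concretely, with the common label choice of 1 for real and 0 for fake, the least squares losses replace the log terms with squared errors:

$$
\min_D \;\tfrac{1}{2}\,\mathbb{E}_{x}\big[(D(x) - 1)^2\big] + \tfrac{1}{2}\,\mathbb{E}_{z}\big[D(G(z))^2\big],
\qquad
\min_G \;\tfrac{1}{2}\,\mathbb{E}_{z}\big[(D(G(z)) - 1)^2\big].
$$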

CoGAN: Learning joint distribution with GAN

The original GAN and Conditional GAN learn the marginal and conditional distributions of data, respectively. But how can we extend them to learn the joint distribution instead?
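
The core trick is weight sharing between two GANs. A minimal PyTorch sketch of the generator side (layer sizes and names are mine, purely illustrative): the early layers are shared so both domains decode from one high-level code, while the output layers stay domain-specific.

```python
import torch.nn as nn

class CoGenerators(nn.Module):
    def __init__(self, z_dim=100, h_dim=128, x_dim=784):
        super().__init__()
        # Shared trunk: captures the joint, high-level structure.
        self.shared = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU())
        # Domain-specific heads: render the shared code into each domain.
        self.head1 = nn.Sequential(nn.Linear(h_dim, x_dim), nn.Sigmoid())
        self.head2 = nn.Sequential(nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, z):
        h = self.shared(z)
        # One noise vector yields a correlated pair: a sample from the joint.
        return self.head1(h), self.head2(h)
```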

Why does L2 reconstruction loss yield blurry images?

It is generally known that the L2 reconstruction loss found in generative models yields blurrier images than, e.g., an adversarial loss. But why?
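
The usual one-line explanation: the minimizer of the expected squared error is the conditional mean,

$$
\hat{x}^*(z) = \arg\min_{\hat{x}} \,\mathbb{E}\big[\lVert x - \hat{x} \rVert^2 \mid z\big] = \mathbb{E}[x \mid z],
$$

so whenever several sharp images are equally plausible for the same code $z$, the optimal reconstruction averages them, and averaging sharp images produces blur.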

Wasserstein GAN implementation in TensorFlow and Pytorch

Wasserstein GAN comes with the promise of stabilizing GAN training and eliminating the mode collapse problem.
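
A minimal PyTorch sketch of one training step under the usual assumptions (a critic D with unbounded scalar output, a generator G mapping noise to samples; in the actual algorithm the critic is updated several times per generator step, omitted here for brevity):

```python
import torch

def wgan_step(D, G, x_real, z, d_opt, g_opt, clip=0.01):
    # Critic: maximize E[D(x_real)] - E[D(G(z))]  (so minimize the negation).
    d_opt.zero_grad()
    d_loss = -(D(x_real).mean() - D(G(z).detach()).mean())
    d_loss.backward()
    d_opt.step()
    # Weight clipping keeps the critic (roughly) 1-Lipschitz.
    for p in D.parameters():
        p.data.clamp_(-clip, clip)

    # Generator: maximize E[D(G(z))].
    g_opt.zero_grad()
    g_loss = -D(G(z)).mean()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```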

InfoGAN: unsupervised conditional GAN in TensorFlow and Pytorch

Adding a mutual information regularizer to a GAN turns out to give us a very nice effect: learning the data representation and its properties in an unsupervised manner.
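
In objective form (as in the InfoGAN paper), the latent code $c$ is tied to the generated sample by maximizing a variational lower bound on their mutual information:

$$
\min_{G, Q} \max_D \; V_{\text{GAN}}(D, G) - \lambda\, L_I(G, Q),
\qquad
L_I(G, Q) = \mathbb{E}_{c \sim p(c),\, x \sim G(z, c)}\big[\log Q(c \mid x)\big] + H(c),
$$

where $Q$ is an auxiliary network that tries to recover the code $c$ from the generated sample.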

Maximizing likelihood is equivalent to minimizing KL-Divergence

We will show that maximum likelihood estimation (MLE) is equivalent to minimizing the KL-Divergence between the estimator and the true distribution.
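
The argument in two lines: with model $p_\theta$ and data distribution $p_{\text{data}}$,

$$
\begin{aligned}
D_{KL}(p_{\text{data}} \,\|\, p_\theta)
&= \mathbb{E}_{x \sim p_{\text{data}}}\big[\log p_{\text{data}}(x)\big] - \mathbb{E}_{x \sim p_{\text{data}}}\big[\log p_\theta(x)\big], \\
\arg\min_\theta D_{KL}(p_{\text{data}} \,\|\, p_\theta)
&= \arg\max_\theta \mathbb{E}_{x \sim p_{\text{data}}}\big[\log p_\theta(x)\big],
\end{aligned}
$$

since the first term does not depend on $\theta$; replacing the expectation with an average over training samples gives the usual log-likelihood objective.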

Variational Autoencoder (VAE) in Pytorch

With all of those bells and whistles surrounding Pytorch, let's implement a Variational Autoencoder (VAE) with it.
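
A minimal sketch of the core pieces (layer sizes are arbitrary, not taken from the post): encode to a Gaussian $q(z \mid x)$, sample with the reparameterization trick, decode, and minimize reconstruction error plus the KL term against a standard normal prior.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu, self.logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return torch.sigmoid(self.dec(z)), mu, logvar

def vae_loss(x, x_recon, mu, logvar):
    # Negative ELBO: reconstruction term + KL(q(z|x) || N(0, I)).
    recon = F.binary_cross_entropy(x_recon, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```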