Natural Gradient Descent

Intuition and derivation of natural gradient descent.
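
As a rough sketch of the idea the post derives (a hypothetical minimal example, not the post's code): a natural-gradient step rescales the ordinary gradient by the inverse Fisher information matrix, `theta <- theta - lr * F^{-1} grad`.

```python
import numpy as np

def natural_gradient_step(theta, grad, F, lr=0.1):
    # Solve F x = grad instead of forming F^{-1} explicitly.
    return theta - lr * np.linalg.solve(F, grad)

theta = np.array([1.0, 1.0])
grad = np.array([2.0, 2.0])
F = np.diag([1.0, 4.0])  # toy Fisher matrix for illustration
print(natural_gradient_step(theta, grad, F))  # [0.8  0.95]
```

Note how the direction with larger Fisher information (the second coordinate) gets a smaller effective step, which is the point of preconditioning by F.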

Fisher Information Matrix

An introduction to, and intuition for, the Fisher Information Matrix.
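
One way to see the object the post introduces (a hypothetical toy example, not taken from the post): the Fisher information is the second moment of the score, so it can be estimated empirically from samples. For a Bernoulli(p) model the closed form is 1 / (p(1-p)).

```python
import numpy as np

def empirical_fisher(samples, p):
    # Score of the Bernoulli log-likelihood: d/dp [x log p + (1-x) log(1-p)]
    scores = samples / p - (1 - samples) / (1 - p)
    return np.mean(scores ** 2)

rng = np.random.default_rng(0)
p = 0.3
samples = (rng.random(100_000) < p).astype(float)
print(empirical_fisher(samples, p))  # close to 1 / (p * (1 - p)) ~ 4.76
```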

Introduction to Annealed Importance Sampling

An introduction and implementation of Annealed Importance Sampling (AIS).
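
A hypothetical minimal sketch of the AIS idea (the post's own implementation will differ): anneal between two unnormalized densities via the geometric path f_beta = f0^(1-beta) f1^beta, accumulate importance weights across temperatures, and apply a Metropolis transition at each step. Here both endpoints are unit Gaussians, so the true ratio of normalizing constants is 1.

```python
import numpy as np

rng = np.random.default_rng(0)

log_f0 = lambda x: -0.5 * x**2            # N(0, 1), unnormalized
log_f1 = lambda x: -0.5 * (x - 3.0)**2    # N(3, 1), unnormalized

betas = np.linspace(0.0, 1.0, 51)
n = 2000
x = rng.normal(0.0, 1.0, n)               # exact samples from the base p0
log_w = np.zeros(n)

for b_prev, b in zip(betas[:-1], betas[1:]):
    log_fb = lambda x, b=b: (1 - b) * log_f0(x) + b * log_f1(x)
    # Accumulate the incremental importance weight at the new temperature.
    log_w += log_fb(x) - ((1 - b_prev) * log_f0(x) + b_prev * log_f1(x))
    # One Metropolis step targeting f_beta to keep the particles on track.
    prop = x + rng.normal(0.0, 0.5, n)
    accept = np.log(rng.random(n)) < log_fb(prop) - log_fb(x)
    x = np.where(accept, prop, x)

# Both endpoint densities have the same normalizer, so Z1/Z0 = 1.
print(np.mean(np.exp(log_w)))
```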

Gibbs Sampler for LDA

Implementation of a Gibbs sampler for inference in Latent Dirichlet Allocation (LDA).
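
The core of a collapsed Gibbs sampler for LDA is the conditional over topic assignments; a hypothetical sketch of that one update (counts `n_dk`, `n_kw`, `n_k` are assumed to exclude the current assignment):

```python
import numpy as np

def topic_conditional(n_dk, n_kw, n_k, alpha, beta, V):
    # p(z = k | rest) is proportional to
    #   (n_dk + alpha) * (n_kw + beta) / (n_k + beta * V)
    # n_dk: topic counts in this document,           shape (K,)
    # n_kw: counts of the current word per topic,    shape (K,)
    # n_k:  total word counts per topic,             shape (K,)
    p = (n_dk + alpha) * (n_kw + beta) / (n_k + beta * V)
    return p / p.sum()

probs = topic_conditional(
    n_dk=np.array([3., 1.]), n_kw=np.array([5., 0.]),
    n_k=np.array([20., 10.]), alpha=0.1, beta=0.01, V=100)
print(probs)  # heavily favors topic 0, which has seen this word before
```

Sampling `z` from this distribution and updating the counts, word by word, is the whole inner loop of the sampler.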

Boundary Seeking GAN

Training a GAN by moving the generated samples toward the decision boundary.
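
One way to express that idea (a hypothetical sketch, assuming the generator loss penalizes the squared distance of the discriminator's logit from zero, since the decision boundary sits at D = 0.5):

```python
import numpy as np

def g_loss_boundary_seeking(d_fake):
    # d_fake: discriminator probabilities for generated samples, in (0, 1).
    # The logit log D - log(1 - D) is zero exactly on the boundary D = 0.5.
    logit = np.log(d_fake) - np.log(1.0 - d_fake)
    return 0.5 * np.mean(logit ** 2)

print(g_loss_boundary_seeking(np.array([0.5, 0.5])))  # 0.0 at the boundary
print(g_loss_boundary_seeking(np.array([0.9, 0.9])))  # positive away from it
```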

Least Squares GAN

2017 is the year the GAN lost its logarithm. First it was Wasserstein GAN, and now it's LSGAN's turn.
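
Dropping the logarithm concretely means replacing the cross-entropy GAN losses with least-squares losses on the discriminator's outputs. A hypothetical minimal sketch (labels real = 1, fake = 0, one common convention; the post's code may differ):

```python
import numpy as np

def d_loss_lsgan(d_real, d_fake):
    # Discriminator pushes real scores toward 1 and fake scores toward 0.
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def g_loss_lsgan(d_fake):
    # Generator pushes fake scores toward the "real" label 1.
    return 0.5 * np.mean((d_fake - 1.0) ** 2)

d_real = np.array([0.9, 1.1])
d_fake = np.array([0.1, -0.1])
print(d_loss_lsgan(d_real, d_fake), g_loss_lsgan(d_fake))
```

Because the loss is quadratic, samples far from the target label still receive a large gradient, which is one of LSGAN's selling points over the saturating log loss.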

CoGAN: Learning joint distribution with GAN

The original GAN and the Conditional GAN learn the marginal and conditional distributions of data, respectively. But how can we extend them to learn the joint distribution instead?

Why does L2 reconstruction loss yield blurry images?

It is generally known that the L2 reconstruction loss found in generative models yields blurrier images than, e.g., an adversarial loss. But why?
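
The short version of the answer can be demonstrated in a few lines (a hypothetical toy example): under an L2 loss, the optimal single prediction for several equally plausible sharp targets is their pixel-wise mean, which averages crisp detail into a blur.

```python
import numpy as np

# Two equally plausible "sharp" 1-D images: an edge at different positions.
x1 = np.array([0., 0., 1., 1.])
x2 = np.array([1., 1., 0., 0.])

# The L2-optimal prediction, argmin_y E[(y - x)^2], is the mean of the targets.
y = (x1 + x2) / 2
print(y)  # [0.5 0.5 0.5 0.5] -- a flat, "blurry" compromise
```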

Wasserstein GAN implementation in TensorFlow and PyTorch

Wasserstein GAN comes with the promise of stabilizing GAN training and abolishing the mode collapse problem.
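
A hypothetical sketch of the two pieces that make WGAN different (the post's TensorFlow/PyTorch code will look different): the critic maximizes E[f(real)] - E[f(fake)], and in the original paper its weights are clipped to a small box to enforce a Lipschitz constraint.

```python
import numpy as np

def critic_loss(f_real, f_fake):
    # Loss to *minimize*, i.e. the negative of the Wasserstein estimate.
    return -(np.mean(f_real) - np.mean(f_fake))

def clip_weights(weights, c=0.01):
    # Naive weight clipping from the original WGAN paper.
    return [np.clip(w, -c, c) for w in weights]

f_real = np.array([1.0, 2.0])
f_fake = np.array([0.0, 1.0])
print(critic_loss(f_real, f_fake))  # -1.0

w = clip_weights([np.array([0.5, -0.5, 0.005])])
print(w[0])  # [ 0.01 -0.01  0.005]
```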

InfoGAN: unsupervised conditional GAN in TensorFlow and PyTorch

Adding a Mutual Information regularizer to a GAN turns out to have a very nice effect: learning data representations and their properties in an unsupervised manner.
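
In practice the mutual-information term is a variational lower bound: an auxiliary head Q is trained to recover the latent code c from generated samples, maximizing E[log Q(c | G(z, c))]. A hypothetical sketch for a categorical code (names and shapes here are illustrative, not from the post):

```python
import numpy as np

def mi_lower_bound(q_probs, c_idx):
    # q_probs: Q's predicted distribution over the categorical code, per sample
    # c_idx:   the true code index that was fed to the generator, per sample
    return np.mean(np.log(q_probs[np.arange(len(c_idx)), c_idx]))

q_probs = np.array([[0.8, 0.1, 0.1],
                    [0.2, 0.7, 0.1]])
c_idx = np.array([0, 1])
print(mi_lower_bound(q_probs, c_idx))  # mean of log 0.8 and log 0.7
```

Maximizing this bound forces G to keep c recoverable from its output, which is what ties each code dimension to an interpretable factor of variation.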