Summary

Following the trend in auto-encoders for generative modelling, Makhzani et al. propose an adversarial version using the framework of generative adversarial networks (Goodfellow et al., 2014). I have a bit of trouble following the terminology in these lines of work, so in my summary I will use different notation from the papers, following statistical conventions which I personally find more illuminating. And I'll motivate it differently, coming from various divergence measure arguments.

I'll first review generative adversarial networks (GANs). Suppose we observe data $\{x_n\}_{n=1}^{N}$. GANs employ a pair of models $(p, d)$, where $p$ is the generative model and $d$ is a discriminative model to help perform inference on $p$.

  • The discriminative model $d(x)$ aims to output high probability for samples which come from the empirical data distribution $\hat p(x)$, and output low probability for samples which come from the generative model $p(x)$.
  • The generative model aims to fool the discriminative model: it consists of a prior $p(z)$ and likelihood $p(x\mid z)$, and it aims to generate data most closely matching the observed data.

The objective follows a minimax game

$$\min_{p} \max_{d}\; \mathbb{E}_{\hat p(x)}[\log d(x)] + \mathbb{E}_{p(x)}[\log(1 - d(x))],$$

which alternates between learning the discriminator $d$ and learning the generator $p$. The intuition is that finding the optimal $p$ among a class of generative models corresponds to minimizing the Jensen-Shannon divergence

$$\mathrm{JSD}(\hat p(x)\,\|\,p(x)) = \tfrac{1}{2}\,\mathrm{KL}\Big(\hat p(x)\,\Big\|\,\tfrac{\hat p(x)+p(x)}{2}\Big) + \tfrac{1}{2}\,\mathrm{KL}\Big(p(x)\,\Big\|\,\tfrac{\hat p(x)+p(x)}{2}\Big).$$
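To spell out why (this is essentially Proposition 1 and Theorem 1 of Goodfellow et al. (2014), rewritten in the present notation): for a fixed generator $p$, maximizing the objective pointwise over $d(x) \in (0, 1)$ gives the optimal discriminator

$$d^{*}(x) = \frac{\hat p(x)}{\hat p(x) + p(x)},$$

and substituting $d^{*}$ back into the objective yields

$$\mathbb{E}_{\hat p(x)}[\log d^{*}(x)] + \mathbb{E}_{p(x)}[\log(1 - d^{*}(x))] = 2\,\mathrm{JSD}(\hat p(x)\,\|\,p(x)) - \log 4,$$

so minimizing over $p$ at the optimal discriminator minimizes the JSD exactly.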
Minimizing $\mathrm{JSD}(\hat p(x)\,\|\,p(x))$ directly is not possible; this is where the objective above, using an additional discriminative model, comes in handy. Recall also that maximum likelihood estimation minimizes $\mathrm{KL}(\hat p(x)\,\|\,p(x))$ (White, 1982). Using JSD leads to a more balanced approximation, neither underfitting nor overfitting the modes of the data distribution. It's also worth noting that unlike approximate posterior inference, these are divergence measures directly on data distributions for $x$, not on latent variable distributions for $z$. For example, variational inference minimizes $\mathrm{KL}(q(z\mid x)\,\|\,p(z\mid x))$.
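To make the alternation concrete, here is a minimal PyTorch sketch of one training step; the architectures, dimensions, and learning rates are placeholders of my own choosing, not taken from either paper.

```python
import torch
import torch.nn as nn

# Placeholder networks: G maps codes z to data x, D outputs a logit
# for "x came from the empirical data distribution".
G = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(128, 1), torch.zeros(128, 1)

def gan_step(x_real):  # x_real: a (128, 2) batch from \hat{p}(x)
    # Discriminator step: ascend E_{p̂(x)}[log d(x)] + E_{p(x)}[log(1 - d(x))].
    x_fake = G(torch.randn(128, 10)).detach()  # z ~ p(z), then x ~ p(x | z)
    loss_d = bce(D(x_real), ones) + bce(D(x_fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: rather than descending E_{p(x)}[log(1 - d(x))],
    # ascend E_{p(x)}[log d(x)] (the usual non-saturating heuristic).
    loss_g = bce(D(G(torch.randn(128, 10))), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```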

In adversarial auto-encoders, we go back to minimizing divergence measures on the latent variables. Recall that the variational auto-encoder's objective (Kingma and Welling, 2014), the negative of the evidence lower bound, is equivalently

$$-\mathbb{E}_{q(z\mid x)}[\log p(x\mid z)] + \mathrm{KL}(q(z\mid x)\,\|\,p(z)),$$
where the first term is the reconstruction error, with the decoder $p(x\mid z)$ evaluating codes from the encoder $q(z\mid x)$, and the second term is the regularizer. The adversarial auto-encoder's objective simply changes the regularizer to $\mathrm{JSD}(q(z)\,\|\,p(z))$, where we define the aggregated posterior

$$q(z) = \int q(z\mid x)\,\hat p(x)\,\mathrm{d}x = \frac{1}{N}\sum_{n=1}^{N} q(z\mid x_n).$$
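Even though the density of $q(z)$ is a mixture over all $N$ data points, sampling from it is trivial: draw a data point from $\hat p(x)$, then draw a code from the encoder. A minimal sketch, assuming a Gaussian encoder (my own placeholder parameterization):

```python
import torch

def sample_aggregated_posterior(x_data, encoder, num_samples):
    """Draw z ~ q(z) = (1/N) sum_n q(z | x_n) by ancestral sampling."""
    # x ~ \hat{p}(x): pick random rows of the (N, d) data matrix.
    idx = torch.randint(0, x_data.size(0), (num_samples,))
    x = x_data[idx]
    # z ~ q(z | x): assume the encoder returns a Gaussian's mean and
    # log-variance (the usual VAE parameterization); reparameterize.
    mean, logvar = encoder(x)
    return mean + torch.exp(0.5 * logvar) * torch.randn_like(mean)
```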
Minimizing $\mathrm{JSD}(q(z)\,\|\,p(z))$ is just as intractable as minimizing $\mathrm{JSD}(\hat p(x)\,\|\,p(x))$ in the data space, so it requires an adversarial network: $q(z)$ replaces the original $p(x)$ and $p(z)$ replaces the original $\hat p(x)$:

$$\min_{q} \max_{d}\; \mathbb{E}_{p(z)}[\log d(z)] + \mathbb{E}_{q(z)}[\log(1 - d(z))].$$
To train adversarial auto-encoders, one alternates between minimizing the reconstruction error of the auto-encoder, i.e., $-\mathbb{E}_{q(z\mid x)}[\log p(x\mid z)]$, and training the adversarial network; a minimal sketch of this alternation follows below. The rest of the paper goes into standard experiments for semi-supervised learning on MNIST and SVHN. They also comment on an extension of generative moment matching networks (Li et al., 2015) to auto-encoders, following the same ideas.
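Concretely, a PyTorch sketch of the alternation, assuming a deterministic encoder (one of the posterior choices the paper discusses) and architectures and hyperparameters of my own choosing:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder modules: enc gives codes z (a deterministic q(z|x)),
# dec reconstructs x from z, disc outputs a logit for "z ~ p(z)".
enc = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 8))
dec = nn.Sequential(nn.Linear(8, 256), nn.ReLU(), nn.Linear(256, 784))
disc = nn.Sequential(nn.Linear(8, 256), nn.ReLU(), nn.Linear(256, 1))

opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(enc.parameters(), lr=1e-4)

def aae_step(x):  # x: a (batch, 784) batch with pixels in [0, 1]
    ones = torch.ones(x.size(0), 1)
    zeros = torch.zeros(x.size(0), 1)

    # (1) Reconstruction phase: minimize -E_{q(z|x)}[log p(x|z)],
    #     here a Bernoulli log-likelihood on the decoder logits.
    loss_recon = F.binary_cross_entropy_with_logits(dec(enc(x)), x)
    opt_ae.zero_grad(); loss_recon.backward(); opt_ae.step()

    # (2) Regularization phase, discriminator: distinguish prior samples
    #     z ~ p(z) from aggregated-posterior samples z ~ q(z).
    z_prior = torch.randn(x.size(0), 8)
    z_agg = enc(x).detach()
    loss_d = F.binary_cross_entropy_with_logits(disc(z_prior), ones) + \
             F.binary_cross_entropy_with_logits(disc(z_agg), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # (3) Regularization phase, generator: update the encoder so its
    #     codes fool the discriminator, pushing q(z) toward p(z).
    loss_g = F.binary_cross_entropy_with_logits(disc(enc(x)), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The three phases mirror the paper's description: the first updates the auto-encoder, and the latter two are exactly a GAN played on the latent codes.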

Discussion

I see generative adversarial networks as fascinating inference algorithms because they’re a rare case where we can minimize a more general divergence measure between the generative model and the data distribution. It makes sense that one should avoid MLE’s KL objective, which can lead to overdispersed distributions, in favor of this approach. And it’s great to see attempts at trying to port these same ideas to the variational inference/auto-encoder setting.

The experiments seem to show that adversarial auto-encoders lead to higher visual fidelity than variational auto-encoders. It's not clear to me why this is the case, however. The fact that there's even a KL term as a regularizer in the VAE objective is a simple byproduct of the variational lower bound. In the end, it is still minimizing $\mathrm{KL}(q(z\mid x)\,\|\,p(z\mid x))$. However, replacing the KL regularizer with $\mathrm{JSD}(q(z)\,\|\,p(z))$ makes it unclear what the underlying divergence measure is any longer. I buy that $\mathrm{JSD}(q(z)\,\|\,p(z))$ ensures fewer “holes”, as $q(z)$ approximates the prior $p(z)$, but there's a weird conflict, as $q(z\mid x)$ simultaneously gets trained when minimizing the reconstruction error. The underlying divergence measure, if it exists, is most likely not as simple as $\mathrm{JSD}(q(z\mid x)\,\|\,p(z\mid x))$, although if it were then that would be fantastic(?). Does this alternating procedure also necessarily converge?

Obligatory comment in relation to hierarchical variational models (Ranganath et al., 2015): their “universal approximator posterior”, as a possible choice for the encoder $q(z\mid x)$, is a hierarchical variational model. To handle the intractability of the density $q(z\mid x)$, they naively apply Monte Carlo estimates. We've found this not to work well in practice due to very high variance, forcing a more complicated inference procedure which we outline in our paper.

edit (12/1/15): A previous version of this post wrote $q(z\mid x)$ instead of the aggregated posterior $q(z)$. Thanks to Alireza Makhzani for the correction.