I’m quite excited for NIPS this year, which starts this Monday.
It’s always interesting to browse the schedule for recent trends. Deep learning and reinforcement learning are of course a big highlight of NIPS, with three tutorials, a symposium, and many affiliated workshops. Approximate inference is becoming increasingly mainstream as complex probabilistic models are fit with scalable inference strategies. Similarly, probabilistic programming, and more generally the development of generic algorithms that fit broad classes of models, is becoming crucial for model prototyping. Gaussian processes are big, just as they were at the last ICML: Bayesian optimization, Bayesian nonparametrics, and kernel methods in general are becoming more feasible and more widely used in practice.
On a personal note, I’ll be presenting a poster on Copula variational inference, joint work with Dave Blei and Edo Airoldi that tackles the general problem of learning posterior correlation in variational inference. I’m also helping organize the Advances in Approximate Bayesian Inference workshop on Friday. We have an incredible list of talks lined up, and two panels that I’m sure will be insightful. On Saturday, I’ll be giving a talk at the Black Box Learning and Inference workshop. It covers my most recent research on developing a rich framework for variational inference (abstract below). The Stan team also has a webpage on NIPS.
Title: Variational models
Abstract:
Originally developed in the 1990s, variational inference has enjoyed renewed interest around scalable optimization for large datasets, black box strategies for inferring broad model classes, and neural networks for amortized inference. However, this research has been constrained by the lack of an expressive family of approximating distributions that can generalize across many models. I will present the framework of variational models, which interprets the approximating family as a model of the posterior latent variables. This naturally leads to Bayesian constructs for developing expressive approximations. Following this, I will present a particular instantiation termed the variational Gaussian process. With applications to auto-encoders and deep recurrent attention models, I show that the variational model’s complexity grows efficiently and can approach any distribution.
Joint work with Rajesh Ranganath and Dave Blei.
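To give a rough sense of what a variational model means, here is a sketch in my own notation (an illustration of the idea, not part of the abstract): standard variational inference fixes a parametric family $q(\mathbf{z}; \lambda)$ and maximizes the evidence lower bound over $\lambda$, while a variational model places a distribution $q(\lambda; \theta)$ over the variational parameters themselves, so that the induced approximation is a richer mixture.

$$
\begin{aligned}
\text{classical VI:} \quad & \max_{\lambda}\ \mathbb{E}_{q(\mathbf{z}; \lambda)}\big[\log p(\mathbf{x}, \mathbf{z}) - \log q(\mathbf{z}; \lambda)\big], \\
\text{variational model:} \quad & q(\mathbf{z}; \theta) = \int q(\mathbf{z} \mid \lambda)\, q(\lambda; \theta)\, d\lambda .
\end{aligned}
$$

Treating $q(\lambda; \theta)$ as a prior over the variational parameters is what makes the approximating family itself a Bayesian model of the posterior latent variables; roughly speaking, the variational Gaussian process mentioned in the abstract corresponds to one particular, highly flexible choice of this higher-level distribution.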