Dustin Tran

Computer Science Ph.D. Student at Columbia
dustin@cs.columbia.edu
Blog


I am a Ph.D. student in Computer Science at Columbia, where I am advised by David Blei and Andrew Gelman. I work in the fields of Bayesian statistics, machine learning, and deep learning. I am most interested in probabilistic models, whether in their development, their inference, or, more generally, their foundations for computational and statistical analysis.

I currently work at OpenAI and am on leave from Columbia.

I lead development of Edward, a library for probabilistic modeling, inference, and criticism. I am also fortunate to be a member of the Stan development team. Before transferring to Columbia, I was a Statistics Ph.D. student at Harvard, where I worked with Edo Airoldi and spent time in the Harvard Intelligent Probabilistic Systems group.

Recently, I have been giving the following talk:

  • Edward: A library for probabilistic modeling, inference, and criticism (slides)

Curriculum Vitae

Publications


Preprints

Some of my work is available as preprints on arXiv.

Expectation propagation as a way of life: A framework for Bayesian inference on partitioned data
How to distribute inference with massive data sets and how to combine inferences from many data sets.
Andrew Gelman, Aki Vehtari, Pasi Jylänki, Tuomas Sivula, Dustin Tran, Swupnil Sahai, Paul Blomstedt, John P. Cunningham, David Schiminovich, Christian Robert

Deep and hierarchical implicit models
Combining the idea of implicit densities with hierarchical Bayesian modeling and deep neural networks.
Dustin Tran, Rajesh Ranganath, David M. Blei

Edward: A library for probabilistic modeling, inference, and criticism
Everything and anything about probabilistic models.
Dustin Tran, Alp Kucukelbir, Adji B. Dieng, Maja Rudolph, Dawen Liang, David M. Blei

Model criticism for Bayesian causal inference
How to validate inferences from causal models.
Dustin Tran, Francisco J. R. Ruiz, Susan Athey, David M. Blei

The χ divergence for approximate inference
Overdispersed approximations and upper bounding the model evidence.
Adji B. Dieng, Dustin Tran, Rajesh Ranganath, John Paisley, David M. Blei

Stochastic gradient descent methods for estimation with large data sets
Fast and statistically efficient algorithms for generalized linear models and M-estimation.
Dustin Tran, Panos Toulis, Edoardo M. Airoldi
Journal of Statistical Software, to appear

2017

Comment, "Fast approximate inference for arbitrarily large semiparametric regression models via message passing"
The role of message passing in automated inference.
Dustin Tran, David M. Blei
Journal of the American Statistical Association, 112(517):156–158, 2017

Automatic differentiation variational inference
An automated tool for black box variational inference, available in Stan.
Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, David M. Blei
Journal of Machine Learning Research, 18(14):1–45, 2017

Deep probabilistic programming
How to build a language with rich compositionality for modeling and inference.
Dustin Tran, Matthew D. Hoffman, Rif A. Saurous, Eugene Brevdo, Kevin Murphy, David M. Blei
International Conference on Learning Representations, 2017

2016

Operator variational inference
How to formalize computational and statistical tradeoffs in variational inference.
Rajesh Ranganath, Jaan Altosaar, Dustin Tran, David M. Blei
Neural Information Processing Systems, 2016

Hierarchical variational models
A Bayesian formalism for constructing expressive variational families.
Rajesh Ranganath, Dustin Tran, David M. Blei
International Conference on Machine Learning, 2016

Spectral M-estimation with application to hidden Markov models
Applying M-estimation for sample efficiency and robustness in moment-based estimators.
Dustin Tran, Minjae Kim, Finale Doshi-Velez
Artificial Intelligence and Statistics, 2016

Towards stability and optimality in stochastic gradient descent
A stochastic gradient method combining numerical stability and statistical efficiency.
Panos Toulis, Dustin Tran, Edoardo M. Airoldi
Artificial Intelligence and Statistics, 2016

The variational Gaussian process
A powerful variational model that can universally approximate any posterior.
Dustin Tran, Rajesh Ranganath, David M. Blei
International Conference on Learning Representations, 2016

2015

Copula variational inference
Posterior approximations using copulas, which find meaningful dependence between latent variables.
Dustin Tran, David M. Blei, Edoardo M. Airoldi
Neural Information Processing Systems, 2015

Software


Edward


Edward is a Python library for probabilistic modeling, inference, and criticism. It is a testbed for fast experimentation and research with probabilistic models, ranging from classical hierarchical models on small data sets to complex deep probabilistic models on large data sets.
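
As a minimal sketch of the workflow (assuming Edward's 1.x API on top of TensorFlow; parameter names such as loc/scale changed across versions), Bayesian linear regression is written as a generative program, paired with a variational approximation, and fit with a single inference call:

```python
import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Normal

# Toy data from a linear model with D features.
N, D = 40, 5
X_train = np.random.randn(N, D).astype(np.float32)
w_true = np.random.randn(D).astype(np.float32)
y_train = X_train.dot(w_true) + 0.1 * np.random.randn(N).astype(np.float32)

# Model: standard normal priors on weights and bias; y ~ Normal(Xw + b, 1).
X = tf.placeholder(tf.float32, [N, D])
w = Normal(loc=tf.zeros(D), scale=tf.ones(D))
b = Normal(loc=tf.zeros(1), scale=tf.ones(1))
y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(N))

# Variational family: fully factorized normals with free parameters.
qw = Normal(loc=tf.Variable(tf.zeros(D)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(D))))
qb = Normal(loc=tf.Variable(tf.zeros(1)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(1))))

# Inference: variational inference with reparameterization gradients.
inference = ed.KLqp({w: qw, b: qb}, data={X: X_train, y: y_train})
inference.run(n_iter=500)
```

The model code is independent of the inference choice; other algorithms in the library can be swapped in for ed.KLqp without changing the model.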

Stan


Stan is a probabilistic programming language featuring fast, highly optimized inference algorithms. It supports a large class of models and has a user base of roughly 10,000. With Alp Kucukelbir, I develop variational inference in Stan (ADVI); I also work on algorithms for optimizing marginal distributions.
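
As an illustrative sketch of calling Stan from Python (this uses the PyStan 2 wrapper; pystan.StanModel and its sampling/vb methods belong to that interface and vary by version), a small model can be compiled once and then fit by MCMC or by ADVI:

```python
import numpy as np
import pystan

# A Stan program: normal model with weakly informative priors.
model_code = """
data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  mu ~ normal(0, 10);
  sigma ~ cauchy(0, 5);
  y ~ normal(mu, sigma);
}
"""

y = 3.0 + np.random.randn(100)
data = {'N': len(y), 'y': y}

sm = pystan.StanModel(model_code=model_code)        # compiles the model to C++
fit = sm.sampling(data=data, iter=1000, chains=4)   # MCMC via NUTS
approx = sm.vb(data=data)                           # ADVI, the variational algorithm noted above
print(fit)
```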

sgd


sgd offers a suite of large-scale estimation tools in R via stochastic gradient descent. Developed in collaboration with Panos Toulis, it includes many stochastic gradient methods, built-in models, visualization tools, hypothesis tests, and convergence diagnostics. It is available on CRAN.
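
The package itself is in R, but the idea behind its stability is easy to sketch. In implicit SGD, one of the methods behind the package, the gradient is evaluated at the new iterate rather than the old one; for linear models this has a closed form that automatically shrinks overly large steps. Below is a hypothetical standalone Python sketch of that update, not the package's API:

```python
import numpy as np

def implicit_sgd_linreg(X, y, lr0=0.5, alpha=1.0):
    """Implicit SGD for least squares. Each update solves
        theta_i = theta_{i-1} + gamma_i * (y_i - x_i' theta_i) * x_i
    for theta_i; for linear models this has the closed form below."""
    n, d = X.shape
    theta = np.zeros(d)
    for i in range(n):
        x_i, y_i = X[i], y[i]
        gamma = lr0 / (1.0 + alpha * lr0 * (i + 1))  # decaying step size
        resid = y_i - x_i.dot(theta)
        # The implicit update shrinks the step by 1 + gamma * ||x_i||^2,
        # keeping iterates bounded even when gamma is misspecified.
        theta += (gamma / (1.0 + gamma * x_i.dot(x_i))) * resid * x_i
    return theta

# Usage: recover coefficients of a noisy linear model in one pass.
rng = np.random.RandomState(0)
X = rng.randn(1000, 10)
theta_true = rng.randn(10)
y = X.dot(theta_true) + 0.1 * rng.randn(1000)
print(implicit_sgd_linreg(X, y))
```

For comparison, the explicit SGD update omits the 1 + gamma * ||x_i||^2 denominator and can diverge when the step size is too large; the implicit form's automatic shrinkage is the numerical stability referred to above.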