I am a research scientist at Google Brain. I am broadly interested in advancing science and intelligence, particularly where the ideas involve probability, programs, and/or neural nets.
At Google, I lead a team on reliable deep learning. Infrastructure I've worked on includes Uncertainty Baselines, Robustness Metrics, Edward2, TensorFlow Probability, Mesh TensorFlow, and Tensor2Tensor. Previously, I was a Ph.D. student at Columbia advised by David Blei and Andrew Gelman. I developed the original Edward language and was a member of the Stan development team.
Some of my work is available as preprints on arXiv.
Soft Calibration Objectives for Neural Networks
Trainable objective functions for calibration.
Archit Karandikar, Nicholas Cain, Dustin
Tran, Balaji Lakshminarayanan, Jonathon Shlens,
Michael C. Mozer, Becca Roelofs
Revisiting the Calibration of Modern Neural
Networks
Do modern neural networks truly make overconfident
predictions?
Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances
Hubis, Xiaohua Zhai, Neil Houlsby, Dustin
Tran, Mario Lucic
Uncertainty Baselines: Benchmarks for Uncertainty
& Robustness in Deep Learning
High-quality implementations of standard and state-of-the-art methods
on a variety of tasks.
Zachary Nado, Neil Band, Mark Collier, Josip Djolonga,
Michael W. Dusenberry, Sebastian Farquhar, Angelos Filos,
Marton Havasi, Rodolphe Jenatton, Ghassen Jerfel, Jeremiah
Liu, Zelda Mariet, Jeremy Nixon, Shreyas Padhy, Jie Ren,
Tim G. J. Rudner, Yeming Wen, Florian Wenzel, Kevin
Murphy, D. Sculley, Balaji Lakshminarayanan, Jasper
Snoek, Yarin Gal, Dustin Tran
RecSim NG: Toward Principled Uncertainty Modeling
for Recommender Ecosystems
A platform for simulating multi-agent recommender systems
using probabilistic programming.
Martin Mladenov, Chih-Wei Hsu, Vihan Jain, Eugene Ie,
Christopher Colby, Nicolas Mayoraz, Hubert Pham,
Dustin Tran, Ivan Vendrov, Craig
Boutilier
On the discrepancy between density estimation and
sequence generation
On the correlation between log-likelihood and BLEU, and
when to use latent variable models.
Jason Lee, Dustin Tran, Orhan Firat,
Kyunghyun Cho
Measuring calibration in deep learning
How to measure accuracy of predicted probabilities.
Jeremy Nixon, Michael Dusenberry, Linchuan Zhang, Ghassen
Jerfel, Dustin Tran
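The quantity at the center of this work is expected calibration error (ECE). A rough NumPy sketch of the standard binned estimator (bin count and example inputs are illustrative, not the paper's exact setup):

```python
import numpy as np

def expected_calibration_error(confidences, correct, num_bins=15):
    """Binned ECE: bin-weighted mean of |accuracy - confidence|."""
    bins = np.linspace(0.0, 1.0, num_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.sum() == 0:
            continue
        acc = correct[mask].mean()        # empirical accuracy in the bin
        conf = confidences[mask].mean()   # average confidence in the bin
        ece += (mask.sum() / n) * abs(acc - conf)
    return ece

# Toy usage: probs from a softmax classifier, labels the true classes.
probs = np.array([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]])
labels = np.array([0, 1, 1])
confidences = probs.max(axis=1)
correct = (probs.argmax(axis=1) == labels).astype(float)
print(expected_calibration_error(confidences, correct))
```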
NeuTra-lizing bad geometry in Hamiltonian Monte
Carlo using neural transport
Perform HMC over difficult geometries by transforming the
space to a unit Gaussian.
Matthew Hoffman, Pavel Sountsov, Joshua V. Dillon, Ian
Langmore, Dustin Tran, Srinivas Vasudevan
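To illustrate the transport idea: run HMC through a bijector so the chain effectively moves in a roughly unit-Gaussian space. A minimal TensorFlow Probability sketch in which a fixed affine rescaling stands in for the learned neural transport map (the target and all constants are illustrative):

```python
import tensorflow as tf
import tensorflow_probability as tfp
tfd, tfb = tfp.distributions, tfp.bijectors

# A badly scaled target: two dimensions whose scales differ by 100x.
target = tfd.MultivariateNormalDiag(loc=[0., 0.], scale_diag=[1., 0.01])

# A learned flow would go here; a fixed rescaling plays the role of the
# neural transport map, so the inner HMC sees roughly unit-Gaussian geometry.
transport = tfb.Scale([1., 0.01])

kernel = tfp.mcmc.TransformedTransitionKernel(
    inner_kernel=tfp.mcmc.HamiltonianMonteCarlo(
        target_log_prob_fn=target.log_prob,
        step_size=0.5,
        num_leapfrog_steps=3),
    bijector=transport)

samples = tfp.mcmc.sample_chain(
    num_results=200,
    current_state=tf.zeros(2),
    kernel=kernel,
    trace_fn=None)
```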
TensorFlow Distributions
A backend for efficient, composable manipulation of
probability distributions.
Joshua V. Dillon, Ian Langmore, Dustin
Tran, Eugene Brevdo, Srinivas Vasudevan, Dave
Moore, Brian Patton, Alex Alemi, Matt Hoffman, Rif A.
Saurous
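A small usage sketch of the composable API the paper describes; the names follow TensorFlow Probability's `tfp.distributions` module, and the particular distributions are arbitrary:

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

# Every distribution exposes sampling and (log-)density evaluation.
normal = tfd.Normal(loc=0., scale=1.)
x = normal.sample(3)
print(normal.log_prob(x))

# Distributions compose: a batch of independent Normals treated as one
# two-dimensional event, then combined into a two-component mixture.
components = tfd.Independent(
    tfd.Normal(loc=[[-1., -1.], [1., 1.]], scale=1.),
    reinterpreted_batch_ndims=1)
mixture = tfd.MixtureSameFamily(
    mixture_distribution=tfd.Categorical(probs=[0.5, 0.5]),
    components_distribution=components)
print(mixture.log_prob([0., 0.]))
```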
Edward: A library for probabilistic modeling,
inference, and criticism
Everything and anything about probabilistic models.
Dustin Tran, Alp Kucukelbir, Adji B. Dieng,
Maja Rudolph, Dawen Liang, David M. Blei
Model criticism for Bayesian causal inference
How to validate inferences from causal models.
Dustin Tran, Francisco J. R. Ruiz, Susan
Athey, David M. Blei
Stochastic gradient descent methods for estimation with
large data sets
Fast and statistically efficient algorithms for
generalized linear models and M-estimation.
Dustin Tran, Panos Toulis, Edoardo M.
Airoldi
Journal of Statistical Software, to appear
Combining ensembles and data augmentation can harm your calibration
By adjusting data augmentation according to calibration,
we can exploit both marginalization and invariances.
Yeming Wen, Ghassen Jerfel, Rafael Muller, Michael W.
Dusenberry, Jasper Snoek, Balaji Lakshminarayanan,
Dustin Tran
International Conference on Learning Representations, 2021
Training independent subnetworks for robust prediction
Apply a multi-input multi-output configuration for simple and cheap marginalization.
Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah
Zhe Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew M.
Dai, Dustin Tran
International Conference on Learning Representations, 2021
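A rough Keras sketch of the multi-input multi-output configuration: one backbone consumes M examples concatenated together and emits M predictions, so a single forward pass yields an implicit ensemble (layer sizes and names are illustrative, not the paper's architecture):

```python
import tensorflow as tf

M, num_classes, d = 3, 10, 32  # ensemble size, classes, flattened input dim

inputs = tf.keras.Input(shape=(M * d,))              # M examples concatenated
h = tf.keras.layers.Dense(128, activation="relu")(inputs)
logits = tf.keras.layers.Dense(M * num_classes)(h)   # M prediction heads
outputs = tf.keras.layers.Reshape((M, num_classes))(logits)
model = tf.keras.Model(inputs, outputs)

# Training pairs M independent examples per forward pass; at test time the
# same example is tiled M times and the M heads are averaged as an ensemble.
```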
Hyperparameter ensembles for robustness and
uncertainty quantification
Integrate over both weights and hyperparameters!
Florian Wenzel, Jasper Snoek, Dustin
Tran, Rodolphe Jenatton
Neural Information Processing Systems, 2020
Simple and principled uncertainty estimation with
deterministic deep learning via distance
awareness
Leverage spectral normalization and Gaussian processes.
Jeremiah Zhe Liu, Zi Lin, Shreyas Padhy, Dustin
Tran, Tania Bedrax-Weiss, Balaji
Lakshminarayanan
Neural Information Processing Systems, 2020
Efficient and scalable Bayesian neural nets with
rank-1 factors
Mixture posteriors, Cauchy priors, rank-1
parameterization.
Michael Dusenberry, Ghassen Jerfel, Yeming Wen, Yi-an Ma,
Jasper Snoek, Katherine Heller, Balaji Lakshminarayanan,
Dustin Tran
International Conference on Machine Learning, 2020
BatchEnsemble: An alternative approach to
efficient ensemble and lifelong learning
Efficient ensembles for uncertainty and lifelong learning.
Yeming Wen, Dustin Tran, Jimmy Ba
International Conference on Learning Representations, 2020
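The core trick: all ensemble members share one slow weight matrix, and each member k only adds a rank-1 multiplier, so its effective weight is W ∘ (r_k s_kᵀ) without ever being materialized. A NumPy sketch (sizes and the sign initialization are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, ensemble_size = 4, 3, 2

W = rng.normal(size=(d_in, d_out))                     # shared slow weights
r = rng.choice([-1., 1.], size=(ensemble_size, d_in))  # fast weights per member
s = rng.choice([-1., 1.], size=(ensemble_size, d_out))

def member_forward(x, k):
    # Elementwise-scale the input by r_k, apply the shared W, then scale the
    # output by s_k: equivalent to using the weight W * outer(r_k, s_k).
    return ((x * r[k]) @ W) * s[k]

x = rng.normal(size=(5, d_in))
outputs = np.stack([member_forward(x, k) for k in range(ensemble_size)])
print(outputs.shape)  # (ensemble_size, batch, d_out)
```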
Analyzing the role of model uncertainty in
electronic health records
Where parameter uncertainty affects clinical decision-making.
Michael Dusenberry, Dustin Tran, Edward
Choi, Jonas Kemp, Jeremy Nixon, Ghassen Jerfel, Katherine
Heller, Andrew Dai
ACM Conference on Health, Inference, and Learning, 2020
Expectation propagation as a way of life: A
framework for Bayesian inference on partitioned
data
How to distribute inference with massive data sets and how
to combine inferences from many data sets.
Andrew Gelman, Aki Vehtari, Pasi Jylänki, Tuomas Sivula,
Dustin Tran, Swupnil Sahai, Paul
Blomstedt, John P. Cunningham, David Schiminovich,
Christian Robert
Journal of Machine Learning Research, 21(17):1–53, 2020
Bayesian Layers: A module for neural network uncertainty
A neural net-stylized primitive for distributions over functions.
Dustin Tran, Michael Dusenberry, Mark van
der Wilk, Danijar Hafner
Neural Information Processing Systems, 2019
Discrete flows: Invertible generative models for
discrete data
How to model with discrete invertible functions.
Dustin Tran, Keyon Vafa, Kumar Krishna
Agrawal, Laurent Dinh, Ben Poole
Neural Information Processing Systems, 2019
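The basic building block is an invertible transform on integers via modular arithmetic; the paper composes such layers with location (and scale) terms produced by a network. A toy NumPy sketch of the additive case (the vocabulary size and shift values are illustrative):

```python
import numpy as np

K = 10  # vocabulary size

def forward(x, mu):
    # Invertible map on {0, ..., K-1}: shift by mu, modulo K.
    return (x + mu) % K

def inverse(y, mu):
    return (y - mu) % K

x = np.array([0, 3, 7, 9])
mu = np.array([4, 4, 4, 4])  # in practice mu comes from an autoregressive or
                             # bipartite network over the other positions
y = forward(x, mu)
assert np.all(inverse(y, mu) == x)
```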
Noise contrastive priors for functional uncertainty
A prior for neural networks in data space.
Danijar Hafner, Dustin Tran, Alex Irpan,
Timothy Lillicrap, James Davidson
Uncertainty in Artificial Intelligence, 2019
Simple, distributed, and accelerated probabilistic
programming
Probabilistic programs on TPUs.
Dustin Tran, Matthew D. Hoffman, Dave
Moore, Christopher Suter, Srinivas Vasudevan, Alexey Radul,
Matthew Johnson, Rif A. Saurous
Neural Information Processing Systems, 2018
Autoconj: Recognizing and exploiting conjugacy
without a domain-specific language
The autointegrate analog of autodiff.
Matthew D. Hoffman, Matthew Johnson, Dustin
Tran
Neural Information Processing Systems, 2018
Mesh-TensorFlow: Deep learning for
supercomputers
Model parallelism made easier.
Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin
Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins,
HyoukJoong Lee, Mingsheng Hong, Cliff Young, Ryan Sepassi, Blake
Hechtman
Neural Information Processing Systems, 2018
Image Transformer
An autoregressive image model using only attention.
Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam
Shazeer, Alexander Ku, Dustin Tran
International Conference on Machine Learning, 2018
Implicit causal models for genome-wide association
studies
Generative models applied to causality in genomics.
Dustin Tran, David M. Blei
International Conference on Learning Representations, 2018
Flipout: Efficient pseudo-independent weight perturbations
on mini-batches
How to make weight perturbations in evolution strategies and
variational BNNs as mini-batch-friendly as activation perturbations
in dropout and batch norm.
Yeming Wen, Paul Vicol, Jimmy Ba, Dustin Tran,
Roger Grosse
International Conference on Learning Representations, 2018
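The idea in one line: sample a single weight perturbation for the whole mini-batch, then pseudo-decorrelate it across examples with per-example rank-1 sign flips. A NumPy sketch (shapes and the perturbation scale are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out = 8, 4, 3

W_mean = rng.normal(size=(d_in, d_out))
delta_W = 0.1 * rng.normal(size=(d_in, d_out))  # one shared perturbation sample

# Per-example random sign vectors decorrelate the perturbation across the batch.
r = rng.choice([-1., 1.], size=(batch, d_in))
s = rng.choice([-1., 1.], size=(batch, d_out))

x = rng.normal(size=(batch, d_in))
# Example n effectively sees W_mean + delta_W * outer(r_n, s_n), computed with
# two matmuls and elementwise products instead of per-example weight copies.
out = x @ W_mean + ((x * r) @ delta_W) * s
print(out.shape)  # (batch, d_out)
```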
Hierarchical implicit models and likelihood-free
variational inference
Combining the idea of implicit densities with hierarchical Bayesian
modeling and deep neural networks.
Dustin Tran, Rajesh Ranganath, David M.
Blei
Neural Information Processing Systems, 2017
Variational inference via $\chi$-upper bound
minimization
Overdispersed approximations and upper bounding
the model evidence.
Adji B. Dieng, Dustin Tran, Rajesh
Ranganath, John Paisley, David M. Blei
Neural Information Processing Systems, 2017
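The bound in question, written out (notation assumes data $x$, latent variables $z$, and approximation $q(z)$; this is a paraphrase rather than the paper's exact statement):

$$
\mathrm{CUBO}_n(q) \;=\; \frac{1}{n} \log \mathbb{E}_{q(z)}\!\left[\left(\frac{p(x, z)}{q(z)}\right)^{\!n}\right] \;\ge\; \log p(x), \qquad n \ge 1,
$$

where the gap is governed by a $\chi$-divergence between $q(z)$ and the posterior; minimizing it encourages overdispersed approximations and gives an upper bound that complements the ELBO from the other side.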
Comment, "Fast approximate inference for
arbitrarily large semiparametric regression models via
message passing"
The role of message passing in automated inference.
Dustin Tran, David M. Blei
Journal of the American Statistical Association,
112(517):156–158, 2017
Automatic differentiation variational inference
An automated tool for black box variational inference,
available in Stan.
Alp Kucukelbir, Dustin Tran, Rajesh Ranganath,
Andrew Gelman, David M. Blei
Journal of Machine Learning Research, 18(14):1–45, 2017
Deep probabilistic programming
How to build a language with rich compositionality for
modeling and inference.
Dustin Tran, Matthew D. Hoffman, Rif A.
Saurous, Eugene Brevdo, Kevin Murphy, David M. Blei
International Conference on Learning Representations, 2017
Operator variational inference
How to formalize computational and statistical tradeoffs in variational inference.
Rajesh Ranganath, Jaan Altosaar, Dustin
Tran, David M. Blei
Neural Information Processing Systems, 2016
Hierarchical variational models
A Bayesian formalism for constructing expressive
variational families.
Rajesh Ranganath, Dustin Tran, David M.
Blei
International Conference on Machine Learning, 2016
Spectral M-estimation with application to hidden
Markov models
Applying M-estimation for sample efficiency and robustness
in moment-based estimators.
Dustin Tran, Minjae Kim, Finale Doshi-Velez
Artificial Intelligence and Statistics, 2016
Towards stability and optimality in stochastic gradient
descent
A stochastic gradient method combining numerical stability
and statistical efficiency.
Panos Toulis, Dustin Tran, Edoardo M.
Airoldi
Artificial Intelligence and Statistics, 2016
The variational Gaussian process
A powerful variational model that can universally
approximate any posterior.
Dustin Tran, Rajesh Ranganath, David M.
Blei
International Conference on Learning Representations, 2016
Copula variational inference
Posterior approximations using copulas, which find
meaningful dependence between latent variables.
Dustin Tran, David M. Blei, Edoardo M.
Airoldi
Neural Information Processing Systems, 2015