Research Interests

I am interested in building scalable, reliable, and efficient probabilistic models for machine learning and data science. My current focus is on developing fast and robust inference methods with theoretical guarantees, and on applying them with deep neural networks to large-scale, real-world data.

Publications

DP-Fast MH: Private, Fast, and Accurate Metropolis-Hastings for Large-Scale Bayesian Inference

Wanrong Zhang, Ruqi Zhang
International Conference on Machine Learning (ICML), 2023
[Paper]

Long-tailed Classification from a Bayesian-decision-theory Perspective

Bolian Li, Ruqi Zhang
Preprint
[Paper]

Calibrating the Rigged Lottery: Making All Tickets Reliable

Bowen Lei, Ruqi Zhang, Dongkuan Xu, Bani K. Mallick
International Conference on Learning Representations (ICLR), 2023
[Paper] [Code]

Efficient Informed Proposals for Discrete Distributions via Newton’s Series Approximation

Yue Xiang, Dongyao Zhu, Bowen Lei, Dongkuan Xu, Ruqi Zhang
International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
[Paper] [Code]

On Equivalences between Weight and Function-Space Langevin Dynamics

Ziyu Wang, Yuhao Zhou, Ruqi Zhang, Jun Zhu
I Can't Believe It's Not Better (ICBINB) Workshop at NeurIPS, 2022
[Paper]

Sampling in Constrained Domains with Orthogonal-Space Variational Gradient Descent

Ruqi Zhang, Qiang Liu, Xin T. Tong
Conference on Neural Information Processing Systems (NeurIPS), 2022
[Paper]

A Langevin-like Sampler for Discrete Distributions

Ruqi Zhang, Xingchao Liu, Qiang Liu
International Conference on Machine Learning (ICML), 2022
[Paper] [Code] [Video] [Slides]

Low-Precision Stochastic Gradient Langevin Dynamics

Ruqi Zhang, Andrew Gordon Wilson, Christopher De Sa
International Conference on Machine Learning (ICML), 2022
[Paper] [Code] [Video] [Slides]

Meta-Learning Divergences for Variational Inference

Ruqi Zhang, Yingzhen Li, Christopher De Sa, Sam Devlin, Cheng Zhang
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
[Paper]

Asymptotically Optimal Exact Minibatch Metropolis-Hastings

Ruqi Zhang, A. Feder Cooper, Christopher De Sa
Conference on Neural Information Processing Systems (NeurIPS), 2020
Spotlight, acceptance rate 2.96%
[Paper] [Code] [Video] [Slides]

AMAGOLD: Amortized Metropolis Adjustment for Efficient Stochastic Gradient MCMC

Ruqi Zhang, A. Feder Cooper, Christopher De Sa
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020
[Paper] [Code]

Cyclical Stochastic Gradient MCMC for Bayesian Deep Learning

Ruqi Zhang, Chunyuan Li, Jianyi Zhang, Changyou Chen, Andrew Gordon Wilson
International Conference on Learning Representations (ICLR), 2020
Oral, acceptance rate 1.85%
[Paper] [Code] [Video] [Slides]

Poisson-Minibatching for Gibbs Sampling with Convergence Rate Guarantees

Ruqi Zhang, Christopher De Sa
Conference on Neural Information Processing Systems (NeurIPS), 2019
Spotlight, acceptance rate 2.43%
[Paper] [Code] [Poster] [Slides]

Large Scale Sparse Clustering

Ruqi Zhang, Zhiwu Lu
International Joint Conference on Artificial Intelligence (IJCAI), 2016
[Paper]

Talks

A Langevin-like Sampler for Discrete Distributions

Spotlight presentation at ICML, July 2022 [Video] [Slides]

Low-Precision Stochastic Gradient Langevin Dynamics

Spotlight presentation at ICML, July 2022 [Video] [Slides]

Scalable and Reliable Inference for Probabilistic Modeling

Invited talk at the Simons Institute, UC Berkeley, November 2021 [Video] [Slides]

Asymptotically Optimal Exact Minibatch Metropolis-Hastings

Spotlight talk at the Rising Stars in Data Science Workshop, University of Chicago, January 2021
Spotlight presentation at NeurIPS, December 2020 [Video] [Slides]

Cyclical Stochastic Gradient MCMC for Bayesian Deep Learning

Oral presentation at ICLR, April 2020 [Video] [Slides]

Poisson-Minibatching for Gibbs Sampling with Convergence Rate Guarantees

Spotlight presentation at NeurIPS, December 2019 [Slides]

Contact

Email: ruqiz@purdue.edu
Office: HAAS 228