Research Interests

I am interested in building scalable, reliable, and efficient probabilistic models for machine learning and data science. My current focus is on developing fast and robust sampling and probabilistic inference methods with theoretical guarantees, and on applying them with deep neural networks to large-scale real-world data.

Publications

Enhancing Low-Precision Sampling via Stochastic Gradient Hamiltonian Monte Carlo

Ziyi Wang, Yujie Chen, Qifan Song, Ruqi Zhang
Transactions on Machine Learning Research (TMLR), 2024
[Paper]

Embracing Unknown Step by Step: Towards Reliable Sparse Training in Real World

Bowen Lei, Dongkuan Xu, Ruqi Zhang, Bani K. Mallick
Transactions on Machine Learning Research (TMLR), 2024
[Paper][Code]

Gradient-based Discrete Sampling with Automatic Cyclical Scheduling

Patrick Pynadath, Riddhiman Bhattacharya, Arun Hariharan, Ruqi Zhang
Preprint, 2024
[Paper]

Position Paper: Bayesian Deep Learning in the Age of Large-Scale AI

Theodore Papamarkou, Maria Skoularidou, [and 20+ authors], Ruqi Zhang
Preprint, 2024
[Paper]

Training Bayesian Neural Networks with Sparse Subspace Variational Inference

Junbo Li, Zichen Miao, Qiang Qiu, Ruqi Zhang
International Conference on Learning Representations (ICLR), 2024
[Paper][Code]

Entropy-MCMC: Sampling from Flat Basins with Ease

Bolian Li, Ruqi Zhang
International Conference on Learning Representations (ICLR), 2024
[Paper][Code]

Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction

Bowen Lei, Dongkuan Xu, Ruqi Zhang, Shuren He, Bani K. Mallick
Conference on Parsimony and Learning (CPAL), 2024
[Paper][Code]

Rethinking Data Distillation: Do Not Overlook Calibration

Dongyao Zhu, Bowen Lei, Jie Zhang, Yanbo Fang, Yiqun Xie, Ruqi Zhang, Dongkuan Xu
International Conference on Computer Vision (ICCV), 2023
[Paper][Code]

DISCS: A Benchmark for Discrete Sampling

Katayoon Goshvadi, Haoran Sun, Xingchao Liu, Azade Nova, Ruqi Zhang, Will Grathwohl, Dale Schuurmans, Hanjun Dai
Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track, 2023
Short Version at ICML Workshop on Sampling and Optimization in Discrete Space, 2023
[Paper][Code]

Long-tailed Classification from a Bayesian-decision-theory Perspective

Bolian Li, Ruqi Zhang
Symposium on Advances in Approximate Bayesian Inference (AABI), 2023
[Paper]

Analysis of Climate Campaigns on Social Media using Bayesian Model Averaging

Tunazzina Islam, Ruqi Zhang, Dan Goldwasser
AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2023
Oral Presentation, top 11%
[Paper]

DP-Fast MH: Private, Fast, and Accurate Metropolis-Hastings for Large-Scale Bayesian Inference

Wanrong Zhang, Ruqi Zhang
International Conference on Machine Learning (ICML), 2023
[Paper][Code]

Calibrating the Rigged Lottery: Making All Tickets Reliable

Bowen Lei, Ruqi Zhang, Dongkuan Xu, Bani K. Mallick
International Conference on Learning Representations (ICLR), 2023
[Paper][Code]

Efficient Informed Proposals for Discrete Distributions via Newton’s Series Approximation

Yue Xiang, Dongyao Zhu, Bowen Lei, Dongkuan Xu, Ruqi Zhang
International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
[Paper][Code]

Sampling in Constrained Domains with Orthogonal-Space Variational Gradient Descent

Ruqi Zhang, Qiang Liu, Xin T. Tong
Neural Information Processing Systems (NeurIPS), 2022
[Paper][Code]

A Langevin-like Sampler for Discrete Distributions

Ruqi Zhang, Xingchao Liu, Qiang Liu
International Conference on Machine Learning (ICML), 2022
[Paper][Code][Video][Slides]

Low-Precision Stochastic Gradient Langevin Dynamics

Ruqi Zhang, Andrew Gordon Wilson, Christopher De Sa
International Conference on Machine Learning (ICML), 2022
[Paper][Code][Video][Slides]

Meta-Learning Divergences for Variational Inference

Ruqi Zhang, Yingzhen Li, Christopher De Sa, Sam Devlin, Cheng Zhang
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
[Paper]

Asymptotically Optimal Exact Minibatch Metropolis-Hastings

Ruqi Zhang, A. Feder Cooper, Christopher De Sa
Neural Information Processing Systems (NeurIPS), 2020
Spotlight Presentation, top 3%
[Paper][Code][Video][Slides]

AMAGOLD: Amortized Metropolis Adjustment for Efficient Stochastic Gradient MCMC

Ruqi Zhang, A. Feder Cooper, Christopher De Sa
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020
[Paper][Code]

Cyclical Stochastic Gradient MCMC for Bayesian Deep Learning

Ruqi Zhang, Chunyuan Li, Jianyi Zhang, Changyou Chen, Andrew Gordon Wilson
International Conference on Learning Representations (ICLR), 2020
Oral Presentation, top 2%
[Paper][Code][Video][Slides][BlackJAX Implementation]

Poisson-Minibatching for Gibbs Sampling with Convergence Rate Guarantees

Ruqi Zhang, Christopher De Sa
Neural Information Processing Systems (NeurIPS), 2019
Spotlight Presentation, top 2.5%
[Paper][Code][Poster][Slides]

Large Scale Sparse Clustering

Ruqi Zhang, Zhiwu Lu
International Joint Conference on Artificial Intelligence (IJCAI), 2016
[Paper]

Talks

Low-precision Sampling for Probabilistic Deep Learning

Invited talk at NeurIPS 2023 Workshop on ML with New Compute Paradigms, December 2023
[Video (my talk starts at 2:32:25)][Slides]

Sampling in Discrete and Constrained Domains

Invited talk at ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling, July 2023
[Video (my talk starts at 5:32:25)][Slides]

A Langevin-like Sampler for Discrete Distributions

Spotlight presentation at ICML, July 2022 [Video][Slides]

Low-Precision Stochastic Gradient Langevin Dynamics

Spotlight presentation at ICML, July 2022 [Video][Slides]

Scalable and Reliable Inference for Probabilistic Modeling

Invited talk at the Center for Data Science and Machine Learning, National University of Singapore, October 2022
Invited talk at Simons Institute, UC Berkeley, November 2021 [Video][Slides]

Asymptotically Optimal Exact Minibatch Metropolis-Hastings

Spotlight talk at the Rising Stars in Data Science Workshop, University of Chicago, January 2021
Spotlight presentation at NeurIPS, December 2020 [Video][Slides]

Cyclical Stochastic Gradient MCMC for Bayesian Deep Learning

Oral presentation at ICLR, April 2020 [Video][Slides]

Poisson-Minibatching for Gibbs Sampling with Convergence Rate Guarantees

Spotlight presentation at NeurIPS, December 2019 [Slides]

Contact

Email: ruqiz@purdue.edu
Office: LWSN 2142F