UVA RL Meetup

The Reinforcement Learning Meetup @ University of Virginia
2-3pm Friday @ Rice Hall 204

This weekly meetup is organized by Shangtong Zhang and Chen-Yu Wei for UVA RL folks to share interesting RL papers.



Spring 2025

Date Presenter Paper or Topic
Feb 7 Zixuan Xie Analytic-DPM: An Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models; Tutorial on Diffusion Models for Imaging and Vision
Feb 14 Xinyu Liu Decoupled Functional Central Limit Theorems for Two-Time-Scale Stochastic Approximation
Feb 21 Amir Moeini Transformers Implement Functional Gradient Descent to Learn Non-Linear Functions In Context
Feb 28 (AAAI)    
Mar 7 Braham Snyder Target Networks and Over-parameterization Stabilize Off-policy Bootstrapping with Function Approximation
Mar 14 (Spring Break)    
Mar 21 Haolin Liu Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF
Mar 28 Amin Davoodabadi (remote) An Information-Theoretic Perspective on Intrinsic Motivation in Reinforcement Learning
Apr 4 Jiuqi Wang Can Looped Transformers Learn to Implement Multi-step Gradient Descent for In-context Learning?
Apr 11 Dylan Foster  
Apr 18    
Apr 25 (ICLR)    



Fall 2024

Date Presenter Paper
Sep 27 Shangtong Zhang The O.D.E. Method for Convergence of Stochastic Approximation and Reinforcement Learning
Oct 11 Chen-Yu Wei Equivalence Between Policy Gradients and Soft Q-Learning
Oct 25 Shuze Liu Direct Preference Optimization: Your Language Model is Secretly a Reward Model
Nov 8 Ethan Blaser Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining
Nov 22 Haolin Liu Correcting the Mythos of KL-Regularization: Direct Alignment without Overoptimization via χ²-Preference Optimization
Dec 6 Jiuqi Wang Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning; Denoising Diffusion Probabilistic Models; Understanding Diffusion Models: A Unified Perspective