A hands-on modern reinforcement learning course
A practice-first guide to modern RL, from classic control to LLM post-training, RLVR, and multimodal agents.
Course Preview · Overview · News · Contents · Course Outline · Experiment Code · Quick Start · Contributing
Note
We hope this open course gives more learners the courage to climb toward the frontier of intelligence and solve more of the hard problems on the path to AGI.
The course is evolving quickly. We recommend focusing on chapters that are not marked as under construction; chapters still in progress may contain mistakes, and corrections or suggestions are welcome.
Help Wanted
Because compute resources are limited, we are seeking GPU support. If you can help with GPU access, please contact physicoada@gmail.com.
Hands-On Modern RL is an open course for learning modern reinforcement learning through practice. Instead of the usual "formula first, black-box API later" route, this course takes a practice-first path: learners begin with runnable code and observable training behavior, then use those concrete traces to understand states, value functions, policy gradients, reward modeling, credit assignment, and the rest of the mathematical structure behind RL.
The course spans classic control and connects directly to current AI frontiers, including large language model (LLM) post-training, preference alignment with DPO and GRPO, reinforcement learning with verifiable rewards (RLVR), multi-turn tool-use agents, Agentic RL, and vision-language model (VLM) reinforcement learning.
The goal is to provide a solid ladder: from solving CartPole for the first time to building modern post-training and agent systems.
The course is organized around these engineering and teaching principles:
- Practice before formalism. Each major topic starts from experiments, metrics, failure cases, or implementation details, then introduces the mathematical abstraction.
- Theory explains behavior. MDPs, Bellman equations, policy gradients, GAE, PPO clipping, DPO objectives, and GRPO-style group advantages are introduced as tools for explaining what the code does (a minimal PPO-clipping sketch follows this list).
- Modern RL goes beyond classic RL. The course covers classic control and deep RL, then moves into RLHF, preference optimization, RLVR, VLM reinforcement learning, and multi-turn agent training.
- Debugging is first-class. Training collapse, reward hacking, KL drift, entropy decay, OOM failures, and evaluation blind spots are treated as core material.
- Readable systems beat black boxes. Examples favor explicit implementations, inspectable metrics, and clear experiment boundaries so learners can modify and extend them.
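As a taste of the "theory explains behavior" principle, the PPO clipping mentioned above reduces to a few lines of tensor code. The sketch below is illustrative only; the tensor names and shapes are assumptions, not the course's actual implementation:

```python
import torch

def ppo_clip_loss(new_logp, old_logp, advantages, clip_eps=0.2):
    """Clipped surrogate loss from PPO; inputs are 1-D tensors over sampled actions."""
    ratio = torch.exp(new_logp - old_logp)                # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()          # maximize surrogate = minimize its negative
```

Watching how this loss and the ratio statistics move during training is exactly the kind of observable behavior the course uses to motivate the underlying math.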
This course is for learners who want to understand reinforcement learning by building and inspecting working systems.
It is especially useful for:
- Machine learning engineers moving from supervised learning into RL.
- Researchers and students preparing to read modern RL and alignment papers.
- LLM practitioners interested in RLHF, DPO, GRPO, RLVR, and post-training systems.
- Builders of tool-use agents, web agents, code agents, and evaluation pipelines.
- Self-learners who prefer code, experiments, and visual intuition before dense derivations.
Recommended background:
- Python programming experience.
- Basic PyTorch familiarity.
- Introductory linear algebra, probability, and calculus for machine learning.
- Ability to read papers and trace open-source training scripts.
The course includes math review appendices, so full mathematical fluency is not required on day one.
After completing the course, learners should be able to:
- Implement and explain the core RL loop: environment interaction, trajectory collection, reward feedback, policy updates, and evaluation (a minimal interaction-loop skeleton follows this list).
- Connect MDPs, value functions, Bellman equations, TD learning, policy gradients, and advantage estimation to concrete training behavior.
- Read and modify implementations of DQN, REINFORCE, Actor-Critic, PPO, DPO, GRPO, and related methods.
- Reason about LLM post-training pipelines, including SFT, reward modeling, PPO-style RLHF, DPO-family methods, and RLVR training.
- Understand multi-turn interaction and credit assignment, and build tool-use, trajectory-synthesis, and Agentic RL systems.
- Extend reinforcement learning ideas to VLMs, embodied intelligence, multi-agent self-play, and other frontier areas.
- Diagnose common RL failure modes and design reasonable algorithms, engineering evaluations, and debugging workflows for new RL problems.
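For the first outcome above, the entire core loop fits in a short script. The sketch below uses Gymnasium's CartPole with a random placeholder policy purely to show the interaction, collection, and evaluation skeleton; it is not the course's training code:

```python
import gymnasium as gym

env = gym.make("CartPole-v1")
returns = []

for episode in range(5):                      # trajectory collection
    obs, info = env.reset(seed=episode)
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample()    # placeholder policy; a learned policy goes here
        obs, reward, terminated, truncated, info = env.step(action)   # reward feedback
        total_reward += reward
        done = terminated or truncated
    returns.append(total_reward)              # evaluation signal a policy update would consume

print("episode returns:", returns)
```

Every algorithm in the course, from DQN to PPO-style RLHF, is a variation on what happens inside and after this loop.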
This repository is an active courseware project. Content is being expanded chapter by chapter, with emphasis on correctness, runnable examples, and a stable learning path.
- Course site: walkinglabs.github.io/hands-on-modern-rl
- Source content: `docs/`
- Runnable examples: `code/`
- Local verification: `npm run verify`
- License: CC BY-NC-SA 4.0
Issues and pull requests are welcome for typo fixes, conceptual corrections, reproducibility improvements, references, and focused course extensions.
Note: This course was created with AI assistance and has not yet been fully reviewed. It may contain factual mistakes or code that does not run as expected. Issues and pull requests are very welcome.
- [2026-05-13] 🚀 Major Upgrade: LLM and Traditional RL Hands-on Labs: Added reproducible training examples for Agentic RL (Deep Research / rLLM) and Traditional RL (Actor-Critic continuous control). Includes complete code and fine-tuning analysis for building an Agentic training system from scratch, along with new VLM RL (GeoQA geometry reasoning) hands-on experiments!
- [2026-05-02] Initial browsable open-source release for testing and feedback.
The course is under active development. Planned milestones:
- 2026-05-02: Initial open-source browsable release for community testing and feedback.
- 2026-05-10: Publish a first stable minor version, fix early typos, and stabilize Part 1 and Part 2 content and code.
- Late May 2026: Improve reproducible LLM RL experiments and add a full RLVR hands-on module with evaluation.
- Early June 2026: Deliver Agentic RL projects step by step, from single-tool use to complex Deep Research trajectory synthesis.
- Late June 2026: Add Unity-based embodied RL environments and trainable project examples.
- July 2026 and later: Expand multimodal frontier content with full VLM RL or Diffusion RL hands-on cases.
The course is divided into four parts plus appendices. The README keeps only the main modules; the online site contains the full chapter tree, diagrams, code references, and detailed navigation.
| Module | Description |
|---|---|
| Course Guide | Course positioning, learning path, and how to use the materials. |
| A Brief History of Reinforcement Learning | From trial-and-error learning to AlphaGo, RLHF, and LLM alignment. |
| Environment Setup | Installation and dependency setup for the course. |
| Chapter | Main Topic | What It Covers |
|---|---|---|
| 01 | CartPole | States, actions, rewards, policies, values, entropy, and training curves through a first runnable control task. |
| 02 | DPO Preference Fine-tuning | Preference data, DPO objectives, reward margins, accuracy, and the first bridge from RL intuition to LLM post-training. |
| Summary | Part 1 Summary | The practical intuition learners should have before entering formal RL theory. |
| Chapter | Main Topic | What It Covers |
|---|---|---|
| 03 | MDPs and Value Functions | Bandits, MDPs, value functions, Bellman equations, TD learning, Q-learning, policy objectives, data sources, and reward design. |
| 04 | Deep Q-Networks | From tabular Q-learning to DQN, replay buffers, target networks, CNN encoders, LunarLander, Atari, and visual game projects. |
| 05 | Policy Gradient and REINFORCE | Direct policy optimization, sampling-based gradients, baselines, and variance reduction. |
| 06 | Actor-Critic | Actor-critic architecture, advantage functions, TD-error critic training, and game-playing agents. |
| 07 | PPO | PPO experiments, clipped objectives, trust-region intuition, GAE, reward models, long-horizon planning, and BipedalWalker practice. |
| Summary | Part 2 Summary | The algorithmic patterns that repeat across classic and modern RL. |
| Chapter | Main Topic | What It Covers |
|---|---|---|
| 08 | The Full RLHF Pipeline | SFT, reward modeling, PPO-style RLHF, evaluation, scaling, and reward hacking. |
| 09 | Post-Training Alignment | DPO-family methods, GRPO, DeepSeek-R1 and DAPO, RLVR, financial tool-calling GRPO, policy distillation, sandboxed training, and industrial post-training practice. |
| 10 | Agentic RL | Multi-turn credit assignment, tool-use trajectories, agent evaluation, SWE/DeepCoder/FinQA-style labs, Deep Research agents, and end-to-end agentic training systems. |
| Summary | Part 3 Summary | What changes when RL is applied to language models, tools, and multi-step agent behavior. |
| Chapter | Main Topic | What It Covers |
|---|---|---|
| 11 | VLM Reinforcement Learning | VLM GRPO, visual rewards, multimodal reasoning frameworks, visual generation RL, and EasyR1 GeoQA practice. |
| 12 | Future Trends | Embodied intelligence, model-based RL, self-play, multi-agent systems, offline RL, and scaling trends. |
| Summary | Part 4 Summary | Frontier directions to explore after finishing the core course. |
| Appendix | Main Topic | What It Covers |
|---|---|---|
| A | Training Debugging Guide | Common RL training failures, symptoms, root causes, and fixes. |
| B | RL Engineering Practice | Training infrastructure, agent sandboxes, parallelism, monitoring, evaluation benchmarks, metrics, and industrial exercises. |
| C | Handwritten Code Cheatsheet | Compact code notes for SFT, PPO, DPO, GRPO, sampling, attention, and DAPO. |
| D | Learning Resources and Reproduction Projects | Curated resources and reproduction projects for expanding course examples. |
| E | Math Foundations for Reinforcement Learning | Linear algebra, probability, calculus, optimization, and information theory for RL. |
The code/ directory contains runnable examples aligned with course chapters. Each chapter's code is intentionally compact so it can be inspected, run, and modified independently.
| Area | Code Path | Representative Experiments |
|---|---|---|
| Classic control | `code/chapter01_cartpole/` | Train CartPole, inspect rewards and episode length, and compare PPO implementations. |
| Preference fine-tuning | `code/chapter02_dpo/` | Generate preference data, train with DPO, and compare model behavior before and after fine-tuning. |
| MDP and value learning | `code/chapter03_mdp/` | Run bandit strategies, solve GridWorld, and verify Bellman updates numerically. |
| Deep Q-learning | `code/chapter04_dqn/` | Implement replay buffers, target networks, and Double DQN variants. |
| Policy gradient | `code/chapter05_policy_gradient/` | Compare REINFORCE, baseline variants, and Actor-Critic updates. |
| PPO | `code/chapter07_ppo/` | Train LunarLander, inspect clipping, visualize GAE, and compare training stability. |
| RLHF | `code/chapter08_rlhf/` | Walk through SFT, reward model training, and PPO-style alignment. |
| Alignment and RLVR | `code/chapter09_alignment/`, `code/chapter09_grpo_rlvr/` | Explore DPO rewards, GRPO group advantages, and rule-based verifiable rewards. |
| VLM and agents | `code/chapter10_agentic_rl/`, `code/chapter11_vlm_rl/` | Build tool-use agent trajectory synthesis and implement multimodal model RL examples. |
| Advanced topics | `code/chapter12_future_trends/` | Study frontier directions including multi-agent RL and model-based RL. |
See code/README.md for a code index and chapter-specific dependency notes.
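The alignment and RLVR row above mentions GRPO group advantages. As a rough illustration of the idea (a sketch only, not the repository's implementation), a group-relative advantage simply scores each sampled completion against the other completions drawn for the same prompt:

```python
import torch

def group_relative_advantages(rewards, eps=1e-6):
    """rewards: tensor of shape (num_prompts, group_size), one scalar reward per sampled completion."""
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)     # each completion is scored relative to its own group

# Example: two prompts, four completions each, rule-based 0/1 verifiable rewards.
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.0, 0.0, 1.0, 0.0]])
print(group_relative_advantages(rewards))
```

This replaces a learned value baseline with within-group normalization, which is why it pairs naturally with cheap, verifiable rule-based rewards.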
A practical path through the repository:
- Read the course guide and run the CartPole example.
- Skim the DPO chapter early, even before finishing all theory, to anchor the motivation for LLM post-training.
- Study Chapters 03-07 in order; this is the conceptual core.
- After understanding policy gradients and PPO, return to RLHF, DPO, GRPO, and RLVR (a minimal DPO-loss sketch follows this list).
- Use the debugging and engineering appendices whenever a training run behaves strangely.
- Treat frontier chapters as extensions: VLM reinforcement learning, Agentic RL, continuous control, multi-agent systems, and test-time reasoning.
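For the DPO step in the path above, the objective itself is compact. The sketch below assumes sequence-level log-probabilities for chosen and rejected responses under the policy and a frozen reference model; it is a minimal illustration, not the course's training code:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Preference loss over chosen/rejected response pairs (1-D tensors, one entry per pair)."""
    policy_margin = policy_chosen_logp - policy_rejected_logp
    ref_margin = ref_chosen_logp - ref_rejected_logp
    logits = beta * (policy_margin - ref_margin)   # implicit reward margin between chosen and rejected
    return -F.logsigmoid(logits).mean()
```

Tracking the reward margin and preference accuracy implied by these logits is what the Chapter 02 experiments use to connect RL intuition to LLM post-training.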
Published course site:
https://walkinglabs.github.io/hands-on-modern-rl/
Requirements:
- Node.js >= 18.0.0
- npm
```bash
git clone https://github.com/walkinglabs/hands-on-modern-rl.git
cd hands-on-modern-rl
npm install
npm run dev
```

Then open the local VitePress URL shown in the terminal, usually:

http://localhost:5173
Before submitting a pull request that changes documentation structure, theme code, navigation, build scripts, or generated assets, run:
```bash
npm run verify
```

This checks formatting, lints the VitePress theme, builds the site, and verifies expected build artifacts.
Most code examples use Python and are organized by chapter.
```bash
cd code
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

For smaller installs, use chapter-specific requirements files:
```bash
pip install -r chapter01_cartpole/requirements.txt
python chapter01_cartpole/1-ppo_cartpole.py
```

Some chapters may require additional system libraries, GPU support, model downloads, or environment-specific setup. Start with Chapter 01 before running examples that involve LLMs, VLMs, or heavy simulators.
```text
hands-on-modern-rl/
|-- docs/              # VitePress course content
|   |-- .vitepress/    # Site config, navigation, and theme overrides
|   |-- public/        # Static assets copied into the built site
|   |-- preface/       # Course introduction and history
|   |-- chapter*/      # Main course chapters
|   |-- appendix*/     # Supplementary material and references
|   `-- summaries/     # Part-level review and summary notes
|-- code/              # Runnable examples aligned with chapters
|-- scripts/           # Maintenance and verification scripts
|-- package.json       # Site scripts and dependencies
|-- AGENTS.md          # Repository maintenance guide
`-- README.md          # Main project overview
```
```bash
npm run dev           # Start the local documentation server
npm run build         # Build the static site
npm run preview       # Preview the built site locally
npm run format        # Format repository files with Prettier
npm run format:check  # Check formatting
npm run lint          # Lint VitePress theme code
npm run verify        # Run format check, lint, build, and artifact verification
```

Contributions should make the course clearer, more accurate, easier to reproduce, or easier to navigate.
Good contributions include:
- Fixing conceptual errors, formulas, diagrams, broken links, or typos.
- Improving explanations without changing the intended learning path.
- Adding small, reproducible experiments that clarify existing chapters.
- Improving scripts, build reliability, navigation, or accessibility.
- Adding high-quality references to papers, official documentation, or widely used open-source implementations.
Please keep pull requests focused. A good PR usually changes one chapter, one experiment, one group of diagrams, or one infrastructure issue at a time.
When adding content:
- Put course material under
docs/. - Use kebab-case for new directories and files.
- Prefer directory-based routes with
index.md. - Update
docs/.vitepress/config.mjswhen adding navigable pages. - Run
npm run verifybefore requesting review if your change touches config, theme, scripts, or generated site output. - Use Conventional Commits, such as
docs: clarify ppo clippingorfix: repair chapter link.
For repository-specific maintenance rules, see AGENTS.md.
Our team has also created other courses. Take a look:
For suggestions or feedback, scan the QR code to join the WeChat group (微信):
If you use this course in teaching materials, study notes, or derivative non-commercial educational work, please cite the repository:
```bibtex
@misc{hands_on_modern_rl,
  title        = {Hands-On Modern RL: Practice-first reinforcement learning from CartPole to LLM post-training and agentic systems},
  author       = {WalkingLabs},
  year         = {2026},
  howpublished = {\url{https://github.com/walkinglabs/hands-on-modern-rl}},
  note         = {Open courseware repository}
}
```

This course is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
You may share and adapt the material for non-commercial purposes, provided that you give appropriate credit and distribute derivative works under the same license.






