[ROCm] Enable vLLM Triton FP8 MoE for gfx1201, tuned for Qwen3-30B-A3B-FP8 tp=2 and Qwen/Qwen3.5-35B-A3B-FP8 tp=2 #79
Open
big-yellow-duck wants to merge 2 commits into main
Conversation
tjtanaa reviewed Mar 12, 2026
vllm/_aiter_ops.py (Outdated)

```diff
     """
     if current_platform.is_rocm() and IS_AITER_FOUND:
-        from vllm.platforms.rocm import on_gfx9
+        from vllm.platforms.rocm import on_gfx9, on_gfx12x
```
Member
This should not be included here, because this PR is only about adding the Triton tuned config JSONs.
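For context, the check under discussion gates code paths by ROCm GPU architecture. A minimal sketch of what such a helper might look like, modeled on `vllm.platforms.rocm.on_gfx9` (an assumption about its shape, not the PR's actual implementation):

```python
# Sketch of an RDNA4 arch check in the style of on_gfx9 (assumed shape,
# not the PR's actual implementation).
from functools import cache

import torch


@cache
def on_gfx12x() -> bool:
    # On ROCm builds of PyTorch, gcnArchName reports e.g. "gfx1201".
    GFX12_ARCHS = ("gfx1200", "gfx1201")
    arch = torch.cuda.get_device_properties(0).gcnArchName
    return any(a in arch for a in GFX12_ARCHS)
```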
@big-yellow-duck great job on this PR.
Author
It's a modified version of
Author
The previous tuning was run with the steps below.

Tuning steps (tuned on 2x Radeon AI PRO 9700):

```bash
# run in vllm root
HIP_VISIBLE_DEVICES='0,1' python benchmarks/kernels/benchmark_moe.py \
    --model Qwen/Qwen3-30B-A3B-Instruct-2507-FP8 --tp-size 2 \
    --enable-expert-parallel --tune --save-dir rocm7.0-tune --dtype fp8_w8a8
```

Then move the resulting JSON to the config dir:

```bash
# example
mv 'rocm7.0-tune/E=64,N=768,device_name=AMD_Radeon_R9700,dtype=fp8_w8a8,block_shape=[128,128].json' \
    vllm/model_executor/layers/fused_moe/configs/
```

The same steps are repeated for Qwen/Qwen3.5-35B-A3B-FP8.
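For reference, the tuned files map each batch size M to a set of Triton kernel launch parameters. A minimal sketch of the layout (keys follow the fused-MoE config format; the values are placeholders, not the tuned numbers from this PR):

```python
# Sketch of the fused-MoE tuned-config layout written by benchmark_moe.py.
# Top-level keys are batch sizes (M); values are Triton launch parameters.
# Numbers below are illustrative placeholders only.
example_config: dict[str, dict[str, int]] = {
    "1": {
        "BLOCK_SIZE_M": 16,
        "BLOCK_SIZE_N": 64,
        "BLOCK_SIZE_K": 128,
        "GROUP_SIZE_M": 1,
        "num_warps": 4,
        "num_stages": 2,
    },
    # ...one entry per benchmarked batch size, up to large M
}
```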
Purpose
Enable Triton FP8 MoE for RDNA4 (gfx12xx) in vLLM so FP8 MoE models can run on ROCm with these GPUs.

This resolves vllm/issues/36105, where FP8 MoE model startup failed with `NotImplementedError: No FP8 MoE backend supports the deployment configuration.`

This PR also includes tuned Triton MoE performance improvements for:

- Qwen/Qwen3-30B-A3B-Instruct-2507-FP8
- Qwen/Qwen3.5-35B-A3B-FP8

Test Plan
Benchmark Qwen/Qwen3-30B-A3B-Instruct-2507-FP8 and Qwen/Qwen3.5-35B-A3B-FP8 with the tuned Triton MoE on 2x Radeon AI PRO 9700.
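Before benchmarking, a quick way to confirm the FP8 MoE path now initializes on gfx1201 is an offline generation smoke test (a sketch using vLLM's offline API, with the same model and tp settings as above):

```python
# Smoke test: verify the FP8 MoE model loads and generates on RDNA4
# instead of raising NotImplementedError at startup.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen3-30B-A3B-Instruct-2507-FP8",
    tensor_parallel_size=2,
)
outputs = llm.generate(
    ["The quick brown fox"],
    SamplingParams(max_tokens=32),
)
print(outputs[0].outputs[0].text)
```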
Test Results
Qwen/Qwen3-30B-A3B-Instruct-2507-FP8
[charts: TTFT (ms), TPOT (ms), E2E Latency (ms)]

Qwen/Qwen3.5-35B-A3B-FP8
[charts: TTFT (ms), TPOT (ms), E2E Latency (ms)]
Accuracy checks

GSM8K Accuracy
[table: GSM8K accuracy for Qwen/Qwen3-30B-A3B-Instruct-2507-FP8 and Qwen/Qwen3.5-35B-A3B-FP8]
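One way to reproduce such a GSM8K check is via lm-eval-harness (a sketch; it assumes the `lm_eval` package with its vLLM backend, and is not necessarily the exact command used for the numbers above):

```python
# Hedged sketch: GSM8K accuracy check with lm-eval-harness.
# Exact arguments may vary by lm_eval version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="vllm",
    model_args=(
        "pretrained=Qwen/Qwen3-30B-A3B-Instruct-2507-FP8,"
        "tensor_parallel_size=2"
    ),
    tasks=["gsm8k"],
)
print(results["results"]["gsm8k"])
```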
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.