[ModelOpt] Fix broken Qwen3-235B-A22B-Instruct-2507-NVFP4 launch#18189
Conversation
/tag-and-rerun-ci again

/tag-and-rerun-ci
Code Review
This pull request introduces a packed_modules_mapping to the Qwen3MoeForCausalLM class to fix an issue with quantization, specifically for the Qwen3-235B model. The mapping correctly identifies fused modules, ensuring that quantization skipping rules are applied properly. The change is well-explained and appears to be a correct and necessary fix. I have one minor suggestion to improve code clarity by using typing.ClassVar for the new class attribute.
# Mapping from fused module names to their component weight names.
# Required for quantization configs (e.g., ModelOpt FP4) to correctly identify
# which layers should be skipped based on the exclude_modules/ignore list.
packed_modules_mapping = {
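With the suggested typing.ClassVar annotation, the attribute would read roughly like this (a minimal sketch; only the qkv_proj entry is confirmed later in the description, the rest is elided):

```python
from typing import ClassVar

from torch import nn


class Qwen3MoeForCausalLM(nn.Module):
    # Annotated as a class-level constant rather than an instance attribute.
    packed_modules_mapping: ClassVar[dict[str, list[str]]] = {
        "qkv_proj": ["q_proj", "k_proj", "v_proj"],
        # ... (remaining entries as in the actual diff)
    }
```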
It's because the qkv and o_proj layers are NVFP4 in these recipes, which is not the case for the NV checkpoint.
NVIDIA does not publish the recipe for the NVFP4 model.
ssshinigami
left a comment
It doesn't look correct to change the model files for this fix. This is quantization-specific and should live in the quantization code.
Motivation
Support https://huggingface.co/nvidia/Qwen3-235B-A22B-Instruct-2507-NVFP4
Previously it failed to launch on SGLang.
However, the 30B NVFP4 checkpoint always worked:
https://huggingface.co/nvidia/Qwen3-30B-A3B-NVFP4/blob/main/hf_quant_config.json
Root cause
Qwen3-235B-A22B-Instruct-2507-NVFP4 (Broken)

| Module | Precision |
| --- | --- |
| MoE expert MLPs (gate_proj, up_proj, down_proj) | NVFP4 |
| Attention projections (q_proj, k_proj, v_proj, o_proj) | BF16 |
| Router gates (mlp.gate) | BF16 |
| lm_head | BF16 |

The config.json quantization_config.ignore list contains all 94 layers × 3 projections = 282 entries for q/k/v:
"ignore": [
"model.layers.0.self_attn.k_proj",
"model.layers.0.self_attn.q_proj",
"model.layers.0.self_attn.v_proj",
"model.layers.0.mlp.gate",
// ... repeated for all 94 layers
"lm_head"
]
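For reference, that count can be confirmed directly from the checkpoint (a hypothetical verification snippet; the config.json path and key names are taken from the description above):

```python
import json

# Count the attention-projection entries in the 235B checkpoint's ignore list.
with open("config.json") as f:
    ignore = json.load(f)["quantization_config"]["ignore"]

qkv_entries = [m for m in ignore if m.endswith(("q_proj", "k_proj", "v_proj"))]
print(len(qkv_entries))  # expected: 94 layers * 3 projections = 282
```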
Qwen3-30B-A3B-NVFP4 (Works)

| Module | Precision |
| --- | --- |
| MoE expert MLPs (gate_proj, up_proj, down_proj) | NVFP4 |
| Attention projections (q_proj, k_proj, v_proj, o_proj) | NVFP4 |
| Router gates (mlp.gate) | BF16 |
| lm_head | BF16 |

The config.json quantization_config.ignore list contains only the router gates and lm_head:
"ignore": [
"model.layers.0.mlp.gate",
// ... repeated for all 48 layers
"lm_head"
]
Qwen3MoeForCausalLM has no packed_modules_mapping
Buggy behaviour (235B):
Layer: model.layers.0.self_attn.qkv_proj
├── packed_modules_mapping = {}
├── is_layer_skipped("...qkv_proj", ignore_list, {})
│ ├── proj_name = "qkv_proj"
│ ├── "qkv_proj" in {} → False
│ └── Fallback: is "qkv_proj" in ignore_list? → NO (only q_proj, k_proj, v_proj are)
├── Returns: False (NOT skipped)
├── Quant method: ModelOptFp4LinearMethod
└── Creates param shape: [128, 2048] (FP4 packed, input_size/2 for k_proj shard)
The same check for the 30B checkpoint:
Layer: model.layers.0.self_attn.qkv_proj
├── packed_modules_mapping = {}
├── is_layer_skipped("...qkv_proj", ignore_list, {})
│ ├── proj_name = "qkv_proj"
│ ├── "qkv_proj" in {} → False
│ └── Fallback: is "qkv_proj" in ignore_list? → NO
├── Returns: False (NOT skipped)
├── Quant method: ModelOptFp4LinearMethod
└── Creates param shape: [64, 1024] (FP4 packed)
(Really, the 30B only worked by coincidence: its q/k/v projections are not in the ignore list, so not skipping them happens to be correct.)
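A minimal standalone sketch of the skip check that the traces above (and the fixed trace below) walk through; this approximates SGLang's is_layer_skipped helper rather than reproducing its exact implementation:

```python
def is_layer_skipped(prefix, ignore, packed_modules_mapping):
    """Roughly: should this (possibly fused) linear layer stay unquantized?"""
    proj_name = prefix.split(".")[-1]  # e.g. "qkv_proj"
    if proj_name in packed_modules_mapping:
        # Fused module: expand into its component names, e.g.
        # ...self_attn.qkv_proj -> [...q_proj, ...k_proj, ...v_proj],
        # and skip quantization only if every component is ignored.
        components = [
            prefix.replace(proj_name, shard)
            for shard in packed_modules_mapping[proj_name]
        ]
        return all(c in ignore for c in components)
    # Fallback: match the fused name directly against the ignore list.
    return prefix in ignore


ignore_235b = [
    "model.layers.0.self_attn.q_proj",
    "model.layers.0.self_attn.k_proj",
    "model.layers.0.self_attn.v_proj",
]
mapping = {"qkv_proj": ["q_proj", "k_proj", "v_proj"]}

# Without the mapping the fused layer is not recognised and gets quantized:
print(is_layer_skipped("model.layers.0.self_attn.qkv_proj", ignore_235b, {}))       # False
# With the mapping, all three components are found in the ignore list:
print(is_layer_skipped("model.layers.0.self_attn.qkv_proj", ignore_235b, mapping))  # True
```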
So we add a packed_modules_mapping for Qwen3MoE, which produces:
Layer: model.layers.0.self_attn.qkv_proj
├── packed_modules_mapping = {"qkv_proj": ["q_proj", "k_proj", "v_proj"], ...}
├── is_layer_skipped("...qkv_proj", ignore_list, mapping)
│ ├── proj_name = "qkv_proj"
│ ├── "qkv_proj" in mapping → True
│ ├── Expand to: [...q_proj, ...k_proj, ...v_proj]
│ └── All three in ignore_list? → YES
├── Returns: True (SKIPPED)
├── Quant method: UnquantizedLinearMethod
└── Creates param shape: [128, 4096] (BF16, full size)
The 30B behaviour is unchanged.
Modifications
Add packed_modules_mapping to Qwen3MoeForCausalLM.
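A sketch of the change (the comment and the qkv_proj entry match the diff excerpt quoted in the review above; the gate_up_proj entry is an assumption based on Qwen3 MoE's usual fused gate/up projection and may differ from the actual diff):

```python
from torch import nn


class Qwen3MoeForCausalLM(nn.Module):
    # Mapping from fused module names to their component weight names.
    # Required for quantization configs (e.g., ModelOpt FP4) to correctly identify
    # which layers should be skipped based on the exclude_modules/ignore list.
    packed_modules_mapping = {
        "qkv_proj": ["q_proj", "k_proj", "v_proj"],
        "gate_up_proj": ["gate_proj", "up_proj"],  # assumed entry
    }
```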
Accuracy Tests
Benchmarking and Profiling
Checklist
Review Process
/tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci