
[Kimi-K2.5] Fix NVFP4 Kimi-K2.5 weight mapping and exclude list #18370

Merged
mickqian merged 2 commits into sgl-project:main from mmangkad:fix-kimi-k2-5-nvfp4 on Feb 8, 2026

Conversation

mmangkad (Contributor) commented Feb 6, 2026

Motivation

Make nvidia/Kimi-K2.5-NVFP4 load cleanly in SGLang. The checkpoint’s naming and hf_quant_config.json exclude list use language_model.layers.*, which doesn’t match SGLang’s current Kimi/Deepseek module prefixes. This mismatch caused weight shape errors and unintended quantization of self‑attn.
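Concretely, the mismatch looks like this (the layer index and projection name below are illustrative, not taken from the checkpoint):

```python
# Illustrative key names only: the checkpoint uses the shorter prefix,
# while SGLang's module tree expects the extra "model." segment.
checkpoint_key = "language_model.layers.0.self_attn.q_proj.weight"
sglang_param = "language_model.model.layers.0.self_attn.q_proj.weight"
```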

Modifications

  • Map language_model.layers.* to language_model.model.layers.* in KimiK25ForConditionalGeneration before loading weights.
  • Apply the same mapping to ModelOpt NVFP4 exclude_modules, and also add stripped variants (without the language_model. prefix) so the inner Deepseek prefixes (model.layers.*) are excluded as intended. A sketch of the remap follows this list.
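A minimal sketch of the prefix remap described above, assuming weights arrive as an iterator of (name, tensor) pairs; the actual implementation uses WeightsMapper from sglang.srt.models.utils, whose API may differ:

```python
from typing import Iterable, Iterator, Tuple

import torch

# Checkpoint prefix -> SGLang module prefix, from the PR description.
PREFIX_MAP = {"language_model.layers.": "language_model.model.layers."}


def remap_weight_names(
    weights: Iterable[Tuple[str, torch.Tensor]],
) -> Iterator[Tuple[str, torch.Tensor]]:
    """Yield (name, tensor) pairs with checkpoint prefixes rewritten."""
    for name, tensor in weights:
        for old, new in PREFIX_MAP.items():
            if name.startswith(old):
                name = new + name[len(old):]
                break
        yield name, tensor
```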

Accuracy Tests

python -m sglang.launch_server --model-path nvidia/Kimi-K2.5-NVFP4 --tensor-parallel-size 8 --trust-remote-code --reasoning-parser kimi_k2 --tool-call-parser kimi_k2 --moe-runner-backend flashinfer_trtllm --quantization modelopt_fp4 --model-loader-extra-config '{"enable_multithread_load": true,"num_threads": 119}'

GSM8K (20-shot)

Accuracy: 0.936
Invalid: 0.001
Latency: 90.726 s
Output throughput: 1491.511 token/s

Benchmarking and Profiling

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

github-actions bot added the quant (LLM Quantization) label Feb 6, 2026
gemini-code-assist (bot) commented

Summary of Changes

Hello @mmangkad, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses compatibility issues when loading the nvidia/Kimi-K2.5-NVFP4 model within SGLang. By introducing a mechanism to correctly map module prefixes and adjust quantization exclusion lists, it resolves discrepancies in checkpoint naming that previously led to weight shape errors and unintended quantization of model components, enabling the model to load cleanly.

Highlights

  • Kimi-K2.5 Weight Mapping Fix: Implemented a WeightsMapper in KimiK25ForConditionalGeneration to correctly map language_model.layers.* to language_model.model.layers.* when loading weights for the nvidia/Kimi-K2.5-NVFP4 model, resolving module prefix mismatches.
  • ModelOpt NVFP4 Exclude List Adjustment: Modified ModelOptQuantConfig to apply the same weight name mapping to exclude_modules and added stripped variants (without language_model.) to ensure correct exclusion of inner Deepseek prefixes during NVFP4 quantization, preventing unintended quantization of self-attention layers (sketched below).
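A hedged sketch of that exclude-list expansion; expand_exclude_modules and its exact shape are illustrative rather than the PR's real identifiers:

```python
def expand_exclude_modules(exclude_modules, prefix_map):
    """Return the original patterns plus mapped and prefix-stripped variants."""
    expanded = set(exclude_modules)
    for pattern in exclude_modules:
        for old, new in prefix_map.items():
            if pattern.startswith(old):
                mapped = new + pattern[len(old):]
                expanded.add(mapped)
                # Stripped variant so the inner Deepseek prefixes
                # (model.layers.*) are excluded as intended.
                expanded.add(mapped.removeprefix("language_model."))
    return sorted(expanded)


# e.g. "language_model.layers.0.self_attn" expands to itself plus
# "language_model.model.layers.0.self_attn" and "model.layers.0.self_attn".
```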


Changelog
  • python/sglang/srt/layers/quantization/modelopt_quant.py
    • Imported WeightsMapper from sglang.srt.models.utils.
    • Added a new method apply_weight_name_mapper to ModelOptQuantConfig to dynamically adjust the exclude_modules list based on a provided WeightsMapper, ensuring proper handling of module name discrepancies for quantization.
    • Expanded the exclude_modules list to include both mapped and stripped variants of module names (e.g., language_model.layers.0.attn and layers.0.attn) to cover different naming conventions.
  • python/sglang/srt/models/kimi_k25.py
    • Imported WeightsMapper from sglang.srt.models.utils.
    • Defined a hf_to_sglang_mapper within KimiK25ForConditionalGeneration to specifically handle the nvidia/Kimi-K2.5-NVFP4 model's weight naming convention, mapping `language_model.layers.` to `language_model.model.layers.`.
    • Integrated the hf_to_sglang_mapper into the load_weights method of KimiK25ForConditionalGeneration to automatically apply the necessary weight name transformations during model loading (sketched below).
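Putting those two changelog entries together, the load-path wiring could look roughly like the following; class internals such as _load_weights_inner are hypothetical stand-ins for SGLang's real loading machinery:

```python
class KimiK25WeightLoadingSketch:
    """Toy stand-in for KimiK25ForConditionalGeneration's loading path."""

    # Mirrors the hf_to_sglang_mapper described above.
    prefix_map = {"language_model.layers.": "language_model.model.layers."}

    def load_weights(self, weights):
        # Rewrite checkpoint names lazily before the normal loading path,
        # so parameter lookups match SGLang's module tree.
        remapped = ((self._remap(name), tensor) for name, tensor in weights)
        return self._load_weights_inner(remapped)

    def _remap(self, name: str) -> str:
        for old, new in self.prefix_map.items():
            if name.startswith(old):
                return new + name[len(old):]
        return name

    def _load_weights_inner(self, weights):
        # Placeholder for the real Deepseek-style loader.
        return list(weights)
```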
Activity
  • The author has provided accuracy test results for GSM8K 20 shots, showing an accuracy of 0.936, an invalid rate of 0.001, a latency of 90.726 s, and an output throughput of 1491.511 token/s.
  • The pull request includes a detailed checklist for formatting, unit tests, documentation, and benchmarking, indicating readiness for review.
  • Instructions for the review process, including pinging Merge Oncalls and triggering CI tests, are outlined.

gemini-code-assist bot left a comment


Code Review

This pull request introduces a fix for loading nvidia/Kimi-K2.5-NVFP4 weights by correctly mapping weight names and quantization exclusion lists. The changes are well-implemented and address the issue described. A WeightsMapper is introduced in KimiK25ForConditionalGeneration to handle the prefix difference in layer names. This mapper is also applied to the exclude_modules in ModelOptQuantConfig to ensure quantization is correctly skipped for the intended modules. The implementation is clean and robust. I've made one suggestion to improve memory efficiency during weight loading.
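The reviewer's exact suggestion isn't shown in this excerpt; one plausible improvement of that kind is to apply the name mapping lazily instead of materializing a remapped list of every tensor, for example:

```python
def lazy_remap(weights, remap_name):
    # Generator form: keeps one (name, tensor) pair in flight at a time
    # rather than building a full remapped list up front.
    for name, tensor in weights:
        yield remap_name(name), tensor
```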

b8zhong mentioned this pull request Feb 7, 2026
seindum mentioned this pull request Feb 7, 2026
mickqian (Collaborator) commented Feb 7, 2026

/tag-and-rerun-ci

github-actions bot added the run-ci label Feb 7, 2026
mickqian (Collaborator) commented Feb 7, 2026

/rerun-failed-ci

mickqian merged commit 7b83659 into sgl-project:main on Feb 8, 2026
320 of 351 checks passed
mmangkad deleted the fix-kimi-k2-5-nvfp4 branch February 8, 2026 04:39
charlesHsuGG pushed a commit to charlesHsuGG/sglang that referenced this pull request Feb 9, 2026
Johnsonms pushed a commit to Johnsonms/sglang that referenced this pull request Feb 14, 2026
1StepForever pushed a commit to 1StepForever/sglang that referenced this pull request Feb 26, 2026
* www/pr/ks: (265 commits)
  [BugFix][PD]Fix metadata_buffer_index leak when aborted in PD (sgl-project#17483)
  Refactoring Mooncake TE as a shared distributed component (sgl-project#17810)
  [ModelOPT] Support Qwen 3 Next Coder NVFP4 (sgl-project#18224)
  Update author information in pyproject.toml (sgl-project#18453)
  [Kimi-K2.5] Fix missing `quant_config` in `KimiK25` (sgl-project#18440)
  Add tensor parallelism support to LFM2 ShortConv layers (sgl-project#17777)
  [diffusion] chore: revise process title (sgl-project#18446)
  Fix TRT-LLM MLA backend applying k_scale to BF16 KV cache in BMM1 (sgl-project#18396)
  [diffusion] refactor: group component loaders under the component_loaders/ directory (sgl-project#18438)
  [ModelOpt] Fix broken Qwen3-235B-A22B-Instruct-2507-NVFP4 launch (sgl-project#18189)
  [diffusion] feat: support efficient sequence shard (sgl-project#18161)
  [CI] fix: notebook ci may not working (sgl-project#18417)
  fix: sync server_args.kv_cache_dtype when detecting FP8 KV cache (sgl-project#18394)
  [Fix] Fix backend selection after flashinfer version update (sgl-project#18364)
  [diffusion] platform: support WAN/FLUX/Qwen-Image/Qwen-Image-edit on Ascend (sgl-project#13662)
  fix: fix NVFP4 Kimi-K2.5 weight mapping and exclude list (sgl-project#18370)
  [diffusion] feat: support saving videos directly on the server to avoid the overhead of tensor transfer (sgl-project#18253)
  [diffusion] fix: respect dist_timeout option (sgl-project#18386)
  [Doc] Fix outdated `--fp4-gemm-backend` documentation (sgl-project#18350)
  [diffusion] fix: remove unnecessary norm_type argument from GLM-Image dits (sgl-project#18382)
  ...