Fix TRT-LLM MLA backend applying k_scale to BF16 KV cache in BMM1 #18396
Merged
Fridge003 merged 4 commits into sgl-project:main on Feb 8, 2026
Conversation
When a model checkpoint contains KV cache scaling factors (k_scale/v_scale) but the KV cache dtype is BF16 (not FP8), the TRT-LLM MLA backend unconditionally applies k_scale in the BMM1 attention score computation. This is incorrect because k_scale is a quantization compensation factor that should only be applied when KV cache values are actually FP8-quantized. For example, with k_scale=0.06, attention scores are scaled down by ~16x, producing garbage output (degenerate repetition/random tokens).

This fix gates k_scale behind a `self.data_type == torch.float8_e4m3fn` check in both the decode and extend/target_verify paths, so k_scale is only applied when the KV cache actually stores FP8 values.

Affected code paths:
- `forward_decode`: BMM1 scale computation (~L915)
- `forward_extend`: target_verify/draft_extend BMM1 scale computation (~L1037)
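For clarity, here is a minimal, self-contained sketch of the gating logic described above. The helper name `compute_bmm1_scale` and its arguments are illustrative, not the actual code in `trtllm_mla_backend.py`:

```python
import torch

def compute_bmm1_scale(kv_cache_dtype, k_scale_float, q_scale, softmax_scale):
    """Illustrative helper: apply the checkpoint's k_scale only when the
    KV cache actually stores FP8 values, mirroring the gating in this PR."""
    if kv_cache_dtype == torch.float8_e4m3fn:
        # FP8 KV cache: the calibration scale compensates for quantization.
        k_scale = k_scale_float if k_scale_float is not None else 1.0
    else:
        # BF16 (or other non-FP8) KV cache stores unscaled keys, so the
        # calibration scale must be ignored; otherwise every attention
        # logit shrinks by a factor of 1 / k_scale.
        k_scale = 1.0
    return q_scale * k_scale * softmax_scale

# BF16 KV cache with k_scale=0.06 present in the checkpoint: scale is unchanged.
assert compute_bmm1_scale(torch.bfloat16, 0.06, 1.0, 0.125) == 0.125
# FP8 KV cache: the calibration scale is applied as before this PR.
assert compute_bmm1_scale(torch.float8_e4m3fn, 0.06, 1.0, 0.125) == 0.06 * 0.125
```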
Contributor
Warning: You have reached your daily quota limit. Please wait up to 24 hours and I will start processing your requests again!
Fridge003 reviewed on Feb 8, 2026
Log a warning_once when the checkpoint has k_scale but KV cache dtype is not FP8, so users are aware the scaling factor is being ignored.
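A possible shape for that warning, sketched with the standard `logging` module (SGLang may already provide a warn-once helper, which should be preferred if available):

```python
import logging

logger = logging.getLogger(__name__)
_warned_k_scale_ignored = False

def maybe_warn_k_scale_ignored(kv_cache_dtype) -> None:
    """Emit a one-time warning when k_scale/v_scale exist in the checkpoint
    but the KV cache is not FP8, so the scaling factors will be ignored."""
    global _warned_k_scale_ignored
    if not _warned_k_scale_ignored:
        logger.warning(
            "Checkpoint provides k_scale/v_scale but KV cache dtype is %s "
            "(not FP8); the scaling factors will be ignored.",
            kv_cache_dtype,
        )
        _warned_k_scale_ignored = True
```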
Fridge003 approved these changes on Feb 8, 2026
Collaborator
/tag-and-rerun-ci
charlesHsuGG pushed a commit to charlesHsuGG/sglang that referenced this pull request on Feb 9, 2026
Johnsonms pushed a commit to Johnsonms/sglang that referenced this pull request on Feb 14, 2026
1StepForever pushed a commit to 1StepForever/sglang that referenced this pull request on Feb 26, 2026
Motivation

When serving FP8-quantized DeepSeek-V3 checkpoints that contain per-layer `k_scale`/`v_scale` tensors (from KV cache calibration), SGLang's weight loader (`BaseKVCacheMethod`) loads these into `layer.k_scale_float` regardless of what KV cache dtype the server is using. The TRT-LLM MLA backend then unconditionally applies `k_scale` in BMM1. With BF16 KV cache (the default), key values are stored unscaled, so applying `k_scale` (typical values 0.02-0.06) reduces attention scores by 16-50x. This produces completely degenerate output: repetitive tokens, garbage characters, or premature EOS.
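To make the magnitude concrete, the 16-50x figure follows directly from the reciprocal of the typical calibration scales (illustrative arithmetic only):

```python
# Typical calibration values mentioned above.
for k_scale in (0.06, 0.02):
    print(f"k_scale={k_scale}: attention logits shrink by ~{1 / k_scale:.1f}x")
# k_scale=0.06: attention logits shrink by ~16.7x
# k_scale=0.02: attention logits shrink by ~50.0x
```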
Modifications

Single file change in `python/sglang/srt/layers/attention/trtllm_mla_backend.py`:
- `forward_decode` (~L915): Wrapped the `k_scale` lookup in `if self.data_type == torch.float8_e4m3fn`, defaulting to `k_scale = 1.0` for non-FP8 KV cache.
- `forward_extend` (~L1037): Same change for the target_verify/draft_extend path.

`self.data_type` is set from `model_runner.kv_cache_dtype` at init (line 288), so it correctly reflects the runtime KV cache dtype.
Accuracy Tests

Tested on 8x NVIDIA B200 GPUs, TP=8, DeepSeek-V3 671B (FP8 weights), temperature=0, 10 diverse prompts (factual, reasoning, code, creative, translation, QA).

Model with k_scale/v_scale in checkpoint + BF16 KV cache (the bug scenario):

| main (before) | this PR |
| --- | --- |
| Paris 1 is111,1,0000000000... | Paris. It is the largest city in France and the country's main center of culture and commerce. |
| Quantum Quantum Quantum Quantum Quantum... (28x repetition) | Quantum computing is a type of computing that uses quantum bits, or qubits, which can represent and store information... |
| a 0.0.0.0.0.111.0.0.0 0000000... | a distance of 150 miles. This is because 60 miles per hour multiplied by 2.5 hours equals 150 miles. |
| \nn\n (5 tokens then EOS) | `if n <= 1: return False for i in range(2, int(n**0.5) + 1): if n % i == 0: return False return True` |
| Photos What is the process ofs,1. The // k k \|... | Photosynthesis is the process by which green plants, algae, and some bacteria convert light energy... |
Summary: `main` produces 0/10 coherent outputs; this PR produces 10/10 coherent outputs.

No regression (model without k_scale/v_scale in checkpoint):
Both `main` and this PR produce byte-identical outputs for all 10 prompts on a standard DeepSeek-V3 checkpoint without k_scale/v_scale tensors. When `layer.k_scale_float` is `None`, both branches default to `k_scale = 1.0`.

Benchmarking and Profiling
No performance impact. The change adds a single `if self.data_type == torch.float8_e4m3fn` check (a Python attribute comparison) before the existing `getattr` call. This is negligible compared to the GPU kernel execution time.

Measured throughput on B200 with DeepSeek-V3 671B TP=8: ~106-118 tok/s on both branches (identical within noise).
Checklist
Review Process
`/tag-run-ci-label`, `/rerun-failed-ci`, `/tag-and-rerun-ci`