
Fix TRT-LLM MLA backend applying k_scale to BF16 KV cache in BMM1 #18396

Merged
Fridge003 merged 4 commits into sgl-project:main from debo3:fix/trtllm-mla-kscale-bf16-kv-cache
Feb 8, 2026

Conversation

debo3 (Contributor) commented Feb 7, 2026

When a model checkpoint contains KV cache scaling factors (k_scale/v_scale) but the KV cache dtype is BF16 (not FP8), the TRT-LLM MLA backend unconditionally applies k_scale in the BMM1 attention score computation. This is incorrect because k_scale is a quantization compensation factor that should only be applied when KV cache values are actually FP8-quantized.

For example, with k_scale=0.06, attention scores are scaled down by ~16x (1/0.06 ≈ 16.7), producing garbage output (degenerate repetition/random tokens).

This fix gates k_scale behind a self.data_type == torch.float8_e4m3fn check in both the decode and extend/target_verify paths, so k_scale is only applied when the KV cache actually stores FP8 values.

Affected code paths:

  • forward_decode: BMM1 scale computation (~L915)
  • forward_extend: target_verify/draft_extend BMM1 scale computation (~L1037)

Motivation

When serving FP8-quantized DeepSeek-V3 checkpoints that contain per-layer k_scale/v_scale tensors (from KV cache calibration), SGLang's weight loader (BaseKVCacheMethod) loads these into layer.k_scale_float regardless of what KV cache dtype the server is using. The TRT-LLM MLA backend then unconditionally applies k_scale in BMM1:

k_scale = layer.k_scale_float if getattr(layer, "k_scale_float", None) is not None else 1.0
bmm1_scale = q_scale * k_scale * layer.scaling

With BF16 KV cache (the default), key values are stored unscaled, so applying k_scale (typical values 0.02-0.06) reduces attention scores by 16-50x. This produces completely degenerate output: repetitive tokens, garbage characters, or premature EOS.
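
To make the failure mode concrete, here is a toy illustration (hypothetical logit values, not taken from the model) of what a 0.06 multiplier does to attention: the softmax over the shrunken logits flattens toward uniform, so the model effectively attends to everything at once.

```python
# Toy illustration only: hypothetical attention logits, not real model values.
import torch

logits = torch.tensor([8.0, 2.0, 1.0, 0.5])  # healthy, peaked attention scores
print(torch.softmax(logits, dim=-1))         # ~[0.996, 0.002, 0.001, 0.001]
print(torch.softmax(logits * 0.06, dim=-1))  # ~[0.33, 0.23, 0.22, 0.21] -- near uniform
```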

Modifications

Single file change in python/sglang/srt/layers/attention/trtllm_mla_backend.py:

  • forward_decode (~L915): Wrapped k_scale lookup in if self.data_type == torch.float8_e4m3fn, defaulting to k_scale = 1.0 for non-FP8 KV cache.
  • forward_extend (~L1037): Same change for the target_verify/draft_extend path.

self.data_type is set from model_runner.kv_cache_dtype at init (line 288), so it correctly reflects the runtime KV cache dtype.
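
For illustration, a minimal sketch of the gated computation (variable names taken from the snippet above; the exact code in trtllm_mla_backend.py may differ in detail):

```python
# Sketch of the gating described above, not the literal diff.
if self.data_type == torch.float8_e4m3fn:
    # FP8 KV cache: keys are stored quantized, so compensate with k_scale.
    k_scale = (
        layer.k_scale_float
        if getattr(layer, "k_scale_float", None) is not None
        else 1.0
    )
else:
    # BF16 (or other non-FP8) KV cache stores unscaled keys: ignore k_scale.
    k_scale = 1.0
bmm1_scale = q_scale * k_scale * layer.scaling
```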

Accuracy Tests

Tested on 8x NVIDIA B200 GPUs, TP=8, DeepSeek-V3 671B (FP8 weights), temperature=0, 10 diverse prompts (factual, reasoning, code, creative, translation, QA).

Model with k_scale/v_scale in checkpoint + BF16 KV cache (the bug scenario):

| Prompt | main (before) | This PR (after) |
| --- | --- | --- |
| "The capital of France is" | Paris 1 is111,1,0000000000... | Paris. It is the largest city in France and the country's main center of culture and commerce. |
| "Explain quantum computing in simple terms:" | Quantum Quantum Quantum Quantum Quantum... (28x repetition) | Quantum computing is a type of computing that uses quantum bits, or qubits, which can represent and store information... |
| "If a train travels at 60 mph for 2.5 hours, it will cover" | a 0.0.0.0.0.111.0.0.0 0000000... | a distance of 150 miles. This is because 60 miles per hour multiplied by 2.5 hours equals 150 miles. |
| "def is_prime(n):" | \nn\n (5 tokens then EOS) | if n <= 1: return False for i in range(2, int(n**0.5) + 1): if n % i == 0: return False return True |
| "What is photosynthesis?" | Photos What is the process ofs,1. The // k k \|... | Photosynthesis is the process by which green plants, algae, and some bacteria convert light energy... |

Summary: main produces 0/10 coherent outputs; this PR produces 10/10 coherent outputs.

No regression (model without k_scale/v_scale in checkpoint):

Both main and this PR produce byte-identical outputs for all 10 prompts on a standard DeepSeek-V3 checkpoint without k_scale/v_scale tensors. When layer.k_scale_float is None, both branches default to k_scale = 1.0.
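
For reproducibility, a minimal comparison harness along these lines could drive both branches (assumptions: an SGLang server already running with its OpenAI-compatible API on the default port 30000, and "default" as a placeholder model name; prompt list abbreviated):

```python
import requests

PROMPTS = [
    "The capital of France is",
    "def is_prime(n):",
    # ... remaining prompts from the table above
]

def complete(prompt: str) -> str:
    # temperature=0 gives greedy decoding, so outputs from the two branches
    # can be diffed byte-for-byte.
    resp = requests.post(
        "http://localhost:30000/v1/completions",
        json={"model": "default", "prompt": prompt,
              "max_tokens": 64, "temperature": 0},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]

for p in PROMPTS:
    print(repr(p), "->", repr(complete(p)))
```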

Benchmarking and Profiling

No performance impact. The change adds a single if self.data_type == torch.float8_e4m3fn check (a Python attribute comparison) before the existing getattr call. This is negligible compared to the GPU kernel execution time.

Measured throughput on B200 with DeepSeek-V3 671B TP=8: ~106-118 tok/s on both branches (identical within noise).


Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.


github-actions bot added the blackwell SM100/SM120 label Feb 7, 2026
debo3 requested a review from Fridge003 Feb 8, 2026
debo3 and others added 2 commits February 8, 2026 01:02
Log a warning_once when the checkpoint has k_scale but KV cache dtype
is not FP8, so users are aware the scaling factor is being ignored.
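
A hypothetical sketch of that follow-up (the warning_once helper and exact message are assumptions based on the commit message, not the actual diff):

```python
if (
    getattr(layer, "k_scale_float", None) is not None
    and self.data_type != torch.float8_e4m3fn
):
    # Assumed helper name; the commit message says "warning_once".
    logger.warning_once(
        "Checkpoint provides k_scale/v_scale but KV cache dtype is %s; "
        "ignoring the scaling factors (they only apply to FP8 KV cache).",
        self.data_type,
    )
```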
Fridge003 (Collaborator) commented:

/tag-and-rerun-ci

github-actions bot added the run-ci label Feb 8, 2026
Fridge003 merged commit 031a652 into sgl-project:main Feb 8, 2026
243 of 267 checks passed