
Add tensor parallelism support to LFM2 ShortConv layers #17777

Merged
ispobock merged 1 commit into sgl-project:main from tugot17:add-tp-for-lfm on Feb 8, 2026

Conversation

@tugot17 (Contributor) commented on Jan 26, 2026

I just realised that the LFM models do not support tensor parallelism. This is because we used nn.Linear for the in/out projections in ShortConv.

This PR fixes this.

Changes

  • Use MergedColumnParallelLinear for in_proj to shard the B, C, x projections separately (sketched below).
  • Use RowParallelLinear with input_is_parallel for out_proj.
  • Shard conv_weight/conv_bias along hidden dimension.
  • Fix cache shape calculation by passing num_heads=tp_size (temporal state is empty).
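
A minimal sketch of how the first three changes fit together (the constructor keywords and the distributed helper shown here are assumptions based on common sglang layer patterns, not a copy of the actual LFM2 code):

import torch
import torch.nn as nn

from sglang.srt.distributed import get_tensor_model_parallel_world_size
from sglang.srt.layers.linear import MergedColumnParallelLinear, RowParallelLinear

class ShortConvSketch(nn.Module):
    def __init__(self, dim: int, conv_l_cache: int):
        super().__init__()
        tp_size = get_tensor_model_parallel_world_size()
        # One column-parallel GEMM producing B, C, x; each dim-sized output
        # is sharded independently across TP ranks.
        self.in_proj = MergedColumnParallelLinear(dim, [dim] * 3, bias=False)
        # out_proj consumes the local shard directly (input_is_parallel) and
        # all-reduces the result across ranks.
        self.out_proj = RowParallelLinear(dim, dim, bias=False, input_is_parallel=True)
        # The depthwise conv parameters hold only this rank's slice of the
        # hidden dimension.
        self.conv_weight = nn.Parameter(torch.empty(dim // tp_size, conv_l_cache))
        self.conv_bias = nn.Parameter(torch.empty(dim // tp_size))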

Note on num_heads=tp_size: Mamba2StateShape.create() divides num_heads by tp_world_size for the temporal state shape. LFM2 ShortConv doesn't use temporal state (state_size=0), so the result is always empty regardless of the first dimension. We pass num_heads=tp_size to satisfy the divisibility check while keeping the fix local to LFM2.
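
A toy illustration of that check (plain Python standing in for sglang's divide helper and Mamba2StateShape.create; head_dim is an arbitrary example value):

def temporal_state_shape(num_heads, tp_world_size, head_dim, state_size):
    # The divisibility check that fires on main, where num_heads=1 meets tp=2:
    assert num_heads % tp_world_size == 0, f"{num_heads} is not divisible by {tp_world_size}"
    return (num_heads // tp_world_size, head_dim, state_size)

# With num_heads=tp_size the check always passes, and state_size=0 keeps the
# temporal state empty regardless of the first dimension:
print(temporal_state_shape(num_heads=2, tp_world_size=2, head_dim=64, state_size=0))
# -> (1, 64, 0): zero elements, so nothing is allocated for it.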

Test:

Run server

MODEL="${MODEL:-LiquidAI/LFM2.5-1.2B-Instruct}"

sglang serve \
  --model-path "$MODEL" \
  --served-model-name "$MODEL-SGLANG-Internal" \
  --host "${HOST:-0.0.0.0}" \
  --port "${PORT:-30000}" \
  --tp 2 \
  --mem-fraction-static 0.80

Error on main:

  File "/root/sglang/python/sglang/srt/model_executor/model_runner.py", line 390, in __init__
    self.initialize(min_per_gpu_memory)
  File "/root/sglang/python/sglang/srt/model_executor/model_runner.py", line 562, in initialize
    self.init_memory_pool(min_per_gpu_memory)
  File "/root/sglang/python/sglang/srt/model_executor/model_runner_kv_cache_mixin.py", line 259, in init_memory_pool
    self.max_total_num_tokens = self.profile_max_num_token(total_gpu_memory)
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/sglang/python/sglang/srt/model_executor/model_runner_kv_cache_mixin.py", line 142, in profile_max_num_token
    rest_memory = self.handle_max_mamba_cache(rest_memory)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/sglang/python/sglang/srt/model_executor/model_runner_kv_cache_mixin.py", line 182, in handle_max_mamba_cache
    assert config.mamba2_cache_params.mamba_cache_per_req > 0
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/sglang/.venv/lib/python3.12/site-packages/transformers/configuration_utils.py", line 207, in __getattribute__
    return super().__getattribute__(key)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/sglang/python/sglang/srt/configs/lfm2.py", line 80, in mamba2_cache_params
    shape = Mamba2StateShape.create(
            ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/sglang/python/sglang/srt/configs/mamba_utils.py", line 111, in create
    temporal_state_shape = (divide(num_heads, tp_world_size), head_dim, state_size)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/sglang/python/sglang/srt/distributed/utils.py", line 31, in divide
    ensure_divisibility(numerator, denominator)
  File "/root/sglang/python/sglang/srt/distributed/utils.py", line 23, in ensure_divisibility
    assert numerator % denominator == 0, "{} is not divisible by {}".format(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 1 is not divisible by 2

[2026-01-26 16:00:35] Received sigquit from a child process. It usually means the child failed.

Runs successfully on the branch:
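
As a quick sanity check against the running server, the OpenAI-compatible endpoint can be queried (hedged example; the model name must match the --served-model-name value passed above):

import openai

client = openai.OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="LiquidAI/LFM2.5-1.2B-Instruct-SGLANG-Internal",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)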

Tests:

ONLY_RUN=LiquidAI/LFM2.5-1.2B-Instruct pytest test/registered/models/test_generation_models.py::TestGenerationModels::test_all_models -v -s 

1 passed, 5 warnings in 47.65s

The test passes, and it also passes if the test case is manually changed to run with tp_size=2:

ModelCase(
    "LiquidAI/LFM2.5-1.2B-Instruct",
    tp_size=2,
    trust_remote_code=True,
),

Benchmarks

Benchmarks TP1:

  • IfBench: 46.47 vs. 47.33 reported
  • GPQA: 36.16 vs 38.89 reported

Benchmarks TP2:

  • IfBench: 44.33 vs. 47.33 reported
  • GPQA: 38.08 vs 38.89 reported

There is some variance among the runs, but overall it works as expected (within the confidence interval). Run on 2xB200.

@gemini-code-assist (bot) commented

Summary of Changes

Hello @tugot17, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces tensor parallelism support to the LFM2 ShortConv layers, addressing a critical limitation where these models previously failed to run with tensor parallelism due to the use of non-parallelized linear layers. By integrating MergedColumnParallelLinear and RowParallelLinear for projections and correctly sharding convolution weights, the PR enables LFM2 models to leverage distributed computing for improved performance and scalability. The changes also include minor adjustments to cache shape calculation to ensure compatibility with the new parallel setup.

Highlights

  • Tensor Parallelism for LFM2 ShortConv: Enabled tensor parallelism support for LFM2 ShortConv layers, addressing previous limitations with distributed execution.
  • Parallel Linear Layers: Replaced standard nn.Linear with MergedColumnParallelLinear for in_proj and RowParallelLinear for out_proj to handle distributed projections.
  • Sharded Convolution Weights: Implemented sharding for conv_weight and conv_bias parameters along the hidden dimension, ensuring compatibility with tensor parallelism.
  • Cache Shape Calculation Fix: Adjusted Mamba2StateShape creation to pass num_heads=tp_size to satisfy divisibility checks, even though the temporal state for ShortConv is empty.


@gemini-code-assist bot left a comment

Code Review

This pull request successfully integrates tensor parallelism into the LFM2 ShortConv layers, addressing the AssertionError related to divisibility checks when tp_size > 1. The changes correctly replace nn.Linear layers with MergedColumnParallelLinear and RowParallelLinear for input and output projections, respectively. Additionally, convolutional weights and biases are now sharded along the hidden dimension, and the Mamba2StateShape creation is adjusted to ensure compatibility with tensor parallelism. The removal of unused variables and simplified logic for attention layer IDs also contributes to code cleanliness.

Comment on lines +512 to +516
if ".conv.conv.weight" in name:
name = name.replace(".conv.conv.weight", ".conv.conv_weight")
loaded_weight = loaded_weight.squeeze(1) # (D, 1, K) -> (D, K)
if ".conv.conv.bias" in name:
name = name.replace(".conv.conv.bias", ".conv.conv_bias")
Severity: medium

The change in weight naming from .conv.weight to .conv.conv.weight and .conv.bias to .conv.conv.bias suggests a discrepancy between the internal naming convention of the SGLang model and the HuggingFace checkpoint. While this fix addresses the loading issue, it would be beneficial to add a comment explaining this specific naming adaptation, especially if it's a common pattern for LFM2 models or a known quirk of the upstream checkpoint. This improves clarity for future maintainers.
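
For illustration, here is a hedged sketch of where that rename typically sits in a model's load_weights loop (the surrounding loop and default_weight_loader follow the common sglang pattern; the actual LFM2 method may differ):

from sglang.srt.model_loader.weight_utils import default_weight_loader

def load_weights(self, weights):
    params_dict = dict(self.named_parameters())
    for name, loaded_weight in weights:
        # Assumption: the HF checkpoint stores the depthwise conv as an
        # nn.Conv1d weight ("...conv.conv.weight", shape (D, 1, K)), while
        # sglang keeps a bare parameter ("...conv.conv_weight", shape (D, K))
        # so it can be sharded along D; hence the rename and squeeze.
        if ".conv.conv.weight" in name:
            name = name.replace(".conv.conv.weight", ".conv.conv_weight")
            loaded_weight = loaded_weight.squeeze(1)
        if ".conv.conv.bias" in name:
            name = name.replace(".conv.conv.bias", ".conv.conv_bias")
        param = params_dict[name]
        weight_loader = getattr(param, "weight_loader", default_weight_loader)
        weight_loader(param, loaded_weight)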

@tugot17 (Author) commented on Jan 26, 2026

Hi @ispobock, I realized over the weekend that in #16890 the TP support will not work out of the box.

What do you think about this solution?

@ispobock (Collaborator) commented on Feb 8, 2026

/tag-and-rerun-ci

github-actions bot added the run-ci label on Feb 8, 2026
@ispobock (Collaborator) commented on Feb 8, 2026

@tugot17 Thanks for fixing!

ispobock merged commit 656a3d7 into sgl-project:main on Feb 8, 2026
195 of 217 checks passed