
[diffusion] feat: support efficient sequence shard #18161

Merged
mickqian merged 6 commits into sgl-project:main from nono-Sang:feat_seq_shard on Feb 8, 2026
Conversation

@nono-Sang (Contributor) commented Feb 3, 2026

Motivation

For video models, when SGLang enables SP (Sequence Parallelism), it partitions the sequence at the frame level. The number of frames must be padded to satisfy the divisibility condition, so the padding cost is pad_frames * latent_H * latent_W tokens. Instead, we can partition directly along the flattened frames * latent_H * latent_W dimension, which significantly reduces the number of padding tokens and improves computational efficiency.
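For a rough sense of the difference, here is a minimal sketch of the two padding costs (variable names are illustrative, not taken from the SGLang code):

```python
def padding_tokens(frames: int, latent_h: int, latent_w: int, sp_size: int):
    """Compare padding cost of frame-level vs. flattened sequence-level sharding."""
    # Frame-level: pad the frame count up to a multiple of sp_size;
    # every padded frame costs a full latent_h * latent_w tokens.
    pad_frames = (-frames) % sp_size
    frame_mode_pad = pad_frames * latent_h * latent_w

    # Sequence-level: flatten to frames * latent_h * latent_w tokens first,
    # then pad that total up to a multiple of sp_size.
    seq_len = frames * latent_h * latent_w
    seq_mode_pad = (-seq_len) % sp_size
    return frame_mode_pad, seq_mode_pad

# e.g. 21 latent frames, a 30x52 latent grid, SP degree 8
print(padding_tokens(21, 30, 52, 8))  # (4680, 0)
```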

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@github-actions github-actions bot added the diffusion SGLang Diffusion label Feb 3, 2026
@gemini-code-assist (Contributor) commented
Summary of Changes

Hello @nono-Sang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a more efficient sequence parallelism strategy for video models, moving from frame-level partitioning to sharding along the flattened frames * H * W dimension. This change drastically reduces the amount of padding required and boosts computational efficiency during inference. The changes involve updating configuration parameters, refining how latent data is sharded, and adapting the core wanvideo model's forward pass to manage the new sharding approach and its associated positional embeddings.

Highlights

  • Efficient Sequence Sharding: Introduced a new sp_shard_mode configuration option to enable sharding along the frames * H * W dimension for video models, significantly reducing padding tokens and improving computational efficiency compared to frame-level partitioning.
  • Dynamic Rotary Positional Embedding (RoPE) Adjustment: Implemented conditional logic within the wanvideo model's forward pass to correctly calculate Rotary Positional Embeddings (RoPE) for both the traditional 'frame' sharding and the new 'sequence' sharding modes, ensuring accurate positional encoding for distributed sequences.
  • Integrated Sharding and Gathering Logic: Modified the wanvideo model to handle padding, sharding of hidden states across SP ranks, and subsequent re-gathering using sequence_model_parallel_all_gather after processing, ensuring seamless distributed inference.
  • Frame Adjustment Disablement: Disabled the adjust_frames mechanism when the 'sequence' sharding mode is active, preventing redundant adjustments and optimizing the processing pipeline.
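
As a rough illustration of the pad → shard → all-gather flow described in the highlights above (a sketch under assumptions; `all_gather_fn` stands in for `sequence_model_parallel_all_gather`, and the real wanvideo.py forward pass differs in detail):

```python
import torch

def shard_forward_sketch(hidden_states, sp_rank, sp_size, all_gather_fn):
    """Pad the flattened token dim, keep this rank's slice, run the blocks,
    then gather the full sequence back and strip the padding."""
    batch, seq_len, dim = hidden_states.shape

    # Pad the token dimension so it divides evenly across SP ranks.
    pad = (-seq_len) % sp_size
    if pad:
        hidden_states = torch.nn.functional.pad(hidden_states, (0, 0, 0, pad))

    # Each rank keeps only its contiguous chunk of tokens.
    chunk = hidden_states.shape[1] // sp_size
    local = hidden_states[:, sp_rank * chunk : (sp_rank + 1) * chunk]

    # ... transformer blocks run on `local` with RoPE built for this chunk ...

    # Re-assemble the full sequence and drop the padding tokens.
    full = all_gather_fn(local, dim=1)
    return full[:, :seq_len]
```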


Changelog
  • python/sglang/multimodal_gen/configs/pipeline_configs/base.py
    • Added sp_shard_mode configuration parameter with a default value of 'frame'.
    • Modified shard_latents_for_sp to bypass frame-level sharding if sp_shard_mode is set to 'sequence'.
  • python/sglang/multimodal_gen/configs/sample/sampling_params.py
    • Disabled adjust_frames when sp_shard_mode is configured as 'sequence'.
  • python/sglang/multimodal_gen/runtime/models/dits/wanvideo.py
    • Imported sequence_model_parallel_all_gather and get_sp_group for distributed operations.
    • Implemented conditional logic in the forward method to manage 'sequence' sharding for hidden_states, including padding, sharding, and re-gathering.
    • Introduced _compute_rope_for_sequence_shard to calculate Rotary Positional Embeddings tailored for the new sequence sharding scheme.
  • python/sglang/multimodal_gen/runtime/pipelines_core/stages/denoising.py
    • Propagated the sp_shard_mode from pipeline_config to the batch object for consistent use across the pipeline.
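
Taken together, the config-side switch might look roughly like the following (a sketch: the field and method names come from the changelog above, but the bodies are assumptions, not the actual implementation):

```python
from dataclasses import dataclass

@dataclass
class PipelineConfig:  # simplified stand-in for the real class
    sp_shard_mode: str = "frame"  # "frame" (legacy) or "sequence" (this PR)

    def shard_latents_for_sp(self, latents, sp_size):
        if self.sp_shard_mode == "sequence":
            # Sequence mode: no frame-level sharding here; the DiT forward
            # pass pads and shards the flattened token dimension instead.
            return latents
        # Frame mode: split along the frame axis (the frame count must
        # already be padded to a multiple of sp_size).
        ...
```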

@gemini-code-assist bot left a comment

Code Review

This pull request introduces an efficient sequence sharding mechanism for sequence parallelism in video models. The changes allow partitioning along the frames * H * W dimension, which reduces padding and improves computational efficiency compared to the previous frame-level partitioning.

The implementation is well-structured and consistent across the modified files:

  • A new sp_shard_mode configuration is added to PipelineConfig to switch between "frame" and "sequence" sharding modes.
  • The sampling parameters are adjusted to disable frame adjustment when sequence sharding is enabled, as it's no longer necessary.
  • The core logic in WanTransformer3DModel.forward correctly handles sequence padding, sharding, RoPE calculation for sharded sequences, and gathering the results.
  • The changes are propagated correctly through the denoising pipeline stage.

The code is clean and the logic appears to be sound. I don't see any issues with the implementation. This is a great improvement for sequence parallelism efficiency.
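
For intuition on the RoPE point: with sequence sharding, each rank has to build rotary embeddings from its global token offsets rather than restarting at zero. A 1D analogue of what `_compute_rope_for_sequence_shard` presumably does (the helper below is hypothetical, not the actual implementation, which works on (frame, height, width) positions):

```python
import torch

def rope_positions_for_shard(padded_seq_len: int, sp_rank: int, sp_size: int):
    """Global position indices for this rank's token chunk, so rotary phases
    match the unsharded sequence instead of repeating on every rank."""
    chunk = padded_seq_len // sp_size
    start = sp_rank * chunk
    return torch.arange(start, start + chunk)
```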

@mickqian (Collaborator) commented Feb 8, 2026

/tag-and-rerun-ci

@github-actions github-actions bot added the run-ci label Feb 8, 2026
@RubiaCx (Collaborator) commented Feb 8, 2026

/tag-and-rerun-ci

1 similar comment from @mickqian (Collaborator), Feb 8, 2026: /tag-and-rerun-ci

@mickqian mickqian merged commit 43eecd8 into sgl-project:main Feb 8, 2026
132 of 138 checks passed
charlesHsuGG pushed a commit to charlesHsuGG/sglang that referenced this pull request Feb 9, 2026
Johnsonms pushed a commit to Johnsonms/sglang that referenced this pull request Feb 14, 2026
suppress_logs: bool = False

return_file_paths_only: bool = True
enable_sequence_shard: bool = False
@mickqian (Collaborator) commented on the diff above, Feb 23, 2026:

should we make it default enabled?

1StepForever pushed a commit to 1StepForever/sglang that referenced this pull request Feb 26, 2026

Labels

diffusion SGLang Diffusion run-ci

3 participants