
[Feat] add chunk_gated_delta_rule triton support #232

Merged
iforgetmyname merged 4 commits into sgl-project:main from ltcs11:main
Dec 4, 2025

Conversation

@ltcs11 (Contributor) commented Dec 3, 2025

  1. Add all Triton ops (for chunk_size=64) for chunk_gated_delta_rule, including the following:
  • chunk_gated_delta_rule_fwd_h
  • chunk_fwd_o
  • chunk_scaled_dot_kkt_fwd
  • chunk_local_cumsum_scalar
  • solve_tril (chunk size 64 only)
  • recompute_w_u_fwd
  2. Add a unit test for chunk_gated_delta_rule

[TODO] Currently these merged ops only support VARLEN with head dim 128 and chunk_size 64 (for Qwen-Next-80B)
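For reference, the operation these chunked kernels implement can be written as a naive per-timestep recurrence, which is also what a unit test would compare the Triton path against. A minimal NumPy sketch (function name and single-head shapes are illustrative, not from this PR; it assumes the gated delta rule S_t = exp(g_t)·S_{t−1}(I − β_t k_t k_tᵀ) + β_t k_t v_tᵀ with output o_t = S_tᵀ q_t):

```python
import numpy as np

def gated_delta_rule_ref(q, k, v, g, beta):
    """Naive per-timestep reference for the gated delta rule (single head).

    q, k: [T, Dk]; v: [T, Dv]; g: [T] log decay (<= 0);
    beta: [T] update strength in (0, 1). Returns o: [T, Dv].
    """
    T, dk = k.shape
    dv = v.shape[1]
    S = np.zeros((dk, dv))            # recurrent state
    o = np.empty((T, dv))
    for t in range(T):
        S = np.exp(g[t]) * S          # gated decay of the state
        pred = S.T @ k[t]             # state's current readout for key k[t]
        # delta-rule update: move the readout for k[t] toward v[t]
        S = S + np.outer(k[t], beta[t] * (v[t] - pred))
        o[t] = S.T @ q[t]             # output for query q[t]
    return o
```

The chunked kernels in this PR compute the same quantity blockwise (chunk_size=64) instead of token by token, which is what makes the intra-chunk `solve_tril` and `chunk_scaled_dot_kkt_fwd` steps necessary.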

@gemini-code-assist (Contributor)

Summary of Changes

Hello @ltcs11, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the chunk_gated_delta_rule implementation by offloading its core computations to highly optimized Triton kernels, specifically designed for NPU acceleration. This change aims to boost the performance of gated delta rule operations, particularly for scenarios involving variable-length sequences, while maintaining numerical accuracy through comprehensive testing.

Highlights

  • Triton Kernel Support: Introduced Triton kernel support for the chunk_gated_delta_rule operation, enabling NPU (Neural Processing Unit) acceleration for improved performance.
  • New chunk_ops.py Module: A new file chunk_ops.py has been added, centralizing various Triton kernels required for the gated delta rule, including chunk_gated_delta_rule_fwd_h_npu, chunk_fwd_o_npu, chunk_scaled_dot_kkt_fwd_npu, chunk_local_cumsum_scalar_npu, solve_tril_npu, and recompute_w_u_fwd_npu.
  • Refactored chunk.py: The chunk.py file has been updated to integrate and utilize the newly introduced Triton kernels for chunk_gated_delta_rule, and now includes optional L2 normalization for query and key tensors.
  • L2 Normalization Utility: A new l2norm.py file provides Triton-accelerated L2 normalization functionality, which is now optionally used within the chunk_gated_delta_rule.
  • Enhanced Utilities: The utils.py module has been extended with Triton-specific imports, fast math operations, a tensor_cache decorator, and helper functions for handling variable-length sequences (e.g., prepare_chunk_indices, prepare_chunk_offsets).
  • Comprehensive Testing: A new test file test_gated_delta.py has been added to validate the Triton implementation of chunk_gated_delta_rule_fwd against its native counterpart, covering various configurations including variable-length inputs and performance comparisons.
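To illustrate one of the simpler pieces in the list above: a chunk-local cumulative sum of the scalar log-decay gate restarts the cumsum at every chunk boundary, so each chunk can later be processed independently. A plain-NumPy sketch under that assumption (names and shapes illustrative, not the kernel's actual signature; the Triton version would instead launch one program per chunk and head):

```python
import numpy as np

def chunk_local_cumsum(g, chunk_size=64):
    """Inclusive cumsum of a per-token scalar gate, restarted per chunk.

    g: [T] log-decay values. Position t of the result holds the cumsum of
    g over t's chunk up to and including t.
    """
    T = g.shape[0]
    out = np.empty_like(g)
    for start in range(0, T, chunk_size):
        end = min(start + chunk_size, T)       # last chunk may be short
        out[start:end] = np.cumsum(g[start:end])
    return out
```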

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment
Code Review

This pull request introduces Triton kernel implementations for chunk_gated_delta_rule to support NPU hardware, which is a significant feature addition. The changes include new Triton kernels for various sub-operations, utility functions, and corresponding tests.

My review has identified several critical bugs in the Triton kernels related to incorrect handling of batching and parallelism, which would lead to runtime errors or incorrect outputs for batch sizes greater than one. I've also pointed out some areas for improvement regarding code clarity, such as removing debug prints and documenting magic numbers used for performance tuning. The overall structure is good, but the identified bugs need to be addressed to ensure correctness and performance.
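Batching bugs of the kind flagged here often come from flattening the (batch, head, chunk) launch grid into a single program id and then decoding it inconsistently across kernels, which happens to work at batch size 1 and breaks beyond it. A hypothetical Python sketch of a consistent encode/decode pair (not code from this PR; in a real Triton kernel the decode would run on `tl.program_id(0)`):

```python
def encode_pid(batch, head, chunk, num_heads, num_chunks):
    # Flatten (batch, head, chunk) into one grid index, chunk fastest-varying.
    return (batch * num_heads + head) * num_chunks + chunk

def decode_pid(pid, num_heads, num_chunks):
    # Exact inverse of encode_pid; every kernel sharing the grid must use
    # the same ordering, or batch > 1 silently reads the wrong slices.
    chunk = pid % num_chunks
    head = (pid // num_chunks) % num_heads
    batch = pid // (num_chunks * num_heads)
    return batch, head, chunk
```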

@iforgetmyname iforgetmyname merged commit f01adae into sgl-project:main Dec 4, 2025
4 checks passed
oagniqgnat added a commit to oagniqgnat/sgl-kernel-npu that referenced this pull request Dec 8, 2025
…unning

* upstream/main:
  rework release build (sgl-project#237)
  release build (sgl-project#231)
  Add two mixed-race tests: normal and low latency, normal and fused deep moe. (sgl-project#206)
  [Feat] add chunk_gated_delta_rule triton support (sgl-project#232)
  [Bugfix] add padding cases for causal_conv1d_update (sgl-project#235)
  [DFX] Adaptable to multiple model validations for fused moe (sgl-project#229)
  Add swiglu_oai for GPT-OSS (sgl-project#233)
oagniqgnat added a commit to oagniqgnat/sgl-kernel-npu that referenced this pull request Dec 25, 2025
…into main

* 'main' of https://github.com/sgl-project/sgl-kernel-npu: (44 commits)
  fix a2 deepep doc (sgl-project#279)
  prepare build and release for a2 (sgl-project#273)
  add deepep a2 doc (sgl-project#277)
  Add the long-sequence ant migration feature for the prefill combine operator. (sgl-project#267)
  Fixing Chinese character encoding issues (sgl-project#275)
  [Bugfix] fix TorchNpuHelper rename bugs (sgl-project#265)
  qwen3-next op optimize (sgl-project#257)
  fixup bug in conv1d_update_fn (sgl-project#259)
  add long sequence feature for normal deep_ep (sgl-project#254)
  modify md file (sgl-project#255)
  sgl-kernel-npu add release version (sgl-project#253)
  add a script for generalize test (sgl-project#131)
  fixing release ci (sgl-project#248)
  fix normal and low_latency layerd rdma_data_size when mixed running (sgl-project#246)
  Fixing the issue where the A2 notify_dispatch operator gets stuck on cann8.3 (sgl-project#245)
  rework release build (sgl-project#237)
  release build (sgl-project#231)
  Add two mixed-race tests: normal and low latency, normal and fused deep moe. (sgl-project#206)
  [Feat] add chunk_gated_delta_rule triton support (sgl-project#232)
  [Bugfix] add padding cases for causal_conv1d_update (sgl-project#235)
  ...