[AMD CI] Add moonshotai/Kimi-K2-Instruct-0905 testcases#17656

Merged
HaiShaw merged 9 commits into sgl-project:main from sogalin:kimi-k2
Jan 26, 2026

Conversation

@sogalin
Contributor

@sogalin sogalin commented Jan 23, 2026

Motivation

Add kimi_k2_instruct model test cases to increase test coverage on AMD GPUs.
Change the CI job partition count from 2 to 3.

Modifications

Add one test script: test/registered/amd/test_kimi_k2_instruct.py

Accuracy Tests

Accuracy: 0.956
Invalid: 0.000
Latency: 25.643 s
Output throughput: 5280.565 token/s

Benchmarking and Profiling

+-------------+--------+------------+-----------------+
| Latency (s) | Tokens | Acc Length | Speed (token/s) |
+-------------+--------+------------+-----------------+
|      21.410 |   1001 |      1.000 |           46.75 |
+-------------+--------+------------+-----------------+
speed=46.75
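
As a sanity check, the reported speed is consistent with tokens divided by latency (recomputing the table above; that this is exactly how the benchmark derives speed is an assumption):

```python
# Recompute decode speed from the benchmark table above.
# Assumes speed = generated tokens / end-to-end latency.
tokens = 1001
latency_s = 21.410
speed = tokens / latency_s
print(f"{speed:.2f} token/s")  # ~46.75 token/s
```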

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist
Contributor

Summary of Changes

Hello @sogalin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request expands the continuous integration test coverage for AMD GPUs by integrating new test cases for the moonshotai/Kimi-K2-Instruct-0905 model. The changes ensure that this specific model's performance and accuracy are regularly validated on AMD hardware, utilizing optimized server configurations for attention mechanisms and model loading.

Highlights

  • New Model Test Cases: Introduces comprehensive test cases for the moonshotai/Kimi-K2-Instruct-0905 model.
  • AMD CI Integration: Registers the new test suite for execution on AMD CI, targeting the stage-c-test-large-8-gpu-amd-mi35x environment.
  • Performance and Accuracy Benchmarking: Includes tests for model accuracy using the GSM8K dataset (asserting >0.94) and single-batch inference speed (asserting >45 token/s).
  • Optimized Server Configuration: Configures the SGLang server with specific attention backends (triton for decode, aiter for prefill) and multithreaded model loading for AMD GPUs.
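
The assertion pattern described in the highlights can be sketched as a minimal unittest case. This is a self-contained illustration, not the actual test file: run_gsm8k_eval and run_bs1_bench are hypothetical stand-ins for the real server launch, GSM8K evaluation, and batch-size-1 benchmark, returning the numbers reported in this PR.

```python
import unittest

# Hypothetical stand-ins for the real eval/benchmark helpers; the actual
# test launches an SGLang server and runs few-shot GSM8K against it.
def run_gsm8k_eval():
    return {"accuracy": 0.956, "invalid": 0.000}

def run_bs1_bench():
    return {"speed": 46.75}

class TestKimiK2Instruct(unittest.TestCase):
    def test_a_gsm8k(self):
        metrics = run_gsm8k_eval()
        # Threshold from the PR: GSM8K accuracy must exceed 0.94.
        self.assertGreater(metrics["accuracy"], 0.94)

    def test_bs_1_speed(self):
        metrics = run_bs1_bench()
        # Threshold from the PR: single-batch speed must exceed 45 token/s.
        self.assertGreater(metrics["speed"], 45)

if __name__ == "__main__":
    unittest.main()
```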



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new test case for the moonshotai/Kimi-K2-Instruct-0905 model on AMD GPUs, which is a valuable addition for increasing test coverage. The code is well-structured and follows the project's testing conventions. My review includes a few suggestions to improve the robustness of URL parsing within the new test file, making it more maintainable and less prone to breaking from future changes to the test configuration.

@@ -0,0 +1,95 @@
import os
import unittest
from types import SimpleNamespace

medium

To robustly parse the base URL, it's recommended to use Python's built-in urlparse function. Please add the necessary import. This will make the URL handling in the test methods less brittle and more maintainable.

Suggested change
from types import SimpleNamespace
from types import SimpleNamespace
from urllib.parse import urlparse

Comment on lines +61 to +69
args = SimpleNamespace(
    num_shots=8,
    data_path=None,
    num_questions=1319,
    parallel=1319,
    max_new_tokens=512,
    host="http://127.0.0.1",
    port=int(self.base_url.split(":")[-1]),
)

medium

Hardcoding the host URL and manually parsing the port is brittle. For instance, if DEFAULT_URL_FOR_TEST were to use localhost instead of an IP address, this test could fail. Using urlparse (with the import added at the top of the file) to deconstruct self.base_url is a more robust approach.

Suggested change
args = SimpleNamespace(
    num_shots=8,
    data_path=None,
    num_questions=1319,
    parallel=1319,
    max_new_tokens=512,
    host="http://127.0.0.1",
    port=int(self.base_url.split(":")[-1]),
)
parsed_url = urlparse(self.base_url)
args = SimpleNamespace(
    num_shots=8,
    data_path=None,
    num_questions=1319,
    parallel=1319,
    max_new_tokens=512,
    host=f"{parsed_url.scheme}://{parsed_url.hostname}",
    port=parsed_url.port,
)

        self.assertGreater(metrics["accuracy"], 0.94)

    def test_bs_1_speed(self):
        args = BenchArgs(port=int(self.base_url.split(":")[-1]), max_new_tokens=2048)

medium

Similar to test_a_gsm8k, manually parsing the port from the URL is brittle. Using urlparse provides a more robust way to extract the port and improves maintainability.

Suggested change
args = BenchArgs(port=int(self.base_url.split(":")[-1]), max_new_tokens=2048)
parsed_url = urlparse(self.base_url)
args = BenchArgs(port=parsed_url.port, max_new_tokens=2048)
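
To see why urlparse is the more robust choice, compare the two approaches on a URL with a path suffix (the URLs here are hypothetical examples, not the actual test configuration):

```python
from urllib.parse import urlparse

base_url = "http://127.0.0.1:30000"          # hypothetical test URL
url_with_path = "http://localhost:30000/v1"  # same server, path appended

# urlparse handles hostnames, IPs, and trailing paths uniformly.
for url in (base_url, url_with_path):
    parsed = urlparse(url)
    print(f"{parsed.scheme}://{parsed.hostname}", parsed.port)

# The split-based approach breaks as soon as a path follows the port:
print(url_with_path.split(":")[-1])  # '30000/v1' -- int() would raise here
```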

@michaelzhang-ai
Collaborator

michaelzhang-ai commented Jan 23, 2026

Currently, 2 stage-c mi35x tests may be the limit due to long queue times (since there are limited mi35x runners). We may either put dpsk3.2 or kimi mi35x in PR CI and put the other one in nightly. cc: @yctseng0211 @bingxche

@yctseng0211
Collaborator

Currently, 2 stage-c mi35x tests may be the limit due to very long queue times (since there are limited mi35x runners). We may either put dpsk3.2 or kimi mi35x in PR CI and put the other one in nightly. cc: @yctseng0211 @bingxche

we will move dpsk3.2 to 325 8-gpu with this PR : #17633

@HaiShaw HaiShaw merged commit 738b1ac into sgl-project:main Jan 26, 2026
148 of 159 checks passed
Chen-0210 pushed a commit to Chen-0210/sglang that referenced this pull request Jan 30, 2026
Johnsonms pushed a commit to Johnsonms/sglang that referenced this pull request Feb 14, 2026