
Fix prefill latency performance drop of bench serving#14592

Merged
Kangyan-Zhou merged 5 commits into sgl-project:main from gaopengff:gaopengf/fix_bench_serving_input_len
Jan 30, 2026

Conversation

@gaopengff
Contributor

Motivation

When using prompts as input for bench serving, the server computes an incorrect input length because its encode operation adds special tokens. This can degrade prefill latency. We should truncate the client-side input ids to compensate.
Old:

input_ids(1024) -> client decode -> prompt -> server  encode -> input_ids(1025)

New:

input_ids(1024) -> truncate -> input_ids(1023) -> client decode -> prompt -> server encode -> input_ids(1024)
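The off-by-one above can be reproduced with a toy tokenizer that, like many HF tokenizers with add_special_tokens=True, prepends a BOS token on encode. This is a minimal sketch; the ToyTokenizer class and token ids are illustrative, not the benchmark's actual tokenizer:

```python
# Toy tokenizer that prepends one BOS token on encode, mimicking the
# server-side behavior described above. Illustrative only.
class ToyTokenizer:
    BOS = 0

    def num_special_tokens_to_add(self):
        return 1  # one BOS token is added on every encode

    def decode(self, ids):
        return " ".join(str(i) for i in ids)

    def encode(self, text):
        return [self.BOS] + [int(t) for t in text.split()]

tok = ToyTokenizer()
sampled_ids = list(range(1, 1025))  # the client's 1024 sampled token ids

# Old path: client decodes 1024 ids, server re-encodes and gets 1025.
old_server_len = len(tok.encode(tok.decode(sampled_ids)))

# New path: truncate by num_special_tokens_to_add() first, server gets 1024.
keep = len(sampled_ids) - tok.num_special_tokens_to_add()
new_server_len = len(tok.encode(tok.decode(sampled_ids[:keep])))
```

With this sketch, `old_server_len` is 1025 (one more token than requested), while `new_server_len` is exactly 1024, matching the intended prefill length.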

@gemini-code-assist
Contributor

Summary of Changes

Hello @gaopengff, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a performance regression observed in the prefill latency of benchmark serving. The root cause was a discrepancy in input length calculation: when prompts were used, the server's tokenizer would add special tokens, increasing the effective input length beyond what was intended. This change implements a truncation step that adjusts input_lens to account for these special tokens, ensuring accurate input lengths and restoring expected prefill latency.

Highlights

  • Performance Fix: Addresses a performance drop in prefill latency during benchmark serving, specifically when prompts are used as input.
  • Input Length Correction: Corrects an issue where the server's encoding operation added special tokens, leading to an incorrect and inflated input length.
  • Truncation Logic: Introduces a truncation step for input_lens to subtract the number of special tokens that would be added by the tokenizer, ensuring the final encoded input length matches the intended length.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly addresses a performance issue in benchmark serving caused by incorrect input lengths from server-side tokenization. The fix to truncate input lengths based on the number of special tokens is appropriate. I've provided one suggestion to further optimize the implementation by using a vectorized NumPy operation, which will improve performance, especially with a large number of prompts.

Comment on lines +1240 to +1242
num_special_tokens = int(tokenizer.num_special_tokens_to_add())
for i in range(num_prompts):
    input_lens[i] = max(0, input_lens[i] - num_special_tokens)

medium

For better performance and code clarity, you can replace the loop with a vectorized NumPy operation. This is more efficient, especially for a large num_prompts.

Additionally, the int() cast is redundant as tokenizer.num_special_tokens_to_add() already returns an integer.

        num_special_tokens = tokenizer.num_special_tokens_to_add()
        input_lens = np.maximum(0, input_lens - num_special_tokens)
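The loop and the suggested vectorized form are equivalent; a quick check, assuming input_lens is a NumPy integer array as in the benchmark script (the sample values here are hypothetical):

```python
import numpy as np

# Hypothetical per-prompt input lengths and one special token (e.g. BOS).
input_lens = np.array([1024, 512, 1, 0])
num_special_tokens = 1

# Loop version from the diff above.
loop_result = input_lens.copy()
for i in range(len(loop_result)):
    loop_result[i] = max(0, loop_result[i] - num_special_tokens)

# Vectorized version from the suggestion; np.maximum clamps at 0 elementwise.
vec_result = np.maximum(0, input_lens - num_special_tokens)

assert (loop_result == vec_result).all()
```

Both yield [1023, 511, 0, 0]; the clamp at 0 matters for the edge case where a sampled length is not larger than the number of special tokens.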

@mingfeima
Collaborator

@gaopengff could you explain that the vLLM benchmark uses the same logic and link to its code? That would be more straightforward.

@gaopengff
Contributor Author

vLLM uses the same logic at https://github.com/vllm-project/vllm/blob/v0.12.0/vllm/benchmarks/datasets.py#L484, which subtracts num_special_tokens to get real_input_len.

@mingfeima mingfeima requested a review from zhyncs January 8, 2026 02:00
@mingfeima
Collaborator

@zhyncs could you please find someone to review this one? It's a fix for the benchmark scripts.

@Kangyan-Zhou Kangyan-Zhou merged commit 7541da1 into sgl-project:main Jan 30, 2026
88 of 90 checks passed
charlesHsuGG pushed a commit to charlesHsuGG/sglang that referenced this pull request Jan 30, 2026
sfiisf pushed a commit to sfiisf/sglang that referenced this pull request Feb 5, 2026
Johnsonms pushed a commit to Johnsonms/sglang that referenced this pull request Feb 14, 2026

3 participants