
[Frontend] Enforce tokenize=False when applying chat template#27205

Merged
DarkLight1337 merged 4 commits into main from vllm-ghsa-69j4-grxj-j64p
Oct 21, 2025

Conversation

@russellb
Member

Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
@russellb
Member Author

Previously reviewed and approved by @DarkLight1337.

@mergify mergify bot added the frontend label Oct 20, 2025
@DarkLight1337 DarkLight1337 enabled auto-merge (squash) October 20, 2025 15:25
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Oct 20, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request addresses a critical security vulnerability (GHSA-69j4-grxj-j64p) related to arbitrary code execution via chat templates. The changes effectively mitigate this risk by enforcing tokenize=False when applying HuggingFace chat templates and by rejecting tokenize and chat_template parameters within chat_template_kwargs. This prevents the vulnerable code path in the transformers library from being executed. The implementation is clean, and the accompanying tests correctly verify the new security constraints. The changes are well-targeted and appear to be a solid fix for the reported vulnerability.
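The two constraints the review describes can be sketched as follows. This is a minimal illustration, not vLLM's actual code: the helper names (`validate_chat_template_kwargs`, `render_prompt`) and the error message are hypothetical; only the behavior — rejecting `tokenize` and `chat_template` inside `chat_template_kwargs`, and hard-coding `tokenize=False` in the `apply_chat_template` call — comes from the PR description above.

```python
# Hypothetical sketch of the fix described in the review comment.
# Caller-supplied chat_template_kwargs must not be able to re-enable the
# vulnerable tokenization path or swap in an arbitrary template string.

FORBIDDEN_KWARGS = {"tokenize", "chat_template"}


def validate_chat_template_kwargs(chat_template_kwargs):
    """Reject kwargs that could override the enforced arguments."""
    chat_template_kwargs = chat_template_kwargs or {}
    bad = FORBIDDEN_KWARGS & chat_template_kwargs.keys()
    if bad:
        raise ValueError(
            f"Overriding {sorted(bad)} via chat_template_kwargs is not allowed"
        )
    return chat_template_kwargs


def render_prompt(tokenizer, messages, chat_template=None,
                  chat_template_kwargs=None):
    kwargs = validate_chat_template_kwargs(chat_template_kwargs)
    # tokenize=False is hard-coded: apply_chat_template only renders the
    # template to a string, and tokenization happens separately through the
    # server's own tokenization path.
    return tokenizer.apply_chat_template(
        messages,
        chat_template=chat_template,
        tokenize=False,
        **kwargs,
    )
```

With this shape, a request carrying `{"chat_template_kwargs": {"tokenize": true}}` is rejected before the template is ever applied, so the vulnerable code path in transformers is never reached.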

@simon-mo simon-mo added this to the v0.11.1 milestone Oct 20, 2025
@Alexei-V-Ivanov-AMD
Collaborator

Alexei-V-Ivanov-AMD commented Oct 20, 2025

@russellb if you'd like to run the AMD tests (https://buildkite.com/vllm/amd-ci/builds/410), please rebase your feature branch onto any commit after the introduction of the file test-amd.yaml (#26852).

@DarkLight1337 DarkLight1337 merged commit 3ada34f into main Oct 21, 2025
50 checks passed
@DarkLight1337 DarkLight1337 deleted the vllm-ghsa-69j4-grxj-j64p branch October 21, 2025 02:57
Zhuul pushed a commit to Zhuul/vllm that referenced this pull request Oct 21, 2025
…roject#27205)

Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
0xrushi pushed a commit to 0xrushi/vllm that referenced this pull request Oct 26, 2025
…roject#27205)

Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Signed-off-by: 0xrushi <6279035+0xrushi@users.noreply.github.com>
Chenyaaang pushed a commit to Chenyaaang/vllm that referenced this pull request Oct 28, 2025
…roject#27205)

Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
ilmarkov pushed a commit to neuralmagic/vllm that referenced this pull request Nov 7, 2025
…roject#27205)

Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
rtourgeman pushed a commit to rtourgeman/vllm that referenced this pull request Nov 10, 2025
…roject#27205)

Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
devpatelio pushed a commit to SumanthRH/vllm that referenced this pull request Nov 29, 2025
…roject#27205)

Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
npanpaliya pushed a commit to odh-on-pz/vllm-cpu that referenced this pull request Dec 9, 2025
…roject/vllm#27205)

Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
npanpaliya pushed a commit to odh-on-pz/vllm-cpu that referenced this pull request Dec 9, 2025
- vllm-project/vllm#25896
- vllm-project/vllm#27205
- vllm-project/vllm#27204
- vllm-project/vllm#27431
- chat_utils: fix resolve_chat_template_kwargs duplication
- vllm-project/vllm#27556
- vllm-project/vllm#25996
- requirements/rocm.txt: pin triton==3.3.0 (from build requirements)
- Dockerfile*.ubi: bump base image tag to 9.6-1760340988
- Dockerfile*.ubi: pre-download tiktoken tokenizers (o200k_base)
(https://issues.redhat.com/browse/INFERENG-2959)
- Dockerfile.ubi: add missing `cuda-cudart-devel` package, required for
deepgeemm JITs
- vllm-project/vllm#25999
- vllm-project/vllm#26416

Related: neuralmagic/nm-cicd#313

Labels

frontend, ready (ONLY add when PR is ready to merge/full CI is needed)

5 participants