Guided decoding with xgrammar for TurboMind #3965
Merged
lvhan028 merged 18 commits into InternLM:main on Oct 13, 2025
Conversation
force-pushed 8b3e766 to 8fd6d05
Contributor
good job!
force-pushed 0362250 to 8bcbfff
force-pushed 9817089 to 4516ac7
Collaborator
Author
Done
force-pushed 4516ac7 to be27768
Collaborator
Author
Should be tentatively solved in #4028
lvhan028
approved these changes
Oct 13, 2025
Collaborator
May update the "structed_output.md"
lzhangzz
approved these changes
Oct 13, 2025
Collaborator
Author
Done
This was referenced Oct 16, 2025
Skyseaee pushed a commit to Skyseaee/lmdeploy that referenced this pull request Jan 4, 2026
* feat(turbomind): bring xGrammar into build
* feat(turbomind): add skeleton for guided decoding layers
* feat(turbomind): add implementation for naive bitmap mask with a loop
* add ModelRequest support for xgrammar
* feat: enable grammar init in turbomind
* fix: fix some bug and add initial tests
* feat: restructure the interface
* feat: speedup with cuda inplace kernel
* fix: fix test case
* fix: use stream from context instead of the default stream
* test: add matrix grammar test
* fix: simplify the bitmap apply kernel
* feat: move tensor allocation to ctor
* test: temporarily disable pytorch engine tests as it is faulty
* test: move timm to test requirements
* fix: enable openai guided decoding function for turbomind
* fix: fix `schema` not found issue by enforce pydantic serialize_by_alias
* docs: modify docs for structured output
Skyseaee pushed a commit to Skyseaee/lmdeploy that referenced this pull request Jan 4, 2026
Guided decoding with xgrammar for TurboMind (InternLM#3965) See merge request shopee/MLP/aip/llm/generater/lmdeploy!110
Motivation
LMDeploy's TurboMind backend is the fastest inference stack in the ecosystem, yet it still lacks Guided Decoding, a feature that is already available in the PyTorch backend and heavily requested by the community.
This PR closes the gap by bringing token-level, C++-native Guided Decoding to TurboMind while keeping the API 100% compatible with the existing PyTorch backend.
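As an illustration of that API compatibility, here is a hedged sketch of a request body for LMDeploy's OpenAI-compatible `/v1/chat/completions` endpoint. The model name and schema are made up for the example, and the `response_format` shape follows the standard OpenAI JSON-schema convention rather than any code in this PR:

```python
import json

# Hypothetical JSON schema the model output must conform to.
schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "population": {"type": "integer"},
    },
    "required": ["city", "population"],
}

# Illustrative chat-completion request body; the model id is a placeholder.
request_body = {
    "model": "internlm/internlm2_5-7b-chat",
    "messages": [{"role": "user", "content": "Describe Tokyo as JSON."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "city_info", "schema": schema},
    },
}
payload = json.dumps(request_body)
```

With this PR merged, the same payload should work whether the server was launched with the PyTorch or the TurboMind backend.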
The implementation is built on xGrammar (Apache-2.0), a high-performance C++ library that compiles JSON / Choice / Regex grammars into token FSMs and applies them with negligible overhead.
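To make the token-FSM idea concrete, here is a minimal NumPy sketch of how a compiled grammar's per-step token bitmask can be applied to logits. The packed-uint32 layout and the function name are illustrative, not xGrammar's actual API:

```python
import numpy as np

def apply_token_bitmask(logits, bitmask):
    """Mask logits of tokens the grammar disallows at this step.

    `bitmask` packs one bit per vocabulary token (1 = allowed),
    32 tokens per uint32 word. Disallowed logits become -inf, so
    sampling can never pick a grammar-violating token.
    """
    vocab = logits.shape[-1]
    token_ids = np.arange(vocab)
    allowed = (bitmask[token_ids // 32] >> (token_ids % 32)) & 1
    return np.where(allowed.astype(bool), logits, -np.inf)

# Toy vocabulary of 8 tokens; bits 1 and 3 are set, so only
# tokens 1 and 3 are allowed.
logits = np.zeros(8, dtype=np.float32)
mask = np.array([0b1010], dtype=np.uint32)
masked = apply_token_bitmask(logits, mask)
```

The C++ implementation in this PR does the same thing in-place with a light CUDA kernel instead of `np.where`.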
Modification
Build system

* `xgrammar` is added as a header-only dependency via CMake `FetchContent` (CUDA & Python bindings disabled).
* The `xgrammar::tokenizer_info` and `xgrammar::grammar_compiler` symbols are exposed under `lmdeploy::xgrammar`.

Core C++ changes

* The `DynamicDecodeLayer` pipeline is extended with two new layers:
  * `GuidedDecodeMaskLayer`: in `setup()`, compiles or reuses the grammar and builds a per-request token bitmask; in `forward()`, launches a light CUDA kernel to mask disallowed logits to `-INF`.
  * `GuidedDecodeUpdateLayer`: in `forward()`, calls `matcher->AcceptToken(output_id)` to advance the FSM.

Python frontend

* Reuses the `guided_decoding` utilities from the PyTorch backend; no new API surface.
* `turbo.TurboMindEngine` now accepts the same `response_format=` / `guided_json=` / `guided_choice=` arguments.
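To illustrate how the two new layers cooperate on each decode step, here is a toy Python sketch of the mask-then-advance loop. `ToyMatcher` and its `allowed_tokens` / `accept_token` methods are hypothetical stand-ins for xGrammar's grammar matcher:

```python
import numpy as np

class ToyMatcher:
    """Hypothetical stand-in for xGrammar's matcher: its 'grammar'
    accepts exactly the token sequence [2, 5, 7] and nothing else."""
    def __init__(self):
        self.expected = [2, 5, 7]
        self.pos = 0

    def allowed_tokens(self):
        return [self.expected[self.pos]] if self.pos < len(self.expected) else []

    def accept_token(self, tok):  # mirrors matcher->AcceptToken(output_id)
        assert tok == self.expected[self.pos]
        self.pos += 1

def decode_step(logits, matcher):
    # GuidedDecodeMaskLayer's role: mask disallowed logits to -inf.
    mask = np.full_like(logits, -np.inf)
    for t in matcher.allowed_tokens():
        mask[t] = 0.0
    tok = int(np.argmax(logits + mask))  # greedy sampling for the sketch
    # GuidedDecodeUpdateLayer's role: advance the FSM with the sampled token.
    matcher.accept_token(tok)
    return tok

rng = np.random.default_rng(0)
m = ToyMatcher()
out = [decode_step(rng.normal(size=16).astype(np.float32), m) for _ in range(3)]
# out == [2, 5, 7] regardless of the raw logits
```

The real layers do the masking on-GPU against the packed bitmask, but the control flow per step is the same: mask, sample, then accept.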