Fix LoRA prefix cache corruption by using lora_int_id #31069
Open
westers wants to merge 4 commits into vllm-project:main
Conversation
Fixes issue vllm-project#28052, where AMD Radeon 780M (gfx1103) users encounter 'HIP error: invalid device function' when using official Docker images. The Docker image was built without gfx1103 in PYTORCH_ROCM_ARCH, so PyTorch lacked compiled kernels for this architecture. This fix adds gfx1103 to the default GPU targets, enabling support for:
- AMD Radeon 780M (gfx1103)
- Other RDNA 3 integrated graphics with gfx1103
Tested on: AMD Radeon 780M (reproduces the failure and confirms the root cause).
Signed-off-by: Steve Westerhouse <westers@gmail.com>
Addresses code review feedback. Without this, the build would fail because gfx1103 is not recognized as a GFX11 architecture, causing compilation to fall into an assert(false) code path.
Signed-off-by: Steve Westerhouse <westers@gmail.com>
All pre-commit hooks pass. clang-format applied to C++ files.
Signed-off-by: Steve Westers <westers@gmail.com>
Signed-off-by: westers <steve.westerhouse@origami-analytics.com>
Fixes vllm-project#30931. The KV cache hash was incorrectly using lora_name instead of lora_int_id for LoRA requests. This caused different LoRA configurations with the same name but different IDs to incorrectly share cache blocks, leading to wrong outputs. Changed _gen_lora_extra_hash_keys() to use lora_int_id instead of lora_name, since lora_int_id is documented as being "globally unique for a given adapter".
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: westers <steve.westerhouse@origami-analytics.com>
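To make the failure mode described in this commit concrete, here is a minimal, illustrative pair of requests; the adapter name, IDs, and paths are made up, and the point is only that the old lora_name-based key could not tell them apart:

```python
from vllm.lora.request import LoRARequest

# Two distinct adapters that happen to share a display name
# (name, IDs, and paths here are hypothetical).
lora_v1 = LoRARequest("my-adapter", 1, "/adapters/my-adapter-v1")
lora_v2 = LoRARequest("my-adapter", 2, "/adapters/my-adapter-v2")

# Keyed by lora_name, the prefix cache treated these as interchangeable,
# so a request using lora_v2 could reuse KV blocks computed under lora_v1.
assert lora_v1.lora_name == lora_v2.lora_name
assert lora_v1.lora_int_id != lora_v2.lora_int_id
```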
Contributor
Code Review
This pull request correctly fixes a bug in the LoRA prefix cache by using lora_int_id instead of lora_name for generating the KV cache hash. This change prevents cache corruption when different LoRA configurations share the same name. The implementation is sound and aligns with the documented purpose of lora_int_id as a unique identifier. Additionally, the pull request includes updates to support the gfx1103 ROCm architecture, which are consistent across the CUDA source and the Docker build configuration. The changes are well-justified and improve both correctness and hardware support.
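As a rough illustration of why the key choice matters (this is not vLLM's actual hashing code, just a simplified sketch): block hashes are derived from the parent hash, the block's token IDs, and any extra keys, so an extra key that is not unique per adapter lets two different adapters collide.

```python
import hashlib

def block_hash(parent: bytes, token_ids: tuple[int, ...], extra_keys: tuple) -> bytes:
    """Simplified stand-in for a prefix-cache block hash."""
    return hashlib.sha256(repr((parent, token_ids, extra_keys)).encode()).digest()

tokens = (101, 2023, 2003, 1037, 3231)

# Old behaviour: the extra key is the (non-unique) lora_name, so two
# different adapters named "my-adapter" produce the same block hash.
assert block_hash(b"", tokens, ("my-adapter",)) == block_hash(b"", tokens, ("my-adapter",))

# Fixed behaviour: the extra key is the globally unique lora_int_id,
# so the same tokens under different adapters get distinct cache entries.
assert block_hash(b"", tokens, (1,)) != block_hash(b"", tokens, (2,))
```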
Summary
Fixes #30931 - LoRA prefix cache corruption
The KV cache hash was incorrectly using `lora_name` instead of `lora_int_id` for LoRA requests. This caused different LoRA configurations with the same name but different IDs to incorrectly share cache blocks, leading to wrong outputs.

Changes
- Changed `_gen_lora_extra_hash_keys()` in `vllm/v1/core/kv_cache_utils.py` to use `lora_int_id` instead of `lora_name` (see the sketch below)
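A minimal sketch of the change, assuming the helper's shape from the description above; the actual function in `vllm/v1/core/kv_cache_utils.py` may differ in signature and typing:

```python
def _gen_lora_extra_hash_keys(request) -> list[int]:
    """Extra keys mixed into the prefix-cache block hash for LoRA requests."""
    if not request.lora_request:
        return []
    # Previously this returned [request.lora_request.lora_name]. lora_name is
    # not guaranteed unique, so two different adapters with the same name
    # hashed identically and shared cache blocks.
    return [request.lora_request.lora_int_id]
```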
Rationale

The `LoRARequest` class documentation states that `lora_int_id` "must be globally unique for a given adapter", making it the correct identifier for cache differentiation.

Testing
🤖 Generated with Claude Code