
Conversation

@danielhanchen (Contributor)

No description provided.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @danielhanchen, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request updates LoRA (Low-Rank Adaptation) support for compatibility with vLLM. The changes consolidate how LoRA adapter tensors and embeddings are loaded, and relax a previous constraint on the LoRA extra vocabulary size, which should make integrating various LoRA configurations within the vLLM framework more flexible and robust.

Highlights

  • LoRA Tensor and Embedding Consolidation: The _load_adapter method now combines lora_tensors and lora_embeddings into a single dictionary before passing them to the LoRA model constructor, streamlining the data handling for adapter loading (see the sketch after this list).
  • Robust Configuration Access: The calculation for target_embedding_padding now uses getattr when accessing lora_extra_vocab_size from the configuration, making the code resilient to configurations where this attribute is missing.
  • Removed Extra Vocab Size Validation: The validation check that raised an error when a LoRA's extra_vocab_size exceeded the configured lora_extra_vocab_size has been removed, potentially allowing greater flexibility in LoRA model integration.
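
A minimal sketch of the first two highlights. All names here (lora_tensors, lora_embeddings, lora_config, vocab_size) are illustrative assumptions, not the exact unsloth/vLLM signatures:

def load_adapter_sketch(lora_tensors, lora_embeddings, lora_config, vocab_size):
    """Illustrative only; mirrors the consolidation and getattr changes above."""
    # Consolidation: merge weight tensors and embedding tensors into a single
    # dictionary before handing them to the LoRA model constructor.
    tensors = {**lora_tensors, **lora_embeddings}

    # Robust configuration access: tolerate configs that do not define
    # lora_extra_vocab_size by defaulting it to 0 instead of raising
    # AttributeError.
    lora_extra_vocab_size = getattr(lora_config, "lora_extra_vocab_size", 0)
    target_embedding_padding = vocab_size + lora_extra_vocab_size

    return tensors, target_embedding_padding
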
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist (bot) left a comment

Code Review

This pull request updates the LoRA loading mechanism for compatibility with vLLM. The key changes include merging LoRA tensors and embeddings into a single dictionary and making the lora_extra_vocab_size configuration optional for more flexibility. These changes appear to align with updates in vLLM. However, I have pointed out a potential issue with the removal of a validation check for the extra vocabulary size, which could lead to silent failures if not handled elsewhere. I've suggested reintroducing a more robust version of this check to maintain code safety.

The comment targets the removed validation shown below:

if lora.extra_vocab_size > self.lora_config.lora_extra_vocab_size:
    raise ValueError(f"LoRA added vocab size {lora.extra_vocab_size} "
                     f"is greater than lora_extra_vocab_size "
                     f"{self.lora_config.lora_extra_vocab_size}.")
return lora
Severity: high

The validation check for lora.extra_vocab_size has been removed. While making lora_extra_vocab_size optional in lora_config is a good improvement for flexibility, removing this validation entirely could be risky. If a LoRA adapter adds new vocabulary and lora_extra_vocab_size is not configured (defaulting to 0), it might lead to silent errors or incorrect model behavior if the underlying LoRAModel creation methods don't perform this validation.

It's safer to ensure this validation happens. If it's not handled elsewhere (e.g., within LoRAModel creation or peft_helper.validate_legal), please consider reintroducing a more robust version of this check.

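        # Treat a missing lora_extra_vocab_size attribute as 0 (no extra vocab).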
        lora_extra_vocab_size = getattr(self.lora_config, "lora_extra_vocab_size", 0)
        if lora.extra_vocab_size > lora_extra_vocab_size:
            raise ValueError(
                f"LoRA added vocab size {lora.extra_vocab_size} is greater "
                f"than lora_extra_vocab_size {lora_extra_vocab_size}."
            )
        return lora
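
For reference, the getattr fallback above makes a missing attribute behave exactly like a configured value of 0. A self-contained illustration (not project code):

from types import SimpleNamespace

cfg_unset = SimpleNamespace()                         # no lora_extra_vocab_size attribute
cfg_set = SimpleNamespace(lora_extra_vocab_size=256)

assert getattr(cfg_unset, "lora_extra_vocab_size", 0) == 0    # missing -> defaults to 0
assert getattr(cfg_set, "lora_extra_vocab_size", 0) == 256    # present -> actual value

With the check reinstated, an adapter that adds vocabulary fails fast with a clear ValueError when no extra vocab budget is configured, instead of failing silently downstream.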

@danielhanchen danielhanchen merged commit e915bca into unslothai:nightly Nov 25, 2025
danielhanchen added a commit that referenced this pull request Nov 25, 2025
* Update gpt_oss.py

* torch compile

* Update attention_sink.py

* Update common.py

* Update common.py

* Patches

* Compiled mask creation

* Update attention_sink.py

* Update gpt_oss.py

* Update gpt_oss.py

* Revert

* Update gpt_oss.py

* Update gpt_oss.py

* Fix up

* Update attention_sink.py

* Update attention_sink.py

* Update utils.py

* Update attention_sink.py

* Update attention_sink.py

* Retry

* Update gpt_oss.py

* Update gpt_oss.py

* Fix Flex

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Bug fixes

* Update patching_utils.py

* Update patching_utils.py

* Update patching_utils.py

* Update rl_replacements.py

* Update patching_utils.py

* Update patching_utils.py

* Update patching_utils.py

* flash attn

* Update gpt_oss.py

* Update __init__.py

* Update attention_sink.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* dropout_p

* Update gpt_oss.py

* Update gpt_oss.py

* Update attention_sink.py

* Update gpt_oss.py

* Update gpt_oss.py

* fix

* Update attention_sink.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update loss_utils.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update loss_utils.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Versioning

* Update saving_utils.py

* Update saving_utils.py

* Update saving_utils.py

* Update saving_utils.py

* Update saving_utils.py

* Update saving_utils.py

* Update saving_utils.py

* Update saving_utils.py

* Fix Gemma 3

* Update misc.py

* Update rl_environments.py

* Update pyproject.toml

* Update rl_environments.py

* Update __init__.py

* Update empty_model.py

* Update empty_model.py

* Update empty_model.py

* Update empty_model.py

* Device type

* Update vllm_utils.py

* Update compiler.py

* Update empty_model.py

* Update vllm_utils.py

* Update empty_model.py

* Fixes

* Update empty_model.py

* Update empty_model.py

* Update __init__.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update rl_environments.py

* Update cross_entropy_loss.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update rl_environments.py

* Update vllm_utils.py

* Qwen3 VL vLLM (#324)

* qwen3 vl additional layers

* qwen3 fused vision qkv

* refactor for handling qwen 3 vl

* [WIP] fix backward pass issues

* out hidden size change

* Qwen 2.5 and qwen 3 conv3d->Linear vLLM changes

* Update __init__.py

* Update __init__.py

* Update __init__.py

* Update __init__.py

* Update __init__.py

* Update __init__.py

* Update __init__.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update pyproject.toml

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update __init__.py

* Update compiler.py

* Update __init__.py

* Update vllm_utils.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Fix CE compile

* Update loss_utils.py

* Update cross_entropy_loss.py

* Fix

* Deepseekocr fix: save single model shard (#346)

* DeepSeekOCR Fix: check for safetensors_list shard naming convention

* turned off shard padding length check because DeepSeek's padding is different

* if you try to copy the index.json file and the same file already exists, it will throw an error.

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update patching_utils.py

* Fp8 compressed (#358)

* [WIP] compressed tensors support for FP8

* [WIP 2/n] improve loading fake layer

* [WIP 3/n] improve loading fake layer

* [WIP 4/n] improve loading fake layer

* revert seq and token util calculation

---------

Co-authored-by: Datta Nimmaturi <venkatadattasainimmaturi@gmail.com>

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update gradient_checkpointing.py

* Update gpt_oss.py

* Update gpt_oss.py

* updates for vLLM compatibility with LoRA (#359)

* updates for vLLM compatibility with LoRA

* LoRA extra vocab cleanup

---------

Co-authored-by: Datta Nimmaturi <venkatadattasainimmaturi@gmail.com>

* Update vllm_utils.py

* Versioning

---------

Co-authored-by: Datta Nimmaturi <venkatadattasainimmaturi@gmail.com>
Co-authored-by: DoubleMathew <mmathew23@gmail.com>
danielhanchen added a commit that referenced this pull request Dec 1, 2025
* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

* Update gpt_oss.py

[The remainder of this commit message repeats the Nov 25 commit history above, from "Bug fixes" through "Versioning", and then continues:]

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Update vllm_utils.py

* Fix

* Qwen3MoE

* Update qwen3_moe.py

* Update qwen3_moe.py

* LoRA extra vocab fix (#367)

* Update qwen3_moe.py

* Update qwen3_moe.py

* Update qwen3_moe.py

* Update qwen3_moe.py

* Update __init__.py

---------

Co-authored-by: Datta Nimmaturi <venkatadattasainimmaturi@gmail.com>
Co-authored-by: DoubleMathew <mmathew23@gmail.com>