[Model] support input embeddings for qwen2vl #8856

DarkLight1337 merged 15 commits into vllm-project:main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. 🚀
@fyabc can you assist OP to address #6613 (comment)? Thanks in advance for your help!
Adding to our usage scenarios.
Can vLLM support this multimodal input format?

```python
image_embeds = ……  # precomputed image embeddings (elided in the original)

# mm_data['image'] = image_embeds
mm_data['image'] = {
    "image_embeds": image_embeds,
    "image_grid_thw": inputs["image_grid_thw"],
}
llm_inputs = {
    "prompt": prompt,
    "multi_modal_data": mm_data,
}
outputs = llm.generate([llm_inputs], sampling_params=sampling_params)
generated_text = outputs[0].outputs[0].text
```
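For context, a hedged sketch of where `inputs["image_grid_thw"]` in the snippet above might come from, using the Hugging Face processor for Qwen2-VL (the model name and image path are illustrative):

```python
from PIL import Image
from transformers import AutoProcessor

# `prompt` is assumed to be the chat-formatted string containing the image
# placeholder, as in the snippet above.
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
image = Image.open("example.jpg")

inputs = processor(text=[prompt], images=[image], return_tensors="pt")
# inputs["image_grid_thw"] holds the (t, h, w) grid sizes that must be
# paired with the precomputed image_embeds.
```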
Thanks for implementing this! Some comments.
> Can vLLM support this multimodal input format?
> To support multiple images, videos, dynamic resolution, and M-RoPE in the future.
For now, I suggest you override the default input mapper (via `register_image_input_mapper`) to support it for this model specifically. To maintain a consistent API, the regular case of just inputting a tensor should still be supported; see the sketch below for one possible shape.
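A minimal sketch of such a dual-path mapper, written as a hypothetical standalone helper rather than the actual vLLM registration hook (in vLLM the mapper is wired up through the multimodal registry):

```python
import torch

def map_qwen2vl_image_input(data):
    """Accept either a raw image tensor or a dict of precomputed embeddings."""
    if isinstance(data, dict):
        # Embedding path: the caller supplies embeddings plus grid metadata.
        missing = {"image_embeds", "image_grid_thw"} - data.keys()
        if missing:
            raise ValueError(f"Missing keys for embedding input: {missing}")
        return data
    if isinstance(data, torch.Tensor):
        # Regular path: hand the tensor to the default pixel-value pipeline.
        return {"pixel_values": data}
    raise TypeError(f"Unsupported image input type: {type(data)}")
```

Keeping both branches means existing callers that pass a plain tensor are unaffected.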
You should also update the Supported Models page in the docs to indicate that the model supports image embeddings.
We have already modified the default input mapper for Qwen2VL. Right now it only works on Qwen2VL and does not affect the inputs of other models.
Sorry, I forgot there is already an existing input mapper. In that case, you can update the existing one.
@DarkLight1337 This CI failure seems to have nothing to do with my changes. Please take a look.
You should merge in the changes from the main branch to resolve the CI failures.
Since the input format is different for this model, I suggest also adding a note to the documentation to explain how to input embeddings for Qwen2-VL, e.g. along the lines of the contrast sketched below.
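Such a note could, for instance, contrast the generic format with the Qwen2-VL-specific one (a hedged illustration; the variable names are assumptions):

```python
# Most models: pass the embeddings tensor directly.
mm_data = {"image": image_embeds}

# Qwen2-VL: grid metadata must accompany the embeddings.
mm_data = {
    "image": {
        "image_embeds": image_embeds,
        "image_grid_thw": image_grid_thw,
    }
}
```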
Thanks for your tips.
While it is a rare use case and probably inconsequential to most users, it is a great enhancement to our production environment.
Thanks, the example looks good.
@alex-jw-brooks are you already working on dynamic options for the input mapper?
I tested Qwen2-VL and can verify that the model still works on regular image inputs. Assuming that you have already tested that Qwen2-VL works with embedding inputs, let's merge this first; follow-up work can be done in another PR.
Hey @DarkLight1337 - yup, I am working on it and should have a PR for passing dynamic processor / mapper options up within the next week or so!
@whyiug @DarkLight1337 Sorry for the late response. I have checked the update and it looks okay to me. Many thanks for your contribution!
This is very needed.
FIX #8857
PR Checklist
Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.
PR Title and Classification
Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:
- `[Bugfix]` for bug fixes.
- `[CI/Build]` for build or continuous integration improvements.
- `[Doc]` for documentation fixes and improvements.
- `[Model]` for adding a new model or improving an existing model. Model name should appear in the title.
- `[Frontend]` for changes on the vLLM frontend (e.g., OpenAI API server, `LLM` class, etc.)
- `[Kernel]` for changes affecting CUDA kernels or other compute kernels.
- `[Core]` for changes in the core vLLM logic (e.g., `LLMEngine`, `AsyncLLMEngine`, `Scheduler`, etc.)
- `[Hardware][Vendor]` for hardware-specific changes. Vendor name should appear in the prefix (e.g., `[Hardware][AMD]`).
- `[Misc]` for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.
Code Quality
The PR needs to meet the following code quality standards:
- Please use `format.sh` to format your code.
- Please add documentation to `docs/source/` if the PR modifies the user-facing behaviors of vLLM. It helps vLLM users understand and utilize the new features or changes.

Adding or changing kernels
Each custom kernel needs a schema and one or more implementations to be registered with PyTorch.
- Custom operations that return `Tensors` require meta-functions. Meta-functions should be implemented and registered in Python so that dynamic dims can be handled automatically. See the above documents for a description of meta-functions.
- Use `torch.library.opcheck()` to test the function registration and meta-function for any registered ops. See `tests/kernels` for examples.

Notes for Large Changes
Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with `rfc-required` and it might not go through the PR.

What to Expect for the Reviews
The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:
- After the review, the reviewer will put an `action-required` label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.

Thank You
Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!