
[Doc] Update vlm.rst to include an example on videos #9155

Merged
DarkLight1337 merged 7 commits into vllm-project:main from sayakpaul:patch-1 on Oct 8, 2024

Conversation

@sayakpaul (Contributor)

As discussed in #9128 (comment).

github-actions bot commented on Oct 8, 2024

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@DarkLight1337 (Member)

I think it would be better to put the example directly in the docs. I'd prefer it to follow a similar format to the examples above, though.

@sayakpaul (Contributor, Author)

@DarkLight1337 LMK if the changes work for you.

Comment on lines +145 to +146
messages = [{"role": "user", "content": []}]
messages[0]["content"].append({"type": "text", "text": "Describe this set of frames. Consider the frames to be a part of the same video."})
@DarkLight1337 (Member) commented on Oct 8, 2024

I think it would be easier to read if you combine these two lines together, so that messages is contained in a single expression.
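For reference, folding the two quoted lines into a single expression would look roughly like this (a sketch based on the snippet above; the exact code that was merged may differ):

```python
# Single-expression form of the `messages` construction shown above.
messages = [{
    "role": "user",
    "content": [{
        "type": "text",
        "text": "Describe this set of frames. Consider the frames to be a part of the same video.",
    }],
}]
```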

@sayakpaul (Contributor, Author)

Done.

base64_image = encode_image(video_frames[i]) # base64 encoding.

# Perform inference and log output.
outputs = llm.chat(messages)
@DarkLight1337 (Member)

Did you forget to input the images here?

@sayakpaul (Contributor, Author)

Done.
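For context, wiring the encoded frames into the request would look roughly like the sketch below. It assumes the OpenAI-style chat format with base64 data URLs, which llm.chat() accepts for image content, and may differ from the exact snippet in the merged docs:

```python
# Attach each base64-encoded frame as an image_url entry before running inference.
for i in range(len(video_frames)):
    base64_image = encode_image(video_frames[i])  # base64 encoding.
    new_image = {
        "type": "image_url",
        "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"},
    }
    messages[0]["content"].append(new_image)

# Perform inference and log output.
outputs = llm.chat(messages)
for o in outputs:
    print(o.outputs[0].text)
```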

sayakpaul and others added 2 commits October 8, 2024 20:20
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
sayakpaul and others added 2 commits October 8, 2024 20:40
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
@DarkLight1337 (Member) left a comment

The docs look good. Thanks for adding this!
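For readers following along, the pieces of the example that are not quoted in this thread — an encode_image helper, frame sampling, and the LLM setup — could look roughly like this. The OpenCV-based sampling helper and the model name are illustrative assumptions, not taken from the PR:

```python
import base64

import cv2  # assumption: OpenCV is just one convenient way to sample frames
from vllm import LLM


def encode_image(frame) -> str:
    """Encode a single frame (NumPy array) as a base64 JPEG string."""
    _, buffer = cv2.imencode(".jpg", frame)
    return base64.b64encode(buffer).decode("utf-8")


def sample_frames(path: str, num_frames: int = 4):
    """Uniformly sample a handful of frames from a video file."""
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(total // num_frames, 1)
    frames = []
    for idx in range(0, total, step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames[:num_frames]


# Any model that accepts multiple images per prompt should work; the name is illustrative.
llm = LLM(model="microsoft/Phi-3.5-vision-instruct",
          trust_remote_code=True,
          limit_mm_per_prompt={"image": 4})

video_frames = sample_frames("sample_video.mp4", num_frames=4)
# ...then build `messages` and call llm.chat(messages) as in the snippets quoted above.
```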

DarkLight1337 enabled auto-merge (squash) on October 8, 2024 at 15:45
github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Oct 8, 2024
DarkLight1337 merged commit 1874c6a into vllm-project:main on Oct 8, 2024
shajrawi pushed a commit to ROCm/vllm that referenced this pull request Oct 9, 2024
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Signed-off-by: Alvant <alvasian@yandex.ru>
garg-amit pushed a commit to garg-amit/vllm that referenced this pull request Oct 28, 2024
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Signed-off-by: Amit Garg <mitgarg17495@gmail.com>
sumitd2 pushed a commit to sumitd2/vllm that referenced this pull request Nov 14, 2024
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Signed-off-by: Sumit Dubey <sumit.dubey2@ibm.com>
LeiWang1999 pushed a commit to LeiWang1999/vllm-bitblas that referenced this pull request Mar 26, 2025
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Signed-off-by: LeiWang1999 <leiwang1999@outlook.com>
