[Frontend] Use a proper chat template for VLM2Vec#9912
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
mgoin
left a comment
Seems reasonable, thanks for supporting it through a template
    wrap_dicts = (chat_template_text_format == "openai"
                  or (model_config.task == "embedding"
                      and model_config.is_multimodal_model)
                  or (model_config.hf_config.model_type
                      in MODEL_KEEP_MULTI_MODAL_CONTENT))
Maybe we should auto-detect this in the future...
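Until auto-detection lands, the condition above can be read as a standalone predicate. Here is a minimal, self-contained sketch of that logic using stand-in config classes (`ModelConfig` and `HFConfig` below are simplified stand-ins, not vLLM's real classes, and the contents of `MODEL_KEEP_MULTI_MODAL_CONTENT` are illustrative):

```python
from dataclasses import dataclass

# Illustrative placeholder for vLLM's registry of model types that must
# keep multimodal content as structured dicts.
MODEL_KEEP_MULTI_MODAL_CONTENT = {"mllama"}

@dataclass
class HFConfig:
    model_type: str

@dataclass
class ModelConfig:
    task: str
    is_multimodal_model: bool
    hf_config: HFConfig

def should_wrap_dicts(chat_template_text_format: str,
                      model_config: ModelConfig) -> bool:
    """Keep message content as a list of dicts (rather than flattening it
    to a plain string) when the text format is "openai", when serving a
    multimodal embedding model such as VLM2Vec, or when the model type
    requires structured content."""
    return (chat_template_text_format == "openai"
            or (model_config.task == "embedding"
                and model_config.is_multimodal_model)
            or (model_config.hf_config.model_type
                in MODEL_KEEP_MULTI_MODAL_CONTENT))

# A multimodal embedding model keeps its content dicts even with the
# default "string" text format.
cfg = ModelConfig(task="embedding", is_multimodal_model=True,
                  hf_config=HFConfig(model_type="phi3_v"))
print(should_wrap_dicts("string", cfg))  # True
```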
Signed-off-by: Richard Liu <ricliu@google.com>
Signed-off-by: Loc Huynh <jc1da.3011@gmail.com>
Signed-off-by: Sumit Dubey <sumit.dubey2@ibm.com>
Signed-off-by: LeiWang1999 <leiwang1999@outlook.com>
VLM2Vec is not intended to be a chat model, so the existing chat template from Phi-3.5-vision doesn't work well.
Building on #9759, this PR adds a chat template to convert the incoming messages into a format suitable for VLM2Vec and updates the docs accordingly.
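To illustrate the idea, here is a minimal sketch of what such a chat template does (this is NOT the actual Jinja template added in this PR; `render_vlm2vec_prompt` is a hypothetical helper): it flattens OpenAI-style chat messages into a single prompt string with numbered image placeholders, which is the form an embedding model like VLM2Vec expects rather than a chat-formatted conversation.

```python
def render_vlm2vec_prompt(messages: list[dict]) -> str:
    """Flatten OpenAI-style chat messages into one prompt string.

    Image parts become Phi-3.5-vision-style numbered placeholders;
    text parts are concatenated in order.
    """
    parts = []
    image_idx = 0
    for message in messages:
        for item in message["content"]:
            if item["type"] == "image":
                image_idx += 1
                parts.append(f"<|image_{image_idx}|>")
            elif item["type"] == "text":
                parts.append(item["text"])
    return " ".join(parts)

messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "What is in the image?"},
    ],
}]
print(render_vlm2vec_prompt(messages))  # <|image_1|> What is in the image?
```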