[CI/Build] simplify Dockerfile build for ARM64 / GH200#11212
youkaichao merged 9 commits into vllm-project:main
Conversation
Signed-off-by: drikster80 <ed.sealing@gmail.com>
youkaichao left a comment:
thanks for the great effort!
cc @simon-mo if you can help set up a GH200 machine for testing.
@drikster80 I'm planning to merge this because it leaves far fewer modules to build from source. Your commits are kept here, and thanks for your great contribution!
Sounds great. Glad it was able to get tested and merged.
@cennn thanks for the great work!
From PR: 10499
Fix Issue: 2021
This contribution simplifies the Dockerfile build process for ARM64 systems. Unnecessary build-from-source steps have been removed, and requirements handling has been optimized to ensure the correct installation of torch and bitsandbytes for ARM64+CUDA compatibility. The changes have been tested on the NVIDIA GH200 platform with the models meta-llama/Llama-3.1-8B and Qwen/Qwen2.5-0.5B-Instruct.
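The platform-specific requirements handling described above can be sketched roughly as follows. This is a minimal illustration under assumptions, not the actual vLLM Dockerfile: the file names `requirements-cuda.txt` and `requirements-cuda-arm64.txt` are placeholders for whatever requirements files the real build uses.

```dockerfile
# TARGETPLATFORM is set automatically by BuildKit when building with
# --platform (e.g. "linux/arm64" on GH200).
ARG TARGETPLATFORM

# Pick the requirements file that matches the build platform, so that
# ARM64+CUDA builds install compatible torch/bitsandbytes wheels instead
# of compiling them (and their dependents) from source.
RUN if [ "$TARGETPLATFORM" = "linux/arm64" ]; then \
        python3 -m pip install -r requirements-cuda-arm64.txt; \
    else \
        python3 -m pip install -r requirements-cuda.txt; \
    fi
```

Branching on `TARGETPLATFORM` keeps a single Dockerfile working for both x86_64 and ARM64 builds, with the platform selected at build time via `--platform`.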
The following command was used to build the image and is confirmed working on NVIDIA GH200:

```shell
docker build . \
  --target vllm-openai \
  --platform "linux/arm64" \
  -t cenncenn/vllm-gh200-openai:v0.6.4.post1 \
  --build-arg max_jobs=66 \
  --build-arg nvcc_threads=2 \
  --build-arg torch_cuda_arch_list="9.0+PTX" \
  --build-arg vllm_fa_cmake_gpu_arches="90-real" \
  --build-arg RUN_WHEEL_CHECK='false'
```