Commit ebddb65

Docs: add torch compile cache (#4151)
Co-authored-by: ybyang <ybyang7@iflytek.com>
1 parent 19fd57b commit ebddb65

6 files changed: +48 −32 lines changed

Lines changed: 29 additions & 5 deletions
@@ -1,18 +1,23 @@

# Hyperparameter Tuning

## Achieving Peak Throughput

Achieving a large batch size is the most important thing for attaining high throughput.

When the server is running at full load, look for the following in the log:

```Decode batch. #running-req: 233, #token: 370959, token usage: 0.82, gen throughput (token/s): 4594.01, #queue-req: 317```

### Tune Your Request Submission Speed

`#queue-req` indicates the number of requests in the queue. If you frequently see `#queue-req == 0`, it suggests you are bottlenecked by the request submission speed.

A healthy range for `#queue-req` is `50 - 500`.

On the other hand, do not make `#queue-req` too large, because it will also increase the scheduling overhead on the server, especially when using the default longest-prefix-match schedule policy (`--schedule-policy lpm`).

### Tune `--schedule-conservativeness`

`token usage` indicates the KV cache memory utilization of the server. `token usage > 0.9` means good utilization.
If you frequently see `token usage < 0.9` and `#queue-req > 0`, the server is being too conservative about taking in new requests, and you can decrease `--schedule-conservativeness` to a value like 0.3.
The server can become too conservative when users send many requests with a large `max_new_tokens` but the requests stop very early due to EOS or stop strings.
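As an illustrative launch command (the model name is only an example, not a recommendation):

```shell
# Accept new requests more aggressively when token usage stays below 0.9
# while requests are still queued.
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --schedule-conservativeness 0.3
```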
@@ -22,18 +27,37 @@ On the other hand, if you see `token usage` very high and you frequently see war

If you see `decode out of memory happened` occasionally but not frequently, it is okay.

### Tune `--dp-size` and `--tp-size`

Data parallelism is better for throughput. When there is enough GPU memory, always favor data parallelism for throughput. Instead of the `dp_size` parameter, refer to the [sglang router](../backend/sglang_router.md) for better data parallelism.

## Avoid out-of-memory by Tuning `--chunked-prefill-size`, `--mem-fraction-static`, `--max-running-requests`

If you see out of memory (OOM) errors, you can try to tune the following parameters.
- If OOM happens during prefill, try to decrease `--chunked-prefill-size` to `4096` or `2048`.
- If OOM happens during decoding, try to decrease `--max-running-requests`.
- You can also try to decrease `--mem-fraction-static`, which reduces the memory usage of the KV cache memory pool and helps both prefill and decoding.
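The three knobs above can be combined in one launch command; the values below are illustrative starting points, not recommendations:

```shell
# Sketch: reduce prefill chunk size, cap concurrent decodes, and shrink
# the KV cache pool to avoid OOM.
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct \
  --chunked-prefill-size 4096 \
  --max-running-requests 128 \
  --mem-fraction-static 0.8
```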
## Enabling cache for `torch.compile`

To enable `torch.compile` acceleration, add `--enable-torch-compile`. It accelerates small models on small batch sizes. This does not work for FP8 currently. By default, `torch.compile` automatically caches the FX graph and Triton artifacts in `/tmp/torchinductor_root`, which might be cleared according to the [system policy](https://serverfault.com/questions/377348/when-does-tmp-get-cleared). You can export the environment variable `TORCHINDUCTOR_CACHE_DIR` to save the compilation cache in a directory of your choice and avoid unwanted deletion. You can also share the cache with other machines to reduce the compilation time.

SGLang uses the `max-autotune-no-cudagraphs` mode of `torch.compile`, and the auto-tuning can be slow.
If you want to deploy a model on many different machines, you can ship the `torch.compile` cache to these machines and skip the compilation steps. This is based on the [PyTorch official documentation](https://pytorch.org/tutorials/recipes/torch_compile_caching_tutorial.html).

*Examples*

1. Generate the cache by setting `TORCHINDUCTOR_CACHE_DIR` and running the model once.

   ```bash
   TORCHINDUCTOR_CACHE_DIR=/root/inductor_root_cache python3 -m sglang.launch_server --model meta-llama/Llama-3.1-8B-Instruct --enable-torch-compile
   ```

2. Copy the cache folder to other machines and launch the server with `TORCHINDUCTOR_CACHE_DIR`.
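For instance, the copy can be done with `rsync` (or `scp`); the hostname `sgl-dev-1` below is an illustrative assumption:

```shell
# Ship the cache generated in step 1 to another machine (hostname assumed).
rsync -a /root/inductor_root_cache/ sgl-dev-1:/root/inductor_root_cache/

# On the other machine, point TORCHINDUCTOR_CACHE_DIR at the copied cache
# so the compilation steps are skipped on startup.
TORCHINDUCTOR_CACHE_DIR=/root/inductor_root_cache python3 -m sglang.launch_server --model meta-llama/Llama-3.1-8B-Instruct --enable-torch-compile
```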
## Tune `--schedule-policy`

If the workload has many shared prefixes, use the default `--schedule-policy lpm`. `lpm` stands for longest prefix match.

When you have no shared prefixes at all, or you always send the requests with the shared prefixes together,
you can try `--schedule-policy fcfs`. `fcfs` stands for first come first serve. `fcfs` has a lower scheduling overhead.
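For example (an illustrative launch command):

```shell
# For workloads without shared prefixes, fcfs lowers scheduling overhead.
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --schedule-policy fcfs
```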

docs/backend/server_arguments.md

Lines changed: 16 additions & 10 deletions
@@ -3,31 +3,39 @@

## Common launch commands

- To enable multi-GPU tensor parallelism, add `--tp 2`. If it reports the error "peer access is not supported between these two devices", add `--enable-p2p-check` to the server launch command.

  ```bash
  python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --tp 2
  ```

- To enable multi-GPU data parallelism, add `--dp 2`. Data parallelism is better for throughput if there is enough memory. It can also be used together with tensor parallelism. The following command uses 4 GPUs in total. We recommend the [SGLang Router](../router/router.md) for data parallelism.

  ```bash
  python -m sglang_router.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --dp 2 --tp 2
  ```

- If you see out-of-memory errors during serving, try to reduce the memory usage of the KV cache pool by setting a smaller value of `--mem-fraction-static`. The default value is `0.9`.

  ```bash
  python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --mem-fraction-static 0.7
  ```

- See [hyperparameter tuning](hyperparameter_tuning.md) on tuning hyperparameters for better performance.
- If you see out-of-memory errors during prefill for long prompts, try to set a smaller chunked prefill size.

  ```bash
  python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --chunked-prefill-size 4096
  ```

- To enable torch.compile acceleration, add `--enable-torch-compile`. It accelerates small models on small batch sizes. This does not work for FP8 currently. Refer to [Enabling cache for `torch.compile`](https://docs.sglang.ai/backend/hyperparameter_tuning.html#enabling-cache-for-torch-compile) for more details.
- To enable torchao quantization, add `--torchao-config int4wo-128`. It supports other [quantization strategies (INT8/FP8)](https://github.com/sgl-project/sglang/blob/v0.3.6/python/sglang/srt/server_args.py#L671) as well.
- To enable fp8 weight quantization, add `--quantization fp8` on an fp16 checkpoint, or directly load an fp8 checkpoint without specifying any arguments.
- To enable fp8 kv cache quantization, add `--kv-cache-dtype fp8_e5m2`.
- If the model does not have a chat template in the Hugging Face tokenizer, you can specify a [custom chat template](custom_chat_template.md).

- To run tensor parallelism on multiple nodes, add `--nnodes 2`. If you have two nodes with two GPUs on each node and want to run TP=4, let `sgl-dev-0` be the hostname of the first node and `50000` be an available port, you can use the following commands. If you meet deadlock, please try to add `--disable-cuda-graph`.

  ```bash
  # Node 0
  python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --tp 4 --dist-init-addr sgl-dev-0:50000 --nnodes 2 --node-rank 0
  ```
@@ -49,15 +57,13 @@ Please consult the documentation below to learn more about the parameters you ma

* `kv_cache_dtype`: Dtype of the kv cache, defaults to the `dtype`.
* `context_length`: The number of tokens our model can process *including the input*. Note that extending the default might lead to strange behavior.
* `device`: The device we put the model on, defaults to `cuda`.
* `chat_template`: The chat template to use. Deviating from the default might lead to unexpected responses. For multi-modal chat templates, refer to [here](https://docs.sglang.ai/backend/openai_api_vision.ipynb#Chat-Template). **Make sure the correct `chat_template` is passed, or performance degradation may occur.**
* `is_embedding`: Set to true to perform [embedding](./openai_api_embeddings.ipynb) / [encode](https://docs.sglang.ai/backend/native_api#Encode-(embedding-model)) and [reward](https://docs.sglang.ai/backend/native_api#Classify-(reward-model)) tasks.
* `revision`: Adjust if a specific version of the model should be used.
* `skip_tokenizer_init`: Set to true to provide the tokens to the engine and get the output tokens directly, typically used in RLHF. Please see this [example for reference](https://github.com/sgl-project/sglang/blob/main/examples/runtime/token_in_token_out/).
* `json_model_override_args`: Override the model config with the provided JSON.
* `delete_ckpt_after_loading`: Delete the model checkpoint after loading the model.
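As a hypothetical sketch of `json_model_override_args` (the JSON keys must match fields in the model's Hugging Face config; `max_position_embeddings` is an assumed example):

```shell
# Illustrative only: override a single config field via JSON.
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct \
  --json-model-override-args '{"max_position_embeddings": 8192}'
```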
## Serving: HTTP & API

@@ -178,7 +184,7 @@ Please consult the documentation below to learn more about the parameters you ma

* `enable_mixed_chunk`: Enables mixing prefill and decode, see [this discussion](https://github.com/sgl-project/sglang/discussions/1163).
* `enable_dp_attention`: Enable [Data Parallelism Attention](https://lmsys.org/blog/2024-12-04-sglang-v0-4/#data-parallelism-attention-for-deepseek-models) for Deepseek models. Note that you need to choose `dp_size = tp_size` for this.
* `enable_torch_compile`: Torch compile the model. Note that compiling a model takes a long time but gives a great performance boost. The compiled model can also be [cached for future use](https://docs.sglang.ai/backend/hyperparameter_tuning.html#enabling-cache-for-torch-compile).
* `torch_compile_max_bs`: The maximum batch size when using `torch_compile`.
* `cuda_graph_max_bs`: Adjust the maximum batch size when using cuda graph. By default this is chosen for you based on GPU specifics.
* `cuda_graph_bs`: The batch sizes to capture by `CudaGraphRunner`. By default this is done for you.

docs/references/advanced_deploy.rst

Lines changed: 1 addition & 1 deletion
@@ -4,4 +4,4 @@ Multi-Node Deployment

   :maxdepth: 1

   multi_node.md
   deploy_on_k8s.md

docs/references/deepseek.md

Lines changed: 1 addition & 2 deletions
@@ -61,8 +61,7 @@ If you encounter errors when starting the server, ensure the weights have finish

### Caching `torch.compile`

The DeepSeek series has huge model weights, so it takes some time to compile the model with `torch.compile` for the first time if you have added the flag `--enable-torch-compile`. Refer to [Enabling cache for `torch.compile`](https://docs.sglang.ai/backend/hyperparameter_tuning.html#enabling-cache-for-torch-compile) to cache the compilation results, so that the cache can be used to speed up the next startup.

### Launch with One node of 8 H200

Please refer to [the example](https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3#using-docker-recommended). **Note that DeepSeek V3 is already in FP8, so we should not run it with any quantization arguments like `--quantization fp8 --kv-cache-dtype fp8_e5m2`.** Also, `--enable-dp-attention` can be useful to improve DeepSeek V3/R1's throughput. Please refer to [Data Parallelism Attention](https://docs.sglang.ai/references/deepseek.html#multi-head-latent-attention-mla-throughput-optimizations) for details.
Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@

# Deploy On Kubernetes

This doc is for deploying a RoCE network-based SGLang two-node inference service on a Kubernetes (K8s) cluster.
docs/references/torch_compile_cache.md

Lines changed: 0 additions & 13 deletions
This file was deleted.
