[Doc][Misc][v0.18.0] Add Parameter Description, best practices and FAQs in GLM5.md (#7909)
### What this PR does / why we need it?

This PR updates the GLM-5 documentation to include:

- Information about the first supported version (`vllm-ascend:v0.17.0rc1`).
- Updated `--additional-config` parameters to use the new nested `ascend_compilation_config` structure.
- Added `VLLM_ASCEND_BALANCE_SCHEDULING` environment variable to deployment scripts.
- Improved formatting of deployment steps.
- A new "Notice" section explaining optimization environment variables (`VLLM_ASCEND_ENABLE_FLASHCOMM1`, `VLLM_ASCEND_ENABLE_FUSED_MC2`, `VLLM_ASCEND_ENABLE_MLAPO`).
- A "Best Practices" section for prefill-decode disaggregation.
- An "FAQ" section addressing common tokenizer issues and function calling configuration.

### Does this PR introduce _any_ user-facing change?

No, this is a documentation-only update.

### How was this patch tested?

Documentation changes were verified for correctness and formatting.

---------

Signed-off-by: Zhu Jiyang <zhujiyang2@huawei.com>
@@ -4,6 +4,8 @@

[GLM-5](https://huggingface.co/zai-org/GLM-5) uses a Mixture-of-Experts (MoE) architecture and targets complex systems engineering and long-horizon agentic tasks.

The `GLM-5` model is first supported in `vllm-ascend:v0.17.0rc1`, and the transformers version needs to be upgraded to 5.2.0.
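A quick pre-flight check of the installed version can save a failed launch. This is a sketch assuming `pip`, `awk`, and GNU `sort -V` are available on the host:

```shell
# Verify the installed transformers version meets the GLM-5 requirement
required="5.2.0"
installed=$(pip show transformers 2>/dev/null | awk '/^Version:/{print $2}')
if [ -n "$installed" ] && [ "$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]; then
  echo "transformers $installed OK"
else
  echo "please run: pip install \"transformers==$required\""
fi
```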

This document covers the main verification steps for the model, including supported features, feature configuration, environment preparation, single-node and multi-node deployment, and accuracy and performance evaluation.

## Supported Features
@@ -160,7 +162,7 @@ vllm serve /root/.cache/modelscope/hub/models/vllm-ascend/GLM5-w4a8 \
--enable-chunked-prefill \
--enable-prefix-caching \
--async-scheduling \
--additional-config '{"enable_npugraph_ex": true,"fuse_muls_add":true,"multistream_overlap_shared_expert":true}' \
--additional-config '{"fuse_muls_add": true, "multistream_overlap_shared_expert": true, "ascend_compilation_config": {"enable_npugraph_ex": true}}' \
--compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY"}' \
--speculative-config '{"num_speculative_tokens": 3, "method": "deepseek_mtp"}'
```
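As the hunk above shows, `enable_npugraph_ex` moved from the top level of `--additional-config` into the nested `ascend_compilation_config` object. Since a malformed JSON string makes the server fail at startup, it can help to validate the new form first (a sketch assuming `python3` is on the PATH):

```shell
# Validate the nested additional-config JSON before launching the server
ADDITIONAL_CONFIG='{"fuse_muls_add": true, "multistream_overlap_shared_expert": true, "ascend_compilation_config": {"enable_npugraph_ex": true}}'
echo "$ADDITIONAL_CONFIG" | python3 -m json.tool >/dev/null && echo "valid JSON"
```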
@@ -196,7 +198,7 @@ vllm serve /root/.cache/modelscope/hub/models/vllm-ascend/GLM5-w8a8 \
--enable-chunked-prefill \
--enable-prefix-caching \
--async-scheduling \
--additional-config '{"enable_npugraph_ex": true,"fuse_muls_add":true,"multistream_overlap_shared_expert":true}' \
--additional-config '{"fuse_muls_add": true, "multistream_overlap_shared_expert": true, "ascend_compilation_config": {"enable_npugraph_ex": true}}' \
--compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY"}' \
--speculative-config '{"num_speculative_tokens": 3, "method": "deepseek_mtp"}'
```
@@ -236,7 +238,7 @@ vllm serve /root/.cache/modelscope/hub/models/vllm-ascend/GLM-5-w4a8 \
--enable-prefix-caching \
--async-scheduling \
--compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY"}' \
--additional-config '{"enable_npugraph_ex": true,"fuse_muls_add":true,"multistream_overlap_shared_expert":true}' \
--additional-config '{"fuse_muls_add": true, "multistream_overlap_shared_expert": true, "ascend_compilation_config": {"enable_npugraph_ex": true}}' \
--speculative-config '{"num_speculative_tokens": 3, "method": "deepseek_mtp"}'
```

@@ -284,6 +286,7 @@ export HCCL_SOCKET_IFNAME=$nic_name
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export HCCL_BUFFSIZE=200
export VLLM_ASCEND_BALANCE_SCHEDULING=1
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True

vllm serve /root/.cache/modelscope/hub/models/vllm-ascend/GLM5-bf16 \
@@ -301,7 +304,6 @@ vllm serve /root/.cache/modelscope/hub/models/vllm-ascend/GLM5-bf16 \
--max-model-len 8192 \
--max-num-batched-tokens 4096 \
--trust-remote-code \
--no-enable-prefix-caching \
--gpu-memory-utilization 0.95 \
--compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY"}' \
--speculative-config '{"num_speculative_tokens": 3, "method": "deepseek_mtp"}'
@@ -328,6 +330,7 @@ export HCCL_SOCKET_IFNAME=$nic_name
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export HCCL_BUFFSIZE=200
export VLLM_ASCEND_BALANCE_SCHEDULING=1
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True

vllm serve /root/.cache/modelscope/hub/models/vllm-ascend/GLM5-bf16 \
@@ -347,7 +350,6 @@ vllm serve /root/.cache/modelscope/hub/models/vllm-ascend/GLM5-bf16 \
--max-model-len 8192 \
--max-num-batched-tokens 4096 \
--trust-remote-code \
--no-enable-prefix-caching \
--gpu-memory-utilization 0.95 \
--compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY"}' \
--speculative-config '{"num_speculative_tokens": 3, "method": "deepseek_mtp"}'
@@ -380,6 +382,7 @@ export HCCL_SOCKET_IFNAME=$nic_name
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export HCCL_BUFFSIZE=200
export VLLM_ASCEND_BALANCE_SCHEDULING=1
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True

vllm serve /root/.cache/modelscope/hub/models/vllm-ascend/GLM-5-w4a8 \
@@ -398,10 +401,9 @@ vllm serve /root/.cache/modelscope/hub/models/vllm-ascend/GLM-5-w4a8 \
--max-model-len 131072 \
--max-num-batched-tokens 4096 \
--trust-remote-code \
--no-enable-prefix-caching \
--gpu-memory-utilization 0.95 \
--compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY"}' \
--additional-config '{"enable_npugraph_ex": true, "fuse_muls_add":true,"multistream_overlap_shared_expert":true}' \
--additional-config '{"fuse_muls_add": true, "multistream_overlap_shared_expert": true, "ascend_compilation_config": {"enable_npugraph_ex": true}}' \
--speculative-config '{"num_speculative_tokens": 3, "method": "deepseek_mtp"}'
```

@@ -426,6 +428,7 @@ export HCCL_SOCKET_IFNAME=$nic_name
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export HCCL_BUFFSIZE=200
export VLLM_ASCEND_BALANCE_SCHEDULING=1
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True

vllm serve /root/.cache/modelscope/hub/models/vllm-ascend/GLM-5-w4a8 \
@@ -446,10 +449,9 @@ vllm serve /root/.cache/modelscope/hub/models/vllm-ascend/GLM-5-w4a8 \
--max-model-len 131072 \
--max-num-batched-tokens 4096 \
--trust-remote-code \
--no-enable-prefix-caching \
--gpu-memory-utilization 0.95 \
--compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY"}' \
--additional-config '{"enable_npugraph_ex": true, "fuse_muls_add":true,"multistream_overlap_shared_expert":true}' \
--additional-config '{"fuse_muls_add": true, "multistream_overlap_shared_expert": true, "ascend_compilation_config": {"enable_npugraph_ex": true}}' \
--speculative-config '{"num_speculative_tokens": 3, "method": "deepseek_mtp"}'
```

@@ -546,6 +548,7 @@ export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export HCCL_BUFFSIZE=200
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export VLLM_ASCEND_BALANCE_SCHEDULING=1
export VLLM_ASCEND_ENABLE_MLAPO=1

vllm serve /root/.cache/modelscope/hub/models/vllm-ascend/GLM5-w8a8 \
@@ -569,7 +572,7 @@ vllm serve /root/.cache/modelscope/hub/models/vllm-ascend/GLM5-w8a8 \
--enable-prefix-caching \
--async-scheduling \
--compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY"}' \
--additional-config '{"enable_npugraph_ex": true,"fuse_muls_add":true,"multistream_overlap_shared_expert":true}' \
--additional-config '{"fuse_muls_add": true, "multistream_overlap_shared_expert": true, "ascend_compilation_config": {"enable_npugraph_ex": true}}' \
--speculative-config '{"num_speculative_tokens": 3, "method": "deepseek_mtp"}'
```

@@ -595,6 +598,7 @@ export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export HCCL_BUFFSIZE=200
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export VLLM_ASCEND_BALANCE_SCHEDULING=1
export VLLM_ASCEND_ENABLE_MLAPO=1

vllm serve /root/.cache/modelscope/hub/models/vllm-ascend/GLM5-w8a8 \
@@ -620,7 +624,7 @@ vllm serve /root/.cache/modelscope/hub/models/vllm-ascend/GLM5-w8a8 \
--enable-prefix-caching \
--async-scheduling \
--compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY"}' \
--additional-config '{"enable_npugraph_ex": true,"fuse_muls_add":true,"multistream_overlap_shared_expert":true}' \
--additional-config '{"fuse_muls_add": true, "multistream_overlap_shared_expert": true, "ascend_compilation_config": {"enable_npugraph_ex": true}}' \
--speculative-config '{"num_speculative_tokens": 3, "method": "deepseek_mtp"}'
```

@@ -763,11 +767,9 @@ Before you start, please
export VLLM_NIXL_ABORT_REQUEST_TIMEOUT=300000

export ASCEND_RT_VISIBLE_DEVICES=$1

export VLLM_ASCEND_ENABLE_FLASHCOMM1=1

export VLLM_ASCEND_ENABLE_FUSED_MC2=1
export VLLM_ASCEND_ENABLE_MLAPO=1
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib

vllm serve /root/.cache/glm5-w8a8 \
@@ -787,7 +789,7 @@ Before you start, please
--seed 1024 \
--served-model-name glm-5 \
--max-model-len 131072 \
--additional-config '{"enable_npugraph_ex": true, "fuse_muls_add":true,"multistream_overlap_shared_expert":true,"recompute_scheduler_enable" : true}' \
--additional-config '{"fuse_muls_add": true, "multistream_overlap_shared_expert": true, "ascend_compilation_config": {"enable_npugraph_ex": true}}' \
--max-num-batched-tokens 4096 \
--trust-remote-code \
--max-num-seqs 64 \
@@ -847,7 +849,6 @@ Before you start, please
export VLLM_ASCEND_ENABLE_FLASHCOMM1=1

export VLLM_ASCEND_ENABLE_FUSED_MC2=1
export VLLM_ASCEND_ENABLE_MLAPO=1
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib

vllm serve /root/.cache/glm5-w8a8 \
@@ -867,7 +868,7 @@ Before you start, please
--seed 1024 \
--served-model-name glm-5 \
--max-model-len 131072 \
--additional-config '{"enable_npugraph_ex": true, "fuse_muls_add":true,"multistream_overlap_shared_expert":true,"recompute_scheduler_enable" : true}' \
--additional-config '{"fuse_muls_add": true, "multistream_overlap_shared_expert": true, "ascend_compilation_config": {"enable_npugraph_ex": true}}' \
--max-num-batched-tokens 4096 \
--trust-remote-code \
--max-num-seqs 64 \
@@ -950,7 +951,7 @@ Before you start, please
--max-model-len 200000 \
--max-num-batched-tokens 32 \
--compilation-config '{"cudagraph_mode":"FULL_DECODE_ONLY", "cudagraph_capture_sizes":[4, 8, 12, 16,20,24,28, 32]}' \
--additional-config '{"enable_npugraph_ex": true, "fuse_muls_add":true,"multistream_overlap_shared_expert":true,"recompute_scheduler_enable" : true}' \
--additional-config '{"fuse_muls_add": true, "multistream_overlap_shared_expert": true, "ascend_compilation_config": {"enable_npugraph_ex": true}}' \
--trust-remote-code \
--max-num-seqs 8 \
--gpu-memory-utilization 0.92 \
@@ -1031,7 +1032,7 @@ Before you start, please
--max-model-len 200000 \
--max-num-batched-tokens 32 \
--compilation-config '{"cudagraph_mode":"FULL_DECODE_ONLY", "cudagraph_capture_sizes":[4, 8, 12, 16,20,24,28, 32]}' \
--additional-config '{"enable_npugraph_ex": true, "fuse_muls_add":true,"multistream_overlap_shared_expert":true,"recompute_scheduler_enable" : true}' \
--additional-config '{"fuse_muls_add": true, "multistream_overlap_shared_expert": true, "ascend_compilation_config": {"enable_npugraph_ex": true}}' \
--trust-remote-code \
--max-num-seqs 8 \
--gpu-memory-utilization 0.92 \
@@ -1112,7 +1113,7 @@ Before you start, please
--max-model-len 200000 \
--max-num-batched-tokens 32 \
--compilation-config '{"cudagraph_mode":"FULL_DECODE_ONLY", "cudagraph_capture_sizes":[4, 8, 12, 16,20,24,28, 32]}' \
--additional-config '{"enable_npugraph_ex": true, "fuse_muls_add":true,"multistream_overlap_shared_expert":true,"recompute_scheduler_enable" : true}' \
--additional-config '{"fuse_muls_add": true, "multistream_overlap_shared_expert": true, "ascend_compilation_config": {"enable_npugraph_ex": true}}' \
--trust-remote-code \
--max-num-seqs 8 \
--gpu-memory-utilization 0.92 \
@@ -1193,7 +1194,7 @@ Before you start, please
--max-model-len 200000 \
--max-num-batched-tokens 32 \
--compilation-config '{"cudagraph_mode":"FULL_DECODE_ONLY", "cudagraph_capture_sizes":[4, 8, 12, 16,20,24,28, 32]}' \
--additional-config '{"enable_npugraph_ex": true, "fuse_muls_add":true,"multistream_overlap_shared_expert":true,"recompute_scheduler_enable" : true}' \
--additional-config '{"fuse_muls_add": true, "multistream_overlap_shared_expert": true, "ascend_compilation_config": {"enable_npugraph_ex": true}}' \
--trust-remote-code \
--max-num-seqs 8 \
--gpu-memory-utilization 0.92 \
@@ -1225,45 +1226,45 @@ Once the preparation is done, you can start the server with the following comman

1. Prefill node 0

```shell
# change ip to your own
python launch_online_dp.py --dp-size 4 --tp-size 8 --dp-size-local 2 --dp-rank-start 0 --dp-address $node_p0_ip --dp-rpc-port 10521 --vllm-start-port 6700
```

2. Prefill node 1

```shell
# change ip to your own
python launch_online_dp.py --dp-size 4 --tp-size 8 --dp-size-local 2 --dp-rank-start 2 --dp-address $node_p0_ip --dp-rpc-port 10521 --vllm-start-port 6700
```

3. Decode node 0

```shell
# change ip to your own
python launch_online_dp.py --dp-size 16 --tp-size 4 --dp-size-local 4 --dp-rank-start 0 --dp-address $node_d0_ip --dp-rpc-port 10523 --vllm-start-port 6721
```

4. Decode node 1

```shell
# change ip to your own
python launch_online_dp.py --dp-size 16 --tp-size 4 --dp-size-local 4 --dp-rank-start 4 --dp-address $node_d0_ip --dp-rpc-port 10523 --vllm-start-port 6721
```

5. Decode node 2

```shell
# change ip to your own
python launch_online_dp.py --dp-size 16 --tp-size 4 --dp-size-local 4 --dp-rank-start 8 --dp-address $node_d0_ip --dp-rpc-port 10523 --vllm-start-port 6721
```

6. Decode node 3

```shell
# change ip to your own
python launch_online_dp.py --dp-size 16 --tp-size 4 --dp-size-local 4 --dp-rank-start 12 --dp-address $node_d0_ip --dp-rpc-port 10523 --vllm-start-port 6721
```

### Request Forwarding

@@ -1308,6 +1309,16 @@ python load_balance_proxy_server_example.py \
6721 6722 6723 6724
```

**Notice:**

Some configurations for optimization are shown below:

- `VLLM_ASCEND_ENABLE_FLASHCOMM1`: Enables the FlashComm optimization to reduce communication and computation overhead on prefill nodes. With FlashComm enabled, the `layer_sharding` list cannot include `o_proj` as an element.
- `VLLM_ASCEND_ENABLE_FUSED_MC2`: Enables the fused operators `dispatch_gmm_combine_decode` and `dispatch_ffn_combine`.
- `VLLM_ASCEND_ENABLE_MLAPO`: Enables the fused operator `MlaPreprocessOperation`.

Please refer to [envs.py](https://github.com/vllm-project/vllm-ascend/blob/main/vllm_ascend/envs.py) for further explanation of, and restrictions on, the environment variables above.

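Taken together, the optimization variables above can be added to a prefill-node launch script like this (a sketch; verify the restrictions in envs.py for your version before enabling them in production):

```shell
# Optional prefill-node optimizations (see vllm_ascend/envs.py for restrictions)
export VLLM_ASCEND_ENABLE_FLASHCOMM1=1   # FlashComm; layer_sharding must not include o_proj
export VLLM_ASCEND_ENABLE_FUSED_MC2=1    # fused dispatch_gmm_combine_decode / dispatch_ffn_combine
export VLLM_ASCEND_ENABLE_MLAPO=1        # fused MlaPreprocessOperation
echo "flashcomm1=$VLLM_ASCEND_ENABLE_FLASHCOMM1 fused_mc2=$VLLM_ASCEND_ENABLE_FUSED_MC2 mlapo=$VLLM_ASCEND_ENABLE_MLAPO"
```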
## Functional Verification

Once your server is started, you can query the model with input prompts:
@@ -1346,3 +1357,29 @@ Refer to [Using AISBench for performance evaluation](../../developer_guide/evalu
### Using vLLM Benchmark

Refer to [vllm benchmark](https://docs.vllm.ai/en/latest/contributing/benchmarks.html) for more details.

## Best Practices

In this chapter, we recommend best practices for the prefill-decode disaggregation scenario with a 1P1D architecture using 4 Atlas 800 A3 (64G × 16) nodes:

- Low latency: we recommend `dp4 tp8` on prefill nodes and `dp4 tp8` on decode nodes.
- High throughput: `dp4 tp8` on prefill nodes and `dp8 tp4` on decode nodes is recommended.

**Notice:**
`max-model-len` and `max-num-seqs` need to be set according to the actual usage scenario. For other settings, please refer to the **[Deployment](#deployment)** chapter.

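The recommendations above map directly to the `--dp-size`/`--tp-size` flags of `launch_online_dp.py` used in the Deployment chapter. A small illustrative helper (the `MODE`/`DECODE_*` variable names are ours, not part of any shipped script):

```shell
# Pick decode-node parallelism from the serving goal (illustrative only)
MODE=${MODE:-high-throughput}
case "$MODE" in
  low-latency)     DECODE_DP=4; DECODE_TP=8 ;;
  high-throughput) DECODE_DP=8; DECODE_TP=4 ;;
  *) echo "unknown mode: $MODE" >&2; exit 1 ;;
esac
echo "decode nodes: --dp-size $DECODE_DP --tp-size $DECODE_TP (prefill stays dp4 tp8)"
```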
## FAQ

- **Q: How to solve `ValueError: Tokenizer class TokenizersBackend does not exist or is not currently imported`?**

A: Please upgrade the transformers version to 5.2.0.

- **Q: How to enable function calling for GLM-5?**

A: Please add the following configurations to the vLLM startup command:

```shell
--tool-call-parser glm47 \
--reasoning-parser glm45 \
--enable-auto-tool-choice \
```
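With those flags set, a tool-calling request can be sent through the OpenAI-compatible endpoint. The payload below is illustrative (it assumes the server listens on localhost:8000 and was started with `--served-model-name glm-5`, and the tool schema is our own example); the commented `curl` line sends it:

```shell
# Build and locally validate an example tool-calling request payload
PAYLOAD='{
  "model": "glm-5",
  "messages": [{"role": "user", "content": "What is the weather in Beijing?"}],
  "tools": [{"type": "function", "function": {"name": "get_weather",
    "parameters": {"type": "object", "properties": {"city": {"type": "string"}}}}}],
  "tool_choice": "auto"
}'
echo "$PAYLOAD" | python3 -m json.tool >/dev/null && echo "payload OK"
# curl -s http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d "$PAYLOAD"
```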