## Summary
- Add `USE_MODELSCOPE_HUB=0` to both Online and Offline lm-eval sections
- Add explanatory notes about Docker containers launching with
`VLLM_USE_MODELSCOPE=True`
The Docker containers set `VLLM_USE_MODELSCOPE=True`, which causes
lm-eval to download datasets from ModelScope instead of HuggingFace,
resulting in "Repo not exists" errors. Setting `USE_MODELSCOPE_HUB=0`
disables this behavior.
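A minimal sketch of the override, assuming the variable names from the description above (`VLLM_USE_MODELSCOPE` set by the image, `USE_MODELSCOPE_HUB` as the opt-out), with lm-eval then launched in the adjusted environment:

```python
import os

# The Docker images launch with VLLM_USE_MODELSCOPE=True, which points
# lm-eval's dataset downloads at ModelScope. Overriding USE_MODELSCOPE_HUB
# before invoking lm-eval lets datasets resolve against HuggingFace again.
os.environ["USE_MODELSCOPE_HUB"] = "0"

# lm-eval can then be run in this environment, e.g.
#   subprocess.run(["lm_eval", "--tasks", "gsm8k"], env=os.environ)
```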
Fixes #607
- vLLM version: v0.17.0
- vLLM main:
4034c3d32e
Signed-off-by: bazingazhou233-hub <bazingazhou233-hub@users.noreply.github.com>
Co-authored-by: bazingazhou233-hub <bazingazhou233-hub@users.noreply.github.com>
### What this PR does / why we need it?
Fix:
```
DeprecationWarning: max_tokens is deprecated in favor of the max_completion_tokens field.
```
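A sketch of the field rename behind the warning: the OpenAI-compatible chat API deprecates `max_tokens` in favor of `max_completion_tokens`, so client payloads should carry the new field. The model name and message below are placeholders.

```python
# Request payload still using the deprecated field.
request = {
    "model": "Qwen/Qwen2.5-7B-Instruct",          # placeholder model name
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 256,                             # deprecated field
}

# Migrate to the replacement field with the same value.
request["max_completion_tokens"] = request.pop("max_tokens")
```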
- vLLM version: v0.14.1
- vLLM main:
d68209402d
Signed-off-by: shen-shanshan <467638484@qq.com>
### What this PR does / why we need it?
Refactor the DeepSeek-V3.2-Exp tutorial.
- vLLM version: v0.11.0
- vLLM main:
83f478bb19
---------
Signed-off-by: menogrey <1299267905@qq.com>
### What this PR does / why we need it?
Update the `docker run` command; specifically, add `--shm-size=1g`.
### Does this PR introduce _any_ user-facing change?
For users/developers running vllm-ascend via Docker, the container's shared
memory is increased from the default 64MB to 1GB.
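An illustrative `docker run` argument list for this change, assuming a placeholder image tag: `--shm-size=1g` raises the container's shared memory above Docker's 64MB default, which vLLM's inter-process communication relies on.

```python
# Hypothetical invocation showing where the new flag sits; only
# --shm-size=1g is taken from the change above, the rest is illustrative.
docker_cmd = [
    "docker", "run", "--rm", "-it",
    "--shm-size=1g",                      # the flag this PR adds
    "quay.io/ascend/vllm-ascend:latest",  # placeholder image tag
]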
### How was this patch tested?
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0
Signed-off-by: wangli <wangli858794774@gmail.com>
### What this PR does / why we need it?
Update the user guide for using lm-eval:
1. Add a section on running lm-eval against an online server
2. Add a section on using offline datasets
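A hedged sketch of the "online server" flow the guide covers: pointing lm-eval's `local-completions` model at a running OpenAI-compatible endpoint. The URL, model name, and task are placeholders, and the exact `--model_args` keys may vary across lm-eval versions.

```python
# Command-line invocation built as an argument list; not specific to any
# one deployment -- adjust base_url and model for your server.
lm_eval_cmd = [
    "lm_eval",
    "--model", "local-completions",
    "--model_args",
    "base_url=http://localhost:8000/v1/completions,"
    "model=Qwen/Qwen2.5-7B-Instruct",
    "--tasks", "gsm8k",
    "--batch_size", "1",
]
```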
- vLLM version: v0.10.0
- vLLM main:
9edd1db02b
---------
Signed-off-by: hfadzxy <starmoon_zhang@163.com>
### What this PR does / why we need it?
1. Enable pymarkdown check
2. Enable python `__init__.py` check for vllm and vllm-ascend
3. Clean up the code
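A minimal sketch of the `__init__.py` presence check described above; the actual CI script may be implemented differently, but the idea is to flag package-like directories that contain Python modules without an `__init__.py`.

```python
from pathlib import Path


def missing_init_dirs(root: str) -> list[str]:
    """Return directories under `root` that hold .py files but lack
    an __init__.py marker."""
    missing = set()
    for py_file in Path(root).rglob("*.py"):
        pkg = py_file.parent
        if py_file.name != "__init__.py" and not (pkg / "__init__.py").exists():
            missing.add(str(pkg))
    return sorted(missing)
```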
### How was this patch tested?
- vLLM version: v0.9.2
- vLLM main:
29c6fbe58c
---------
Signed-off-by: wangli <wangli858794774@gmail.com>
### What this PR does / why we need it?
Add developer guide for using lm-eval
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
Tested manually.
---------
Signed-off-by: hfadzxy <starmoon_zhang@163.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: Yikun Jiang <yikunkero@gmail.com>