## Summary
- Add `USE_MODELSCOPE_HUB=0` to both Online and Offline lm-eval sections
- Add explanatory notes about Docker containers launching with `VLLM_USE_MODELSCOPE=True`

The Docker containers set `VLLM_USE_MODELSCOPE=True`, which causes lm-eval to download datasets from ModelScope instead of HuggingFace, resulting in "Repo not exists" errors. Setting `USE_MODELSCOPE_HUB=0` disables this behavior.
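A minimal sketch of the workaround described above, run before invoking lm-eval inside the container (the lm-eval command line is a hypothetical placeholder; only the environment variable comes from this change):

```shell
# Override the ModelScope hub setting baked into the Docker image
# (VLLM_USE_MODELSCOPE=True) so lm-eval fetches datasets from HuggingFace.
export USE_MODELSCOPE_HUB=0
echo "USE_MODELSCOPE_HUB=$USE_MODELSCOPE_HUB"

# Hypothetical lm-eval invocation; model and task names are placeholders.
# lm_eval --model vllm --model_args pretrained=<model> --tasks <task>
```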
Fixes #607

- vLLM version: v0.17.0
- vLLM main: 4034c3d32e
Signed-off-by: bazingazhou233-hub <bazingazhou233-hub@users.noreply.github.com>
Co-authored-by: bazingazhou233-hub <bazingazhou233-hub@users.noreply.github.com>
## vLLM Ascend Plugin documents

Live doc: https://docs.vllm.ai/projects/ascend
### Build the docs

```bash
# Install dependencies.
pip install -r requirements-docs.txt

# Build the docs.
make clean
make html

# Build the docs with translation.
make intl
```

### Open the docs with your browser

```bash
python -m http.server -d _build/html/
```

Launch your browser and open:
- English version: http://localhost:8000
- Chinese version: http://localhost:8000/zh_CN
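Before serving, you can sanity-check that the build produced output in the `_build/html` directory used by the server command above (a sketch; assumes the default Sphinx output location):

```shell
# Verify the built docs exist before serving them.
if [ -f _build/html/index.html ]; then
  echo "docs built"
else
  echo "run 'make html' first"
fi
```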