[Bugfix][CI] Remove V0 Spec Decode CI (#1656)
### What this PR does / why we need it?
This PR fixes the following error in the long-term test CI:
```bash
modelscope - ERROR - Repo JackFram/llama-68m not exists on either https://www.modelscope.cn/ or https://www.modelscope.ai/
```
The Hugging Face model is replaced with its ModelScope counterpart.
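For context, vLLM decides where to download weights from based on the `VLLM_USE_MODELSCOPE` environment variable that the CI script sets. A minimal sketch of such a gate (the helper name and exact parsing here are illustrative assumptions, not code from this PR):

```python
import os

def use_modelscope() -> bool:
    # Hypothetical helper sketching how vLLM-style code can gate the
    # model source on an environment variable: "true" (case-insensitive)
    # selects www.modelscope.cn, anything else falls back to the
    # Hugging Face Hub.
    return os.environ.get("VLLM_USE_MODELSCOPE", "False").lower() == "true"

# The CI sets the variable inline, e.g. `VLLM_USE_MODELSCOPE=True pytest ...`
os.environ["VLLM_USE_MODELSCOPE"] = "True"
print(use_modelscope())  # True
```

Setting the variable per-command, as the workflow does, keeps the ModelScope fallback scoped to the tests that need it.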
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- vLLM version: v0.9.1
- vLLM main: 71d1d75b7a
---------
Signed-off-by: Shanshan Shen <87969357+shen-shanshan@users.noreply.github.com>
```diff
@@ -96,13 +96,8 @@ jobs:
       - name: Run vllm-project/vllm-ascend long term test
         run: |
           if [[ "${{ matrix.os }}" == "linux-arm64-npu-1" ]]; then
-            # v0 spec decode test
-            # TODO: Revert me when test_mtp_correctness is fixed
-            # VLLM_USE_MODELSCOPE=True pytest -sv tests/e2e/long_term/spec_decode_v0/e2e/test_mtp_correctness.py # it needs a clean process
-            pytest -sv tests/e2e/long_term/spec_decode_v0 --ignore=tests/e2e/long_term/spec_decode_v0/e2e/test_mtp_correctness.py
             # accuracy test single card
             pytest -sv tests/e2e/long_term/test_accuracy.py
-          else
+          # else
-            # accuracy test multi card
-            VLLM_USE_MODELSCOPE=True pytest -sv tests/e2e/long_term/test_deepseek_v2_lite_tp2_accuracy.py
+          # VLLM_USE_MODELSCOPE=True pytest -sv tests/e2e/long_term/test_deepseek_v2_lite_tp2_accuracy.py
           fi
```