[Model] Add qwen3Next support in Main (#4596)
### What this PR does / why we need it?
Add Qwen3Next support in main.

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?
- vLLM version: v0.11.2
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.2

---------

Signed-off-by: SunnyLee219 <3294305115@qq.com>
Changed file: .github/workflows/_e2e_test.yaml (vendored, 2 changes)
```diff
@@ -286,4 +286,4 @@ jobs:
           VLLM_USE_MODELSCOPE: True
         run: |
           . /usr/local/Ascend/ascend-toolkit/8.3.RC2/bisheng_toolkit/set_env.sh
-          #pytest -sv tests/e2e/multicard/test_qwen3_next.py
+          pytest -sv tests/e2e/multicard/test_qwen3_next.py
```