wangxiyuan
7f2673ea2d
upgrade vLLM to main ( #4608 )
...
1. fix https://github.com/vllm-project/vllm/pull/28542
The models whose structures we modified are:
- Qwen2.5-VL (some patches still remain)
- Qwen2-VL
- Qwen2
- DeepSeek series
- Qwen-moe series
2. fix https://github.com/vllm-project/vllm/pull/29121
The output token type changed from a numpy array to `list[list[int]]` (see the sketch after this list).
3. fix https://github.com/vllm-project/vllm/pull/29262
The `xformers` backend for multimodal models has been deprecated.
4. fix https://github.com/vllm-project/vllm/pull/29342
5. fix https://github.com/vllm-project/vllm/pull/28579
6. fix https://github.com/vllm-project/vllm/pull/28718
7. fix https://github.com/vllm-project/vllm/issues/28665
8. fix https://github.com/vllm-project/vllm/pull/26847
vLLM introduced the `optimization-level` option; some default configs have
changed, and the `--enforce-eager` parameter has been deprecated (see the sketch after this list).
9. fix http://github.com/vllm-project/vllm/pull/29223: the sampler now
returns a tuple (see the sketch after this list).
10. fix https://github.com/vllm-project/vllm/pull/29471: we'll remove the
related patch to avoid this kind of error.
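For item 2, a minimal sketch of the adaptation, assuming only the container type changed; the helper and its name are illustrative, not vLLM code:

```python
# Sampled token ids changed from a numpy array to list[list[int]]
# (vllm-project/vllm#29121). Normalize both shapes to the new type.
import numpy as np

def as_token_lists(sampled) -> list[list[int]]:
    if isinstance(sampled, np.ndarray):  # old format: 2-D array of token ids
        return sampled.tolist()
    return sampled                       # new format: already list[list[int]]

assert as_token_lists(np.array([[1, 2], [3, 4]])) == [[1, 2], [3, 4]]
assert as_token_lists([[1, 2], [3, 4]]) == [[1, 2], [3, 4]]
```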
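For item 8, a minimal sketch of migrating off the deprecated flag, assuming the new optimization level maps onto vLLM's integer compilation levels and that `compilation_config` accepts one (both are assumptions, not confirmed by this PR):

```python
from vllm import LLM

# Before: eager mode via the now-deprecated parameter (assumption):
#   llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", enforce_eager=True)
# After: request the lowest optimization level (no graph compilation):
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", compilation_config=0)
```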
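For item 9, a self-contained sketch of the call-site change when a return type becomes a tuple; the stub below stands in for the real sampler, and every name is illustrative:

```python
def sampler(logits: list[float]) -> tuple[list[int], dict]:
    """Stand-in for the new sampler: returns (token_ids, aux_info)."""
    token_ids = [max(range(len(logits)), key=logits.__getitem__)]
    return token_ids, {"logprobs": None}

# Before: sampler_output = sampler(logits)
# After: unpack the tuple; keep using the first element as before.
sampler_output, _aux = sampler([0.1, 2.0, 0.3])
assert sampler_output == [1]
```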
Co-authored-by: hfadzxy <starmoon_zhang@163.com>
Co-authored-by: wangli <wangli858794774@gmail.com>
- vLLM version: v0.11.2
---------
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: hfadzxy <starmoon_zhang@163.com>
Co-authored-by: wangli <wangli858794774@gmail.com>
Co-authored-by: hfadzxy <starmoon_zhang@163.com>
2025-12-02 22:10:52 +08:00
wangxiyuan
a1f142b7ad
Drop 0.11.0 support ( #4377 )
...
There is a lot of hack code for v0.11.0, which makes the codebase hard to
upgrade to newer vLLM versions. Since v0.11.2 will be released soon, let's
drop v0.11.0 support first; we'll then upgrade to v0.11.2.
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-11-24 17:08:20 +08:00
Icey
d9cdc65854
Upgrade to new vllm commit ( #3719 )
...
### What this PR does / why we need it?
Upgrade to new vllm commit:
c9461e05a4
- Fix many imports, caused by
https://github.com/vllm-project/vllm/pull/26908
- Fix the `sha256` import, caused by
https://github.com/vllm-project/vllm/pull/27169
- Remove `SchedulerConfig.send_delta_data`, caused by
https://github.com/vllm-project/vllm/pull/27142
- Fix `FusedMoE` for dual-stream execution (see the sketch below), caused by
https://github.com/vllm-project/vllm/pull/26440
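A generic sketch of what dual-stream execution means, not vLLM's `FusedMoE` code: CUDA streams stand in for the NPU streams, and the two matmuls are placeholders for the shared/routed expert branches.

```python
import torch

# Launch two independent kernels on separate streams so they can overlap,
# then make the default stream wait on both before consuming the results.
if torch.cuda.is_available():
    s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()
    a = torch.randn(1024, 1024, device="cuda")
    b = torch.randn(1024, 1024, device="cuda")
    s1.wait_stream(torch.cuda.current_stream())  # inputs ready before launch
    s2.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s1):
        x = a @ a  # e.g. shared-expert branch
    with torch.cuda.stream(s2):
        y = b @ b  # e.g. routed-expert branch
    torch.cuda.current_stream().wait_stream(s1)
    torch.cuda.current_stream().wait_stream(s2)
    z = x + y  # safe only after both streams are synchronized
```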
### Does this PR introduce _any_ user-facing change?
N/A
### How was this patch tested?
CI passed with new added/existing test.
- vLLM version: v0.11.0rc3
- vLLM main:
17c540a993
---------
Signed-off-by: MengqingCao <cmq0113@163.com>
Signed-off-by: Icey <1790571317@qq.com>
Co-authored-by: MengqingCao <cmq0113@163.com>
2025-10-25 15:36:32 +08:00
lidenghui1110
0f3939e5a9
[Feature]cpu offload connector ( #1659 )
...
This PR implements a CPU offload connector that enables offloading the NPU
KV cache to host DRAM (a minimal sketch of the idea follows).
- vLLM version: v0.10.2
- vLLM main:
5aeb925452
Signed-off-by: lidenghui <lidenghui1110@gmail.com>
Signed-off-by: AlvisGong <gwly0401@163.com>
Signed-off-by: CalvinXKY <kyxiezju@163.com>
Co-authored-by: AlvisGong <gwly0401@163.com>
2025-09-23 14:25:05 +08:00