lidenghui1110
0f3939e5a9
[Feature] CPU offload connector (#1659)
...
This PR implements a CPU offload connector that enables offloading the NPU KV cache
to host DRAM.
- vLLM version: v0.10.2
- vLLM main:
5aeb925452
Signed-off-by: lidenghui <lidenghui1110@gmail.com>
Signed-off-by: AlvisGong <gwly0401@163.com>
Signed-off-by: CalvinXKY <kyxiezju@163.com>
Co-authored-by: AlvisGong <gwly0401@163.com>
2025-09-23 14:25:05 +08:00
yiz-liu
88ca8a051c
[Feat][Graph] Support DeepSeek with ACL Graph (#2707)
...
### What this PR does / why we need it?
In memory of #677, a long-overdue milestone. DeepSeek V3/R1 should now
work with ACL Graph.
### Does this PR introduce _any_ user-facing change?
None.
### How was this patch tested?
Working on it.
- vLLM version: v0.10.2
- vLLM main:
68dbde5dbb
---------
Signed-off-by: Yizhou Liu <liu_yizhou@outlook.com>
2025-09-16 17:50:17 +08:00
wangxiyuan
c556038ef0
[New model] Qwen3-next support (#2917)
...
### What this PR does / why we need it?
Add Qwen3-next support.
### Does this PR introduce _any_ user-facing change?
Yes, users can now run Qwen3-Next.
Related doc: https://github.com/vllm-project/vllm-ascend/pull/2916; the
tutorial will be available
[here](https://vllm-ascend.readthedocs.io/en/latest/tutorials/multi_npu_qwen3_next.html).
### How was this patch tested?
Doc CI passed
Related: https://github.com/vllm-project/vllm-ascend/issues/2884
Co-Authored-By: Angazenn <supperccell@163.com>
Co-Authored-By: zzzzwwjj <1183291235@qq.com>
Co-Authored-By: MengqingCao <cmq0113@163.com>
Co-Authored-By: linfeng-yuan <1102311262@qq.com>
Co-Authored-By: hust17yixuan <303660421@qq.com>
Co-Authored-By: SunnyLee219 <3294305115@qq.com>
Co-Authored-By: maoxx241 <maoxx241@umn.edu>
- vLLM version: v0.10.2
- vLLM main:
b834b4cbf1
---------
Signed-off-by: MengqingCao <cmq0113@163.com>
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: Angazenn <supperccell@163.com>
Signed-off-by: Your Name <you@example.com>
Signed-off-by: zzzzwwjj <1183291235@qq.com>
Signed-off-by: linfeng-yuan <1102311262@qq.com>
Signed-off-by: hust17yixuan <303660421@qq.com>
Co-authored-by: MengqingCao <cmq0113@163.com>
Co-authored-by: Angazenn <supperccell@163.com>
Co-authored-by: Your Name <you@example.com>
Co-authored-by: zzzzwwjj <1183291235@qq.com>
Co-authored-by: linfeng-yuan <1102311262@qq.com>
Co-authored-by: hust17yixuan <303660421@qq.com>
2025-09-16 01:17:42 +08:00
Icey
aa4d2a91ed
Refactor AscendMultiHeadLatentAttention (#2826)
...
### What this PR does / why we need it?
Register AscendMultiHeadLatentAttention as a CustomOp, following upstream vLLM changes.
### Does this PR introduce _any_ user-facing change?
N/A
### How was this patch tested?
CI passed with newly added and existing tests.
- vLLM version: main
- vLLM main:
b23fb78623
---------
Signed-off-by: Icey <1790571317@qq.com>
2025-09-10 11:26:11 +08:00