EngineX/xc-llm-ascend
Path: xc-llm-ascend/vllm_ascend/worker
Commit: 2c175f5ed829ba432aec9966150decc98c96c7e9
Latest commit: d781902ce9 by zhangxinyuehfad (2026-03-26 18:36:04 +08:00)
[v0.18.0][CI] Fix releases/v0.18.0 ci test only support vllm v0.18.0 (#7686)

### What this PR does / why we need it?
Fix releases/v0.18.0 ci test only support vllm v0.18.0

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
v2/
    [v0.18.0][CI] Fix releases/v0.18.0 ci test only support vllm v0.18.0 (#7686)
    2026-03-26 18:36:04 +08:00
__init__.py
    [Misc][V0 Deprecation] Remove Cache Engine Used for V0 Worker (#1878)
    2025-07-19 09:42:32 +08:00
block_table.py
    [Hybrid] support prefix cache for Qwen3.5/Next with --mamba-cache-mode align (#7103)
    2026-03-15 09:44:09 +08:00
model_runner_v1.py
    fix uncompatible between fc1 and non-sp-padding (#7643)
    2026-03-25 23:23:37 +08:00
npu_input_batch.py
    [Hybrid] support prefix cache for Qwen3.5/Next with --mamba-cache-mode align (#7103)
    2026-03-15 09:44:09 +08:00
pcp_utils.py
    feat(attention_cp): support chunked prefill for Qwen3Next with PCP&DCP (#6900)
    2026-03-09 17:55:09 +08:00
worker.py
    [Bugfix] Fix multi-instance serving OOM on single card (#7427)
    2026-03-23 14:22:59 +08:00