EngineX/xc-llm-ascend
Branch: br/v0.18.0
Path: xc-llm-ascend/vllm_ascend/worker
Latest commit: b6549b6e38 "Add feature: priority" by Jing Wang (Signed-off-by: Jing Wang <jingwang96@qq.com>), 2026-05-13 06:16:25 +00:00
Name                Last commit                                                                             Date
v2/                 [v0.18.0][CI] Fix releases/v0.18.0 ci test only support vllm v0.18.0 (#7686)            2026-03-26 18:36:04 +08:00
__init__.py         [Misc][V0 Deprecation] Remove Cache Engine Used for V0 Worker (#1878)                   2025-07-19 09:42:32 +08:00
block_table.py      [Hybrid] support prefix cache for Qwen3.5/Next with --mamba-cache-mode align (#7103)    2026-03-15 09:44:09 +08:00
model_runner_v1.py  [bugfix][0.18.0] Fix race in non-blocking num_accepted_tokens (#8764)                   2026-04-27 23:28:52 +08:00
npu_input_batch.py  [Hybrid] support prefix cache for Qwen3.5/Next with --mamba-cache-mode align (#7103)    2026-03-15 09:44:09 +08:00
pcp_utils.py        feat(attention_cp): support chunked prefill for Qwen3Next with PCP&DCP (#6900)          2026-03-09 17:55:09 +08:00
worker.py           Add feature: priority                                                                   2026-05-13 06:16:25 +00:00