EngineX/xc-llm-ascend
Path: xc-llm-ascend/vllm_ascend/worker
Commit: f6e5decc109663f39e44e5fb792988b18a74fd53
Latest commit: f6e5decc10 by wangxiyuan, 2025-05-28 21:18:41 +08:00

[CI] upgrade to vllm 0.9.0 (#959)

Upgrade to vllm 0.9.0. Version 0.8.5 is no longer supported.

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | port deepseekv2 and mtp to main branch (#429) | 2025-04-19 17:38:18 +08:00 |
| `cache_engine.py` | support deepseek quant & mix-parallel with graphmode (#585) | 2025-04-23 16:23:25 +08:00 |
| `draft_model_runner.py` | [CI] upgrade vllm to 0.8.5 (#715) | 2025-04-30 09:15:50 +08:00 |
| `model_runner_v1.py` | [CI] upgrade to vllm 0.9.0 (#959) | 2025-05-28 21:18:41 +08:00 |
| `model_runner.py` | [CI] upgrade to vllm 0.9.0 (#959) | 2025-05-28 21:18:41 +08:00 |
| `multi_step_runner.py` | [Performance]: Custom AscendC Kernel of Multi-Step Prepare Input (#814) | 2025-05-20 09:31:30 +08:00 |
| `multi_step_worker.py` | support multistep decode (#299) | 2025-03-11 19:20:06 +08:00 |
| `pooling_model_runner.py` | [MISC] Clean up torch_npu (#688) | 2025-04-29 18:03:38 +08:00 |
| `worker_v1.py` | [V1][LoRA][Test] V1 Engine LoRA support & e2e test (#893) | 2025-05-22 19:20:51 +08:00 |
| `worker.py` | [Disaggregated Prefill] P2P Disaggregated Prefill based on llm_datadist (#694) | 2025-05-01 22:31:36 +08:00 |