EngineX/xc-llm-ascend
xc-llm-ascend/tests/ut at commit eb390545ec486c48916cf42f246ff693171210c0
wangxiyuan 343955c7ac [CI] Follow vLLM FusedMoEParallelConfig interface change and clean up unused config (#1625)
Commit 78fe77534b from vLLM reverted the change for FusedMoEParallelConfig.
This PR does the same to fix the CI error.

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2025-07-04 17:54:33 +08:00
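
The commit above unbreaks CI by tracking an upstream vLLM interface change. One way a downstream plugin's unit tests can surface such drift early is an explicit interface guard that fails as soon as the upstream constructor changes. The sketch below is a hypothetical illustration, not code from this repository: the import path vllm.model_executor.layers.fused_moe.config and the parameter names tp_size, dp_size, ep_size, and use_ep are assumptions about the upstream dataclass.

import inspect

import pytest

# Hypothetical guard: the module path below is an assumption about where
# vLLM defines FusedMoEParallelConfig; skip the test if vLLM is absent.
moe_config = pytest.importorskip("vllm.model_executor.layers.fused_moe.config")


def test_fused_moe_parallel_config_interface():
    """Fail fast when the upstream FusedMoEParallelConfig constructor
    no longer accepts the parameters this plugin relies on."""
    cfg_cls = moe_config.FusedMoEParallelConfig
    # Assumed parameter names; replace with whatever the plugin actually reads.
    expected = {"tp_size", "dp_size", "ep_size", "use_ep"}
    actual = set(inspect.signature(cfg_cls).parameters)
    missing = expected - actual
    assert not missing, f"upstream interface changed; missing: {missing}"

A guard like this turns a confusing downstream test failure into a single, clearly named assertion pointing at the upstream interface.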
Name                        Last commit date             Last commit message
attention                   2025-07-02 16:40:51 +08:00   Fix W8A8 fused moe bug (#1529)
distributed                 2025-07-02 16:14:52 +08:00   add ut for kv transfer module (#1531)
fake_weight                 2025-06-16 18:32:28 +08:00   [CI] Add unit test framework (#1201)
ops                         2025-07-03 22:21:42 +08:00   [CORE] initial support for torchair with non-mla backend (#1506)
patch/worker/patch_common   2025-07-03 18:36:17 +08:00   [CI] Fix FusedMoEConfig and input batch failure to recover CI (#1602)
quantization                2025-07-03 22:21:42 +08:00   [CORE] initial support for torchair with non-mla backend (#1506)
worker                      2025-06-30 16:31:12 +08:00   [V1][ModelRunner] Support pooling model for v1 engine (#1359)
base.py                     2025-06-27 09:14:43 +08:00   [Build] Add build info (#1386)
test_ascend_config.py       2025-07-04 17:54:33 +08:00   [CI] Follow vLLM FusedMoEParallelConfig interface change and clean up unused config (#1625)
test_platform.py            2025-07-03 22:21:42 +08:00   [CORE] initial support for torchair with non-mla backend (#1506)
test_utils.py               2025-07-03 22:12:46 +08:00   [Quantization] 300I Duo support w8a8 quantization (#1560)