EngineX/xc-llm-ascend
Files
5fed166a99cd35f6c8712d1d5144dee7aa247298
xc-llm-ascend/vllm_ascend/torchair
Latest commit: ec98320285 by zouyida2052 — "correct bug to fix the value of max_num_tokens" (#3933), 2025-11-03 14:17:51 +08:00

### What this PR does / why we need it?
Corrects a bug in the value of max_num_tokens.

- vLLM version: v0.11.0
- vLLM main: 83f478bb19

Signed-off-by: zouyida2052 <zouyida2002@gmail.com>
| Name | Last commit | Date |
|------|-------------|------|
| models | [Model][3/N] Refactor sfa into mla and remove deepseek_v3_2.py (#3769) | 2025-10-30 17:06:38 +08:00 |
| ops | [Bugfix] [MoE] fix error in deepseek when using allgather (#3824) | 2025-10-29 14:51:39 +08:00 |
| quantization | [Feat][quantization] Support new version w4a8 dynamic quantization for Linear layers (#3311) | 2025-10-21 20:18:39 +08:00 |
| `__init__.py` | [1/4][Refactor] Refactor torchair worker (#1885) | 2025-07-21 11:50:46 +08:00 |
| `torchair_attention.py` | [main] remove dbo code (#3712) | 2025-10-25 15:53:01 +08:00 |
| `torchair_mla.py` | [main] remove dbo code (#3712) | 2025-10-25 15:53:01 +08:00 |
| `torchair_model_runner.py` | correct bug to fix the value of max_num_tokens (#3933) | 2025-11-03 14:17:51 +08:00 |
| `torchair_mtp_proposer.py` | [FEAT] Refactor spec decode to support efficient padded speculation (#3528) | 2025-10-30 16:53:05 +08:00 |
| `torchair_sfa.py` | [main] remove dbo code (#3712) | 2025-10-25 15:53:01 +08:00 |
| `torchair_worker.py` | [CI] Upgrade vllm to newest commit (#3182) | 2025-09-26 06:18:15 +08:00 |
| `utils.py` | [BugFix] deepseek torchair adapt for torch_npu version (#3862) | 2025-10-29 22:39:34 +08:00 |
Powered by Gitea Version: 1.24.3