fromck
|
74d4f804e8
|
add 2 kernels and optimize the calculation of topk_indices (#134)
Co-authored-by: chengxiaokang <chengxiaokang@baidu.com>
|
2026-01-22 10:29:28 +08:00 |
|
yuqilinaa
|
c9f00c132c
|
[Kernel] Enable fast random sample on Kunlun3 Platform with generators (#73)
Co-authored-by: Xinyu Dong <dongxinyu03@baidu.com>
|
2026-01-20 21:49:33 +08:00 |
|
WANG HAO
|
c404af3a41
|
[Feature] fully support multi-LoRA, latest xspeedgate needed (#133)
Co-authored-by: wanghao <wanghao@example.com>
|
2026-01-20 21:27:02 +08:00 |
|
youzeyu
|
92b40628cd
|
delete glmGlmForCausalLM register (#132)
Co-authored-by: hanhaowen <hanhaowen@baidu.com>
|
2026-01-20 19:22:33 +08:00 |
|
Li Wei
|
2a2d773ad0
|
[fix]bias bug in kunlun_scale_mm (#126)
|
2026-01-20 13:24:52 +08:00 |
|
Li Wei
|
f2019b145f
|
Revert "support glm47 in 0.11.0 version (#116)" (#123)
This reverts commit 9006e37979.
|
2026-01-20 10:46:11 +08:00 |
|
roger-lcc
|
9006e37979
|
support glm47 in 0.11.0 version (#116)
* support glm47 in 0.11.0 version
* support glm47 in 0.11.0 version
---------
Co-authored-by: luochencheng <luochencheng@baidu.com>
|
2026-01-19 20:26:26 +08:00 |
|
Li Wei
|
8f56cbf3ed
|
[refactor]update Kunlun classes with monkey patch (#122)
Signed-off-by: Li Wei <liwei.109@outlook.com>
|
2026-01-19 20:24:19 +08:00 |
|
baoqian426
|
2512259944
|
long-context chunking makes attention crash; fix it (#117)
Co-authored-by: root <root@rdtest-node1150.bcc-zwlt.baidu.com>
|
2026-01-17 18:38:23 +08:00 |
|
fromck
|
71a5a04e0a
|
[Misc]Specify that DS32 only supports --kv-cache-dtype bfloat16 (#119)
* [Kernel] add kernels to torch.ops
* [Misc]Specify that DS only supports --kv-cache-dtype bfloat16
---------
Co-authored-by: chengxiaokang <chengxiaokang@baidu.com>
|
2026-01-17 16:52:02 +08:00 |
|
Shiwen Tang
|
8988ad08b2
|
[Feature] Support Mixed-Precision Quantization for MoE (#112)
|
2026-01-14 18:42:18 +08:00 |
|
wzh
|
115eb32068
|
enable int8 bmm
|
2026-01-14 14:30:59 +08:00 |
|
Lidang Jiang
|
7ed71432ca
|
[Bug] Fix InternVL KeyError: ((1, 1, 3), '<i8') (#108)
|
2026-01-13 22:36:03 +08:00 |
|
roger-lcc
|
37cc307322
|
register apply_repetition_penalties_ in custom_op (#110)
* fix qwen2_vl for 0.11.0
* register apply_repetition_penalties_ in custom_op
---------
Co-authored-by: luochencheng <luochencheng@baidu.com>
|
2026-01-13 20:22:14 +08:00 |
|
baoqian426
|
fb424acca7
|
Merge pull request #106 from baoqian426/enable-full-cudagraph-deepseek
enable full cudagraph for deepseek
|
2026-01-13 09:57:56 +08:00 |
|
Jin Hanyu
|
bd90350968
|
[Bug] Fix no apply_top_k_top_p issue. (#101)
|
2026-01-12 16:38:03 +08:00 |
|
hanhaowen
|
ff8ebfa208
|
enable full cudagraph for deepseek
|
2026-01-12 15:18:12 +08:00 |
|
roger-lcc
|
0455b49519
|
[Bugs] fix qwen2_vl for 0.11.0 (#94)
Co-authored-by: luochencheng <luochencheng@baidu.com>
|
2026-01-09 15:05:40 +08:00 |
|
baoqian426
|
2c9b176e6e
|
[Feature] use for dp (#90)
|
2026-01-08 11:05:48 +08:00 |
|
baoqian426
|
eb40e8a07a
|
[Bugfix] fix can not import compressed_tensors (#87)
Co-authored-by: root <root@rdtest-node1150.bcc-zwlt.baidu.com>
|
2026-01-07 11:32:10 +08:00 |
|
Li Wei
|
1c1b84d78c
|
[fix]update compressed-tensors scheme
Deepseek v3.2 is supported now
Signed-off-by: Li Wei <liwei.109@outlook.com>
|
2026-01-06 22:30:27 +08:00 |
|
baoqian426
|
9c2b908908
|
Merge pull request #84 from xyDong0223/main
[Feature] DeepSeek Support MTP
|
2026-01-06 21:56:31 +08:00 |
|
dongxinyu03
|
26b311ccf5
|
[Feature] DeepSeek Support MTP
|
2026-01-06 21:37:21 +08:00 |
|
tangshiwen
|
f811ae968a
|
[fix] resolve cutlass_scaled_mm inference error
|
2026-01-06 20:52:12 +08:00 |
|
Li Wei
|
9533f68e99
|
[fix]matmul does not support cuda graph
|
2026-01-06 17:32:45 +08:00 |
|
Li Wei
|
515a4eeda9
|
[dev] support compressed-tensors w8a8 quantization (#75)
* [dev] support compressed-tensors w8a8 quantization
Co-authored-by: Li Wei <liwei.109@outlook.com>
* [refact]update KunlunScaleMMKernel impl
* [rebase]resolve conflicts and remove redundant code
---------
Co-authored-by: tangshiwen <tangshiwen@baidu.com>
|
2026-01-06 13:51:53 +08:00 |
|
baoqian426
|
ee0f50e68f
|
[Feature] support deepseek v3/r1/v3.2 (#78)
* [Feature] support deepseek v3/r1/v3.2
* fix gpt_oss
* update readme
* update readme
---------
Co-authored-by: hanhaowen <hanhaowen@baidu.com>
|
2026-01-05 22:55:35 +08:00 |
|
Xinyu Dong
|
07bc24a555
|
[Bugs] Fix moe when without bias (#76)
|
2026-01-05 10:51:23 +08:00 |
|
callmelaoyi
|
b86953acf9
|
[Kernel] Qwen3-next: optimize recompute_w_u_fwd & chunk_fwd_o (#74)
Co-authored-by: yuanjizhong <yuanjizhong@baidu.com>
|
2026-01-05 10:24:51 +08:00 |
|
Xinyu Dong
|
fe666fb24f
|
[Feature] Support gpt-oss and update model list (#71)
* [Docs] Update Support Models
* [Feature] Support gpt-oss
* [Docs] fix model support list
* Fix Moe
* Fix
* Fix moe_ep
* remove gpt oss graph support , not yet
---------
Co-authored-by: hanhaowen <hanhaowen@baidu.com>
|
2026-01-04 21:19:49 +08:00 |
|
Joeegin
|
ded24f5026
|
[Model] Support InternVL2_5 on v0.11.0 (#72)
Co-authored-by: v_qiaoyijin <v_qiaoyijin@baidu.com>
|
2026-01-04 16:38:05 +08:00 |
|
hanhaowen
|
b015bb76fd
|
remove qwen2.py and llama.py; fix llama output
|
2025-12-31 11:39:37 +08:00 |
|
Xinyu Dong
|
b3c30a3cb9
|
[Feature] Support XiaoMi MIMO Flash V2 (#62)
* [Feature] Support MIMO Flash V2
|
2025-12-31 10:16:33 +08:00 |
|
Li Wei
|
9cee025f41
|
Merge pull request #59 from liwei109/aicapx-quant
[fix]remove weight_loader_v2 to support cuda graph
|
2025-12-29 19:56:24 +08:00 |
|
baoqian426
|
45c6b8e927
|
Merge pull request #52 from liwei109/awq_gptq
[dev] support AWQ/GPTQ quantization for dense models
|
2025-12-24 17:05:26 +08:00 |
|
Li Wei
|
6546323c71
|
[dev] support AWQ/GPTQ quantization for dense models
|
2025-12-24 13:46:06 +08:00 |
|
Li Wei
|
383eb5459a
|
[refactor] remove redundant code in linear
|
2025-12-24 12:02:09 +08:00 |
|
Xinyu Dong
|
75d0bdae2f
|
Merge pull request #40 from ldh2020/v0.11.0dev
[Kernel] Optimize the performance of Qwen3-Next
|
2025-12-22 21:50:27 +08:00 |
|
hanhaowen
|
a4b9e92ca1
|
[Kernel] Replace native torch solve_tril by solve_tril_fwd kernel op
|
2025-12-22 17:37:19 +08:00 |
|
ldh2020
|
8261a09e2a
|
[Kernel] Optimize the selection and update OP of ssm state
|
2025-12-21 15:45:32 +08:00 |
|
ldh2020
|
b97c781300
|
[Kernel] Optimize the recurrent op
|
2025-12-21 11:22:06 +08:00 |
|
ldh2020
|
004e164bdb
|
[Kernel] Optimize the recurrent op
|
2025-12-21 11:18:00 +08:00 |
|
ldh2020
|
58c1db5073
|
[Bugfix] fix the bug of the flash_attention in Qwen3-Next
|
2025-12-21 10:34:43 +08:00 |
|
Xinyu Dong
|
6f96615ee3
|
Merge pull request #23 from ldh2020/v0.11.0dev
[Kernel] Use l2norm kernel op instead of triton op.
|
2025-12-19 15:26:18 +08:00 |
|
chenyili0619
|
2e2933d217
|
[Bug] Fix the error that occurred when a request included a seed.
|
2025-12-18 13:03:34 +08:00 |
|
ldh2020
|
fce97df908
|
[Kernel] Use l2norm kernel op instead of triton op.
|
2025-12-16 16:24:47 +08:00 |
|
Xinyu Dong
|
5a75795ade
|
[Model] Update llama.py
Remove redundancy
|
2025-12-15 21:28:56 +08:00 |
|
Xinyu Dong
|
7c7d0326c5
|
[Model] Register llama.py with vLLM
|
2025-12-15 21:21:28 +08:00 |
|
Xinyu Dong
|
ca059110b3
|
[Model] Support llama3 on v0.11.0
FULL AND PIECEWISE GRAPH ENABLED
|
2025-12-15 21:20:44 +08:00 |
|
ldh2020
|
cff4727fbb
|
[Kernel] Optimize the performance of causal_conv1d.
|
2025-12-12 17:22:35 +08:00 |
|