75 Commits

Author SHA1 Message Date
starkwj
34e04c5569 update base image 2026-03-02 18:46:04 +08:00
starkwj
4d8575115a add vxpu 2026-03-02 18:38:10 +08:00
Li Wei
e4c9b9f988 [Bugfix] cocopod ops can't be found (#242)
Signed-off-by: Li Wei <liwei.109@outlook.com>
2026-03-02 15:49:24 +08:00
chanzhennan
82544aa0cc [Feature] Merge branch 'Qwen3-Next' into main && Support Qwen-next (#222)
Signed-off-by: xyDong0223 <dongxinyu03@baidu.com>
Co-authored-by: xyDong0223 <dongxinyu03@baidu.com>
2026-02-28 11:15:50 +08:00
Shiwen Tang
b82b6026d6 [BugFix] Adapt GLM5 config for transformers 4.57 (#207)
Signed-off-by: tangshiwen <tangshiwen@baidu.com>
2026-02-25 18:47:26 +08:00
Xinyu Dong
76ec220b43 [Bugfix] Fix run failure (#198)
Signed-off-by: xyDong0223 <dongxinyu03@baidu.com>
2026-02-13 14:07:10 +08:00
Xinyu Dong
bf9369f733 Migrate XTorch operations to Kunlun operations (accelerating iteration) (#177)
Signed-off-by: dongxinyu03 <dongxinyu03@baidu.com>
2026-02-12 18:13:00 +08:00
Li Wei
744719587e [Feature] Support glmx (#194)
Signed-off-by: Li Wei <liwei.109@outlook.com>
Co-authored-by: tangshiwen <tangshiwen@baidu.com>
Co-authored-by: Xinyu Dong <dongxinyu03@baidu.com>
2026-02-12 15:40:42 +08:00
Xinyu Dong
070bfa4a73 [Bugfix] Fix Kunlun Graph failure (#193)
Signed-off-by: dongxinyu03 <dongxinyu03@baidu.com>
2026-02-11 18:52:18 +08:00
fromck
fc48b79ae9 support glm4.7 mtp (#187)
Signed-off-by: chengxiaokang <chengxiaokang@baidu.com>
Co-authored-by: chengxiaokang <chengxiaokang@baidu.com>
2026-02-11 18:32:30 +08:00
WANG HAO
bd8c999335 Further optimize multi-lora inference, LoRA-enabled performance achieves 80%+ of non-LoRA performance (#190)
* optimize lora inference

Signed-off-by: wanghao <wanghao@example.com>

* further optimize multi-lora inference, LoRA-enabled performance achieves 80%+ of non-LoRA performance

Signed-off-by: wanghao <wanghao@example.com>

---------

Signed-off-by: wanghao <wanghao@example.com>
Co-authored-by: wanghao <wanghao@example.com>
2026-02-11 12:04:14 +08:00
WANG HAO
6f30bc439d clean pr for ds.2 mtp support (#164)
* Add MTP support in eagle.py

Signed-off-by: wanghao129 <wanghao129@baidu.com>

* new pr for mtp

Signed-off-by: wanghao129 <wanghao129@baidu.com>

* Revert formatting changes in deepseek_v2.py

Signed-off-by: wanghao129 <wanghao129@baidu.com>

---------

Signed-off-by: wanghao129 <wanghao129@baidu.com>
Co-authored-by: wanghao129 <wanghao129@baidu.com>
2026-02-02 15:23:33 +08:00
fromck
6f12830839 [Kernel] add topk_per_row to optimize the calculation of topk_indexes (#168)
Signed-off-by: chengxiaokang <chengxiaokang@baidu.com>
Co-authored-by: chengxiaokang <chengxiaokang@baidu.com>
2026-02-02 11:07:49 +08:00
astrophel0
726cefb7a3 [dev]add glm4.7 tool-parser (#151)
Signed-off-by: zhangzhenyi <zhangzhenyi@baidu.com>
Co-authored-by: Li Wei <liwei.109@outlook.com>
2026-02-01 13:53:47 +08:00
Li Wei
71bd70ad6c [Feature] support compressed-tensors w4a16 quantization (#154)
- native int4 kimi model inference is supported

Signed-off-by: Li Wei <liwei.109@outlook.com>
2026-01-27 19:56:22 +08:00
Shiwen Tang
0711c1abfa [Feature] Support AWQ MoE W4A16 Quantization (#142)
Signed-off-by: tangshiwen <tangshiwen@baidu.com>
Co-authored-by: Li Wei <liwei.109@outlook.com>
2026-01-26 18:56:05 +08:00
baoqian426
1eaa1336ac [Bugfix] remove mla patch, server args no longer need --compilation-config for ds v3.1 (#145)
Signed-off-by: baoqian426 <1354987947@qq.com>
2026-01-23 15:59:43 +08:00
fromck
0ce5f1a3f7 Add kernels to optimize RoPE and the decoding stage (#143)
Co-authored-by: chengxiaokang <chengxiaokang@baidu.com>
2026-01-23 10:29:52 +08:00
fromck
74d4f804e8 add 2 kernels and optimize the calculation of topk_indices (#134)
Co-authored-by: chengxiaokang <chengxiaokang@baidu.com>
2026-01-22 10:29:28 +08:00
yuqilinaa
c9f00c132c [Kernel] Enable fast random sample on Kunlun3 Platform with generators (#73)
Co-authored-by: Xinyu Dong <dongxinyu03@baidu.com>
2026-01-20 21:49:33 +08:00
WANG HAO
c404af3a41 [Feature] fully support multi-lora, latest xspeedgate needed (#133)
Co-authored-by: wanghao <wanghao@example.com>
2026-01-20 21:27:02 +08:00
youzeyu
92b40628cd delete glmGlmForCausalLM register (#132)
Co-authored-by: hanhaowen <hanhaowen@baidu.com>
2026-01-20 19:22:33 +08:00
Li Wei
2a2d773ad0 [fix] bias bug in kunlun_scale_mm (#126) 2026-01-20 13:24:52 +08:00
Li Wei
f2019b145f Revert "support glm47 in 0.11.0 version (#116)" (#123)
This reverts commit 9006e37979.
2026-01-20 10:46:11 +08:00
roger-lcc
9006e37979 support glm47 in 0.11.0 version (#116)
* support glm47 in 0.11.0 version

* support glm47 in 0.11.0 version

---------

Co-authored-by: luochencheng <luochencheng@baidu.com>
2026-01-19 20:26:26 +08:00
Li Wei
8f56cbf3ed [refactor]update Kunlun classes with monkey patch (#122)
Signed-off-by: Li Wei <liwei.109@outlook.com>
2026-01-19 20:24:19 +08:00
baoqian426
2512259944 long-context chunk makes attention crash, fix it (#117)
Co-authored-by: root <root@rdtest-node1150.bcc-zwlt.baidu.com>
2026-01-17 18:38:23 +08:00
fromck
71a5a04e0a [Misc]Specify that DS32 only supports --kv-cache-dtype bfloat16 (#119)
* [Kernel] add kernels to torch.ops

* [Misc]Specify that DS only supports --kv-cache-dtype bfloat16

---------

Co-authored-by: chengxiaokang <chengxiaokang@baidu.com>
2026-01-17 16:52:02 +08:00
Shiwen Tang
8988ad08b2 [Feature] Support Mixed-Precision Quantization for MoE (#112) 2026-01-14 18:42:18 +08:00
wzh
115eb32068 enable int8 bmm 2026-01-14 14:30:59 +08:00
Lidang Jiang
7ed71432ca [Bug] Fix InternVL KeyError: ((1, 1, 3), '<i8') (#108) 2026-01-13 22:36:03 +08:00
roger-lcc
37cc307322 register apply_repetition_penalties_ in custom_op (#110)
* fix qwen2_vl for 0.11.0

* register apply_repetition_penalties_ in custom_op

---------

Co-authored-by: luochencheng <luochencheng@baidu.com>
2026-01-13 20:22:14 +08:00
baoqian426
fb424acca7 Merge pull request #106 from baoqian426/enable-full-cudagraph-deepseek
enable full cudagraph for deepseek
2026-01-13 09:57:56 +08:00
Jin Hanyu
bd90350968 [Bug] Fix no apply_top_k_top_p issue. (#101) 2026-01-12 16:38:03 +08:00
hanhaowen
ff8ebfa208 enable full cudagraph for deepseek 2026-01-12 15:18:12 +08:00
roger-lcc
0455b49519 [Bugs] fix qwen2_vl for 0.11.0 (#94)
Co-authored-by: luochencheng <luochencheng@baidu.com>
2026-01-09 15:05:40 +08:00
baoqian426
2c9b176e6e [Feature] use for dp (#90) 2026-01-08 11:05:48 +08:00
baoqian426
eb40e8a07a [Bugfix] fix can not import compressed_tensors (#87)
Co-authored-by: root <root@rdtest-node1150.bcc-zwlt.baidu.com>
2026-01-07 11:32:10 +08:00
Li Wei
1c1b84d78c [fix]update compressed-tensors scheme
Deepseek v3.2 is supported now

Signed-off-by: Li Wei <liwei.109@outlook.com>
2026-01-06 22:30:27 +08:00
baoqian426
9c2b908908 Merge pull request #84 from xyDong0223/main
[Feature] DeepSeek Support MTP
2026-01-06 21:56:31 +08:00
dongxinyu03
26b311ccf5 [Feature] DeepSeek Support MTP 2026-01-06 21:37:21 +08:00
tangshiwen
f811ae968a [fix] resolve cutlass_scaled_mm inference error 2026-01-06 20:52:12 +08:00
Li Wei
9533f68e99 [fix]matmul not support cuda graph 2026-01-06 17:32:45 +08:00
Li Wei
515a4eeda9 [dev] support compressed-tensors w8a8 quantization (#75)
* [dev] support compressed-tensors w8a8 quantization

Co-authored-by: Li Wei <liwei.109@outlook.com>

* [refact]update KunlunScaleMMKernel impl

* [rebase]resolve conflicts and remove redundant code

---------

Co-authored-by: tangshiwen <tangshiwen@baidu.com>
2026-01-06 13:51:53 +08:00
baoqian426
ee0f50e68f [Feature] support deepseek v3/r1/v3.2 (#78)
* [Feature] support deepseek v3/r1/v3.2

* fix gpt_oss

* update readme

* update readme

---------

Co-authored-by: hanhaowen <hanhaowen@baidu.com>
2026-01-05 22:55:35 +08:00
Xinyu Dong
07bc24a555 [Bugs] Fix moe when without bias (#76) 2026-01-05 10:51:23 +08:00
callmelaoyi
b86953acf9 [Kernel] Qwen3-next optimize recompute_w_u_fwd & chunk_fwd_o (#74)
Co-authored-by: yuanjizhong <yuanjizhong@baidu.com>
2026-01-05 10:24:51 +08:00
Xinyu Dong
fe666fb24f [Feature] Support gpt-oss and update model list (#71)
* [Docs] Update Support Models

* [Feature] Support gpt-oss

* [Docs] fix model support list

* Fix Moe

* Fix

* Fix moe_ep

* remove gpt oss graph support , not yet

---------

Co-authored-by: hanhaowen <hanhaowen@baidu.com>
2026-01-04 21:19:49 +08:00
Joeegin
ded24f5026 [Model] Support InternVL2_5 on v0.11.0 (#72)
Co-authored-by: v_qiaoyijin <v_qiaoyijin@baidu.com>
2026-01-04 16:38:05 +08:00
hanhaowen
b015bb76fd remove qwen2.py and llama.py, fix llama output 2025-12-31 11:39:37 +08:00