Xiaoyu Zhang | 49a5915f53 | [ready b200] fuse allreduce+add_rmsnorm in prepare_attention + mlp module (#7775) | 2025-07-10 15:12:39 -07:00
ronnie_zheng | 766392c6bd | [feature]Ascend quantization support (#7791) | 2025-07-10 09:17:37 -07:00
  Co-authored-by: ichernob <ichernobnn@gmail.com>
  Co-authored-by: liupeng <liupeng374@huawei.com>
likesen-alibaba | 4a0d19198b | Fix bug of deepseek-v3 under DP+EP mode with large batchsize/seqlen (#6449) | 2025-07-10 01:19:56 -07:00
Zaili Wang | 5748241549 | add sentencepiece as dependency explicitly (#7922) | 2025-07-10 01:06:27 -07:00
Binyao Jiang | 2d54d4bb64 | Feat: Support Phi-3.5-MoE in SGLang (#7907) | 2025-07-09 23:51:33 -07:00
Mick | b5e3d6031c | vlm: support video as an input modality (#5888) | 2025-07-09 23:48:35 -07:00
kyleliang-nv | dd445a41f5 | [feature] Add start step profile argument in /start_profile (#7608) | 2025-07-09 18:42:15 -07:00
almaslof | f9df11ae86 | Remove unused imports (#7898) | 2025-07-09 22:36:48 +08:00
jianan-gu | d389bedf72 | [CPU][Qwen3 MoE] Enable fused_topk CPU fusion and enhance FP8 TP padding (#7838) | 2025-07-09 02:04:21 -07:00
Cheng Wan | d487555f84 | [CI] Add deepep tests to CI (#7872) | 2025-07-09 01:49:47 -07:00
Xinyuan Tong | e5888eddda | Fixes typo in assertion message (#7895) | 2025-07-09 01:47:14 -07:00
  Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Yineng Zhang | 066f4ec91f | chore: bump v0.4.9.post1 (#7882) | 2025-07-09 00:28:17 -07:00
Yineng Zhang | b6b6268ccf | Revert "Embedding parallel by attn_tp (#7623)" (#7880) | 2025-07-08 22:03:09 -07:00
Shangming Cai | 64c5907e12 | [PD] Add guidance for prefill bootstrap timeout (#7846) | 2025-07-08 21:00:34 -07:00
  Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Chunyuan WU | 128f16a817 | [CPU]convert topk_weights to fp32 for INT8 and FP8 paths (for llama4) and fix LmHead weight pack (#7818) | 2025-07-08 19:27:24 -07:00
ybyang | 4986104618 | Bump xgrammar's version to 0.1.20 (#7866) | 2025-07-08 17:55:30 -07:00
Brayden Zhong | a37e1247c1 | [Multimodal][Perf] Use pybase64 instead of base64 (#7724) | 2025-07-08 14:00:58 -07:00
Xinyuan Tong | 136c6e0431 | fix: Handles input_embeds in GenerateReqInput when n>1 (#7830) | 2025-07-08 14:00:42 -07:00
  Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Xinyuan Tong | 43e20c0647 | Support Mimo-VL (#7579) | 2025-07-08 14:00:25 -07:00
  Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Xinyuan Tong | 4bab50a6b5 | Fix llama4 vision (#7840) | 2025-07-08 14:00:03 -07:00
  Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Xiaoyu Zhang | 2e7ab862e3 | Fix illegal memory in trtllm allreduce fusion (#7864) | 2025-07-08 11:47:17 -07:00
kk | 653b873b91 | Fix cache modules of triton import error (#7832) | 2025-07-08 02:50:09 -07:00
Shangming Cai | d379bda4fa | [Bugfix] Fix two batch overlap with auto DeepEP Dispatch (#7853) | 2025-07-08 02:49:32 -07:00
  Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Zhiyu | 659907e32b | Enable ModelOpt Llama4 fp8 checkpoint deployment in SGLang (#7129) | 2025-07-08 00:19:50 -07:00
SijiaYang | cb9d91ea8a | feat: support DeepSeek-R1-W4AFP8 model with ep-moe mode (#7762) | 2025-07-07 14:47:21 -07:00
  Signed-off-by: yangsijia.614 <yangsijia.614@bytedance.com>
Haohui Mai | 076313bd09 | [AMD] Fail gracefully when AITER is unavailable gfx90a GPUs (#7187) | 2025-07-07 09:09:58 +00:00
Ziming Huang | 9abe1163ac | fix duplicate args in schedule_batch (#7816) | 2025-07-07 01:31:03 -07:00
Zhiqiang Xie | 2fc824b84c | Kernels for efficient KV cache IO (#7313) | 2025-07-06 22:53:36 -07:00
Yuan Luo | 253454de9b | Integrate triton moe kernel (#7689) | 2025-07-06 20:05:49 -07:00
  Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
yuhsuan-t | 8d4a01cbd7 | Log the timestamps of each prefill/decode iteration (#6094) | 2025-07-07 01:57:27 +00:00
  Co-authored-by: yuhsuan-t <12108766+yuhsaun-t@users.noreply.github.com>
Nan Jiang | ba69c153f6 | [RL]: Fix error tagging in multi-stage wake up (#7812) | 2025-07-06 16:51:29 -07:00
  Co-authored-by: hebiao064 <hebiaobuaa@gmail.com>
Stefan He | 3589aa79b0 | [RL] Fix illegal memory for _import_static_state (#7733) | 2025-07-06 16:25:21 -07:00
  Co-authored-by: nanjiangwill <willjiang2018@gmail.com>
Lifu Huang | ea4bf12286 | Fix division-by-zero bug in LoRA triton kernels. (#7785) | 2025-07-06 00:45:29 -07:00
fzyzcjy | a291439a59 | Support logprobs in two-batch overlap (#7709) | 2025-07-05 19:05:32 -07:00
JieXin Liang | 54411f6afa | fix: disable dsv3_router_gemm in dsv3_nextn (#7793) | 2025-07-05 19:01:01 -07:00
Yineng Zhang | ec5f9c6269 | chore: bump v0.4.9 (#7802) | 2025-07-05 17:40:29 -07:00
Yineng Zhang | 62f5522ffe | chore: upgrade sgl-kernel v0.2.4 (#7801) | 2025-07-05 17:37:40 -07:00
Lianmin Zheng | 5589b75024 | Add treemask mode to build_eagle_tree & release sgl-kernel 0.2.3 (#7756) | 2025-07-05 12:17:05 -07:00
  Co-authored-by: Pranjal Shankhdhar <pranjal.ssh@gmail.com>
JieXin Liang | c04a8a820b | [fix] fix misusing of is_cuda (#7790) | 2025-07-05 04:02:14 -07:00
Cheng Wan | 6c903611ca | Fix incorrect spec_num_draft_tokens in draft_extend (#7757) | 2025-07-05 02:18:16 -07:00
Yineng Zhang | 77cfea689d | chore: upgrade sgl-kernel v0.2.3 (#7786) | 2025-07-05 01:55:55 -07:00
Cheng Wan | 8fc910db03 | DP Attention with Auto DeepEP Dispatch (#7222) | 2025-07-05 01:54:24 -07:00
Gang Chen | ef8a29c429 | Embedding parallel by attn_tp (#7623) | 2025-07-04 23:21:56 -07:00
Leng Yue | 8364608930 | add model: qwen2-audio (#7596) | 2025-07-04 21:13:10 -07:00
Cheng Wan | cb432f1770 | saving hidden_states.clone() (#7705) | 2025-07-04 20:07:42 -07:00
Ximingwang-09 | 1964c325de | [feat] Support EAGLE3 for Qwen (#7745) | 2025-07-04 19:50:28 -07:00
  Co-authored-by: 纬杭 <ximing.wxm@antgroup.com>
  Co-authored-by: zyksir <zyksir@outlook.com>
Caproni | af5647748a | [Fix] Alloc return type error (#7778) | 2025-07-04 19:00:40 -07:00
  Signed-off-by: Capronir <839972205@qq.com>
Zilin Zhu | af46f299f9 | [RL] add pause and continue generation for async rl training (#7419) | 2025-07-04 18:49:49 -07:00
Zilin Zhu | 16a6b1d83a | [RL] Add --nccl-port to prevent port conflict (#7418) | 2025-07-04 18:48:57 -07:00
Lianmin Zheng | 14229ccf8f | Move mem_fraction_static adjustment for multimodal models to server_args.py & Fix session control & Other cleanups (#7748) | 2025-07-04 16:33:33 -07:00