Author | Commit | Message | Date
Quanfeng Li | ef32677444 | Fix positional argument (#7093) | 2025-06-11 18:31:13 -07:00
Lifu Huang | 021f76e4f4 | [Perf] Refactor LoRAManager to eliminate stream syncs and redundant computations (#6994) | 2025-06-11 16:18:57 -07:00
Faradawn Yang | 777688b892 | [feat] Emit fixed-size KV block events (#6824) | 2025-06-11 13:07:58 -07:00
Neo | 0ca594eda9 | [Fix] Remove redundant code in logits_processor.py (#7079) | 2025-06-11 11:49:30 -07:00
Zijian | 31d6dee5c4 | Support VILA models (#6106) | 2025-06-11 11:47:25 -07:00
sogalin | 02543b545c | Fix misuse of "_is_cuda" (#7091) | 2025-06-11 11:21:31 -07:00
Baizhou Zhang | 25a6a9aa22 | Fix circular import in test_prefix_chunk_info.py (#7097) | 2025-06-11 10:57:45 -07:00
Mick | 83d87685c5 | vlm: adapt InternVL to VisionAttention (#6870) | 2025-06-11 01:16:04 -07:00
Baizhou Zhang | 2a5f0100e0 | Fix GGUF and add back test_gguf.py (#7067) | 2025-06-10 21:07:20 -07:00
Lianmin Zheng | dbdf76ca98 | Clean up docs for server args and sampling parameters (generated by grok) (#7076) | 2025-06-10 19:55:42 -07:00
Lianmin Zheng | 6b12d6a8d5 | Simplify the heuristics for setting --mem-fraction-static (#7054) | 2025-06-10 19:01:39 -07:00
Yudi Xue | 14c18d25df | Frontend language separate reasoning support (#6031) | 2025-06-10 17:11:29 -07:00
Brayden Zhong | ca9291181d | [Feature] Add logit bias (#6579); Co-authored-by: Cinjon Resnick <cinjon.resnick@gmail.com> | 2025-06-10 15:39:25 -07:00
kyle-pena-kuzco | b56de8f943 | OpenAI API hidden states (#6716) | 2025-06-10 14:37:29 -07:00
Xu Wenqing | a0e4d4eb53 | Fix missing tool call ID when tool call index > 0 in streaming tool call output (#7049); Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com> | 2025-06-10 12:56:43 -07:00
kk | 8ea7df6114 | [WA] Fix NaN output data in CI test test_moe_eval_accuracy_large.py (#7021); Co-authored-by: wunhuang <wunhuang@amd.com>, HAI <hixiao@gmail.com> | 2025-06-10 16:08:10 +00:00
Lianmin Zheng | 4a102a2b02 | Minor style fix in cuda_graph_runner.py (#7053) | 2025-06-10 06:32:41 -07:00
Lianmin Zheng | 6406408a70 | Clean up server_args.py (#7037) | 2025-06-10 05:34:29 -07:00
Lianmin Zheng | 019851d099 | Fix EAGLE on AMD (#7051) | 2025-06-10 05:22:40 -07:00
Lianmin Zheng | 2dae104dca | Minor cleanup of fa3 backend (#6999) | 2025-06-10 03:58:44 -07:00
Yineng Zhang | 4f723edd3b | chore: bump v0.4.7 (#7038) | 2025-06-10 01:56:20 -07:00
yudian0504 | 81372f3bef | Fix fused_moe triton configs (#7029) | 2025-06-09 23:23:03 -07:00
Byron Hsu | c2b16795b5 | Add decode req pool (#6980) | 2025-06-09 21:23:36 -07:00
fzyzcjy | f6ebba537a | Support both approximate and exact expert distribution collection (#6964) | 2025-06-09 20:56:17 -07:00
Baizhou Zhang | 6716b41786 | Update default settings for Blackwell (#7023) | 2025-06-09 20:37:47 -07:00
Lianmin Zheng | dc0705a504 | Simplify prepare_extend_after_decode (#6987) | 2025-06-09 16:39:21 -07:00
Wenxuan Tan | a968c888c0 | Fix torchvision version for Blackwell (#7015) | 2025-06-09 15:50:19 -07:00
Baizhou Zhang | a979daac3b | Fall back to a lower Triton version for missing fused MoE configs (#7013) | 2025-06-09 15:41:03 -07:00
ishandhanani | f1569876d5 | feat: add direct routing strategy to DP worker (#6884) | 2025-06-09 11:44:05 -07:00
fzyzcjy | e58423b2b9 | Fix cutlass MLA yielding almost zero accuracy (#6998) | 2025-06-09 10:16:29 -07:00
Yineng Zhang | 56ccd3c22c | chore: upgrade flashinfer v0.2.6.post1 jit (#6958); Co-authored-by: alcanderian <alcanderian@gmail.com>, Qiaolin Yu <qy254@cornell.edu>, Baizhou Zhang <sobereddiezhang@gmail.com>, Mick <mickjagger19@icloud.com>, ispobock <ispobaoke@gmail.com> | 2025-06-09 09:22:39 -07:00
Yueyang Pan | 98c00a2df1 | Fix torch profiler bugs for bench_offline_throughput.py (#6557) | 2025-06-09 20:33:41 +08:00
Pan Lyu | 451ffe74d9 | Support Qwen3 embedding (#6990) | 2025-06-09 01:32:49 -07:00
Lifu Huang | b1e5a33ae3 | Eliminate stream sync to speed up LoRA batch init (#6960) | 2025-06-09 00:22:45 -07:00
Lianmin Zheng | 9d5fa68b90 | Use torch.compile to fuse flash attention decode metadata preparation (#6973) | 2025-06-08 23:05:40 -07:00
fzyzcjy | de1350ea20 | Minor: remove one kernel for DeepSeek (#6977) | 2025-06-08 17:41:35 -07:00
fzyzcjy | 86fe943bc3 | Fix OOM caused by expert distribution dumping (#6967) | 2025-06-08 17:41:14 -07:00
Lianmin Zheng | 0c1f03a23d | Sync cuda graph runners (#6976) | 2025-06-08 16:12:25 -07:00
Xiaoyu Zhang | 3712abfaf9 | Fuse routed scaling factor in DeepSeek (#6970) | 2025-06-08 15:24:24 -07:00
Baizhou Zhang | 971a0dfa32 | Extend cuda graph capture bs for B200 (#6937) | 2025-06-08 05:13:22 -07:00
fzyzcjy | 2fc1299562 | Remove unnecessary kernels of num_token_non_padded (#6965) | 2025-06-08 05:09:17 -07:00
Lianmin Zheng | 20d3ad3b58 | Fix CI and triton MoE configs (#6974) | 2025-06-08 05:06:46 -07:00
Xiaoyu Zhang | fa3592cfeb | Rebase H20 fused_moe config (#6966) | 2025-06-08 05:01:34 -07:00
Lianmin Zheng | 608668e143 | Slightly improve the sampler to skip unnecessary steps (#6956) | 2025-06-08 03:18:54 -07:00
Yineng Zhang | 1fb76ebb93 | Revert "Fuse routed scaling factor in topk_reduce kernel (#6220)" (#6968) | 2025-06-07 21:02:49 -07:00
Pavani Majety | c2c4f57f63 | [DeepseekR1-FP4] Add support for nvidia/DeepSeekR1-FP4 model (#6853); Signed-off-by: Pavani Majety <pmajety@nvidia.com> | 2025-06-07 17:24:35 -07:00
Yineng Zhang | 23881fa60c | chore: upgrade sgl-kernel v0.1.6.post1 (#6957) | 2025-06-07 17:18:55 -07:00
Elfie Guo | 3e56f557fd | Add a CUDA kernel fusing mapping and weighted sum for MoE (#6916); Co-authored-by: Elfie Guo <elfiegxf@gmail.com> | 2025-06-07 15:24:39 -07:00
Xu Wenqing | 62fec60d81 | Add H20 fused MoE kernel tuning configs for DeepSeek-R1/V3 (#6885); Signed-off-by: Xu Wenqing <xuwq1993@qq.com> | 2025-06-07 15:17:34 -07:00
JieXin Liang | e7759778e5 | [misc] add is_cpu() (#6950) | 2025-06-07 15:13:45 -07:00