c807cd7c75 | 2025-08-24 01:05:00 -07:00 | Yineng Zhang | chore: update configurer (#9557)
327f7b7c87 | 2025-08-23 19:49:24 -07:00 | Vincent Zhong | fix(grok): remove duplicate replicate_lm_head configuration (#9549)
80425e59bb | 2025-08-23 16:54:58 -07:00 | Xiaotong Jiang | [doc] deepseekv31 support (#9544)
af9d4eb038 | 2025-08-23 16:51:16 -07:00 | Mingyi | [readme] Include additional resources for the SGLang x AMD SF Meetup event (#9547)
fb107cfd75 | 2025-08-23 16:38:30 -07:00 | gongwei-130 | feat: allow use local branch to build image (#9546)
97a38ee85b | 2025-08-23 07:09:26 -07:00 | Lianmin Zheng | Release 0.5.1 (#9533)
86d10d220f | 2025-08-23 05:40:18 -07:00 | Lianmin Zheng | Update grok.py and tiktoken tokenizer (#9532)
83871aa12d | 2025-08-23 02:08:32 -07:00 | hzh0425 | feat(hicache): Supports 3fs-hicache compatibility with dp-attention (#9372)
b1b3f0b38f | 2025-08-23 02:07:31 -07:00 | fzyzcjy | Partially unify triton per token group quant kernels (#9485)
34e5e11f0f | 2025-08-23 02:07:15 -07:00 | fzyzcjy | Tiny make device_loading_context more static (#9478)
2600fc0d47 | 2025-08-23 02:06:46 -07:00 | fzyzcjy | Overlapped weight offload (#8034)
ccd3fb946e | 2025-08-23 01:48:40 -07:00 | hlu1 | [fix] Fix mxfp4 triton MoE tp bug (#9473)
    Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
c9dd70fbde | 2025-08-23 01:46:56 -07:00 | Chang Su | tool-call(dsv3): Improve deepseek-v3 chat template and tool_choice = required (#9525)
6b2b8bf0e1 | 2025-08-23 01:33:21 -07:00 | Yineng Zhang | fix: blackwell dsv3 fp8 issue temporary solution (#9530)
4edbe0d534 | 2025-08-23 15:40:15 +08:00 | yuxingcyx | [benchmark] Add benchmark scripts for ceval and boolq (#8946)
    Co-authored-by: chenyuxing <2818499974@qq.com>
    Co-authored-by: hanqing <huang010706@126.com>
    Co-authored-by: Muggle <62579327+trawolf@users.noreply.github.com>
    Co-authored-by: ronnie_zheng <zl19940307@163.com>
0374304a2c | 2025-08-23 15:38:40 +08:00 | fzyzcjy | Add enable_flashinfer_mxfp4_bf16_moe for higher precision and slower moe backend (#9004)
127d4b0d5e | 2025-08-23 13:43:09 +08:00 | Chanh Nguyen | Support GC Freezing to improve latency & throughput (#9241)
    Co-authored-by: Chanh Nguyen <cnguyen@linkedin.com>
    Co-authored-by: Liangsheng Yin <hnyls2002@gmail.com>
7e880286b5 | 2025-08-22 20:06:13 -07:00 | Moein Khazraee | Add support for extensions of interface and pre-registrations to NIXL HiCache (#9211)
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
446c8e4cdb | 2025-08-22 14:19:45 -07:00 | Bruce-x-1997 | [router] ignore client error when record failure in pd_router (#9503)
    Co-authored-by: bruce.xu <bruce.xu@gmicloud.ai>
5ef545e678 | 2025-08-22 14:18:47 -07:00 | Keyang Ru | [router] Move all protocols to spec.rs file (#9519)
c4500233ff | 2025-08-22 13:14:42 -07:00 | sogalin | Add Qwen3-30B-A3B-Thinking-2507 support on AMD GPUs. (#9456)
f445a1d9a3 | 2025-08-22 13:13:45 -07:00 | Hubert Lu | [AMD] Fix Llama 4 FP8 accuracy issues on MI300X (#7699)
e5638573c1 | 2025-08-22 12:19:45 -07:00 | Kaixi Hou | [NVIDA] [1/N] Nvfp4 Masked Gemm: Add quant op for the flashinfer grouped gemm (#9200)
f556ac8bd8 | 2025-08-22 12:13:04 -07:00 | Simo Lin | [router] add json tool parser (#9516)
110a65989b | 2025-08-22 11:14:43 -07:00 | datdo-msft | [MTP] Force greedy sampling on AMD (#9127)
49f9d02538 | 2025-08-22 09:52:33 -07:00 | Simo Lin | [router] tokenizer arch doc (#9513)
0f587e80d3 | 2025-08-22 23:25:15 +08:00 | Wenxuan Tan | Use Tensor Core Decode when gqa group size >= 4 (#8624)
    Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
6078d5fcc0 | 2025-08-22 18:03:51 +08:00 | huangtingwei | [HiCacheStorage] backup optimization for MLA model (#8865)
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
70cf4abccc | 2025-08-22 17:56:38 +08:00 | pansicheng | 3fs zerocopy (#9109)
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
cebf45994b | 2025-08-22 17:49:52 +08:00 | Xuchun Shang | [bugfix] Make --enable-hierarchical-cache and --disable-radix-cache mutually exclusive (#9452)
    Signed-off-by: Xuchun Shang <xuchun.shang@linux.alibaba.com>
9c0c1e30b2 | 2025-08-22 02:05:02 -07:00 | Qiaolin Yu | Disable torch.compile for get_last_loc_large_page_size_large_top_k (#9507)
    Co-authored-by: ispobock <ispobaoke@gmail.com>
a1f011d09a | 2025-08-22 01:08:41 -07:00 | Mick | minor: determine mm attn backend based on platforms (#9303)
9ec314c6ac | 2025-08-21 23:53:35 -07:00 | Qiaolin Yu | Support speculative decoding in the trtllm_mha attention backend (#9331)
    Co-authored-by: ispobock <ispobaoke@gmail.com>
fedfe91c1a | 2025-08-21 23:51:52 -07:00 | Xinyuan Tong | [Docs] Add doc and quick demo for gpt-oss responses api & buildin tools (#9497)
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
988accbc1e | 2025-08-21 23:48:40 -07:00 | kk | Update docker file for supporting PD-Disaggregation on MI300x (#9494)
    Co-authored-by: wunhuang <wunhuang@amd.com>
    Co-authored-by: Colin Wang <kangwang@amd.com>
b6b2287e4b | 2025-08-21 23:02:08 -07:00 | Yineng Zhang | chore: bump sgl-kernel v0.3.6.post2 (#9475)
243e745d07 | 2025-08-21 23:01:36 -07:00 | Elfie Guo | Add trtllm_mla and cutlass_mla for ragged fmha for chunked prefill (#9480)
61a0e600df | 2025-08-21 23:01:08 -07:00 | timmy-feng | torch.compile() mrope (#9487)
0f8cee8cd3 | 2025-08-21 22:48:29 -07:00 | Simo Lin | [router] fix router load guard tracking for streaming (#9491)
816c4c8572 | 2025-08-21 22:08:56 -07:00 | Chang Su | [router] add tool parser base structure and partial json parser (#9482)
13ec8d427e | 2025-08-21 22:08:28 -07:00 | Xinyuan Tong | [Docs] Update reasoning parser doc & fix outdated link (#9492)
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
05bd789791 | 2025-08-21 20:04:12 -07:00 | Chayenne | [docs]: fix reasoning context in docs (#9483)
5fd311d33e | 2025-08-21 19:23:29 -07:00 | kousakawang | [code clean] add H20 cutlass groupGemm default config (#9333)
    Co-authored-by: wanghanpei <wanghanpei@bytedance.com>
53e2cd464f | 2025-08-21 18:35:24 -07:00 | Chang Su | [router] remove all tokenizer metrics for performance (#9474)
9708d353b7 | 2025-08-21 18:19:44 -07:00 | Yongfei Xu | Support MHA with chunked prefix cache for flashinfer/flashmla backend, support page size > 1 for MHA chunked prefix (#8616)
    Co-authored-by: xuyongfei.xyf <xuyongfei.xyf@antgroup.com>
704ced1b2e | 2025-08-21 18:16:35 -07:00 | Hubert Lu | [AMD] Remove the deprecated C10_WARP_SIZE (#9356)
3cc3d9b950 | 2025-08-21 18:15:06 -07:00 | Pavani Majety | Add Support for Page Size greater than 1 for Flashinfer MLA Backend (#8593)
    Signed-off-by: Pavani Majety <pmajety@nvidia.com>
0b3a5b1151 | 2025-08-21 17:25:30 -07:00 | Xinyuan Tong | Update reasoning parser doc (#9468)
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
6c855db82c | 2025-08-21 17:24:25 -07:00 | Xinyuan Tong | Revert "bugfix: Fix output_ids extraction in detokenizer_manager" (#9467)
0f9318f7d0 | 2025-08-21 17:12:12 -07:00 | Yineng Zhang | feat: update auto_choose_speculative_params (#9470)