Commit Graph

  • 2ab97023e3 [router] add different policies for p node and d node (#8395) Simo Lin 2025-07-27 00:39:20 -07:00
  • 0bcc195f4e fix: minor fix TransportProxyTensor under tp (#8382) Mick 2025-07-27 15:38:49 +08:00
  • 91e3d1542e Update Cutlass in sgl-kernel to v4.1 (#8392) Baizhou Zhang 2025-07-27 00:36:15 -07:00
  • 85486b6f6f [NVIDIA] Add Flashinfer MoE blockscale fp8 backend (#8036) Kaixi Hou 2025-07-27 00:34:41 -07:00
  • e34cf6ad75 Fix bench script making input data on L2 cache (#7739) fzyzcjy 2025-07-27 15:30:24 +08:00
  • 62222bd27e Minor tool for comparison of benchmark results (#7974) fzyzcjy 2025-07-27 15:27:50 +08:00
  • ed0fdbf35b Tool to dump and compare internal activation tensors (#7976) fzyzcjy 2025-07-27 15:27:21 +08:00
  • b602f42354 Urgent Fix: intern-s1 chat-template matching (#8403) Xinyuan Tong 2025-07-27 00:22:31 -07:00
  • 426b74936a Add nvfp4 scaled mm benchmark. (#8401) Qi Yuhang 2025-07-27 14:18:04 +08:00
  • 528bd1ed85 HiCache, check before terminate prefetching (#8372) Zhiqiang Xie 2025-07-26 23:13:16 -07:00
  • 62a6b7c773 Add docker release flow for gb200 (#8394) kyleliang-nv 2025-07-26 21:25:07 -07:00
  • 761546315c Remove slot usage in code to be backward-compatible with python 3.9 (#8396) Lifu Huang 2025-07-26 21:24:22 -07:00
  • 5c705b1dce Add perf tests for LoRA (#8314) Lifu Huang 2025-07-26 14:55:22 -07:00
  • b7094a5ef1 model: support intern-s1 (#8350) RunningLeon 2025-07-27 04:48:51 +08:00
  • da0c026084 Tiny assert EPLB is used together with expert parallel (#8381) fzyzcjy 2025-07-26 18:20:39 +08:00
  • 3212c2ad3f vlm: optimize tensor transport (#6003) Mick 2025-07-26 17:41:01 +08:00
  • 534756749a chore: improvements on mm_utils (#7737) Mick 2025-07-26 17:38:56 +08:00
  • ce32bc2ba9 Extract update_weights from RL Engine to SGLang to keep simplicity and fix torch reduce (#8267) Stefan He 2025-07-26 02:00:59 -07:00
  • e236d8fee8 Save peak memory in logits processor (#8343) Cheng Wan 2025-07-26 01:46:42 -07:00
  • 4fa44d63c6 chore: improve mmmu benchmark (#7000) Mick 2025-07-26 16:19:45 +08:00
  • e6312d271d Update Dockerfile.gb200 to latest sglang (#8356) kyleliang-nv 2025-07-26 00:22:06 -07:00
  • 8af145b7dc Fix test_moe_fused_gate_combined sgl-kernel ci test (#8374) Ke Bao 2025-07-26 09:30:12 +08:00
  • 2272c2a5b5 chore: bump v0.4.9.post4 (#8305) Yineng Zhang 2025-07-25 17:12:47 -07:00
  • 3ec0b21229 [CI] Fix flaky threshold (#8370) Lianmin Zheng 2025-07-25 16:41:56 -07:00
  • 58c468f404 Fix FP4 MoE accuracy from missing routed_scaling_factor (#8333) Trevor Morris 2025-07-25 16:40:23 -07:00
  • f8ca2368b2 fix: kimi k2 xgrammar crash (#8367) Yineng Zhang 2025-07-25 15:44:01 -07:00
  • d8ee15643b [Feat] Add reasoning parser for Qwen/Qwen3-235B-A22B-Thinking-2507 (#8363) Chang Su 2025-07-25 14:59:42 -07:00
  • 7181ec8cfc fix: upgrade nccl version (#8359) Yineng Zhang 2025-07-25 14:59:02 -07:00
  • ed2e313eb6 Clean up server_args, triton cache manager (#8332) Lianmin Zheng 2025-07-25 14:14:51 -07:00
  • f8260f2539 [Bugfix][Feat] Add XML-ish grammar in EBNFComposer and fix misc bugs in Qwen3 detector (#8357) Chang Su 2025-07-25 12:03:16 -07:00
  • 12cb760a37 Add H20-3e fused MoE kernel tuning configs for Qwen3-Coder-480B-A35B-Instruct (#8344) Xu Wenqing 2025-07-26 01:58:12 +08:00
  • 1b9cea5ade [P/D] Support ipv6 in P/D scenario (#7858) Stepan Kargaltsev 2025-07-25 18:53:30 +03:00
  • 9045cc1eb8 [torch.compile bug] avoid biased_grouped_topk_impl func repeatedly triggering torch.compile in forward pass (#8353) Xiaoyu Zhang 2025-07-25 21:17:47 +08:00
  • 70e37b97bf chore: upgrade mooncake 0.3.5 (#8341) Shangming Cai 2025-07-25 16:17:26 +08:00
  • 15d2759174 [CPU] Add tutorial docs for SGL on CPU (#8000) Zaili Wang 2025-07-25 15:03:16 +08:00
  • af4b9bae95 [AMD] Add silu_and_mul, gelu_and_mul, gelu_tanh_and_mul, and gelu_quick kernels for AMD GPUs (#7135) Hubert Lu 2025-07-24 23:44:28 -07:00
  • 7ad6b766c5 fix: Fix failed functional tests https://github.com/meta-llama/llama-stack-evals (#8266) Ying Wang 2025-07-24 23:11:32 -07:00
  • c0fb25e949 DP Enhancement (#8280) Cheng Wan 2025-07-24 21:36:21 -07:00
  • 28d4d47280 [Feature] Integrate quick allreduce and select the best allreduce implementation (#6619) li haoyang 2025-07-25 11:48:42 +08:00
  • f4674df646 support idle batch for TBO (#8233) ZhichenJiang 2025-07-25 11:43:52 +08:00
  • d40846d456 breakdown kernel update (#8334) Zhiqiang Xie 2025-07-24 17:33:17 -07:00
  • 145482f422 HiCache Storage TP Refinement (#8307) Zhiqiang Xie 2025-07-24 17:31:47 -07:00
  • 39fe1e880d [router] add request format unit test (#8300) Simo Lin 2025-07-24 14:30:37 -07:00
  • 33c4b4d04e [router] add streaming unit test (#8299) Simo Lin 2025-07-24 14:30:27 -07:00
  • 8d1c5b948e chore: upgrade flashinfer v0.2.9rc1 (#8301) Swipe4057 2025-07-25 01:29:56 +04:00
  • a167fd0bcb [code style] Clean dead triton kernel code in fused_moe and useless vllm_ops import (#8310) Xiaoyu Zhang 2025-07-24 14:38:30 +08:00
  • 2f86f3ad62 [router] add endpoint unit test (#8298) Simo Lin 2025-07-23 23:26:44 -07:00
  • bfb118c01e fix bug when eos_ids==0 (#8315) Minho Ryu 2025-07-24 15:18:47 +09:00
  • f6e07f2796 [router] fix pd model completion request (#8303) Simo Lin 2025-07-23 23:18:29 -07:00
  • 5dd0f870ab [bug] fix pd completion protocol for batching support (#8317) Simo Lin 2025-07-23 23:18:17 -07:00
  • f7e102d56a Pin the version of petit kernel to fix the APIs (#8235) Haohui Mai 2025-07-23 17:57:20 -07:00
  • 0e5fa67773 [AMD] Pull latest image for AMD CI (#8070) michael-amd 2025-07-23 17:56:14 -07:00
  • 624a3b8d1f Fix incomplete tool call capture issue in streaming response of DeepSeek-V3 when enable MTP (#7562) xianzhiT 2025-07-24 08:40:23 +08:00
  • 01079e174f feat(function call): complete utility method for KimiK2Detector and enhance documentation (#8043) Chang Su 2025-07-23 17:37:31 -07:00
  • 0e7a5b2694 fix: prevent crashes due to logit bias dimension mismatch (#7685) J 2025-07-23 15:30:55 -07:00
  • 4953f4ca9a chore: upgrade sgl-kernel 0.2.7 (#8304) Yineng Zhang 2025-07-23 15:07:27 -07:00
  • 38000a5f44 Fix gemma3n with hybrid swa (#8240) Xinyuan Tong 2025-07-23 13:29:18 -07:00
  • 70251e935e fix: match chat-template for internvl3 (#8262) Xinyuan Tong 2025-07-23 13:29:03 -07:00
  • c87d4fec99 Fix the issue of incorrect finish reason in final stream response chunk returned during tool call (#7708) xianzhiT 2025-07-24 04:28:53 +08:00
  • a99801e075 [Performance][PD Disaggregation] optimize TokenToKVPoolAllocator by sorting free pages (#8133) YiXR 2025-07-24 04:28:12 +08:00
  • 4c605235aa fix: workaround for deepgemm warmup issue (#8302) Yineng Zhang 2025-07-23 12:01:51 -07:00
  • 6f8f4aeea4 [router] add common ut infra to mock worker and app (#8295) Simo Lin 2025-07-23 10:07:51 -07:00
  • 0c8dab9e67 [sgl-kernel] Opt per_token_quant_fp8 with warp reduce (#8130) Yuan Luo 2025-07-23 21:22:59 +08:00
  • f39037fffb HiCache Fix (#8288) Zhiqiang Xie 2025-07-23 01:51:32 -07:00
  • ce86e201df bug fix and tag (#8282) Zhiqiang Xie 2025-07-23 01:50:31 -07:00
  • b43263307f Hicache IO kernel refactoring (#8264) Zhiqiang Xie 2025-07-23 01:49:03 -07:00
  • 8abd3e77fe Introduce Stable LoRA ID System for Overlapped Updates and Prefix Caching (#8261) Lifu Huang 2025-07-23 00:32:16 -07:00
  • e885bfdc6a Fix sgl-kernel ci test (#8284) Ke Bao 2025-07-23 14:01:47 +08:00
  • e2d66f60c8 Skip llama4 vision module loading when multimodal disabled (#8272) Ke Bao 2025-07-23 12:41:25 +08:00
  • 01c000043c chore: bump v0.4.9.post3 (#8265) Yineng Zhang 2025-07-22 15:55:48 -07:00
  • 0dfe2491ac Preliminary Support for Qwen3XMLDetector (#8260) yhyang201 2025-07-23 06:49:38 +08:00
  • ff45ab7a5f [Benchmark] add disable-auto-run param for hicache/bench_multiturn (#7822) zhongwei 2025-07-23 05:02:40 +08:00
  • 0f8b538614 [fix] benchmark : routed_scaling_factor is None (#8059) Peter Pan 2025-07-22 23:55:35 +08:00
  • c33499a67b fix: sgl-router remove dead code (#8257) Rui Chen 2025-07-22 23:41:23 +08:00
  • e50109f2ed [AMD] Remove vllm's scaled_fp8_quant and moe_sum when SGLANG_USE_AITER=1 (#7484) Hubert Lu 2025-07-21 17:33:19 -07:00
  • 69adc4f81c fix: retrieve mm token by modality, raise error if none (#8221) Xinyuan Tong 2025-07-21 17:06:35 -07:00
  • 114837854f docs: update 2025 h2 roadmap (#8237) Yineng Zhang 2025-07-21 14:02:48 -07:00
  • 7b68d27111 [Feature] Add a test for Layer-wise Prefill (#8231) Xiaoze Fan 2025-07-21 22:06:15 +08:00
  • 74f59ae555 chore: upgrade sgl-kernel 0.2.6.post1 (#8202) Yineng Zhang 2025-07-21 02:10:24 -07:00
  • 6936be3221 Remove router gemm output dtype conversion (#8204) Ke Bao 2025-07-21 15:37:00 +08:00
  • 9b5de6cb06 [router] upgrade router version to 0.1.6 (#8209) Simo Lin 2025-07-20 23:13:20 -07:00
  • 5c8365a051 [router] add ut for pd router (#8208) Simo Lin 2025-07-20 23:12:52 -07:00
  • 8430bfe3e9 [Refactor] simplify multimodal data processing (#8107) Xinyuan Tong 2025-07-20 21:43:09 -07:00
  • c9e8613c97 Apply fused sorted token ids padding (#8193) Ke Bao 2025-07-21 11:19:48 +08:00
  • 429bb0efa2 chore: bump sgl-kernel v0.2.6.post1 (#8200) Yineng Zhang 2025-07-20 19:50:28 -07:00
  • 7eebd44047 [fix] fix modelopt fp4 on b200 (#8195) JieXin Liang 2025-07-21 08:39:57 +08:00
  • 93d124ef5a [feature] enable NPU CI (#7935) ronnie_zheng 2025-07-20 23:12:42 +03:00
  • 1fc455e8b6 [router] add ut for pd request, metrics and config (#8184) Simo Lin 2025-07-20 10:53:42 -07:00
  • 465968b2e3 Fix dtype error in CI (#8197) Ke Bao 2025-07-21 00:27:55 +08:00
  • 750838adc4 fix: fix the bug of loading Internvl3 (#8067) GuoYipin 2025-07-20 22:22:54 +08:00
  • 99aefa037e Fix eagle3 cuda graph (#8163) Jay Zhou 2025-07-20 00:28:06 -07:00
  • bbcfbc1a02 feat: add h200 tp 16 kimi k2 moe config (#8183) Qiaolin Yu 2025-07-20 02:30:08 -04:00
  • 83c104b188 Feat: Support for Persimmon Model (#7983) Praneth Paruchuri 2025-07-20 11:37:47 +05:30
  • 2db6719cc5 feat: update nccl 2.27.6 (#8182) Yineng Zhang 2025-07-19 22:55:45 -07:00
  • 55381a46ac Revert "[Feature] Simple Improve Health Check Mechanism for Production-Grade Stability" (#8181) Lianmin Zheng 2025-07-19 22:41:30 -07:00
  • a589a07167 fix moe gate dtype, fix tbo, fix fake dispatch (#7825) Atream 2025-07-20 13:13:46 +08:00
  • f62d75b6a1 feat: add b200 tp 16 kimi k2 moe config (#8178) Yineng Zhang 2025-07-19 20:04:12 -07:00
  • 0f9b11e310 feat: add h200 tp 16 kimi k2 moe config (#8176) Yineng Zhang 2025-07-19 20:04:02 -07:00
  • 877e35d775 Add get_hidden_dim to qwen3.py for correct lora (#7312) Pavel Logachev 2025-07-20 05:31:16 +03:00
  • cbdfb77123 Enable FlashInfer support encoder models and add head_dim padding workaround (#6230) Clay 2025-07-20 10:30:16 +08:00