Commit Graph

3845 Commits

Author SHA1 Message Date
Lianmin Zheng
708f4ff490 Rename max_micro_batch_size -> pp_max_micro_batch_size (#11279) 2025-10-06 15:50:56 -07:00
Lianmin Zheng
e2daeb351c [Auto Sync] Update test_utils.py (20251006) (#11280) 2025-10-06 15:49:57 -07:00
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sehoon Kim <sehoon@x.ai>
Zhiyu
155cbb51f0 Enable native ModelOpt quantization support (1/3) (#7149) 2025-10-06 13:24:15 -07:00
Signed-off-by: Zhiyu Cheng <zhiyuc@nvidia.com>
Lianmin Zheng
eb30b888db Remove env var warnings for release (#11262) 2025-10-06 10:09:17 -07:00
sglang-bot
a4a3d82393 chore: bump SGLang version to 0.5.3 (#11263) 2025-10-06 20:07:02 +08:00
sglang-bot
0b13cbb7c9 chore: bump SGLang version to 0.5.3rc2 (#11259) 2025-10-06 01:12:10 -07:00
Co-authored-by: sglang-bot <sglang-bot@users.noreply.github.com>
fzyzcjy
efbc687c28 Support DeepSeek V3.2 Exp (#11061) 2025-10-06 00:24:15 -07:00
Co-authored-by: Stefan He <11166516+hebiao064@users.noreply.github.com>
Co-authored-by: Liangsheng Yin <95566987+hnyls2002@users.noreply.github.com>
Co-authored-by: Baizhou Zhang <56809903+fridge003@users.noreply.github.com>
Co-authored-by: DarkSharpness <76582120+darksharpness@users.noreply.github.com>
Co-authored-by: ZhengdQin <46387172+zhengdqin@users.noreply.github.com>
Co-authored-by: DarkSharpness <2040703891@qq.com>
Co-authored-by: hnyls2002 <lsyincs@gmail.com>
Co-authored-by: Zhengda Qin <zhengdqin@gmail.com>
Co-authored-by: Liangsheng Yin <hnyls2002@gmail.com>
Co-authored-by: HAI <hixiao@gmail.com>
Co-authored-by: Baizhou Zhang <sobereddiezhang@gmail.com>
Xinyuan Tong
0cd1996eae feat: add shortcut detection for multimodal templates in Jinja format (#11209) 2025-10-06 04:13:17 +00:00
Lianmin Zheng
f8924ad74b update sgl kernel version to 0.3.14.post1 (#11242) 2025-10-05 20:30:40 -07:00
fzyzcjy
2f80bd9f0e Bump torch_memory_saver 0.0.9rc2 (#11252) 2025-10-05 20:26:20 -07:00
Lianmin Zheng
366a603e95 Use cu128 for torch audio to fix some CI tests (#11251) 2025-10-05 19:52:32 -07:00
Bowen Bao
baee08601b [quantization] Enable aiter mxfp4 fused_moe for Quark (#10048) 2025-10-05 19:51:34 -07:00
Co-authored-by: HaiShaw <hixiao@gmail.com>
Bowen Bao
c7a104c12b [quantization] Fix scale remapping for mllama4 (#10042) 2025-10-05 19:51:15 -07:00
Co-authored-by: HAI <hixiao@gmail.com>
Mick
97d966a7f8 ci: make find_local_hf_snapshot_dir more robust (#11248) 2025-10-05 19:50:11 -07:00
sglang-bot
8e66d87f0a Fix spec_utils.py (#11247) 2025-10-05 19:01:11 -07:00
Lianmin Zheng
6b30e097ab [Auto Sync] Update io_struct.py (20251004) (#11206) 2025-10-05 18:06:07 -07:00
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: jzhou-xai <jzhou@x.ai>
Lianmin Zheng
d645ae90a3 Rename runner labels (#11228) 2025-10-05 18:05:41 -07:00
Xinyuan Tong
652c24a653 Update transformers package version to 4.57.0 (#11222) 2025-10-05 23:45:14 +00:00
Co-authored-by: yhyang201 <yhyang201@gmail.com>
Shangming Cai
c560410da7 Refactor and optimize mooncake CI (#11162) 2025-10-05 14:08:52 -07:00
Signed-off-by: Shangming Cai <csmthu@gmail.com>
Yuan Luo
590f2da052 [Feat] Support Torch Symm Mem AllReduce (#10571) 2025-10-05 13:55:19 -07:00
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
Vincent Zhong
36a6b8dbfc Update v1/responses to be more OpenAI-compatible. (#9624) 2025-10-05 18:47:46 +00:00
Liangsheng Yin
4cb5a5235e Tiny skip_sample adjust (#11225) 2025-10-05 23:41:04 +08:00
Ke Bao
31b49c0b51 EAGLE cache fix for HiCache (#11215) 2025-10-04 16:53:53 -07:00
Simo Lin
d736e0b65e [router] add grpc router pd mode for chat and generate (#11140) 2025-10-04 06:58:28 -07:00
Hank Han
666da3d59f [fix]enable flashmla when using draft model P/D attention select (#11012) 2025-10-04 20:59:34 +08:00
Alex Chi Z
d01b921482 fix sampling_seed handling when deterministic is enabled (#11096) 2025-10-03 20:41:46 -07:00
Signed-off-by: Alex Chi <iskyzh@gmail.com>
narutolhy
c61b9a1d01 fix self.enable_kv_cache_events (#11178) 2025-10-03 14:09:41 -07:00
Hank Han
3c3d6255d9 [fix]missing prefix_lens_cpu init when p/d disaggregation (#11196) 2025-10-03 13:39:59 -07:00
XSongQ
546914fa2d [Fix] Fix the bug of the calculation of base_gpu_id (dp offset) in data_parallel_controller.py (#10741) 2025-10-03 13:25:57 -07:00
Praneth Paruchuri
fad7ca73f8 model: support starcoder2 (#10609) 2025-10-04 00:11:19 +08:00
pansicheng
08af8ffb5c fix 3fs indices (#10855) 2025-10-04 00:06:38 +08:00
Shangming Cai
2c7f4ca2f2 Optimize debug log position of PD abort request (#11090) 2025-10-03 23:07:02 +08:00
Signed-off-by: Shangming Cai <csmthu@gmail.com>
shubham singhal
03def5e3b1 Fix [test]: Env:SGLANG_TORCH_PROFILER_DIR for pytest. (#10780) 2025-10-03 22:59:32 +08:00
ur4t
6ae3f05b33 Fix CUDA illegal memory access issues in speculative decoding (#10892) 2025-10-03 22:44:07 +08:00
fzyzcjy
fdc4e1e570 Tiny move files to utils folder (#11166) 2025-10-03 22:40:06 +08:00
Liangsheng Yin
04b86b3c5c [hot-fix] Fix CI break which caused by adding thinking_mode in eval (#11192) 2025-10-03 18:29:27 +08:00
hlu1
d6777a706d Add --thinking-mode to run_eval (#11189) 2025-10-03 16:49:39 +08:00
Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
Matt Nappo
8c57490210 [Feature] Option to save model weights to CPU when memory saver mode is enabled (#10873) 2025-10-03 16:48:19 +08:00
Co-authored-by: molocule <34072934+molocule@users.noreply.github.com>
Alex Chi Z
1a31229cd4 fix: radix cache memory accounting (#10637) 2025-10-02 22:47:33 -07:00
Signed-off-by: Alex Chi Z <iskyzh@gmail.com>
Liangsheng Yin
de89ef49da [CI]] Tee server logs to both file and stdout/stderr using PIPE (#11185) 2025-10-03 12:31:13 +08:00
jacky.cheng
b00a0c786f [Fix] Update to v0.1.5.post4 and refine HIP attention backend selection (#11161) 2025-10-02 21:19:30 -07:00
b8zhong
a2faf8940c [1/n] Enable DCA CUDA graph capture (#9537) 2025-10-03 11:30:00 +08:00
Vedant V Jhaveri
7e61737d3f [Generative Scores API] add performance tests to CICD (#10830) 2025-10-02 19:57:55 -07:00
Liangsheng Yin
3c699772c9 Introduce naming convention in io_struct and base sglang io classes. (#10133) 2025-10-03 10:55:13 +08:00
Dom Brown
e810077488 Allow use of TRTLLM_MHA backend for hybrid attention on Blackwell (#11138) 2025-10-02 16:04:58 -07:00
Chang Su
963175d5c0 [router][grpc] Support streaming for v1/chat/completions (#11179) 2025-10-02 14:35:16 -07:00
Lianmin Zheng
6a261aaca5 Minor fixes for server_args, parallel_state, and test_deterministic.py (#11159) 2025-10-02 12:12:49 -07:00
Liangsheng Yin
7ff740a6ce Remove dp balance metadata and minimul token balance. (#11170) 2025-10-03 01:48:15 +08:00
Liangsheng Yin
bfcd9b2433 [grpc] style fix for grpc compilation. (#11175) 2025-10-03 01:44:29 +08:00
Liangsheng Yin
458611de77 Unify forward output datastructure (#11124) 2025-10-03 00:28:57 +08:00