Commit Graph

3829 Commits

Author SHA1 Message Date
Lianmin Zheng
d645ae90a3 Rename runner labels (#11228) 2025-10-05 18:05:41 -07:00
Xinyuan Tong
652c24a653 Update transformers package version to 4.57.0 (#11222)
Co-authored-by: yhyang201 <yhyang201@gmail.com>
2025-10-05 23:45:14 +00:00
Shangming Cai
c560410da7 Refactor and optimize mooncake CI (#11162)
Signed-off-by: Shangming Cai <csmthu@gmail.com>
2025-10-05 14:08:52 -07:00
Yuan Luo
590f2da052 [Feat] Support Torch Symm Mem AllReduce (#10571)
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
2025-10-05 13:55:19 -07:00
Vincent Zhong
36a6b8dbfc Update v1/responses to be more OpenAI-compatible. (#9624) 2025-10-05 18:47:46 +00:00
Liangsheng Yin
4cb5a5235e Tiny skip_sample adjust (#11225) 2025-10-05 23:41:04 +08:00
Ke Bao
31b49c0b51 EAGLE cache fix for HiCache (#11215) 2025-10-04 16:53:53 -07:00
Simo Lin
d736e0b65e [router] add grpc router pd mode for chat and generate (#11140) 2025-10-04 06:58:28 -07:00
Hank Han
666da3d59f [fix] enable flashmla when using draft model P/D attention select (#11012) 2025-10-04 20:59:34 +08:00
Alex Chi Z
d01b921482 fix sampling_seed handling when deterministic is enabled (#11096)
Signed-off-by: Alex Chi <iskyzh@gmail.com>
2025-10-03 20:41:46 -07:00
narutolhy
c61b9a1d01 fix self.enable_kv_cache_events (#11178) 2025-10-03 14:09:41 -07:00
Hank Han
3c3d6255d9 [fix] missing prefix_lens_cpu init when p/d disaggregation (#11196) 2025-10-03 13:39:59 -07:00
XSongQ
546914fa2d [Fix] Fix the bug of the calculation of base_gpu_id (dp offset) in data_parallel_controller.py (#10741) 2025-10-03 13:25:57 -07:00
Praneth Paruchuri
fad7ca73f8 model: support starcoder2 (#10609) 2025-10-04 00:11:19 +08:00
pansicheng
08af8ffb5c fix 3fs indices (#10855) 2025-10-04 00:06:38 +08:00
Shangming Cai
2c7f4ca2f2 Optimize debug log position of PD abort request (#11090)
Signed-off-by: Shangming Cai <csmthu@gmail.com>
2025-10-03 23:07:02 +08:00
shubham singhal
03def5e3b1 Fix [test]: Env:SGLANG_TORCH_PROFILER_DIR for pytest. (#10780) 2025-10-03 22:59:32 +08:00
ur4t
6ae3f05b33 Fix CUDA illegal memory access issues in speculative decoding (#10892) 2025-10-03 22:44:07 +08:00
fzyzcjy
fdc4e1e570 Tiny move files to utils folder (#11166) 2025-10-03 22:40:06 +08:00
Liangsheng Yin
04b86b3c5c [hot-fix] Fix CI break caused by adding thinking_mode in eval (#11192) 2025-10-03 18:29:27 +08:00
hlu1
d6777a706d Add --thinking-mode to run_eval (#11189)
Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
2025-10-03 16:49:39 +08:00
Matt Nappo
8c57490210 [Feature] Option to save model weights to CPU when memory saver mode is enabled (#10873)
Co-authored-by: molocule <34072934+molocule@users.noreply.github.com>
2025-10-03 16:48:19 +08:00
Alex Chi Z
1a31229cd4 fix: radix cache memory accounting (#10637)
Signed-off-by: Alex Chi Z <iskyzh@gmail.com>
2025-10-02 22:47:33 -07:00
Liangsheng Yin
de89ef49da [CI] Tee server logs to both file and stdout/stderr using PIPE (#11185) 2025-10-03 12:31:13 +08:00
jacky.cheng
b00a0c786f [Fix] Update to v0.1.5.post4 and refine HIP attention backend selection (#11161) 2025-10-02 21:19:30 -07:00
b8zhong
a2faf8940c [1/n] Enable DCA CUDA graph capture (#9537) 2025-10-03 11:30:00 +08:00
Vedant V Jhaveri
7e61737d3f [Generative Scores API] add performance tests to CICD (#10830) 2025-10-02 19:57:55 -07:00
Liangsheng Yin
3c699772c9 Introduce naming convention in io_struct and base sglang io classes. (#10133) 2025-10-03 10:55:13 +08:00
Dom Brown
e810077488 Allow use of TRTLLM_MHA backend for hybrid attention on Blackwell (#11138) 2025-10-02 16:04:58 -07:00
Chang Su
963175d5c0 [router][grpc] Support streaming for v1/chat/completions (#11179) 2025-10-02 14:35:16 -07:00
Lianmin Zheng
6a261aaca5 Minor fixes for server_args, parallel_state, and test_deterministic.py (#11159) 2025-10-02 12:12:49 -07:00
Liangsheng Yin
7ff740a6ce Remove dp balance metadata and minimal token balance. (#11170) 2025-10-03 01:48:15 +08:00
Liangsheng Yin
bfcd9b2433 [grpc] style fix for grpc compilation. (#11175) 2025-10-03 01:44:29 +08:00
Liangsheng Yin
458611de77 Unify forward output datastructure (#11124) 2025-10-03 00:28:57 +08:00
Chang Su
3511b37099 [proto] Add script to compile python protos (#11171) 2025-10-02 08:45:51 -07:00
fzyzcjy
afcd3e1089 Tiny remove duplicated code (#11164) 2025-10-02 21:56:31 +08:00
fzyzcjy
12d6818380 Tiny fix ep_gather behavior different in CI (#11130) 2025-10-02 21:55:53 +08:00
fzyzcjy
b65db0287b Tiny cleanup deepseek_v2.py (#11163) 2025-10-02 21:54:52 +08:00
b8zhong
948278f173 fix cpp JIT compilation issue of ngram speculative decoding (#10837) 2025-10-02 21:05:01 +08:00
Liangsheng Yin
7d00479950 Clean up ascend allocator (#11152) 2025-10-02 20:34:26 +08:00
ilyasch2
083629c235 [model] Add mamba2 and Falcon-H1 support. (#10988)
Co-authored-by: Younes Belkada <younes.belkada@tii.ae>
Co-authored-by: Younes B <49240599+younesbelkada@users.noreply.github.com>
2025-10-02 19:15:36 +08:00
fzyzcjy
5e786cca3a Support single batch overlap (#10422) 2025-10-02 18:04:36 +08:00
fzyzcjy
0b9dfba787 Support dispatch low latency (#10263)
Co-authored-by: Kaixi Hou <4001424+kaixih@users.noreply.github.com>
2025-10-02 18:02:19 +08:00
Liangsheng Yin
6a29003410 Remove unused pack .item() in paged allocator. (#11156) 2025-10-02 18:01:21 +08:00
fzyzcjy
2ac453b07f Tiny detect slow ranks (#10508) 2025-10-02 18:00:33 +08:00
fzyzcjy
f35def8652 Fuse quantize and rope in trtllm_mla MTP (#10779) 2025-10-02 17:59:37 +08:00
fzyzcjy
d61615fe93 Tiny fix missing alt stream in nextn layer (#10768) 2025-10-02 17:58:23 +08:00
fzyzcjy
b1ccaf01cd Tiny improve dumper (#11132) 2025-10-02 17:55:01 +08:00
Lianmin Zheng
097725bb66 Clean up parallel_state.py (#11148) 2025-10-02 01:09:13 -07:00
fzyzcjy
44b1fbe258 Fix DeepSeek chunked prefill memory issue (#11149) 2025-10-01 23:56:59 -07:00
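Single-line rows above follow a regular "SHA message (#PR) date" shape. For anyone post-processing a dump like this, here is a minimal parsing sketch; the regex, field names, and `parse_row` helper are my own illustration, not part of any repository tooling, and it only handles rows whose date sits on the same line as the SHA (trailer-split entries would need extra handling):

```python
import re

# One commit row as it appears in the listing: abbreviated SHA, subject
# line, optional "(#PR)" reference, then a date with a UTC offset.
ROW = re.compile(
    r"^(?P<sha>[0-9a-f]{10})\s+"            # abbreviated commit SHA
    r"(?P<message>.*?)"                      # commit subject (non-greedy)
    r"(?:\s+\(#(?P<pr>\d+)\))?\s+"          # optional pull-request number
    r"(?P<date>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} [+-]\d{2}:?\d{2})$"
)

def parse_row(line: str) -> dict:
    """Split one single-line commit row into sha / message / pr / date."""
    m = ROW.match(line.strip())
    if m is None:
        raise ValueError(f"unrecognized row: {line!r}")
    return m.groupdict()

row = parse_row(
    "d645ae90a3 Rename runner labels (#11228) 2025-10-05 18:05:41 -07:00"
)
print(row["sha"], row["pr"], row["message"])
# d645ae90a3 11228 Rename runner labels
```

The non-greedy `message` group lets the engine backtrack so a trailing `(#11228)` is captured as the PR number rather than swallowed into the subject line.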