Commit Graph

5680 Commits

Author SHA1 Message Date
jiapingW
a0010bf4e8 fix qwen2 eagle3 runtime error (#10517) 2025-10-04 00:19:52 +08:00
DiweiSun
307fc060e8 fix xeon ci check (#10838) 2025-10-04 00:17:36 +08:00
vikram singh shekhawat
586e81a28a [Test] Initialize mem_fraction_static in setUpClass to fix pytest VLM test crashes. (#10859)
Co-authored-by: svc_repro_tool <svc_repro_tool@habana.ai>
2025-10-04 00:14:48 +08:00
Praneth Paruchuri
fad7ca73f8 model: support starcoder2 (#10609) 2025-10-04 00:11:19 +08:00
pansicheng
08af8ffb5c fix 3fs indices (#10855) 2025-10-04 00:06:38 +08:00
Shangming Cai
2c7f4ca2f2 Optimize debug log position of PD abort request (#11090)
Signed-off-by: Shangming Cai <csmthu@gmail.com>
2025-10-03 23:07:02 +08:00
shubham singhal
03def5e3b1 Fix [test]: Env:SGLANG_TORCH_PROFILER_DIR for pytest. (#10780) 2025-10-03 22:59:32 +08:00
ur4t
6ae3f05b33 Fix CUDA illegal memory access issues in speculative decoding (#10892) 2025-10-03 22:44:07 +08:00
fzyzcjy
fdc4e1e570 Tiny move files to utils folder (#11166) 2025-10-03 22:40:06 +08:00
Liangsheng Yin
04b86b3c5c [hot-fix] Fix CI break caused by adding thinking_mode in eval (#11192) 2025-10-03 18:29:27 +08:00
hlu1
d6777a706d Add --thinking-mode to run_eval (#11189)
Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
2025-10-03 16:49:39 +08:00
Matt Nappo
8c57490210 [Feature] Option to save model weights to CPU when memory saver mode is enabled (#10873)
Co-authored-by: molocule <34072934+molocule@users.noreply.github.com>
2025-10-03 16:48:19 +08:00
Keyang Ru
34151f173b [router] Streaming support for MCP Tool Calls in OpenAI Router (#11173) 2025-10-03 00:19:43 -07:00
fzyzcjy
6794d21051 Tiny add PD disaggregation + DP attention test (#11167) 2025-10-03 14:15:46 +08:00
Alex Chi Z
1a31229cd4 fix: radix cache memory accounting (#10637)
Signed-off-by: Alex Chi Z <iskyzh@gmail.com>
2025-10-02 22:47:33 -07:00
Liangsheng Yin
de89ef49da [CI] Tee server logs to both file and stdout/stderr using PIPE (#11185) 2025-10-03 12:31:13 +08:00
jacky.cheng
b00a0c786f [Fix] Update to v0.1.5.post4 and refine HIP attention backend selection (#11161) 2025-10-02 21:19:30 -07:00
b8zhong
a2faf8940c [1/n] Enable DCA CUDA graph capture (#9537) 2025-10-03 11:30:00 +08:00
Vedant V Jhaveri
7e61737d3f [Generative Scores API] add performance tests to CICD (#10830) 2025-10-02 19:57:55 -07:00
Liangsheng Yin
3c699772c9 Introduce naming convention in io_struct and base sglang io classes. (#10133) 2025-10-03 10:55:13 +08:00
Dom Brown
e810077488 Allow use of TRTLLM_MHA backend for hybrid attention on Blackwell (#11138) 2025-10-02 16:04:58 -07:00
Chang Su
963175d5c0 [router][grpc] Support streaming for v1/chat/completions (#11179) 2025-10-02 14:35:16 -07:00
gongwei-130
0618ad6dd5 fix: shouldn't include CUDA_ARCH 100 and 120 for cuda12.6.1 (#11176) 2025-10-02 13:24:23 -07:00
Lianmin Zheng
6a261aaca5 Minor fixes for server_args, parallel_state, and test_deterministic.py (#11159) 2025-10-02 12:12:49 -07:00
Liangsheng Yin
7ff740a6ce Remove dp balance metadata and minimal token balance. (#11170) 2025-10-03 01:48:15 +08:00
Liangsheng Yin
bfcd9b2433 [grpc] style fix for grpc compilation. (#11175) 2025-10-03 01:44:29 +08:00
Liangsheng Yin
458611de77 Unify forward output datastructure (#11124) 2025-10-03 00:28:57 +08:00
Chang Su
3511b37099 [proto] Add script to compile python protos (#11171) 2025-10-02 08:45:51 -07:00
fzyzcjy
afcd3e1089 Tiny remove duplicated code (#11164) 2025-10-02 21:56:31 +08:00
fzyzcjy
12d6818380 Tiny fix ep_gather behavior different in CI (#11130) 2025-10-02 21:55:53 +08:00
fzyzcjy
b65db0287b Tiny cleanup deepseek_v2.py (#11163) 2025-10-02 21:54:52 +08:00
b8zhong
948278f173 fix cpp JIT compilation issue of ngram speculative decoding (#10837) 2025-10-02 21:05:01 +08:00
Liangsheng Yin
7d00479950 Clean up ascend allocator (#11152) 2025-10-02 20:34:26 +08:00
ilyasch2
083629c235 [model] Add mamba2 and Falcon-H1 support. (#10988)
Co-authored-by: Younes Belkada <younes.belkada@tii.ae>
Co-authored-by: Younes B <49240599+younesbelkada@users.noreply.github.com>
2025-10-02 19:15:36 +08:00
Chang Su
b658be6f6a [router][grpc] Support tool call parser in streaming (#11160) 2025-10-02 03:18:50 -07:00
fzyzcjy
5e786cca3a Support single batch overlap (#10422) 2025-10-02 18:04:36 +08:00
fzyzcjy
0b9dfba787 Support dispatch low latency (#10263)
Co-authored-by: Kaixi Hou <4001424+kaixih@users.noreply.github.com>
2025-10-02 18:02:19 +08:00
Liangsheng Yin
6a29003410 Remove unused pack .item() in paged allocator. (#11156) 2025-10-02 18:01:21 +08:00
fzyzcjy
2ac453b07f Tiny detect slow ranks (#10508) 2025-10-02 18:00:33 +08:00
fzyzcjy
f35def8652 Fuse quantize and rope in trtllm_mla MTP (#10779) 2025-10-02 17:59:37 +08:00
fzyzcjy
d61615fe93 Tiny fix missing alt stream in nextn layer (#10768) 2025-10-02 17:58:23 +08:00
fzyzcjy
b1ccaf01cd Tiny improve dumper (#11132) 2025-10-02 17:55:01 +08:00
Lianmin Zheng
097725bb66 Clean up parallel_state.py (#11148) 2025-10-02 01:09:13 -07:00
fzyzcjy
44b1fbe258 Fix DeepSeek chunked prefill memory issue (#11149) 2025-10-01 23:56:59 -07:00
sogalin
c0dbbdd12b [ROCm] Reduce compile time when using torch compile. (#10559) 2025-10-01 23:53:14 -07:00
Liangsheng Yin
25e7dbe8af Fix ngram spec with page size > 1 (#11135) 2025-10-02 12:34:23 +08:00
Zhang Junda
0b2aa8a70c Introduce cpu tensor as metadata to avoid blocking gpu kernel launch (#10720)
Co-authored-by: hnyls2002 <lsyincs@gmail.com>
2025-10-02 10:51:25 +08:00
Lianmin Zheng
609f65ba23 Remove debug print statement from scheduler output (#11145) 2025-10-01 13:37:05 -07:00
Lianmin Zheng
2d62af6be5 Fix metrics and request tracing (TimeStats) (#11123) 2025-10-01 13:03:07 -07:00
Keyang Ru
a28b394fba [router] Add multi-turn tool calling loop support for MCP integration (#11143) 2025-10-01 12:50:21 -07:00