Commit Graph

3595 Commits

Author SHA1 Message Date
Lifu Huang
3f41b48c40 [2/2] Introduce Chunked-SGMV kernels and corresponding LoRA backend for improved performance (#10286) 2025-09-15 16:04:03 -07:00
Liangsheng Yin
c3c26f76b3 [Env] minimal version for organizing envs (#10479) 2025-09-16 03:51:25 +08:00
Liangsheng Yin
2cf811a9da Fix --dataset-path in bench_one_batch_server (#10475) 2025-09-16 02:55:02 +08:00
Praneth Paruchuri
a45d9a4ee8 model: support solar (#8189) 2025-09-16 02:21:13 +08:00
Jonas
28c79dc84a fix: gpt-oss streaming dropping normal content when tools are provided but not used (#9657) 2025-09-15 11:02:32 -07:00
Kevin Tuan
1fcccda4b2 fix(internvl): fix accuracy issue of normalization (#10375) 2025-09-16 01:56:01 +08:00
Yingchun Lai
b1721edbac [PD metrics] Add latency Histogram metrics of each stage for generate requests (#8710) 2025-09-16 01:52:49 +08:00
Yineng Zhang
86a32bb5cd chore: bump v0.5.3rc0 (#10468) 2025-09-15 03:55:18 -07:00
Yineng Zhang
5afd036533 feat: support pip install sglang (#10465) 2025-09-15 03:09:17 -07:00
fzyzcjy
059c13de5c Fix trtllm_moe wrong correction bias (#10440) 2025-09-15 01:02:05 -07:00
Lianmin Zheng
50dc0c1e9c Run tests based on labels (#10456) 2025-09-15 00:29:20 -07:00
Jimmy_L
76becc1dbc Add rtx5880 moe triton (#10439) 2025-09-15 00:12:10 -07:00
Jimmy
3795b6a43f fix(server_args): Skip chunked_prefill_size validation when disaggregation mode is decode (#10358) 2025-09-15 12:13:35 +08:00
Mick
0549f21c60 fix: fix max_new_tokens uninitialized error (#9343) 2025-09-15 12:06:55 +08:00
Vincent Zhong
0b14159fc4 Add reasoning examples for GPT-OSS in Markdown examples (#9626) 2025-09-15 11:27:40 +08:00
Yingchun Lai
fc2c3a3d8e metrics: support customer labels specified in request header (#10143) 2025-09-14 20:00:08 -07:00
fzyzcjy
010181388c Tiny fix wrong naming (#10437) 2025-09-14 19:24:41 -07:00
Cheng Wan
4844fac91d Refactor TopK to ensure readability and extensibility (#9338) 2025-09-14 19:16:25 -07:00
Liangsheng Yin
305c9e8c2d [4/N]DP refactor: support watching mode get_load and shortest queue strategy (#10201) 2025-09-15 10:06:08 +08:00
fzyzcjy
258d02c86d Fix correction bias undefined behavior for nvfp4 models (#10426) 2025-09-14 18:41:09 -07:00
Ke Bao
60d7beda6b Add split tile size for Triton attention (#10425) 2025-09-14 17:35:49 -07:00
Cheng Wan
2f8ba6fe82 [Fix] MoE: fix w8a8_fp8 MoE and add tests to cover this code path (#10429) 2025-09-14 17:34:28 -07:00
Feng Su
4c21b09074 [Feature] Sglang Tracing: Fine-Grained Tracking for Request Latency - Part 1 (#9962) 2025-09-15 02:08:02 +08:00
Signed-off-by: Feng Su <sufeng@linux.alibaba.com>
Signed-off-by: Huaixin Chang <changhuaixin@linux.alibaba.com>
Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
艾力可
165abeebca Typo: in --enable-custom-logit-processor: agree with cli arg (#10076) 2025-09-14 02:27:09 -07:00
Yingchun Lai
21ca4c3afa [PD metrics] Fix some uncompleted PD related metrics (#8627) 2025-09-14 02:26:58 -07:00
fzyzcjy
4da5533682 Support profile args in Engine API (#6539) 2025-09-14 01:21:10 -07:00
fzyzcjy
ac964d2e58 Support global scale in addition to per expert scale for cutedsl moe (#10270) 2025-09-14 01:17:00 -07:00
fzyzcjy
fa46e2bd40 Support offloading in fp8 (#9948) 2025-09-14 01:14:28 -07:00
fzyzcjy
b047b553c2 [2/2] Speed up prefill mla attention concat (#10157) 2025-09-14 01:12:04 -07:00
fzyzcjy
2df532ef20 Fix the global scale fix does not support EPLB and improve enabling condition (#10369) 2025-09-14 01:07:47 -07:00
Ximingwang-09
b3c977622f Add h200 fused moe config for Qwen3-Next (#10404) 2025-09-13 23:48:46 -07:00
Co-authored-by: 纬杭 <ximing.wxm@antgroup.com>
Yuxuan Zhang
b8347b40b1 Add self.capture_aux_hidden_states For GLM-4.5V (#10228) 2025-09-13 23:31:55 -07:00
fzyzcjy
72dfa96aeb Fix cutlass moe accuracy drop caused by attention UB from DP padding mode (#10414) 2025-09-13 22:29:09 -07:00
lijin23
05b01ef4da fix duplicated logger in eager_utils (#10410) 2025-09-13 22:25:40 -07:00
Liangsheng Yin
55a6e644b0 [Hack] Add pd-disaggregation decode polling interval (#10411) 2025-09-14 10:18:23 +08:00
Liangsheng Yin
6897e06b69 Remove repeatedly lists adding in init_incremental_detokenization (#10412) 2025-09-14 10:05:52 +08:00
Sundara Raman Ramachandran
a360511d7b [Generative Score API] Scoring(Prefill-only) optimizations. (#9748) 2025-09-14 01:57:06 +08:00
Sundara Raman Ramachandran
94d0f656fb [Performance] Dynamic Batch Tokenizer (#9382) 2025-09-14 01:56:04 +08:00
Binyao Jiang
9752861002 [Fix] Support qwen3-next MTP+DP (#10392) 2025-09-13 17:45:04 +08:00
Yi Zhang
297d374510 support qwen3_next blackwell (#10403) 2025-09-13 17:18:26 +08:00
Binyao Jiang
31e9d3a5aa [Fix] Init mamba related memory pools with torch.zeros (#10400) 2025-09-13 14:16:48 +08:00
Xinyuan Tong
6f4676ef85 fix: tool parse in large streaming chunk beginning with normal content (#10397) 2025-09-12 22:29:35 -07:00
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
narutolhy
99757cc3e6 fix probs name which without temp scaling name (#9984) 2025-09-13 12:19:57 +08:00
Lianmin Zheng
cdddab056c [Auto Sync] Update xgrammar_backend.py (20250913) (#10395) 2025-09-12 17:46:56 -07:00
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Teng Ma
49f169d53e [HiCache] doc: update deployment in readme (#10332) 2025-09-12 16:35:37 -07:00
Signed-off-by: Teng Ma <sima.mt@alibaba-inc.com>
Teng Ma
7fce2fd91a [HiCache] fix mooncake config in different tp size (#10377) 2025-09-12 16:34:23 -07:00
Even Zhou
16cd550c85 Support Qwen3-Next on Ascend NPU (#10379) 2025-09-12 16:31:37 -07:00
Muqi Li
d5e2a37414 Benchmark: Support API_KEY without 'bearer' (#10380) 2025-09-12 16:29:04 -07:00
Mohammad Miadh Angkad
321fecab74 Add sentencepiece to project dependencies (#10386) 2025-09-12 16:02:54 -07:00
kk
78b7465cad Fix GPU fault issue when run dsv3 with dp mode and enable torch-compile (#10361) 2025-09-12 15:05:51 -07:00
Co-authored-by: wunhuang <wunhuang@amd.com>