Commit Graph

2053 Commits

Author SHA1 Message Date
woodx  2c3ea29476 [Feature] support auto chat template (#4949) 2025-04-28 22:34:18 -07:00
Trevor Morris  8d463fe351 Cutlass MLA decode - fix dtype error (#5868) 2025-04-28 21:12:58 -07:00
Lianmin Zheng  26fc32d168 [CI] tune the test order to warmup the server (#5860) 2025-04-28 19:27:37 -07:00
Xiaoyu Zhang  1cc326032d simplify fused_moe config logging (#5801) 2025-04-28 17:04:54 -07:00
Chang Su  05ee219286 Support max_completion_tokens for OpenAIChatCompletions (#5857) 2025-04-28 13:50:13 -07:00
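The max_completion_tokens change (#5857) corresponds to the OpenAI Chat Completions request field of the same name (the newer alias for max_tokens). A minimal sketch of such a request body; the model name is a placeholder, not one served by any particular deployment:

```python
import json

# Build an OpenAI-compatible /v1/chat/completions request body that caps
# the number of generated tokens via max_completion_tokens.
payload = {
    "model": "my-served-model",  # hypothetical model name
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_completion_tokens": 64,  # upper bound on tokens in the completion
}
body = json.dumps(payload)
```

The resulting JSON string can be POSTed to any OpenAI-compatible endpoint.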
Yineng Zhang  dcae1fb2cd chore: bump v0.4.6.post1 (#5845) 2025-04-28 12:57:08 -07:00
Yi Zhang  a0251a3fd6 add fused moe config for qwen3moe fp8/bf16 (#5849) 2025-04-28 11:55:52 -07:00
Yineng Zhang  663037a7a0 feat: update is_fa3_default_architecture (#5854) 2025-04-28 11:53:22 -07:00
XTY  f4a9f60cbd [Fix] Missing bootstrap_port field (#5823) 2025-04-28 11:13:04 -07:00
HAI  d364b9b0f2 ROCm: update AITER (#5816) 2025-04-28 11:01:20 -07:00
Lianmin Zheng  849c83a0c0 [CI] test chunked prefill more (#5798) 2025-04-28 10:57:17 -07:00
JiLi  d73ddeb196 feat: Add fused moe triton config for qwen3-30b-fp8 moe on h20 (#5850) 2025-04-28 10:49:33 -07:00
ybyang  74cb12a878 [config] qwen3moe_tune_h20 fp8 tp4 (#5846) 2025-04-28 10:21:06 -07:00
ybyang  c6c6264073 [PD] support pd fake transfer for warmup (#5726) 2025-04-29 00:33:20 +08:00
yhyang201  92ab0a2055 feat: Add fused moe triton config for qwen3bf16 moe on h20 (#5839) 2025-04-28 09:30:59 -07:00
XinyuanTong  0045f4b2af feat: Add fused moe triton config for qwen3 moe on h100 (#5833) 2025-04-28 08:37:13 -07:00
mlmz  8601300beb fix: fix the error where the content is None when reasoning and tool … (#5838) 2025-04-28 08:36:08 -07:00
mlmz  6fa6f38ed3 Feat: add support for thinking mode via chat_template_kwargs.enable_t… (#5551) 2025-04-28 07:07:45 -07:00
    Co-authored-by: shuaills <shishuaiuoe@gmail.com>
    Co-authored-by: Chayenne <zhaochen20@outlook.com>
    Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
    Co-authored-by: Yineng Zhang <me@zhyncs.com>
Lianmin Zheng  693723d1f7 Revert "Tiny refactor DefaultModelLoader.Source" (#5825) 2025-04-28 01:18:57 -07:00
fzyzcjy  644ed409d1 Tiny refactor DefaultModelLoader.Source (#5482) 2025-04-28 00:35:51 -07:00
Lianmin Zheng  3029889cb4 Turn on overlap scheduler for multimodal models (#5771) 2025-04-27 23:45:09 -07:00
Yineng Zhang  41ac0c6d48 chore: upgrade sgl-kernel 0.1.0 (#5690) 2025-04-27 21:00:50 -07:00
Trevor Morris  84810da4ae Add Cutlass MLA attention backend (#5390) 2025-04-27 20:58:53 -07:00
Liangsheng Yin  40d9b8acce Improve overlap scheduling (#5788) 2025-04-28 11:19:16 +08:00
Lianmin Zheng  daed453e84 [CI] Improve github summary & enable fa3 for more models (#5796) 2025-04-27 15:29:46 -07:00
Baizhou Zhang  84022c0e56 Release v0.4.6 (#5795) 2025-04-27 14:07:05 -07:00
Lianmin Zheng  a38f6932cc [CI] Fix test case (#5790) 2025-04-27 08:55:35 -07:00
Liangsheng Yin  beb65c7433 [PD] Reduce kv transfer threads (#5791) 2025-04-27 23:03:30 +08:00
Lianmin Zheng  621e96bf9b [CI] Fix ci tests (#5769) 2025-04-27 07:18:10 -07:00
Lianmin Zheng  35ca04d2fa [CI] fix port conflicts (#5789) 2025-04-27 05:17:44 -07:00
Lianmin Zheng  9c088829ee Revert "Use device_id in dist init to reduce NCCL communicator warmup & creation overhead" (#5786) 2025-04-27 04:03:02 -07:00
Lianmin Zheng  005aad32ad Revert "[fix] fix bench_one_batch_server" (#5785) 2025-04-27 03:48:33 -07:00
Lianmin Zheng  6e313c1b8b Revert "Revert "fix: import vllm_rotary_embedding error when head_size not in 64, 128, 256, 512"" (#5777) 2025-04-27 01:04:15 -07:00
Lianmin Zheng  8ba313304d Revert "fix: import vllm_rotary_embedding error when head_size not in 64, 128, 256, 512" (#5772) 2025-04-26 23:26:08 -07:00
zhanweidu  021020632a add switch to disable open api doc (#3744) 2025-04-26 23:18:47 -07:00
    Signed-off-by: congcongke <zhanweidu@163.com>
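The "disable open api doc" switch (#3744) follows a pattern common to FastAPI-style servers: setting the docs and OpenAPI schema URLs to None turns the generated documentation pages off. A stdlib-only sketch of the flag-to-URL mapping; the flag name here is hypothetical and not necessarily the one the PR added:

```python
from typing import Optional, Tuple

def build_docs_urls(disable_openapi_doc: bool) -> Tuple[Optional[str], Optional[str]]:
    # When the switch is on, return None for both URLs, which in
    # FastAPI-style frameworks disables the /docs page and the
    # /openapi.json schema endpoint.
    docs_url: Optional[str] = None if disable_openapi_doc else "/docs"
    openapi_url: Optional[str] = None if disable_openapi_doc else "/openapi.json"
    return docs_url, openapi_url
```

The returned pair would be passed to the application constructor when the server starts.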
Kebe  7e944246c3 Add memory_saver check (#4986) 2025-04-26 20:20:05 -07:00
    Signed-off-by: Kebe <mail@kebe7jun.com>
lambert0312  a086a11305 Use sgl-kernel sgl_per_token_group_quant_int8 (#4971) 2025-04-26 20:19:42 -07:00
Michał Moskal  bdbe5f816b update llguidance to 0.7.11; adds StructTag (#4870) 2025-04-26 20:13:57 -07:00
aoshen524  9ad28f639e fix(srt): check if sample_indices is not None before usage. (#5633) 2025-04-26 19:51:01 -07:00
yan97ao  d7b1ce65a5 Handle JSONDecodeError while processing request data (#5599) 2025-04-26 19:50:50 -07:00
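Handling JSONDecodeError (#5599) typically means catching it at the request-parsing boundary and returning an error response instead of letting the exception propagate. A generic sketch, not the PR's exact code:

```python
import json
from typing import Any, Optional, Tuple

def parse_request_body(raw: bytes) -> Tuple[Optional[Any], Optional[str]]:
    # Return (data, error): well-formed JSON yields (data, None);
    # malformed JSON yields (None, message) suitable for a 400 response
    # instead of an unhandled json.JSONDecodeError.
    try:
        return json.loads(raw), None
    except json.JSONDecodeError as e:
        return None, f"Invalid JSON in request body: {e.msg} at pos {e.pos}"
```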
JieXin Liang  f55933e1cc [misc] more decode step log for batch_one_batch (#5565) 2025-04-26 19:50:28 -07:00
Stefan He  408ba02218 Add Llama 4 to FA3 test (#5509) 2025-04-26 19:49:31 -07:00
vzed  094891c01a fix: Use is not None instead of != None for None checks. (#5687) 2025-04-26 19:26:57 -07:00
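The `is not None` fix (#5687) matters because `==` and `!=` dispatch to a type's `__eq__`, which can be overridden, while `is` compares object identity and cannot be. A small illustration:

```python
class AlwaysEqual:
    """A type whose __eq__ answers True for any operand."""

    def __eq__(self, other):
        return True

x = AlwaysEqual()
# `x == None` evaluates to True and `x != None` to False, so an
# equality-based None check would wrongly treat x as missing.
# `x is not None` checks identity and is immune to the override.
```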
Frankey_8080  a21ef36352 support for the DeepSeek model by enabling streaming response parsing (#5592) 2025-04-26 18:59:31 -07:00
JieXin Liang  3c4dc38a9a [fix] fix bench_one_batch_server (#5607) 2025-04-26 18:49:45 -07:00
DavidBao  d8fbc7c096 [feature] support for roberta embedding models (#5730) 2025-04-26 18:47:06 -07:00
Ke Bao  799c4bb502 Fuse MLA set kv cache kernel (#5748) 2025-04-26 18:42:22 -07:00
vzed  df2cf583ce we fix the non existent access of decrypted_config_file (#5685) 2025-04-26 18:32:37 -07:00
saltyfish66  133ded039a perf: update H20 fused_moe_triton kernel config to get higher throughput during prefilling (#5716) 2025-04-26 18:15:07 -07:00
Yuhong Guo  f87a6ab359 Resolves the 404 Not Found error when running compile_deep_gemm.py in multi-node setups (#5720) 2025-04-26 18:13:13 -07:00