Commit Graph

181 Commits

gongwei-130
0cf3fbeb18 should return invalide request for empty prompt (#9315) 2025-08-18 11:44:11 -07:00
Chengxing Xie
c1c7dc4534 feat: Add model version tracking with API endpoints and response metadata (#8795) 2025-08-14 12:13:46 -07:00
Hongbo Xu
2cc9eeab01 [4/n] decouple quantization implementation from vLLM dependency (#9191) 2025-08-14 12:05:46 -07:00
Co-authored-by: AniZpZ <aniz1905@gmail.com>
Co-authored-by: Yineng Zhang <me@zhyncs.com>
eigen
4dbf43601d fix: zero_init buffer (#9065) 2025-08-14 02:39:09 -07:00
Co-authored-by: Yineng Zhang <me@zhyncs.com>
Jiaqi Gu
c9ee738515 Fuse writing KV buffer into rope kernel (part 2: srt) (#9014) 2025-08-12 13:15:30 -07:00
Co-authored-by: fzyzcjy <5236035+fzyzcjy@users.noreply.github.com>
Chang Su
f2a5de284b [Bugfix] Fix accuracy-test-1-gpu failure caused by builtin_tools (#9114) 2025-08-12 09:56:13 -07:00
Chang Su
a218490136 (gpt-oss, oai, chat): Remove Harmony Integration and Implement Native GPT-OSS Tool Call Support (#9043) 2025-08-11 18:59:18 -07:00
Chang Su
a6452b7188 bugfix: Fix output_ids extraction in detokenizer_manager (#9047) 2025-08-11 03:17:32 -07:00
zhyncs
f4ae50e97c fix: use flashinfer v0.2.11.post1 2025-08-11 02:49:25 -07:00
Yineng Zhang
84cb449eec Revert "chore: upgrade flashinfer 0.2.11 (#9036)" (#9057) 2025-08-11 00:16:39 -07:00
Yineng Zhang
dd001a5477 chore: upgrade flashinfer 0.2.11 (#9036) 2025-08-10 17:35:37 -07:00
Lianmin Zheng
4ea9d74a3e Simplify health check (#9034) 2025-08-10 17:35:05 -07:00
Stefan He
8ecf6b9d24 Support Flatten Tensor Update Weights to speed up MOE Update Weights by 20% (#8079) 2025-08-10 16:08:59 -07:00
Lianmin Zheng
9a44b643c6 Fix CI (#9012) 2025-08-09 13:33:42 -07:00
Yineng Zhang
326a901df4 chore: upgrade sgl-kernel 0.3.3 (#8998) 2025-08-09 01:22:01 -07:00
Lianmin Zheng
706bd69cc5 Clean up server_args.py to have a dedicated function for model specific adjustments (#8983) 2025-08-08 19:56:50 -07:00
ishandhanani
4e7f025219 chore(gb200): update to CUDA 12.9 and improve build process (#8772) 2025-08-08 13:42:47 -07:00
Zilin Zhu
dd650e0e21 [RL] fix skip_server_warmup and rl health_generate logic (#8757) 2025-08-08 04:34:38 -07:00
Lianmin Zheng
a947154286 Revert "Support Multi Process Tokenizer Manager" (#8960) 2025-08-08 02:28:27 -07:00
ybyang
7490e3f67d Support Multi Process Tokenizer Manager (#6555) 2025-08-08 01:45:50 -07:00
Signed-off-by: ybyang <ybyang7@iflytek.com>
Signed-off-by: huanglong <huanglong@linux.alibaba.com>
Co-authored-by: lw9527 <952799980@qq.com>
Co-authored-by: huanglong <huanglong@linux.alibaba.com>
Co-authored-by: Huang Long <121648372+LLLL114@users.noreply.github.com>
Xinyuan Tong
3e7ff1ab1f fix: reasoning parser when request have enable_thinking flag (#8933) 2025-08-07 15:52:06 -07:00
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Xinyuan Tong
3fa3c6cd6a Enables force reasoning based on chat template for Qwen3-Thinking (#8369) 2025-08-06 20:02:47 -07:00
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Co-authored-by: Chang Su <csu272@usc.edu>
Lifu Huang
6210e2c4f0 Support GPU pinning for LoRA (#8697) 2025-08-06 19:39:45 -07:00
Chang Su
92cc32d9fc Support v1/responses and use harmony in serving_chat (#8837) 2025-08-06 16:20:34 -07:00
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>
Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Yineng Zhang
3ae8e3ea8f chore: upgrade torch 2.8.0 (#8836) 2025-08-05 17:32:01 -07:00
Yineng Zhang
4f4e0e4162 chore: upgrade flashinfer 0.2.10 (#8827) 2025-08-05 12:04:01 -07:00
Yineng Zhang
1ea94d3b92 chore: upgrade flashinfer v0.2.9 (#8780) 2025-08-04 21:59:18 -07:00
ybyang
6f9baf1002 [Improvements] Merge health check route (#8444) 2025-08-03 01:59:06 -07:00
Signed-off-by: ybyang <ybyang7@iflytek.com>
Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
Co-authored-by: Kan Wu <wukanustc@gmail.com>
Guanhua Wang
f7b2853ff8 [feat] support minimum token load balance in dp attention (#7379) 2025-08-03 00:46:47 -07:00
Nicolas Castet
82e6c3a65a Add support for NCCL symmetric memory for TP allreduces (#8238) 2025-08-01 23:30:55 +00:00
Cheng Wan
7a1f7fc504 [Feature] Hybrid EP and TP (#8590) 2025-07-31 02:53:25 -07:00
Cheng Wan
e179e0b797 update sgl-kernel for EP: python part (#8550) 2025-07-31 00:14:39 -07:00
Chang Su
a79a5d7012 Revert "Fix the input tools format and history tool_calls in OpenAI API (#6556)" (#8584) 2025-07-30 13:12:05 -07:00
Lianmin Zheng
a4c3b121d8 Split the scheduler into multiple mixin classes to reduce the file size (#8483) 2025-07-29 12:46:50 -07:00
Timofey
c8f549d96d Fix parsing ChatCompletionMessage (#7273) 2025-07-28 11:35:14 -07:00
Co-authored-by: Timofey K <timosha1113@gmail.com>
harrisonlimh
747dd45077 feat: throttle requests at scheduler based on --max_queued_requests (#7565) 2025-07-28 22:32:33 +08:00
Chang Su
b47eda3316 bugfix: Fix multiple finish_reason chunks and tool_calls finish reason check (#8417) 2025-07-27 13:31:06 -07:00
Binyao Jiang
e983d66680 Fix: Improve test_openai_function_calling unit test and fix reasoning_parser.py think_start_token logic (#8316) 2025-07-27 13:12:59 -07:00
Co-authored-by: Chang Su <chang.s.su@oracle.com>
Yineng Zhang
10ee89559e chore: upgrade flashinfer v0.2.9rc2 (#8406) 2025-07-27 01:41:22 -07:00
Yingchun Lai
36d6f0ba5b fix: fix the missing metrics on non-rank0 nodes (#7720) 2025-07-27 00:55:25 -07:00
Lianmin Zheng
ed2e313eb6 Clean up server_args, triton cache manager (#8332) 2025-07-25 14:14:51 -07:00
Ying Wang
7ad6b766c5 fix: Fix failed functional tests https://github.com/meta-llama/llama-stack-evals (#8266) 2025-07-24 23:11:32 -07:00
Swipe4057
8d1c5b948e chore: upgrade flashinfer v0.2.9rc1 (#8301) 2025-07-24 14:29:56 -07:00
Co-authored-by: Yineng Zhang <me@zhyncs.com>
Simo Lin
5dd0f870ab [bug] fix pd completion protocol for batching support (#8317) 2025-07-23 23:18:17 -07:00
Yineng Zhang
4953f4ca9a chore: upgrade sgl-kernel 0.2.7 (#8304) 2025-07-23 15:07:27 -07:00
xianzhiT
c87d4fec99 Fix the issue of incorrect finish reason in final stream response chunk returned during tool call (#7708) 2025-07-23 13:28:53 -07:00
Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
Yineng Zhang
74f59ae555 chore: upgrade sgl-kernel 0.2.6.post1 (#8202) 2025-07-21 02:10:24 -07:00
Lianmin Zheng
55381a46ac Revert "[Feature] Simple Improve Health Check Mechanism for Production-Grade Stability" (#8181) 2025-07-19 22:41:30 -07:00
ybyang
4540a4666a [Feature] Simple Improve Health Check Mechanism for Production-Grade Stability (#8115) 2025-07-19 18:10:00 -07:00
Signed-off-by: ybyang <ybyang7@iflytek.com>
Yineng Zhang
561dd7b2ce chore: upgrade sgl-kernel 0.2.6 (#8166) 2025-07-19 03:17:08 -07:00