Baizhou Zhang | 75e6a7cde1 | Support radix cache for Lora feature (#7216) | 2025-08-11 10:14:11 -07:00

Cheng Wan | f003cd3548 | [CI] Fix CI tests (#9050) | 2025-08-10 23:52:05 -07:00
    Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

Lianmin Zheng | 2449a0afe2 | Refactor the docs (#9031) | 2025-08-10 19:49:45 -07:00

Stefan He | 8ecf6b9d24 | Support Flatten Tensor Update Weights to speed up MOE Update Weights by 20% (#8079) | 2025-08-10 16:08:59 -07:00

Lifu Huang | e322a94d1f | Reduce CI duration of test_lora_update. (#9024) | 2025-08-10 15:34:04 -07:00

Lianmin Zheng | 2c7f01bc89 | Reorganize CI and test files (#9027) | 2025-08-10 12:30:06 -07:00

Stefan He | 6345069f6c | [RL] Add test for /abort_request (#7626) | 2025-08-10 09:14:19 -07:00

Lianmin Zheng | ef48d5547e | Fix CI (#9013) | 2025-08-09 16:00:10 -07:00

Lianmin Zheng | 9a44b643c6 | Fix CI (#9012) | 2025-08-09 13:33:42 -07:00

Binyao Jiang | f29aba8c6e | Support glm4.1v and glm4.5v (#8798) | 2025-08-09 00:59:13 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>
    Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
    Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: zRzRzRzRzRzRzR <2448370773@qq.com>
    Co-authored-by: Minglei Zhu <mingleizhu1122@gmail.com>
    Co-authored-by: Chang Su <csu272@usc.edu>

Binyao Jiang | 7b81f956eb | Fix qwen2 audio not working bug (#8600) | 2025-08-09 00:42:29 -07:00

fzyzcjy | 442534aa44 | Add CI for gpt-oss model on hopper (#8851) | 2025-08-09 00:34:23 -07:00

Lianmin Zheng | 706bd69cc5 | Clean up server_args.py to have a dedicated function for model specific adjustments (#8983) | 2025-08-08 19:56:50 -07:00

maocheng23 | b3359dc9bf | Update qwen3_coder_detector.py for streaming (#8371) | 2025-08-08 14:51:03 -07:00

Lianmin Zheng | a947154286 | Revert "Support Multi Process Tokenizer Manager" (#8960) | 2025-08-08 02:28:27 -07:00

ybyang | 7490e3f67d | Support Multi Process Tokenizer Manager (#6555) | 2025-08-08 01:45:50 -07:00
    Signed-off-by: ybyang <ybyang7@iflytek.com>
    Signed-off-by: huanglong <huanglong@linux.alibaba.com>
    Co-authored-by: lw9527 <952799980@qq.com>
    Co-authored-by: huanglong <huanglong@linux.alibaba.com>
    Co-authored-by: Huang Long <121648372+LLLL114@users.noreply.github.com>

Minglei Zhu | 6ee6619b7a | add zai-org/GLM-4.5-Air-FP8 model into nightly CI (#8894) | 2025-08-08 01:44:19 -07:00

Zheng Wengang | 2d120f8b18 | [Feature][Multimodal] Implement LRU cache for multimodal embeddings (#8292) | 2025-08-06 23:21:40 -07:00
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
    Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>

Xinyuan Tong | 3fa3c6cd6a | Enables force reasoning based on chat template for Qwen3-Thinking (#8369) | 2025-08-06 20:02:47 -07:00
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
    Co-authored-by: Chang Su <csu272@usc.edu>

Lifu Huang | 6210e2c4f0 | Support GPU pinning for LoRA (#8697) | 2025-08-06 19:39:45 -07:00

fzyzcjy | b114a8105b | Support B200 in CI (#8861) | 2025-08-06 21:42:44 +08:00

Ke Bao | 4fc5f2f977 | Add unit test for triton swa kernel (#8853) | 2025-08-06 16:10:38 +08:00

HouseWest | ca47e24f5d | [Feature] improve TBO: two chunk overlap (#8144) | 2025-08-05 21:11:01 -07:00

Praneth Paruchuri | d26ca84f39 | Support bailing moe (#8680) | 2025-08-05 20:40:34 -07:00

Yuhao Yao | 873f384a51 | [feat] Add detail in image_data (#8596) | 2025-08-05 14:01:38 +08:00

Chunyuan WU | 08f8f49016 | [CPU][sgl-kernel] biased_grouped_topk: fix correction_bias dtype to float32 (#8212) | 2025-08-04 18:28:31 -07:00
    Co-authored-by: jianan-gu <jianan.gu@intel.com>
    Co-authored-by: YanbingJiang <yanbing.jiang@intel.com>

Even Zhou | fee0ab0fba | [CI] Ascend NPU CI enhancement (#8294) | 2025-08-03 22:16:38 -07:00
    Co-authored-by: ronnie_zheng <zl19940307@163.com>

Guanhua Wang | f7b2853ff8 | [feat] support minimum token load balance in dp attention (#7379) | 2025-08-03 00:46:47 -07:00

Lifu Huang | 8675bdf246 | Support limiting max loaded loras in CPU. (#8650) | 2025-08-03 00:02:23 -07:00

DarkSharpness | e273aa6dcf | [Feature] Radix Tree in C++ (#7369) | 2025-08-02 19:50:14 -07:00

Stefan He | 4ca43b061c | Add tensor.detach() back to update weight util (#8691) | 2025-08-02 00:41:05 -07:00

YanbingJiang | 1fe691a429 | Fix FP8 block quantization when N or K is not multiples of 128 (#8648) | 2025-08-01 15:57:19 -07:00

Lifu Huang | 46e9d1c7c1 | Increase tolerance to address CI failures (#8643) | 2025-08-01 02:32:10 -07:00

Cheng Wan | 6c88f6c8d9 | [5/N] MoE Refactor: Update MoE parallelism arguments (#8658) | 2025-08-01 01:20:03 -07:00

Xinyuan Tong | 7e831efee8 | Fix chat template handling for OpenAI serving (#8635) | 2025-07-31 21:49:45 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>

Chang Su | 51c38163c1 | model: support Step3V (#8583) | 2025-07-31 02:41:00 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
    Co-authored-by: nnnobody-code <nnnobody@foxmail.com>
    Co-authored-by: ispobock <ispobaoke@gmail.com>
    Co-authored-by: Qiaolin-Yu <qy254@cornell.edu>
    Co-authored-by: Qiaolin-Yu <liin1211@outlook.com>
    Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>
    Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>

Binyao Jiang | 59aab76f0a | Bug: Fix google gemma3n-mm audio input not working bug (#8365) | 2025-07-30 21:23:09 -07:00

Lifu Huang | 67e53b16f5 | Bump transfomers to 4.54.1 to fix Gemma cache issue. (#8541) | 2025-07-30 19:50:54 -07:00

Stefan He | c0fd77e839 | bring back kimi vl ci (#8537) | 2025-07-29 13:14:18 -07:00

harrisonlimh | 747dd45077 | feat: throttle requests at scheduler based on --max_queued_requests (#7565) | 2025-07-28 22:32:33 +08:00

Binyao Jiang | 581e7dcb92 | GLM-4.5 Model Support Follow-up (#8445) | 2025-07-27 23:35:20 -07:00

Yuxuan Zhang | 6d6a8bc278 | GLM-4.5 Model Support (#8224) | 2025-07-27 22:54:07 -07:00
    Co-authored-by: Lifu Huang <lifu.hlf@gmail.com>
    Co-authored-by: Binyao Jiang <byjiang1996@gmail.com>
    Co-authored-by: Stefan He <hebiaobuaa@gmail.com>

Stefan He | 4ad9737045 | chore: bump transformer to 4.54.0 (#8416) | 2025-07-27 21:27:25 -07:00
    Co-authored-by: Binyao Jiang <byjiang1996@gmail.com>
    Co-authored-by: Lifu Huang <lifu.hlf@gmail.com>

Qiaolin Yu | 2810338401 | [feat] Support different attention backends for prefill and decode (#6338) | 2025-07-28 11:42:29 +08:00
    Co-authored-by: tianqilin.99 <tianqilin.99@bytedance.com>
    Co-authored-by: Baizhou Zhang <sobereddiezhang@gmail.com>

Chang Su | 58dd95fbc8 | Fix test_openai_server (#8419) | 2025-07-27 13:36:01 -07:00

Chang Su | b47eda3316 | bugfix: Fix multiple finish_reason chunks and tool_calls finish reason check (#8417) | 2025-07-27 13:31:06 -07:00

Binyao Jiang | e983d66680 | Fix: Improve test_openai_function_calling unit test and fix reasoning_parser.py think_start_token logic (#8316) | 2025-07-27 13:12:59 -07:00
    Co-authored-by: Chang Su <chang.s.su@oracle.com>

Lifu Huang | df90645525 | Support overlapped lora updates (#8213) | 2025-07-27 13:00:44 -07:00

Kevin Xiang Li | 44d600cd67 | Support precomputed_embeddings for Llama 4 (#8156) | 2025-07-27 01:14:49 -07:00
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: Xiang (Kevin) Li <lik@nvidia.com>
    Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
    Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>

Lifu Huang | 5c705b1dce | Add perf tests for LoRA (#8314) | 2025-07-26 14:55:22 -07:00