Baizhou Zhang | 75e6a7cde1 | Support radix cache for Lora feature (#7216) | 2025-08-11 10:14:11 -07:00
Lifu Huang | e322a94d1f | Reduce CI duration of test_lora_update. (#9024) | 2025-08-10 15:34:04 -07:00
Lianmin Zheng | 2c7f01bc89 | Reorganize CI and test files (#9027) | 2025-08-10 12:30:06 -07:00
Lianmin Zheng | ef48d5547e | Fix CI (#9013) | 2025-08-09 16:00:10 -07:00
fzyzcjy | 442534aa44 | Add CI for gpt-oss model on hopper (#8851) | 2025-08-09 00:34:23 -07:00
Lianmin Zheng | 706bd69cc5 | Clean up server_args.py to have a dedicated function for model specific adjustments (#8983) | 2025-08-08 19:56:50 -07:00
Lianmin Zheng | a947154286 | Revert "Support Multi Process Tokenizer Manager" (#8960) | 2025-08-08 02:28:27 -07:00
ybyang | 7490e3f67d | Support Multi Process Tokenizer Manager (#6555) | 2025-08-08 01:45:50 -07:00
    Signed-off-by: ybyang <ybyang7@iflytek.com>
    Signed-off-by: huanglong <huanglong@linux.alibaba.com>
    Co-authored-by: lw9527 <952799980@qq.com>
    Co-authored-by: huanglong <huanglong@linux.alibaba.com>
    Co-authored-by: Huang Long <121648372+LLLL114@users.noreply.github.com>
fzyzcjy | b114a8105b | Support B200 in CI (#8861) | 2025-08-06 21:42:44 +08:00
Even Zhou | fee0ab0fba | [CI] Ascend NPU CI enhancement (#8294) | 2025-08-03 22:16:38 -07:00
    Co-authored-by: ronnie_zheng <zl19940307@163.com>
harrisonlimh | 747dd45077 | feat: throttle requests at scheduler based on --max_queued_requests (#7565) | 2025-07-28 22:32:33 +08:00
Qiaolin Yu | 2810338401 | [feat] Support different attention backends for prefill and decode (#6338) | 2025-07-28 11:42:29 +08:00
    Co-authored-by: tianqilin.99 <tianqilin.99@bytedance.com>
    Co-authored-by: Baizhou Zhang <sobereddiezhang@gmail.com>
Stefan He | ce32bc2ba9 | Extract update_weights from RL Engine to SGLang to keep simplicity and fix torch reduce (#8267) | 2025-07-26 02:00:59 -07:00
    Co-authored-by: CuiBo <82354186+SuperCB@users.noreply.github.com>
    Co-authored-by: GeLee <865038696@qq.com>
    Co-authored-by: 杨睿 <yangruipis@163.com>
Xinyuan Tong | 38000a5f44 | Fix gemma3n with hybrid swa (#8240) | 2025-07-23 13:29:18 -07:00
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Lifu Huang | 8abd3e77fe | Introduce Stable LoRA ID System for Overlapped Updates and Prefix Caching (#8261) | 2025-07-23 00:32:16 -07:00
Pavel Logachev | 877e35d775 | Add get_hidden_dim to qwen3.py for correct lora (#7312) | 2025-07-19 19:31:16 -07:00
Lifu Huang | 4e3defe5a7 | Support start up LoRA server without initial adapters (#8019) | 2025-07-19 15:38:09 -07:00
Lifu Huang | 3de617a75b | Fix LoRA buffer contamination during adapter eviction (#8103) | 2025-07-19 13:14:08 -07:00
Lianmin Zheng | 9c7a46180c | [Doc] Steps to add a new attention backend (#8155) | 2025-07-18 16:38:26 -07:00
Hubert Lu | 7750b91ca8 | [AMD] Add triton awq_dequantize kernel to support AWQ on ROCm (#7661) | 2025-07-18 14:27:25 -07:00
Zhiqiang Xie | 9d33fcfb8e | Hicache Storage Layer Prototype (#7704) | 2025-07-18 15:20:19 +08:00
Lifu Huang | e2ed9d049a | Refactor dynamic LoRA update to fix incorrect handling of variant weight shapes (#7844) | 2025-07-13 18:36:01 -07:00
kyleliang-nv | dd445a41f5 | [feature] Add start step profile argument in /start_profile (#7608) | 2025-07-09 18:42:15 -07:00
Cheng Wan | d487555f84 | [CI] Add deepep tests to CI (#7872) | 2025-07-09 01:49:47 -07:00
Xinyuan Tong | 136c6e0431 | fix: Handles input_embeds in GenerateReqInput when n>1 (#7830) | 2025-07-08 14:00:42 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Lifu Huang | 2b0e1d1ce0 | [Minor] Fix sporadic CI timeout caused by underestimated tests. (#7850) | 2025-07-08 01:01:49 -07:00
Hubert Lu | e00715eb66 | [AMD] Add test_fused_moe.py and test_rope_rocm.py to AMD CI (#5246) | 2025-07-06 01:47:16 -07:00
YanbingJiang | 4de0395343 | Add V2-lite model test (#7390) | 2025-07-03 22:25:50 -07:00
    Co-authored-by: DiweiSun <105627594+DiweiSun@users.noreply.github.com>
ronnie_zheng | 1e0e549766 | Ascend attention backend(PA&MLA) (#7722) | 2025-07-03 09:23:19 -07:00
    Co-authored-by: Maksim <makcum888e@mail.ru>
    Co-authored-by: VDV1985 <vladdv85@mail.ru>
Hubert Lu | b116b21a46 | [AMD] Temporarily disable test_no_overlap_scheduler and test_vision_chunked_prefill (#7717) | 2025-07-02 12:39:18 -07:00
Lianmin Zheng | 22352d47a9 | Improve streaming, log_level, memory report, weight loading, and benchmark script (#7632) | 2025-06-29 23:16:19 -07:00
    Co-authored-by: Kan Wu <wukanustc@gmail.com>
Chunyuan WU | c5131f7a2f | [CPU] add c++ kernel to bind CPU cores and memory node (#7524) | 2025-06-29 19:45:25 -07:00
Xinyuan Tong | 8f335b5bd6 | Fix stream reasoning parser and Adds Kimi reasoning parser (#7432) | 2025-06-29 14:39:05 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Yineng Zhang | a8c10aeeee | fix unit tests (#7618) | 2025-06-28 00:32:41 -07:00
Lifu Huang | 49538d111b | Support dynamic LoRA loading / unloading in engine/server API (#7446) | 2025-06-27 21:00:27 -07:00
Lifu Huang | 2373faa317 | Fix flakiness in LoRA batch test. (#7552) | 2025-06-27 19:51:43 -07:00
Ata Fatahi | 031f64aa1b | Add e2e test for multi instance multi stage memory release/resume occupuation (#7208) | 2025-06-26 17:40:38 -07:00
    Signed-off-by: Ata Fatahi <immrata@gmail.com>
Chang Su | fa42e41962 | ci: Revert openai_server related tests in AMD suites (#7449) | 2025-06-23 15:28:22 -07:00
Chang Su | b7a2df0a44 | refactor(test): reorganize OpenAI test file structure (#7408) | 2025-06-21 19:37:48 -07:00
Chang Su | 72676cd6c0 | feat(oai refactor): Replace openai_api with entrypoints/openai (#7351) | 2025-06-21 13:21:06 -07:00
    Co-authored-by: Jin Pan <jpan236@wisc.edu>
Ata Fatahi | 1ab6be1b26 | Purge VerlEngine (#7326) | 2025-06-19 23:47:21 -07:00
    Signed-off-by: Ata Fatahi <immrata@gmail.com>
Stefan He | 3774f07825 | Multi-Stage Awake: Support Resume and Pause KV Cache and Weights separately (#7099) | 2025-06-19 00:56:37 -07:00
Jinn | ffd1a26e09 | Add more refactored openai test & in CI (#7284) | 2025-06-18 13:52:55 -07:00
YanbingJiang | 094c116f7d | Update python API of activation, topk, norm and rope and remove vllm dependency (#6614) | 2025-06-17 22:11:50 -07:00
    Co-authored-by: Wu, Chunyuan <chunyuan.wu@intel.com>
    Co-authored-by: jianan-gu <jianan.gu@intel.com>
    Co-authored-by: sdp <sdp@gnr799219.jf.intel.com>
woodx | e30ef368ab | Feat/support rerank (#6058) | 2025-06-16 10:50:01 -07:00
Lianmin Zheng | ba589b88fc | Improve test cases for eagle infer (#7173) | 2025-06-13 22:25:13 -07:00
Lianmin Zheng | 0fc3d992bb | Split the eagle test into two files (#7170) | 2025-06-13 20:14:26 -07:00
Baizhou Zhang | 2a5f0100e0 | Fix GGuf and add back test_gguf.py (#7067) | 2025-06-10 21:07:20 -07:00
kyle-pena-kuzco | b56de8f943 | Open AI API hidden states (#6716) | 2025-06-10 14:37:29 -07:00
Yineng Zhang | 2f58445531 | Revert "Add sanity checks when a test file is not added to CI (#6947)" (#7063) | 2025-06-10 12:43:25 -07:00