Commit Graph

889 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Lifu Huang | 3cf1473a09 | Use monotonic clock for interval measurement (#6211) (Signed-off-by: Lifu Huang <lifu.hlf@gmail.com>) | 2025-05-17 16:49:18 -07:00 |
| Kiv Chen | 64825b8395 | model(vlm): mistral 3.1 (#5099) (Co-authored-by: KivenChen <sleigh-queue-0y@icloud.com>) | 2025-05-16 18:36:18 -07:00 |
| Lianmin Zheng | c2b7ddca49 | [Minor] cleanup unused imports (#6358) | 2025-05-16 14:52:38 -07:00 |
| Yury Sulsky | f19a9204cd | Support precomputed multimodal features for Qwen-VL and Gemma3 models. (#6136) (Co-authored-by: Yury Sulsky <ysulsky@tesla.com>) | 2025-05-16 12:26:15 -07:00 |
| Lianmin Zheng | e07a6977e7 | Minor improvements of TokenizerManager / health check (#6327) | 2025-05-15 15:29:25 -07:00 |
| Lifu Huang | 3e350a931e | [Bug] Fix accidental logger override caused by internVL. (#6282) | 2025-05-13 23:29:25 -07:00 |
| Ying Sheng | fb71725c98 | Fix a bug in schedule_policy (#6276) | 2025-05-13 18:04:00 -07:00 |
| Kiv Chen | 5380cd7ea3 | model(vlm): pixtral (#5084) | 2025-05-13 00:16:10 -07:00 |
| Cheng Wan | b2e95f62b4 | Fix two issues related to --moe-dense-tp-size=1 (#5657) (Co-authored-by: liusy58 <liusy58@linux.alibaba.com>; 颉沆 <xiehang.lsy@alibaba-inc.com>) | 2025-05-12 23:51:39 -07:00 |
| Lianmin Zheng | d18c6b3358 | Support incremental streaming of logprob/token_ids between scheduler and detokenizer (#6225) (Co-authored-by: SangBin Cho <rkooo567@gmail.com>) | 2025-05-12 14:33:38 -07:00 |
| Lianmin Zheng | e8e18dcdcc | Revert "fix some typos" (#6244) | 2025-05-12 12:53:26 -07:00 |
| Ying Sheng | bad7c26fdc | [PP] Fix init_memory_pool desync & add PP for mixtral (#6223) | 2025-05-12 12:38:09 -07:00 |
| applesaucethebun | d738ab52f8 | fix some typos (#6209) (Co-authored-by: Brayden Zhong <b8zhong@uwaterloo.ca>) | 2025-05-13 01:42:38 +08:00 |
| Lianmin Zheng | fba8eccd7e | Log if cuda graph is used & extend cuda graph capture to cuda-graph-max-bs (#6201) (Co-authored-by: SangBin Cho <rkooo567@gmail.com>) | 2025-05-12 00:17:33 -07:00 |
| Cheng Wan | 25c83fff6a | Performing Vocabulary Parallelism for LM Head across Attention TP Groups (#5558) (Co-authored-by: liusy58 <liusy58@linux.alibaba.com>) | 2025-05-11 23:36:29 -07:00 |
| fzyzcjy | 3f2702ae51 | Fix start_profile does not support with_stack and record_shapes (#6043) | 2025-05-11 23:11:32 -07:00 |
| Lianmin Zheng | 01bdbf7f80 | Improve structured outputs: fix race condition, server crash, metrics and style (#6188) | 2025-05-11 08:36:16 -07:00 |
| Yusong Gao | 41273fd71f | fix: handle None multimodal_inputs during merging and filtering batches in disaggregation decode mode (#6169) | 2025-05-11 00:28:21 -07:00 |
| applesaucethebun | 2ce8793519 | Add typo checker in pre-commit (#6179) (Co-authored-by: Brayden Zhong <b8zhong@uwaterloo.ca>) | 2025-05-11 12:55:00 +08:00 |
| Lianmin Zheng | de167cf5fa | Fix request abortion (#6184) | 2025-05-10 21:54:46 -07:00 |
| Lianmin Zheng | 4319978c73 | Fix data parallel perf regression (#6183) | 2025-05-10 19:18:35 -07:00 |
| huangtingwei | d2cb3024f2 | fix bug that gpu0 occupies more memory when hicache is turned on (#5778) (Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>) | 2025-05-09 15:36:08 -07:00 |
| yhyang201 | cec98f1034 | [Fix] Incorrect Memory Allocation on CUDA:0 by Non-Zero CUDA Processes in TP/DP (#5745) | 2025-05-08 17:52:26 -07:00 |
| fzyzcjy | cef91b1ed7 | [PD] Add control to slow down a server (#5572) | 2025-05-08 01:03:08 -07:00 |
| fzyzcjy | b6cf3532b5 | Tiny refactor ModelConfig.from_server_args (#5219) | 2025-05-08 01:02:43 -07:00 |
| Liangsheng Yin | a3e4e9bf9e | Better PD initialization (#5751) | 2025-05-07 01:12:57 +08:00 |
| Zhiqiang Xie | b26cb1c55a | Fix problem of large page size with chunked prefill (#6046) | 2025-05-06 15:19:47 +08:00 |
| Zhiqiang Xie | f8e460930a | Fix prefill OOM error in the case of large page size (#5081) | 2025-05-05 16:02:55 -07:00 |
| xm:D | 3409aaab32 | Support InternVL3 (#5350) (Co-authored-by: Mick <mickjagger19@icloud.com>; Chayenne <zhaochen20@outlook.com>) | 2025-05-01 22:38:59 -07:00 |
| Ying Sheng | 11383cec3c | [PP] Add pipeline parallelism (#5724) | 2025-04-30 18:18:07 -07:00 |
| liwenju0 | 8fefdd32c7 | [Feature] add support kimi vl model (#5383) (Co-authored-by: wenju.li <wenju.li@deepctr.cn>) | 2025-04-29 21:31:19 -07:00 |
| Chang Su | 28b26dbf48 | [Bugfix]: fix missing queue_time_start for requests from grammar_queue (#5696) | 2025-04-29 17:31:44 -07:00 |
| Ke Bao | dd408ee481 | Auto set draft model path for MTP (#5793) | 2025-04-29 16:25:40 -07:00 |
| Lianmin Zheng | 3029889cb4 | Turn on overlap scheduler for multimodal models (#5771) | 2025-04-27 23:45:09 -07:00 |
| Trevor Morris | 84810da4ae | Add Cutlass MLA attention backend (#5390) | 2025-04-27 20:58:53 -07:00 |
| Liangsheng Yin | 40d9b8acce | Improve overlap scheduling (#5788) | 2025-04-28 11:19:16 +08:00 |
| Yi Zhang | 1f963d7f64 | Bugfix for minicpmo vision test (#5760) | 2025-04-26 23:18:02 +08:00 |
| Mick | feda9b11b3 | fix: fix one more bug from merging mm_inputs (#5718) (Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>; XinyuanTong <115166877+JustinTong0323@users.noreply.github.com>) | 2025-04-25 17:28:33 -07:00 |
| IAN | 11e27d0926 | [PD]: Support Muti Prefill in one node (#5704) (Co-authored-by: shuaills <shishuaiuoe@gmail.com>) | 2025-04-26 00:30:47 +08:00 |
| Liangsheng Yin | c55550cbf0 | [PD] Better logs (#5715) | 2025-04-25 17:25:45 +08:00 |
| Mick | c998d04b46 | vlm: enable radix cache for qwen-vl models (#5349) (Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>) | 2025-04-23 20:35:05 -07:00 |
| Cheng Wan | 711efe7814 | Integrating PD disaggregation with DP attention and DeepEP (#5435) (Co-authored-by: Byron Hsu <byronhsu1230@gmail.com>) | 2025-04-23 01:46:01 -07:00 |
| Byron Hsu | bf98d2e377 | [PD] Support prefill overlap + Ensure no race condition (#5609) | 2025-04-21 12:12:56 -07:00 |
| Byron Hsu | e65b9f21e3 | [PD] Support decode overlap schedule (#5608) | 2025-04-21 12:06:16 -07:00 |
| Zhiqiang Xie | 70645f4d7d | upstream hicache fixes (#5570) | 2025-04-20 23:08:30 -07:00 |
| Lianmin Zheng | eef9433b46 | Fix flush cache (#5590) | 2025-04-20 22:56:40 -07:00 |
| fzyzcjy | 1195182040 | Tiny add Engine.flush_cache API (#5241) | 2025-04-20 18:15:03 -07:00 |
| Sundara Raman Ramachandran | f08154193c | Perform Batch Tokenization. (#5141) | 2025-04-20 18:10:37 -07:00 |
| fzyzcjy | 5fc4b6004e | Add sanity check for max_running_requests (#5016) | 2025-04-20 17:56:49 -07:00 |
| fzyzcjy | 475e2e378a | [PD] Fix server crash when using batch requests (#5531) | 2025-04-20 16:02:23 -07:00 |