Commit Graph

1630 Commits

Author SHA1 Message Date
JieXin Liang
0212d2e288 [Fix] use torch.inference_mode() instead of torch.no_grad() (#4372) 2025-03-16 22:54:16 -07:00
Byron Hsu
8cc300f536 Fix router test (#4483) 2025-03-16 22:49:47 -07:00
mlmz
452db50808 Constraint Decoding: Set xgrammar as the default grammar backend (#4386) 2025-03-16 18:53:43 -07:00
Rin Intachuen
d1112d8548 Add endpoint for file support, purely to speed up processing of input_embeds. (#2797) 2025-03-16 18:30:37 -07:00
woodx
48efec7b05 Feature: support code completion (#3612) 2025-03-16 18:26:19 -07:00
Zhiqiang Xie
f5bbf6037d Fix: Complete int32 to int64 conversion (#4465) 2025-03-16 18:14:27 -07:00
huiwq1990
5cbd709ea1 Fix: modelscope env comment (#4474) 2025-03-16 18:11:33 -07:00
Signed-off-by: huiwq1990 <huiwq1990@163.com>
Yinghai Lu
2e4a1e2d05 Initialize image processor for skip-tokenizer-init codepath (#4479) 2025-03-16 18:10:09 -07:00
Co-authored-by: Alex Kirillov <alex@iterationlab.org>
Mick
9d02bb3e2a Urgent model support: support gemma-3-it (#4424) 2025-03-16 17:37:32 -07:00
Yinghai Lu
799fb5f455 400 on empty input_ids (#4481) 2025-03-16 14:01:23 -07:00
lukec
a53fe428f9 Support FlashMLA backend (#4472) 2025-03-16 09:07:06 -07:00
Co-authored-by: yinfan98 <1106310035@qq.com>
Ying Sheng
1b859295f4 [Eagle] Remove the greedy branch and some redundant code (#4363) 2025-03-16 02:48:55 -07:00
Co-authored-by: Sehoon Kim <sehoon@x.ai>
JieXin Liang
1a3fa75f2f [Fix] use torch.cat instead of torch.concat to prevent entering the Autograd backends. (#4466) 2025-03-16 00:02:47 -07:00
Yineng Zhang
65b7c9b78f cleanup deps 2/n (#4464) 2025-03-15 23:06:17 -07:00
Lianmin Zheng
2c4f5ccac1 Fix minor style (#4460) 2025-03-15 21:51:12 -07:00
Wang Ran (汪然)
158430473e Fix typos (#4368) 2025-03-15 21:27:58 -07:00
Mick
8ec2ce0726 perf: update fused moe config (#4459) 2025-03-15 21:23:57 -07:00
Michael Feil
1fd0cf8a7b Update comment in qwen2.py (#4447) 2025-03-15 21:14:29 -07:00
vikram singh shekhawat
bf63ee54ed Auto-detect device if not specified in server arguments. (#4423) 2025-03-15 21:13:51 -07:00
Wang Ran (汪然)
2892b9bb97 bugfix: Update sampling_params.py (#4413) 2025-03-15 16:39:19 -07:00
Xu Song
470b474075 Update bench_serving.py (#4454) 2025-03-15 16:33:58 -07:00
Chen Shengzhi
86d9baedc2 [Fix] Fix errors when using the device except cuda. (#4455) 2025-03-15 16:33:00 -07:00
Mick
035ac2ab74 ci: update transformers==4.48.3 (#4451) 2025-03-15 13:27:26 -07:00
Yineng Zhang
ad1ae7f7cd use topk_softmax with sgl-kernel (#4439) 2025-03-14 15:59:06 -07:00
Lianmin Zheng
e73167ade3 Fix maximum recursion depth triggered on exception exit (#4438) 2025-03-14 15:12:26 -07:00
Baoyuan Qi
642ab418f3 [bug] fix duplicate variable MAX_PIXELS in qwen_vl.py (#4419) 2025-03-14 01:28:25 -07:00
wangyu
1ce4878d31 feat(remote_model): support variable remote backend for model loader (#3964) 2025-03-14 00:40:44 -07:00
Signed-off-by: wangyu <wangyu.steph@bytedance.com>
Yineng Zhang
977d7cd26a cleanup deps 1/n (#4400) 2025-03-14 00:00:33 -07:00
Co-authored-by: sleepcoo <sleepcoo@gmail.com>
Lu Changqi
0e0ec70200 Hierarchical Caching supports MLA (#4009) 2025-03-13 20:42:14 -07:00
Signed-off-by: Changqi Lu <luchangqi.123@bytedance.com>
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
Yineng Zhang
ba80c102f9 bump v0.4.4.post1 (#4402) 2025-03-13 17:53:46 -07:00
Zhiqiang Xie
fbdb50501f Hot fix for hicache with new page aligned radixtree (#4397) 2025-03-13 15:50:49 -07:00
Qiaolin Yu
85d2365d33 Fix the output of hidden states after HTTP requests (#4269) 2025-03-13 14:54:06 -07:00
Chang Su
5fe79605a8 Fix Llama3.3 tool call support (#4320) 2025-03-13 14:01:41 -07:00
Lianmin Zheng
c6d7f8d370 Add some fused elementwise kernels for grok-1 (#4398) 2025-03-13 13:39:10 -07:00
Co-authored-by: dhou-xai <dhou@x.ai>
Co-authored-by: Hanming Lu <69857889+hanming-lu@users.noreply.github.com>
Lianmin Zheng
a5a892ffd3 Fix auto merge & add back get_flat_data_by_layer (#4393) 2025-03-13 08:46:25 -07:00
Lianmin Zheng
8e66fbecee Improve DP attention (#4390) 2025-03-13 08:23:56 -07:00
Co-authored-by: dhou-xai <dhou@x.ai>
Co-authored-by: SangBin Cho <rkooo567@gmail.com>
Lianmin Zheng
4fea040ca1 Fix a regression introduced by overlapping KV cache writing (#4375) 2025-03-13 03:49:05 -07:00
Yineng Zhang
6aaeb84872 chore: bump v0.4.4 (#4041) 2025-03-13 02:49:58 -07:00
Yineng Zhang
3623b6a7f5 upgrade sgl-kernel 0.0.5 (#4381) 2025-03-13 02:37:56 -07:00
Lianmin Zheng
45de89719c Revert "[XPU][CPU] Enable the native path of DeepSeek" (#4367) 2025-03-12 23:45:52 -07:00
Meng, Hengyu
71046fcd71 [XPU][CPU] Enable the native path of DeepSeek (#4086) 2025-03-12 22:26:29 -07:00
Co-authored-by: Zhang, Liangang <liangang.zhang@intel.com>
Lianmin Zheng
c76040e31b Support page size > 1 (#4356) 2025-03-12 22:22:39 -07:00
Cheng Wan
2f6bacee03 [moe] fix: correct the cache size in the last chunk (#3679) 2025-03-12 22:22:13 -07:00
Co-authored-by: Abatom <abzhonghua@gmail.com>
Wen Sun
4014804157 Ensure Usage Data in Streaming Responses Aligns with vLLM’s Implementation (#3814) 2025-03-12 22:12:55 -07:00
David Carreto Fidalgo
f7f88b706c HotFix: json serialization error when using OAI v1/batches endpoint with logprobs (#3896) 2025-03-12 22:04:29 -07:00
yiakwy-xpu-ml-framework-team
18c27131f5 [tools] add fp8 max/min constant in utils (#3959) 2025-03-12 21:44:55 -07:00
YR Chen
ccdd10c84b Move aiohttp into public dependencies (#3980) 2025-03-12 21:42:57 -07:00
vikram singh shekhawat
76f6c0ebf9 Add device detection and count functions to utils. (#3962) 2025-03-12 21:41:50 -07:00
Conghui Tan
6412c5e493 Avoid duplicated request ids in batch APIs (#4026) 2025-03-12 21:38:17 -07:00
Co-authored-by: conghuitan <conghuitan@tencent.com>
AniZpZ
85ef7f64e4 [FIX] fix incorrect output when enable both deepgemm and torch compile (#4359) 2025-03-12 21:34:09 -07:00
Co-authored-by: xuyongfei.xyf <xuyongfei.xyf@antgroup.com>