| Author | Commit | Message | Date |
|---|---|---|---|
| aoshen524 | 588865f0e0 | [Feature] Support Tensor Parallelism and Weight Slicing for Lora (#4274) (Co-authored-by: ShenAo1111 <1377693092@qq.com>, Baizhou Zhang <sobereddiezhang@gmail.com>) | 2025-03-18 20:33:07 -07:00 |
| Cheng Wan | 3196999f63 | Reduce computation and communication in DP attention (#4521) | 2025-03-18 13:41:36 -07:00 |
| James Liu | 9e0186f352 | [Feature] Support EAGLE 3 (#4247) | 2025-03-18 07:35:23 -07:00 |
| Wei Wu | 8baf9a0c18 | [Fix] Type annotation correction for UpdateWeightsFromTensorReqInput (#4532) (Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>) | 2025-03-18 00:52:47 -07:00 |
| Yineng Zhang | c787298547 | use sgl custom all reduce (#4441) | 2025-03-18 00:46:41 -07:00 |
| Yineng Zhang | c16b33ccac | cleanup deps 3/n (#4541) | 2025-03-18 00:11:36 -07:00 |
| Xiaoyu Zhang | 804d250a0d | remove useless backend forward in rotary_embedding (#4500) | 2025-03-17 23:54:00 -07:00 |
| Xiaoyu Zhang | dd865befde | [Hotfix] solve fp8 w8a8 ci test fail (#4531) | 2025-03-17 23:17:04 -07:00 |
| Mick | d373a48c98 | fix: second_per_grid_ts should be used to get mrope position (#3682) | 2025-03-17 18:12:38 -07:00 |
| Zhiqiang Xie | a98290aea3 | Unit test for Hierarchical Caching (#4486) | 2025-03-17 17:45:00 -07:00 |
| Xiaoyu Zhang | 9b81f9bd34 | sglang quant module remove vllm dependency (#4507) | 2025-03-17 15:51:59 -07:00 |
| Yineng Zhang | f81a27f65e | upgrade sgl-kernel 0.0.5.post3 (#4522) | 2025-03-17 14:49:56 -07:00 |
| Ke Bao | 3ded4b215d | Revert "feat: update grouped_topk to support softmax and sigmoid" (#4505) | 2025-03-17 11:30:26 -07:00 |
| Lianmin Zheng | 82dec1f70b | Remove redundant type conversion (#4513) | 2025-03-17 05:57:35 -07:00 |
| yiakwy-xpu-ml-framework-team | 5f9b2c62ff | [ROCm] fix dtype (#4510) | 2025-03-17 05:20:50 -07:00 |
| Lianmin Zheng | 5493c3343e | Fix data parallel + tensor parallel (#4499) | 2025-03-17 05:13:16 -07:00 |
| Wei Wu | 91ba98fe50 | [Fix] Resolve GPU Memory Leak in update_weights_from_tensor (#4446) | 2025-03-17 08:54:30 +00:00 |
| Yinghai Lu | c614dbdf95 | Nicer standalone engine inferface (#4480) | 2025-03-17 01:42:04 -07:00 |
| Xihuai Wang | 927ca935a7 | Constraint Decoding: Tool call with text (#4067) | 2025-03-17 01:06:46 -07:00 |
| Stefan He | ef3c2dd08e | Support Online Quantization for W8A8 (#4485) | 2025-03-17 00:28:56 -07:00 |
| Wenbo Yang | 75b656488a | Support serving DeepSeek-R1-Channel-INT8 with 32 L40S. (#4418) | 2025-03-17 00:03:43 -07:00 |
| Mick | 0f52fb55ec | config: Update fused moe config (#4493) | 2025-03-16 23:51:58 -07:00 |
| 萝卜菜 | d6d21640d3 | [Feature] Support Deepseek-VL2 (#2798) (Co-authored-by: Edenzzzz <wtan45@wisc.edu>, Chayenne <zhaochen20@outlook.com>, Yi Zhang <1109276519@qq.com>) | 2025-03-16 23:07:59 -07:00 |
| JieXin Liang | 0212d2e288 | [Fix] use torch.inference_mode() instead of torch.no_grad() (#4372) | 2025-03-16 22:54:16 -07:00 |
| Byron Hsu | 8cc300f536 | Fix router test (#4483) | 2025-03-16 22:49:47 -07:00 |
| mlmz | 452db50808 | Constraint Decoding: Set xgrammar as the default grammar backend (#4386) | 2025-03-16 18:53:43 -07:00 |
| Rin Intachuen | d1112d8548 | Add endpoint for file support, purely to speed up processing of input_embeds. (#2797) | 2025-03-16 18:30:37 -07:00 |
| woodx | 48efec7b05 | Feature: support code completion (#3612) | 2025-03-16 18:26:19 -07:00 |
| Zhiqiang Xie | f5bbf6037d | Fix: Complete int32 to int64 conversion (#4465) | 2025-03-16 18:14:27 -07:00 |
| huiwq1990 | 5cbd709ea1 | Fix: modelscope env comment (#4474) (Signed-off-by: huiwq1990 <huiwq1990@163.com>) | 2025-03-16 18:11:33 -07:00 |
| Yinghai Lu | 2e4a1e2d05 | Initialize image processor for skip-tokenizer-init codepath (#4479) (Co-authored-by: Alex Kirillov <alex@iterationlab.org>) | 2025-03-16 18:10:09 -07:00 |
| Mick | 9d02bb3e2a | Urgent model support: support gemma-3-it (#4424) | 2025-03-16 17:37:32 -07:00 |
| Yinghai Lu | 799fb5f455 | 400 on empty input_ids (#4481) | 2025-03-16 14:01:23 -07:00 |
| lukec | a53fe428f9 | Support FlashMLA backend (#4472) (Co-authored-by: yinfan98 <1106310035@qq.com>) | 2025-03-16 09:07:06 -07:00 |
| Ying Sheng | 1b859295f4 | [Eagle] Remove the greedy branch and some redundant code (#4363) (Co-authored-by: Sehoon Kim <sehoon@x.ai>) | 2025-03-16 02:48:55 -07:00 |
| JieXin Liang | 1a3fa75f2f | [Fix] use torch.cat instead of torch.concat to prevent entering the Autograd backends. (#4466) | 2025-03-16 00:02:47 -07:00 |
| Yineng Zhang | 65b7c9b78f | cleanup deps 2/n (#4464) | 2025-03-15 23:06:17 -07:00 |
| Lianmin Zheng | 2c4f5ccac1 | Fix minor style (#4460) | 2025-03-15 21:51:12 -07:00 |
| Wang Ran (汪然) | 158430473e | Fix typos (#4368) | 2025-03-15 21:27:58 -07:00 |
| Mick | 8ec2ce0726 | perf: update fused moe config (#4459) | 2025-03-15 21:23:57 -07:00 |
| Michael Feil | 1fd0cf8a7b | Update comment in qwen2.py (#4447) | 2025-03-15 21:14:29 -07:00 |
| vikram singh shekhawat | bf63ee54ed | Auto-detect device if not specified in server arguments. (#4423) | 2025-03-15 21:13:51 -07:00 |
| Wang Ran (汪然) | 2892b9bb97 | bugfix: Update sampling_params.py (#4413) | 2025-03-15 16:39:19 -07:00 |
| Xu Song | 470b474075 | Update bench_serving.py (#4454) | 2025-03-15 16:33:58 -07:00 |
| Chen Shengzhi | 86d9baedc2 | [Fix] Fix errors when using the device except cuda. (#4455) | 2025-03-15 16:33:00 -07:00 |
| Mick | 035ac2ab74 | ci: update transformers==4.48.3 (#4451) | 2025-03-15 13:27:26 -07:00 |
| Yineng Zhang | ad1ae7f7cd | use topk_softmax with sgl-kernel (#4439) | 2025-03-14 15:59:06 -07:00 |
| Lianmin Zheng | e73167ade3 | Fix maximum recursion depth triggered on exception exit (#4438) | 2025-03-14 15:12:26 -07:00 |
| Baoyuan Qi | 642ab418f3 | [bug] fix duplicate variable MAX_PIXELS in qwen_vl.py (#4419) | 2025-03-14 01:28:25 -07:00 |
| wangyu | 1ce4878d31 | feat(remote_model): support variable remote backend for model loader (#3964) (Signed-off-by: wangyu <wangyu.steph@bytedance.com>) | 2025-03-14 00:40:44 -07:00 |