Commit Graph

3307 Commits

Author SHA1 Message Date
Kevin Tuan
b21fdd5373 feat: (chat-template matching) enhance multimodal model detection with config.json (#9597) 2025-08-26 17:55:40 -07:00
hzh0425
c04c17edfa refactor(hicache): Introduce generic HiCacheStorageConfig for improved configuration management (#9555) 2025-08-26 17:55:20 -07:00
Co-authored-by: Teng Ma <805522925@qq.com>
Mick
16a6d21b95 chore: enhance bench_serving for vlms with a new dataset of configurable image count and resolution (#9583) 2025-08-26 17:42:54 -07:00
Co-authored-by: yhyang201 <yhyang201@gmail.com>
Stefan He
a530b3ffdc [RL] fix register the same ops multiple times (#9564) 2025-08-26 16:24:44 -07:00
Ke Bao
603b3446dc Fix FA3 swa spec verify topk>1 (#9658) 2025-08-26 15:03:14 -07:00
cicirori
b6c14ec0b4 add response_format support for completion API (#9665) 2025-08-26 15:01:29 -07:00
Zhiqiang Xie
43de1d7304 HiCache Storage fix host memory leak (#9648) 2025-08-26 10:49:40 -07:00
hzh0425
79ce3688bb BugFix(hicache): Fix host indices out of bound error (#9637) 2025-08-26 10:42:23 -07:00
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
Xiaotong Jiang
0936c766ed Fix kimi k2 function calling format (#9606) 2025-08-26 00:50:59 -07:00
GavinZhu-GMI
0ef583b7de fix: allow user to specify function as role (#9635) 2025-08-26 00:47:20 -07:00
Liu Shaohui
f7881a27f9 Add reasoning_effort param in TiktokenTokenizer.apply_chat_template (#9630) 2025-08-26 00:44:20 -07:00
Co-authored-by: Shaohui Liu <liushaohui3@xiaomi.com>
Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
ZhengdQin
f92b729d52 [new feat] ascend backend support fia fusion kernel (#8328) 2025-08-25 23:13:08 -07:00
Co-authored-by: Even Zhou <even.y.zhou@outlook.com>
Liangsheng Yin
0ff7241995 Improve bench_one_batch_server script (#9608) 2025-08-26 10:38:37 +08:00
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
ykwd
80dc76e11a [Fix] HiCache Bugfix & Mooncake Error Handling Enhance (#8901) 2025-08-25 19:05:10 -07:00
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
Jonas
a0a77d937b Fix Harmony reasoning parser and auto-separation for gpt-oss models (#9190) 2025-08-25 15:26:26 -07:00
Co-authored-by: Chang Su <chang.s.su@oracle.com>
Co-authored-by: Chayenne <zhaochen20@outlook.com>
Co-authored-by: zhaochenyang20 <zhaochenyang20@gmail.com>
Co-authored-by: minleminzui <2969413251@qq.com>
Co-authored-by: maocheng23 <maocheng@berkeley.edu>
Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Binyao Jiang
24a8cee66d Fix GLM45v launch server cuda torch compile bug (#9554) 2025-08-25 13:46:28 -07:00
Binyao Jiang
3affa9dcc3 Fix GLM45 tool call multi-turn bug (#9500) 2025-08-25 13:46:13 -07:00
Sundara Raman Ramachandran
ea0696b924 [Performance] Batch Send from Tokenizer Manager. (#9436) 2025-08-26 01:43:54 +08:00
Yineng Zhang
e3e97a120b chore: bump v0.5.1.post2 (#9592) 2025-08-25 03:45:09 -07:00
Yineng Zhang
051068673c chore: update config (#9591) 2025-08-25 03:41:09 -07:00
fzyzcjy
9dcdf5da03 Tiny fix wrong comments (#9589) 2025-08-25 03:08:10 -07:00
Yineng Zhang
ebd9dbe71b fix: revert #8593 (#9581) 2025-08-25 01:29:06 -07:00
Yineng Zhang
938e986e15 chore: upgrade flashinfer 0.2.14.post1 (#9578) 2025-08-25 00:12:17 -07:00
Yuhao Zhou
17d5eda887 bugfix for undefined logging functions in HarmonyBrowserTool & HarmonyPythonTool (#9229) 2025-08-25 00:10:35 -07:00
fzyzcjy
71a7f1d86f Offload tensors by sharding on GPU (#9536) 2025-08-25 00:02:49 -07:00
fzyzcjy
433266c125 Reintroduce memory usage fix (#9535) 2025-08-25 00:02:31 -07:00
Qi Yuhang
fda4792620 Update CUTLASS 4.2 & Enable K-Major Scale Factor for SM90 FP8 Blockwise Group GEMM (#9559) 2025-08-24 23:24:43 -07:00
miter
a0b22f2f17 remove redundant rank0_log function. (#9560) 2025-08-24 23:17:55 -07:00
Co-authored-by: linhuang <linhuang@ruijie.com.cn>
SCDESPERTATE
b5c6529e17 [PD] Improve disaggregation metrics output: update the metrics to keep reflecting real stats (#7317) 2025-08-24 23:16:43 -07:00
Beichen Ma
dd6ec02965 Add target module validation for init adapters (#9429) 2025-08-24 20:24:50 -07:00
Yineng Zhang
e0ab167db0 chore: bump v0.5.1.post1 (#9558) 2025-08-24 01:14:17 -07:00
Yineng Zhang
c807cd7c75 chore: update configurer (#9557) 2025-08-24 01:05:00 -07:00
Vincent Zhong
327f7b7c87 fix(grok): remove duplicate replicate_lm_head configuration (#9549) 2025-08-23 19:49:24 -07:00
Lianmin Zheng
97a38ee85b Release 0.5.1 (#9533) 2025-08-23 07:09:26 -07:00
Lianmin Zheng
86d10d220f Update grok.py and tiktoken tokenizer (#9532) 2025-08-23 05:40:18 -07:00
hzh0425
83871aa12d feat(hicache): Supports 3fs-hicache compatibility with dp-attention (#9372) 2025-08-23 02:08:32 -07:00
fzyzcjy
b1b3f0b38f Partially unify triton per token group quant kernels (#9485) 2025-08-23 02:07:31 -07:00
fzyzcjy
34e5e11f0f Tiny make device_loading_context more static (#9478) 2025-08-23 02:07:15 -07:00
fzyzcjy
2600fc0d47 Overlapped weight offload (#8034) 2025-08-23 02:06:46 -07:00
hlu1
ccd3fb946e [fix] Fix mxfp4 triton MoE tp bug (#9473) 2025-08-23 01:48:40 -07:00
Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
Chang Su
c9dd70fbde tool-call(dsv3): Improve deepseek-v3 chat template and tool_choice = required (#9525) 2025-08-23 01:46:56 -07:00
Yineng Zhang
6b2b8bf0e1 fix: blackwell dsv3 fp8 issue temporary solution (#9530) 2025-08-23 01:33:21 -07:00
fzyzcjy
0374304a2c Add enable_flashinfer_mxfp4_bf16_moe for higher precision and slower moe backend (#9004) 2025-08-23 15:38:40 +08:00
Chanh Nguyen
127d4b0d5e Support GC Freezing to improve latency & throughput (#9241) 2025-08-23 13:43:09 +08:00
Co-authored-by: Chanh Nguyen <cnguyen@linkedin.com>
Co-authored-by: Liangsheng Yin <hnyls2002@gmail.com>
Moein Khazraee
7e880286b5 Add support for extensions of interface and pre-registrations to NIXL HiCache (#9211) 2025-08-22 20:06:13 -07:00
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
sogalin
c4500233ff Add Qwen3-30B-A3B-Thinking-2507 support on AMD GPUs. (#9456) 2025-08-22 13:14:42 -07:00
Hubert Lu
f445a1d9a3 [AMD] Fix Llama 4 FP8 accuracy issues on MI300X (#7699) 2025-08-22 13:13:45 -07:00
datdo-msft
110a65989b [MTP] Force greedy sampling on AMD (#9127) 2025-08-22 11:14:43 -07:00
Wenxuan Tan
0f587e80d3 Use Tensor Core Decode when gqa group size >= 4 (#8624) 2025-08-22 23:25:15 +08:00
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
huangtingwei
6078d5fcc0 [HiCacheStorage] backup optimization for MLA model (#8865) 2025-08-22 18:03:51 +08:00
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>