Commit Graph

1221 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| JJJJOHNSON | 694e41925e | [eagle2] fix end check when target model verify (#2723) | 2025-01-07 21:46:02 -08:00 |
| Lianmin Zheng | b22f3f6475 | Fix nightly accuracy tests (#2780) | 2025-01-07 21:02:35 -08:00 |
| Zhiqiang Xie | 51caee740f | Host memory pool for hierarchical caching (#2771) | 2025-01-07 21:38:37 +00:00 |
| Lianmin Zheng | bdc1acf6cd | Misc fix for min_p_sampling, --cuda-graph-bs (#2761) | 2025-01-07 02:52:53 -08:00 |
| HAI | 6d08ce2aa9 | Use Optional with None default (#2770) | 2025-01-07 01:35:08 -08:00 |
| Lianmin Zheng | 9dec582dab | Remove --modelopt-config in server_args (#2758) | 2025-01-06 16:35:45 -08:00 |
| Xingyao Wang | 1acbaf1b5a | Add generator-style run_batch function (#2513) (Co-authored-by: openhands <openhands@all-hands.dev>) | 2025-01-06 15:04:55 -08:00 |
| Zhiyu | 287427e2e6 | Enable Nvidia's ModelOpt fp8 quantized models (#2535) | 2025-01-06 14:54:52 -08:00 |
| Lianmin Zheng | b8574f6953 | Clean up eagle code (#2756) | 2025-01-06 14:54:18 -08:00 |
| Xu-Chen | 2329e1ddd0 | Support llamafy/Qwen-Qwen2.5-7B-Instruct-llamafied (#2748) (Co-authored-by: chenxu02 <chenxu02@zhihu.com>) | 2025-01-06 13:56:28 -08:00 |
| Yineng Zhang | 2f0d386496 | chore: bump v0.4.1.post4 (#2713) | 2025-01-06 01:29:54 +08:00 |
| Lianmin Zheng | 3a22a303d1 | Revert the GLOO_SOCKET_IFNAME change (#2731) | 2025-01-04 20:13:16 -08:00 |
| libra | bdb3929dbb | Refactor SchedulePolicy to improve code organization (#2571) | 2025-01-04 00:05:16 +08:00 |
| Lianmin Zheng | 0f9cc6d8d3 | Fix package loss for small models (#2717) (Co-authored-by: sdli1995 <mmlmonkey@163.com>) | 2025-01-02 18:25:26 -08:00 |
| yigex | c7ae474a49 | [Feature, Hardware] Enable DeepseekV3 on AMD GPUs (#2601) (Co-authored-by: root <root@banff-cyxtera-s83-5.amd.com>, HAI <hixiao@gmail.com>, Bruce Xue <yigex@xilinx.com>, Yineng Zhang <me@zhyncs.com>) | 2025-01-02 16:23:19 -08:00 |
| Lianmin Zheng | bdf946bf81 | Support loading pre-sharded moe weights (#2716) | 2025-01-02 15:07:37 -08:00 |
| yukavio | 8c8779cd05 | [Fix] fix retract error in eagle speculative decoding (#2711) (Co-authored-by: kavioyu <kavioyu@tencent.com>) | 2025-01-02 10:28:39 -08:00 |
| Mick | 1775b963db | [Fix] fix incorrectly overwriting the port specified in ServerArgs (#2714) | 2025-01-02 10:28:22 -08:00 |
| Yineng Zhang | ba5112ff69 | feat: support moe_align_block_size_triton (#2712) (Co-authored-by: WANDY666 <1060304770@qq.com>) | 2025-01-02 21:47:44 +08:00 |
| yukavio | 815dce0554 | Eagle speculative decoding part 4: Add EAGLE2 worker (#2150) (Co-authored-by: kavioyu <kavioyu@tencent.com>, Lianmin Zheng <lianminzheng@gmail.com>) | 2025-01-02 03:22:34 -08:00 |
| Lianmin Zheng | ad20b7957e | Eagle speculative decoding part 3: small modifications to the general scheduler (#2709) (Co-authored-by: kavioyu <kavioyu@tencent.com>) | 2025-01-02 02:09:08 -08:00 |
| fzyzcjy | 9183c23eca | Speed up update_weights_from_tensor (#2695) | 2025-01-02 02:05:19 -08:00 |
| kk | 148254d4db | Improve moe reduce sum kernel performance (#2705) (Co-authored-by: wunhuang <wunhuang@amd.com>) | 2025-01-02 01:11:06 -08:00 |
| kk | b6e0cfb5e1 | ROCm base image update (#2692) (Co-authored-by: wunhuang <wunhuang@amd.com>) | 2025-01-01 12:12:19 +08:00 |
| Xiaoyu Zhang | 286cad3ee3 | h200 tuning fused_moe_triton config for Mixtral 8x7B/8x22B and Qwen2 57BA14B (#2689) | 2024-12-31 23:17:36 +08:00 |
| Ying Sheng | dc7eb01f19 | [Fix] fix openai adapter (#2685) | 2024-12-31 10:48:19 +00:00 |
| Lianmin Zheng | b0524c3789 | Eagle speculative decoding part 2: Fix cuda graph + DP attention hanging (#2684) (Co-authored-by: yukavio <kavioyu@gmail.com>) | 2024-12-31 02:25:05 -08:00 |
| Yineng Zhang | d49b13c6f8 | feat: use CUDA 12.4 by default (for FA3) (#2682) | 2024-12-31 15:52:09 +08:00 |
| Lianmin Zheng | f44d143949 | Support target model verification in the attention backend (#2678) (Co-authored-by: yukavio <kavioyu@gmail.com>) | 2024-12-30 22:58:55 -08:00 |
| Lianmin Zheng | 339c69a243 | Improve the computation for time_per_output_token Prometheus metrics (#2674) | 2024-12-30 21:40:14 -08:00 |
| Lianmin Zheng | 21ec66e59e | Minor follow-up fixes for the logprob refactor (#2670) | 2024-12-30 05:42:08 -08:00 |
| HAI | c5210dfa38 | AMD DeepSeek_V3 FP8 Numerical fix (#2667) | 2024-12-30 21:31:12 +08:00 |
| mobicham | a29dd9501d | Add GemLite caching after each capture (#2669) | 2024-12-30 05:27:29 -08:00 |
| Lianmin Zheng | 9c6ba2484f | Refactor logprob computation to return the real logprob used in sampling (#2664) | 2024-12-30 04:51:38 -08:00 |
| Lianmin Zheng | 8c3b420eec | [Docs] clean up structured outputs docs (#2654) | 2024-12-29 23:57:16 -08:00 |
| HAI | e6f523b5f2 | fix typo in python/sglang/srt/layers/quantization/fp8.py (#2655) | 2024-12-29 23:45:02 -08:00 |
| Lianmin Zheng | 03d5fbfd44 | Release 0.4.1.post3 - upload the config.json to PyPI (#2647) | 2024-12-29 14:25:53 -08:00 |
| Shi Shuai | fad29f7f52 | CI: Fix unittest for engine input token ids and output token ids (#2646) | 2024-12-29 13:28:59 -08:00 |
| Shi Shuai | 35bdb48557 | [Feature] Get Token IDs with Engine.generate() (#2636) (Co-authored-by: Chayenne <zhaochen20@outlook.com>) | 2024-12-29 12:28:27 -08:00 |
| Yineng Zhang | 3ccf566b0d | chore: bump v0.4.1.post2 (#2643) | 2024-12-30 00:11:46 +08:00 |
| HandH1998 | afa0341e57 | Update Triton configs for block fp8 kernels (#2641) | 2024-12-29 22:53:47 +08:00 |
| HAI | 30828e7192 | AMD: set weights and scaling numbers properly for block FP8 (#2637) | 2024-12-29 03:23:39 -08:00 |
| Ying Sheng | e0e09fceeb | [Session] Update session control interface (#2635) | 2024-12-29 02:10:27 -08:00 |
| Lianmin Zheng | 9c05c6898e | Add llama_eagle.py (#2640) (Co-authored-by: kavioyu <kavioyu@tencent.com>) | 2024-12-29 01:45:35 -08:00 |
| Lianmin Zheng | 3815b23ccb | Clean up wrapper in flashinfer backend (#2638) | 2024-12-29 00:45:57 -08:00 |
| Tanjiro | 8ee9a8501a | [Feature] Function Calling (#2544) (Co-authored-by: Haoyu Wang <120358163+HaoyuWang4188@users.noreply.github.com>) | 2024-12-28 21:58:52 -08:00 |
| fzyzcjy | fd28640dc5 | Add update_weights_from_tensor (#2631) | 2024-12-28 13:30:27 -08:00 |
| Yineng Zhang | 7863e4368a | add configs for block fp8 related kernels (#2628) (Co-authored-by: HandH1998 <1335248067@qq.com>) | 2024-12-28 23:12:04 +08:00 |
| Lianmin Zheng | 855d0ba381 | [CI] Fix nightly test and raise better error message (#2626) (Co-authored-by: Sangbin <rkooo567@gmail.com>) | 2024-12-27 22:16:39 -08:00 |
| Xiaoyu Zhang | 9254a33ad4 | avoid fused_moe_triton padding circular import (#2624) | 2024-12-28 14:01:35 +08:00 |