Author | Commit | Message | Date
------ | ------ | ------- | ----
Lianmin Zheng | 2ce32db6fb | Let reward model take text inputs instead of message lists (#1907) (Co-authored-by: Kyle Corbitt <kyle@corbt.com>) | 2024-11-03 13:27:12 -08:00
Yineng Zhang | 793b79dbe9 | feat: support truss endpoint for benchmark serving (#1906) | 2024-11-03 12:56:10 -08:00
Iñaki Arango | 1363b51983 | Escape backwards slash (#1902) | 2024-11-03 12:27:11 -08:00
Lianmin Zheng | 0abbf289a8 | Unify the model type checking (#1905) | 2024-11-03 12:25:39 -08:00
Lianmin Zheng | c17c578108 | Simplify tokenizer manager (#1904) | 2024-11-03 08:38:26 -08:00
Lianmin Zheng | 838dcda162 | Simplify tokenizer manager (#1899) | 2024-11-03 03:52:38 -08:00
Lianmin Zheng | efbc116a0f | Do not use longest prefix matching when #queue-req is large (#1896) | 2024-11-03 01:45:20 -07:00
Chayenne | 6aed0445ed | turn off log (#1895) | 2024-11-03 00:19:12 -07:00
Ran Chen | 146f613405 | Fix incorrect context length for llama3.2-11b (#1873) | 2024-11-02 00:04:50 -07:00
Ke Bao | 16eb33ffe2 | Update vocab embedding deps and add TP switch (#1856) | 2024-10-31 20:13:07 -07:00
Liangsheng Yin | b9fd178f1b | Fix retraction + overlap (#1860) (Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>) | 2024-10-31 18:27:42 -07:00
HAI | d8e9d61f86 | [Build, ROCm] Dockerfile.rocm for Instinct GPUs, with package updates (#1861) | 2024-10-31 16:38:16 -07:00
Lianmin Zheng | a2e0424abf | Fix memory leak for chunked prefill 2 (#1858) (Co-authored-by: Liangsheng Yin <hnyls2002@gmail.com>) | 2024-10-31 14:51:51 -07:00
geeker-smallwhite | 8ce202a493 | delete unused character (#1855) | 2024-10-31 19:33:55 +08:00
Byron Hsu | 438526a814 | Refactor tokenizer manager (#1846) | 2024-10-30 21:32:18 -07:00
Lianmin Zheng | f7102fbd2b | Fix mixed chunked prefill (#1850) | 2024-10-30 21:20:41 -07:00
Byron Hsu | a7a0a6886b | Make decode log interval configurable (#1847) | 2024-10-30 19:59:20 -07:00
HAI | 2d4ce1b792 | [Performance, Triton Kernel Args] _decode_grouped_softmax_reducev_fwd… (#1845) | 2024-10-30 17:33:36 -07:00
HAI | 5f65e2b830 | [Performance, Hardware] MoE weights padding to AMD MI300x GPUs (#1836) | 2024-10-30 12:17:32 -07:00
Ying Sheng | 4e2af03cfa | [Production] Drain requests before exit when receive SIGTERM (#1838) | 2024-10-30 10:22:56 -07:00
Lianmin Zheng | b548801ddb | Update docs (#1839) | 2024-10-30 02:49:08 -07:00
Chayenne | 539df95d2c | Imporve openai api documents (#1827) (Co-authored-by: Chayenne <zhaochenyang@g.ucla.edu>) | 2024-10-30 00:39:41 -07:00
DanielC12321 | 5e00ddebc0 | Add new model: Gpt2 (#1833) | 2024-10-29 17:52:33 -07:00
HAI | 54dd3ea122 | [FP8 KV Cache, Mixtral] Avoid KeyError at loading pre-quantized FP8 m… (#1835) | 2024-10-29 13:58:03 -07:00
yizhang2077 | d04899d7ca | stop_str of qwen2-vl template should be a tuple not a str (#1834) (Co-authored-by: Byron Hsu <byronhsu1230@gmail.com>) | 2024-10-29 20:30:41 +00:00
Yanyi Liu | 5e6c32657e | Support setting use_thread in the run_program for easier debugging. (#1823) (Co-authored-by: Byron Hsu <byronhsu1230@gmail.com>) | 2024-10-29 06:51:47 +00:00
Byron Hsu | 680cad2023 | fix get_memory_pool_size deadlock for DP (#1830) | 2024-10-28 23:07:14 -07:00
Byron Hsu | 0a24eb850a | Fix update_weights deadlock for DP (#1825) | 2024-10-28 12:02:23 -07:00
Byron Hsu | 6fcd6d7d6d | Support token ids in engine.generate (#1820) | 2024-10-27 14:02:34 -07:00
Ke Bao | c77762d57f | Fix Triton decode kernel & ut (#1819) | 2024-10-27 10:54:38 -07:00
Lianmin Zheng | eaade87a42 | Fix unit tests (#1817) | 2024-10-27 03:04:54 -07:00
Lianmin Zheng | 86fc0d79d0 | Add a watch dog thread (#1816) | 2024-10-27 02:00:50 -07:00
Lianmin Zheng | 86e0dde555 | Improve the user control of new_token_ratio (#1811) | 2024-10-26 16:39:41 -07:00
Lianmin Zheng | 2b80978859 | Provide an argument to set the maximum batch size for cuda graph (#1809) | 2024-10-26 15:09:33 -07:00
Chayenne | ced362f7c6 | Simplify our docs with complicated functions into utils (#1807) (Co-authored-by: Chayenne <zhaochenyang@ucla.edu>) | 2024-10-26 17:44:11 +00:00
Lianmin Zheng | 9084a86445 | Update links (#1805) | 2024-10-26 04:46:01 -07:00
Byron Hsu | c26507484f | fix int conversion for SGLANG_CPU_COUNT (#1803) | 2024-10-26 00:09:44 -07:00
Liangsheng Yin | 07bf2e846a | Allow consecutive ports when launching multiple sglang servers. (#1802) | 2024-10-26 06:43:24 +00:00
Liangsheng Yin | a628dd8e31 | Set ZMQ buffer size heuristic (#1801) | 2024-10-25 23:15:56 -07:00
Liangsheng Yin | 1e8903414a | Fix possible ZMQ hanging (#1800) | 2024-10-25 23:07:07 -07:00
Hui Liu | 9ce8e1a93c | move max_position_embeddings to the last (#1799) | 2024-10-25 19:30:50 -07:00
Lianmin Zheng | fb99aaa527 | [Fix] Fix --skip-tokenizer-init (#1798) | 2024-10-25 18:51:59 -07:00
DarkSharpness | b77a02cdfd | [Performance] Support both xgrammar and outlines for constrained decoding (#1752) | 2024-10-25 21:47:02 +00:00
Lianmin Zheng | 30643fed7f | Release v0.3.4.post2 (#1796) (Co-authored-by: DarkSharpness <76582120+DarkSharpness@users.noreply.github.com>) | 2024-10-25 11:07:19 -07:00
Lianmin Zheng | e646c5901e | Fix logprob in the overlapped mode (#1795) | 2024-10-25 11:06:57 -07:00
Lianmin Zheng | c555ce2ca2 | Revert "Fix memory leak when doing chunked prefill" (#1797) | 2024-10-25 10:24:44 -07:00
Lianmin Zheng | 40900baea7 | [Fix] Fix the log parsing in chunked prefill uni tests (#1794) | 2024-10-25 08:31:08 -07:00
Liangsheng Yin | a2f5e7555f | Fix memory leak when doing chunked prefill (#1787) | 2024-10-25 08:01:17 -07:00
Lianmin Zheng | 2148914e1b | Fix log parsing in the chunked prefill unit tests (#1793) | 2024-10-25 08:00:55 -07:00
yizhang2077 | def55bc876 | Qwen2vl support cuda graph and disable radix cache (#1780) | 2024-10-25 10:45:17 -04:00