Commit Graph

914 Commits

Author SHA1 Message Date
James Xu
ddeb9d42de Add engine encode (#1995) 2024-11-11 11:48:17 -08:00
Co-authored-by: Byron Hsu <byronhsu1230@gmail.com>
HAI
087ab83223 [Performance, Triton] Optimize over mask compute to tl.load in fused_moe_kernel (#1980) 2024-11-10 18:54:43 -08:00
Byron Hsu
8169c6f4ef Add gen-shared-prefix dataset in bench_serving (#1990) 2024-11-11 08:39:56 +08:00
yizhang2077
a8aad9357d qwen2vl fix bug for #1971 #1897 (#1984) 2024-11-10 08:10:45 -08:00
Yineng Zhang
b3523af8eb fix: update pyzmq version (#1983) 2024-11-10 21:33:23 +08:00
Lianmin Zheng
1929c06762 Simplify prometheus metrics (#1981) 2024-11-10 04:39:32 -08:00
Co-authored-by: Mohit Reddy <mohitreddy1996@users.noreply.github.com>
Huanzhi (Hans) Mao
ed53ac84b4 Specify zmq Version Requirement (#1982) 2024-11-10 01:32:07 -08:00
Lianmin Zheng
520f0094e4 [CI] balance unit tests (#1977) 2024-11-09 16:46:14 -08:00
Lianmin Zheng
9c939a3d8b Clean up metrics code (#1972) 2024-11-09 15:43:20 -08:00
Enrique Shockwave
f11eb90fe4 Initialize model_worker_batch variable (#1973) 2024-11-09 11:28:02 -08:00
Yudi Xue
95a4ed129a Fix metrics (#1963) 2024-11-08 23:21:11 -08:00
Lianmin Zheng
a509552087 [minor] Improve code style and compatibility (#1961) 2024-11-08 02:19:41 -08:00
aqweteddy
4ade15dd32 Adjust reward model's score module and pooler module order for reducing computation (#1956) 2024-11-08 00:10:54 -08:00
Lianmin Zheng
8dc84da084 Remove the useless to_srt_kwargs (#1955) 2024-11-07 23:15:08 -08:00
aqweteddy
f16eb15d0d Gemma2 reward model support (#1954) 2024-11-07 22:42:27 -08:00
HAI
67c424cce3 [Performance, Triton Kernel Args] extend_attention, optimize kern args to _fwd_kernel (#1941) 2024-11-07 18:24:02 -08:00
Chayenne
c77c1e05ba fix black in pre-commit (#1940) 2024-11-08 07:42:47 +08:00
Xuehai Pan
a5e0defb5a minor: Add basic editorconfig and pre-commit hooks to enforce style for whitespaces (#1926) 2024-11-06 13:46:04 +00:00
Lzhang-hub
a146d9990e support prometheus metrics (#1853) 2024-11-05 20:42:53 -08:00
Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
Co-authored-by: Byron Hsu <byronhsu1230@gmail.com>
Chayenne
02755768d3 Change judge to classify & Modify make file (#1920) 2024-11-04 23:53:44 -08:00
Lianmin Zheng
65859754f1 Release v0.3.5 (#1908) 2024-11-03 13:48:11 -08:00
Lianmin Zheng
2ce32db6fb Let reward model take text inputs instead of message lists (#1907) 2024-11-03 13:27:12 -08:00
Co-authored-by: Kyle Corbitt <kyle@corbt.com>
Yineng Zhang
793b79dbe9 feat: support truss endpoint for benchmark serving (#1906) 2024-11-03 12:56:10 -08:00
Iñaki Arango
1363b51983 Escape backwards slash (#1902) 2024-11-03 12:27:11 -08:00
Lianmin Zheng
0abbf289a8 Unify the model type checking (#1905) 2024-11-03 12:25:39 -08:00
Lianmin Zheng
c17c578108 Simplify tokenizer manager (#1904) 2024-11-03 08:38:26 -08:00
Lianmin Zheng
838dcda162 Simplify tokenizer manager (#1899) 2024-11-03 03:52:38 -08:00
Lianmin Zheng
efbc116a0f Do not use longest prefix matching when #queue-req is large (#1896) 2024-11-03 01:45:20 -07:00
Chayenne
6aed0445ed turn off log (#1895) 2024-11-03 00:19:12 -07:00
Ran Chen
146f613405 Fix incorrect context length for llama3.2-11b (#1873) 2024-11-02 00:04:50 -07:00
Ke Bao
16eb33ffe2 Update vocab embedding deps and add TP switch (#1856) 2024-10-31 20:13:07 -07:00
Liangsheng Yin
b9fd178f1b Fix retraction + overlap (#1860) 2024-10-31 18:27:42 -07:00
Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
HAI
d8e9d61f86 [Build, ROCm] Dockerfile.rocm for Instinct GPUs, with package updates (#1861) 2024-10-31 16:38:16 -07:00
Lianmin Zheng
a2e0424abf Fix memory leak for chunked prefill 2 (#1858) 2024-10-31 14:51:51 -07:00
Co-authored-by: Liangsheng Yin <hnyls2002@gmail.com>
geeker-smallwhite
8ce202a493 delete unused character (#1855) 2024-10-31 19:33:55 +08:00
Byron Hsu
438526a814 Refactor tokenizer manager (#1846) 2024-10-30 21:32:18 -07:00
Lianmin Zheng
f7102fbd2b Fix mixed chunked prefill (#1850) 2024-10-30 21:20:41 -07:00
Byron Hsu
a7a0a6886b Make decode log interval configurable (#1847) 2024-10-30 19:59:20 -07:00
HAI
2d4ce1b792 [Performance, Triton Kernel Args] _decode_grouped_softmax_reducev_fwd… (#1845) 2024-10-30 17:33:36 -07:00
HAI
5f65e2b830 [Performance, Hardware] MoE weights padding to AMD MI300x GPUs (#1836) 2024-10-30 12:17:32 -07:00
Ying Sheng
4e2af03cfa [Production] Drain requests before exit when receive SIGTERM (#1838) 2024-10-30 10:22:56 -07:00
Lianmin Zheng
b548801ddb Update docs (#1839) 2024-10-30 02:49:08 -07:00
Chayenne
539df95d2c Improve openai api documents (#1827) 2024-10-30 00:39:41 -07:00
Co-authored-by: Chayenne <zhaochenyang@g.ucla.edu>
DanielC12321
5e00ddebc0 Add new model: Gpt2 (#1833) 2024-10-29 17:52:33 -07:00
HAI
54dd3ea122 [FP8 KV Cache, Mixtral] Avoid KeyError at loading pre-quantized FP8 m… (#1835) 2024-10-29 13:58:03 -07:00
yizhang2077
d04899d7ca stop_str of qwen2-vl template should be a tuple not a str (#1834) 2024-10-29 20:30:41 +00:00
Co-authored-by: Byron Hsu <byronhsu1230@gmail.com>
Yanyi Liu
5e6c32657e Support setting use_thread in the run_program for easier debugging. (#1823) 2024-10-29 06:51:47 +00:00
Co-authored-by: Byron Hsu <byronhsu1230@gmail.com>
Byron Hsu
680cad2023 fix get_memory_pool_size deadlock for DP (#1830) 2024-10-28 23:07:14 -07:00
Byron Hsu
0a24eb850a Fix update_weights deadlock for DP (#1825) 2024-10-28 12:02:23 -07:00
Byron Hsu
6fcd6d7d6d Support token ids in engine.generate (#1820) 2024-10-27 14:02:34 -07:00