Support serving DeepSeek-R1-Channel-INT8 with 32 L40S. (#4418)
This commit is contained in:
@@ -18,6 +18,7 @@ SGLang is recognized as one of the top engines for [DeepSeek model inference](ht
| **Quantized weights (AWQ)** | 8 x H100/800/20 |
| | 8 x A100/A800 |
| **Quantized weights (int8)** | 16 x A100/800 |
| | 32 x L40S |
<style>
.md-typeset__table {
@@ -56,6 +57,7 @@ Detailed commands for reference:
- [4 x 8 x A100](https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3#example-serving-with-four-a1008-nodes)
- [8 x A100 (AWQ)](https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3#example-serving-with-8-a100a800-with-awq-quantization)
- [16 x A100 (int8)](https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3#example-serving-with-16-a100a800-with-int8-quantization)
- [32 x L40S (int8)](https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3#example-serving-with-32-l40s-with-int8-quantization)
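The new 32 x L40S entry corresponds to a multi-node tensor-parallel deployment. A minimal sketch of such a launch, assuming 4 nodes with 8 L40S each and using SGLang's `launch_server` entry point (the model path, port, and `MASTER_IP` placeholder are illustrative; consult the linked benchmark README for the exact commands):

```shell
# Run this on each of the 4 nodes (8 x L40S per node).
# MASTER_IP is the address of node 0 (placeholder); set --node-rank
# to 0..3 depending on the node. Values here are illustrative.
python3 -m sglang.launch_server \
  --model-path meituan/DeepSeek-R1-Channel-INT8 \
  --tp 32 \
  --quantization w8a8_int8 \
  --dist-init-addr MASTER_IP:5000 \
  --nnodes 4 \
  --node-rank 0 \
  --trust-remote-code
```

Tensor parallelism of 32 spans all GPUs across the four nodes, which is why every node must be started with the same `--dist-init-addr` and `--nnodes` but its own `--node-rank`.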
### Download Weights