Initialize project; model provided by the ModelHub XC community
Model: RedHatAI/Qwen2.5-0.5B-quantized.w8a8 · Source: Original Platform
.gitattributes (vendored) · 36 lines · Normal file
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
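These attribute rules route matching files through the Git LFS filter, so large artifacts are stored as small pointer files. A quick way to confirm a path is covered by such a rule (a sketch; assumes `git` is on the PATH, and works even without `git-lfs` installed, since it only reads attributes):

```shell
# Create a throwaway repo with one of the rules above and query the attributes.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
printf '%s\n' '*.safetensors filter=lfs diff=lfs merge=lfs -text' > .gitattributes
git check-attr filter -- model.safetensors
# -> model.safetensors: filter: lfs
```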
README.md · 168 lines · Normal file
@@ -0,0 +1,168 @@
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
tags:
- chat
- neuralmagic
- llmcompressor
---

# Qwen2.5-0.5B-quantized.w8a8

## Model Overview
- **Model Architecture:** Qwen2
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Activation quantization:** INT8
  - **Weight quantization:** INT8
- **Intended Use Cases:** Intended for commercial and research use in multiple languages. Similarly to [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- **Release Date:** 10/09/2024
- **Version:** 1.0
- **License(s):** [apache-2.0](https://huggingface.co/Qwen/Qwen2.5-0.5B/blob/main/LICENSE)
- **Model Developers:** Neural Magic

Quantized version of [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It achieves an average score of 43.93 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 44.03.

### Model Optimizations

This model was obtained by quantizing the weights of [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) to the INT8 data type.
This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x).
Weight quantization also reduces disk size requirements by approximately 50%.

Only the weights and activations of the linear operators within transformer blocks are quantized.
Weights are quantized with a symmetric static per-channel scheme, where a fixed linear scaling factor is applied between INT8 and floating-point representations for each output channel dimension.
Activations are quantized with a symmetric dynamic per-token scheme, computing a linear scaling factor at runtime for each token between INT8 and floating-point representations.

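The two schemes described above can be illustrated with a small NumPy sketch (illustrative only, not the llmcompressor implementation; the function names are made up for this example): weights get one fixed scale per output channel, activations get one scale per token computed on the fly.

```python
import numpy as np

def quantize_weights_per_channel(w: np.ndarray):
    """Symmetric static per-channel INT8: one scale per output channel (row)."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0  # fixed at quantization time
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def quantize_activations_per_token(x: np.ndarray):
    """Symmetric dynamic per-token INT8: one scale per token (row), computed at runtime."""
    scale = np.abs(x).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)  # [out_channels, in_features]
x = rng.normal(size=(3, 8)).astype(np.float32)  # [tokens, in_features]

qw, sw = quantize_weights_per_channel(w)
qx, sx = quantize_activations_per_token(x)

# Integer matmul accumulates in int32; the per-token and per-channel
# scales combine into one rescaling factor per output element.
y_int = qx.astype(np.int32) @ qw.T.astype(np.int32)
y = y_int * (sx @ sw.T)
y_ref = x @ w.T
print(np.max(np.abs(y - y_ref)))  # small quantization error
```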
## Deployment

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/Qwen2.5-0.5B-quantized.w8a8"
number_gpus = 1
max_model_len = 8192

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language models."

llm = LLM(model=model_id, tensor_parallel_size=number_gpus, max_model_len=max_model_len)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.

## Evaluation

The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Qwen2.5-0.5B-quantized.w8a8",dtype=auto,gpu_memory_utilization=0.9,add_bos_token=True,max_model_len=4096,enable_chunked_prefill=True,tensor_parallel_size=1 \
  --tasks openllm \
  --batch_size auto
```

### Accuracy

#### Open LLM Leaderboard evaluation scores

| Benchmark | Qwen2.5-0.5B | Qwen2.5-0.5B-quantized.w8a8 (this model) | Recovery |
| --- | --- | --- | --- |
| MMLU (5-shot) | 47.57 | 47.35 | 99.5% |
| ARC Challenge (25-shot) | 34.90 | 34.47 | 98.8% |
| GSM-8K (5-shot, strict-match) | 34.19 | 34.19 | 100.0% |
| Hellaswag (10-shot) | 51.83 | 51.63 | 99.6% |
| Winogrande (5-shot) | 55.80 | 55.64 | 99.7% |
| TruthfulQA (0-shot, mc2) | 39.90 | 40.32 | 101.1% |
| **Average** | **44.03** | **43.93** | **99.8%** |

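As a quick sanity check, the Recovery column above is just the quantized score expressed as a percentage of the baseline score; the figures can be reproduced in a few lines of Python:

```python
# Per-task scores copied from the accuracy table above.
baseline = {"MMLU": 47.57, "ARC Challenge": 34.90, "GSM-8K": 34.19,
            "Hellaswag": 51.83, "Winogrande": 55.80, "TruthfulQA": 39.90}
quantized = {"MMLU": 47.35, "ARC Challenge": 34.47, "GSM-8K": 34.19,
             "Hellaswag": 51.63, "Winogrande": 55.64, "TruthfulQA": 40.32}

# Recovery = quantized / baseline, as a percentage rounded to one decimal.
recovery = {k: round(100 * quantized[k] / baseline[k], 1) for k in baseline}

avg_base = sum(baseline.values()) / len(baseline)
avg_quant = sum(quantized.values()) / len(quantized)
print(recovery)
print(round(avg_base, 2), round(avg_quant, 2), round(100 * avg_quant / avg_base, 1))
# -> 44.03 43.93 99.8
```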
added_tokens.json · 24 lines · Normal file
@@ -0,0 +1,24 @@
{
  "</tool_call>": 151658,
  "<tool_call>": 151657,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
config.json · 3 lines · Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cecd930c9291952e38214ef39380b8d68a23d79ac6712b772fa7baf8ae08d02c
size 1933
configuration.json · 1 line · Normal file
@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}
evaluate_qwen2.5_bf16.sh · 3 lines · Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4dcf47c9869e6f8db01ecd32295aefb06c2edbb9be868ac64076a25f36b05e3d
size 2809
evaluate_qwen2.5_w4a16.sh · 3 lines · Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a4c1ec144f57b943e9ddb7474d25eb69df6cd49db3397b1628dae32de4bdc17f
size 5057
evaluate_qwen2.5_w8a16.sh · 3 lines · Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c4b0d7f494eeb1cd8f05aa1446c77f71d40e7472d9188b7ece26870f496b152c
size 5068
evaluate_qwen2.5_w8a8.sh · 3 lines · Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a20d9b14a551d3616ddc8b8174f479aa662e56b56588b0356eebc441c837e889
size 6058
generation_config.json · 6 lines · Normal file
@@ -0,0 +1,6 @@
{
  "bos_token_id": 151643,
  "eos_token_id": 151643,
  "max_new_tokens": 2048,
  "transformers_version": "4.45.1"
}
merges.txt · 3 lines · Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8831e4f1a044471340f7c0a83d7bd71306a5b867e95fd870f74d0c5308a904d5
size 1671853
model.safetensors · 3 lines · Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:28cdedd19dd7c9e9f4b4b7b6a140860e9ea977082a8353e18c9ecf851dbae28c
size 903168128
quantize_qwen2.5_fp8.sh · 39 lines · Normal file
@@ -0,0 +1,39 @@

source ~/environments/clearml/bin/activate

recipe=$(cat <<'EOF'
quant_stage:
  quant_modifiers:
    QuantizationModifier:
      ignore: ["lm_head"]
      scheme: FP8
      targets: ["Linear"]
      observer: "mse"
EOF
)

for size in 0.5B 1.5B 3B 7B 32B 72B
do
    for version in base instruct
    do

        if [ $version = "base" ]; then
            model="Qwen2.5-${size}"
        else
            model="Qwen2.5-${size}-Instruct"
        fi

        prefix="${model//./_}""__llm_compressor__calibration__mse__512__8196__damp01"

        python /cache/git/research/automation/pipelines/pipeline_llmcompressor_oneshot.py \
            --model-id "Qwen/"$model \
            --project-name "LLM quantization - FP8/llmcompressor/Qwen2.5" \
            --task-prefix $prefix \
            --recipe "${recipe}" \
            --num-samples 512 \
            --max-seq-len 8196 \
            --tags "Qwen2.5" "W4A16" "calibration" $size "MSE" $version

    done
done
quantize_qwen2.5_w4a16.sh · 3 lines · Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aeb4d42755e0894f7e192c9cd65d00c7c17e6efb4c8bcb440d4a3bb874034385
size 1093
quantize_qwen2.5_w8a16.sh · 3 lines · Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:023f19180acd661ea3f74a4c5468b04fad69f95cb8b737d4e2a7d85bb11ae70b
size 6377
quantize_qwen2.5_w8a8.sh · 3 lines · Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9aa74e463dd28a2aaed0481cec2c516efb9849cf8f2eaae1e115dd9448e3b1b6
size 3785
quantize_qwen2.5_w8a8_sq.sh · 3 lines · Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6fd17d76bd1ce071b9f5eeec469877e33bcdc7f8d58bff728e8063adbe5af526
size 1261
recipe.yaml · 18 lines · Normal file
@@ -0,0 +1,18 @@
quant_stage:
  quant_modifiers:
    SmoothQuantModifier:
      smoothing_strength: 0.9
      mappings:
      - - ['re:.*q_proj', 're:.*k_proj', 're:.*v_proj']
        - re:.*input_layernorm
      - - ['re:.*gate_proj', 're:.*up_proj']
        - re:.*post_attention_layernorm
      - - ['re:.*down_proj']
        - re:.*up_proj
    GPTQModifier:
      sequential_update: true
      dampening_frac: 0.1
      ignore: [lm_head]
      scheme: W8A8
      targets: Linear
      observer: mse
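The SmoothQuantModifier in the recipe above migrates activation outliers into the adjacent weights before quantization. The core algebraic trick is an exactly-equivalent per-channel rescaling (a toy NumPy sketch under that interpretation, not the llmcompressor implementation; `alpha` plays the role of `smoothing_strength`):

```python
import numpy as np

def smooth(x, w, alpha=0.9):
    """Divide activations by per-channel factors s, multiply weights by s.

    s_j = max|x_j|^alpha / max|w_j|^(1-alpha); the product x @ w.T is
    unchanged, but activation outliers shrink, which makes INT8
    activation quantization easier.
    """
    a = np.abs(x).max(axis=0)  # per-input-channel activation range
    b = np.abs(w).max(axis=0)  # per-input-channel weight range
    s = a**alpha / b**(1 - alpha)
    return x / s, w * s

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8)); x[:, 3] *= 50.0  # channel 3 carries outliers
w = rng.normal(size=(4, 8))

xs, ws = smooth(x, w)
print(np.allclose(xs @ ws.T, x @ w.T))  # True: the layer output is unchanged
print(np.abs(x).max(), np.abs(xs).max())  # outlier magnitude is reduced
```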
special_tokens_map.json · 31 lines · Normal file
@@ -0,0 +1,31 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json · 3 lines · Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bb73a25aba3c83c6c815a03a334b0440bd549f9a54fa3673e005f5532f6b32fe
size 11421995
tokenizer_config.json · 3 lines · Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cefaa66de8fae4a09ca18a9c3a7fd8b61311ed568e5f4e634f6a3d95a2a9e889
size 7229
vocab.json (Stored with Git LFS) · BIN · Normal file
Binary file not shown.