Initialize the project; model provided by the ModelHub XC community
Model: RedHatAI/Qwen2.5-0.5B-Instruct-quantized.w8a8 (Source: Original Platform)

.gitattributes (vendored, new file, 36 lines)
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text

README.md (new file, 242 lines)
@@ -0,0 +1,242 @@
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- chat
- neuralmagic
- llmcompressor
---

# Qwen2.5-0.5B-Instruct-quantized.w8a8

## Model Overview
- **Model Architecture:** Qwen2
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Activation quantization:** INT8
  - **Weight quantization:** INT8
- **Intended Use Cases:** Intended for commercial and research use in multiple languages. Similar to [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- **Release Date:** 10/09/2024
- **Version:** 1.0
- **License(s):** [apache-2.0](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE)
- **Model Developers:** Neural Magic

Quantized version of [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
It achieves an average score of 43.38 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1) and 23.42 on version 2, whereas the unquantized model achieves 43.64 on version 1 and 23.39 on version 2.

### Model Optimizations

This model was obtained by quantizing the weights and activations of [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) to the INT8 data type.
This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x).
Weight quantization also reduces disk size requirements by approximately 50%.

Only the weights and activations of the linear operators within transformer blocks are quantized.
Weights are quantized with a symmetric static per-channel scheme, where a fixed linear scaling factor is applied between INT8 and floating-point representations for each output channel dimension.
Activations are quantized with a symmetric dynamic per-token scheme, computing a linear scaling factor at runtime for each token between INT8 and floating-point representations.
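
The two schemes differ mainly in which axis the scale is computed over and when it is computed. As a rough illustration only (not the fused INT8 kernels vLLM actually executes), symmetric per-channel weight quantization and per-token activation quantization can be sketched in PyTorch as:

```python
import torch

def quantize_weights_per_channel(w: torch.Tensor):
    # Symmetric static per-channel: one fixed scale per output channel (row of a linear weight).
    scale = w.abs().amax(dim=1, keepdim=True).clamp_min(1e-8) / 127.0
    q = torch.clamp(torch.round(w / scale), -128, 127).to(torch.int8)
    return q, scale  # dequantize as q.float() * scale

def quantize_activations_per_token(x: torch.Tensor):
    # Symmetric dynamic per-token: one scale per token (row), computed at runtime.
    scale = x.abs().amax(dim=-1, keepdim=True).clamp_min(1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -128, 127).to(torch.int8)
    return q, scale
```

The actual weight quantization here was produced with GPTQ (see recipe.yaml below), which additionally minimizes layer-wise reconstruction error when rounding the weights, rather than simple round-to-nearest as in the sketch.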

## Deployment

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/Qwen2.5-0.5B-Instruct-quantized.w8a8"
number_gpus = 1
max_model_len = 8192

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Format the request with the model's chat template before generation.
messages = [{"role": "user", "content": "Give me a short introduction to large language model."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

llm = LLM(model=model_id, tensor_parallel_size=number_gpus, max_model_len=max_model_len)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
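
As a minimal sketch of that serving path (the port and `api_key` below are common defaults and are assumptions, not values from this card): start a server with `vllm serve neuralmagic/Qwen2.5-0.5B-Instruct-quantized.w8a8`, then query it with any OpenAI-compatible client.

```python
# Assumes a server started with: vllm serve neuralmagic/Qwen2.5-0.5B-Instruct-quantized.w8a8
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Qwen2.5-0.5B-Instruct-quantized.w8a8",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    max_tokens=256,
    temperature=0.7,
)
print(response.choices[0].message.content)
```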

## Evaluation

The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Qwen2.5-0.5B-Instruct-quantized.w8a8",dtype=auto,gpu_memory_utilization=0.9,add_bos_token=True,max_model_len=4096,enable_chunked_prefill=True,tensor_parallel_size=1 \
  --tasks openllm \
  --batch_size auto
```

### Accuracy

#### Open LLM Leaderboard evaluation scores

<table>
  <tr>
    <td><strong>Benchmark</strong></td>
    <td><strong>Qwen2.5-0.5B-Instruct</strong></td>
    <td><strong>Qwen2.5-0.5B-Instruct-quantized.w8a8 (this model)</strong></td>
    <td><strong>Recovery</strong></td>
  </tr>
  <tr>
    <td rowspan="7"><strong>OpenLLM v1</strong></td>
    <td>MMLU (5-shot)</td><td>46.83</td><td>46.29</td><td>98.9%</td>
  </tr>
  <tr><td>ARC Challenge (25-shot)</td><td>33.62</td><td>33.36</td><td>99.2%</td></tr>
  <tr><td>GSM-8K (5-shot, strict-match)</td><td>33.21</td><td>33.21</td><td>100.0%</td></tr>
  <tr><td>Hellaswag (10-shot)</td><td>51.31</td><td>50.97</td><td>99.3%</td></tr>
  <tr><td>Winogrande (5-shot)</td><td>55.01</td><td>55.01</td><td>100.0%</td></tr>
  <tr><td>TruthfulQA (0-shot, mc2)</td><td>41.85</td><td>41.47</td><td>99.1%</td></tr>
  <tr><td><strong>Average</strong></td><td><strong>43.64</strong></td><td><strong>43.38</strong></td><td><strong>99.4%</strong></td></tr>
  <tr>
    <td rowspan="7"><strong>OpenLLM v2</strong></td>
    <td>MMLU-Pro (5-shot)</td><td>17.49</td><td>16.95</td><td>96.9%</td>
  </tr>
  <tr><td>IFEval (0-shot)</td><td>31.17</td><td>32.04</td><td>102.8%</td></tr>
  <tr><td>BBH (3-shot)</td><td>32.79</td><td>32.51</td><td>99.2%</td></tr>
  <tr><td>Math-lvl-5 (4-shot)</td><td>0.21</td><td>0.17</td><td>***</td></tr>
  <tr><td>GPQA (0-shot)</td><td>25.67</td><td>26.12</td><td>101.8%</td></tr>
  <tr><td>MuSR (0-shot)</td><td>33.02</td><td>32.75</td><td>99.2%</td></tr>
  <tr><td><strong>Average</strong></td><td><strong>23.39</strong></td><td><strong>23.42</strong></td><td><strong>100.1%</strong></td></tr>
</table>

*** Reference value too low to report meaningful recovery.

added_tokens.json (new file, 24 lines)
@@ -0,0 +1,24 @@
{
  "</tool_call>": 151658,
  "<tool_call>": 151657,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}

config.json (new file, 3 lines, Git LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cc9d7ff9d7e7ed943291edd199303cd8943755acc8e7b8cf05d6fda15f420598
size 1920

configuration.json (new file, 1 line)
@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}

generation_config.json (new file, 14 lines)
@@ -0,0 +1,14 @@
{
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "pad_token_id": 151643,
  "repetition_penalty": 1.1,
  "temperature": 0.7,
  "top_k": 20,
  "top_p": 0.8,
  "transformers_version": "4.45.1"
}

merges.txt (new file, 3 lines, Git LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8831e4f1a044471340f7c0a83d7bd71306a5b867e95fd870f74d0c5308a904d5
size 1671853

model.safetensors (new file, 3 lines, Git LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5e8edffe394b0fef5422540357eedaa30e4b072060a728ce2a344402da8d24e2
size 903168128

recipe.yaml (new file, 9 lines)
@@ -0,0 +1,9 @@
quant_stage:
  quant_modifiers:
    GPTQModifier:
      sequential_update: true
      dampening_frac: 0.01
      ignore: [lm_head]
      scheme: W8A8
      targets: Linear
      observer: mse
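
For context, a recipe like this is normally applied with llmcompressor's one-shot entrypoint. The sketch below is hypothetical: the import path, calibration dataset, sequence length, and sample count are assumptions for illustration, not the exact script used to produce this checkpoint.

```python
# Hypothetical sketch: apply the GPTQ W8A8 recipe above with llmcompressor's one-shot flow.
from llmcompressor.transformers import oneshot

oneshot(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    dataset="open_platypus",            # calibration dataset (assumption)
    recipe="recipe.yaml",               # the recipe shown above
    output_dir="Qwen2.5-0.5B-Instruct-quantized.w8a8",
    max_seq_length=2048,                # assumption
    num_calibration_samples=512,        # assumption
)
```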

special_tokens_map.json (new file, 31 lines)
@@ -0,0 +1,31 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}

tokenizer.json (new file, 3 lines, Git LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bb73a25aba3c83c6c815a03a334b0440bd549f9a54fa3673e005f5532f6b32fe
size 11421995

tokenizer_config.json (new file, 3 lines, Git LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7e88129d9769a0b14b1587a7d5e829fe93ac0e1511636471fdfc0811951418e6
size 7306

vocab.json (new file, binary, stored with Git LFS)
Binary file not shown.