Initialize the project; model provided by the ModelHub XC community
Model: RedHatAI/Qwen2-1.5B-Instruct-quantized.w8a16 Source: Original Platform
35 .gitattributes (vendored) Normal file
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
257 README.md Normal file
@@ -0,0 +1,257 @@
---
language:
- en
pipeline_tag: text-generation
license: apache-2.0
license_link: https://www.apache.org/licenses/LICENSE-2.0
---

# Qwen2-1.5B-Instruct-quantized.w8a16

## Model Overview
- **Model Architecture:** Qwen2
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT8
- **Intended Use Cases:** Intended for commercial and research use in English. Like [Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/2/2024
- **Version:** 1.0
- **License(s):** [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Model Developers:** Neural Magic

Quantized version of [Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct).
It achieves an average score of 55.38 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 55.17.

### Model Optimizations

This model was obtained by quantizing the weights of [Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) to the INT8 data type.
This optimization reduces the number of bits per parameter from 16 to 8, cutting the disk size and GPU memory requirements by approximately 50%.
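As a quick sanity check on that figure (illustrative arithmetic only; the exact checkpoint size also depends on tensors that stay unquantized, such as embeddings):

```python
# Back-of-the-envelope size estimate for ~1.5B parameters.
num_params = 1.5e9
fp16_bytes = num_params * 2   # 16-bit weights: 2 bytes each
int8_bytes = num_params * 1   # 8-bit weights: 1 byte each
print(f"FP16: ~{fp16_bytes / 1e9:.1f} GB, INT8: ~{int8_bytes / 1e9:.1f} GB")
# -> FP16: ~3.0 GB, INT8: ~1.5 GB, i.e. roughly a 50% reduction
```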

Only the weights of the linear operators within transformer blocks are quantized. Symmetric per-channel quantization is applied: a linear scaling per output dimension maps between the INT8 and floating-point representations of the quantized weights.
[AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) is used for quantization, with a 1% damping factor and 256 calibration sequences of 8,192 random tokens each.
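For intuition, here is a minimal sketch of symmetric per-channel INT8 weight quantization (an illustration of the scheme, not the AutoGPTQ implementation, which additionally uses calibration data to minimize quantization error):

```python
import torch

def quantize_per_channel_int8(weight: torch.Tensor):
    """Symmetric per-channel INT8 quantization of a [out_dim, in_dim] weight."""
    # One scale per output dimension: the largest magnitude maps to 127.
    scales = (weight.abs().amax(dim=1, keepdim=True) / 127.0).clamp_min(1e-8)
    int8_weight = torch.clamp(torch.round(weight / scales), -127, 127).to(torch.int8)
    return int8_weight, scales

def dequantize(int8_weight: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    # The same per-channel linear scaling maps INT8 back to floating point.
    return int8_weight.float() * scales

w = torch.randn(4, 8)
q, s = quantize_per_channel_int8(w)
print((w - dequantize(q, s)).abs().max())  # small round-off error
```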

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/Qwen2-1.5B-Instruct-quantized.w8a16"
number_gpus = 1

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the chat template to a prompt string without tokenizing;
# vLLM handles tokenization internally.
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
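For example, once a server is running, it can be queried with the official `openai` client. This is an illustrative sketch, not from the model card: the server invocation, port, and placeholder API key below are assumptions.

```python
# Start the server first, e.g.:
#   python -m vllm.entrypoints.openai.api_server \
#       --model neuralmagic/Qwen2-1.5B-Instruct-quantized.w8a16
from openai import OpenAI

# Local vLLM server; the API key is a placeholder that vLLM ignores by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="neuralmagic/Qwen2-1.5B-Instruct-quantized.w8a16",
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    temperature=0.7,
    top_p=0.8,
    max_tokens=256,
)
print(completion.choices[0].message.content)
```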

### Use with transformers

This model is supported by Hugging Face Transformers through its integration with the [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) data format.
The following example shows how the model can be used with the `generate()` function.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "neuralmagic/Qwen2-1.5B-Instruct-quantized.w8a16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Qwen2 chat models end their turns with <|im_end|> (id 151645), which is
# also this checkpoint's eos_token; see special_tokens_map.json below.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|im_end|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
)
# Strip the prompt tokens and decode only the newly generated text.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## Creation

This model was created by applying the [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) library, as shown in the code snippet below.
Although AutoGPTQ was used for this particular model, Neural Magic is transitioning to [llm-compressor](https://github.com/vllm-project/llm-compressor), which supports several quantization schemes and models not supported by AutoGPTQ.

```python
import random

from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "Qwen/Qwen2-1.5B-Instruct"

num_samples = 256
max_seq_len = 8192

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build a calibration set of random token sequences.
max_token_id = len(tokenizer.get_vocab()) - 1
examples = []
for _ in range(num_samples):
    examples.append(
        {
            "input_ids": [random.randint(0, max_token_id) for _ in range(max_seq_len)],
            "attention_mask": max_seq_len * [1],
        }
    )

quantize_config = BaseQuantizeConfig(
    bits=8,                         # INT8 weights
    group_size=-1,                  # per-channel quantization (no grouping)
    desc_act=False,
    model_file_base_name="model",
    damp_percent=0.01,              # 1% damping factor
)

model = AutoGPTQForCausalLM.from_pretrained(
    model_id,
    quantize_config,
    device_map="auto",
)

model.quantize(examples)
model.save_pretrained("Qwen2-1.5B-Instruct-quantized.w8a16")
```

## Evaluation

The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Qwen2-1.5B-Instruct-quantized.w8a16",dtype=auto,gpu_memory_utilization=0.4,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
  --tasks openllm \
  --batch_size auto
```

### Accuracy

#### Open LLM Leaderboard evaluation scores

| Benchmark | Qwen2-1.5B-Instruct | Qwen2-1.5B-Instruct-quantized.w8a16 (this model) | Recovery |
|---|---|---|---|
| MMLU (5-shot) | 55.65 | 56.08 | 100.8% |
| ARC Challenge (25-shot) | 42.83 | 43.09 | 100.6% |
| GSM-8K (5-shot, strict-match) | 58.07 | 58.00 | 99.9% |
| Hellaswag (10-shot) | 67.43 | 67.44 | 100.0% |
| Winogrande (5-shot) | 63.69 | 64.33 | 101.0% |
| TruthfulQA (0-shot) | 43.34 | 43.38 | 100.1% |
| **Average** | **55.17** | **55.38** | **100.4%** |
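The Recovery column is simply the quantized score expressed as a percentage of the unquantized score; for the average row, for instance:

```python
baseline, quantized = 55.17, 55.38
print(f"{100 * quantized / baseline:.1f}%")  # -> 100.4%
```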
5 added_tokens.json Normal file
@@ -0,0 +1,5 @@
{
  "<|endoftext|>": 151643,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644
}
3 config.json Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6e893c235d0e4b471cb508b04cb3e0e28452666ff383e34846036b5f56c360fd
size 1306
1 configuration.json Normal file
@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}
6 generation_config.json Normal file
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "transformers_version": "4.42.1"
}
3 merges.txt Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8831e4f1a044471340f7c0a83d7bd71306a5b867e95fd870f74d0c5308a904d5
size 1671853
3 model.safetensors Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:887f0d666be8996106023787674c6534f6c993a3fa0790149b28c2dcea7f3e7d
size 1781306400
20 special_tokens_map.json Normal file
@@ -0,0 +1,20 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
3 tokenizer.json Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a12e3ba5d5e0ad173cf7b408ab8534c6be8cbc6a146714e9c7dc8cf2346603b1
size 7028043
3 tokenizer_config.json Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c8f4ae7809a0d555cce69fa8632f03e1294c779f0103df48b2b1f85acb82d1d3
size 1299
BIN vocab.json (Stored with Git LFS) Normal file
Binary file not shown.