commit 0349408749632bc9e9fbc9fd08750f14276a977d
Author: ModelHub XC
Date:   Wed May 6 12:49:34 2026 +0800

    Initialize project; model provided by the ModelHub XC community
    Model: RedHatAI/Qwen2.5-14B-quantized.w8a8
    Source: Original Platform

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..52373fe
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,36 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..43b0a70
--- /dev/null
+++ b/README.md
@@ -0,0 +1,168 @@
+---
+license: apache-2.0
+license_link: https://huggingface.co/Qwen/Qwen2.5-14B/blob/main/LICENSE
+language:
+- en
+pipeline_tag: text-generation
+base_model: Qwen/Qwen2.5-14B
+tags:
+- chat
+- neuralmagic
+- llmcompressor
+---
+
+# Qwen2.5-14B-quantized.w8a8
+
+## Model Overview
+- **Model Architecture:** Qwen2
+  - **Input:** Text
+  - **Output:** Text
+- **Model Optimizations:**
+  - **Activation quantization:** INT8
+  - **Weight quantization:** INT8
+- **Intended Use Cases:** Intended for commercial and research use in multiple languages. Similarly to [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B), this model is intended for assistant-like chat.
+- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws).
+- **Release Date:** 12/03/2024
+- **Version:** 1.0
+- **License(s):** [apache-2.0](https://huggingface.co/Qwen/Qwen2.5-14B/blob/main/LICENSE)
+- **Model Developers:** Neural Magic
+
+Quantized version of [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B).
+It achieves an average score of 75.43 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 75.66.
+
+### Model Optimizations
+
+This model was obtained by quantizing the weights of [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) to the INT8 data type.
+This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x).
+Weight quantization also reduces disk size requirements by approximately 50%.
+
+Only the weights and activations of the linear operators within transformer blocks are quantized.
+Weights are quantized with a symmetric static per-channel scheme, applying a fixed linear scaling factor between the INT8 and floating-point representations for each output channel dimension.
+Activations are quantized with a symmetric dynamic per-token scheme, computing a linear scaling factor between the INT8 and floating-point representations at runtime for each token.
+
+## Deployment
+
+This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
+
+```python
+from vllm import LLM, SamplingParams
+from transformers import AutoTokenizer
+
+model_id = "neuralmagic/Qwen2.5-14B-quantized.w8a8"
+number_gpus = 1
+max_model_len = 8192
+
+sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
+
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+prompt = "Give me a short introduction to large language models."
+
+llm = LLM(model=model_id, tensor_parallel_size=number_gpus, max_model_len=max_model_len)
+
+outputs = llm.generate(prompt, sampling_params)
+
+generated_text = outputs[0].outputs[0].text
+print(generated_text)
+```
+
+vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
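The weight and activation schemes described above reduce to a few lines of array code. The following NumPy sketch is illustrative only (it is not the llm-compressor implementation): symmetric per-channel quantization keeps one scale per output channel, chosen so the channel's largest magnitude maps to 127.

```python
import numpy as np

def quantize_per_channel_sym(w: np.ndarray):
    """Symmetric static per-channel INT8: one scale per output channel (row)."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale = np.maximum(scale, 1e-12)  # guard against all-zero channels
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)
q, s = quantize_per_channel_sym(w)
# rounding error is bounded by half a quantization step per element
max_err = np.abs(dequantize(q, s) - w).max()
```

The dynamic per-token activation scheme is the same idea applied along the other axis, with scales recomputed at runtime for each token rather than fixed in advance.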
+
+## Evaluation
+
+The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
+```
+lm_eval \
+  --model vllm \
+  --model_args pretrained="neuralmagic/Qwen2.5-14B-quantized.w8a8",dtype=auto,gpu_memory_utilization=0.9,add_bos_token=True,max_model_len=4096,enable_chunked_prefill=True,tensor_parallel_size=1 \
+  --tasks openllm \
+  --batch_size auto
+```
+
+### Accuracy
+
+#### Open LLM Leaderboard evaluation scores
+<table>
+  <tr>
+    <td><strong>Benchmark</strong></td>
+    <td><strong>Qwen2.5-14B</strong></td>
+    <td><strong>Qwen2.5-14B-quantized.w8a8 (this model)</strong></td>
+    <td><strong>Recovery</strong></td>
+  </tr>
+  <tr>
+    <td>MMLU (5-shot)</td>
+    <td>79.71</td>
+    <td>79.38</td>
+    <td>99.6%</td>
+  </tr>
+  <tr>
+    <td>ARC Challenge (25-shot)</td>
+    <td>65.70</td>
+    <td>65.27</td>
+    <td>99.4%</td>
+  </tr>
+  <tr>
+    <td>GSM-8K (5-shot, strict-match)</td>
+    <td>84.46</td>
+    <td>83.93</td>
+    <td>99.4%</td>
+  </tr>
+  <tr>
+    <td>Hellaswag (10-shot)</td>
+    <td>84.28</td>
+    <td>84.16</td>
+    <td>99.9%</td>
+  </tr>
+  <tr>
+    <td>Winogrande (5-shot)</td>
+    <td>81.37</td>
+    <td>81.22</td>
+    <td>99.8%</td>
+  </tr>
+  <tr>
+    <td>TruthfulQA (0-shot, mc2)</td>
+    <td>58.46</td>
+    <td>58.65</td>
+    <td>100.3%</td>
+  </tr>
+  <tr>
+    <td><strong>Average</strong></td>
+    <td><strong>75.66</strong></td>
+    <td><strong>75.43</strong></td>
+    <td><strong>99.7%</strong></td>
+  </tr>
+</table>
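The Recovery column above is simply the quantized score divided by the baseline score, expressed as a percentage. A quick sanity check (illustrative helper, not part of the evaluation harness):

```python
def recovery(quantized: float, baseline: float) -> float:
    """Percentage of the unquantized score retained after quantization."""
    return round(100 * quantized / baseline, 1)

# e.g. MMLU: recovery(79.38, 79.71); average: recovery(75.43, 75.66)
```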
diff --git a/added_tokens.json b/added_tokens.json
new file mode 100644
index 0000000..482ced4
--- /dev/null
+++ b/added_tokens.json
@@ -0,0 +1,24 @@
+{
+  "</tool_call>": 151658,
+  "<tool_call>": 151657,
+  "<|box_end|>": 151649,
+  "<|box_start|>": 151648,
+  "<|endoftext|>": 151643,
+  "<|file_sep|>": 151664,
+  "<|fim_middle|>": 151660,
+  "<|fim_pad|>": 151662,
+  "<|fim_prefix|>": 151659,
+  "<|fim_suffix|>": 151661,
+  "<|im_end|>": 151645,
+  "<|im_start|>": 151644,
+  "<|image_pad|>": 151655,
+  "<|object_ref_end|>": 151647,
+  "<|object_ref_start|>": 151646,
+  "<|quad_end|>": 151651,
+  "<|quad_start|>": 151650,
+  "<|repo_name|>": 151663,
+  "<|video_pad|>": 151656,
+  "<|vision_end|>": 151653,
+  "<|vision_pad|>": 151654,
+  "<|vision_start|>": 151652
+}
diff --git a/config.json b/config.json
new file mode 100644
index 0000000..bb61315
--- /dev/null
+++ b/config.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0bbb2d296998a650376303a665f9db4336a064821cf5514242b564c65cae0f4a
+size 1790
diff --git a/configuration.json b/configuration.json
new file mode 100644
index 0000000..bbeeda1
--- /dev/null
+++ b/configuration.json
@@ -0,0 +1 @@
+{"framework": "pytorch", "task": "text-generation", "allow_remote": true}
\ No newline at end of file
diff --git a/generation_config.json b/generation_config.json
new file mode 100644
index 0000000..7f732c8
--- /dev/null
+++ b/generation_config.json
@@ -0,0 +1,6 @@
+{
+  "bos_token_id": 151643,
+  "eos_token_id": 151643,
+  "max_new_tokens": 2048,
+  "transformers_version": "4.46.3"
+}
diff --git a/merges.txt b/merges.txt
new file mode 100644
index 0000000..80c1a19
--- /dev/null
+++ b/merges.txt
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8831e4f1a044471340f7c0a83d7bd71306a5b867e95fd870f74d0c5308a904d5
+size 1671853
diff --git a/model-00001-of-00004.safetensors b/model-00001-of-00004.safetensors
new file mode 100644
index 0000000..31008ef
--- /dev/null
+++ b/model-00001-of-00004.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:72926571b0149cc7489a67230560949f1c9bafc1cc96602f4c59509a449dd802
+size 4995436856
diff --git a/model-00002-of-00004.safetensors b/model-00002-of-00004.safetensors
new file mode 100644
index 0000000..8214173
--- /dev/null
+++ b/model-00002-of-00004.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:379fa761200346b9cb3d647ce65b5164e72796375d5800382d70dbe42050947e
+size 4956808544
diff --git a/model-00003-of-00004.safetensors b/model-00003-of-00004.safetensors
new file mode 100644
index 0000000..3373f13
--- /dev/null
+++ b/model-00003-of-00004.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2e31ce13fba87beed5e6f3e9a1602bc7854aab180729d812cafd0fe73e67e29d
+size 4823057400
diff --git a/model-00004-of-00004.safetensors b/model-00004-of-00004.safetensors
new file mode 100644
index 0000000..eecc964
--- /dev/null
+++ b/model-00004-of-00004.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d83f275ba785ba929f36888efba32616c0d01e39556471d45bb5bcfd8fc2f17d
+size 1557135488
diff --git a/model.safetensors.index.json b/model.safetensors.index.json
new file mode 100644
index 0000000..b659e40
--- /dev/null
+++ b/model.safetensors.index.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b5c3536ebf41d514dd2f0cf89fa423a1d4cbe999917a4a0666632a852fd3a321
+size 76778
diff --git a/recipe.yaml b/recipe.yaml
new file mode 100644
index 0000000..1ab789e
--- /dev/null
+++ b/recipe.yaml
@@ -0,0 +1,21 @@
+quant_stage:
+  quant_modifiers:
+    SmoothQuantModifier:
+      smoothing_strength: 0.8
+      mappings:
+      - - ['re:.*q_proj', 're:.*k_proj', 're:.*v_proj']
+        - re:.*input_layernorm
+      - - ['re:.*gate_proj', 're:.*up_proj']
+        - re:.*post_attention_layernorm
+      - - ['re:.*down_proj']
+        - re:.*up_proj
+    GPTQModifier:
+      sequential_update: true
+      dampening_frac: 0.01
+      ignore: [lm_head]
+      config_groups:
+        group_0:
+          targets: [Linear]
+          weights: {num_bits: 8, type: int, symmetric: true, strategy: channel, observer: mse}
+          input_activations: {num_bits: 8, type: int, symmetric: true, strategy: token, dynamic: true,
+            observer: memoryless}
diff --git a/special_tokens_map.json b/special_tokens_map.json
new file mode 100644
index 0000000..17305b3
--- /dev/null
+++ b/special_tokens_map.json
@@ -0,0 +1,31 @@
+{
+  "additional_special_tokens": [
+    "<|im_start|>",
+    "<|im_end|>",
+    "<|object_ref_start|>",
+    "<|object_ref_end|>",
+    "<|box_start|>",
+    "<|box_end|>",
+    "<|quad_start|>",
+    "<|quad_end|>",
+    "<|vision_start|>",
+    "<|vision_end|>",
+    "<|vision_pad|>",
+    "<|image_pad|>",
+    "<|video_pad|>"
+  ],
+  "eos_token": {
+    "content": "<|endoftext|>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "pad_token": {
+    "content": "<|endoftext|>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  }
+}
diff --git a/tokenizer.json b/tokenizer.json
new file mode 100644
index 0000000..33d22a4
--- /dev/null
+++ b/tokenizer.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bb73a25aba3c83c6c815a03a334b0440bd549f9a54fa3673e005f5532f6b32fe
+size 11421995
diff --git a/tokenizer_config.json b/tokenizer_config.json
new file mode 100644
index 0000000..acb1852
--- /dev/null
+++ b/tokenizer_config.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cefaa66de8fae4a09ca18a9c3a7fd8b61311ed568e5f4e634f6a3d95a2a9e889
+size 7229
diff --git a/vocab.json b/vocab.json
new file mode 100644
index 0000000..6c49fc6
--- /dev/null
+++ b/vocab.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ca10d7e9fb3ed18575dd1e277a2579c16d108e32f27439684afa0e10b1440910
+size 2776833
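The SmoothQuantModifier in recipe.yaml above migrates activation outliers into the preceding weights before GPTQ quantization, rescaling paired channels so the matmul output is unchanged. A minimal sketch of that identity in NumPy (`alpha` stands in for `smoothing_strength`; this is illustrative, not the llm-compressor implementation):

```python
import numpy as np

def smooth(x: np.ndarray, w: np.ndarray, alpha: float = 0.8):
    """Divide activations and multiply weights per input channel; x @ w is preserved."""
    act_range = np.abs(x).max(axis=0)   # per-channel activation range
    wt_range = np.abs(w).max(axis=1)    # per-input-channel weight range
    s = np.maximum(act_range, 1e-8) ** alpha / np.maximum(wt_range, 1e-8) ** (1 - alpha)
    return x / s, w * s[:, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))
x[:, 3] *= 50.0                          # inject an activation outlier channel
w = rng.standard_normal((8, 4))
xs, ws = smooth(x, w)
# xs @ ws equals x @ w, but the outlier channel in xs is much smaller
```

Shrinking the outlier channels is what makes the subsequent INT8 activation quantization lose less accuracy.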